Student Publications

Author: Jemilson Pierrelouis
Title: Applied Database Management Systems

Area:
Country:
Program:
Available for Download: Yes


 


 

ABSTRACT

As the integration of computer-aided design and manufacturing (CAD/CAM) systems progresses, the need for management of the resulting data becomes critical. Database management systems (DBMS) have been developed to assist with this task, but currently do not satisfy all of the needs of CAD/CAM data. This thesis examines and proposes DBMS requirements for design and manufacturing data associated with mechanical parts. A case study approach was used, involving examples of parts produced by numerically controlled (NC) milling and sheet metal punching machines. Representative examples of currently available relational and object-oriented DBMSs were used to construct prototype CAD/CAM databases. Insights concerning the application of relational and object-oriented DBMSs to CAD/CAM data were gained. The advantages and deficiencies of each were examined and discussed. The prototypes and resulting discussions provided a basis for the development of the proposed DBMS requirements.

TABLE OF CONTENTS

  • ABSTRACT
  • Introduction
  • Applied Database Management
  • Applied Database Systems
  • Database Systems
  • Special Track on Database Theory, Technology, and Applications (DTTA)
  • Stream-Based Data Management Systems
  • Database consists of schema and test data
  • Automatically Update all Database Developers
  • Accessing database management systems
  • Creating and deleting database tables
  • Database menus
  • Data Warehousing
  • The Foundations of Data Mining
  • The Scope of Data Mining
  • Conclusions

Introduction

This paper introduces the fundamentals of modern database management systems, in particular relational database systems. It also touches briefly on many areas of applied database management and is intended as an overview text. Applied database management has a multitude of parts, and this paper covers the basic concepts. Organizations use applied database management to make most business decisions, and those decisions are informed by the information and decision sciences. Information and decision sciences incorporate the use of data processing equipment, such as computers and their peripherals. These methods are applied to systems management, programming design, analysis of information flow, decision support, database organization, small business problems, data communication networking, and distributed processing.

The automated, prospective analyses offered by data mining move beyond the analyses of past events provided by the retrospective tools typical of decision support systems. Once information is aggregated and stored in a database, most companies feel a need to use data mining tools that can answer business questions that were traditionally too time consuming to resolve. These tools scour databases for hidden patterns, finding predictive information that experts may miss because it lies outside their expectations. The core components of data mining technology have been under development for decades, in research areas such as statistics, artificial intelligence, and machine learning. Today, the maturity of these techniques, coupled with high-performance relational database engines and broad data integration efforts, makes them practical for current data warehouse environments.

Applied Database Management

Database management has many parts, including the following:

1. A data access interface that communicates with Microsoft Jet and ODBC-compliant data sources to connect to, retrieve, manipulate, and update data and the database structure.
2. The process of obtaining data from another source, usually one outside a specific system; this usually includes a description of the placement of the data blocks and their relation to the entire set.
3. Structural information about data that describes its context and meaning.
4. A file composed of records, each containing fields, together with a set of operations for searching, sorting, recombining, and other functions.
5. The database administrator, who manages a database. The administrator determines the content, internal structure, and access strategy for a database, defines security and integrity, and monitors performance.
6. The database manager, who provides the analytic functions needed to design and maintain applications requiring a database.
7. The database designer, who designs and implements the functions required for applications that use a database.
8. The database engine, a program module or modules that provide access to a database management system (DBMS).
9. The database machine, a peripheral that executes database tasks, thereby relieving the main computer from performing them; in effect, a database server that performs only database tasks.

Further, applied database management involves a software interface between the database and the user: a database management system handles user requests for database actions and allows for control of security and data integrity requirements. It also covers the use of desktop publishing or Internet technology to produce reports containing information obtained from a database, and the use of a network node, or station, dedicated to storing and providing access to a shared database. In addition, a database structure is a general description of the format of records in a database, including the number of fields, specifications regarding the type of data that can be entered in each field, and the field names used. (By contrast, in asynchronous communications a data bit is one of a group of from 5 to 8 bits that represents a single character of data for transmission; data bits are preceded by a start bit and followed by an optional parity bit and one or more stop bits.)
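To make the idea of a database structure concrete, the short sketch below uses Python's standard-library sqlite3 module (chosen purely for illustration; it is not one of the systems discussed in this paper), and the table and field names are made up for the example.

import sqlite3

# Open (or create) a small example database file.
connection = sqlite3.connect("example.db")
cursor = connection.cursor()

# The CREATE TABLE statement is a database structure in the sense described
# above: it fixes the number of fields, the type of data each field accepts,
# and the field names used.
cursor.execute("""
    CREATE TABLE IF NOT EXISTS customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,
        state       TEXT,
        balance     REAL DEFAULT 0.0
    )
""")
connection.commit()
connection.close()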

The following topics explain how to perform different database operations:

A brief tour of the Smalltalk classes for creating database applications

Instructions for accessing your database management system and binding to your database

How to create and delete databases and tables

Step-by-step instructions for querying databases using VA Smalltalk and IBM Smalltalk database classes

Tips for handling errors, ensuring row schema consistency, binary data limits, and intercepting SQL 000 codes

 

The database classes can be divided into four categories: base classes, classes for defining database resources and operations, classes for manipulating database data, and classes for using database data links.

Applied Database Systems

In this course I learned about database systems with a focus on how to use them in practice. The course gives an overview of the capabilities of modern database systems and of how to build database-backed applications. Topics covered include the relational model, SQL, transactions, database design and tuning, three-tier architectures, web data management with XML, service-oriented architectures, data mining, and data warehousing.
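Of the topics listed above, transactions lend themselves to a small illustration. The sketch below is a minimal, assumed example using Python's sqlite3 module and a hypothetical account table; it only shows the general idea that two updates are grouped so that either both take effect or neither does.

import sqlite3

connection = sqlite3.connect(":memory:")
connection.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL)")
connection.execute("INSERT INTO account VALUES (1, 100.0), (2, 50.0)")
connection.commit()

try:
    # The connection used as a context manager opens a transaction,
    # commits it on success, and rolls it back if an exception occurs.
    with connection:
        connection.execute("UPDATE account SET balance = balance - 30 WHERE id = 1")
        connection.execute("UPDATE account SET balance = balance + 30 WHERE id = 2")
except sqlite3.Error:
    print("Transfer failed; neither account was changed.")

print(connection.execute("SELECT id, balance FROM account ORDER BY id").fetchall())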

Database Systems

This database systems course concentrates on the internals of modern relational database systems. Concepts covered include query languages (SQL, relational algebra, and relational calculus), storage structures, access methods, query processing, query optimization, and database design. The course is usually offered in the fall semester and consists of several large programming assignments in which students build part of a small relational database system called Minibase. It also deals with the architecture of large-scale information systems, with special emphasis on Internet-based systems; topics here include three-tier architectures, edge caches, distributed transaction management, web services, workflows, high-availability architectures, and content management, along with a significant number of programming assignments in the context of three-tier architectures, involving web servers, application servers, and database systems.
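As a small, assumed illustration of how a query can be seen both in relational algebra and in SQL, the sketch below works over a plain Python list of tuples; the relation and attribute names are invented for the example.

# A tiny "relation" of (student_id, name, year) tuples.
students = [
    (1, "Avery", 2),
    (2, "Blake", 4),
    (3, "Casey", 4),
]

# Relational algebra view: project the name attribute over the selection
# year = 4, i.e. pi_name(sigma_{year=4}(students)).
selected = [row for row in students if row[2] == 4]   # selection (sigma)
projected = [(row[1],) for row in selected]           # projection (pi)
print(projected)                                      # [('Blake',), ('Casey',)]

# The same query expressed in SQL:
# SELECT name FROM students WHERE year = 4;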

Special Track on Database Theory, Technology, and Applications (DTTA)

For many years, the Database Theory, Technology, and Applications track has been one of the important parts of the ACM SAC conference. To support ACM SAC, a special track on Database Theory, Technology, and Applications will be held again in SAC 2006. The DTTA track will be a forum for database scholars, research scientists, engineers, and practitioners throughout the world to share their theoretical results, technical ideas, and exploratory experiences relating to implementation and applications. You are cordially invited to submit technical papers to the DTTA track of SAC 2006; major topics of interest for the track include, but are not limited to, the following:

Active, Deductive, and Logic Databases

Audio/Video Database Systems

Cache and Buffer Management

Cooperative Database Systems and Workflow Management

Database Indexing and Tuning

Data Privacy and Security

Data Warehousing, Data Cubes, and Aggregate Processing

Digital Library

Disk Arrays and Tertiary Storage Systems for Very Large Databases

Distributed, Parallel and Heterogeneous Databases and Their Query Processing

Histogram and Sampling Techniques for Database Query Processing

Hypertext/Hypermedia/Multimedia Database and Information Systems

Image, Pictorial and Visual Databases

Internet and Web-Based Database Systems

Knowledge Discovery and Data Mining in Databases

Mobile Data Management and Mobile Database Systems

Multi-Database Systems/Federated Database Systems/Trusted Database Systems

Multidimensional Data Models/Indices/Database Systems

Object-Oriented and Object-Relational Database Systems

Probabilistic/Fuzzy Databases and Similarity/Approximate Query Processing

Real-Time and High Performance Database Systems

Scientific, Biological and Bioinformatics Data Management and Data Mining

Semantic Modeling and Management of Web-Based Databases

Semantic Web and Ontology

Semi-Structural Data Management, Meta Data, and XML

Spatial and Temporal Databases

Statistical and Historical Databases

Transaction Management and Secure Transaction Processing

Researchers and practitioners in the database, information systems, and internet fields have over the years made significant progress towards building solutions that involve such systems for a wide range of application domains. In doing this, solutions necessarily concentrated mainly on syntax as the readily available unifying formalism for representation and structure, rather more than on the broad variety of semantics involved. One of the recent unifying visions is that of the Semantic Web, which proposed semantic annotation of data so that programs can understand it and help in making decisions. Researchers have subsequently seen the value of using semantics to understand the information and decision-making needs of humans, so that data and humans' needs can be semantically intermediated. The scope of semantics-based solutions has also moved from data and information to services and processes.

A review of active research funding and projects shows extensive investigations based on the AI and knowledge representation branches of computer science. For example, logic-based descriptions and inference techniques are being extensively investigated as part of projects under the Semantic Web umbrella. This includes many projects funded by DARPA and the EC 5th Framework Program, including the DAML and OntoWeb initiatives and programs. There is a visible dearth of investigations from the database and information systems community. This workshop seeks to investigate relationships between the challenges in developing semantic solutions for the Web and Enterprises, and the experience and expertise of the database and information systems community.

Research in database management and workflow management has an extensive history of achieving high impact through improving methods of other scientific endeavors as well as in developing new technologies leading to commercialization and in establishing new high-tech industry sectors. This workshop will investigate research directions that can lead to similar long-term impact in Semantic Web and Enterprise solutions by our community.

Stream-Based Data Management Systems

Continuous query processing is a relatively new field in query processing. It deals with the execution of queries over infinite streams of data, rather than over fixed collections of data. Traditional query processing systems are powerful tools for examining stores of data. Continuous query systems are similarly powerful, but focus on processing and reacting to the data as it is collected. These systems are specially designed for “stream processing” problems. Stream processing problems involve input data that comes into existence over time. The data rate may be very high, or the data may arrive in bursts. Output is calculated as soon as the required input data is available, and it is a function of all input data available up to the present time. Stream processing problems often make explicit use of the time domain of their input data, for example calculating the maximum value seen in the last 5 minutes. There can also be real-time requirements on processing, where results are required within a specified amount of time after data becomes available. A continuous query system allows stream-processing problems to be specified by programmers and executed efficiently.
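The "maximum value seen in the last 5 minutes" example can be sketched as a tiny continuous query in plain Python. This is only an assumed illustration of windowed stream processing, not the interface of any particular stream system.

from collections import deque
import time

WINDOW_SECONDS = 5 * 60  # five-minute time window over the input stream

window = deque()  # (timestamp, value) pairs currently inside the window

def on_new_value(value, now=None):
    """Consume one stream element and return the maximum over the last 5 minutes."""
    now = time.time() if now is None else now
    window.append((now, value))
    # Expire elements that have fallen out of the time window.
    while window and window[0][0] < now - WINDOW_SECONDS:
        window.popleft()
    # The output is a function of all input available up to the present time.
    return max(v for _, v in window)

# Feed a few readings with explicit timestamps (in seconds).
print(on_new_value(3.0, now=0))     # 3.0
print(on_new_value(7.5, now=60))    # 7.5
print(on_new_value(2.0, now=400))   # 2.0 (the earlier readings have expired)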

Database consists of schema and test data

When we talk about a database here, we mean not just the schema of the database, but also a fair amount of data. This data consists of common standing data for the application, such as the inevitable list of all the states in the US, and also sample test data such as a few sample customers. The data is there for a number of reasons. The main reason is to enable testing. We are great believers in using a large body of automated tests to help stabilize the development of an application. Such a body of tests is a common approach in agile methods. For these tests to work efficiently, it makes sense to work on a database that is seeded with some sample test data, which all tests can assume is in place before they run. As well as helping test the code, this sample test data also allows us to test our migrations as we alter the schema of the database. By having sample data, we are forced to ensure that any schema changes also handle sample data.
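A minimal sketch of this idea, assuming Python's sqlite3 module and invented table names: the schema, the standing data, and a little sample test data are created together, so automated tests can rely on that data being in place before they run.

import sqlite3

def build_test_database():
    """Create the schema plus the standing data and sample test data tests assume."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE state (code TEXT PRIMARY KEY, name TEXT)")
    db.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, state TEXT)")
    # Standing data: a (shortened) list of US states.
    db.executemany("INSERT INTO state VALUES (?, ?)",
                   [("IA", "Iowa"), ("TX", "Texas"), ("WI", "Wisconsin")])
    # Sample test data: a few fictional customers.
    db.executemany("INSERT INTO customer VALUES (?, ?, ?)",
                   [(1, "Acme Corp", "IA"), (2, "Globex", "TX")])
    db.commit()
    return db

# An automated test can now assume the seeded rows exist before it runs.
db = build_test_database()
assert db.execute("SELECT COUNT(*) FROM customer").fetchone()[0] == 2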

 

In most projects we've seen this sample data be fictional. However, in a few projects we've seen people use real data for the samples. In these cases the data has been extracted from prior legacy systems with automated data migration scripts. Obviously you can't migrate all the data right away, as in early iterations only a small part of the database is actually built. But the idea is to develop the migration scripts iteratively, just as the application and the database are developed iteratively. Not only does this help flush out migration problems early, it also makes it much easier for domain experts to work with the growing system, as they are familiar with the data they are looking at and can often help to identify cases that may cause problems for the database and application design. As a result, we are now of the view that you should try to introduce real data from the very first iteration of your project.

Automatically Update all Database Developers

It's all very well for people to make changes and update the master, but how do they find out the master has changed? In a traditional continuous integration environment with source code, developers update to the master before doing a commit. That way they can resolve any build issues on their own machine before committing their changes to the shared master. There's no reason you can't do that with the database, but we found a better way. We automatically update everyone on the project whenever a change is made to the database master. The same refactoring script that updates the master automatically updates everyone's databases. When we've described this, people are usually concerned that automatically updating developers' databases underneath them will cause a problem, but we found it worked just fine. This only worked when people were connected to the network; if they worked offline, such as on an airplane, then they had to resync with the master manually once they got back to the office.
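One way to picture the mechanism is a shared, ordered list of migration scripts plus a routine that brings any copy of the database up to the current master version. The sketch below is a hypothetical, much-simplified version in Python with sqlite3; the schema_version bookkeeping table and the migrations themselves are invented for the example.

import sqlite3

# Ordered master list of schema migrations; in practice these live in version control.
MIGRATIONS = [
    "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)",
    "ALTER TABLE customer ADD COLUMN state TEXT",
]

def sync_to_master(db):
    """Apply any migrations this copy of the database has not yet seen."""
    db.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    applied = db.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
    for version, statement in enumerate(MIGRATIONS[applied:], start=applied + 1):
        db.execute(statement)
        db.execute("INSERT INTO schema_version VALUES (?)", (version,))
    db.commit()

# Each developer's copy (here an in-memory stand-in) is updated the same way.
developer_db = sqlite3.connect(":memory:")
sync_to_master(developer_db)   # brings an empty database fully up to date
sync_to_master(developer_db)   # running it again applies nothing new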

Accessing database management systems

This section explains the concepts you need to understand to establish a connection between VA Smalltalk and a database management system. It also includes instructions for establishing a database connection.

Database connection concepts

Connecting to databases

Working with connection specifications

Working with logon specifications

Establishing database connections

Working with active connections

Working with database managers

Creating and deleting database tables

This section explains how to create and delete database tables using IBM Smalltalk. Each section is illustrated with examples. You can use the examples without modifying them to create and work with a database called CORPDATA. Each example builds on the one before it. Follow the examples in the sequence given. Some of the code samples also include a block of code you can evaluate to see the effect of the task you have just performed. These code samples do such things as display all databases, display the names of table columns, and display a result table from a query. The examples also provide instructions for modifying the sample code to create a database of your own design. Each section explains the parts of the sample code you need to change to work with your own database.

If you need to query an existing database, the activities involved include the following (a generic sketch of several of them appears after this list):

  • Adding database support
  • Creating the application
  • Accessing a database management system
  • Loading database features
  • Connecting to a database manager
  • Defining a database query
  • Creating a query
  • Using the SELECT Details window
  • Creating static DB2 queries
  • Setting fetch and update policies
  • Using the results of a query
  • Tearing off results
  • Using quick forms
  • Running a query
  • Working with the packeting container details part
  • Using a host variable
  • Running a query - host variables
  • Precompiling static SQL
  • Extra practice
  • What to watch for
  • More database techniques
  • Formatting query results
  • Displaying a result column
  • Displaying rows as strings
  • Creating more complex SELECT statements
  • Using high-level qualifiers
  • Sorting result table rows
  • Restricting result rows
  • Using a dynamic WHERE clause
  • Nesting SELECT statements
  • Using the SQL Statement part
  • Defining an UPDATE statement
  • Defining an INSERT statement
  • Defining a DELETE statement
  • Using the Single-Row Query part
  • Using stored procedures
  • Using the Stored Procedure part
  • Running stored procedures
  • Handling result sets from stored procedures
  • Using static SQL
  • Adding database queries to packages
  • Database basics
  • Base database classes
  • Database definition classes
  • Data manipulation classes
  • Data link support classes (DB/2 only)
  • Accessing database management systems
  • Database connection concepts
  • Connecting to databases
  • Working with connection specifications
  • Working with logon specifications
  • Establishing database connections
  • Working with active connections
  • Working with database managers
  • Creating and deleting database tables
  • Preparing to use the code samples
  • Creating and accessing tables
  • Adding rows and data
  • Deleting tables and databases
  • Querying databases
  • Writing SELECT statements
  • Selecting a row from a table
  • Selecting a row
  • Selecting rows from multiple tables (join operation)
  • Using a GROUP BY clause
  • Writing UPDATE, INSERT, and DELETE statements
  • Updating rows in a table
  • Inserting rows in a table
  • Deleting rows from a table
  • Using database classes for scripts
  • Error detection and other tips
  • Handling error objects
  • Ensuring row schema consistency
  • Intercepting SQL 000 codes
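Several of the activities listed above (writing SELECT statements, selecting rows from multiple tables with a join, using a GROUP BY clause, and writing UPDATE, INSERT, and DELETE statements) can be sketched generically. The example below uses Python's sqlite3 module with invented department and employee tables; it is not the VA Smalltalk or IBM Smalltalk interface the surrounding text describes.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE department (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, "
           "salary REAL, dept_id INTEGER REFERENCES department(id))")

# INSERT statements populate both tables.
db.executemany("INSERT INTO department VALUES (?, ?)",
               [(1, "Sales"), (2, "Engineering")])
db.executemany("INSERT INTO employee VALUES (?, ?, ?, ?)",
               [(1, "Avery", 52000, 1), (2, "Blake", 61000, 2), (3, "Casey", 58000, 2)])

# SELECT with a join operation across the two tables.
rows = db.execute("""
    SELECT e.name, d.name
    FROM employee e JOIN department d ON e.dept_id = d.id
    ORDER BY e.name
""").fetchall()

# SELECT with a GROUP BY clause: average salary per department.
averages = db.execute("""
    SELECT d.name, AVG(e.salary)
    FROM employee e JOIN department d ON e.dept_id = d.id
    GROUP BY d.name
""").fetchall()

# UPDATE and DELETE statements change and remove rows.
db.execute("UPDATE employee SET salary = salary * 1.05 WHERE dept_id = ?", (2,))
db.execute("DELETE FROM employee WHERE id = ?", (1,))
db.commit()

print(rows)
print(averages)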

Database menus

  • Query
  • Create
  • LOB definitions
  • Query
  • Create
  • UPDATE
  • INSERT
  • DELETE
  • Edit
  • Import
  • Export
  • Manual create
  • Manual edit
  • Host variables
  • Options
  • High-level qualifiers
  • Pop-up menu for adding and deleting data fields
  • Add before
  • Add after
  • Edit
  • Delete
  • Get Schema
  • Pop-up menu for adding data fields
  • Get schema
  • Unary operator
  • Left operand
  • Right operand
  • Nested SELECT
  • Unary operator
  • Left operand
  • Right operand
  • Move before
  • Move after
  • Column value
  • Select all
  • Deselect all
  • Ascending (ASC)
  • Descending (DESC)
  • Move before
  • Move after
  • Move before
  • Move after
  • Select all
  • Deselect all
  • Select all in table
  • Deselect all in table
  • Select all
  • Deselect all
  • Create
  • Edit
  • Delete
  • System values
  • Clause
  • WHERE
  • GROUP BY
  • HAVING
  • ORDER BY
  • Column value
  • Specify expression
  • Unary operator
  • Left operand
  • Right operand
  • Nested SELECT
  • Pop-up menu for Database Query and Stored Procedure parts
  • Pop-up menu for Query Result Table and Current Row parts
  • Database Functions Category
  • Multi-row Query
  • Multi-Row Query - Settings
  • Query Result Table
  • Current Row
  • Single-Row Query
  • Single-Row Query - Settings
  • Result Row
  • SQL Statement
  • SQL Statement - Settings
  • Stored Procedure
  • Stored Procedure - Settings

 

Data Warehousing

Data warehousing takes a relatively simple idea and incorporates it into the technological underpinnings of a company. The idea is that a unified view of all data that a company collects will help improve operations. If hiring data can be combined with sales data, the idea is that it might be possible to discover and exploit patterns in the combined entity. The most basic component in a data warehouse is a relational database. This database is the place where the data is stored. Relational databases are designed to be able to efficiently insert new data and locate existing data using a standardized query language. Given the fact that a company usually has very large amounts of data, the sizes of these databases can reach terabytes (trillions of bytes). Underneath the database is a maze of connections and transformations connecting the data warehouse with other systems. Because data in a company is often created and stored in functionally specific systems (e.g., a payroll system), the data may need to be replicated and moved between a data warehouse and these other systems. There are a wide variety of tools that facilitate this replication and movement process.

The design of the data architecture is probably the most critical part of a data warehousing project. The key is to plan for growth and change, as opposed to trying to design the perfect system from the start. The design of the data architecture involves understanding all of the data and how different pieces are related. For example, payroll data might be related to sales data by the ID of the sales person, while the sales data might be related to customers by the customer ID. By connecting these two relationships, payroll data could be related to customers (e.g., which employees have ties to which customers).

Once the data architecture has been designed, you can then consider the kinds of reports that you are interested in. You might want to see a breakdown of employees by region, or a ranked list of customers by revenue. These kinds of reports are fairly simple. The power of a data warehouse becomes more obvious when you want to look at links between data associated with disparate parts of an organization (e.g., HR, accounts payable, and project management).

Consider an exception report showing all projects more than 90 days in arrears that are managed by someone with less than two years of experience. This report would be nearly impossible to generate without the links between different databases that the warehouse provides. In addition to the capability to link data together, a data warehouse can give users the ability to view data at different levels of aggregation.
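The exception report just described can be expressed as a single query once the warehouse links project and employee data. The sketch below is an assumed illustration in Python with sqlite3; the table names, column names, and sample rows are all invented to make the idea of cross-database links concrete.

import sqlite3
from datetime import date

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, hire_date TEXT);
    CREATE TABLE project  (id INTEGER PRIMARY KEY, name TEXT, due_date TEXT,
                           manager_id INTEGER REFERENCES employee(id));
    INSERT INTO employee VALUES (1, 'Avery', '2005-01-15'), (2, 'Blake', '2001-06-01');
    INSERT INTO project  VALUES (10, 'Upgrade', '2005-09-01', 1),
                                (11, 'Rollout', '2005-10-01', 2);
""")

# Projects more than 90 days in arrears whose manager has less than two years
# of experience as of the report date.
report_date = date(2006, 3, 1).isoformat()
rows = db.execute("""
    SELECT p.name, e.name
    FROM project p JOIN employee e ON p.manager_id = e.id
    WHERE julianday(?) - julianday(p.due_date) > 90
      AND julianday(?) - julianday(e.hire_date) < 2 * 365
""", (report_date, report_date)).fetchall()
print(rows)   # only the overdue project managed by the less-experienced employee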

The Foundations of Data Mining

Data mining techniques are the result of a long process of research and product development. This evolution began when business data was first stored on computers, continued with improvements in data access, and more recently, generated technologies that allow users to navigate through their data in real time. Data mining takes this evolutionary process beyond retrospective data access and navigation to prospective and proactive information delivery. Data mining is ready for application in the business community because it is supported by three technologies that are now sufficiently mature:

Massive data collection

Powerful multiprocessor computers

Data mining algorithms

Commercial databases are growing at unprecedented rates. A recent META Group survey of data warehouse projects found that 19% of respondents are beyond the 50 gigabyte level, while 59% expect to be there by the second quarter of 1996. In some industries, such as retail, these numbers can be much larger. The accompanying need for improved computational engines can now be met in a cost-effective manner with parallel multiprocessor computer technology. Data mining algorithms embody techniques that have existed for at least 10 years, but have only recently been implemented as mature, reliable, understandable tools that consistently outperform older statistical methods.

The Scope of Data Mining

Data mining derives its name from the similarities between searching for valuable business information in a large database — for example, finding linked products in gigabytes of store scanner data — and mining a mountain for a vein of valuable ore. Both processes require either sifting through an immense amount of material, or intelligently probing it to find exactly where the value resides. Given databases of sufficient size and quality, data mining technology can generate new business opportunities by providing these capabilities:

Automated prediction of trends and behaviors. Data mining automates the process of finding predictive information in large databases. Questions that traditionally required extensive hands-on analysis can now be answered directly from the data — quickly. A typical example of a predictive problem is targeted marketing. Data mining uses data on past promotional mailings to identify the targets most likely to maximize return on investment in future mailings. Other predictive problems include forecasting bankruptcy and other forms of default, and identifying segments of a population likely to respond similarly to given events.

Automated discovery of previously unknown patterns. Data mining tools sweep through databases and identify previously hidden patterns in one step. An example of pattern discovery is the analysis of retail sales data to identify seemingly unrelated products that are often purchased together. Other pattern discovery problems include detecting fraudulent credit card transactions and identifying anomalous data that could represent data entry keying errors.
Data mining techniques can yield the benefits of automation on existing software and hardware platforms, and can be implemented on new systems, as existing platforms are upgraded and new products developed. When data mining tools are implemented on high performance parallel processing systems, they can analyze massive databases in minutes. Faster processing means that users can automatically experiment with more models to understand complex data. High speed makes it practical for users to analyze huge quantities of data. Larger databases, in turn, yield improved predictions.
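The retail example above, finding seemingly unrelated products that are often purchased together, can be illustrated in a few lines of Python. The transaction data and the support threshold below are made up for the sketch; real data mining tools scale the same pair-counting idea to gigabytes of store scanner data.

from itertools import combinations
from collections import Counter

# Each inner list is one (made-up) store transaction: the items bought together.
transactions = [
    ["bread", "milk", "diapers"],
    ["beer", "diapers", "chips"],
    ["bread", "diapers", "beer"],
    ["milk", "diapers", "beer"],
]

# Count how often each unordered pair of products appears in the same basket.
pair_counts = Counter()
for basket in transactions:
    pair_counts.update(combinations(sorted(set(basket)), 2))

# Report pairs purchased together in at least half of the transactions.
threshold = len(transactions) / 2
for pair, count in pair_counts.most_common():
    if count >= threshold:
        print(pair, count)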

Databases can be larger in both depth (more records to analyze) and breadth (more attributes recorded for each record).

Conclusions

In conclusion, comprehensive data warehouses that integrate operational data with customer, supplier, and market information have resulted in an explosion of information. Competition requires timely and sophisticated analysis on an integrated view of the data. However, there is a growing gap between more powerful storage and retrieval systems and the users’ ability to effectively analyze and act on the information they contain. Both relational and OLAP technologies have tremendous capabilities for navigating massive data warehouses, but brute force navigation of data is not enough. A new technological leap is needed to structure and prioritize information for specific end-user problems.




 