Table of contents
Abstract
List of figures
List of tables
List of abbreviations
1 Introduction
1.1 Company-specific problem description
1.2 Framework and content design of this work
2 Theoretical framework
2.1 SHEL-Model
2.1.1 Related concepts and studies
2.1.2 Evaluation
2.2 Model of SA in dynamic decision making
2.2.1 Measurement of SA
2.2.2 Related concepts and studies
2.2.3 Implications for system design
2.2.4 Evaluation
2.3 Interfaces between the SHEL-Model and the Model of SA
3 Method
3.1 Sample
3.2 Design and Procedure
3.2.1 Diagnosis
3.2.2 Scheduling
3.2.3 Data collection
4 Results
4.1 Data editing/ preparation
4.1.1 Transcription
4.1.2 Data analysis
4.2 Presentation of results
4.2.1 Quantitative results
4.2.2 Qualitative results
4.3 Answering the research questions
4.4 Derived actions
5 Discussion
5.1 Interpretation of results
5.2 Methodical aspects
5.2.1 Sample
5.2.2 Design and procedure
5.2.3 Data editing/ preparation
5.3 Recommendations
6 Conclusion
List of references
Appendix
Appendix A - Interviews
A1
A2
Appendix B – Analysis of the results
B1
B2
B3
Abstract
The present work addresses human error in socio-technical systems by considering the influence and causality of human factors on dynamic decision making. The theoretical framework is built on the SHEL-Model by Hawkins (1993). This system-based model illustrates a spectrum of Human Factors (Software, Hardware, Environment, Liveware) and the interfaces (Liveware-Software, Liveware-Hardware, Liveware-Environment, Liveware-Liveware) between these elements in socio-technical systems (STS). The model of situation awareness in dynamic decision making by Endsley (1995a) is additionally used to describe the complex decision making process in STS based on situation awareness (SA), divided into perception, comprehension and projection and influenced by system and task factors as well as individual factors.
A study was conducted in a company producing air-conditioning technology, with the purpose of investigating and optimizing problems related to the usage of company-specific software in the sales and order-processing departments. Additionally, the information flow between these departments was examined. In order to specify the company's own problem description, research questions were formulated within the scope of the mentioned theories: (1) Which SHEL-categories or interfaces should be improved in order to reduce failures and problems in the work within the configuration software? and (2) Which insights of the model of situation awareness should additionally be considered regarding the reduction of failures and problems in the decision making process within the configuration software?
To answer the research questions, the need for improvement was first identified through six self-conducted interviews with field sales staff (n=2), internal sales staff (n=1), order processing staff (n=2) and one expert responsible for the sales staff in other countries (n=1). Using Mayring's (2002, 2010) qualitative content analysis, critical incidents were extracted and related to the SHEL-categories. Additionally, categories were inductively built based on Endsley's model.
In total, 151 critical incidents were extracted. The allocation among the SHEL-categories indicates that Software represents the main problem field (25 indications), followed by Liveware (17 indications), Environment (9 indications) and Hardware (2 indications). Considering the SHEL-interfaces, the Software-Liveware interface is found to be maladjusted (71 indications), including problems related to Endsley's decision making process (mismatch between automation and scope for action, the out-of-the-loop performance problem and a lack of usability). The examined information flow, which was related to the Liveware-Liveware interaction, accounts for 27 indications.
Derived company-specific actions were: dynamic defaults, dynamic checklists, intuitive design and online help.
Although the problems were mostly related to the Software-Liveware interface, it is recommended to provide training for conveying information and to improve cooperation in teams in order to ensure the holistic system design required in socio-technical systems (STS).
Keywords: Socio-technical systems (STS), SHEL-Model, Situation Awareness (SA), dynamic decision making, performance, automation, scope of action, out of the loop performance problem, usability
List of figures
Figure 1: SHEL-Model as a conceptual model of Human Factors (Hawkins, 1993)
Figure 2: Swiss-Cheese-Model (Reason, 1990, p. 208)
Figure 3: Quantity of m-SHEL-Interfaces in the navigational phase (Itoh et al., 2004, p. 120)
Figure 4: Model of SA in dynamic decision making (Endsley, 1995, p. 35)
Figure 5: Levels of control in automation (Endsley & Kiris, 1995, p. 385)
Figure 6: Team SA (Endsley, 1989, in Endsley & Robertson, 2000a, p. 303)
Figure 7: The PRF for a practiced or easy task (B), and a novel or difficult task (A) (Wickens, 1981, p. 11a)
Figure 8: SA across levels of automation (Endsley & Kiris, 1995, p. 390).
Figure 9: The model of SA in dynamic decision making in relation to SHEL-components
Figure 10: Software and the task and system factors (extract of Figure 9)
Figure 11: Hardware and the task and system factors (extract of Figure 9)
Figure 12: Environment and the task and system factors (extract of Figure 9)
Figure 13: Liveware and the individual factors (extract of Figure 9)
Figure 14: Liveware-Liveware and team factors (extract of Figure 9)
Figure 15: Absolute frequency distribution of the SHEL-Categories
Figure 16: Comparison of the frequency distribution between LS and LL
Figure 17: Frequency distribution of the subcategories from the LS-interface
Figure 18: Frequency distribution of the subcategories from the LL-interface
Figure 19: Consequences of the 151 extracted critical incidents
List of tables
Table 1: 8 main critical issues as questions (Rizzo et al., 2006, p. 92)
Table 2: Breakdowns of accident/incident contributing factors by HFACS-RR level (Reinach & Viale, 2006, p. 398)
Table 3: SA error taxonomy (Endsley, 1995c, p. 288)
Table 4: Compared results from the classifications from the first and the second rater and the inter-rater agreement
List of abbreviations
App. Appendix
cf. confer
ch. Chapter
CI Critical Incident
et al. et alii
ibid. Ibidem
LE Liveware-Environment
LH Liveware-Hardware
LL Liveware-Liveware
LS Liveware-Software
OOTL Out of the loop
p. page
s. see
SA Situation Awareness
SAGAT Situation Awareness Global Assessment Technique
SART Situation Awareness Rating Technique
STS Socio-technical Systems
1 Introduction
“Work organizations exist to do work – which involves people using technological artifacts (whether hard or soft) to carry out sets of tasks related to specified overall purposes. Accordingly, a conceptual reframing was proposed in which work organizations were envisaged as socio-technical systems rather than simply as social systems” (Trist, 1950, in Trist, 1981, p. 10).
This statement describes socio-technical systems (STS), which this paper focuses on within the scope of human factors research regarding human error, decision making and organizational performance. Human factors “involves the study of factors and development of tools that facilitate the achievement of these goals” (Wickens, Lee, Liu & Becker, 2004, p. 2). Its key role is the diagnosis and solution of system failures as well as the comprehension of a system's elements and the related system design (Wickens et al., 2004). In order to optimize organizational performance in STS, relationships between humans and the circumstances of the working environment should be improved by taking human sciences and system engineering into account (Edwards, 1985, in Hawkins, 1993, p. 20). The former focuses on the strengths and limitations of the human component (ibid.). Human nature includes physical, psychological and social characteristics (Badke-Schaub, Hofinger & Lauche, 2012). According to Hawkins (1993), “Human Factors attempts to research and explain the nature of human behavior and human performance, using human sciences. Armed with this knowledge it tries to predict how a person will react and respond in a given set of circumstances.” (p. 26).
Considering human error in STS, Wickens & Hollands (2000) emphasize that system breakdowns have their roots either in human error or in poor system design. For the detection of error causes, people play a key role in organizations, as only humans are able to identify dangers and prevent errors (ibid.). Hawkins (1993) suggests that the prevention of failures could be realized by a workplace design adapted to the strengths and limitations of all interacting components in STS. Consequently, an increase in system safety, performance and satisfaction at work (Hawkins, 1993; Wickens, Lee, Liu, & Becker, 2004) as well as the identification of breakdowns is intended (Wickens & Hollands, 2000). Additionally, it has been shown that poor SA leads to many critical situations (Kaber & Endsley, 1998), which could have fatal effects in STS. The highest amount of failures is attributed to human error: at 88%, the human factor is the most critical factor in STS (ibid.). Nevertheless, some causes are attributed to the process of work, including multiple tasks and goals as well as time pressure, stress and a lack of communication between team members and among teams (ibid.). According to Jones & Endsley (2004), “[w]ith the ever-increasing complexity of systems, operators can quickly become cognitively overtaxed and unable to adequately process the large amounts of data with which they must contend” (p. 343). This statement emphasizes the challenge for operators in STS regarding the prediction of the constantly changing environment. Additionally, an understanding of the factors influencing operations is important for knowing why operational errors may occur (Endsley & Rodgers, 1998). STS are characterized by the multidimensionality of their elements and the complex interactions between these elements: humans, machines and environmental conditions (Baxter & Sommerville, 2011). From this viewpoint, the enhancement of organizational performance in these systems requires an understanding of all elements, their features, limitations and interactions (Hawkins, 1993). Consequently, organizations should consider human nature as well as the relation between human and technological factors in organizations (Badke-Schaub et al., 2012), including machines, procedures and equipment (Hawkins, 1993) as well as the software (Trist, 1981).
The elements of STS, namely Software, Hardware, Environment and Liveware, as well as their interfaces, Liveware-Software (LS), Liveware-Hardware (LH), Liveware-Environment (LE) and Liveware-Liveware (LL), are covered in Hawkins' (1993) SHEL-Model. The need to adapt decisions by considering a broad spectrum of these elements in STS is described by using Endsley's model of situation awareness (SA) in dynamic decision making. It illustrates how the state of the environment and its elements are perceived, comprehended and projected in order to make a decision and accomplish adapted actions (cf. Endsley, 1995a).
Within the scope of the title of this paper, “Human error in socio-technical systems: A research about the influence and causality of human factors on the dynamic decision making”, a study was conducted in a company producing air-conditioning technology. The aim was to examine critical issues related to the usage of the configuration software and the information flow between the employees involved in this configuration process.
The need to consider the multidimensionality of all elements and their interactions for the improvement of STS will be captured by the theoretical framework: the SHEL-Model and the model of situation awareness in dynamic decision making. The aim is to make a diagnosis regarding the existing failures in the company and to derive adapted actions based on these models.
The specific problem description and the research questions (ch. 1.1) as well as the structure of the present work (ch. 1.2) will be presented in the following.
1.1 Company-specific problem description
The purpose of this paper is to identify errors in STS and to describe the influence and causality of human factors on the decision making process within the scope of the problem description of the company in which this study was conducted. The company's purpose was to optimize complex company software, which is used for the configuration of company-specific devices. The author was assigned to identify possible errors and weaknesses regarding the work processes and steps within the software, to investigate differences in application between the sales department and order processing, and to identify errors and problems related to the information flow between these departments. The main questions to clarify were: Which error types arise? What kinds of errors are provoked by the software? Where is there a lack of information? What are possible actions for the future?
The author's classification of the company as an STS, based on knowledge about the ventilation-technology branch, evokes the necessity to comprehend all elements of the system as well as their interactions in order to derive adapted improvements (cf. Hawkins, 1993).
Although the company's problem description was mainly focused on optimizing the software in order to prevent problems and errors in the process, the author assumed that error causes lie in different factors of the system. Besides the specific purpose given by the company itself, the author assumes, based on the theoretical framework described in the introduction, that it is necessary to consider all elements of STS within the SHEL-Model and their interfaces in order to detect all problem fields connected to the work within the configuration software. Therefore, the purpose of this work is to answer the research question: Which SHEL-categories or interfaces should be improved in order to reduce failures and problems in the work within the configuration software? It is assumed that gaps in the system which influence the usage of the software are allocated in Software, Hardware, Environment and Liveware, and that mismatches between Liveware-Software and Liveware-Liveware should be considered separately, based on the company's purpose to improve the software and the information flow. To answer the first question, the SHEL-categories Software, Hardware, Environment and Liveware as well as the interfaces Liveware-Software and Liveware-Liveware were used as a framework for categorizing problems and failures derived from interviews with software users from the sales staff and order processing. This categorization serves to identify quantitative frequency allocations between the SHEL-components and qualitative explanations of problems and failures connected to the work within the configuration software (LS) and the information flow between the sales and order-processing departments (LL).
Secondly, it is assumed that the model of SA in dynamic decision making covers all components of the SHEL-Model from the viewpoint of the decision making process and could be used as a framework for explaining characteristics of errors regarding decision making, including the perception, comprehension and projection of the environmental state, and for deriving adapted actions for system design, as a complement to the framework given by Hawkins' (1993) SHEL-Model. The related question is: Which insights of the model of situation awareness should additionally be considered regarding the reduction of failures and problems in the decision making process within the configuration software?
This research question will be answered theoretically as well as empirically. First, both theories will be related to each other in order to identify connections (ch. 2.3). Later, insights from Endsley's (1995a) model will be used to inductively develop categories, which represent limitations and problems influencing SA and the decision making process. At the same time, it is intended that the analysis and interpretation of the data collected from the interviews, based on these two models, ensure the identification of connections between the two models. Based on this classification of errors, adapted improvements for the company will be recommended. It is assumed that a combination would extend both perspectives. On the one hand, the composition of STS and their processes could be enlarged by connecting the process of SA in dynamic decision making with the characteristics of the interfaces between the human nature, hardware, software and environmental conditions. On the other hand, solutions for an adapted interface design could be defined by considering implementations for the development of SA.
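To make the intended classification step concrete, the following minimal Python sketch shows how extracted critical incidents could be tallied against the SHEL-categories and interfaces. The incident identifiers and category assignments are hypothetical and serve only to illustrate the counting procedure, not to reproduce the actual study data.

from collections import Counter

# Hypothetical excerpt of coded critical incidents: each incident extracted
# from an interview transcript is assigned to one SHEL-category or interface
# (LS = Liveware-Software, LL = Liveware-Liveware).
coded_incidents = [
    ("CI-001", "LS"),        # e.g. misleading default value in the configurator
    ("CI-002", "LL"),        # e.g. missing hand-over information between departments
    ("CI-003", "Software"),  # e.g. faulty rule in the configuration logic
    ("CI-004", "LS"),
]

# Absolute frequency distribution over the categories, analogous to the
# quantitative frequency allocations described above.
frequencies = Counter(category for _, category in coded_incidents)

for category, count in frequencies.most_common():
    print(f"{category}: {count} indications")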
1.2 Framework and content design of this work
The presented assumption that the SHEL-Model could be related to the model of SA will be theoretically clarified within the presentation of the SHEL-Model (ch. 2.1) and the model of SA in dynamic decision making (ch. 2.2). First, in the theoretical part, these models will be described, connected with other studies and evaluated separately. Afterwards, they will be connected in order to illustrate interfaces and similarities (ch. 2.3). Secondly, the empirical answering of the second research question is done by inductively building categories based on the theoretical framework of the model of SA in dynamic decision making, in addition to the focused SHEL-categories, for the analysis of the interviews conducted in the company.
The empirical part is oriented towards the organization's problem description: the purpose to detect failures and problems related to the work activities within the configuration software and to examine the information flow between the departments using the software. This company problem was addressed with the purpose of detecting problem fields related to the SHEL-categories as well as to the decision making process influenced by SA. The method, including the sample (ch. 3.1) and the design and procedure (ch. 3.2), will be described in chapter 3. Afterwards the results will be presented, including data editing/ preparation (ch. 4.1), the presentation of results (ch. 4.2), the answering of the research questions (ch. 4.3) and the illustration of the derived actions (ch. 4.4). Furthermore, the results (ch. 5.1), methodical aspects (ch. 5.2) and the derived actions (ch. 5.3) will be discussed, and finally a conclusion is given (ch. 6).
2 Theoretical framework
This section is divided into three parts. In the first part of the theoretical framework, the relevance of the SHEL-Model for this paper will be described and the model itself will be presented in detail (ch. 2.1). Afterwards some related studies will be listed (ch. 2.1.1) in order to illustrate the empirical transferability and to demonstrate the necessity to consider all of its facets. Next, the evaluation of the model takes place.
In the second part, the reason for selecting Endsley's (1995a) model of situation awareness in dynamic decision making within the scope of this paper will be explained and the model itself will be introduced (ch. 2.2), divided into individual factors, system and task factors and team SA. In chapter 2.2.1, tools for the measurement of situation awareness (SA) will be presented. Afterwards, in chapter 2.2.2, related concepts and studies will be described, likewise divided into individual factors, system and task factors and team SA, in order to emphasize the necessity of these influences on the decision making process. To complete the second part, the model will be evaluated.
The third part of the theory includes the connection of both models (ch. 2.3), based on the first and the second part. This is relevant for the theoretical clarification of the interfaces between the two models, which is to be specified within the scope of the second research question (cf. ch. 1.1).
2.1 SHEL-Model
As mentioned before, the SHEL-Model captures a spectrum of elements in STS. Besides that, interactions among these elements are considered (Hawkins, 1993). Based on the idea of Human Factors, that system design requires the comprehension of all factors as well as the optimal adaptation of all elements, it is necessary to identify characteristics and relationships of humans and the circumstances in the environment (ibid.). In STS failures could have multiple triggers (Wickens & Hollands, 2000). These should be identified and improved in order to guarantee optimal interactions of all components in the system (Hawkins, 1993).
This model is chosen because it covers all facets and interactions that need to be examined within the scope of the company-specific problem, which is characterized by failures in the usage of the software and in the information flow. Although the company's problem description was mainly focused on optimizing the software in order to prevent problems and errors in the process, the author aimed to investigate all facets of the system in order to detect the main causes of error and implement adapted actions. Therefore, all facets of the SHEL-Model will be illustrated.
This model provides the possibility to consider all facets in organizations individually as well as in relation to each other, which is intended in the empirical part.
The first version of the SHEL-concept was established by Edwards and modified by Hawkins with a focus on flight (Hawkins, 1993). The model includes the spectrum of Human Factors and the interfaces between its elements at a modern workplace. It is seen as a system-based model with a focus on the operator and the contextual conditions (Reinach & Viale, 2006). The operator, referred to as Liveware, forms the center of an STS that includes further components. More precisely, Liveware interacts with the components Software, Hardware, Environment and Liveware.
The SHEL-Model expresses that the productive process in organizations is defined by the combinations of these components (Rizzo, Pasquini, Nucci, & Bagnara, 2000). Additionally, this model helps to understand how human error and accidents arise if these components are maladjusted to each other (Reinach & Viale, 2006). According to Hawkins (1993), an optimal interaction and matching of the four components with Liveware is only possible by considering and understanding the characteristics of this essential element and by ensuring the adjustment of the other components to these characteristics. Otherwise, the interaction might cause disturbances in the performance of the system (ibid.). In detail, “[a]ny lack of resources in one dimension, or requirements for new resources due to variances in the process, should be compensated by changes in the other dimensions. But bad coupling of the resources will hinder the accommodations” (Rizzo et al., 2000, p. 89).
The basic assumption of the SHEL-Model is that the human should be treated as the center of STS, because the person is the most flexible component (Hawkins, 1993). In order to increase safety in a system, components should be matched considering human nature (ibid.). An increase in system performance requires knowledge about the triggers of human behavior (Hawkins, 1993). Therefore, inter- and intraindividual differences as well as the capabilities and limitations of a person will be described in the following.
One of these triggers is motivation. Motivation “influences the level of performance, the efficiency achieved and the time spent on an activity” (Hawkins, 1993, p. 133). Locke & Latham (1990) emphasize in their model that the major elements for high performance, job satisfaction and commitment are the “direction of attention and action, effort, persistence, and the development of task strategies” (p. 240). There are also human needs, which can be generalized among humans and are major for the direction of performance. Maslow (1943) introduced the hierarchy of needs, in which the physiological needs, like the need for water, sleep and food, are the fundamental needs (ibid.). These and other motives in the lower stages of the hierarchy should be satisfied first. Only then can the next higher need in the hierarchy be focused on (ibid.). The impact of the fulfillment of basic needs, like water and food, is mentioned by Hawkins (1993) as a necessity in order to increase well-being and functionality at work. Further characteristics of human nature are related to visual and auditory perception, cognition and decision making (Wickens et al., 2004). At this point, Hawkins (1993) concentrates on physical abilities, sensing mechanisms, information processing, the feedback system and load factors. More precisely, an adjusted development of the workplace around the human should consider physical factors like ethnicity, age and sex. The resulting interpersonal varieties may cause differences in physical movement at work (ibid.). A further demand on the quality of the workplace is its adaptation to the human senses. That means stimuli like “light, sound, smell, taste, movement, touch, heat and cold” (Hawkins, 1993, p. 23) are required to be adapted to human perception criteria in order to elicit the needed response from the human to the external world (ibid.). In doing so, the varying sensitivities of the senses towards different stimuli should be taken into account (ibid.). Additionally, human responses are influenced by information processing capabilities and their limitations. According to Hawkins (1993), information from the system, such as warning systems, should also consider short- and long-term memory as well as motivational factors and influences like stress, because after the perception of information, the derived message causes feedback as an action through the muscles (ibid.). Finally, the effects of environmental conditions, like temperature, noise, boredom, pressure or light, on human performance and well-being at work are mentioned by Hawkins (1993).
As mentioned before, the external workplace around the individual is allocated in Software, Hardware, Environment and Liveware. Related characteristics of these four components are the following (Schmorrow, 1998; Licu, Cioran, Hayward, & Lowe, 2007):
1. Software: manuals, symbology, rules, regulations, laws, orders, standard operating procedures, customs, practices, organization of information in the system, documents
2. Hardware: equipment required for carrying out the work, workplace layout, seating
3. Environment: influences of physical factors, like noise, temperature, lighting
4. Liveware: human beings involved in the organization, supervisors, teams
As pictured in Figure 1, each of these components interacts with the human in the center. These components and the usage of their subparts have an overall effect on the interfaces with the Liveware as the central component and on the organizational performance (cf. Hawkins, 1993). A mismatch between components could evoke system errors (ibid.).
[Figure not included in this excerpt]
Figure 1: SHEL-Model as a conceptual model of Human Factors (Hawkins, 1993)
In STS, the most common interface is the Liveware-Hardware (LH) interaction (Hawkins, 1993). A match between these components could be realized by adapting the Hardware to the introduced human abilities, perceptions, processes and limitations. Examples are the design of seats considering ergonomic requirements or of displays considering human information processing properties (Hawkins, 1993). Human perception abilities and information processing could be considered in the way information is presented, for example with marks, colors, symbols, the placement of materials and information as well as ergonomics (Kozuba, 2011). All of these should be matched to the characteristics of the human component (ibid.). In turn, a mismatch could cause hazards with potentially serious consequences, especially in complex systems or in aviation (Hawkins, 1993).
The Liveware-Software (LS) interface includes the human interaction with “non-physical aspects of the system such as procedures, manual and checklist layout, symbology and, increasingly, computer programs” (Hawkins, 1993, p. 24) as well as “regulations, manuals, documentation […] standard operational procedures (SOPs), training and assisting computer programs” (Kozuba, 2011, p. 32). An optimal relationship depends on the “commonness, exactness, legibility of illustration/transmission, specialized phraseology, explicitness, standard symbolism” (Kozuba, 2011, p. 32). The way of transferring information from the system to the user should be understandable and not complicated (Kozuba, 2011), which in part can be organized through symbols that should trigger a specific action (Hawkins, 1993). Procedures and critical situations should also be known in order to sustain an effective LS-interaction (Kozuba, 2011). Hawkins (1993) recognizes that weak spots in this interface are less transparent and more difficult to resolve than those in the LH-interface. Because of this, the development of such systems should be carefully adapted to human nature. In this way, failures in the LS-interaction could be reduced.
Next, the Liveware-Environment (LE) interface encompasses external conditions like noise levels, temperature or work clothes, like helmets or earplugs in flight (Hawkins, 1993), as well as lighting and other indoor environmental conditions that affect human activity (Kozuba, 2011). Hawkins (1993) recommends exactly measuring the influences on human behavior and detecting the limitations of humans regarding external load.
The interface between Liveware-Liveware (LL) describes the interaction between groups, including teamwork, leadership and cooperation, which influences behavior as well as performance (ibid.). According to Kozuba (2011), “[t]hese relationships are seen through work organization prism, considering relationships between people at different levels and areas of management, and their understanding of safety problems” (p. 32). Since Hawkins' (1993) SHEL-Model is focused on aviation, he mentions crew resource management (CRM) as a method to improve this interaction.
All in all, Hawkins (1993) emphasizes the necessity of awareness regarding human characteristics and their impact on performance at the workplace. He claims that an optimal adaptation of the workplace design could increase performance and decrease failures or failed human-workplace interactions, whereas the variety in attitudes as well as motivation among individuals can be captured by training, selection or standardization (ibid.). That means the human component cannot be eliminated by solely designing the external world around the human. Additionally, a failure, like organizational performance in general, is always defined by multidimensional combinations of Software, Hardware, Environment and Liveware (Rizzo, Pasquini, Dinucci, & Bagnara, 2006). According to Rizzo et al. (2006), “[t]here are no processes that can be carried out by one of these components alone” (p. 88). Therefore, all of these SHEL-components should be considered when analyzing incidents, and sources of information from the disciplines of psychology, engineering and medicine as well as the human and physical sciences should be used in order to design the workplace, to use equipment effectively and to detect the need for training of the human component (ibid.).
Further interfaces outside the scope of human factors were also identified by Hawkins (1993), such as the Hardware-Environment and Hardware-Software interactions (ibid.). They will not be considered in this paper because of the already presented company-specific problem: to identify gaps in the configuration process within the software and in the information flow between departments. This indicates the focus on the LS- and the LL-interface.
2.1.1 Related concepts and studies
As presented in the previous section, the SHEL-Model is a conceptual framework of Human Factors related to STS, in which specific interfaces exist. If these interfaces are not optimized, system errors can arise (Hawkins, 1993). For example, limitations in the design of the software may disturb the interaction between user and software. The assumption of the necessity to understand the context of a failure in order to minimize limits is also shared by Reason (1997). According to him, “human error is a consequence not a cause” (in Reinach & Viale, 2006, p. 397), and “errors […] are shaped and provoked by upstream workplace and organizational factors” (ibid.). In his related Swiss-Cheese-Model, Reason (1990) distinguishes between active and latent errors. The latent ones are related to concealed critical issues and weak spots, which provoke errors if they are combined with other defective factors in the system (Reason, Hollnagel, & Paries, 2006). According to Reason (1990), an analysis of a system failure should consider all elements in the system, because the triggers of failures could lie in multidimensional factors in the system, as pictured in Figure 2. Local triggers, intrinsic defects, atypical conditions, unsafe acts and psychological precursors, as examples of weak spots in Reason's (1990) model, could be related to mismatches in the interfaces between the human and Software, Hardware, Environment or Liveware as well as to weak spots inside the components, for example a badly designed Software or a lack of knowledge in the Liveware.
[Figure not included in this excerpt]
Figure 2: Swiss-Cheese-Model (Reason, 1990, p. 208)
In more detail, Hobbs and Williamson (2003) report on errors influenced by multidimensional conditions, such as “poor procedures and inadequate trade knowledge, but also nontechnical issues, such as time pressure, communication breakdowns, inadequate supervision, and the physical environment” (p. 187f.).
A study by Itoh, Mitomo, Matsuoka & Murohara (2004) illustrates the necessity to consider the Liveware separately from the Liveware-Software interface. In their study, they used the SHEL-Model as a framework in order to classify seafarers' safety-related factors. For this, they added the component Management and extended the model to an m-SHEL-Model. Their explanation for this enhancement is that ships have to communicate with other ships as they are passing, which influences the operator regarding navigation practices in order to avoid collisions (ibid.). The new category Management includes controlling the whole system. Within a questionnaire survey, the authors examined operators' views about safe navigation. For this, 800 participants were asked to report critical incidents. These reports were allocated to the m-SHEL-categories and the results were presented as pictured in Figure 3. The results show that the LS-interface and the category Liveware are the most frequently mentioned characteristics of critical incidents in the survey, for grounding avoidance as well as for collision avoidance. The Liveware-Management interface is mentioned more often in relation to situations of grounding avoidance. The authors' explanation is that preventing such a situation is “an important issue in management” (Itoh et al., 2004, p. 120).
[Figure not included in this excerpt]
Figure 3: Quantity of m-SHEL-Interfaces in the navigational phase (Itoh et al., 2004, p. 120)
A further work related to the SHEL-Model was conducted by Rizzo et al. (2006). In this study, the allocation of failures among Hardware, Software and Liveware is investigated, which illustrates the multidimensionality of failures. The authors introduce a method for detecting and organizing possible sources of breakdowns and critical issues. The organizational background and the problem were introduced by describing operators' working conditions and tasks in the Italian National Railways organization as routine work, which requires quick answers, decisions with colleagues, work in teams under stressful conditions and interaction with technologies (ibid.). The authors collected critical issues within documentations from operators in order to relate problems to the SHEL-components and -interactions (ibid.). In this way, eight critical issues were identified. Three of them are related to the Hardware, three to the Software and two to the Liveware, as basic mismatches between Liveware, Software and Hardware. Next, they were formulated as questions in order to dig deeper in interviews in the second phase (Table 1).
Table 1: 8 main critical issues as questions (Rizzo et al., 2006, p. 92)
[Table not included in this excerpt]
Further work was done by Reinach & Viale (2006). In their work, the authors focus on safety implications in remote-control locomotive operations in the railroad by analyzing six accidents on the basis of interviews with operators, which yielded 36 contributing factors, active failures and latent conditions (Table 2).
Table 2: Breakdowns of accident/incident contributing factors by HFACS-RR level (Reinach & Viale, 2006, p. 398)
[Table not included in this excerpt]
The authors identified three categories of operator acts: skill-based errors, attentional failures and memory lapses. The factor preconditions for operator acts includes the technological environment and the physical environment (inadequate yard lighting). To the supervisory factors, inadequate supervision and planned inappropriate operations (poor crew pairing) were related. Organizational factors were allocated in organizational process and resource management. Additionally, a loss of SA was identified across the six incidents. All in all, the authors emphasize that, at all levels of an organization, a combination of multiple system factors can trigger incidents. The results point out the necessity to consider multiple triggers of failures. Causes are divided into operator acts (Liveware), preconditions for operator acts and organizational factors (Environment) and supervisory factors (Liveware-Liveware). Additionally, it could be derived from this study that all of these mentioned components (Liveware, Environment, Liveware-Liveware) could influence SA.
2.1.2 Evaluation
In the following, an evaluation of the SHEL-Model takes place based on the evaluation criteria for scientific theories from Asendorpf (2007). In this connection, the characteristics regarding explicitness, empirical foundation, consistency, verifiability, productivity, completeness, applicability and thriftiness will be considered (ibid.). Because of a lack of references on this model, the evaluation is primarily the author's own. The assumptions are based on the previous description of the model and the related studies.
A theory is explicit if fundamental terms are presented intelligibly in order to ensure intersubjective agreement. Fundamental terms of the SHEL-Model are Software, Hardware, Environment and Liveware and the single interfaces: LS, LH, LE, LL. More detailed definitions of the components and descriptions of the content and scope of each component are desirable if this model is to be used for classifying errors and developing actions. On the other hand, explicitness is given by the formulation of the basic statement that human factors in STS should consider all aspects that interact with the human nature in the center. Additionally, this assumption is emphasized by the figurative illustration (Figure 1) of the multidimensionality of these interfaces in such organizations. Hawkins' (1993) explanations of the individual interfaces and his recommendations towards the design of the components are reasonable. But the completeness of these explanations and recommendations is doubtful because of the focus on the initially intended context of aviation. Therefore, the usage of this model to classify errors and derive solutions in the context of flight is possible, but in other contexts the specific characteristics of the components and their interactions can vary among organizations. Nevertheless, it is assumed that this model is applicable to all STS that include the components Software, Hardware, Environment and Liveware, although insights and functions can vary. This difference between contexts does not hinder the usage of this model for the classification of errors.
Next, information regarding the empirical foundation is not given by Hawkins (1993). The lack of research from further authors is also an indication of the necessity of investigations in this field. Nevertheless, the study by Itoh et al. (2004) shows that the assumptions of the existence of certain interfaces in organizations and their relevance for organizational performance are verifiable. The authors found that the most frequently mentioned interface for seafarers is LS. Furthermore, Rizzo et al. (2006) extracted eight critical issues for a railway organization, which were allocated in Hardware, Software and Liveware. Reinach & Viale (2006) detected factors of breakdowns including operator acts (Liveware), preconditions for operator acts (LS, LH, LE), supervisory factors (LL) and organizational factors (Environment). These findings could also be used to explain the applicability.
The verifiability and the applicability are also an indication of the consistency and completeness of this model. One advantage of this model is that it offers sufficient categories for the classification of all elements in STS and their interfaces. Additionally, it could be used as a framework for the classification of failures and accidents as well as for describing certain processes, and specific influences of components could be ordered accordingly in order to adapt the system design. As mentioned before, the author assumes that the lack of definitions of each component could be seen as incompleteness and thus a disadvantage, but also as flexibility to expand the model individually for each branch and organization, and thus an advantage. As an additional note on the completeness of the model, the relation of the SHEL-Model to Reason's (1990) Swiss-Cheese-Model indicates that Hawkins' (1993) basic assumptions and claims are congruent with those of Reason (1990). Nevertheless, Hawkins (1993) neglected possible causalities of failures, which are shown as a process in Reason's (1990) model. But if the SHEL-Model were used for the classification of errors, the causality would be in evidence. That means the cause of an error would become transparent by relating the error to one component or one interface. A further disadvantage regarding the completeness of this model is the missing detailed consideration of basic psychological and cognitive processes, such as information processing mechanisms, perception, awareness and decision making. In general, this model can be seen as a static classification model, without consideration of processes and causalities.
This aspect could be evaluated as an advantage for the productivity of this model. It offers the possibility to assign incidents to specific organizational components or their interfaces. By identifying these barriers, improving actions could be derived. For example, a detected weak spot in the Liveware, caused by a lack of knowledge, could be improved by training or instruction; limitations in the LS-interaction could be corrected by actions towards optimized usability. One disadvantage regarding the productivity is the lack of recommendations regarding an optimized design of the components. An explanation for this is the previously envisaged domain. A formulation of cross-sector opportunities for action is not possible because of differences in the composition of and varieties in the components.
2.2 Model of SA in dynamic decision making
Within the scope of the title of this paper, “Human error in socio-technical systems: A research about the influence and causality of human factors on dynamic decision making”, the model of situation awareness is selected to address the second part of this framework by illustrating individual, system- and task-based influences on the decision making process. As mentioned before, poor SA leads to critical situations (Kaber & Endsley, 1998). Therefore, the examination of SA, the accuracy of perception, comprehension and projection, is necessary. Additionally, the relevance of considering SA in STS rises because of the increased complexity of systems and the continuously growing difficulty of predicting the ever-changing environment. According to Flach (1995), this model provides “a deep appreciation for the rich interactions between human and environment and among perception, decision making, and action [as well as a] classification that may reveal common structural features that threaten the integrity of complex human-machine systems and that, in turn, may suggest important design guidelines for these systems” (p. 155). This described focus of the model is the major reason for selecting it for this work. Within the scope of this paper, the aim is to identify gaps in the system related to the model of situation awareness in dynamic decision making. In this way, possible failures in perception, decision making or action regarding the usage of the configuration software could be explained.
According to Endsley (1995a), an increase in automation and technologization makes it difficult to acquire and retain SA. Because the state of the environment becomes more and more complex in modern organizations, understanding and interpreting the perceived current situation in its real meaning is seen as a challenge for operators (ibid.). Especially in aviation systems, knowledge of rules and procedures alone is not sufficient for achieving organizational goals through adapted performance (Endsley, 1999). In this case, Endsley (1999) emphasizes the importance of a deeper comprehension of several influencing factors and their importance in order to understand the state of the environment, its characteristics and relations. Only in this way can future states of the system be derived and the performance be adapted (ibid.). Because of the temporal dynamics in STS and the differences in contexts in a complex environment, operators are asked to adapt their decisions temporally and flexibly to individual situations (cf. Durso & Gronlund, 1999). This requires SA: the perception, comprehension and projection of the current situation.
On the first level of SA, the perception of environmental attributes and elements plays a role (ibid.). The needed information in STS is presented, for example, by machines, lights or further elements in the working environment. Failures on the first level of SA, such as a lack of perceptibility, could arise from poor information transmission to the operator, poor system design or a design that disregards the matching of the interface between system and operator. For example, poor warning mechanisms or incomplete visual and auditory signals could lead to failures in this first step of perceiving information (ibid.). There are also cognitive limitations of humans and environmental demands, caused by stress and workload, which can lead to errors in the first step of SA (ibid.).
The next stage, the comprehension of the current situation, contains the comprehension of the perceived elements by combining them with operator goals and integrating them into a holistic picture in order to understand the situation (Endsley, 1995a, 1999). That means the operator should know which of the perceived information is important and which could be ignored (Endsley, 1999). Errors in the comprehension of the elements from the environment are in most cases the result of failures in combining the value of the perceived information with the operator goals (Endsley, 1995a). That means perceived information could be over- or underestimated in its relevance for the goals. Some triggers are a lack of mental models, the selection of wrong models or the wrong interpretation of the selected model in combination with the information, even if the selected model is correct. For the case in which no model is available at the second level, Endsley (1995a) emphasizes the necessity to process the information in the working memory.
Next, the projection of the future status follows as the third level of SA. In this state, the operator derives possible future actions and detects possible future developments of the elements in the environment. Additionally, a selection of the most favorable decision as an adjusted answer to the state of the environment takes place (ibid.). The third level is also threatened by errors if the operator has difficulties projecting the mental model into the future and deriving consequences (ibid.). This may especially occur in complex systems with multiple goals. Another risk on this level could be caused by schemata, which enable automatic and inflexible processes. A dynamic environment requires adaptability and flexibility as well as the ability to modify schemata (ibid.).
SA is directly linked to decision making. The interpretation of the situation has an impact on the decision process and the selection of the problem-solving strategy (Endsley, 1995a). SA also exhibits a relation to performance. According to Endsley (1995a), poor performance could occur if “SA is incomplete or inaccurate, when the correct action for the identified situation is not known or calculated, or when time or some other factors limits a person's ability to carry out the correct action” (p. 40). Beyond this, decision making and performance are described as self-contained processes and separate stages outside SA. Endsley (1995a) emphasizes that these constructs, SA, decision making and performance, have different influences and should be considered separately. Nevertheless, these processes indicate a causal relation. That means wrong decisions are possible in the case of incorrect SA despite training. In turn, correct SA may also lead to wrong decisions or poor performance because of a lack of training (ibid.).
In her model of SA in dynamic decision making, Endsley (1995a) shows the relations between individual, task and system factors, which all influence SA. The purpose of this model is to develop implications for system design in order to increase SA (Endsley, 1995a). SA is presented as a construct, divided into perception, comprehension and projection, which influences decision making and performance in dynamic environments, such as complex STS (ibid.). The influences of individual factors and of task and system factors, which are shown in Figure 4, as well as of team factors will be described separately in the following.
[Figure not included in this excerpt]
Figure 4: Model of SA in dynamic decision making (Endsley, 1995, p. 35)
Individual factors
Influencing individual factors in Endsley’s (1995a) model of SA in dynamic decision making are goals and objectives, expectations, information processing mechanisms, including long term memory stores and automaticity, as well as abilities, experiences and training.
Durso & Gronlund (1999) emphasize that cognitive factors are major predictors of SA, captured by information processing mechanisms, such as short- and long-term memory, working memory and attention. The working memory has an impact on focusing attention based on experiences or current goals (ibid.). New information is perceived, evaluated and projected based on goals, objectives and expectations. One disadvantage is that humans have limited capacities of working memory and attention. For example, direct attention could be disturbed by “information overload, complex decision making, and multiple tasks” (Endsley, 1995a, p. 41). Consequently, a decrease in SA, poor decision making and errors could emerge (Endsley, 1999). According to Durso & Gronlund (1999), “the operator in a complex environment must determine the items in the environment on which to focus. This is aided both by well-designed bottom-up alerting devices and displays, and well-practiced top-down scanning patterns coupled with priming or automatic detection. Attention is often allocated efficiently, but not always (e.g. time pressure, information overload, cognitive tunnel-vision)” (p. 291).
The limits of working memory can be compensated by long-term memory, including semantic networks, schemata, mental models and automaticity. Generally, the development of these varying information processing mechanisms is enabled by abilities, experiences and training (ibid.). This available knowledge comes into play in the interpretation of the perceived information (Level 2). Long-term memory stores additionally influence the first level of perception, by directing attention, and the derivation of the future status of the current situation (Level 3) (Endsley, 1999). Across these levels, information is classified into stored categories and representations (ibid.). Schemata ensure the understanding of information by coding and organizing it. Mental models are seen as “complex schemata” (Endsley, 1995a, p. 43), which provide “explanations of system functioning and observed system states, and predictions of future states” (Morris, 1985, in Endsley, 1995a, p. 43). Automaticity is a related construct and is described as helpful for compensating the limitations of attention (Endsley, 1999). Mental models, based on experience, also evoke automatic processing, which is known to be fast, effortless and unconscious (Endsley, 1995a). The latter implies the disadvantage, related to the process of SA, that the person is not aware of the process of how the perceived information leads to the decision and finally to the action, because actions are based on routines (Endsley, 1995a, 1999). In detail, the person “may not be able to identify the process, used to come to the decision, because it was directly retrieved from memory as the appropriate script for that situation” (Bowers, 1991; Manktelow & Jones, 1987, in Endsley, 1995a, p. 46). A further disadvantage of automatic information processing is the limited sensitivity and flexibility towards unknown stimuli, which differs from the functionality of the working memory (Endsley, 1995a).
Further individual factors affecting decisions are goals. In some cases these goals may be in conflict with each other, for example a person's individual goals and the system goals (ibid.). Goals primarily affect the second level of SA, the interpretation of information from the environment, and the decision (ibid.). At this point, top-down processing on the one hand and bottom-up processing on the other hand occur (ibid.). The former activates goals as well as mental models or schemata, whereas the latter describes the perception of the state of the environment.
Generally, individuals differ in their SA because of interindividual differences in information processing mechanisms, abilities, experiences and the amount of training obtained (Endsley & Bolstad, 1994). Additionally, goals, objectives and preconditions can vary among people. These variations can cause interindividual differences in the perception, comprehension and projection of information (ibid.).
System and task factors
Task and system factors (Figure 4) include system capability, interface design, stress and workload, complexity, and automation. These factors also have an impact on SA, decisions and the resulting performance (Endsley, 1995a). External working conditions are mostly seen as confounding factors for developing SA (Endsley, 1999). In particular, stress, workload, complexity and automation are identified as challenges for the operator (ibid.).
The first factor, stress, is divided into physical stress, which can arise from noise, temperature, lighting or fatigue, and social psychological stressors like “fear or anxiety, uncertainty, importance or consequences of events, aspects of task affecting monetary gain, self-esteem, prestige, job advancement or loss, mental load, and time pressure” (Endsley, 1995a, p. 52). One negative effect on SA is a decreased breadth of attention, because only central information is attended to, while peripheral information, which can be essential in complex systems, is neglected (ibid.). In this case, the first SA level is affected. Based on this, the interpretation on the second level and the projection on the third level can contain errors. As a consequence, wrong decisions and poor performance can result (ibid.). Nevertheless, it is argued that “a certain amount of stress may actually improve performance by increasing attention to important aspects of the situation” (Endsley, 1999, p. 265).
Differences in the amount of workload in combination with the amount of SA can have various consequences. According to Endsley (1995a), low SA and low workload can lead to vigilance decrements and low motivation. Low SA and high workload induce partial perception and incomplete classification of information. If the operator has high SA and the system or the task imposes low workload, the relationship between SA and workload is identified as the optimal allocation in the system, as the operator has to process only simple information. In the case in which both are high, “the operator is working hard but is successful in achieving an accurate picture of the situation” (Endsley, 1995a, p. 53).
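These four combinations lend themselves to a simple lookup. The following minimal sketch (an illustration for this text, not part of Endsley’s work) encodes the qualitative outcomes described above:

```python
# Minimal sketch (not from Endsley): the four SA/workload combinations
# described above, encoded as a lookup table.

def sa_workload_state(sa_high: bool, workload_high: bool) -> str:
    """Return the qualitative state Endsley (1995a) associates with
    each combination of SA and workload."""
    states = {
        (False, False): "vigilance decrement, low motivation",
        (False, True): "partial perception, incomplete classification of information",
        (True, False): "optimal allocation: only simple information to process",
        (True, True): "operator works hard but achieves an accurate picture",
    }
    return states[(sa_high, workload_high)]

print(sa_workload_state(sa_high=True, workload_high=False))
# -> optimal allocation: only simple information to process
```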
Third, the complexity of a system is seen as a cause of workload. The higher the complexity, the more likely a decrease in SA will be precipitated, because of an increase in mental workload (Endsley, 1999).
Similar consequences are described regarding automation. The implementation of high degrees of automation can have the effect that the ability to detect errors immediately decreases, because of vigilance problems and complacency as well as the change from active to passive operator tasks (Endsley & Kiris, 1995; Endsley, 1996; Endsley, 1997). A lack of vigilance is also connected to false alarms, which lead to decreased trust in automation. A further negative influence on the operator’s ability to perform correctly is a lack of feedback from the system. In this context, the term out-of-the-loop (OOTL) performance problem is introduced by Endsley (1995a). It describes the problem of low SA and the lack of ability to resume manual performance if automation breaks down in automated systems. Negative consequences of the OOTL performance problem are described by Kaber & Endsley (1997). These are: (1) Operator failure to observe system parameter changes and intervene when necessary; (2) Human over-trust in computer controllers (complacency); (3) Operator loss of system SA; and (4) Operator direct/manual control skill decay (p. 126).
According to Badke-Schaub et al. (2012), the relevance of humans increases with an increase of automation. In other words, the amount of work transferred to the technology becomes higher. This means that the operators’ responsibility to intervene in critical situations rises, while manual abilities and knowledge about complex systems decrease (ibid.). This paradox regarding the role of humans in STS is called the “irony of automation” (Bainbridge, 1987, in Badke-Schaub et al., 2012, p. 5). The following figure shows levels of automation and the related roles of the human and the system.
[Figure not included in this excerpt]
Figure 5: Levels of control in automation (Endsley & Kiris, 1995, p. 385)
According to Endsley (1995a), an increase in automation makes it difficult to acquire and retain SA. Because the state of the environment becomes more and more complex in modern organizations, understanding and interpreting the perceived current situation in its real meaning is seen as a challenge for operators (ibid.). Here, experiences and structures in long-term memory are seen as aids for understanding the situation (ibid.). Additionally, it is recommended that automated systems should offer the necessary information to operators so that error detection is simplified (Badke-Schaub et al., 2012).
Team SA
Team SA is not illustrated in Endsley’s figurative illustration (Figure 4). Nevertheless, this aspect plays an important role in all operations in STS in which two or more operators have to cooperate, and it is more complex than considering individual SA only (Salas, Prince, Baker & Shrestha, 1995). However, it is relevant to know that team SA can only be understood if all factors of individual SA are considered (ibid.).
Team SA, as a construct, is seen as difficult to examine (Salmon, Stanton, Walker, Baber, Jenkins, Mcmaster & Young, 2008). At the same time, it is described as a multidimensional construct including “individual team member SA, shared SA between team members and also the combined SA of the whole team [as well as processes, like] communication, coordination, collaboration” (Salmon et al., 2008, p. 308). Stanton et al. (2008) add that “[t]eam SA […] remains something of a challenge to the human factors community, both in terms of its description and its measurement” (p. 319).
Team SA is described as “the degree to which every team member possesses the SA required for his or her responsibilities” (Endsley, 1989, in Endsley & Robertson, 2000a, p. 303). Teams, in general terms, are coalitions in organizations, including two or more members who work towards a joint goal (Kaber & Endsley, 1998). Each team member operates toward a specific subgoal, which is set to achieve the common goal of the organization. This allocation of subgoals gives rise to each member’s responsibility, as part of the organization, to interact with other team members or across teams (cf. ibid.).
Endsley (1995a) describes that each team member has individual elements of SA, including individual requirements and responsibilities. Consequently, the processes of achieving SA may differ from those of the other team members (ibid.). A coordination of these elements can be achieved by “verbal exchange” (Endsley, 1995a, p. 39). This sharing of information can increase the quality of each individual’s SA (ibid.). It is suggested that each team member holds information that is required by another team member for his or her organizational performance (Endsley, 1999; Endsley & Robertson, 2000a). Consequently, individual deficits in knowledge can be covered by sharing information in teams and by transferring information from one team member to another (Endsley, 1995a). If information is shared, some parts of individual knowledge overlap with the knowledge of other team members, as shown in Figure 6. An overlapping of the three levels indicates shared team SA (Stanton, Salmon, Walker & Jenkins, 2010).
[Figure not included in this excerpt]
Figure 6: Team SA (Endsley, 1989, in Endsley & Robertson, 2000a, p. 303)
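The overlap idea in Figure 6 can be pictured as set intersection. The following hypothetical sketch (element names invented for illustration) treats each member’s SA as a set of situation elements; the intersection corresponds to shared SA and the union to the combined SA of the team:

```python
# Hypothetical sketch: each member's SA as a set of situation elements;
# shared team SA corresponds to the intersection (cf. Figure 6).

pilot_sa = {"altitude", "heading", "fuel_state", "weather_ahead"}
copilot_sa = {"altitude", "heading", "checklist_status", "weather_ahead"}

shared_sa = pilot_sa & copilot_sa        # elements both members are aware of
combined_sa = pilot_sa | copilot_sa      # the team's pooled awareness
unique_to_pilot = pilot_sa - copilot_sa  # knowledge worth communicating

print(sorted(shared_sa))        # ['altitude', 'heading', 'weather_ahead']
print(sorted(unique_to_pilot))  # ['fuel_state']
```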
Besides that, team coordination also involves sharing the higher levels of SA, i.e. comprehension and projection (ibid.). This means that varying goals and experiences as well as mental models are shared through the coordination of team processes (ibid.). A previous exchange of mental models has the advantage that less communication is needed to predict the future behavior of other team members in current situations (ibid.).
2.2.1 Measurement of SA
Measures of SA are seen as essential for system evaluation and for identifying the need for system development in order to support SA (Jones & Endsley, 2000). In the following, two techniques for measuring SA are described in order to expand the theoretical knowledge regarding SA, because the terms will be used in the following chapter. These techniques were not used empirically in this paper.
One subjective measurement is SART, the Situation Awareness Rating Technique (Taylor, 1990, in Jones & Endsley, 2004, p. 344). Within this technique the operator’s self-rated SA is assessed (ibid.), and the supply of and demand for workload, information processing and attention are evaluated (Durso & Gronlund, 1999). Ten items ask the participants to rate their “supply of attention, demand for attention, and understanding” (Jones & Endsley, 2004, p. 344). Endsley, Selcon, Hardiman & Croft (1998) name the simplicity and the transferability to a wide range of task types as the main advantages of this measurement. Additionally, they point out its possible usage in real-world tasks as well as simulations. However, the lack of objectivity is seen as a disadvantage (Jones & Endsley, 2004). Further limitations are “(1) the inability of operators to rate their own SA […], (2) the possible influence of performance on their own […] ratings, and (3) possible confounding with workload issues […]” (Endsley et al., 1998, p. 2).
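The three SART dimensions are commonly combined into a single score as SA = U − (D − S), with U = understanding, D = attentional demand and S = attentional supply. A minimal sketch, assuming the ten items have already been aggregated into the three dimensions:

```python
# Minimal sketch of the common SART combination SA = U - (D - S),
# where U = understanding, D = attentional demand, S = attentional supply,
# each assumed to be aggregated beforehand from the ten rating items.

def sart_score(understanding: float, demand: float, supply: float) -> float:
    """Combine the three SART dimensions into a single SA estimate."""
    return understanding - (demand - supply)

# Example: good understanding (6), high demand (5), moderate supply (4)
print(sart_score(understanding=6, demand=5, supply=4))  # -> 5
```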
An alternative is given by an objective measurement: the Situation Awareness Global Assessment Technique (SAGAT) (Endsley, 1990; Endsley, 1995b; Endsley, 2000b; Jones & Endsley, 2000; Bolstad & Endsley, 2003; Wright, Taekman & Endsley, 2004). It is used within a simulation task in which the operator is asked to answer questions about the current situation at unpredictable points, when the simulation is frozen (Endsley & Rodgers, 1998). Afterwards, these responses are compared to the actual situation (Jones & Endsley, 2004). Within this technique the operator’s knowledge of the current situation is assessed. High SA is indicated if a high proportion of the questions is answered correctly (Durso, Hackworth, Truitt, Crutchfield, Nikolic & Manning, 1998). High SA is not the only predictor of good performance, because many other factors influence decision making and performance (Salmon, Stanton, Walker, Jenkins & Ladra, 2009). However, poor SA as measured by the SAGAT does indicate poor performance (ibid.). This technique is known to be valid, especially for predicting pilots’ performance (predictive validity). The main advantages of this method are its objectivity and its applicability in many domains, such as “fighter aircraft, bomber aircraft, commercial aircraft, air traffic control, maintenance systems, and nuclear power” (Endsley et al., 1998, p. 2).
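Scoring within the SAGAT thus amounts to comparing operator answers with ground truth at each freeze and computing the proportion of correct responses, typically per SA level. The following sketch illustrates this logic (query contents and data layout are assumptions for this example, not Endsley’s implementation):

```python
# Illustrative sketch of SAGAT scoring: queries posed at simulation freezes
# are compared with ground truth, and the proportion of correct answers
# is reported per SA level.

from collections import defaultdict

def sagat_scores(queries):
    """queries: iterable of (sa_level, operator_answer, ground_truth)."""
    correct, total = defaultdict(int), defaultdict(int)
    for level, answer, truth in queries:
        total[level] += 1
        correct[level] += (answer == truth)
    return {level: correct[level] / total[level] for level in total}

freeze_data = [
    (1, "FL320", "FL320"),                        # Level 1: perceived altitude
    (1, "240kt", "250kt"),                        # Level 1: perceived speed (wrong)
    (2, "converging", "converging"),              # Level 2: comprehended geometry
    (3, "conflict in 2min", "conflict in 2min"),  # Level 3: projected conflict
]
print(sagat_scores(freeze_data))  # {1: 0.5, 2: 1.0, 3: 1.0}
```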
2.2.2 Related concepts and studies
As mentioned in the previous section, the investigation of factors associated with operations is necessary in order to understand error sources and to derive adapted implications for the future. Rodgers & Nye (1993) examine the causality of factors influencing operational error in Air Traffic Control (ATC) (in Endsley & Rodgers, 1998, p. 1). They found that 36% of these errors were related to communication, 20% of which were caused by feedback errors. A further 15% included coordination deficits, 3% were associated with problems in briefing support and 13% involved deficits in data presentation. Finally, 59% included problems with the radar display (Rodgers & Nye, in Endsley & Rodgers, 1998, p. 1). The authors concluded that some of these errors are related to multiple causal factors, which is why the percentages do not sum to 100 (ibid.). Endsley & Rodgers (1998) emphasize that especially in ATC the complexity of the system poses a challenge for perception (Level 1), understanding (Level 2) and projection (Level 3). Disorders at one level can be critical for effective functioning in such systems (Endsley & Rodgers, 1998, p. 2).
Further work was done by Endsley (1995c). In the following taxonomy of SA errors (Table 3), Endsley (1995c) links certain causal factors of SA errors, divided according to the levels of SA. The related frequencies are based on an analysis of National Transportation Safety Board (NTSB) accident investigation reports in the United States from the years 1989-1992.
Table 3: SA error taxonomy (Endsley, 1995c, p. 288)
[Table not included in this excerpt]
At the first level of SA, failures can occur if the system does not provide the necessary amount of information to the operator (data not available). This category includes, among others, the lack of signals when automation fails (Endsley, 1995c). Poor signals or other “poor visual information” (Endsley, 1995c, p. 289) relate to data that is difficult to detect/perceive. In other cases, even if data is available, individual omissions, such as ignoring checklists, distraction or a high task load, can inhibit the perception of important information. Furthermore, information can be misperceived because of inappropriate expectations or wrong interpretations (ibid.).
Furthermore, information can be correctly perceived but not properly comprehended (Endsley, 1995c). This can happen because of a lack of mental models or the choice of a wrong mental model. Another possibility is that the operator’s values and expectations regarding the system function are wrong. Memory failures due to limitations of working memory are also seen as causing errors at the second level of SA.
Even if perception and comprehension are completed successfully, errors in projection can occur. These can arise if mental models are incomplete and do not deliver the information needed to derive the meaning of the situation for the future.
In their study, Endsley & Rodgers (1998) use two techniques, the SAGAT and the Systematic Air Traffic Operations Research Initiative (SATORI), in order to identify the different factors included in operational scenarios and affecting operational errors. SATORI was developed to gain insights into operational errors by recreating the radar data recorded in Air Traffic Control (ATC) on a visual display and synchronizing it with audio records of operator communications. In this study the high automation of the system in relation to SA is highlighted. The authors consider the passivity of operators in controller tasks and the decreased requirement for manual performance because of the automated ATC tasks. Fifteen errors from the Atlanta Air Route Traffic Control Center from the years 1993 and 1994 were recreated with the SATORI technique. Three errors each were randomly allocated to a total of 20 participants. Within the SAGAT, the participants were asked, at unpredictable points, to evaluate the current situation in the scenario. The results show that eight errors were related to Level 1 of SA, one included errors derived from Level 2 and one error was related to problems in Level 3. Error types identified for Level 1 were: failure to monitor (5 times), misperception (3 times) and memory loss (2 times). Level 2 includes error types like over-reliance on defaults (2 times) and inadequate mental model (1 time). Finally, Level 3 only consists of the error type lack of projection (4 times).
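The reported per-type frequencies can be aggregated by SA level with a simple tally, as the following sketch illustrates (purely illustrative; note that a single operational error may involve several error-type instances, so these totals need not match the per-error counts above):

```python
# Sketch: tallying the error-type instances reported by Endsley & Rodgers
# (1998) by SA level, mirroring how such frequency taxonomies aggregate.

from collections import Counter

observed_errors = (
    ["failure to monitor"] * 5 + ["misperception"] * 3 + ["memory loss"] * 2
    + ["over-reliance on defaults"] * 2 + ["inadequate mental model"]
    + ["lack of projection"] * 4
)
level_of = {
    "failure to monitor": 1, "misperception": 1, "memory loss": 1,
    "over-reliance on defaults": 2, "inadequate mental model": 2,
    "lack of projection": 3,
}

by_type = Counter(observed_errors)
by_level = Counter(level_of[e] for e in observed_errors)
print(by_type)
print(by_level)  # Level 1: 10, Level 2: 3, Level 3: 4 instances
```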
In the following, concepts and studies are presented, divided into individual factors, task and system factors, and team SA, following the structure of Figure 4.
Studies related to individual factors
In the following, studies and concepts on the influence of individual factors on SA are presented; these are mainly connected to the amount of knowledge and experience.
The importance of knowledge for the achievement of SA is emphasized by Gugerty (1997). He investigates the importance of knowledge of the spatial location of cars (as an aspect of SA) for focusing attention. He found that in a real-time driving task using a PC simulation, drivers had more control over their driving task if they remembered the locations of potentially risky cars. Global performance was measured by crash avoidance. Besides that, it was found that experienced drivers directed and applied their attention in the context of their goals. Endsley (2000a) also points out that automaticity, based on experience, “provides good performance with a very low level of attention demand in certain well understood environments” (p. 15).
The relevance of knowledge and experience is also investigated by Klein (1993). He introduces the term Recognition-Primed Decision (RPD) making and describes “how experienced people can make rapid decisions” (p. 140). This model is based on a study of firefighters, in which it was observed that experienced firefighters first recognized the situation and the possibilities for adapted reactions. Second, they spent time evaluating the alternatives before reacting. Further, they mentally simulated the implementation in order to prevent hazards. In the case of potential problems, they modified their strategies. Klein (1993, p. 142) presents some relevant aspects of situation assessment: (1) Understanding the types of goals that can be reasonably accomplished; (2) Increasing the salience of cues that are important; (3) Forming expectations; (4) Identifying typical actions.
According to Sweller (1988), the amount of knowledge and experience distinguishes experts from novices. He suggests that the main differences between experts and novices in problem solving are the existence of schemas and learning processes. Experts differ from novices because of their knowledge, organized in schemas. They have the advantage of using this knowledge as a problem-solving skill. Novices do not have such fixed memory structures; therefore, their problem-solving process functions in parallel as a learning process (ibid.).
Knowledge and experience are treated with a different focus by Rasmussen (1983). He distinguishes between skill-, rule- and knowledge-based performance. The first is associated with performance activated unconsciously and automated by the sensory system. It is pointed out that conscious feedback control, in order to evaluate the response from the sensory system and detect possible errors, is done rarely. In order to examine skill-based behavior, Rasmussen (1983) recommends control tasks which require feedback control from the operator, for describing one’s own actions as well as conscious action control. In this way, activated schemata can be reflected upon, although skill-based behavior characteristically lies outside conscious attention (ibid.).
Rule-based behavior represents conscious problem solving and planning, which is driven by learned and stored rules. These can be, for example, previously successful experiences or behavior observed in or communicated by other persons. Although rule-based behavior is conscious, detecting its actual trigger is difficult, because the underlying rules are not necessarily defined well enough to be consciously reflected upon and verbalized (ibid.).
The next higher level is knowledge-based behavior. In this case, performance is influenced by higher concepts and goals, which are formulated precisely and generally applied during unfamiliar situations. Rasmussen emphasizes that at this level, explicitly represented mental models dominate.
Related to STS, Rasmussen (1983) suggests integrating signs, symbols and signals as information into the external system in order to activate the desirable behavior by inducing the selection of adapted goals or rules at the level of skill-, rule- and knowledge-based behavior. Signs, signals and symbols should be differentiated according to skill-, rule- and knowledge-based behavior: signals serve as triggers for the skill-based level, signs are connected to rules at the second level, and symbols are information influencing processes at the higher, knowledge-based level. The higher the level, the more precise the information and explanations of relations need to be, so that strategies are activated, potential errors can be detected and corrected consciously, and coping with the complexity of STS is facilitated (ibid.).
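Rasmussen’s correspondence between information types and performance levels can be summarized as a simple mapping. The following minimal sketch (an encoding for this text, not taken from Rasmussen) captures the pairing described above:

```python
# Minimal sketch of Rasmussen's (1983) mapping between information types
# in the external system and the performance levels they address.

INFORMATION_TO_LEVEL = {
    "signal": "skill-based",      # triggers sensorimotor patterns
    "sign": "rule-based",         # activates stored if-then rules
    "symbol": "knowledge-based",  # supports explicit mental models
}

def addressed_level(information_type: str) -> str:
    """Return the performance level a given information type addresses."""
    return INFORMATION_TO_LEVEL[information_type]

print(addressed_level("sign"))  # -> rule-based
```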
Studies related to system and task factors
In the following, studies related to system and task factors are presented in order to specify their influence on SA. The focus here is on workload, the allocation of resources between two tasks and its influence on SA and performance, as well as the level of automation and its effects on humans.
The influence of multitasking, or of distraction caused by other tasks, on the performance of both tasks is addressed by Wickens’ (1981) resource theory. According to Wickens (1981), the resource theory “infers that the subject is modulating the supply of resources between the tasks, in order to obtain the desired level of differential performance” (p. 7). According to the performance-resource function (PRF), in dual-task conditions each task is performed less well, because resources have to be divided. Wickens (1981) also considers the impact of task difficulty on performance (Figure 7).
[Figure not included in this excerpt]
Figure 7: The PRF for a practiced or easy task (B), and a novel or difficult task (A) (Wickens, 1981, p. 11a)
Compared to a difficult task, high performance in the easy task condition is reached using fewer resources (about 50%). In the difficult condition, more resources (100%) are needed to reach the same level of performance. The use of fewer resources in B does not mean that the easy task is performed better than the difficult task; the focus here is on the amount of resources needed in both tasks. The ease of task B results from practice and automatisms regarding the demanded performance, which leads to a “quantitative change in the PRF” (Wickens, 1981, p. 12).
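The qualitative shape of this argument can be imitated with a toy function. In the following sketch (the linear shape and the saturation points are assumptions for illustration only), the practiced task B saturates at about 50% of resources while the difficult task A needs the full supply:

```python
# Toy sketch of a performance-resource function: the easy/practiced task (B)
# reaches maximal performance with ~50% of resources, the difficult task (A)
# needs the full supply. Linear shape is an illustrative assumption.

def performance(resources: float, saturation: float) -> float:
    """Toy PRF: performance rises linearly until the task's saturation point."""
    return min(1.0, resources / saturation)

easy, hard = 0.5, 1.0  # resource level at which each task saturates
print(performance(0.5, easy))  # 1.0 -> task B saturates at 50% resources
print(performance(0.5, hard))  # 0.5 -> task A is still resource-limited
```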
A further perspective on dual tasks is the multiple resource theory, which implies that an efficient sharing of resources is possible if the tasks demand different resources. A concurrent task will only be affected if it requires the same resources. Wickens (1981) distinguishes resources by stage of processing (perceptual, central, response), modality of input (visual, auditory) and response (manual, vocal), and code of perception and central processing (verbal, spatial). According to this, it is possible to combine different modalities and codes without the necessity of dividing attention and decreasing performance in each task. For example, it is possible to use two different modalities simultaneously by dividing attention between eyes and ears (ibid.). That means, “differences in the qualitative demands of information processing structures led to differences in time-sharing efficiency” (Wickens, 2002, p. 162). Dual-task performance is described as poorer if two visual tasks or two auditory tasks are time-shared (Wickens, 2002). Blandford & Wong (2004) describe that SA can be “achieved through many channels simultaneously” (p. 423).
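Under the multiple resource view, interference between two tasks can be approximated by counting the resource dimensions they share. The following sketch (the task profiles are invented for illustration) predicts efficient time-sharing for tasks with disjoint demands and strong conflict for tasks with identical demands:

```python
# Illustrative sketch of the multiple resource idea: two tasks interfere
# to the extent that they demand the same resource dimensions
# (processing stage, input modality, response type, code).

def predicted_interference(task_a: dict, task_b: dict) -> int:
    """Count shared resource dimensions; higher means poorer time-sharing."""
    return sum(task_a[dim] == task_b[dim] for dim in task_a)

driving = {"stage": "perceptual", "input": "visual",
           "response": "manual", "code": "spatial"}
radio_talk = {"stage": "central", "input": "auditory",
              "response": "vocal", "code": "verbal"}
map_reading = {"stage": "perceptual", "input": "visual",
               "response": "manual", "code": "spatial"}

print(predicted_interference(driving, radio_talk))   # 0 -> efficient sharing
print(predicted_interference(driving, map_reading))  # 4 -> strong conflict
```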
Wickens (2002) differentiates between the quality and the quantity of resources. Based on this, he points out that an increased demand (quantity) for one task poses a challenge for the allocation of attention to the other task, even if the demanded qualities of resources are different. He defines workload as the interface between supplied resources and demanded tasks (Wickens, 2003). If competing tasks exceed the supply of resources, the demand is higher than the potential supply and performance is expected to decrease (ibid.). In his further work, Wickens (2008) outlines the relevance of the multiple resource theory for the concept of mental workload and its influence on poor performance in times of dual-task overload.
Damos, Bittner, Kennedy and Harbeson (1981) examined the relation between dual tasks and tracking performance of twelve Navy enlisted men over 15 sessions of single-task critical tracking, 15 sessions of compensatory tracking and 15 sessions of critical-compensatory tracking as a dual task. They found that tracking performance under dual-task conditions was increased in comparison to single-task performance. Additionally, the authors report an increase in dual-task performance after 15 sessions of practice. They concluded that the amount of practice is important to increase performance, especially for challenging dual tasks (ibid.).
Endsley & Rodgers (1996) point out, by using re-created scenarios within the SATORI technique and assessing operators’ understanding of the situation within the SAGAT technique, that “[t]he frequency of correct responses on each variable provides some insights into the tradeoffs that the controllers make in allocating their limited attention across multiple aircraft and pieces of information that compete for that attention” (p. 84). Based on this, they concluded that attention allocation strategies may lead to low SA.
In his study on the influence of workload on pilots, Wickens (2003) examined workload in one experiment with a dual-task scenario of flying a UAV and one experiment with a single task of flying a UAV. Both experiments were divided into an automation condition and an auditory condition. It was found that automation can be beneficial for both dual and single tasks. More precisely, the task of monitoring the 3-D image display for possible targets of opportunity (TOO) was simplified by automation, while the auditory condition was found to be more suitable for the detection of system failures (SF) in the dual task.
Furthermore, Wickens (2002) describes how system design affects the SA of pilots. He differentiates between three components of SA: spatial awareness, system awareness and task awareness. For the first component, he claims that experienced pilots make connections between their mental models and constraints of the environment; additional cognitive challenges may lead to mental workload. Wickens (2002) recommends that displays should be consistent with mental models and the limits of attention. Next, system awareness is described as influencing SA negatively if automated systems are poorly designed. Finally, task awareness is described by its aspects of communication and system management. Pilots must always be aware of necessary actions. He suggests that checklists, training and experience may be helpful, but unexpected challenges cannot be covered by checklists.
Endsley, Mogford, Allendorfer, Snyder & Stein (1997) examined the effect of free flight on controller workload, control strategies and performance. They focus on selected requirements of free flight, such as “the use of direct routes, the ability of pilots to deviate from flight plans of their own accord, and the requirement to inform controllers of pilot intentions in making such deviations” (Endsley et al., 1997, p. 3). Endsley et al. (1997) found that all levels of SA were affected negatively in situations in which active control decreased under free flight conditions. Especially the higher levels of SA were influenced negatively: participants were not able to comprehend and project events concerning the aircraft as well as the influences of environmental conditions on the flight. Additionally, it was found that the controllers’ workload increased, because they were aware of their low SA and tried to apply more effort to understand the situation. Furthermore, the ability to react in a timely manner was found to be poor. In their evaluation of these results, the authors draw connections to the OOTL performance problem.
An earlier study by Endsley & Kiris (1995) also investigated this topic. They found that the passivity of the operator under automated conditions is mainly responsible for a loss of SA. The authors measured the SA level across the previously presented five levels of automation (cf. Figure 5). The results indicate that SA is reduced in the case of full automation. The highest level of SA was achieved in the manual condition, followed by consensual AI.
[Figure not included in this excerpt]
Figure 8: SA across levels of automation (Endsley & Kiris, 1995, p. 390)
The levels of automation measured in this study were as follows (a minimal encoding is sketched after the list):
(1) Manual: The operator acts alone, without automation
(2) Decision support: The system provides recommendations, but the operator acts alone
(3) Consensual Artificial Intelligence (AI): The system acts with the operator’s concurrence
(4) Monitored AI: The system acts, with the possibility of a veto from the operator
(5) Full automation: The system acts without any possibility of intervention from the operator
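The following minimal encoding (an assumption for this text, not taken from the study) expresses the five levels and the point at which operator intervention is no longer possible:

```python
# Minimal encoding (assumption, not from the study) of the five automation
# levels from Endsley & Kiris (1995) described above.

from enum import IntEnum

class AutomationLevel(IntEnum):
    MANUAL = 1            # operator acts alone, no automation
    DECISION_SUPPORT = 2  # system recommends, operator acts alone
    CONSENSUAL_AI = 3     # system acts with the operator's concurrence
    MONITORED_AI = 4      # system acts; operator may veto
    FULL_AUTOMATION = 5   # system acts without operator intervention

def operator_may_intervene(level: AutomationLevel) -> bool:
    """Only full automation removes the possibility of intervention."""
    return level < AutomationLevel.FULL_AUTOMATION

print(operator_may_intervene(AutomationLevel.MONITORED_AI))  # True
```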
It was specifically noted that only Level 2 of SA was affected by automation intensity: the participants were not able to comprehend the situation at higher levels of automation (ibid.). Workload was also not decreased by the higher levels of automation, although it had been assumed that the assistance of the system would have this effect. These results were explained by the operators’ lack of practice in performing tasks manually under automated conditions. Regarding the design of such systems, it is recommended that AI should provide possibilities for intervention and feedback (ibid.). In this way, manual abilities can be protected and practiced, and Level 2 can be addressed by responses from the system. People should understand “what is happening in order to be able to act in an appropriate manner” (Endsley, 1996, p. 1).
Kaber, Onal & Endsley (2000) focus on human-controlled levels of automation by considering human and technological qualities. In their study, based on the levels of automation from Endsley & Kiris (1995), the authors point out that the number of collisions increases under manual or full automation. According to the authors, this result supports allocating the level of control between human and automation (cf. consensual AI: Endsley & Kiris, 1995).
According to Endsley & Kiris (1995), automation does not always result in comparable problems. They pointed out that “automation may improve situation awareness by reducing excessive workload” (Endsley & Kiris, 1995, p. 384).
Further work was done by Ma & Kaber (2005). They used a simulation of a virtual driving environment. Control was given to the operator via physical gas and brake pedals as well as a physical steering wheel. The participants were asked to drive a virtual car and to perform tasks involving changes in speed and lateral position. Some participants were required to talk on their cell phones; in this condition the experimenter asked the participants 10 questions. SA was measured using the SAGAT. Ma & Kaber (2005) report that the usage of cell phones has no significant effect on Levels 1 and 2, but does on Level 3 of SA: the participants showed poorer performance in projecting the current status into the future. The authors assume that the first level of perception may not require as much attentional demand as the third level. Furthermore, the authors point out that SA in the automated condition was higher in both conditions, with and without cell phone, in comparison to the non-automated condition. According to the authors, these results support the usage of automation in cars for generic driving situations, especially in order to reduce workload.
[...]