Associate Professor of Systems Engineering
Director of the
Formal Human Systems Laboratory
University of Virginia
Information on this page is out of date. It will be updated soon ...
Sponsor: Air Force Research Lab / Universal Technology Corporation
Prime Contract FA8650-16-C-2642 / Subcontract 18-S8401-13-C1
The level of measurement of a psychometric scale has profound implications for what mathematics can be meaningfully applied to it. In this project, we introduce a new method for determining the level of measurement of psychological concepts captured by psychometric rating scales. We will use this method to determine the level of measurement of three common human-automation interaction psychometric scales: human trust in automation, mental workload, and situation awareness. Results will be used to draw conclusions about the validity of the analyzed scales, individual differences in the levels of psychometric ratings, and the level of measurement of psychometric ratings in general.
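As an illustrative sketch (not the project's method), the classic argument for why level of measurement matters can be shown in a few lines: for ordinal data, any order-preserving relabeling of the scale points is equally valid, so a statistic is only meaningful if it survives such relabelings. The ratings and relabeling below are hypothetical.

```python
from statistics import mean, median

ratings = [1, 2, 2, 3, 5]                    # hypothetical 5-point trust ratings
relabel = {1: 1, 2: 2, 3: 4, 4: 8, 5: 16}    # an equally valid order-preserving relabeling
transformed = [relabel[r] for r in ratings]

# The mean changes under the relabeling, so it is not meaningful for ordinal data...
print(mean(ratings), mean(transformed))      # 2.6 vs 5.0
# ...while the median simply maps through the relabeling, so it is meaningful.
print(median(ratings), median(transformed))  # 2 vs 2 (= relabel[2])
```

If a scale turns out to be merely ordinal, analyses built on means and standard deviations of its ratings rest on shaky ground, which is why establishing the level of measurement empirically matters.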
Sponsor: Agency for Healthcare Research and Quality
Contract: R18HS024679
The failure of humans to respond to auditory medical alarms has resulted in numerous patient injuries and deaths and is thus a major safety concern. This project will use a novel computational technique to identify how the auditory medical alarms described in the IEC 60601-1-8 international standard can interact in ways that render them inaudible, and will construct tools engineers can use to design alarms that avoid this problem. By ensuring that standard-compliant medical alarms are perceivable, humans will be more likely to respond to them and thus prevent patient death and injury.
Sponsor: Army Research Office/Army Research Laboratory
Failures in complex systems often occur because of unexpected erroneous human interactions. Erroneous behavior classification systems (or taxonomies) are critical for understanding why and how erroneous human behaviors occur and for predicting how they can cause system failures. In this work, Dr. Bolton is developing a novel erroneous behavior taxonomy based on where, why, and how human behavior deviates from a normative plan of actions. He plans to show that this taxonomy will subsume the leading existing erroneous behavior classification systems and resolve incompatibilities between them. Further, he will use this taxonomy as the basis for an erroneous behavior generation system usable in computational safety analyses, helping engineers harden systems against erroneous human behavior. This work will give engineers unprecedented tools for understanding erroneous human behavior and predicting how it could contribute to system failure. It will thus have applications in safety-critical systems in medicine, aviation, and the military.
Sponsor: NASA Ames Research Center / Georgia Institute of Technology
PI: Dr. Amy Pritchett (Georgia Tech)
Institutional PI: Dr. Matthew L. Bolton
This project is investigating how formal verification with model checking can be used to do sensitivity analyses of agent-based simulation scenarios. This method is being used to evaluate human performance for different allocations of authority and autonomy in NextGen aviation concepts and systems.
Sponsor: National Science Foundation
The concurrent nature of human-computer interaction can result in situations unanticipated by designers where usability is not properly maintained or human operators may not be able to complete the task goals the system was designed to support. Mathematical tools and techniques called formal methods exist for modeling and providing proof-based evaluations of different elements of HCI, including the human-computer interface, the human operator's task analytic behavior, and usability. Unfortunately, these approaches require engineers to create formal models of the interface designs, which is not standard practice and is prone to modeling error. This work strives to address this problem by showing that a machine learning approach can be used to automatically generate formal human-computer interface designs guaranteed to always adhere to usability properties and support human operator tasks.
Alarms are regularly used in medicine to alert individuals to conditions that require attention. In particular, telemetry monitoring describes a hospital process where a technician must respond to alarms that trigger in response to the monitored rhythms, heart rates, and oxygen saturations of remote patients. In this environment, the timing of a technician's response is critical, as seconds can mean the difference between life and death. Clearly, any situation that prevents a technician from perceiving, and thus properly responding to, an alarm could have extremely adverse effects on a patient. Unfortunately, in cases where multiple alarms can alert concurrently, alarms can interact in ways that render one or more of them imperceptible. This is known as auditory masking, a condition that can occur based on physical and temporal relationships between sounds. Detecting such problems can be difficult due to the highly concurrent nature of alarms. Model checking is an analysis tool that is specifically designed to find problems in models of concurrent systems using a form of automated theorem proving. This research will develop tools and techniques for using model checking to detect auditory masking conditions in alarms used in medical telemetry monitoring systems and investigate solutions to discovered problems.
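To give a flavor of the kind of condition the project's model-checking analyses search for exhaustively, the sketch below uses a crude, hypothetical heuristic: flag pairs of alarm tones that overlap in time and fall within a shared frequency band, a rough stand-in for simultaneous masking. The tone data, names, and 100 Hz bandwidth are all illustrative assumptions, not from the project or a real psychoacoustic model.

```python
from dataclasses import dataclass

@dataclass
class Tone:
    name: str
    start_ms: int
    end_ms: int
    freq_hz: float

def overlaps(a: Tone, b: Tone) -> bool:
    # True when the two tones sound at the same time.
    return a.start_ms < b.end_ms and b.start_ms < a.end_ms

def same_band(a: Tone, b: Tone, bandwidth_hz: float = 100.0) -> bool:
    # Crude stand-in for a psychoacoustic critical-band model.
    return abs(a.freq_hz - b.freq_hz) <= bandwidth_hz

def masking_risk(tones):
    # Report every pair of tones that overlap in time and frequency.
    return [(a.name, b.name)
            for i, a in enumerate(tones) for b in tones[i + 1:]
            if overlaps(a, b) and same_band(a, b)]

alarms = [Tone("cardiac", 0, 200, 440.0),
          Tone("oxygen", 150, 350, 500.0),
          Tone("infusion", 400, 600, 440.0)]
print(masking_risk(alarms))  # [('cardiac', 'oxygen')]
```

The hard part, and the point of using a model checker, is that masking depends on *which* alarms can fire concurrently, which requires exploring every reachable interleaving of alarm triggers rather than a single fixed schedule like the one above.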
Sponsor: European Space Agency
PI: Dr. Francisco Barreiro (IXION Industry and Aerospace)
Institutional PI: Dr. Matthew L. Bolton
Engineers have developed a number of tools and methods to help eliminate design errors and minimize the risk of failure in safety-critical systems. However, breakdowns in complex systems can still occur, often as a result of unanticipated interactions between system components. In particular, the increasing use of automation can result in unexpected, and potentially unsafe, human-automation interactions. Thus, for safety-critical systems like those used in space and aerospace operations, there is a need for methods capable of providing safety guarantees for a system that consider both anticipated and unanticipated human-automation interactions. Formal verification, a method for mathematically proving whether a model of a system does or does not adhere to system safety properties, is capable of providing such guarantees. This project will develop novel methods for using formal verification to evaluate the safety of systems that rely on human-automation interaction. In particular, it will develop a technique for generating formally verifiable system safety properties from task analytic models of human behavior. The work will show how a model checker can be used to automatically verify (formally prove) whether a model of the target system (which will contain automation behavior, human behavior, and their interaction) does or does not adhere to these properties. The developed method will be applied to two aerospace case studies.
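A minimal sketch of the explicit-state idea behind model checking may help here: exhaustively explore the joint human/automation state space and check a safety invariant in every reachable state, returning a counterexample if one is found. The tiny two-variable system, its transitions, and the invariant below are hypothetical illustrations, not the project's models.

```python
from collections import deque

# States: (autopilot_engaged, pilot_hands_on). Transitions are illustrative.
def successors(state):
    ap, hands = state
    yield (not ap, hands)      # pilot toggles the autopilot
    yield (ap, not hands)      # pilot takes or releases manual control

def safe(state):
    ap, hands = state
    return ap or hands         # invariant: someone is always flying

def check(initial):
    # Breadth-first exploration of every reachable state.
    seen, frontier = {initial}, deque([initial])
    while frontier:
        s = frontier.popleft()
        if not safe(s):
            return s           # counterexample: a reachable unsafe state
        for n in successors(s):
            if n not in seen:
                seen.add(n)
                frontier.append(n)
    return None                # property holds in all reachable states

print(check((True, False)))   # (False, False) is reachable: violation found
```

Real model checkers verify richer temporal-logic properties over vastly larger state spaces, but the exhaustiveness shown here is what lets them provide guarantees, rather than the partial coverage of testing or simulation.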
Sponsor: NASA Ames Research Center
PI: Dr. Ellen Bass (Drexel University)
Institutional PI: Dr. Matthew L. Bolton
NextGen systems are envisioned to be composed of human and automated agents interacting with dynamic flexibility in the allocation of authority and autonomy. The analysis of such concepts of operation requires methods for verifying and validating that the range of roles and responsibilities potentially assignable to the human and automated agents does not lead to unsafe situations. Agent-based simulation has shown promise toward modeling such complexity but requires a tradeoff between fidelity and the number of simulation runs that can be explored in a reasonable amount of time. Model checking techniques can verify that the modeled system meets safety properties, but they require that the component models be of sufficiently limited scope to run to completion. By analyzing simulation traces, model checking can also help ensure that the simulation's design meets the intended analysis goals. Thus, leveraging these types of analysis methods can help verify operational concepts addressing the allocation of authority and autonomy.