Who We Are

ASIST performers have developed artificial teammates, or agents, to collaborate with human teams in the virtual testbeds of Studies 3 and 4. Additionally, analytic components (ACs) were developed to capture and quantify individual and team processes and outcomes.

Testbed Development Team

Aptima and Arizona State University

The team of Aptima and the Arizona State University Center for Human, Artificial Intelligence, and Robot Teaming conducted experimentation and evaluation of performers’ theories and AI, created a distributed testbed on which to run those evaluations, and coordinated activities across the technical and scientific teams. There were two major versions of the testbed, both of which used Minecraft as a synthetic task environment to collect data on human interaction with ASI. The first version evolved through Studies 1-3 to support virtual laboratory studies with 3-person teams. This version established the architecture for integration of AC and ASI components, experimental controls, and data collection. For Study 4, the team significantly extended the capabilities of the ASIST testbed to support data collection at scale through parallel online data collection, taking inspiration from online gaming services and citizen science efforts.

ASI Agent Development Teams

Carnegie Mellon University Robotics Institute

The CMU-RI Team (researchers at Carnegie Mellon University and the University of Pittsburgh) is developing agents to assist human teaming. These agents incorporate modular, deep-learning-based Theory of Mind models capable of inferring intent and first- and second-order player beliefs, along with predictors of various team attributes, in order to generate targeted, contextually relevant interventions.

DOLL

The DOLL Team (DOLL and MIT CSAIL) is developing RITA, an artificial social intelligence (ASI) agent that uses Analytic Components developed by other ASIST performers, State Estimation components to construct a Theory of Mind (ToM) of each human member of a team, and Story Understanding to frame those beliefs as grounded stories. RITA then generates interventions that are shared with team members, resulting in improved team behaviors.

Analytic Component Development Teams

Carnegie Mellon University

CMU is developing a socio-cognitive architecture for collective intelligence to enable ASI agents to diagnose the quality of teamwork and make interventions. The resulting metrics include measures of individual characteristics and abilities and the quality of collective attention, memory, and reasoning processes in teams as they collaborate.

Cornell University

The Cornell team studies how team trust relates to the goals of individual agents in the team. The Cornell Trust Analytic Component (AC) tracks each team member’s goals and subgoals and how well they align over time. It also measures how and when team members comply with each other’s requests and with requests from the artificial agent. Compliance with others (humans or AI) is taken to be an instance of trusting behavior. The Cornell AC is based on a model in which team members’ trust in other humans and artificial agents increases the more they believe their goals are aligned.
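As a rough illustration of the model described above, trust in a teammate can be tracked as a value that moves toward the current goal-alignment estimate, with compliance treated as observable evidence of trusting behavior. This is a minimal sketch under assumed names and an assumed update rule, not the actual Cornell AC implementation.

```python
# Hypothetical sketch: trust grows with perceived goal alignment, and
# compliance with a request counts as evidence of trusting behavior.
# The field names, scales, and update rule are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class TrustTracker:
    learning_rate: float = 0.2
    trust: dict = field(default_factory=dict)  # teammate id -> trust in [0, 1]

    def observe_alignment(self, teammate: str, alignment: float) -> None:
        """Move trust toward the latest goal-alignment estimate (0..1)."""
        prev = self.trust.get(teammate, 0.5)  # start from a neutral prior
        self.trust[teammate] = prev + self.learning_rate * (alignment - prev)

    def observe_compliance(self, teammate: str, complied: bool) -> None:
        """Treat compliance (or refusal) as a strong alignment signal."""
        self.observe_alignment(teammate, 1.0 if complied else 0.0)

tracker = TrustTracker()
tracker.observe_alignment("agent", 0.9)    # goals appear well aligned
tracker.observe_compliance("agent", True)  # teammate complied with a request
```

The same update applies whether the teammate is a human or the artificial agent, matching the AC's symmetric treatment of compliance with humans and AI.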

Gallup

The Gallup Team (Gallup, ROSAN International, and Georgia Tech Research Institute) used machine-assisted literature review tools to derive domain-agnostic models of team effectiveness. These models were validated through an expert crowdsourced analytical hierarchy process, followed by experiments with human subjects. The final confirmed models were instantiated into Analytical Agents operating within the ASIST testbed to provide real-time indicators of team effectiveness such as emergent leadership, team planning, and use of transactive memory systems.

Institute for Human Machine Cognition

IHMC and CMU are partnered to develop a computational theory of human-machine teamwork. We use Interdependence Analysis to inform cognitive agents, built in the ACT-R cognitive architecture, whose mental models of teamwork can be used to infer, assess, and predict team performance. We are currently developing measures to quantify team processes and team performance.

University of Central Florida

UCF is applying interdisciplinary approaches that draw on psychology as well as cognitive and team science to predict and improve team processes and performance. Our approach integrates theories and methods for studying complex collaborative cognition to better examine the factors contributing to successful team processes, and how these are altered by, and can be improved with, artificial social intelligence (ASI). In addition to theory development in team social cognition, UCF’s empirical work develops team member profile models (“Player Profiles”). These models are based on sparse data capturing individual and team traits, transformed into machine-readable formats that ASI can use to improve team interactions, coordination, and overall performance.
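A Player Profile in the sense described above can be pictured as a small record of trait measurements serialized into a machine-readable form an ASI could consume. The field names and scales below are illustrative assumptions, not UCF’s actual schema.

```python
# Hypothetical sketch of a "Player Profile": sparse individual-trait
# measurements packaged into a machine-readable record for an ASI.
# All fields and value ranges are assumptions for illustration.

import json
from dataclasses import dataclass, asdict

@dataclass
class PlayerProfile:
    player_id: str
    spatial_ability: float     # normalized 0..1, e.g. from a pre-task battery
    taskwork_knowledge: float  # normalized 0..1
    preferred_role: str        # e.g. "medic", "engineer", "transporter"

profile = PlayerProfile("p01", spatial_ability=0.7,
                        taskwork_knowledge=0.4, preferred_role="medic")
record = json.dumps(asdict(profile))  # machine-readable form for the ASI
```

Serializing to a flat key-value record keeps the profile easy for downstream agents to parse, even when only a few trait measurements are available.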

Additional Study 3 ASI Agents

Charles River Analytics

The Charles River Analytics (CRA) model represents Urban Search and Rescue (USAR) strategies that can be compared at the individual and team level. CRA distributes tasks across a team by helping the team align individual skills with tasks. Interventions are focused on positional communication strategy, coordination strategy, resource use, and motivation.

University of Arizona

UAZ is implementing a Theory of Mind-based Cognitive Architecture for Teams (ToMCAT) that, uniquely, combines the abilities to infer the internal states of other agents, solve problems collaboratively with them, and communicate with them in a socially-aware manner. ToMCAT software components and information are available at https://ml4ai.github.io/tomcat

University of Southern California

The University of Southern California (USC) model represents players, the team, and team processes as Partially Observable Markov Decision Processes (POMDPs) that incorporate Analytic Component (AC) input as sensors about the state of the team and interventions as actions that affect the team. Our agent uses behavioral data to inform these POMDP models through Inverse Reinforcement Learning (IRL) and then uses these POMDPs recursively to form a Theory of Mind about the team to reason about the expected utility of alternate interventions.
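The decision step described above can be sketched as follows: given a belief (a probability distribution over hidden team states, informed by AC sensors), score each candidate intervention by its expected utility and select the best. The states, actions, and utility values here are illustrative assumptions; the actual USC agent learns its POMDP models from behavioral data via IRL rather than using hand-set numbers.

```python
# Hypothetical sketch of expected-utility intervention selection over a
# belief state. States, actions, and utilities are illustrative only.

belief = {"coordinated": 0.3, "fragmented": 0.7}  # P(team state | AC sensors)

# Assumed utility of each intervention in each hidden team state.
utility = {
    "no_intervention":   {"coordinated": 1.0, "fragmented": -1.0},
    "prompt_replanning": {"coordinated": 0.2, "fragmented":  0.8},
}

def expected_utility(action: str) -> float:
    """Expected utility of an action under the current belief state."""
    return sum(p * utility[action][s] for s, p in belief.items())

# Choose the intervention with the highest expected utility.
best = max(utility, key=expected_utility)
```

In the full POMDP treatment this one-step comparison would extend over sequences of actions and observations, but the core reasoning, weighing each intervention's value by the probability of each hidden team state, is the same.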