
MIT's new AI copilot can monitor human pilot performance




Concerns regarding air safety have increasingly come to the forefront in the last few years, mainly due to multiple incidents of air crashes and disappearances. Contemporary pilots struggle to keep up with the deluge of information coming from many displays, especially in life-or-death situations. 


Researchers have now leveraged the power of AI systems to act as a safety net and avert such incidents, blending human intuition with machine precision. Christened "Air-Guardian," the program, developed by a team at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), is a "proactive copilot; a partnership between human and machine, rooted in understanding attention," said a media statement.


The system works on the principle of having two copilots on board: a human and a computer. Both have "hands" on the controls, but their priorities can diverge. If both are focused on the same thing, the human steers. But if the human is distracted or misses something, the machine rapidly takes control.
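This arbitration can be sketched in a few lines. The sketch below is illustrative only: it assumes attention is represented as 2-D heat maps and compares them with a simple cosine-similarity score, whereas the paper's actual cooperation layer is optimization-based; the `threshold` value is a made-up parameter.

```python
import numpy as np

def attention_overlap(human_map, machine_map):
    """Cosine similarity between two flattened attention heat maps."""
    h = human_map.ravel() / (np.linalg.norm(human_map) + 1e-8)
    m = machine_map.ravel() / (np.linalg.norm(machine_map) + 1e-8)
    return float(h @ m)

def arbitrate(human_map, machine_map, threshold=0.7):
    """Decide which agent controls the aircraft for this frame.

    If human and machine attend to the same region, the human steers;
    otherwise the guardian intervenes. The threshold is illustrative.
    """
    if attention_overlap(human_map, machine_map) >= threshold:
        return "human"
    return "guardian"

# Both focused on the same region -> human keeps control.
focus = np.zeros((8, 8)); focus[2:4, 2:4] = 1.0
print(arbitrate(focus, focus))       # human

# Human looking elsewhere -> guardian takes over.
elsewhere = np.zeros((8, 8)); elsewhere[6:8, 6:8] = 1.0
print(arbitrate(focus, elsewhere))   # guardian
```

The key design point, per frame, is that control is a continuous negotiation rather than a one-time handoff: the comparison runs on every new image, so authority can pass back to the pilot as soon as attention re-aligns.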

A second pair of eyes 

But how precisely does the system gauge attention? For the human, it uses eye tracking; for the neural network, it relies on "saliency maps," which highlight the regions of an image where the network's attention is focused.

According to researchers, the maps act as visual guides that emphasize important areas of a picture, making it easier to understand and decode how complex algorithms behave. Instead of just acting when a safety violation occurs, as is the case with conventional autopilot systems, Air-Guardian detects early indications of possible threats using these attention markers. 

The system examines incoming images for salient information using an optimization-based cooperative layer that combines human and machine visual attention, together with liquid closed-form continuous-time (CfC) neural networks, noted for their ability to capture cause-and-effect relationships. The VisualBackProp algorithm complements this by locating the network's focus points within an image, ensuring a comprehensive picture of its attention maps.

In field tests, the pilot and the algorithm made decisions based on the same raw images when navigating to a target waypoint. The team reports that the system raised the success rate of reaching target locations while lowering the risk level of flights. Air-Guardian's success was measured by the cumulative rewards earned during flight and the shorter path taken to the waypoint.

Complementary in nature

By combining a visual attention metric with its control system, Air-Guardian can detect risks and intervene in a way that remains interpretable to human pilots. "This showcases a great example of how AI can be used to work with a human, lowering the barrier for achieving trust by using natural communication mechanisms between the human and the AI system," said Stephanie Gil, assistant professor of computer science at Harvard University, in a statement.

To make such a system accessible to pilots, the team still needs to refine the human-machine interface for widespread use. According to early feedback, a bar-shaped signal would be a more intuitive way to indicate when the guardian system takes over.

This system can have a wider use case scenario that goes beyond aviation. In the future, automobiles, drones, and a more comprehensive range of robotics may all employ such cooperative control systems.

The details of the research have been published on the preprint server arXiv.

Abstract

The cooperation of a human pilot with an autonomous agent during flight control realizes parallel autonomy. We propose an air guardian system that facilitates cooperation between a pilot with eye tracking and a parallel end-to-end neural control system. Our vision-based air-guardian system combines a causal continuous-depth neural network model with a cooperation layer to enable parallel autonomy between a pilot and a control system based on perceived differences in their attention profiles. The attention profiles for neural networks are obtained by computing the networks' saliency maps (feature importance) through the VisualBackProp algorithm. In contrast, the attention profiles for humans are either obtained by eye tracking of human pilots or saliency maps of networks trained to imitate human pilots. When the attention profile of the pilot and guardian agents align, the pilot makes control decisions. Otherwise, the air guardian makes interventions and takes over the control of the aircraft. We show that our attention-based air-guardian system can balance the trade-off between its level of involvement in the flight and the pilot's expertise and attention. The guardian system is particularly effective in situations where the pilot is distracted due to information overload. We demonstrate the effectiveness of our method for navigating flight scenarios in simulation with a fixed-wing aircraft and on hardware with a quadrotor platform.




