Humans race drones faster than neural networks trained for end-to-end autonomous flight. This may be related to the ability of human pilots to select task-relevant visual information effectively. This work investigates whether neural networks capable of imitating human eye gaze behavior and attention can improve neural networks’ performance for the challenging task of vision-based autonomous drone racing. We hypothesize that gaze-based attention prediction can be an efficient mechanism for visual information selection and decision making in a simulator-based drone racing task. We test this hypothesis using eye gaze and flight trajectory data from 18 human drone pilots to train a visual attention prediction model. We then use this visual attention prediction model to train an end-to-end controller for vision-based autonomous drone racing using imitation learning. We compare the drone racing performance of the attention-prediction controller to that of controllers using raw image inputs and image-based abstractions (i.e., feature tracks). Comparing success rates for completing a challenging race track by autonomous flight, our results show that the attention-prediction based controller (88% success rate) outperforms the RGB-image (61% success rate) and feature-tracks (55% success rate) controller baselines. Furthermore, the visual attention-prediction and feature-track based models showed better generalization performance than the image-based models when evaluated on held-out reference trajectories. Our results demonstrate that human visual attention prediction improves the performance of autonomous vision-based drone racing agents and provides an essential step towards vision-based, fast, and agile autonomous flight that can eventually reach and even exceed human performance.

Citation: Pfeiffer C, Wengeler S, Loquercio A, Scaramuzza D (2022) Visual attention prediction improves performance of autonomous drone racing agents.

Received: December 2021; Accepted: February 2022; Published: March 1, 2022

Copyright: © 2022 Pfeiffer et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: The dataset used in this study is available in an Open Science Framework repository (Dataset DOI: 10.17605/OSF.IO/UABX4).

Funding: This work was supported by the Ernst Göhner Foundation and the University of Zurich Alumni Fonds zur Förderung des Akademischen Nachwuchses (FAN Fellowship), by the National Centre of Competence in Research (NCCR) Robotics through the Swiss National Science Foundation (SNSF), by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No. 871479 (AERIAL-CORE), and by the European Research Council (ERC). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

First-person view (FPV) drone racing is an increasingly popular televised sport in which human pilots compete to complete challenging obstacle courses in a minimum time. The visual-motor coordination skills required to achieve top-level performance in drone racing are built on many years of repeated practice and flight experience in drone racing simulators and real-world races. Using only visual feedback from an FPV camera attached to the teleoperated unmanned aerial vehicle, human pilots are able to plan and execute appropriate control actions to navigate the drone along challenging race tracks.
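To make the pipeline described in the abstract concrete, the following is a minimal sketch of a gaze-based attention predictor: an encoder-decoder convolutional network that maps an FPV frame to a normalized attention heatmap and is trained against human gaze-fixation maps. The architecture, input resolution, and KL-divergence loss below are illustrative assumptions, not the authors’ exact model.

```python
# Hypothetical sketch of a visual attention predictor trained on eye gaze data.
# Architecture and loss are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, rgb):
        logits = self.decoder(self.encoder(rgb))  # (B, 1, H, W)
        b, _, h, w = logits.shape
        # Softmax over all pixels so the heatmap is a probability distribution.
        return F.softmax(logits.view(b, -1), dim=1).view(b, 1, h, w)

def gaze_kl_loss(pred, target):
    """KL divergence between the predicted heatmap and a gaze-fixation heatmap."""
    eps = 1e-8
    target = target / (target.sum(dim=(2, 3), keepdim=True) + eps)
    return (target * ((target + eps).log() - (pred + eps).log())).sum(dim=(2, 3)).mean()

# One illustrative training step on random stand-in data.
model = AttentionPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
frames = torch.rand(4, 3, 96, 128)     # FPV frames (stand-in data)
gaze_maps = torch.rand(4, 1, 96, 128)  # gaze heatmaps (stand-in data)
loss = gaze_kl_loss(model(frames), gaze_maps)
opt.zero_grad(); loss.backward(); opt.step()
```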
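The predicted attention map can then feed the imitation-learned racing controller. Below is a hedged sketch of behavior cloning, assuming the attention heatmap is stacked with the RGB frame and a small network regresses a four-dimensional command (collective thrust and body rates) against recorded human-pilot actions; the paper’s exact input encoding and command parameterization may differ.

```python
# Hypothetical behavior-cloning controller conditioned on predicted attention.
import torch
import torch.nn as nn

class AttentionController(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 32, 5, stride=2, padding=2), nn.ReLU(),  # RGB + attention channel
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 4)  # [thrust, roll rate, pitch rate, yaw rate]

    def forward(self, rgb, attention):
        x = torch.cat([rgb, attention], dim=1)
        return self.head(self.backbone(x))

# One illustrative imitation-learning step on stand-in data.
controller = AttentionController()
opt = torch.optim.Adam(controller.parameters(), lr=1e-4)
rgb = torch.rand(4, 3, 96, 128)
attn = torch.rand(4, 1, 96, 128)   # output of the attention predictor
expert_cmd = torch.rand(4, 4)      # human-pilot commands (stand-in data)
loss = nn.functional.mse_loss(controller(rgb, attn), expert_cmd)
opt.zero_grad(); loss.backward(); opt.step()
```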
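Finally, the success rates quoted in the abstract (88%, 61%, 55%) are fractions of autonomous runs that complete the full race track. A minimal sketch of such an evaluation, with a hypothetical `fly_track` rollout function standing in for the simulator loop:

```python
# Success-rate evaluation over repeated autonomous rollouts.
import random

def success_rate(fly_track, n_runs=100):
    """Fraction of runs in which the drone completes the whole track."""
    completed = sum(1 for _ in range(n_runs) if fly_track())
    return completed / n_runs

# Stand-in rollout that succeeds 88% of the time, mimicking the
# attention-prediction controller's reported rate.
print(success_rate(lambda: random.random() < 0.88))
```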