Methods for implementing quadrotor autonomy based on hybrid learning methods


Detailed Description

Saved in:
Bibliographic Details
Date: 2026
Main Authors: Ramyk, I.P., Linder, Ya.M.
Format: Article
Language: Ukrainian
Published: PROBLEMS IN PROGRAMMING 2026
Online Access: https://pp.isofts.kiev.ua/index.php/ojs1/article/view/876
Journal title: Problems in programming

Institution

Problems in programming
Description
Summary: This paper reviews and analyzes methods for achieving quadcopter autonomy. It shows the disadvantages and limitations of the classical "Perception-Planning-Control" pipeline. A fundamental limitation of this approach is the inability of mathematical models to take into account all complex effects of an unpredictable environment. By contrast, the application of machine learning algorithms enables the implementation of control agents based on experience of interactions with real or simulated environments, significantly improving system adaptability to non-standard conditions. The core of this work compares machine learning methods applied to the quadcopter autonomy task. It provides a detailed overview of reinforcement learning. It is shown that model-free algorithms are able to outperform professional human pilots in specific tasks; however, they require significant amounts of data and training time. In turn, model-based reinforcement learning improves training efficiency: during training, the agent learns a world model that can be used to predict environment dynamics. The article also explores imitation learning and derived methods. An effective approach is to sequentially apply imitation learning and then reinforcement learning, which combines the strengths of both approaches. The paper reviews works relying on physics-informed methods using differentiable simulators, which are used to calculate loss function gradients with respect to control parameters. All discussed methods are analyzed regarding data efficiency, computational resource requirements, and fundamental limitations. The analysis results can be used to select quadcopter control architectures based on available computational resources and specific task requirements. Problems in programming 2025; 4: 53-62
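To illustrate the differentiable-simulator idea mentioned in the abstract, the following is a minimal hypothetical sketch (not taken from the reviewed paper): a 1-D "altitude" model of a quadrotor (double integrator with gravity) whose terminal-position loss is differentiated with respect to a single thrust parameter. Here the gradient is written out analytically, standing in for what an automatic-differentiation framework would compute through the full simulator.

```python
def simulate(thrust, steps=50, dt=0.02, g=9.81):
    """Forward dynamics of a 1-D altitude model (double integrator
    with gravity); returns the final altitude after `steps` steps."""
    z, v = 0.0, 0.0
    for _ in range(steps):
        v += (thrust - g) * dt   # acceleration from net force
        z += v * dt
    return z

def grad_loss(thrust, target=1.0, steps=50, dt=0.02):
    """Gradient of the loss (z_T - target)^2 with respect to thrust.
    The dynamics are linear in thrust, so dz_T/du = dt^2 * N(N+1)/2;
    a differentiable simulator would produce this automatically."""
    dz_du = dt * dt * steps * (steps + 1) / 2
    return 2.0 * (simulate(thrust, steps, dt) - target) * dz_du

# Gradient descent on the control parameter through the simulator:
# the loss gradient drives the thrust toward the value that reaches
# the 1 m target altitude.
u = 9.81                      # start at hover thrust
for _ in range(200):
    u -= 1.0 * grad_loss(u)
```

In the works surveyed, the same pattern is applied to full quadrotor dynamics and sequences of control inputs, with the simulator's gradients replacing the hand-derived expression above.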