Methods for implementing quadrotor autonomy based on hybrid learning methods
Saved in:
| Date: | 2026 |
|---|---|
| Authors: | , |
| Format: | Article |
| Language: | Ukrainian |
| Published in: | PROBLEMS IN PROGRAMMING, 2026 |
| Online access: | https://pp.isofts.kiev.ua/index.php/ojs1/article/view/876 |
| Journal title: | Problems in programming |
| Abstract: | This paper reviews and analyzes methods for achieving quadcopter autonomy. It shows the disadvantages and limitations of the classical "Perception-Planning-Control" pipeline. A fundamental limitation of this approach is the inability of mathematical models to account for all the complex effects of an unpredictable environment. In contrast, machine learning algorithms enable the implementation of control agents based on experience of interactions with real or simulated environments, significantly improving system adaptability to non-standard conditions. The core of this work compares machine learning methods applied to the quadcopter autonomy task. It provides a detailed overview of reinforcement learning. It is shown that model-free algorithms can outperform professional human pilots in specific tasks; however, they require significant amounts of data and training time. In contrast, model-based reinforcement learning improves training efficiency: during training, the agent learns a world model that can be used to predict environment dynamics. The article also explores imitation learning and derived methods. An effective approach is to apply imitation learning and reinforcement learning sequentially, combining the strengths of both. The paper reviews works relying on physics-informed methods that use differentiable simulators to calculate loss-function gradients with respect to control parameters. All discussed methods are analyzed with regard to data efficiency, computational resource requirements, and fundamental limitations. The analysis results can be used to select quadcopter control architectures based on available computational resources and specific task requirements. Problems in programming 2025; 4: 53-62 |
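The idea behind the differentiable-simulator approach mentioned in the abstract can be illustrated with a minimal sketch. The model below is a hypothetical one-dimensional point-mass altitude model, not the simulator from any of the reviewed works: because every step of the rollout is differentiable, `jax.grad` can propagate the tracking loss back through the entire trajectory to the per-step control inputs.

```python
# Hypothetical sketch: gradients of a tracking loss with respect to control
# parameters, computed through a differentiable simulator (JAX).
import jax
import jax.numpy as jnp

DT, STEPS = 0.05, 40  # assumed integration step [s] and rollout horizon

def simulate(thrusts, state):
    """Differentiable 1-D point-mass altitude model (unit mass)."""
    def step(carry, u):
        z, vz = carry
        az = u - 9.81          # net vertical acceleration: thrust minus gravity
        vz = vz + az * DT      # explicit Euler integration
        z = z + vz * DT
        return (z, vz), z
    _, zs = jax.lax.scan(step, state, thrusts)
    return zs                  # altitude at every step of the rollout

def loss(thrusts):
    zs = simulate(thrusts, (0.0, 0.0))     # start at rest on the ground
    return jnp.mean((zs - 1.0) ** 2)       # track a 1 m hover altitude

thrusts = jnp.full((STEPS,), 9.81)          # initial guess: hover thrust
grads = jax.grad(loss)(thrusts)             # backprop through the whole rollout
thrusts_new = thrusts - 0.1 * grads         # one gradient step on the controls
```

In a model-free setting the same update would require many sampled rollouts to estimate; here a single backward pass yields exact gradients, which is the data-efficiency argument made for physics-informed methods.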