Multi-Level Architecture of an Automatic UAV Control System for Search Missions Based on Video Analysis and Metal Detection

Bibliographic Details
Date: 2026
Authors: Роботько, С.П., Топалов, А.М.
Format: Article
Language: Ukrainian
Published: Vinnytsia National Technical University, 2026
Online access: https://oeipt.vntu.edu.ua/index.php/oeipt/article/view/804
Journal: Optoelectronic Information-Power Technologies

Description
Abstract: The article presents a multi-level automatic mission control system for an unmanned aerial vehicle designed to detect hazardous items during the identification of suspicious objects. The proposed architecture combines edge–ground–cloud processing of data from the onboard video camera and metal detector with vision–language models (ChatGPT-4.1 Vision, Gemini 2.5 Flash) used for semantic verification of suspected objects. At the ground station, initial detection of hazardous items is performed using YOLOv8 and metal-detector signal analysis. Frames with intermediate confidence are then sent to the cloud for additional verification by the VLMs. Based on the combined assessment, a decision is generated regarding the presence of a hazardous item, which automatically adjusts the UAV mission via MAVLink: the drone is switched from AUTO to GUIDED mode, returns to the GPS coordinates of the suspicion, performs an additional inspection, and then resumes the mission from the saved waypoint. Experimental field tests with mock-ups of hazardous items demonstrated that combining YOLOv8, the metal detector, and the VLMs increases precision to approximately 95.7% while maintaining near-real-time performance (an effective 5 fps). The scientific novelty of the work lies in implementing a closed loop of “detection – semantic verification – automatic mission correction” for UAVs, which integrates multimodal data fusion with cloud-based AI models and reduces operator workload.
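To make the described loop more concrete, below is a minimal illustrative sketch (not taken from the article) of how the confidence-gated routing and the MAVLink mission correction could be wired together in Python. The use of pymavlink, the threshold values, the coordinates, and the helper function names are all assumptions for illustration; the article only states that detection uses YOLOv8 plus the metal detector, that intermediate-confidence frames go to a cloud VLM, and that the mission is adjusted via MAVLink (AUTO → GUIDED → return to the suspicion point → resume from the saved waypoint).

```python
# Illustrative sketch only: confidence-gated routing of YOLOv8 detections plus
# metal-detector readings, and MAVLink mission correction via pymavlink.
# Thresholds, coordinates and helper names are assumptions, not from the paper.
from pymavlink import mavutil

# Assumed thresholds: below LOW_CONF the frame is discarded, above HIGH_CONF it
# is treated as a confirmed hazard, in between it is sent to a cloud VLM.
LOW_CONF, HIGH_CONF = 0.35, 0.80

def route_detection(yolo_conf: float, metal_hit: bool) -> str:
    """Decide what to do with one frame/detection pair."""
    if yolo_conf >= HIGH_CONF and metal_hit:
        return "hazard"        # both modalities agree -> act immediately
    if yolo_conf >= LOW_CONF or metal_hit:
        return "verify_cloud"  # intermediate evidence -> ask the VLM
    return "clear"             # nothing suspicious in this frame

def correct_mission(master, lat: float, lon: float, alt: float) -> int:
    """Leave the AUTO mission and fly back to the suspicion point in GUIDED mode.

    Returns the waypoint index to resume the mission from later.
    """
    # Remember which waypoint the autopilot was heading to.
    wp = master.recv_match(type="MISSION_CURRENT", blocking=True).seq

    master.set_mode("GUIDED")  # pause the pre-planned mission

    # Command a reposition to the GPS coordinates of the suspicion
    # (lat/lon as 1e7-scaled integers, altitude relative to home).
    type_mask = 0b0000111111111000  # use the position fields only
    master.mav.set_position_target_global_int_send(
        0, master.target_system, master.target_component,
        mavutil.mavlink.MAV_FRAME_GLOBAL_RELATIVE_ALT_INT,
        type_mask,
        int(lat * 1e7), int(lon * 1e7), alt,
        0, 0, 0, 0, 0, 0, 0, 0)
    return wp

def resume_mission(master, saved_wp: int) -> None:
    """Continue the original mission from the saved waypoint."""
    master.mav.command_long_send(
        master.target_system, master.target_component,
        mavutil.mavlink.MAV_CMD_DO_SET_MISSION_CURRENT, 0,
        saved_wp, 0, 0, 0, 0, 0, 0)
    master.set_mode("AUTO")

if __name__ == "__main__":
    master = mavutil.mavlink_connection("udp:127.0.0.1:14550")  # e.g. SITL
    master.wait_heartbeat()
    if route_detection(yolo_conf=0.62, metal_hit=True) == "verify_cloud":
        # A cloud VLM verification call would go here; if it confirms the hazard:
        wp = correct_mission(master, lat=50.4501, lon=30.5234, alt=15.0)
        # ... perform the additional inspection of the suspicion point ...
        resume_mission(master, wp)
```

The three-way gate mirrors the paper's edge–ground–cloud split: only the ambiguous middle band of detections incurs the latency and cost of a cloud VLM call, which is how near-real-time throughput can be preserved while still raising precision on the hard cases.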