Assessing the robustness of pattern recognition systems to malicious interference

Bibliographic Details
Date: 2025
Authors: Omelchenko, Bohdan; Shelestov, Andrii
Format: Article
Language: Ukrainian
Published: V.M. Glushkov Institute of Cybernetics of NAS of Ukraine, 2025
Online access: https://jais.net.ua/index.php/files/article/view/530
Journal title: Problems of Control and Informatics

Description
Abstract: Neural networks have become an integral part of everyday life and are now applied in domains ranging from economics to military systems. Protecting them from malicious interference is an important prerequisite for the reliable operation of the algorithms and for their further improvement. The purpose of this study was to assess the robustness of a pattern recognition algorithm to adversarial attack methods. The paper considers a recognition algorithm based on a convolutional neural network, trained on a database of radar images of military equipment comprising images of 8 different classes of vehicles [1]. Four methods of generating adversarially modified data were then applied to the selected images: the fast gradient sign method (FGSM), the Carlini–Wagner attack, projected gradient descent (PGD), and the momentum iterative method. The modified data were added to the input data set, the robustness of the recognition system was assessed, and a comparative analysis of the four methods was carried out. Judged by recognition rate, the algorithm proved least robust to projected gradient descent, under which the image was misclassified with a probability of 92 %. If, however, we choose the method whose perturbations were least perceptible on the test image, it is the Carlini–Wagner attack: with a probability of 52 % the image was recognized incorrectly, which leads to a classification error when images are matched to classes by the maximum matching method. Since the recognition system proved vulnerable to these attacks, improved defense methods are needed that can detect distorted images and respond effectively to changes in the input data.
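
The four attacks named in the abstract are standard gradient-based adversarial methods. The paper's own code is not published, so the following is only a minimal sketch of two of them (FGSM and PGD), assuming a trained PyTorch classifier; the function names, epsilon, step size, and iteration count are illustrative choices, not the authors' settings:

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    # Fast gradient sign method: a single step of size eps along the
    # sign of the loss gradient with respect to the input image.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()

def pgd_attack(model, x, y, eps=0.03, alpha=0.007, steps=40):
    # Projected gradient descent: repeated FGSM-style steps, each followed
    # by projection back into the L-infinity ball of radius eps around x.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv

Robustness can then be estimated as the share of perturbed images that the classifier still labels correctly; by the abstract's figures, that share fell to about 8 % under PGD, the strongest of the four attacks considered.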