Using Information Features in Computer Vision for 3d Pose Estimation in Space

Bibliographic Details
Published in: Кибернетика и вычислительная техника (Cybernetics and Computer Engineering)
Date: 2017
Main Authors: Melnychuk, S.V., Gubarev, V.F., Salnikov, N.N.
Format: Article
Language: English
Published: Міжнародний науково-навчальний центр інформаційних технологій і систем НАН України та МОН України (International Research and Training Center for Information Technologies and Systems of the NAS and MES of Ukraine), 2017
Online Access: https://nasplib.isofts.kiev.ua/handle/123456789/131495
Repository: Digital Library of Periodicals of National Academy of Sciences of Ukraine
Cite this: Using Information Features in Computer Vision for 3d Pose Estimation in Space / S.V. Melnychuk, V.F. Gubarev, N.N. Salnikov // Кибернетика и вычисл. техника. — 2017. — Вип. 4 (190). — С. 33-55. — Бібліогр.: 14 назв. — англ.

Institution

Digital Library of Periodicals of National Academy of Sciences of Ukraine
Record id: nasplib_isofts_kiev_ua-123456789-131495
ISSN: 0452-9910
DOI: https://doi.org/10.15407/kvt190.04.033
UDC: 629.7.05; 681.518.3
Introduction. Autonomous rendezvous and docking is an important technological capability that enables various spacecraft missions. It requires real-time relative pose estimation, i.e. determination of the position and attitude of a target object relative to a chaser. Techniques based on optical measurement have certain advantages during the close-range phases of docking. The purpose of the paper is to create a computer vision system that estimates the position and attitude of the target relative to the chaser, to develop the design of such a system together with suitable mathematical methods, and to use a new learning-based method that can run in real time on limited computing power. Methods. A non-standard approach to the problem was used: a combination of image-processing techniques, machine learning, decision trees and piecewise-linear approximation of functions. Informative features computed from images served as the essential tool. Results. A two-stage algorithm was developed that involves training the computer vision system to recognize the attitude and position of the target under changing lighting conditions. The camera parameters were calculated to ensure a given accuracy of the solution. Conclusion. It was shown that informative features can be used to create a high-performance on-board system for estimating relative attitude and position. Implementation of the proposed algorithm makes it possible to create a competitive device for docking in space.
The problem of creating a computer vision system intended to determine the relative position and attitude of a target object of known shape from its images is considered. A method involving machine learning of the system for each individual target object was used. A review of the existing classifications of image features showed that there is a group of features that can be used effectively to determine the position and attitude of a body in space. The feasibility of the proposed method based on informative features was confirmed experimentally by simulation. The obtained results make it possible to build a computer vision system for automatic rendezvous and docking of spacecraft, including with uncooperative space-debris objects.
Subject: Интеллектуальное управление и системы (Intelligent Control and Systems)
Parallel title (Russian): Использование информационных признаков в системе компьютерного зрения космического аппарата для оценивания положения и ориентации
Parallel title (Ukrainian): Використання інформаційних ознак у системі комп’ютерного зору космічного апарата для оцінювання положення та орієнтації
Full text

ISSN 2519-2205 (Online), ISSN 0454-9910 (Print). Киб. и выч. техн. 2017. № 4 (190)

Интеллектуальное управление и системы (Intelligent Control and Systems)

DOI: https://doi.org/10.15407/kvt190.04.033
UDC 629.7.05; 681.518.3

S.V. MELNYCHUK, PhD (Engineering), Researcher of the Dynamic Systems Control Department, e-mail: sergvik@ukr.net
V.F. GUBAREV, Dr (Engineering), Professor, Corresponding Member of the NAS of Ukraine, Head of the Dynamic Systems Control Department, e-mail: v.f.gubarev@gmail.com
N.N. SALNIKOV, PhD (Engineering), Senior Researcher of the Dynamic Systems Control Department, e-mail: salnikov.nikolai@gmail.com

Space Research Institute of the National Academy of Sciences of Ukraine and State Space Agency of Ukraine, Acad. Glushkov av. 40, 4/1, 03187, Kyiv 187, Ukraine

USING INFORMATION FEATURES IN COMPUTER VISION FOR 3D POSE ESTIMATION IN SPACE

Introduction. Autonomous rendezvous and docking is an important technological capability that enables various spacecraft missions. It requires real-time relative pose estimation, i.e. determination of the position and attitude of a target object relative to a chaser. Techniques based on optical measurement have certain advantages during the close-range phases of docking. The purpose of the paper is to create a computer vision system that estimates the position and attitude of the target relative to the chaser, to develop the design of such a system together with suitable mathematical methods, and to use a new learning-based method that can run in real time on limited computing power. Methods. A non-standard approach to the problem was used: a combination of image-processing techniques, machine learning, decision trees and piecewise-linear approximation of functions. Informative features computed from images served as the essential tool. Results.
A two-stage algorithm was developed that involves training the computer vision system to recognize the attitude and position of the target under changing lighting conditions. The camera parameters were calculated to ensure a given accuracy of the solution. Conclusion. It was shown that informative features can be used to create a high-performance on-board system for estimating relative attitude and position. Implementation of the proposed algorithm makes it possible to create a competitive device for docking in space.

© S.V. MELNYCHUK, V.F. GUBAREV, N.N. SALNIKOV, 2017

Keywords: autonomous rendezvous, uncooperative pose estimation, model-based pose estimation, vision-based pose estimation, computer vision, decision tree, linear approximation, informative features, image processing, machine learning, identification, relative position and attitude estimation.

INTRODUCTION

At present, the tasks of servicing space satellites during their whole life cycle, including maintaining and changing the orbit parameters of working and dead vehicles and objects, are becoming particularly urgent [1]. The most promising are the transportation tasks. These include moving incorrectly orbiting spacecraft to their calculated orbits, orbit correction, maneuvers to avoid collisions with space debris, and others. Another current task is the removal of space debris, including from the geostationary orbit. A possible way to solve these problems is to create a service system using a special transport service spacecraft designed for docking with and moving target orbital objects. The creation of such a system requires solving a number of problems, one of which concerns the automatic control of the rendezvous and docking process.
Potential target objects with which docking will be necessary generally belong to the class of non-cooperative space objects, i.e. objects that were not designed for docking and, accordingly, are not equipped with the special elements (docking nodes, corner reflectors, etc.) used in existing docking systems. This fact significantly complicates the problem. The most important stage in the operation of the service system is the process of approach and docking. To perform maneuvering in automatic mode, it is necessary to determine the parameters of relative position and attitude with high precision on the basis of on-board measurements.

In this paper, we consider the solution of the pose determination problem using a computer vision system (CVS). The solution is based on comparing the images obtained by the on-board video camera with a known three-dimensional graphic model of the target object preloaded into on-board memory. To ensure the necessary system performance under the limited power and memory capacity of the on-board computer complex, the standard approach to this task was not used. A new method based on learning has been applied: the transition from images to informative features, which are a set of functions defined on the two-dimensional array of image pixels. The relation between the requirements for the accuracy of the pose parameters and the characteristics of the camera is also considered.

PROBLEM STATEMENT

Preliminaries. The problem of autonomous rendezvous and docking in space is considered. An active spacecraft (chaser) maneuvers and approaches a passive spacecraft (target) in automatic mode. The operation of the control system requires measuring the pose (i.e. attitude and position) of the target relative to the chaser. At long distances to the target the measurement is made by radio-wave equipment, which is not covered in this paper.
As the distance decreases, the required pose estimation accuracy increases. This forces the use of more precise measuring instruments operating in a shorter wavelength range. At the final stage of rendezvous, infrared or optical vision-based systems can be used.

The vision-based pose estimation problem has been considered in different formulations [2, 3]. This paper presents the design of an on-board computer vision system that performs high-precision pose estimation of the target under the following conditions:

- the target is an uncooperative spacecraft, i.e. it is not equipped with known markers (uncooperative pose estimation);
- a three-dimensional CAD model of the target is given (model-based pose estimation);
- a single optical sensor is used.

The CVS consists of a measuring device (a digital video camera) and a computing unit. The camera is rigidly fixed to the body of the chaser and shoots at a rate of ten frames per second. A 3D CAD model of the target is stored in the memory of the computing unit. The purpose of the CVS is to calculate the position and attitude of the target relative to the video camera from a distance of about 30 meters until docking. Assuming the camera position on the body of the chaser is known exactly, the relative pose of the target and chaser can then be calculated.

Pose estimation problem. Only the target spacecraft is located in the field of view (FOV) of the camera. It is a three-dimensional body whose shape is given and stored in the CVS memory as a CAD file. The target is illuminated by one or more light sources whose locations and characteristics are unknown.

We introduce a camera-fixed and a target-fixed coordinate system. The reference frame $O_1 x_1 y_1 z_1$ has its origin at the projection center. The vector $x_1$ coincides with the optical axis of the camera.
The vectors $y_1$ and $z_1$ are parallel to the image plane and correspond to the "up" and "right" directions on the resulting image. The reference frame $O_2 x_2 y_2 z_2$ is associated with the target; the CAD model is defined in it (Fig. 1).

Fig. 1. Camera, target and associated reference frames

The relative pose estimation consists in finding the vector $r_{21} = O_2 - O_1 = (x, y, z)^T$ and the coordinates of the basis vectors of one reference frame relative to the other. The unit vectors $e_{2j}$, $j = \overline{1,3}$, expressed in the coordinates of the frame vectors $e_{1i}$, $i = \overline{1,3}$, form the columns of the rotation matrix $T_{21} = (t_{ij}) = ((e_{2j}, e_{1i}))$, where $(e_{2j}, e_{1i})$ is the scalar product of the vectors $e_{2j}$ and $e_{1i}$. The matrix $T_{21} \in \mathbb{R}^{3 \times 3}$ is orthogonal and can be uniquely determined by a smaller number of parameters. We will use the Euler angles: pitch $\vartheta$, yaw $\psi$ and roll $\gamma$. For the "pitch-yaw-roll" sequence of elemental rotations it has the form

$$T_{21} = T_3^T(\vartheta)\, T_2^T(\psi)\, T_1^T(\gamma), \qquad (1)$$

where

$$T_1(\varphi) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\varphi & \sin\varphi \\ 0 & -\sin\varphi & \cos\varphi \end{pmatrix}, \quad T_2(\varphi) = \begin{pmatrix} \cos\varphi & 0 & -\sin\varphi \\ 0 & 1 & 0 \\ \sin\varphi & 0 & \cos\varphi \end{pmatrix}, \quad T_3(\varphi) = \begin{pmatrix} \cos\varphi & \sin\varphi & 0 \\ -\sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

So the problem solution consists of the vector $r_{21} = (x, y, z)^T$ and the Euler angles $\vartheta$, $\psi$, $\gamma$. We collect the required quantities into a pose vector $p = (x, y, z, \vartheta, \psi, \gamma)^T$ that consists of the position and attitude parameters.

Initial data. The vector $p$ must be calculated from three components of input data: a captured picture with an image of the target, a three-dimensional CAD model of the target, and a mathematical model of the camera that describes the geometric transformations performed by the optical system. The digital picture is formed on a rectangular photosensitive matrix of size $W \times H$, where $W$ and $H$ denote the width and height in pixels. The picture is mapped into RAM as a two-dimensional array, each element of which stores the brightness of one pixel.
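As a quick numerical check of Eq. (1), here is a minimal Python sketch (the function names are ours, not from the paper) that assembles $T_{21}$ from the three elemental rotations:

```python
import math

def T1(a):  # elemental rotation about the x-axis, as in Eq. (1)
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, s], [0, -s, c]]

def T2(a):  # elemental rotation about the y-axis
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, -s], [0, 1, 0], [s, 0, c]]

def T3(a):  # elemental rotation about the z-axis
    c, s = math.cos(a), math.sin(a)
    return [[c, s, 0], [-s, c, 0], [0, 0, 1]]

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotation_T21(pitch, yaw, roll):
    """T21 = T3^T(pitch) * T2^T(yaw) * T1^T(roll), Eq. (1)."""
    return matmul(matmul(transpose(T3(pitch)), transpose(T2(yaw))),
                  transpose(T1(roll)))
```

For zero angles the result is the identity matrix, and for any angles the product stays orthogonal, which is a useful sanity check on the signs in $T_1$, $T_2$, $T_3$.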
The CAD model of the target object describes the geometry of its surface as an approximation by a set of polygons, usually triangles. It can be stored in a file of any existing format. In RAM this file is expanded into two data arrays:

- an array of vertices (vertex coordinates relative to the reference frame $O_2 x_2 y_2 z_2$);
- an array of indices (specifying the order of vertices for constructing polygons).

To combine the useful information contained in the captured picture and the stored CAD model, it is necessary to know the transformation of the 3D object into a 2D image, i.e. the characteristics of the optical system, the distortions, the physical linear dimensions of the photosensitive matrix, etc. As a camera model, we will consider the perspective projection (pin-hole camera) model shown in Fig. 2.

Fig. 2. Pin-hole camera model

We will assume that the perpendicular dropped from the point $O_1$ to the image plane passes through the center of the picture. The parameters of the pin-hole model are the focal length $f$, the width $w$ and height $h$ of the sensor matrix, and its size in pixels $W$ and $H$.

Requirements for the solution. At each measurement, the target is photographed. It is assumed that the geometry of the target does not change over time and corresponds to the CAD model. The parameters of the pin-hole camera model are considered known constants. On the basis of these data the vector $p$ is estimated and passed to a control system. The scheme of CVS functioning is shown in Fig. 3.

Fig. 3. CVS scheme

Table 1.
Technical requirements

Distance range, m: 0-30
Bounds for yaw and pitch angles, deg: ±15 / ±30
Bounds for roll angle, deg: ±15
Position estimation maximum error, m: ±0.01
Attitude estimation maximum error, deg: ±(0.15 + 0.02α)

The search for the vector $p$ is carried out not in the entire six-dimensional space but in a bounded set $P$ within which the maneuvers of the chaser will be performed. The size of $P$ and the accuracy requirements are given in Table 1. When the chaser maneuvers at a small distance from the target, there is a risk of collision. Therefore, an additional performance requirement is imposed: the CVS must provide the pose vector with a period of 0.1 s.

DETERMINING CAMERA CHARACTERISTICS

Designing the CVS includes the selection of components that can ensure the principal solvability of the problem and satisfy the requirements for the solution. The measuring instrument that limits the potentially achievable accuracy is the video camera. A single measurement is a picture, which is discretized due to the discrete structure of the sensor matrix. On the picture plane, the position of an object can be measured with pixel precision. We will assume that two images are distinguishable if the positions of the objects depicted on them differ by at least 1 pixel; otherwise it is not possible to distinguish the corresponding vectors $p$. Hence, the vector $p$ can be determined only with finite accuracy. Given a mathematical model of the camera, it is possible to establish the potentially achievable accuracy of the problem solution. We will define the pin-hole model parameters $c = \{f, w, h, W, H\}$ that allow distinguishing the images for different vectors $p$ which differ by the required resolution.

Consider a point on the surface of the CAD model and find out how much its image shifts during small rotations and translations of the target relative to the camera.
To do this, we write out in explicit form the transformation of 3D coordinates into 2D pixel coordinates. Let a point be given by the vector $(x, y, z)^T$ in the reference frame $O_2 x_2 y_2 z_2$. From (1) we obtain its coordinates in the frame $O_1 x_1 y_1 z_1$:

$$\begin{pmatrix} \tilde{x} \\ \tilde{y} \\ \tilde{z} \end{pmatrix} = T_{21} \cdot \begin{pmatrix} x \\ y \\ z \end{pmatrix} + \begin{pmatrix} r_x \\ r_y \\ r_z \end{pmatrix}. \qquad (2)$$

On the image plane another coordinate system is given. Its origin lies in the lower left corner of the image; the axis $t$ is directed to the "right" and the axis $s$ "up". The vector $x_1$ is perpendicular to the image plane, $y_1$ is parallel to $s$ and $z_1$ is parallel to $t$. The perspective projection of a point $(\tilde{x}, \tilde{y}, \tilde{z})^T$ onto the image plane gives its image on the picture; points $(\cdot, 0, 0)^T$ are projected to the center of the picture. The coordinates (in pixels) of the point under consideration (2) take the form

$$t = \frac{\tilde{z}}{\tilde{x}}\, k_w + \frac{W}{2}, \quad s = \frac{\tilde{y}}{\tilde{x}}\, k_h + \frac{H}{2}, \qquad k_w = \frac{f}{w} W, \quad k_h = \frac{f}{h} H. \qquad (3)$$

The ratios $f/w$ and $f/h$ determine the camera's field of view along the horizontal and the vertical; they are the scaling factors when converting from meters to pixels. We fix $f$, $w$, $h$ and calculate $W$ and $H$ based on the accuracy requirements (Table 1).

Consider the change of the image coordinates (3) caused by a slight change of the position $(x, y, z)$ and attitude $(\vartheta, \psi, \gamma)$ in the vicinity of the pose vector $p$:

$$\vartheta = \psi = \gamma = 0, \quad r_x > 0, \quad r_y = r_z = 0. \qquad (4)$$

The selected value of $p$ corresponds to the location of $O_2$ exactly in front of the camera at some fixed distance $r_x$. For simplicity, we assume the vertical and horizontal resolution of the camera to be the same:

$$h = w, \quad H = W, \quad k_w = k_h = k = \frac{f}{w} W. \qquad (5)$$

Assume the camera can capture a square with a side of 1 m from a distance of 1 m, so $k = W$ and the horizontal and vertical fields of view are equal to $2\,\mathrm{arctg}\,0.5 \approx 53°$.
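The mapping (3) is straightforward to prototype. The sketch below (our naming, not the paper's) projects a point that is already expressed in the camera frame $O_1 x_1 y_1 z_1$:

```python
def project(point, f, w, h, W, H):
    """Pin-hole projection, Eq. (3): pixel coordinates (t, s) of a camera-frame point."""
    x_t, y_t, z_t = point          # (x~, y~, z~); x~ is the depth along the optical axis
    k_w = f / w * W                # horizontal meters-to-pixels scale factor
    k_h = f / h * H                # vertical scale factor
    t = z_t / x_t * k_w + W / 2    # column (pixels), origin at the lower-left corner
    s = y_t / x_t * k_h + H / 2    # row (pixels)
    return t, s
```

With $f/w = 1$ and $W = H = 1000$, a point on the optical axis lands at the picture center (500, 500), as required by the convention that $(\cdot, 0, 0)^T$ projects to the center.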
Let us consider a rotation by the roll angle $\gamma$ only. From (1)-(5) we obtain the shift of the image position of a point, measured in pixels:

$$\begin{pmatrix} t \\ s \end{pmatrix}_{\gamma} - \begin{pmatrix} t \\ s \end{pmatrix}_{0} = \begin{pmatrix} \dfrac{y\sin\gamma + z\cos\gamma}{x + r_x}\, k_w + \dfrac{W}{2} \\[2mm] \dfrac{y\cos\gamma - z\sin\gamma}{x + r_x}\, k_h + \dfrac{H}{2} \end{pmatrix} - \begin{pmatrix} \dfrac{z}{x + r_x}\, k_w + \dfrac{W}{2} \\[2mm] \dfrac{y}{x + r_x}\, k_h + \dfrac{H}{2} \end{pmatrix}. \qquad (6)$$

We substitute the value $k = W$ and denote the shift by the vector $(\Delta t, \Delta s)^T$:

$$\begin{pmatrix} \Delta t \\ \Delta s \end{pmatrix}_{\Delta\gamma} = \begin{pmatrix} y\sin\gamma + z(\cos\gamma - 1) \\ y(\cos\gamma - 1) - z\sin\gamma \end{pmatrix} \frac{W}{x + r_x}. \qquad (7)$$

Similarly, we obtain the shifts for small rotations $\psi$ and $\vartheta$:

$$\begin{pmatrix} \Delta t \\ \Delta s \end{pmatrix}_{\Delta\psi} = \begin{pmatrix} \dfrac{-x\sin\psi + z\cos\psi}{x\cos\psi + z\sin\psi + r_x} - \dfrac{z}{x + r_x} \\[2mm] \dfrac{y}{x\cos\psi + z\sin\psi + r_x} - \dfrac{y}{x + r_x} \end{pmatrix} W, \qquad \begin{pmatrix} \Delta t \\ \Delta s \end{pmatrix}_{\Delta\vartheta} = \begin{pmatrix} \dfrac{z}{x\cos\vartheta - y\sin\vartheta + r_x} - \dfrac{z}{x + r_x} \\[2mm] \dfrac{x\sin\vartheta + y\cos\vartheta}{x\cos\vartheta - y\sin\vartheta + r_x} - \dfrac{y}{x + r_x} \end{pmatrix} W, \qquad (8)$$

and for translations along the axes $x_1$, $y_1$, $z_1$ by $\Delta x$, $\Delta y$, $\Delta z$ respectively:

$$\begin{pmatrix} \Delta t \\ \Delta s \end{pmatrix}_{\Delta x} = \begin{pmatrix} \dfrac{z}{x + r_x + \Delta x} - \dfrac{z}{x + r_x} \\[2mm] \dfrac{y}{x + r_x + \Delta x} - \dfrac{y}{x + r_x} \end{pmatrix} W = \begin{pmatrix} -z \\ -y \end{pmatrix} \frac{W \Delta x}{(x + r_x)(x + r_x + \Delta x)}, \qquad (9)$$

$$\begin{pmatrix} \Delta t \\ \Delta s \end{pmatrix}_{\Delta y} = \begin{pmatrix} 0 \\ \Delta y \end{pmatrix} \frac{W}{x + r_x}, \qquad \begin{pmatrix} \Delta t \\ \Delta s \end{pmatrix}_{\Delta z} = \begin{pmatrix} \Delta z \\ 0 \end{pmatrix} \frac{W}{x + r_x}. \qquad (10)$$

The obtained expressions (7)-(10) characterize the sensitivity of the image to changes of the attitude and position of the target. The minimally measurable change of $p$ corresponds to a shift by one pixel vertically or horizontally. We will seek the minimum value of $W$ that satisfies the condition $(\Delta t \ge 1) \vee (\Delta s \ge 1)$. It follows from (7)-(10) that the sensitivity of the image depends on the distance $r_x$ between the camera and the target. When $x + r_x$ increases, the sensitivity to a change of $r_x$ (i.e. distance) decreases quadratically, while the sensitivity to changes of $r_y$ and $r_z$ (i.e. lateral shifts parallel to the image plane) decreases linearly. For rotations the dependencies are more complex, so they should be investigated numerically.
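Expression (7) is easy to evaluate numerically. The sketch below is our code, assuming $k = W$ as in (5); it computes the roll-induced pixel shift and the smallest $W$ at which that shift reaches one pixel:

```python
import math

def roll_shift(y, z, gamma, depth, W):
    """Image shift (dt, ds) in pixels for a pure roll by gamma, Eq. (7).

    (y, z) are the point's lateral coordinates (m), depth = x + r_x is the
    distance from the camera to the point along the optical axis (m),
    W is the sensor resolution under the assumption k = W.
    """
    dt = (y * math.sin(gamma) + z * (math.cos(gamma) - 1.0)) * W / depth
    ds = (y * (math.cos(gamma) - 1.0) - z * math.sin(gamma)) * W / depth
    return dt, ds

def min_W_for_roll(y, z, gamma, depth):
    """Smallest W for which the roll-induced shift reaches one pixel."""
    dt, ds = roll_shift(y, z, gamma, depth, 1.0)  # shift per unit of W
    return 1.0 / max(abs(dt), abs(ds))
```

For a point with $y = z = 1$ m at a depth of 2 m and a roll of 0.15°, this gives a minimum $W$ of roughly 760-770 pixels.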
The sensitivity to changes of the attitude and of the coordinate $r_x$ also depends on the size of the target body. As a target, consider a cube with a side of two meters. Let the point $O_2$ coincide with the center of the cube. The relative pose of the target is determined by the values (4). We choose the vertex with coordinates $(-1, 1, 1)$ in the reference frame $O_2 x_2 y_2 z_2$, which lies on the face closest to the camera. Consider three variants of the distance to the surface of the object: $x + r_x$ = 2, 5 and 10 meters. Table 2 shows the shift of the vertex image when the orientation angles are changed by 0.15° and the position (along the axes of $O_1 x_1 y_1 z_1$) is changed by 0.01 m. The minimum value of the photosensitive matrix resolution $W$ is determined from the condition that these shifts equal one pixel.

Table 2. Camera sensitivity to pose vector changes

Distance $r_x$ = 3 m, distance to surface $x + r_x$ = 2 m:
  $\gamma$, $\psi$, $\vartheta$: image shift $1.3 \cdot 10^{-3}\,W$ pixels, minimum $W$ = 770
  $\Delta x$: image shift $2.5 \cdot 10^{-3}\,W$ pixels, minimum $W$ = 400
  $\Delta y$, $\Delta z$: image shift $5.0 \cdot 10^{-3}\,W$ pixels, minimum $W$ = 200

Distance $r_x$ = 6 m, distance to surface $x + r_x$ = 5 m:
  $\gamma$: image shift $5.2 \cdot 10^{-4}\,W$ pixels, minimum $W$ = 1920
  $\psi$, $\vartheta$: image shift $4.1 \cdot 10^{-4}\,W$ pixels, minimum $W$ = 2410
  $\Delta x$: image shift $4.0 \cdot 10^{-4}\,W$ pixels, minimum $W$ = 2500
  $\Delta y$, $\Delta z$: image shift $2.0 \cdot 10^{-3}\,W$ pixels, minimum $W$ = 500

Distance $r_x$ = 11 m, distance to surface $x + r_x$ = 10 m:
  $\gamma$: image shift $2.6 \cdot 10^{-4}\,W$ pixels, minimum $W$ = 3850
  $\psi$, $\vartheta$: image shift $2.3 \cdot 10^{-4}\,W$ pixels, minimum $W$ = 4280
  $\Delta x$: image shift $1.0 \cdot 10^{-4}\,W$ pixels, minimum $W$ = 10000
  $\Delta y$, $\Delta z$: image shift $1.0 \cdot 10^{-3}\,W$ pixels, minimum $W$ = 1000

The shift of the image during a small rotation of the target is significantly influenced by the shape of its body and the position of $O_2$. For an object strongly elongated along the axis $x_1$, the image will be more sensitive to changes of the pitch and yaw angles. The estimates of the necessary sensor resolution in Table 2 were computed for a particular camera FOV and a particular model target; for a target of another size and shape the value of $W$ will differ.
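The $\Delta x$ rows of Table 2 can be rechecked directly from Eq. (9). A small sketch (our code, under the same assumptions: $k = W$, vertex lateral offset $z = 1$ m, $\Delta x = 0.01$ m):

```python
def min_W_axial(z, depth, dx):
    """Smallest W (pixels) at which an axial translation dx shifts the vertex
    image by one pixel, from Eq. (9); depth = x + r_x, offset z in meters."""
    shift_per_unit_W = abs(z) * dx / (depth * (depth + dx))
    return 1.0 / shift_per_unit_W
```

Depths of 2, 5 and 10 m give roughly 400, 2500 and 10000 pixels, reproducing the $\Delta x$ column of Table 2 and the quadratic loss of distance sensitivity.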
Nevertheless, we can draw the following conclusions:

- the accuracy of determining $y$ and $z$ is the best (linear decrease with distance);
- the accuracy of determining $x$ is the worst (quadratic decrease with distance);
- the accuracy of determining $\gamma$, $\psi$, $\vartheta$ strongly depends on the target shape (linear decrease with distance).

METHOD OF SOLVING

Relations between variables. The CVS determines the pose vector $p = (x, y, z, \vartheta, \psi, \gamma)^T$ based on the captured images, the CAD model of the target, and the mathematical model of the camera. It is necessary to establish how the initial data and the unknown quantities are related. The captured image $a$ is a $W \times H$ array of pixel brightness values. This image is completely determined by the set of values and factors shown in Fig. 4; its formation is influenced by both known and unknown quantities. According to the problem statement, the positions of the Sun and other sources of illumination are unknown.

Fig. 4. Formation of image

In addition, the image is subject to the influence of many other factors which, due to their random nature, are considered as noise. These include, for example, other objects in the FOV, measurement noise caused by high-energy particles and radiation, round-off errors, etc. The dependence of the image on the listed factors can be written formally as a function

$$a = f(m, c, p, l, \eta), \qquad (11)$$

where $m$ denotes the given CAD model, $c$ the parameters of the pin-hole camera, $l$ the characteristics of the main light sources, and $\eta$ noise of various kinds. We exclude the parameters $m$ and $c$ from consideration, since they do not change in time. Then (11) takes the form

$$a = f(p, l, \eta). \qquad (12)$$

We introduce notation for the sets to which the quantities in (12) belong. According to Table
1, the solution of the problem lies in a bounded set

p ∈ P ⊂ R⁶, P = {(x, y, z, ϑ, ψ, γ): x_min ≤ x ≤ x_max, y_min ≤ y ≤ y_max, z_min ≤ z ≤ z_max, ϑ_min ≤ ϑ ≤ ϑ_max, ψ_min ≤ ψ ≤ ψ_max, γ_min ≤ γ ≤ γ_max}.  (13)

The problem of finding p from the image a is not always solvable. For example, in the absence of illumination the image a does not contain the necessary information. Therefore, it is assumed that the unknown illumination l and noise η belong to certain bounded sets corresponding to the CVS operating mode:

l ∈ L, η ∈ N,  (14)

where L is the set of admissible illuminations and N is the set of permissible noises. The image a is uniquely defined on the set P ⊕ L ⊕ N. Denote the set of admissible images

a ∈ A ⊂ R^{W·H}.  (15)

The presence of the unknown illumination l and noise η in (12) does not allow us to obtain a direct functional dependence of the image on the pose vector p alone. To each p there corresponds a set of images [a] ⊂ A generated by all possible realizations of l and η. Suppose that there exists a function g: R^{W·H} → R^N, defined on the set of images and acting into a vector space R^N, that is insensitive to l and η:

∀p ∈ P, ∀l1, l2 ∈ L, ∀η1, η2 ∈ N: a1 = f(p, l1, η1), a2 = f(p, l2, η2) ⇒ g(a1) = g(a2),  (16)

i.e. g performs a certain transformation that includes filtering out noise and illumination. We apply the function g to both sides of (12) and denote ã = g(a), h(p) = g∘f(p, l, η). Then

ã = h(p).  (17)

The function h maps P into a set Ã ⊂ R^N. Suppose that h: P → Ã is injective and continuously differentiable, and that each p ∈ P corresponds to a single ã ∈ Ã. It follows from these assumptions that different vectors p1 ≠ p2 correspond to different vectors ã1 ≠ ã2. Indeed, suppose that ã1 = h(p1) and ã2 = h(p2); if ã1 = ã2, then injectivity implies p1 = p2. This property gives unique solvability of the problem of determining p from a given ã = g(a).
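Condition (16) can be illustrated with a toy numerical model (our own sketch, not the paper's implementation): a fixed silhouette rendered under two different illumination intensities yields the same feature vector after a binarizing transform g.

```python
import numpy as np

rng = np.random.default_rng(0)

def render(mask, light, noise_sigma, rng):
    # toy stand-in for a = f(p, l, eta): the target silhouette `mask`
    # (fixed pose p) lit with intensity `light`, plus sensor noise
    return mask * light + rng.normal(0.0, noise_sigma, mask.shape)

def g(a, thresh=0.5):
    # illumination-insensitive transform: binarize, then summarize the
    # silhouette by a few geometric numbers (area, bounding box), a toy
    # version of the feature vector in (16)-(17)
    b = a > thresh
    ys, xs = np.nonzero(b)
    return np.array([b.sum(), ys.min(), ys.max(), xs.min(), xs.max()])

mask = np.zeros((64, 64)); mask[20:40, 25:45] = 1.0   # square "target"
a1 = render(mask, light=1.0, noise_sigma=0.05, rng=rng)
a2 = render(mask, light=3.0, noise_sigma=0.05, rng=rng)
print(np.array_equal(g(a1), g(a2)))   # same pose, different light
```

The brightness values of a1 and a2 differ everywhere, yet g maps both to the same vector, which is exactly the invariance that (16) demands.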
It can be shown that this property is satisfied if the target body has no symmetry. It is known [4] that if the Jacobian satisfies det[∂h(p̂)/∂p] ≠ 0 at some point p = p̂, then in a neighborhood of this point an inverse function exists:

p = h⁻¹(ã).  (18)

However, in the general case it is not possible to obtain an analytic expression for the function h⁻¹. The mappings between the sets are shown in Fig. 5. It is impossible to find f in an analytic form, but it can be specified by a computational algorithm. Given the CAD model of the target, the mathematical model of the camera, a pose vector p̂ ∈ P and an illumination l̂ ∈ L, it is possible to calculate the synthetic image â ∈ A that would be formed on the camera sensor in the absence of noise η. Specifying f and g in the form of computational algorithms makes this possible for the function h, but does not yield a computational algorithm for h⁻¹, so we cannot calculate p directly from the image as p = h⁻¹(g(a)).

Local solution. We consider the problem in a small local subdomain of P. Using the assumption of continuous differentiability of the function h, equation (17) can be represented by a Taylor series expansion in a neighborhood of some point p̂:

ã̂ + Δã = h(p̂) + ∂h(p)/∂p|_{p=p̂}·Δp + o(||Δp||),  (19)

where Δp = p − p̂, Δã = h(p) − h(p̂). The Jacobi matrix ∂h(p)/∂p = [∂h_i(p)/∂p_j] of the mapping (17) must have rank 6, i.e. its columns must be linearly independent. Otherwise the problem will not have a unique solution in the neighborhood of p̂, since there will be different Δp1 and Δp2 to which the same Δã1 = Δã2 corresponds. Since ã̂ = h(p̂), from (19) it follows that

Δã = ∂h(p)/∂p|_{p=p̂}·Δp + o(||Δp||),  (20)

and the local dimension of the set of ã is equal to 6.
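When h is available only as an algorithm, the rank condition on the Jacobi matrix can be checked numerically by finite differences. A sketch with a made-up smooth feature map in place of the real h = g∘f:

```python
import numpy as np

def numeric_jacobian(h, p_hat, eps=1e-6):
    # finite-difference estimate of H = dh/dp at p_hat (N x 6 for p in R^6)
    p_hat = np.asarray(p_hat, float)
    base = h(p_hat)
    H = np.empty((base.size, p_hat.size))
    for i in range(p_hat.size):
        dp = np.zeros_like(p_hat); dp[i] = eps
        H[:, i] = (h(p_hat + dp) - base) / eps
    return H

# invented smooth feature map standing in for h = g∘f (the real one is
# given only as a rendering plus feature-extraction algorithm)
def h_toy(p):
    x, y, z, t, s, g_ = p
    return np.array([x + y, y - z, np.sin(t), np.cos(s) * z,
                     g_ + x, t * s, x * z + g_])

H = numeric_jacobian(h_toy, np.array([8.0, 0.0, 0.0, 0.1, 0.1, 0.1]))
print(np.linalg.matrix_rank(H))  # local solvability requires rank 6
```

If the computed rank drops below 6 at some pose, the linearized problem (20) has no unique solution there, which is exactly the degenerate case discussed above.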
This means that for the problem to be solvable in a small neighborhood of some p̂, the dimension of the vectors ã must not be less than 6.

General approach. Most of the existing solutions reduce to the model-to-image registration problem, which consists in detecting special feature points on the images and matching them [5–7]. In the developed CVS a different approach is used. It is a kind of learning-based method and consists in identification of the mapping h⁻¹. The solution of the problem consists of:
- constructing a function g that ensures the existence of h⁻¹ on the set Ã;
- identifying the function h⁻¹.

First we consider the identification problem, and then the choice of the function g. To obtain the training sample for identification of h⁻¹, it is necessary to construct synthetic images and compare them with real ones. Let the pose vector p̂ ∈ P and the illumination l̂ ∈ L be given. The calculated image a_s = f(p̂, l̂, 0) will differ from the real one a_r = f(p̂, l_r, η_r) because of the illumination and noise. Applying the function g gives the equality ã_s = g(a_s) = g(a_r) = ã_r. Thus, by comparing ã_s and ã_r, the coincidence of the corresponding vectors p can be verified. If we define a functional J: R^N → R on the set Ã such that J(ã − ã*) → 0 as ã → ã*, then for any pair of elements ã1 = h(p1), ã2 = h(p2) it becomes possible to estimate the difference between p1 and p2.

We cover the set P with a grid of discrete values

p_rstijk = (x_r, y_s, z_t, ϑ_i, ψ_j, γ_k), r = 1, …, n1, s = 1, …, n2, t = 1, …, n3, i = 1, …, n4, j = 1, …, n5, k = 1, …, n6,  (21)

with a sufficiently small step: Δd for the coordinates and Δα for the angles, corresponding to the required accuracy of the solution (Table 1). For each grid node, using the CAD model, we build a synthetic image â_rstijk and calculate ã̂_rstijk = g(â_rstijk).

Fig. 5.
Mapping scheme

The real image a* = f(p*, l*, η*) captured by the camera is used to calculate ã* = g(a*). Then ã* is compared, by some selected criterion J, with the nodal vectors ã̂_rstijk. As a result we can find the grid node (with parameters x̂, ŷ, ẑ and ϑ̂, ψ̂, γ̂) for which the synthetic image best coincides with the real one. These values will differ from the true parameters x*, y*, z* and ϑ*, ψ*, γ* by no more than the step size between the nodes, i.e.

|x̂ − x*| ≤ Δd, |ŷ − y*| ≤ Δd, |ẑ − z*| ≤ Δd, |ϑ̂ − ϑ*| ≤ Δα, |ψ̂ − ψ*| ≤ Δα, |γ̂ − γ*| ≤ Δα.  (22)

An immediate implementation of this approach is impossible because of the very large number of nodes: to meet the requirements (Table 1) it is necessary to use about 10¹⁶ nodes. The number of nodes can be reduced by increasing the grid step. Then, to obtain the required accuracy, finding the nearest node is no longer sufficient. The solution algorithm is therefore divided into two stages. At the first stage a first approximation is found: the optimal node on a coarse grid. Because of the large number of nodes in the grid, methods more efficient than full search should be used, for example decision trees. To reduce the amount of computing performed in real time, the node values and auxiliary data must be calculated in advance. At the second stage the solution is refined and the required accuracy is achieved. For this, optimization by the criterion J can be used. An alternative is to construct an analytical approximation of the function h⁻¹ in a local domain centered at the grid node. This method is preferable, since the calculation for each grid node can be carried out in advance on the basis of synthesized images.

Consider an approximation of h⁻¹ in the local subdomain of a node p̂ ∈ P. The function h⁻¹: R^N → R⁶ is nonlinear and is determined by the target shape, the point p̂ and the function g. For different p from the subdomain, the corresponding synthetic images a and vectors ã are calculated.
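The first (coarse) stage amounts to a nearest-node search over precomputed feature vectors. A two-dimensional toy sketch (the real grid is six-dimensional; the map h here is invented for illustration, and J is taken as the squared Euclidean distance):

```python
import numpy as np
from itertools import product

# coarse grid over a toy 2-D slice of P; the step sizes play the role
# of the grid steps bounding the error as in (22)
xs = np.arange(0.0, 1.0001, 0.1)
ys = np.arange(0.0, 1.0001, 0.1)
nodes = np.array(list(product(xs, ys)))

def h(p):   # toy injective feature map standing in for g∘f
    return np.array([p[0] + p[1], p[0] - 2.0 * p[1], p[0] * p[1]])

features = np.array([h(p) for p in nodes])   # precomputed off-line

p_true = np.array([0.43, 0.78])
a_tilde = h(p_true)                          # "measured" feature vector
j = np.argmin(((features - a_tilde) ** 2).sum(axis=1))   # criterion J
p_hat = nodes[j]
print(p_hat)
```

The recovered node differs from p_true by less than the grid step, in line with (22); the brute-force argmin is what the decision tree of the next section replaces.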
The obtained set of pairs (p, ã) is used to construct a linear approximation of the function h⁻¹. This approach is equivalent to a piecewise linear approximation of h⁻¹ on the whole set Ã. The use of a linear approximation is advisable for the following reasons. Firstly, all the quantities ã are approximate; it is impossible to build a complex high-precision model from noisy data, since the problem of identifying the parameters of such a model becomes ill-posed. Secondly, since the dimension cannot be less than 6, more complex models would have a much larger number of parameters and would take up more memory for storage.

The described algorithm puts forward a number of requirements for the function g:
- all grid nodes p_rstijk must have distinct values ã_rstijk; moreover, they must differ enough that a fast search structure can be built over them;
- a computationally simple criterion J for comparing elements must be defined on the set Ã;
- the dimension of the space Ã should be as small as possible, to reduce the amount of memory required to store the grid nodes;
- the calculation of ã from the image should be performed as quickly as possible.

INFORMATIVE FEATURES OF IMAGES

Informative features. In a variety of areas dealing with signal processing, procedures are used to extract certain values from the signals, the informative features (IF), that quantify the useful information. The measured input information is often presented in a form not suitable for immediate use. For a CVS, the captured image contains a large amount of redundant data. The transition to more compact structures that retain all the necessary information is performed by the function g: A → Ã.
For working with images, IFs are used in such areas as artificial intelligence and robotics, in tasks of image processing, content-based search, pattern recognition and classification. The IFs in use are divided [8] into several categories: features of color, features of texture and features of form. We examine them for applicability as the function g.

The IFs of color include various integral characteristics calculated from the brightness of all pixels. The brightness is considered as a random variable, and the IFs are represented in the form of histograms or statistical characteristics of its distribution. The use of such IFs is appropriate when comparing images of equal illumination. In the problem under consideration this does not hold: the distribution of the brightness depends substantially on the illumination l ∈ L, and condition (16) cannot be fulfilled.

Textural IFs are used to isolate image characteristics that describe the general properties of local features and repetitive structures. The image is split into local areas, each of which is characterized by a certain vector value. These values, collected from all local areas, are used to calculate integral characteristics: histograms, distribution parameters, expansion coefficients. Compared to the IFs of color, texture IFs are more resistant to changes of illumination. But, being integral characteristics, texture IFs are weakly sensitive to changes in the position and attitude of the target, so high-precision pose determination with them is not possible.

IFs of form are characteristics describing the shape of the boundaries or of homogeneous areas of the image.
The IFs of this group can be constructed to be non-invariant to shift, rotation and scale, so that they are weakly sensitive to variations of illumination and highly sensitive to changes of the target pose, which allows satisfying (16). The procedure of constructing an outer contour as the function g is shown in Fig. 6: two images that correspond to a single p and different illumination have the same outer contour. The outer contour can be used as an IF. For a nonsymmetric target it carries enough information to determine the vector p uniquely. However, the dimension of the space of outer contours is too large and does not satisfy the requirements for the function g given above.

Fig. 6. Constructing an outer contour of the object as function g

To reduce the dimensionality of Ã, instead of the outer contour we will use its descriptor: the vector of geometric features

ã = (ã1, ã2, …, ãN)^T = g(a) = (g1(a), g2(a), …, gN(a))^T,  (23)

where g1, g2, …, gN are functions that compute various characteristics of the outer contour. Examples of possible characteristics are shown in Fig. 7. It is advisable to use distances to the bounding rectangle, the coordinates of the points where the outer contour touches it, and others. In addition, integral characteristics, for example the area, can be used. The main requirement for the IFs is good sensitivity to changes of p.

Calculation of geometric features. To solve the problem we will use only the IFs computed from the outer contour. Consider finding the outer contour on real and synthesized images. Many works are devoted to contours in images [9–12]. The simplest solution proceeds in stages. First, the image is filtered from high-frequency noise, for example using a Gaussian filter.
Then the pixels of the image in which there is a sharp difference in brightness are detected; this requires finding partial difference derivatives. The result of boundary detection is shown in Fig. 8. The last step is finding the external closed contour.

The construction of the outer contour of a synthesized image can be done without full rendering of the target, reducing the amount of computation. To do this, we obtain from the 3D model an array of all possible edges. The projection of these edges gives a wireframe image that contains the outer contour. The closed outer contour can be found by traversing the edges. An example is shown in Fig. 9.

The accuracy of calculating the outer contour differs between the real and the synthesized image. On the real image it is limited by the pixel size, while for the synthesized image any accuracy can be obtained. Accordingly, the IF vector ã = (ã1, ã2, …, ãN)^T computed from the real image will be less accurate than the one computed from the synthesized image.

Fig. 7. Calculating the coordinates of the IF vector

Fig. 8. Finding boundaries on a noisy image

Fig. 9. Finding an outer contour for a synthesized image

Fig. 10. Binary tree for finding the optimal grid node

Building a decision tree. Consider the implementation of a fast procedure for finding the optimal node on the grid (21) covering the set P, using the feature vectors. For each node p_rstijk = (x_r, y_s, z_t, ϑ_i, ψ_j, γ_k), a synthesized image is constructed and a feature vector ã_rstijk is determined. For a sufficiently large dimension of ã_rstijk and an asymmetric target, each p_rstijk will have a unique ã_rstijk.
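Such a fast search structure over the node feature vectors can be organized as a median-split tree, essentially a k-d tree. A minimal sketch of our own (greedy descent only; a production version would backtrack, or use a library implementation, to guarantee the true nearest node near partition boundaries):

```python
import numpy as np

def build(points, idx, depth=0):
    # split the group at the median of coordinate (depth mod dim):
    # one subtree holds values <= median, the other values > median,
    # recursing until a single grid node is left per group
    if len(idx) == 1:
        return int(idx[0])
    d = depth % points.shape[1]
    order = idx[np.argsort(points[idx, d])]
    m = len(order) // 2
    split = points[order[m - 1], d]
    return (d, split, build(points, order[:m], depth + 1),
            build(points, order[m:], depth + 1))

def query(tree, q):
    # greedy descent to a single leaf (grid node index)
    while isinstance(tree, tuple):
        d, split, lo, hi = tree
        tree = lo if q[d] <= split else hi
    return tree

feats = np.array([[0., 0.], [1., 3.], [2., 1.], [3., 2.]])  # toy node IF vectors
tree = build(feats, np.arange(len(feats)))
print(query(tree, np.array([2.9, 2.1])))   # index of the matching node
```

Each query costs a number of comparisons proportional to the tree depth instead of a full scan of all nodes, which is the point of precomputing the structure off-line.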
We construct a binary tree whose traversal is determined by the coordinates of the IF vector ã. We divide all nodes into two equal groups according to the first coordinate ã1 of the IF vector: in the first group the value of this attribute is greater than the median value, in the second it is less. In turn, these groups are divided in half with respect to the median values of the second attribute. The process continues until one node is left in each group. The resulting hierarchical structure allows us to determine the node nearest to the tested IF vector. Figure 10 shows the binary decision tree. To reduce its depth, a non-binary tree can be used, in which the partitioning is conducted over more than two branches. The partitioning of a set of nodes into groups can be done in different ways, for example into unequal subgroups; different orderings of the attributes can also be applied. Constructing an optimal tree is a nontrivial problem [13, 14]. For the best separation of a large number of nodes, machine learning methods are required.

Construction of linear approximation. According to the described algorithm, to refine the value of p it is necessary to identify the function h⁻¹: Ã → P in the neighborhood of a node p̂ = (p̂1, …, p̂6). Since the function h⁻¹ will be approximated linearly, the approximation of the function h: P → Ã will also be linear. Consider the system of equations

h(p) ≈ h(p̂) + H·(p − p̂),  (24)

where H = ∂h(p)/∂p = [∂h_i(p)/∂p_j] is an N-by-6 matrix. We discard N − 6 equations from (24), so that the remaining rows of the matrix H are linearly independent. Denote the truncated feature vector ã_reduced ∈ R⁶ and the truncated function h as h_reduced. We obtain the system

h_reduced(p) ≈ ã̂_reduced + H_reduced·(p − p̂),  (25)

where ã̂_reduced = h_reduced(p̂) is the value at the node and H_reduced is a square nonsingular matrix.
The inverse function has the form

p = h_reduced⁻¹(ã_reduced) = p̂ + H_reduced⁻¹·(ã_reduced − ã̂_reduced).  (26)

Of the entire set of characteristics, only six are used: those IFs corresponding to the rows of the system (24) that are not discarded. The choice of rows in the transition from (24) to (25) is made in such a way that the matrix H_reduced has the smallest possible condition number.

We take a series of values p that differ from the nodal p̂ in coordinate i:

p_j^i = p̂^i + ε·(j − 1 − k), j = 1, …, 2k + 1,  (27)

where k is a natural number. We denote this series {p^i}. For each p in it, the IF vector is computed. For each of its coordinates ã_l, l = 1, …, N, a sample is obtained consisting of pairs of values <parameter p^i — feature ã_l>. A linear approximation of this dependence is constructed by the least-squares (LS) method, giving a coefficient ∂ã_l/∂p^i = ∂h_l(p)/∂p_i. It determines the sensitivity of a particular characteristic to the variation of a single parameter of the pose vector p. The coefficients calculated in this way form the matrix H. After this, all possible six-element subsets of the IFs are sorted through, and truncated systems of the form (25) with nonsingular H_reduced are formed. For each of them, the linear function is inverted to evaluate h_reduced⁻¹. To select the best of the obtained models, each of them is verified on a test sample. The model h_reduced⁻¹ with the smallest maximum discrepancy is accepted as the local model. As a result, for each grid node a function is found that allows us to solve the problem on the basis of a certain subset of six IFs.
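The identification of H from the one-parameter series (27) and the selection of a well-conditioned square subsystem can be sketched as follows (our own code with an invented feature map; in the paper these computations are run off-line for each grid node):

```python
import numpy as np
from itertools import combinations

def sensitivity_matrix(h, p_hat, eps, k):
    # columns of H from 1-D series as in (27): vary one pose parameter
    # at a time, fit each feature's response by least squares
    p_hat = np.asarray(p_hat, float)
    n = p_hat.size
    H = np.empty((h(p_hat).size, n))
    for i in range(n):
        ts = eps * (np.arange(1, 2 * k + 2) - 1 - k)       # offsets around the node
        A = np.vstack([ts, np.ones_like(ts)]).T            # [slope, intercept] design
        Y = np.array([h(p_hat + t * np.eye(n)[i]) for t in ts])
        H[:, i] = np.linalg.lstsq(A, Y, rcond=None)[0][0]  # fitted slopes
    return H

def best_rows(H):
    # pick the square subsystem of rows with the smallest condition number
    return min(combinations(range(H.shape[0]), H.shape[1]),
               key=lambda r: np.linalg.cond(H[list(r), :]))

# toy 2-parameter, 4-feature map (the real case is 6 parameters, N features)
H = sensitivity_matrix(lambda p: np.array([p[0], p[1], p[0] + p[1], 2.0 * p[0]]),
                       [0.0, 0.0], eps=0.01, k=4)
print(best_rows(H))
```

Minimizing the condition number of the retained rows keeps the inversion in (26) stable against the measurement noise in the real IF vector.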
The local linear model is characterized by:
- six flags, which determine which features are used in this model;
- the vector p̂ specifying the value at the node;
- the matrix of the system H_reduced⁻¹ ∈ R^{6×6}.

REALIZATION OF COMPUTING EXPERIMENT

We check the method on a simplified problem in which the coordinates x, y, z are known and only the rotation angles must be found. Since there are only three unknowns, the dimension of the matrix H_reduced⁻¹ is three by three, and the number of features involved in each local model is three. As IFs, 15 characteristics of the outer contour were used:
- the coordinates of the points where the outer contour touches the bounding rectangle: left point, right point, top point, bottom point (eight numbers in total);
- the coordinates of the center of gravity of the figure bounded by the outer contour (two numbers);
- the area of the figure bounded by the outer contour (one number);
- the part of the area of the figure bounded by the outer contour that lies in each of the quadrants of the bounding rectangle (four numbers).

Three points were chosen as nodes, in the neighborhoods of which local linear models were constructed. For different sizes of the local area, identification of h_reduced⁻¹ was performed; optimal sets of characteristics and maximum errors on the verification samples were found.

Table 3. Errors of attitude estimation by local linear models (maximum error, degrees)

Node | Size of local domain: 1° | 2° | 3° | 5°
p̂1 | 0.0085 | 0.0125 | 0.0610 |
p̂2 | 0.0144 | 0.0118 | 0.0172 | 0.1137
p̂3 | 0.0028 | 0.0070 | 0.0597 |

The nodes were at the points

p̂1 = (x = 8, y = 0, z = 0, ϑ = 10, ψ = 10, γ = 10)^T,
p̂2 = (x = 8, y = 0, z = 0, ϑ = 10, ψ = 5, γ = 10)^T,
p̂3 = (x = 8, y = 0, z = 0.6, ϑ = −10, ψ = 5, γ = 10)^T,

where the linear quantities are given in meters and the angular values in degrees.
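The 15 contour characteristics listed above (cf. Fig. 7) can be computed from a binary silhouette as in the following sketch; this is our own illustrative code, operating on a filled mask rather than on the traced contour, and the touch-point convention (the first touching pixel along each edge) is our assumption:

```python
import numpy as np

def contour_features(mask):
    # 15 informative features of a binary silhouette: bounding-box touch
    # points (8 numbers), centroid (2), area (1), and the fraction of the
    # area in each of the 4 quadrants of the bounding box (4 numbers)
    ys, xs = np.nonzero(mask)
    y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
    touch = [xs[ys == y0].min(), y0,          # a touch point on the top edge
             xs[ys == y1].min(), y1,          # ... on the bottom edge
             x0, ys[xs == x0].min(),          # ... on the left edge
             x1, ys[xs == x1].min()]          # ... on the right edge
    area = float(mask.sum())
    cy, cx = ys.mean(), xs.mean()             # center of gravity
    my, mx = (y0 + y1) / 2.0, (x0 + x1) / 2.0
    quads = [np.sum((ys <= my) & (xs <= mx)), np.sum((ys <= my) & (xs > mx)),
             np.sum((ys > my) & (xs <= mx)), np.sum((ys > my) & (xs > mx))]
    return np.array(touch + [cx, cy, area] + [q / area for q in quads])

mask = np.zeros((8, 8), bool); mask[2:6, 1:7] = True   # toy silhouette
f = contour_features(mask)
print(f)
```

The same routine applied to a synthesized and to a real silhouette of the target yields the comparable IF vectors that the local linear models are identified from.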
Local areas were defined as three-dimensional cubes with sides parallel to the axes, equal in size to one, two, three and five degrees. The linear models were determined by the LS method on data constructed at nine points along each of the coordinate axes. The models were tested at 729 points evenly spaced within the areas under consideration. Table 3 shows the maximum errors in the orientation angles. When building the models, various IFs were used, including the area-based ones. The above results show that, depending on the position of the node, a satisfactory local linear model can be constructed on a region of larger or smaller size.

CONCLUSIONS

The analysis of the problem of estimating the pose of a known three-dimensional object from its image allowed us to develop a possible method of solution. It is shown that for the solution of the problem it is expedient to use informative features calculated from the original images; analysis of the solution algorithm made it possible to establish the requirements for them. An examination of the existing classification of image features showed that there is a group of features that can be used effectively to solve the problem of determining the position and attitude of a body in space. The possibility of implementing the proposed method using informative features is confirmed experimentally by modeling. A technique for determining the minimum values of the characteristics of a video camera is proposed, which allows one to estimate, at the design stage, the potentially achievable accuracy of solving the problem for a given target object.

REFERENCES

1. Gubarev V.F., et al. Using Vision Systems for Determining the Parameters of Relative Motion of Spacecrafts. Journal of Automation and Information Sciences. 2016. № 11. P.
23–39 (in Russian).
2. Shi J.-F., et al. Uncooperative Spacecraft Pose Estimation Using an Infrared Camera During Proximity Operations. AIAA Space 2015 Conference and Exposition. AIAA 2015-4429. 17 pp.
3. Kelsey J.M., et al. Vision-Based Relative Pose Estimation for Autonomous Rendezvous and Docking. 2006 IEEE Aerospace Conference. 20 pp.
4. Zorich V.A. Mathematical Analysis. Part 2. Moscow: Nauka, 1984. 640 p. (in Russian).
5. David P., et al. SoftPOSIT: Simultaneous Pose and Correspondence Determination. International Journal of Computer Vision. 2004. Vol. 59. Issue 3. P. 259–284.
6. Philip N.K., Ananthasayanam M.R. Relative position and attitude estimation and control schemes for the final phase of an autonomous docking mission of spacecraft. Acta Astronautica. 2003. Vol. 52. Issue 7. P. 511–522.
7. Shijie Z., et al. Monocular Vision-based Two-stage Iterative Algorithm for Relative Position and Attitude Estimation of Docking Spacecraft. Chinese Journal of Aeronautics. 2010. Vol. 23. Issue 2. P. 204–210.
8. Vassilieva N.S. Content-based image retrieval methods. Programming and Computer Software. 2009. Vol. 35. № 3. P. 158–180.
9. Prewitt J.M.S. Object enhancement and extraction. Picture Processing and Psychopictorics, B. Lipkin and A. Rosenfeld, Eds. New York: Academic Press, 1970. P. 75–149.
10. Sobel I., Feldman G. A 3x3 isotropic gradient operator for image processing, presented at a talk at the Stanford Artificial Intelligence Project, 1968. In: R. Duda and P. Hart, Pattern Classification and Scene Analysis. John Wiley & Sons. P. 271–272.
11. Canny J. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1986. № 8(6). P. 679–698.
12. Arbelaez P., et al. Contour Detection and Hierarchical Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2010. P. 898–916.
13. Bentley J.L. Multidimensional binary search trees used for associative searching. Communications of the ACM.
1975. Vol. 18. Issue 9. P. 509–517.
14. Samet H. The Design and Analysis of Spatial Data Structures. 1990. 493 p.

Received 14.06.2017
Melnychuk S.V., Ph.D. (Eng.), Researcher, Department of Dynamic Systems Control, e-mail: sergvik@ukr.net
Gubarev V.F., Corresponding Member of the NAS of Ukraine, Professor, D.Sc. (Eng.), Head of the Department of Dynamic Systems Control, e-mail: v.f.gubarev@gmail.com
Salnikov N.N., Ph.D. (Eng.), Senior Researcher, Department of Dynamic Systems Control, e-mail: salnikov.nikolai@gmail.com
Space Research Institute of the NAS of Ukraine and the SSA of Ukraine, 40 Acad. Glushkov Ave., build. 4/1, 03187, Kyiv, Ukraine
Keywords: automatic docking, position and attitude estimation, non-cooperative object, computer vision, decision tree, linear approximation, informative features, image processing, machine learning, identification.