Recurrent-and-Parallel GMDH Algorithms for High-Performance Computing

The paper presents the conception, theoretical grounds and mathematical tools for designing high-performance searching and iterative GMDH algorithms on the basis of recurrent-and-parallel computing for modelling and prediction of complex processes. Their effectiveness is experimentally tested. An intelligent information technology for inductive modelling of complex processes on the basis of recurrent-and-parallel computing is constructed.

Bibliographic Details

Published in: Управляющие системы и машины
Date: 2019
Authors: Stepashko, V.S., Yefimenko, S.M., Pavlov, A.V.
Format: Article
Language: English
Published by: Міжнародний науково-навчальний центр інформаційних технологій і систем НАН та МОН України, 2019
Online access: https://nasplib.isofts.kiev.ua/handle/123456789/161650
Cite as: Recurrent-and-Parallel GMDH Algorithms for High-Performance Computing / V.S. Stepashko, S.M. Yefimenko, A.V. Pavlov // Управляющие системы и машины. — 2019. — No. 3. — P. 38-51. — Bibliography: 27 titles. — In English.

Repository

Digital Library of Periodicals of National Academy of Sciences of Ukraine
Record id: nasplib_isofts_kiev_ua-123456789-161650
ISSN: 0130-5395
DOI: https://doi.org/10.15407/csc.2019.03.038
UDC: 519.163 + 681.5.015

Abstract (translated from the Ukrainian and Russian): The purpose of this article is to develop methods for parallelizing computations in the exhaustive-search algorithm COMBI and in the generalized relaxational iterative algorithm GRIA, and to determine the computational efficiency of the parallelization. Results. The article describes the developed principles of parallelizing operations in the combinatorial COMBI GMDH algorithm with recurrent estimation of model parameters. The parallelization uses computation schemes with a standard binary-number generator and with sequential complication of model structures, under which each processor autonomously computes its initial binary structural vector and the number of models it is to build; non-repetition of structures across processors is guaranteed. This considerably increases the efficiency of the parallelization, since no time is lost on inter-processor communication.
Topic: Intelligent Information Technologies and Systems
Title (Ukrainian): Рекурентно-паралельні алгоритми МГУА для високопродуктивних обчислень
Title (Russian): Рекуррентно-параллельные алгоритмы МГУА для высокопроизводительных вычислений
ISSN 2706-8145, Системи керування та комп'ютери, 2019, № 3

DOI: https://doi.org/10.15407/usim.2019.03.038
UDC 519.163 + 681.5.015

V.S. STEPASHKO, Doctor Eng., Professor, International Research and Training Center for Information Technologies and Systems of the NAS of Ukraine and of the MES of Ukraine, Acad. Glushkov Ave., 40, Kyiv, 03187, Ukraine, stepashko@irtc.org.ua

S.M. YEFIMENKO, PhD Eng., Senior Researcher, International Research and Training Center for Information Technologies and Systems of the NAS of Ukraine and of the MES of Ukraine, Acad. Glushkov Ave., 40, Kyiv, 03187, Ukraine, syefim@ukr.net

A.V. PAVLOV, PhD Eng., Researcher, International Research and Training Center for Information Technologies and Systems of the NAS of Ukraine and of the MES of Ukraine, Acad. Glushkov Ave., 40, Kyiv, 03187, Ukraine, andriypavlove@gmail.com

RECURRENT-AND-PARALLEL GMDH ALGORITHMS FOR HIGH-PERFORMANCE COMPUTING

The paper presents the conception, theoretical grounds and mathematical tools for designing high-performance searching and iterative GMDH algorithms on the basis of recurrent-and-parallel computing for modelling and prediction of complex processes. Their effectiveness is experimentally tested. An intelligent information technology for inductive modelling of complex processes on the basis of recurrent-and-parallel computing is constructed.

Keywords: inductive modelling, recurrent-and-parallel computations, GMDH, COMBI, GRIA, vector autoregression.

Inductive Modelling Problem

Inductive modelling is the process of generating a mathematical model of an object, process or system from an empirical dataset. The modelling aims at forecasting, classification, extrapolation, interpolation and so on. In general, the model generation problem can be stated as follows.
It is necessary to find the optimal model $\hat{y} = f^*(X, \hat{\Theta}_{f^*})$ in a given set of models $\Psi$ by minimization of a given criterion CR, as the solution of the discrete optimization task:

$$f^* = \arg\min_{f \in \Psi} CR\bigl(y, f(X, \hat{\Theta}_f)\bigr), \qquad (1)$$

where the parameter estimate vector $\hat{\Theta}_f$, $\dim \hat{\Theta}_f = s_f \times 1$, for each function $f(X, \Theta_f) \in \Psi$ is the solution of the continuous optimization task:

$$\hat{\Theta}_f = \arg\min_{\Theta_f \in \mathbb{R}^{s_f}} QR\bigl(y, f(X, \Theta_f)\bigr), \qquad (2)$$

where $QR(\cdot)$ is a criterion estimating the quality of the solution of the parameter estimation problem, and $CR(\cdot)$ is a criterion estimating the quality of the solution of the optimal model selection problem (1).

The Group Method of Data Handling (GMDH) is one of the most effective methods for solving problems (1)-(2). The GMDH-based model generation process rests on the following principles: 1) successive complication of model structures; 2) "external supplement" [1]; 3) non-final decisions [2].

The successive complication principle is based on a self-organization concept: the optimal (adequate) model complexity for the modelled object is found by minimizing the criterion CR. The structure generation algorithm performs a sequence of stages, or iterations ($r = 1, 2, \dots$). Each iteration generates a set of models (solutions) with more complex structures than those of the previous iteration. The iterations continue while the criterion value decreases ($CR_r < CR_{r-1}$) or until the difference $CR_{r-1} - CR_r$ falls below a given $\varepsilon$, the required accuracy of the solution. If $CR_r > CR_{r-1}$, the model structure at iteration $r$ is considered overfitted and the iteration process is stopped.

The external supplement principle is based on Gödel's incompleteness theorem [3].
According to that principle, the criteria used in (1) should involve new information and should have a minimum during the process of increasing model complexity.

The principle of non-final decisions states that not the single best model but several best models (solutions) are selected and passed to the next iteration of the algorithm. This increases the probability of finding the global minimum of the criterion CR.

The problem (1)-(2) in terms of GMDH may be specified as follows. Let $x_i$, $i = \overline{1,m}$, be the observed input variables (factors). First of all, we specify the set $\Psi$ (the model structure class), $\Psi = \{f_k(X, \Theta_{f_k})\}_{k=1}^{\infty}$, which is frequently described by the following polynomial:

$$f(x_1, \dots, x_m, \Theta) = \theta_0 + \sum_{i=1}^{m} \theta_i x_i + \sum_{i=1}^{m} \sum_{j=i}^{m} \theta_{ij} x_i x_j + \sum_{i=1}^{m} \sum_{j=i}^{m} \sum_{k=j}^{m} \theta_{ijk} x_i x_j x_k + \dots \qquad (3)$$

Assume the polynomial (3) is limited by the maximal power $p$; then the number of terms in (3) is $l = C_{m+p}^{p}$. Denote the variables multiplying the coefficients $\theta_i, \theta_{ij}, \theta_{ijk}, \dots$ as $z_j$, and the corresponding vectors as $z_j$, $j = \overline{1,l}$, $Z = (z_1\ z_2\ \dots\ z_l)$.

Let us define a model structure of the form (3). A structural vector is an $l$-dimensional binary vector $d_k = (d_{k1}, \dots, d_{kl})^T$, $k = \overline{1, 2^l}$, containing $s_k$ unit elements, where $d_{kj} = 1$ indicates the presence of the corresponding variable $z_j$ in the model structure (3), $s_k \le l$. The number $s_k$ is called the complexity of the model (3). Consequently, the number of all structures of the form (3), i.e. the cardinality of the set $\Psi$, is $2^l$.

The vector of unknown parameters $\Theta_{d_k}$, $\dim(\Theta_{d_k}) = s_k$, is defined as the vector composed of the non-zero components of the vector $\mathrm{diag}(d_k)\,\Theta$, where $\mathrm{diag}(v)$ is the diagonal matrix with the elements of the vector $v$. The matrix $Z_{d_k}$ is then defined as the matrix composed of those vector-columns $z_j$ of the matrix $Z$ for which the corresponding elements of the vector $d_k^T$ are equal to 1.
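The structural-vector encoding described above can be sketched in a few lines of Python (illustrative names, not from the paper): a binary vector d of length l selects the columns Z_d of the regressor matrix, and enumerating all vectors enumerates the whole set Ψ.

```python
import numpy as np
from itertools import product

l = 3
Z = np.arange(12, dtype=float).reshape(4, l)   # 4 observations, l = 3 regressors z_j

def select_columns(Z, d):
    """Return Z_d: the columns of Z where the structural vector d has a unit element."""
    idx = [j for j, bit in enumerate(d) if bit == 1]
    return Z[:, idx]

# Enumerate all 2^l structural vectors (the cardinality of the set Psi).
structures = list(product([0, 1], repeat=l))
assert len(structures) == 2 ** l

d = (1, 0, 1)                     # model containing z_1 and z_3, complexity s = 2
Zd = select_columns(Z, d)
print(Zd.shape)                   # (4, 2)
```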
Then the function $f_{d_k}(Z_{d_k}, \Theta_{d_k}) = Z_{d_k}\Theta_{d_k}$ with unknown parameters $\Theta_{d_k}$ is called a model structure. Let $D$ be the set of all possible structural vectors $d_k$, $k = \overline{1, 2^l}$, and let some unimodal criterion $QR(y, f_{d_k}(Z_{d_k}, \Theta_{d_k}))$ be given. Then the problem (2) becomes:

$$\hat{\Theta}_{d_k} = \arg\min_{\Theta_{d_k} \in \mathbb{R}^{s_k}} QR\bigl(y, f_{d_k}(Z_{d_k}, \Theta_{d_k})\bigr), \quad \forall d_k \in D, \qquad (4)$$

and the problem (1) takes the form:

$$f^* = \arg\min_{d_k \in D} CR\bigl(y, f_{d_k}(Z_{d_k}, \hat{\Theta}_{d_k})\bigr). \qquad (5)$$

In forecasting, classification and extrapolation problems solved using GMDH in the class of polynomial functions (3) linear in parameters, the residual sum of squares

$$RSS_A = \| y_A - Z_{A,d_k} \hat{\Theta}_{A,d_k} \|^2$$

is used as the QR criterion, and the regularity criterion

$$AR = \Delta_{B/A} = \| y_B - Z_{B,d_k} \hat{\Theta}_{A,d_k} \|^2$$

is used as the CR criterion. The indices A and B denote the training and testing datasets, $A \cap B = \emptyset$, $A \cup B = W$, where W is the initial dataset of observations, i.e. the set of vector-columns of the matrix $(Z\ y)$.

Based on the specifics of the model structure generation process, the whole variety of GMDH algorithms can be classified into searching and iterative algorithms [4] (Fig. 1).

The problem (1) is similar to a discrete programming problem and can be solved using exhaustive or successive search algorithms [4]. The basic searching GMDH algorithm is the combinatorial algorithm COMBI [5-8]. To speed up the algorithm, paper [6] proposes a recurrent procedure for estimating the parameters (the bordering method [9]) and a special method for generating model structures. This decreases the computational complexity of the parameter estimation procedure by a factor of ten compared with the Gaussian method [10]. The aim of successive search algorithms is to reach, with fewer computations, the same optimal solution that an exhaustive search obtains.
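The QR/CR pair described above can be illustrated with a minimal sketch (not the authors' code): the parameters are fitted by least squares on the training set A, the residual sum of squares RSS_A plays the role of QR, and the regularity criterion AR evaluates those same parameters on the testing set B.

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.uniform(size=(40, 3))                        # candidate regressors
y = Z @ np.array([2.0, -1.0, 0.5]) + 0.01 * rng.standard_normal(40)

A, B = slice(0, 30), slice(30, 40)                   # W = A ∪ B, A ∩ B = ∅

# Problem (2): least-squares estimate on the training set A.
theta_A, *_ = np.linalg.lstsq(Z[A], y[A], rcond=None)

rss_A = float(np.sum((y[A] - Z[A] @ theta_A) ** 2))  # QR criterion: RSS_A
ar    = float(np.sum((y[B] - Z[B] @ theta_A) ** 2))  # CR criterion: AR = Δ_{B/A}

print(rss_A, ar)   # both small, since the true structure was fitted
```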
An example of such a successive search algorithm is the algorithm MULTI [4], based on the principle of non-final decisions. A number of iterative GMDH algorithms are well known; the fastest among them so far is GRIA (Generalized Relaxational Iterative Algorithm) [10]. In GRIA, the computational complexity of building a model from iteration to iteration grows linearly due to recurrent procedures for calculating the model parameters and the QR and CR criteria [11].

This paper focuses on the development of parallelization methods for COMBI and GRIA and on the evaluation of their performance.

Combinatorial COMBI GMDH Algorithm with Recurrent Computations

The combinatorial algorithm is designed to search for the best regression containing the most informative (effective) subset of input variables (regressors). It contains the following main blocks, corresponding to the main stages of the modelling process [4]:

1. Data transformation according to the selected class of model structures linear in parameters.
2. Formation of models of different complexity: generation of all possible structures and parameter estimation by the least-squares method (LSM).
3. Calculation of the values of external quality criteria and selection of the best models.
4. Estimation of the predictive quality of the best models on a third, examination (validation) sample, if it is given.

Since the same classes of structures and criteria can be used in any structural identification algorithm, the algorithms differ in the type of their structure generator. We therefore consider the principles of organizing calculations in the combinatorial generator, where the main computing costs are concentrated. The basic operations performed in the model generation block are: generating the next model structure (system of conditional equations); forming the corresponding normal system of equations; solving the obtained system (estimating the model coefficients).
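The recurrent (bordering) parameter estimation referred to above can be illustrated with a small sketch. This is the standard block-inverse update, an assumed form rather than the authors' exact procedure: when one more regressor is appended to the normal system, the inverse of the bordered Gram matrix is obtained from the previous inverse in O(s²) operations instead of the O(s³) of a full re-inversion.

```python
import numpy as np

def border_inverse(A_inv, b, c):
    """Inverse of the bordered matrix [[A, b], [b^T, c]], given A_inv = A^{-1}."""
    u = A_inv @ b
    k = c - b @ u                          # scalar Schur complement
    top_left = A_inv + np.outer(u, u) / k
    return np.block([
        [top_left,            (-u / k)[:, None]],
        [(-u / k)[None, :],   np.array([[1.0 / k]])],
    ])

rng = np.random.default_rng(1)
Z = rng.uniform(size=(20, 4))
G = Z.T @ Z                                # Gram matrix of the full normal system

A_inv = np.linalg.inv(G[:3, :3])           # inverse for the 3-regressor model
grown = border_inverse(A_inv, G[:3, 3], G[3, 3])   # add the 4th regressor

assert np.allclose(grown, np.linalg.inv(G))        # matches direct inversion
```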
In the case of a linear object with m inputs, the models of the form

$$\hat{y}_v = X_v \hat{\theta}_v, \quad v = 1, \dots, 2^m - 1, \qquad (6)$$

are compared in the process of the exhaustive search. The decimal number $v$ corresponds to the binary number $d_v$ in (6): the unit elements of $d_v$ indicate inclusion of the regressors with the corresponding numbers in the model, whereas zero elements signify their exclusion.

The parameter estimation problem is the most time-consuming part of the combinatorial algorithm; in essence, the problem is the computational speedup of solving systems of linear equations. The bordering method is the traditional and effective recurrent method here: the parameter estimates are improved step by step by recurrently recalculating the elements of the inverse matrix within the estimation procedure. Efficient recurrent modifications of the classic Gauss and Gram-Schmidt algorithms were offered in [20]. The computational complexity of parameter estimation is proportional to the square of the model complexity for any recurrent algorithm and to its cube for a non-recurrent one.

Fig. 1. GMDH algorithms classification

Generalized Relaxational Iterative GMDH Algorithm with Recurrent Computations

The model generation process in GRIA is shown in Fig. 2 [10, 11]. The process consists of three stages:

1. Transformation of the initial matrix X; the model class is generated here.
2. Preparation stage; the matrix of normal equations is generated here.
3. Iterations stage; the models are generated here.

In this algorithm, F selected models are passed from iteration to iteration. The algorithm includes two ways to find the solution of the problem (5):
- an exhaustive search in the given limited set of models at any iteration.
It is performed on the basis of a model structure generator with exhaustive search;
- a successive model search, performed on the basis of a model structure generator with successive search; this generator reasonably truncates the exhaustive search in the case of nonlinear model generation.

Fig. 2. Model generation process in GRIA

Vector Autoregressive Models for Prediction of Multidimensional Interrelated Processes

A vector autoregressive (VAR) model generalizes the autoregression model to the multidimensional case [21]. It is built from stationary time series: it is a system of equations in which every variable (a component of the multidimensional time series) is a linear combination of all the variables at the previous time points (lags), and the order of the model is determined by the order of the lags. In the general case, for m time series and k lags, the model is a system of m equations whose matrix form is:

$$X(t) = \sum_{j=1}^{k} \Theta_j X(t-j), \qquad (7)$$

where $\Theta_j$, $j = \overline{1,k}$, are the $m \times m$ matrices of the parameters of the model (7).

The COMBI GMDH algorithm may be used for VAR modelling by exhaustively searching all possible variants and finding, for every time series, the best model containing the most informative subset of input arguments. The general model structure, in the form of a system of m difference equations, is determined by the following sequence of operations:

1. A data array of m · k arguments is composed from the number of interrelated processes m and the lags k.
2. The maximal complexity for the restricted search is defined [22]. The COMBI algorithm with sequentially complicated model structures on the basis of recurrent-and-parallel computing is used for modelling. For every time series, the F best models (by the value of the regularity criterion [23]) are selected; overall, m · F models are passed to the next step.
3. The enumeration of G = F^m variants of model systems is carried out, and the best system model is selected by the value of the systemic integral criterion of vector model quality. The value of the criterion is calculated on a given part of the initial dataset in the prediction mode of the process for the given number of steps $n_S$:

$$B = \sum_{i=1}^{n_S} \sum_{j=1}^{m} \bigl(x_{ij} - x_{ij}^*\bigr)^2, \qquad (8)$$

where $x_{ij}^*$ is the result of the step-by-step integration of the system of m equations.

The exponential dependence of the number of system models G on the number of interrelated processes m results in a huge number of variants to enumerate and requires improved modelling means. Therefore, parallelization of computations and recurrent algorithms of parameter estimation are reasonable here.

Theoretical Grounds for Parallelizing Computations in Algorithms with Recurrent Parameter Estimation

The problem of parallelizing computations

The parallelization of GMDH algorithms is not a new problem; many foreign and domestic scientists have been solving it. Combinatorial algorithms were parallelized first [12, 13], as they are the most time-consuming and suffer from the "curse of dimensionality". Single-processor computational systems allow solving a combinatorial search problem of 20 variables in about 10 s while generating 2^20 = 1 048 576 models. Parallelization increases the number of generated models proportionally to the number of processor cores, i.e., 64 cores can build 2^26 = 67 108 864 models in the same time.

The major problem in the parallelization of combinatorial algorithms is approaching a uniform load of all available cores, as the time to generate a model substantially depends on the number of variables (terms) included in the model. Paper [12] suggests using inverse structural vectors: a core computes a model whose structural vector is, for example, (1101110) and, in addition, computes its inverse version (0010001).
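The load-balancing effect of the inverse-vector trick from [12] is easy to see in a sketch (illustrative, not the paper's code): pairing every structural vector with its bitwise inverse gives each core pairs whose combined complexity is always s + (l − s) = l, so the total work per pair is constant.

```python
from itertools import product

l = 4
seen = set()
pairs = []
for d in product([0, 1], repeat=l):
    inv = tuple(1 - bit for bit in d)          # bitwise inverse structural vector
    if d not in seen:
        seen.update({d, inv})
        pairs.append((d, inv))

# Every pair covers s + (l - s) = l unit elements in total, i.e. equal cost.
assert all(sum(d) + sum(inv) == l for d, inv in pairs)
print(len(pairs))   # 2^l / 2 = 8 pairs
```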
This technique allows a core load of 96% regardless of the number of cores. Paper [13] offers a structural vector generation method with consecutive complication, which brings the core load to an almost maximal level of 99.8%.

Methods and principles of parallelization of GMDH algorithms were considered in [14, 15]. Paper [14] proposes a parallelized version of GAME, a GMDH-type neural network with different types of neurons and interlayer links, based on the symmetric multiprocessor (SMP) architecture, which allows several cores to access a single shared memory. Since each neuron is calculated independently and the time to create a thread is considerably less than the time to calculate a neuron, the algorithm computes each neuron in a separate thread. However, the performance of such parallelization turned out to be fairly low: a 1.7x speedup for 2 cores, and only a 3.5x speedup over the sequential version in the case of 8 cores.

Paper [15] considers three different types of speedup: 1) parallelization of tasks using threads; 2) vector parallel processing (vectorization) using an extended set of processor commands (SSE, Streaming SIMD Extensions); 3) using a 64-bit operating system. The author suggests performing low-level computations (simple loops) using vectorization, and more complex procedures (initialization functions, data manipulations, reading/writing data) in parallel in separate threads. The effectiveness of the proposed principles turned out to be fairly high, with an almost linear speedup function: 2 cores give a 2x speedup, 8 cores a 7.4x speedup.

Libraries for parallelization

The most well-known libraries are Message Passing Interface (MPI), Open Multi-Processing (OpenMP) and Threading Building Blocks (TBB).
MPI is an API developed for message transfer; it allows exchanging messages between processes performing one task. The interface was primarily designed for parallelizing tasks between CPUs rather than threads, so creating, deleting and synchronizing threads and transferring messages between them fall on the programmer. For this reason, the interface is quite low-level and complex and is not easy to master in a short time.

OpenMP is an open standard for program parallelization describing a set of compiler directives, library procedures and environment variables intended for creating multithreaded applications on shared-memory multiprocessor systems. The standard is implemented in the majority of well-known compilers (GCC, Microsoft Visual Studio, Intel, IBM, Oracle) and allows writing parallel applications easily, parallelizing local code segments with minimal changes to the original code.

TBB is a C++ template library for parallelism that undertakes thread management and (like OpenMP) allows directly specifying the section of code to be parallelized. When applying the library, a programmer thinks in terms of tasks, not threads. The advantages of TBB over OpenMP are:
- support for any data type;
- ready-to-use class templates allowing memory access from several threads simultaneously;
- automatic detection and creation of the optimal number of worker threads at runtime.

Taking these advantages into account, the TBB library was chosen for parallelizing GRIA.

Parallelization of recurrent computations in iterative GMDH algorithms

Paper [16] offers a parallelized version of GRIA with recurrent computations based on the SMP architecture [17] considered in [14]. The model generation process in GRIA consists of three stages: 1) transformation of the initial matrix; 2) calculation of the normal equations matrix; 3) the iterations stage, where the models are built.
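The task-based style described for TBB can be sketched in Python rather than C++ (the paper's implementation uses TBB; everything below is an illustrative analogue): each candidate structure is an independent task with no shared mutable state, so a pool can evaluate the tasks concurrently and the best model is selected afterwards.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def evaluate(Z, y, idx):
    """One independent task: fit the model on columns idx by LSM, return its RSS."""
    theta, *_ = np.linalg.lstsq(Z[:, idx], y, rcond=None)
    return idx, float(np.sum((y - Z[:, idx] @ theta) ** 2))

rng = np.random.default_rng(2)
Z = rng.uniform(size=(50, 4))
y = Z @ np.array([1.0, 0.0, 2.0, 0.0])     # true structure: columns 0 and 2

candidates = [[0], [0, 2], [1, 3], [0, 1, 2, 3]]
with ThreadPoolExecutor() as pool:         # tasks, not hand-managed threads
    results = list(pool.map(lambda idx: evaluate(Z, y, idx), candidates))

best_idx, best_rss = min(results, key=lambda r: r[1])
print(best_idx, best_rss < 1e-8)
```

A structure containing the true columns fits the noiseless data exactly, so its residual is numerically zero.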
When building models, it is necessary to calculate standard error measures on different parts of the dataset. If the number of models and observations is big enough, this post-processing stage can be time-consuming. Since the first stage does not take long, the last two stages of the algorithm and the post-processing stage, where the error measures are calculated, should be parallelized.

The preparation stage calculates the matrices $X_A^T X_A$, $X_B^T X_B$, $X_A^T y_A$, $X_B^T y_B$ by implementing a double loop. To approach a uniform load of the cores, it is proposed in [16] to create tasks in the following way: each task executes the body of the internal loop for two values of the variable i of the external loop that are equidistant from n/2 (for example, i = 0 and i = n − 1; i = 1 and i = n − 2; and so on), where n is the initial dataset length. Then, with k cores and n even, there are n/2 such tasks: if n/(2k) is an integer, each core executes n/(2k) tasks; otherwise the first (n/2 mod k) cores execute ⌊n/(2k)⌋ + 1 tasks and the remaining cores execute ⌊n/(2k)⌋ tasks, where ⌊·⌋ denotes the integer part of the division and mod denotes the remainder.

The model generation process in GRIA is iterative [10]; at any iteration, each model can be calculated independently of the others. The algorithm passes a given number F of best models from iteration to iteration, and m new models are calculated at the current iteration from every model passed from the previous one. Thus, the total number of tasks (models to be built) at every iteration of the algorithm is Fm. Articles [18, 26] suggest the following scheme for task parallelization (k is the number of cores):

1. Fm < k: Fm cores perform one task each, the remaining (k − Fm) cores are free.
2. Fm = k: all the cores perform one task each.
3. Fm > k: the first (Fm mod k) cores perform ⌊Fm/k⌋ + 1 tasks each, and the rest perform ⌊Fm/k⌋ tasks each.

It is also proposed to keep the number of threads equal to the number of cores in order to avoid delays caused by switching between threads.

Parallelization of recurrent computations in the COMBI GMDH algorithm

The scheme of combinatorial algorithm parallelization with the standard binary generator of structural vectors and recurrent parameter estimation, using the modified Gauss algorithm for solving linear equation systems, was developed in [24]. In this scheme, the change of states of the binary structural vector with elements 0 or 1 is organized on the basis of a binary counter.

Table 1 shows the approximate modelling time using this scheme. Already for more than 50 arguments, an exhaustive search (in acceptable modelling time) becomes impossible even for a cluster system containing one hundred processors.

The scheme with a sequential binary counter uses a sequence of binary number generation in which all combinations with one unit in the structural vector appear first, then those with two units, and so on up to the complete model comprising all arguments. This scheme partially overcomes the limits of exhaustive search when the number of arguments exceeds the capability of the algorithm with the standard binary generator: in this case, it is advisable to execute the exhaustive search not among all possible models but only among models of restricted complexity.

Table 2 shows the approximate modelling time of COMBI with successive complication of model structures of complexity at most 15 out of 50 arguments in total, when all models with 50-element binary structural vectors containing from 1 to 15 units are built [22].
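The task-distribution rule quoted above for the case Fm > k can be checked directly (a small sketch with illustrative numbers): the first (Fm mod k) cores receive ⌊Fm/k⌋ + 1 tasks, the rest ⌊Fm/k⌋, so the loads differ by at most one task.

```python
def tasks_per_core(total, k):
    """Split `total` tasks over k cores: the first (total mod k) cores get one extra."""
    q, r = divmod(total, k)
    return [q + 1] * r + [q] * (k - r)

loads = tasks_per_core(total=10 * 7, k=8)   # e.g. F = 10 models, m = 7 inputs, 8 cores
print(loads)          # [9, 9, 9, 9, 9, 9, 8, 8]
assert sum(loads) == 70 and max(loads) - min(loads) <= 1
```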
Paralleling the recurrent computations in VAR modeling To build models of vector autoregression, the combinatorial COMBI GMDH algorithm (with sequential complication of model structures), re- current parameters estimation (with the use of the modified Gauss algorithm) and computations para- lleling is used . Linear models are compared for each of the m interrelated processes with g = m ⋅ k argu- ments (inputs) . The scheme of algorithm with sequential com- plication of structures for building VAR models is chosen because it allows partially solving the prob- lem of exhaustive search in the case when it be- comes impossible even with parallelization . In this case, it is advisable to perform exhaustive search not among all possible models, but only models of limi- ted complexity . Another reason for applying the restriction on complexity may be the insufficient number of points in the training part of the sample n A when the search is performed among the complexity models no more than n A . Since the modeling time is actu- ally determined by the time of the search of variants for the systems of models, the efficiency of paral- lelization of this stage of the algorithm will be even more important than the efficiency of paralleling the stage of constructing the F best models for each of the m outputs (time series) . Arguments Models Time 1 proc . 100 proc . 20 1 048 575 1 s 0,01 s 21 2 097 151 2 s 0,02 s . . . 40 1,1E+12 ~ 12 days ~ 3 hours . . . 50 1,1E+15 ~ 34 days ~ 124 hours Table1. Approximate time of the exhaustive search Arguments Complexity Models Time, hours 1 proc . 100 proc . 50 15 3,7E+12 984 ~ 10 100 9 2,1E+12 558 ~ 6 Table 2. 
ISSN 2706-8145, Control Systems and Computers, 2019, № 3

Investigation of the parallelization effectiveness

The effectiveness of the constructed modeling tools was evaluated according to the following criteria:
- the acceleration of modeling time;
- the percentage load of the processor cores.

Performance evaluation of the parallelized GRIA algorithm

The performance of the parallelized GRIA was evaluated on two different processors [16]:
- Intel Core i3 M 350, 2.27 GHz (2 physical and 2 logical cores);
- Intel Core i7 4700HQ, 2.4 GHz (4 physical and 4 logical cores).

The initial matrix held 4000 observations of 10 true variables and 990 false ones. The dataset was generated with the Mersenne twister pseudo-random generator [19]; the matrix values were drawn from the uniform distribution on the interval [0; 1]. The initial dataset was divided into an examination set (500 records), a learning set (3000 records), and a validation set (500 records). The algorithm parameters were set as follows: freedom of choice F = 400, number of iterations R = 50.

The algorithm was expected to discover the following model:

y = 6.29447 + 9.37736x1 + 8.26752x2 − 8.04919x3 + 8.11584x4 − 7.46026x5 − 7.29046x6 + 6.70017x7 − 5.57932x8 − 3.83666x9 + 2.64719x10.

The algorithm actually built the model:

ŷ = 6.29447 + 9.37736x1 + 8.26752x2 − 8.04919x3 + 8.11584x4 − 7.46026x5 − 7.29046x6 + 6.70017x7 − 5.57932x8 − 3.83666x9 + 2.64719x10 + 3.5·10⁻⁸·x145 + 1.4·10⁻⁹·x258.

Figures 3 and 4 show the scalability of the three above-mentioned parallelized stages of the algorithm. In the legends of the figures, "Errors" denotes the post-processing stage (calculation of the error measures), "Normal matrix" the preparation stage, and "Iterations" the iterations stage.
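The test data set described above can be reproduced along the following lines. This is a sketch: the seed value and the NumPy-based implementation are our assumptions, while the matrix dimensions, the split sizes, and the true coefficients are taken from the experiment description and the target model.

```python
import numpy as np

# Mersenne twister generator, as used in the experiment (seed is ours)
rng = np.random.Generator(np.random.MT19937(seed=0))

n_obs, n_true, n_false = 4000, 10, 990
# Uniformly distributed values on [0, 1)
X = rng.uniform(0.0, 1.0, size=(n_obs, n_true + n_false))

# Coefficients of the target model; only the 10 true variables enter y
coeffs = np.array([9.37736, 8.26752, -8.04919, 8.11584, -7.46026,
                   -7.29046, 6.70017, -5.57932, -3.83666, 2.64719])
y = 6.29447 + X[:, :n_true] @ coeffs

# Examination (500), learning (3000), validation (500) subsets;
# the ordering of the split is our assumption
exam, learn, valid = X[:500], X[500:3500], X[3500:]
```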
As the figures show, the serial version and the parallel version running on one core take almost equal time, so the parallelization itself introduces little overhead. The use of physical cores speeds up every stage by 2x on 2 cores of the Core i3 and by 1.86x on 2 cores of the Core i7.

Fig. 3. Scalability of the GRIA algorithm stages on the Intel Core i3
Fig. 4. Scalability of the GRIA algorithm stages on the Intel Core i7

The scalability of the stages differs when 4 cores of the Core i7 are used: the iterations stage is sped up by 2.9x, the normal matrix calculation stage by 3.15x, and the errors calculation stage by 3.55x.

Scalability is low when logical cores are involved in the computations: using 4 cores of the Core i3 gives only a 2.3x speedup, while using 8 cores of the Core i7 gives 3.8x at the iterations stage, 4.39x at the normal matrix calculation stage, and 4.26x at the error measures calculation stage.

Consider now the scalability of the parallelized GRIA as a whole (see Fig. 5). The performance of the developed application is presented in Table 3. As the table shows, the parallelized version achieved a 2.28x speedup on the Core i3 processor and a 4x speedup on the Core i7, with an almost uniform load of the processor cores, as can be seen from the last two columns of the table.

Performance evaluation of the parallelized COMBI algorithm

The experiment was carried out on the SCIT complex of the IC of the National Academy of Sciences of Ukraine [25] using Intel Xeon E5-2600 2.6 GHz CPUs. The test task was formed as follows: a matrix X of size 70 × 50 (70 records of 50 arguments) was generated for the system of conditional equations.
The vector y was formed as a linear combination of only five arguments:

y = x10 + x20 + x30 + x40 + x50.   (9)

An exhaustive search with a standard binary generator based on recurrent computations would last about 34 years for 50 arguments. However, this task can be solved by the algorithm with sequential complication of structures restricted to models of complexity 7. In this case, 6 nodes with 24 cores of the SCIT-4 computing cluster were used for modeling.

Model (9) was recovered by this computing system in less than 2 seconds, during which 118 145 035 models were built (structures were generated, parameters were estimated, and the quality criteria were calculated), among which the best one according to the regularity criterion was chosen.

Fig. 5. Performance of the parallelized GRIA

Table 3. Performance of the developed application

Processor   Cores   Speedup   Physical cores load, %   Logical cores load, %
Core i3     1       1x        100                      not used
Core i3     2       2x        90                       not used
Core i3     4       2.28x     90                       99
Core i7     1       1x        100                      not used
Core i7     2       1.86x     100                      not used
Core i7     4       3x        99                       not used
Core i7     8       4x        99                       99

Fig. 6. Comparative effectiveness
Fig. 7. Run-time estimation (1 processor; time in hours for 30–40 arguments)
Fig. 8. Run-time estimation (100 processors; time in hours for 30–40 arguments)
Fig. 9. Flow chart of the intelligent information technology

The purpose of the next test experiment was to compute the comparative effectiveness E_rec of the parallel algorithm with recurrent parameter estimation versus the parallel algorithm with non-recurrent computations. The effectiveness was determined as the ratio of the execution times of the corresponding programs (for 20 and 25 arguments).
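The two evaluation criteria, and the comparative effectiveness E_rec used in this experiment, are simple time ratios. For clarity, a minimal sketch (function names are ours):

```python
def speedup(t_serial: float, t_parallel: float) -> float:
    """Acceleration of modeling time: serial time over parallel time."""
    return t_serial / t_parallel


def efficiency(t_serial: float, t_parallel: float, cores: int) -> float:
    """Speedup per core; 1.0 corresponds to perfectly uniform core load."""
    return speedup(t_serial, t_parallel) / cores


def e_rec(t_nonrecurrent: float, t_recurrent: float) -> float:
    """Comparative effectiveness of recurrent vs non-recurrent estimation."""
    return t_nonrecurrent / t_recurrent
```

For instance, the 4x speedup reported on 8 cores of the Core i7 corresponds to a parallel efficiency of 0.5, reflecting the low contribution of the logical cores.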
The result of the experiment, presented in Fig. 6, shows that the value of E_rec grows with the number of arguments. Figures 7 and 8 give run-time estimates of the combinatorial algorithm with recurrent computations on a single processor and on 100 processors. As the number of arguments increases, the efficiency of recurrent-and-parallel computations grows, and for 40 arguments the gain reaches a factor of one hundred.

Construction of intelligent information technology for inductive modeling of complex processes on the basis of recurrent-and-parallel computations

Taking into account the above-mentioned features of GMDH algorithms as well as the schemes of parallel computations, it is possible to propose an intelligent information technology of inductive modeling on the basis of recurrent-and-parallel computations. In automatic mode, the technology takes into account the number of arguments, the number of available computational resources, and the running-time restriction. Figure 9 presents a block diagram of the information technology suggested in [27].

Conclusion

The principle of high-performance parallel computations for inductive modeling problems on the basis of searching and iterative GMDH algorithms with recurrent parameter estimation has been developed.

A project of the intelligent information technology for inductive modeling of complex processes on the basis of recurrent-and-parallel computations has been presented.

REFERENCES

1. Beer, S., 1964. Cybernetics and Management. John Wiley & Sons, Inc., 214 p.
2. Gabor, D., 1971. "Cybernetics and the future of industrial civilization". J. Cybern., 2 (1), pp. 1–4.
3. Nagel, E., Newman, J.R., 1989. Gödel's Proof. Routledge, 118 p.
4. Ivakhnenko, A.G., Stepashko, V.S., 1985. Noise-Immunity of Modeling. Kiev: Naukova dumka, 216 p. (In Russian).
5. Stepashko, V.S., 1979. "Optimization and Generalization of Model Sorting Schemes in Algorithms for the Group Method of Data Handling". Soviet Automatic Control, 12(4), pp. 28–33.
6. Stepashko, V.S., 1981. "A Combinatorial Algorithm of the Group Method of Data Handling with Optimal Model Scanning Scheme". Soviet Automatic Control, 14(3), pp. 24–28.
7. Stepashko, V.S., 1983. "Potential noise stability of modelling using the combinatorial GMDH algorithm without information regarding the noise". Soviet Automatic Control, 16(3), pp. 15–25.
8. Stepashko, V.S., 1983. "A Finite Selection Procedure for Pruning an Exhaustive Search of Models". Soviet Automatic Control, 16(4), pp. 88–93.
9. Faddeev, D.K., Faddeeva, V.N., 1963. Computational Methods of Linear Algebra, 2nd ed. Moscow: Nauka, 656 p. (In Russian).
10. Pavlov, A.V., 2011. "Generalized relaxation iterative algorithm of GMDH". Inductive Modeling of Complex Systems: Coll. of science works, Issue 2. Kyiv: IRTC, pp. 95–108. (In Russian).
11. Pavlov, A.V., 2013. "Generalized relaxational iterative GMDH algorithm and its analysis". Proc. of the Int. Conf. on Inductive Modelling ICIM-2013, pp. 89–96.
12. Koshulko, O.A., Koshulko, A.I., 2007. "Adaptive parallel implementation of the combinatorial GMDH algorithm". Proc. of the International Workshop on Inductive Modelling IWIM-2007. Prague: CTU, pp. 71–77.
13. Stepashko, V., Yefimenko, S., 2008. "Optimal Paralleling for Solving Combinatorial Modelling Problems". Proc. of the 2nd International Conference on Inductive Modelling ICIM-2008. Kyiv, pp. 172–175.
14. Kordik, P., Spirk, J., Simecek, I., 2008. "Parallel computing of GAME models". Proc. of the 2nd Int. Conf. on Inductive Modelling ICIM-2008. Kyiv, pp. 160–163.
15. Lemke, F., 2008. "Parallel Self-Organizing Modeling". Proc. of the 2nd Int. Conf. on Inductive Modelling ICIM-2008, 15–19 Sept. 2008, Kyiv, Ukraine. Kyiv: IRTC ITS NASU, pp. 176–183.
16. Pavlov, A., 2014. "Parallel Relaxational Iterative Algorithm of GMDH". Inductive Modelling of Complex Systems: Coll. of science works, Issue 6. Kyiv: IRTC, pp. 33–40. (In Russian).
17. Message Passing Interface, [online]. Available at: <http://en.wikipedia.org/wiki/Message_Passing_Interface> [Accessed 27 Dec. 2018].
18. Pavlov, A.V., 2013. "Principles of parallel computations of relaxational iterative GMDH algorithm". Inductive Modeling of Complex Systems: Coll. of science works, Issue 5. Kyiv: IRTC, pp. 220–225. (In Russian).
19. Mersenne twister, [online]. Available at: <http://en.wikipedia.org/wiki/Mersenne_twister> [Accessed 20 Dec. 2018].
20. Stepashko, V.S., Efimenko, S.M., 2005. "Sequential Estimation of the Parameters of Regression Models". Cybernetics and Systems Analysis, 41(4), pp. 631–63.
21. Lutkepohl, H., 1993. Introduction to Multiple Time Series Analysis. Springer-Verlag Berlin Heidelberg, 545 p.
22. Yefimenko, S., Stepashko, V., 2015. "Intelligent Recurrent-and-Parallel Computing for Solving Inductive Modeling Problems". Proc. of the 16th Int. Conf. on Computational Problems of Electrical Engineering (CPEE), Lviv, pp. 236–238.
23. Madala, H.R., Ivakhnenko, A.G., 1994. Inductive Learning Algorithms for Complex Systems Modeling. London, Tokyo: CRC Press Inc., 384 p.
24. Yefimenko, S., 2013. "Comparative Effectiveness of Parallel and Recurrent Calculations in Combinatorial Algorithms of Inductive Modelling". Proc. of the 4th International Conference on Inductive Modelling ICIM-2013, Kyiv, pp. 231–234.
25. Supercomputer of IC, [online]. Available at: <http://icybcluster.org.ua/index.php?lang_id=3&menu_id=1> [Accessed 07 Dec. 2018].
26. Pavlov, A.V., Stepashko, V.S., 2011. "Recurrent algorithms for calculating coefficients and selection criteria in the GMDH relaxational algorithm". Cybernetics and Computing Engineering, 165, pp. 72–82. (In Russian).
27. Yefimenko, S., 2018. "Construction of Intelligent Information Technology for Inductive Modeling of Complex Processes on the Basis of Recurrent-and-Parallel Computations". Proc. of the XIII IEEE Int. Conf. CSIT-2018 & International Workshop on Inductive Modeling, Sept. 11–14, 2018, Lviv, Ukraine. Lviv: Vezha&Co, pp. 440–443.

Received 29.03.2019

V.S. Stepashko, Dr. Sc. (Eng.), Professor, Senior Researcher, International Research and Training Center for Information Technologies and Systems of the NAS and MES of Ukraine, 40 Akademika Glushkova ave., Kyiv, 03187, Ukraine, stepashko@irtc.org.ua
S.M. Yefimenko, PhD (Eng.), Senior Researcher, International Research and Training Center for Information Technologies and Systems of the NAS and MES of Ukraine, 40 Akademika Glushkova ave., Kyiv, 03187, Ukraine, syefim@ukr.net
A.V. Pavlov, PhD (Eng.), Researcher, International Research and Training Center for Information Technologies and Systems of the NAS and MES of Ukraine, 40 Akademika Glushkova ave., Kyiv, 03187, Ukraine, andriypavlove@gmail.com

RECURRENT-AND-PARALLEL GMDH ALGORITHMS FOR HIGH-PERFORMANCE COMPUTING

Introduction. Inductive modeling is the process of building mathematical models of objects, processes, and systems from statistical data. The Group Method of Data Handling (GMDH) is one of the most effective methods of computational intelligence. Model building on the basis of GMDH rests on the principles of successive complication of model structures, "external supplement", and non-final decisions. According to the way model structures are generated, the whole variety of GMDH algorithms can be divided into searching and iterative algorithms.
The purpose of this paper is to develop methods for parallelizing computations in the searching algorithm COMBI and the generalized relaxational iterative algorithm GRIA, and to determine the computational efficiency of the parallelization.

Results. The paper describes the developed principles of parallelizing operations in the combinatorial GMDH algorithm COMBI with recurrent estimation of model parameters. The parallelization uses computation schemes with the standard binary-number generator and with sequential complication of model structures, according to which each processor autonomously computes its initial binary structural vector and the number of models it will build. Non-repetition of structures across processors is also guaranteed. This considerably increases the efficiency of parallelization, since no time is lost on interprocessor communication.

The parallelization scheme with sequential complication of model structures makes it possible to partially solve the exhaustive-search problem in the case when the number of arguments exceeds the capability of the algorithm with the standard binary generator, so that the exhaustive search should be performed not among all possible models but only among models of limited complexity.

The principle of parallelizing computations in the combinatorial GMDH algorithm COMBI with recurrent estimation of model parameters is described for building discrete predictive models of the dynamics of complex multidimensional interrelated processes. The principles and schemes of parallelizing computations in the generalized relaxational iterative GMDH algorithm GRIA are described, which increase the speed of the algorithm proportionally to the number of computing units at their maximal (almost uniform) load.
The efficiency of different parallelization schemes in the COMBI and GRIA algorithms has been investigated through computational experiments on a personal computer and on a cluster multiprocessor complex.

As the experiments on testing the parallelization efficiency of GRIA show, the speed of the developed software grows linearly with the addition of each new computing element (a processor or a processor core). The developed schemes substantially increase the efficiency of GMDH algorithms through recurrent-and-parallel computations.

Taking into account the features of GMDH algorithms and the schemes of parallel computations, the conception of an intelligent information technology of inductive modeling on the basis of recurrent-and-parallel computations has been developed. When building models in automatic mode, this technology takes into account the number of arguments, the number of available computational resources, and the user-defined restriction on modeling time.

Conclusions. A technology of high-performance parallel computations for inductive modeling problems on the basis of searching and iterative GMDH algorithms with recurrent estimation of model parameters has been developed. The conception of an intelligent information technology for inductive modeling of complex processes on the basis of recurrent-and-parallel computations has been proposed.

Keywords: inductive modeling, recurrent-and-parallel computations, GMDH, COMBI, GRIA, vector autoregression.