© Iu. Grypych, N. Kussul, O. Kussul, 2010
УДК 52(15).003
NEURAL AND STATISTICAL TECHNIQUES
FOR REMOTE SENSING IMAGE CLASSIFICATION
Iu. Grypych, N. Kussul, O. Kussul
Space Research Institute NASU-NSAU,
Glushkov Ave 40, Kyiv, 03650, Ukraine, tel. (+380-44)526 2553,
inform@ikd.kiev.ua
National Technical University "Kyiv Polytechnic Institute",
37 Prospect Peremogy, Kiev 03056, Ukraine, tel. (+380-67)500 6632,
olgakussul@gmail.com
This paper examines different approaches to remote sensing image classification. The study includes a statistical approach, in particular the Gaussian maximum likelihood classifier, and two different neural network paradigms: a multilayer perceptron trained with the EDBD algorithm and an ARTMAP neural network. These classification methods are compared on data acquired by the Landsat-7 satellite. Experimental results showed that modular neural networks and committee machines should be applied to achieve better classifier performance.
Introduction
Recent advances in technology have made it possible to develop new satellite sensors with considerably improved parameters and characteristics. For example, spectral resolution has increased to 144 channels, as in the Hyperion sensor, and radiometric resolution to 14 bits, as in the MODIS sensor. In turn, the use of such space-borne sensors enables the acquisition of valuable data that can be efficiently used to solve various applied problems in agriculture, natural resources monitoring, land use management, environmental monitoring, etc.
Land cover classification is one of the most important and typical applications of remote sensing data. Land cover corresponds to the physical condition of the ground surface, for example forest, grassland, artificial surfaces, etc. Various approaches have been proposed for this task, among which the most popular are neural network [1] and statistical [2] methods.
In this paper different approaches to remote sensing image classification are examined. The following approaches are included in the study: a statistical approach, namely the Gaussian maximum likelihood (ML) classifier [2], and two different types of neural networks: the feed-forward multilayer perceptron (MLP) and the ARTMAP neural network [3]. The MLP is trained by means of the Extended-Delta-Bar-Delta (EDBD) algorithm [4], a fast modification of the standard error backpropagation algorithm [5]. In turn, ARTMAP belongs to the family of adaptive resonance theory (ART) networks [6], which are characterized by their ability to carry out fast, stable, on-line learning, recognition, and prediction.
The comparative analysis of the classification methods is carried out on data acquired by the Enhanced Thematic Mapper Plus (ETM+) sensor of the Landsat-7 satellite [7] and on land cover data from the European Corine project [8].
Related Works
Various approaches have been proposed for land cover classification of remote sensing data. In the past, classification was traditionally performed by statistical methods (e.g., Bayesian and k-nearest-neighbor classifiers). In recent years the remote sensing community has become interested in applying neural networks to data classification. Neural networks provide an adaptive and robust approach to the analysis and generalization of data without requiring a priori knowledge of the statistical distribution of the data. This is particularly important for remote sensing image classification, since information is provided by multiple sensors or by the same sensor in many measuring contexts. This is the main problem associated with most statistical models, since it is difficult to define a single model for different types of space-borne sensors [9]. In this section we give a brief overview of approaches to remote sensing data classification.
In [10] remote sensing data were classified using an MLP. The main goal was to investigate the applicability of MLPs to the classification of terrain radar images. MLP performance was compared with that of a Bayesian classifier, and it was found that the MLP classifier yields significant improvements.
Benediktsson et al. [9] applied MLPs to the classification of multisource remote sensing data, in particular Landsat MSS and topographic data. Classification performance was compared with that of a statistical parametric method that takes into account the relative reliabilities of the data sources. They concluded that the relative performance of the two methods depends mainly on a priori knowledge about the statistical distribution of the data. MLPs are appropriate when such distributions are unknown, since they are distribution-free. The considerable training time required is one of the main drawbacks of MLPs compared with statistical parametric methods.
Bischof et al. [11] reported the application of a three-layer perceptron to the classification of Landsat TM data. They compared MLP performance with that of a Bayesian classifier; the obtained results showed that the MLP performs better than the Bayesian classifier.
Dawson and Fung [12] reviewed examples of the use of MLPs for the classification of remote sensing data. In their study they proposed an interesting combination of clustering algorithms and scattering models to train an MLP when no ground truth is available.
Roli et al. [13] proposed a type of structured neural network (tree-like networks) for multisource remote sensing data classification. This kind of architecture allows one to interpret the network operations; for example, the roles played by different sensors and by their channels can be explained and quantitatively assessed. The proposed method was compared with fully connected MLPs and probabilistic neural networks on images acquired by a synthetic aperture radar (SAR) sensor.
Carpenter et al. [14] described the ARTMAP information fusion system. It uses distributed code representations that exploit the neural network's capacity for one-to-many learning in order to produce self-organizing expert systems that discover hierarchical knowledge structures. The system infers multi-level relationships among groups of output classes without any supervised labeling of these relationships. The proposed approach was tested on two testbed images, although it is not limited to the image domain.
In [15] various algorithms were examined for estimating mixtures of vegetation types within forest stands based on Landsat TM data. The methods considered in that study were maximum likelihood classification, linear mixture models, and a methodology based on the ARTMAP neural network. The reported experiments showed that the ARTMAP mixture estimation method provides the best estimates of the fractions of vegetation types compared with the other approaches.
Hwang et al. [16] described a structured neural network for classifying Landsat-4 TM data. A one-network-one-class architecture is proposed to improve data separation, with each network implemented as a radial basis function (RBF) network. The proposed approach outperformed other methodologies such as the MLP and a Bayesian classifier.
Methodology
In this section we give a brief overview of methodologies that will be compared for remote sensing image
classification.
MLP trained with EDBD. The MLP is a kind of feed-forward neural network in which all connections are unidirectional. It consists of an input layer, an output layer, and at least one hidden layer of neurons. Unidirectional connections run from the input layer to the hidden layer and from the hidden layer to the output neurons; there are no connections between neurons within the same layer.
The error backpropagation algorithm [5] is a popular method for MLP training, i.e. for adjusting the network weights. However, despite its widespread use, it requires considerable training time. For this reason, in this study we use a fast modification of error backpropagation, the Extended-Delta-Bar-Delta (EDBD) rule [4]. This algorithm is based on the following heuristics:
— At each step of the training process the learning rate and momentum factor are estimated automatically for each network weight. At the first step, initial and maximum values for the learning rates and momentum are set; these settings remain constant during the whole training process.
— If the partial derivative of the error preserves its sign (positive or negative) over several training steps, the learning rate and momentum for the corresponding weight increase.
— If the partial derivative of the error changes its sign over several training steps, the learning rate and momentum for the corresponding weight decrease.
A more detailed description of the EDBD algorithm can be found in [1, 4]. In this study the MNN CAD software [17] was used for the EDBD simulations.
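As an illustration, the following minimal sketch expresses these heuristics as a per-weight update rule (our own simplification, not the MNN CAD implementation; the constants kappa, gamma, phi, theta and the function name edbd_step are illustrative):

```python
import numpy as np

def edbd_step(w, grad, lr, mom, prev_delta, avg_grad,
              kappa=0.01, gamma=0.1, phi=0.5, theta=0.7,
              lr_max=1.0, mom_max=0.9):
    """One EDBD-style update for an array of weights (illustrative sketch).

    lr, mom    -- per-weight learning rates and momentum factors
    prev_delta -- previous weight change
    avg_grad   -- exponentially averaged ("bar") gradient from earlier steps
    """
    same_sign = avg_grad * grad > 0          # has the error derivative kept its sign?
    # Increase learning rate/momentum where the sign is preserved,
    # decrease them multiplicatively where it has flipped.
    lr = np.where(same_sign,
                  np.minimum(lr + kappa * np.exp(-gamma * np.abs(avg_grad)), lr_max),
                  lr * (1.0 - phi))
    mom = np.where(same_sign, np.minimum(mom + kappa, mom_max), mom * (1.0 - phi))
    delta = -lr * grad + mom * prev_delta    # weight change with momentum
    avg_grad = theta * grad + (1.0 - theta) * avg_grad
    return w + delta, lr, mom, delta, avg_grad
```

The per-weight adaptation of the learning rate and momentum is what distinguishes EDBD from plain backpropagation with a single global learning rate.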
ARTMAP neural networks. ARTMAP belongs to the family of ART networks [6], which are characterized by their ability to carry out fast, stable, on-line learning, recognition, and prediction. These features distinguish ARTMAP from feed-forward MLPs, including backpropagation networks, which typically require slow learning. ARTMAP systems self-organize arbitrary mappings from input vectors, representing features such as spectral values of remote sensing images and terrain variables, to output vectors, representing predictions such as vegetation classes or environmental variables. Internal ARTMAP control mechanisms create stable recognition categories of optimal size by maximizing code compression while minimizing predictive error.
ARTMAP is already used in a variety of application settings, including industrial design and manufacturing, robot sensory-motor control and navigation, machine vision, and medical imaging, as well as remote sensing [14, 15]. A more detailed description of ARTMAP neural networks can be found in [3]. For the ARTMAP simulations we use the ClasserScript v1.1 software [18] from http://profusion.bu.edu/techlab/.
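For readers unfamiliar with ART dynamics, the sketch below illustrates the category choice and vigilance test at the core of fuzzy-ART-based systems such as ARTMAP (a generic simplification, not the ClasserScript implementation; the function name art_choose and the constant alpha are ours):

```python
import numpy as np

def art_choose(input_vec, weights, rho=0.95, alpha=0.001):
    """Pick the ART category for a complement-coded input, or signal a new one.

    weights : (n_categories, 2 * n_features) learned category templates
    rho     : vigilance parameter (higher -> finer categories)
    Returns the index of the chosen category, or None if no existing category
    passes the vigilance test and a new node should therefore be created.
    """
    I = np.concatenate([input_vec, 1.0 - input_vec])        # complement coding
    fuzzy_and = np.minimum(I, weights)                       # fuzzy AND with each template
    choice = fuzzy_and.sum(axis=1) / (alpha + weights.sum(axis=1))
    for j in np.argsort(-choice):                            # search in order of choice value
        match = fuzzy_and[j].sum() / I.sum()                 # match criterion
        if match >= rho:                                     # vigilance test passed: resonance
            return j
    return None                                              # resonance failed everywhere
```

The vigilance parameter rho controls category granularity, which is why it is the parameter varied in the ARTMAP experiments reported below.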
Gaussian Maximum Likelihood Classification. The ML classifier is one of the most popular classification methods in remote sensing: a pixel is assigned to the class with the maximum a posteriori probability. In the case of a multivariate Gaussian distribution the class-conditional density is defined as follows:

f(x \mid \mu_i, \Sigma_i) = (2\pi)^{-p/2}\,|\Sigma_i|^{-1/2} \exp\!\left(-\tfrac{1}{2}(x-\mu_i)^T \Sigma_i^{-1}(x-\mu_i)\right),   (1)

where \mu_i and \Sigma_i are the ith class mean vector and covariance matrix, respectively, L is the number of classes, and the input x \in R^p. Assuming equally likely classes, the ML classification rule is then given by:
\hat{x} \in \omega_{i^*}, \qquad i^* = \arg\max_{1 \le i \le L} d_i(x),   (2)

where d_i(x) is a discriminant function of the form

d_i(x) = \ln f(x \mid \mu_i, \Sigma_i) = -\tfrac{1}{2}(x-\mu_i)^T \Sigma_i^{-1}(x-\mu_i) - \tfrac{1}{2}\ln|\Sigma_i| + C.
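A minimal numpy sketch of decision rule (2), assuming the per-class mean vectors and covariance matrices are already available (the function name ml_classify is ours; this is not the software used in the experiments):

```python
import numpy as np

def ml_classify(x, means, covs):
    """Assign pixel feature vector x to the class with the largest Gaussian
    log-likelihood discriminant d_i(x), assuming equal prior probabilities.

    means : list of per-class mean vectors (length L)
    covs  : list of per-class covariance matrices (length L)
    """
    scores = []
    for mu, sigma in zip(means, covs):
        diff = x - mu
        # d_i(x) = -1/2 (x-mu)^T Sigma^{-1} (x-mu) - 1/2 ln|Sigma| + const
        d = -0.5 * diff @ np.linalg.solve(sigma, diff) \
            - 0.5 * np.linalg.slogdet(sigma)[1]
        scores.append(d)
    return int(np.argmax(scores))
```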
The ML method has an advantage from the viewpoint of probability theory, but care must be taken with respect to the following items:
— Sufficient ground truth data should be sampled to allow estimation of the mean vector and the variance-covariance matrix of the population.
— The inverse of the variance-covariance matrix becomes unstable when there is very high correlation between two bands or when the ground truth data are very homogeneous.
— When the distribution of the population does not follow a Gaussian distribution, the ML method cannot be applied.
Data Description
An image acquired by the ETM+ sensor of the Landsat-7 satellite was used for the comparative analysis of the methods described above (Fig. 1, a). The image parameters in the World Reference System (WRS) [19] are path 186, row 25; the acquisition date is 10.06.2000. Dimensions: 4336×2524 pixels (30 m resolution), i.e. approximately 130×76 km.
Fig. 1. (a) Image acquired by the ETM+ sensor of the Landsat-7 satellite (spatial resolution: 30 m); the area covers the south-eastern part of Poland bordering Ukraine. (b) Land cover data for the same area provided by the Corine project (spatial resolution: 100 m); the study area is dominated by forests, arable lands, and pastures.
The ETM+ sensor provides data in six visible and infrared spectral bands with 30 m spatial resolution (bands 1–5 and 7), in a thermal band with 60 m resolution (band 6), and in a panchromatic band with 15 m resolution (band 8). In this study the six spectral bands 1–5 and 7 are used as input to the classification methods.
In raw Landsat-7 images pixel values are digital numbers (DN) ranging from 1 to 255 (8 bits per pixel). Since these values are influenced by solar radiation [20], the DNs were converted to at-satellite reflectance according to [21]; after this conversion, pixel values lie in the range [0; 1].
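A sketch of this conversion, following the standard two-step procedure (DN to at-sensor radiance, radiance to top-of-atmosphere reflectance) described in the handbook [21]; the gain, bias, solar irradiance (esun), Earth–Sun distance and Sun elevation are scene- and band-specific values that must be taken from the image metadata, and the function name dn_to_reflectance is ours:

```python
import numpy as np

def dn_to_reflectance(dn, gain, bias, esun, d_au, sun_elev_deg):
    """Convert Landsat-7 ETM+ digital numbers to at-satellite reflectance.

    gain, bias, esun, d_au and sun_elev_deg come from the scene metadata.
    """
    radiance = gain * dn.astype(float) + bias                 # W / (m^2 sr um)
    theta = np.deg2rad(90.0 - sun_elev_deg)                   # solar zenith angle
    reflectance = np.pi * radiance * d_au ** 2 / (esun * np.cos(theta))
    return np.clip(reflectance, 0.0, 1.0)                     # keep values in [0; 1]
```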
Since in this study we examine supervised classification methods, so-called ground truth data (sample pixels) are needed in order to estimate the weights and parameters of the neural networks and statistical models. Unfortunately, we did not have the possibility of gathering corresponding independent field (in-situ) data, so we use the land cover data provided by the European Corine project, in particular the CLC 2000 version (Fig. 1, b).
Additionally, the following information was used to distinguish land cover classes on the Landsat-7 image (a computational sketch is given after this list):
— The Normalized Difference Vegetation Index (NDVI):
NDVI = (ETM4 − ETM3) / (ETM4 + ETM3),
where ETM3 and ETM4 are the at-satellite reflectance values for spectral bands 3 and 4, respectively;
— The Tasseled Cap transformation [20], which is based on the principal component analysis (PCA) algorithm [22] and yields decorrelated components. Moreover, in the tasseled cap transformation the first three components have the following physical meaning: brightness, greenness, and wetness.
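A sketch of these two auxiliary features (our own helper functions; the tasseled cap coefficient matrix must be taken from the published table in [20]):

```python
import numpy as np

def ndvi(etm3, etm4):
    """NDVI from reflectance bands 3 (red) and 4 (near infrared)."""
    return (etm4 - etm3) / (etm4 + etm3 + 1e-12)   # small epsilon avoids division by zero

def tasseled_cap(bands, coeffs):
    """Apply a tasseled cap transformation to a (pixels, 6) reflectance array.

    coeffs is a (3, 6) matrix of brightness/greenness/wetness coefficients
    taken from Huang et al. [20] for ETM+ at-satellite reflectance.
    """
    return bands @ np.asarray(coeffs).T            # -> (pixels, 3): brightness, greenness, wetness
```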
In this study eight target output classes were specified (Table 1).
Table 1. Class titles, Corine code levels, and number of sample pixels for each class*
# Class Title Corine Code Level Number of pixels
1 Broad-leaved forest 311 17890
2 Coniferous forest 312 20025
3 Mixed forest 313 10110
4 Non-irrigated arable land 211 25588
5 Pastures 231 9177
6 Inland waters 51x 7379
7 Artificial surfaces 1xx 12369
8 Open spaces with little or no vegetation 33x 2799
Total 105337
* The symbol x denotes lower-level classes that cannot be discriminated on Landsat-7 images. For example, it is hardly possible to distinguish water courses (e.g. rivers) from water bodies (e.g. lakes), or different types of artificial surfaces, since their spectral characteristics do not differ. Additional information would be required for this purpose.
Results of Experiments
Performance Measures and Training and Testing Protocols. For the comparative analysis of the neural networks and statistical models for Landsat-7 image classification we use the same performance measure and the same training and testing sets. The performance of the classification methods was evaluated in terms of classification rate; both the overall classification rate over all sample pixels and the classification rate for each class separately were estimated.
Training and testing were done using a five-fold cross-validation procedure [1, 23] as the statistical design tool for method assessment. According to this procedure, the available set of sample pixels is divided into five disjoint subsets, each containing 20% of the data. Models are trained on all subsets except one, and the classification rate is estimated by testing on the subset left out. All reported results are values averaged across the five training/testing runs. This procedure produces robust performance measures while ensuring that no test sample pixel was ever used in training.
From Table 1 it can be seen that the number of sample pixels varies considerably among the target classes; for example, there are 25588 sample pixels labeled "Non-irrigated arable land" and only 7379 labeled "Inland waters". In order to prevent an imbalance of exemplars for the neural network models, the existing sample pixels of each class were replicated until all classes reached the same size. This procedure allows one to "generate" training sets of equal size; a sketch of the protocol is given below.
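A sketch of this training/testing protocol, assuming integer class labels in a numpy array (the helper names five_fold_indices and balance_by_duplication are ours, not part of any package used in the experiments):

```python
import numpy as np

def five_fold_indices(labels, n_folds=5, seed=0):
    """Split sample-pixel indices into disjoint folds of roughly equal size."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(labels))
    return np.array_split(idx, n_folds)

def balance_by_duplication(train_idx, labels, seed=0):
    """Replicate sample pixels of each class so that every class reaches the
    size of the largest class in the training fold, as described above."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(labels[train_idx], return_counts=True)
    target = counts.max()
    balanced = []
    for c in classes:
        members = train_idx[labels[train_idx] == c]
        extra = rng.choice(members, target - len(members), replace=True)
        balanced.append(np.concatenate([members, extra]))
    return np.concatenate(balanced)
```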
Input and Output Representation. Six channels from the ETM+ sensor, namely bands 1–5 and 7, were selected to form the feature vector for each pixel. The components of this vector are at-satellite reflectance values lying in the range [0; 1].
Concerning output coding for the neural network models, both the MLP and ARTMAP have 8 output neurons corresponding to the 8 target classes. During training, the target output of the neuron corresponding to the pixel's class is set to 1; all other outputs are set to 0.
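This output coding corresponds to one-hot target vectors; a minimal sketch, assuming integer class labels 0–7:

```python
import numpy as np

def one_hot(labels, n_classes=8):
    """Encode class labels 0..n_classes-1 as 0/1 target vectors for the networks."""
    targets = np.zeros((len(labels), n_classes))
    targets[np.arange(len(labels)), labels] = 1.0
    return targets
```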
Classification with MLP. The five-fold cross-validation procedure was repeated for different MLP architectures with 5, 15, 20, 25, 35, and 45 hidden neurons; only one hidden layer was used in this study. The MLPs were trained with the EDBD algorithm, and training was stopped after 500 epochs. A "save best" mode was applied during the training process: training and testing are applied to the neural network in sequence, and after each test the current classification rate is compared with the previous results; the network is saved as the best one if the current result improves on the previous best.
In all simulations the initial values of the learning rate and momentum factor in the EDBD algorithm were set to 0.7 and 0.5, respectively.
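The "save best" mode can be sketched as the following training loop (train_one_epoch and classification_rate are hypothetical placeholders for the corresponding MNN CAD operations, which we do not reproduce here):

```python
import copy

def train_save_best(net, train_one_epoch, classification_rate,
                    train_set, test_set, n_epochs=500):
    """Keep the network snapshot with the highest test classification rate.

    train_one_epoch and classification_rate are user-supplied callables
    (hypothetical placeholders, not functions of a specific package).
    """
    best_rate, best_net = -1.0, None
    for epoch in range(n_epochs):
        train_one_epoch(net, train_set)               # one pass of EDBD training
        rate = classification_rate(net, test_set)     # evaluate after each epoch
        if rate > best_rate:                          # keep the best snapshot so far
            best_rate, best_net = rate, copy.deepcopy(net)
    return best_net, best_rate
```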
Table 2 shows the averaged classification rates on the testing sets for the different MLP architectures.
Table 2. Averaged cross-validation results for MLP trained with EDBD algorithm*
MLP Architecture (number of hidden neurons)
Class N 5 15 20 25 35 45
1 97.63 98.78 98.99 99.02 99.15 98.97
2 80.95 83.57 83.99 84.20 84.64 85.67
3 67.09 68.70 68.12 68.38 68.00 67.37
4 85.44 87.72 88.24 89.03 89.84 89.56
5 86.16 90.42 91.55 90.41 91.01 91.43
6 97.14 97.71 97.66 97.75 97.63 97.64
7 69.09 83.45 84.09 83.99 83.46 83.56
8 95.57 96.82 96.28 96.53 96.79 96.52
Total 84.88 88.40 88.62 88.68 88.81 88.85
* the best estimates are indicated in boldface type
The best value of classification rate was obtained for MLP with 45 hidden neurons.
Classification with ML. The mean vectors and covariance matrices were estimated for each class using each of the five training sets. For this purpose we use the following standard estimates:
\hat{\mu}_i = \frac{1}{M_i} \sum_{j=1}^{M_i} x_i^j, \qquad \hat{\Sigma}_i = \frac{1}{M_i - 1} \sum_{j=1}^{M_i} \left(x_i^j - \hat{\mu}_i\right)\left(x_i^j - \hat{\mu}_i\right)^T,

where x_i^j is the jth sample of the ith class and M_i is the number of sample pixels in the ith class.
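A minimal numpy sketch of these estimates, assuming a matrix X of training pixel feature vectors and a label vector y (the function name is ours, not the code used in the experiments):

```python
import numpy as np

def estimate_class_statistics(X, y):
    """Estimate per-class mean vectors and unbiased covariance matrices
    for the ML classifier from training pixels X of shape (n, p) and labels y."""
    means, covs = {}, {}
    for c in np.unique(y):
        Xc = X[y == c]
        means[c] = Xc.mean(axis=0)
        covs[c] = np.cov(Xc, rowvar=False)   # divides by M_i - 1, matching the formula above
    return means, covs
```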
The averaged classification rates on the testing sets for the Gaussian ML classifier are shown in Table 3.
Table 3. Averaged cross-validation results for the ML classifier
Class N Classification rate (%)
1 98.73
2 83.68
3 67.68
4 89.66
5 92.82
6 96.57
7 82.18
8 96.75
Total 88.02
Classification with ARTMAP. The five-fold cross-validation procedure was repeated for different values of the ARTMAP vigilance parameter: 0.1, 0.2, 0.3, 0.5, and 0.95. The obtained results are shown in Table 4; the best classification rate was obtained with the vigilance parameter set to 0.95.
Table 4. Averaged cross-validation results for ARTMAP neural network*
Vigilance parameter
Class N 0.1 0.2 0.3 0.5 0.95
1 98.92 99.68 99.56 98.52 99.88
2 79.58 80.86 80.34 79.16 80.88
3 69.14 68.16 68.66 69.36 68.14
4 81.50 81.50 81.72 81.88 83.50
5 76.48 74.26 75.34 74.10 78.94
6 96.70 96.60 96.76 97.40 93.76
7 79.38 77.28 78.32 77.12 76.78
8 96.42 97.36 97.00 97.54 98.24
Total 83.68 83.80 83.74 83.24 84.22
* the best estimates are indicated in boldface type
Comparison of classification methods. A comparative analysis of the best results obtained by the neural network models and by the ML classifier shows no clear preference for one method over the others (Table 5).
Table 5. Comparison of classification methods*
Method
Class N MLP ML ARTMAP
1 98.97 98.73 99.88
2 85.67 83.68 80.88
3 67.37 67.68 68.14
4 89.56 89.66 83.50
5 91.43 92.82 78.94
6 97.64 96.57 93.76
7 83.56 82.18 76.78
8 96.52 96.75 98.24
Total 88.85 88.02 84.22
* the best estimates are indicated in boldface type
The best overall classification rate of 88.85% (over all sample pixels) was achieved by the MLP. Considering the classification rates obtained for the classes separately, different methods performed better on different classes. For classes 2, 6, and 7 the MLP outperformed ARTMAP and the ML classifier; in turn, the ARTMAP neural network was better for classes 1, 3, and 8, and the ML classifier was better for classes 4 and 5.
The worst performance of all classification methods was obtained for class 3, "Mixed forest" (at most 68.14%, by ARTMAP). This is due to the fact that mixed forests (class 3) contain both broad-leaved (class 1) and coniferous (class 2) trees, so their spectral properties overlap.
Modular Neural Network
Definition. A neural network is said to be modular if the computation performed by the network can be decomposed into two or more subsystems that operate on distinct inputs without communicating with each other.
The advantages of using modular neural networks are:
– the possibility to divide a complex task into subtasks and to select the most appropriate algorithm for each subtask;
– a reduction of the resulting network complexity;
– a decrease in training time.
In this paper we propose to use:
– variation of the neural network architectures (Arch);
– a "one class – one network" architecture, in which a separate neural network is used for each class to distinguish that class from all the others (One-One).
The fusion module is a multilayer perceptron trained with the EDBD (Extended-Delta-Bar-Delta) algorithm; a sketch of the One-One scheme with such a fusion module is given below.
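This "one class – one network" scheme with an MLP fusion module can be sketched as follows; the sketch is a stand-in illustration built on scikit-learn MLPs trained with standard backpropagation, not the EDBD/MNN CAD setup used in the experiments:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

class OneClassOneNetwork:
    """"One class - one network" committee: one binary MLP expert per class,
    plus an MLP fusion module trained on the experts' outputs (illustrative)."""

    def __init__(self, n_classes=8):
        self.experts = [MLPClassifier(hidden_layer_sizes=(20,), max_iter=500)
                        for _ in range(n_classes)]
        self.fusion = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500)

    def fit(self, X, y):
        # Each expert learns to separate its own class from all the others.
        expert_out = np.column_stack(
            [e.fit(X, (y == c).astype(int)).predict_proba(X)[:, 1]
             for c, e in enumerate(self.experts)])
        self.fusion.fit(expert_out, y)        # fusion module combines the expert scores
        return self

    def predict(self, X):
        expert_out = np.column_stack([e.predict_proba(X)[:, 1] for e in self.experts])
        return self.fusion.predict(expert_out)
```

Each expert only has to separate one class from the rest, which is what makes the subtasks simpler than the original eight-class problem.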
Table 6. Comparison of the best single MLP with modular neural network architectures*
Method
Class N MLP (6-45-8) One-One Arch
1 98.97 99.39 99.19
2 85.67 84.44 84.24
3 67.37 67.51 68.64
4 89.56 89.18 89.61
5 91.43 92.48 93.08
6 97.64 97.15 97.97
7 83.56 79.63 84.20
8 96.52 97.86 96.79
Total 88.85 88.45 89.21
* the best estimates are indicated in boldface type
Conclusions
In this paper we examined different neural network models, in particular MLP and ARTMAP networks, and a statistical approach, namely the maximum likelihood method, for the classification of remote sensing images. For the comparative analysis of these methods, data acquired by the ETM+ sensor of the Landsat-7 satellite and land cover data from the European Corine project were used. Among the single classifiers, the best overall classification rate for all classes (88.85%) was achieved by the MLP. Considering the classification rates obtained for the classes separately, different methods performed better on different classes. This is probably due to the complex topology of the data used in this paper; thus, different classification methods are appropriate for different classes. The analysis of the available data set is a separate task and is not covered in this article.
The use of a modular neural network gave the best overall classification rate for all classes (89.21%), which demonstrates its efficiency for such tasks. In order to further improve the performance of remote sensing image classification, future work should be directed toward examining a wider range of classifiers, including modular neural network architectures, and toward the use of other satellite data and in-situ observations.
1. Haykin S. Neural Networks: A Comprehensive Foundation. – Upper Saddle River, New Jersey: Prentice Hall, 1999. – 842 p.
2. Foody G.M., Campbell N.A., Trodd N.M., Wood T.F. Derivation and applications of probabilistic measures of class membership from the maximum-likelihood classification // Photogramm. Eng. Remote Sens. – 1992. – 58(9). – P. 1335–1341.
3. Carpenter G.A., Grossberg S., Reynolds J.H. ARTMAP: Supervised real-time learning and classification of nonstationary data by a self-organizing neural network // Neural Networks. – 1991. – Vol. 4. – P. 565–588.
4. Minai A.A., Williams R.J. Back-propagation heuristics: A study of the extended delta-bar-delta algorithm // IEEE International Joint Conference on Neural Networks. – 1990. – Vol. I. – P. 595–600.
5. Werbos P.J. The Roots of Backpropagation: From Ordered Derivatives to Neural Networks and Political Forecasting. – New York: John Wiley & Sons, 1994. – 319 p.
6. Carpenter G.A., Grossberg S. ART 2: Stable self-organization of pattern recognition codes for analog input patterns // Applied Optics. – 1987. – Vol. 26. – P. 4919–4930.
7. NASA Landsat 7, http://landsat.gsfc.nasa.gov.
8. European Topic Centre on Terrestrial Environment, http://terrestrial.eionet.eu.int/CLC2000.
9. Benediktsson J.A., Swain P.H., Ersoy O.K. Neural network approaches versus statistical methods in classification of multisource remote sensing data // IEEE Trans. on Geoscience and Remote Sensing. – 1990. – Vol. 28, N 4. – P. 540–552.
10. Decatur S.E. Applications of neural networks to terrain classification // Proceedings of the International Joint Conference on Neural Networks. – 1989. – Vol. 1. – P. 283–288.
11. Bischof H., Schneider W., Pinz A.J. Multispectral classification of Landsat images using neural networks // IEEE Trans. on Geoscience and Remote Sensing. – 1992. – Vol. 30, N 3. – P. 482–490.
12. Dawson M.S., Fung A.K. Neural networks and their applications to parameter retrieval and classification // IEEE Geoscience and Remote Sensing Society Newsletter. – 1993. – P. 6–14.
13. Roli F., Serpico S.B., Vernazza G. Neural networks for classification of remotely-sensed images // C.H. Chen (ed.), Fuzzy Logic and Neural Networks Handbook. – McGraw-Hill, 1996.
14. Carpenter G.A., Martens S., Ogas O.J. Self-organizing information fusion and hierarchical knowledge discovery: a new framework using ARTMAP neural networks // Neural Networks. – 2005. – Vol. 18. – P. 287–295.
15. Carpenter G.A., Gopal S., Macomber S., Martens S., Woodcock C.E. A neural network method for mixture estimation for vegetation mapping // Remote Sens. Environ. – 1999. – Vol. 70. – P. 138–152.
16. Hwang J.N., Lay S.R., Kiang R. Robust construction neural networks for classification of remotely sensed data // Proceedings of the World Congress on Neural Networks. – 1993. – Vol. 4. – P. 580–584.
17. Kussul M., Riznyk A., Sadovaya E., Sitchov A., Chen T.Q. A visual solution to modular neural network system development // Proc. of the 2002 International Joint Conference on Neural Networks (IJCNN'02), Honolulu, HI, USA. – 2002. – Vol. 1. – P. 749–754.
18. Martens S. ClasserScript v1.1 User's Guide. – Technical Report CAS/CNS-TR-05-009, 2005. – 51 p.
19. The Worldwide Reference System (WRS), http://landsat.gsfc.nasa.gov/documentation/wrs.html.
20. Huang C., Wylie B., Yang L., Homer C., Zylstra G. Derivation of a tasseled cap transformation based on Landsat 7 at-satellite reflectance // International Journal of Remote Sensing. – 2002. – Vol. 23, N 8. – P. 1741–1748.
21. Landsat-7 Science Data User's Handbook, http://ltpwww.gsfc.nasa.gov/IAS/handbook/handbook_toc.html.
22. Jolliffe I.T. Principal Component Analysis. – New York: Springer-Verlag, 1986. – 487 p.
23. Stone M. Cross-validatory choice and assessment of statistical predictions // Journal of the Royal Statistical Society, Series B. – 1974. – Vol. 36. – P. 111–133.