UDC 517.9
Y. Guo (School Math. Sci., Qufu Normal Univ. and School Control Sci. and Eng., Shandong Univ., China)
GLOBALLY ROBUST STABILITY ANALYSIS
FOR STOCHASTIC COHEN – GROSSBERG NEURAL NETWORKS
WITH IMPULSE CONTROL AND TIME-VARYING DELAYS*
By constructing suitable Lyapunov functionals, in combination with the matrix-inequality technique, a new simple sufficient
linear matrix inequality condition is established for the globally robustly asymptotic stability of the stochastic Cohen –
Grossberg neural networks with impulsive control and time-varying delays. This condition contains and improves some
previous results from the earlier references.
1. Introduction. Notations. Let $\mathbb{R}$ denote the set of real numbers, $\mathbb{R}_+$ the set of nonnegative real numbers, $Z_+$ the set of positive integers, $\mathbb{R}^n$ the $n$-dimensional Euclidean space, and $|\cdot|$ the Euclidean norm. For any interval $\ell \subset \mathbb{R}$, let $PC(\ell, \mathbb{R}^n) = \bigl\{\varphi\colon \ell \to \mathbb{R}^n \mid \varphi$ is continuous everywhere except at a finite number of points $t_k$, at which $\varphi(t_k^+)$ and $\varphi(t_k^-)$ exist and $\varphi(t_k^+) = \varphi(t_k)\bigr\}$.
In this paper, we are concerned with the model of continuous-time neural networks described by the following system:
$$x_i'(t) = c_i(x_i(t))\Biggl[-d_i(x_i(t)) + \sum_{j=1}^{n} a_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij} f_j\bigl(x_j(t - \tau_j(t))\bigr) + J_i\Biggr], \quad t \ge 0,\ t \ne t_k,$$
$$\Delta x_i|_{t=t_k} = x_i(t_k) - x_i(t_k^-) = I_{ik}(x_i(t_k^-)), \quad k \in Z_+, \tag{1}$$
$$x_i(s) = \phi_i(s), \quad s \in [t_0 - \tau, t_0], \quad i = 1, 2, \ldots, n,$$
or, equivalently,
$$x'(t) = C(x(t))\Bigl[-D(x(t)) + Af(x(t)) + Bf\bigl(x(t - \tau(t))\bigr) + J\Bigr], \quad t \ge 0,\ t \ne t_k,$$
$$\Delta x|_{t=t_k} = I_k(x(t_k^-)), \quad k \in Z_+, \tag{1*}$$
$$x(s) = \phi(s), \quad s \in [t_0 - \tau, t_0],$$
where $n$ denotes the number of neurons in the network, $x_i(t)$ is the state of the $i$th neuron at time $t$, $x(t) = (x_1(t), x_2(t), \ldots, x_n(t))^T \in \mathbb{R}^n$, $f(x(t)) = \bigl(f_1(x_1(t)), f_2(x_2(t)), \ldots, f_n(x_n(t))\bigr)^T \in \mathbb{R}^n$
* This work was supported financially by the Open Research Project of the State Key Laboratory of Industrial
Control Technology, Zhejiang University, China (No. ICT170289) and the Postdoctoral Science Foundation of China
(2014M551738).
© Y. GUO, 2017
ISSN 1027-3190. Укр. мат. журн., 2017, т. 69, № 8 1049
denotes the activation functions, $D(x(t)) = (d_1(x_1(t)), d_2(x_2(t)), \ldots, d_n(x_n(t)))^T$, $A = (a_{ij})_{n\times n}$ and $B = (b_{ij})_{n\times n}$ are the feedback matrix and the delayed feedback matrix, respectively, $C(x(t)) = \mathrm{diag}\bigl(c_1(x_1(t)), c_2(x_2(t)), \ldots, c_n(x_n(t))\bigr) > 0$, $J = (J_1, J_2, \ldots, J_n)^T \in \mathbb{R}^n$ is a constant external input vector, the time delays $\tau_j(t)$ are nonnegative continuous functions with $0 \le \tau_j(t) \le \tau$ and $0 < \tau_j'(t) \le \delta < 1$, where $\tau$ and $\delta$ are constants, $0 \le t_0 < t_1 < t_2 < \ldots < t_k < t_{k+1} < \ldots$, $\lim_{k\to\infty} t_k = \infty$, $\sup_{k\in Z_+}\{t_{k+1} - t_k\} < \infty$, and $x'$ denotes the right-hand derivative of $x$. The functions $I_{ik}$ represent the abrupt change of the state $x_i(t)$ at the impulsive moment $t_k$; $\phi_i(s)$, $i = 1, 2, \ldots, n$, are bounded and continuous for $s \in [-\tau, 0]$.
The past few decades have witnessed tremendous developments in the research field of neural net-
works [1 – 7]. Various neural networks, such as Hopfield neural networks [1], cellular neural networks
[2, 3], bidirectional associative neural networks [4] and Cohen – Grossberg neural networks [5], have
been widely investigated and successfully applied in many areas. Among them, system (1) is one of
the most popular and generic neural network models. Cohen – Grossberg neural networks were first proposed and studied by Cohen and Grossberg [5] and have been widely applied in various engineering and scientific fields such as neurobiology, population biology, and computing technology. In such applications, it is important to know the convergence properties of the designed neural networks. Usually, this kind of neural network can be described by system (1).
On the other hand, a real system is usually affected by external perturbations, which in many cases are highly uncertain and hence may be treated as random. As Haykin [8] pointed out, in real nervous systems the synaptic transmission is a noisy process brought on by random fluctuations in the release of neurotransmitters and other probabilistic causes. Friedman [9] treats stochastic differential equations and their applications. It is also known that a neural network can be stabilized or destabilized by certain stochastic inputs. Hence, the stability analysis problem for
stochastic neural network becomes increasingly significant, and some results on stability have been
derived (see, for example, [10 – 17]).
However, taking more factors into account leads to the theory of impulsive differential equations [18], which covers a wide range of topics of impulsive systems theory, in particular stability theory. The authors of [19, 20] investigate the impulsive stabilization of delay differential systems. Samoilenko and Stanzhytskyi [21] consider the stability of stochastic systems with impulsive action. According to Arbib [22] and Haykin [23], when a stimulus from the body or the external environment is received by receptors, electrical impulses are conveyed to the neural net, and impulsive effects arise naturally in the net. Therefore, a neural network model with stochastic and impulsive effects is more accurate in describing the evolutionary processes of such systems.
Since randomness and impulses can affect the dynamical behaviors of the system [24 – 26], it is
necessary to investigate both randomness and impulsive effects on the stability of neural networks.
In this paper, we consider the global asymptotic stability of the Cohen – Grossberg neural networks with time-varying delays described by (1). The organization of this paper is as follows. In Section 2, the problem formulation and preliminaries are given. Section 3 establishes the existence and uniqueness of solutions. In Section 4, new stability results for system (1) are derived based on the Lyapunov method. Section 5 gives an example to illustrate the effectiveness of our results.
2. Preliminaries. In our analysis, we assume that the following conditions are satisfied:

(H1) there exist constant scalars $l_i > 0$ such that
$$0 \le \frac{f_i(\eta_1) - f_i(\eta_2)}{\eta_1 - \eta_2} \le l_i \quad \forall \eta_1, \eta_2 \in \mathbb{R},\ \eta_1 \ne \eta_2;$$

(H2) $0 < \underline{\alpha}_i \le c_i(x_i(t)) \le \overline{\alpha}_i$, where $\underline{\alpha}_i$ and $\overline{\alpha}_i$ are constant scalars, $i = 1, 2, \ldots, n$;

(H3) for all $\eta_1, \eta_2 \in \mathbb{R}$, $\eta_1 \ne \eta_2$, there exist constant scalars $\mu_i > 0$ such that
$$\frac{d_i(\eta_1) - d_i(\eta_2)}{\eta_1 - \eta_2} \ge \mu_i > 0.$$

In the following, the notation $A > 0$ (or $A < 0$) means that the matrix $A$ is symmetric and positive definite (or negative definite). $A^T$ and $A^{-1}$ denote the transpose and the inverse of a square matrix $A$. If $A, B$ are symmetric matrices, $A > B$ ($A \ge B$) means that $A - B$ is positive definite (positive semidefinite). Next we give results on the existence and uniqueness of equilibria of system (1).
Assume that $x^* = (x_1^*, x_2^*, \ldots, x_n^*)^T$ is an equilibrium of equation (1). One can derive from (1) that the transformation $y_i(t) = x_i(t) - x_i^*$ transforms system (1) into the following system:
$$y_i'(t) = \alpha_i(y_i(t))\Biggl[-\beta_i(y_i(t)) + \sum_{j=1}^{n} a_{ij} g_j(y_j(t)) + \sum_{j=1}^{n} b_{ij} g_j\bigl(y_j(t - \tau_j(t))\bigr)\Biggr], \quad t \ge 0,\ t \ne t_k,$$
$$\Delta y_i|_{t=t_k} = J_{ik}\bigl(y_i(t_k^-)\bigr), \quad k \in Z_+, \tag{2}$$
or
$$y'(t) = \alpha(y(t))\bigl[-\beta(y(t)) + Ag(y(t)) + Bg\bigl(y(t - \tau(t))\bigr)\bigr], \quad t \ge 0,\ t \ne t_k,$$
$$\Delta y|_{t=t_k} = J_k(y(t_k^-)), \quad k \in Z_+,$$
where
$$\alpha_i(y_i(t)) = c_i\bigl(y_i(t) + x_i^*\bigr), \qquad \alpha(y(t)) = \mathrm{diag}\bigl(\alpha_1(y_1(t)), \alpha_2(y_2(t)), \ldots, \alpha_n(y_n(t))\bigr),$$
$$\beta_i(y_i(t)) = d_i\bigl(y_i(t) + x_i^*\bigr) - d_i(x_i^*), \qquad \beta(y(t)) = \bigl(\beta_1(y_1(t)), \beta_2(y_2(t)), \ldots, \beta_n(y_n(t))\bigr)^T,$$
$$g_j(y_j(t)) = f_j\bigl(y_j(t) + x_j^*\bigr) - f_j(x_j^*), \qquad g(y(t)) = \bigl(g_1(y_1(t)), g_2(y_2(t)), \ldots, g_n(y_n(t))\bigr)^T,$$
$$J_{jk}(y_j(t_k^-)) = I_{jk}\bigl(y_j(t_k^-) + x_j^*\bigr), \qquad J_k(y(t_k^-)) = \bigl(J_{1k}(y_1(t_k^-)), \ldots, J_{nk}(y_n(t_k^-))\bigr)^T.$$
Note that since each function $f_j(\cdot)$ satisfies hypothesis (H1), each $g_j(\cdot)$ satisfies
$$0 \le \frac{g_j(y_j)}{y_j} \le l_j \quad \forall y_j \in \mathbb{R},\ y_j \ne 0, \qquad g_j(0) = 0, \quad j = 1, 2, \ldots, n,$$
and since each function $d_j(\cdot)$ satisfies hypothesis (H3), each $\beta_j(\cdot)$ satisfies
$$\frac{\beta_j(y_j)}{y_j} \ge \mu_j > 0 \quad \forall y_j \in \mathbb{R},\ y_j \ne 0, \qquad \beta_j(0) = 0, \quad j = 1, 2, \ldots, n.$$
To prove the stability of $x^*$ for equation (1), it suffices to prove the stability of the trivial solution of equation (2).
As discussed in Section 1, in the real world the neural network is often disturbed by environmental noises that affect the stability of the equilibrium. In this paper, the impulsive Cohen – Grossberg neural network with stochastic perturbations is introduced as follows:
$$dy(t) = \alpha(y(t))\bigl[-\beta(y(t)) + Ag(y(t)) + Bg\bigl(y(t - \tau(t))\bigr)\bigr]\,dt + \sigma\bigl(t, y(t), y(t - \tau(t))\bigr)\,dw(t), \quad t \ge 0,\ t \ne t_k,$$
$$\Delta y|_{t=t_k} = J_k(y(t_k^-)), \quad k \in Z_+, \tag{3}$$
$$y(t) = \phi(t), \quad -\tau \le t \le 0, \quad \phi \in L^2_{\mathcal{F}_0}\bigl([-\tau, 0], \mathbb{R}^n\bigr),$$
where $w(t) = \bigl(w_1(t), w_2(t), \ldots, w_m(t)\bigr)^T$ is an $m$-dimensional Brownian motion defined on a complete probability space $(\Omega, \mathcal{F}, P)$ with a natural filtration $\{\mathcal{F}_t\}_{t\ge 0}$ generated by $\{w(s)\colon 0 \le s \le t\}$. $L^2_{\mathcal{F}_t}(\ell; \mathbb{R}^n)$ is the family of all bounded, $\mathcal{F}_t$-measurable, $PC(\ell; \mathbb{R}^n)$-valued stochastic variables satisfying $\|\varphi\| := \sup_{s\in\ell} \mathbb{E}|\varphi(s)|^2 < \infty$, where $\mathbb{E}$ denotes the mathematical expectation operator. The map $\sigma(t, x, y)\colon \mathbb{R}_+ \times \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^{n\times m}$ is locally Lipschitz continuous, satisfies the linear growth condition, and $\sigma(t, 0, 0) = 0$. Furthermore, $\sigma$ satisfies:
(H4) $\mathrm{trace}\bigl[\sigma^T\bigl(t, y(t), y(t - \tau(t))\bigr)\,\sigma\bigl(t, y(t), y(t - \tau(t))\bigr)\bigr] \le |\Theta_1 y(t)|^2 + |\Theta_2 y(t - \tau(t))|^2$,

where $\Theta_1$ and $\Theta_2$ are known constant matrices with appropriate dimensions. In addition, we always assume that $J_k(y(t)) = 0$ if and only if $y = 0$, $t \ge t_0$, $k \in Z_+$. Let $y(t; \phi)$ denote the solution of the neural network (3) with initial data $y(s) = \phi(s)$, $-\tau \le s \le 0$, $\phi \in L^2_{\mathcal{F}_0}\bigl([-\tau, 0]; \mathbb{R}^n\bigr)$; then system (3) admits a trivial solution $y(t; 0) = 0$ corresponding to the initial data $\phi = 0$.
Remark 1. Let $n = 2$, $\sigma_1 = 0.5y_1(t) + 0.5y_1(t - \tau_1(t))$, $\sigma_2 = 0.4y_2(t) + 0.4y_2(t - \tau_2(t))$, where $\tau_1(t) = 0.3 + 0.5\sin t$, $\tau_2(t) = 0.3 + 0.5\cos t$. Then $\sigma = [\sigma_1, \sigma_2]^T$ satisfies condition (H4) for
$$\Theta_1 = \begin{bmatrix} 0.5 & 0 \\ 0 & 0.25 \end{bmatrix}, \qquad \Theta_2 = \begin{bmatrix} 0.25 & 0 \\ 0 & 0.16 \end{bmatrix}.$$
Definition 1. The function $V\colon [0, \infty) \times PC\bigl([0, \infty), \mathbb{R}^n\bigr) \to \mathbb{R}_+$ belongs to class $\mathcal{V}$ if:

(1) $V$ is continuous on each of the sets $[t_{k-1}, t_k) \times PC([0, \infty), \mathbb{R}^n)$ and the limit
$$\lim_{(t, \varphi_1) \to (t_k^-, \varphi_2)} V(t, \varphi_1) = V(t_k^-, \varphi_2)$$
exists;

(2) $V(t, y)$ is locally Lipschitzian in $y$ and $V(t, 0) \equiv 0$.
Definition 2. Suppose $V \in \mathcal{V}$; for any $(t, y) \in [t_{k-1}, t_k) \times PC([0, \infty), \mathbb{R}^n)$, the upper right-hand Dini derivative of $V(t, y(t))$ along the solution of (3) is defined by
$$D^+V(t, y(t)) = \limsup_{s \to 0^+} \frac{1}{s}\Bigl\{V\bigl(t + s,\, y(t) + s\,h(t, y(t), y(t - \tau(t)))\bigr) - V(t, y(t))\Bigr\},$$
where
$$h\bigl(t, y(t), y(t - \tau(t))\bigr) = \alpha(y(t))\bigl[-\beta(y(t)) + Ag(y(t)) + Bg\bigl(y(t - \tau(t))\bigr)\bigr].$$
Definition 3. The trivial solution (equilibrium point) of system (3) is robustly, globally, asymptotically stable in the mean square if, for every $\xi \in L^2_{\mathcal{F}_0}\bigl([-\tau, 0]; \mathbb{R}^n\bigr)$,
$$\lim_{t \to \infty} \mathbb{E}\bigl|y(t; \xi)\bigr|^2 = 0.$$
Lemma 1. For any vectors $a, b \in \mathbb{R}^n$ and any scalar $\varepsilon > 0$, the inequality
$$2a^T b \le \varepsilon a^T a + \varepsilon^{-1} b^T b$$
holds.
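Lemma 1 is the standard completing-the-square bound (the gap equals $|\sqrt{\varepsilon}\,a - b/\sqrt{\varepsilon}|^2 \ge 0$). As a quick numerical sanity check, added here for illustration and not part of the original paper, it can be verified on random vectors (dimension and sample counts below are arbitrary choices):

```python
import numpy as np

# Spot-check Lemma 1: 2 a^T b <= eps * a^T a + (1/eps) * b^T b
# for random vectors a, b and random scalars eps > 0.
rng = np.random.default_rng(0)

for _ in range(1000):
    n = int(rng.integers(1, 10))      # arbitrary dimension
    a = rng.standard_normal(n)
    b = rng.standard_normal(n)
    eps = rng.uniform(0.01, 10.0)     # any positive scalar
    lhs = 2.0 * (a @ b)
    rhs = eps * (a @ a) + (b @ b) / eps
    # The gap rhs - lhs equals |sqrt(eps)*a - b/sqrt(eps)|^2 >= 0.
    assert lhs <= rhs + 1e-12
print("Lemma 1 verified on 1000 random samples")
```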
Lemma 2 [28]. Given constant matrices $\Sigma_1$, $\Sigma_2$, $\Sigma_3$, where $\Sigma_1 = \Sigma_1^T$ and $0 < \Sigma_2 = \Sigma_2^T$, then
$$\Sigma_1 + \Sigma_3^T \Sigma_2^{-1} \Sigma_3 < 0$$
if and only if
$$\begin{pmatrix} \Sigma_1 & \Sigma_3^T \\ \Sigma_3 & -\Sigma_2 \end{pmatrix} < 0 \qquad \text{or} \qquad \begin{pmatrix} -\Sigma_2 & \Sigma_3 \\ \Sigma_3^T & \Sigma_1 \end{pmatrix} < 0.$$
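Lemma 2 is the well-known Schur-complement lemma. The following sketch, with arbitrarily chosen example matrices (not taken from the paper), illustrates the equivalence numerically via eigenvalue tests:

```python
import numpy as np

# Illustrate Lemma 2 (Schur complement): with Sigma1 = Sigma1^T and Sigma2 > 0,
# Sigma1 + Sigma3^T Sigma2^{-1} Sigma3 < 0  iff  [[Sigma1, Sigma3^T], [Sigma3, -Sigma2]] < 0.
S1 = np.array([[-2.0, 0.0], [0.0, -2.0]])   # Sigma1 symmetric (arbitrary example)
S2 = np.eye(2)                               # Sigma2 > 0
S3 = 0.5 * np.eye(2)                         # Sigma3

reduced = S1 + S3.T @ np.linalg.inv(S2) @ S3
block = np.block([[S1, S3.T], [S3, -S2]])

# Negative definiteness is checked by the largest eigenvalue being < 0.
max_eig_reduced = np.linalg.eigvalsh(reduced).max()
max_eig_block = np.linalg.eigvalsh(block).max()
assert max_eig_reduced < 0 and max_eig_block < 0
print("both formulations negative definite")
```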
3. Existence and uniqueness theorem. Following [31, 32], we have the following definition.

Definition 4. A function $y(\cdot) \in L^2_{\mathcal{F}_t}\bigl([-\tau, \infty), \mathbb{R}^n\bigr)$ is said to be a solution of (3) if $y(s) = \phi(s)$ for $s \in [-\tau, 0]$ and the following integral equation is satisfied:
$$y(t) = \phi(0) + \int_0^t h\bigl(s, y(s), y(s - \tau(s))\bigr)\,ds + \int_0^t \sigma\bigl(s, y(s), y(s - \tau(s))\bigr)\,dw(s) + \sum_{0 < t_k < t} J_k(y(t_k^-)), \quad t \ge 0.$$
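Definition 4 also suggests a natural way to approximate solutions of (3) numerically: integrate the drift and diffusion between impulse times with an Euler – Maruyama scheme and apply the jump map at each $t_k$. The sketch below does this for an illustrative scalar equation; all coefficients, the delay, the diffusion, and the jump map are hypothetical choices, not the paper's example.

```python
import numpy as np

# Euler-Maruyama sketch for a scalar impulsive delayed SDE of the form (3):
#   dy = [-beta*y + a*g(y) + b*g(y(t - tau))] dt + 0.1*y dw(t),  t != t_k,
#   y(t_k) = y(t_k^-) + J(y(t_k^-)).
# All numerical values here are illustrative assumptions.
rng = np.random.default_rng(42)

beta, a, b = 0.9, 0.3, 0.2           # hypothetical coefficients
tau, dt, T = 0.5, 0.01, 10.0         # constant delay, step size, horizon
g = np.tanh                          # a sector-bounded activation satisfying (H1)
impulse_times = np.arange(1.0, T, 1.0)
jump = lambda y: -0.4 * y            # hypothetical impulsive controller J_k

lag = int(tau / dt)                  # delay expressed in steps
n_steps = int(T / dt)
y = np.empty(n_steps + 1)
y[0] = 1.0                           # constant initial history phi(s) = 1
next_imp = 0

for i in range(n_steps):
    t = i * dt
    y_del = y[i - lag] if i >= lag else y[0]   # y(t - tau); constant history before 0
    drift = -beta * y[i] + a * g(y[i]) + b * g(y_del)
    diff = 0.1 * y[i]                          # hypothetical diffusion sigma
    y[i + 1] = y[i] + drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
    # Apply the jump map when the step crosses an impulse time t_k.
    if next_imp < len(impulse_times) and t + dt >= impulse_times[next_imp]:
        y[i + 1] += jump(y[i + 1])
        next_imp += 1

assert np.all(np.isfinite(y))
print("final state:", y[-1])
```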
Theorem 1. Suppose that both $h$ and $\sigma$ satisfy the local Lipschitz condition and the linear growth condition; that is, for each $k = 1, 2, \ldots$, there is a constant $p_k > 0$ such that, for $t \in [t_{k-1}, t_k)$,

(H5) $\bigl|h(t, u, v) - h(t, \bar u, \bar v)\bigr| \vee \bigl|\sigma(t, u, v) - \sigma(t, \bar u, \bar v)\bigr| \le p_k\bigl(|u - \bar u| + |v - \bar v|\bigr)$,

and there is a constant $q_k > 0$ such that

(H6) $\bigl|h(t, u, v)\bigr| \vee \bigl|\sigma(t, u, v)\bigr| \le q_k\bigl(1 + |u| + |v|\bigr)$.

Then system (3) has a unique global solution.

Proof. Without loss of generality, we assume that $t_0 > 0$. For $t \in [0, t_0)$, it is well known [33] that for any initial condition $\phi(0)$ system (3) has a unique continuous solution $y(t) = y(t; \phi(0))$ defined on $[0, t_0)$. At $t = t_0$, the impulse transfers the solution into $y(t_0) = y(t_0^-) + J_0(y(t_0^-))$. By induction, one gets a unique continuous solution of system (3) on each interval $[t_k, t_{k+1})$, and so on. Thus, system (3) has a unique global solution.

Theorem 1 is proved.
4. Impulsive stability analysis. In this section, we present and prove our main results.

Theorem 2. Assume that (H1) – (H6) are satisfied and there exist real scalars $\rho > 0$, $\varepsilon_1 > 0$, $\varepsilon_2 > 0$, a matrix $X$, and positive definite matrices $P = P^T > 0$, $Q = Q^T > 0$ such that the three LMIs
$$P < \rho I, \tag{4}$$
$$\left[\begin{array}{cccccccc}
-P\underline{\alpha}\mu - \underline{\alpha}\mu P + \dfrac{1}{\tau}Q & 0 & \overline{\alpha}PA & \overline{\alpha}PB & \rho\Theta_1^T & 0 & 0 & \varepsilon_1 l \\
\ast & -\dfrac{1-\delta}{\tau}Q & 0 & 0 & 0 & \rho\Theta_2^T & \varepsilon_2 l & 0 \\
\ast & \ast & -\varepsilon_1 I & 0 & 0 & 0 & 0 & 0 \\
\ast & \ast & \ast & -\varepsilon_2 I & 0 & 0 & 0 & 0 \\
\ast & \ast & \ast & \ast & -\rho I & 0 & 0 & 0 \\
\ast & \ast & \ast & \ast & \ast & -\rho I & 0 & 0 \\
\ast & \ast & \ast & \ast & \ast & \ast & -\varepsilon_2 I & 0 \\
\ast & \ast & \ast & \ast & \ast & \ast & \ast & -\varepsilon_1 I
\end{array}\right] < 0, \tag{5}$$
$$\begin{bmatrix} X + X^T & X^T \\ \ast & -P \end{bmatrix} \le 0 \tag{6}$$
hold, where $\underline{\alpha} = \mathrm{diag}(\underline{\alpha}_i)_{n\times n}$, $\overline{\alpha} = \mathrm{diag}(\overline{\alpha}_i)_{n\times n}$, $l = \mathrm{diag}(l_i)_{n\times n}$, $\mu = \mathrm{diag}(\mu_i)_{n\times n}$. Then system (3) is robustly, globally, asymptotically stable in the mean square via the impulsive controller
$$I_k u = P^{-1} X u. \tag{7}$$
Proof. Let $V(t) = y^T(t)Py(t) + \dfrac{1}{\tau}\displaystyle\int_{t-\tau(t)}^{t} y^T(s)Qy(s)\,ds$. By Itô's differential formula (see [29]), the stochastic derivative of $V(t)$ along (3) can be obtained as follows:

(1) for $t \in [t_k, t_{k+1})$,
$$D^+V(t) = \mathcal{L}V(t)\,dt + 2y^T(t)P\sigma\bigl(t, y(t), y(t - \tau(t))\bigr)\,dw(t),$$
where
$$\mathcal{L}V(t) = -2y^T(t)P\alpha(y(t))\bigl[\beta(y(t)) - Ag(y(t)) - Bg\bigl(y(t - \tau(t))\bigr)\bigr] + \mathrm{trace}\bigl[\sigma^T\bigl(t, y(t), y(t - \tau(t))\bigr)P\sigma\bigl(t, y(t), y(t - \tau(t))\bigr)\bigr] + \frac{1}{\tau}y^T(t)Qy(t) - \frac{1 - \tau'(t)}{\tau}y^T(t - \tau(t))Qy(t - \tau(t)).$$
First, note that
$$\mathcal{L}V(t) \le -2y^T(t)P\underline{\alpha}\mu\,y(t) + 2y^T(t)P\alpha(y(t))Ag(y(t)) + 2y^T(t)P\alpha(y(t))Bg\bigl(y(t - \tau(t))\bigr) + \mathrm{trace}\bigl[\sigma^T\bigl(t, y(t), y(t - \tau(t))\bigr)P\sigma\bigl(t, y(t), y(t - \tau(t))\bigr)\bigr] + \frac{1}{\tau}y^T(t)Qy(t) - \frac{1 - \delta}{\tau}y^T(t - \tau(t))Qy(t - \tau(t)).$$
Recall that the inequality $2ab \le \dfrac{1}{\varepsilon}a^2 + \varepsilon b^2$ holds for any $a, b \in \mathbb{R}$ and any $\varepsilon > 0$. Then
$$2y^T(t)P\alpha(y(t))Ag(y(t)) \le \frac{1}{\varepsilon_1}y^T(t)P\alpha(y(t))AA^T\alpha(y(t))Py(t) + \varepsilon_1 g^T(y(t))g(y(t)) \le \frac{1}{\varepsilon_1}y^T(t)\bigl(PAA^TP\overline{\alpha}^2\bigr)y(t) + \varepsilon_1 y^T(t)\,l^2\,y(t),$$
$$2y^T(t)P\alpha(y(t))Bg\bigl(y(t - \tau(t))\bigr) \le \frac{1}{\varepsilon_2}y^T(t)P\alpha(y(t))BB^T\alpha(y(t))Py(t) + \varepsilon_2 g^T\bigl(y(t - \tau(t))\bigr)g\bigl(y(t - \tau(t))\bigr) \le \frac{1}{\varepsilon_2}y^T(t)\bigl(PBB^TP\overline{\alpha}^2\bigr)y(t) + \varepsilon_2 y^T(t - \tau(t))\,l^2\,y(t - \tau(t)),$$
and
$$\mathrm{trace}\bigl[\sigma^T\bigl(t, y(t), y(t - \tau(t))\bigr)P\sigma\bigl(t, y(t), y(t - \tau(t))\bigr)\bigr] \le \lambda_{\max}(P)\,\mathrm{trace}\bigl[\sigma^T\bigl(t, y(t), y(t - \tau(t))\bigr)\sigma\bigl(t, y(t), y(t - \tau(t))\bigr)\bigr] \le \rho\bigl[y^T(t)\Theta_1^T\Theta_1 y(t) + y^T(t - \tau(t))\Theta_2^T\Theta_2\,y(t - \tau(t))\bigr],$$
thus
$$\mathcal{L}V(t) \le \begin{bmatrix} y(t) \\ y(t - \tau(t)) \end{bmatrix}^T \begin{bmatrix} \Omega_1 & 0 \\ 0 & \Omega_2 \end{bmatrix} \begin{bmatrix} y(t) \\ y(t - \tau(t)) \end{bmatrix},$$
where
$$\Omega_1 = -P\underline{\alpha}\mu - \underline{\alpha}\mu P + \frac{1}{\tau}Q + \frac{1}{\varepsilon_1}PAA^TP\overline{\alpha}^2 + \varepsilon_1 l^2 + \frac{1}{\varepsilon_2}PBB^TP\overline{\alpha}^2 + \rho\Theta_1^T\Theta_1,$$
$$\Omega_2 = -\frac{1 - \delta}{\tau}Q + \varepsilon_2 l^2 + \rho\Theta_2^T\Theta_2.$$
By Lemma 2, it is obvious from (5) that $\begin{bmatrix} \Omega_1 & 0 \\ 0 & \Omega_2 \end{bmatrix} < 0$. Hence there must exist a scalar $\eta > 0$ such that
$$\begin{bmatrix} \Omega_1 & 0 \\ 0 & \Omega_2 \end{bmatrix} + \begin{bmatrix} \eta I & 0 \\ 0 & 0 \end{bmatrix} < 0.$$
Define
$$\xi(t) := \begin{bmatrix} y(t) \\ y(t - \tau(t)) \end{bmatrix}, \qquad \Xi := \begin{bmatrix} \Omega_1 & 0 \\ 0 & \Omega_2 \end{bmatrix}.$$
So
$$D^+V(t) \le \xi^T(t)\Xi\xi(t)\,dt + 2y^T(t)P\sigma\bigl(t, y(t), y(t - \tau(t))\bigr)\,dw(t).$$
Then we have
$$\frac{d\,\mathbb{E}V(t)}{dt} \le \mathbb{E}\,\xi^T(t)\Xi\xi(t) \le -\eta\,\mathbb{E}|y(t)|^2. \tag{8}$$
(2) For $t = t_k$, using condition (6) we get
$$V(t_k) - V(t_k^-) = y^T(t_k)Py(t_k) - y^T(t_k^-)Py(t_k^-) + \frac{1}{\tau}\int_{t_k - \tau(t_k)}^{t_k} y^T(s)Qy(s)\,ds - \frac{1}{\tau}\int_{t_k^- - \tau(t_k^-)}^{t_k^-} y^T(s)Qy(s)\,ds =$$
$$= y^T(t_k^-)\bigl[(I + P^{-1}X)^T P (I + P^{-1}X) - P\bigr]y(t_k^-) = y^T(t_k^-)\bigl(X^T + X + X^T P^{-1} X\bigr)y(t_k^-) \le 0,$$
which gives
$$V(t_k) \le V(t_k^-).$$
This and case (1) imply that (8) holds for $t = t_k$. Then, by cases (1) and (2), system (3) is robustly, globally, asymptotically stable in the mean square.
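The key algebraic step at the impulse moment, $(I + P^{-1}X)^T P (I + P^{-1}X) - P = X^T + X + X^T P^{-1} X$, relies only on $P = P^T$. It can be spot-checked numerically; the random $P \succ 0$ and $X$ below are arbitrary, added here for illustration:

```python
import numpy as np

# Check the identity (I + P^{-1}X)^T P (I + P^{-1}X) - P = X^T + X + X^T P^{-1} X
# used at the impulse step, for a random symmetric positive definite P.
rng = np.random.default_rng(7)
n = 4
M = rng.standard_normal((n, n))
P = M @ M.T + n * np.eye(n)          # random P = P^T > 0
X = rng.standard_normal((n, n))      # arbitrary square X

Pinv = np.linalg.inv(P)
lhs = (np.eye(n) + Pinv @ X).T @ P @ (np.eye(n) + Pinv @ X) - P
rhs = X.T + X + X.T @ Pinv @ X
assert np.allclose(lhs, rhs)
print("identity holds")
```

Together with the Schur complement of LMI (6), this is exactly why the jump map (7) cannot increase $V$.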
Remark 2. In this paper, we do not need any restriction on the impulsive time intervals; however, for the impulsive delay differential systems containing model (3) discussed in [24 – 26, 30], such a restriction is necessary, for example, $\sup_{k\in Z_+}\{t_k - t_{k-1}\} < \dfrac{\ln q}{c}$ (see, e.g., [30]), where $q$, $c$ are constants. Therefore, the results of this paper are new and they complement previously known results.
Remark 3. In Theorem 2, we do not need the assumptions of boundedness, monotonicity, or differentiability for the activation functions; moreover, the model discussed has time-varying delays. Clearly, the proposed results differ from those in [1 – 5, 13 – 15, 27, 28] and the references cited therein. Therefore, the results of this paper are new and they complement previously known results.
Remark 4. In Theorem 2, we assume that the time-varying delay function $\tau(t)$ is differentiable, but this assumption is not essential: when $\tau(t)$ is not differentiable, we can instead choose $V(t) = y^T(t)Py(t) + \dfrac{1}{\tau}\displaystyle\int_{t-\tau}^{t} y^T(s)Qy(s)\,ds$. Then we have the following theorem.
Theorem 3. Assume that (H1) – (H6) are satisfied and there exist real scalars $\rho > 0$, $\varepsilon_1 > 0$, $\varepsilon_2 > 0$, a matrix $X$, and positive definite matrices $P = P^T > 0$, $Q = Q^T > 0$ such that the three LMIs (4), (6) and
$$\left[\begin{array}{cccccccc}
-P\underline{\alpha}\mu - \underline{\alpha}\mu P + \dfrac{1}{\tau}Q & 0 & \overline{\alpha}PA & \overline{\alpha}PB & \rho\Theta_1^T & 0 & 0 & \varepsilon_1 l \\
\ast & -\dfrac{1}{\tau}Q & 0 & 0 & 0 & \rho\Theta_2^T & \varepsilon_2 l & 0 \\
\ast & \ast & -\varepsilon_1 I & 0 & 0 & 0 & 0 & 0 \\
\ast & \ast & \ast & -\varepsilon_2 I & 0 & 0 & 0 & 0 \\
\ast & \ast & \ast & \ast & -\rho I & 0 & 0 & 0 \\
\ast & \ast & \ast & \ast & \ast & -\rho I & 0 & 0 \\
\ast & \ast & \ast & \ast & \ast & \ast & -\varepsilon_2 I & 0 \\
\ast & \ast & \ast & \ast & \ast & \ast & \ast & -\varepsilon_1 I
\end{array}\right] < 0$$
hold. Then system (3) is robustly, globally, asymptotically stable in the mean square via the impulsive controller (7).
In the following, we first consider the Cohen – Grossberg neural network (2) without stochastic perturbations. From Theorem 2, we obtain the following result.

Corollary 1. Assume that (H1) – (H3), (H5) and (H6) are satisfied and there exist real scalars $\varepsilon_1 > 0$, $\varepsilon_2 > 0$, a matrix $X$, and positive definite matrices $P = P^T > 0$, $Q = Q^T > 0$ such that the two LMIs (6) and
$$\left[\begin{array}{cccccc}
-P\underline{\alpha}\mu - \underline{\alpha}\mu P + \dfrac{1}{\tau}Q & 0 & \overline{\alpha}PA & \overline{\alpha}PB & 0 & \varepsilon_1 l \\
\ast & -\dfrac{1-\delta}{\tau}Q & 0 & 0 & \varepsilon_2 l & 0 \\
\ast & \ast & -\varepsilon_1 I & 0 & 0 & 0 \\
\ast & \ast & \ast & -\varepsilon_2 I & 0 & 0 \\
\ast & \ast & \ast & \ast & -\varepsilon_2 I & 0 \\
\ast & \ast & \ast & \ast & \ast & -\varepsilon_1 I
\end{array}\right] < 0$$
hold. Then system (2) is robustly, globally, asymptotically stable in the mean square via the impulsive controller (7).
Second, if the amplification functions are trivial, i.e., $\alpha(y(t)) = I$ in system (3), then model (3) reduces to the stochastic cellular neural network
$$dy(t) = \bigl[-\beta(y(t)) + Ag(y(t)) + Bg\bigl(y(t - \tau(t))\bigr)\bigr]\,dt + \sigma\bigl(t, y(t), y(t - \tau(t))\bigr)\,dw(t), \quad t \ge 0,\ t \ne t_k, \tag{9}$$
$$\Delta y|_{t=t_k} = J_k(y(t_k^-)), \quad k \in Z_+.$$
In this case, $\underline{\alpha} = \overline{\alpha} = I$ in Theorem 2, and we have the following corollary.
Corollary 2. Assume that (H1) – (H6) are satisfied and there exist real scalars $\rho > 0$, $\varepsilon_1 > 0$, $\varepsilon_2 > 0$, a matrix $X$, and positive definite matrices $P = P^T > 0$, $Q = Q^T > 0$ such that the three LMIs (4), (6) and
$$\left[\begin{array}{cccccccc}
-P\mu - \mu P + \dfrac{1}{\tau}Q & 0 & PA & PB & \rho\Theta_1^T & 0 & 0 & \varepsilon_1 l \\
\ast & -\dfrac{1-\delta}{\tau}Q & 0 & 0 & 0 & \rho\Theta_2^T & \varepsilon_2 l & 0 \\
\ast & \ast & -\varepsilon_1 I & 0 & 0 & 0 & 0 & 0 \\
\ast & \ast & \ast & -\varepsilon_2 I & 0 & 0 & 0 & 0 \\
\ast & \ast & \ast & \ast & -\rho I & 0 & 0 & 0 \\
\ast & \ast & \ast & \ast & \ast & -\rho I & 0 & 0 \\
\ast & \ast & \ast & \ast & \ast & \ast & -\varepsilon_2 I & 0 \\
\ast & \ast & \ast & \ast & \ast & \ast & \ast & -\varepsilon_1 I
\end{array}\right] < 0$$
hold. Then system (9) is robustly, globally, asymptotically stable in the mean square via the impulsive controller (7).
Third, we consider the following system:
$$dy(t) = \bigl[-\beta(y(t)) + Ag(y(t)) + Bg\bigl(y(t - \tau(t))\bigr)\bigr]\,dt, \quad t \ge 0,\ t \ne t_k,$$
$$\Delta y|_{t=t_k} = J_k(y(t_k^-)), \quad k \in Z_+. \tag{10}$$

Corollary 3. Assume that (H1) – (H3) are satisfied and there exist real scalars $\varepsilon_1 > 0$, $\varepsilon_2 > 0$, a matrix $X$, and positive definite matrices $P = P^T > 0$, $Q = Q^T > 0$ such that the two LMIs (6) and
$$\left[\begin{array}{cccccc}
-P\mu - \mu P + \dfrac{1}{\tau}Q & 0 & PA & PB & 0 & \varepsilon_1 l \\
\ast & -\dfrac{1-\delta}{\tau}Q & 0 & 0 & \varepsilon_2 l & 0 \\
\ast & \ast & -\varepsilon_1 I & 0 & 0 & 0 \\
\ast & \ast & \ast & -\varepsilon_2 I & 0 & 0 \\
\ast & \ast & \ast & \ast & -\varepsilon_2 I & 0 \\
\ast & \ast & \ast & \ast & \ast & -\varepsilon_1 I
\end{array}\right] < 0$$
hold. Then system (10) is robustly, globally, asymptotically stable in the mean square via the impulsive controller (7).
5. Numerical example. In this section, an example is used to demonstrate that the method
presented in this paper is effective.
Example. Consider the three-state neural network (3) with
$$A = \begin{bmatrix} 0.5 & 0.2 & 0.3 \\ 0.3 & 0.2 & -0.2 \\ 0.1 & 0.2 & -0.2 \end{bmatrix}, \qquad B = \begin{bmatrix} 0.6 & 0.2 & 0.4 \\ 0.3 & 0.2 & -0.6 \\ 0.5 & 0.2 & -0.3 \end{bmatrix},$$
$$\underline{\alpha} = \begin{bmatrix} 0.5 & 0 & 0 \\ 0 & 0.6 & 0 \\ 0 & 0 & 0.6 \end{bmatrix}, \qquad \overline{\alpha} = \begin{bmatrix} 0.7 & 0 & 0 \\ 0 & 0.8 & 0 \\ 0 & 0 & 0.9 \end{bmatrix},$$
$$l = 0.3I, \quad \mu = 0.9I, \quad \delta = 0.5, \quad \tau = 2, \quad \Theta_1 = 0.08I, \quad \Theta_2 = 0.09I.$$
By solving the LMIs (4), (5) and (6) for $\rho > 0$, $\varepsilon_i > 0$, $i = 1, 2$, $P > 0$, $Q > 0$, we obtain
$$P = \begin{bmatrix} 1.4201 & -0.1001 & -0.1629 \\ -0.1001 & 1.4362 & -0.2535 \\ -0.1629 & -0.2535 & 1.4319 \end{bmatrix}, \qquad Q = \begin{bmatrix} 0.2734 & -0.0198 & -0.0705 \\ -0.0198 & 0.4202 & -0.1834 \\ -0.0705 & -0.1834 & 0.4591 \end{bmatrix},$$
$$X = \begin{bmatrix} -0.5364 & 0.0125 & 0.0189 \\ 0.0125 & -0.5375 & 0.0289 \\ 0.0189 & 0.0289 & -0.5367 \end{bmatrix}, \qquad K = \begin{bmatrix} -0.3834 & -0.0242 & -0.0348 \\ -0.0242 & -0.3849 & -0.0507 \\ -0.0347 & -0.0507 & -0.3878 \end{bmatrix},$$
$$\rho = 2.2035, \qquad \varepsilon_1 = 1.5345, \qquad \varepsilon_2 = 1.7937,$$
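The reported solution can be checked against conditions (4) and (6) directly; this sanity check is our addition, using numpy eigenvalue tests with the values copied from the example above:

```python
import numpy as np

# Verify the reported example solution against LMIs (4) and (6):
#   (4)  P < rho*I,   (6)  [[X + X^T, X^T], [X, -P]] <= 0.
P = np.array([[ 1.4201, -0.1001, -0.1629],
              [-0.1001,  1.4362, -0.2535],
              [-0.1629, -0.2535,  1.4319]])
X = np.array([[-0.5364,  0.0125,  0.0189],
              [ 0.0125, -0.5375,  0.0289],
              [ 0.0189,  0.0289, -0.5367]])
rho = 2.2035

# (4): all eigenvalues of P - rho*I must be negative.
assert np.linalg.eigvalsh(P - rho * np.eye(3)).max() < 0

# (6): the 6x6 block matrix must be negative semidefinite.
lmi6 = np.block([[X + X.T, X.T], [X, -P]])
assert np.linalg.eigvalsh(lmi6).max() <= 1e-8

# The impulsive gain K in the example should approximate P^{-1} X, cf. (7).
print("P^{-1} X =", np.round(np.linalg.inv(P) @ X, 3))
```

Checking LMI (5) the same way requires assembling the full 8-block matrix from the example data; the structure is identical (build the symmetric block matrix with `np.block` and test its largest eigenvalue).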
which, by Theorem 2, implies that the above delayed stochastic Cohen – Grossberg neural network is robustly, globally, asymptotically stable in the mean square via the impulsive controller $I_k(u) = Ku$ (cf. (7)).
6. Conclusion. In this paper, we have dealt with the problem of global asymptotic stability analysis for stochastic Cohen – Grossberg neural networks with impulsive effects. We have removed the monotonicity and smoothness assumptions on the activation functions. An LMI approach has been developed to solve the problem addressed. The stability criteria have been derived in terms of the positive definite solution to three LMIs involving several scalar parameters, which can easily be solved using the Matlab LMI toolbox. A simple example has been used to demonstrate the effectiveness of the main results.
References
1. Guo S., Huang L. Stability analysis of a delayed Hopfield neural network // Phys. Rev. E. – 2003. – 67. – P. 1 – 7.
2. Civalleri P. P., Gilli M., Pandolfi L. On stability of cellular neural networks with delay // IEEE Trans. Circuits Syst.
I. – 1993. – 40. – P. 157 – 165.
3. Roska T., Chua L. O. Cellular neural networks with delay-type template elements nonuniform grid // Int. J. Circuit
Theory and Appl. – 1992. – 20. – P. 469 – 481.
4. Li Y. Global exponential stability of BAM neural networks with delays and impulses // Chaos, Solitons and Fractals. –
2005. – 24. – P. 279 – 285.
5. Cohen M. A., Grossberg S. Absolute stability of global pattern formation and parallel memory storage by competitive neural networks // IEEE Trans. Syst. Man Cybernet. – 1983. – 13. – P. 815 – 826.
6. Guo Y. Global asymptotic stability analysis for integro-differential systems modeling neural networks with delays //
Z. angew. Math. und Phys. – 2010. – 61. – S. 971 – 978.
7. Guo Y., Liu S. T. Global exponential stability analysis for a class of neural networks with time delays // Int. J. Robust
and Nonlinear Control. – 2012. – 22. – P. 1484 – 1494.
8. Haykin S. Neural networks. – Prentice-Hall, NJ, 1994.
9. Friedman A. Stochastic differential equations and their applications. – New York: Academic, 1976.
10. Guo Y. Mean square global asymptotic stability of stochastic recurrent neural networks with distributed delays //
Appl. Math. and Comput. – 2009. – 215. – P. 791 – 795.
11. Yang J., Zhong S., Luo W. Mean square stability analysis of impulsive stochastic differential equations with delays //
J. Comput. and Appl. Math. – 2008. – 216. – P. 474 – 483.
12. Guo Y. Global stability analysis for a class of Cohen – Grossberg neural network models // Bull. Korean Math. Soc. –
2012. – 49, № 6. – P. 1193 – 1198.
13. Wang Z., Liu Y., Li M., Liu X. Stability analysis for stochastic Cohen – Grossberg neural networks with mixed time
delays // IEEE Trans. Neural Networks. – 2006. – 17, № 3. – P. 814 – 820.
14. Blythe S., Mao X., Liao X. Stability of stochastic delay neural networks // J. Franklin Inst. – 2001. – 338. – P. 481 – 495.
15. Guo Y. Mean square exponential stability of stochastic delay cellular neural networks // Electron. J. Qual. Theory
Different. Equat. – 2013. – 34. – P. 1 – 10.
16. Yang Z., Xu D., Xiang L. Exponential p-stability of impulsive stochastic differential equations with delays // Phys.
Lett. A. – 2006. – 359. – P. 129 – 137.
17. Wan L., Sun J. Mean square exponential stability of stochastic delayed Hopfield neural networks // Phys. Lett. A. –
2005. – 343, № 4. – P. 306 – 318.
18. Samoilenko A. M., Perestyuk M. O. Impulsive differential equations. – Singapore etc.: World Sci., 1995.
19. Wang Q., Liu X. Impulsive stabilization of delay differential systems via the Lyapunov – Razumikhin method // Appl.
Math. Lett. – 2007. – 20. – P. 839 – 845.
20. Luo Z., Shen J. Impulsive stabilization of functional differential equations with infinite delays // Appl. Math. Lett. –
2003. – 16. – P. 695 – 701.
21. Samoilenko A., Stanzhytskyi O. Qualitative and asymptotic analysis of differential equations with random perturba-
tions. – Singapore: World Sci., 2011.
22. Arbib M. A. Brains, machines, and mathematics. – New York: Springer-Verlag, 1987.
23. Haykin S. Neural networks: a comprehensive foundation. – Englewood Cliffs, NJ: Prentice-Hall, 1998.
24. Wang P., Li B., Li Y. Square-mean almost periodic solutions for impulsive stochastic shunting inhibitory cellular
neural networks with delays // Neurocomputing. – 2015. – 167, № 1. – P. 76 – 82.
25. Cheng P., Deng F. Global exponential stability of impulsive stochastic functional differential systems // Statist. and
Probab. Lett. – 2010. – 80. – P. 1854 – 1862.
26. Yao F., Cao J., Cheng P. et al. Generalized average dwell time approach to stability and input-to-state stability of
hybrid impulsive stochastic differential systems // Nonlinear Anal.: Hybrid Syst. – 2016. – 22. – P. 147 – 160.
27. LaSalle J. P. The stability of dynamical systems. – Philadelphia: SIAM, 1976.
28. Boyd S., El Ghaoui L., Feron E., Balakrishnan V. Linear matrix inequalities in system and control theory. –
Philadelphia, PA: SIAM, 1994.
29. Karatzas I., Shreve S. E. Brownian motion and stochastic calculus. – New York: Springer, 1991.
30. Xu L., Xu D. Mean square exponential stability of impulsive control stochastic systems with time-varying delay //
Phys. Lett. A. – 2009. – 373, № 3. – P. 328 – 333.
31. Lakshmikantham V., Bainov D. D., Simeonov P. S. Theory of impulsive differential equations. – Teaneck, NJ: World
Sci. Publ., 1989. – Vol. 6.
32. Oksendal B. Stochastic differential equations. – 5th ed. – Berlin: Springer, 2002.
33. Mao X. Razumikhin-type theorems on exponential stability of stochastic functional differential equations // Stochast.
Proces. and Appl. – 1996. – 65. – P. 233 – 250.
Received 23.09.13,
after revision — 16.03.17
| work_keys_str_mv | AT guoy globallyrobuststabilityanalysisforstochasticcohengrossbergneuralnetworkswithimpulsecontrolandtimevaryingdelays AT goû globallyrobuststabilityanalysisforstochasticcohengrossbergneuralnetworkswithimpulsecontrolandtimevaryingdelays AT guoy globalʹnorobastnijanalizstabilʹnostistohastičnihnejronnihsitokkoenagrossbergazimpulʹsnimupravlinnâmtazatrimkamiŝozaležatʹvidčasu AT goû globalʹnorobastnijanalizstabilʹnostistohastičnihnejronnihsitokkoenagrossbergazimpulʹsnimupravlinnâmtazatrimkamiŝozaležatʹvidčasu |