Theory of Stochastic Processes
Vol.13 (29), no.4, 2007, pp.29–63
MYROSLAV DROZDENKO
WEAK CONVERGENCE OF FIRST-RARE-EVENT
TIMES FOR SEMI-MARKOV PROCESSES
Necessary and sufficient conditions for weak convergence of first-rare-event times for semi-Markov processes with a finite set of states in series of schemes are obtained.

2000 Mathematics Subject Classifications: 60K15, 60F17, 60K20.
Key words and phrases: weak convergence, semi-Markov processes, first-rare-event times, limit theorems, necessary and sufficient conditions.
1. Introduction
Limit theorems for random functionals similar to first-rare-event times, known under such names as first hitting times, first passage times, first record times, etc., were studied by many authors. A review of the literature related to the subject can be found in Silvestrov (2004) and in the recent papers by Silvestrov and Drozdenko (2005, 2006a, 2006b).
The main feature of most previous results is that they give sufficient conditions of convergence for such functionals. As a rule, those conditions involve assumptions which imply convergence of the distributions of sums of i.i.d. random variables, distributed as the sojourn times of the semi-Markov process (for every state), to some infinitely divisible laws, plus an ergodicity condition for the embedded Markov chain, plus a condition of vanishing of the probability of occurrence of the rare event during one transition step of the semi-Markov process.
Our results are related to the model of semi-Markov processes with a finite set of states. In the papers by Silvestrov and Drozdenko (2005, 2006a, 2006b), necessary and sufficient conditions for weak convergence of first-rare-event times for semi-Markov processes were obtained for the non-triangular-array case of stable-type asymptotics for sojourn time distributions.
In the present paper we generalize the results of those papers to a general triangular-array model.
Instead of using the traditional approach based on conditions for "individual" distributions of sojourn times, we use more general and weaker conditions imposed on the distributions of sojourn times averaged by the stationary
distribution of the limit embedded Markov chain. Moreover, we show that these conditions are not only sufficient but also necessary conditions for the weak convergence of first-rare-event times, and we describe the class of all possible limit laws not concentrated at zero. The results presented in the paper give some kind of a "final solution" of the problem of limit theorems for first-rare-event times for semi-Markov processes with a finite set of states in the triangular-array model.
In addition to the references given in Silvestrov and Drozdenko (2005,
2006a, 2006b), we would like to mention some recent publications rele-
vant to our research: Anisimov (2005), Avrachenkov and Haviv (2003),
Dayar (2005), Di Crescenzo and Nastro (2004), Fuh (2004), Harrison and
Knottenbelt (2002), Hunter (2005), Janssen and Manca (2006), Koroliuk
and Limnios (2005), Limnios, Ouhbi, and Sadek (2005), Nguyen, Vuong,
and Tran (2005), Solan and Vielle (2003), Symeonaki and Stamou (2006),
Szewczak (2005).
The paper is organized in the following way. In Section 2, we formulate
and prove our main Theorem 1, which describes the class of all possible
limit distributions for first-rare-event times for semi-Markov processes and
give necessary and sufficient conditions of weak convergence to distributions
from this class. Several lemmas describing asymptotical solidarity cyclic
properties for sum-processes defined on Markov chains are used in the proof
of Theorem 1. These lemmas and their proofs are collected in Section 3.
2. Main results
Let
(
η
(ε)
n , κ
(ε)
n , ζ
(ε)
n
)
, n = 0, 1, . . . be, for every ε > 0, a Markov re-
newal process, i.e. a homogenous Markov chain with phase space Z =
X × [0, +∞)×Y (here X = {1, 2, . . . , m}, and Y is some measurable space
with σ–algebra of measurable sets BY ) and transition probabilities,
P
{
η
(ε)
n+1 = j, κ
(ε)
n+1 ≤ t, ζ
(ε)
n+1 ∈ A/η(ε)
n = i, κ(ε)
n = s, ζ (ε)
n = y
}
= P
{
η
(ε)
n+1 = j, κ
(ε)
n+1 ≤ t, ζ
(ε)
n+1 ∈ A/η(ε)
n = i
}
= Q
(ε)
ij (t, A), i, j ∈ X, s, t ≥ 0, y ∈ Y, A ∈ BY .
(1)
The characteristic property, which specifies Markov renewal processes
in the class of general multivariate Markov chains
(
η
(ε)
n , κ
(ε)
n , ζ
(ε)
n
)
, is (as
shown in (1)) that transition probabilities do depend only of the current
position of the first component η
(ε)
n .
As is known, the first component η
(ε)
n of the Markov renewal process is
also a homogenous Markov chain with the phase space X and transition
probabilities p
(ε)
ij = Q
(ε)
ij (+∞, Y ), i, j ∈ X.
Also, the first two components of the Markov renewal process (namely $\eta^{(\varepsilon)}_n$ and $\kappa^{(\varepsilon)}_n$) can be associated with the semi-Markov process $\eta^{(\varepsilon)}(t)$, $t \ge 0$, defined as

$$\eta^{(\varepsilon)}(t) = \eta^{(\varepsilon)}_n \quad \text{for } \tau^{(\varepsilon)}_n \le t < \tau^{(\varepsilon)}_{n+1},\ n = 0, 1, \ldots,$$

where $\tau^{(\varepsilon)}_0 = 0$ and $\tau^{(\varepsilon)}_n = \kappa^{(\varepsilon)}_1 + \cdots + \kappa^{(\varepsilon)}_n$, $n \ge 1$.

The random variables $\kappa^{(\varepsilon)}_n$ represent inter-jump times for the process $\eta^{(\varepsilon)}(t)$. As far as the random variables $\zeta^{(\varepsilon)}_n$ are concerned, they are so-called "flag variables" and are used to record "rare" events.

Let $D_\varepsilon$, $\varepsilon > 0$ be a family of measurable, in some sense "small", subsets of $Y$. Then the events $\{\zeta^{(\varepsilon)}_n \in D_\varepsilon\}$ can be considered as "rare".
Let us introduce the random variables

$$\nu_\varepsilon = \min\big(n \ge 1 : \zeta^{(\varepsilon)}_n \in D_\varepsilon\big),$$

and

$$\xi_\varepsilon = \sum_{n=1}^{\nu_\varepsilon} \kappa^{(\varepsilon)}_n.$$

The random variable $\nu_\varepsilon$ counts the number of transitions of the embedded Markov chain $\eta^{(\varepsilon)}_n$ up to the first appearance of the "rare" event, while the random variable $\xi_\varepsilon$ can be interpreted as the first-rare-event time for the semi-Markov process $\eta^{(\varepsilon)}(t)$.

Let us consider the distribution function of the first-rare-event time $\xi_\varepsilon$ under a fixed initial state of the embedded Markov chain $\eta^{(\varepsilon)}_n$,

$$F^{(\varepsilon)}_i(u) = P_i\{\xi_\varepsilon \le u\}, \quad u \ge 0.$$
Here and henceforth, $P_i$ and $E_i$ denote, respectively, conditional probability and expectation calculated under the condition that $\eta^{(\varepsilon)}_0 = i$.
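To make the setup concrete, the following minimal simulation sketch generates the pair $(\nu_\varepsilon, \xi_\varepsilon)$. The concrete dynamics are assumptions chosen only for illustration (a two-state embedded chain, exponential sojourn times, and a flag variable drawn independently of the next state); none of these choices come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

P = np.array([[0.2, 0.8],           # transition matrix of the embedded chain eta_n (assumed)
              [0.5, 0.5]])
rates = np.array([1.0, 2.0])        # sojourn-time rates per state (assumed exponential)
p_rare = np.array([1e-3, 2e-3])     # p_{i,eps}: probability of the rare flag per step (assumed)

def first_rare_event_time(i):
    """Return (nu_eps, xi_eps) for the chain started from state i."""
    t, n = 0.0, 0
    while True:
        n += 1
        t += rng.exponential(1.0 / rates[i])   # kappa_n, drawn in the current state i
        rare = rng.random() < p_rare[i]        # is zeta_n in D_eps ?
        i = rng.choice(2, p=P[i])              # next state of the embedded chain
        if rare:
            return n, t

samples = [first_rare_event_time(0)[1] for _ in range(2000)]
print("empirical mean of xi_eps:", np.mean(samples))
```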
We give necessary and sufficient conditions for weak convergence of the distribution functions $F^{(\varepsilon)}_i(u u_\varepsilon)$, where $u_\varepsilon > 0$, $u_\varepsilon \to \infty$ as $\varepsilon \to 0$ is a non-random normalising function, and describe the class of possible limit distributions.

The problem is solved under four general model assumptions.

The first assumption A guarantees that the last summand in the random sum $\xi_\varepsilon$ is negligible under any normalization $u_\varepsilon$, i.e. $\kappa^{(\varepsilon)}_{\nu_\varepsilon}/u_\varepsilon \xrightarrow{P} 0$ as $\varepsilon \to 0$:

A: $\lim_{t \to \infty} \lim_{\varepsilon \to 0} P_i\big\{\kappa^{(\varepsilon)}_1 > t \,/\, \zeta^{(\varepsilon)}_1 \in D_\varepsilon\big\} = 0$, $i \in X$.
Let us introduce the probabilities of occurrence of the rare event during one transition step of the semi-Markov process $\eta^{(\varepsilon)}(t)$,

$$p_{i\varepsilon} = P_i\big\{\zeta^{(\varepsilon)}_1 \in D_\varepsilon\big\}, \quad i \in X.$$

The second assumption B, imposed on the probabilities $p_{i\varepsilon}$, specifies the interpretation of the event $\{\zeta^{(\varepsilon)}_n \in D_\varepsilon\}$ as "rare" and guarantees the possibility for such an event to occur:

B: $0 < \max_{1 \le i \le m} p_{i\varepsilon} \to 0$ as $\varepsilon \to 0$.

The third assumption C is a condition of convergence of the transition matrix of the embedded perturbed Markov chain $\eta^{(\varepsilon)}_n$ to the transition matrix of the embedded limit Markov chain $\eta^{(0)}_n$:

C: $p^{(\varepsilon)}_{ij} \to p^{(0)}_{ij}$ as $\varepsilon \to 0$, $i, j \in X$.

The fourth assumption D is a standard ergodicity condition for the limit embedded Markov chain $\eta^{(0)}_n$:

D: the Markov chain $\eta^{(0)}_n$ with matrix of transition probabilities $\|p^{(0)}_{ij}\|$ is ergodic with stationary distribution $\pi^{(0)}_i$, $i \in X$.

Let us define the probability which is the result of averaging of the probabilities of occurrence of the rare event in one transition step by the stationary distribution of the embedded limit Markov chain $\eta^{(0)}_n$,

$$p_\varepsilon = \sum_{i=1}^m \pi^{(0)}_i p_{i\varepsilon}.$$
Let us also introduce the distribution functions of the sojourn times $\kappa^{(\varepsilon)}_1$ for the semi-Markov processes $\eta^{(\varepsilon)}(t)$,

$$G^{(\varepsilon)}_i(t) = P_i\big\{\kappa^{(\varepsilon)}_1 \le t\big\}, \quad t \ge 0,\ i \in X,$$

and the distribution function which is the result of averaging of the distribution functions of sojourn times by the stationary distribution of the embedded Markov chain $\eta^{(0)}_n$,

$$G^{(\varepsilon)}(t) = \sum_{i=1}^m \pi^{(0)}_i G^{(\varepsilon)}_i(t), \quad t \ge 0.$$
Now we are in a position to formulate the necessary and sufficient conditions for weak convergence of the distribution functions of the first-rare-event times $\xi_\varepsilon$. The mentioned conditions have the following form:

E: $p_\varepsilon^{-1}\big(1 - G^{(\varepsilon)}(u u_\varepsilon)\big) \to h(u)$ as $\varepsilon \to 0$ for all $u > 0$ which are points of continuity of the limit function $h(u)$.

F: $p_\varepsilon^{-1}\int_0^{u u_\varepsilon} \frac{s}{u_\varepsilon}\, G^{(\varepsilon)}(ds) \to f(u)$ as $\varepsilon \to 0$ for some $u > 0$ which is a point of continuity of $h(u)$.

The limits here satisfy a number of conditions:

(a1) $h(u)$ is a non-negative, non-increasing, and right-continuous function for $u > 0$, and $h(\infty) = 0$;

(a2) the measure $H(A)$ on the $\sigma$-algebra $\mathcal{H}_+$, the Borel $\sigma$-algebra of subsets of $(0, \infty)$, defined by the relation $H((u_1, u_2]) = h(u_1) - h(u_2)$, $0 < u_1 \le u_2 < \infty$, satisfies the condition $\int_0^\infty \frac{s}{1+s}\, H(ds) < \infty$;

(a3) under E, condition F can only hold simultaneously for all continuity points of $h(u)$, and $f(u_1) = f(u_2) - \int_{u_1}^{u_2} s\, H(ds)$ for any such points $0 < u_1 < u_2 < \infty$;

(a4) $f(u)$ is a non-negative function.
We use the symbol $\Rightarrow$ to denote weak convergence of distribution functions (pointwise convergence at points of continuity of the limit distribution function).

Conditions E and F are necessary and sufficient conditions for the weak convergence

$$\vartheta^{(\varepsilon)}(t) = \sum_{k=1}^{[t p_\varepsilon^{-1}]} \frac{\vartheta^{(\varepsilon)}_k}{u_\varepsilon},\ t \ge 0 \ \Rightarrow\ \vartheta(t),\ t \ge 0 \quad \text{as } \varepsilon \to 0, \qquad (2)$$

where $\vartheta^{(\varepsilon)}_k$ are i.i.d. random variables with distribution function $G^{(\varepsilon)}(t)$, and the cumulant $a(s)$ of the limit process $\vartheta(t)$ (i.e. $E e^{-s\vartheta(t)} = e^{-a(s)t}$) has, according to the Lévy–Khintchine representation formula, the following form,

$$a(s) = as - \int_0^\infty (e^{-sx} - 1)\, H(dx), \qquad (3)$$

where the constant

$$a = f(u) - \int_0^u s\, H(ds)$$

does not depend on the choice of the point $u$ in condition F.
The main result of the paper is the following theorem.

Theorem 1. Let conditions A, B, C, and D hold. Then:

(i) The class of all possible limit distribution functions not concentrated at zero (in the sense of weak convergence) for the distribution functions of first-rare-event times $F^{(\varepsilon)}_i(u u_\varepsilon)$ coincides with the class of distribution functions $F(u)$ with Laplace transforms $\phi(s) = \frac{1}{1 + a(s)}$.

(ii) Conditions E and F are necessary and sufficient for the following relation of weak convergence to hold (for some or every $i \in X$, respectively, in the statements of necessity and sufficiency),

$$F^{(\varepsilon)}_i(u u_\varepsilon) \Rightarrow F(u) \quad \text{as } \varepsilon \to 0, \qquad (4)$$

where $F(u)$ is the distribution function with Laplace transform $\frac{1}{1 + a(s)}$.

Remark 1. $F(u)$ is the distribution function of a random variable $\xi(\rho)$, where (b1) $\xi(t)$, $t \ge 0$ is a non-negative homogeneous stable process with independent increments and the Laplace transform $E e^{-s\xi(t)} = e^{-a(s)t}$, $s, t \ge 0$, (b2) $\rho$ is an exponentially distributed random variable with parameter 1, and (b3) the random variable $\rho$ and the process $\xi(t)$, $t \ge 0$ are independent.
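For illustration, consider the simplest special case in which condition E holds with $h(u) \equiv 0$ (so $H \equiv 0$) and condition F holds with $f(u) \equiv a > 0$. Then $a(s) = as$, the process in Remark 1 is the deterministic drift $\xi(t) = at$, and

$$\phi(s) = \frac{1}{1 + as} = \int_0^\infty e^{-su}\, a^{-1} e^{-u/a}\, du,$$

i.e. $F(u) = 1 - e^{-u/a}$, $u \ge 0$. This special case recovers the classical exponential approximation for first-rare-event (hitting) times.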
Proof. We split the proof of Theorem 1 into several steps.

As the first step, we obtain an appropriate representation for the first-rare-event time $\xi_\varepsilon$ in the form of a geometric-type random sum of random variables connected with cyclic returns of the semi-Markov process $\eta^{(\varepsilon)}(t)$ to a fixed state $i \in X$.

Let $\tau^{(\varepsilon)}_i(n)$ be the number of transitions after which the embedded Markov chain $\eta^{(\varepsilon)}_n$ reaches the state $i \in X$ for the $n$-th time,

$$\tau^{(\varepsilon)}_i(n) = \min\big\{k > \tau^{(\varepsilon)}_i(n-1) : \eta^{(\varepsilon)}_k = i\big\}, \quad n = 1, 2, \ldots,$$

where $\tau^{(\varepsilon)}_i(0) = 0$. For simplicity, we will write $\tau^{(\varepsilon)}_i(1)$ as $\tau^{(\varepsilon)}_i$.
Let $\beta^{(\varepsilon)}_i(n)$ be the duration of the $n$-th $i$-cycle, between the moments of the $(n-1)$-th and $n$-th return of the semi-Markov process $\eta^{(\varepsilon)}(t)$ to the state $i$,

$$\beta^{(\varepsilon)}_i(n) = \sum_{k = \tau^{(\varepsilon)}_i(n-1)+1}^{\tau^{(\varepsilon)}_i(n)} \kappa^{(\varepsilon)}_k, \quad n = 1, 2, \ldots.$$

For simplicity, we will also write $\beta^{(\varepsilon)}_i(1)$ as $\beta^{(\varepsilon)}_i$. The moments of return of the semi-Markov process $\eta^{(\varepsilon)}(t)$ to a fixed state $i \in X$ are regenerative moments for this process. Due to this property, $\beta^{(\varepsilon)}_i(n)$, $n = 1, 2, \ldots$ are i.i.d. random variables for $n \ge 2$. As far as the random variable $\beta^{(\varepsilon)}_i(1)$ is concerned, it has the same distribution as $\beta^{(\varepsilon)}_i(2)$ if the initial distribution of the embedded Markov chain $\eta^{(\varepsilon)}_n$ is concentrated in the state $i$. Otherwise, the distribution of $\beta^{(\varepsilon)}_i(1)$ can differ from the distribution of $\beta^{(\varepsilon)}_i(2)$.

Let us also introduce the random variable $\nu_{i\varepsilon}$ which counts the number of cycles completed before the moment $\nu_\varepsilon$,

$$\nu_{i\varepsilon} = \max\big\{n : \tau^{(\varepsilon)}_i(n) \le \nu_\varepsilon\big\}.$$
Finally, let $\tilde{\beta}_{i\varepsilon}$ be the duration of the residual sub-cycle, between the moment of the last return of the semi-Markov process $\eta^{(\varepsilon)}(t)$ to the state $i$ before the first-rare-event time $\xi_\varepsilon$, and the time $\xi_\varepsilon$,

$$\tilde{\beta}_{i\varepsilon} = \sum_{n = \tau^{(\varepsilon)}_i(\nu_{i\varepsilon})+1}^{\nu_\varepsilon} \kappa^{(\varepsilon)}_n.$$

Now, the following representation, in the form of a random sum, can be written down for the first-rare-event time $\xi_\varepsilon$,

$$\xi_\varepsilon = \sum_{n=1}^{\nu_{i\varepsilon}} \beta^{(\varepsilon)}_i(n) + \tilde{\beta}_{i\varepsilon}. \qquad (5)$$

It should be noted that the random index $\nu_{i\varepsilon}$, the summands $\beta^{(\varepsilon)}_i(n)$, $n = 1, 2, \ldots$, and $\tilde{\beta}_{i\varepsilon}$ are not independent random variables. However, they are conditionally independent with respect to the indicator random variables $\chi_{i\varepsilon}(n) = \chi\big(\tau^{(\varepsilon)}_i(n-1) < \nu_\varepsilon \le \tau^{(\varepsilon)}_i(n)\big)$, $n = 1, 2, \ldots$. This will be seen best when we rewrite the representation formula (5) in terms of Laplace transforms.
Let us introduce the Laplace transforms of the first-rare-event time,

$$\Phi_{i\varepsilon}(s) = E_i \exp\{-s\xi_\varepsilon\}, \quad s \ge 0,\ i \in X.$$

Let us denote by $q_{i\varepsilon}$ the probability of occurrence of the rare event during the first $i$-cycle,

$$q_{i\varepsilon} = P_i\big\{\nu_\varepsilon \le \tau^{(\varepsilon)}_i\big\}, \quad i \in X.$$

Let us also introduce the conditional Laplace transform of the duration of the first $i$-cycle $\beta^{(\varepsilon)}_i$ under the condition $\nu_\varepsilon > \tau^{(\varepsilon)}_i$ of non-occurrence of the rare event in the first $i$-cycle,

$$\bar{\psi}_{i\varepsilon}(s) = E_i\big\{\exp\{-s\beta^{(\varepsilon)}_i\} \,/\, \nu_\varepsilon > \tau^{(\varepsilon)}_i\big\}, \quad s \ge 0,$$

and the conditional Laplace transform of the duration of the residual sub-cycle $\tilde{\beta}_{i\varepsilon}$ under the condition $\nu_\varepsilon \le \tau^{(\varepsilon)}_i$ of occurrence of the rare event in the first $i$-cycle,

$$\tilde{\psi}_{i\varepsilon}(s) = E_i\big\{\exp\{-s\tilde{\beta}_{i\varepsilon}\} \,/\, \nu_\varepsilon \le \tau^{(\varepsilon)}_i\big\}, \quad s \ge 0.$$

The Markov renewal process $(\eta^{(\varepsilon)}_n, \kappa^{(\varepsilon)}_n, \zeta^{(\varepsilon)}_n)$ regenerates at the moments of return to every state $i$, and $\nu_\varepsilon$ is a Markov moment for this process. Due to these properties, the representation formula (5) takes, in terms of Laplace transforms, the following form,

$$\begin{aligned}
\Phi_{i\varepsilon}(s) &= E_i \exp\{-s\xi_\varepsilon\} = \sum_{n=0}^{\infty} (1 - q_{i\varepsilon})^n q_{i\varepsilon}\, \bar{\psi}_{i\varepsilon}(s)^n\, \tilde{\psi}_{i\varepsilon}(s) \\
&= \frac{q_{i\varepsilon}\tilde{\psi}_{i\varepsilon}(s)}{1 - (1 - q_{i\varepsilon})\bar{\psi}_{i\varepsilon}(s)}
= \frac{\tilde{\psi}_{i\varepsilon}(s)}{1 + (1 - q_{i\varepsilon})\frac{1 - \bar{\psi}_{i\varepsilon}(s)}{q_{i\varepsilon}}}, \quad s \ge 0. \qquad (6)
\end{aligned}$$
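As a numerical sanity check of the geometric-sum identity (6), the following sketch compares a Monte Carlo estimate of $E e^{-s\xi_\varepsilon}$ with the right-hand side of (6). The distributions are pure assumptions chosen for illustration (Exp(1) non-rare cycle durations, an Exp(2) residual sub-cycle, and a cycle count drawn independently of the durations, mimicking the conditional-independence structure used above).

```python
import numpy as np

rng = np.random.default_rng(1)
q, s = 0.3, 0.7                      # illustrative values, not from the paper

def sample_first_rare_event_time():
    # number of completed non-rare i-cycles before the rare one: Geometric(q) - 1
    n_cycles = rng.geometric(q) - 1
    return rng.exponential(1.0, size=n_cycles).sum() + rng.exponential(0.5)

x = np.array([sample_first_rare_event_time() for _ in range(200_000)])
empirical = np.exp(-s * x).mean()

psi_bar = 1.0 / (1.0 + s)            # Laplace transform of the Exp(1) cycle duration
psi_tilde = 2.0 / (2.0 + s)          # Laplace transform of the Exp(2) residual sub-cycle
formula = q * psi_tilde / (1.0 - (1.0 - q) * psi_bar)
print(empirical, formula)            # the two values should agree closely
```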
As the second step, we prove that the weak convergence of the first-rare-event times is invariant with respect to the choice of the initial distribution of the embedded Markov chain $\eta^{(\varepsilon)}_n$.

At this stage we are interested in solidarity statements concerning the relation of weak convergence

$$F^{(\varepsilon)}_i(u u_\varepsilon) \Rightarrow F(u) \quad \text{as } \varepsilon \to 0, \qquad (7)$$

where (c1) $F(u)$ is a distribution function concentrated on the non-negative half-line but not concentrated at zero, and (c2) $u_\varepsilon$ is a positive normalizing function such that $u_\varepsilon \to \infty$ as $\varepsilon \to 0$.

We shall prove that, under conditions A, B, C, and D, (d) the assumption that relation (7) holds for some $i \in X$ implies that this relation holds for every $i \in X$ and, in this case, (e) the limit distribution function $F(u)$ is the same for all $i \in X$.

In terms of Laplace transforms, relation (7) is equivalent to the relation

$$\Phi_{i\varepsilon}(s/u_\varepsilon) \to \Phi(s) \quad \text{as } \varepsilon \to 0,\ s \ge 0, \qquad (8)$$

where (f) $\Phi(s)$ is a Laplace transform of some non-negative random variable, and (g) $\Phi(s) < 1$ for $s > 0$ (this is equivalent to the requirement that the corresponding limit distribution function is not concentrated at zero).

Thus, in order to prove the solidarity statement formulated above, we should prove that, under conditions A, B, C, and D, (h) the assumption that relation (8) holds for some $i \in X$ implies that this relation holds for every $i \in X$ and, in this case, (i) the limit Laplace transform $\Phi(s)$ is the same for all $i \in X$.

In what follows, we use several lemmas describing asymptotic solidarity cyclic properties for functionals defined on trajectories of Markov renewal processes $(\eta^{(\varepsilon)}_n, \kappa^{(\varepsilon)}_n, \zeta^{(\varepsilon)}_n)$.
It will be proved in Lemma 3 that conditions B, C, and D imply the following asymptotic relation, for every $i \in X$,

$$q_{i\varepsilon} \sim \frac{p_\varepsilon}{\pi^{(0)}_i} \quad \text{as } \varepsilon \to 0. \qquad (9)$$

Here and henceforth, the relation $a(\varepsilon) \sim b(\varepsilon)$ as $\varepsilon \to 0$ means that $a(\varepsilon)/b(\varepsilon) \to 1$ as $\varepsilon \to 0$.

It follows from (9) that, for every $i \in X$,

$$q_{i\varepsilon} \to 0 \quad \text{as } \varepsilon \to 0. \qquad (10)$$

It will be shown in Lemma 4, with the use of (9), that conditions A, B, C, and D imply the following asymptotic relation, for every $i \in X$,

$$\tilde{\psi}_{i\varepsilon}(s/u_\varepsilon) \to 1 \quad \text{as } \varepsilon \to 0,\ s \ge 0. \qquad (11)$$

Relation (11) implies that, under conditions A, B, C, and D, for every $i \in X$,

$$\Phi_{i\varepsilon}(s/u_\varepsilon) \sim \frac{1}{1 + (1 - q_{i\varepsilon})\frac{1 - \bar{\psi}_{i\varepsilon}(s/u_\varepsilon)}{q_{i\varepsilon}}} \quad \text{as } \varepsilon \to 0,\ s \ge 0. \qquad (12)$$

It follows from relations (10) and (12) that, under conditions A, B, C, and D, relation (8) holds, for a given $i \in X$, if and only if

$$\frac{1 - \bar{\psi}_{i\varepsilon}(s/u_\varepsilon)}{q_{i\varepsilon}} \to \varsigma(s) \quad \text{as } \varepsilon \to 0,\ s \ge 0, \qquad (13)$$

where $\varsigma(s)$ is a function such that (j) $\frac{1}{1 + \varsigma(s)}$ is a Laplace transform of some non-negative random variable, and (k) $\varsigma(s) > 0$ for $s > 0$.

Obviously, the limit functions in relations (8) and (13) are connected by the following relation,

$$\Phi(s) = \frac{1}{1 + \varsigma(s)}, \quad s \ge 0. \qquad (14)$$

To simplify the following asymptotic analysis, we shall now try to replace the conditional Laplace transform $\bar{\psi}_{i\varepsilon}(s)$ in relation (13) by the unconditional Laplace transform of the duration of the first $i$-cycle $\beta^{(\varepsilon)}_i$,

$$\psi^{(\varepsilon)}_i(s) = E_i \exp\big\{-s\beta^{(\varepsilon)}_i\big\}, \quad s \ge 0.$$

The Laplace transform $\psi^{(\varepsilon)}_i(s)$ can obviously be represented in the following form,

$$\psi^{(\varepsilon)}_i(s) = (1 - q_{i\varepsilon})\bar{\psi}_{i\varepsilon}(s) + q_{i\varepsilon}\hat{\psi}_{i\varepsilon}(s), \quad s \ge 0, \qquad (15)$$
where $\hat{\psi}_{i\varepsilon}(s)$ is the conditional Laplace transform of the duration of the first $i$-cycle $\beta^{(\varepsilon)}_i$ under the condition $\nu_\varepsilon \le \tau^{(\varepsilon)}_i$ of occurrence of the rare event in the first $i$-cycle,

$$\hat{\psi}_{i\varepsilon}(s) = E_i\big\{\exp\{-s\beta^{(\varepsilon)}_i\} \,/\, \nu_\varepsilon \le \tau^{(\varepsilon)}_i\big\}, \quad s \ge 0.$$

Relation (15) can be rewritten in the following form,

$$\frac{1 - \psi^{(\varepsilon)}_i(s/u_\varepsilon)}{q_{i\varepsilon}} = (1 - q_{i\varepsilon})\frac{1 - \bar{\psi}_{i\varepsilon}(s/u_\varepsilon)}{q_{i\varepsilon}} + q_{i\varepsilon}\frac{1 - \hat{\psi}_{i\varepsilon}(s/u_\varepsilon)}{q_{i\varepsilon}}, \quad s \ge 0. \qquad (16)$$

It will be shown in Lemma 5 that conditions A, B, C, and D imply that, for every $i \in X$,

$$\hat{\psi}_{i\varepsilon}(s/u_\varepsilon) \to 1 \quad \text{as } \varepsilon \to 0,\ s \ge 0. \qquad (17)$$

It follows from relation (17) that, under conditions A, B, C, and D, relation (13) holds, for a given $i \in X$, if and only if

$$\frac{1 - \psi^{(\varepsilon)}_i(s/u_\varepsilon)}{q_{i\varepsilon}} \to \varsigma(s) \quad \text{as } \varepsilon \to 0,\ s \ge 0, \qquad (18)$$

where $\varsigma(s)$ is a function such that (j) $\frac{1}{1+\varsigma(s)}$ is a Laplace transform of some non-negative random variable, and (k) $\varsigma(s) > 0$ for $s > 0$.

It will be shown in Lemma 6 that, under conditions B, C, and D, (l) the assumption that relation (18) holds for some $i \in X$ implies that this relation holds for every $i \in X$ and, in this case, (m) the limit function $\varsigma(s)$ is the same for all $i \in X$, and (n) $\varsigma(s)$ is a cumulant of an infinitely divisible law concentrated on the non-negative half-line and not concentrated at zero.

Note that, in this case, (o1) the function $\frac{1}{1+\varsigma(s)}$ is a Laplace transform of the random variable $\xi(\rho)$, where (o2) $\xi(t)$, $t \ge 0$ is a non-negative homogeneous process with independent increments and the Laplace transform $E e^{-s\xi(t)} = e^{-\varsigma(s)t}$, (o3) $\rho$ is an exponentially distributed random variable with parameter 1, (o4) the random variable $\rho$ is independent of the process $\xi(t)$, $t \ge 0$, and (o5) $\varsigma(s) > 0$ for $s > 0$. These properties are consistent with the requirements (j) and (k).
Let us introduce the Laplace transforms of the sojourn times $\kappa^{(\varepsilon)}_1$,

$$\varphi^{(\varepsilon)}_i(s) = E_i \exp\big\{-s\kappa^{(\varepsilon)}_1\big\} = \int_0^\infty e^{-st}\, G^{(\varepsilon)}_i(dt), \quad s \ge 0,$$

and the corresponding Laplace transform averaged by the stationary distribution of the limit embedded Markov chain $\eta^{(0)}_n$,

$$\varphi^{(\varepsilon)}(s) = \sum_{i=1}^m \pi^{(0)}_i \varphi^{(\varepsilon)}_i(s) = \int_0^\infty e^{-st}\, G^{(\varepsilon)}(dt), \quad s \ge 0.$$

Finally, it will be shown in Lemmas 6 and 7 that, under conditions A, B, C, and D, relation (18) holds, for a given $i \in X$, if and only if

$$\frac{1 - \varphi^{(\varepsilon)}(s/u_\varepsilon)}{p_\varepsilon} \to \varsigma(s) \quad \text{as } \varepsilon \to 0,\ s \ge 0, \qquad (19)$$

where (p) $\varsigma(s)$ is a cumulant of an infinitely divisible law concentrated on the non-negative half-line and not concentrated at zero.

Relation (19) is the final point in the series of solidarity statements concerning the distributions of first-rare-event times and based on conditions A, B, C, and D.
The last step in the proof is a standard one. As was mentioned above, conditions E and F are equivalent to the asymptotic relation (2). In terms of Laplace transforms, (2) is equivalent (for every $t > 0$) to the following relation,

$$E \exp\big\{-s\vartheta^{(\varepsilon)}(t)\big\} = \big(\varphi^{(\varepsilon)}(s/u_\varepsilon)\big)^{[t/p_\varepsilon]} \sim \exp\big\{-(1 - \varphi^{(\varepsilon)}(s/u_\varepsilon))\, t/p_\varepsilon\big\} \to \exp\{-a(s)t\} \quad \text{as } \varepsilon \to 0, \qquad (20)$$

since $\ln x \sim -(1 - x)$ as $x \to 1$. It follows from (20) that relation (2) is equivalent to (19) and, in this case,

$$\varsigma(s) = a(s), \quad s \ge 0. \qquad (21)$$

This completes the proof of Theorem 1. □
3. Cyclic conditions of convergence

In this section we prove Lemmas 1–7 used in the proof of Theorem 1. These lemmas present a series of so-called cyclic solidarity conditions of convergence connected with the first-rare-event times and, as we think, have their own value.

Conditions C and D obviously imply that the Markov chain $\eta^{(\varepsilon)}_n$ is also ergodic for all $\varepsilon$ small enough. Let us denote by $\pi^{(\varepsilon)}_i$ the stationary distribution of the Markov chain $\eta^{(\varepsilon)}_n$. As is known, stationary distributions are the unique solution of the system

$$\begin{cases} \pi^{(\varepsilon)}_i = \sum_{k=1}^m \pi^{(\varepsilon)}_k p^{(\varepsilon)}_{ki}, & i = 1, \ldots, m, \\ \sum_{k=1}^m \pi^{(\varepsilon)}_k = 1. \end{cases} \qquad (22)$$
Lemma 1. Conditions C and D imply that

$$\pi^{(\varepsilon)}_i \to \pi^{(0)}_i \quad \text{as } \varepsilon \to 0,\ i \in X. \qquad (23)$$

Proof. For every $L \in (0, 1)$ there exists $n$ such that

$$\max_{i \in X} P_i\big\{\tau^{(0)}_j \ge n\big\} < L. \qquad (24)$$

By condition C, for any $i, j \in X$ and $n \ge 1$,

$$P_i\big\{\tau^{(\varepsilon)}_j \ge n\big\} = \sum_{i_0 = i, \ldots, i_n \ne j} \Big(\prod_{k=1}^n p^{(\varepsilon)}_{i_{k-1}, i_k}\Big) \to \sum_{i_0 = i, \ldots, i_n \ne j} \Big(\prod_{k=1}^n p^{(0)}_{i_{k-1}, i_k}\Big) = P_i\big\{\tau^{(0)}_j \ge n\big\} \quad \text{as } \varepsilon \to 0. \qquad (25)$$

Relation (25) means that the random variables $\tau^{(\varepsilon)}_j$ converge weakly to $\tau^{(0)}_j$ as $\varepsilon \to 0$, for any $j \in X$. Relations (24) and (25) imply that there exists $\varepsilon_0$ such that, for all $\varepsilon \le \varepsilon_0$ and $j \in X$,

$$\max_{i \in X} P_i\big\{\tau^{(\varepsilon)}_j \ge n\big\} < L. \qquad (26)$$

Using (26) we get, for $r = 1, 2, \ldots$, $\varepsilon \le \varepsilon_0$ and $i, j \in X$,

$$P_i\big\{\tau^{(\varepsilon)}_j \ge rn\big\} = \sum_k P_i\big\{\tau^{(\varepsilon)}_j \ge (r-1)n,\ \eta^{(\varepsilon)}_{(r-1)n} = k\big\}\, P_k\big\{\tau^{(\varepsilon)}_j \ge n\big\} \le L^r. \qquad (27)$$

Finally, for any $x > 0$, $\varepsilon \le \varepsilon_0$ and $i, j \in X$, using (27), we get

$$\max_{i \in X} P_i\big\{\tau^{(\varepsilon)}_j \ge x\big\} \le \max_{i \in X} P_i\Big\{\tau^{(\varepsilon)}_j \ge \Big[\frac{x}{n}\Big]\, n\Big\} \le L^{[x/n]}. \qquad (28)$$

Relation (28) implies, in an obvious way, that, for any $m \ge 1$ and $i, j \in X$,

$$\sup_{\varepsilon \le \varepsilon_0} E_j\big[\tau^{(\varepsilon)}_i\big]^m < \infty. \qquad (29)$$

It follows from (25) and (29) that, for $i, j \in X$,

$$E_i\big[\tau^{(\varepsilon)}_j\big]^m \to E_i\big[\tau^{(0)}_j\big]^m \quad \text{as } \varepsilon \to 0. \qquad (30)$$

As is known,

$$\pi^{(\varepsilon)}_j = \big[E_j \tau^{(\varepsilon)}_j\big]^{-1}, \quad j \in X. \qquad (31)$$

Relations (30) and (31) imply the asymptotic relation given in Lemma 1. □
Let us define

$$\bar{p}_\varepsilon = \sum_{i=1}^m \pi^{(\varepsilon)}_i p_{i\varepsilon}.$$

Lemma 2. Conditions B, C, and D imply that

$$\bar{p}_\varepsilon \sim p_\varepsilon \quad \text{as } \varepsilon \to 0.$$

Proof. Using Lemma 1, we get

$$\Big|\frac{\bar{p}_\varepsilon - p_\varepsilon}{p_\varepsilon}\Big| = \frac{\big|\sum_{i=1}^m \pi^{(\varepsilon)}_i p_{i\varepsilon} - \sum_{i=1}^m \pi^{(0)}_i p_{i\varepsilon}\big|}{\sum_{i=1}^m \pi^{(0)}_i p_{i\varepsilon}} \le \sum_{i=1}^m \big|\pi^{(\varepsilon)}_i - \pi^{(0)}_i\big| \cdot \frac{p_{i\varepsilon}}{\sum_{j=1}^m \pi^{(0)}_j p_{j\varepsilon}} \le \sum_{i=1}^m \frac{\big|\pi^{(\varepsilon)}_i - \pi^{(0)}_i\big|}{\pi^{(0)}_i} \to 0 \quad \text{as } \varepsilon \to 0. \qquad \square$$

It follows from this relation and condition B that the normalizing function $p_\varepsilon$ can be replaced by $\bar{p}_\varepsilon$ in conditions E, F and in Theorem 1.

The next lemma describes the asymptotic behavior of the probability of occurrence of the rare event during one $i$-cycle.
Lemma 3. Let conditions B, C, and D hold. Then, for every $i \in X$,

$$q_{i\varepsilon} \sim \frac{p_\varepsilon}{\pi^{(0)}_i} \quad \text{as } \varepsilon \to 0. \qquad (32)$$

Proof. Let us define the probabilities of occurrence of the rare event before the first hitting of the state $i$ by the embedded Markov chain, under the condition that the initial state of this Markov chain is $\eta^{(\varepsilon)}_0 = j$,

$$q_{ji\varepsilon} = P_j\big\{\nu_\varepsilon \le \tau^{(\varepsilon)}_i\big\}, \quad i, j \in X.$$

By definition,

$$q_{ii\varepsilon} = q_{i\varepsilon}, \quad i \in X. \qquad (33)$$

The probabilities $q_{ji\varepsilon}$, $j \in X$ satisfy, for every $i \in X$, the following system of linear equations,

$$q_{ji\varepsilon} = p_{j\varepsilon} + \sum_{k \ne i} \bar{p}^{(\varepsilon)}_{jk}\, q_{ki\varepsilon}, \quad j \in X, \qquad (34)$$

where

$$\bar{p}^{(\varepsilon)}_{jk} = P_j\big\{\eta^{(\varepsilon)}_1 = k,\ \zeta^{(\varepsilon)}_1 \notin D_\varepsilon\big\} = p^{(\varepsilon)}_{jk} - P_j\big\{\eta^{(\varepsilon)}_1 = k,\ \zeta^{(\varepsilon)}_1 \in D_\varepsilon\big\}, \quad j, k \in X. \qquad (35)$$
The system (34) can be rewritten, for every $i \in X$, in the following matrix form,

$$\mathbf{q}_{i\varepsilon} = \mathbf{p}_\varepsilon + {}_i\mathbf{P}^{(\varepsilon)} \mathbf{q}_{i\varepsilon}, \qquad (36)$$

where

$$\mathbf{q}_{i\varepsilon} = \begin{bmatrix} q_{1i\varepsilon} \\ \vdots \\ q_{mi\varepsilon} \end{bmatrix}, \qquad \mathbf{p}_\varepsilon = \begin{bmatrix} p_{1\varepsilon} \\ \vdots \\ p_{m\varepsilon} \end{bmatrix},$$

and

$${}_i\mathbf{P}^{(\varepsilon)} = \begin{bmatrix} \bar{p}^{(\varepsilon)}_{11} & \ldots & \bar{p}^{(\varepsilon)}_{1(i-1)} & 0 & \bar{p}^{(\varepsilon)}_{1(i+1)} & \ldots & \bar{p}^{(\varepsilon)}_{1m} \\ \vdots & & \vdots & \vdots & \vdots & & \vdots \\ \bar{p}^{(\varepsilon)}_{m1} & \ldots & \bar{p}^{(\varepsilon)}_{m(i-1)} & 0 & \bar{p}^{(\varepsilon)}_{m(i+1)} & \ldots & \bar{p}^{(\varepsilon)}_{mm} \end{bmatrix}.$$

Let us show that the matrix $\mathbf{I} - {}_i\mathbf{P}^{(\varepsilon)}$ has an inverse for all $\varepsilon$ small enough, and that, therefore, the solution of the system (36) has the following form, for every $i \in X$,

$$\mathbf{q}_{i\varepsilon} = \big[\mathbf{I} - {}_i\mathbf{P}^{(\varepsilon)}\big]^{-1} \mathbf{p}_\varepsilon. \qquad (37)$$
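Numerically, (37) is just a linear solve; a minimal sketch follows. The two-state data are illustrative assumptions, and the sketch additionally assumes that the flag $\zeta^{(\varepsilon)}_1$ is independent of the next state, so that $\bar p^{(\varepsilon)}_{jk} = p^{(\varepsilon)}_{jk}(1 - p_{j\varepsilon})$; the paper does not require this.

```python
import numpy as np

P = np.array([[0.3, 0.7], [0.6, 0.4]])        # p_jk^(eps), illustrative values
p_rare = np.array([1e-3, 2e-3])               # p_{j,eps}, illustrative values
P_bar = P * (1 - p_rare)[:, None]             # pbar_jk under the independence assumption

i = 0                                         # compute q_{1,eps} (state 1 has index 0)
iP = P_bar.copy()
iP[:, i] = 0.0                                # taboo matrix iP^(eps): i-th column set to zero
q = np.linalg.solve(np.eye(2) - iP, p_rare)   # solution (37) of the system (36)
print("q_{1,eps} =", q[i])

# comparison with the asymptotics (32) proved below: q_{i,eps} ~ p_eps / pi_i^{(0)}
pi = np.array([6.0, 7.0]) / 13.0              # stationary distribution of P, solves (22)
print("p_eps / pi_1 =", pi @ p_rare / pi[i])
```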
Let us also introduce the matrix

$${}_i\mathbf{P}^{(0)} = \begin{bmatrix} p^{(0)}_{11} & \ldots & p^{(0)}_{1(i-1)} & 0 & p^{(0)}_{1(i+1)} & \ldots & p^{(0)}_{1m} \\ \vdots & & \vdots & \vdots & \vdots & & \vdots \\ p^{(0)}_{m1} & \ldots & p^{(0)}_{m(i-1)} & 0 & p^{(0)}_{m(i+1)} & \ldots & p^{(0)}_{mm} \end{bmatrix}.$$

By conditions B and C,

$${}_i\mathbf{P}^{(\varepsilon)} \to {}_i\mathbf{P}^{(0)} \quad \text{as } \varepsilon \to 0. \qquad (38)$$

Let us introduce the random variable $\delta^{(\varepsilon)}_{ik}$ which is the number of visits of the embedded Markov chain $\eta^{(\varepsilon)}_n$ to the state $k$ up to the first visit to the state $i$,

$$\delta^{(\varepsilon)}_{ik} = \sum_{n=1}^{\tau^{(\varepsilon)}_i} \chi\big(\eta^{(\varepsilon)}_{n-1} = k\big), \quad i, k \in X.$$

As is known, due to the ergodicity of the Markov chain $\eta^{(0)}_n$, $E_j\delta^{(0)}_{ik} < \infty$ for all $j, i, k \in X$. Moreover, for every $i \in X$, there exists the inverse matrix

$$\big[\mathbf{I} - {}_i\mathbf{P}^{(0)}\big]^{-1} = \big\|E_j\delta^{(0)}_{ik}\big\|. \qquad (39)$$
This means that (a) $\det\big(\mathbf{I} - {}_i\mathbf{P}^{(0)}\big) \ne 0$. Thus, by relation (38), (b) $\det\big(\mathbf{I} - {}_i\mathbf{P}^{(\varepsilon)}\big) \ne 0$ for all $\varepsilon$ small enough. Since the elements of the inverse matrix $\big[\mathbf{I} - {}_i\mathbf{P}^{(\varepsilon)}\big]^{-1}$ are continuous rational functions of the elements of the matrix $\mathbf{I} - {}_i\mathbf{P}^{(\varepsilon)}$ with non-zero denominator $\det(\mathbf{I} - {}_i\mathbf{P}^{(\varepsilon)})$, we get

$$\big[\mathbf{I} - {}_i\mathbf{P}^{(\varepsilon)}\big]^{-1} \to \big[\mathbf{I} - {}_i\mathbf{P}^{(0)}\big]^{-1} \quad \text{as } \varepsilon \to 0. \qquad (40)$$

Let us also introduce the random variable $\delta_{ik\varepsilon}$ which is the number of visits of the embedded Markov chain $\eta^{(\varepsilon)}_n$ to the state $k$ before the first visit to the state $i$ or the occurrence of the first rare event,

$$\delta_{ik\varepsilon} = \sum_{n=1}^{\tau^{(\varepsilon)}_i \wedge \nu_\varepsilon} \chi\big(\eta^{(\varepsilon)}_{n-1} = k\big), \quad i, k \in X.$$

The matrix

$${}_i\mathbf{P}^{(\varepsilon)n} = \big\|P_j\big\{\eta^{(\varepsilon)}_n = k,\ \nu_\varepsilon \wedge \tau^{(\varepsilon)}_i > n\big\}\big\|, \quad n \ge 1,$$

and, therefore,

$$\big[\mathbf{I} - {}_i\mathbf{P}^{(\varepsilon)}\big]^{-1} = \mathbf{I} + {}_i\mathbf{P}^{(\varepsilon)} + \big({}_i\mathbf{P}^{(\varepsilon)}\big)^2 + \cdots = \|E_j\delta_{ik\varepsilon}\|. \qquad (41)$$

Using relations (33) and (41) we get the following formula,

$$q_{i\varepsilon} = \sum_{k=1}^m E_i\delta_{ik\varepsilon}\, p_{k\varepsilon}, \qquad (42)$$

and

$$E_i\delta_{ik\varepsilon} \to E_i\delta^{(0)}_{ik} \quad \text{as } \varepsilon \to 0. \qquad (43)$$

As is known, since the Markov chain $\eta^{(0)}_n$ is ergodic, the following formula holds,

$$E_i\delta^{(0)}_{ik} = \frac{\pi^{(0)}_k}{\pi^{(0)}_i}, \quad i, k \in X. \qquad (44)$$
Using formulas (42) and (44) we get

$$\frac{\Big|q_{i\varepsilon} - \frac{p_\varepsilon}{\pi^{(0)}_i}\Big|}{\frac{p_\varepsilon}{\pi^{(0)}_i}} \le \sum_{k=1}^m \Big|E_i\delta_{ik\varepsilon} - \frac{\pi^{(0)}_k}{\pi^{(0)}_i}\Big| \cdot \frac{\pi^{(0)}_i p_{k\varepsilon}}{\sum_{j=1}^m \pi^{(0)}_j p_{j\varepsilon}} \le \sum_{k=1}^m \Big|E_i\delta_{ik\varepsilon} - \frac{\pi^{(0)}_k}{\pi^{(0)}_i}\Big| \cdot \frac{\pi^{(0)}_i}{\pi^{(0)}_k} \to 0 \quad \text{as } \varepsilon \to 0. \qquad (45)$$

Relation (45) implies the asymptotic relation (32). The proof is complete. □
Lemma 4. Let conditions A, B, C, D hold. Then, for any normalization function $0 < u_\varepsilon \to \infty$ as $\varepsilon \to 0$, and for $i \in X$,

$$\tilde{\psi}_{i\varepsilon}(s/u_\varepsilon) \to 1 \quad \text{as } \varepsilon \to 0,\ s \ge 0. \qquad (46)$$

Proof. Let us introduce the Laplace transforms,

$$\tilde{\psi}_{ji\varepsilon}(s) = E_j \exp\big\{-s\tilde{\beta}_{i\varepsilon}\big\}\, \chi\big(\nu_\varepsilon \le \tau^{(\varepsilon)}_i\big), \quad s \ge 0,\ i, j \in X.$$

Obviously,

$$\tilde{\psi}_{i\varepsilon}(s) = \frac{\tilde{\psi}_{ii\varepsilon}(s)}{q_{i\varepsilon}}, \quad s \ge 0,\ i \in X. \qquad (47)$$

Let us also introduce the Laplace transforms,

$$\bar{p}^{(\varepsilon)}_{jk}(s) = E_j e^{-s\kappa^{(\varepsilon)}_1} \chi\big(\zeta^{(\varepsilon)}_1 \notin D_\varepsilon,\ \eta^{(\varepsilon)}_1 = k\big), \quad s \ge 0,\ j, k \in X,$$

and

$$p^{(\varepsilon)}_j(s) = E_j e^{-s\kappa^{(\varepsilon)}_1} \chi\big(\zeta^{(\varepsilon)}_1 \in D_\varepsilon\big) = \hat{\varphi}_{j\varepsilon}(s)\, p_{j\varepsilon}, \quad s \ge 0,\ j \in X,$$

where

$$\hat{\varphi}_{j\varepsilon}(s) = E_j\big\{e^{-s\kappa^{(\varepsilon)}_1} \,/\, \zeta^{(\varepsilon)}_1 \in D_\varepsilon\big\}, \quad s \ge 0,\ j \in X.$$
The functions $\tilde{\psi}_{ji\varepsilon}(s/u_\varepsilon)$, $j \in X$ satisfy, for every $s \ge 0$ and $i \in X$, the following system of linear equations,

$$\tilde{\psi}_{ji\varepsilon}(s/u_\varepsilon) = p^{(\varepsilon)}_j(s/u_\varepsilon) + \sum_{k \ne i} \bar{p}^{(\varepsilon)}_{jk}(s/u_\varepsilon)\, \tilde{\psi}_{ki\varepsilon}(s/u_\varepsilon), \quad j \in X. \qquad (48)$$

The system (48) can be rewritten in the following equivalent matrix form,

$$\tilde{\boldsymbol{\Psi}}^{(\varepsilon)}_i(s/u_\varepsilon) = \mathbf{p}^{(\varepsilon)}(s/u_\varepsilon) + {}_i\mathbf{P}^{(\varepsilon)}(s/u_\varepsilon)\, \tilde{\boldsymbol{\Psi}}^{(\varepsilon)}_i(s/u_\varepsilon), \qquad (49)$$

where

$$\tilde{\boldsymbol{\Psi}}^{(\varepsilon)}_i(s) = \begin{bmatrix} \tilde{\psi}_{1i\varepsilon}(s) \\ \vdots \\ \tilde{\psi}_{mi\varepsilon}(s) \end{bmatrix}, \qquad \mathbf{p}^{(\varepsilon)}(s) = \begin{bmatrix} p^{(\varepsilon)}_1(s) \\ \vdots \\ p^{(\varepsilon)}_m(s) \end{bmatrix},$$

and

$${}_i\mathbf{P}^{(\varepsilon)}(s) = \begin{bmatrix} \bar{p}^{(\varepsilon)}_{11}(s) & \ldots & \bar{p}^{(\varepsilon)}_{1(i-1)}(s) & 0 & \bar{p}^{(\varepsilon)}_{1(i+1)}(s) & \ldots & \bar{p}^{(\varepsilon)}_{1m}(s) \\ \vdots & & \vdots & \vdots & \vdots & & \vdots \\ \bar{p}^{(\varepsilon)}_{m1}(s) & \ldots & \bar{p}^{(\varepsilon)}_{m(i-1)}(s) & 0 & \bar{p}^{(\varepsilon)}_{m(i+1)}(s) & \ldots & \bar{p}^{(\varepsilon)}_{mm}(s) \end{bmatrix}.$$
Let us show that, for every $s \ge 0$ and $i \in X$, the matrix $\mathbf{I} - {}_i\mathbf{P}^{(\varepsilon)}(s/u_\varepsilon)$ has an inverse for all $\varepsilon$ small enough, and that, therefore, the solution to the system (49) has the following form,

$$\tilde{\boldsymbol{\Psi}}^{(\varepsilon)}_i(s/u_\varepsilon) = \big[\mathbf{I} - {}_i\mathbf{P}^{(\varepsilon)}(s/u_\varepsilon)\big]^{-1} \mathbf{p}^{(\varepsilon)}(s/u_\varepsilon). \qquad (50)$$

Conditions A and B imply, in an obvious way, that, for every $s \ge 0$ and $j, k \in X$,

$$\bar{p}^{(\varepsilon)}_{jk}(s/u_\varepsilon) - E_j\chi\big(\eta^{(\varepsilon)}_1 = k\big) = E_j \exp\big\{-s\kappa^{(\varepsilon)}_1/u_\varepsilon\big\}\, \chi\big(\zeta^{(\varepsilon)}_1 \notin D_\varepsilon,\ \eta^{(\varepsilon)}_1 = k\big) - E_j\chi\big(\eta^{(\varepsilon)}_1 = k\big) \to 0 \quad \text{as } \varepsilon \to 0. \qquad (51)$$

Since

$$E_j\chi\big(\eta^{(\varepsilon)}_1 = k\big) = p^{(\varepsilon)}_{jk},$$

we conclude that

$$\bar{p}^{(\varepsilon)}_{jk}(s/u_\varepsilon) \to p^{(0)}_{jk} \quad \text{as } \varepsilon \to 0. \qquad (52)$$

Thus (c) ${}_i\mathbf{P}^{(\varepsilon)}(s/u_\varepsilon) \to {}_i\mathbf{P}^{(0)}$ as $\varepsilon \to 0$, for every $s \ge 0$ and $i \in X$. It was shown in the proof of Lemma 3 that, under condition D, the inverse matrix $\big[\mathbf{I} - {}_i\mathbf{P}^{(0)}\big]^{-1}$ exists. Thus, (c) implies that (d) there exists, for every $s \ge 0$ and $i \in X$, the inverse matrix $\big[\mathbf{I} - {}_i\mathbf{P}^{(\varepsilon)}(s/u_\varepsilon)\big]^{-1}$ for all $\varepsilon$ small enough. Moreover, for every $s \ge 0$ and $i \in X$,

$$\big[\mathbf{I} - {}_i\mathbf{P}^{(\varepsilon)}(s/u_\varepsilon)\big]^{-1} = \big\|\Delta^{(\varepsilon)}_{jik}(s)\big\| \to \big[\mathbf{I} - {}_i\mathbf{P}^{(0)}\big]^{-1} = \big\|E_j\delta^{(0)}_{ik}\big\| \quad \text{as } \varepsilon \to 0. \qquad (53)$$

Taking into account formulas (47), (50) and the definition of $p^{(\varepsilon)}_j(s)$, we get, for every $s \ge 0$ and $i \in X$,

$$\tilde{\psi}_{ii\varepsilon}(s/u_\varepsilon) = \sum_{k=1}^m \Delta^{(\varepsilon)}_{iik}(s)\, \hat{\varphi}_{k\varepsilon}(s/u_\varepsilon)\, p_{k\varepsilon}. \qquad (54)$$

Condition A implies that, for every $s \ge 0$ and $k \in X$,

$$\hat{\varphi}_{k\varepsilon}(s/u_\varepsilon) \to 1 \quad \text{as } \varepsilon \to 0. \qquad (55)$$

Indeed, using condition A, we get, for any $v > 0$,

$$0 \le \lim_{\varepsilon \to 0}\big(1 - \hat{\varphi}_{k\varepsilon}(s/u_\varepsilon)\big) \le 1 - \exp\{-sv\} + \lim_{\varepsilon \to 0} P_k\big\{\kappa^{(\varepsilon)}_1 > v u_\varepsilon \,/\, \zeta^{(\varepsilon)}_1 \in D_\varepsilon\big\} = 1 - \exp\{-sv\} \to 0 \quad \text{as } v \to 0. \qquad (56)$$
Relations (53) and (55) imply that, for every $s \ge 0$ and $i, k \in X$,

$$\Delta^{(\varepsilon)}_{iik}(s)\, \hat{\varphi}_{k\varepsilon}(s/u_\varepsilon) \to E_i\delta^{(0)}_{ik} = \frac{\pi^{(0)}_k}{\pi^{(0)}_i} \quad \text{as } \varepsilon \to 0. \qquad (57)$$

Using relation (57) we get, for every $s \ge 0$ and $i \in X$,

$$\frac{\Big|\tilde{\psi}_{ii\varepsilon}(s/u_\varepsilon) - \frac{p_\varepsilon}{\pi^{(0)}_i}\Big|}{\frac{p_\varepsilon}{\pi^{(0)}_i}} \le \sum_{k=1}^m \Big|\Delta^{(\varepsilon)}_{iik}(s)\hat{\varphi}_{k\varepsilon}(s/u_\varepsilon) - \frac{\pi^{(0)}_k}{\pi^{(0)}_i}\Big| \cdot \frac{\pi^{(0)}_i p_{k\varepsilon}}{\sum_{j=1}^m \pi^{(0)}_j p_{j\varepsilon}} \le \sum_{k=1}^m \Big|\Delta^{(\varepsilon)}_{iik}(s)\hat{\varphi}_{k\varepsilon}(s/u_\varepsilon) - \frac{\pi^{(0)}_k}{\pi^{(0)}_i}\Big| \cdot \frac{\pi^{(0)}_i}{\pi^{(0)}_k} \to 0 \quad \text{as } \varepsilon \to 0. \qquad (58)$$

Relation (58) means that, for every $s \ge 0$ and $i \in X$,

$$\tilde{\psi}_{ii\varepsilon}(s/u_\varepsilon) \sim \frac{p_\varepsilon}{\pi^{(0)}_i} \quad \text{as } \varepsilon \to 0. \qquad (59)$$

Finally, using relation (32) given in Lemma 3, formula (47), and relation (59), we get, for every $s \ge 0$ and $i \in X$,

$$\tilde{\psi}_{i\varepsilon}(s/u_\varepsilon) = \frac{\tilde{\psi}_{ii\varepsilon}(s/u_\varepsilon)}{q_{i\varepsilon}} \to 1 \quad \text{as } \varepsilon \to 0. \qquad (60)$$

The proof is complete. □
Lemma 5. Let conditions A, B, C and D hold. Then, for any normalization function $0 < u_\varepsilon \to \infty$ as $\varepsilon \to 0$, and for $i \in X$,

$$\hat{\psi}_{i\varepsilon}(s/u_\varepsilon) \to 1 \quad \text{as } \varepsilon \to 0,\ s \ge 0. \qquad (61)$$

Proof. The following representation can be written, for every $i \in X$,

$$\begin{aligned}
\hat{\psi}_{i\varepsilon}(s) &= q^{-1}_{i\varepsilon} E_i \exp\big\{-s\beta^{(\varepsilon)}_i\big\}\, \chi\big(\nu_\varepsilon \le \tau^{(\varepsilon)}_i\big) \\
&= \sum_{k=1}^m q^{-1}_{i\varepsilon} E_i \exp\Big\{-s\Big(\sum_{n=1}^{\nu_\varepsilon}\kappa^{(\varepsilon)}_n + \sum_{n=\nu_\varepsilon+1}^{\tau^{(\varepsilon)}_i}\kappa^{(\varepsilon)}_n\Big)\Big\}\, \chi\big(\nu_\varepsilon \le \tau^{(\varepsilon)}_i,\ \eta^{(\varepsilon)}_{\nu_\varepsilon} = k\big) \\
&= \sum_{k=1}^m q^{-1}_{i\varepsilon} E_i \exp\{-s\xi_\varepsilon\}\, \chi\big(\nu_\varepsilon \le \tau^{(\varepsilon)}_i,\ \eta^{(\varepsilon)}_{\nu_\varepsilon} = k\big)\, \psi^{(\varepsilon)}_k(s).
\end{aligned}$$

By condition A, $\psi^{(\varepsilon)}_k(s/u_\varepsilon) \to 1$ as $\varepsilon \to 0$ for every $s \ge 0$ and $k \in X$. Thus, for every $s \ge 0$ and $i \in X$,

$$\begin{aligned}
\hat{\psi}_{i\varepsilon}(s/u_\varepsilon) &\sim \sum_{k=1}^m q^{-1}_{i\varepsilon} E_i \exp\{-s\xi_\varepsilon/u_\varepsilon\}\, \chi\big(\nu_\varepsilon \le \tau^{(\varepsilon)}_i,\ \eta^{(\varepsilon)}_{\nu_\varepsilon} = k\big) \\
&= q^{-1}_{i\varepsilon} E_i \exp\{-s\xi_\varepsilon/u_\varepsilon\}\, \chi\big(\nu_\varepsilon \le \tau^{(\varepsilon)}_i\big) = \tilde{\psi}_{i\varepsilon}(s/u_\varepsilon) \to 1 \quad \text{as } \varepsilon \to 0. \qquad (62)
\end{aligned}$$

The proof is complete. □
In what follows we assume that $\eta^{(\varepsilon)}_0 = j$ and mark the corresponding processes based on the Markov renewal process $(\eta^{(\varepsilon)}_n, \kappa^{(\varepsilon)}_n, \zeta^{(\varepsilon)}_n)$ by the index $j$ in order to distinguish the cases with different initial states $\eta^{(\varepsilon)}_0$.

Let us introduce, for every $i, j \in X$, the following "cyclic" stochastic process,

$$\xi_{ji\varepsilon}(t) = \sum_{n=1}^{[t q^{-1}_{i\varepsilon}]+1} \frac{\beta^{(\varepsilon)}_i(n)}{u_\varepsilon}, \quad t \ge 0. \qquad (63)$$

Note that $\xi_{ji\varepsilon}(t)$ is a step sum-process with independent increments. Indeed, by the definition, the random variables $\beta^{(\varepsilon)}_i(n)$, $n = 1, 2, \ldots$ are independent and

$$E \exp\big\{-s\beta^{(\varepsilon)}_i(n)\big\} = \begin{cases} \psi^{(\varepsilon)}_{ji}(s) & \text{for } n = 1, \\ \psi^{(\varepsilon)}_{ii}(s) & \text{for } n \ge 2, \end{cases} \qquad (64)$$

where

$$\psi^{(\varepsilon)}_{ji}(s) = E_j \exp\big\{-s\beta^{(\varepsilon)}_i\big\}, \quad s \ge 0,\ i, j \in X.$$

We are interested in proving some solidarity statements concerning two asymptotic relations.

The first one is the following relation of weak convergence,

$$\xi_{ji\varepsilon}(t),\ t \ge 0 \ \Rightarrow\ \xi(t),\ t \ge 0 \quad \text{as } \varepsilon \to 0, \qquad (65)$$

where (e) $\xi(t)$, $t \ge 0$ is a non-zero, non-decreasing and stochastically continuous process with the initial value $\xi(0) = 0$.

The second one is the following asymptotic relation,

$$\frac{1 - \psi^{(\varepsilon)}_i(s/u_\varepsilon)}{q_{i\varepsilon}} \to \varsigma(s) \quad \text{as } \varepsilon \to 0,\ s \ge 0, \qquad (66)$$

where (f) $\varsigma(s) > 0$ for $s > 0$.
The following lemma presents a variant of the so-called solidarity proposition concerning weak convergence for the cyclic step sum-processes $\xi_{ji\varepsilon}(t)$.

Lemma 6. Let conditions B, C, D hold and $\eta^{(\varepsilon)}_0 = j$. Then: (α) the assumption that the relation of weak convergence (65) holds for some $i, j \in X$ implies that this relation holds for every $i, j \in X$; (β) the limit process $\xi(t)$, $t \ge 0$ in (65) is the same for any $i, j \in X$; (γ) $\xi(t)$, $t \ge 0$ is a non-zero and non-decreasing homogeneous process with independent increments; (δ) relation (65) holds for given $i, j \in X$ if and only if relation (66) holds for the same $i$; (ε) the limit function $\varsigma(s)$ in (66) is the same for any $i \in X$; (ζ) $\varsigma(s)$ is a cumulant of the process $\xi(t)$, $t \ge 0$, i.e. $E e^{-s\xi(t)} = e^{-\varsigma(s)t}$, $s, t \ge 0$; (η) conditions E and F (with the function $p_\varepsilon$ replaced by $q_{i\varepsilon}$ in these conditions), imposed on the distribution of the random variable $\beta^{(\varepsilon)}_i$, are necessary and sufficient for relation (66) to hold; (θ) the cumulant $\varsigma(s) = a(s)$ in this case.

Proof. Let us first prove that (g) the assumption that (65) holds for given $i, j \in X$ implies that this relation holds for the same $i$ and every $j \in X$, and, moreover, the limit process $\xi(t)$, $t \ge 0$ does not depend on $j$.

Indeed, the pre-limit process $\xi_{ji\varepsilon}(t)$ can be represented in the form of the following sum,

$$\xi_{ji\varepsilon}(t) = \beta^{(\varepsilon)}_i/u_\varepsilon + \xi'_{i\varepsilon}(t), \quad t \ge 0, \qquad (67)$$

where

$$\xi'_{i\varepsilon}(t) = \sum_{n=2}^{[t q^{-1}_{i\varepsilon}]+1} \beta^{(\varepsilon)}_i(n)/u_\varepsilon, \quad t \ge 0.$$

The random variable $\beta^{(\varepsilon)}_i/u_\varepsilon$ and the process $\xi'_{i\varepsilon}(t)$, $t \ge 0$ are independent. The distribution of the random variable $\beta^{(\varepsilon)}_i/u_\varepsilon$ depends on $j$, while the finite-dimensional distributions of the process $\xi'_{i\varepsilon}(t)$, $t \ge 0$ do not depend on $j$.

Conditions B–F readily imply that $\beta^{(\varepsilon)}_i/u_\varepsilon \xrightarrow{P} 0$ as $\varepsilon \to 0$, for every $j \in X$, or, equivalently, (g1) the random variables $\xi_{ji\varepsilon}(t) - \xi'_{i\varepsilon}(t) \xrightarrow{P} 0$ as $\varepsilon \to 0$, for every $t > 0$ and $j \in X$. Thus, the assumption that (65) holds for given $i, j \in X$ implies weak convergence of the process $\xi'_{i\varepsilon}(t)$, $t \ge 0$ to the same limit process. This convergence, due to (g1), implies that (g2) the processes $\xi_{ji\varepsilon}(t)$, $t \ge 0$ converge weakly to the same limit process, for every $j \in X$; moreover, the finite-dimensional distributions of the limit process do not depend on $j$ since it is so for the pre-limit process $\xi'_{i\varepsilon}(t)$, $t \ge 0$.

Let us now prove that (h) the assumption that (65) holds for given $i, j \in X$ implies that this relation holds for the same $j$ and every $i \in X$, and, moreover, the limit process $\xi(t)$, $t \ge 0$ does not depend on $i$.

Note that the two partial solidarity propositions (g) and (h), formulated above, imply the solidarity statements (α) and (β) formulated in Lemma 6.
To prove the proposition (h), let us introduce, for $j \in X$, the following step sum-processes based on the sojourn times of the semi-Markov process $\eta^{(\varepsilon)}(t)$,

$$\xi_{j\varepsilon}(t) = \sum_{n=1}^{[t p^{-1}_\varepsilon]} \frac{\kappa^{(\varepsilon)}_n}{u_\varepsilon}, \quad t \ge 0. \qquad (68)$$

Let us also introduce, for $i, j \in X$, the processes $\mu_{ji\varepsilon}(t)$ which count (after normalization by $p_\varepsilon$) the number of transitions of the semi-Markov process $\eta^{(\varepsilon)}(t)$ that occur in $[t q^{-1}_{i\varepsilon}] + 1$ cycles,

$$\mu_{ji\varepsilon}(t) = p_\varepsilon\, \tau^{(\varepsilon)}_i\big([t q^{-1}_{i\varepsilon}] + 1\big), \quad t \ge 0.$$

The process $\xi_{ji\varepsilon}(t)$ can be represented, for every $i, j \in X$, in the form of a superposition of the processes introduced above,

$$\xi_{ji\varepsilon}(t) = \xi_{j\varepsilon}(\mu_{ji\varepsilon}(t)), \quad t \ge 0. \qquad (69)$$

Let us now consider the following relation of weak convergence for the processes $\xi_{j\varepsilon}(t)$,

$$\xi_{j\varepsilon}(t),\ t \ge 0 \ \Rightarrow\ \xi(t),\ t \ge 0 \quad \text{as } \varepsilon \to 0, \qquad (70)$$

where (e) $\xi(t)$, $t \ge 0$ is a non-zero, non-decreasing, and stochastically continuous process with the initial value $\xi(0) = 0$.

Let us now prove that (h1) relation (65) holds, for given $i, j \in X$, if and only if relation (70) holds, for the same $j$; moreover, the limit process $\xi(t)$, $t \ge 0$ can be taken the same in both relations.

Note that (h1) implies (h). Indeed, due to its "iff" character, relation (70) for given $j \in X$ implies that (65) should hold for the same $j$ and every $i \in X$, and with the same limit process. Moreover, the limit process in (70) does not depend on $i$ since the pre-limit process $\xi_{j\varepsilon}(t)$, $t \ge 0$ does not depend on $i$.

We display the proof of (h1) for one-dimensional distributions. The proof for multi-dimensional distributions is similar.

Let us first prove that (h2) the weak convergence of the random variables $\xi_{j\varepsilon}(t)$ in (70), assumed to hold for every $t > 0$ and given $j \in X$, implies the weak convergence of the random variables $\xi_{ji\varepsilon}(t)$ in (65) for every $t > 0$, the same $j$ and every $i \in X$; moreover, the limit random variable $\xi(t)$ can be taken the same in both relations.

The process $\mu_{ji\varepsilon}(t)$ can be represented, for every $i, j \in X$, in the form of a sum-process with independent increments,

$$\mu_{ji\varepsilon}(t) = p_\varepsilon\big([q^{-1}_{i\varepsilon}] + 1\big) \sum_{n=1}^{[t q^{-1}_{i\varepsilon}]+1} \frac{\alpha^{(\varepsilon)}_i(n)}{[q^{-1}_{i\varepsilon}] + 1}, \quad t \ge 0, \qquad (71)$$
where $\alpha^{(\varepsilon)}_i(n) = \tau^{(\varepsilon)}_i(n) - \tau^{(\varepsilon)}_i(n-1)$, $n = 1, 2, \ldots$. Indeed, the random variables $\alpha^{(\varepsilon)}_i(n)$, $n \ge 1$ are independent and

$$E \exp\big\{-s\alpha^{(\varepsilon)}_i(n)\big\} = \begin{cases} \vartheta^{(\varepsilon)}_{ji}(s) & \text{for } n = 1, \\ \vartheta^{(\varepsilon)}_{ii}(s) & \text{for } n \ge 2, \end{cases} \qquad (72)$$

where

$$\vartheta^{(\varepsilon)}_{ji}(s) = E_j \exp\big\{-s\alpha^{(\varepsilon)}_i(1)\big\}, \quad s \ge 0,\ i, j \in X.$$

As was pointed out above, under conditions C and D the Markov chain $\eta^{(\varepsilon)}_n$ is ergodic for all $\varepsilon$ small enough and

$$E_i\alpha^{(\varepsilon)}_i(1) = 1/\pi^{(\varepsilon)}_i \to E_i\alpha^{(0)}_i(1) = 1/\pi^{(0)}_i \quad \text{as } \varepsilon \to 0.$$

Moreover, under conditions C and D there exists the limit

$$\lim_{\varepsilon \to 0} E_i\big(\alpha^{(\varepsilon)}_i\big)^2 < \infty.$$

Thus, using the standard weak law of large numbers for i.i.d. random variables $\alpha^{(\varepsilon)}_i(n)$ with bounded variance, the asymptotic relation (32) given in Lemma 3, and the representation (71), we get, for every $t > 0$ and $i, j \in X$,

$$\mu_{ji\varepsilon}(t) \xrightarrow{P} \pi^{(0)}_i\, t\, E_i\alpha^{(0)}_i(1) = t \quad \text{as } \varepsilon \to 0. \qquad (73)$$
Let us choose an arbitrary $t > 0$ and a sequence $0 < c_n < t$, $n = 1, 2, \ldots$ such that $c_n \to 0$ as $n \to \infty$.

By the definition, the processes $\xi_{ji\varepsilon}(t)$, $\xi_{j\varepsilon}(t)$, and $\mu_{ji\varepsilon}(t)$ are non-negative and non-decreasing. Taking into account this fact and the representation (69), we get, for every $t > 0$, $i, j \in X$, any real-valued $x$, and $n \ge 1$,

$$P\{\xi_{ji\varepsilon}(t) > x\} = P\{\xi_{ji\varepsilon}(t) > x,\ \mu_{ji\varepsilon}(t) \le t + c_n\} + P\{\xi_{ji\varepsilon}(t) > x,\ \mu_{ji\varepsilon}(t) > t + c_n\} \le P\{\xi_{j\varepsilon}(t + c_n) > x\} + P\{\mu_{ji\varepsilon}(t) > t + c_n\}. \qquad (74)$$

Let $U_t$ be the set of continuity points of the distribution functions of the limit random variables $\xi(t)$ and $\xi(t \pm c_n)$, $n = 1, 2, \ldots$ in (70). This set is the real line $R$ except at most a countable set of points.

Using the estimate (74), relation (73), and the assumptions that relation (70) holds for one-dimensional distributions, for every $t > 0$ and given $j \in X$, and that the limit process $\xi(t)$ in (70) is stochastically continuous, we get, for every $t > 0$, the same $j$, and every $i \in X$,

$$\limsup_{\varepsilon \to 0} P\{\xi_{ji\varepsilon}(t) > x\} \le \lim_{n \to \infty} \limsup_{\varepsilon \to 0}\big(P\{\xi_{j\varepsilon}(t + c_n) > x\} + P\{\mu_{ji\varepsilon}(t) > t + c_n\}\big) = \lim_{n \to \infty} P\{\xi(t + c_n) > x\} = P\{\xi(t) > x\}, \quad x \in U_t, \qquad (75)$$

or, equivalently,

$$\liminf_{\varepsilon \to 0} P\{\xi_{ji\varepsilon}(t) \le x\} \ge P\{\xi(t) \le x\}, \quad x \in U_t. \qquad (76)$$

We can also employ the following estimate, for every $t > 0$, $i, j \in X$, any real-valued $x$, and $n \ge 1$,

$$P\{\xi_{ji\varepsilon}(t) \le x\} \le P\{\xi_{j\varepsilon}(t - c_n) \le x\} + P\{\mu_{ji\varepsilon}(t) \le t - c_n\}. \qquad (77)$$

Then, using the estimate (77), relation (73), and the assumptions that relation (70) holds for one-dimensional distributions, for every $t > 0$ and given $j \in X$, and that the limit process $\xi(t)$ in (70) is stochastically continuous, we get, for every $t > 0$, the same $j$, and every $i \in X$,

$$\limsup_{\varepsilon \to 0} P\{\xi_{ji\varepsilon}(t) \le x\} \le P\{\xi(t) \le x\}, \quad x \in U_t. \qquad (78)$$

Relations (76) and (78) imply that $P\{\xi_{ji\varepsilon}(t) \le x\} \to P\{\xi(t) \le x\}$ as $\varepsilon \to 0$, $x \in U_t$. Since the set $U_t$ is dense in $R$, this relation implies that, for every $t > 0$, given $j$ (for which relation (70) is assumed to hold) and every $i \in X$,

$$\xi_{ji\varepsilon}(t) \Rightarrow \xi(t) \quad \text{as } \varepsilon \to 0. \qquad (79)$$
Let us now prove that (h3) the weak convergence of the random variables $\xi_{ji\varepsilon}(t)$ in (65), assumed to hold for every $t > 0$ and given $i, j \in X$, implies the weak convergence of the random variables $\xi_{j\varepsilon}(t)$ in (70) for every $t > 0$ and the same $j$; moreover, the limit random variable $\xi(t)$ can be taken the same in both relations.

Let us choose an arbitrary $t > 0$ and a sequence $0 < d_n < t$, $n = 1, 2, \ldots$ such that $d_n \to 0$ as $n \to \infty$.

Using again the facts that the processes $\xi_{ji\varepsilon}(t)$, $\xi_{j\varepsilon}(t)$, and $\mu_{ji\varepsilon}(t)$ are non-negative and non-decreasing, and the representation (69), we get, for every $t > 0$, given $i, j \in X$, any real-valued $x$, and $n \ge 1$,

$$P\{\xi_{j\varepsilon}(t) > x\} = P\{\xi_{j\varepsilon}(t) > x,\ \mu_{ji\varepsilon}(t + d_n) > t\} + P\{\xi_{j\varepsilon}(t) > x,\ \mu_{ji\varepsilon}(t + d_n) \le t\} \le P\{\xi_{ji\varepsilon}(t + d_n) > x\} + P\{\mu_{ji\varepsilon}(t + d_n) \le t\}. \qquad (80)$$

Let $V_t$ be the set of continuity points of the distribution functions of the limit random variables $\xi(t)$ and $\xi(t \pm d_n)$, $n = 1, 2, \ldots$ in (65). This set is the real line $R$ except at most a countable set of points.

Using the estimate (80), relation (73), and the assumptions that relation (65) holds for one-dimensional distributions, for every $t > 0$ and given $i, j \in X$, and that the limit process $\xi(t)$ in (65) is stochastically continuous, we get, for every $t > 0$ and the same $j$,

$$\limsup_{\varepsilon \to 0} P\{\xi_{j\varepsilon}(t) > x\} \le \lim_{n \to \infty} \limsup_{\varepsilon \to 0}\big(P\{\xi_{ji\varepsilon}(t + d_n) > x\} + P\{\mu_{ji\varepsilon}(t + d_n) \le t\}\big) = \lim_{n \to \infty} P\{\xi(t + d_n) > x\} = P\{\xi(t) > x\}, \quad x \in V_t, \qquad (81)$$

or, equivalently,

$$\liminf_{\varepsilon \to 0} P\{\xi_{j\varepsilon}(t) \le x\} \ge P\{\xi(t) \le x\}, \quad x \in V_t. \qquad (82)$$

We can also employ the following estimate, for every $t > 0$, $i, j \in X$, any real-valued $x$ and $n \ge 1$,

$$P\{\xi_{j\varepsilon}(t) \le x\} \le P\{\xi_{ji\varepsilon}(t - d_n) \le x\} + P\{\mu_{ji\varepsilon}(t - d_n) > t\}. \qquad (83)$$

Then, using the estimate (83), relation (73), and the assumptions that relation (65) holds for one-dimensional distributions, for every $t > 0$ and given $i, j \in X$, and that the limit process $\xi(t)$ in (65) is stochastically continuous, we get, for every $t > 0$ and the same $j$,

$$\limsup_{\varepsilon \to 0} P\{\xi_{j\varepsilon}(t) \le x\} \le P\{\xi(t) \le x\}, \quad x \in V_t. \qquad (84)$$

Relations (82) and (84) imply that $P\{\xi_{j\varepsilon}(t) \le x\} \to P\{\xi(t) \le x\}$ as $\varepsilon \to 0$, $x \in V_t$. Since the set $V_t$ is dense in $R$, this relation implies that, for every $t > 0$ and given $j$ (for which relation (65) is assumed to hold),

$$\xi_{j\varepsilon}(t) \Rightarrow \xi(t) \quad \text{as } \varepsilon \to 0. \qquad (85)$$
The proof of the statements (α) and (β) formulated in Lemma 6 is complete.

As was mentioned above, $\xi_{ji\varepsilon}(t) - \xi'_{i\varepsilon}(t) \xrightarrow{P} 0$ as $\varepsilon \to 0$, for every $t \ge 0$, and, therefore, the weak convergence of the processes $\xi_{ji\varepsilon}(t)$, $t \ge 0$ and $\xi'_{i\varepsilon}(t)$, $t \ge 0$ is equivalent.

The statement (γ) follows directly from the definition of the sum-process $\xi'_{i\varepsilon}(t)$, $t \ge 0$, since the random variables $\beta^{(\varepsilon)}_i(n)$, $n \ge 2$ are independent and identically distributed and $\xi'_{i\varepsilon}(t)$, $t \ge 0$ is a homogeneous step sum-process with independent increments. As is known, the class of possible limit processes (in the sense of weak convergence) for such step sum-processes coincides with the class of stochastically continuous homogeneous processes with independent increments.

Moreover, as is known, the weak convergence of finite-dimensional distributions follows in this case from the weak convergence of one-dimensional distributions. The statements (δ) and (ε) follow, in an obvious way, from the following formula,

$$E \exp\{-s\xi'_{i\varepsilon}(t)\} = \psi^{(\varepsilon)}_i(s/u_\varepsilon)^{[t q^{-1}_{i\varepsilon}]}, \quad s, t \ge 0,\ i \in X. \qquad (86)$$

Indeed, (86) implies that, for given $t > 0$ and $i \in X$, the random variables $\xi'_{i\varepsilon}(t)$ converge weakly to some non-zero limit random variable if and only if relation (66) holds and, in this case,

$$E \exp\{-s\xi'_{i\varepsilon}(t)\} = \psi^{(\varepsilon)}_i(s/u_\varepsilon)^{[t q^{-1}_{i\varepsilon}]} \sim \exp\big\{-\big(1 - \psi^{(\varepsilon)}_i(s/u_\varepsilon)\big)\, t q^{-1}_{i\varepsilon}\big\} \to \exp\{-\varsigma(s)t\} \quad \text{as } \varepsilon \to 0,\ s \ge 0, \qquad (87)$$

where $\varsigma(s) > 0$ for $s > 0$.

Since, according to the remarks above, the random variable $\xi(t)$ has, for every $t > 0$, an infinitely divisible distribution, $\varsigma(s)t$ is the cumulant of this random variable. This proves the statement (ζ).

The last statements (η) and (θ) of Lemma 6 are given in Lemma 7 and Remark 3. □
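For a simple illustration of (86) and (87), suppose, purely as an example, that the cycle durations $\beta^{(\varepsilon)}_i(n)$ are Exp(1) distributed and that $u_\varepsilon = q^{-1}_{i\varepsilon}$. Then $\psi^{(\varepsilon)}_i(s/u_\varepsilon) = (1 + s q_{i\varepsilon})^{-1}$, so

$$E \exp\{-s\xi'_{i\varepsilon}(t)\} = (1 + s q_{i\varepsilon})^{-[t q^{-1}_{i\varepsilon}]} \to e^{-st} \quad \text{as } \varepsilon \to 0,$$

i.e. (66) holds with $\varsigma(s) = s$, and the limit process is the unit drift $\xi(t) = t$.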
Remark 2. The proof presented above shows that the only property of the quantities $q_{i\varepsilon}$ and $p_\varepsilon$ used in the proof of Lemma 6 is (i) $0 < \pi^{(\varepsilon)}_i q_{i\varepsilon} \sim p_\varepsilon \to 0$ as $\varepsilon \to 0$, $i \in X$. Lemma 6 and its proof remain valid if any functions $q_{i\varepsilon}$ and $p_\varepsilon$ satisfying the assumption (i) are used in the formulas (63) and (68) defining, respectively, the processes $\xi_{ji\varepsilon}(t)$, $t \ge 0$ and $\xi_{j\varepsilon}(t)$, $t \ge 0$, and in the expression $(1 - \psi^{(\varepsilon)}_i(s/u_\varepsilon))/q_{i\varepsilon}$ used in the asymptotic relation (66). In this case, conditions A and B in Lemma 6 can be replaced by the simpler assumption (i), while condition C should remain.

The proof of Lemma 6 is based on the proposition about the equivalence of weak convergence of the cyclic step sum-processes $\xi_{ji\varepsilon}(t)$, $t \ge 0$ introduced in (63) and the step sum-processes $\xi_{j\varepsilon}(t)$, $t \ge 0$ introduced in (68).

Let us now formulate the proposition about the equivalence of the relation of weak convergence (70) for the processes $\xi_{j\varepsilon}(t)$, $t \ge 0$ and the following asymptotic relation formulated in terms of the averaged Laplace transforms $\varphi^{(\varepsilon)}(s)$,

$$\frac{1 - \varphi^{(\varepsilon)}(s/u_\varepsilon)}{p_\varepsilon} \to \varsigma(s) \quad \text{as } \varepsilon \to 0,\ s \ge 0, \qquad (88)$$

where (j) $\varsigma(s) > 0$ for $s > 0$.
Lemma 7. Let conditions B, C hold, and $\eta^{(\varepsilon)}_0 = j$. Then: (ι) the relation of weak convergence (65) holds, for given $i, j \in X$, if and only if the relation of weak convergence (70) holds, for the same $j$; (κ) the limit process $\xi(t)$, $t \ge 0$ is the same in relations (65) and (70); (λ) the assumption that the relation of weak convergence (70) holds for some $j \in X$ implies that this relation holds for every $j \in X$; (μ) the limit process $\xi(t)$, $t \ge 0$ in (70) is the same for any $j \in X$; (ν) $\xi(t)$, $t \ge 0$ is a non-zero and non-decreasing homogeneous process with independent increments; (ξ) relation (70) holds for given $j \in X$ if and only if relation (88) holds; (π) the limit function $\varsigma(s)$ in (88) is a cumulant of the process $\xi(t)$, $t \ge 0$, i.e. $E e^{-s\xi(t)} = e^{-\varsigma(s)t}$, $s, t \ge 0$; (ρ) conditions E and F are necessary and sufficient for relation (88) to hold; (σ) the cumulant $\varsigma(s) = a(s)$ in this case.
Proof. The statements (ι)–(ν) have already been verified in the proof of Lemma 6.

Let us introduce the conditional distribution functions for the sojourn times $\kappa^{(\varepsilon)}_n$ of the semi-Markov process $\eta^{(\varepsilon)}(t)$,

$$G^{(\varepsilon)}_{ij}(t) = P\big\{\kappa^{(\varepsilon)}_1 \le t \,/\, \eta^{(\varepsilon)}_0 = i,\ \eta^{(\varepsilon)}_1 = j\big\}, \quad t \ge 0,\ i, j \in X.$$

Obviously,

$$Q^{(\varepsilon)}_{ij}(t) = p^{(\varepsilon)}_{ij}\, G^{(\varepsilon)}_{ij}(t), \quad t \ge 0,\ i, j \in X,$$

and

$$G^{(\varepsilon)}_i(t) = \sum_{j=1}^m Q^{(\varepsilon)}_{ij}(t) = \sum_{j=1}^m p^{(\varepsilon)}_{ij}\, G^{(\varepsilon)}_{ij}(t), \quad t \ge 0,\ i \in X.$$

Note that one can choose $G^{(\varepsilon)}_{ij}(t)$ as arbitrary distribution functions concentrated on the positive half-line if $p^{(\varepsilon)}_{ij} = 0$. This does not affect the transition probabilities $Q^{(\varepsilon)}_{ij}(t)$ and the distribution functions $G^{(\varepsilon)}_i(t)$.

As is known from the theory of semi-Markov processes, the sojourn times $\kappa^{(\varepsilon)}_n$ are conditionally independent with respect to the values of the embedded Markov chain $\eta^{(\varepsilon)}_n$. More precisely, this means that, for any $t_1, \ldots, t_n \ge 0$, $i_0, i_1, \ldots, i_n$, $n = 1, 2, \ldots$,

$$P\big\{\kappa^{(\varepsilon)}_1 \le t_1, \ldots, \kappa^{(\varepsilon)}_n \le t_n \,/\, \eta^{(\varepsilon)}_0 = i_0, \ldots, \eta^{(\varepsilon)}_n = i_n\big\} = G^{(\varepsilon)}_{i_0 i_1}(t_1) \times \cdots \times G^{(\varepsilon)}_{i_{n-1} i_n}(t_n). \qquad (89)$$

As in the proof of Lemma 6, we assume that $\eta^{(\varepsilon)}_0 = j$.

It follows from relation (89) that the process $\xi_{j\varepsilon}(t)$ has, for every $j \in X$, the same finite-dimensional distributions as the following process $\breve{\xi}_{j\varepsilon}(t)$ (we use the symbol $\stackrel{d}{=}$ to denote this stochastic equality),

$$\xi_{j\varepsilon}(t) = \sum_{n=1}^{[t p_\varepsilon^{-1}]} \frac{\kappa^{(\varepsilon)}_n}{u_\varepsilon},\ t \ge 0 \ \stackrel{d}{=}\ \breve{\xi}_{j\varepsilon}(t),\ t \ge 0, \qquad (90)$$
where

$$\breve{\xi}_{j\varepsilon}(t) = \sum_{n=1}^{[t p_\varepsilon^{-1}]} \frac{\kappa^{(\varepsilon)}_n\big(\eta^{(\varepsilon)}_{n-1}, \eta^{(\varepsilon)}_n\big)}{u_\varepsilon}, \quad t \ge 0, \qquad (91)$$

and

(k1) $\{\eta^{(\varepsilon)}_n, n = 1, 2, \ldots\}$ is a Markov chain with the state space $X$ and the matrix of transition probabilities $\|p^{(\varepsilon)}_{ij}\|$;

(k2) $\kappa^{(\varepsilon)}_n(i, j)$, $i, j \in X$, $n \ge 1$ are mutually independent random variables;

(k3) $P\{\kappa^{(\varepsilon)}_n(i, j) \le t\} = G^{(\varepsilon)}_{ij}(t)$, $t \ge 0$ for $i, j \in X$, $n \ge 1$;

(k4) the set of random variables $\{\kappa^{(\varepsilon)}_n(i, j), i, j \in X, n \ge 1\}$ and the Markov chain $\{\eta^{(\varepsilon)}_n, n = 1, 2, \ldots\}$ are independent.
It follows from the stochastic equality (90) that (l) the relation of weak convergence (70), treated in Lemma 6, is equivalent to the following relation,

$$\breve{\xi}_{j\varepsilon}(t),\ t \ge 0 \ \Rightarrow\ \xi(t),\ t \ge 0 \quad \text{as } \varepsilon \to 0, \qquad (92)$$

where (e) $\xi(t)$, $t \ge 0$ is a non-zero, non-decreasing and stochastically continuous process with the initial value $\xi(0) = 0$.

Let us define, for every $j, i, k \in X$, the counting random variables for the random sequence $\bar{\eta}^{(\varepsilon)}_n = \big(\eta^{(\varepsilon)}_{n-1}, \eta^{(\varepsilon)}_n\big)$, $n = 1, 2, \ldots$,

$$\nu^{(\varepsilon)}_{jn}(i, k) = \sum_{r=1}^n \chi\big\{\big(\eta^{(\varepsilon)}_{r-1}, \eta^{(\varepsilon)}_r\big) = (i, k)\big\}, \quad n = 0, 1, \ldots.$$

It follows from (k1)–(k4) that the process $\breve{\xi}_{j\varepsilon}(t)$ has, for every $j \in X$, the same finite-dimensional distributions as the following process $\tilde{\xi}_{j\varepsilon}(t)$,

$$\breve{\xi}_{j\varepsilon}(t),\ t \ge 0 \ \stackrel{d}{=}\ \tilde{\xi}_{j\varepsilon}(t),\ t \ge 0, \qquad (93)$$

where

$$\tilde{\xi}_{j\varepsilon}(t) = \sum_{(i,k) \in X \times X}\ \sum_{n=1}^{\nu^{(\varepsilon)}_{j[t p_\varepsilon^{-1}]}(i,k)} \frac{\kappa^{(\varepsilon)}_n(i, k)}{u_\varepsilon}, \quad t \ge 0, \qquad (94)$$

and

$$\tilde{X}_\varepsilon = \big\{(i, k) \in X \times X : p^{(\varepsilon)}_{ik} > 0\big\}.$$

Note that, due to condition C,

$$\tilde{X}_\varepsilon \subseteq \tilde{X}_0$$

for all $\varepsilon$ small enough.

Note also that the definition of the process $\tilde{\xi}_{j\varepsilon}(t)$ takes into account that the random variables $\nu^{(\varepsilon)}_{jn}(i, k) = 0$, $n = 0, 1, \ldots$ with probability 1 if $p^{(\varepsilon)}_{ik} = 0$.
The stochastic equalities (90) and (93) allow us to replace the processes $\xi_{j\varepsilon}(t)$ by the processes $\tilde{\xi}_{j\varepsilon}(t)$ when we study their weak convergence.

It follows from the stochastic equality (93) that the relation of weak convergence (70) is also equivalent to the following relation,

$$\tilde{\xi}_{j\varepsilon}(t),\ t \ge 0 \ \Rightarrow\ \xi(t),\ t \ge 0 \quad \text{as } \varepsilon \to 0, \qquad (95)$$

where (e) $\xi(t)$, $t \ge 0$ is a non-zero, non-decreasing and stochastically continuous process with the initial value $\xi(0) = 0$.

Let us also introduce the following step sum-processes,

$$\hat{\xi}_\varepsilon(t) = \sum_{(i,k) \in \tilde{X}_0}\ \sum_{n=1}^{[t \pi^{(0)}_i p^{(\varepsilon)}_{ik} p_\varepsilon^{-1}]} \frac{\kappa^{(\varepsilon)}_n(i, k)}{u_\varepsilon}, \quad t \ge 0. \qquad (96)$$

We are also interested in the following relation of weak convergence,

$$\hat{\xi}_\varepsilon(t),\ t \ge 0 \ \Rightarrow\ \xi(t),\ t \ge 0 \quad \text{as } \varepsilon \to 0, \qquad (97)$$

where (e) $\xi(t)$, $t \ge 0$ is a non-zero, non-decreasing and stochastically continuous process with the initial value $\xi(0) = 0$.

Let us prove the equivalence of relations (95) and (97). This means that (m) relation (95) holds for some $j \in X$ if and only if relation (97) holds, and, moreover, the limit process can be taken the same in both relations.

We display the proof for one-dimensional distributions. The proof for multi-dimensional distributions is similar.

Let us prove that (m1) the assumption that relation (97) holds for every $t > 0$ implies that relation (95) holds for every $t > 0$ and $j \in X$; moreover, the limit random variable $\xi(t)$ can be taken the same in both relations.

The law of large numbers for ergodic Markov chains in the triangular-array setting (see, for example, Silvestrov (1974)) implies that, under conditions C and D, for every $t > 0$ and $j, i, k \in X$,

$$\frac{\nu^{(\varepsilon)}_{j[t p_\varepsilon^{-1}]}(i, k)}{p_\varepsilon^{-1}} \xrightarrow{P} \pi^{(0)}_i p^{(0)}_{ik}\, t \quad \text{as } \varepsilon \to 0. \qquad (98)$$
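Relation (98) is the usual transition-frequency law of large numbers. A minimal simulation sketch follows; the two-state chain is an illustrative assumption, and a fixed large number of steps stands in for $[t p_\varepsilon^{-1}]$ with $t = 1$.

```python
import numpy as np

rng = np.random.default_rng(2)
P = np.array([[0.3, 0.7], [0.6, 0.4]])     # illustrative transition matrix
pi = np.array([6.0, 7.0]) / 13.0           # its stationary distribution

n_steps = 200_000                          # plays the role of [t p_eps^{-1}] with t = 1
counts = np.zeros((2, 2))
state = 0
for _ in range(n_steps):
    nxt = rng.choice(2, p=P[state])
    counts[state, nxt] += 1                # nu_{jn}(i,k): number of (i,k)-transitions so far
    state = nxt

print(counts / n_steps)                    # should be close to pi_i * p_ik, cf. (98)
print(pi[:, None] * P)
```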
Let us choose an arbitrary $t > 0$ and a sequence $0 < c_n < t$, $n = 1, 2, \ldots$ such that $c_n \to 0$ as $n \to \infty$.

The processes $\sum_{n=1}^{[t p_\varepsilon^{-1}]} \kappa^{(\varepsilon)}_n(i, k)/u_\varepsilon$, $t \ge 0$ and $p_\varepsilon \nu^{(\varepsilon)}_{j[t p_\varepsilon^{-1}]}(i, k)$, $t \ge 0$ are non-negative and non-decreasing, for every $j, i, k \in X$. Taking into account this fact and the representation (94), we get, for every $t > 0$, $j \in X$, any real-valued $x$, and $n \ge 1$,

$$\begin{aligned}
P\big\{\tilde{\xi}_{j\varepsilon}(t) > x\big\} &= P\Big\{\tilde{\xi}_{j\varepsilon}(t) > x,\ \bigcap_{(i,k) \in \tilde{X}_\varepsilon} A^{(\varepsilon)}_{jik}(t, t + c_n)\Big\} + P\Big\{\tilde{\xi}_{j\varepsilon}(t) > x,\ \bigcup_{(i,k) \in \tilde{X}_\varepsilon} \bar{A}^{(\varepsilon)}_{jik}(t, t + c_n)\Big\} \\
&\le P\big\{\hat{\xi}_\varepsilon(t + c_n) > x\big\} + \sum_{(i,k) \in \tilde{X}_\varepsilon} P\big\{\bar{A}^{(\varepsilon)}_{jik}(t, t + c_n)\big\}, \qquad (99)
\end{aligned}$$

where

$$A^{(\varepsilon)}_{jik}(t, s) = \big\{\nu^{(\varepsilon)}_{j[t p_\varepsilon^{-1}]}(i, k) \le s\, \pi^{(0)}_i p^{(\varepsilon)}_{ik}\, p_\varepsilon^{-1}\big\}, \quad t, s > 0,\ j, i, k \in X.$$

Note that (98) implies that, for every $0 < t < s$ and $j \in X$, $(i, k) \in \tilde{X}_\varepsilon \subseteq \tilde{X}_0$,

$$P\big\{A^{(\varepsilon)}_{jik}(s, t)\big\} + P\big\{\bar{A}^{(\varepsilon)}_{jik}(t, s)\big\} \to 0 \quad \text{as } \varepsilon \to 0. \qquad (100)$$
Let $Y_t$ be the set of continuity points of the distribution functions of the limit random variables $\xi(t)$ and $\xi(t \pm c_n)$, $n = 1, 2, \ldots$ in (97). This set is the real line $R$ except at most a countable set of points.

Using the estimate (99), relation (100), and the assumptions that relation (97) holds for one-dimensional distributions, for every $t > 0$, and that the limit process $\xi(t)$ in (97) is stochastically continuous, we get, for every $t > 0$ and $j \in X$,

$$\limsup_{\varepsilon \to 0} P\big\{\tilde{\xi}_{j\varepsilon}(t) > x\big\} \le \lim_{n \to \infty} \limsup_{\varepsilon \to 0}\Big(P\big\{\hat{\xi}_\varepsilon(t + c_n) > x\big\} + \sum_{(i,k) \in \tilde{X}_\varepsilon} P\big\{\bar{A}^{(\varepsilon)}_{jik}(t, t + c_n)\big\}\Big) = \lim_{n \to \infty} P\{\xi(t + c_n) > x\} = P\{\xi(t) > x\}, \quad x \in Y_t, \qquad (101)$$

or, equivalently,

$$\liminf_{\varepsilon \to 0} P\big\{\tilde{\xi}_{j\varepsilon}(t) \le x\big\} \ge P\{\xi(t) \le x\}, \quad x \in Y_t. \qquad (102)$$

Similarly, we can get, for every $t > 0$ and $j \in X$,

$$\limsup_{\varepsilon \to 0} P\big\{\tilde{\xi}_{j\varepsilon}(t) \le x\big\} \le P\{\xi(t) \le x\}, \quad x \in Y_t. \qquad (103)$$

Relations (102) and (103) imply that $P\{\tilde{\xi}_{j\varepsilon}(t) \le x\} \to P\{\xi(t) \le x\}$ as $\varepsilon \to 0$, $x \in Y_t$, for every $j \in X$. Since the set $Y_t$ is dense in $R$, this relation implies that, for every $t > 0$ and $j \in X$,

$$\tilde{\xi}_{j\varepsilon}(t) \Rightarrow \xi(t) \quad \text{as } \varepsilon \to 0. \qquad (104)$$
We omit details in the proof of the inverse proposition (m2): the assumption that relation (95) holds for every $t > 0$ and given $j \in X$ implies that relation (97) holds for every $t > 0$, and, moreover, the limit random variable $\xi(t)$ can be taken the same in both relations.

Let us choose an arbitrary $t > 0$ and a sequence $0 < d_n < t$, $n = 1, 2, \ldots$ such that $d_n \to 0$ as $n \to \infty$.

Analogously to (99), we get the following estimate, "inverse" to (99), for every $t > 0$, any real-valued $x$ and $n \ge 1$,

$$\begin{aligned}
P\big\{\hat{\xi}_\varepsilon(t) > x\big\} &= P\Big\{\hat{\xi}_\varepsilon(t) > x,\ \bigcap_{(i,k) \in \tilde{X}_\varepsilon} \bar{A}^{(\varepsilon)}_{jik}(t + d_n, t)\Big\} + P\Big\{\hat{\xi}_\varepsilon(t) > x,\ \bigcup_{(i,k) \in \tilde{X}_\varepsilon} A^{(\varepsilon)}_{jik}(t + d_n, t)\Big\} \\
&\le P\big\{\tilde{\xi}_{j\varepsilon}(t + d_n) > x\big\} + \sum_{(i,k) \in \tilde{X}_\varepsilon} P\big\{A^{(\varepsilon)}_{jik}(t + d_n, t)\big\}. \qquad (105)
\end{aligned}$$

Let $Z_t$ be the set of continuity points of the distribution functions of the limit random variables $\xi(t)$ and $\xi(t \pm d_n)$, $n = 1, 2, \ldots$ in (95). This set is the real line $R$ except at most a countable set of points.

Using the estimate (105), relation (100), and the assumptions that relation (95) holds for one-dimensional distributions, for every $t > 0$ and given $j \in X$, and that the limit process $\xi(t)$ in (95) is stochastically continuous, we get, for every $t > 0$ and $x \in Z_t$,

$$\limsup_{\varepsilon \to 0} P\big\{\hat{\xi}_\varepsilon(t) > x\big\} \le \lim_{n \to \infty} \limsup_{\varepsilon \to 0}\Big(P\big\{\tilde{\xi}_{j\varepsilon}(t + d_n) > x\big\} + \sum_{(i,k) \in \tilde{X}_\varepsilon} P\big\{A^{(\varepsilon)}_{jik}(t + d_n, t)\big\}\Big) = \lim_{n \to \infty} P\{\xi(t + d_n) > x\} = P\{\xi(t) > x\}. \qquad (106)$$

The continuation of the proof of the proposition (m2) is analogous to that given above in the proof of the proposition (m1).
Let us introduce now the step sum-process,
ξ̆∗ε (t) =
[tp(ε)−1]∑
n=1
κ
∗(ε)
n
(
η
′(ε)
n , η
′′(ε)
n
)
uε
, t ≥ 0, (107)
where
(n1)
{
η̄
∗(ε)
n =
(
η
′(ε)
n , η
′′(ε)
n
)
, n = 1, 2, . . .
}
a sequence of i.i.d. random vec-
tors which takes values (i, j) with probabilities π
(0)
i p
(ε)
ij for i, j ∈ X;
(n2) κ
∗(ε)
n (i, j), i, j ∈ X, n ≥ 1 are mutually independent random variables;
(n3) P
{
κ
∗(ε)
n (i, j) ≤ t
}
= G
(ε)
ij (t), t ≥ 0 for i, j ∈ X, n ≥ 1;
(n4) the set of random variables
{
κ
∗(ε)
n (i, j), i, j ∈ X, n ≥ 1
}
and the ran-
dom sequence
{
η̄
∗(ε)
n , n = 1, 2, . . .
}
are independent.
We are interested in the following relation of weak convergence,
ξ̆∗ε (t), t ≥ 0 ⇒ ξ(t), t ≥ 0 as ε → 0, (108)
where (e) ξ(t), t ≥ 0 is a non-zero and non-decreasing and stochastically
continuous process with the initial value ξ(0) = 0.
Let us define, for every i, k ∈ X, the counting random variables for the random sequence η̄_n^{*(ε)} = (η'_n^{(ε)}, η''_n^{(ε)}), n = 1, 2, . . .,
\[
\nu^{*(\varepsilon)}_{n}(i, k) = \sum_{r=1}^{n} \chi\Big\{\big(\eta'^{(\varepsilon)}_{r}, \eta''^{(\varepsilon)}_{r}\big) = (i, k)\Big\}, \quad n = 0, 1, \ldots.
\]
It follows from the properties (n1)-(n4) that the process ξ̆*_ε(t) has, for every j ∈ X, the same finite-dimensional distributions as the following process ξ̃*_ε(t),
\[
\breve{\xi}^{*}_{\varepsilon}(t),\ t \ge 0 \;\stackrel{d}{=}\; \tilde{\xi}^{*}_{\varepsilon}(t),\ t \ge 0, \tag{109}
\]
where
\[
\tilde{\xi}^{*}_{\varepsilon}(t) = \sum_{(i,k) \in X}\ \sum_{n=1}^{\nu^{*(\varepsilon)}_{[t p_{\varepsilon}^{-1}]}(i,k)} \frac{\kappa^{*(\varepsilon)}_{n}(i, k)}{u_{\varepsilon}}, \quad t \ge 0. \tag{110}
\]
It follows from the stochastic equality (109) that (o) the relation of weak convergence (108) is equivalent to the following relation,
\[
\tilde{\xi}^{*}_{\varepsilon}(t),\ t \ge 0 \;\Rightarrow\; \xi(t),\ t \ge 0 \quad \text{as } \varepsilon \to 0, \tag{111}
\]
where (e) ξ(t), t ≥ 0, is a non-zero, non-decreasing, and stochastically continuous process with the initial value ξ(0) = 0.
Let us also introduce the following step sum-process,
\[
\hat{\xi}^{*}_{\varepsilon}(t) = \sum_{(i,k) \in X}\ \sum_{n=1}^{t \pi^{(0)}_{i} p^{(0)}_{ik} p_{\varepsilon}^{-1}} \frac{\kappa^{*(\varepsilon)}_{n}(i, k)}{u_{\varepsilon}}, \quad t \ge 0. \tag{112}
\]
Let us also consider the following relation of weak convergence,
\[
\hat{\xi}^{*}_{\varepsilon}(t),\ t \ge 0 \;\Rightarrow\; \xi(t),\ t \ge 0 \quad \text{as } \varepsilon \to 0, \tag{113}
\]
where (e) ξ(t), t ≥ 0, is a non-zero, non-decreasing, and stochastically continuous process with the initial value ξ(0) = 0.
We state that relations (111) and (113) are equivalent. This means that (p) relation (111) holds if and only if relation (113) holds; moreover, the limit stochastic process ξ(t), t ≥ 0, can be taken the same in both relations.
By definition, χ{(η'_r^{(ε)}, η''_r^{(ε)}) = (i, k)}, r = 1, 2, . . ., are i.i.d. random variables taking values 1 and 0 with probabilities π_i^{(0)} p_{ik}^{(ε)} and 1 − π_i^{(0)} p_{ik}^{(ε)}, respectively. Thus, under condition D, due to the standard weak law of large numbers for binary random variables, for every t > 0 and i, k ∈ X,
\[
\frac{\nu^{*(\varepsilon)}_{[t p_{\varepsilon}^{-1}]}(i, k)}{p_{\varepsilon}^{-1}}
\;\xrightarrow{\ P\ }\; \pi^{(0)}_{i} p^{(0)}_{ik}\, t \quad \text{as } \varepsilon \to 0. \tag{114}
\]
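For completeness, one standard way to verify (114), not spelled out above, is a Chebyshev argument. Assuming, as condition D apparently provides, that p_{ik}^{(ε)} → p_{ik}^{(0)} as ε → 0, the i.i.d. Bernoulli summands give
\[
E\,\frac{\nu^{*(\varepsilon)}_{[t p_{\varepsilon}^{-1}]}(i,k)}{p_{\varepsilon}^{-1}}
= [t p_{\varepsilon}^{-1}]\, p_{\varepsilon}\, \pi^{(0)}_{i} p^{(\varepsilon)}_{ik} \to \pi^{(0)}_{i} p^{(0)}_{ik}\, t,
\qquad
\mathrm{Var}\,\frac{\nu^{*(\varepsilon)}_{[t p_{\varepsilon}^{-1}]}(i,k)}{p_{\varepsilon}^{-1}}
\le \frac{[t p_{\varepsilon}^{-1}]}{4\, p_{\varepsilon}^{-2}} \le \frac{t p_{\varepsilon}}{4} \to 0
\]
as ε → 0, and Chebyshev's inequality then yields the convergence in probability stated in (114).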
A careful analysis of the proof of proposition (l), about the equivalence of the relations of weak convergence (95), for the processes ξ̃ε(t), t ≥ 0, and (97), for the processes ξ̂ε(t), t ≥ 0, shows that this proof used conditions (k2)-(k4) together with the asymptotic relation (98), which is a weak law of large numbers for the corresponding frequency random variables of the random sequence η_n^{(ε)}. Condition (k1) was used, together with condition C, only to provide the asymptotic relation (98).
These remarks allow us to state that the proof given for proposition (m) can simply be replicated in order to prove proposition (p). Indeed, conditions (n2)-(n4) replace, in this case, conditions (k2)-(k4), and the asymptotic relation (114), implied by condition (n1), replaces the asymptotic relation (98).
Now let us use the following stochastic equality, which obviously follows from a comparison of conditions (k2)-(k4) and (n2)-(n4),
\[
\hat{\xi}_{\varepsilon}(t),\ t \ge 0 \;\stackrel{d}{=}\; \hat{\xi}^{*}_{\varepsilon}(t),\ t \ge 0. \tag{115}
\]
Propositions (m) and (p), combined with the stochastic equalities (90), (93), (109), and (115), imply that (q) the relation of weak convergence (69), treated in Lemma 5, holds if and only if relation (108) holds; moreover, the limit stochastic process ξ(t) can be taken the same in both relations.
We are now in a position to make the last step in the proof. Conditions (n2)-(n4) imply that κ_n^{*(ε)}(η'_n^{(ε)}, η''_n^{(ε)}), n = 1, 2, . . ., are i.i.d. random variables. Moreover, the corresponding distribution has the following form,
\[
\begin{aligned}
P\Big\{\kappa^{*(\varepsilon)}_{1}\big(\eta'^{(\varepsilon)}_{1}, \eta''^{(\varepsilon)}_{1}\big) \le t\Big\}
&= \sum_{i,k \in X} G^{(\varepsilon)}_{ik}(t)\, \pi^{(0)}_{i} p^{(\varepsilon)}_{ik}
= \sum_{i \in X} \pi^{(0)}_{i} \sum_{k \in X} G^{(\varepsilon)}_{ik}(t)\, p^{(\varepsilon)}_{ik} \\
&= \sum_{i \in X} \pi^{(0)}_{i} G^{(\varepsilon)}_{i}(t) = G^{(\varepsilon)}(t), \quad t \ge 0.
\end{aligned}
\tag{116}
\]
The statements (ξ) and (π) follow, in an obvious way, from proposition (q). Indeed, ξ̆*_ε(t), t ≥ 0, is the step sum-process based on i.i.d. random variables, and, therefore,
\[
E \exp\big\{-s\, \breve{\xi}^{*}_{\varepsilon}(t)\big\} = \varphi^{(\varepsilon)}(s/u_{\varepsilon})^{[t p_{\varepsilon}^{-1}]}, \quad s, t \ge 0. \tag{117}
\]
Relation (117) implies that, for a given t > 0, the random variables ξ̆*_ε(t) converge weakly to some non-zero limit random variable if and only if relation (88) holds and, in this case,
\[
E \exp\big\{-s\, \breve{\xi}^{*}_{\varepsilon}(t)\big\}
= \varphi^{(\varepsilon)}(s/u_{\varepsilon})^{[t p_{\varepsilon}^{-1}]}
\sim \exp\big\{-(1 - \varphi^{(\varepsilon)}(s/u_{\varepsilon}))\, t p_{\varepsilon}^{-1}\big\}
\to \exp\{-\varsigma(s)\, t\} \quad \text{as } \varepsilon \to 0,\ s \ge 0, \tag{118}
\]
where ς(s) > 0 for s > 0.
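The asymptotic equivalence in (118) rests on an elementary logarithmic expansion; for completeness, a sketch of this step (not written out above) is
\[
\varphi^{(\varepsilon)}(s/u_{\varepsilon})^{[t p_{\varepsilon}^{-1}]}
= \exp\big\{[t p_{\varepsilon}^{-1}] \ln \varphi^{(\varepsilon)}(s/u_{\varepsilon})\big\}
= \exp\big\{-[t p_{\varepsilon}^{-1}]\, \big(1 - \varphi^{(\varepsilon)}(s/u_{\varepsilon})\big)(1 + o(1))\big\},
\]
since relation (88), together with p_ε → 0, implies 1 − φ^{(ε)}(s/u_ε) → 0, while ln x = −(1 − x)(1 + o(1)) as x → 1. The exponent then converges to −ς(s)t, because [t p_ε^{-1}] p_ε → t and the expression (1 − φ^{(ε)}(s/u_ε))/p_ε appearing in (88) (cf. Remark 3) converges to ς(s).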
The random variable ξ(t) has, for every t > 0, an infinitely divisible distribution, as a weak limit of sums of i.i.d. random variables, and ς(s)t is the cumulant of the process ξ(t).
As was pointed out in the proof of Theorem 1, relation (118) is equivalent to conditions E and F, and in this case ς(s) ≡ a(s). This remark completes the proof of statements (ρ) and (σ) of Lemma 7. □
Remark 3. The proof presented above shows that the only property of the quantities p_ε used in the proof of Lemma 5 was (r) 0 < p_ε → 0 as ε → 0. Lemma 5 and its proof remain valid if any function p_ε satisfying assumption (r) is used in the formulas (63) and (68), defining, respectively, the process ξ_jε(t), t ≥ 0, and in the expression (1 − φ^{(ε)}(s/u_ε))/p_ε used in the asymptotic relation (88). In this case, condition B in Lemma 5 can be replaced by the simpler assumption (r).
Remark 4. The proof presented above can be applied to any sum-process of conditionally independent random variables ξ̆*_ε(t), t ≥ 0, defined by formula (107), under the assumption that (s1) conditions (n2)-(n4) hold. Condition (n1) can be replaced by the general assumption that (s2) {η̄_n^{*(ε)} = (η'_n^{(ε)}, η''_n^{(ε)}), n = 1, 2, . . .} is a sequence of random vectors taking values in the space X × X such that the weak law of large numbers holds in the form of the asymptotic relation (114). Also, (s3) the positivity of π_i^{(0)} is not needed, and (s4) any function satisfying assumption (r) can be taken as p_ε. Under the assumptions (s1)-(s4), the asymptotic relation (88) is a necessary and sufficient condition for weak convergence of the processes ξ̆*_ε(t), t ≥ 0. The limit process is a non-negative homogeneous process with independent increments, with the cumulant ς(s) that appears in (88). Moreover, conditions E and F are necessary and sufficient for relation (88) to hold, and the cumulant ς(s) = a(s) in this case.
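To make the role of criterion (88) concrete, the following is a minimal numerical sketch, not part of the original argument, in which every concrete choice is an assumption: G^{(ε)} is taken to be the exponential law with mean 1 and u_ε = p_ε^{-1}, so that (1 − φ^{(ε)}(s/u_ε))/p_ε = s/(1 + s p_ε) → s, and the powers in (117) should then approach exp{−st}.

import math

# Numerical sketch of the Laplace-transform criterion behind (117)-(118).
# Assumed illustrative choices (not from the paper): G^(eps) is the
# exponential law with mean 1, and u_eps = 1 / p_eps, so that
# (1 - phi^(eps)(s / u_eps)) / p_eps = s / (1 + s * p_eps) -> s.

def phi(s):
    """Laplace transform of the exponential distribution with mean 1."""
    return 1.0 / (1.0 + s)

s, t = 2.0, 1.5
for p_eps in (1e-1, 1e-2, 1e-3, 1e-4):
    u_eps = 1.0 / p_eps
    # E exp{-s * xi*_eps(t)} as in (117), with [t p_eps^{-1}] summands
    transform = phi(s / u_eps) ** int(t / p_eps)
    print(f"p_eps={p_eps:g}: transform = {transform:.6f}, limit = {math.exp(-s * t):.6f}")

As p_ε decreases, the printed transform values approach exp{−st}, in agreement with (118) for this particular choice of the cumulant.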
Department of Mathematics and Physics, Mälardalen University,
SE-72123 Västerås, Sweden.
E-mail address: drozdenko@hotmail.com