On the rate of convergence in the invariance principle for weakly dependent random variables
DOI: 10.37863/umzh.v74i9.6244
UDC 519.21
A. K. Mukhamedov1 (Nat. Univ. Uzbekistan, Tashkent)
ON THE RATE OF CONVERGENCE IN THE INVARIANCE PRINCIPLE FOR WEAKLY DEPENDENT RANDOM VARIABLES
We consider nonstationary sequences of $\varphi$-mixing random variables. By using the Levy–Prokhorov distance, we estimate the rate of convergence in the invariance principle for nonstationary $\varphi$-mixing random variables. The obtained results extend and generalize several known results for nonstationary $\varphi$-mixing random variables.
1. Introduction. Let $\{\xi_{kn},\ k=1,2,\dots,k(n),\ n=1,2,\dots\}$ be a sequence of random variables (r.v.'s) on a probability space $(\Omega,\Im,P)$, and let $M_a^b(n)=\sigma\{\xi_{kn},\ a\le k\le b\}$, $1\le a\le b\le k(n)$. For each $m\ge1$ define (see [11])
$$\alpha(m)=\sup_{k,n}\ \sup_{A\in M_1^k(n),\,B\in M_{k+m}^{k(n)}(n)}|P(A\cap B)-P(A)P(B)|,$$
$$\beta(m)=E\Big\{\sup_{k,n}\ \sup_{A\in M_{k+m}^{k(n)}(n)}\big|P(A\mid M_1^k(n))-P(A)\big|\Big\},$$
$$\varphi(m)=\sup_{k,n}\ \sup_{A\in M_1^k(n),\,B\in M_{k+m}^{k(n)}(n)}|P(B\mid A)-P(B)|,\qquad P(A)>0.$$
The sequence is said to be strongly mixing (s.m.), absolutely regular (a.r.), or uniformly strongly mixing (u.s.m.) if $\alpha(m)\to0$, $\beta(m)\to0$, or $\varphi(m)\to0$ as $m\to\infty$, respectively.
Let
$$S_{kn}=\sum_{j\le k}\xi_{jn},\quad S_n=S_{k(n)n},\quad B_{kn}^2=ES_{kn}^2,\quad B_n^2=B_{k(n)n}^2,\quad S_{0n}=B_{0n}^2=0,$$
$$L_{ns}=B_n^{-s}\sum_{j\le k(n)}E|\xi_{jn}|^s,\qquad E\xi_{kn}=0,\qquad \varphi(0)=1.$$
By $C(\cdot)$, with or without an index, we denote positive constants (not necessarily the same in different formulas) depending only on the quantities in parentheses; $C$ denotes an absolute positive constant.
Consider the points
$$t_{kn}=\frac{\max_{1\le i\le k}B_{in}^2}{\max_{1\le i\le k(n)}B_{in}^2}$$
in the interval $[0,1]$, order them, and construct on $[0,1]$ the continuous random polygon $W_n(t)$ with vertices $\big(t_{kn};\,S_{kn}/B_n\big)$. If some of the $t_{kn}$ coincide, i.e.,
1 e-mail: muhamedov1955@mail.ru.
© A. K. MUKHAMEDOV, 2022
1216 ISSN 1027-3190. Укр. мат. журн., 2022, т. 74, № 9
$$B_{k_1n}^2=B_{k_2n}^2=\dots=B_{k_rn}^2,\qquad k_i\ne k_j,$$
then we take any one of these points $\big(t_{k_rn};\,S_{k_in}/B_n\big)$.
Consider the space $C[0,1]$ of continuous functions on $[0,1]$ equipped with the norm $\|x(t)\|=\sup_{0\le t\le1}|x(t)|$, which generates the $\sigma$-algebra $\Im_C$. If $W_n$ is the distribution of the process $\{W_n(t),\ t\in[0,1]\}$ and $W$ is the distribution of the standard Wiener process $\{W(t),\ t\in[0,1]\}$, then the weak convergence of $W_n$ to $W$ means that
$$\lim_{n\to\infty}W_n(A)=W(A)$$
for any Borel set $A$ such that $W(\partial A)=0$. This fact is usually called the invariance principle (IP). Donsker [8] proved the IP for i.i.d. random variables, and Yu. V. Prokhorov [16] proved the IP for triangular arrays $\{\xi_{kn},\ k=1,2,\dots,k(n),\ n=1,2,\dots\}$ of r.v.'s independent within each row under Lindeberg's condition:
$$\Lambda_n(\varepsilon)=\frac{1}{B_n^2}\sum_{k=1}^{n}E\big\{X_{kn}^2;\ |X_{kn}|>\varepsilon B_n\big\}\to0\ \text{ as }\ n\to\infty\ \text{ for all }\ \varepsilon>0.$$
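As a quick numerical sketch of the condition (illustrative, not from the paper; `lindeberg` and the Monte Carlo setup are assumptions): for a row of uniformly bounded r.v.'s, $\Lambda_n(\varepsilon)$ vanishes as soon as $\varepsilon B_n$ exceeds the bound, since the truncated set is then empty:

```python
import numpy as np

def lindeberg(X, eps):
    """Monte Carlo version of Lambda_n(eps) = B_n^{-2} sum_k E{X_kn^2; |X_kn| > eps*B_n}.

    Rows of X are the r.v.'s of one series; columns are independent samples,
    so expectations are replaced by sample means (an assumption of the sketch).
    """
    Bn2 = np.mean(X ** 2, axis=1).sum()        # estimate of B_n^2 = sum_k E X_kn^2
    Bn = np.sqrt(Bn2)
    tail = np.where(np.abs(X) > eps * Bn, X ** 2, 0.0)
    return tail.mean(axis=1).sum() / Bn2

rng = np.random.default_rng(1)
n = 400
X = rng.uniform(-1.0, 1.0, size=(n, 2000))     # bounded rows: |X_kn| <= 1
lam = lindeberg(X, eps=0.2)                    # eps*B_n ~ 2.3 > 1, so the tail is empty
```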
Under Lindeberg's condition, T. M. Zuparov and A. K. Muhamedov [26] and M. Peligrad and S. Utev [15] proved the IP for nonstationary $\varphi$-mixing and $\alpha$-mixing r.v.'s, respectively.
Denote by $L(P;Q)$ the Levy–Prokhorov distance between the distributions $P$ and $Q$ on $C[0,1]$ (see [3, p. 327]):
$$L(P;Q)=\inf\{\varepsilon>0:\ P(A)\le Q(A^{\varepsilon})+\varepsilon\ \text{and}\ Q(A)\le P(A^{\varepsilon})+\varepsilon\ \text{for all}\ A\in\Im_C\},$$
where $A^{\varepsilon}$ is the $\varepsilon$-neighborhood of $A$. Then the IP can be written as $L(W_n;W)\to0$ as $n\to\infty$.
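As a toy illustration of the definition (not from the paper; the function name is an illustrative assumption), the Levy–Prokhorov distance between two Dirac measures $\delta_a$ and $\delta_b$ on the line admits a closed form:

```python
def lp_distance_diracs(a, b):
    """Levy-Prokhorov distance between Dirac measures delta_a and delta_b on R.

    Taking A = {a}: delta_a(A) = 1 <= delta_b(A^eps) + eps forces either
    eps >= |a - b| (so that b lies in A^eps) or eps >= 1.
    Hence L(delta_a; delta_b) = min(|a - b|, 1).
    """
    return min(abs(a - b), 1.0)
```

The cap at $1$ comes from the additive $\varepsilon$ in the definition: once $\varepsilon\ge1$, both inequalities hold trivially for any pair of probability measures.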
It is known that, for processes $W_n$ and $W$ defined on a common probability space,
$$L(W_n;W)\le\inf\{\varepsilon>0:\ P(\|W_n(\cdot)-W(\cdot)\|>\varepsilon)\le\varepsilon\}. \tag{1}$$
In view of (1), it suffices to estimate $P(\|W_n(\cdot)-W(\cdot)\|>\varepsilon)$. The rate of convergence in the IP was studied in detail for sequences of independent r.v.'s. The first estimate in this case was obtained by Yu. V. Prokhorov [16], who proved that
$$L(W_n;W)=o\big(L_{n3}^{1/4}\ln^2L_{n3}\big),\qquad n\to\infty.$$
This estimate was improved in the i.i.d. case by Heyde [10], Dudley [7], and others. A. A. Borovkov [4] proved that
$$L(W_n;W)\le C(s)L_{ns}^{1/(s+1)},\qquad 2<s\le3. \tag{2}$$
It should be noted that all the estimates above were obtained by the single-probability-space method. R. M. Dudley [7] and A. A. Borovkov [4] showed that neither Prokhorov's method nor Skorokhod's method can yield (2) in the case $s>5$. J. Komlós, P. Major, and G. Tusnády (KMT) [13] proposed a method that allowed them, in the i.i.d. case, to prove (2) for all $s>2$. Modifying the KMT method, A. I. Sakhanenko [17 – 21] extended (2) to the general case.
The fact that (2) is best possible was proved by several authors: A. A. Borovkov [4], A. I. Sakhanenko [17 – 21], T. V. Arak [1], and J. Komlós, P. Major, and G. Tusnády [14]. I. Berkes and W. Philipp [2], A. A. Borovkov and A. I. Sakhanenko [5], and T. M. Zuparov and A. K. Muhamedov [26, 27] proposed methods for estimating the Levy–Prokhorov distance for various classes of weakly dependent sequences.
Yoshihara [25] obtained the first result,
$$L(W_n;W)=O\big(n^{-1/8}\ln^{1/2}n\big),$$
for an a.r. strictly stationary sequence $\{\xi_k,\ k\in N\}$ satisfying
$$\sum_{k=1}^{\infty}k\,(\beta(k))^{\delta/(4+\delta)}<\infty$$
under the existence of an absolute moment of order $4+\delta$, $\delta>0$. Kanagawa [12] obtained the rate of convergence for u.s.m. and s.m. strictly stationary sequences of r.v.'s.
Using the Prokhorov method, the best estimate in the IP in the stationary case under s.m. conditions was obtained in [9], namely:
1) if the s.m. coefficients $\alpha(k)$ decrease exponentially to zero and
$$0<\sigma=E\xi_1^2+2\sum_{i=2}^{\infty}E\xi_1\xi_i<\infty, \tag{3}$$
then
$$L(W_n;W)=O\Big(n^{-\frac{s-2}{2(s-1)}}\ln^{\frac{2s+1}{6}}n\Big);$$
2) if the s.m. coefficients $\alpha(k)$ decrease to zero as
$$\alpha(k)\le Ck^{-\theta s(s-1)/(s-2)^2},\qquad C>0,\ \theta>1,$$
and condition (3) holds, then
$$L(W_n;W)=O\Big(n^{-\frac{(s-2)(\theta-1)}{6(\theta+1)+2(\theta-1)(s-2)}}\sqrt{\ln n}\Big).$$
For the u.s.m. case, S. A. Utev [23] showed for weakly stationary sequences $\{\xi_k,\ k\in N\}$ that
$$L(W_n;W)\le C(s;g;\sigma)\Bigg(n^{-s/2}\sum_{i=1}^{n}E|\xi_i|^{s}\Bigg)^{1/(s+1)},\qquad 2<s<5,$$
under condition (3) and $\varphi(k)\le A\,k^{-g(s)}$, where $g(s)>j(u)(j(u)-1)$, $u=\dfrac{2+5s}{2(5-s)}$, and $j(u)=2\min\{k\in N:\ 2k\ge u\}$.
T. M. Zuparov and A. K. Muhamedov [27] announced the estimate
$$L(W_n;W)\le C(s;\theta;K)L_{ns}^{\frac{1}{s+1}}$$
for nonstationary u.s.m. sequences under $2<s<6$ and $\varphi(k)\le Ak^{-\theta(s)}$, where $\theta(s)>2s$.
In this paper, using the Levy–Prokhorov distance, Bernstein's method, the approximation theorem of I. Berkes and W. Philipp [2], S. A. Utev's moment inequalities [24], and results of A. I. Sakhanenko [19], we obtain the best possible rate of convergence in the IP and extend and generalize several known results on nonstationary $\varphi$-mixing random variables.
This paper is organized as follows. The main results are given in Section 2. In Section 3, we give auxiliary lemmas, and in Section 4, we prove the results.
2. Main results.
Theorem 2.1. Suppose that, for some numbers $\theta$ and $s$ with
$$\theta>\max\big(4,\ s,\ s(s-2)/4\big),\qquad s>2,$$
the following conditions hold:
$$\varphi(\tau)\le K\tau^{-\theta},\quad K>0,\qquad E|\xi_{kn}|^{s}<\infty,\quad k=1,2,\dots,k(n),\ n=1,2,\dots.$$
Then there exist a Wiener process $\{W(t),\ t\in[0,1]\}$ and a constant $C(s;\theta;K)$ such that the inequality
$$P\big(\|W_n(t)-W(t)\|>x\big)\le C(s;\theta;K)\frac{L_{ns}}{x^{s}}$$
holds for all $x>0$.
Corollary. Under the conditions of Theorem 2.1 the following inequality holds:
$$L(W_n;W)\le C(s;\theta;K)L_{ns}^{\frac{1}{s+1}}.$$
Theorem 2.2. Under the conditions of Theorem 2.1 with $\theta>\max\big(4,\ s,\ 3s(s-2)/4\big)$, there exist a Wiener process $\{W(t),\ t\in[0,1]\}$ and a constant $C(s;\theta;K)$ such that the inequality
$$E\|W_n(t)-W(t)\|^{s}\le C(s;\theta;K)L_{ns}$$
holds.
Remark. S. A. Utev [24] proved the convergence of $E\|W_n(t)-W(t)\|^{s}$ to zero; the inequality of Theorem 2.2 for nonstationary sequences of $\varphi$-mixing random variables is obtained here for the first time.
Concerning the existence of sequences satisfying the conditions of Theorems 2.1 and 2.2, we can say the following. R. C. Bradley [6] proved (Theorem 3.3) that if $X:=(X_k,\ k\in Z)$ is a (not necessarily stationary) Markov chain and $\varphi(n)<1/2$ for some $n\ge1$, then $\varphi(n)\to0$ at least exponentially fast as $n\to\infty$.
From a strictly stationary Markov chain $X:=(X_k,\ k\in Z)$ we construct a nonstationary sequence $\xi:=(\xi_{kn},\ 1\le k\le n)$ as follows: $\xi_{2k-1,n}=-X_{2k-1}$, $1\le 2k-1\le n$, and $\xi_{2k,n}=X_{2k}$, $1\le 2k\le n$, for every row. Since the strictly stationary sequence $X$ satisfies the $\varphi$-mixing condition with exponentially decreasing coefficients, the sequence $\xi$ also satisfies the $\varphi$-mixing condition with exponentially decreasing coefficients. In addition, if $E|X_k|^{s}<\infty$, $s>2$, then the nonstationary sequence $\xi$ satisfies the conditions of the main theorems.
3. Auxiliary lemmas.
Lemma 3.1 (see [11]). Let the r.v.'s $\xi$ and $\eta$ be measurable with respect to the $\sigma$-algebras $M_1^{k}$ and $M_{k+\tau}^{k(n)}$, respectively, where $k\ge1$, $k+\tau\le k(n)$. If $E|\xi|^{p}<\infty$ and $E|\eta|^{q}<\infty$ for $p>1$, $q>1$ with $\frac1p+\frac1q=1$, then
$$|E\xi\eta-E\xi\,E\eta|\le 2\varphi^{1/p}(\tau)\,E^{1/p}|\xi|^{p}\,E^{1/q}|\eta|^{q}.$$
Lemma 3.2 (see [2]). Let $\{(S_k,\sigma_k),\ k\ge1\}$ be a sequence of complete separable metric spaces. Let $\{X_k,\ k\ge1\}$ be a sequence of random variables with values in $S_k$, and let $\{B_k,\ k\ge1\}$ be a sequence of $\sigma$-fields such that $X_k$ is $B_k$-measurable. Suppose that, for some $\varphi_k\ge0$,
$$|P(AB)-P(A)P(B)|\le\varphi_kP(A)$$
for all $B\in B_k$, $A\in\bigcup_{j<k}B_j$. Then, without changing its distribution, we can redefine the sequence $\{X_k,\ k\ge1\}$ on a richer probability space together with a sequence $\{Y_k,\ k\ge1\}$ of independent random variables such that $Y_k$ has the same distribution as $X_k$ and
$$P(\sigma_k(X_k,Y_k)\ge 6\varphi_k)\le 6\varphi_k,\qquad k=1,2,\dots.$$
Lemma 3.3 (see [24]). Let $\{X_k,\ k\ge1\}$ be a sequence of random variables satisfying the u.s.m. condition with $\varphi(p)<\frac14$. Then there exists a constant $C(\varphi(p))$, depending only on $\varphi(p)$, such that, for all $t\ge1$ and all $1\le q\le t$, the following inequality holds:
$$E\max_{1\le k\le n}\Bigg|\sum_{j=1}^{k}X_j\Bigg|^{t}\le\big(C(\varphi(p))\,t\big)^{t}\Bigg\{p^{t}E\max_{1\le k\le n}|X_k|^{t}+\max_{1\le k\le n}\Bigg(E\Bigg|\sum_{j=1}^{k}X_j\Bigg|^{q}\Bigg)^{t/q}\Bigg\}.$$
Lemma 3.4 (see [19]). Let $\{X_k,\ k\ge1\}$ be a sequence of independent random variables such that $EX_k=0$ and $\sum_{k=1}^{n}EX_k^2=1$. Suppose that $t_0=0$, $t_k=\sum_{i=1}^{k}EX_i^2$, $k=1,2,\dots,n$, and $L_{ns}=\sum_{i=1}^{n}E|X_i|^{s}$. Let $S(t)$ be the continuous random polygon with vertices $\big(t_k,\ S(t_k)=\sum_{j=1}^{k}X_j\big)$. Then, for any numbers $s\ge2$ and $b\ge1$, there exists a Wiener process $\{W(t),\ t\in[0,1]\}$ such that the inequality
$$P\big(\|S(t)-W(t)\|\ge C_1sbx\big)\le\Big(\frac{L_{ns}}{bx}\Big)^{b}+P\Big(\max_{1\le i\le n}|X_i|>x\Big)$$
holds for all $x>0$.
We introduce the following notation:
$$\xi_{jn}(x)=\xi_{jn}I\{|\xi_{jn}|\le CxB_n\}-E\xi_{jn}I\{|\xi_{jn}|\le CxB_n\},\qquad \bar\xi_{jn}(x)=\xi_{jn}-\xi_{jn}(x),$$
where $x>0$ is an arbitrary real number,
$$S_{kn}(b)=\sum_{j=b+1}^{b+k}\xi_{jn},\qquad S_{kn}(b,x)=\sum_{j=b+1}^{b+k}\xi_{jn}(x),\qquad \bar S_{kn}(b,x)=\sum_{j=b+1}^{b+k}\bar\xi_{jn}(x),$$
$$S_n(x)=S_{k(n)n}(0,x),$$
$$B_{kn}^2(b)=ES_{kn}^2(b),\qquad B_{kn}^2(b,x)=ES_{kn}^2(b,x),\qquad B_n^2(x)=ES_n^2(x),\qquad \varphi_t=\sum_{i=0}^{k(n)+1}\varphi^{1/t}(i),$$
$$L_{ns}=B_n^{-s}\sum_{j\le k(n)}E|\xi_{jn}|^{s},\qquad L_{nsx}(a,b)=B_n^{-s}\sum_{j=a+1}^{b}E|\xi_{jn}(x)|^{s},\quad s>2,$$
$$\bar\varphi_t=\sum_{i=0}^{k(n)+1}(i+1)\varphi^{1/t}(i).$$
We define the positive integers $m_i$ by the following algorithm:
$$m_0=0,\qquad m_{i+1}=\min\Bigg\{m:\ m_i<m<n,\ E\Bigg(\sum_{k=m_i+1}^{m+1}\xi_{kn}(x)\Bigg)^{2}>h(n)\Bigg\}$$
for $i=1,2,\dots,M-1$, where $M-1$ is the last index for which $m_{M-1}$ can be defined, i.e.,
$$E\Bigg(\sum_{j=m_{M-1}+1}^{k(n)}\xi_{jn}(x)\Bigg)^{2}<h(n),$$
where $h(n)$ is a sequence of positive numbers.
By $\eta_j$ and $\eta_j(x)$, respectively, we denote the sums
$$\eta_j=\sum_{i=m_{j-1}+1}^{m_j}\xi_{in},\qquad \eta_j(x)=\sum_{i=m_{j-1}+1}^{m_j}\xi_{in}(x).$$
We define the positive integers $l_i$ by the same algorithm:
$$l_0=0,\qquad l_{i+1}=\min\Bigg\{l:\ l_i<l<M,\ E\Bigg(\sum_{k=l_i+1}^{l+1}\eta_k(x)\Bigg)^{2}>T(n)\Bigg\}$$
for $i=1,2,\dots,N-1$, where $N-1$ is the last index for which $l_{N-1}$ can be defined, i.e.,
$$E\Bigg(\sum_{j=l_{N-1}+1}^{M}\eta_j(x)\Bigg)^{2}<T(n),$$
where $T(n)$ is a sequence of positive numbers; $T(n)$ and $h(n)$ will be chosen later.
By $\psi_j$ and $\psi_j(x)$, respectively, we denote the sums
$$\psi_j=\sum_{i=l_{j-1}+1}^{l_j-1}\eta_i,\qquad \psi_j(x)=\sum_{i=l_{j-1}+1}^{l_j-1}\eta_i(x).$$
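The blocking above can be sketched numerically; the following is an illustrative simplification (assuming independent summands, so that $E(\sum\cdot)^2$ reduces to a sum of variances; `make_blocks` and its arguments are not from the paper):

```python
import numpy as np

def make_blocks(var, h):
    """Split indices 1..n into consecutive blocks whose accumulated variance
    first exceeds h, mimicking the m_i algorithm. Under the independence
    assumption of this sketch, E(sum)^2 is just the sum of variances; the
    paper works with E(sum)^2 directly for dependent summands.

    var: per-summand variances E xi_kn(x)^2;  h: threshold h(n).
    Returns block boundaries m_0 = 0 < m_1 < ... (any final partial block
    with accumulated variance below h is dropped, as in the paper).
    """
    bounds = [0]
    acc = 0.0
    for k, v in enumerate(var, start=1):
        acc += v
        if acc > h:          # first index where the block variance exceeds h
            bounds.append(k)
            acc = 0.0
    return bounds

var = np.full(100, 1.0)
m = make_blocks(var, h=9.5)  # each block needs 10 unit-variance terms
```

Applying the same routine to the block sums $\eta_j(x)$ with threshold $T(n)$ would give the second level of boundaries $l_i$.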
Lemma 3.5. The following inequalities are true:
$$\big|B_{kn}^2(b)-B_{kn}^2(b,x)\big|\le C(\varphi_s)B_n^2x^{2-s}L_{ns}(b), \tag{4}$$
$$\max_{1\le k\le N}\Bigg|\sum_{j=1}^{k}\big(D\psi_j-D\psi_j(x)\big)\Bigg|\le C(\varphi_s)B_n^2x^{2-s}L_{ns}, \tag{5}$$
$$\max_{1\le k\le N}\Bigg|B_{m_k}^2-\sum_{j=1}^{k}D\psi_j(x)\Bigg|\le C(\varphi_2)N\,h(n), \tag{6}$$
$$E\psi_j^2(x)\le T(n)+\theta\,h(n),\qquad |\theta|\le C(\varphi_2), \tag{7}$$
$$M\le C(\bar\varphi_2)\frac{B_n^2(x)}{h(n)},\qquad N\le C(\bar\varphi_2)\frac{B_n^2(x)}{T(n)}. \tag{8}$$
Proof. It is obvious that
$$\big|B_{kn}^2(b)-B_{kn}^2(b,x)\big|=\Bigg|E\Bigg(\sum_{j=b+1}^{b+k}\big(\xi_{jn}(x)+\bar\xi_{jn}(x)\big)\Bigg)^{2}-E\Bigg(\sum_{j=b+1}^{b+k}\xi_{jn}(x)\Bigg)^{2}\Bigg|\le$$
$$\le\Bigg|\sum_{b+1\le i\ne j\le b+k}E\xi_{in}(x)\bar\xi_{jn}(x)\Bigg|+\Bigg|\sum_{b+1\le i\ne j\le b+k}E\bar\xi_{in}(x)\xi_{jn}(x)\Bigg|+\Bigg|\sum_{b+1\le i\ne j\le b+k}E\bar\xi_{in}(x)\bar\xi_{jn}(x)\Bigg|.$$
We estimate the first term on the right-hand side; the other terms are estimated analogously. By Lemma 3.1 and the Hölder inequality, we have
$$\Bigg|\sum_{b+1\le i\ne j\le b+k}E\xi_{in}(x)\bar\xi_{jn}(x)\Bigg|\le\sum_{b+1\le i\ne j\le b+k}\varphi^{1/s}(|j-i|)E^{1/s}|\xi_{in}(x)|^{s}E^{(s-1)/s}\big|\bar\xi_{jn}(x)\big|^{s/(s-1)}\le$$
$$\le C\Bigg(\sum_{i=0}^{k(n)}\varphi^{1/s}(i)\Bigg)B_n^2x^{2-s}L_{ks}(b)\le C(\varphi_s)B_n^2x^{2-s}L_{ks}(b).$$
Inequality (4) is proved. Inequality (5) is proved analogously.
Now we prove inequality (6). We estimate the difference $\Big|B_n^2(x)-\sum_{j=1}^{N}D\psi_j(x)\Big|$ for $k=N$; the other cases are treated analogously. It is obvious that $B_n^2(x)=E\Big(\sum_{j=1}^{N}(\psi_j(x)+\eta_{l_j}(x))\Big)^{2}$. By Lemma 3.1, we have
$$\Bigg|B_n^2(x)-\sum_{j=1}^{N}E\psi_j^2(x)\Bigg|=\Bigg|E\Bigg(\sum_{j=1}^{N}\big(\psi_j(x)+\eta_{l_j}(x)\big)\Bigg)^{2}-\sum_{j=1}^{N}E\psi_j^2(x)\Bigg|\le$$
$$\le 2\Bigg|\sum_{1\le j<l\le N}E\big(\psi_j(x)+\eta_{l_j}(x)\big)\big(\psi_l(x)+\eta_{l_l}(x)\big)\Bigg|\le 2\Bigg|\sum_{j=1}^{N}E\Bigg(\sum_{i=1}^{l_j}\eta_i(x)\Bigg)\Bigg(\sum_{i=l_j+1}^{l_M}\eta_i(x)\Bigg)\Bigg|\le$$
$$\le 2\sum_{i=1}^{k(n)}(i+1)\varphi^{1/2}(i)\,N\,h(n)\le C(\bar\varphi_2)\,N\,h(n).$$
Proof of inequality (7). By the definitions of the random variables $\psi_j(x)$ and $\eta_j(x)$, we obtain $E\eta_j^2(x)\le h(n)$ and
$$E\psi_j^2(x)\le T(n)<E\big(\psi_j(x)+\eta_{l_j}(x)\big)^{2}\le E\psi_j^2(x)+2E\psi_j(x)\eta_{l_j}(x)+E\eta_{l_j}^2(x)\le$$
$$\le T(n)+2E\Bigg(\sum_{i=l_{j-1}+1}^{l_j}\eta_i(x)\Bigg)\eta_{l_j}(x)+E\eta_{l_j}^2(x)\le$$
$$\le T(n)+2\sum_{i=1}^{N}\varphi^{1/2}(i)E^{1/2}\eta_{l_i}^2(x)\,E^{1/2}\eta_{l_{j+1}}^2(x)+E\eta_{l_{j+1}}^2(x)\le T(n)+C(\varphi_2)h(n).$$
Relations (4) and (5) imply that
$$B_n^2(x)\ge\sum_{j=1}^{N}D\psi_j(x)-C(\varphi_2)N\,h(n)\ge\sum_{j=1}^{N-1}D\psi_j(x)-C(\varphi_2)N\,h(n)\ge(N-1)T(n)-C(\varphi_2)N\,h(n).$$
Hence we obtain the second inequality in (8). Since $h(n)=o(T(n))$, the first inequality in (8) is obtained analogously. Lemma 3.5 is proved.
4. Proofs of theorems. Proof of Theorem 2.1. Denote by $W_{nx}(t)$ the random polygon with vertices $\big(t_{kn};\ S_{kn}(x)/B_n\big)$, and by $\overline W_{nx}(t)$ the polygon with vertices $\big(t_{m_kn};\ S_{m_k}(x)/B_n\big)$. Denote by $\overline{\overline W}_{nx}(t)$ and $\widehat W_{nx}(t)$ the random polygons with vertices
$$\Bigg(t_{m_kn};\ \frac{\sum_{j=1}^{k}\psi_j(x)}{B_n}\Bigg)\qquad\text{and}\qquad\Bigg(t_{m_kn};\ \frac{\sum_{j=1}^{k}\widehat\psi_j(x)}{B_n}\Bigg),$$
respectively, where $\widehat\psi_j(x)$, $j=1,2,\dots,N$, are independent r.v.'s whose marginal distributions coincide with those of the r.v.'s $\psi_j(x)$. The polygon with vertices
$$\Bigg(\frac{\sqrt{\sum_{j=1}^{k}D\psi_j(x)}}{\sqrt{\sum_{j=1}^{N}D\psi_j(x)}};\ \frac{\sum_{j=1}^{k}\widehat\psi_j(x)}{\sqrt{\sum_{j=1}^{N}D\psi_j(x)}}\Bigg)$$
is denoted by $\widetilde W_{nx}(t)$.
It is obvious that
$$P\big(\|W_n(t)-W(t)\|>x\big)\le P\Big(\|W_n(t)-W_{nx}(t)\|>\frac{x}{6}\Big)+P\Big(\|W_{nx}(t)-\overline W_{nx}(t)\|>\frac{x}{6}\Big)+$$
$$+P\Big(\|\overline W_{nx}(t)-\overline{\overline W}_{nx}(t)\|>\frac{x}{6}\Big)+P\Big(\|\overline{\overline W}_{nx}(t)-\widehat W_{nx}(t)\|>\frac{x}{6}\Big)+$$
$$+P\Big(\|\widehat W_{nx}(t)-\widetilde W_{nx}(t)\|>\frac{x}{6}\Big)+P\Big(\|\widetilde W_{nx}(t)-W(t)\|>\frac{x}{6}\Big)=\sum_{i=1}^{6}P_i. \tag{9}$$
To prove Theorem 2.1, we estimate each term on the right-hand side of (9). Without loss of generality, we assume that $L_{ns}<1$. Let $T(n)=C(s,\theta,K)B_n^2x^{\frac{2(t-s)}{t-2}}L_{ns}^{\frac{2}{t-2}}$, $t>s$. Then
$$N\le C(s,\theta,K)\frac{B_n^2(x)}{T(n)}\ll C(s,\theta,K)x^{-\frac{2(t-s)}{t-2}}L_{ns}^{-\frac{2}{t-2}}.$$
Estimate of $P_1$. It is apparent that
$$P_1=P\Big(\|W_n(t)-W_{nx}(t)\|>\frac{x}{6}\Big)\le P\Big(\max_{k\le k(n)}|\xi_{kn}|>C_1B_nx\Big)\le C\frac{L_{ns}}{x^{s}}.$$
Estimate of $P_2$. By the Chebyshev inequality and Lemmas 3.3 and 3.5 (with $q=2$, $t>s$), we have
$$P_2=P\Big(\|W_{nx}(t)-\overline W_{nx}(t)\|>\frac{x}{6}\Big)\le\sum_{j\le N}P\Bigg(\max_{m_{j-1}\le k\le m_j}\big|S_{kn}(x)-S_{m_{j-1}n}(x)\big|>C\frac{xB_n}{12}\Bigg)\le$$
$$\le C\frac{1}{x^{t}B_n^{t}}\sum_{j\le N}E\max_{m_{j-1}\le k\le m_j}\big|S_{kn}(x)-S_{m_{j-1}n}(x)\big|^{t}\le$$
$$\le C(t,\theta,K)\Bigg[\frac{L_{nt}(x)}{x^{t}}+\frac{1}{x^{t}}\Big(\frac{T(n)}{B_n^2}\Big)^{\frac{t-2}{2}}\Bigg]\le C(s,\theta,K)\frac{L_{ns}}{x^{s}}.$$
Estimate of $P_3$. It is obvious that
$$P_3=P\Big(\|\overline W_{nx}(t)-\overline{\overline W}_{nx}(t)\|>\frac{x}{6}\Big)\le P\Bigg(\max_{k\le N}\Bigg|\sum_{j\le k}\frac{\eta_{m_j}(x)}{B_n}\Bigg|>\frac{x}{6}\Bigg).$$
Estimating $P_3$ analogously to $P_2$, we obtain
$$P_3\le C(s,\theta,K)\frac{L_{ns}}{x^{s}}.$$
Estimate of $P_4$. It is obvious that
$$P\Big(\|\overline{\overline W}_{nx}(t)-\widehat W_{nx}(t)\|>\frac{x}{6}\Big)\le P\Bigg(\max_{k\le N}\Bigg|\sum_{j\le k}\Big(\frac{\psi_j(x)}{B_n}-\frac{\widehat\psi_j(x)}{B_n}\Big)\Bigg|>\frac{x}{6}\Bigg).$$
Using the Berkes–Philipp approximation theorem (Lemma 3.2) together with Lemmas 3.3 and 3.4, we get
$$P_4\le\sum_{j\le N}P\Bigg(\Bigg|\frac{\psi_j(x)}{B_n}-\frac{\widehat\psi_j(x)}{B_n}\Bigg|>\frac{x}{6N}\Bigg)\le\sum_{j\le N}P\Bigg(\Bigg|\frac{\psi_j(x)}{B_n}-\frac{\widehat\psi_j(x)}{B_n}\Bigg|>6\varphi(p)\Bigg)\le 6N\varphi(p)$$
when $\frac{x}{6N\varphi(p)}>6$, i.e., $36N\varphi(p)\le x$, where $p=\min_{j\le N}(m_j-m_{j-1})$. To obtain the estimate $P_4\le C(s,\theta,K)\frac{L_{ns}(x)}{x^{s}}$, we choose $p$ from the conditions
$$N\varphi(p)\le Cx,\qquad N\varphi(p)\le C\frac{L_{ns}}{x^{s}}.$$
From this, by Lemma 3.5, we have
$$N\varphi(p)\le NKp^{-\theta}\le C(s,\theta,K)x^{-\frac{2(t-s)}{t-2}}L_{ns}^{-\frac{2}{t-2}}p^{-\theta}\le C(s,\theta,K)\min\Big(x,\ \frac{L_{ns}}{x^{s}}\Big).$$
Then
$$p\ge C(s,\theta,K)\Bigg(\max\Bigg(x^{-\frac{3t-2(s+1)}{t-2}}L_{ns}^{-\frac{2}{t-2}};\ x^{\frac{t(s-2)}{t-2}}L_{ns}^{-\frac{t}{t-2}}\Bigg)\Bigg)^{\frac{1}{\theta}}.$$
Estimate of $P_5$. It is clear that
$$P\Big(\|\widehat W_{nx}(t)-\widetilde W_{nx}(t)\|>\frac{x}{6}\Big)\le P\Bigg(\max_{k\le N}\Bigg|\Bigg(1-\frac{B_n}{\sqrt{\sum_{j\le N}D\widehat\psi_j(x)}}\Bigg)\sum_{j\le k}\frac{\widehat\psi_j(x)}{B_n}\Bigg|>\frac{x}{6}\Bigg)\le$$
$$\le P\Bigg(\max_{k\le N}\Bigg|\sum_{j\le k}\frac{\widehat\psi_j(x)}{\sqrt{\sum_{j\le N}D\widehat\psi_j(x)}}\Bigg|>\frac{xB_n}{6\Big|B_n-\sqrt{\sum_{j\le N}D\widehat\psi_j(x)}\Big|}\Bigg)\le$$
$$\le C\Bigg|\frac{B_n-\sqrt{\sum_{j\le N}D\widehat\psi_j(x)}}{xB_n}\Bigg|^{t}E\Bigg(\max_{k\le N}\Bigg|\sum_{j\le k}\frac{\widehat\psi_j(x)}{\sqrt{\sum_{j\le N}D\widehat\psi_j(x)}}\Bigg|^{t}\Bigg).$$
Hence, by Lemma 3.3, we obtain
$$E\Bigg(\max_{k\le N}\Bigg|\sum_{j\le k}\frac{\widehat\psi_j(x)}{\sqrt{\sum_{j\le N}D\widehat\psi_j(x)}}\Bigg|^{t}\Bigg)\le C(t,\theta,K). \tag{10}$$
Since
$$\frac{B_n-\sqrt{\sum_{j\le N}D\widehat\psi_j(x)}}{B_n}=\frac{B_n^2-\sum_{j\le N}D\widehat\psi_j(x)}{B_n\Big(B_n+\sqrt{\sum_{j\le N}D\widehat\psi_j(x)}\Big)}$$
and $D\widehat\psi_j(x)=D\psi_j(x)$, Lemma 3.5 yields $\sum_{j\le N}D\psi_j(x)=B_n^2(1+o(1))$. It therefore suffices to estimate $B_n^2-\sum_{j\le N}D\widehat\psi_j(x)$. Let $h(n)=T(n)x^{\frac{t-s}{t}}L_{ns}^{\frac1t}$; then Lemma 3.5 implies that
$$\Bigg|\frac{B_n^2-\sum_{j\le N}D\psi_j(x)}{xB_n^2}\Bigg|\le C(\varphi_2)\Bigg(\frac{Nh(n)+B_n^2x^{2-s}L_{ns}}{xB_n^2}\Bigg)=C(\varphi_2)\Big(\frac{h(n)}{xT(n)}+x^{1-s}L_{ns}\Big)\le C(t,\varphi_2)\Big(x^{-\frac{s}{t}}L_{ns}^{\frac1t}+x^{1-s}L_{ns}\Big). \tag{11}$$
It follows that
$$P_5=P\Big(\|\widehat W_{nx}(t)-\widetilde W_{nx}(t)\|>\frac{x}{6}\Big)\le C(t,\varphi_2)\Big(x^{-\frac{s}{t}}L_{ns}^{\frac1t}+x^{1-s}L_{ns}\Big)^{t}\le C(t,\varphi_2)\Bigg(\frac{L_{ns}}{x^{s}}+\Big(x\frac{L_{ns}}{x^{s}}\Big)^{t}\Bigg). \tag{12}$$
It is obvious that if $0<x\le1$, then $P_5\le C(t,\varphi_2)\frac{L_{ns}}{x^{s}}$. Now let $x\ge1$; then, to obtain the estimate $P_5\le C(t,\varphi_2)\frac{L_{ns}}{x^{s}}$, the second term in (12) must satisfy
$$\Big(x\frac{L_{ns}}{x^{s}}\Big)^{t}\le\frac{L_{ns}}{x^{s}}.$$
This inequality holds if $x\ge L_{ns}^{\frac{t-1}{s(t-1)-t}}$. Hence the inequality $P_5\le C(t,\varphi_2)\frac{L_{ns}}{x^{s}}$ holds for all $x>0$.
Estimate of $P_6$. Using Lemma 3.4, we have
$$P_6=P\Big(\|\widetilde W_{nx}(t)-W(t)\|>\frac{x}{6}\Big)\le C\Big(\frac1x\Big)^{t}\Bigg(\sum_{j\le N}E\Bigg|\frac{\widehat\psi_j(x)}{\sqrt{\sum_{j\le N}D\widehat\psi_j(x)}}\Bigg|^{t}\Bigg).$$
Now we estimate $\sum_{j\le N}E\big|\widehat\psi_j(x)\big|^{t}$. Since the $\widehat\psi_j(x)$ are independent r.v.'s whose marginal distributions coincide with those of the $\psi_j(x)$, Lemmas 3.3 and 3.5 give
$$\sum_{j\le N}E|\psi_j(x)|^{t}\le\sum_{j\le N}\Bigg(\sum_{i=l_{j-1}}^{l_j}E|\eta_i(x)|^{t}+\big(D\psi_j(x)\big)^{t/2}\Bigg)\le C(t)\Bigg(\sum_{i=1}^{k(n)}E|\xi_{in}(x)|^{t}+N(T(n))^{t/2}\Bigg). \tag{13}$$
Hence, from Lemma 3.5 and the definition of $T(n)$, we get
$$P_6=P\Big(\|\widetilde W_{nx}(t)-W(t)\|>\frac{x}{6}\Big)\le C(t,\varphi)\Bigg(\frac{1}{x^{t}}L_{nt}+\frac{1}{x^{t}}\Big(\frac{T(n)}{B_n^2}\Big)^{\frac{t-2}{2}}\Bigg)\le C(t,\varphi)\frac{L_{ns}}{x^{s}}. \tag{14}$$
We now show that the splitting into the isolated groups described above is possible: as $n\to\infty$, the conditions $B_n^2,\ T(n),\ h(n)\to\infty$, $T(n)=o(B_n^2)$, $h(n)=o(T(n))$, and $L_{ns}\to0$ must be satisfied; we also explain why the truncation is needed for the proof of Theorem 2.1. These conditions are transparent in the stationary case, where the following asymptotic relations hold: $L_{ns}\approx n^{-\frac{s-2}{2}}$ for $s>2$, $T(n)\approx n^{\frac{t-s}{t-2}}$ for some $t>s$, and $h(n)\approx n^{\frac{2t^2-(3s-2)t+2s-4}{2t(t-2)}}$ for some
$$t>t_0=\frac{3s-2+\sqrt{9s^2-28s+36}}{4}>s,\qquad p\gg n^{\frac{t(s-2)}{2\theta(t-2)}},\qquad N\ll n^{\frac{t(s-2)}{s(t-2)}},\qquad \theta>\max\Big(4,\ s,\ \frac{s(s-2)}{4}\Big).$$
To obtain the required estimates of $P_2$ and $P_4$, a moment of order $t$ greater than $s$ is needed; this is why the truncation is necessary.
Theorem 2.1 is proved.
As mentioned above, the Levy–Prokhorov distance between the distributions $W_n$ and $W$ satisfies relation (1). Choosing $\varepsilon=x=L_{ns}^{\frac{1}{s+1}}$ in relation (1) and in Theorem 2.1, respectively, we obtain the proof of the Corollary.
Proof of Theorem 2.2. The method of proof is the same as for Theorem 2.1; we list only the places where appropriate changes are made. As in the proof of Theorem 2.1, the following inequality is valid:
$$E\|W_n(t)-W(t)\|^{s}\le E\|W_n(t)-W_{nx}(t)\|^{s}+E\|W_{nx}(t)-\overline W_{nx}(t)\|^{s}+E\|\overline W_{nx}(t)-\overline{\overline W}_{nx}(t)\|^{s}+$$
$$+E\|\overline{\overline W}_{nx}(t)-\widehat W_{nx}(t)\|^{s}+E\|\widehat W_{nx}(t)-\widetilde W_{nx}(t)\|^{s}+E\|\widetilde W_{nx}(t)-W(t)\|^{s}=\sum_{i=1}^{6}E_i. \tag{15}$$
Now, to prove Theorem 2.2, we estimate each term on the right-hand side of (15), taking $x=L_{ns}^{1/s}$. Then we have
$$T(n)=B_n^2L_{ns}^{\frac{2t}{s(t-2)}},\qquad h(n)=T(n)L_{ns}^{1/s}=B_n^2L_{ns}^{\frac{3t-2}{s(t-2)}},\qquad N=\frac{B_n^2}{T(n)}=L_{ns}^{-\frac{2t}{s(t-2)}},\qquad \frac{h(n)}{T(n)}=L_{ns}^{\frac1s}.$$
Estimate of $E_1$. It is obvious that
$$E_1=E\|W_n(t)-W_{nx}(t)\|^{s}\le E\Big(\max_{k\le k(n)}|\xi_{kn}|^{s}/B_n^{s}\Big)\le L_{ns}.$$
Estimate of $E_2$. By the moment inequality, Lemmas 3.3 (with $q=2$, $t>s$) and 3.5, and the definition of $T(n)$, the following inequality holds:
$$E_2=E\|W_{nx}(t)-\overline W_{nx}(t)\|^{s}\le E^{s/t}\|W_{nx}(t)-\overline W_{nx}(t)\|^{t}\le$$
$$\le\Bigg(\sum_{j\le N}E\Big(\max_{m_{j-1}\le k\le m_j}\big|S_{kn}(x)-S_{m_{j-1}n}(x)\big|^{t}/B_n^{t}\Big)\Bigg)^{s/t}\le$$
$$\le C(t,\theta,K)\Bigg(L_{nt}(x)+\Big(\frac{T(n)}{B_n^2}\Big)^{\frac{t-2}{2}}\Bigg)^{s/t}\le C(s,\theta,K)L_{ns}. \tag{16}$$
Estimate of $E_3$. It is obvious that
$$E_{3}=E\bigl\|\overline{W}_{nx}(t)-\overline{\overline{W}}_{nx}(t)\bigr\|^{s}\le E\max_{k\le N}\Biggl|\sum_{j\le k}\frac{\eta_{m_{j}}(x)}{B_{n}}\Biggr|^{s}.$$
Estimating $E_3$ in the same way as $E_2$, we obtain
$$E_{3}=E\bigl\|\overline{W}_{nx}(t)-\overline{\overline{W}}_{nx}(t)\bigr\|^{s}\le C(s,\theta,K)L_{ns}.$$
Estimate of $E_4$. By Lemmas 3.2, 3.3, and 3.5, and repeating the argument of [24], $E_4$ can be estimated as follows:
$$E_{4}\le E\Biggl(\max_{k\le N}\Biggl|\sum_{j\le k}\Biggl(\frac{\psi_{j}(x)}{B_{n}}-\frac{\widehat{\psi}_{j}(x)}{B_{n}}\Biggr)\Biggr|^{s}\Biggr)\le N^{s}\max_{j\le N}E\Biggl|\frac{\psi_{j}(x)}{B_{n}}-\frac{\widehat{\psi}_{j}(x)}{B_{n}}\Biggr|^{s}\le$$
$$\le N^{s}\Biggl((6\varphi(p))^{s}+\max_{j\le N}E\Biggl(\Biggl|\frac{\psi_{j}(x)}{B_{n}}-\frac{\widehat{\psi}_{j}(x)}{B_{n}}\Biggr|^{s};\ 6\varphi(p)<\Biggl|\frac{\psi_{j}(x)}{B_{n}}-\frac{\widehat{\psi}_{j}(x)}{B_{n}}\Biggr|\le 1\Biggr)\Biggr)+N^{s}\max_{j\le N}E\Biggl|\frac{\psi_{j}(x)}{B_{n}}-\frac{\widehat{\psi}_{j}(x)}{B_{n}}\Biggr|^{t}\le$$
$$\le CN^{s}\Biggl(\varphi^{s}(p)+\max_{j\le N}P\Biggl(\Biggl|\frac{\psi_{j}(x)}{B_{n}}-\frac{\widehat{\psi}_{j}(x)}{B_{n}}\Biggr|\ge 6\varphi(p)\Biggr)+\Bigl(\frac{T(n)}{B_{n}^{2}}\Bigr)^{t/2}\Biggr)\le L_{ns}.$$
Here the mixing coefficient must decrease fast enough that $N^{s}\varphi(p)\le L_{ns}$. In turn, $N^{s}\varphi(p)\le L_{ns}^{-\frac{2t}{t-2}}\,p^{-\theta}\le L_{ns}$ yields $p\ge L_{ns}^{-\frac{3t-2}{\theta(t-2)}}$ for $\theta>\max\Bigl(4,\,s,\,\dfrac{3s(s-2)}{4}\Bigr)$.
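The lower bound on $p$ follows from the exponent arithmetic. The following check (not in the original) assumes $\varphi(p)\le Kp^{-\theta}$, which the bound $N^{s}\varphi(p)\le L_{ns}^{-2t/(t-2)}p^{-\theta}$ presupposes, and $0<L_{ns}<1$:

```latex
% From N = L_{ns}^{-2t/(s(t-2))} we get N^s = L_{ns}^{-2t/(t-2)}.
% Requiring N^s p^{-\theta} \le L_{ns}:
\[
L_{ns}^{-\frac{2t}{t-2}}\,p^{-\theta}\le L_{ns}
\iff p^{-\theta}\le L_{ns}^{1+\frac{2t}{t-2}}=L_{ns}^{\frac{3t-2}{t-2}}
\iff p\ge L_{ns}^{-\frac{3t-2}{\theta(t-2)}}.
\]
```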
Estimate of $E_5$. It is obvious that
$$E\bigl\|\widehat{W}_{nx}(t)-\widetilde{W}_{nx}(t)\bigr\|^{s}\le C\Biggl(\frac{B_{n}-\sqrt{\sum_{j\le N}D\widehat{\psi}_{j}(x)}}{B_{n}}\Biggr)^{s}E\Biggl(\max_{k\le N}\Biggl|\sum_{j\le k}\frac{\widehat{\psi}_{j}(x)}{\sqrt{\sum_{j\le N}D\widehat{\psi}_{j}(x)}}\Biggr|^{s}\Biggr).$$
By Lemma 3.5 and inequalities (10), (11), we get
$$E\bigl\|\widehat{W}_{nx}(t)-\widetilde{W}_{nx}(t)\bigr\|^{s}\le C(s,\varphi_{2})\Biggl|\frac{B_{n}-\sqrt{\sum_{j\le N}D\widehat{\psi}_{j}(x)}}{B_{n}}\Biggr|^{s}\le\Bigl(\frac{h(n)}{T(n)}+x^{2-s}L_{ns}\Bigr)^{s}\le L_{ns}.$$
Estimate of $E_6$. By the moment inequality and estimates analogous to (13), (14), and (16), by Lemmas 3.3 and 3.4 we have
$$E\bigl\|\widetilde{W}_{nx}(t)-W(t)\bigr\|^{s}\le E^{s/t}\bigl\|\widetilde{W}_{nx}(t)-W(t)\bigr\|^{t}\le\Biggl(\sum_{j\le N}E\Biggl|\frac{\widehat{\psi}_{j}(x)}{\sqrt{\sum_{j\le N}D\widehat{\psi}_{j}(x)}}\Biggr|^{t}\Biggr)^{s/t}\le$$
$$\le C(t,K,\theta)\Biggl(L_{nt}(x)+\Bigl(\frac{T(n)}{B_{n}^{2}}\Bigr)^{\frac{t-2}{2}}\Biggr)^{s/t}\le C(t,K,\theta)L_{ns}.$$
Theorem 2.2 is proved.
Acknowledgment. The author would like to thank Professor O. Sh. Sharipov for detailed and
helpful suggestions and discussions on this study.
References
1. T. V. Arak, An estimate of A. A. Borovkov, Theory Probab. and Appl., 20, № 2, 380 – 381 (1976).
2. I. Berkes, W. Philipp, Approximation theorems for independent and weakly dependent random vectors, Ann. Probab.,
7, № 1, 29 – 54 (1979).
3. P. Billingsley, Convergence of probability measures, Wiley, New York (1968).
4. A. A. Borovkov, On the convergence rate in the invariance principle, Theory Probab. and Appl., 29, № 3, 550 – 553
(1985).
5. A. A. Borovkov, A. I. Sakhanenko, On the estimates of the rate of convergence in the invariance principle for Banach
spaces, Theory Probab. and Appl., 25, № 4, 734 – 744 (1981).
6. R. C. Bradley, Basic properties of strong mixing conditions. A survey and some open questions, Probab. Surv., 2,
107 – 144 (2005).
7. R. M. Dudley, Distance of probability measures and random variables, Ann. Math. Statist., 39, 1563 – 1572 (1968).
8. M. Donsker, An invariance principle for certain probability limit theorems, Mem. Amer. Math. Soc., 6, 250 – 268
(1951).
9. V. V. Gorodetsky, On the rate of convergence in the invariance principle for strongly mixing sequences, Theory
Probab. and Appl., 28, № 4, 780 – 785 (1983).
10. C. C. Heyde, Some properties of metrics on a study on convergence to normality, Z. Wahrscheinlichkeitstheor. und
verw. Geb., 11, № 3, 181 – 192 (1969).
11. I. A. Ibragimov, Yu. V. Linnik, Independent and stationary sequences of random variables, Wolters-Noordhoff, Groningen, the Netherlands (1971).
12. S. Kanagawa, Rates of convergence of the invariance principle for weakly dependent random variables, Yokohama
Math. J., 30, № 1-2, 103 – 119 (1982).
13. J. Komlos, P. Major, G. Z. Tusnady, An approximation of partial sums of independent RV’s and the sample DF. I,
Z. Wahrscheinlichkeitstheor. und verw. Geb., 32, № 2, 111 – 131 (1975).
14. J. Komlos, P. Major, G. Z. Tusnady, An approximation of partial sums of independent RV’s and the sample DF. II,
Z. Wahrscheinlichkeitstheor. und verw. Geb., 34, № 1, 33 – 58 (1976).
15. M. Peligrad, S. Utev, A new maximal inequality and invariance principle for stationary sequences, Ann. Probab., 33,
№ 2, 798 – 815 (2005).
16. Yu. V. Prokhorov, Convergence of random processes and limit theorems in probability theory, Theory Probab. and
Appl., 1, № 2, 157 – 214 (1956).
17. A. I. Sakhanenko, Estimates for the rate of convergence in the invariance principle, Dokl. Akad. Nauk SSSR, 219,
1076 – 1078 (1974).
18. A. I. Sakhanenko, Estimates in the invariance principle, Trudy Inst. Mat. SO RAN, vol. 5 (in Russian), Nauka,
Novosibirsk (1985), p. 27 – 44.
19. A. I. Sakhanenko, On the accuracy of normal approximation in the invariance principle, Trudy Inst. Mat. SO RAN,
19, 40 – 66 (1989) (in Russian).
20. A. I. Sakhanenko, Estimates in the invariance principle in terms of truncated power moments, Siberian Math. J., 47,
№ 6, 1113 – 1127 (2006).
21. A. I. Sakhanenko, A general estimate in the invariance principle, Siberian Math. J., 52, № 4, 696 – 710 (2011).
22. A. V. Skorokhod, Research on the theory of stochastic processes, Kiev Univ. Press, Kiev (1961).
23. S. A. Utev, Inequalities for sums of weakly dependent random variables and estimates of rate of convergence in the
invariance principle, Limit Theorems for Sums of Random Variables, Tr. Inst. Mat., 3, 50 – 77 (1984) (in Russian).
24. S. A. Utev, Sums of \varphi -mixing random variables, Asymptotic Analysis of Distributions of Random Processes, Nauka,
Novosibirsk (1989), p. 78 – 100 (in Russian).
25. K. Yoshihara, Convergence rates of the invariance principle for absolutely regular sequence, Yokohama Math. J.,
27, № 1, 49 – 55 (1979).
26. T. M. Zuparov, A. K. Muhamedov, An invariance principle for processes with uniformly strongly mixing, Proc. Funct.
Random Processes and Statistical Inference, Fan, Tashkent (1989), p. 27 – 36.
27. T. M. Zuparov, A. K. Muhamedov, On the rate of convergence of the invariance principle for \varphi -mixing processes,
Proc. Rep. VI USSR-Japan Symp. Probab. Theory and Math. Statistics, Kiev, August 5 – 10 (1991), p. 65.
Received 27.07.20