Integral Functionals of the Gasser–Muller Regression Function
For integral functionals of the Gasser–Muller regression function and its derivatives, we consider the plug-in estimator. The consistency and asymptotic normality of the estimator are shown.
Published: Institute of Mathematics, NAS of Ukraine, 2015.
Journal: Ukrains'kyi Matematychnyi Zhurnal (Ukrainian Mathematical Journal), Vol. 67, No. 4 (2015), pp. 435 – 446.
Online access: https://umj.imath.kiev.ua/index.php/umj/article/view/1994
UDC 519.21
D. Arabidze, P. Babilua, E. Nadaraya (I. Javakhishvili Tbilisi State Univ., Georgia),
G. Sokhadze, A. Tkeshelashvili (I. Vekua Inst. Appl. Math. I. Javakhishvili Tbilisi State Univ., Georgia)
1. Introduction. The study of functionals of a probability density function or of a regression function and its derivatives is an interesting problem that attracts active interest from researchers (see, e.g., [4 – 9]). Functionals of a probability density function and its derivatives have been studied in detail (see [6 – 8] and the references therein). Investigations of functionals of a regression function and its derivatives are more limited [4, 5].
In the present paper we investigate integral functionals of a regression function and its derivatives. In our investigation we use the Gasser – Muller regression estimator introduced and studied in [1 – 3]. As these works show, problems of this type are important, in particular, for choosing the asymptotically optimal bandwidth (see formula (5) in [5, p. 2584]). Our approach is based on the derivation of a representation theorem, which we then use to obtain asymptotic properties, in particular consistency and a central limit theorem.
Suppose we have $n$ measurements taken at the points
$$t_1, t_2, \ldots, t_n \qquad (0 \le t_1 \le t_2 \le \ldots \le t_n \le 1),$$
where the $t_k$, $k = 1, \ldots, n$, depend on $n$. The model considered is the following:
$$Y(t_k) = a(t_k) + \varepsilon_k, \qquad k = 1, \ldots, n,$$
where the $\varepsilon_k$ are i.i.d. with $E\varepsilon_k = 0$ and $D\varepsilon_k = \sigma^2 < \infty$.
The estimator of the unknown regression function $a(t)$ was introduced by Gasser and Muller [1] and defined by the expression
$$\hat a_n(t) = \frac{1}{h_n}\sum_{i=1}^{n}\int_{s_{i-1}}^{s_i} W\!\left(\frac{t-u}{h_n}\right)du \cdot Y(t_i),$$
* The work is supported by the Shota Rustaveli National Scientific Foundation, Project No. FR/308/5-104/12.
© D. ARABIDZE, P. BABILUA, E. NADARAYA, G. SOKHADZE, A. TKESHELASHVILI, 2015
ISSN 1027-3190. Укр. мат. журн., 2015, т. 67, № 4 435
where $0 = s_0 \le s_1 \le \ldots \le s_n = 1$, $t_i \le s_i \le t_{i+1}$, $i = 1, 2, \ldots, n-1$, and $\max_i |s_i - s_{i-1}| = O(1/n)$; $\{h_n,\ n = 1, 2, \ldots\}$ is a sequence of positive numbers monotonically tending to zero, and $W(u)$ is a function with the properties of a probability density.
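To make the construction concrete, here is a minimal numerical sketch of the estimator $\hat a_n(t)$ (an illustration, not from the paper): the kernel, design, and regression function are assumptions, with the Epanechnikov density standing in for $W$ and the midpoints of consecutive design points standing in for the $s_i$.

```python
import numpy as np

def gasser_muller(t, tk, Y, h, W):
    """Gasser-Muller estimate: (1/h) * sum_i Y(t_i) * int_{s_{i-1}}^{s_i} W((t-u)/h) du,
    with s_0 = 0, s_n = 1 and the remaining s_i taken as design-point midpoints."""
    n = len(tk)
    s = np.concatenate(([0.0], (tk[:-1] + tk[1:]) / 2, [1.0]))
    est = 0.0
    for i in range(n):
        # substitution v = (t - u)/h turns (1/h) * int_{s_i}^{s_{i+1}} W((t-u)/h) du
        # into int_{(t - s_{i+1})/h}^{(t - s_i)/h} W(v) dv; trapezoidal rule below
        v = np.linspace((t - s[i + 1]) / h, (t - s[i]) / h, 50)
        w = W(v)
        est += Y[i] * np.sum((w[:-1] + w[1:]) / 2 * np.diff(v))
    return est

# Epanechnikov density as W (an arbitrary choice satisfying (w1), (w3))
W = lambda v: np.where(np.abs(v) <= 1, 0.75 * (1 - v**2), 0.0)

rng = np.random.default_rng(0)
n = 400
tk = np.sort(rng.uniform(0, 1, n))
a = lambda t: np.sin(2 * np.pi * t)            # true regression function
Y = a(tk) + 0.1 * rng.standard_normal(n)       # model Y(t_k) = a(t_k) + eps_k
print(gasser_muller(0.5, tk, Y, h=0.1, W=W))   # should be close to a(0.5) = 0
```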
In the same paper [1], Gasser and Muller introduced the estimator of the $k$th derivative of the regression function, $a^{(k)}(t)$:
$$\hat a_n^{(k)}(t) = \frac{1}{h_n^{k+1}}\sum_{i=1}^{n}\int_{s_{i-1}}^{s_i} W^{(k)}\!\left(\frac{t-u}{h_n}\right)du \cdot Y(t_i) \qquad (1)$$
for all $k = 0, 1, \ldots, m$. It is assumed that $\hat a_n^{(0)}(t) \equiv \hat a_n(t)$.
In the works cited above, consistency and asymptotic normality of these estimators were established under certain conditions.
Let $\varphi \colon \mathbb{R}^{m+2} \to \mathbb{R}$ be a continuous bounded smooth function. Consider an integral functional of the form
$$I(a) = \int_{-\infty}^{\infty}\varphi\bigl(t, a(t), a'(t), \ldots, a^{(m)}(t)\bigr)\,dt.$$
We observe the pairs $(t_i, Y_i)$, $i = 1, 2, \ldots, n$; this means that
$$Y_i = Y(t_i) = a(t_i) + \varepsilon_i.$$
To estimate $I(a)$, we use the plug-in estimator, i.e., we consider the functional
$$I(\hat a_n) = \int_{-\infty}^{\infty}\varphi\bigl(t, \hat a_n(t), \hat a_n'(t), \ldots, \hat a_n^{(m)}(t)\bigr)\,dt.$$
Here $\hat a_n^{(k)}(t)$ is defined by (1).
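A hedged numerical sketch of the plug-in idea (not part of the paper): for the simplest case $m = 0$ and $\varphi(t, x_0) = x_0^2$, i.e., $I(a) = \int_0^1 a(t)^2\,dt$, we evaluate a Gasser – Muller estimate on a grid and integrate it numerically. The equidistant design, the kernel, and the one-point approximation of the integrals over $[s_{k-1}, s_k]$ are all simplifying assumptions.

```python
import numpy as np

# Model Y(t_k) = a(t_k) + eps_k on an equidistant design
rng = np.random.default_rng(1)
n = 500
tk = (np.arange(n) + 0.5) / n
a = lambda t: 1 + np.sin(2 * np.pi * t)
Y = a(tk) + 0.1 * rng.standard_normal(n)

# On an equidistant design, int_{s_{k-1}}^{s_k} W((t-u)/h) du ~ W((t-t_k)/h)/n
h = 0.08
W = lambda v: np.where(np.abs(v) <= 1, 0.75 * (1 - v**2), 0.0)   # Epanechnikov
grid = np.linspace(0.0, 1.0, 1001)
a_hat = W((grid[:, None] - tk[None, :]) / h) @ Y / (n * h)        # \hat a_n on the grid

# Plug-in estimate of I(a) = int_0^1 a(t)^2 dt, i.e. phi(t, x0) = x0^2, m = 0
I_hat = np.mean(a_hat**2)     # grid average approximates the integral
I_true = 1.5                  # int_0^1 (1 + sin(2 pi t))^2 dt = 1 + 1/2
print(I_hat, I_true)
```

Near $t = 0$ and $t = 1$ part of the kernel mass falls outside the data, so this crude version carries a boundary bias of order $h$.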
2. Representation theorem. Our consideration is based on a representation theorem, which will lead to the results we are interested in. Let us list the conditions that the quantities under consideration are assumed to satisfy.
Conditions on $a$:
(a1) The function $a = a(t)$ is defined and continuous on $[0, 1]$ and takes its values in the interval $[-k, k]$.
(a2) The function $a(t)$ has continuous derivatives up to order $m$ inclusive.
Conditions on $\varepsilon_k$:
($\varepsilon$1) The random variables $\varepsilon_k$, $k = 1, 2, \ldots$, are independent and identically distributed.
($\varepsilon$2) $E\varepsilon_k = 0$, $D\varepsilon_k = \sigma^2 < \infty$.
($\varepsilon$3) The tail condition $P\{|\varepsilon_k| > n\} < e^{-n}$ holds.
In the sequel, for brevity, we use the notation
$$\frac{\partial\varphi}{\partial x_i} = \varphi^{(i)} \quad\text{for } i = 0, 1, \ldots, m \qquad\text{and}\qquad \frac{\partial^2\varphi}{\partial x_i\,\partial x_j} = \varphi^{(ij)} \quad\text{for } i, j = 0, 1, \ldots, m.$$
Conditions on $\varphi$:
($\varphi$1) The function $\varphi \colon \mathbb{R}^{m+2} \to \mathbb{R}$ is continuous, bounded, integrable, and has bounded continuous derivatives up to second order inclusive in some open convex domain $A$ which contains the domain $\mathbb{R}\times[-k, k]^{m+1}$.
($\varphi$2) All first and second derivatives of the function $\varphi$ are uniformly bounded in the domain $A$ by a constant $C_\varphi > 0$. Therefore, by this condition, for all $i, j = 0, 1, \ldots, m$,
$$\sup\bigl\{|\varphi^{(ij)}(s, s_0, s_1, \ldots, s_m)| \colon (s, s_0, s_1, \ldots, s_m)\in A\bigr\} \le C_\varphi. \qquad (2)$$
Conditions on $W$:
(w1) $\displaystyle\int_{-\infty}^{\infty} W(t)\,dt = 1$.
(w2) The function $W(t)$ has continuous derivatives up to order $m$, $m \ge 1$.
(w3) The function $W(t)$ has compact support $[-\tau, \tau]$, with $W(-\tau) = W(\tau) = 0$.
(w4) For any $i = 0, 1, \ldots, m$, $W^{(i)} \in L_1([-\tau, \tau])$.
Denote by $a_n(t)$ the mathematical expectation of $\hat a_n(t)$:
$$a_n(t) = E\hat a_n(t) = E\,\frac{1}{h_n}\sum_{i=1}^{n}\int_{s_{i-1}}^{s_i} W\!\left(\frac{t-u}{h_n}\right)du\cdot Y(t_i) = \frac{1}{h_n}\sum_{i=1}^{n}\int_{s_{i-1}}^{s_i} W\!\left(\frac{t-u}{h_n}\right)du\cdot a(t_i).$$
Then we obtain
$$a_n^{(k)}(t) = E\hat a_n^{(k)}(t) = \frac{1}{h_n^{k+1}}\sum_{i=1}^{n}\int_{s_{i-1}}^{s_i} W^{(k)}\!\left(\frac{t-u}{h_n}\right)du\cdot a(t_i).$$
Let us verify that the expressions $I(a)$, $I(a_n)$, and $I(\hat a_n)$ exist and are finite. Using the Taylor formula, for any point $(s, s_0, s_1, \ldots, s_m)\in A$ and some intermediate point $(s, \tilde s_0, \tilde s_1, \ldots, \tilde s_m)\in A$ we can write
$$|\varphi(s, s_0, s_1, \ldots, s_m)| = \left|\,\sum_{i=0}^{m}\varphi^{(i)}(s, 0, 0, \ldots, 0)\,s_i + \frac{1}{2}\sum_{i,j=0}^{m}\varphi^{(ij)}(s, \tilde s_0, \tilde s_1, \ldots, \tilde s_m)\,s_i s_j\right|.$$
Accordingly, there exists a constant $C$ such that
$$|\varphi(s, s_0, s_1, \ldots, s_m)| \le C\left(\sum_{i=0}^{m}|s_i| + \sum_{i=0}^{m}|s_i|^2\right).$$
Hence it follows that for any bounded measurable functions $f_0(t), f_1(t), \ldots, f_m(t)$ from $L_1(\mathbb{R})$ we have
$$\int_{-\infty}^{\infty}\bigl|\varphi\bigl(t, f_0(t), \ldots, f_m(t)\bigr)\bigr|\,dt < \infty, \qquad (3)$$
and therefore $I(a)$ exists.
The conditions imposed on the function $W$ ensure its boundedness and membership in $L_1(\mathbb{R})$. Then condition (w4) together with (2) and (3) implies the finiteness of both $I(a_n)$ and $I(\hat a_n)$ for any $n$.
By the Taylor formula we can write
$$I(\hat a_n) - I(a_n) = S_n(h_n) + R_n, \qquad (4)$$
where for any $h_n > 0$, $S_n(h_n)$ is the sum of independent random variables
$$S_n(h_n) = \sum_{i=0}^{m}\int_{0}^{1}\varphi^{(i)}\bigl(t, a_n(t), a_n'(t), \ldots, a_n^{(m)}(t)\bigr)\bigl(\hat a_n^{(i)}(t) - a_n^{(i)}(t)\bigr)\,dt. \qquad (5)$$
The remainder $R_n$ has the form
$$R_n = \frac{1}{2}\sum_{i,j=0}^{m}\int_{0}^{1}\varphi^{(ij)}\bigl(\tilde b_m(t)\bigr)\bigl(\hat a_n^{(i)}(t) - a_n^{(i)}(t)\bigr)\bigl(\hat a_n^{(j)}(t) - a_n^{(j)}(t)\bigr)\,dt, \qquad (6)$$
where $\tilde b_m(t)$ is a point on the straight line segment connecting the points
$$\bigl(t, a_n(t), a_n'(t), \ldots, a_n^{(m)}(t)\bigr) \quad\text{and}\quad \bigl(t, \hat a_n(t), \hat a_n'(t), \ldots, \hat a_n^{(m)}(t)\bigr).$$
Let us estimate the remainder $R_n$. Applying the standard procedure, from (2) and (6) we obtain
$$|R_n| \le C_\varphi\int_{0}^{1}\sum_{i=0}^{m}\bigl(\hat a_n^{(i)}(t) - a_n^{(i)}(t)\bigr)^2\,dt.$$
Let $C^m[0, 1]$ denote the space of bounded real functions that are defined and continuous on $[0, 1]$ and have continuous derivatives up to order $m$. In this space we introduce the norm
$$\|f\|_m = \left(\sum_{i=0}^{m}\int_{0}^{1}\left(\frac{d^i f}{dt^i}\right)^2 dt\right)^{1/2}, \qquad f\in C^m[0, 1].$$
The closure of $C^m[0, 1]$ in this norm is denoted by $W_2^m$ and called the Sobolev space. It is a complete separable Hilbert space with the scalar product
$$\langle f, g\rangle_m = \sum_{i=0}^{m}\int_{0}^{1}\frac{d^i f}{dt^i}\,\frac{d^i g}{dt^i}\,dt, \qquad f, g\in C^m[0, 1].$$
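The norm $\|f\|_m$ can be approximated numerically; the following sketch (an illustration, not the paper's machinery) uses finite differences for the derivatives and the trapezoidal rule for the integrals. For $f(t) = \sin t$ on $[0, 1]$ and $m = 1$ we have $\int_0^1\sin^2 t\,dt + \int_0^1\cos^2 t\,dt = 1$, so $\|f\|_1 = 1$.

```python
import numpy as np

def sobolev_norm(f, m, num=10001):
    """|| f ||_m = ( sum_{i=0}^m int_0^1 (d^i f / dt^i)^2 dt )^{1/2},
    with derivatives taken by repeated finite differences on a fine grid."""
    t = np.linspace(0.0, 1.0, num)
    g = f(t)
    total = 0.0
    for _ in range(m + 1):
        # trapezoidal rule for int_0^1 g(t)^2 dt
        total += np.sum((g[:-1]**2 + g[1:]**2) / 2) * (t[1] - t[0])
        g = np.gradient(g, t)  # next derivative
    return np.sqrt(total)

# f(t) = sin t, m = 1: int sin^2 + int cos^2 = int 1 dt = 1
print(sobolev_norm(np.sin, 1))  # close to 1.0
```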
Denote
$$r_n(m) = \|\hat a_n - a_n\|_m^2;$$
then we can write
$$|R_n| \le C_\varphi\,r_n(m). \qquad (7)$$
Set
$$U_k = U_k(t) = \frac{1}{h_n}\int_{s_{k-1}}^{s_k} W\!\left(\frac{t-u}{h_n}\right)du\,\bigl[Y(t_k) - a(t_k)\bigr], \qquad k = 1, 2, \ldots, n,$$
where $a(t_k) = EY(t_k)$. Then
$$\sum_{k=1}^{n} U_k = \frac{1}{h_n}\sum_{k=1}^{n}\int_{s_{k-1}}^{s_k} W\!\left(\frac{t-u}{h_n}\right)du\,\bigl[Y(t_k) - a(t_k)\bigr] = \hat a_n(t) - a_n(t).$$
Therefore
$$r_n(m) = \left\|\sum_{k=1}^{n} U_k\right\|_m^2. \qquad (8)$$
Let us estimate the norm of each summand $U_k$ in (8), $k = 1, 2, \ldots, n$. We obtain
$$\|U_k\|_m = \left(\sum_{i=0}^{m}\int_{0}^{1}\left|\frac{1}{h_n^{i+1}}\int_{s_{k-1}}^{s_k} W^{(i)}\!\left(\frac{t-u}{h_n}\right)du\,\bigl[Y(t_k) - a(t_k)\bigr]\right|^2 dt\right)^{1/2} =$$
$$= \left(\sum_{i=0}^{m}\frac{1}{h_n^{2i+2}}\int_{0}^{1}\left|\,h_n\int_{(t-s_k)/h_n}^{(t-s_{k-1})/h_n} W^{(i)}(v)\,dv\right|^2\bigl|Y(t_k) - a(t_k)\bigr|^2\,dt\right)^{1/2} \le$$
$$\le 2|\varepsilon_k|\,C_W\left(\sum_{i=0}^{m}\frac{1}{h_n^{2i}}\int_{0}^{1}\left|\frac{t-s_{k-1}}{h_n} - \frac{t-s_k}{h_n}\right|^2 dt\right)^{1/2} =$$
$$= 2|\varepsilon_k|\,C_W\,\frac{|s_k - s_{k-1}|\sqrt{1 - h_n^{2m+2}}}{h_n^{m+1}\sqrt{1 - h_n^2}} \le L\,\frac{1}{nh_n^{m+1}} =: M_m = O\!\left(\frac{1}{nh_n^{m+1}}\right) \qquad (9)$$
for a sufficiently large $L > 0$.
To estimate $r_n(m)$, we use McDiarmid's inequality, which we state here for convenience (for details see [10]).
McDiarmid's inequality. Let $H(t_1, \ldots, t_k)$ be a real function such that for each $i = 1, \ldots, k$ and some $c_i$,
$$\sup_{t_1, \ldots, t_k,\,t}\bigl|H(t_1, \ldots, t_{i-1}, t_i, t_{i+1}, \ldots, t_k) - H(t_1, \ldots, t_{i-1}, t, t_{i+1}, \ldots, t_k)\bigr| \le c_i.$$
If $X_1, \ldots, X_k$ are independent random variables taking values in the domain of the function $H(t_1, \ldots, t_k)$, then for every $\varepsilon > 0$,
$$P\bigl\{\bigl|H(X_1, \ldots, X_k) - EH(X_1, \ldots, X_k)\bigr| > \varepsilon\bigr\} \le 2\exp\left(-\frac{2\varepsilon^2}{\sum_{i=1}^{k} c_i^2}\right).$$
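A small Monte Carlo illustration of McDiarmid's inequality (an aside, not from the paper): for the sample mean $H(x_1, \ldots, x_n) = n^{-1}\sum x_i$ of variables in $[0, 1]$, changing one coordinate moves $H$ by at most $c_i = 1/n$, so the bound becomes $2\exp(-2n\varepsilon^2)$.

```python
import numpy as np

# McDiarmid for H(x_1,...,x_n) = (x_1 + ... + x_n)/n with x_i in [0, 1]:
# c_i = 1/n, so P(|H - EH| > eps) <= 2 exp(-2 eps^2 / (n * (1/n)^2)) = 2 exp(-2 n eps^2)
rng = np.random.default_rng(2)
n, trials, eps = 100, 20000, 0.1

means = rng.random((trials, n)).mean(axis=1)   # H on uniform [0,1] samples, EH = 1/2
empirical = np.mean(np.abs(means - 0.5) > eps)
bound = 2 * np.exp(-2 * n * eps**2)
print(empirical, bound)   # the empirical frequency should not exceed the bound
```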
Let us apply McDiarmid's inequality to the function
$$H(U_1, \ldots, U_n) = \left\|\sum_{k=1}^{n} U_k\right\|_m.$$
As $c_k$ we take $c_k \equiv 2M_m$, $k = 1, \ldots, n$. From (9), for any $\delta > 0$ we obtain
$$P\left\{\left|\,\left\|\sum_{k=1}^{n} U_k\right\|_m - E\left\|\sum_{k=1}^{n} U_k\right\|_m\right| \ge \delta\right\} \le 2\exp\left\{-\frac{\delta^2 nh_n^{2m+2}}{2M_m^2}\right\}.$$
We substitute here
$$\delta = \frac{\sqrt{2\log n}}{\sqrt{n}\,h_n^{m+1}}$$
and, by the Borel – Cantelli lemma, write
$$\left\|\sum_{k=1}^{n} U_k\right\|_m = E\left\|\sum_{k=1}^{n} U_k\right\|_m + O\left(\frac{\sqrt{\log n}}{\sqrt{n}\,h_n^{m+1}}\right). \qquad (10)$$
Using Jensen's inequality,
$$\left(E\left\|\sum_{k=1}^{n} U_k\right\|_m\right)^2 \le E\left\|\sum_{k=1}^{n} U_k\right\|_m^2 =$$
$$= \sum_{k=1}^{n}\sum_{i=0}^{m}\int_{0}^{1} E\left|\frac{1}{h_n^{i+1}}\int_{s_{k-1}}^{s_k} W^{(i)}\!\left(\frac{t-u}{h_n}\right)du\,\bigl[Y(t_k) - a(t_k)\bigr]\right|^2 dt \le$$
$$\le 2C_W^2\sum_{k=1}^{n}\sum_{i=0}^{m}\int_{0}^{1}\frac{1}{h_n^{2i+2}}\left|\int_{s_{k-1}}^{s_k} du\right|^2 E\bigl[Y(t_k) - a(t_k)\bigr]^2\,dt =$$
$$= 2C_W^2\sigma^2\,\frac{1 - h_n^{2m+2}}{(1 - h_n^2)\,h_n^{2m+2}}\sum_{k=1}^{n}(s_k - s_{k-1})^2 \le K\cdot\frac{1}{nh_n^{2m+2}}; \qquad (11)$$
from (7), (8), (10), and (11) we conclude that
$$R_n = O\left(\frac{\log n}{nh_n^{2m+2}}\right).$$
Therefore the following statement is true.
Theorem 1. Assume that conditions (a1) – (a2), ($\varepsilon$1) – ($\varepsilon$3), ($\varphi$1) – ($\varphi$2), and (w1) – (w4) are fulfilled. Then representation (4) is valid, where with probability 1 the remainder has the order
$$R_n = O\left(\frac{\log n}{nh_n^{2m+2}}\right). \qquad (12)$$
3. Consistency. In this section we use Theorem 1 to prove that the estimator $I(\hat a_n)$ is strongly consistent.
Theorem 2. Let the conditions of Theorem 1 be fulfilled. If the positive sequence $(h_n)_{n=1}^{\infty}$, $0 < h_n < 1$, is chosen so that
$$\frac{\log n}{nh_n^{2m+2}} \to 0,$$
then with probability 1 we have
$$I(\hat a_n) \to I(a) \quad\text{as}\quad n\to\infty.$$
Proof. By Theorem 1 and formula (4),
$$I(\hat a_n) - I(a_n) = S_n(h_n) + R_n, \qquad R_n = o(1)\ \text{a.e.},$$
where
$$S_n(h_n) = \sum_{i=0}^{m}\int_{0}^{1}\varphi^{(i)}\bigl(t, a_n(t), a_n'(t), \ldots, a_n^{(m)}(t)\bigr)\bigl(\hat a_n^{(i)}(t) - a_n^{(i)}(t)\bigr)\,dt.$$
By condition (a1),
$$\bigl\{\bigl(t, a_n(t), a_n'(t), \ldots, a_n^{(m)}(t)\bigr) \colon t\in[0, 1]\bigr\} \subset [0, 1]\times[-k, k]^{m+1}.$$
This and condition ($\varphi$2) imply that there exists a constant $C_\varphi > 0$ such that
$$\sup\bigl\{|\varphi^{(i)}(t, t_0, \ldots, t_m)| \colon (t, t_0, \ldots, t_m)\in[0, 1]\times[-k, k]^{m+1}\bigr\} \le C_\varphi.$$
Keeping this in mind, we can write
$$|S_n(h_n)| \le C_\varphi\sum_{i=0}^{m}\int_{0}^{1}\frac{1}{h_n^{i+1}}\sum_{k=1}^{n}\int_{s_{k-1}}^{s_k}\left|W^{(i)}\!\left(\frac{t-u}{h_n}\right)\right|du\cdot\bigl|Y(t_k) - a(t_k)\bigr|\,dt \le$$
$$\le 2C_\varphi C_W\sum_{i=0}^{m}\int_{0}^{1}\frac{1}{h_n^{i+1}}\sum_{k=1}^{n}|\varepsilon_k|\,|s_k - s_{k-1}|\,dt \sim M\,\frac{1}{nh_n^{m+1}} \qquad (13)$$
for some $M$, by condition (w2).
Hoeffding's inequality. Suppose $X_1, X_2, \ldots, X_n$ are independent real-valued random variables such that, for each $i$, $X_i$ takes values in the interval $[r_i, p_i]$. Let $Y = \sum_{i=1}^{n} X_i$. Then for all $t > 0$,
$$P\bigl\{|Y - EY| \ge t\bigr\} \le 2\exp\left(-\frac{2t^2}{\sum_{i=1}^{n}(p_i - r_i)^2}\right). \qquad (14)$$
As $X_k$ we take
$$X_k = \sum_{i=0}^{m}\int_{0}^{1}\frac{1}{h_n^{i+1}}\int_{s_{k-1}}^{s_k}\left|W^{(i)}\!\left(\frac{t-u}{h_n}\right)\right|du\cdot\bigl[Y(t_k) - a(t_k)\bigr]\,dt.$$
Analogously to (13), it can be shown that $X_k$ takes its values in the interval
$$\left[-M\,\frac{1}{n^2 h_n^{m+1}},\ M\,\frac{1}{n^2 h_n^{m+1}}\right].$$
Therefore
$$\sum_{i=1}^{n}(p_i - r_i)^2 = \frac{4M^2}{n^3 h_n^{2m+2}}.$$
Besides, we take
$$t = \frac{2M\sqrt{\log n}}{n^{3/2} h_n^{m+1}}.$$
Then from (14) we obtain
$$P\left\{\bigl|S_n(h_n)\bigr| > \frac{2M\sqrt{\log n}}{n^{3/2} h_n^{m+1}}\right\} \le 2\exp\{-2\log n\},$$
by means of which, using the Borel – Cantelli lemma, we conclude that
$$S_n(h_n) = O\left(\frac{\sqrt{\log n}}{n^{3/2} h_n^{m+1}}\right).$$
It is obvious that, under the condition of Theorem 2, $\dfrac{\sqrt{\log n}}{n^{3/2} h_n^{m+1}}$ also tends to zero. Thus we conclude that $S_n(h_n)\to 0$ as $n\to\infty$.
By formula (6) from [2] we can write
$$a_n^{(k)}(t) = E\hat a_n^{(k)}(t) = \int_{-\tau}^{\tau} W(u)\,a^{(k)}(t - uh_n)\,du + O\left(\frac{1}{nh_n^k}\right).$$
Hence we make the following conclusions:
(i) under the condition of Theorem 2, $\dfrac{1}{nh_n^k}$ also tends to zero for any $k = 0, 1, \ldots, m$;
(ii) $a_n^{(k)}(t)\to a^{(k)}(t)$ as $n\to\infty$.
Summarizing the above discussion, we ascertain that
$$I(a_n) = \int_{0}^{1}\varphi\bigl(t, a_n(t), a_n'(t), \ldots, a_n^{(m)}(t)\bigr)\,dt \longrightarrow \int_{0}^{1}\varphi\bigl(t, a(t), a'(t), \ldots, a^{(m)}(t)\bigr)\,dt = I(a) \quad\text{as}\quad n\to\infty.$$
Since $I(\hat a_n) - I(a_n) = o(1)$, we conclude that
$$I(\hat a_n) - I(a) \longrightarrow 0 \quad\text{a.e.}$$
The theorem is proved.
4. Central limit theorem. Using our representation theorem, we can obtain the limit distribution of the integral functional
$$I(\hat a_n) = \int_{0}^{1}\varphi\bigl(t, \hat a_n(t), \hat a_n'(t), \ldots, \hat a_n^{(m)}(t)\bigr)\,dt.$$
Consider the difference (4), where for any $h_n > 0$, $S_n(h_n)$ is the sum (5) of independent random variables and $R_n$ is the remainder of the form (6). Clearly,
$$ES_n(h_n) = 0 \quad\text{and}\quad ER_n \to 0 \quad\text{as}\quad n\to\infty. \qquad (15)$$
Moreover,
$$E\bigl(S_n(h_n)\bigr)^2 = \sigma^2\left[\sum_{i=0}^{m}\int_{0}^{1}\varphi^{(i)}\bigl(t, a_n(t), a_n'(t), \ldots, a_n^{(m)}(t)\bigr)\,dt\right]^2 \quad\text{and}\quad DR_n \to 0 \quad\text{as}\quad n\to\infty. \qquad (16)$$
Using appropriate conditions, we have to prove that the variable
$$\sqrt{n}\,\bigl(I(\hat a_n) - I(a_n)\bigr)$$
is asymptotically normal and to calculate the limiting variance. For this, according to the theorem and formulas (4), (15), and (16), we have to show the asymptotic normality of the variable $\sqrt{n}\,S_n(h_n)$. As follows from (5), it suffices to study this property for the variables
$$d_k = Y(t_k)\sum_{i=0}^{m}\frac{1}{h_n^{i+1}}\int_{0}^{1}\int_{s_{k-1}}^{s_k} W^{(i)}\!\left(\frac{t-u}{h_n}\right)\varphi^{(i)}\bigl(t, a_n(t), a_n'(t), \ldots, a_n^{(m)}(t)\bigr)\,dt\,du.$$
It can be easily verified that
$$Ed_k = a(t_k)\sum_{i=0}^{m}\frac{1}{h_n^{i+1}}\int_{0}^{1}\int_{s_{k-1}}^{s_k} W^{(i)}\!\left(\frac{t-u}{h_n}\right)\varphi^{(i)}\bigl(t, a_n(t), a_n'(t), \ldots, a_n^{(m)}(t)\bigr)\,dt\,du.$$
Thus we consider the sequence of independent random variables
$$f_k(n) = \alpha(n, k)\bigl(Y(t_k) - a(t_k)\bigr) = \alpha(n, k)\,\varepsilon_k,$$
where
$$\alpha(n, k) = \sum_{i=0}^{m}\frac{1}{h_n^{i+1}}\int_{0}^{1}\int_{s_{k-1}}^{s_k} W^{(i)}\!\left(\frac{t-u}{h_n}\right)\varphi^{(i)}\bigl(t, a_n(t), a_n'(t), \ldots, a_n^{(m)}(t)\bigr)\,dt\,du,$$
and consider the sum
$$S_n(h_n) = \sum_{k=1}^{n}\alpha(n, k)\,\varepsilon_k.$$
Let $F_{k,n}$ be the probability distribution function of the random variable $\alpha(n, k)\varepsilon_k$, and let $F_\varepsilon$ be the distribution function of the random variable $\varepsilon_k$. The Lindeberg condition is written in the form $\lim_{n\to\infty} L_n(\delta) = 0$ for all $\delta > 0$, where
$$L_n(\delta) = \left(\sigma^2\sum_{k=1}^{n}\alpha^2(n, k)\right)^{-1}\sum_{j=1}^{n}\int x^2\,J\!\left(|x| \ge \delta\sigma\left(\sum_{k=1}^{n}\alpha^2(n, k)\right)^{1/2}\right)dF_{j,n}(x),$$
where $J(A)$ is the indicator function of the set $A$.
It is easy to see that
$$L_n(\delta) \le \frac{1}{\sigma^2}\max_{1\le j\le n}\int x^2\,J\bigl(|x| \ge \delta\sigma\,v^{-1}(n, j)\bigr)\,dF_\varepsilon(x),$$
where
$$v(n, j) = \frac{|\alpha(n, j)|}{\left(\sum_{k=1}^{n}\alpha^2(n, k)\right)^{1/2}}.$$
It remains to show that
$$\max_{1\le j\le n} v(n, j)\to 0 \quad\text{as}\quad n\to\infty.$$
But since
$$\max_{1\le j\le n}|\alpha(n, j)| = O\left(\frac{1}{nh_n^{m+1}}\right),$$
we have
$$\max_{1\le j\le n} v(n, j) = O\left(\frac{1}{\sqrt{nh_n^{m+1}}}\right).$$
Thus the Lindeberg condition is fulfilled, and we can conclude that the following theorem is valid.
Theorem 3. Let the conditions of Theorem 1 be fulfilled. Then, if
$$h_n\to 0 \quad\text{and}\quad nh_n^{m+1}\to\infty \quad\text{as}\quad n\to\infty,$$
we have
$$\sqrt{n}\,\bigl(I(\hat a_n) - I(a)\bigr) \xrightarrow{\,d\,} N(0, r^2),$$
where
$$r^2 = \sigma^2\left[\sum_{i=0}^{m}\int_{0}^{1}\varphi^{(i)}\bigl(t, a(t), a'(t), \ldots, a^{(m)}(t)\bigr)\,dt\right]^2.$$
5. Example. As an example, we consider the problem of estimating the total curvature (see [11, p. 22]) of a regression function $a$:
$$I = \int_{0}^{1}\bigl(a''(t)\bigr)^2\,dt.$$
Here
$$\varphi(t, x_0, \ldots, x_m) = x_2^2, \qquad m = 2.$$
Since the only nonzero derivative is $\varphi^{(2)} = 2x_2$, we obtain
$$r^2 = \sigma^2\left(\int_{0}^{1} 2a''(t)\,dt\right)^2 = 4\sigma^2\bigl(a'(1) - a'(0)\bigr)^2.$$
For $h_n\to 0$ and $nh_n^6\to\infty$, we have the convergence
$$\sqrt{n}\,\bigl(I(\hat a_n) - I(a)\bigr) \xrightarrow{\,d\,} N(0, r^2).$$
These considerations can be used for testing hypotheses about the total curvature of a regression function.
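A numerical sketch of this example (illustrative only; the equidistant design, the kernel, the bandwidth, and the restriction of the integral to $[h_n, 1 - h_n]$ to avoid boundary effects are all assumptions not discussed in the paper): we estimate $a''$ by (1) with $k = 2$, using the quartic kernel $W(u) = \frac{15}{16}(1 - u^2)^2$, whose second derivative is available in closed form, and plug it into the functional.

```python
import numpy as np

# Plug-in estimate of the total curvature int (a''(t))^2 dt
rng = np.random.default_rng(3)
n, h, sigma = 2000, 0.1, 0.05
tk = (np.arange(n) + 0.5) / n
a  = lambda t: np.sin(2 * np.pi * t)
a2 = lambda t: -(2 * np.pi)**2 * np.sin(2 * np.pi * t)   # true a''
Y = a(tk) + sigma * rng.standard_normal(n)

def W2(u):
    """Second derivative of the quartic kernel W(u) = (15/16)(1 - u^2)^2 on [-1, 1]."""
    return np.where(np.abs(u) <= 1, (15 / 4) * (3 * u**2 - 1), 0.0)

# Estimator (1) with k = 2; the integral over [s_{k-1}, s_k] is
# approximated by W''((t - t_k)/h) * (1/n) on the equidistant design
grid = np.linspace(h, 1 - h, 801)
a2_hat = W2((grid[:, None] - tk[None, :]) / h) @ Y / (n * h**3)

I_hat  = np.mean(a2_hat**2) * (1 - 2 * h)     # int over [h, 1-h] of (a''_n)^2
I_true = np.mean(a2(grid)**2) * (1 - 2 * h)   # same interior integral for true a''
print(I_hat, I_true)
```

With these settings the estimate carries a smoothing bias of a few percent; a smaller $h$ trades that bias for variance of order $1/(nh^5)$.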
6. Iterated logarithm law. Applying the well-known law of the iterated logarithm from Kuelbs' paper [12], we ascertain that the following statement is true.
Theorem 4. If the sequence $h_n$ is chosen so that
$$h_n = \left(\frac{\log n}{\sqrt{n\log\log n}}\right)^{\frac{1}{2m}},$$
then
$$\limsup_{n\to\infty}\ \pm\,\frac{\sqrt{n}\,\bigl[I(\hat a_n) - I(a_n)\bigr]}{\sqrt{2\log\log n}} = r.$$
Proof. Note that for this $h_n$ we have
$$R_n = o\left(\sqrt{\frac{\log\log n}{n}}\right).$$
It can be easily verified that
$$\limsup_{n\to\infty}\ \pm\,\frac{\sqrt{n}\,\bigl[I(\hat a_n) - I(a_n)\bigr]}{\sqrt{2\log\log n}} = \limsup_{n\to\infty}\ \pm\,\frac{\sqrt{n}\sum_{k=1}^{n}\bigl[\alpha(n, k)Y(t_k) - \alpha(n, k)a(t_k)\bigr]}{\sqrt{2\log\log n}} = r.$$
1. Gasser T., Muller H.-G. Nonparametric estimation of regression functions and their derivatives. – Heidelberg, 1979.
– (Preprint / Univ. Heidelberg, № 38).
2. Gasser T., Muller H.-G. Estimating regression functions and their derivatives by the kernel method // Scand. J. Statist. –
1984. – 11, № 3. – P. 171 – 185.
3. Hardle W., Gasser T. On robust kernel estimation of derivatives of regression functions // Scand. J. Statist. – 1985. –
12, № 3. – P. 233 – 240.
4. Goldstein L., Messer K. Optimal plug-in estimators for nonparametric functional estimation // Ann. Statist. – 1992. –
20, № 3. – P. 1306 – 1328.
5. Quintela del Río A. A plug-in technique in nonparametric regression with dependence // Communs Statist.: Theory
and Methods. – 1994. – 23, № 9. – P. 2581 – 2603.
6. Mason D. M., Nadaraya E., Sokhadze G. Integral functionals of the density // Nonparametrics and Robustness in Modern Statist. Inference and Time Series Anal.: a Festschrift in honor of Prof. Jana Jurečková. – Beachwood, OH, 2010. – P. 153 – 168.
7. Babilua P. K., Nadaraya E. A., Patsatsia M. B., Sokhadze G. A. On the integral functionals of a kernel estimator of
a distribution density // Proc. I. Vekua Inst. Appl. Math. – 2008. – 58. – P. 6 – 14.
8. Levit B. Ya. Asymptotically efficient estimation of nonlinear functionals // Probl. Inform. Transmiss. – 1978. – 14,
№ 3. – P. 65 – 72.
9. Hlávka Z. On nonparametric estimators of location of maximum // Acta Univ. carol. Math. et phys. – 2011. – 52,
№ 1. – P. 5 – 13.
10. Devroye L. Exponential inequalities in nonparametric estimation // Nonparametric Function. Estimation and Relat.
Top. (Spetses, 1990). – Dordrecht: Kluwer Acad. Publ., 1991. – P. 31 – 44.
11. Wand M. P., Jones M. C. Kernel smoothing // Monogr. Statist. and Appl. Probab. – London: Chapman and Hall, Ltd.,
1995. – 60.
12. Kuelbs J. The law of the iterated logarithm and related strong convergence theorems for Banach space valued random
variables // Lect. Notes Math. – Berlin: Springer, 1976. – 539.
Received 17.10.13,
after revision — 09.01.15