Consistency of M-estimates in general nonlinear regression models

A nonlinear regression model with continuous time and weakly dependent or long-range dependent stationary noise is considered. Sufficient conditions for strong consistency of M-estimates of the regression parameters are obtained.

Saved in:
Bibliographic Details
Date: 2007
Authors: Ivanov, A.V., Orlovsky, I.V.
Format: Article
Language: English
Published: Institute of Mathematics, NAS of Ukraine, 2007
ISSN: 0321-3900
Online Access: https://nasplib.isofts.kiev.ua/handle/123456789/4479
Journal Title: Digital Library of Periodicals of National Academy of Sciences of Ukraine
Cite: Consistency of M-estimates in general nonlinear regression models / A.V. Ivanov, I.V. Orlovsky // Theory of Stochastic Processes. — 2007. — Vol. 13 (29), no. 1-2. — pp. 86-97. — Bibliogr.: 8 titles. — English.

Theory of Stochastic Processes, Vol. 13 (29), no. 1-2, 2007, pp. 86-97

ALEXANDER V. IVANOV AND IGOR V. ORLOVSKY

CONSISTENCY OF M-ESTIMATES IN GENERAL NONLINEAR REGRESSION MODELS

A nonlinear regression model with continuous time and weakly dependent or long-range dependent stationary noise is considered. Sufficient conditions for strong consistency of M-estimates of the regression parameters are obtained.

1. Introduction

Consider the regression model

X(t) = g(y(t), θ) + ε(t), t ≥ 0, (1)

where g(y, τ) is a nonrandom function defined on Y × Θ^c, Θ^c is the closure in R^q of an open set Θ, and Y ⊂ R^m is a compact region of the regression experiment design. The Borel function y(t) : [0, ∞) → Y is the design of the regression experiment, and θ = (θ_1, ..., θ_q) ∈ Θ^c is an unknown parameter. Let ε(t), t ∈ R^1, be a random process satisfying the following assumption.

A1. ε(t), t ∈ R^1, is a real-valued mean-square continuous measurable stationary process with zero mean on a complete probability space (Ω, F, P).

We do not assume the function g(y, θ) to be a linear form in the coordinates of the vector θ.

Definition 1. An M-estimate of the unknown parameter θ obtained from observations X(t), t ∈ [0, T), of the type (1) is any random vector θ̂_T that minimizes in τ ∈ Θ^c the functional

M_T(τ) = (1/T) ∫_0^T ρ(X(t) − g(y(t), τ)) dt

with a continuous risk function ρ : R^1 → R^1.

The consistency of M-estimates for a nonlinear regression model with independent identically distributed observation errors is considered in [1]. Some facts on the consistency of least squares and least moduli estimates can be found in [2].

This paper presents sufficient conditions for strong consistency of M-estimates of the unknown parameter θ in the model (1) with random noise satisfying weak or long-range dependence conditions.

2000 Mathematics Subject Classifications. Primary 62J02; Secondary 62J99.
Key words and phrases. Consistency, M-estimates, nonlinear regression model.

2. Assumptions and the main results

Let us impose some restrictions on the random process ε(t), t ∈ R^1.

A2. ε(t), t ∈ R^1, is a strictly stationary process such that for some δ > 0

μ_{2+δ} = E|ε(0)|^{2+δ} < ∞ and ∫_0^∞ (α(r))^{δ/(2+δ)} dr < ∞,

where

α(r) = sup_{A ∈ σ(−∞, s], B ∈ σ[s+r, ∞)} |P(AB) − P(A)P(B)|,

and σ(a, b] is the σ-algebra generated by the random variables (r.v.) {ε(t), t ∈ (a, b]}.

Definition 2. If for a symmetric r.v. ξ the probabilities P{|ξ − b| < x}, x ∈ [0, ∞), are nonincreasing functions of the variable b ∈ [0, ∞), then we say that ξ is a symmetric and unimodal r.v.

A3. ε(0) is a symmetric and unimodal r.v. with distribution function (d.f.) F(x).

Let B be the σ-algebra of Borel subsets of Y. For any A ∈ B set

μ_T(A) = T^{−1} m{t ∈ [0, T] : y(t) ∈ A},

where m is Lebesgue measure on [0, ∞). Let Δg(y, τ) = g(y, θ) − g(y, τ) and v_θ(ε) = {τ ∈ R^q : ‖τ − θ‖ < ε}.

B1. The measures μ_T converge weakly, as T → ∞, to some measure μ (μ_T ⇒ μ), and for any ε > 0

μ{y ∈ Y : Δg(y, τ) = 0} < 1 for each τ ∉ v_θ(ε).

Example. Let {y_i}_{i≥1} ⊂ Y be some sequence and y(t) = y_i, t ∈ [i − 1, i), i = 1, 2, ... . Introduce the measure

μ_T = (1/T) Σ_{i=1}^{[T]} δ_{y_i} + ({T}/T) δ_{y_{[T]+1}},

where [T] and {T} are the integer and fractional parts of T. Then, if (1/n) Σ_{i=1}^n δ_{y_i} ⇒ μ as n → ∞, then μ_T ⇒ μ as T → ∞.

The requirement on the measure μ in condition B1 can be rewritten as follows: for any ε > 0

μ{y ∈ Y : g(y, τ) ≠ g(y, θ)} > 0 for each τ ∉ v_θ(ε).

Suppose the measure μ is absolutely continuous with respect to Lebesgue measure l on Y, l(Y) > 0, and μ has a density f(y) separated from zero: inf_{y∈Y} f(y) ≥ f_* > 0. Then

μ{y ∈ Y : g(y, τ) ≠ g(y, θ)} = ∫_{{y∈Y : g(y,τ)≠g(y,θ)}} f(y) dy ≥ f_* l{y ∈ Y : g(y, τ) ≠ g(y, θ)} > 0,

provided l{y ∈ Y : g(y, τ) ≠ g(y, θ)} > 0. But the last inequality is precisely the ability of the regression function to distinguish parameters.

Definition 3.
A function J(·) : R^1 → R^1 is called symmetric if there exist a point b_0 ∈ R^1 (called the center of symmetry) and a function ϕ(·) : [0, ∞) → R^1 such that J(b) = ϕ(|b − b_0|). If ϕ is monotonically nondecreasing and ϕ(x) > ϕ(0) for x > 0, then J is called unimodal, and the center of symmetry is called the mode.

Let us impose some restrictions on the risk function. Let Eρ(ε(t)) = Eρ(ε(0)) < ∞.

C1. ρ(x) is a continuous unimodal function with mode at zero such that ρ(0) = 0.

C2. There exists c > 0 such that |ρ(x_1) − ρ(x_2)| ≤ c|x_1 − x_2| for any x_1, x_2 ∈ R^1.

Assume also

A4. ∫_0^∞ [P{|ε(0)| < z} − P{|ε(0) − b| < z}] dρ(z) > 0, b > 0.

Note that C1 implies that ρ(x) is monotonically nondecreasing for x ≥ 0, so the Lebesgue–Stieltjes integral in A4 exists. Moreover, from A3 it follows that the difference in square brackets in A4 is nonnegative.

Theorem 1. Suppose that assumptions A1–A4, B1, C1 and C2 are fulfilled. Then the M-estimate θ̂_T → θ a.s. as T → ∞.

To state the second result of the paper, we need an additional condition on ε(t).

Definition 4. A stationary process ε(t), t ∈ R^1, Eε(t) = 0, is called a process with long-range dependence if

Eε(0)ε(t) = B(t) = L(|t|)/|t|^α, 0 < α < 1, (2)

where L(t) : [0, ∞) → [0, ∞) is a slowly varying function (at infinity).

A5. The Gaussian random process ε(t), t ∈ R^1, is a process with long-range dependence, and B(0) = 1.

Theorem 2. Suppose that assumptions A1, A4, A5, B1, C1 and C2 are fulfilled. Then the M-estimate θ̂_T → θ a.s. as T → ∞.

3. Auxiliary assertions

Set δ_T(τ) = Q_T(τ) − EQ_T(τ) and Δ_T(τ) = Q_T(τ) − Q_T(θ), where Q_T(τ) denotes the functional M_T(τ) of Definition 1.

Definition 5. The unknown parameter θ is said to be identifiable if for any ε > 0 there exist numbers T_0 = T_0(ε) and δ = δ(ε) > 0 such that EΔ_T(τ) > δ whenever T > T_0 and τ ∉ v_θ(ε).

Lemma 1. Assume that θ is an identifiable parameter and

sup_{τ∈Θ^c} |δ_T(τ)| → 0 a.s. as T → ∞. (3)

Then θ̂_T → θ a.s. as T → ∞.

Proof.
Denote by Ω_1 the event of probability 1 on which condition (3) is fulfilled. For elementary events ω ∈ Ω_1, from the definition of the estimate θ̂_T we have

Δ_T(θ̂_T) ≤ 0. (4)

Suppose that for some fixed ω ∈ Ω_1 the convergence θ̂_T → θ fails as T → ∞. Then there exist a number ε_0 > 0 and a sequence T_n ↑ ∞ such that for n > n(ε_0)

‖θ̂_{T_n} − θ‖ ≥ ε_0.

Since (4) also holds for these T_n, it follows that inf_{τ∉v_θ(ε_0)} Δ_{T_n}(τ) ≤ 0. Let T_n ≥ T_0(ε_0) and, for n > n(ε_0), sup_{τ∈Θ^c} |δ_{T_n}(τ)| < δ_0/4, where δ_0 = δ(ε_0). Then for n > n(ε_0)

0 ≥ inf_{τ∉v_θ(ε_0)} Δ_{T_n}(τ) = inf_{τ∉v_θ(ε_0)} (δ_{T_n}(τ) + EΔ_{T_n}(τ)) − δ_{T_n}(θ)
≥ inf_{τ∉v_θ(ε_0)} δ_{T_n}(τ) + inf_{τ∉v_θ(ε_0)} EΔ_{T_n}(τ) − δ_{T_n}(θ)
≥ inf_{τ∈Θ^c} δ_{T_n}(τ) + inf_{τ∉v_θ(ε_0)} EΔ_{T_n}(τ) − δ_{T_n}(θ) > δ_0/2.

We obtain a contradiction. Hence, for ω ∈ Ω_1, θ̂_T → θ as T → ∞. □

Introduce the function

J(b) = Eρ(ε(t) − b) = Eρ(ε(0) − b), b ∈ R^1.

The next lemma states sufficient conditions for identifiability of the parameter θ.

Lemma 2. The unknown parameter θ is identifiable if
(i) for any ε > 0 there exist T_0 = T_0(ε) and x = x(ε) > 0 such that for any T > T_0 and any τ ∉ v_θ(ε), μ_T{y ∈ Y : |Δg(y, τ)| > x} > x;
(ii) J(b) is unimodal;
(iii) J(b) > J(0) for any b ≠ 0.

Proof. It is easily seen that under conditions (ii) and (iii) the mode is at b = 0. Furthermore,

EΔ_T(τ) = (1/T) ∫_0^T [J(Δg(y(t), τ)) − J(0)] dt. (5)

Fix some ε > 0 and take the numbers T_0 and x from condition (i). From condition (ii) it follows that the right-hand side of (5) admits the lower bound

(1/T) ∫_0^T [J(Δg(y(t), τ)) − J(0)] dt ≥ (J(x) − J(0)) (1/T) ∫_0^T χ_{(x,∞)}(|Δg(y(t), τ)|) dt = (J(x) − J(0)) μ_T{y ∈ Y : |Δg(y, τ)| > x},

where χ_A(x) is the indicator of the set A. From (i) and (iii) it follows that in the definition of identifiability one can set δ = x(J(x) − J(0)). □

Next we formulate some sufficient conditions for the validity of Lemmas 1 and 2.

Lemma 3.
If assumption B1 holds, then condition (i) of Lemma 2 is fulfilled.

Proof. Let ε > 0 be arbitrary. We must show that there exist numbers T_0 and x > 0 such that

μ_T{y ∈ Y : |Δg(y, τ)| > x} > x, T > T_0, τ ∉ v_θ(ε). (6)

Assume that (6) does not hold. Then there exist sequences T_n ↑ ∞ and τ_n ∈ Θ^c \ v_θ(ε) such that

μ_{T_n}{y ∈ Y : |Δg(y, τ_n)| > n^{−1}} ≤ n^{−1}, n ≥ 1. (7)

Since the set Θ^c \ v_θ(ε) is compact, there exist a point τ* ∈ Θ^c \ v_θ(ε) and a sequence n_k, k ≥ 1, such that τ_{n_k} → τ* as k → ∞. Let δ > 0 be an arbitrary fixed number. Then there exists a number k_δ such that for k > k_δ, uniformly in y ∈ Y,

|Δg(y, τ_{n_k}) − Δg(y, τ*)| ≤ δ/2. (8)

Thanks to (8), for k > k_δ

{|Δg(y, τ*)| > δ} ⊂ {|Δg(y, τ*) − Δg(y, τ_{n_k})| + |Δg(y, τ_{n_k})| > δ} ⊂ {|Δg(y, τ_{n_k})| > δ/2}. (9)

Taking into account inequality (7), for n_k > 2/δ one has

μ_{T_{n_k}}{y ∈ Y : |Δg(y, τ_{n_k})| > δ/2} ≤ 1/n_k. (10)

Then from (9) and (10) it follows that

μ_{T_{n_k}}{y ∈ Y : |Δg(y, τ*)| > δ} ≤ n_k^{−1}, (11)

which is true for any k > k'_δ = max(k_δ, min{k : n_k > 2/δ}). Denote Y_δ = {y ∈ Y : |Δg(y, τ*)| ≤ δ}. From (11) it follows that μ_{T_{n_k}}(Y_δ) > 1 − n_k^{−1} for all k > k'_δ. Since Y_δ is a closed set, by the weak convergence of μ_T to the measure μ we obtain (see, for example, [3], p. 21)

lim sup_{k→∞} μ_{T_{n_k}}(Y_δ) ≤ μ(Y_δ), δ > 0.

Letting δ ↓ 0 and using the continuity of the measure μ, it follows that

μ{y ∈ Y : Δg(y, τ*) = 0} = 1. (12)

But relation (12) contradicts condition B1. □

Lemma 4. If assumptions A3, A4 and C1 hold, then conditions (ii) and (iii) of Lemma 2 are fulfilled.

Proof. Without loss of generality assume that ρ(x), x ≥ 0, is a strictly monotonically increasing function. From the formula for the mean of a nonnegative r.v. (see, for example, [4], p. 190) one has

J(b) − J(0) = ∫_0^∞ (P{ρ(ε(0)) < x} − P{ρ(ε(0) − b) < x}) dx
= ∫_0^∞ (P{−ρ^{−1}(x) < ε(0) < ρ^{−1}(x)} − P{−ρ^{−1}(x) < ε(0) − b < ρ^{−1}(x)}) dx,

where ρ^{−1}(x) is the inverse of the function ρ(x), x ≥ 0. Changing the variable x = ρ(z), z ≥ 0, in the last integral,

J(b) − J(0) = ∫_0^∞ (P{|ε(0)| < z} − P{|ε(0) − b| < z}) dρ(z) = ∫_0^∞ (F(z) − F(z − b) − F(z + b) + F(z)) dρ(z), (13)

where F(x) is the d.f. of the r.v. ε(0). The integral in the first equality of (13) coincides with the expression in A4, and thus condition (iii) of Lemma 2 is fulfilled. The symmetry of ρ and of the r.v. ε(0) implies the symmetry of J(b). Denote

Δ²_b F(z) = (F(z) − F(z − b)) − (F(z + b) − F(z)), b, z ≥ 0.

Then A4 can be rewritten in the form

∫_0^∞ Δ²_b F(z) dρ(z) > 0, b > 0.

From (13) it follows that Δ²_b F(z) = P{|ε(0)| < z} − P{|ε(0) − b| < z}. Consider, for b_2 > b_1, the difference

J(b_2) − J(b_1) = ∫_0^∞ (Δ²_{b_2} F(z) − Δ²_{b_1} F(z)) dρ(z).

It is easily seen that

Δ²_{b_2} F(z) − Δ²_{b_1} F(z) = P{|ε(0) − b_1| < z} − P{|ε(0) − b_2| < z} ≥ 0

by the unimodality of the r.v. ε(0). It means that J(b_2) − J(b_1) ≥ 0, and condition (ii) of Lemma 2 is a corollary of A3 and C1. □

Assume that the d.f. F(x) is continuously differentiable and the density p(x) is an even function strictly decreasing for x ≥ 0. Suppose that a continuous even function ρ(x) is such that ρ(0) = 0 and ρ is strictly monotonically increasing for x ≥ 0. Then one can use Lemma 10.2 of the book [3], pp. 217-218, and for any b ≠ 0

J(b) − J(0) = Eρ(ε(0) − b) − Eρ(ε(0)) > 0,

i.e., the integral in A4 is strictly positive.

Consider next sufficient conditions for the uniform convergence (3) of Lemma 1.

Lemma 5. Suppose that condition C2 is fulfilled and

δ_T(τ) → 0 a.s. as T → ∞, τ ∈ Θ^c. (14)

Then (3) holds.

Proof. From C2 it follows that for τ_1, τ_2 ∈ Θ^c

|Q_T(τ_1) − Q_T(τ_2)| ≤ (c/T) ∫_0^T |g(y(t), τ_1) − g(y(t), τ_2)| dt.
Similarly, from C2, for τ_1, τ_2 ∈ Θ^c one has

|δ_T(τ_1) − δ_T(τ_2)| ≤ (2c/T) ∫_0^T |g(y(t), τ_1) − g(y(t), τ_2)| dt.

Hence the family of functions {δ_T(τ) : ω ∈ Ω, T > 0} is equicontinuous on the set Θ^c. So for any δ > 0 there exists a finite number of points τ_1, ..., τ_k ∈ Θ^c such that

sup_{τ∈Θ^c} |δ_T(τ)| ≤ max_{1≤j≤k} |δ_T(τ_j)| + δ, ω ∈ Ω, T > 0.

From (14) it follows that max_{1≤j≤k} |δ_T(τ_j)| → 0 a.s. as T → ∞, and hence sup_{τ∈Θ^c} |δ_T(τ)| → 0 a.s. as T → ∞. □

4. Proof of Theorem 1

We shall prove that (14) holds under the assumptions of Theorem 1. Using the notation

ξ(t) = ρ(ε(t) − Δg(y(t), τ)) − Eρ(ε(t) − Δg(y(t), τ)), τ ∈ Θ^c,

one has δ_T(τ) = (1/T) ∫_0^T ξ(t) dt and

Eδ²_T(τ) = (1/T²) ∫_0^T ∫_0^T Eξ(t)ξ(s) dt ds
≤ (10/T²) ∫_0^T ∫_0^T [Eρ^{2+δ}(ε(t) − Δg(y(t), τ))]^{1/(2+δ)} [Eρ^{2+δ}(ε(s) − Δg(y(s), τ))]^{1/(2+δ)} α^{δ/(2+δ)}(|t − s|) dt ds. (15)

To obtain (15), the Davydov inequality has been used with p = q = 2 + δ, r = 1 + 2/δ (see [5], and also Lemma 1.6.2 of the book [6]). Since ρ(0) = 0, condition C2 yields

Eρ^{2+δ}(ε(t) − Δg(y(t), τ)) ≤ c^{2+δ} E|ε(0) − Δg(y(t), τ)|^{2+δ}.

By the obvious inequalities

|a + b|^κ ≤ 2^{κ−1}(|a|^κ + |b|^κ), |a + b|^{1/κ} ≤ |a|^{1/κ} + |b|^{1/κ}, κ = 2 + δ, (16)

[Eρ^{2+δ}(ε(t) − Δg(y(t), τ))]^{1/(2+δ)} ≤ 2^{(1+δ)/(2+δ)} c (μ_{2+δ}^{1/(2+δ)} + |Δg(y(t), τ)|),

i.e.

Eδ²_T(τ) ≤ 2^{δ/(2+δ)} c² (20/T²) ∫_0^T ∫_0^T α^{δ/(2+δ)}(|t − s|) [μ_{2+δ}^{1/(2+δ)} + |Δg(y(t), τ)|][μ_{2+δ}^{1/(2+δ)} + |Δg(y(s), τ)|] dt ds
≤ 2^{δ/(2+δ)} c² (20/T²) ∫_0^T ∫_0^T α^{δ/(2+δ)}(|t − s|) [μ_{2+δ}^{1/(2+δ)} + |Δg(y(t), τ)|]² dt ds.

Using the first inequality of (16) with κ = 2,

Eδ²_T(τ) ≤ 2^{δ/(2+δ)} c² (40/T²) ∫_0^T ∫_0^T α^{δ/(2+δ)}(|t − s|) [μ_{2+δ}^{2/(2+δ)} + |Δg(y(t), τ)|²] dt ds.

It remains to estimate two integrals, namely

I_1 = (1/T²) ∫_0^T ∫_0^T α^{δ/(2+δ)}(|t − s|) dt ds ≤ (1/T²) ∫_0^T ds ∫_{−T}^T α^{δ/(2+δ)}(|t|) dt = O(T^{−1})

as T → ∞, under assumption A2.
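As an aside not in the original paper, the O(T^{−1}) rate for I_1 can be checked numerically. The mixing bound α^{δ/(2+δ)}(r) is taken to be e^{−r} purely for illustration; any integrable nonincreasing bound gives the same rate.

```python
import numpy as np

def I1(T, a=lambda r: np.exp(-r), n=200_000):
    # (1/T^2) * int_0^T int_0^T a(|t - s|) dt ds, reduced to a single
    # integral over u = |t - s|:  (2/T^2) * int_0^T (T - u) a(u) du
    du = T / n
    u = (np.arange(n) + 0.5) * du          # midpoint rule nodes
    return 2.0 * np.sum((T - u) * a(u)) * du / T**2

# T * I1(T) stabilises near 2 * int_0^inf e^{-u} du = 2, so I1 = O(1/T)
for T in (10.0, 100.0, 1000.0):
    print(T, T * I1(T))
```

For a(r) = e^{−r} the double integral has the closed form 2(T − 1 + e^{−T}), so T·I1(T) tends to 2, matching the printed values.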
On the other hand,

I_2 = (1/T²) ∫_0^T ∫_0^T α^{δ/(2+δ)}(|t − s|) |Δg(y(t), τ)|² dt ds ≤ (2 ∫_0^∞ α^{δ/(2+δ)}(s) ds) (1/T²) ∫_0^T |Δg(y(t), τ)|² dt. (17)

Since g(y, τ) is a continuous function on the compact set Y × Θ^c, the right-hand side of inequality (17) is of order O(T^{−1}) as T → ∞. Thus Eδ²_T(τ) = O(T^{−1}) as T → ∞, and δ_T(τ) → 0 in probability as T → ∞.

Note that for the sequence T_n = n², n ≥ 1,

Σ_{n=1}^∞ Eδ²_{T_n}(τ) < ∞,

i.e. δ_{T_n}(τ) → 0 a.s. as n → ∞. If T ∈ [T_n, T_{n+1}], then

|δ_T(τ)| ≤ sup_{T_n≤T≤T_{n+1}} |δ_T(τ) − δ_{T_n}(τ)| + |δ_{T_n}(τ)|,

and the theorem will be proved if

sup_{T_n≤T≤T_{n+1}} |δ_T(τ) − δ_{T_n}(τ)| → 0 a.s. as n → ∞.

Obviously,

δ_T(τ) − δ_{T_n}(τ) = (1/T) ∫_0^T ξ(t) dt − (1/T_n) ∫_0^{T_n} ξ(t) dt = (1/T − 1/T_n) ∫_0^{T_n} ξ(t) dt + (1/T) ∫_{T_n}^T ξ(t) dt = I_3 + I_4.

Furthermore, for T ∈ [T_n, T_{n+1}],

|I_3| ≤ ((T_{n+1} − T_n)/T_n) |δ_{T_n}(τ)| → 0 a.s. as n → ∞;

|I_4| ≤ (1/T_n) ∫_{T_n}^{T_{n+1}} |ξ(t)| dt ≤ (1/T_n) ∫_{T_n}^{T_{n+1}} ρ(ε(t) − Δg(y(t), τ)) dt + (1/T_n) ∫_{T_n}^{T_{n+1}} Eρ(ε(t) − Δg(y(t), τ)) dt = I_5 + I_6.

Since under the Lipschitz condition C2

ρ(ε(t) − Δg(y(t), τ)) ≤ c(|ε(t)| + |Δg(y(t), τ)|),

we get

I_5 ≤ (c/T_n) ∫_{T_n}^{T_{n+1}} |ε(t)| dt + (c/T_n) ∫_{T_n}^{T_{n+1}} |Δg(y(t), τ)| dt = I_7 + I_8,

I_8 = c((T_{n+1}/T_n) · (1/T_{n+1}) ∫_0^{T_{n+1}} |Δg(y(t), τ)| dt − (1/T_n) ∫_0^{T_n} |Δg(y(t), τ)| dt).

From assumption B1 it follows that

(1/T_n) ∫_0^{T_n} |Δg(y(t), τ)| dt = ∫_Y |Δg(y, τ)| μ_{T_n}(dy) → ∫_Y |Δg(y, τ)| μ(dy) as n → ∞,

so I_8 → 0 as n → ∞. On the other hand,

I_7 = c((1/T_n) ∫_{T_n}^{T_{n+1}} (|ε(t)| − E|ε(t)|) dt + E|ε(0)| (T_{n+1} − T_n)/T_n) → 0 a.s. as n → ∞

by the Davydov inequality. Similarly it can be shown that I_6 → 0 as n → ∞. Consequently, (14) is fulfilled. The validity of Theorem 1 now follows from Lemmas 1-5 proved above. □

5. Proof of Theorem 2

As in the proof of Theorem 1, we need to prove that (14) holds. Then the result of Theorem 2 will follow from Lemmas 1-5. Consider the random process

G(ε(t), t) = ρ(ε(t) − Δg(y(t), τ)). (18)

From C2 and A5,

EG²(ε(t), t) ≤ c² E|ε(t) − Δg(y(t), τ)|² = c²(1 + |Δg(y(t), τ)|²) ≤ C < ∞ (19)

uniformly in t ≥ 0 and τ ∈ Θ^c. Therefore, in the Hilbert space L_2(R^1, ϕ(u)du), where ϕ(x) = (1/√(2π)) e^{−x²/2} is the standard Gaussian density, there exists an expansion (see, for example, [6])

G(u, t) = Σ_{m=0}^∞ (C_m(t)/m!) H_m(u), C_m(t) = ∫_{R^1} G(u, t) H_m(u) ϕ(u) du, m ≥ 0,

in the Chebyshev–Hermite polynomials

H_m(u) = (−1)^m e^{u²/2} (d^m/du^m) e^{−u²/2}, m ≥ 0. (20)

Note that C_0(t) = Eρ(ε(0) − Δg(y(t), τ)) = J(Δg(y(t), τ)). Thanks to the relations

EH_m(ε(t)) H_k(ε(s)) = δ_m^k m! B^m(t − s), (21)

where δ_m^k is the Kronecker delta, we have

Eξ(t)ξ(s) = cov(G(ε(t), t), G(ε(s), s)) = Σ_{m=1}^∞ (C_m(t) C_m(s)/m!) B^m(t − s).

Hence, taking into account that B(0) = 1, we obtain

Eδ²_T(τ) = Σ_{m=1}^∞ (1/m!) (1/T²) ∫_0^T ∫_0^T C_m(t) C_m(s) B^m(t − s) dt ds
≤ Σ_{m=1}^∞ (1/m!) (1/T²) ∫_0^T ∫_0^T C²_m(t) B^m(t − s) dt ds
≤ (1/T²) ∫_0^T ∫_0^T (Σ_{m=1}^∞ C²_m(t)/m!) B(t − s) dt ds.

Note that, thanks to (19),

Σ_{m=1}^∞ C²_m(t)/m! = EG²(ε(0), t) − (EG(ε(0), t))² = DG(ε(0), t) ≤ C < ∞,

and

Eδ²_T(τ) ≤ C (1/T²) ∫_0^T ∫_0^T B(t − s) dt ds.

On the other hand, as T → ∞,

(1/T²) ∫_0^T ∫_0^T B(t − s) dt ds = ∫_0^1 ∫_0^1 B(T(t − s)) dt ds = (1/T^α) ∫_0^1 ∫_0^1 (L(T|t − s|)/|t − s|^α) dt ds
∼ (∫_0^1 ∫_0^1 dt ds/|t − s|^α) L(T)/T^α = (2/((1 − α)(2 − α))) L(T)/T^α

by the properties of slowly varying functions (see, for example, [7], [8]). For the sequence T_n = n^{1/α + ν}, where ν > 0 is some number,

Σ_{n=1}^∞ L(T_n)/T_n^α < ∞,

and so δ_{T_n}(τ) → 0 a.s. as n → ∞. Taking into account the proof of Theorem 1, it remains to show that

(1/T_n) ∫_0^{T_n} (|ε(t)| − E|ε(t)|) dt → 0 a.s. as n → ∞. (22)

But the proof of (22) is similar to the previous reasoning for G(ε(t), t). □

Bibliography

1. F. Liese and I. Vajda, Consistency of M-estimates in general regression models, Journal of Multivariate Analysis, 50, no. 1 (1994), 93-114.
2. A. V. Ivanov, Asymptotic Theory of Nonlinear Regression, Kluwer Academic Publishers, Dordrecht, 1997.
3. I. A. Ibragimov and R. Z. Khasminskii, Asymptotic Theory of Estimation, Nauka, Moscow, 1979. (in Russian)
4. W. Feller, An Introduction to Probability Theory and Its Applications, Vol. 2, Mir, Moscow, 1967. (in Russian)
5. Yu. A. Davydov, Convergence of distributions generated by stationary random processes, Teor. Veroyatnost. i Primenen., 15, no. 3 (1970), 498-509. (in Russian)
6. A. V. Ivanov and N. N. Leonenko, Statistical Analysis of Random Fields, Vyscha Shkola, Kyiv, 1986. (in Russian)
7. E. Seneta, Regularly Varying Functions, Nauka, Moscow, 1985. (in Russian)
8. A. V. Ivanov and I. V. Orlovsky, Lp-estimates in nonlinear regression with long-range dependence, Theory of Stochastic Processes, Vol. 7 (23), no. 3-4 (2002), 38-49.

National Technical University of Ukraine "KPI", Peremogi Avenue 37, Kiev
E-mail address: ivanov@paligora.kiev.ua

National Technical University of Ukraine "KPI", Peremogi Avenue 37, Kiev
E-mail address: avalon@ln.ua
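As a numerical illustration of Definition 1 and Theorem 1 (not part of the original paper), the following sketch simulates a discretized version of model (1). The choices g(y, τ) = τ·y, ρ(x) = |x| (which satisfies C1 and C2 with c = 1), a periodic design, and weakly dependent Gaussian AR(1) noise are all assumptions made purely for illustration; the M-estimate is found by grid search over the discretized functional.

```python
import numpy as np

rng = np.random.default_rng(0)

# discretised model (1): X_i = g(y_i, theta) + eps_i with g(y, tau) = tau * y
theta_true = 2.0
n = 5000
y = np.resize([0.5, 1.0, 1.5], n)              # periodic design in a compact Y

# weakly dependent stationary Gaussian noise: AR(1) with phi = 0.5
phi, sd = 0.5, 0.5
eps = np.empty(n)
eps[0] = rng.normal(0.0, sd / np.sqrt(1.0 - phi**2))   # stationary start
for i in range(1, n):
    eps[i] = phi * eps[i - 1] + rng.normal(0.0, sd)

X = theta_true * y + eps

# M-estimate (Definition 1): minimise the discretised functional
#   M_T(tau) = (1/n) * sum_i rho(X_i - g(y_i, tau)),  with rho(x) = |x|
grid = np.linspace(0.0, 4.0, 801)
M = np.abs(X[None, :] - grid[:, None] * y[None, :]).mean(axis=1)
theta_hat = grid[np.argmin(M)]
print(theta_hat)                               # close to theta_true = 2.0
```

With ρ(x) = |x| this is the least moduli estimate mentioned in the Introduction; replacing ρ by another function satisfying C1-C2 changes only the `np.abs` line.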