Estimation in an implicit multivariate measurement error model with clustering in the regressor

An implicit linear multivariate model DZ ≈ 0 is considered, where the data matrix D is observed with errors, and Z is a parameter matrix. The error matrix is partitioned into two uncorrelated blocks, and the total covariance structure in each block is supposed to be known up to a corresponding scalar factor. Moreover, the row data are clustered into two groups. Based on the method of corrected objective function, the strongly consistent estimators of scalar factors and the kernel of the matrix D are constructed, as the numbers of rows in the clusters tend to infinity.


Bibliographic Details
Date: 2008
Main Author: Polekha, M.
Format: Article
Language: English
Published: Інститут математики НАН України (Institute of Mathematics, NAS of Ukraine), 2008
Online Access:https://nasplib.isofts.kiev.ua/handle/123456789/4542
Journal Title: Digital Library of Periodicals of National Academy of Sciences of Ukraine
Cite this: Estimation in an implicit multivariate measurement error model with clustering in the regressor / M. Polekha // Theory of Stochastic Processes. — 2008. — Vol. 14 (30), no. 1. — P. 117–125. — Bibliogr.: 9 titles. — In English.

Institution

Digital Library of Periodicals of National Academy of Sciences of Ukraine
Full text:

Theory of Stochastic Processes, Vol. 14 (30), no. 1, 2008, pp. 117–125. UDC 519.21

MARIA POLEKHA
ESTIMATION IN AN IMPLICIT MULTIVARIATE MEASUREMENT ERROR MODEL WITH CLUSTERING IN THE REGRESSOR

An implicit linear multivariate model DZ ≈ 0 is considered, where the data matrix D is observed with errors, and Z is a parameter matrix. The error matrix is partitioned into two uncorrelated blocks, and the total covariance structure in each block is supposed to be known up to a corresponding scalar factor. Moreover, the row data are clustered into two groups. Based on the method of corrected objective function, strongly consistent estimators of the scalar factors and of the kernel of the matrix D are constructed, as the numbers of rows in the clusters tend to infinity.

1. Introduction

We deal with an implicit multivariate model D̄Z = 0, D = D̄ + D̃, where Z is the parameter matrix, D is the data matrix, D̄ is the unobserved true matrix, and D̃ is the error matrix. The model is important for system identification. In fact, our model can be obtained from an explicit model AX ≈ B, where the matrices A and B are observed with errors, and X is a matrix parameter. The approximate equality AX ≈ B means that, for certain unobserved matrices Ā and B̄, the equality ĀX = B̄ holds, and A = Ā + Ã, B = B̄ + B̃, where Ã and B̃ are error matrices.

For this model, there are several results concerning the estimation of the parameter. In [1, 2], the covariance structure of [A B] was known up to a constant, and the consistency of the total least squares estimator was proven. In [4], a consistent estimator was constructed for the situation where the covariance structure of [A B] is unknown. In [3, 8, 9], the model AX ≈ B was studied when the covariance structure of A is known up to one scaling factor, while the covariance structure of B is known up to another scaling factor.
For identifiability reasons, it was assumed in [3, 8, 9] that the data consist of two independent clusters, and, in [4], the number of clusters depended on the size of X. The idea of using clusters belongs to A. Wald [6]. He used it for a linear scalar model and proposed a slope estimator based on the first empirical moments. In [4], similar moments were used, while, in [3, 8, 9], the second empirical moments were exploited. We modify the results of [3], but our model is more general: although the matrix D can be considered as a compound matrix [A B], here the input and output signals are not separated.

The paper is organized as follows. Section 2 describes the model and introduces the main assumptions. In Section 3, we use the method of corrected objective function to derive an estimator of the kernel of D̄ in the case of known scalar factors λ⁰₁ and λ⁰₂. A preliminary attempt to derive an objective function for λ⁰₁ and λ⁰₂, when they are unknown, is made in Section 4. In Section 5, we introduce a model with two clusters and the final objective function for the scalar factors, and state the consistency result. In Section 6, the consistent estimator for the kernel of D̄ is proposed, and Section 7 concludes. The proofs of the results are given in the Appendix.

2000 AMS Mathematics Subject Classification. Primary 65F20, 93E12, 62H30, 62J05, 62H12, 62F12, 65P99.
Key words and phrases. Linear measurement error model, corrected objective function, clusters, consistent estimator.

In the paper, we use standard notation: ‖A‖ is the Frobenius norm of a matrix A; the symbol tr denotes the trace of a matrix; Iₚ denotes the unit matrix of size p; and the symbol E denotes the expectation of a random matrix. All vectors in the paper are column vectors. For a symmetric matrix C, μ₁(C) ≤ … ≤ μₚ(C) are the p smallest eigenvalues of C.

2.
The model and assumptions

Consider the model of observations

(1) DZ ≈ 0,

where Z ∈ R^((n+p)×p) is the unknown matrix parameter, and the data matrix D ∈ R^(m×(n+p)) is observed, D = D̄ + D̃. Here, D̄ is the unknown nonrandom matrix, and D̃ is the matrix of random errors. For the true matrix D̄, we have D̄Z = 0. We want to estimate the matrix parameter Z with fixed n and p and increasing m. We assume that dim Ker D̄ = p, and the kernel does not change as m grows. Let Z = [z₁, …, zₚ] and rank Z = p. Then D̄[z₁, …, zₚ] = 0, D̄zᵢ = 0, i = 1, …, p, and span(z₁, …, zₚ) = Ker D̄ =: Vₚ. Thus, we have to construct an estimator V̂ₚ of the kernel Ker D̄.

Let D̃ᵀ = [d̃₁, …, d̃ₘ]. Concerning the error vectors d̃ᵢ, we make the following assumptions:
(i) E d̃ᵢ = 0, for all i.
(ii) There exists δ > 0 such that sup_{i≥1} E‖d̃ᵢ‖^(4+δ) < ∞.
(iii) The random vectors {d̃ᵢ, i ≥ 1} are independent.
(iv) There exists n₁, 1 ≤ n₁ ≤ n + p − 1, such that D̃ = [D̃₁ D̃₂], D̃₁ ∈ R^(m×n₁), and E D̃₁ᵀD̃₂ = 0. This means that the error matrix D̃ can be partitioned into two uncorrelated blocks.
(v) E D̃ₖᵀD̃ₖ = λ⁰ₖWₖ, where Wₖ are known positive definite matrices, and λ⁰ₖ are unknown positive scalars, k = 1, 2.
(vi) (1/m)‖D̄‖² ≤ const, m ≥ 1.

3. The estimator under known scalar factors

Suppose that λ⁰₁ and λ⁰₂ are known. We will use the corrected objective function method [5]. The least squares objective function is Qls(D̄; Z) := ‖D̄Z‖², Z ∈ R^((n+p)×p), or Qls(D̄; Z) = tr(Zᵀ Ψls(D̄) Z), where Ψls(D̄) := D̄ᵀD̄. By the corrected objective function method, we have to construct a function Qc(D; Z) such that E[Qc(D; Z)] = Qls(D̄; Z), for all Z. This is possible under known λ⁰ⱼ and Wⱼ, j = 1, 2, defined in conditions (iv)–(v). Let

Ψc(D) := DᵀD − [λ⁰₁W₁ 0; 0 λ⁰₂W₂].

Then Qc(D; Z) = tr(Zᵀ Ψc(D) Z). We minimize this objective function under the condition ZᵀZ = Iₚ. Consider the ordered eigenvalues of Ψc(D), μ₁ ≤ μ₂ ≤ … ≤ μₙ₊ₚ, and the corresponding orthonormal eigenbasis {φᵢ, i = 1, 2, …, n+p}. The function Qc(D; Z) is minimized at Z⁰ = [φ₁, …, φₚ]. Therefore, we obtain the estimator V̂ₚ = span(φ₁, …, φₚ). It is the linear span of the p eigenvectors which correspond to the p smallest eigenvalues. The next lemma is similar to the corresponding lemma in [3].

Lemma 1. Assume (i) to (vi). Then ‖(1/m)Ψc(D) − (1/m)Ψls(D̄)‖ → 0, as m → ∞, a.s.

4. The estimator under unknown scalar factors

Let λ⁰ⱼ, j = 1, 2, be unknown scalar factors. Then, for any λ₁, λ₂ ≥ 0, we define

Ψc(D; λ₁, λ₂) = Ψc(λ₁, λ₂) := DᵀD − [λ₁W₁ 0; 0 λ₂W₂],
Ψls(D̄; λ₁, λ₂) = Ψls(λ₁, λ₂) := D̄ᵀD̄ − [(λ₁ − λ⁰₁)W₁ 0; 0 (λ₂ − λ⁰₂)W₂].

By Lemma 1, ‖(1/m)Ψc(λ₁, λ₂) − (1/m)Ψls(λ₁, λ₂)‖ → 0, as m → ∞, a.s. We will study the properties of the approximating matrix (1/m)Ψls(λ₁, λ₂). Assume the following condition.
(vii) lim inf_{m→∞} μₚ₊₁(D̄ᵀD̄/m) > 0.

For large m, we have the approximate equality (1/m)Ψc(λ₁, λ₂) ≈ (1/m)Ψls(λ₁, λ₂), and, for the approximating matrix Ψls(λ⁰₁, λ⁰₂), we have μᵢ(Ψls(λ⁰₁, λ⁰₂)) = 0, for all i = 1, …, p, and μₚ₊₁(Ψls(λ⁰₁, λ⁰₂)) > 0. Note that Vₚ(λ⁰₁, λ⁰₂) = span(z₁, …, zₚ) = Ker(Ψls(λ⁰₁, λ⁰₂)). Therefore, in order to estimate λ⁰₁, λ⁰₂, it seems natural to use the objective function

Q(λ₁, λ₂) := Σᵢ₌₁ᵖ μᵢ²((1/m)Ψc(λ₁, λ₂)).

Unfortunately, its minimization does not yield a consistent estimator of λ⁰₁, λ⁰₂, since, for the approximating matrix-valued function Ψls(λ₁, λ₂)/m, there could exist other values λ₁* and λ₂*, separated from λ⁰₁ and λ⁰₂, such that μᵢ(Ψls(λ₁*, λ₂*)) = 0, for all 1 ≤ i ≤ p. Therefore, we cannot estimate the scalars λ⁰₁ and λ⁰₂ in this way.

5. Model with two clusters

Consider two copies of model (1):

(2) D(k)Z ≈ 0, k = 1, 2,

where Z ∈ R^((n+p)×p) is the unknown parameter matrix to be estimated, with rank Z = p, and D(k) ∈ R^(mₖ×(n+p)) are observed, D(k) = D̄(k) + D̃(k), D̄(k)Z = 0, k = 1, 2.
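The corrected-objective kernel estimator of Section 3 is easy to sketch numerically. Below is a minimal simulation, not from the paper: the dimensions, noise levels, and the simple diagonal choice of W₁ and W₂ are illustrative assumptions made only for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 5000, 3, 2      # D_bar is m x (n+p) with dim Ker(D_bar) = p
n1 = 2                    # columns in the first error block (demo choice)
n2 = n + p - n1

# True nonrandom matrix of rank n, so its right kernel has dimension p.
D_bar = rng.standard_normal((m, n)) @ rng.standard_normal((n, n + p))

# Independent error rows; variance lam1 in block 1, lam2 in block 2, so
# E(err.T @ err) = diag(lam1*W1, lam2*W2) with W1 = m*I_{n1}, W2 = m*I_{n2}.
lam1, lam2 = 0.2, 0.4
sd = np.sqrt(np.r_[np.full(n1, lam1), np.full(n2, lam2)])
D = D_bar + rng.standard_normal((m, n + p)) * sd

# Corrected matrix: subtract the known error covariance block-diagonal,
# so that E[Psi_c] = D_bar.T @ D_bar (the least-squares matrix).
W = np.diag(np.r_[np.full(n1, lam1 * m), np.full(n2, lam2 * m)])
Psi_c = D.T @ D - W

# Kernel estimate: eigenvectors of Psi_c for the p smallest eigenvalues.
evals, evecs = np.linalg.eigh(Psi_c)   # eigh returns ascending eigenvalues
Z_hat = evecs[:, :p]

# Compare with the true kernel (last p right singular vectors of D_bar);
# the singular values of Z_true.T @ Z_hat are cosines of canonical angles.
Z_true = np.linalg.svd(D_bar)[2][n:].T
cosines = np.linalg.svd(Z_true.T @ Z_hat, compute_uv=False)
print(cosines.min())   # close to 1: the subspaces nearly coincide
```

The same recipe with the compound matrix Dc is what Section 6 uses once the scalar factors have been estimated.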
Here, D̄(k) are the unknown nonrandom matrices, and D̃(k) are the error matrices, D̃ᵀ(k) = [d̃₁(k) … d̃ₘₖ(k)], k = 1, 2. We assume that dim Ker D̄(k) = p, k = 1, 2. We want to estimate Vₚ := Ker D̄(1) = Ker D̄(2) for increasing m₁ and m₂ under fixed n and p. For each k = 1, 2 we assume the following.
(a) E d̃ᵢ(k) = 0, i ≥ 1.
(b) There exists δ > 0 such that sup_{i≥1} E‖d̃ᵢ(k)‖^(4+δ) < ∞.
(c) {d̃ᵢ(k), i ≥ 1}, k = 1, 2, are two mutually independent sequences of independent random vectors.
(d) There exists n₁, 1 ≤ n₁ ≤ n + p − 1, such that D̃(k) = [D̃₁(k) D̃₂(k)], D̃₁(k) ∈ R^(mₖ×n₁), and E D̃₁ᵀ(k)D̃₂(k) = 0.
(e) E D̃ⱼᵀ(k)D̃ⱼ(k) = λ⁰ⱼWⱼ(k), j = 1, 2, where Wⱼ(k) are known positive definite matrices, and λ⁰ⱼ are unknown positive scalars.
(f) (1/mₖ)‖D̄(k)‖² ≤ const, mₖ ≥ 1.
(g) lim inf_{mₖ→∞} μₚ₊₁(D̄ᵀ(k)D̄(k)/mₖ) > 0.
(h) lim inf_{m₁,m₂→∞} σₚ₊₁(D̄ᵀ(1)D̄(1)/m₁ − D̄ᵀ(2)D̄(2)/m₂) > 0, where σₚ₊₁(C) is the (p+1)-th singular value of the symmetric matrix C. [We mention that, for the matrix C in brackets in (h), σ₁(C) = … = σₚ(C) = 0.]

Let Z = [Z₁; Z₂], Z₁ ∈ R^(n₁×p), Z₂ ∈ R^(n₂×p), n₁ + n₂ = n + p. Then D̃(k)Z = D̃₁(k)Z₁ + D̃₂(k)Z₂.
(i) lim inf_{m₁→∞} tr(Zⱼᵀ(Wⱼ(1)/m₁)Zⱼ) > 0, j = 1, 2, for a certain fixed Z = [z₁ … zₚ] with span(z₁, …, zₚ) = Vₚ.
(j) Wⱼ(1)/m₁ − Wⱼ(2)/m₂ → 0, as m₁, m₂ → ∞, j = 1, 2.

For λ := (λ₁, λ₂) ∈ [0, ∞) × [0, ∞), let

Ψc⁽ᵏ⁾(λ) := Dᵀ(k)D(k) − [λ₁W₁(k) 0; 0 λ₂W₂(k)],

and let μ₁ₖ(λ) ≤ μ₂ₖ(λ) ≤ … ≤ μₚₖ(λ) be the p smallest eigenvalues of the matrix Ψc⁽ᵏ⁾(λ), with corresponding orthonormal eigenvectors f₁ₖ(λ), f₂ₖ(λ), …, fₚₖ(λ), k = 1, 2. Note that if some of these eigenvalues are multiple, then the eigenvectors are not uniquely defined; in this case, we define them in such a way that they are Borel measurable vector functions of D(k) and λ. Let Vₚₖ(λ) = span(f₁ₖ(λ), f₂ₖ(λ), …, fₚₖ(λ)).
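The objective function that follows penalizes disagreement between the two cluster subspaces through the sines of their canonical angles. A small sketch of computing ‖sin Θ‖, using the equivalent characterization sin²θ = 1 − cos²θ (the cosines being the singular values of XᵀY for orthonormal bases, which matches the X⊥-based definition of canonical angles); the helper name is hypothetical:

```python
import numpy as np

def sin_theta_norm(X, Y):
    """Frobenius norm of sin(Theta) for the canonical angles between the
    column spans of X and Y.  Columns of X and Y are assumed orthonormal.
    The cosines of the canonical angles are the singular values of X.T @ Y;
    sines then follow from sin^2 = 1 - cos^2."""
    cosines = np.linalg.svd(X.T @ Y, compute_uv=False)
    sines = np.sqrt(np.clip(1.0 - cosines**2, 0.0, None))  # clip round-off
    return float(np.linalg.norm(sines))

# Identical subspaces: all canonical angles are zero.
Q = np.linalg.qr(np.random.default_rng(1).standard_normal((6, 2)))[0]
print(round(sin_theta_norm(Q, Q), 6))    # 0.0

# Orthogonal 2-dimensional subspaces: both angles equal pi/2.
X = np.eye(6)[:, :2]
Y = np.eye(6)[:, 2:4]
print(sin_theta_norm(X, Y))              # sqrt(2), about 1.4142
```

In the estimation procedure, X and Y would be the orthonormal eigenvector matrices spanning Vₚ₁(λ) and Vₚ₂(λ).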
Then an objective function for estimating λ is

Qc(λ) := Σₖ₌₁² Σᵢ₌₁ᵖ μᵢₖ²(λ) + c‖sin Θ(λ)‖²,

where c is a fixed positive constant, Θ(λ) is the diagonal matrix of canonical angles between the subspaces Vₚ₁(λ) and Vₚ₂(λ), and sin Θ(λ) is defined as the diagonal matrix with the sines of these angles as diagonal elements. Recall the definition of canonical angles.

Definition 1 [7]. Let R(X) and R(Y) be two subspaces of Rⁿ with the same dimension, spanned by the columns of matrices X and Y, respectively. Let the columns of the matrix X⊥ form an orthonormal basis for (R(X))⊥, the orthogonal complement of R(X). Then the nonzero singular values of the matrix X⊥ᵀY are the sines of the nonzero canonical angles between the subspaces R(X) and R(Y).

Let {εₜ, t ≥ 1} be a fixed positive sequence with εₜ → 0 as t → ∞. Then the estimator λ̂ = λ̂(t) is defined as any measurable solution to the inequality

(3) Qc(λ̂) ≤ inf_{λ₁,λ₂≥0} Qc(λ) + εₜ,

where t := min(m₁, m₂).

Theorem 1. Suppose that conditions (a) to (j) are satisfied. Then λ̂ → λ⁰ := (λ⁰₁, λ⁰₂), as m₁, m₂ → ∞, a.s.

6. Estimator of the kernel

Under the conditions of Theorem 1, one can obtain the estimator λ̂ for the unknown scalars λ⁰ and then construct two estimators Vₚ₁(λ̂) and Vₚ₂(λ̂) of the subspace Vₚ. But we want to construct a single subspace estimator using both clusters. It is reasonable to consider the compound data matrix Dc := [D(1); D(2)] and Wcⱼ := Wⱼ(1) + Wⱼ(2), j = 1, 2. Then Dc = D̄c + D̃c and

H := E D̃cᵀD̃c = [λ⁰₁Wc₁ 0; 0 λ⁰₂Wc₂].

Define the matrix

Ĥ := DcᵀDc − [λ̂₁Wc₁ 0; 0 λ̂₂Wc₂].

Let Vₚ(Ĥ) denote the subspace spanned by the p eigenvectors of Ĥ corresponding to the smallest eigenvalues; then V̂ₚ = Vₚ(Ĥ) = span(ẑ₁, ẑ₂, …, ẑₚ).

Definition 2. Let V₁ and V₂ be two subspaces of Rⁿ with the same dimension p, and let sin Θ be the diagonal matrix of the sines of the canonical angles between the subspaces V₁ and V₂.
The proximity of the subspace V₁ to the subspace V₂ is measured by the norm ‖sin Θ‖ = √(Σᵢ≥₁ sin² Θᵢᵢ).

Theorem 2. Under the conditions of Theorem 1, V̂ₚ → Vₚ, i.e., ‖sin Θ̂‖ → 0, a.s., as m₁, m₂ → ∞, where Θ̂ is the matrix of canonical angles between the subspaces V̂ₚ and Vₚ.

7. Conclusions

We considered a multivariate errors-in-variables model DZ ≈ 0 for the case where the total error covariance structure of the data matrix is known up to two scalar factors. The main assumption was that we observe two independent copies of the model. In practice, this means that the data can be partitioned into two separate groups, i.e., clusters. Based on the corrected objective function method and using the second empirical moments, we have constructed consistent estimators of the scalar factors and of the kernel of the data matrix.

Acknowledgement

The author thanks Prof. A. Kukush for posing the problem and for fruitful discussions.

Appendix

Proof of Theorem 1.

1. Behavior of Qc(λ⁰). We have

Qc(λ⁰) = Σₖ₌₁² Σᵢ₌₁ᵖ μᵢₖ²(λ⁰) + c‖sin Θ(λ⁰)‖².

By Lemma 1, αₖ := ‖(1/mₖ)Ψc⁽ᵏ⁾(λ⁰) − (1/mₖ)D̄ᵀ(k)D̄(k)‖ → 0, as mₖ → ∞, a.s., for k = 1, 2. We have μᵢ(D̄ᵀ(k)D̄(k)/mₖ) = 0, 1 ≤ i ≤ p. Let

δ_N := inf_{mₖ≥N} μₚ₊₁((1/mₖ)D̄ᵀ(k)D̄(k)).

Then, due to assumption (g), lim_{N→∞} δ_N > 0. Thus,

lim_{m₁,m₂→∞} Σₖ₌₁² Σᵢ₌₁ᵖ μᵢₖ²(λ⁰) = 0,

and ‖sin Θₖ(λ⁰)‖ ≤ αₖ/δ_N, by Wedin's theorem [7]. Then ‖sin Θₖ(λ⁰)‖ → 0, as mₖ → ∞, a.s., k = 1, 2. Here, Θₖ(λ⁰) is the diagonal matrix of canonical angles between the subspaces Vₚ(Ψc⁽ᵏ⁾(λ⁰)/mₖ) and Vₚ(D̄ᵀ(k)D̄(k)/mₖ), where Vₚ(·) denotes the span of the p eigenvectors corresponding to the p smallest eigenvalues. Now, Vₚ(D̄ᵀ(1)D̄(1)/m₁) = Vₚ(D̄ᵀ(2)D̄(2)/m₂) = span(z₁, …, zₚ). Hence, ‖sin Θ(λ⁰)‖ → 0, as m₁, m₂ → ∞, a.s. Thus, Qc(λ⁰) → 0 as m₁, m₂ → ∞, a.s. By inequality (3), we have

(4) Qc(λ̂) → 0 as m₁, m₂ → ∞, a.s.

2. λ̂ is eventually bounded. Now we want to construct a nonrandom L > 0 such that, eventually,

(5) ‖λ̂‖ ≤ L.
("Eventually" means: for all m₁ ≥ m₁₀(ω), m₂ ≥ m₂₀(ω), a.s.) From (4), for any ε₀ > 0, we have

(6) Σₖ₌₁² Σᵢ₌₁ᵖ μᵢₖ²(λ̂) ≤ ε₀, eventually.

By Lemma 1,

(7) ‖(1/m₁)Ψc⁽¹⁾(λ̂) − (1/m₁)Ψls⁽¹⁾(λ̂)‖ → 0, as m₁, m₂ → ∞, a.s.,

where

Ψls⁽¹⁾(λ) := D̄ᵀ(1)D̄(1) − [(λ₁ − λ⁰₁)W₁(1) 0; 0 (λ₂ − λ⁰₂)W₂(1)], λ := (λ₁, λ₂) ∈ [0, ∞) × [0, ∞).

Let Z satisfy condition (i). Then

tr(Zᵀ(Ψls⁽¹⁾(λ̂)/m₁)Z) = −tr((λ̂₁ − λ⁰₁)Z₁ᵀ(W₁(1)/m₁)Z₁ + (λ̂₂ − λ⁰₂)Z₂ᵀ(W₂(1)/m₁)Z₂).

Suppose that λ̂₁ − λ⁰₁ > L₀, where L₀ > 0. Then

tr(Zᵀ(Ψls⁽¹⁾(λ̂)/m₁)Z) ≤ −L₀ tr(Z₁ᵀ(W₁(1)/m₁)Z₁) + const · λ⁰₂.

But, due to (i) and (j),

lim inf_{m₁,m₂→∞} tr(Z₁ᵀ(W₁(1)/m₁)Z₁) > 0.

Thus, for L₀ large enough and all m₁ ≥ m₁₀, we have tr(Zᵀ(Ψls⁽¹⁾(λ̂)/m₁)Z) ≤ −1. So μₚ₊₁(Ψls⁽¹⁾(λ̂)/m₁) is negative and separated from 0. Then we have from (7) that μₚ₊₁(Ψc⁽¹⁾(λ̂)/m₁) is also negative and separated from 0, starting from some random number. But this contradicts (6). Therefore, for a large enough nonrandom L₀, λ̂₁ − λ⁰₁ ≤ L₀ holds eventually. A similar inequality can be shown for λ̂₂ − λ⁰₂, based on condition (i) for j = 2. Thus, (5) holds eventually.

3. Consistency of the estimator. Fix a set Ω₀ with Pr(Ω₀) = 1 such that, for all ω ∈ Ω₀ and for m₁ ≥ m₁₀(ω), m₂ ≥ m₂₀(ω), ‖λ̂(ω)‖ ≤ L holds. Fix ω ∈ Ω₀ and consider the bounded sequence

(8) {λ̂(ω; m₁, m₂) : m₁ ≥ m₁₀(ω), m₂ ≥ m₂₀(ω)}.

We will show that sequence (8) tends to λ⁰, as m₁, m₂ → ∞. Let {λ̂(ω; m₁(q), m₂(q)), q ≥ 1} be any convergent subsequence, i.e., for a certain λ∞ ∈ R²,

lim_{q→∞} λ̂(ω; m₁(q), m₂(q)) = λ∞.

We want to prove the convergence of (8) and show that λ∞ = λ⁰. Let

M⁽ᵏ⁾(mₖ) := diag(μ₁ₖ, μ₂ₖ, …, μₚₖ) and Y⁽ᵏ⁾(mₖ) := [f₁⁽ᵏ⁾(mₖ) … fₚ⁽ᵏ⁾(mₖ)]

be the matrix whose columns are the first p eigenvectors of the matrix Ψc⁽ᵏ⁾(λ̂)/mₖ. These columns form an orthonormal basis for Vₚₖ(λ̂).
According to (4), M⁽ᵏ⁾(mₖ) → 0, as mₖ → ∞. Moreover, sin Θ(λ̂) → 0, as q → ∞, where λ̂ = λ̂(m₁(q), m₂(q)). Therefore, we may assume that

‖Y⁽¹⁾(m₁) − Y⁽²⁾(m₂)‖ → 0, as m₁, m₂ → ∞,

where mₖ = mₖ(q), k = 1, 2. Then

(9) (1/mₖ)Ψc⁽ᵏ⁾(λ̂)Y⁽ᵏ⁾(mₖ) = Y⁽ᵏ⁾(mₖ)M⁽ᵏ⁾(mₖ).

We suppose that

(10) Y⁽ᵏ⁾(mₖ(q)) → Y∞⁽ᵏ⁾, q → ∞, k = 1, 2, with Y∞⁽ᵏ⁾ = [y∞₁⁽ᵏ⁾ … y∞ₚ⁽ᵏ⁾].

(Otherwise we pass to a further subsequence mₖ(q′).) The matrix Θ(λ̂) is the matrix of canonical angles between span(f₁⁽¹⁾(m₁), …, fₚ⁽¹⁾(m₁)) and span(f₁⁽²⁾(m₂), …, fₚ⁽²⁾(m₂)). Then, due to (10), we have sin Θ∞ = 0, where Θ∞ is the matrix of canonical angles between span(y∞₁⁽¹⁾, …, y∞ₚ⁽¹⁾) and span(y∞₁⁽²⁾, …, y∞ₚ⁽²⁾). Thus, these spans coincide. Since the Y⁽ᵏ⁾(mₖ(q)) are matrices with orthonormal columns, the matrices Y∞⁽¹⁾ and Y∞⁽²⁾ have the same property. Then, from the coincidence of the linear spans of their columns, we have Y∞⁽¹⁾ = Y∞⁽²⁾U, where U is a p × p orthogonal matrix corresponding to the choice of another basis in the linear span of the columns of Y∞⁽¹⁾.

By Lemma 1,

sup_{‖λ‖≤L} ‖(1/mₖ)Ψc⁽ᵏ⁾(λ) − (1/mₖ)Ψls⁽ᵏ⁾(λ)‖ → 0, as q → ∞.

Thus, from (9),

(1/mₖ(q))Ψls⁽ᵏ⁾(λ∞)Y∞⁽ᵏ⁾ → 0, as q → ∞, k = 1, 2.

Let Y∞ := Y∞⁽²⁾. Multiplying the relation for k = 1 by U, we obtain (1/m₁(q))Ψls⁽¹⁾(λ∞)Y∞ → 0, and this also holds for k = 2. Next,

((1/m₁(q))Ψls⁽¹⁾(λ∞) − (1/m₂(q))Ψls⁽²⁾(λ∞))Y∞ → 0, as q → ∞.

Due to (j), we have

((1/m₁(q))D̄ᵀ(1)D̄(1) − (1/m₂(q))D̄ᵀ(2)D̄(2))Y∞ → 0, as q → ∞.

On the other hand, assumption (h) then yields Y∞ = [y∞₁ … y∞ₚ] with span(y∞₁, …, y∞ₚ) = span(z₁, …, zₚ). Since these spans coincide, it follows that

(1/mₖ(q))Ψls⁽ᵏ⁾(λ∞)Z → 0, as q → ∞, k = 1, 2.

Therefore,

[(λ∞₁ − λ⁰₁)W₁(1)/m₁(q) 0; 0 (λ∞₂ − λ⁰₂)W₂(1)/m₁(q)]Z → 0,

(λ∞ⱼ − λ⁰ⱼ) tr(ZⱼᵀWⱼ(1)Zⱼ/m₁(q)) → 0, as q → ∞, j = 1, 2.

Conditions (i) and (j) imply that λ∞ⱼ = λ⁰ⱼ, j = 1, 2.
Hence, any convergent subsequence of the bounded sequence (8) converges to λ⁰; therefore, sequence (8) itself converges to λ⁰. The convergence holds for all ω ∈ Ω₀, Pr(Ω₀) = 1. Thus, λ̂ → λ⁰, as m₁, m₂ → ∞, a.s. □

Proof of Theorem 2. By Theorem 1,

‖(1/(m₁ + m₂))(Ĥ − D̄cᵀD̄c)‖ → 0, as m₁, m₂ → ∞, a.s.

Since D̄cZ = 0,

μ₁((1/(m₁ + m₂))D̄cᵀD̄c) = … = μₚ((1/(m₁ + m₂))D̄cᵀD̄c) = 0,

and, due to condition (g), μₚ₊₁(D̄cᵀD̄c/(m₁ + m₂)) > 0 for large m₁, m₂. Moreover, the kernel equals

Vₚ((1/(m₁ + m₂))D̄cᵀD̄c) = span(z₁, …, zₚ).

By Wedin's theorem [7], this implies that Θ̂ → 0, as m₁, m₂ → ∞, where Θ̂ is the diagonal matrix of the canonical angles between Vₚ(Ĥ) and span(z₁, …, zₚ). □

Bibliography

1. A. Kukush, I. Markovsky, and S. Van Huffel, Consistency of the structured total least squares estimator in a multivariate errors-in-variables model, Journal of Statistical Planning and Inference 133 (2005), no. 2, 315–358.
2. A. Kukush and S. Van Huffel, Consistency of element-wise weighted total least squares estimator in multivariate errors-in-variables model AX = B, Metrika 59 (2004), no. 1, 75–97.
3. A. Kukush, I. Markovsky, and S. Van Huffel, Estimation in a linear multivariate measurement error model with clustering in the regressor, Internal Report 05-170, ESAT-SISTA, K.U. Leuven (Leuven, Belgium), 2005.
4. A. Kukush and M. Polekha, Consistent estimator in multivariate errors-in-variables model under unknown error covariance structure, Ukrainian Mathematical Journal 59 (2007), no. 8, 1026–1033.
5. A. Kukush and S. Zwanzig, About the adaptive minimum contrast estimator in a model with nonlinear functional relations, Ukrainian Mathematical Journal 53 (2001), 1145-1452.
6. A. Wald, The fitting of straight lines if both variables are subject to error, Ann. Math. Stat. 11 (1940), 284–300.
7. G. Stewart and J. Sun, Matrix Perturbation Theory, Academic Press, Boston, 1990.
8. I. Markovsky, A. Kukush, and S. Van Huffel, On errors-in-variables estimation with unknown noise variance ratio, 14th IFAC Symposium on System Identification, Newcastle, Australia, 2006.
9. I. Markovsky, A. Kukush, and S. Van Huffel, Estimation in a linear multivariate measurement error model with a change point in data, Computational Statistics and Data Analysis 52 (2007), no. 2, 1167–1182.

E-mail: poleha@bigmir.net
ISSN: 0321-3900