Symmetry, Integrability and Geometry: Methods and Applications SIGMA 12 (2016), 008, 9 pages
Orthogonal vs. Non-Orthogonal Reducibility
of Matrix-Valued Measures*
Erik KOELINK † and Pablo ROMÁN †‡
† IMAPP, Radboud Universiteit, Heyendaalseweg 135, 6525 GL Nijmegen, The Netherlands
E-mail: e.koelink@math.ru.nl
URL: http://www.math.ru.nl/~koelink/
‡ CIEM, FaMAF, Universidad Nacional de Córdoba, Medina Allende s/n Ciudad Universitaria,
Córdoba, Argentina
E-mail: roman@famaf.unc.edu.ar
URL: http://www.famaf.unc.edu.ar/~roman/
Received September 23, 2015, in final form January 21, 2016; Published online January 23, 2016
http://dx.doi.org/10.3842/SIGMA.2016.008
Abstract. A matrix-valued measure Θ reduces to measures of smaller size if there exists
a constant invertible matrix M such that MΘM∗ is block diagonal. Equivalently, the real
vector space A of all matrices T such that TΘ(X) = Θ(X)T ∗ for any Borel set X is non-
trivial. If the subspace Ah of self-adjoint elements in the commutant algebra A of Θ is non-
trivial, then Θ is reducible via a unitary matrix. In this paper we prove that A is ∗-invariant
if and only if Ah = A , i.e., every reduction of Θ can be performed via a unitary matrix.
The motivation for this paper comes from families of matrix-valued polynomials related to
the group SU(2)× SU(2) and its quantum analogue. In both cases the commutant algebra
A = Ah ⊕ iAh is of dimension two and the matrix-valued measures reduce unitarily into
a 2× 2 block diagonal matrix. Here we show that there is no further non-unitary reduction.
Key words: matrix-valued measures; reducibility; matrix-valued orthogonal polynomials
2010 Mathematics Subject Classification: 33D45; 42C05
1 Introduction
The theory of matrix-valued orthogonal polynomials was initiated by Krein in 1949 and has
since been developed in different directions. From the perspective of the theory of orthogonal
polynomials, one wants to study families of truly matrix-valued orthogonal polynomials. Here
is where the issue of reducibility comes into play. Given a matrix-valued measure, one can
construct an equivalent measure by multiplying on the left by a constant invertible matrix and
on the right by its adjoint. If the equivalent measure is a block diagonal matrix, then all the
objects of interest (orthogonal polynomials, three-term recurrence relation, etc.) reduce to block
diagonal matrices so that we could restrict to the study of the blocks of smaller size. An extreme
situation occurs when the matrix-valued measure is equivalent to a diagonal matrix in which
case we are, essentially, dealing with scalar orthogonal polynomials.
Our interest in the study of the reducibility of matrix-valued measures was triggered by the
families of matrix-valued orthogonal polynomials introduced in [2, 9, 10, 11]. In [10] the study
of the spherical functions of the group SU(2)× SU(2) leads to a matrix-valued measure Θ and
a sequence of matrix-valued orthogonal polynomials with respect to Θ. From group theoretical
* This paper is a contribution to the Special Issue on Orthogonal Polynomials, Special Functions and Applications. The full collection is available at http://www.emis.de/journals/SIGMA/OPSFA2015.html
considerations, we were able to describe the symmetries of Θ and pinpoint two linearly inde-
pendent matrices in the commutant of Θ, one being the identity. The proof that these matrices
actually span the commutant required a careful computation. It then turns out that it is possi-
ble to conjugate Θ with a constant unitary matrix to obtain a 2× 2 block diagonal matrix. An
analogous situation holds true for a one-parameter extension of this example [9]. In [2] from the
study of the quantum analogue of SU(2)×SU(2) we constructed matrix-valued orthogonal poly-
nomials which are matrix analogues of a subfamily of Askey–Wilson polynomials. The weight
matrix can also be unitarily reduced to a 2× 2 block diagonal matrix in this case, again arising
from quantum group theoretic considerations.
In [12], the authors study non-unitary reducibility for matrix-valued measures and prove that
a matrix-valued measure Θ reduces into a block diagonal measure if the real vector space A of
all matrices T such that TΘ(X) = Θ(X)T ∗ for any Borel set X is not trivial, in contrast to the
reducibility via unitary matrices that occurs when the commutant algebra of Θ is not trivial.
The aim of this paper is to develop a criterion to determine whether unitary and non-unitary
reducibility of a weight matrix W coincide in terms of the ∗-invariance of A . Every reduction
of Θ can be performed via a unitary matrix if and only if A is ∗-invariant, in which case A = Ah
where Ah is the Hermitian part of the commutant of Θ, see Section 2. We apply our criterion
to our examples [2, 9, 10] and we conclude that there is no further reduction than the one via
a unitary matrix. We expect that a similar strategy can be applied to more general families of
matrix-valued orthogonal polynomials as, for instance, the families related to compact Gelfand
pairs given in [7]. We also discuss an example where A and Ah are not equal.
It is worth noting that unitary reducibility strongly depends on the normalization of the
matrix-valued measure. Indeed, if the matrix-valued measure is normalized by Θ(R) = I, then
the real vector space A is ∗-invariant and by our criterion, unitary and non-unitary reduction
coincide. This is discussed in detail in Remark 3.7.
2 Reducibility of matrix-valued measures
Let MN (C) be the algebra of N × N complex matrices. Let µ be a σ-finite positive measure
on the real line and let the weight function W : R→MN (C) be strictly positive definite almost
everywhere with respect to µ. Then
$$\Theta(X) = \int_X W(x)\, d\mu(x), \tag{2.1}$$
is a MN (C)-valued measure on R, i.e., a function from the σ-algebra B of Borel subsets of R
into the positive semi-definite matrices in MN (C) which is countably additive. Note that any
positive matrix measure can be obtained as in (2.1), see for instance [4, Theorem 1.12] and [5].
More precisely, if Θ̃ is a MN (C)-valued measure, and Θ̃tr denotes the scalar measure defined by
Θ̃tr(X) = Tr(Θ̃(X)), then the matrix elements Θ̃ij of Θ̃ are absolutely continuous with respect
to Θ̃tr so that, by the Radon–Nikodym theorem, there exists a positive definite function V such
that
dΘ̃i,j(x) = V (x)i,j dΘ̃tr(x).
Note that we do not require the normalization Θ(R) = I as in [5]. A detailed discussion about
the role of the normalization in the reducibility of the measure is given at the end of Section 3.
Going back to the measure (2.1), we have dΘtr(x) = Tr(W (x)) dµ(x) so that Θtr is absolutely
continuous with respect to µ. Note that Tr(W (x)) > 0 a.e. with respect to µ so that µ is
absolutely continuous with respect to Θtr. The uniqueness part of the Radon–Nikodym theorem implies
W (x) = V (x)Tr(W (x)), i.e., W is a positive scalar multiple of V .
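As a quick illustration of this relation, the density V can be recovered numerically from any strictly positive definite weight by dividing by its trace. The 2×2 weight below is an arbitrary choice made only for illustration; it is not taken from the paper.

```python
import numpy as np

def weight(x):
    # An arbitrary strictly positive definite 2x2 weight on (0, 1), used only for illustration.
    return np.array([[1.0 + x**2, x], [x, 1.0 + x]])

x = 0.3
W = weight(x)
V = W / np.trace(W)                      # Radon-Nikodym density of Theta w.r.t. Theta_tr
assert np.isclose(np.trace(V), 1.0)      # V has unit trace by construction
assert np.allclose(W, V * np.trace(W))   # W(x) = V(x) Tr(W(x)), as stated above
```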
We say that two MN (C)-valued measures Θ1 and Θ2 are equivalent if there exists a constant
nonsingular matrix M such that Θ1(X) = MΘ2(X)M∗ for all X ∈ B, where ∗ denotes the
adjoint. A MN (C)-valued measure Θ reduces to matrix-valued measures of smaller size
if there exist positive matrix-valued measures Θ1, . . . ,Θm such that Θ is equivalent to the block
diagonal matrix diag(Θ1(X),Θ2(X), . . . ,Θm(X)). If Θ is equivalent to a diagonal matrix, we say
that Θ reduces to scalar measures. In [12, Theorem 2.8], the authors prove that a matrix-valued
measure Θ reduces to matrix-valued measures of smaller size if and only if the real vector space
$$\mathscr{A} = \mathscr{A}(\Theta) = \big\{T \in M_N(\mathbb{C}) \mid T\Theta(X) = \Theta(X)T^* \ \ \forall\, X \in \mathcal{B}\big\},$$
contains, at least, one element which is not a multiple of the identity, i.e., $\mathbb{R}I \subsetneq \mathscr{A}$, where I
is the identity. Note that our definition of A differs slightly from the one considered in [12].
If W is a weight matrix for Θ, then we require that T ∈ A satisfies TW (x) = W (x)T ∗ almost
everywhere with respect to µ.
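For a concrete weight, this real vector space can be computed numerically by imposing TW(x) = W(x)T∗ at finitely many sample points and extracting the null space of the resulting real-linear system. The following sketch is ours, not code from the paper; the helper name and the diagonal test weight are invented for illustration, and it assumes the weight is sampled at enough points to determine the solution space.

```python
import numpy as np
from itertools import product

def real_commutant_space(weight, xs, N, adjoint=True):
    """Real basis of {T : T W(x) = W(x) T^* at every sampled x}.

    With adjoint=False the ordinary commutation relation T W(x) = W(x) T is imposed instead.
    """
    # Real parameters of T: coefficients of the matrices E_ab and i E_ab.
    basis = []
    for a, b in product(range(N), repeat=2):
        E = np.zeros((N, N), dtype=complex)
        E[a, b] = 1.0
        basis.extend([E, 1j * E])
    # One column per parameter: real and imaginary parts of T W(x) - W(x) T^* at all samples.
    cols = []
    for B in basis:
        pieces = []
        for x in xs:
            W = weight(x)
            R = B @ W - W @ (B.conj().T if adjoint else B)
            pieces.extend([R.real.ravel(), R.imag.ravel()])
        cols.append(np.concatenate(pieces))
    A = np.array(cols).T
    _, s, vh = np.linalg.svd(A)
    null_vectors = vh[np.sum(s > 1e-10):]          # kernel of the constraint matrix
    return [sum(c * B for c, B in zip(v, basis)) for v in null_vectors]

# Invented diagonal test weight on [0, 1]; its entries are not proportional, so it reduces
# to two scalar weights and the solution space consists of the real diagonal matrices.
test_weight = lambda x: np.diag([1.0 + x, 2.0 + x]).astype(complex)
space = real_commutant_space(test_weight, xs=[0.1, 0.5, 0.9], N=2)
print(len(space))   # 2, so R I is strictly contained in the solution space and the weight reduces
```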
If there exists a subspace V ⊂ CN such that Θ(X)V ⊂ V for all X ∈ B, since Θ(X) is
self-adjoint for all X ∈ B, it follows that Θ(X)V ⊥ ⊂ V ⊥ for all X ∈ B. If ιV : V → CN is
the embedding of V into CN , then $P_V = \iota_V\,\iota_V^* \in M_N(\mathbb{C})$ is the orthogonal projection onto V and satisfies
$$P_V\,\Theta(X) = \Theta(X)\,P_V, \qquad \text{for all } X \in \mathcal{B}.$$
Hence, the projections on invariant subspaces belong to the commutant algebra
$$\mathcal{A} = \mathcal{A}(\Theta) = \big\{T \in M_N(\mathbb{C}) \mid T\Theta(X) = \Theta(X)T \ \ \forall\, X \in \mathcal{B}\big\}.$$
Since Θ(X) is self-adjoint for all X ∈ B, A is a unital ∗-algebra over C. We denote by Ah the
real subspace of A consisting of all Hermitian matrices. Then it follows that A = Ah ⊕ iAh.
If $\mathbb{C}I \subsetneq \mathcal{A}$, then there exists T ∈ Ah such that $T \notin \mathbb{C}I$. The eigenspaces of T for different
eigenvalues are orthogonal invariant subspaces for Θ. Therefore Θ is equivalent via a unitary
matrix to matrix-valued measures of smaller size.
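Concretely, once a Hermitian element T of the commutant with at least two distinct eigenvalues is known, a unitary implementing the reduction can be read off from its eigenvectors. A minimal numerical sketch, with an invented 2×2 weight and an invented T that is assumed to lie in the commutant:

```python
import numpy as np

# A Hermitian, non-scalar matrix assumed (for illustration) to lie in the commutant of Theta.
T = np.array([[0.0, 1.0], [1.0, 0.0]])
_, U = np.linalg.eigh(T)                 # columns of U: orthonormal eigenvectors of T

def weight(x):
    # Invented weight of the form a(x) I + b(x) T, so it commutes with T by construction.
    return (2.0 + x) * np.eye(2) + x * T

x = 0.7
W = weight(x)
assert np.allclose(T @ W, W @ T)                   # T is in the commutant at this sample point
D = U.conj().T @ W @ U                             # conjugation by the unitary U
assert np.allclose(D, np.diag(np.diag(D)))         # W(x) becomes (block) diagonal in this basis
```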
Remark 2.1. Let S ∈ A and T ∈ $\mathscr{A}$. Then we observe that S∗ ∈ A and therefore STS∗Θ(X) = Θ(X)ST∗S∗ = Θ(X)(STS∗)∗ for all X ∈ B. Hence there is an action of the commutant algebra A on $\mathscr{A}$, given by
$$S \cdot T = STS^*.$$
Lemma 2.2. A does not contain non-zero skew-Hermitian elements.
Proof. Suppose that S ∈ A is skew-Hermitian. Then S is normal and thus unitarily diago-
nalizable, i.e., there exists a unitary matrix U and a diagonal matrix D = diag(λ1, . . . , λN ),
λi ∈ iR, such that S = UDU∗, see for instance [8, Chapter 4]. Since S ∈ A , we get
DU∗Θ(X)U = U∗Θ(X)UD∗ = −U∗Θ(X)UD, for all X ∈ B.
The (i, i)-th entry of the previous equation is given by
$$\lambda_i\,\big(U^*\Theta(X)U\big)_{i,i} = -\big(U^*\Theta(X)U\big)_{i,i}\,\lambda_i.$$
Take any X0 ∈ B such that Θ(X0) is strictly positive definite. Since U is unitary, U∗Θ(X0)U
is strictly positive definite and therefore (U∗Θ(X0)U)i,i > 0 for all i = 1, . . . , N , which implies
that λi = 0. □
Theorem 2.3. $\mathscr{A} \cap \mathscr{A}^* = \mathcal{A}_h$.
4 E. Koelink and P. Román
Proof. Observe that if T ∈ Ah, then TΘ(X) = Θ(X)T = Θ(X)T ∗ for all X ∈ B, and thus
Ah ⊂ A . Since T is self-adjoint, we also have T = T ∗ ∈ A ∗.
On the other hand, let T ∈ A ∩A ∗. Then T ∗ ∈ A ∗ ∩A ⊂ A , and since A is a real vector
space, (T − T ∗) ∈ A . The matrix (T − T ∗) is skew-Hermitian and therefore by Lemma 2.2 we
have (T − T ∗) = 0. Hence T is self-adjoint and T ∈ Ah. □
Corollary 2.4. If T ∈ A ∩A ∗, then T = T ∗.
Proof. The corollary follows directly from the proof of Theorem 2.3. □
Corollary 2.5. A is ∗-invariant if and only if A = Ah.
Proof. If A = Ah then A is trivially ∗-invariant. On the other hand, if we assume that A is
∗-invariant then the corollary follows directly from Theorem 2.3. □
Remark 2.6. Corollary 2.4 says that if A is ∗-invariant, then it is pointwise ∗-invariant, i.e.,
T = T ∗ for all T ∈ A .
Remark 2.7. Suppose that there exists X ∈ B such that Θ(X) ∈ $\mathbb{R}_{>0}I$; then every T ∈ $\mathscr{A}$ is self-adjoint and Corollary 2.5 holds true trivially. Since TΘ(X) = Θ(X)T∗ for all X ∈ B, if there is a point x0 ∈ supp(µ) such that
$$\lim_{\delta\to 0}\left\| \frac{1}{\mu\big((x_0-\delta, x_0+\delta)\big)}\,\Theta\big((x_0-\delta, x_0+\delta)\big) - I\right\| = 0,$$
then it follows that T = T ∗ and so Corollary 2.5 holds true. This is the case, for instance, for
the examples given in [3, 1], where W (x0) = I for some x0 ∈ supp(µ). For Examples 4.2 and 4.3,
in general, there is no x0 ∈ [−1, 1] for which W (x0) = I.
3 Reducibility of matrix-valued orthogonal polynomials
Let MN (C)[x] denote the set of MN (C)-valued polynomials in one variable x. Let µ be a finite
measure and W be a weight matrix as in Section 2. In this section we assume that all the
moments
$$M_n = \int x^n\, W(x)\, d\mu(x), \qquad n \in \mathbb{N},$$
exist and are finite. Therefore we have a matrix-valued inner product on MN (C)[x],
$$\langle P, Q\rangle = \int P(x)\, W(x)\, Q(x)^*\, d\mu(x), \qquad P, Q \in M_N(\mathbb{C})[x],$$
where ∗ denotes the adjoint. By general considerations, e.g., [5, 6], it follows that there exists
a unique sequence of monic matrix-valued orthogonal polynomials (Pn)n∈N, where $P_n(x) = \sum_{k=0}^{n} x^k P^n_k$ with $P^n_k \in M_N(\mathbb{C})$ and $P^n_n = I$, the N × N identity matrix. The polynomials Pn
satisfy the orthogonality relations
〈Pn, Pm〉 = δnmHn, Hn ∈MN (C),
where Hn > 0 is the corresponding squared norm. Any other family (Qn)n∈N of matrix-valued
orthogonal polynomials with respect to W is of the form Qn(x) = EnPn(x) for invertible matri-
ces En. The monic orthogonal polynomials satisfy a three-term recurrence relation of the form
xPn(x) = Pn+1(x) +BnPn(x) + CnPn−1(x), n ≥ 0, (3.1)
where P−1 = 0 and Bn, Cn are matrices depending on n and not on x.
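The monic polynomials and the squared norms Hn can be generated directly from the weight by a block Gram–Schmidt procedure with respect to the matrix-valued inner product. The following sketch is illustrative only; the quadrature rule and the 2×2 test weight are our own choices, not data from the paper.

```python
import numpy as np

def inner(P, Q, xs, ws, weight):
    """<P, Q> = int P(x) W(x) Q(x)^* dmu(x), approximated with a quadrature rule (xs, ws)."""
    return sum(w * P(x) @ weight(x) @ Q(x).conj().T for x, w in zip(xs, ws))

def monic_mvops(n_max, xs, ws, weight, N):
    """Monic matrix-valued orthogonal polynomials P_0, ..., P_{n_max} and their squared norms.

    P[n] is the list of N x N coefficients of P_n, with P[n][k] the coefficient of x^k
    and P[n][n] = I; H[n] is the squared norm <P_n, P_n>.
    """
    evalp = lambda coeffs: (lambda x: sum(c * x**k for k, c in enumerate(coeffs)))
    P, H = [], []
    for n in range(n_max + 1):
        coeffs = [np.zeros((N, N), dtype=complex) for _ in range(n)] + [np.eye(N, dtype=complex)]
        for m in range(n):
            # Block Gram-Schmidt: subtract the projection of x^n I onto P_m.
            c = inner(lambda x: x**n * np.eye(N), evalp(P[m]), xs, ws, weight) @ np.linalg.inv(H[m])
            for k in range(m + 1):
                coeffs[k] = coeffs[k] - c @ P[m][k]
        P.append(coeffs)
        H.append(inner(evalp(coeffs), evalp(coeffs), xs, ws, weight))
    return P, H

# Gauss-Legendre quadrature mapped to [0, 1] and an invented 2x2 weight, for illustration only.
nodes, gl_w = np.polynomial.legendre.leggauss(40)
xs, ws = 0.5 * (nodes + 1.0), 0.5 * gl_w
test_weight = lambda x: np.array([[1.0 + x * x, x], [x, 1.0]], dtype=complex)
P, H = monic_mvops(3, xs, ws, test_weight, N=2)
print(np.round(H[1], 6))   # H_1 is positive definite, as it must be for a squared norm
```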
Lemma 3.1. Let T ∈ A . Then we have
(1) The operator T : MN (C)[x] → MN (C)[x] given by P ↦ PT is symmetric with respect to Θ.
(2) TPn = PnT for all n ∈ N.
(3) THn = HnT∗ for all n ∈ N.
(4) TMn = MnT∗ for all n ∈ N.
(5) TBn = BnT and TCn = CnT for all n ∈ N.
Proof. Let P,Q ∈MN (C)[x]. Then
$$\langle PT, Q\rangle = \int P(x)\,T\,W(x)\,Q(x)^*\,d\mu(x) = \int P(x)\,W(x)\,T^*\,Q(x)^*\,d\mu(x) = \int P(x)\,W(x)\,\big(Q(x)T\big)^*\,d\mu(x) = \langle P, QT\rangle,$$
so that T is a symmetric operator. This proves (1). It follows directly from (1) that the
monic matrix-valued orthogonal polynomials are eigenfunctions for the operator T , see, e.g.,
[6, Proposition 2.10]. Thus, for every n ∈ N there exists a constant matrix Λn(T ) such that
PnT = Λn(T )Pn. Equating the leading coefficients of both sides of the last equation, and using
that Pn is monic, yields T = Λn(T ). This proves (2).
The proof of (3) follows directly from (2) and the fact that T ∈ A . We have
$$TH_n = \int P_n(x)\,T\,W(x)\,P_n(x)^*\,d\mu(x) = \int P_n(x)\,W(x)\,T^*\,P_n(x)^*\,d\mu(x) = \int P_n(x)\,W(x)\,\big(P_n(x)T\big)^*\,d\mu(x) = \int P_n(x)\,W(x)\,\big(TP_n(x)\big)^*\,d\mu(x) = H_nT^*.$$
The proof of (4) is analogous to that of (3), replacing the polynomials Pn by xn. Finally, we
multiply the three-term recurrence relation (3.1) by T on the left and on the right and we
subtract both equations. Using that TPn = PnT and TPn+1 = Pn+1T we get
TBnPn + TCnPn−1 = BnTPn + CnTPn−1.
The coefficient of xn is TBn = BnT and therefore we also have TCn = CnT . □
Corollary 2.5 provides a criterion to determine whether the set of Hermitian elements of the
commutant algebra A is equal to A . However, for explicit examples, it might be cumbersome to
verify the ∗-invariance of A from the expression of the weight. Our strategy now is to view A
as a subset of a, in general, larger set whose ∗-invariance can be established more easily and
that implies the ∗-invariance of A. Motivated by Lemma 3.1 we consider a sequence (Γn)n of strictly positive definite matrices such that if T ∈ A, then TΓn = ΓnT∗ for all n. Then for each n ∈ N and I ⊂ N we introduce the ∗-algebras
$$\mathcal{A}^\Gamma_n = \mathcal{A}(\Gamma_n) = \big\{T \in M_N(\mathbb{C}) \mid T\Gamma_n = \Gamma_n T\big\}, \qquad \mathcal{A}^\Gamma_I = \bigcap_{n\in I}\mathcal{A}^\Gamma_n,$$
and the real vector spaces
$$\mathscr{A}^\Gamma_n = \mathscr{A}(\Gamma_n) = \big\{T \in M_N(\mathbb{C}) \mid T\Gamma_n = \Gamma_n T^*\big\}, \qquad \mathscr{A}^\Gamma_I = \bigcap_{n\in I}\mathscr{A}^\Gamma_n. \tag{3.2}$$
It is clear from the definition that $\mathscr{A} \subset \mathscr{A}^\Gamma_n$ for all n ∈ N.
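Since the spaces in (3.2) are cut out by finitely many real-linear equations, the criterion of Theorem 3.3 below can be tested numerically: compute a real basis of $\mathscr{A}^\Gamma_I$ for a few of the matrices Γn and check whether the span is closed under taking adjoints. A sketch under these assumptions (the function names and the diagonal test matrices are ours, not the paper's):

```python
import numpy as np
from itertools import product

def script_space(gammas, N):
    """Real basis of {T : T Gamma = Gamma T^* for every Gamma in gammas}, cf. (3.2)."""
    basis = []
    for a, b in product(range(N), repeat=2):
        E = np.zeros((N, N), dtype=complex)
        E[a, b] = 1.0
        basis.extend([E, 1j * E])
    cols = []
    for B in basis:
        pieces = []
        for G in gammas:
            R = B @ G - G @ B.conj().T
            pieces.extend([R.real.ravel(), R.imag.ravel()])
        cols.append(np.concatenate(pieces))
    _, s, vh = np.linalg.svd(np.array(cols).T)
    return [sum(c * B for c, B in zip(v, basis)) for v in vh[np.sum(s > 1e-10):]]

def is_star_invariant(space, tol=1e-8):
    """Is the real span of `space` closed under T -> T^*?"""
    if not space:
        return True
    vec = lambda T: np.concatenate([T.real.ravel(), T.imag.ravel()])
    M = np.array([vec(T) for T in space]).T
    proj = M @ np.linalg.pinv(M)               # orthogonal projector onto the real span
    return all(np.allclose(proj @ vec(T.conj().T), vec(T.conj().T), atol=tol) for T in space)

# Invented positive definite diagonal matrices playing the role of Gamma_n and Gamma_{n+1}.
G0 = np.diag([1.0, 2.0]).astype(complex)
G1 = np.diag([1.0, 3.0]).astype(complex)
space = script_space([G0, G1], N=2)
print(len(space), is_star_invariant(space))    # 2 True: here the space is the real diagonal 2x2's
```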
Remark 3.2. For any subset I ⊂ N, the sequence (Γn)n induces a discrete MN (C)-valued measure supported on I,
$$d\Gamma_I(x) = \sum_{n\in I} \Gamma_n\,\delta_{n,x}.$$
Theorem 2.3 applied to the measure dΓI yields that $\mathscr{A}^\Gamma_I \cap \big(\mathscr{A}^\Gamma_I\big)^*$ is the subset of Hermitian matrices in $\mathcal{A}^\Gamma_I$.
Theorem 3.3. If $\mathscr{A}^\Gamma_I$ is ∗-invariant for some non-empty subset I ⊂ N, then A = Ah. In particular, the statement holds true if $\mathscr{A}^\Gamma_n$ is ∗-invariant for some n ∈ N.
Proof. If T ∈ A, then T ∈ $\mathscr{A}^\Gamma_n$ for all n ∈ I. Since $\mathscr{A}^\Gamma_I$ is ∗-invariant, then T∗ ∈ $\mathscr{A}^\Gamma_n$ for all n ∈ I. If we apply Corollary 2.4 to the measure in Remark 3.2, we obtain T = T∗. Therefore T ∈ Ah ⊂ A and thus A is ∗-invariant. Hence the theorem follows from Corollary 2.5. □
Remark 3.4. Two obvious candidates for sequences (Γn)n are given in Lemma 3.1, namely the
sequence of squared norms (Hn)n and the sequence of even moments (M2n)n.
Remark 3.5. Let Θ be a MN (C)-valued measure, not necessarily with finite moments and
take a positive definite matrix Γ such that TΓ = ΓT ∗ for all T ∈ A (Θ). Let S be a positive
definite matrix such that Γ = S2. We can now consider the MN (C)-valued measure S−1ΘS−1.
By a simple computation, we check that T ∈ A (Θ) if and only if S−1TS ∈ A (S−1ΘS−1). This
gives
$$\mathscr{A}\big(S^{-1}\Theta S^{-1}\big) = S^{-1}\mathscr{A}(\Theta)\,S.$$
Moreover, if T ∈ A (Θ), then TΓ = ΓT ∗ implies that S−1TS = ST ∗S−1 = (S−1TS)∗. Hence
S−1TS is self-adjoint for all T ∈ A (Θ). Then we have by Corollary 2.5
$$\mathcal{A}\big(S^{-1}\Theta S^{-1}\big)_h = \mathscr{A}\big(S^{-1}\Theta S^{-1}\big).$$
On the other hand, if U ∈ $\mathcal{A}(\Theta)_h$, then
$$S^{-1}US\;S^{-1}\Theta(X)S^{-1} = S^{-1}U\Theta(X)S^{-1} = S^{-1}\Theta(X)S^{-1}\,SUS^{-1} = S^{-1}\Theta(X)S^{-1}\big(S^{-1}US\big)^*,$$
for all X ∈ B. Therefore $S^{-1}\mathcal{A}(\Theta)_h S \subset \mathscr{A}\big(S^{-1}\Theta S^{-1}\big) = \mathcal{A}\big(S^{-1}\Theta S^{-1}\big)_h$. In general this inclusion can be strict, see Example 4.1.
Remark 3.6. Suppose that Θ is a MN (C)-valued measure with finite moments and that M2n ∈ $\mathbb{R}_{>0}I$, respectively Hn ∈ $\mathbb{R}_{>0}I$, for some n ∈ N. Then it follows from Lemma 3.1 and (3.2) that T = T∗ for all T ∈ $\mathscr{A}^{M_{2n}}$, respectively T ∈ $\mathscr{A}^{H_n}$. Then Theorem 3.3 says that A = Ah.
Remark 3.7. Let Θ be a MN (C)-valued measure such that the moment M0 is finite. Then there exists a positive definite matrix S such that M0 = S2. The measure Θ̃ = S−1ΘS−1 satisfies
$$\tilde\Theta(\mathbb{R}) = S^{-1}\Theta(\mathbb{R})S^{-1} = S^{-1}M_0S^{-1} = I.$$
Therefore by Remark 3.6 we have that A (S−1ΘS−1) = A(S−1ΘS−1)h. Observe that the nor-
malization Θ̃(R) = I is assumed in [5] so that in the setting of that paper the real subspace of
Hermitian elements in the commutant coincides with the real vector space A (Θ).
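The normalization in Remark 3.7 is easy to carry out numerically: take the principal square root S of M0 via an eigendecomposition and conjugate the measure by S−1. A short sketch with an invented weight and a crude quadrature; none of the names below come from the paper.

```python
import numpy as np

def sqrtm_pd(M):
    """Principal square root of a Hermitian positive definite matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(M)
    return vecs @ np.diag(np.sqrt(vals)) @ vecs.conj().T

# Invented 2x2 weight on [0, 1]; M0 is approximated by a crude midpoint rule.
test_weight = lambda x: np.array([[x**2 + x, x], [x, x + 0.5]])
xs = np.linspace(0.0005, 0.9995, 1000)
M0 = sum(test_weight(x) for x in xs) / len(xs)

S = sqrtm_pd(M0)
Sinv = np.linalg.inv(S)
assert np.allclose(Sinv @ M0 @ Sinv, np.eye(2))   # the normalized measure satisfies Theta~(R) = I
```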
4 Examples
In this section we discuss three examples of matrix-valued weights that exhibit different features.
The first example is a slight variation of [12, Example 2.6].
Example 4.1. Let µ be the Lebesgue measure on the interval [0, 1], and let W be the weight
$$W(x) = \begin{pmatrix} x^2 + \tfrac{2}{3}x & \tfrac{\sqrt{6}}{3}x \\[2pt] \tfrac{\sqrt{6}}{3}x & x \end{pmatrix} = \begin{pmatrix} 1 & \tfrac{\sqrt{6}}{3} \\ 0 & 1 \end{pmatrix}\begin{pmatrix} x^2 & 0 \\ 0 & x \end{pmatrix}\begin{pmatrix} 1 & 0 \\ \tfrac{\sqrt{6}}{3} & 1 \end{pmatrix}.$$
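The factorization can be checked directly; the following brief sketch verifies it numerically at a sample point, using the entries as reconstructed above.

```python
import numpy as np

s = np.sqrt(6.0) / 3.0
L = np.array([[1.0, s], [0.0, 1.0]])      # the constant triangular factor above

def W(x):
    return np.array([[x**2 + 2.0 * x / 3.0, s * x], [s * x, x]])

x = 0.37
assert np.allclose(W(x), L @ np.diag([x**2, x]) @ L.T)   # W(x) = L diag(x^2, x) L^*
```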
A simple computation gives that $\mathcal{A}$ and $\mathscr{A}$ are given by
$$\mathcal{A} = \mathbb{C}I, \qquad \mathscr{A} = \mathbb{R}\begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix} + \mathbb{R}\begin{pmatrix}1 & -\tfrac{\sqrt{6}}{3}\\ 0 & 0\end{pmatrix}.$$
Observe that $\mathscr{A}$ is clearly not ∗-invariant since $\begin{pmatrix}1 & -\tfrac{\sqrt{6}}{3}\\ 0 & 0\end{pmatrix}^{\!*} \notin \mathscr{A}$. Now we consider the sequence (M2n)n of even moments. The moment M0 is given by
$$M_0 = \begin{pmatrix}\tfrac{2}{3} & \tfrac{\sqrt{6}}{6}\\[2pt] \tfrac{\sqrt{6}}{6} & \tfrac{1}{2}\end{pmatrix}$$
and the algebras $\mathcal{A}^{M_0}$ and $\mathscr{A}^{M_0}$ are
$$\mathcal{A}^{M_0} = \mathbb{C}\begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix} + \mathbb{C}\begin{pmatrix}\tfrac{\sqrt{6}}{6} & 1\\ 1 & 0\end{pmatrix},$$
$$\mathscr{A}^{M_0} = \mathbb{R}\begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix} + \mathbb{R}\begin{pmatrix}\tfrac{\sqrt{6}}{6} & 1\\ 1 & 0\end{pmatrix} + \mathbb{R}\begin{pmatrix}1 & -\tfrac{\sqrt{6}}{3}\\ 0 & 0\end{pmatrix} + i\,\mathbb{R}\begin{pmatrix}1 & -\tfrac{2\sqrt{6}}{3}\\[2pt] \tfrac{\sqrt{6}}{2} & -1\end{pmatrix}. \tag{4.1}$$
This gives the inclusions $\mathbb{R}I \subsetneq \big(\mathcal{A}^{M_0}\big)_h \subsetneq \mathscr{A}^{M_0}$. It is also clear from (4.1) that $\mathscr{A}^{M_0} \cap \big(\mathscr{A}^{M_0}\big)^* = \big(\mathcal{A}^{M_0}\big)_h$.
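The dimensions behind these inclusions can be confirmed numerically: over ℝ, the solution space of TM0 = M0T∗ is four-dimensional, the commutant of M0 is four-dimensional (two-dimensional over ℂ), and its Hermitian part is two-dimensional. The generic solver below mirrors the earlier sketches and is not taken from the paper.

```python
import numpy as np
from itertools import product

M0 = np.array([[2.0 / 3.0, np.sqrt(6.0) / 6.0], [np.sqrt(6.0) / 6.0, 0.5]], dtype=complex)

def solve(relation):
    """Real basis of {T in M_2(C) : relation(T) == 0} for a real-linear relation."""
    basis = []
    for a, b in product(range(2), repeat=2):
        E = np.zeros((2, 2), dtype=complex)
        E[a, b] = 1.0
        basis.extend([E, 1j * E])
    A = np.array([np.concatenate([relation(B).real.ravel(), relation(B).imag.ravel()])
                  for B in basis]).T
    _, s, vh = np.linalg.svd(A)
    return [sum(c * B for c, B in zip(v, basis)) for v in vh[np.sum(s > 1e-10):]]

script_A = solve(lambda T: T @ M0 - M0 @ T.conj().T)                       # T M0 = M0 T^*
commutant = solve(lambda T: T @ M0 - M0 @ T)                               # T M0 = M0 T
herm_part = solve(lambda T: np.concatenate([(T @ M0 - M0 @ T).ravel(),
                                            (T - T.conj().T).ravel()]))    # Hermitian commutant
# Prints 4 4 2: the real dimensions consistent with R I strictly inside (A^{M0})_h strictly inside
# the real solution space of T M0 = M0 T^*, as in (4.1).
print(len(script_A), len(commutant), len(herm_part))
```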
Now we proceed as in Remark 3.7: we take the positive definite matrix S such that M0 = S2. Here
$$S = \frac{1}{15}\begin{pmatrix}\sqrt{6}+9 & 3\sqrt{6}-3\\[2pt] 3\sqrt{6}-3 & \tfrac{3}{2}\sqrt{6}+6\end{pmatrix}.$$
Then
$$S^{-1}W(x)S^{-1} = \frac{1}{25}\begin{pmatrix}(33+12\sqrt{6})x^2 + (28-8\sqrt{6})x & -(6+9\sqrt{6})x^2 + (4+6\sqrt{6})x\\[2pt] -(6+9\sqrt{6})x^2 + (4+6\sqrt{6})x & (42-12\sqrt{6})x^2 + (22+8\sqrt{6})x\end{pmatrix}.$$
We finally have that
$$\mathscr{A}\big(S^{-1}\Theta S^{-1}\big) = \mathbb{R}I + \mathbb{R}E + \mathbb{R}F, \qquad \mathscr{A}\big(S^{-1}M_0S^{-1}\big) = \mathbb{R}I + \mathbb{R}E + \mathbb{R}F + \mathbb{R}G,$$
where
$$E = S^{-1}\begin{pmatrix}\tfrac{\sqrt{6}}{6} & 1\\ 1 & 0\end{pmatrix}S = \begin{pmatrix}\tfrac{\sqrt{6}}{6} & 1\\ 1 & 0\end{pmatrix}, \qquad G = i\,S^{-1}\begin{pmatrix}1 & -\tfrac{2\sqrt{6}}{3}\\[2pt] \tfrac{\sqrt{6}}{2} & -1\end{pmatrix}S = \begin{pmatrix}0 & -i\\ i & 0\end{pmatrix},$$
$$F = S^{-1}\begin{pmatrix}1 & -\tfrac{\sqrt{6}}{3}\\ 0 & 0\end{pmatrix}S = \frac{1}{25}\begin{pmatrix}11+4\sqrt{6} & -(2+3\sqrt{6})\\[2pt] -(2+3\sqrt{6}) & 14-4\sqrt{6}\end{pmatrix}.$$
Then we have the following inclusions:
$$\mathbb{R}I = S^{-1}\mathcal{A}(\Theta)_hS \subsetneq \mathcal{A}\big(S^{-1}\Theta S^{-1}\big)_h = \mathscr{A}\big(S^{-1}\Theta S^{-1}\big),$$
and
$$S^{-1}\big(\mathcal{A}^{M_0}\big)_hS \subsetneq \mathcal{A}\big(S^{-1}M_0S^{-1}\big)_h = \mathscr{A}\big(S^{-1}M_0S^{-1}\big).$$
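These matrices and identities can be verified numerically from M0 and the generators given above; the sketch below recomputes S, E, F and G and checks the stated equalities. It uses the entries as reconstructed above and is not code from the paper.

```python
import numpy as np

r6 = np.sqrt(6.0)
M0 = np.array([[2.0 / 3.0, r6 / 6.0], [r6 / 6.0, 0.5]])
S = np.array([[r6 + 9.0, 3.0 * r6 - 3.0], [3.0 * r6 - 3.0, 1.5 * r6 + 6.0]]) / 15.0
assert np.allclose(S @ S, M0)                              # M0 = S^2

Si = np.linalg.inv(S)
E0 = np.array([[r6 / 6.0, 1.0], [1.0, 0.0]])
F0 = np.array([[1.0, -r6 / 3.0], [0.0, 0.0]])
G0 = 1j * np.array([[1.0, -2.0 * r6 / 3.0], [r6 / 2.0, -1.0]])

assert np.allclose(Si @ E0 @ S, E0)                        # E = S^{-1} E0 S = E0
assert np.allclose(Si @ G0 @ S, np.array([[0, -1j], [1j, 0]]))
F = Si @ F0 @ S
assert np.allclose(25.0 * F, np.array([[11 + 4 * r6, -(2 + 3 * r6)],
                                        [-(2 + 3 * r6), 14 - 4 * r6]]))
assert np.allclose(F, F.conj().T)                          # the conjugated generator is Hermitian
```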
Example 4.2. Our second example is a family of matrix-valued Gegenbauer polynomials in-
troduced in [9]. For $\ell \in \frac{1}{2}\mathbb{N}$ and ν > 0, let dµ(x) = (1 − x²)^{ν−1/2} dx, where dx is the Lebesgue measure on [−1, 1], and let W(ν) be the (2ℓ + 1) × (2ℓ + 1) matrix
$$\big(W^{(\nu)}(x)\big)_{m,n} = \sum_{t=\max(0,\,n+m-2\ell)}^{m} \alpha^{(\nu)}_t(m,n)\, C^{(\nu)}_{m+n-2t}(x),$$
$$\alpha^{(\nu)}_t(m,n) = (-1)^m\, \frac{n!\,m!\,(m+n-2t)!}{t!\,(2\nu)_{m+n-2t}\,(\nu)_{n+m-t}}\, \frac{(\nu)_{n-t}\,(\nu)_{m-t}}{(n-t)!\,(m-t)!}\, \frac{(n+m-2t+\nu)}{(n+m-t+\nu)} \times \frac{(2\ell-m)!\,(n-2\ell)_{m-t}\,(-2\ell-\nu)_t}{(2\ell+\nu)\,(2\ell)!},$$
where n, m ∈ {0, 1, . . . , 2ℓ} and n ≥ m. The matrix W(ν) is extended to a symmetric matrix, $\big(W^{(\nu)}(x)\big)_{m,n} = \big(W^{(\nu)}(x)\big)_{n,m}$. In [9, Proposition 2.6] we proved that $\mathcal{A}$ is generated by the identity matrix I and the involution J ∈ M2ℓ+1(C) defined by ej ↦ e2ℓ−j.
Now we will use Theorem 3.3 to prove that Ah = A . This says that there is no further
non-unitary reduction of the weight W . As a sequence of positive definite matrices we take the
squared norms of the monic polynomials, (Γn)n = (Hn)n, that were explicitly calculated in [9,
Theorem 3.7] and are given by the following diagonal matrices
$$\big(H^{(\nu)}_n\big)_{i,k} = \delta_{i,k}\, \sqrt{\pi}\, \frac{\Gamma(\nu+\tfrac12)}{\Gamma(\nu+1)}\, \frac{\nu\,(2\ell+\nu+n)}{\nu+n}\, \frac{n!\,(\ell+\tfrac12+\nu)_n\,(2\ell+\nu)_n}{(\nu+k)_n\,(2\ell+2\nu+n)_n\,(2\ell+\nu-k)_n} \times \frac{k!\,(\ell+\nu)_n\,(2\ell-k)!\,(n+\nu+1)_{2\ell}}{(2\ell+\nu+1)_n\,(2\ell)!\,(n+\nu+1)_k\,(n+\nu+1)_{2\ell-k}}.$$
For any n ∈ N we choose I = {n, n + 1}. If we take T ∈ $\mathscr{A}^\Gamma_I$, i.e., $TH^{(\nu)}_n = H^{(\nu)}_n T^*$ and $TH^{(\nu)}_{n+1} = H^{(\nu)}_{n+1}T^*$, it follows by comparing the (i, j)-th entries that
$$T_{i,j} = \frac{\big(H^{(\nu)}_n\big)_{i,i}}{\big(H^{(\nu)}_n\big)_{j,j}}\, \overline{T_{j,i}} = \frac{i!\,(2\ell-i)!\,(\nu+j)_n\,(2\ell+\nu-j)_n\,(n+\nu+1)_j\,(n+\nu+1)_{2\ell-j}}{j!\,(2\ell-j)!\,(\nu+i)_n\,(2\ell+\nu-i)_n\,(n+\nu+1)_i\,(n+\nu+1)_{2\ell-i}}\, \overline{T_{j,i}}.$$
It follows directly from this equation that Ti,i ∈ R and $T_{i,2\ell-i} = \overline{T_{2\ell-i,i}}$. Now we observe that
$$T_{i,j} = \frac{\big(H^{(\nu)}_n\big)_{i,i}}{\big(H^{(\nu)}_n\big)_{j,j}}\, \overline{T_{j,i}} = \frac{\big(H^{(\nu)}_n\big)_{i,i}}{\big(H^{(\nu)}_n\big)_{j,j}}\, \frac{\big(H^{(\nu)}_{n+1}\big)_{j,j}}{\big(H^{(\nu)}_{n+1}\big)_{i,i}}\, T_{i,j} = \frac{(\nu+i+n)(\nu+i+n+1)(2\ell+n+\nu-i)(2\ell+n+\nu+1-i)}{(\nu+j+n)(\nu+j+n+1)(2\ell+n+\nu-j)(2\ell+n+\nu+1-j)}\, T_{i,j}. \tag{4.2}$$
Equation (4.2) implies that Ti,j = 0 unless j = i or j = 2ℓ − i. Hence T is self-adjoint and thus $\mathscr{A}^\Gamma_I$ is ∗-invariant and from Theorem 3.3 we have A = Ah. We conclude that $\mathscr{A}$ is the real span of {I, J}, and so there is no further non-unitary reduction.
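The argument above can also be checked numerically for small ℓ: build the diagonal matrices H_n^{(ν)} and H_{n+1}^{(ν)} from the expression reconstructed above (any overall constant is irrelevant for this check), solve the real-linear system THn = HnT∗, THn+1 = Hn+1T∗, and verify that every solution is self-adjoint. The helper functions below are ours, not from the paper.

```python
import numpy as np
from math import factorial, gamma
from itertools import product

def poch(a, k):
    """Pochhammer symbol (a)_k = a (a+1) ... (a+k-1)."""
    p = 1.0
    for j in range(k):
        p *= a + j
    return p

def H(n, two_ell, nu):
    """Diagonal matrix H_n^{(nu)} for 2*ell = two_ell, following the expression reconstructed above."""
    ell = two_ell / 2.0
    d = [np.sqrt(np.pi) * gamma(nu + 0.5) / gamma(nu + 1.0)
         * nu * (two_ell + nu + n) / (nu + n)
         * factorial(n) * poch(ell + 0.5 + nu, n) * poch(two_ell + nu, n)
         / (poch(nu + k, n) * poch(two_ell + 2 * nu + n, n) * poch(two_ell + nu - k, n))
         * factorial(k) * poch(ell + nu, n) * factorial(two_ell - k) * poch(n + nu + 1, two_ell)
         / (poch(two_ell + nu + 1, n) * factorial(two_ell) * poch(n + nu + 1, k) * poch(n + nu + 1, two_ell - k))
         for k in range(two_ell + 1)]
    return np.diag(d).astype(complex)

def solutions(mats, N):
    """Real basis of {T : T M = M T^* for every M in mats}."""
    basis = []
    for a, b in product(range(N), repeat=2):
        E = np.zeros((N, N), dtype=complex)
        E[a, b] = 1.0
        basis.extend([E, 1j * E])
    A = np.array([np.concatenate([np.concatenate([(B @ M - M @ B.conj().T).real.ravel(),
                                                  (B @ M - M @ B.conj().T).imag.ravel()])
                                  for M in mats]) for B in basis]).T
    _, s, vh = np.linalg.svd(A)
    return [sum(c * B for c, B in zip(v, basis)) for v in vh[np.sum(s > 1e-10):]]

two_ell, nu, n = 4, 1.7, 3         # arbitrary small test case: a 5 x 5 weight
space = solutions([H(n, two_ell, nu), H(n + 1, two_ell, nu)], N=two_ell + 1)
print(all(np.allclose(T, T.conj().T) for T in space))   # True: every solution is self-adjoint
```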
Example 4.3. Our last example is a q-analogue of the previous example for ν = 1. These matrix-valued orthogonal polynomials are matrix analogues of a subfamily of Askey–Wilson polynomials and were obtained by studying matrix-valued spherical functions related to the quantum analogue of SU(2) × SU(2). For any $\ell \in \frac{1}{2}\mathbb{N}$ and 0 < q < 1, we have the measure dµ(x) = √(1 − x²) dx supported on [−1, 1] and a (2ℓ + 1) × (2ℓ + 1) weight matrix W which is given in [2, Theorem 4.8]. We omit the explicit expression here and give, instead, the explicit expression for the squared norms Hn of the monic orthogonal polynomials,
$$(H_n)_{i,j} = \delta_{i,j}\, \frac{q^{-2\ell^2-2n}\,\big(q^2, q^{4\ell+4}; q^2\big)_n^2\,\big(1-q^{4\ell+2}\big)^2}{\big(q^{2i+2}, q^{4\ell-2i+2}; q^2\big)_n^2\,\big(1-q^{2n+2i+2}\big)\big(1-q^{4\ell-2i+2n+2}\big)}.$$
The expression for Hn is obtained by combining Theorem 4.8 and Corollary 4.9 of [2]. The com-
mutant algebra is generated by {I, J} as in the previous example, see [2, Proposition 4.10].
We take (Γn)n = (Hn)n, I = {n, n + 1} for any n ∈ N and observe that THn = HnT∗ and THn+1 = Hn+1T∗ imply
$$T_{i,j} = \frac{(H_n)_{i,i}}{(H_n)_{j,j}}\, \overline{T_{j,i}} = \frac{(H_n)_{i,i}}{(H_n)_{j,j}}\, \frac{(H_{n+1})_{j,j}}{(H_{n+1})_{i,i}}\, T_{i,j} = \frac{\big(1-q^{2n+2i+2}\big)\big(1-q^{2n+2i+4}\big)\big(1-q^{4\ell+2n-2i+2}\big)\big(1-q^{4\ell+2n-2i+4}\big)}{\big(1-q^{2n+2j+2}\big)\big(1-q^{2n+2j+4}\big)\big(1-q^{4\ell+2n-2j+2}\big)\big(1-q^{4\ell+2n-2j+4}\big)}\, T_{i,j}.$$
As in the previous example, it follows that T is self-adjoint and therefore A = Ah. Hence there is no further non-unitary reduction for W.
Remark 4.4. Theorem 3.3 can be used to determine the irreducibility of a weight matrix. In
fact, with the commutant algebras already determined in [9] and [2], Theorem 3.3 implies that
the restrictions of the weight matrices of Examples 4.2 and 4.3 to the eigenspaces of the matrix J
are irreducible. For some explicit cases see [10, Section 8].
Acknowledgements
We thank I. Zurrián for pointing out to the first author an example similar to Example 4.1. The research of Pablo Román is supported by the Radboud Excellence Fellowship. We would like to thank the anonymous referees for their comments and remarks, which have helped us to improve the paper.
References
[1] Aldenhoven N., Koelink E., de los Ríos A.M., Matrix-valued little q-Jacobi polynomials, J. Approx. Theory
193 (2015), 164–183, arXiv:1308.2540.
[2] Aldenhoven N., Koelink E., Román P., Matrix-valued orthogonal polynomials for the quantum analogue of
(SU(2) × SU(2), diag), arXiv:1507.03426.
[3] Álvarez-Nodarse R., Durán A.J., de los Ríos A.M., Orthogonal matrix polynomials satisfying second order
difference equations, J. Approx. Theory 169 (2013), 40–55.
[4] Berg C., The matrix moment problem, in Coimbra Lecture Notes on Orthogonal Polynomials, Editors
A. Branquinho, A. Foulquié Moreno, Nova Sci. Publ., New York, 2008, 1–57.
[5] Damanik D., Pushnitski A., Simon B., The analytic theory of matrix orthogonal polynomials, Surv. Approx.
Theory 4 (2008), 1–85, arXiv:0711.2703.
[6] Grünbaum F.A., Tirao J., The algebra of differential operators associated to a weight matrix, Integral
Equations Operator Theory 58 (2007), 449–475.
[7] Heckman G., van Pruijssen M., Matrix-valued orthogonal polynomials for Gelfand pairs of rank one, Tohoku
Math. J., to appear, arXiv:1310.5134.
[8] Horn R.A., Johnson C.R., Matrix analysis, Cambridge University Press, Cambridge, 1985.
[9] Koelink E., de los Ríos A.M., Román P., Matrix-valued Gegenbauer polynomials, arXiv:1403.2938.
[10] Koelink E., van Pruijssen M., Román P., Matrix-valued orthogonal polynomials related to (SU(2) ×
SU(2), diag), Int. Math. Res. Not. 2012 (2012), 5673–5730, arXiv:1012.2719.
[11] Koelink E., van Pruijssen M., Román P., Matrix-valued orthogonal polynomials related to (SU(2) ×
SU(2), diag), II, Publ. Res. Inst. Math. Sci. 49 (2013), 271–312, arXiv:1203.0041.
[12] Tirao J., Zurrián I., Reducibility of matrix weights, arXiv:1501.04059.