General First Order Matrix Difference System — Existence and Uniqueness via New Lattice Based Cryptographic Construction
Published in: Электронное моделирование. — 2007. — Vol. 29, No. 3. — P. 27—40 (article, in English).
Published by: Pukhov Institute for Modelling in Energy Engineering, NAS of Ukraine, 2007.
Online access: https://nasplib.isofts.kiev.ua/handle/123456789/101768
Repository: Digital Library of Periodicals of National Academy of Sciences of Ukraine.
Cite as: General First Order Matrix Difference System — Existence and Uniqueness via New Lattice Based Cryptographic Construction / B.R. Sastry, K.N. Murty, V.V.S.S.S. Balaram // Электронное моделирование. — 2007. — Vol. 29, No. 3. — P. 27—40. — Bibliogr.: 8 titles. — In English.
UDC 621
B. R. Sastry, K.N. Murty, V.V.S.S.S. Balaram
Aurora’s Engineering College
(India, Bhongir, Nalgonda Dist.—508116 (AP), E-mail: nkanuri@hotmail.com)
General First Order Matrix Difference
System — Existence and Uniqueness via
New Lattice Based Cryptographic Construction
(Recommended by Prof. E. Dshalalow)
This paper is concerned with the existence and uniqueness of solutions to two-point boundary value problems associated with general first order matrix difference systems. A modified Gram—Schmidt process and a modified QR-algorithm are presented to find the best least squares solution of the system of equations. An efficient closest point search algorithm is presented to further improve the best least squares solution. Modified encoding and decoding algorithms are presented in the process of finding the shortest lattice vector.
K e y w o r d s: two-point boundary value problem, difference system, existence and uniqueness.
1. Introduction. Difference equations play a crucial role in understanding discrete phenomena of nature. The theory of difference equations is considerably richer than the corresponding theory of differential equations. For example, a simple difference equation obtained from a first order differential equation may exhibit so-called «ghost» solutions or chaotic orbits, phenomena that can arise for differential equations only at higher order. Consequently, the theory of difference equations is interesting in itself and assumes great importance in solving real world problems. The theory of difference equations has already been applied in various fields such as cryptology, numerical analysis, finite element techniques, computer science and controllability. All these reasons inspired us to consider the general first order matrix difference system of the form
T(n + 1) = A T(n) B + F(n),     (1.1)

where A, B and F are all square matrices of order s whose elements are real, defined on N⁺_{n₀} = {n₀, n₀ + 1, n₀ + 2, ..., n₀ + k, ...}, where k ∈ N⁺ and n₀ ∈ N, N being the set of integers.

ISSN 0204–3572. Электрон. моделирование. 2007. Т. 29. № 3
In communication engineering, cryptology has played a crucial role in enhancing security. In the year 1992, the National Bureau of Standards (NBS), now the National Institute of Standards and Technology (NIST), initiated a program to protect computer and communication data. The intricacies of relating key variations or key variables are of special importance. In communication theory, lattices are used for modulation and quantization [1]. A comprehensive survey of closest point search methods for lattices without a regular structure is presented in a recent paper by E. Agrell et al. [2]. They also presented the existing search strategies in a unified framework and highlighted the differences between them. The closest point problem is the following: given a lattice Λ and an input point x ∈ Rⁿ, find a vector ŷ ∈ Λ such that

‖x − ŷ‖ ≤ ‖x − y‖ for all y ∈ Λ,

where ‖·‖ denotes the Euclidean norm. In channel coding the closest point problem is often referred to as decoding, and this is the terminology used by many authors in recent years. If a lattice is used as a code for a Gaussian channel, maximum likelihood decoding in the demodulator is a closest point search. Analogously, if a lattice is used as a codebook for vector quantization and the mean square error criterion is used, then the encoding of each input vector is also a closest point search. The method for solving the closest point problem depends on the structure of the lattice; intuitively, the more structure a lattice has, the faster the closest point can be found. A common approach to the general closest point problem is to identify a certain critical region in Rⁿ within which the optimal lattice point lies, and then investigate all lattice points in the region, reducing its size dynamically. For a comprehensive review of closest point search, we refer to the excellent survey by E. Agrell et al. [2].
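The closest point problem just stated can be illustrated by a brute-force scan over a bounded box of integer coefficient vectors. This is a toy sketch only: the basis, the input point and the coefficient range below are illustrative assumptions, and practical closest point algorithms avoid exhaustive enumeration.

```python
import itertools
import numpy as np

def closest_point_bruteforce(basis, x, coeff_range=range(-5, 6)):
    """Scan all lattice points u @ basis with u in a small box and keep
    the one nearest to x in the Euclidean norm (illustration only)."""
    best, bestdist = None, np.inf
    for u in itertools.product(coeff_range, repeat=basis.shape[0]):
        y = np.asarray(u) @ basis        # lattice point y = u B
        d = np.linalg.norm(x - y)
        if d < bestdist:
            best, bestdist = y, d
    return best

B = np.array([[1.0, 0.0], [0.5, 0.8]])   # rows generate the lattice
x = np.array([2.3, 1.1])
y_hat = closest_point_bruteforce(B, x)   # nearest lattice point to x
```

Here the nearest lattice point is 2·(1, 0) + 1·(0.5, 0.8) = (2.5, 0.8); the cost of the scan grows exponentially in the dimension, which is what the search algorithms discussed below are designed to avoid.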
This paper is organized as follows. In section 2, we present the general solution of the homogeneous matrix difference system

T(n + 1) = A T(n) B     (1.2)

in terms of two fundamental matrix solutions of T(n + 1) = A T(n) and T(n + 1) = B*T(n), and then develop a particular solution of (1.1) by using the variation of parameters formula. Section 3 presents a criterion for the existence and uniqueness of solutions to the two-point boundary value problem

T(n + 1) = A T(n) B + F(n),
M T(n₀) + N T(n_f) = F̂,     (1.3)

where n₀, n_f ∈ N⁺_{n₀}, n₀ < n_f, and M, N and F̂ are given constant square matrices of order s. In section 4, we present a generalized inverse concept and the method of residuals to find the best least squares solution of the system of equations Ax = b. The modified Gram—Schmidt process is presented in section 5; section 5 also presents the MINLS algorithm and the best least squares solution using the modified QR algorithm. The closest point search algorithm is presented in the last section.
2. Solution of the non-homogeneous system. In this section, we present the general solution of the homogeneous system (1.2) in terms of two fundamental matrix solutions, and then develop the variation of parameters formula for the non-homogeneous difference system (1.1). We shall denote by Φ(n, n₀) and Ψ(n, n₀) the fundamental matrix solutions of T(n + 1) = A T(n) and T(n + 1) = B*T(n), respectively. With this notation, the proof of the following lemma is immediate.

L e m m a 2.1. Φ(n, n₀) is a fundamental matrix solution of T(n + 1) = A T(n) if, and only if, Φ*(n, n₀) is a fundamental matrix solution of T(n + 1) = T(n) A* (* refers to the transpose of the complex conjugate matrix).
Theorem 2.1. Let Φ(n, n₀) and Ψ(n, n₀) be two fundamental matrix solutions of T(n + 1) = A T(n) and T(n + 1) = B*T(n), respectively. Then any solution T(n) of (1.2) is of the form T(n) = Φ(n, n₀) C Ψ*(n, n₀), where C is a constant square matrix of order s.

P r o o f. We seek a solution T(n) of (1.2) in the form T(n) = Φ(n, n₀) K(n), where K(n) is a square matrix of order s whose elements are defined on N⁺_{n₀}. Then

Φ(n + 1, n₀) K(n + 1) = A Φ(n, n₀) K(n) B ⇒ A Φ(n, n₀) K(n + 1) = A Φ(n, n₀) K(n) B ⇒ K(n + 1) = K(n) B ⇒ K*(n + 1) = B* K*(n).

Since Ψ(n, n₀) is a fundamental matrix solution of T(n + 1) = B*T(n), it follows that there exists an (s × s) constant matrix C* such that K*(n) = Ψ(n, n₀) C*, and hence T(n) = Φ(n, n₀) K(n) = Φ(n, n₀) C Ψ*(n, n₀).
Theorem 2.2. Any solution T(n) of (1.1) is of the form T(n) = Φ(n, n₀) C Ψ*(n, n₀) + T̃(n), where T̃(n) is a particular solution of (1.1).

P r o o f. It can easily be verified that T(n) defined by T(n) = Φ(n, n₀) C Ψ*(n, n₀) + T̃(n) is a solution of (1.1). Now, to prove that every solution of (1.1) is of this form, let T(n) be any solution of (1.1) and T̃(n) be a particular solution of (1.1). Then T(n) − T̃(n) is a solution of (1.2). Hence, by Theorem 2.1,

T(n) − T̃(n) = Φ(n, n₀) C Ψ*(n, n₀).

Therefore T(n) = T̃(n) + Φ(n, n₀) C Ψ*(n, n₀).
Theorem 2.3. A particular solution T̃(n) of (1.1) is given by

T̃(n) = Σ_{j=n₀}^{n−1} Φ(n, j + 1) F(j) Ψ*(n, j + 1).

P r o o f. Any solution of the homogeneous system (1.2) is of the form T(n) = Φ(n, n₀) C Ψ*(n, n₀). Such a solution cannot be a solution of (1.1) unless F(n) ≡ 0. Let C be a function of n defined on N⁺_{n₀} and seek a particular solution of (1.1) in the form T̃(n) = Φ(n, n₀) C(n) Ψ*(n, n₀). Since T̃(n) must satisfy (1.1), we have

Φ(n + 1, n₀) C(n + 1) Ψ*(n + 1, n₀) = A Φ(n, n₀) C(n) Ψ*(n, n₀) B + F(n)
⇒ A Φ(n, n₀) C(n + 1) Ψ*(n, n₀) B = A Φ(n, n₀) C(n) Ψ*(n, n₀) B + F(n)
⇒ C(n + 1) = C(n) + Φ(n₀, n + 1) F(n) Ψ*(n₀, n + 1)
⇒ C(n) = C(n₀) + Σ_{j=n₀}^{n−1} Φ(n₀, j + 1) F(j) Ψ*(n₀, j + 1).

Thus

T(n) = Φ(n, n₀) C(n₀) Ψ*(n, n₀) + Σ_{j=n₀}^{n−1} Φ(n, j + 1) F(j) Ψ*(n, j + 1) = Φ(n, n₀) C(n₀) Ψ*(n, n₀) + T̃(n).
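The variation of parameters formula of Theorem 2.3 can be checked numerically. The sketch below (with randomly generated A, B and F(j), an assumption for illustration) compares direct iteration of (1.1) starting from T(n₀) = 0 with the particular solution formula, using Φ(n, m) = A^(n−m) and Ψ(n, m) = (B*)^(n−m).

```python
import numpy as np

rng = np.random.default_rng(0)
s, n0, N = 3, 0, 6                     # order s, window [n0, N]
A = rng.normal(size=(s, s))
B = rng.normal(size=(s, s))
F = [rng.normal(size=(s, s)) for _ in range(N)]

Phi = lambda n, m: np.linalg.matrix_power(A, n - m)    # Phi(n, m) = A^(n-m)
Psi = lambda n, m: np.linalg.matrix_power(B.T, n - m)  # Psi(n, m) = (B*)^(n-m)

# Particular solution of Theorem 2.3:
#   T~(n) = sum_{j=n0}^{n-1} Phi(n, j+1) F(j) Psi*(n, j+1)
def particular(n):
    return sum(Phi(n, j + 1) @ F[j] @ Psi(n, j + 1).T for j in range(n0, n))

# Direct iteration of (1.1), T(n+1) = A T(n) B + F(n), from T(n0) = 0
T = np.zeros((s, s))
for j in range(n0, N):
    T = A @ T @ B + F[j]

assert np.allclose(T, particular(N))   # the two representations agree
```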
3. Two-point boundary value problem. In this section, we consider the two-point boundary value problem associated with the non-homogeneous general matrix difference system (1.1), satisfying the boundary condition

M T(n₀) + N T(n_f) = F̂,     (3.1)

where n₀, n_f ∈ N⁺_{n₀}, n₀ < n_f. Substituting the general form of the solution given in section 2 into the boundary condition (3.1), we get

M Φ(n₀, n₀) C_{n₀} Ψ*(n₀, n₀) + N Φ(n_f, n₀) C_{n₀} Ψ*(n_f, n₀) + N Σ_{j=n₀}^{n_f−1} Φ(n_f, j + 1) F(j) Ψ*(n_f, j + 1) = F̂.
The above equation is equivalent to

M₁ C_{n₀} N₁ + M₂ C_{n₀} N₂ = Y,     (3.2)

where

M₁ = M Φ(n₀, n₀),  N₁ = Ψ*(n₀, n₀),  M₂ = N Φ(n_f, n₀),  N₂ = Ψ*(n_f, n₀),

and Y = F̂ − N Σ_{j=n₀}^{n_f−1} Φ(n_f, j + 1) F(j) Ψ*(n_f, j + 1) are all known square matrices of order s. Note that N₁ and N₂ are in fact fundamental matrix solutions and hence invertible in the usual sense. For the analysis of C_{n₀}, we employ the following notion of the Kronecker product of matrices. Note that Φ(n₀, n₀) = Ψ*(n₀, n₀) = I.
If A, B ∈ R^{s×s} are two square matrices of order s, then their Kronecker product (or tensor product), denoted A ⊗ B, is the block matrix [a_{ij} B], i, j = 1, 2, ..., s, i.e.

A ⊗ B =
| a₁₁B  a₁₂B  ...  a₁ₛB |
| a₂₁B  a₂₂B  ...  a₂ₛB |
| ...................... |
| aₛ₁B  aₛ₂B  ...  aₛₛB |,

which lies in R^{s²×s²}. With this, one can easily verify that if

A = (N₁ᵀ ⊗ M₁) + (N₂ᵀ ⊗ M₂),

then (3.2) is equivalent to

A c_{n₀} = y,     (3.3)

where A is an (s² × s²) matrix and c_{n₀} and y are the (s² × 1) column vectors obtained by stacking the columns of the square matrices C_{n₀} and Y. In fact, viewing (3.3) as a system of s² scalar equations for the elements of C_{n₀}, (3.3) is exactly the same set of equations written in vector form. In particular, the boundary value problem (1.3) has a unique solution if, and only if, the matrix A of (3.3) is nonsingular.
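The passage from (3.2) to (3.3) is the standard vec/Kronecker identity vec(M C N) = (Nᵀ ⊗ M) vec(C). A small numerical sketch, with random matrices standing in for the M_i, N_i of the boundary value problem:

```python
import numpy as np

rng = np.random.default_rng(1)
s = 3
M1, N1, M2, N2 = (rng.normal(size=(s, s)) for _ in range(4))
C = rng.normal(size=(s, s))
Y = M1 @ C @ N1 + M2 @ C @ N2          # left-hand side of (3.2)

vec = lambda X: X.flatten(order="F")   # stack the columns of X
Acal = np.kron(N1.T, M1) + np.kron(N2.T, M2)   # (s^2 x s^2) matrix of (3.3)

# (3.2)  M1 C N1 + M2 C N2 = Y   <=>   (3.3)  Acal vec(C) = vec(Y)
assert np.allclose(Acal @ vec(C), vec(Y))

# C is recovered uniquely precisely when Acal is nonsingular
C_rec = np.linalg.solve(Acal, vec(Y)).reshape((s, s), order="F")
assert np.allclose(C_rec, C)
```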
4. Closest point search in lattices. The problem of finding a shortest non-zero lattice vector in a lattice of dimension s² is a landmark problem in complexity theory. The LLL algorithm of A. K. Lenstra, H. W. Lenstra and L. Lovász [3] is used as the basic reduction criterion. Kannan [4] has proposed an algorithm to find the shortest lattice vector in time n^{O(n)}, which was later improved by Helfrich [5] to n^{n/2 + o(n)}. The LLL reduction is used in most cases, whereas the Korkine—Zolotareff (KZ) reduction is time consuming. In this section, we present the modified Gram—Schmidt process of Rice [6] for the computation of a best least squares solution of (3.3) in the general case. For this, we consider the general system of equations

Ax = b,     (4.1)

where A is an (m × n) matrix, x is an unknown n-vector, and b is a given (m × 1) vector. The problem is to determine when the system (4.1) has solutions. If A is singular and R(A) and N(A) denote, respectively, the range and null spaces of A, then (4.1) has solutions if b ∈ R(A). In this case, if x̃ is any n-vector in N(A) and x is any solution of (4.1), then the vector x + x̃ is also a solution. If b ∉ R(A), then the problem (4.1) has no solutions.

If A is an (m × n) rectangular matrix, then for a given A ∈ R^{m×n} (or C^{m×n}) and b ∈ R^m, the linear system (4.1) is consistent if, and only if, b ∈ R(A). Otherwise, the residual vector

R = b − Ax     (4.2)

is non-zero for all x ∈ Rⁿ, and it may be desired to find an approximate solution of (4.1), by which we mean a vector x making the residual vector (4.2) «closest» to zero in some sense.
The following theorem shows that ‖Ax − b‖ is minimized by choosing x = A⁺b, where A⁺ is such that

A A⁺ A = A,     (4.3)

(A A⁺)* = A A⁺.     (4.4)

Theorem 4.1. Let A ∈ R^{m×n} (C^{m×n}) and b ∈ R^m (C^m); then ‖Ax − b‖ is smallest when x = A⁺b, where A⁺ satisfies (4.3) and (4.4). Conversely, if A⁺ ∈ R^{n×m} has the property that, for all b, ‖Ax − b‖ is smallest when x = A⁺b, then A⁺ satisfies (4.3) and (4.4).

P r o o f. We write Ax − b = (Ax − P_{R(A)} b) + (P_{R(A)} b − b), where P_{R(A)} is the projection matrix onto R(A). Then

‖Ax − b‖² = ‖Ax − P_{R(A)} b‖² + ‖P_{R(A)} b − b‖².     (4.5)

Since Ax − P_{R(A)} b ∈ R(A) and (I − P_{R(A)}) b ⊥ R(A), it follows that (4.5) assumes its minimum value if, and only if,

Ax = P_{R(A)} b,     (4.6)

which certainly holds if x = A⁺b for any A⁺ satisfying (4.3) and (4.4). Hence A A⁺ = P_{R(A)}. Conversely, if A⁺ is such that, for all b, ‖Ax − b‖ is smallest when x = A⁺b, then by (4.6) we have A A⁺ b = P_{R(A)} b for all b, and hence A A⁺ = P_{R(A)}. Thus A⁺ satisfies (4.3) and (4.4). Hence the proof.
Suppose A⁺ satisfies the following two conditions:

A⁺ A A⁺ = A⁺,     (4.7)

(A⁺ A)* = A⁺ A.     (4.8)

Then we have the following theorem.

Theorem 4.2. Let A ∈ R^{m×n} (C^{m×n}) and b ∈ R^m (C^m). If Ax = b has a solution for x, the unique solution for which ‖x‖ is smallest is given by x = A⁺b, where A⁺ satisfies (4.7) and (4.8). Conversely, if A⁺ ∈ R^{n×m} (C^{n×m}) is such that, whenever Ax = b has a solution, x = A⁺b is the solution with minimum norm, then A⁺ satisfies (4.7) and (4.8).

P r o o f. By Theorem 4.1, equation (4.1) has a unique solution, say x₀, in R(A*). The general solution is given by x = x₀ + y for some y ∈ N(A). Clearly ‖x‖² = ‖x₀‖² + ‖y‖², proving that ‖x‖ ≥ ‖x₀‖, and equality holds only if x = x₀.
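Conditions (4.3), (4.4), (4.7) and (4.8) are the four Penrose conditions, and numpy's SVD-based pseudoinverse satisfies all of them, which can be checked directly; the matrix below is an arbitrary rank-deficient example of our own choosing.

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [1., 5., 6.],
              [1., 8., 9.]])            # rank 2: rank deficient on purpose
b = np.array([1., 2., 3.])
Ap = np.linalg.pinv(A)                  # Moore-Penrose pseudoinverse A+

assert np.allclose(A @ Ap @ A, A)       # (4.3)  A A+ A = A
assert np.allclose((A @ Ap).T, A @ Ap)  # (4.4)  (A A+)* = A A+
assert np.allclose(Ap @ A @ Ap, Ap)     # (4.7)  A+ A A+ = A+
assert np.allclose((Ap @ A).T, Ap @ A)  # (4.8)  (A+ A)* = A+ A

x = Ap @ b                              # minimizes ||Ax - b||, least norm
assert np.allclose(A.T @ (b - A @ x), 0)  # residual is orthogonal to R(A)
```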
5. Modified Gram—Schmidt process. Let A be an (m × n) matrix of rank p ≤ min{m, n}. The algorithm discussed here depends upon a rank factorization of the form AP = QR, where P is an (n × n) permutation matrix such that the first p columns of AP are linearly independent, Q is an (m × p) matrix with orthonormal columns, and R is upper trapezoidal of rank p. We shall denote by Im(A) = {Ax ∈ R^m / x ∈ Rⁿ} the column space of A, and ker(A) = {x ∈ Rⁿ / Ax = 0}. We further need the following results for constructing the least squares algorithm and the best least squares algorithm [6].

Result 5.1. Let A be a given (m × n) matrix of rank p. Then there exists a factorization AP = QR with the following properties:
(i) P is an (n × n) permutation matrix such that the first p columns of AP form a basis of Im(A);
(ii) Q is an (m × p) matrix with orthonormal columns and R is a (p × n) upper trapezoidal matrix of the form R = [R₁, R₂], where R₁ is a non-singular (p × p) upper triangular matrix and R₂ is a p × (n − p) matrix.

Result 5.2. Let A be an (m × n) matrix of rank p. Write A = [a₁, a₂, ..., aₙ], where a_j ∈ R^m, and let P be an (n × n) permutation matrix such that AP = QR, where Q is an (m × p) matrix with orthonormal columns and R is a (p × n) upper trapezoidal matrix of rank p. Then the first p columns of AP are linearly independent, and all least squares solutions of the system Ax = b can be obtained by solving the consistent system R Pᵀ x = Q*b.

If we write R = [R₁, R₂], with R₁ a (p × p) upper triangular matrix, then

x = P [u; v],

where v ∈ R^{n−p} is arbitrary and u = R₁⁻¹(Q*b − R₂ v), are the least squares solutions of Ax = b. A basic least squares solution is obtained by taking v = 0.
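Result 5.2 can be exercised with a library pivoted QR factorization. The sketch below, which assumes scipy is available and uses an arbitrary rank-deficient matrix, computes the basic least squares solution u = R₁⁻¹Q*b, v = 0.

```python
import numpy as np
from scipy.linalg import qr

A = np.array([[1., 2., 3.],
              [1., 5., 6.],
              [1., 8., 9.],
              [1., 11., 12.]])          # m = 4, n = 3, rank p = 2
b = np.array([6., 13., 19., 24.])

Q, R, perm = qr(A, mode="economic", pivoting=True)   # A[:, perm] = Q R
p = int(np.sum(np.abs(np.diag(R)) > 1e-10 * np.abs(R[0, 0])))  # rank

u = np.linalg.solve(R[:p, :p], (Q.T @ b)[:p])        # u = R1^{-1} Q* b
x = np.zeros(A.shape[1])
x[perm[:p]] = u                          # undo the permutation: x = P [u; 0]

assert np.allclose(A.T @ (b - A @ x), 0, atol=1e-8)  # least squares optimality
```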
Algorithms. Let A be an (m × n) matrix, let b ∈ R^m be given, and let rank A = p ≤ min{m, n}. The following is the algorithm to compute a least squares solution. We use the notation a_{ij} := b_{ij} if a_{ij} becomes b_{ij} for all i = 1, 2, ..., m and j = 1, 2, ..., n.

(i) A l g o r i t h m:
q_{ij} := a_{ij}, i = 1, 2, ..., m; j = 1, 2, ..., n
r_{ij} := 0, i = 1, 2, ..., m; j = 1, 2, ..., n + 1
s_j := j, j = 1, 2, ..., n
p := n
for k = 1, 2, ..., n:
  δ_j := Σ_{i=1}^{m} q_{ij}², j = k, k + 1, ..., n
  compute the index c, k ≤ c ≤ n, such that δ_c = max δ_j, k ≤ j ≤ n
  if δ_c = 0 then p := k − 1 and go to 40
  interchange column k of Q with column c of Q
  interchange column k of R with column c of R
  interchange number δ_k with number δ_c
  interchange index s_k with index s_c
  r_{kk} := √δ_k
  q_k := q_k / r_{kk}
  r_{kj} := q_k* q_j, j = k + 1, ..., n
  q_j := q_j − r_{kj} q_k, j = k + 1, ..., n
  r_{k, n+1} := q_k* b
40: for j = p + 1, ..., n: x_j := 0.
Back solve the system of equations
  r₁₁ x₁ + ... + r_{1p} x_p = r_{1, n+1}
  r₂₂ x₂ + ... + r_{2p} x_p = r_{2, n+1}
  ..................................
  r_{pp} x_p = r_{p, n+1}
to determine x₁, x₂, ..., x_p.
For j = n, n − 1, ..., 1:
  k := s_j; if k ≠ j, interchange x_k and x_j.
Then x = (x₁, x₂, ..., xₙ) is a least squares solution of Ax = b.
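Algorithm (i) can be transcribed almost line by line. The following Python sketch (the function name and the tolerance used to detect a zero pivot are our own choices) keeps the pivot record s and returns a least squares solution:

```python
import numpy as np

def mgs_least_squares(A, b, tol=1e-12):
    """Least squares solution of Ax = b via modified Gram-Schmidt with
    column pivoting, following algorithm (i) above (a sketch)."""
    m, n = A.shape
    Q = A.astype(float).copy()
    R = np.zeros((n, n + 1))
    s = list(range(n))                      # permutation record s_j
    p = n
    for k in range(n):
        norms = np.sum(Q[:, k:n] ** 2, axis=0)   # delta_j, j = k, ..., n-1
        c = k + int(np.argmax(norms))
        if norms[c - k] <= tol:             # remaining columns vanish
            p = k
            break
        Q[:, [k, c]] = Q[:, [c, k]]         # interchange columns k and c
        R[:, [k, c]] = R[:, [c, k]]
        s[k], s[c] = s[c], s[k]
        R[k, k] = np.linalg.norm(Q[:, k])   # r_kk = sqrt(delta_k)
        Q[:, k] /= R[k, k]
        for j in range(k + 1, n):
            R[k, j] = Q[:, k] @ Q[:, j]
            Q[:, j] -= R[k, j] * Q[:, k]
        R[k, n] = Q[:, k] @ b               # r_{k, n+1} = q_k* b
    x = np.zeros(n)
    x[:p] = np.linalg.solve(R[:p, :p], R[:p, n])   # back substitution
    out = np.zeros(n)
    out[s] = x                              # undo the column pivoting
    return out
```

On a rank-deficient system the returned vector is a basic least squares solution (the arbitrary components are set to zero); its residual is orthogonal to the column space of A.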
(ii) A l g o r i t h m M I N L S:
If p = n, STOP: the least squares solution already found is the minimal norm least squares solution of Ax = b.
Else ν := n − p
b_j := x_j, j = 1, ..., n
x_j := 0, j = p + 1, ..., n
for k = p + 1, ..., n:
  x_k := 1
  back solve the system R x = 0 to determine x₁, x₂, ..., x_p
  j := k − p
  a_{ij} := x_i, i = 1, 2, ..., n
  x_k := 0
For i = n, n − 1, ..., 1:
  k := s_i; if k ≠ i, interchange a_{kj} and a_{ij} for j = 1, 2, ..., ν.
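Algorithm MINLS turns a least squares solution into the minimal norm one by removing its component in ker(A). A compact sketch, which assumes scipy and delegates the null-space basis to an SVD rather than the back-substitution recipe above:

```python
import numpy as np
from scipy.linalg import qr, null_space

def min_norm_ls(A, b):
    """Basic least squares solution (pivoted QR, v = 0), then subtract
    its component in ker(A) to obtain the minimal norm solution."""
    Q, R, perm = qr(A, mode="economic", pivoting=True)
    diag = np.abs(np.diag(R))
    p = int(np.sum(diag > 1e-10 * max(diag[0], 1.0)))   # numerical rank
    x = np.zeros(A.shape[1])
    x[perm[:p]] = np.linalg.solve(R[:p, :p], (Q.T @ b)[:p])
    N = null_space(A)                  # orthonormal basis of ker(A)
    if N.size:
        x -= N @ (N.T @ x)             # project onto ker(A)-perp
    return x
```

Since the least squares solution set is x + ker(A), the projection step selects the unique member orthogonal to ker(A), which is the member of least norm.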
(iii) C o m p u t a t i o n o f t h e p s e u d o i n v e r s e.
If A is an (m × n) matrix, then we can utilize algorithm MINLS to compute A⁺, the pseudoinverse of A. Using MINLS m times, solve for a_i⁺, the minimal norm least squares solution of the problem Ax = e_i, where e_i, 1 ≤ i ≤ m, are the standard Euclidean basis vectors for R^m. Then A⁺ = [a₁⁺, a₂⁺, ..., a_m⁺].
The vector x = (x₁, x₂, ..., xₙ) computed from the above algorithm is the one used in the closest point search. This algorithm, as detailed above, performs better than the Schnorr—Euchner strategy algorithm [7] and that of [2]. We can apply this algorithm to find a unique solution of the boundary value problem (1.3). We also need the DECODE algorithm and the CLOSEST POINT algorithm given in the next section. Before presenting these algorithms, we first give suitable examples of finding minimum least squares solutions of equations Ax = b.
E x a m p l e 1. Consider the system of equations Ax = b, where

A =
| 1  1  1 |
| 2 −1 −1 |
| 2 −4  5 |,   x = (x₁, x₂, x₃)ᵀ   and   b = (4, −1, −1)ᵀ.

Using QR factorization, we find AP = QR, where

Q = (1/3) ·
| 1  2  2 |
| 2  1 −2 |
| 2 −2  1 |,   R = 3 ·
| 1 −1  1 |
| 0  1 −1 |
| 0  0  1 |.

Note that Q is orthonormal and R is upper trapezoidal. Using the algorithms given in sections 4 and 5, we find that the minimum least squares solution is given by MINLS x = [0.99999, 2.00001, 1.00000]ᵀ.
E x a m p l e 2. Consider the system of equations Ax = b, where

A =
| 1  2  3 |
| 1  5  6 |
| 1  8  9 |
| 1 11 12 |,   x = (x₁, x₂, x₃)ᵀ   and   b = (6, 13, 19, 24)ᵀ.

This is an overdetermined system. Using the algorithms given in sections 4 and 5, we find that the minimum least squares solution is given by MINLS x = [1, 0.5, 1.5]ᵀ.
E x a m p l e 3. Consider the system of equations Ax = b, where

A =
|  1  3  3  2 |
|  2  6  9  5 |
| −1 −3  3  0 |,   x = (x₁, x₂, x₃, x₄)ᵀ   and   b = (1, 5, 5)ᵀ.

Note that the system is an underdetermined system, and its minimum least squares solution is MINLS x = [−0.211009174, −0.633027523, 0.963302752, 0.110091743]ᵀ.
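The three examples can be cross-checked with numpy's minimal norm least squares routine; the entries below are transcribed from the systems above.

```python
import numpy as np

# Example 1: square nonsingular system, unique solution (1, 2, 1)
A1 = np.array([[1., 1., 1.], [2., -1., -1.], [2., -4., 5.]])
b1 = np.array([4., -1., -1.])
assert np.allclose(np.linalg.lstsq(A1, b1, rcond=None)[0], [1., 2., 1.])

# Example 2: overdetermined, rank-deficient; minimal norm least squares
A2 = np.array([[1., 2., 3.], [1., 5., 6.], [1., 8., 9.], [1., 11., 12.]])
b2 = np.array([6., 13., 19., 24.])
assert np.allclose(np.linalg.lstsq(A2, b2, rcond=None)[0], [1., 0.5, 1.5])

# Example 3: underdetermined; minimal norm solution (-23, -69, 105, 12)/109
A3 = np.array([[1., 3., 3., 2.], [2., 6., 9., 5.], [-1., -3., 3., 0.]])
b3 = np.array([1., 5., 5.])
x3 = np.linalg.lstsq(A3, b3, rcond=None)[0]
assert np.allclose(x3, [-23/109, -69/109, 105/109, 12/109])
```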
6. Closest point search algorithm. The concept of a public-key cryptographic algorithm has a long mathematical history, dating from 1976. Many public-key cryptosystems have turned out to be insecure. Of those still considered secure, many are not practicable: either they have too large a key, or the ciphertext is much larger than the plaintext. Only a very few algorithms are both secure and practical. The method we present here is more secure and more practical, and is based on a closest point general lattice search algorithm. This algorithm can be regarded as a «front end» to DECODE, where explicit preprocessing and postprocessing are performed to allow generator matrices that are not lower triangular and are not even square. We first present an algorithm that computes a closest vector for any representation choice, though the speed with which it reaches the required result varies significantly between different representations. We then present the DECODE algorithm, which is of practical importance. The main question we answer in this section is the following: how should a given search problem be preprocessed in order to make the most efficient use of DECODE?

Definition 6.1. A matrix G is said to be a generator matrix if it has real entries and the rows of G are linearly independent over R.

Firstly, we assume that a generator matrix A and an input vector x are given. Let A be an (m × n) matrix and x ∈ Rⁿ. By means of a linear integer transformation, we first transform A into another matrix R₂ which generates an identical lattice, and then rotate and reflect R₂ into a lower triangular matrix R₃, so that

Λ(R₃) ≅ Λ(R₂) = Λ(A).

It is essential to rotate and reflect the input vector x in the same way, so that the transformed input vector, say x₃, is in the same relation to Λ(R₃) as x is to Λ(A). All this can be regarded as a change of coordinate system. By reversing the operations of rotation and reflection, we can produce x̂, the lattice point closest to x in Λ(A). Following the above steps, we are now in a position to present the detailed algorithm.
A l g o r i t h m CLOSEST POINT (A, x).
Input: a generator matrix A and an input vector x.
Output: a lattice point x̂ ∈ Λ(A) closest to x.
S t e p 1. Let R₂ := W A, where W is an (m × m) matrix with integer entries and det W = ±1.
S t e p 2. Compute an (m × n) matrix Q with orthonormal rows such that R₂ = R₃ Q, where R₃ is an (m × m) lower-triangular matrix with all diagonal elements positive.
S t e p 3. Let H₃ := R₃⁻¹.
S t e p 4. Let x₃ := x Qᵀ.
S t e p 5. Let û₃ := DECODE (H₃, x₃).
S t e p 6. Return x̂ := û₃ R₂.
Step 1 is in fact a basis reduction. If no basis reduction is needed, we can take W as the unit matrix. Note that the speed and numerical stability of the search can be improved significantly if a proper reduction is made. Step 2 amounts to rotation and reflection of R₂ into lower triangular form. The usual way to achieve this is the modified QR algorithm presented above: the QR decomposition of R₂ᵀ gives both Qᵀ and R₃, with R₃ equal to Rᵀ. All these transformations can be thought of as a change of coordinate system: measure the first coordinate along v₁ (the first row of R₂), the second in the plane spanned by v₁ and v₂, the third in the space spanned by v₁, v₂ and v₃, and so on. The generator matrix in this coordinate system will in general be square and lower triangular.

As an alternative to the QR decomposition, R₃ can be obtained by Cholesky decomposition: one can find a real lower triangular matrix L such that R₂R₂ᵀ = LLᵀ. In our context, R₃ is equal to L, and the rotation matrix is given by Q = R₃⁻¹R₂. Another approach is the decomposition A = LU, where L is lower triangular and U is upper triangular, again with Q = R₃⁻¹R₂ in our context. If the Gram matrix is instead factored as UUᵀ with U an (m × m) upper triangular matrix, then R₃ is equal to Uᵀ, and the rotation matrix is again Q = R₃⁻¹R₂. All these algorithms can be found in [8]. One can also compute the QR decomposition by Householder reflections; the Householder matrix H is symmetric and orthogonal, and the Householder reflection reflects every vector x ∈ R^m in the hyperplane span{v}⊥:

H = I − 2vvᵀ/(vᵀv),

where v is the Householder vector. The QR method is the generally recommended method for calculating least squares solutions; in this paper we replace the QR algorithm by the modified QR algorithm, which is a most effective tool for finding least squares solutions of the system Ax = b. Note that the decomposition A = QR is unique whatever technique we adopt, and further, in our modified QR algorithm Q orthonormal implies det(Q) = ±1. For an ill-conditioned matrix A, the method we present is, to our belief, the most effective tool.

In steps 4—6 the input vector is processed: it is transformed into the coordinate system of R₂, decoded, and transformed back again.
We now present the DECODE algorithm.

A l g o r i t h m DECODE (H, x).
Input: an (m × m) lower triangular matrix H with positive diagonal elements, and an m-dimensional vector x ∈ R^m to decode in the lattice Λ(H⁻¹).
Output: an m-dimensional vector û ∈ Z^m such that ûH⁻¹ is a lattice point closest to x.
(Here e is an auxiliary (m × m) array, e_k denotes its k-th row, e_{kk} its k-th diagonal element, and u is the integer coefficient vector under construction.)
1. m := the size of H \* dimension *\
2. bestdist := ∞ \* current distance record *\
3. k := m \* dimension of the layer under examination *\
4. dist_k := 0 \* distance to the examined layer *\
5. e_k := xH \* used to compute u_k *\
6. u_k := round(e_{kk}) \* examined lattice point *\
7. y := (e_{kk} − u_k)/h_{kk} \* orthogonal distance to the layer *\
8. step_k := sgn(y) \* offset to the next layer *\
9. loop
10. newdist := dist_k + y²
11. if newdist < bestdist then {
12.   if k ≠ 1 then { \* move down *\
13.     e_{k−1, i} := e_{k, i} − y h_{k, i} for i = 1, 2, ..., k − 1
14.     k := k − 1
15.     dist_k := newdist
16.     u_k := round(e_{kk})
17.     y := (e_{kk} − u_k)/h_{kk}
18.     step_k := sgn(y)
19.   } else { \* a better lattice point is found *\
20.     û := u
21.     bestdist := newdist
22.     k := k + 1
23.     u_k := u_k + step_k \* next layer *\
24.     y := (e_{kk} − u_k)/h_{kk}
25.     step_k := −step_k − sgn(step_k)
26.   }
27. } else {
28.   if k = m then return û (and exit)
29.   else { \* move up *\
30.     k := k + 1
31.     u_k := u_k + step_k \* next layer *\
32.     y := (e_{kk} − u_k)/h_{kk}
33.     step_k := −step_k − sgn(step_k)
34.   }
35. }
36. go to loop
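A plain recursive version of the same depth-first search can serve as a reference implementation. It is our own simplification: the generator G here is the lower triangular matrix itself rather than its inverse, and the zig-zag enumeration around each center is truncated at a fixed width of ±7 instead of being terminated by the exact pruning test.

```python
import numpy as np

def decode(G, x):
    """Closest point in the lattice generated by the rows of the lower
    triangular matrix G (a depth-first sketch of DECODE: fix u_m, then
    u_{m-1}, ..., pruning branches that already exceed the best distance)."""
    m = G.shape[0]
    best = {"dist": np.inf, "u": None}

    def search(k, u, dist):
        # coordinate k of the target, corrected for the layers already fixed
        r = x[k] - sum(u[i] * G[i, k] for i in range(k + 1, m))
        center = r / G[k, k]
        base = round(center)             # nearest layer first, then +-1, ...
        for step in range(0, 8):
            for cand in {base + step, base - step}:
                u[k] = cand
                d = dist + (r - cand * G[k, k]) ** 2
                if d >= best["dist"]:
                    continue             # prune: cannot beat the record
                if k == 0:
                    best["dist"], best["u"] = d, u.copy()
                else:
                    search(k - 1, u, d)

    search(m - 1, [0] * m, 0.0)
    return np.array(best["u"]) @ G

G = np.array([[1.0, 0.0], [0.4, 0.9]])   # rows generate the lattice
x_hat = decode(G, np.array([0.6, 0.7]))  # nearest lattice point to (0.6, 0.7)
```

For this generator the nearest lattice point to (0.6, 0.7) is 0·(1, 0) + 1·(0.4, 0.9) = (0.4, 0.9); because the generator is lower triangular, coordinate k of a lattice point depends only on the coefficients u_k, ..., u_m, which is what makes the layered search valid.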
In the above algorithm, m is the dimension of the lattice and k is the dimension of the sublayer structure that is currently being investigated. In Case A the algorithm has found a k-dimensional layer whose distance is less than the currently smallest distance; this layer is expanded into its (k – 1)-dimensional sublayers. Conversely, if the distance to the examined layer is greater than the lowest distance, the algorithm moves one step up in the hierarchy of layers; this is done in Case C. Case B is invoked when the algorithm has successfully moved all the way down to the zero-dimensional layer (that is, a lattice point) without exceeding the lowest distance. This lattice point is stored in the output, the lowest distance is updated, and the algorithm moves back up again without restarting [2]. Note that in [2] the closest point search algorithm is based upon carefully selected preprocessing, and such a selection is not possible in each and every case. The methods we presented in Sections 4 and 5 eliminate such careful selection, minimize decoding time, and at the same time reduce the complexity of the closest point search significantly.
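The layer-by-layer control flow described above can be sketched as a small, self-contained Python routine. This is an illustrative depth-first search seeded with the Babai rounding point, written under the assumed convention that the generator matrix H is lower triangular with the basis vectors as its rows; it is not the authors' exact algorithm, and all names here (babai, closest_point) are ours.

```python
import math

def babai(H, x):
    """Babai rounding: a quick suboptimal lattice point used to seed the search."""
    m = len(H)
    u = [0] * m
    for k in range(m - 1, -1, -1):
        r = x[k] - sum(u[i] * H[i][k] for i in range(k + 1, m))
        u[k] = round(r / H[k][k])
    return u

def sq_dist(H, u, x):
    """Squared Euclidean distance between the lattice point u*H and x."""
    m = len(H)
    return sum((sum(u[i] * H[i][j] for i in range(j, m)) - x[j]) ** 2
               for j in range(m))

def closest_point(H, x):
    """Depth-first search over layers k = m, m-1, ..., 1, pruning every branch
    whose accumulated squared distance exceeds the current record (Cases A/C)
    and recording a new best point on reaching dimension zero (Case B)."""
    m = len(H)
    best_u = babai(H, x)
    best = sq_dist(H, best_u, x)
    u = [0] * m

    def search(k, dist):
        nonlocal best, best_u
        if k < 0:                        # zero-dimensional layer: a lattice point
            if dist < best:
                best, best_u = dist, list(u)
            return
        r = x[k] - sum(u[i] * H[i][k] for i in range(k + 1, m))
        c = r / H[k][k]                  # centre of the admissible layers
        w = math.sqrt(max(best - dist, 0.0)) / abs(H[k][k])
        for cand in range(math.ceil(c - w), math.floor(c + w) + 1):
            u[k] = cand
            y = cand * H[k][k] - r       # orthogonal offset to this layer
            search(k - 1, dist + y * y)

    search(m - 1, 0.0)
    return best_u, best
```

For example, with H = [[2, 0], [1, 3]] and x = (2.2, 2.8) the routine returns the integer coordinates u = (1, 1), i.e. the lattice point (3, 3). The pruning radius here is recomputed at entry to each layer rather than inside the candidate loop; that keeps the sketch short at the cost of some extra branches.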
The existence and uniqueness of solutions of two-point boundary value problems associated with generalized first-order matrix difference systems are considered. To find the best least-squares solution of the system of equations, a modified Gram—Schmidt process and a modified QR-algorithm are used. An efficient closest point search algorithm is presented to further improve the least-squares solution. In the process of finding the shortest lattice vector, modified encoding and decoding algorithms are obtained.
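The modified Gram—Schmidt / QR least-squares stage mentioned in the résumé can be illustrated with a minimal, self-contained sketch of the standard technique: QR factorisation by modified Gram—Schmidt followed by back substitution. This is not the authors' modified algorithms of Sections 4 and 5; the matrix layout (A stored row-wise, Q returned as a list of columns) and the function names are assumptions of this sketch.

```python
def mgs_qr(A):
    """QR factorisation A = Q*R by modified Gram-Schmidt: each remaining column
    is orthogonalised against every computed q_j one at a time, which is
    numerically more stable than classical Gram-Schmidt."""
    m, n = len(A), len(A[0])
    V = [[A[i][j] for i in range(m)] for j in range(n)]   # columns of A
    Q = [None] * n                                        # columns of Q
    R = [[0.0] * n for _ in range(n)]                     # upper triangular
    for j in range(n):
        R[j][j] = sum(v * v for v in V[j]) ** 0.5
        Q[j] = [v / R[j][j] for v in V[j]]
        for k in range(j + 1, n):
            R[j][k] = sum(Q[j][i] * V[k][i] for i in range(m))
            V[k] = [V[k][i] - R[j][k] * Q[j][i] for i in range(m)]
    return Q, R

def least_squares(A, b):
    """Best least-squares solution of A x ~ b: with A = QR, solve the upper
    triangular system R x = Q^T b by back substitution."""
    Q, R = mgs_qr(A)
    n = len(R)
    c = [sum(Q[j][i] * b[i] for i in range(len(b))) for j in range(n)]
    x = [0.0] * n
    for j in range(n - 1, -1, -1):
        x[j] = (c[j] - sum(R[j][k] * x[k] for k in range(j + 1, n))) / R[j][j]
    return x
```

For instance, for the consistent overdetermined system with rows (1, 0), (0, 1), (1, 1) and right-hand side (1, 2, 3), the routine recovers x = (1, 2) exactly.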
1. Blake I. F. Lattices and cryptography // Codes and Systems / R. E. Blahut and R. Koetter, eds. — Norwell, MA : Kluwer Academic, 2002. — P. 317—332.
2. Agrell E., Eriksson T., Vardy A., Zeger K. Closest point search in lattices // IEEE Transactions on Information Theory. — 2002. — 48, № 8. — P. 2201—2214.
3. Lenstra A. K., Lenstra H. W., Lovász L. Factoring polynomials with rational coefficients // Math. Ann. — 1982. — 261. — P. 515—534.
4. Kannan R. Minkowski's convex body theorem and integer programming // Mathematics of Operations Research. — 1987. — 12. — P. 415—440.
5. Helfrich B. Algorithms to construct Minkowski reduced and Hermite reduced lattice bases // Theoretical Computer Science. — 1985. — 41. — P. 125—139.
6. Rice J. R. Experiments on Gram—Schmidt orthogonalization // Mathematics of Computation. — 1966. — 20. — P. 325—328.
7. Atkinson K. An Introduction to Numerical Analysis. — 2nd ed. — New York ; Brisbane ; Toronto : John Wiley and Sons, 1987. — P. 209.
8. Schnorr C. P. A more efficient algorithm for lattice basis reduction // J. Algorithms. — 1988. — 9. — P. 47—62.
Received 18.08.06