UBC Theses and Dissertations
Some combinatorial properties of the diagonal sums of doubly stochastic matrices Wang, Edward Tzu-Hsia 1971


SOME COMBINATORIAL PROPERTIES OF THE DIAGONAL SUMS OF DOUBLY STOCHASTIC MATRICES

by

EDWARD TZU-HSIA WANG

B.Sc., National Taiwan University, Taipei, Taiwan, 1965

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in the Department of MATHEMATICS

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA

September, 1971

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the Head of my Department or by his representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Mathematics
The University of British Columbia
Vancouver 8, Canada
Date

Supervisor: Professor R. Westwick

ABSTRACT

Let $\Omega_n$ denote the convex polyhedron of all $n \times n$ d.s. (doubly stochastic) matrices. The main purpose of this thesis is to study some combinatorial properties of the diagonal sums of matrices in $\Omega_n$. In Chapter I, we determine, for all d.s. matrices unequal to $J_n$, the maximum number of diagonals that can have a common diagonal sum. The key will be a Decomposition Theorem that enables us to characterize completely the structure of a d.s. matrix when this maximum number is attained, provided that the common sum is not one. When the common sum is one, the question is more difficult and remains open. Several applications of the Decomposition Theorem are also given.
In Chapter II, we concentrate on the diagonals with maximum diagonal sum $h$ and the diagonals with minimum diagonal sum $k$. We obtain the best possible bounds for entries on these diagonals and for various kinds of functions of $h$ and $k$. The key will be a Covering Theorem that enables us to analyze the cases when those bounds are attained. A conjecture is given.

In Chapter III, we study the properties of the h-function and the k-function, the functions that associate with each d.s. matrix its maximum and minimum diagonal sums respectively. In particular, we investigate the behavior of these functions on the Kronecker product of d.s. matrices. Furthermore, we show that the h-function is very similar to the rank function $\rho$ in many respects. We also prove that for $A \in \Omega_n$, $h(A) \le \rho(A)$ and $\operatorname{per}(A) \le n!\,\{h(A)/n\}^n$, which improves a result by M. Marcus and H. Minc. A conjecture is given.

INTRODUCTION
CHAPTER I : DIAGONAL SUMS OF d.s. MATRICES
CHAPTER II : MAXIMUM AND MINIMUM DIAGONAL SUMS OF d.s. MATRICES
CHAPTER III : PROPERTIES OF THE h-FUNCTION AND THE k-FUNCTION OF d.s. MATRICES
BIBLIOGRAPHY

ACKNOWLEDGEMENT

I am deeply indebted to my supervisor, Dr. R. Westwick, for rendering generous assistance and invaluable guidance throughout the research and preparation of this thesis. My further thanks are due to Dr. B. N. Moyls, Dr. D. C. Murdoch, and Dr. A. Adler, who carefully read the draft of this thesis and gave helpful suggestions and criticisms. The financial support of the National Research Council of Canada and the University of British Columbia is gratefully acknowledged. I dedicate this work to my wife for her constant encouragement during my research and for her excellent typing of this thesis.

INTRODUCTION

Let $S_n$ denote the full symmetric group of degree $n$. If $A$ is any $n \times n$ matrix and $\sigma \in S_n$, then the sequence of elements $a_{1\sigma(1)}, a_{2\sigma(2)}, \ldots, a_{n\sigma(n)}$ is called the diagonal of $A$ corresponding to $\sigma$.
Following the usual convention, we shall use $\sigma$ to denote both the permutation $\sigma$ and the diagonal corresponding to it. If $\sigma$ is the identity permutation, then the diagonal corresponding to it is called the main diagonal of $A$. If for some $\sigma \in S_n$, $a_{i\sigma(i)} = 1$, $i = 1, 2, \ldots, n$, and $a_{ij} = 0$ otherwise, then $A$ is called a permutation matrix. If $\sigma$ is a diagonal of $A$, then the sum $\sum_{i=1}^n a_{i\sigma(i)}$ is called the diagonal sum of $\sigma$. Diagonal product is defined in a similar way.

An $n$-square real matrix $A = (a_{ij})$, where $0 \le a_{ij} \le 1$ for all $i, j = 1, 2, \ldots, n$, is called row stochastic if each row sum is one, column stochastic if each column sum is one, stochastic if it is either row stochastic or column stochastic, and doubly stochastic if it is both row stochastic and column stochastic. The fundamental theorem in the theory of doubly stochastic matrices is the theorem of Birkhoff [1], which states that the set of all $n$-square doubly stochastic matrices is a convex polyhedron with the permutation matrices as vertices. (For more about this theorem, see [15, 18].)

Much of the study of doubly stochastic matrices is motivated by the well-known, long-standing and challenging van der Waerden conjecture [22], which states that for $A \in \Omega_n$, $\operatorname{per}(A) \ge n!/n^n$ with equality if and only if $A = J_n$, the doubly stochastic matrix with all entries equal to $1/n$. Here, per denotes the permanent function. Despite a great deal of effort, this conjecture is still unsolved, though it is generally believed to be true [13]. The conjecture has been verified for $n \le 5$ [4, 5, 16], and many partial results, mainly due to Marcus, Minc and Newman, have also been obtained [4, 5, 13, 16]. It has been pointed out by P. Erdős that the following two statements are consequences of the van der Waerden conjecture:

(1) There exists a diagonal $\sigma$ of $A$ such that $\prod_{i=1}^n a_{i\sigma(i)} \ge 1/n^n$.

(2) There exists a diagonal $\sigma$ of $A$ such that $\sum_{i=1}^n a_{i\sigma(i)} \ge 1$ and $a_{i\sigma(i)} > 0$ for $i = 1, 2, \ldots, n$.

In 1958, Marcus and Ree [18] verified (2) by proving that for $A \in \Omega_n$, there is a diagonal $\sigma$ such that $\sum_{i=1}^n a_{i\sigma(i)} \ge \sum_{i,j=1}^n a_{ij}^2$ and $a_{i\sigma(i)} > 0$ for $i = 1, 2, \ldots, n$. Then in 1962, Marcus and Minc [10] verified (1) by proving that for $A \in \Omega_n$, $\max_\sigma \prod_{i=1}^n a_{i\sigma(i)} \ge 1/n^n$ with equality if and only if $A = J_n$.

Erdős' observation and the proofs furnished by Marcus, Minc, and Ree suggested that local properties such as diagonal sums (as well as diagonal products) are closely related to the structure of doubly stochastic matrices. As an extreme case, notice that if any diagonal sum is $n$, then the doubly stochastic matrix must be a permutation matrix. A less trivial proposition, which will be used frequently in this work, also illustrates this point.

Proposition: If $A \in \Omega_n$, then

(i) $\max_{\sigma \in S_n} \sum_{i=1}^n a_{i\sigma(i)} \ge 1$ with equality if and only if $A = J_n$;

(ii) $\min_{\sigma \in S_n} \sum_{i=1}^n a_{i\sigma(i)} \le 1$ with equality if and only if $A = J_n$.

Since for $A \in \Omega_n$, $\sum_{i,j=1}^n a_{ij}^2 \ge 1$ with equality if and only if $a_{ij} = 1/n$ for all $i, j = 1, 2, \ldots, n$, i.e., if and only if $A = J_n$, (i) is an immediate consequence of Marcus and Ree's result. (ii) follows by applying (i) to the doubly stochastic matrix $B = (nJ_n - A)/(n-1)$. As we will see later, the above proposition can also be obtained independently as a corollary of one of our theorems (Theorem 1.3).

With this picture in mind, the main purpose of this work is to study some combinatorial properties of the diagonal sums of doubly stochastic matrices. The starting point will be the theorem of Marcus and Minc [10] which states that if $A \in \Omega_n$ has more than $(n-1)(n-1)!$ diagonals with a common non-zero diagonal product, then $A = J_n$. In Chapter I, we first show that the above theorem still holds if we replace "non-zero diagonal product" by "diagonal sum".
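As a numerical illustration of the proposition above (a minimal sketch; the code and its helper names are ours, not part of the thesis apparatus), one can generate a doubly stochastic matrix as a convex combination of permutation matrices, which Birkhoff's theorem guarantees is always doubly stochastic, and enumerate all $n!$ diagonal sums by brute force:

```python
import itertools
import random

def diagonal_sums(A):
    """All n! diagonal sums of the square matrix A."""
    n = len(A)
    return [sum(A[i][p[i]] for i in range(n))
            for p in itertools.permutations(range(n))]

# A convex combination of permutation matrices is doubly stochastic
# (Birkhoff), so this produces a generic test matrix in Omega_n.
n = 4
perms = list(itertools.permutations(range(n)))
random.seed(1)
w = [random.random() for _ in perms]
total = sum(w)
A = [[sum(wk for wk, p in zip(w, perms) if p[i] == j) / total
      for j in range(n)] for i in range(n)]

sums = diagonal_sums(A)
# Proposition: the maximum diagonal sum is >= 1 and the minimum is <= 1
# (both with equality only for J_n).
print(max(sums) >= 1 - 1e-9, min(sums) <= 1 + 1e-9)
```

Since the average of all $n!$ diagonal sums of a d.s. matrix is exactly one, the two printed comparisons must both hold for any matrix generated this way.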
Then we pose the question, "For $A \in \Omega_n$, what is the maximum number of diagonals with sum greater than one, and what is the maximum number of diagonals with sum less than one?" To answer this question, we obtain a decomposition theorem (Theorem 1.11) which shows that this common maximum number is again $(n-1)(n-1)!$, a somewhat surprising, if not significant, result. This theorem also enables us to characterize completely the structure of a doubly stochastic matrix when it has precisely $(n-1)(n-1)!$ diagonals with a common sum $\alpha \ne 1$. The decomposition theorem gives no information for $\alpha = 1$, in which case the question becomes more difficult and we are only able to obtain a partial answer. Several consequences of this theorem are also derived.

In Chapter II, we confine ourselves to the diagonals with maximum diagonal sum $h$ and the diagonals with minimum diagonal sum $k$. The key will be a covering theorem (Theorem 2.3) which enables us to obtain the best possible bounds for entries on these diagonals and for various kinds of functions of $h$ and $k$, and to analyze the cases when these bounds are attained. When investigating the lower bound for $h + k$, a conjecture (Conjecture 2.21) naturally presents itself.

In Chapter III, we study the properties of the h-function and the k-function, the functions that associate with each doubly stochastic matrix its maximum and minimum diagonal sums respectively. In particular, we show that the h-function is convex and is very similar to the rank function $\rho$ in many respects. We also investigate the behavior of these functions on the Kronecker product of doubly stochastic matrices. Furthermore, we prove that for $A \in \Omega_n$, $h(A) \le \rho(A)$ and $\operatorname{per}(A) \le n!\,\{h(A)/n\}^n$, which improves a result of Marcus and Minc [11]. We also make the conjecture that the h-function defined on $\Omega_n$ obeys Sylvester's law for the rank function, i.e., $h(A) + h(B) - h(AB) \le n$ for $A, B \in \Omega_n$. On the whole, our study will be of a combinatorial nature.
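The decomposition behind these counts, the partition of $S_n$ into left cosets of the cyclic group generated by a full cycle, each coset being a family of $n$ mutually disjoint diagonals, is easy to check exhaustively for small $n$. The following sketch (illustrative code with names of our choosing) verifies it for $n = 4$:

```python
from itertools import permutations
import math

def compose(p, q):
    # (p o q)(i) = p[q[i]]; permutations are stored as tuples.
    return tuple(p[q[i]] for i in range(len(p)))

n = 4
cycle = tuple((i + 1) % n for i in range(n))   # full cycle i -> i+1 (mod n)
G, g = [], tuple(range(n))
for _ in range(n):                              # G = <cycle>, the cyclic group of order n
    G.append(g)
    g = compose(cycle, g)

seen, cosets = set(), []
for p in permutations(range(n)):                # partition S_n into left cosets pG
    if p not in seen:
        coset = [compose(p, h) for h in G]
        cosets.append(coset)
        seen.update(coset)

# Two diagonals are disjoint if they agree in no position; within a coset
# of <cycle> this holds because two powers of a full cycle that agree at
# one point are equal.
disjoint = all(a[i] != b[i]
               for c in cosets for a in c for b in c if a != b
               for i in range(n))
print(len(cosets) == math.factorial(n - 1), disjoint)
```

The two printed checks confirm that $S_4$ splits into $(n-1)! = 6$ classes of $n = 4$ pairwise disjoint diagonals, which is exactly the statement of the Decomposition Theorem for $n = 4$.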
In particular, we will apply twice in this work (cf. Theorem 2.13 and Proposition 3.18) the well-known theorem of P. Hall [20] on systems of distinct representatives, which states that the subsets $S_1, S_2, \ldots, S_n$ of a set $S$ have a system of distinct representatives if and only if for each $k = 1, 2, \ldots, n$ and for all sequences $\omega$ of $k$ terms such that $1 \le \omega_1 < \omega_2 < \cdots < \omega_k \le n$, it is true that $|S_{\omega_1} \cup S_{\omega_2} \cup \cdots \cup S_{\omega_k}| \ge k$.

CHAPTER I : DIAGONAL SUMS OF d.s. MATRICES

In this chapter, we show that for d.s. matrices unequal to $J_n$, the maximum number of diagonals that can have a common sum is $(n-1)(n-1)!$. We then pose the questions, "For $A \in \Omega_n$, what is the maximum number of diagonals with sum greater than one and what is the maximum number of diagonals with sum less than one?" To answer these questions, we obtain a decomposition theorem which shows that the answers to both questions are again $(n-1)(n-1)!$. This theorem also enables us to characterize completely the structure of a d.s. matrix when there are precisely $(n-1)(n-1)!$ diagonals with a common sum $\alpha \ne 1$. For $\alpha = 1$, the question is more difficult and remains open.

In [10], Marcus and Minc proved the following results:

Lemma 1.1: Let $A$ be an $n \times n$ matrix such that more than $(n-1)(n-1)!$ diagonals have a common non-zero product. Then $\rho(A) = 1$.

Theorem 1.2: Let $A \in \Omega_n$ be such that more than $(n-1)(n-1)!$ diagonals have a common non-zero product. Then $A = J_n$.

We begin this chapter by obtaining, using Lemma 1.1, an analogue of Theorem 1.2 concerning the diagonal sums of d.s. matrices.

Theorem 1.3: Let $A \in \Omega_n$ be such that more than $(n-1)(n-1)!$ diagonals have a common sum. Then $A = J_n$.

Proof: Let $\alpha$ denote the common diagonal sum. Define the matrix $B = (b_{ij})$ by $b_{ij} = \exp(a_{ij})$. Then $b_{ij} > 0$ and $\ln b_{ij} = a_{ij}$ for all $i, j = 1, 2, \ldots, n$. Hence $\ln \prod_{i=1}^n b_{i\sigma(i)} = \sum_{i=1}^n a_{i\sigma(i)} = \alpha$, i.e., $\prod_{i=1}^n b_{i\sigma(i)} = \exp(\alpha) \ne 0$, for more than $(n-1)(n-1)!$ diagonals $\sigma$. By Lemma 1.1, $\rho(B) = 1$ and hence there is one row of $B$, the 1st row say, such that every other row is a scalar multiple of it. Hence $\exp(a_{i1})/\exp(a_{11}) = \exp(a_{i2})/\exp(a_{12}) = \cdots = \exp(a_{in})/\exp(a_{1n}) = \alpha_i > 0$, or $a_{i1} - a_{11} = a_{i2} - a_{12} = \cdots = a_{in} - a_{1n} = \ln(\alpha_i) = \beta_i$ for each $i = 2, 3, \ldots, n$. Therefore $\sum_{j=1}^n a_{ij} = n\beta_i + \sum_{j=1}^n a_{1j}$. Since each row sum of $A$ is one, we get $\beta_i = 0$, i.e., $a_{i1} = a_{11}$, $a_{i2} = a_{12}$, $\ldots$, $a_{in} = a_{1n}$ for all $i = 2, 3, \ldots, n$. Since each column sum of $A$ is also one, we get, for all $j = 1, 2, \ldots, n$, $n a_{1j} = 1$, i.e., $a_{11} = a_{12} = \cdots = a_{1n} = 1/n$. Hence $A = J_n$.

Corollary 1.4: Let $A \in \Omega_n$. Then $\prod_{\sigma \in S_n} \sum_{i=1}^n a_{i\sigma(i)} \le 1$, with equality iff $A = J_n$.

Proof: By the elementary arithmetic-geometric inequality, we have

$$\prod_\sigma \sum_{i=1}^n a_{i\sigma(i)} \le \Bigl(\frac{1}{n!} \sum_\sigma \sum_{i=1}^n a_{i\sigma(i)}\Bigr)^{n!} = \Bigl(\frac{1}{n!}(n-1)! \sum_{i,j=1}^n a_{ij}\Bigr)^{n!} = (n!/n!)^{n!} = 1 .$$

We have equality iff the sums $\sum_i a_{i\sigma(i)}$ are equal for all diagonals $\sigma$, and hence iff $A = J_n$.

The next corollary will be used frequently.

Corollary 1.5: Let $A \in \Omega_n$. Then

(i) $\max_{\sigma \in S_n} \sum_{i=1}^n a_{i\sigma(i)} \ge 1$ with equality iff $A = J_n$;

(ii) $\min_{\sigma \in S_n} \sum_{i=1}^n a_{i\sigma(i)} \le 1$ with equality iff $A = J_n$.

Proof: (i) Since $n! \cdot \max_\sigma \sum_{i=1}^n a_{i\sigma(i)} \ge \sum_\sigma \sum_{i=1}^n a_{i\sigma(i)} = (n-1)! \sum_{i,j=1}^n a_{ij} = n!$, we get $\max_\sigma \sum_{i=1}^n a_{i\sigma(i)} \ge 1$. Equality holds iff the sums $\sum_i a_{i\sigma(i)}$ are equal for all diagonals $\sigma$, and hence iff $A = J_n$. The proof of (ii) is similar.

The next theorem is an analogue of a result of Sinkhorn and Knopp [21, Corollary 4].

Theorem 1.6: Let $A, B \in \Omega_n$ be such that more than $(n-1)(n-1)!$ corresponding diagonals of $A$ and $B$ have equal sums. Then $A = B$.

Proof: Put $C = \frac{1}{n}(A - B + nJ_n)$. Then $C \in \Omega_n$ and more than $(n-1)(n-1)!$ diagonals of $C$ have the common sum one. Hence $C = J_n$ by Theorem 1.3. Therefore $A = B$.

Remark 1.7: (i) The number $(n-1)(n-1)!$
in Theorems 1.2 and 1.3 above is best possible in the sense that it is attainable for all $n$. Consider the matrix $A$ whose first row is $(x, y, \ldots, y)$ and whose remaining $n-1$ rows are each $(y, z, \ldots, z)$, where $0 \le x, y, z \le 1$ are such that $x + (n-1)y = y + (n-1)z = 1$. Then clearly $A \in \Omega_n$. Since this particular matrix will be considered frequently in the sequel, we call it an (x,y,z)-matrix, and denote it by $A(x, y, z)$. If we choose $x$, $y$, and $z$ such that $y^2 z^{n-2} \ne x z^{n-1}$ (i.e., $y^2 \ne xz$), then $A \ne J_n$ and there are $(n-1)!$ diagonals passing through $x$ with the common non-zero product $x z^{n-1}$ and $(n-1)(n-1)!$ diagonals missing $x$ with the common non-zero product $y^2 z^{n-2}$. Similarly, if we choose $x$, $y$, and $z$ such that $x + (n-1)z \ne 2y + (n-2)z$ (i.e., $2y \ne x + z$), then $A \ne J_n$ and $A$ has precisely $(n-1)(n-1)!$ diagonals with the common sum $2y + (n-2)z$. In fact, we shall prove later on that if $A \in \Omega_n$ has precisely $(n-1)(n-1)!$ diagonals with a common sum $\alpha \ne 1$, then $A = A(x, y, z)$ for some suitable choices of $x$, $y$, and $z$.

(ii) Later on we shall show via a decomposition theorem that Theorem 1.3 and Theorem 1.6 are in fact equivalent.

(iii) In Theorem 1.6, the assumption "corresponding" is clearly indispensable, for we can always permute the rows and columns of $A \in \Omega_n$ to get $B \ne A$. Obviously, $A$ and $B$ have all the diagonal sums equal (though not for corresponding diagonals). Furthermore, the number $(n-1)(n-1)!$ in Theorem 1.6 is attained for $n = 3$. For example, let $A = J_3$ and

$$B = \begin{pmatrix} 1/2 & 1/3 & 1/6 \\ 1/6 & 1/2 & 1/3 \\ 1/3 & 1/6 & 1/2 \end{pmatrix} .$$

Then $A \ne B$, and there are precisely 4 corresponding diagonals with equal sums. However, we are unable to show that the number $(n-1)(n-1)!$ is attained for all $n$, or for any $n \ge 4$. The next proposition shows that this question is equivalent to the existence of a d.s. matrix having precisely $(n-1)(n-1)!$ diagonals with the common sum one.

Proposition 1.8: The following two statements are equivalent.

(i) There exist $A, B \in \Omega_n$, $A \ne B$, such that precisely $(n-1)(n-1)!$ corresponding diagonals of $A$ and $B$ have equal sums.

(ii) There exists $C \in \Omega_n$ with precisely $(n-1)(n-1)!$ diagonals having sum one.

Proof: (i) $\Rightarrow$ (ii). Put $C = \frac{1}{n}(A - B + nJ_n)$. Then $C \in \Omega_n$ and, since $A \ne B$, $C \ne J_n$. Since $C$ has at least $(n-1)(n-1)!$ diagonals with sum one, it has precisely $(n-1)(n-1)!$ diagonals with sum one by Theorem 1.3.

(ii) $\Rightarrow$ (i). Since $C$ has precisely $(n-1)(n-1)!$ diagonals with sum one, $C \ne J_n$ by Theorem 1.3. Now, $C$ and $J_n$ have precisely $(n-1)(n-1)!$ corresponding diagonals with equal sums.

Since for a d.s. matrix $A$ each row sum and column sum is identically one, it is natural to ask the following questions: "How many diagonals of $A$ can have sums greater than one, and how many diagonals of $A$ can have sums smaller than one?" In the following, we shall obtain a decomposition theorem from which the answers to both questions follow immediately. Since this theorem is a combinatorial theorem concerning the positions in a matrix without reference to the actual entries, we find it convenient to extend the notion of diagonal as follows:

Definition 1.9: Let $\sigma \in S_n$. By a $\sigma$-diagonal, or simply a diagonal, we mean the set of ordered pairs $\{(i, \sigma(i)) : i = 1, 2, \ldots, n\}$. Two diagonals $\sigma$ and $\tau$ are said to be disjoint if for all $i = 1, 2, \ldots, n$, $\sigma(i) \ne \tau(i)$. A collection $E$ of diagonals is a mutually disjoint collection if for all $\sigma, \tau \in E$ with $\sigma \ne \tau$, $\sigma$ and $\tau$ are disjoint.

Definition 1.10: A collection $E$ of diagonals is said to cover a matrix $A$ if each entry of $A$ appears in at least one of the diagonals of $E$. If, in addition, $E$ is a mutually disjoint collection, then we say the covering is exact. It is clear that a mutually disjoint collection $E$ of diagonals covers $A$ exactly iff $|E| = n$.

Theorem 1.11 (Decomposition Theorem for the set of all diagonals): Let $D$ be the set of all diagonals of order $n$. Then there exists a decomposition of $D$ into $(n-1)!$ mutually disjoint subsets, each containing $n$ mutually disjoint diagonals.
Proof: Let $\sigma_0 \in S_n$ be any full cycle permutation, and let $G = \langle \sigma_0 \rangle$ be the cyclic subgroup of order $n$ generated by $\sigma_0$. Consider the class of all (left) cosets $\sigma G$ of $G$, $\sigma \in S_n$. Since $|\sigma G| = |G| = n$ for all $\sigma \in S_n$ and $\sum_\sigma |\sigma G| = |S_n| = n!$, where the summation is taken over a complete set of coset representatives $\sigma$, it is clear that there are $(n-1)!$ such (disjoint) cosets. Furthermore, since $\sigma_0$ is a full cycle, we have, for each $i = 1, 2, \ldots, n$, that $(\sigma g_1)(i) = (\sigma g_2)(i) \Rightarrow \sigma(g_1(i)) = \sigma(g_2(i)) \Rightarrow g_1(i) = g_2(i) \Rightarrow g_1 = g_2 \Rightarrow \sigma g_1 = \sigma g_2$ for all $g_1, g_2 \in G$, $\sigma \in S_n$. Therefore, the diagonals in each coset are mutually disjoint. This completes the proof.

As an immediate application, we have the following:

Theorem 1.12: Let $m$ be an integer, $1 \le m \le n$. Let $K_m$ be the smallest positive integer with the property that any collection of $K_m$ diagonals contains $m$ mutually disjoint ones. Then $K_m = (m-1)(n-1)! + 1$.

Proof: Let $E$ be a collection of diagonals such that $|E| \ge (m-1)(n-1)! + 1$. Let $D = D_1 \cup D_2 \cup \cdots \cup D_{(n-1)!}$ be a decomposition of the set $D$ of all diagonals as described in Theorem 1.11. By the pigeon-hole principle, $|E \cap D_i| \ge m$ for some $i$, $1 \le i \le (n-1)!$, for otherwise $|E| \le (m-1)(n-1)!$, a contradiction. Since $D_i$ is a mutually disjoint collection, $K_m \le (m-1)(n-1)! + 1$ follows. On the other hand, let $F$ be the collection of all diagonals passing through any one of $a_{1i}$, $i = 1, 2, \ldots, m-1$. Then $|F| = (m-1)(n-1)!$. Clearly, $F$ does not contain $m$ mutually disjoint diagonals, for otherwise the pigeon-hole principle would imply that some two of these disjoint diagonals must pass through the same $a_{1i}$ for some $i$, which is obviously absurd. Hence $K_m \ge (m-1)(n-1)! + 1$. Therefore, $K_m = (m-1)(n-1)! + 1$.

Corollary 1.13: Let $A \in \Omega_n$. Let $\Lambda_n(A)$ and $\delta_n(A)$ be the number of diagonals of $A$ with sums greater than one, and smaller than one, respectively. Let $\Lambda_n = \max_{A \in \Omega_n} \Lambda_n(A)$ and $\delta_n = \max_{A \in \Omega_n} \delta_n(A)$. Then $\Lambda_n = \delta_n = (n-1)(n-1)!$. This bound is always attainable.

Proof: If $\Lambda_n(A) \ge (n-1)(n-1)! + 1$ for some $A \in \Omega_n$, then by Theorem 1.12 we can select $n$ mutually disjoint diagonals, each having sum $> 1$. Since they cover $A$ exactly, we get $\sum_{i,j=1}^n a_{ij} > n$, a contradiction. Hence $\Lambda_n \le (n-1)(n-1)!$. Similarly, $\delta_n \le (n-1)(n-1)!$. To see that this bound is always attainable, consider the d.s. matrices $A(0, 1/(n-1), (n-2)/(n-1)^2)$ and $A(1, 0, 1/(n-1))$ respectively (cf. Remark 1.7.(i)). In both cases, there are precisely $(n-1)(n-1)!$ diagonals with sum $2y + (n-2)z$, which is equal to $[(n-1)^2 + 1]/(n-1)^2$ in the first case and $(n-2)/(n-1)$ in the second. This completes the proof.

In [10], it was shown that for any d.s. matrix $A$, $\max_{\sigma \in S_n} \prod_{i=1}^n a_{i\sigma(i)} \ge 1/n^n$. The next corollary is now clear from the elementary arithmetic-geometric inequality and Corollary 1.13 above.

Corollary 1.14: Let $A \in \Omega_n$. Then at most $(n-1)(n-1)!$ diagonals can have diagonal product $> 1/n^n$.

We mention that Corollaries 1.13 and 1.14 are in fact true for the class of stochastic matrices. As another immediate application of the Decomposition Theorem, we show that Theorem 1.3 and Theorem 1.6 are indeed equivalent (Remark 1.7.(ii)).

Theorem 1.15: The following two statements for d.s. matrices are equivalent.

(i) If $A \in \Omega_n$ has more than $(n-1)(n-1)!$ diagonals with a common sum, then $A = J_n$.

(ii) If $A, B \in \Omega_n$ have more than $(n-1)(n-1)!$ corresponding diagonals with equal sums, then $A = B$.

Proof: (i) $\Rightarrow$ (ii). This is just Theorem 1.6. (ii) $\Rightarrow$ (i). Let $A \in \Omega_n$ be such that more than $(n-1)(n-1)!$ diagonals have the common sum $\alpha$. Then by Corollary 1.13, it is impossible that $\alpha > 1$ or $\alpha < 1$. Hence $\alpha = 1$, and so $A$ and $J_n$ have more than $(n-1)(n-1)!$ corresponding diagonals with equal sums. Therefore $A = J_n$ by (ii). This completes the proof.

For $A \in \Omega_n$, we have seen (Corollary 1.5) that $\max_{\sigma \in S_n} \sum_{i=1}^n a_{i\sigma(i)} \ge 1$ and $\min_{\sigma \in S_n} \sum_{i=1}^n a_{i\sigma(i)}$
$\le 1$, with either (and hence both) equality iff $A = J_n$. Regarding Corollary 1.13, the following question naturally presents itself: "If $A \in \Omega_n$, $A \ne J_n$, what is the number $\bar{\Lambda}_n(A)$ of diagonals $\sigma$ such that $\sum_{i=1}^n a_{i\sigma(i)} \ge 1$, and what is the number $\bar{\delta}_n(A)$ of diagonals $\tau$ such that $\sum_{i=1}^n a_{i\tau(i)} \le 1$?" Let $\bar{\Lambda}_n = \max_{A \in \Omega_n \setminus \{J_n\}} \bar{\Lambda}_n(A)$ and $\bar{\delta}_n = \max_{A \in \Omega_n \setminus \{J_n\}} \bar{\delta}_n(A)$. It can easily be seen that for $n = 2$, $\bar{\Lambda}_2 = \bar{\delta}_2 = 1$. In general, however, the question seems to be quite difficult. When $n = 3$, the following matrices show that $\bar{\Lambda}_3 = \bar{\delta}_3 = 5$:

$$A = \begin{pmatrix} 1/2 & 1/3 & 1/6 \\ 1/6 & 1/2 & 1/3 \\ 1/3 & 1/6 & 1/2 \end{pmatrix}, \qquad B = \begin{pmatrix} 1/2 & 1/4 & 1/4 \\ 1/4 & 1/2 & 1/4 \\ 1/4 & 1/4 & 1/2 \end{pmatrix} .$$

In the next proposition, we show that $\bar{\Lambda}_n = \bar{\delta}_n$ and obtain an upper bound for this value in general.

Proposition 1.16: $\bar{\Lambda}_n = \bar{\delta}_n \le (n-1)(n-1)! + \left[\frac{(n-1)(n-1)!}{n}\right]$, where $[\ \cdot\ ]$ denotes the greatest integer function. This bound is attained for $n = 2$ and $n = 3$.

Proof: For any $A \in \Omega_n$, it is clear that $\frac{nJ_n - A}{n-1} \in \Omega_n$ and that $A \ne J_n$ iff $\frac{nJ_n - A}{n-1} \ne J_n$. Since $\sum_i a_{i\sigma(i)} \ge 1$ iff $\frac{n - \sum_i a_{i\sigma(i)}}{n-1} \le 1$, it follows that $\bar{\Lambda}_n = \bar{\delta}_n$. Now, let $A \in \Omega_n$, $A \ne J_n$, and let $E = \{\sigma : \sum_{i=1}^n a_{i\sigma(i)} = 1\}$. Since $A \ne J_n$, we have, by Theorem 1.3, that $|E| \le (n-1)(n-1)!$. Let $D = D_1 \cup D_2 \cup \cdots \cup D_{(n-1)!}$ be a decomposition of the set $D$ of all diagonals as stated in Theorem 1.11. Let $m$ be the number of $D_i$'s such that $D_i \subseteq E$. Then $m \le \left[\frac{(n-1)(n-1)!}{n}\right]$. For each $D_i \not\subseteq E$ there is at least one diagonal in $D_i$ with sum $< 1$ (since $D_i$ covers $A$ exactly, its $n$ diagonal sums total $n$, so if they are not all equal to one, at least one is less than one). Hence there are at least $(n-1)! - \left[\frac{(n-1)(n-1)!}{n}\right]$ diagonals with sum $< 1$. Therefore,

$$\bar{\Lambda}_n \le n! - \Bigl\{(n-1)! - \Bigl[\tfrac{(n-1)(n-1)!}{n}\Bigr]\Bigr\} = (n-1)(n-1)! + \Bigl[\tfrac{(n-1)(n-1)!}{n}\Bigr] .$$

For $n = 2$ and $n = 3$, we get $\bar{\Lambda}_2 = \bar{\delta}_2 = 1$ and $\bar{\Lambda}_3 = \bar{\delta}_3 = 5$ respectively.

Remark 1.17: We are unable to determine if the bound given in the above proposition is attainable in general. For $n = 4$, $(n-1)(n-1)! + \left[\frac{(n-1)(n-1)!}{n}\right] = 22$. The best we can get for $\bar{\Lambda}_4(A)$ and $\bar{\delta}_4(A)$ so far is 20, as shown by the matrix

$$A = \begin{pmatrix} x & y & x & y \\ y & x & y & x \\ x & y & x & y \\ y & x & y & x \end{pmatrix}$$

where $x + y = 1/2$, $x > 1/4$, $y < 1/4$. Then $A \in \Omega_4$, $A \ne J_4$, and there are 16 diagonals with sum $2x + 2y = 1$, 4 diagonals with sum $4x > 1$, and 4 diagonals with sum $4y < 1$. Hence $\bar{\Lambda}_4(A) = \bar{\delta}_4(A) = 20$.

In what follows, we will derive more consequences of the Decomposition Theorem.

Proposition 1.18: Let $A \in \Omega_n$. If $\Lambda_n(A) = (n-1)(n-1)!$, then the remaining $(n-1)!$ diagonals must have sums strictly less than one, and if $\delta_n(A) = (n-1)(n-1)!$, then the remaining $(n-1)!$ diagonals must have sums strictly greater than one. (For notations, cf. Corollary 1.13.)

Proof: Let $E = \{\sigma : \sum_{i=1}^n a_{i\sigma(i)} > 1\}$. Then $|E| = (n-1)(n-1)!$. Let $D = D_1 \cup D_2 \cup \cdots \cup D_{(n-1)!}$ be a decomposition as stated in Theorem 1.11. If $D_i \subseteq E$ for any $i$, then since $D_i$ covers $A$ exactly, we get a contradiction. Hence for all $i = 1, 2, \ldots, (n-1)!$, $D_i \not\subseteq E$, i.e., $|E \cap D_i| \le n-1$. Furthermore, since the $D_i$'s are disjoint, so are the $E \cap D_i$'s. Hence

$$(n-1)(n-1)! = |E| = |E \cap D| = \Bigl|\bigcup_{i=1}^{(n-1)!} (E \cap D_i)\Bigr| = \sum_{i=1}^{(n-1)!} |E \cap D_i| \le (n-1)(n-1)! .$$

Thus $|E \cap D_i| = n-1$ for all $i = 1, 2, \ldots, (n-1)!$. Therefore, for each $i$, there is exactly one diagonal $\sigma_i \in D_i \setminus E$. Since each $D_i$ covers $A$ exactly, these $\sigma_i$'s must have sum strictly less than one. The other assertion follows by a parallel argument. This completes the proof.

We are now in a position to characterize all $n \times n$ d.s. matrices that have precisely $(n-1)(n-1)!$ diagonals with a common sum $\alpha \ne 1$.

Lemma 1.19: If $A \in \Omega_n$ has precisely $(n-1)(n-1)!$ diagonals with the common sum $\alpha \ne 1$, then the remaining $(n-1)!$ diagonals also have a common sum, which is $\beta = n - (n-1)\alpha \ne \alpha$.

Proof: Let $E = \{\sigma : \sum_{i=1}^n a_{i\sigma(i)} = \alpha\}$. Then $|E| = (n-1)(n-1)!$.
Following the notations and arguments given in Proposition 1.18, we get $|E \cap D_j| = n-1$ for all $j = 1, 2, \ldots, (n-1)!$, and hence for each $j$ there is exactly one $\sigma_j \in D_j \setminus E$. Since $D_j$ covers $A$ exactly, $\sum_{i=1}^n a_{i\sigma_j(i)} = n - (n-1)\alpha$ for all $j = 1, 2, \ldots, (n-1)!$. Since $\alpha \ne 1$, it is clear that $\beta \ne \alpha$.

Theorem 1.20: If $A \in \Omega_n$ has precisely $(n-1)(n-1)!$ diagonals with the common sum $\alpha \ne 1$, then there exist permutation matrices $P$ and $Q$ such that for some suitable choices of $x$, $y$ and $z$, $PAQ = A(x, y, z)$. (cf. Remark 1.7.(i).)

Proof: Since, by Lemma 1.19, $A$ has only 2 distinct values $\alpha$ and $\beta$ for diagonal sums, a theorem by J. Kapoor ([7], Theorem 2.15) implies that there exist permutation matrices $P$ and $Q$ and a positive integer $k$, $1 < k \le n$, such that $PAQ$ or $(PAQ)^t$ has the following form: the entry in row $i$ and column $j+1$ is $\lambda_j + \delta_i$ for $j = 1, 2, \ldots, n-1$, while the entry in row $i$ and column 1 is $(\beta-\alpha) + \lambda_n + \delta_i$ for $i = 1, \ldots, k-1$ and $\lambda_n + \delta_i$ for $i = k, \ldots, n$.

Since $A \in \Omega_n$, summing column $j+1$ gives $n\lambda_j + \sum_{i=1}^n \delta_i = 1$ for all $j = 1, 2, \ldots, n-1$; hence $\lambda_1 = \cdots = \lambda_{n-1} = \lambda$ for some $\lambda$. Similarly, summing the first $k-1$ rows gives

$$(\beta-\alpha) + \lambda_n + \sum_{i=1}^{n-1} \lambda_i + n\delta_j = 1 \quad \text{for all } j = 1, 2, \ldots, k-1, \tag{1}$$

hence

$$\delta_1 = \delta_2 = \cdots = \delta_{k-1} = \delta \quad \text{for some } \delta. \tag{2}$$

Also, summing the last $n-k+1$ rows gives

$$\lambda_n + \sum_{i=1}^{n-1} \lambda_i + n\delta_j = 1 \quad \text{for all } j = k, k+1, \ldots, n. \tag{3}$$

From (1), (2) and (3), $n\delta_j - n\delta - (\beta-\alpha) = 0$, i.e., $\delta_j = \delta + \frac{1}{n}(\beta-\alpha) = \delta'$ for some $\delta'$, for all $j = k, k+1, \ldots, n$. For convenience, put $x = (\beta-\alpha) + \lambda_n + \delta$, $y = \lambda + \delta$, $y' = \lambda_n + \delta'$, and $z = \lambda + \delta'$. Then $PAQ$ or $(PAQ)^t$ becomes the matrix whose first $k-1$ rows are $(x, y, \ldots, y)$ and whose last $n-k+1$ rows are $(y', z, \ldots, z)$. Clearly, this matrix has two distinct diagonal sum values: there are $(k-1)(n-1)!$ diagonals passing through some $x$ with sum $x + (k-2)y + (n-k+1)z$, and $(n-k+1)(n-1)!$ diagonals passing through some $y'$ with sum $y' + (n-k)z + (k-1)y$. Since, by assumption, there are precisely $(n-1)(n-1)!$ diagonals with the same sum, we have either $k-1 = n-1$ or $n-k+1 = n-1$. Hence $k = n$ or $k = 2$.

For $k = n$, $PAQ$ or $(PAQ)^t$ has first $n-1$ rows $(x, y, \ldots, y)$ and last row $(y', z, \ldots, z)$. Since $A \in \Omega_n$, comparing the first row sum $x + (n-1)y = 1$ with the sum $(n-1)y + z = 1$ of any of columns $2, \ldots, n$ gives $x = z$. Hence, after permuting the rows so that the last row becomes the first, the matrix has first row $(y', x, \ldots, x)$ and remaining rows $(x, y, \ldots, y)$, that is, $A(y', x, y)$. Since this matrix is symmetric, we can conclude that for some permutation matrices $P'$ and $Q'$, $P'AQ' = A(y', x, y)$.

For $k = 2$, $PAQ$ or $(PAQ)^t$ has first row $(x, y, \ldots, y)$ and remaining rows $(y', z, \ldots, z)$. Since $A \in \Omega_n$, comparing the first row sum $x + (n-1)y = 1$ with the first column sum $x + (n-1)y' = 1$ gives $y = y'$. Therefore, $PAQ = A(x, y, z)$, again using the symmetry of $A(x, y, z)$ to dispose of the transpose.

As an application of the above theorem, we have:

Corollary 1.21: If $A \in \Omega_n$ has precisely $(n-1)(n-1)!$ diagonals with the common sum $\alpha \ne 1$, then

$$1 - \frac{1}{n-1} \le \alpha \le 1 + \frac{1}{(n-1)^2} .$$

Conversely, given any $\alpha \ne 1$ with $1 - \frac{1}{n-1} \le \alpha \le 1 + \frac{1}{(n-1)^2}$, there exists a matrix $A \in \Omega_n$ that has precisely $(n-1)(n-1)!$ diagonals with the common sum $\alpha$. This matrix is unique up to permutation of rows and columns.

Proof: If $A \in \Omega_n$ has precisely $(n-1)(n-1)!$ diagonals with the common sum $\alpha \ne 1$, then by Theorem 1.20 there exist permutation matrices $P$ and $Q$ such that $PAQ = A(x, y, z)$ for some $x$, $y$, and $z$. Since $x + (n-1)y = y + (n-1)z = 1$, we have $y = \frac{1-x}{n-1}$ and $z = \frac{1-y}{n-1} = \frac{n+x-2}{(n-1)^2}$. Hence

$$\alpha = 2y + (n-2)z = \frac{1}{(n-1)^2}\{2(1-x)(n-1) + (n-2)(n+x-2)\} = \frac{n^2 - 2n + 2 - nx}{(n-1)^2} = 1 + \frac{1 - nx}{(n-1)^2} .$$

Consequently, $\alpha$ attains its maximum $1 + \frac{1}{(n-1)^2}$ and its minimum $1 - \frac{1}{n-1}$ when $x = 0$ and $x = 1$ respectively. Conversely, let $\alpha \ne 1$ be given with $1 - \frac{1}{n-1} \le \alpha \le 1 + \frac{1}{(n-1)^2}$. A matrix $A \in \Omega_n$ that has precisely $(n-1)(n-1)!$ diagonals with a common sum $\alpha$ must be, in view of Theorem 1.20, equivalent to $A(x, y, z)$ for some $x$, $y$, and $z$ via

$$x + (n-1)y = 1, \tag{4}$$
$$y + (n-1)z = 1, \tag{5}$$
$$2y + (n-2)z = \alpha. \tag{6}$$

From (5) and (6), we get $z = \frac{2-\alpha}{n} > 0$ since $\alpha < 2$. Substituting this into (5), we get $y = \frac{1}{n}\{(2-n) + (n-1)\alpha\} \ge 0$ since $(n-1)\alpha \ge n-2$.
Substituting this into (4), we get $x = \frac{1}{n}\{n^2 - 2n + 2 - (n-1)^2\alpha\} \ge 0$ since $(n-1)^2\alpha \le (n-1)^2 + 1 = n^2 - 2n + 2$. Hence the matrix $A(x, y, z)$ with $x = \frac{1}{n}\{n^2 - 2n + 2 - (n-1)^2\alpha\}$, $y = \frac{1}{n}\{(2-n) + (n-1)\alpha\}$, and $z = \frac{2-\alpha}{n}$ is d.s. and has precisely $(n-1)(n-1)!$ diagonals with the common sum $\alpha$. This proves both the existence and the uniqueness up to permutation of rows and columns.

Remark 1.22: (i) Corollary 1.21 above shows that the bound given in Theorem 1.3, though best possible in general, is not uniform in the sense that if the value $\alpha \ne 1$ of the common diagonal sum does not belong to the interval $[\frac{n-2}{n-1}, 1 + \frac{1}{(n-1)^2}]$, then the bound $(n-1)(n-1)!$ cannot be attained. Consequently, a better (smaller) bound depending on the value of $\alpha$ should be expected. To get an explicit formula for such a bound, however, seems to be difficult. For instance, it is intuitively clear (though a simple and rigorous proof using the König–Frobenius theorem [6, 8] can be given) that for $A \in \Omega_n$, the maximum number of zero diagonals is $D_n$, the number of derangements of $n$ objects (cf. e.g. [20], p. 22). An explicit formula, if it exists, must then give the value $D_n$ for $\alpha = 0$.

(ii) Theorem 1.20 and Corollary 1.21 above do not hold for $\alpha = 1$. In fact, the example given in Remark 1.7.(i) is not valid for $\alpha = 1$, since $2y + (n-2)z = 1$ together with $y + (n-1)z = 1 = x + (n-1)y$ implies $x = y = z$, i.e., $A = J_n$. For $A = J_n$, clearly all diagonal sums are equal to one. Hence, excluding $J_n$, we can ask the question: "What is the maximum number of diagonals with sum one?" The bound $(n-1)(n-1)!$ still holds in general, but we are unable to construct examples to show that this bound is attained for all $n$ (cf. Proposition 1.8). For $n = 3$, the matrix $B$ given in Remark 1.7.(iii) shows that this bound is attained. For $n = 4$, the best example we can get is the one given in Remark 1.17, which gives 16, rather than 18, diagonals with sum one.
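Both the construction of Corollary 1.21 and the diagonal counts of the $n = 4$ example of Remark 1.17 can be verified by exhaustive enumeration over all $n!$ diagonals. The following is a minimal sketch (exact rational arithmetic; the sample value of $\alpha$ and all variable names are our own choices for illustration):

```python
from fractions import Fraction as F
from itertools import permutations
from math import factorial

def diag_sums(A):
    n = len(A)
    return [sum(A[i][p[i]] for i in range(n)) for p in permutations(range(n))]

# Corollary 1.21: from a target common sum alpha != 1 inside the admissible
# interval, recover x, y, z and check that A(x, y, z) has exactly
# (n-1)(n-1)! diagonals with sum alpha.
n, alpha = 4, F(21, 20)          # sample alpha in [(n-2)/(n-1), 1 + 1/(n-1)^2]
z = (2 - alpha) / n
y = ((2 - n) + (n - 1) * alpha) / n
x = 1 - (n - 1) * y
A = [[x] + [y] * (n - 1)] + [[y] + [z] * (n - 1) for _ in range(n - 1)]
assert all(sum(row) == 1 for row in A)                      # row sums
assert all(sum(A[i][j] for i in range(n)) == 1 for j in range(n))  # column sums
print(diag_sums(A).count(alpha) == (n - 1) * factorial(n - 1))

# Remark 1.17: the 4x4 matrix with x, y alternating, x + y = 1/2, x > 1/4.
u, v = F(3, 8), F(1, 8)
B = [[u if (i + j) % 2 == 0 else v for j in range(4)] for i in range(4)]
s = diag_sums(B)
print(s.count(1), s.count(4 * u), s.count(4 * v))
```

The first check prints True, and the second prints the counts 16, 4, 4 claimed in Remark 1.17, so this matrix indeed has $\bar{\Lambda}_4(B) = \bar{\delta}_4(B) = 20$ and 16 diagonals with sum one.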
We close this chapter by giving a proposition that answers partially the problem mentioned in Remark 1.22.(i) above.

Proposition 1.23: Let A ∈ Ω_n and let a be a given real number such that 1 < a ≤ n. Then at most (m-1)(n-1)! diagonals of A can have diagonal sums ≥ a, where m = [n/a] + 1.

Proof: Suppose more than (m-1)(n-1)! diagonals of A have sums ≥ a. Then by Theorem 1.12, we can select from them m mutually disjoint ones with total sum ≥ ma = a([n/a] + 1) > a(n/a) = n, a contradiction.

CHAPTER II

MAXIMUM AND MINIMUM DIAGONAL SUM OF d.s. MATRICES

The purpose of this chapter is to carry out a combinatorial investigation of the diagonals with maximum diagonal sum h and the diagonals with minimum diagonal sum k. First, we obtain the best possible upper and lower bounds for entries on these diagonals. Secondly, we obtain the best possible upper and lower bounds for various kinds of functions of h and k. The key will be a covering theorem that enables us to analyze the cases when these bounds are attained. When studying the lower bound for h+k, an interesting combinatorial question presents itself; concerning this question, a conjecture and partial solutions are given. Finally, we obtain the bounds for h and k when the d.s. matrices under consideration have the properties discussed in Chapter I. Throughout, we shall use A ≥ 0 to mean that all the entries of the matrix A are non-negative, and we shall assume that n ≥ 2, since the case n = 1 is always a triviality.

Definition 2.1: Let A ∈ Ω_n. A diagonal σ of A is called a maximum diagonal if its sum is a maximum among all diagonal sums of A, and a minimum diagonal if its sum is a minimum among all diagonal sums of A.

The first result gives the upper bound for entries on a minimum diagonal.

Proposition 2.2: Let A ∈ Ω_n, n ≥ 3, and let σ be a minimum diagonal. Then max_i a_{iσ(i)} ≤ 2/(n+1), with equality iff a_{iσ(i)} = 2/(n+1) for some i and a_{jσ(j)} = 0 for all j ≠ i.

Proof: Since permuting the rows and columns of A does not affect the set of diagonal sums, we may assume, without loss of generality, that σ is the main diagonal and that max_i a_{ii} = a_{11}. By the minimality of the main diagonal, a_{11} + a_{ii} ≤ a_{1i} + a_{i1} for all i = 2, ..., n. Summing over i, we get (n+1)a_{11} + Σ_{i=2}^n a_{ii} ≤ Σ_{i=2}^n (a_{1i} + a_{i1}) = 2. Hence a_{11} ≤ 2/(n+1), with equality iff a_{11} = 2/(n+1), a_{ii} = 0 and a_{1i} + a_{i1} = 2/(n+1) for all i = 2, 3, ..., n.

Remark 2.3: (i) If n = 2, the upper bound is 1/2 and is attained iff A = J_2. (ii) For all n ≥ 3, the upper bound is the best possible. Consider the matrix A where a_{11} = 2/(n+1), a_{1i} = a_{i1} = 1/(n+1) and a_{ii} = 0 for all i = 2, 3, ..., n, and a_{ij} = n/((n+1)(n-2)) otherwise. Clearly, the main diagonal is a minimum diagonal, and its maximum entry is 2/(n+1). Incidentally, this example also shows that in general the maximum entry of a d.s. matrix can lie on a minimum diagonal.

If we were to apply the argument used in the proof of Proposition 2.2 to a maximum diagonal σ, we should get the lower bound max_i a_{iσ(i)} ≥ 1/n, which is trivial since Σ_i a_{iσ(i)} ≥ 1 by Corollary 1.5. However, the upper bound for max_i a_{iσ(i)} and the lower bound for min_i a_{iσ(i)} can be obtained in terms of the diagonal sum h, and the bounds are non-trivial when h ≤ 2. Since h ≥ 1 by Corollary 1.5, we have n·max_i a_{iσ(i)} ≥ h ≥ 2-h and n·min_i a_{iσ(i)} - 1 ≤ h-1 ≤ (n-1)(h-1), i.e., max_i a_{iσ(i)} ≥ (2-h)/n and min_i a_{iσ(i)} ≤ ((n-1)(h-1)+1)/n. However, the following stronger results hold.

Proposition 2.4: Let A ∈ Ω_n and let σ be a maximum diagonal with sum h. Then (i) min_i a_{iσ(i)} ≥ (2-h)/n, and (ii) max_i a_{iσ(i)} ≤ ((n-1)(h-1)+1)/n.
Proof: As before, assume that σ is the main diagonal.

(i) Suppose a_{11} = min_i a_{ii}. By the maximality of the main diagonal, a_{11} + a_{ii} ≥ a_{1i} + a_{i1} for all i = 1, 2, ..., n. Summing over i, we get n·a_{11} + h ≥ Σ_{i=1}^n (a_{1i} + a_{i1}) = 2, or a_{11} ≥ (2-h)/n.

(ii) Suppose a_{11} = max_i a_{ii}. By the maximality of the main diagonal, a_{ij} + a_{ji} ≤ a_{ii} + a_{jj} for each pair i ≠ j. For each fixed i ≠ 1, summing over all j ≠ 1, i, we get Σ_{j≠1,i} (a_{ij} + a_{ji}) ≤ (n-2)a_{ii} + Σ_{j≠1,i} a_{jj}, or 2 - a_{1i} - a_{i1} - 2a_{ii} ≤ (n-2)a_{ii} + Σ_{j≠1,i} a_{jj}, or 2 - a_{1i} - a_{i1} ≤ (n-1)a_{ii} + Σ_{j≠1} a_{jj}. Summing over all i ≠ 1, we get 2(n-1) - Σ_{i=2}^n (a_{1i} + a_{i1}) ≤ 2(n-1)(h - a_{11}), or 2(n-1) - (2 - 2a_{11}) ≤ 2(n-1)(h - a_{11}). Simplifying, we get a_{11} ≤ (1/n){(n-1)(h-1) + 1}. This completes the proof.

If we consider a minimum diagonal σ instead, the same argument as above yields:

Proposition 2.5: Let A ∈ Ω_n, and let σ be a minimum diagonal with sum k. Then (i) min_i a_{iσ(i)} ≥ (1/n){1 - (n-1)(1-k)}, and (ii) max_i a_{iσ(i)} ≤ (2-k)/n.

Since 1 and 0 are the upper and lower bound, respectively, for any entry of a d.s. matrix, and since (2-h)/n ≥ 0 iff h ≤ 2 iff ((n-1)(h-1)+1)/n ≤ 1, Proposition 2.4 can be restated as follows:

Proposition 2.6: Let A ∈ Ω_n, and let σ be a maximum diagonal with sum h. Then
(i) min_i a_{iσ(i)} ≥ (2-h)/n if h ≤ 2, and ≥ 0 if h > 2;
(ii) max_i a_{iσ(i)} ≤ ((n-1)(h-1)+1)/n if h ≤ 2, and ≤ 1 if h > 2.

Similarly, since (1/n){1 - (n-1)(1-k)} ≥ 0 iff k ≥ (n-2)/(n-1), and (2-k)/n ≤ 2/(n+1) iff k ≥ 2/(n+1), Propositions 2.2 and 2.5 can be put together and restated in:

Proposition 2.7: Let A ∈ Ω_n, and let σ be a minimum diagonal with sum k. Then
(i) min_i a_{iσ(i)} ≥ (1/n){1 - (n-1)(1-k)} if k ≥ (n-2)/(n-1), and ≥ 0 if k < (n-2)/(n-1);
(ii) max_i a_{iσ(i)} ≤ (2-k)/n if k ≥ 2/(n+1), and ≤ 2/(n+1) if k < 2/(n+1).

Clearly, (ii) above is a refinement of Proposition 2.2. In the following examples, we discuss the case of equality for the bounds given in Propositions 2.6 and 2.7 for arbitrarily preassigned values of h and k.

Example 2.8: Let h be a given number such that 1 ≤ h ≤ 2. Consider the d.s. matrix A = A(x, y, z) with x = ((n-1)(h-1)+1)/n, y = (2-h)/n, and z = (n+h-2)/(n(n-1)); that is, a_{11} = x, a_{1i} = a_{i1} = y for i = 2, ..., n, and a_{ij} = z otherwise. Then Σ_i a_{ii} = x + (n-1)z = h, and any diagonal missing a_{11} has sum 2y + (n-2)z = (n-h)/(n-1) ≤ 1 ≤ h. Hence the main diagonal is a maximum diagonal with sum h. Finally, ((n-1)(h-1)+1)/n - (n+h-2)/(n(n-1)) = (h-1){(n-1)² - 1}/(n(n-1)) ≥ 0 implies that max_i a_{ii} = a_{11} = ((n-1)(h-1)+1)/n. If h > 2, then the permutation matrices are examples for which max_i a_{iσ(i)} = 1. Hence the upper bound in Proposition 2.6 is always attainable.

Remark 2.9: If we consider the lower bound in Proposition 2.7, we have two cases: (i) 0 ≤ k < (n-2)/(n-1). In this case, permutation matrices provide examples for which min_i a_{iσ(i)} = 0. (ii) (n-2)/(n-1) ≤ k ≤ 1. In this case, Example 2.8 above with h replaced by k suffices, since a change from h ≥ 1 to k ≤ 1 merely reverses all the inequalities therein, and consequently the main diagonal becomes a minimum diagonal and min_i a_{ii} = a_{11} = (1/n){1 - (n-1)(1-k)}. Hence the lower bound in Proposition 2.7 is always attainable.

Example 2.10: Let n ≥ 3 and let k be a given number such that 2/(n+1) ≤ k ≤ 1. Consider the d.s. matrix A where a_{11} = (2-k)/n, a_{1i} = a_{i1} = (n+k-2)/(n(n-1)) and a_{ii} = ((n+1)k-2)/(n(n-1)) for all i = 2, 3, ..., n, and a_{ij} = ((n²-2n+4) - (n+2)k)/(n(n-1)(n-2)) otherwise. Since (n+1)k - 2 ≥ 0, and since n²-3n+2 = (n-1)(n-2) ≥ 0 implies that n²-2n+4 ≥ n+2 ≥ (n+2)k, we have A ≥ 0. Straightforward computations show that indeed A ∈ Ω_n. Furthermore, Σ_i a_{ii} = (2-k)/n + ((n+1)k-2)/n = k, a_{11} + a_{ii} = 2(n+k-2)/(n(n-1)) = a_{1i} + a_{i1}, and ((n²-2n+4) - (n+2)k)/(n(n-1)(n-2)) - ((n+1)k-2)/(n(n-1)) = n(1-k)/((n-1)(n-2)) ≥ 0. Hence the main diagonal is a minimum diagonal. Finally, since (2-k)/n - ((n+1)k-2)/(n(n-1)) = 2(1-k)/(n-1) ≥ 0, we have max_i a_{ii} = a_{11} = (2-k)/n.

The case for n = 2 is easy, since a_{11} = (2-k)/2 implies that a_{22} = (2-k)/2 and a_{12} = a_{21} = k/2. If (2-k)/2 ≤ k/2, then k ≥ 1. Hence k = 1 and A = J_2. Therefore, the equality holds iff k = 1. Hence the upper bound in Proposition 2.7 is always attainable for all n, provided that k ≥ 2/(n+1). If k < 2/(n+1), however, the fact that max_i a_{iσ(i)} ≤ k < 2/(n+1) clearly shows that the upper bound is not attainable.

The equality case for the lower bound in Proposition 2.6 is somewhat complicated. Before making a complete analysis, we need a lemma which gives a non-trivial lower bound for any entry on a diagonal with "comparatively large" diagonal sum a.

Lemma 2.11: Let A ∈ Ω_n and let σ be a diagonal with sum a ≥ n-2. Then for all i = 1, 2, ..., n, a_{iσ(i)} ≥ (a-n+2)/2, with equality iff there exist permutation matrices P and Q such that PAQ takes the form:

   [ a_1  a_2    a_3    ...  a_n    ]
   [ a_2  1-a_2  0      ...  0      ]
   [ a_3  0      1-a_3  ...  0      ]
   [ .    .      .           .      ]
   [ a_n  0      0      ...  1-a_n  ]

where Σ_{i=1}^n a_i = 1.
Proof: Permuting the rows and columns of A if necessary, we may assume that σ is the main diagonal and that a_{11} is the entry being considered. Let R_i and C_i denote the ith row sum and column sum respectively. Then 2n - 2 = Σ_{i=2}^n (R_i + C_i) = 2(Σ_{i=2}^n a_{ii}) + Σ_{i=2}^n (a_{1i} + a_{i1}) + 2β = 2(a - a_{11}) + (2 - 2a_{11}) + 2β, where β = Σ_{i≠j, i,j≥2} a_{ij} ≥ 0. Therefore 2a_{11} = a - n + 2 + β, and thus a_{11} ≥ (a-n+2)/2, with equality iff β = 0; i.e., iff a_{ij} = 0 for all i ≠ j, i, j = 2, 3, ..., n. In that case, since A is d.s., a_{1i} = a_{i1} for all i = 2, 3, ..., n. Therefore, if we write a_i for a_{1i}, A takes the described form, where Σ_{i=1}^n a_i = 1. This completes the proof.

Now we are in a position to analyze the equality case for the lower bound in Proposition 2.6. There are two cases to be considered:

Case I: 1 ≤ h ≤ 2. In this case, Example 2.10 with k replaced by h suffices, provided that n ≥ 4, since then (n²-2n+4) - (n+2)h ≥ (n²-2n+4) - 2(n+2) = n²-4n ≥ 0 and hence A ≥ 0. Furthermore, a change from k ≤ 1 to h ≥ 1 merely reverses all the inequalities therein. Consequently, the main diagonal becomes a maximum diagonal and a_{11} = (2-h)/n becomes a minimum entry thereon.

The case for n = 2 is again easy, since a_{11} = (2-h)/2 implies that h = 1, and hence equality is attained iff h = 1, i.e., iff A = J_2.

The case for n = 3 calls for separate consideration. Since (n²-2n+4) - (n+2)h = 7 - 5h for n = 3, Example 2.10 (with k replaced by h) still stands for h ≤ 7/5. If h > 7/5, however, we claim that the equality cannot hold. Assume the contrary, and let A ∈ Ω_3 be such that the main diagonal is a maximum diagonal with sum h, 7/5 < h ≤ 2, and a_{11} = min_i a_{ii} = (2-h)/3. Examining the proof of Proposition 2.4.(i), we see that a_{11} + a_{ii} = a_{1i} + a_{i1} for i = 2, 3. Writing out these conditions together with the row and column sums, A is determined by two parameters x and y; the condition Σ_i a_{ii} = h gives 3y + 1 = h, or y = (h-1)/3, and A takes the form:

   A = [ (2-h)/3       x              (1+h)/3 - x ]
       [ (1+h)/3 - x   (2h-1)/3       1 + x - h   ]
       [ x             (4-2h)/3 - x   (2h-1)/3    ]

Since 1 + x - h ≥ 0, we get x ≥ h - 1 > 2/5. On the other hand, (4-2h)/3 - x ≥ 0 implies x ≤ (4-2h)/3 < (4 - 14/5)/3 = 2/5, a contradiction.

In this case, however, Lemma 2.11 gives the lower bound a_{11} ≥ (h-1)/2. Since h > 7/5 implies (h-1)/2 > (2-h)/3, this bound is a better one. (Note that since there is a "gap" between (2-h)/3 and (h-1)/2, the lower bound (2-h)/3 is not even approachable.) Furthermore, this new lower bound is attained by the d.s. matrix

   A = [ (h-1)/2   (3-h)/4   (3-h)/4 ]
       [ (3-h)/4   (h+1)/4   0       ]
       [ (3-h)/4   0         (h+1)/4 ]

Then Σ_i a_{ii} = h, (h-1)/2 + (h+1)/4 = (3h-1)/4 > 2(3-h)/4 (since h > 7/5), and (h-1)/2 ≤ (h+1)/4 (since h ≤ 2). Hence the main diagonal is a maximum diagonal on which the minimum entry is (h-1)/2.

Case II: h > 2. In this case, we claim that the lower bound 0 in Proposition 2.6.(i) is attainable iff h ≤ n-2. (Hence, in particular, it is not attainable for n = 1, 2, 3, 4.) Suppose h > n-2. Then by Lemma 2.11, a_{iσ(i)} ≥ (h-n+2)/2 > 0 for all entries on this maximum diagonal, and hence the lower bound 0 is not attainable. For h ≤ n-2, consider the d.s. matrix A where a_{11} = 0, a_{1i} = a_{i1} = 1/(n-1) and a_{ii} = h/(n-1) for all i = 2, 3, ..., n, and a_{ij} = (n-h-2)/((n-1)(n-2)) otherwise. Since n-h-2 ≥ 0, we have A ≥ 0, and straightforward computations show that A ∈ Ω_n. Since h/(n-1) ≥ 1/(n-1) and h/(n-1) - (n-h-2)/((n-1)(n-2)) = (h(n-1) - (n-2))/((n-1)(n-2)) ≥ 0, the main diagonal is a maximum diagonal with minimum entry 0.
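The Case II construction is easy to verify by brute force for a small instance. The sketch below (the helper name case_II_matrix is ours) builds the matrix for n = 6, h = 3 in exact arithmetic and confirms that the maximum diagonal has sum h and carries a zero entry.

```python
from fractions import Fraction as F
from itertools import permutations

def case_II_matrix(n, h):
    """The Case II example: a_11 = 0, a_1i = a_i1 = 1/(n-1),
    a_ii = h/(n-1) for i >= 2, remaining entries (n-h-2)/((n-1)(n-2)).
    Requires 2 < h <= n-2."""
    off = F(n - h - 2, (n - 1) * (n - 2))
    A = [[off] * n for _ in range(n)]
    A[0][0] = F(0)
    for i in range(1, n):
        A[0][i] = A[i][0] = F(1, n - 1)
        A[i][i] = F(h, n - 1)
    return A

n, h = 6, 3
A = case_II_matrix(n, h)
assert all(sum(r) == 1 for r in A)                                 # row sums
assert all(sum(A[i][j] for i in range(n)) == 1 for j in range(n))  # column sums
best = max(permutations(range(n)), key=lambda s: sum(A[i][s[i]] for i in range(n)))
assert sum(A[i][best[i]] for i in range(n)) == h    # maximum diagonal sum is h
assert min(A[i][best[i]] for i in range(n)) == 0    # and it carries a zero entry
```

Here the maximum diagonal is the main diagonal itself, so the brute-force search recovers the identity permutation.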
Now we can summarize and state the equality case for the lower bound given in Proposition 2.6 in the following:

Proposition 2.12: Let h be a given number such that 1 ≤ h ≤ n. (I) If 1 ≤ h ≤ 2, then there exists a d.s. matrix with maximum diagonal sum h and such that the minimum entry on the diagonal is (2-h)/n iff (i) n = 2 and h = 1, or (ii) n = 3 and h ≤ 7/5, or (iii) n ≥ 4. (II) If 2 < h ≤ n, then there exists a d.s. matrix with maximum diagonal sum h and such that the minimum entry on the diagonal is 0 iff h ≤ n-2.

In what follows, we shall obtain the best possible upper and lower bounds for various kinds of functions of h and k and discuss the cases for equality. When the bounds are attained, examples will be constructed. The key will be Theorem 2.13, referred to as the "Covering Theorem", the proof of which depends on a well-known combinatorial theorem of P. Hall on SDR's (systems of distinct representatives). A conjecture is given.

Theorem 2.13 (The Covering Theorem): Let A be a given n×n matrix, and let σ_1, σ_2, ..., σ_p be p given mutually disjoint diagonals of A, 1 ≤ p ≤ n. Then one can select n-p mutually disjoint diagonals σ_{p+1}, σ_{p+2}, ..., σ_n such that {σ_i : i = 1, 2, ..., n} cover A exactly (Definition 1.10).

Proof: If p = n, the given diagonals already cover A exactly and there is nothing to prove. Hence assume that 1 ≤ p ≤ n-1. Since each diagonal of A corresponds to a permutation of the set S = {1, 2, ..., n}, the assertion is equivalent to saying that given p mutually disjoint permutations σ_1, σ_2, ..., σ_p of S, 1 ≤ p ≤ n-1, one can always exhibit another permutation σ of S such that σ ∩ σ_i = ∅ for all i = 1, 2, ..., p. Let S_1 = S \ ∪_{i=1}^p {σ_i(1)}, S_2 = S \ ∪_{i=1}^p {σ_i(2)}, ..., S_n = S \ ∪_{i=1}^p {σ_i(n)}. Let m be an integer such that 1 ≤ m ≤ n. We claim that |S_{ω_1} ∪ S_{ω_2} ∪ ... ∪ S_{ω_m}| ≥ m for all sequences {ω} such that 1 ≤ ω_1 < ω_2 < ... < ω_m ≤ n. Since σ_i ∩ σ_j = ∅ for all i ≠ j, i, j = 1, 2, ..., p, we have |S_j| = n-p for all j = 1, 2, ..., n, and hence (I) the number of elements, counting repetitions, in the sets S_{ω_1}, S_{ω_2}, ..., S_{ω_m} is m(n-p). On the other hand, each index from S appears exactly p times in ∪_{i=1}^p ∪_{j=1}^n {σ_i(j)}, hence appears exactly n-p times in the sets S_1, ..., S_n, and hence (II) each index from S appears at most n-p times in S_{ω_1}, ..., S_{ω_m}. The two statements (I) and (II) imply that |S_{ω_1} ∪ S_{ω_2} ∪ ... ∪ S_{ω_m}| ≥ m. By a well-known theorem of P. Hall on SDR's [20, p. 48], there exists an SDR for S_1, S_2, ..., S_n; i.e., there exist distinct d_i ∈ S_i, i = 1, 2, ..., n, that form an SDR. Define the permutation σ on S by σ(i) = d_i for all i = 1, 2, ..., n. It is clear from the construction of the S_i that σ ∩ σ_j = ∅ for all j = 1, 2, ..., p. This completes the proof.

As an immediate consequence, we get:

Corollary 2.14: Let A ∈ Ω_n. Then
(i) h + (n-1)k ≤ n, with equality iff for any set of n diagonals σ_1, σ_2, ..., σ_n that cover A exactly, where σ_1 is a maximum diagonal, we have Σ_{i=1}^n a_{iσ_j(i)} = k for all j = 2, 3, ..., n;
(ii) (n-1)h + k ≥ n, with equality iff for any set of n diagonals τ_1, τ_2, ..., τ_n that cover A exactly, where τ_1 is a minimum diagonal, we have Σ_{i=1}^n a_{iτ_j(i)} = h for all j = 2, 3, ..., n.

Proof: (i) Let σ_1 be any maximum diagonal. By the Covering Theorem, let σ_2, σ_3, ..., σ_n be n-1 diagonals such that {σ_i : i = 1, 2, ..., n} cover A exactly. Then h + (n-1)k ≤ h + Σ_{j=2}^n Σ_{i=1}^n a_{iσ_j(i)} = n, with equality iff Σ_{i=1}^n a_{iσ_j(i)} = k for j = 2, 3, ..., n. The proof of (ii) is similar.

Remark 2.15: (i) The equalities in the above corollary are always attainable. For example, if A is a permutation matrix or A = J_n, then h + (n-1)k = n; and if A = J_n or A = (1/(n-1))(nJ_n - I), then (n-1)h + k = n. (ii) The above corollary is, in fact, true for any n×n matrix B with Σ_{i,j=1}^n b_{ij} ≤ n, e.g., substochastic matrices.

Proposition 2.16: Let A ∈ Ω_n. Then h + k ≤ n, with equality iff n = 2 or A is a permutation matrix.

Proof: For n = 2, h + k = 2 always holds. We assume that n ≥ 3. By Corollary 2.14, h + k ≤ h + (n-1)k ≤ n. If h + k = n, then h + k = h + (n-1)k = n. Hence k = (n-1)k and so k = 0. Therefore h = n and A is a permutation matrix. The converse is obvious.

Proposition 2.17: Let A ∈ Ω_n. Then h + k ≥ n/(n-1), with equality iff n = 2 or there exist permutation matrices P and Q such that PAQ = (1/(n-1))(nJ_n - I), the matrix with 0 in each diagonal position and 1/(n-1) elsewhere.

Proof: For n = 2, h + k = 2 always holds. We assume that n ≥ 3. By Corollary 2.14, (n-1)(h+k) ≥ (n-1)h + k ≥ n. Hence h + k ≥ n/(n-1). If h + k = n/(n-1), then (n-1)k = k, from which k = 0. Hence h = n/(n-1). Furthermore, let τ be any minimum (zero) diagonal and σ_2 be any diagonal disjoint from τ. Then the Covering Theorem implies the existence of n-2 diagonals σ_3, σ_4, ..., σ_n such that {τ, σ_i : i = 2, 3, ..., n} cover A exactly. Since (n-1)h + k = n, we get from Corollary 2.14.(ii) that Σ_{i=1}^n a_{iσ_j(i)} = h = n/(n-1) for all j = 2, 3, ..., n. In other words, if h + k = n/(n-1), then there is a zero diagonal τ such that any diagonal disjoint from τ is a maximum diagonal with sum h = n/(n-1). We claim that this implies the existence of permutation matrices P and Q such that PAQ = (1/(n-1))(nJ_n - I); i.e., A has a zero diagonal τ and all entries off τ are equal to 1/(n-1). The proof of this, which turns out to be an interesting combinatorial argument, must be split into several steps and will follow from the subsidiary results contained in Lemma 2.18, Theorem 2.19, and Theorem 2.20.

Lemma 2.18: If an n×n matrix A has the property that there is a certain diagonal τ such that every diagonal disjoint from τ has a constant sum, then every 2×2 submatrix that does not contain any entry from τ must have both diagonal sums equal.

Proof: Without loss of generality, we may assume that τ is the main diagonal.
For n = 2 or n = 3, there is no 2×2 submatrix that does not contain any entry from the main diagonal. We assume that n ≥ 4. Consider any 2×2 submatrix A[i, i' | j, j'] (rows i, i' and columns j, j') that does not contain any entry from the main diagonal. Interchanging the 1st row with the ith row, the 2nd row with the i'th row, the 1st column with the jth column, and the 2nd column with the j'th column, we can bring A[i, i' | j, j'] to the upper left corner. On the 1st and 2nd rows, there is an entry from the original main diagonal, at the (1, k) and (2, ℓ) positions, say, k ≥ 3, ℓ ≥ 3. Interchanging the 3rd column with the kth column and the 4th column with the ℓth column, we can bring a_{1k} and a_{2ℓ} into the submatrix A[1,2 | 3,4]. Similarly, we can bring the two main-diagonal entries lying in the 1st and 2nd columns into the submatrix A[3,4 | 1,2]. Consequently, the matrix takes the form in which the original 2×2 submatrix occupies the upper left corner, entries from the original main diagonal of A occupy the (1,3), (2,4), (3,1), and (4,2) positions, and the remaining main-diagonal entries lie in the (n-4)×(n-4) submatrix B = A(1,2,3,4 | 1,2,3,4). Now, in B, we can choose any diagonal σ such that σ and τ (to be precise, τ restricted to B) are disjoint. Consider the diagonals σ ∪ {a_{11}, a_{22}, a_{34}, a_{43}} and σ ∪ {a_{12}, a_{21}, a_{34}, a_{43}} (in the new labelling); both are disjoint from τ. By assumption, they have the same sum, and hence a_{11} + a_{22} = a_{12} + a_{21}; i.e., a_{ij} + a_{i'j'} = a_{ij'} + a_{i'j}.

Theorem 2.19: If an n×n matrix A has the property that there is a certain diagonal τ such that every 2×2 submatrix that does not contain any entry from τ has both diagonal sums equal, then for some permutation matrices P and Q, PAQ takes the form:

   [ *          β_2        β_3        ...  β_n       ]
   [ α_2+β_1    *          α_2+β_3    ...  α_2+β_n   ]
   [ α_3+β_1    α_3+β_2    *          ...  α_3+β_n   ]
   [ .          .          .               .         ]
   [ α_n+β_1    α_n+β_2    α_n+β_3    ...  *         ]

i.e., a_{ij} = α_i + β_j for all i ≠ j, i, j = 1, 2, ..., n, where α_1 = 0 for notational convenience, and * denotes an entry from τ.

Proof: We prove this by induction on n. For n = 2 or n = 3, there is no 2×2 submatrix satisfying the assumed condition and hence the assertion is "vacuously true". We start with the case n = 4. Without loss of generality, we may assume that τ is the main diagonal. If we put α_1 = 0, α_2 = a_{24} - a_{14}, α_3 = a_{31} - a_{21} + a_{24} - a_{14}, α_4 = a_{41} - a_{21} + a_{24} - a_{14}, β_1 = a_{21} - a_{24} + a_{14}, β_2 = a_{12}, β_3 = a_{13}, and β_4 = a_{14}, then from the assumption one can verify easily that a_{ij} = α_i + β_j for all i ≠ j.

Now assume that the assertion is true for some n ≥ 4, and let A be an (n+1)×(n+1) matrix with the described property. As usual, assume that τ is the main diagonal. Since the n×n submatrix A(n+1 | n+1) also has the described property, the induction hypothesis implies that a_{ij} = α_i + β_j for all i ≠ j, i, j = 1, 2, ..., n (with α_1 = 0), and only the last row and column of A remain to be determined.

Let β_{n+1} = a_{1,n+1}. For each a_{i,n+1}, i = 2, 3, ..., n-1, consider the 2×2 submatrix A[1, i | n, n+1]. By assumption, β_n + a_{i,n+1} = a_{1,n+1} + a_{in} = a_{1,n+1} + α_i + β_n, and hence a_{i,n+1} = α_i + a_{1,n+1} = α_i + β_{n+1}. For a_{n,n+1}, consider the 2×2 submatrix A[n-2, n | n-1, n+1]. By assumption, α_{n-2} + β_{n-1} + a_{n,n+1} = a_{n-2,n+1} + α_n + β_{n-1}, and hence a_{n,n+1} = a_{n-2,n+1} + α_n - α_{n-2} = (α_{n-2} + β_{n+1}) + α_n - α_{n-2} = α_n + β_{n+1}. Similarly, define α_{n+1} = a_{n+1,1} - β_1. For each a_{n+1,j}, j = 2, 3, ..., n-1, consider the 2×2 submatrix A[n, n+1 | 1, j]. By assumption, α_n + β_j + a_{n+1,1} = α_n + β_1 + a_{n+1,j}, and hence a_{n+1,j} = β_j + a_{n+1,1} - β_1 = α_{n+1} + β_j. Finally, for a_{n+1,n}, consider the 2×2 submatrix A[n-2, n+1 | n-1, n]. By assumption, α_{n-2} + β_{n-1} + a_{n+1,n} = α_{n-2} + β_n + a_{n+1,n-1}, and hence a_{n+1,n} = a_{n+1,n-1} + β_n - β_{n-1} = α_{n+1} + β_n. Therefore a_{ij} = α_i + β_j for all i ≠ j, i, j = 1, 2, ..., n+1, with α_1 = 0 for notational convenience. This completes the proof of Theorem 2.19.
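Returning to the Covering Theorem (2.13): its SDR argument is effectively a bipartite-matching computation, and Hall's condition, verified in the proof, guarantees that the matching below always completes. The sketch (function names are ours; the augmenting-path routine is the standard one, not taken from the thesis) extends p mutually disjoint permutations of {0, ..., n-1} to n permutations covering every cell exactly once.

```python
def extend_to_exact_cover(n, perms):
    """Extend mutually disjoint permutations of {0,...,n-1} to n of them
    whose diagonals cover all n^2 cells exactly (Theorem 2.13)."""
    perms = [list(p) for p in perms]
    while len(perms) < n:
        used = [{p[i] for p in perms} for i in range(n)]  # columns taken in row i
        match = [-1] * n                                  # match[j] = row matched to column j

        def augment(i, seen):
            # standard Kuhn augmenting-path search for row i
            for j in range(n):
                if j not in used[i] and j not in seen:
                    seen.add(j)
                    if match[j] == -1 or augment(match[j], seen):
                        match[j] = i
                        return True
            return False

        for i in range(n):
            found = augment(i, set())
            assert found        # Hall's condition holds, as the proof shows
        sigma = [0] * n
        for j in range(n):
            sigma[match[j]] = j
        perms.append(sigma)
    return perms

n = 4
cover = extend_to_exact_cover(n, [tuple(range(n))])   # start from the identity diagonal
cells = {(i, p[i]) for p in cover for i in range(n)}
assert len(cover) == n and len(cells) == n * n        # the n diagonals cover A exactly
```

Each pass re-solves a perfect matching in the bipartite graph of still-unused (row, column) pairs; the claim |S_{ω_1} ∪ ... ∪ S_{ω_m}| ≥ m in the proof is exactly Hall's condition for this graph.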
Theorem 2.20: If A ∈ Ω_n has a zero diagonal τ such that every diagonal disjoint from τ has a constant sum, then all entries off τ are equal to 1/(n-1).

Proof: For n = 2 and n = 3, this is clear. We assume n ≥ 4. As usual, assume that τ is the main diagonal. By Lemma 2.18 and Theorem 2.19, A takes the form a_{ii} = 0 and a_{ij} = α_i + β_j for all i ≠ j, where we set α_1 = 0 as before. Clearly, any diagonal disjoint from τ has the constant sum Σ_{i=1}^n (α_i + β_i). Let α = Σ_{i=1}^n α_i and β = Σ_{i=1}^n β_i. Since, by the Covering Theorem, we can always select n-1 diagonals such that together with τ they cover A exactly, we have α + β = n/(n-1), or

   (n-1)α + (n-1)β = n.   (1)

Since the 1st row sum and the 1st column sum are one, we have

   β - β_1 = 1   (2)     and     α + (n-1)β_1 = 1.   (3)

From (2) and (3), we get

   α + (n-1)β = n.   (4)

From (1) and (4), we get (n-2)α = 0, from which α = 0. Hence β = n/(n-1), and from (2), β_1 = β - 1 = 1/(n-1). Since the ith row sum and the ith column sum are one, i = 2, 3, ..., n, we get (n-1)α_i + β - β_i = 1 and α - α_i + (n-1)β_i = 1. Hence

   (n-1)α_i - β_i = 1 - β = -1/(n-1)   (5)     and     α_i - (n-1)β_i = α - 1 = -1.   (6)

From (5) and (6), we get {(n-1)² - 1}α_i = 0, or n(n-2)α_i = 0, from which α_i = 0 for all i = 2, 3, ..., n. Therefore, from (5), β_i = 1/(n-1) for all i = 2, 3, ..., n, and hence every entry off τ equals α_i + β_j = 1/(n-1). This completes the proof. The proof of Proposition 2.17 is thus also completed.

Theorem 2.20 has a generalized form which we shall state as a conjecture, since we have not been able to prove it completely.

Conjecture 2.21: Let A ∈ Ω_n, and let τ_1, τ_2, ..., τ_m be m mutually disjoint zero diagonals of A, 1 ≤ m ≤ n-1. If every diagonal disjoint from each τ_i, i = 1, 2, ..., m, has a constant sum (this constant sum is n/(n-m) by the Covering Theorem), then all entries off the m zero diagonals are equal to 1/(n-m).

We remark that this conjecture is clearly true for m = n-1, since A is then a permutation matrix. Also, Theorem 2.20 is the special case where m = 1. Furthermore, the arguments used in the proof of Lemma 2.18 and Theorem 2.19 do not yield an answer for Conjecture 2.21, since for arbitrary n and m there may not be any 2×2 submatrix that does not contain any entry from the given m zero diagonals. The next proposition shows that Conjecture 2.21 is true for m = n-2 also.

Proposition 2.22: Let A ∈ Ω_n and let τ_1, τ_2, ..., τ_{n-2} be n-2 mutually disjoint zero diagonals of A. If every diagonal disjoint from each τ_i, i = 1, 2, ..., n-2, has a constant sum, then all entries off the n-2 zero diagonals are equal to 1/2.

Proof: It is clear that in each row and each column, there are exactly two entries off every τ_i, i = 1, 2, ..., n-2. For convenience, we call any such entry a "star". By the Covering Theorem, the constant sum in the hypothesis must be n/2. Permuting the rows and columns, we can assume that there are stars at the (1,1)th, (1,2)th, and (2,2)th positions. If the other star on the 2nd row is at the (2,1)th position, then we get a direct sum. Otherwise, by permuting the columns, we can assume that it is at the (2,3)th position, and by permuting the rows, we can assume that the other star on the 3rd column is at the (3,3)th position. Repeating this process, eventually we can write A as a direct sum A = ⊕_{i=1}^r A_i, where A_i ∈ Ω_{n_i}, Σ_{i=1}^r n_i = n, and each A_i has two diagonals consisting of stars only: the main diagonal and the diagonal corresponding to the cyclic permutation (1, 2, ..., n_i), each n_i ≥ 2.
Since each A_i is d.s., we can write x_i for all entries on the main diagonal of A_i and 1 - x_i for all entries on the other star diagonal. For each i = 1, 2, ..., r, we have, by assumption, n_1x_1 + n_2x_2 + ... + n_ix_i + ... + n_rx_r = n/2 and n_1x_1 + n_2x_2 + ... + n_i(1-x_i) + ... + n_rx_r = n/2. Hence x_i = 1 - x_i, or x_i = 1/2. This completes the proof.

In view of Theorem 2.20 and Proposition 2.22, Conjecture 2.21 is true for n = 2, 3, 4.

Since h + k ≤ n by Proposition 2.16, we get hk ≤ ((h+k)/2)² ≤ n²/4. Hence there exists a smallest constant g(n) such that hk ≤ g(n)·n². We claim that g(n) = 1/(4(n-1)).

Proposition 2.23: Let A ∈ Ω_n. Then hk ≤ n²/(4(n-1)), with equality iff h = n/2 and any diagonal disjoint from a maximum diagonal is a minimum diagonal with sum k = n/(2(n-1)).

Proof: Since n² - 4nh + 4h² = (n-2h)² ≥ 0, we get

   h(n-h) = hn - h² ≤ n²/4.   (7)

Also, from Corollary 2.14.(i), we get

   (n-1)k ≤ n-h.   (8)

From (7) and (8), we get (n-1)hk ≤ n²/4, or hk ≤ n²/(4(n-1)). If equality holds, it must hold in both (7) and (8). Hence h = n/2, and the equality case in Corollary 2.14.(i) implies that any diagonal disjoint from a maximum diagonal is a minimum diagonal with sum n/(2(n-1)). The converse is obvious. This completes the proof.

The next examples show that the above bound is always attainable, and that the attaining matrix is not unique for n ≥ 3.

Example 2.24: (i) Let A be the d.s. matrix where a_{ii} = 1/2 for all i = 1, 2, ..., n and a_{ij} = 1/(2(n-1)) otherwise. Clearly, h = Σ_i a_{ii} = n/2, and any diagonal disjoint from the main diagonal is a minimum diagonal with sum k = n/(2(n-1)).

(ii) For n ≥ 3, let A be the d.s. matrix where a_{11} = (n+1)/(2n), a_{1i} = a_{i1} = 1/(2n) and a_{ii} = (n²-n-1)/(2n(n-1)) for all i = 2, 3, ..., n, and a_{ij} = (n²-2n+2)/(2n(n-1)(n-2)) otherwise. It is readily verified that A is d.s. and that 1/(2n) ≤ (n²-2n+2)/(2n(n-1)(n-2)) ≤ (n²-n-1)/(2n(n-1)) ≤ (n+1)/(2n). Hence h = Σ_i a_{ii} = (n+1)/(2n) + (n²-n-1)/(2n) = n/2, and any diagonal disjoint from the main diagonal is a minimum diagonal with sum k = n/(2(n-1)).

For any A ∈ Ω_n, since h ≥ 1 and k ≤ 1 with either equality iff A = J_n (Corollary 1.5), we have (h-1)(1-k) ≥ 0, or h + k - hk ≥ 1, with equality iff A = J_n. The next proposition gives the upper bound for the corresponding quantity h + k + hk.

Proposition 2.25: Let A ∈ Ω_n. Then h + k + hk ≤ n + 1/(n-1), with equality iff h = n-1 and any diagonal disjoint from a maximum diagonal is a minimum diagonal with sum k = 1/(n-1).

Proof: Let B = (1/2)(A + J_n). Then B ∈ Ω_n, and the maximum diagonals and minimum diagonals of A and B correspond to each other. An application of Proposition 2.23 to B gives ((h+1)/2)·((k+1)/2) ≤ n²/(4(n-1)), or h + k + hk ≤ n²/(n-1) - 1 = n + 1/(n-1). If equality holds, then by Proposition 2.23, (h+1)/2 = n/2, or h = n-1, and any diagonal disjoint from a maximum diagonal is a minimum diagonal with sum k such that (k+1)/2 = n/(2(n-1)), or k = 1/(n-1). The converse is obvious. This completes the proof.

We close this chapter by obtaining bounds for h, k and h+k when the d.s. matrices under consideration have the properties discussed in Chapter I.

Proposition 2.26: If A ∈ Ω_n has precisely (n-1)(n-1)! diagonals with common sum a ≠ 1, then h ≤ 2, k ≥ (n-2)/(n-1), and 2 - (n-2)/(n-1)² ≤ h+k ≤ 2 + (n-2)/(n-1). These bounds are always attainable.

Proof: By Theorem 1.20, there exist permutation matrices P and Q such that PAQ is an (x, y, z)-matrix. Examining the proof of Corollary 1.21 shows that a = 1 + (1-nx)/(n-1)². Hence β = x + (n-1)z = n - (n-1)a = 1 - (1-nx)/(n-1), and a + β = 2 - (1-nx)(n-2)/(n-1)², where β denotes the only other diagonal sum (Lemma 1.19). Clearly, a attains its maximum 1 + 1/(n-1)² when x = 0 and its minimum (n-2)/(n-1) when x = 1; and β attains its maximum 2 when x = 1 and its minimum (n-2)/(n-1) when x = 0. Finally, a + β attains its maximum 2 + (n-2)/(n-1) when x = 1 and its minimum 2 - (n-2)/(n-1)² when x = 0. Therefore h ≤ 2, k ≥ (n-2)/(n-1), and 2 - (n-2)/(n-1)² ≤ h+k ≤ 2 + (n-2)/(n-1). It is clear from the proof that these bounds are always attainable.

Proposition 2.27: If A ∈ Ω_n (n ≥ 3) has precisely (n-1)(n-1)! diagonals with sum (i) greater than one, then h + k < 2; (ii) smaller than one, then h + k > 2. This bound is not attainable, but it is best possible in the sense that h+k can become arbitrarily close to 2 in either case.

Proof: Let E = {σ : Σ_i a_{iσ(i)} ≠ 1}. Then |E| = (n-1)(n-1)!. Let D = ∪_{j=1}^{(n-1)!} D_j be a decomposition of the set D of all diagonals as described in Theorem 1.11. Then we have seen (cf. Proposition 1.18) that |E ∩ D_j| = n-1 for all j = 1, 2, ..., (n-1)!.

In case (i), let σ_0 be a maximum diagonal with sum h, where σ_0 ∈ D_1, say. Let h, a_2, ..., a_{n-1}, β be the diagonal sums of the diagonals in D_1, where β ≤ 1 and a_i > 1, i = 2, 3, ..., n-1. Since D_1 covers A exactly, n = h + β + Σ_{i=2}^{n-1} a_i > h + β + n - 2, and hence h + k ≤ h + β < 2.

Similarly, in case (ii), let τ_0 be a minimum diagonal with sum k, where τ_0 ∈ D_1, say. Let k, β_2, ..., β_{n-1}, a be the diagonal sums of the diagonals in D_1, where a ≥ 1 and β_i < 1, i = 2, 3, ..., n-1. Since D_1 covers A exactly, n = k + a + Σ_{i=2}^{n-1} β_i < k + a + n - 2, and hence h + k ≥ a + k > 2.

Furthermore, it is clear from the proof that this bound is not attainable. Finally, to see that this bound is approachable, consider a d.s. (x, y, z)-matrix A with x ≠ 1/n (so that A ≠ J_n).
As seen before, A has only 2 distinct diagonal sums α and β, with α + β = 2 - (1-nx)(n-2)/(n-1)^2 (cf. Proposition 2.26). It is clear that α + β tends to 2 from below as x tends to 1/n from below, and that α + β tends to 2 from above as x tends to 1/n from above.

CHAPTER III

PROPERTIES OF THE h-FUNCTION AND THE k-FUNCTION OF d.s. MATRICES

The main purpose of this chapter is to study the properties of the h-function and the k-function, the functions that associate with each d.s. matrix its maximum and minimum diagonal sums, respectively. In particular, we investigate the behaviour of these functions on the Kronecker product of d.s. matrices. We will show that the h-function is very similar to the rank function ρ in many respects. Furthermore, we shall prove that, for A ∈ Ω_n, h(A) ≤ ρ(A) and per(A) ≤ (h(A)/n)^{1/2}, which improves a result of Marcus and Minc [11]. A conjecture is given.

We mention first that the functions h and k are, in fact, defined for any nxn matrices. It is when restricted to d.s. matrices that they have interesting properties. We shall denote the set of all nxn matrices with non-negative entries by H_n.

Lemma 3.1: Let A, B be in H_n. Then
(i) h(A+B) ≤ h(A) + h(B), with equality iff A and B have a corresponding maximum diagonal;
(ii) k(A+B) ≥ k(A) + k(B), with equality iff A and B have a corresponding minimum diagonal.

Proof: (i) Let σ, τ and μ be any maximum diagonals of A, B and A+B respectively. Then h(A+B) = Σ_{i=1}^n (a_{iμ(i)} + b_{iμ(i)}) = Σ_{i=1}^n a_{iμ(i)} + Σ_{i=1}^n b_{iμ(i)} ≤ Σ_{i=1}^n a_{iσ(i)} + Σ_{i=1}^n b_{iτ(i)} = h(A) + h(B). If equality holds, then Σ_{i=1}^n a_{iμ(i)} = Σ_{i=1}^n a_{iσ(i)} and Σ_{i=1}^n b_{iμ(i)} = Σ_{i=1}^n b_{iτ(i)}. Hence μ is a maximum diagonal for both A and B. The converse is obvious. The proof of (ii) is similar.

The above lemma can be generalized immediately to:

Corollary 3.2: Let A_i ∈ H_n, where i = 1, 2, ..., m. Then
(i) h(Σ_{i=1}^m A_i) ≤ Σ_{i=1}^m h(A_i)
with equality iff all the A_i have a corresponding maximum diagonal;
(ii) k(Σ_{i=1}^m A_i) ≥ Σ_{i=1}^m k(A_i), with equality iff all the A_i have a corresponding minimum diagonal.

Proof: By Lemma 3.1 and induction on m.

Corollary 3.3: The function h is convex on H_n, and the function k is concave on H_n.

Proof: Let A, B be in H_n and λ ∈ [0, 1]. Then h(λA + (1-λ)B) ≤ h(λA) + h((1-λ)B) = λh(A) + (1-λ)h(B), and k(λA + (1-λ)B) ≥ k(λA) + k((1-λ)B) = λk(A) + (1-λ)k(B).

Theorem 3.4: Let A, B be in Ω_n. Then
(i) h(AB) ≤ min{h(A), h(B)};
(ii) k(AB) ≥ max{k(A), k(B)}.

Proof: (i) By the well-known theorem of Birkhoff [1], every d.s. matrix can be expressed as a convex combination of permutation matrices. Hence B = Σ_{i=1}^m λ_i P_i, where λ_i > 0 such that Σ_{i=1}^m λ_i = 1, and P_i is a permutation matrix, i = 1, 2, ..., m. Since h(AP_i) = h(A) for all i, Corollary 3.3 gives that h(AB) = h(Σ_{i=1}^m λ_i AP_i) ≤ Σ_{i=1}^m λ_i h(AP_i) = (Σ_{i=1}^m λ_i) h(A) = h(A). Similarly, h(AB) ≤ h(B). Therefore h(AB) ≤ min{h(A), h(B)}. The proof of (ii) is similar.

The next two corollaries are immediate consequences of Theorem 3.4.

Corollary 3.5: Let A, B be in Ω_n. Then AB is a permutation matrix iff both A and B are permutation matrices.

Proof: If AB is a permutation matrix, then h(AB) = n, and hence h(A) ≥ n and h(B) ≥ n by Theorem 3.4. Therefore h(A) = h(B) = n, and A and B are both permutation matrices. The converse is obvious.

Corollary 3.6: Let A ∈ Ω_n. Then for all m, (i) h(A^m) ≤ h(A); and (ii) k(A^m) ≥ k(A).

Proof: Since A ∈ Ω_n implies that A^m ∈ Ω_n for all m = 1, 2, ..., it suffices to put B = A^{m-1} in Theorem 3.4.

Remark 3.7: To determine when equality holds in Theorem 3.4 seems to be quite difficult. For example, h(AB) = h(A) will hold if B is a permutation matrix, or if PAQ = J_{n_1} ⊕ J_{n_2} ⊕ ... ⊕ J_{n_r}, where Σ_{i=1}^r n_i = n and P and Q are permutation matrices.
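Since h and k are defined by maximizing and minimizing over all n! diagonals, they can be computed by brute force for small n, and Lemma 3.1 and Theorem 3.4 can be checked numerically. The following sketch is our own illustration, not part of the thesis; the helper `random_ds` (a random convex combination of permutation matrices, which is doubly stochastic by Birkhoff's theorem) and all names are ours.

```python
# Brute-force h(A) and k(A), plus a numerical sanity check of
# Lemma 3.1 and Theorem 3.4 on random doubly stochastic matrices.
import itertools
import numpy as np

def h(A):
    """Maximum diagonal sum: max over permutations s of sum_i A[i, s(i)]."""
    n = A.shape[0]
    return max(A[np.arange(n), list(s)].sum()
               for s in itertools.permutations(range(n)))

def k(A):
    """Minimum diagonal sum."""
    n = A.shape[0]
    return min(A[np.arange(n), list(s)].sum()
               for s in itertools.permutations(range(n)))

def random_ds(n, rng, terms=4):
    """Random d.s. matrix: convex combination of permutation matrices."""
    A = np.zeros((n, n))
    for w in rng.dirichlet(np.ones(terms)):
        A[np.arange(n), rng.permutation(n)] += w
    return A

rng = np.random.default_rng(0)
for _ in range(200):
    A, B = random_ds(4, rng), random_ds(4, rng)
    assert h(A + B) <= h(A) + h(B) + 1e-9      # Lemma 3.1 (i)
    assert k(A + B) >= k(A) + k(B) - 1e-9      # Lemma 3.1 (ii)
    assert h(A @ B) <= min(h(A), h(B)) + 1e-9  # Theorem 3.4 (i)
    assert k(A @ B) >= max(k(A), k(B)) - 1e-9  # Theorem 3.4 (ii)
```

For a permutation matrix P the code gives h(P) = k(P) = n, and for J_n it gives h(J_n) = k(J_n) = 1, in agreement with Corollary 1.5.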
The equality can hold, however, when both A and B are fully indecomposable. For example, let

A = B =
    0    1/2  1/2
    1/2  0    1/2
    1/2  1/2  0

Then

AB =
    1/2  1/4  1/4
    1/4  1/2  1/4
    1/4  1/4  1/2

Hence h(AB) = 3/2 = h(A) = h(B). We also observe another condition for equality, as follows. Let g(B) denote the minimum number of permutation matrices necessary to represent B as a convex combination [15]. Then, in view of Corollary 3.2 and the proof of Theorem 3.4, A must have at least g(B) maximum diagonals in order that h(AB) = h(B) can possibly hold. This condition, however, is rather weak, since g(B) is small compared with n!, the number of diagonals of a matrix.

Corollary 3.8: Let A, B be in Ω_n. Then
(i) h(AB) ≤ h(A)h(B), with equality iff A = B = J_n;
(ii) k(AB) ≥ k(A)k(B), with equality iff A = B = J_n.

Proof: (i) For any d.s. matrix X we have, by Corollary 1.5, that h(X) ≥ 1, with equality iff X = J_n. Hence Theorem 3.4 implies that h(AB) ≤ min{h(A), h(B)} ≤ h(A)h(B), with equality only if h(A) = h(B) = 1, i.e. A = B = J_n. The converse is obvious. The proof of (ii) is similar.

Let A, B be in Ω_n. Since h(A) ≥ 1 and h(AB) ≤ h(B), it follows that h(A) + h(B) - h(AB) ≥ 1, with equality iff A = B = J_n. The problem of determining the corresponding best possible upper bound for h(A) + h(B) - h(AB) seems to be a difficult one. Clearly, 2n is an upper bound. The following conjecture, which seems plausible, is exactly the analogue of Sylvester's law for the rank of two matrices: ρ(A) + ρ(B) - ρ(AB) ≤ n [14, p. 28].

Conjecture 3.9: Let A, B be in Ω_n. Then h(A) + h(B) - h(AB) ≤ n.

Remark 3.10: With little computation, Conjecture 3.9 can be verified to be true for n = 2. However, even for n = 3, the manipulations get too involved to give any answer. The difficulty of this problem can perhaps be seen from the fact that the equality will not be attained uniquely. It is clear that h(A) + h(B) - h(AB) = n if either A or B is a permutation matrix.
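Conjecture 3.9 also invites a small computer search. The sketch below is our own experiment (with n = 4 and our own random sampling of d.s. matrices); it proves nothing, but it records the largest value of h(A) + h(B) - h(AB) encountered.

```python
# Random search for counterexamples to Conjecture 3.9:
# is h(A) + h(B) - h(AB) <= n for all d.s. A, B?
import itertools
import numpy as np

def h(A):
    """Maximum diagonal sum, by brute force over all permutations."""
    n = A.shape[0]
    return max(A[np.arange(n), list(s)].sum()
               for s in itertools.permutations(range(n)))

def random_ds(n, rng, terms=5):
    """Random d.s. matrix as a convex combination of permutation matrices."""
    A = np.zeros((n, n))
    for w in rng.dirichlet(np.ones(terms)):
        A[np.arange(n), rng.permutation(n)] += w
    return A

rng = np.random.default_rng(1)
n, worst = 4, 0.0
for _ in range(500):
    A, B = random_ds(n, rng), random_ds(n, rng)
    worst = max(worst, h(A) + h(B) - h(A @ B))
print(f"largest h(A)+h(B)-h(AB) found: {worst:.4f}; conjectured bound: {n}")
```

By the discussion above, the quantity searched over always lies between 1 and 2n, and it equals n whenever A or B is a permutation matrix.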
This is, however, not the only case when equality can hold; e.g., consider the 4x4 d.s. matrices

A = [ I_2  0  ]        B = [ 0    J_2 ]
    [ 0   J_2 ] ,          [ I_2  0   ] .

Then h(A) = h(B) = 3, and

AB = [ 0    J_2 ]
     [ J_2  0   ] .

Hence h(AB) = 2 and h(A) + h(B) - h(AB) = 4 = n.

For arbitrary A, B in H_n, it is easy to see that h(A) + h(B) - h(A+B) and k(A+B) - k(A) - k(B) have no upper bound, though zero is the lower bound in both cases by Lemma 3.1. If, however, A and B are d.s. matrices, then it is readily seen that the upper bounds exist, as indicated in the next proposition.

Proposition 3.11: Let A, B be in Ω_n. Then
(i) h(A) + h(B) - h(A+B) ≤ n, with equality iff A and B are permutation matrices corresponding to disjoint permutations;
(ii) k(A+B) - k(A) - k(B) ≤ 2, with equality iff both A and B have a zero diagonal and a_{ij} + b_{ij} = 2/n for all i, j = 1, 2, ..., n.

Proof: (i) Since h(A) ≤ n and h(B) ≤ h(A+B), it is clear that h(A) + h(B) - h(A+B) ≤ n. If equality holds, then n = h(A) = h(B) = h(A+B), and hence A and B are permutation matrices corresponding to disjoint permutations. The converse is obvious.

(ii) Since (1/2)(A+B) ∈ Ω_n, we have (1/2)k(A+B) = k((A+B)/2) ≤ 1, i.e. k(A+B) ≤ 2, and hence k(A+B) - k(A) - k(B) ≤ 2. If equality holds, then k(A) = k(B) = 0 and k((A+B)/2) = 1. Hence both A and B have a zero diagonal and (A+B)/2 = J_n, which implies that a_{ij} + b_{ij} = 2/n for all i, j = 1, 2, ..., n. The converse is obvious. This completes the proof.

Next, we study the behaviour of the h-function and the k-function on the Kronecker product (or direct product) [9] of d.s. matrices. The fact that A, B ∈ Ω_n implies that A x B ∈ Ω_{n^2} makes the consideration of h(A x B) and k(A x B) quite natural.

Lemma 3.12: For any n-square matrices A and B,
(i) h(A x B) = h(B x A) and k(A x B) = k(B x A);
(ii) h(PAQ x P_1BQ_1) = h(A x B) and k(PAQ x P_1BQ_1) = k(A x B) for any permutation matrices P, Q, P_1 and Q_1.
Proof: (i) Since it is known [2] that there exists a permutation matrix P such that P^t(A x B)P = B x A, the assertions are clear. (ii) Since the Kronecker product satisfies the property AC x BD = (A x B)(C x D), we have (P x P_1)(A x B)(Q x Q_1) = (PAQ) x (P_1BQ_1); i.e., permuting the rows and columns of A and B only permutes the rows and columns of A x B. Hence the assertions follow. This completes the proof.

We remark that the above lemma can be of practical use sometimes when one of the matrices A x B or B x A has a form easily handled (e.g. Proposition 3.18 below). The next proposition presents a strong contrast between the Kronecker product and the ordinary product of d.s. matrices (cf. Corollary 3.8).

Proposition 3.13: Let A, B be in Ω_n. Then (i) h(A x B) ≥ h(A)h(B) and (ii) k(A x B) ≤ k(A)k(B).

Proof: (i) Let σ be a maximum diagonal of A with sum h(A). In A x B, consider the blocks at the (i, σ(i))th position, where i = 1, 2, ..., n. In each of these blocks there is a diagonal (of that block) with sum a_{iσ(i)}·h(B), i = 1, 2, ..., n. The union of these diagonals clearly forms a diagonal for A x B with sum h(A)·h(B). Hence h(A x B) ≥ h(A)h(B). The proof of (ii) is similar.

As an immediate consequence, we have:

Corollary 3.14: Let A, B be in Ω_n. Then A x B = J_{n^2} iff A = B = J_n.

Proof: If A x B = J_{n^2}, then h(A x B) = 1, and hence h(A)h(B) ≤ 1 by Proposition 3.13. But h(A) ≥ 1 and h(B) ≥ 1 by Corollary 1.5. Hence h(A) = h(B) = 1, i.e. A = B = J_n. The sufficiency is clear.

Remark 3.15: The case for equality in Proposition 3.13 is of some interest. It can be verified directly that for n = 2, equality always holds in both cases. In general, we are unable to determine the conditions for equality. For n ≥ 3, however, there exist matrices such that the inequality is strict, as shown by the next example.

Example 3.16: Let

A = B =
    0    1/2  1/2
    1/2  1/4  1/4
    1/2  1/4  1/4

in Ω_3. Then h(A) = h(B) = 5/4 and h(A)h(B) = 25/16.
Now,

16(A x B) =
    0 0 0   0 4 4   0 4 4
    0 0 0   4 2 2   4 2 2
    0 0 0   4 2 2   4 2 2
    0 4 4   0 2 2   0 2 2
    4 2 2   2 1 1   2 1 1
    4 2 2   2 1 1   2 1 1
    0 4 4   0 2 2   0 2 2
    4 2 2   2 1 1   2 1 1
    4 2 2   2 1 1   2 1 1

If we consider the diagonal of A x B consisting of the entries in positions (1,5), (2,4), (3,7), (4,2), (5,1), (6,6), (7,3), (8,8), (9,9) (six entries 4/16 and three entries 1/16), we see that h(A x B) ≥ (1/16)(24 + 3) = 27/16 > 25/16 = h(A)h(B).

The next two propositions show that we have equality in both cases in Proposition 3.13 if one of A and B is some special d.s. matrix.

Proposition 3.17: Let A ∈ Ω_n and let P be any permutation matrix. Then
(i) h(A x P) = h(P x A) = h(P)h(A) = nh(A);
(ii) k(A x P) = k(P x A) = k(P)k(A) = 0.

Proof: In view of Lemma 3.12.(i), it suffices to consider P x A. Let σ be the permutation corresponding to P. Then P x A has a copy of A in the (i, σ(i))th block, where i = 1, 2, ..., n, and all the other blocks are 0. Hence h(P x A) = nh(A) and k(P x A) = 0.

Proposition 3.18: Let A ∈ Ω_n. Then
(i) h(A x J_n) = h(J_n x A) = h(A);
(ii) k(A x J_n) = k(J_n x A) = k(A).

Proof: (i) In view of Lemma 3.12.(i) and Proposition 3.13, it suffices to show that h(J_n x A) ≤ h(A). We have

J_n x A = (1/n) ·
    A A ... A
    A A ... A
    . .     .
    A A ... A

Let σ be any diagonal of J_n x A. Consider the following sets of indices (mod n), taken from the set {1, 2, ..., n}:

T_1 = {σ(kn + 1) (mod n) : k = 0, 1, ..., n-1}
T_2 = {σ(kn + 2) (mod n) : k = 0, 1, ..., n-1}
...
T_n = {σ(kn + n) (mod n) : k = 0, 1, ..., n-1}

In other words, T_i consists of the second (column) indices (mod n) of all entries of σ which lie on the ith row of each block, where i = 1, 2, ..., n. Note that the elements in each T_i are not necessarily distinct. We claim that there is an SDR (system of distinct representatives) for the sets T_1, T_2, ..., T_n. Let m be an integer such that 1 ≤ m ≤ n, and let ω be an increasing sequence of m terms, 1 ≤ ω_1 < ω_2 < ... < ω_m ≤ n. We claim that |T_{ω_1} ∪ T_{ω_2} ∪ ... ∪ T_{ω_m}| ≥ m. Since each T_{ω_i}, where i = 1, 2, ..., m, has n elements counting repetition, T_{ω_1} ∪ T_{ω_2} ∪ ... ∪ T_{ω_m} has mn elements counting repetition.
Furthermore, it is clear from the definition of a diagonal that each index (mod n) occurs precisely n times in ∪_{i=1}^n T_i, and hence occurs at most n times in T_{ω_1} ∪ ... ∪ T_{ω_m}. Therefore |T_{ω_1} ∪ T_{ω_2} ∪ ... ∪ T_{ω_m}| ≥ m. Now the theorem of P. Hall [20, p. 48] implies the existence of an SDR for the sets T_1, T_2, ..., T_n; say σ(k_i n + i) ∈ T_i, where i = 1, 2, ..., n and 0 ≤ k_i ≤ n-1. Consider the set S = {a_{k_i n + i, σ(k_i n + i)} : i = 1, 2, ..., n}. We identify the rows of J_n x A that are the same rows of A, and the columns of J_n x A that are the same columns of A. Since k_i n + i and k_j n + j are incongruent (mod n), and σ(k_i n + i) and σ(k_j n + j) are incongruent (mod n), for i ≠ j, i, j = 1, 2, ..., n, the elements of S lie on distinct rows and columns even under the identification. Hence they constitute a diagonal for A. Now we permute the (k_i n + i)th row with the ith row, where i = 1, 2, ..., n. Since these two rows are the same row under the identification, this permutation will not affect the block structure of J_n x A. Furthermore, since {σ(k_i n + i) (mod n) : i = 1, 2, ..., n} = {1, 2, ..., n}, there exists, for each i, a unique j (depending on i) such that σ(k_j n + j) ≡ i (mod n), i = 1, 2, ..., n. Hence permuting the ith column with the σ(k_j n + j)th (mod n) column, where i = 1, 2, ..., n, will not affect the block structure of J_n x A. In this manner, we can bring the above-found diagonal into the (1, 1)th block of J_n x A. Since there are no entries of σ left on the first n rows and columns, we can consider the (n^2-n)-square matrix E obtained by deleting the first n rows and columns of J_n x A, and the remaining n^2 - n entries of σ will form a diagonal for E.
Now we can repeat the above process and eventually bring the entries of σ into the (1,1)th, (2,2)th, ..., (n,n)th blocks, in such a way that each block has n entries from σ that form a diagonal for that block (namely, A), and such that the block structure of J_n x A remains unchanged. Hence h(J_n x A) ≤ (1/n){nh(A)} = h(A). The proof of (ii) is similar.

The Birkhoff theorem can be used to yield an upper bound for h(A x B) in terms of h(A) and h(B).

Proposition 3.19: Let A, B be in Ω_n. Then h(A x B) ≤ min{nh(A), nh(B)}, with equality if either A or B is a permutation matrix.

Proof: By Birkhoff's theorem, let B = Σ_{i=1}^m λ_i P_i, where λ_i > 0 such that Σ_{i=1}^m λ_i = 1, and each P_i is a permutation matrix, i = 1, 2, ..., m. Since the Kronecker product is distributive over summation [9, p. 82], we get, by Corollary 3.2.(i) and Proposition 3.17.(i), that h(A x B) = h(A x Σ_{i=1}^m λ_i P_i) = h(Σ_{i=1}^m λ_i (A x P_i)) ≤ Σ_{i=1}^m λ_i h(A x P_i) = nh(A) Σ_{i=1}^m λ_i = nh(A). Similarly, h(A x B) ≤ nh(B). The assertion for equality is obvious from Proposition 3.17.(i).

In the rest of this chapter, we shall study the relation between h(A) and per(A), the permanent function of A, where A ∈ Ω_n. First, however, in order to increase understanding of the function h(A), we list the similarities between h(A) and the rank function ρ(A).

(1) If A ∈ Ω_n, then 1 ≤ ρ(A) ≤ n.
(1)' If A ∈ Ω_n, then 1 ≤ h(A) ≤ n. (Corollary 1.5)
(2) ρ(A) = ρ(A^t).
(2)' h(A) = h(A^t). (Trivial)
(3) ρ(A+B) ≤ ρ(A) + ρ(B).
(3)' h(A+B) ≤ h(A) + h(B) for A, B in H_n. (Lemma 3.1.(i))
(4) ρ(AB) ≤ min{ρ(A), ρ(B)}.
(4)' h(AB) ≤ min{h(A), h(B)} for A, B in Ω_n. (Theorem 3.4.(i))
(5) If A = ⊕Σ_{i=1}^m A_i, then ρ(A) = Σ_{i=1}^m ρ(A_i).
(5)' If A = ⊕Σ_{i=1}^m A_i, then h(A) = Σ_{i=1}^m h(A_i). (Trivial)
(6) If P and Q are non-singular, then ρ(A) = ρ(PAQ).
(6)' If P and Q are permutation matrices, then h(A) = h(PAQ).
(Trivial)

The similarities would be even more striking if Conjecture 3.9 and its generalization (Conjecture 3.20 below) are true, since they are exactly the analogues of Sylvester's law and Frobenius' inequality for the rank function, respectively [14, pp. 27-28].

Conjecture 3.20: Let A, B and C be in Ω_n. Then h(AB) + h(BC) ≤ h(B) + h(ABC). (Conjecture 3.9 is a special case of this for B = I.)

In fact, it was the above listed similarities, together with some observations and experiments, that tempted us to make Conjectures 3.9 and 3.20.

In [11], Marcus and Minc proved the following theorem.

Theorem 3.21: If A ∈ Ω_n, then per(A) ≤ (ρ(A)/n)^{1/2}, and in addition, if A is normal, then per(A) ≤ ρ(A)/n, with equality iff A is a permutation matrix or n = 2 and A = J_2.

Our main result concerning this will be to show that this theorem still holds if we replace ρ(A) by h(A), and that the new upper bound is an improvement on the old one. But first of all, a simple inequality between per(A) and h(A):

Proposition 3.22: If A ∈ Ω_n, then per(A) ≤ n!(h(A)/n)^n, with equality iff A = J_n.

Proof: This is immediate from the definition of per(A) and the arithmetic-geometric mean inequality.

Theorem 3.23: If A ∈ Ω_n, then per(A) ≤ (h(A)/n)^{1/2}, with equality iff A is a permutation matrix. If A is also normal, then per(A) ≤ h(A)/n, with equality iff A is a permutation matrix or n = 2 and A = J_2.

Proof: Assume first that A is normal. Then Marcus and Minc proved [11, Theorem 1] that per(A) ≤ (1/n) Σ_{i=1}^n |λ_i|^n, where λ_1, λ_2, ..., λ_n denote the eigenvalues of A. Since it is well known that each eigenvalue of a d.s. matrix does not exceed one in modulus, we have |λ_i|^n ≤ |λ_i|^2 for all i, and hence per(A) ≤ (1/n) Σ_{i=1}^n |λ_i|^2. Since A is normal, the classical Schur's inequality gives Σ_{i=1}^n |λ_i|^2 = Σ_{i,j=1}^n |a_{ij}|^2 = Σ_{i,j=1}^n a_{ij}^2, and hence per(A) ≤ (1/n) Σ_{i,j=1}^n a_{ij}^2.
Furthermore, by a result of Marcus and Ree [18], there exists a diagonal σ such that Σ_{i,j=1}^n a_{ij}^2 ≤ Σ_{i=1}^n a_{iσ(i)}, and consequently per(A) ≤ (1/n) Σ_{i=1}^n a_{iσ(i)} ≤ h(A)/n. If equality holds, then, for all i = 1, 2, ..., n, |λ_i|^n = |λ_i|^2. If n ≥ 3, this implies that |λ_i| = 1 for all i = 1, 2, ..., n. Hence, by a result of Mirsky and Perfect [19, Theorem 5], we conclude that A is a permutation matrix. If n = 2, then

A = [ x    1-x ]
    [ 1-x  x   ] .

Hence per(A) = h(A)/2 would imply that x^2 + (1-x)^2 = x or 1-x, depending on whether x ≥ 1/2 or x ≤ 1/2. If x^2 + (1-x)^2 = x, we get x = 1 or x = 1/2. If x^2 + (1-x)^2 = 1-x, we get x = 0 or x = 1/2. Hence A = J_2 or A is a permutation matrix. The converse is obvious.

For general A, we apply the inequality (per(AB))^2 ≤ per(AA^t)per(B^tB) of Marcus and Newman [17, Theorem 5] to get, by putting B = I, that (per(A))^2 ≤ per(AA^t). Since AA^t ∈ Ω_n is normal, the above result and Theorem 3.4.(i) together imply that (per(A))^2 ≤ h(AA^t)/n ≤ h(A)/n, i.e. per(A) ≤ (h(A)/n)^{1/2}. If equality holds, it must also hold in the inequality (per(AB))^2 ≤ per(AA^t)per(B^tB). Equality there implies that either (i) a row of A or a column of B consists of zeros, or (ii) A^t = BDP, where D is a diagonal matrix and P is a permutation matrix. In our case, since B = I and since (i) is impossible, we get A^t = DP, i.e. D = A^tP^{-1} ∈ Ω_n, and hence D = I. Therefore A^t = P, i.e. A = P^t is a permutation matrix. The converse is again obvious.

As an immediate consequence, we obtain the following well known upper bound for per(A) [16, Lemma 1].

Corollary 3.24: If A ∈ Ω_n, then per(A) ≤ 1, with equality iff A is a permutation matrix.

Proof: This follows from Theorem 3.23, since h(A) ≤ n with equality iff A is a permutation matrix.

Theorem 3.25: If A ∈ Ω_n, then h(A) ≤ ρ(A).

Proof: Let λ_1, λ_2, ..., λ_r be the non-zero eigenvalues of A. Then ρ(A) ≥ r. Since A is d.s., |λ_i| ≤ 1 for all i = 1, 2, ..., r.
Hence trace(A) = λ_1 + λ_2 + ... + λ_r = |λ_1 + λ_2 + ... + λ_r| ≤ |λ_1| + |λ_2| + ... + |λ_r| ≤ r ≤ ρ(A). Now choose permutation matrices P and Q such that B = PAQ has the main diagonal as one of its maximum diagonals. Then, since B ∈ Ω_n, we have h(A) = h(B) = trace(B) ≤ ρ(B) = ρ(A).

In view of Theorem 3.25, and the fact that ρ(A) = n for all non-singular nxn matrices (whence Theorem 3.21 yields the trivial bound per(A) ≤ 1), we see that the bounds obtained in Theorem 3.23 are indeed improvements on those in Theorem 3.21. Concerning Theorem 3.23, we do not know whether the square root can be removed in general. The following propositions answer this question partially.

Proposition 3.26: Let A ∈ Ω_n. If for some permutation matrices P and Q, PAQ = ⊕Σ_{i=1}^r A_i, where r > 1, A_i ∈ Ω_{n_i}, i = 1, 2, ..., r, and Σ_{i=1}^r n_i = n, and if per(A_i) ≤ h_i/n_i for all i = 1, 2, ..., r, where h_i = h(A_i), then per(A) ≤ h/n, with equality iff A is a permutation matrix.

Proof: Since n/(n_1 ... n_r) = Σ_{i=1}^r n_i/(n_1 ... n_r) = Σ_{i=1}^r 1/(n_1 ... n̂_i ... n_r) ≤ Σ_{i=1}^r 1/(h_1 ... ĥ_i ... h_r) = Σ_{i=1}^r h_i/(h_1 ... h_r) = h/(h_1 ... h_r), where ^ denotes the deletion of that factor, we have per(A) = per(PAQ) = Π_{i=1}^r per(A_i) ≤ Π_{i=1}^r (h_i/n_i) = (h_1 ... h_r)/(n_1 ... n_r) ≤ h/n. If equality holds, then for all i = 1, 2, ..., r, n_1 ... n̂_i ... n_r = h_1 ... ĥ_i ... h_r. Hence Π_{i=1}^r n_i = Π_{i=1}^r h_i. Since h_i ≤ n_i, we get h_i = n_i for all i = 1, 2, ..., r. Hence each A_i is a permutation matrix, and therefore so is A. The converse is obvious.

Proposition 3.27: Let A ∈ Ω_n. If there is a normal matrix B ∈ Ω_n such that per(A) ≤ per(B) and h(B) ≤ h(A), then per(A) ≤ h(A)/n, with equality iff A is a permutation matrix or n = 2 and A = J_2.

Proof: Since per(A) ≤ per(B) ≤ h(B)/n ≤ h(A)/n by Theorem 3.23, the inequality is clear.
If equality holds, then either (i) B is a permutation matrix or (ii) n = 2 and B = J_2. In case (i), h(B) = n implies that h(A) = n, and hence A is a permutation matrix. In case (ii), per(B) = 1/2, and hence per(A) ≤ 1/2. Since the van der Waerden conjecture is true for n = 2, per(A) = 1/2, and therefore A = J_2. The converse is obvious.

It was once conjectured that for A ∈ Ω_n, per(A) ≥ max{per(AA^t), per(A^tA)} (e.g. [12], Conjecture 2). Newman has given examples to show that this is false in general. It turns out that if A ∈ Ω_n is such that the above inequality is reversed, then the square root can be removed from Theorem 3.23.

Corollary 3.28: If A ∈ Ω_n satisfies per(A) ≤ max{per(AA^t), per(A^tA)}, then per(A) ≤ h(A)/n, with equality iff A is a permutation matrix or n = 2 and A = J_2.

Proof: We may assume that per(AA^t) ≥ per(A^tA). Since AA^t is normal, and since h(AA^t) ≤ h(A) by Theorem 3.4.(i), the assertion follows from Proposition 3.27.

Corollary 3.29: If A ∈ Ω_n satisfies per(A) ≤ per((A + A^t)/2), then per(A) ≤ h(A)/n, with equality iff A is a symmetric permutation matrix or n = 2 and A = J_2.

Proof: Since (A + A^t)/2 is symmetric and hence normal, and since h((A + A^t)/2) = (1/2)h(A + A^t) ≤ (1/2){h(A) + h(A^t)} = h(A) by Lemma 3.1.(i), the assertion except "symmetric" follows from Proposition 3.27. But if (A + A^t)/2 is a permutation matrix, then a_{iσ(i)} + a_{σ(i)i} = 2 for some σ, where i = 1, 2, ..., n. Hence A is a symmetric permutation matrix. The converse is obvious.

We close this chapter by giving some upper bounds for per(A x B) in terms of h(A) and h(B), where A and B are in Ω_n. In [2], the upper bound for per(A x B) was studied by Brualdi.

Proposition 3.30: Let A, B be in Ω_n. Then per(A x B) ≤ min{(h(A)/n)^{1/2}, (h(B)/n)^{1/2}}. If A and B are also normal, then per(A x B) ≤ min{h(A)/n, h(B)/n}, with equality iff A and B are both permutation matrices.
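Before turning to the proof, the inequalities of this chapter can be checked numerically on the 3x3 matrix used in the example following Remark 3.15. The sketch below is our own illustration, not part of the thesis; h and the permanent are computed by brute force over all diagonals (slow for the 9x9 Kronecker product, but exact).

```python
# Brute-force check of Propositions 3.13 and 3.19, Theorem 3.23, and
# Proposition 3.30 on the 3x3 matrix A = B from earlier in this chapter.
import itertools
import math
import numpy as np

def h(M):
    """Maximum diagonal sum (brute force over all permutations)."""
    n = len(M)
    return max(sum(M[i][s[i]] for i in range(n))
               for s in itertools.permutations(range(n)))

def per(M):
    """Permanent by its definition: sum of all diagonal products."""
    n = len(M)
    return sum(math.prod(M[i][s[i]] for i in range(n))
               for s in itertools.permutations(range(n)))

A = [[0.0, 0.5, 0.5],
     [0.5, 0.25, 0.25],
     [0.5, 0.25, 0.25]]
K = np.kron(A, A).tolist()              # the 9x9 Kronecker product A x A
n = 3

hA, hK = h(A), h(K)
perA, perK = per(A), per(K)
assert abs(hA - 5/4) < 1e-12
assert hK >= hA * hA - 1e-9             # Proposition 3.13 (strict here)
assert hK <= n * hA + 1e-9              # Proposition 3.19
assert perA <= math.sqrt(hA / n)        # Theorem 3.23
assert perK <= math.sqrt(hK / n**2)     # Proposition 3.30
print(f"h(A) = {hA}, h(AxA) = {hK}, per(A) = {perA}, per(AxA) = {perK:.6f}")
```

In particular, the code confirms the strict inequality h(A x A) > h(A)^2 exhibited earlier by the explicit diagonal of sum 27/16.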
Proof: Since A x B ∈ Ω_{n^2}, Theorem 3.23 and Proposition 3.19 imply that per(A x B) ≤ (h(A x B)/n^2)^{1/2} ≤ ((1/n^2) min{nh(A), nh(B)})^{1/2} = min{(h(A)/n)^{1/2}, (h(B)/n)^{1/2}}. If A and B are also normal, then so is A x B, and hence, by the same argument, we get per(A x B) ≤ h(A x B)/n^2 ≤ min{h(A)/n, h(B)/n}. If equality holds, then since n^2 ≠ 2, A x B must be a permutation matrix. Hence A and B are both permutation matrices. The converse is obvious.

Remark 3.31: In [2], Brualdi proved that for A ∈ H_n and B ∈ H_m, per(A x B) ≤ K_{mn}(per A)^n(per B)^m, where K_{mn} is a certain constant depending on m and n. He conjectured that K_{mn} = (mn)!/((m!)^n (n!)^m). If we restrict A and B to be d.s. matrices and put m = n, then it is natural to compare the bound given by this conjecture with that given in Proposition 3.30. We remark that in general they are not comparable. For example, if A and B are permutation matrices, then Proposition 3.30 yields per(A x B) ≤ 1, while Brualdi's conjecture yields per(A x B) ≤ (n^2)!/(n!)^{2n}, and it is known that (n^2)!/(n!)^{2n} ≥ 1 with equality iff n = 1 [2, §3.7]. On the other hand, if A = B = J_n, then Proposition 3.30 yields per(A x B) ≤ 1/n, while Brualdi's conjecture yields per(A x B) ≤ ((n^2)!/(n!)^{2n})(n!/n^n)^n(n!/n^n)^n = (n^2)!/n^{2n^2} = (n^2)!/(n^2)^{n^2}. Since per(A x B) = per(J_n x J_n) = per(J_{n^2}) = (n^2)!/(n^2)^{n^2}, clearly (n^2)!/(n^2)^{n^2} < 1/n.

BIBLIOGRAPHY

[1] G. Birkhoff, Tres observaciones sobre el algebra lineal, Univ. Nac. de Tucumán, Rev. Ser. A 5 (1946), 147-151.

[2] R. A. Brualdi, Permanent of the Direct Product of Matrices, Pacific J. of Math. 16, No. 3 (1966), 471-482.

[3] R. A. Brualdi and H. W. Wielandt, A Spectral Characterization of Stochastic Matrices, Linear Algebra and Its Applications 1, No. 1 (1968), 65-71.

[4] P. J. Eberlein, Remarks on the van der Waerden Conjecture, II, Linear Algebra and Its Applications 2 (1969), 311-320.

[5] P. J.
Eberlein and G. S. Mudholkar, Some Remarks on the van der Waerden Conjecture, J. Comb. Theory (1968), 386-396.

[6] G. Frobenius, Über Matrizen aus nicht negativen Elementen, S.-B. Kgl. Preuss. Akad. Wiss. (1912), 456-477.

[7] J. Kapoor, Matrices which, under Row Permutations, Give Specified Values of Certain Matrix Functions, University of British Columbia Ph.D. thesis, June 1970.

[8] D. König, Theorie der endlichen und unendlichen Graphen, Chelsea, New York, 1950.

[9] C. C. MacDuffee, The Theory of Matrices, rev. ed., Chelsea, New York, 1946.

[10] M. Marcus and H. Minc, Some Results on Doubly Stochastic Matrices, Proc. of Amer. Math. Soc. 13, No. 4 (1962), 571-579.

[11] M. Marcus and H. Minc, Inequalities for General Matrix Functions, Bull. of Amer. Math. Soc.

[12] M. Marcus and H. Minc, Permanents, Amer. Math. Monthly 72 (1965), 577-591.

[13] M. Marcus and H. Minc, On a Conjecture of B. L. van der Waerden, Proc. Camb. Phil. Soc. 63 (1967), 305-309.

[14] M. Marcus and H. Minc, A Survey of Matrix Theory and Matrix Inequalities, Prindle, Weber and Schmidt.

[15] M. Marcus, H. Minc, and B. Moyls, Some Results on Non-Negative Matrices, J. of Research of the National Bureau of Standards 65B, No. 3 (1961), 205-209.

[16] M. Marcus and M. Newman, On the Minimum of the Permanent of a Doubly Stochastic Matrix, Duke Math. J. 26, No. 1 (1959), 61-72.

[17] M. Marcus and M. Newman, Inequalities for the Permanent Function, Ann. of Math. (2) 75 (1962), 47-62.

[18] M. Marcus and R. Ree, Diagonals of Doubly Stochastic Matrices, Quart. J. of Math. Oxford 10 (1959), 296-302.

[19] H. Perfect and L. Mirsky, Spectral Properties of Doubly Stochastic Matrices, Monat. für Math. 69 (1965), 35-57.

[20] H. J. Ryser, Combinatorial Mathematics, Carus Monograph 14, Wiley, New York, 1963.

[21] R. Sinkhorn and P. Knopp, Problems Involving Diagonal Products in Non-Negative Matrices, Trans. of Amer. Math. Soc. 136 (1969), 67-75.

[22] B. L. van der Waerden, Aufgabe 45, Jber. Deutsch. Math.-Verein. 35 (1926), 117.
