ABSTRACT

Title of thesis: AN ANALYSIS OF IMPROVEMENTS TO BUCHBERGER'S ALGORITHM FOR GRÖBNER BASIS COMPUTATION

Clinton E. McKay, Master of Arts, 2004

Thesis directed by: Professor William W. Adams, Department of Mathematics

Improvements to Buchberger's Algorithm generally seek either to define a criterion for the removal of unnecessary S-pairs or to describe a strategy for improving the choices which one must make in the course of the algorithm. This paper surveys significant improvements to Buchberger's original algorithm for Gröbner basis computation, including the Gebauer-Möller Criteria, the "Sugar" strategy, and Jean-Charles Faugère's F4 algorithm. Since Faugère's F4 is generally accepted as a particularly efficient approach to Gröbner basis computation, we test several variants of the F4 algorithm on a variety of benchmark ideals in an effort to judge the efficiency of the Gröbner basis computation process, while also being mindful of the memory constraint issues occurring in computer algebra.

AN ANALYSIS OF IMPROVEMENTS TO BUCHBERGER'S ALGORITHM FOR GRÖBNER BASIS COMPUTATION

by Clinton E. McKay

Thesis submitted to the Faculty of the Graduate School of the University of Maryland, College Park in partial fulfillment of the requirements for the degree of Master of Arts, 2004

Advisory Committee: Professor William W. Adams, Chair/Advisor; Professor Lawrence C. Washington; Assistant Professor Niranjan Ramachandran

© Copyright by Clinton E. McKay, 2004

ACKNOWLEDGMENTS

I owe my gratitude to all the people who have made this thesis possible. I would especially like to thank my advisor, Professor William Adams, for the insights and guidance he provided throughout this effort. He has always been available to provide help and advice. It has been a pleasure to have had the opportunity to learn from a professor who is extremely knowledgeable and also incredibly patient with those who are less knowledgeable. Special thanks also to Alyson Reeves for her insights and for providing the implementations of the F4 algorithm, which I used extensively in my research. My most sincere thanks also to Mom, Dad, Jessica Fitzgerald, Boo Barkee, M. E. Yao, Jim Fennell, Charlie Toll, Jacquie Holmgren, Jim Schatz, Jack Clark, Bob Boner, and Harry Rosenzweig.

TABLE OF CONTENTS

List of Tables
1 Introduction
2 Buchberger's Algorithm
3 The Gebauer-Möller Criteria
4 The "Sugar" Strategy
5 Faugère's F4 Algorithm
6 Variants of F4 - Test Data Analysis
7 Conclusion
A Benchmark Polynomial Ideals
Bibliography

LIST OF TABLES

2.1 Buchberger's Algorithm
3.1 Gebauer and Möller's Improved Buchberger Algorithm
5.1 Faugère's F4 Algorithm
5.2 Reduction Subalgorithm
5.3 SymbolicPreprocessing Subalgorithm
5.4 Modified F4 Algorithm
5.5 Modified SymbolicPreprocessing Subalgorithm
5.6 Simplify Subalgorithm
6.1 Cyclic 6
6.2 Cyclic 7
6.3 Cyclic 8
6.4 Katsura 7
6.5 Katsura 8
6.6 Katsura 9
6.7 Katsura 10
6.8 hCyclic 8
6.9 f633
A.1 Cyclic6
A.2 Cyclic7
A.3 Cyclic8
A.4 Cyclic9
A.5 Katsura7
A.6 Katsura8
A.7 Katsura9
A.8 Katsura10
A.9 hCyclic8
A.10 f633

Chapter 1

Introduction

Gröbner bases provide a means for studying polynomial ideals in many variables, and the theory of Gröbner bases may be applied to problems in a variety of disciplines including mathematics, computer science, and engineering. Bruno Buchberger first developed an algorithm for computing Gröbner bases in 1965, and since then there have been numerous efforts to improve the efficiency of the algorithm. These efforts have in many cases proven successful, and improved versions of Buchberger's Algorithm are utilized in nearly all computer algebra systems.

Before beginning our analysis of these improvements to Buchberger's Algorithm, we first introduce the basic terminology and results necessary to study Gröbner bases. Let K[x1,...,xn] be the ring of polynomials in n variables with coefficients from the field K. We denote the set of monomials, also referred to as power products, by Tn = {x1^α1 ··· xn^αn | αi ∈ N, i = 1,...,n}. Let x^α = x1^α1 ··· xn^αn for non-negative integers α1,...,αn, and let deg(α) = α1 + ··· + αn be the total degree of the monomial x^α. Given a set of nonzero polynomials F = {f1,...,fs} in K[x1,...,xn], we denote the ideal generated by F by ⟨F⟩.

Definition 1.1 A term order >T on the monomials of K[x1,...,xn] is a total order such that for any exponents α, β, γ ∈ N^n, x^α ≥T 1, and x^α >T x^β implies x^α x^γ >T x^β x^γ.

We now provide three examples of commonly utilized term orders:

Example 1.2 Lexicographic (lex): x^α >lex x^β if the leftmost nonzero entry of α - β is positive.

Example 1.3 Degree Lexicographic (degl): x^α >degl x^β if deg(α) > deg(β), or deg(α) = deg(β) and x^α >lex x^β.

Example 1.4 Degree Reverse Lexicographic (rev): x^α >rev x^β if deg(α) > deg(β), or deg(α) = deg(β) and the rightmost nonzero entry of α - β is negative.
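To make the three orders concrete, the following small sketch (not part of the thesis; the helper names lex_gt, degl_gt, and rev_gt are introduced here purely for illustration) compares two monomials given by their exponent vectors, and shows a pair of monomials on which degl and rev disagree.

    def lex_gt(a, b):
        """x^a > x^b in the lexicographic order: the leftmost nonzero
        entry of a - b is positive."""
        for ai, bi in zip(a, b):
            if ai != bi:
                return ai > bi
        return False

    def degl_gt(a, b):
        """Degree lexicographic: compare total degrees, break ties with lex."""
        if sum(a) != sum(b):
            return sum(a) > sum(b)
        return lex_gt(a, b)

    def rev_gt(a, b):
        """Degree reverse lexicographic: compare total degrees; on a tie the
        rightmost nonzero entry of a - b must be negative."""
        if sum(a) != sum(b):
            return sum(a) > sum(b)
        for ai, bi in zip(reversed(a), reversed(b)):
            if ai != bi:
                return ai < bi
        return False

    # x1*x3^2 versus x2^3 in K[x1, x2, x3]: equal total degree, but the
    # orders disagree on which monomial is larger.
    a, b = (1, 0, 2), (0, 3, 0)
    print(lex_gt(a, b), degl_gt(a, b), rev_gt(a, b))   # True True False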
Definition 1.5 Fix a term order >, and let f be a polynomial in K[x1,...,xn] with f ≠ 0. We may write f = a1 x^α1 + a2 x^α2 + ··· + ar x^αr, where 0 ≠ ai ∈ K, x^αi ∈ Tn, and x^α1 > x^α2 > ··· > x^αr. Then we define LT(f) = a1 x^α1, the leading term of f, and LM(f) = x^α1, the leading monomial of f.

Definition 1.6 Given a set F = {f1,...,fs} of polynomials in K[x1,...,xn], we define the set LT(F) = {LT(f) | f ∈ F}, and similarly we define the set LM(F) = {LM(f) | f ∈ F}.

Definition 1.7 Given a subset S of K[x1,...,xn], we define the leading term ideal of S to be the ideal LTI(S) = ⟨LT(s) | s ∈ S⟩.

Definition 1.8 A Gröbner basis G of a polynomial ideal I, with respect to >, is a finite set of nonzero polynomials {g1,...,gt} ⊆ I such that LTI(G) = LTI(I). G is a reduced Gröbner basis for I if for all g ∈ G, g is monic and no monomial of g lies in LTI(G \ {g}).

Proposition 1.9 Let I ≠ {0} be an ideal in K[x1,...,xn]. Then, for a given term order, I has a unique reduced Gröbner basis.

The purpose of requiring the elements of G to be monic in Definition 1.8 is to guarantee the uniqueness of the reduced Gröbner basis. The proof of this proposition is given in [1] and [6].

Definition 1.10 Given f, g, h ∈ K[x1,...,xn], with g ≠ 0, we say that f reduces to h modulo g in one step, written f →g h, if and only if LM(g) divides a nonzero term X that appears in f and h = f - (X / LT(g)) g.

Definition 1.11 Let f, h, and f1,...,fs be polynomials in K[x1,...,xn], with fi ≠ 0 (1 ≤ i ≤ s), and let F = {f1,...,fs}. We say that f reduces to h modulo F, denoted f →F+ h, if and only if there exist a sequence of indices i1, i2, ..., it ∈ {1,...,s} and a sequence of polynomials h1,...,h(t-1) ∈ K[x1,...,xn] such that f →fi1 h1 →fi2 h2 →fi3 ··· →fi(t-1) h(t-1) →fit h.

Definition 1.12 A polynomial r is called reduced with respect to a set of non-zero polynomials F = {f1,...,fs} if r = 0 or no monomial that appears in r is divisible by any one of the LM(fi), i = 1,...,s. In other words, r cannot be reduced modulo F.

Definition 1.13 If f →F+ r and r is reduced with respect to F, then we call r a remainder for f with respect to F.

The reduction process enables us to define a division algorithm analogous to the well-known Division Algorithm in one variable. Given f, f1,...,fs ∈ K[x1,...,xn] with fi ≠ 0 (1 ≤ i ≤ s), this algorithm returns quotients u1,...,us ∈ K[x1,...,xn] and a remainder r ∈ K[x1,...,xn] such that f = u1 f1 + ··· + us fs + r.

Theorem 1.14 Let I be a non-zero ideal of K[x1,...,xn]. The following statements are equivalent for a set of non-zero polynomials G = {g1,...,gt} ⊆ I.
1. G is a Gröbner basis for I.
2. f ∈ I if and only if f reduces to 0 modulo G.
3. f ∈ I if and only if f = Σ_{i=1}^{t} hi gi with LM(f) = max_{1≤i≤t}(LM(hi) LM(gi)).

The preceding theorem and its proof may be found in Adams and Loustaunau [1].

Definition 1.15 Let 0 ≠ f, g ∈ K[x1,...,xn], and let L = LCM(LM(f), LM(g)). The polynomial S(f,g) = (L / LT(f)) f - (L / LT(g)) g is called the S-polynomial of f and g.

Theorem 1.16 (Buchberger) Let G = {g1,...,gt} be a set of non-zero polynomials in K[x1,...,xn]. Then G is a Gröbner basis for the ideal I = ⟨g1,...,gt⟩ if and only if for all i ≠ j, S(gi,gj) →G+ 0.

Again, this theorem along with its proof may be found in [1].
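Buchberger's Theorem turns directly into the completion procedure developed in the next chapter. As a concrete illustration (this sketch is my own, written with SymPy, and is not the implementation tested later in the thesis), s_polynomial below follows Definition 1.15 and buchberger naively reduces every S-polynomial, appending each nonzero remainder; the function names are introduced here for illustration only.

    from sympy import symbols, LM, LT, lcm, reduced, expand

    def s_polynomial(f, g, gens, order):
        """S(f, g) = (L/LT(f))*f - (L/LT(g))*g with L = LCM(LM(f), LM(g))."""
        L = lcm(LM(f, *gens, order=order), LM(g, *gens, order=order))
        return expand(L / LT(f, *gens, order=order) * f -
                      L / LT(g, *gens, order=order) * g)

    def buchberger(F, gens, order='grevlex'):
        """Naive completion: reduce every S-polynomial, keep nonzero remainders."""
        G = list(F)
        pairs = [(i, j) for i in range(len(G)) for j in range(i + 1, len(G))]
        while pairs:
            i, j = pairs.pop()
            s = s_polynomial(G[i], G[j], gens, order)
            if s == 0:
                continue
            _, h = reduced(s, G, *gens, order=order)   # remainder modulo G
            if h != 0:
                pairs.extend((k, len(G)) for k in range(len(G)))
                G.append(h)
        return G

    x, y, z = symbols('x y z')
    print(buchberger([x*y - z, y*z - x], (x, y, z)))
    # expected: [x*y - z, y*z - x, x**2 - z**2]

On this small input the loop stops after adding the single polynomial x^2 - z^2; no criteria are used to skip pairs, which is exactly the inefficiency the later chapters address.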
Chapter 2

Buchberger's Algorithm

As a result of Buchberger's Theorem, there is a definitive, though not necessarily desirable, process which one may go through to compute a Gröbner basis for an ideal I. In particular, the process begins with reducing an S-polynomial formed from the generators of the ideal. Then, if non-zero, the remainder is appended to the list of polynomials in our generating set. Additional S-polynomials are formed, and the process then repeats until there are enough polynomials in the generating set to reduce all S-polynomials to zero. Using the Noetherian property of K[x1,...,xn], we know that this process will always terminate. The basic Buchberger Algorithm is as follows:

INPUT: F = {f1,...,fs} ⊆ K[x1,...,xn] with fi ≠ 0 (1 ≤ i ≤ s)
OUTPUT: G = {g1,...,gt}, a Gröbner basis for ⟨f1,...,fs⟩
INITIALIZATION: G := F, SP := {{fi, fj} | fi ≠ fj ∈ G}
WHILE SP ≠ ∅ DO
  Choose any (f,g) ∈ SP
  SP := SP \ {(f,g)}
  S(f,g) →G+ h, where h is reduced with respect to G
  IF h ≠ 0 THEN
    SP := SP ∪ {(u,h) | for all u ∈ G}
    G := G ∪ {h}

Table 2.1: Buchberger's Algorithm

This algorithm for computing the Gröbner basis of an ideal is often extremely computationally intense, since the number of S-polynomials may grow to be quite large. Also, an unfortunate choice as to the order in which the S-polynomials are considered could result in drastically more S-polynomial computations and reductions than would have been the case had a "better" choice been made. Although the reduction of a single S-polynomial may be carried out rather quickly, the process of reducing all S-polynomials will often be quite time consuming when there are large quantities of S-polynomials needing to be reduced. Thus, Buchberger and other leading mathematicians sought to develop methods for predicting when S-polynomials would reduce to zero without actually computing them.

Chapter 3

The Gebauer-Möller Criteria

Though nearly twenty years old, Rüdiger Gebauer and H. Michael Möller's paper "On an Installation of Buchberger's Algorithm" presents an improved version of Buchberger's Algorithm which remains an important benchmark against which the most modern algorithms for Gröbner basis computation are compared. Gebauer and Möller build on Buchberger's prior achievements and describe practical criteria for detecting superfluous S-polynomial reductions. Before introducing these criteria, we first define the concept of a syzygy and present another equivalent condition for being a Gröbner basis of an ideal:

Definition 3.1 Given a set of nonzero polynomials F = {f1,...,fr} in K[x1,...,xn], we define a syzygy of the tuple (LM(f1),...,LM(fr)) to be any tuple (h1,...,hr) of K[x1,...,xn]^r such that h1 LM(f1) + ··· + hr LM(fr) = 0. The set of all syzygies of (LM(f1),...,LM(fr)) forms a K[x1,...,xn]-module which we call the first syzygy module of (LM(f1),...,LM(fr)) and denote by S(1).

Theorem 3.2 The first syzygy module has a generating set, L(1), given by L(1) := {Sij | 1 ≤ i < j ≤ r} with

Sij := (LCM(LM(fi), LM(fj)) / LM(fi)) ei - (LCM(LM(fi), LM(fj)) / LM(fj)) ej,

where ek is the kth canonical unit vector of K[x1,...,xn]^r.

The preceding theorem was originally proved by D. K. Taylor in [11]. A proof may also be found in [1].

Theorem 3.3 Let G = {g1,...,gr} ⊆ K[x1,...,xn] \ {0} and I = ⟨G⟩. Then the following conditions are equivalent:
1. G is a Gröbner basis of I.
2. Let L be a generating set of the module of syzygies S(1) := {(h1,...,hr) ∈ K[x1,...,xn]^r | Σ_{i=1}^{r} hi LM(gi) = 0}. Then for each (f1,...,fr) ∈ L, Σ_{i=1}^{r} fi gi →G+ 0.

For the proof of this theorem, we refer the reader to H. M. Möller's paper [10].
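As a small illustration of Theorem 3.2 (again my own SymPy sketch, not part of the thesis), the two cofactors of a Taylor syzygy Sij are just the complementary factors of the least common multiple of the two leading monomials; applying the syzygy to the polynomials themselves yields, up to leading coefficients, the S-polynomial of Definition 1.15, which is how Theorem 3.3 with L = L(1) recovers Buchberger's criterion. The helper name taylor_syzygy is mine, and the example reuses the two polynomials from the previous sketch.

    from sympy import symbols, LM, lcm, expand

    def taylor_syzygy(fi, fj, gens, order='grevlex'):
        """Cofactors (hi, hj) with hi*LM(fi) + hj*LM(fj) = 0 (Theorem 3.2)."""
        L = lcm(LM(fi, *gens, order=order), LM(fj, *gens, order=order))
        return L / LM(fi, *gens, order=order), -L / LM(fj, *gens, order=order)

    x, y, z = symbols('x y z')
    f1, f2 = x*y - z, y*z - x
    h1, h2 = taylor_syzygy(f1, f2, (x, y, z))
    # Applying the syzygy to (f1, f2) gives, up to leading coefficients,
    # the S-polynomial S(f1, f2) of Definition 1.15:
    print(expand(h1*f1 + h2*f2))   # x**2 - z**2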
Definition 3.4 An element gi of a Gröbner basis G is redundant if G' := G \ {gi} is also a Gröbner basis and ⟨G⟩ = ⟨G'⟩.

Definition 3.5 Using the result of Theorem 3.3, we will similarly define an element Sik of L(1) to be redundant if L(1) \ {Sik} still generates S(1).

Definition 3.6 A minimal Gröbner basis for a polynomial ideal I is a Gröbner basis G for I such that no element of G is redundant and all elements of G are monic.

The Gebauer-Möller Criteria, which we present below, applied to a polynomial ideal with a particular term order, enable one to avoid carrying out the reduction of many S-polynomials which, if reduced, would have reduced to zero. From Theorem 3.3 we may seek to eliminate redundant elements in a Gröbner basis, G, by eliminating redundant elements in a basis of the first syzygy module of the leading monomials of G. This practice of avoiding unnecessary S-polynomial reductions has been experimentally shown to improve the efficiency of a Gröbner basis computation. It should be pointed out, however, that use of the Gebauer-Möller Criteria does not necessarily result in obtaining a minimal Gröbner basis. Nevertheless, these criteria are an extremely effective means of producing a nearly minimal, if not minimal, set of generators for a polynomial ideal.

Definition 3.7 To find redundant elements in a basis for S(1), we define the module of syzygies for L(1), which we denote S(2).

Proof. We first observe that if LCM{LM(fi), LM(fj), LM(fk)} = max{LCM{LM(fi), LM(fj)}, LCM{LM(fi), LM(fk)}, LCM{LM(fj), LM(fk)}}, then one of the three components of Sijk is a constant, and MS(i,j,k) can be expressed in terms of lesser syzygies with respect to <1. Suppose that Sik may be written in terms of the syzygies Sij and Sjk; then we may remove Sik from L(1), since L(1) \ {Sik} still generates S(1). If M holds for (fi,fk), then there exists j < k such that LCM{LM(fj), LM(fk)} properly divides LCM{LM(fi), LM(fk)}; in fact LCM{LM(fi), LM(fk)} = LCM{LM(fi), LM(fj), LM(fk)} and j ≠ i. So either Sijk (if i < j) or Sjik (if j < i) has Sik for its maximal syzygy: Sjk <1 Sik because LCM{LM(fj), LM(fk)} properly divides LCM{LM(fi), LM(fk)}.

Throughout the example below we use the degree reverse lexicographic order with x0 > x1 > x2 > x3.

INPUT: Ld, G as defined in Table 5.1, and the matrices (Fi), i = 1,...,(d-1)
OUTPUT: A matrix F of polynomials in K[x1,...,xn]
INITIALIZATION: F := {mult(Simplify(m, f, (Fi)i=1,...,(d-1))) | (m,f) ∈ Ld}, Done := LM(F)
WHILE M(F) ≠ Done DO
  Choose an m ∈ M(F) \ Done
  Done := Done ∪ {m}
  IF m is reducible modulo G THEN
    m = m' · LM(g) for some g ∈ G and some m' ∈ K[x1,...,xn]
    F := F ∪ {mult(Simplify(m', g, (Fi)i=1,...,(d-1)))}

Table 5.5: Modified SymbolicPreprocessing Subalgorithm

INPUT: t a monomial in K[x1,...,xn], f a polynomial in K[x1,...,xn], and the matrices (Fi), i = 1,...,(d-1)
OUTPUT: A tuple (a, b) where a is a monomial in K[x1,...,xn] and b is a polynomial in K[x1,...,xn]
FOR u ∈ list of divisors of t DO
  IF ∃ j (1 ≤ j < d) such that (u·f) ∈ Fj THEN
    F̃j is the row echelon form of Fj with respect to the ordering <
    there exists a unique p ∈ F̃j+ such that LM(p) = LM(u·f)
    IF u ≠ t THEN
      RETURN Simplify(t/u, p, (Fi)i=1,...,(d-1))
    ELSE
      RETURN (1, p)
RETURN (t, f)

Table 5.6: Simplify Subalgorithm

Let F = {f1 = x0x1x2x3 - 1, f2 = x0x1x2 + x0x1x3 + x0x2x3 + x1x2x3, f3 = x0x1 + x1x2 + x0x3 + x2x3, f4 = x0 + x1 + x2 + x3}. After two iterations through the first WHILE loop, we enter the second WHILE loop and have G = {f4}, SP1 = {(f3,f4)}, and L1 = {(1,f3),(x1,f4)}.
We now enter SymbolicPreprocessing(L1, G, ∅); F1 = L1, Done = LM(F1) = {x0x1}, and M(F1) = {x0x3, x0x1, x1^2, x1x2, x1x3, x2x3}. We choose an element in M(F1) \ Done, say x0x3; x0x3 is reducible modulo G, so we now have Done = {x0x1, x0x3}, F1 = F1 ∪ {x3f4}, and M(F1) = M(F1) ∪ {x3^2}. Since all other elements of M(F1) are not reducible modulo G, SymbolicPreprocessing returns F1 = {x3f4, f3, x1f4}. In matrix form, we have:

[ 0 0 0 1 1 1 1 ]
[ 1 0 1 1 0 1 0 ]
[ 1 1 1 0 1 0 0 ]

where the rows correspond to the elements of F1 and the columns correspond to the terms in M(F1). Since x3f4 is the first polynomial listed in F1, the first row in the matrix corresponds to the polynomial x3f4. Similarly, since x0x1 is the greatest term in M(F1) with respect to our degree reverse lexicographic term order, the first column in the matrix corresponds to the x0x1 term. Now we reduce our matrix to row echelon form and obtain:

[ 0 0 0 1 1 1 1 ]
[ 1 0 1 0 -1 0 -1 ]
[ 0 1 0 0 2 0 1 ]

which implies that F̃1 = {f5 = x0x3 + x1x3 + x2x3 + x3^2, f6 = x0x1 + x1x2 - x1x3 - x3^2, f7 = x1^2 + 2x1x3 + x3^2}. Since x0x1, x0x3 ∈ LM(F1), we have F̃1+ = {f7}, and now G = {f4, f7}. Note that we do not need to reduce the pair (f4, f7) because the least common multiple of the leading monomials is x0x1^2, which is a multiple of x0x1. Recall that x0x1 is the least common multiple of the leading monomials of the pair (f3, f4). Thus, criterion M of Gebauer-Möller holds for (f4, f7). At this point, we temporarily exit the second WHILE loop, but remain within the first WHILE loop.

We now consider f2 for the first time, and as we re-enter the second WHILE loop, we have SP2 = {(f2, f4)}. Thus L2 = {(1, f2), (x1x2, f4)}. During SymbolicPreprocessing, we attempt to simplify (1, f2) and (x1x2, f4) with F1. We see that x1f4 ∈ F1 and LM(f6) = LM(x1f4) = x0x1, so Simplify(x1x2, f4, F1) = (x2, f6). At this point, F2 = {f2, x2f6}, M(F2) = {x0x1x2, x1x2^2, x0x1x3, x0x2x3, x1x2x3, x2x3^2}, and Done = LM(F2) = {x0x1x2}. Now we choose any element of M(F2) \ Done. We select x0x1x3, which is reducible modulo G. Indeed, x0x1x3 = x1x3 · LM(f4), so we call Simplify(x1x3, f4, F1). In the Simplify subalgorithm, we initially choose u = x3, a choice that results in the subalgorithm returning Simplify(x1, f5, F1). Then, after carrying out Simplify(x1, f5, F1), we return the tuple (x1, f5). The mult function is immediately applied to this tuple, thereby yielding x1f5, which is appended to F2. After testing the remaining elements of M(F2) for reducibility modulo G, we find that only x0x2x3 and x1^2x3 are reducible. After going through the simplification process for each, we ultimately obtain F2 = {x2f5, x3f7, x1f5, f2, x2f6}. Because of the new elements appended to F2, x2^2x3, x2x3^2, and
x3^3 are added to M(F2). These new elements of M(F2) are not reducible modulo G. Therefore, SymbolicPreprocessing returns F2 = {x2f5, x3f7, x1f5, f2, x2f6}, or in matrix form:

[ 0 0 0 0 1 1 1 0 1 0 ]
[ 0 0 0 1 0 0 0 2 0 1 ]
[ 0 0 1 1 0 1 0 1 0 0 ]
[ 1 0 1 0 1 1 0 0 0 0 ]
[ 1 1 0 0 0 -1 0 0 -1 0 ]

where the rows correspond to the elements of F2 and the columns correspond to the terms in M(F2). Since x2f5 is the first polynomial listed in F2, the first row in the matrix corresponds to the polynomial x2f5. Similarly, since x0x1x2 is the greatest term in M(F2) with respect to our degree reverse lexicographic term order, the first column in the matrix corresponds to the x0x1x2 term. Now we reduce our matrix to row echelon form and obtain:

[ 0 0 0 0 1 1 1 0 1 0 ]
[ 0 0 0 1 0 0 0 2 0 1 ]
[ 0 0 1 0 0 1 0 -1 0 -1 ]
[ 1 0 0 0 0 -1 -1 1 -1 1 ]
[ 0 1 0 0 0 0 1 -1 0 -1 ]

which implies that F̃2 = {f8 = x0x2x3 + x1x2x3 + x2^2x3 + x2x3^2, f9 = x1^2x3 + 2x1x3^2 + x3^3, f10 = x0x1x3 + x1x2x3 - x1x3^2 - x3^3, f11 = x0x1x2 - x1x2x3 - x2^2x3 + x1x3^2 - x2x3^2 + x3^3, f12 = x1x2^2 + x2^2x3 - x1x3^2 - x3^3}. Since x0x2x3, x1^2x3, x0x1x3, x0x1x2 ∈ LM(F2), we have F̃2+ = {f12}, and now G = {f4, f7, f12}. Note that we do not need to reduce the pair (f4, f12) because the least common multiple of the leading monomials is x0x1x2^2. As was the case for the pair (f4, f7), criterion M of Gebauer-Möller holds for (f4, f12). At this point, we temporarily exit the second WHILE loop, but remain within the first WHILE loop.

In the next step, we consider f1 for the first time, and as we re-enter the second WHILE loop, we have SP3 = {(f1,f4),(f7,f12)}. Thus L3 = {(1,f1),(x1x2x3,f4),(x2^2,f7),(x1,f12)}. Then during SymbolicPreprocessing, we recursively call Simplify: Simplify(x1x2x3, f4) = Simplify(x2x3, f6) = Simplify(x3, f11) = (x3, f11). We now have F3 = {f1, x3f11, x2^2f7, x1f12}. Then, after carrying out the remainder of SymbolicPreprocessing, we ultimately obtain F3 = {f1, x3f11, x2^2f7, x1f12, x3f12, x3f9}. After constructing the matrix F3 and reducing to row echelon form, as was done for F1 and F2, we see that F̃3 = {f13 = x1^2x2^2 - x2^2x3^2 + 2x1x3^3 + 2x3^4, f14 = x0x1x2x3 - 1, f15 = -x1x2x3^2 - x2^2x3^2 + x1x3^3 - x2x3^3 + x3^4 + 1, f16 = x1x2^2x3 + x2^2x3^2 - x1x3^3 - x3^4, f17 = x1^2x3^2 + 2x1x3^3 + x3^4}. Thus, an entire row in the matrix F3 reduces to zero. In other words, the rank of F3 is five, as is demonstrated by the fact that F̃3 contains five polynomials. After comparing the leading monomials of F̃3 with LM(F3), we see that F̃3+ = {f15, f17}, and now G = {f4, f7, f12, f15, f17}. At this point, F = ∅, but the algorithm continues since new elements are added to SP via the updatePairs subalgorithm. Indeed, many more iterations through the "WHILE SP ≠ ∅" loop are required before a Gröbner basis for F will be obtained.

In addition to the two modifications discussed above, another modification to F4 that could improve the efficiency of a Gröbner basis computation would be to redefine the Sel(SP) function. An obvious way to redefine Sel(SP) would be to have the function select the subset of pairs in SP with least sugar. Although F4 is known to be an efficient algorithm for Gröbner basis computation, it is rather unclear what effect various modifications to F4 will have on the efficiency of the algorithm. F4 is now the default algorithm used for Gröbner basis computation in most computer algebra systems, so certainly Faugère's algorithm has been well received. But, as we mention above, there are several variants of F4. We will now turn our attention to investigating how different variants of F4 compare through testing each variant on a variety of benchmark ideals.
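The mechanics of the example can be summarized in a short sketch that strings together the two halves of one F4 round: symbolic preprocessing in the style of Tables 5.1 and 5.3, followed by row reduction of the resulting coefficient matrix (Table 5.2). The code below is my own illustration in Python with SymPy; it fixes the degree reverse lexicographic order, works over the rationals rather than GF(31991), and deliberately omits Simplify, the pair update, and the outer loop, so it is not the implementation whose timings appear in the next chapter. The names f4_round, lm_exp, and as_expr are introduced here.

    from sympy import symbols, expand, Poly, Matrix, prod
    from sympy.polys.orderings import grevlex

    def as_expr(e, gens):
        """Exponent tuple -> monomial expression."""
        return prod(x**k for x, k in zip(gens, e))

    def lm_exp(f, gens):
        """Leading monomial of f (grevlex), as an exponent tuple."""
        return max(Poly(f, *gens).monoms(), key=grevlex)

    def f4_round(L, G, gens):
        """One F4 round: symbolic preprocessing, then row reduction."""
        F = [expand(m * f) for m, f in L]               # multiply the selected pairs out
        done = {lm_exp(f, gens) for f in F}
        todo = {e for f in F for e in Poly(f, *gens).monoms()} - done
        while todo:                                     # SymbolicPreprocessing
            t = todo.pop()
            done.add(t)
            for g in G:
                e = lm_exp(g, gens)
                if all(a <= b for a, b in zip(e, t)):   # LM(g) divides t
                    h = expand(as_expr(t, gens) / as_expr(e, gens) * g)
                    F.append(h)
                    todo |= set(Poly(h, *gens).monoms()) - done
                    break
        cols = sorted(done, key=grevlex, reverse=True)  # largest monomial first
        rows = [[dict(Poly(f, *gens).terms()).get(c, 0) for c in cols] for f in F]
        R, _ = Matrix(rows).rref()                      # Reduction step
        old = {lm_exp(f, gens) for f in F}
        new = []
        for i in range(R.rows):
            p = expand(sum(R[i, j] * as_expr(cols[j], gens) for j in range(R.cols)))
            if p != 0 and lm_exp(p, gens) not in old:   # F+: rows with a new leading monomial
                new.append(p)
        return new

    # First round of Example 5.3: G = {f4}, L1 = {(1, f3), (x1, f4)}.
    x0, x1, x2, x3 = symbols('x0 x1 x2 x3')
    f3 = x0*x1 + x1*x2 + x0*x3 + x2*x3
    f4 = x0 + x1 + x2 + x3
    print(f4_round([(1, f3), (x1, f4)], [f4], (x0, x1, x2, x3)))
    # expected: [x1**2 + 2*x1*x3 + x3**2]   (f7 in the example above)

Running this on the first selection of the example reproduces f7 = x1^2 + 2x1x3 + x3^2 as the only row with a new leading monomial, matching the hand computation above; exact arithmetic over the rationals is used here only for clarity.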
Chapter 6

Variants of F4 - Test Data Analysis

For this analysis, we consider ten benchmark ideals and eight variants of Faugère's F4 algorithm, always choosing GF(31991) for the coefficient field. The author carried out all tests using an UltraSparc 650MHz processor with 1024M of memory. The ten ideals studied are: Cyclic 6, Cyclic 7, Cyclic 8, Cyclic 9, Katsura 7, Katsura 8, Katsura 9, Katsura 10, hCyclic 8, and f633. Each may be found in Appendix A. Note that hCyclic 8 and f633 are homogeneous ideals, whereas the others are not.

The eight variants of F4 that we compare are all combinations of the three modifications discussed in the previous section. We number the variants as follows:

1. F4 with no modification
2. Reduce all rows
3. Use sugar
4. Check for deletable pairs
5. Reduce all rows and use sugar
6. Reduce all rows and check for deletable pairs
7. Use sugar and check for deletable pairs
8. Reduce all rows, use sugar, and check for deletable pairs

Note that in Example 5.3 discussed in the previous section, we use variant 6. For each test case, we observe the following:

time: the time (in seconds) required for the Gröbner basis computation
spairs: the number of S-pairs considered
maxr: the maximum number of rows appearing in the matrix at any time during the computation
maxc: the maximum number of columns appearing in the matrix at any time during the computation
GBsz: the Gröbner basis size (i.e., the number of polynomials in the Gröbner basis found)

variant:  1      2      3      4      5      6      7      8
time:     0.168  0.231  0.165  0.166  0.230  0.233  0.151  0.226
spairs:   377    377    380    377    380    377    380    380
maxr:     144    139    148    144    139    139    148    139
maxc:     173    168    173    173    168    168    173    168
GBsz:     111    111    104    111    104    111    104    104

Table 6.1: Cyclic 6

From the data in Table 6.1, we observe that all eight variants of F4 produce a Gröbner basis in a fraction of a second. Although the timings are extremely close for all eight variants, those variants that include use of the sugar strategy each produce a smaller Gröbner basis for Cyclic 6. It is also the case that variants of F4 which include the reduce all rows option result in a Gröbner basis while requiring less computer memory. We can conclude that the variants using the reduce all rows option require less computer memory because the reduce all rows option reduces the maximum size of the matrix required to carry out the computation.

variant:  1      2      3      4      5      6      7      8
time:     9.986  14.87  9.200  9.325  13.16  14.93  8.724  13.31
spairs:   2682   2682   2678   2682   2678   2682   2678   2678
maxr:     954    842    1097   954    883    842    1097   883
maxc:     1061   875    1211   1061   877    875    1211   877
GBsz:     592    592    556    592    556    592    556    556

Table 6.2: Cyclic 7

For the Cyclic 7 problem, we observe that a Gröbner basis is computed in less than 15 seconds for all eight variants of F4. Again we observe that the variants that include use of the sugar strategy produce a smaller Gröbner basis, whereas variants that include the reduce all rows option require a significantly smaller matrix to carry out the Gröbner basis computation. Although variants that make use of the reduce all rows option consistently use less computer memory, this advantage is offset by the fact that such variants are noticeably slower than other variants.

In the case of the Cyclic 8 problem, our results are consistent with the results previously seen in the Cyclic 6 and Cyclic 7 problems; however, our observed differences between the variants have grown in number and are increasingly magnified. Use of the sugar strategy now results in a Gröbner basis with 300+ fewer polynomials than other variants.

variant:  1      2      3      4      5      6      7      8
time:     261.2  443.2  190.1  237.2  309.7  443.7  157.2  309.3
spairs:   7588   7588   6466   7586   6466   7588   6415   6462
maxr:     4908   3733   4659   4908   3087   3733   4659   3087
maxc:     5614   4394   5863   5614   4169   4394   5863   4169
GBsz:     1516   1516   1212   1515   1212   1517   1172   1208

Table 6.3: Cyclic 8

The matrices used to compute a Gröbner basis for variants using the reduce all rows option have 1000+ fewer rows and columns than the matrices of other variants.
Although checking for deletable pairs did not seem to improve the performance of the algorithm in the cases of Cyclic 6 and Cyclic 7, here we observe that this modified version of F4 can have positive effects on the algorithm, particularly when combined with sugar and/or reduce all rows. Note that variant 7 is the most efficient variant, finding a Gröbner basis 32.9 seconds faster than variant 3, the second most efficient variant for this benchmark problem. Note also that the computer memory saving effect of the reduce all rows option is greatest in variants 5 and 8, the two variants where the reduce all rows option is used in conjunction with the sugar strategy.

We do not include a table for the tests on the Cyclic 9 problem, since all eight variants of F4 failed to find a Gröbner basis for Cyclic 9 due to insufficient memory. Relative to Cyclic 7, we observed above a very substantial increase in the size of the matrix used to compute a Gröbner basis for Cyclic 8. As one would expect, when transitioning from the Cyclic 8 problem to the Cyclic 9 problem, the matrix grows considerably larger once more. Although the reduce all rows option helps to limit the growth of the matrix, the variants of F4 that include this option all failed to contain the size of the matrix enough to allow for a Gröbner basis computation for the Cyclic 9 problem within the constraints of the available 1024M of computer memory.

variant:  1      2      3      4      5      6      7      8
time:     0.889  1.174  0.939  0.897  1.168  1.171  0.901  1.226
spairs:   377    377    377    377    377    377    377    377
maxr:     806    510    806    806    510    510    806    510
maxc:     832    554    832    832    554    554    832    554
GBsz:     76     76     76     76     76     76     76     76

Table 6.4: Katsura 7

For the Katsura 7 problem, all eight variants of F4 compute a Gröbner basis in about one second. The different variants have absolutely no effect on the total number of S-pairs considered or the size of the Gröbner basis. All variants that include the reduce all rows option require a significantly smaller matrix to carry out the computation; however, the effect of the reduce all rows option on the size of the matrix is neither enhanced nor lessened by combining this option with the sugar strategy and/or checking for deletable pairs.

variant:  1      2      3      4      5      6      7      8
time:     6.783  7.637  6.831  6.867  7.626  7.722  6.820  7.799
spairs:   881    881    881    881    881    881    881    881
maxr:     1934   1136   1934   1934   1136   1136   1934   1136
maxc:     1970   1172   1970   1970   1172   1172   1970   1172
GBsz:     145    145    145    145    145    145    145    145

Table 6.5: Katsura 8

As with the Katsura 7 problem, we observe little difference between the eight variants of F4 in the case of the Katsura 8 problem. The number of S-pairs computed and the Gröbner basis size are exactly the same for all eight variants. The time required for the Gröbner basis computation is consistently about 7 seconds. Variants which include the reduce all rows option compute a Gröbner basis for Katsura 8 while containing the growth of the matrix necessary to carry out the computation. Indeed, approximately 800 fewer rows and columns are present in the matrices for variants 2, 5, 6, and 8 than is the case for the variants that do not use the reduce all rows option.

variant:  1      2      3      4      5      6      7      8
time:     50.75  53.79  51.32  51.21  53.87  54.60  51.22  53.66
spairs:   1974   1974   1974   1974   1974   1974   1974   1974
maxr:     4528   2436   4528   4528   2436   2436   4528   2436
maxc:     4555   2525   4555   4555   2525   2525   4555   2525
GBsz:     274    274    274    274    274    274    274    274

Table 6.6: Katsura 9

Again, the number of S-pairs computed and the Gröbner basis size are the same for all eight variants of F4.
Variants that include the reduce all rows option compute a Gröbner basis for Katsura 9 while using matrices with 2000+ fewer rows and columns than other variants. The very modest differences in the time that each variant required to carry out the Gröbner basis computation for Katsura 7 grew slightly larger in Katsura 8, and a bit larger again with Katsura 9. It is worth noting that variant 1 has been the most efficient in all three cases, and variant 7 has consistently been the second most efficient. Nevertheless, the slowest variant, variant 6, computes a Gröbner basis for Katsura 9 while requiring only 3.85 additional seconds compared to variant 1.

variant:  1      2      3      4      5      6      7      8
time:     403.7  403.9  403.6  405.1  403.4  404.4  404.2  406.3
spairs:   4464   4464   4464   4464   4464   4464   4464   4464
maxr:     11777  5406   11777  11777  5406   5406   11777  5406
maxc:     11767  5362   11767  11767  5362   5362   11767  5362
GBsz:     539    539    539    539    539    539    539    539

Table 6.7: Katsura 10

As we would expect based on the results of testing Katsura 7, 8, and 9, all eight variants of F4 reduce the same number of S-pairs during the Gröbner basis computation for Katsura 10, and the size of the Gröbner basis found is the same for all variants. Although timings continue to differ only slightly between the eight variants, the variants which do not utilize the reduce all rows option use considerably more computer memory than the variants that do employ it. Indeed, variants 2, 5, 6, and 8 carry out the Gröbner basis computation with matrices less than half the size of other variants, using 5800+ fewer rows and columns while retaining the same efficiency as the other four variants.

variant:  1      2      3      4      5      6      7      8
time:     255.1  377.6  255.4  256.4  376.4  378.7  255.9  380.1
spairs:   7025   7025   7025   7025   7025   7025   7025   7025
maxr:     8583   3611   8583   8583   3611   3611   8583   3611
maxc:     10762  4285   10762  10762  4285   4285   10762  4285
GBsz:     1189   1189   1189   1189   1189   1189   1189   1189

Table 6.8: hCyclic 8

For the homogeneous Cyclic 8 problem, we see that the reduce all rows option has a very positive effect on the size of the matrix necessary for the Gröbner basis computation but also a negative effect on the efficiency of the computation process. All four variants that use the reduce all rows option require matrices that are less than half the size of the matrices required by the other variants, yet the variants using the reduce all rows option need roughly 50% more time to compute the Gröbner basis.

variant:  1      2      3      4      5      6      7      8
time:     0.636  0.560  0.637  0.637  0.562  0.565  0.645  0.563
spairs:   392    392    392    392    392    392    392    392
maxr:     2778   762    2778   2778   762    762    2778   762
maxc:     3110   985    3110   3110   985    985    3110   985
GBsz:     74     74     74     74     74     74     74     74

Table 6.9: f633

In the case of the f633 problem, which is also a homogeneous ideal, the Gröbner basis computation is carried out in less than one second for all eight variants. Although all eight variants are essentially equally efficient, it is interesting to note that the four variants which use the reduce all rows option are modestly faster than the four variants that do not. The matrices used by variants that do not include the reduce all rows option grow to be over three times the size of the matrices needed by variants that do use it. As with the Katsura n problems and the homogeneous Cyclic 8 problem, all variants of F4 find a Gröbner basis for the f633 ideal that is of the same size and results from the reduction of precisely the same number of S-pairs.
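For readers who wish to reproduce this setting on a small input with an off-the-shelf system, the following sketch asks SymPy for the reduced Gröbner basis of the four generators of Example 5.3 over GF(31991) with the degree reverse lexicographic order. This is purely illustrative: SymPy's groebner uses a Buchberger-style algorithm by default and is far slower than the compiled F4 implementations timed above, and the keyword arguments shown follow SymPy's standard polynomial options rather than anything used in the thesis.

    from sympy import symbols, groebner, GF

    x0, x1, x2, x3 = symbols('x0 x1 x2 x3')

    # The four generators of Example 5.3 (the cyclic-4 ideal in x0, ..., x3).
    F = [x0*x1*x2*x3 - 1,
         x0*x1*x2 + x0*x1*x3 + x0*x2*x3 + x1*x2*x3,
         x0*x1 + x1*x2 + x0*x3 + x2*x3,
         x0 + x1 + x2 + x3]

    # Reduced Groebner basis over GF(31991), degree reverse lexicographic order.
    G = groebner(F, x0, x1, x2, x3, order='grevlex', domain=GF(31991))
    print(G)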
Chapter 7

Conclusion

The data above demonstrates that there is not one particular variant of F4 that is always the best choice. For the benchmark ideals considered, it appears that F4 with no modifications often computes a Gröbner basis in the least amount of time; however, in the case of Cyclic 8, this variant is much slower than variant 7. For all test problems considered, variant 7 is never a bad choice with respect to efficiency. Indeed, in cases where variant 7 is not the most efficient, it is only marginally slower than the fastest variant. Thus, when efficiency of the Gröbner basis computation process is of particular concern, the results above suggest that using F4 together with the sugar strategy while also checking for deletable pairs is likely to be a good choice. However, we must recall that all tests were conducted using the degree reverse lexicographic term order, and results will likely differ considerably for other term orders. Also, it is generally rash to assume that efficiency is the only concern during a Gröbner basis computation. As we observe in the case of the Cyclic 9 problem above, the amount of computer memory required to compute a Gröbner basis is also an important issue. Indeed, if the amount of computer memory is insufficient for the computation, a Gröbner basis will not be found. The variants of F4 that include the reduce all rows option typically use less computer memory than other variants of F4, sometimes drastically less. Thus, if one uses a variant of F4 that does not include the reduce all rows option and finds that the Gröbner basis computation of a particular ideal with a given term order requires more computer memory than available resources will allow, the computation might succeed with an F4 variant that does include the reduce all rows option. Unfortunately, in the case of the Cyclic 9 problem, even the memory savings provided through use of the reduce all rows option did not allow for the computation of a Gröbner basis within the bounds of the available 1024M of computer memory.

Based on the results of the Cyclic n problems, we have reason to believe that variants of F4 that use the sugar strategy will often produce a Gröbner basis that is closer to a minimal Gröbner basis (i.e., has fewer redundant polynomials) than other variants. Yet further tests should be done to investigate, in the case of other ideals and/or term orders, whether variants of F4 that include the sugar strategy are always the closest to minimal Gröbner bases. The Cyclic 8 problem for large coefficients was an intractable problem less than a decade ago [7]. Through finding a Gröbner basis for Cyclic 8 over GF(31991) with all eight variants of F4, we demonstrate that Faugère's F4 algorithm is indeed a particularly powerful algorithm for computing Gröbner bases, and F4 has rightly become the default algorithm for most major computer algebra systems.
30 Appendix A Benchmark Polynomial Ideals x0x1x2x3x4x5 ?1 x0x1x2x3x4 +x1x2x3x4x5 +x2x3x4x5x0 +x3x4x5x0x1+ x4x5x0x1x2 +x5x0x1x2x3 x0x1x2x3 +x1x2x3x4 +x2x3x4x5 +x3x4x5x0 +x4x5x0x1+ x5x0x1x2 x0x1x2 +x1x2x3 +x2x3x4 +x3x4x5 +x4x5x0 +x5x0x1 x0x1 +x1x2 +x2x3 +x3x4 +x4x5 +x5x0 x0 +x1 +x2 +x3 +x4 +x5 Table A.1: Cyclic6 x0x1x2x3x4x5x6 ?1 x0x1x2x3x4x5 +x1x2x3x4x5x6 +x2x3x4x5x6x0+ x3x4x5x6x0x1 +x4x5x6x0x1x2 +x5x6x0x1x2x3+ x6x0x1x2x3x4 x0x1x2x3x4 +x1x2x3x4x5 +x2x3x4x5x6 +x3x4x5x6x0+ x4x5x6x0x1 +x5x6x0x1x2 +x6x0x1x2x3 x0x1x2x3 +x1x2x3x4 +x2x3x4x5 +x3x4x5x6 +x4x5x6x0+ x5x6x0x1 +x6x0x1x2 x0x1x2 +x1x2x3 +x2x3x4 +x3x4x5 +x4x5x6 +x5x6x0 +x6x0x1 x0x1 +x1x2 +x2x3 +x3x4 +x4x5 +x5x6 +x6x0 x0 +x1 +x2 +x3 +x4 +x5 +x6 Table A.2: Cyclic7 31 x0x1x2x3x4x5x6x7 ?1 x0x1x2x3x4x5x6 +x1x2x3x4x5x6x7 +x2x3x4x5x6x7x0+ x3x4x5x6x7x0x1 +x4x5x6x7x0x1x2 +x5x6x7x0x1x2x3+ x6x7x0x1x2x3x4 +x7x0x1x2x3x4x5 x0x1x2x3x4x5 +x1x2x3x4x5x6 +x2x3x4x5x6x7 +x3x4x5x6x7x0+ x4x5x6x7x0x1 +x5x6x7x0x1x2 +x6x7x0x1x2x3 +x7x0x1x2x3x4 x0x1x2x3x4 +x1x2x3x4x5 +x2x3x4x5x6 +x3x4x5x6x7+ x4x5x6x7x0 +x5x6x7x0x1 +x6x7x0x1x2 +x7x0x1x2x3 x0x1x2x3 +x1x2x3x4 +x2x3x4x5 +x3x4x5x6 +x4x5x6x7+ x5x6x7x0 +x6x7x0x1 +x7x0x1x2 x0x1x2 +x1x2x3 +x2x3x4 +x3x4x5 +x4x5x6 +x5x6x7 +x6x7x0 +x7x0x1 x0x1 +x1x2 +x2x3 +x3x4 +x4x5 +x5x6 +x6x7 +x7x0 x0 +x1 +x2 +x3 +x4 +x5 +x6 +x7 Table A.3: Cyclic8 x0x1x2x3x4x5x6x7x8 ?1 x0x1x2x3x4x5x6x7 +x1x2x3x4x5x6x7x8 +x2x3x4x5x6x7x8x0+ x3x4x5x6x7x8x0x1 +x4x5x6x7x8x0x1x2 +x5x6x7x8x0x1x2x3+ x6x7x8x0x1x2x3x4 +x7x8x0x1x2x3x4x5 +x8x0x1x2x3x4x5x6 x0x1x2x3x4x5x6 +x1x2x3x4x5x6x7 +x2x3x4x5x6x7x8+ x3x4x5x6x7x8x0 +x4x5x6x7x8x0x1 +x5x6x7x8x0x1x2+ x6x7x8x0x1x2x3 +x7x8x0x1x2x3x4 +x8x0x1x2x3x4x5 x0x1x2x3x4x5 +x1x2x3x4x5x6 +x2x3x4x5x6x7 +x3x4x5x6x7x8+ x4x5x6x7x8x0 +x5x6x7x8x0x1 +x6x7x8x0x1x2+ x7x8x0x1x2x3 +x8x0x1x2x3x4 x0x1x2x3x4 +x1x2x3x4x5 +x2x3x4x5x6 +x3x4x5x6x7+ x4x5x6x7x8 +x5x6x7x8x0 +x6x7x8x0x1 +x7x8x0x1x2+ x8x0x1x2x3 x0x1x2x3 +x1x2x3x4 +x2x3x4x5 +x3x4x5x6 +x4x5x6x7+ x5x6x7x8 +x6x7x8x0 +x7x8x0x1 +x8x0x1x2 x0x1x2 +x1x2x3 +x2x3x4 +x3x4x5 +x4x5x6 +x5x6x7 +x6x7x8+ x7x8x0 +x8x0x1 x0x1 +x1x2 +x2x3 +x3x4 +x4x5 +x5x6 +x6x7 +x7x8 +x8x0 x0 +x1 +x2 +x3 +x4 +x5 +x6 +x7 +x8 Table A.4: Cyclic9 32 ?x0 + 2x27 + 2x26 + 2x25 + 2x24 + 2x23 + 2x22 + 2x21 +x20 ?x1 + 2x7x6 + 2x6x5 + 2x5x4 + 2x4x3 + 2x3x2 + 2x2x1 + 2x1x0 ?x2 + 2x7x5 + 2x6x4 + 2x5x3 + 2x4x2 + 2x3x1 + 2x2x0 +x21 ?x3 + 2x7x4 + 2x6x3 + 2x5x2 + 2x4x1 + 2x3x0 + 2x2x1 ?x4 + 2x7x3 + 2x6x2 + 2x5x1 + 2x4x0 + 2x3x1 +x22 ?x5 + 2x7x2 + 2x6x1 + 2x5x0 + 2x4x1 + 2x3x2 ?x6 + 2x7x1 + 2x6x0 + 2x5x1 + 2x4x2 +x23 ?1 + 2x7 + 2x6 + 2x5 + 2x4 + 2x3 + 2x2 + 2x1 +x0 Table A.5: Katsura7 ?x0 + 2x28 + 2x27 + 2x26 + 2x25 + 2x24 + 2x23 + 2x22 + 2x21 +x20 ?x1 + 2x8x7 + 2x7x6 + 2x6x5 + 2x5x4 + 2x4x3 + 2x3x2 + 2x2x1 + 2x1x0 ?x2 + 2x8x6 + 2x7x5 + 2x6x4 + 2x5x3 + 2x4x2 + 2x3x1 + 2x2x0 +x21 ?x3 + 2x8x5 + 2x7x4 + 2x6x3 + 2x5x2 + 2x4x1 + 2x3x0 + 2x2x1 ?x4 + 2x8x4 + 2x7x3 + 2x6x2 + 2x5x1 + 2x4x0 + 2x3x1 +x22 ?x5 + 2x8x3 + 2x7x2 + 2x6x1 + 2x5x0 + 2x4x1 + 2x3x2 ?x6 + 2x8x2 + 2x7x1 + 2x6x0 + 2x5x1 + 2x4x2 +x23 ?x7 + 2x8x1 + 2x7x0 + 2x6x1 + 2x5x2 + 2x4x3 ?1 + 2x8 + 2x7 + 2x6 + 2x5 + 2x4 + 2x3 + 2x2 + 2x1 +x0 Table A.6: Katsura8 33 ?x0 + 2x29 + 2x28 + 2x27 + 2x26 + 2x25 + 2x24 + 2x23 + 2x22+ 2x21 +x20 ?x1 + 2x9x8 + 2x8x7 + 2x7x6 + 2x6x5 + 2x5x4 + 2x4x3 + 2x3x2+ 2x2x1 + 2x1x0 ?x2 + 2x9x7 + 2x8x6 + 2x7x5 + 2x6x4 + 2x5x3 + 2x4x2 + 2x3x1+ 2x2x0 +x21 ?x3 + 2x9x6 + 2x8x5 + 2x7x4 + 2x6x3 + 2x5x2 + 2x4x1 + 2x3x0+ 2x2x1 ?x4 + 2x9x5 + 2x8x4 + 2x7x3 + 2x6x2 + 2x5x1 + 2x4x0 + 2x3x1+ x22 ?x5 + 2x9x4 + 2x8x3 + 2x7x2 + 2x6x1 + 2x5x0 + 2x4x1 + 2x3x2 ?x6 + 2x9x3 + 2x8x2 + 2x7x1 + 
2x6x0 + 2x5x1 + 2x4x2 +x23 ?x7 + 2x9x2 + 2x8x1 + 2x7x0 + 2x6x1 + 2x5x2 + 2x4x3 ?x8 + 2x9x1 + 2x8x0 + 2x7x1 + 2x6x2 + 2x5x3 +x24 ?1 + 2x9 + 2x8 + 2x7 + 2x6 + 2x5 + 2x4 + 2x3 + 2x2 + 2x1 +x0 Table A.7: Katsura9 34 ?x0 + 2x210 + 2x29 + 2x28 + 2x27 + 2x26 + 2x25 + 2x24 + 2x23+ 2x22 + 2x21 +x20 ?x1 + 2x10x9 + 2x9x8 + 2x8x7 + 2x7x6 + 2x6x5 + 2x5x4 + 2x4x3+ 2x3x2 + 2x2x1 + 2x1x0 ?x2 + 2x10x8 + 2x9x7 + 2x8x6 + 2x7x5 + 2x6x4 + 2x5x3 + 2x4x2+ 2x3x1 + 2x2x0 +x21 ?x3 + 2x10x7 + 2x9x6 + 2x8x5 + 2x7x4 + 2x6x3 + 2x5x2 + 2x4x1+ 2x3x0 + 2x2x1 ?x4 + 2x10x6 + 2x9x5 + 2x8x4 + 2x7x3 + 2x6x2 + 2x5x1 + 2x4x0+ 2x3x1 +x22 ?x5 + 2x10x5 + 2x9x4 + 2x8x3 + 2x7x2 + 2x6x1 + 2x5x0 + 2x4x1+ 2x3x2 ?x6 + 2x10x4 + 2x9x3 + 2x8x2 + 2x7x1 + 2x6x0 + 2x5x1 + 2x4x2+ x23 ?x7 + 2x10x3 + 2x9x2 + 2x8x1 + 2x7x0 + 2x6x1 + 2x5x2 + 2x4x3 ?x8 + 2x10x2 + 2x9x1 + 2x8x0 + 2x7x1 + 2x6x2 + 2x5x3 +x24 ?x9 + 2x10x1 + 2x9x0 + 2x8x1 + 2x7x2 + 2x6x3 + 2x5x4 ?1 + 2x10 + 2x9 + 2x8 + 2x7 + 2x6 + 2x5 + 2x4 + 2x3 + 2x2 + 2x1 +x0 Table A.8: Katsura10 x0x1x2x3x4x5x6x7 ?x88 x0x1x2x3x4x5x6 +x1x2x3x4x5x6x7 +x2x3x4x5x6x7x0+ x3x4x5x6x7x0x1 +x4x5x6x7x0x1x2 +x5x6x7x0x1x2x3+ x6x7x0x1x2x3x4 +x7x0x1x2x3x4x5 x0x1x2x3x4x5 +x1x2x3x4x5x6 +x2x3x4x5x6x7 +x3x4x5x6x7x0+ x4x5x6x7x0x1 +x5x6x7x0x1x2 +x6x7x0x1x2x3 +x7x0x1x2x3x4 x0x1x2x3x4 +x1x2x3x4x5 +x2x3x4x5x6 +x3x4x5x6x7+ x4x5x6x7x0 +x5x6x7x0x1 +x6x7x0x1x2 +x7x0x1x2x3 x0x1x2x3 +x1x2x3x4 +x2x3x4x5 +x3x4x5x6 +x4x5x6x7+ x5x6x7x0 +x6x7x0x1 +x7x0x1x2 x0x1x2 +x1x2x3 +x2x3x4 +x3x4x5 +x4x5x6 +x5x6x7 +x6x7x0 +x7x0x1 x0x1 +x1x2 +x2x3 +x3x4 +x4x5 +x5x6 +x6x7 +x7x0 x0 +x1 +x2 +x3 +x4 +x5 +x6 +x7 Table A.9: hCyclic8 35 x0x5 ?x210 x1x6 ?x210 x2x7 ?x210 x3x8 ?x210 x4x9 ?x210 ?4x1x5 ?4x2x5 ?4x3x5 ?4x4x5 + 4x0x6 ?4x2x6 ?4x3x6 ?4x4x6+ 4x0x7 + 4x1x7 ?4x3x7 ?4x4x7 + 4x0x8 + 4x1x8 + 4x2x8 ?4x4x8+ 4x0x9 + 4x1x9 + 4x2x9 + 4x3x9 + 2x0x10 + 2x1x10 + 2x2x10+ 2x3x10 + 2x4x10 +x210 2x0x10 + 2x1x10 + 2x2x10 + 2x3x10 + 2x4x10 +x210, 4x1x5 + 4x2x5 + 4x3x5 + 4x4x5 ?4x0x6 + 4x2x6 + 4x3x6+ 4x4x6 ?4x0x7 ?4x1x7 + 4x3x7 + 4x4x7 ?4x0x8 ?4x1x8? 4x2x8 + 4x4x8 ?4x0x9 ?4x1x9 ?4x2x9 ?4x3x9 + 2x5x10+ 2x6x10 + 2x7x10 + 2x8x10 + 2x9x10x210 2x0 + 2x1 + 2x2 + 2x3 + 2x4 +x10 2x5 + 2x6 + 2x7 + 2x8 + 2x9 +x10 Table A.10: f633 36 BIBLIOGRAPHY [1] W. W. Adams and P. Loustaunau, An Introduction to Gr?obner Bases, Graduate Studies in Mathematics, vol. 3, American Mathematical Society, Providence, RI, 1994. [2] D. Bayer, M. Stillman, The design of Macaulay: A system for computing in algebraic geometry and commutative algebra, 1986 ACM Symposium on Symbolic and Algebraic Computation, University of Waterloo, Ontario, 157-162, 1986. [3] B. Buchberger, Ein Algorithmus zum Auffinden der Basiselemente des Restklassenringes nach einem nulldimensionalen Polynomideal, PhD thesis, Innsbruck, 1965. [4] B. Buchberger, A Criterion for Detecting Unnecessary Reductions in the Construction of Gr?obner Bases, in ?EUROSAM 1979,? Lecture Notes in Computer Science 72, Springer Verlag, Berlin-Heidelberg-New York, 1979, 3-21. [5] M. Caboara, M. Kreuzer, and L. Robbiano, Efficiently Computing Minimal Sets of Critical Pairs, Preprint submitted to Journal of Symbolic Computation, 2003. [6] D. Cox, J. Little, and D. O?Shea, Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra, Springer Verlag, Berlin and New York, 1992. [7] J. C. Faugere, A New Efficient Algorithm for Computing Gr?obner Bases (F4), Journal of Pure and Applied Algebra, 139(1-3):61-88, June 1999. [8] R. Gebauer and H. M. M?oller, On an Installation of Buchberger?s Algorithm, J. 
Symbolic Computation, 6: 257-286, 1987.

[9] A. Giovini, T. Mora, G. Niesi, L. Robbiano, and C. Traverso, "One Sugar Cube, Please" or Selection Strategies in the Buchberger Algorithm, in Proc. International Symposium on Symbolic and Algebraic Computation ISSAC '91 (S. M. Watt, ed.), ACM Press, New York, 1991, 49-54.

[10] H. M. Möller, A Reduction Strategy for the Taylor Resolution, Proc. EUROCAL 85, Springer Lecture Notes in Computer Science 162, 526-534.

[11] D. K. Taylor, Ideals Generated by Monomials in an R-sequence, Ph.D. Thesis, University of Chicago, 1966.