
A brief history of algebra

I'm planning to write a series of posts about one of my favorite topics in mathematics: algebraic number theory. We'll start out very gently, with material anyone who's studied high school algebra can easily appreciate. Before long, the climb will become somewhat steeper, but I hope it can be followed by anyone who has read popularizations of mathematical subjects, such as recent books on the Riemann Hypothesis. (Which is a subject that connects up, eventually, with algebraic number theory.) If there are any terms or names mentioned here for which you'd like more detail, I suggest looking in Wikipedia, although I won't litter the narrative with explicit links.

The place to begin, naturally, is with a brief history of algebra itself.

The word "algebra" comes from the Arabic al-jebr, because the subject was studied and written about in something like the modern sense by scholars who spoke Arabic in what is now the Middle East, in the 9th century CE. Although the classical Greeks and various of their predecessors and contemporaries had investigated problems we now call "algebraic", these investigations became known to speakers of European languages not from the classical sources but from Arabic writers. That is why we use a term derived from Arabic.

Muhammad ben Musa al-Khwarizmi seems to have been the first person whose writing uses the term al-jebr. As he used it, the term referred to a technique for solving equations by performing operations, such as addition or multiplication, on both sides of the equation – just as is taught in first-year high school algebra. al-Khwarizmi, of course, didn't use our modern notation with Roman letters for unknowns and symbols like "+", "×", and "=". Instead, he expressed everything in ordinary words, but in a way equivalent to our modern symbolism.

The word al-jebr itself is a metaphor: its ordinary meaning referred to the setting, or straightening out, of broken bones. The same metaphor exists in Latin and related languages, as in the English words "reduce" and "reduction". Although these now usually refer to making something smaller, the older meaning was to make something simpler or straighter. The Latin root is the verb ducere, to lead – hence to re-duce is to lead something back from a more convoluted state to a simpler one. In elementary algebra one still talks of "reducing" fractions to lowest terms and simplifying equations.

The essence of the study of algebra, then, is solving or "reducing" equations to the simplest possible form. The emphasis is on finding and describing explicit methods for performing this simplification. Such methods are known as algorithms – in honor of al-Khwarizmi. Different types of methods can be used. Guessing at solutions, for instance, is a method. One can often, by trying long enough, guess the exact solution of a simple equation. And if one has a guess that is close but not exact, one can improve it step by step in an iterative process of successive approximation. This is a perfectly acceptable method of "solving" equations for many practical purposes – so much so that it is the method generally used by computers (where irrational numbers can be specified only approximately anyhow). Some approximation methods are fairly sophisticated, such as "Newton's method" for finding the roots of polynomial equations – but they're still based essentially on guessing an initial rough answer.
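
To make the idea of successive approximation concrete, here is a minimal sketch in Python (the function name and tolerance are my own illustrative choices, not anything from the historical sources) applying Newton's method to the equation x² - 2 = 0, that is, to approximating √2:

def newton_sqrt2(guess, tolerance=1e-12):
    # Refine a rough initial guess for a root of f(x) = x**2 - 2.
    x = guess
    while abs(x * x - 2) > tolerance:
        # Newton's update: x_new = x - f(x)/f'(x), where f'(x) = 2x.
        x = x - (x * x - 2) / (2 * x)
    return x

print(newton_sqrt2(1.0))   # roughly 1.4142135623730951

Starting from the rough guess 1, the first update already gives 1.5, the next gives about 1.4167, and within a few iterations the answer is as precise as floating-point arithmetic allows.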

Another method for finding solutions of equations is by means of geometric construction. One can construct geometric figures in which the length of a certain line segment is a solution to some given equation. This works well, for example, when square roots are needed, since the hypotenuse of a right triangle has a length which is the square root of the sum of the squares of the other two sides. That is, if the lengths of the sides are a, b, and c, then a² + b² = c², and hence c = √(a² + b²). If a and b are whole numbers, so is the sum of their squares. Algorithms for finding the approximate square root of a whole number were known, so c could be computed approximately. However, with a geometric construction, c could be found simply by measuring the length of the appropriate line segment. For future reference, note that an interesting problem is finding two numbers a and b such that, for some given number d, d = a² + b². This is because if d is given, finding a and b enables one to find the square root of d by a geometric construction.
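
As a small illustration of that last problem, here is a Python sketch (a brute-force search, not a historical method; the function name is a hypothetical choice of mine) that, given a whole number d, looks for whole numbers a and b with a² + b² = d:

from math import isqrt

def sum_of_two_squares(d):
    # Search for whole numbers a and b with a**2 + b**2 == d; return None if there are none.
    for a in range(isqrt(d) + 1):
        b_squared = d - a * a
        b = isqrt(b_squared)
        if b * b == b_squared:
            return a, b
    return None

print(sum_of_two_squares(13))   # (2, 3): √13 is the hypotenuse of a right triangle with legs 2 and 3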

In addition to such approximation and geometric methods, al-Khwarizmi was interested in methods for finding exact solutions by a sequence of steps that could be described in cookbook style – algorithms. This can be done completely successfully for linear equations of the form ax + b = c (in modern notation) using just the arithmetic operations of addition, subtraction, multiplication, and division. Just as important, the operations can be performed symbolically – not just on particular numbers, but on symbols that stand for "any number". And so, one seeks to express the solution of a given equation in a symbolic form.
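
For a modern illustration of operating on symbols rather than on particular numbers, here is a short sketch using the SymPy library (a present-day tool, of course, not anything al-Khwarizmi had) to solve ax + b = c for x symbolically:

from sympy import Eq, solve, symbols

# Treat a, b, c, and x as symbols standing for "any number".
a, b, c, x = symbols('a b c x')
print(solve(Eq(a * x + b, c), x))   # [(c - b)/a]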

An interesting question is that of when the idea of representing equations in symbolic form arose. It's not easy to answer, in part because symbolic representations were used before their full and considerable utility was recognized. For instance, Greek geometers labeled the lines in their figures with single letters, so it was natural to write what we now recognize as the Pythagorean theorem in the form a² + b² = c². But the importance of this representation was somewhat blurred, since the distinction between a line and the length of a line was not fully appreciated. In fact, although the Greeks and other early mathematicians (e.g. in India) used symbolic equations, al-Khwarizmi did not. (Hence it is likely he didn't know of Greek mathematics, and much of his work was original, if not always as advanced as that of the Greeks.)

In modern notation, polynomial equations can be classified in terms of the highest power of any variable which occurs in them. We call an equation linear if the highest power is one, because its graph is a straight line. If there is just one variable, such an equation has the most general form ax + b = 0. If the highest power is two, the equation is called "quadratic", and has the form ax² + bx + c = 0. (Why does the Latin prefix quad, usually associated with the number 4, occur here? Simply because the word for "square" is quadra in Latin.) In spite of lacking a symbolic representation of equations, al-Khwarizmi effectively did know the quadratic formula, which says that the last equation has two solutions, given by x = (-b ± √(b² - 4ac))/2a. He also realized that the equation has any solutions at all in terms of "real" numbers only if the quantity we now call the discriminant, b² - 4ac, is not negative.
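
Here is a small Python sketch of the quadratic formula just stated (the function name is my own), which returns the real roots of ax² + bx + c = 0, or nothing at all when the discriminant is negative:

from math import sqrt

def real_quadratic_roots(a, b, c):
    # Real solutions of a*x**2 + b*x + c = 0; an empty tuple means the discriminant is negative.
    discriminant = b * b - 4 * a * c
    if discriminant < 0:
        return ()
    root = sqrt(discriminant)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

print(real_quadratic_roots(1, -3, 2))   # (2.0, 1.0), the two solutions of x² - 3x + 2 = 0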

The highest power of an unknown which occurs in a given polynomial equation is known as the degree of the equation. Although al-Khwarizmi doesn't seem to have studied equations of degree 3, called cubic equations, a more famous successor, who lived about 250 years later, did: Omar Khayyam (ca. 1050-1123), a Persian. Like his predecessor, Khayyam did not work with symbolic expressions for the equations. But he was able to produce solutions using geometric constructions (involving conic sections), provided a positive solution exists. He also thought, mistakenly, that such solutions couldn't be found by algebraic (algorithmic) methods of the sort al-Khwarizmi used.

The next substantial advance in solving equations began when scholars in Western Europe began to study and appreciate the work of people like al-Khwarizmi and Khayyam. Most notable among these scholars was Leonardo of Pisa, more commonly known as Fibonacci (ca. 1180-1250). He showed that algorithmic (as opposed to geometric) methods could be used to find solutions of some cubic equations. Fibonacci had a much more obscure contemporary, Jordanus Nemorarius, of whom little is known apart from several books attributed to him on arithmetic, mechanics, geometry, and astronomy. He made a more systematic use of letters to stand for "variable" (not necessarily unknown) quantities in equations, but the importance of this technique was still not widely appreciated.

With the advent of the Renaissance, progress in mathematics began to speed up. One of the first notable names was a German, Regiomontanus (1436-76). Though he produced less original work than others, he was widely read in the classic works of both the Greeks and the Muslim world. In particular, he had studied the Arithmetica of Diophantus of Alexandria (who was active around 250 CE) in the original Greek, and even considered publishing a Latin translation, though he never got around to it. Diophantus was in some respects more advanced than any other mathematician before the Renaissance, and among the problems he considered were what are now called Diophantine equations. The relevance of such problems will be explained in due time.

Somewhat more original than Regiomontanus was a Frenchman, Nicolas Chuquet, who died around 1500. He used expressions involving nested radicals fairly close to the modern style, such as √(14 - √180), to represent solutions of 4th degree equations.

The real breakthrough came in the work of several Italians in the 16th century. In 1545 Girolamo Cardano (1501-76) published explicit algebraic solutions (that is, using arithmetic operations plus extraction of roots) of both cubic and quartic (4th degree) equations. Cardano, however, did not discover the solutions himself. The result for cubics had been found by Niccolo Tartaglia (ca. 1500-57) before 1541, though apparently it was discovered even earlier by Scipione del Ferro (ca. 1465-1526). Cardano admitted he had not discovered the solution, but apparently he did break a promise to Tartaglia to keep the results a secret. (Just as now, precedence in publishing new scientific results was a matter of great prestige.) As for the quartic, Cardano states that the solution was discovered by Ludovico Ferrari (1522-65), though at his [Cardano's] request.

Such rapid progress naturally raised the question of solutions to equations of 5th degree (quintics) and higher, either by algebraic means (using arithmetic operations and radicals) or at least by means of geometric constructions (using only straightedge and compass). Surprisingly, it was proven almost 300 years later that solutions of either sort are not possible in general, i.e. for all cases. This was done independently by two young men, Niels Henrik Abel (1802-29) in 1824 and Évariste Galois (1811-32) in 1832. Galois' result is especially important, as it is based on very novel methods of abstract algebra – the theory of groups – and in fact Galois' ideas thoroughly permeate the theory of algebraic numbers, to be discussed.

In spite of that astonishing negative result, some years earlier Carl Friedrich Gauss (1777-1855) had proven in his doctoral thesis of 1799 that polynomial equations of any degree n must have exactly n solutions in a certain very specific sense. This result was so important it became known as the fundamental theorem of algebra. The exact sense in which that theorem is true is the subject of the other part of this story of algebraic numbers – "numbers". That will be taken up in the next installment.
