Algebra: A Very Short Introduction
Published By Oxford University Press

9780198732822, 9780191797644

Author(s):  
Peter M. Higgins

‘Matrices and groups’ continues with the example of geometric matrix products to see what happens when we compose the mappings involved. It explains several features, including the identity matrix, the inverse matrix, the square matrix, and the concept of isomorphism. If a collection of matrices represents the elements of a group, such as the eight matrices that represent the dihedral group D4, then each of these matrices A will have an inverse, A⁻¹, such that AA⁻¹ = A⁻¹A = I, the identity matrix. This prompts the twin questions of when the inverse of a square matrix A exists and, if it does, how to find it.
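
As an illustrative sketch not drawn from the book itself (assuming Python with NumPy), the check AA⁻¹ = A⁻¹A = I can be carried out numerically for one of the symmetry matrices of the square:

    import numpy as np

    # A rotation through 90 degrees about the origin, one of the eight
    # symmetries of the square; it is invertible because det(A) is non-zero.
    A = np.array([[0.0, -1.0],
                  [1.0,  0.0]])

    if np.linalg.det(A) != 0:          # an inverse exists exactly when det(A) != 0
        A_inv = np.linalg.inv(A)
        I = np.eye(2)
        # Both products recover the identity matrix.
        print(np.allclose(A @ A_inv, I), np.allclose(A_inv @ A, I))  # True True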


Author(s):  
Peter M. Higgins

Polynomials are expressions of the form p(x) = a₀ + a₁x + a₂x² + … + aₙxⁿ; the number aᵢ is the coefficient of xⁱ, a₀ is the constant term of p(x), and aₙ is the leading coefficient. The underlying algebra of polynomials mirrors that of the integers. Polynomials can be added, subtracted, and multiplied, and the laws of associativity and commutativity and the distributive law of multiplication over addition all hold. Division is more complicated. ‘The algebra of polynomials and cubic equations’ outlines the Remainder and Factor Theorems along with complex numbers in the Argand plane. The factorization of polynomials, the Rational Root Theorem, the Conjugate Root Theorem, and the solution of cubic equations are also discussed.
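
A small sketch, not part of the text (assuming Python with NumPy), illustrating the Remainder Theorem: dividing p(x) by (x − c) leaves remainder p(c), and a zero remainder means (x − c) is a factor:

    import numpy as np

    # p(x) = x^3 - 6x^2 + 11x - 6, coefficients listed from the leading term down.
    p = np.poly1d([1, -6, 11, -6])

    q, r = np.polydiv(p, np.poly1d([1, -2]))   # divide by (x - 2)
    print(r.coeffs)    # [0.] -- the remainder equals p(2) = 0, so (x - 2) is a factor
    print(p(2))        # 0, in agreement with the Remainder Theorem
    print(q)           # x^2 - 4x + 3, the remaining quadratic factor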


Author(s):  
Peter M. Higgins

Matrices represent the central algebraic vehicle for advanced computation throughout mathematics as well as the physical and social sciences. ‘Introduction to matrices’ explains that matrices are simply rectangular arrays of numbers. There are some natural, simple operations that can be performed on matrices. Scalar multiplication is where all entries in a matrix are multiplied by a fixed number. Network theory is one of the major applications of linear algebra, which is the branch of the subject that is largely represented by matrices and matrix calculations. Another application of matrices is to the geometry of transformations.
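
A minimal sketch in plain Python (illustrative, not from the text) of scalar multiplication and of the adjacency matrices used in network applications:

    # Scalar multiplication: every entry is multiplied by the same fixed number.
    A = [[1, 2],
         [3, 4]]
    scaled = [[3 * entry for entry in row] for row in A]   # [[3, 6], [9, 12]]

    # A network on three nodes stored as an adjacency matrix: entry (i, j) is 1
    # when there is a direct link from node i to node j, and 0 otherwise.
    network = [[0, 1, 1],
               [1, 0, 0],
               [0, 1, 0]]

    # Multiplying the adjacency matrix by itself counts the two-step routes
    # between each pair of nodes.
    two_step = [[sum(network[i][k] * network[k][j] for k in range(3))
                 for j in range(3)] for i in range(3)]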


Author(s):  
Peter M. Higgins

A quadratic equation is one involving a squared term and takes the form ax² + bx + c = 0. Quadratic expressions are central to mathematics, and quadratic approximations are extremely useful in describing processes that are changing in direction from moment to moment. ‘Quadratic equations’ outlines the three-stage solution process. Firstly, the quadratic expression is factorized into two linear factors, allowing two solutions to be written down. Next is completing the square, which allows solution of any particular quadratic. Finally, completing the square is applied to the general equation to derive the quadratic formula, into which the three coefficients can be substituted to give the solutions.
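
A brief sketch in Python (an illustration rather than the book's own working) applying the quadratic formula x = (−b ± √(b² − 4ac)) / 2a that results from completing the square; the helper name is ours:

    import math

    def solve_quadratic(a, b, c):
        """Return the real solutions of ax^2 + bx + c = 0 via the quadratic formula."""
        disc = b * b - 4 * a * c      # the discriminant decides how many real roots exist
        if disc < 0:
            return []                 # no real solutions
        root = math.sqrt(disc)
        return [(-b + root) / (2 * a), (-b - root) / (2 * a)]

    print(solve_quadratic(1, -5, 6))  # [3.0, 2.0]: x^2 - 5x + 6 = (x - 2)(x - 3)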


Author(s):  
Peter M. Higgins

‘The laws of algebra’ explores the three laws that govern arithmetic operations and explains how these rules are extended so that they continue to be respected as we pass from one number system to a greater one that subsumes the former. The associative law of addition states that (a + b) + c = a + (b + c), and the associative law of multiplication is a(bc) = (ab)c. The distributive law tells us how to multiply out the brackets: a(b + c) = ab + ac. The commutative law of addition is a + b = b + a, a law that holds equally well for multiplication: ab = ba.
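
A tiny Python spot-check (illustrative only) that these laws hold for a sample range of integers:

    # Verify the associative, commutative, and distributive laws on sample values.
    for a in range(-3, 4):
        for b in range(-3, 4):
            for c in range(-3, 4):
                assert (a + b) + c == a + (b + c)          # associativity of addition
                assert a * (b * c) == (a * b) * c          # associativity of multiplication
                assert a + b == b + a and a * b == b * a   # commutativity
                assert a * (b + c) == a * b + a * c        # distributive law
    print("all laws verified on the sample range")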


Author(s):  
Peter M. Higgins

‘Determinants and matrices’ explains that in three dimensions, the absolute value of the determinant det(A) of a linear transformation represented by the matrix A is the multiplier of volume. The columns of A are the images of the unit vectors along the edges of the unit cube, and they define a three-dimensional version of a parallelogram, a parallelepiped, the volume of which is |det(A)|. It goes on to describe the properties and applications of determinants to networks (using the Kirchhoff matrix); Cramer’s Rule; eigenvalues; and eigenvectors, which are fundamental in linear mathematics. Other key topics in matrix theory—similarity, diagonalization, and factorization of matrices—are also discussed.
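
A short sketch (assuming Python with NumPy, not from the text) comparing |det(A)| with the volume of the parallelepiped spanned by the columns of A, computed via the scalar triple product:

    import numpy as np

    A = np.array([[2.0, 0.0, 1.0],
                  [0.0, 3.0, 0.0],
                  [0.0, 0.0, 4.0]])

    # The columns of A are the images of the three unit vectors along the cube's edges.
    u, v, w = A[:, 0], A[:, 1], A[:, 2]

    volume = abs(np.dot(u, np.cross(v, w)))   # scalar triple product = parallelepiped volume
    print(volume, abs(np.linalg.det(A)))      # 24.0 24.0 -- the determinant is the volume multiplier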


Author(s):  
Peter M. Higgins

‘Algebra and the arithmetic of remainders’ considers a new type of algebra, which is both an ancient topic and one that has found major contemporary application in Internet cryptography. It begins with an outline of abstract algebra, including groups, rings, and fields. Semigroups and groups are algebras with a single associative operation, while rings and fields are algebras with two operations linked via the distributive law. Lattices are algebras with an ordered structure, while vector spaces and modules are algebras where the members can be multiplied by scalar quantities from other fields or rings. The rules of modular arithmetic (or clock arithmetic) and solving linear congruences are also described.
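
A minimal Python sketch (an illustration, not the book's method; the function name is ours) of clock arithmetic and of solving a linear congruence ax ≡ b (mod n) in the simple case where gcd(a, n) = 1:

    from math import gcd

    # Clock arithmetic: 9 o'clock plus 6 hours is 3 o'clock on a 12-hour clock.
    print((9 + 6) % 12)        # 3

    def solve_congruence(a, b, n):
        """Solve ax = b (mod n) when a is invertible modulo n."""
        if gcd(a, n) != 1:
            raise ValueError("a has no inverse modulo n; the general case needs more care")
        a_inv = pow(a, -1, n)  # modular inverse (Python 3.8+)
        return (a_inv * b) % n

    print(solve_congruence(7, 3, 26))   # 19, since 7 * 19 = 133 = 5*26 + 3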


Author(s):  
Peter M. Higgins

‘Linear equations and inequalities’ describes how to solve the simpler type of linear equations with one unknown and simultaneous linear equations with two unknowns. It also introduces manipulation of inequality signs, and closes with an example of finding the values of unknown quantities that satisfy particular equations while being constrained to lie within certain ranges. This mathematics underpins much of the logistics of modern society, governing operations such as railway and airline schedules, while allowing businesses to meet customer demand by running lower stock inventories, which represents an enormous and ongoing cost saving. The algebraic ideas of elimination and constraint satisfaction underlie all this.
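
A small sketch in plain Python (illustrative; the helper is ours) solving a pair of simultaneous linear equations by eliminating one unknown:

    def solve_2x2(a1, b1, c1, a2, b2, c2):
        """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by eliminating y."""
        # Scale each equation so the y-coefficients match, then subtract:
        #   b2*(eq1) - b1*(eq2)  gives  (a1*b2 - a2*b1) * x = c1*b2 - c2*b1.
        det = a1 * b2 - a2 * b1
        if det == 0:
            raise ValueError("the equations have no unique solution")
        x = (c1 * b2 - c2 * b1) / det
        # Substitute x back into whichever equation still involves y.
        y = (c1 - a1 * x) / b1 if b1 != 0 else (c2 - a2 * x) / b2
        return x, y

    # 2x + 3y = 12 and x - y = 1 have the unique solution x = 3, y = 2.
    print(solve_2x2(2, 3, 12, 1, -1, 1))   # (3.0, 2.0)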


Author(s):  
Peter M. Higgins

‘Numbers and algebra’ introduces the number system and explains several terms used in algebra, including natural numbers, positive and negative integers, rational numbers, number factorization, the Fundamental Theorem of Arithmetic, Euclid’s Lemma, the Division Algorithm, and the Euclidean Algorithm. It proves that any common factor c of a and b is also a factor of any number of the form ax + by; since the greatest common divisor (gcd) d of a and b can itself be written in this form, which may be found by reversing the steps of the Euclidean Algorithm, it follows that every common factor of a and b divides d.
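
A compact Python sketch (illustrative; the function name is ours) of the Euclidean Algorithm run forwards and then reversed to express gcd(a, b) in the form ax + by:

    def extended_gcd(a, b):
        """Return (g, x, y) with g = gcd(a, b) and g = a*x + b*y."""
        if b == 0:
            return a, 1, 0
        # The Division Algorithm: a = (a // b) * b + (a % b).
        g, x, y = extended_gcd(b, a % b)
        # Reverse the step: g = b*x + (a % b)*y = a*y + b*(x - (a // b)*y).
        return g, y, x - (a // b) * y

    g, x, y = extended_gcd(1071, 462)
    print(g, x, y)     # 21 -3 7, and indeed 1071*(-3) + 462*7 = 21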


Author(s):  
Peter M. Higgins

‘Vector spaces’ discusses the algebra of vector spaces, which are abelian groups with an additional scalar multiplication by a field. Every finite abelian group is the direct product of cyclic groups. Any finite abelian group can be represented in one of two special ways based on numerical relationships between the subscripts of the cyclic groups involved. In one representation, all the subscripts are powers of primes; in the alternative, each subscript is a divisor of its successor. It concludes by bringing together the ideas of modular arithmetic, the construction of the complex numbers, factorization of polynomials, and vector spaces to explain the existence of finite fields.
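
A Python sketch (an illustration, not the book's construction; both helpers are ours) converting a list of prime-power subscripts into the alternative representation in which each subscript divides its successor:

    from collections import defaultdict
    from math import prod

    def smallest_prime_factor(n):
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d
            d += 1
        return n

    def invariant_factors(prime_power_orders):
        """Turn cyclic factors of prime-power order into factors where each divides the next."""
        by_prime = defaultdict(list)
        for q in prime_power_orders:
            by_prime[smallest_prime_factor(q)].append(q)
        width = max(len(v) for v in by_prime.values())
        # Pad with 1s and sort so the largest power of each prime lands in the last factor.
        columns = zip(*([1] * (width - len(v)) + sorted(v) for v in by_prime.values()))
        return [prod(col) for col in columns]

    # C2 x C4 x C3 x C9 is the same group as C6 x C36, and 6 divides 36.
    print(invariant_factors([2, 4, 3, 9]))   # [6, 36]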

