\" geqn -d$$ 3_Math | groff -t -ms | lpr .ft 4 C .EQ delim $$ .EN .bp .EH '\fIScientific Computation''Mathematical Concepts\fR %' .OH '\fIScientific Computation''Mathematical Concepts\fR %' .sp 5 .ps 14 .ce .B 3. Formal Mathematics .R .NL .sp 3 .SH Introduction .PP Science means knowledge and connotes understanding. Science also has the pragmatic goals of prediction and control. Progress is made through observation and speculation. Mathematics forms a convenient and efficient language to express ideas to verify observations and derive conclusions. Facility with certain mathematical concepts and procedures is essential to scientific investigation, from analysis of data through abstraction and generalization, to discovery and prediction. Much of science involves describing physical objects with mathematical models, or representations. Computers assist in discovering, synthesizing and analyzing mathematical models. Conversely, computers and the languages they understand are soundly based on mathematical systems and structures. .PP There are mathematical tools to assist the process of discovering, describing and explaining behavior. Reducing observations to simple models uses the process of \fIinductive\fR reasoning. Mathematical \fIdeduction\fR can be used to simplify complicated statements and to derive results from fundamental principles. Our intention here is to briefly review mathematical concepts and constructs from algebra, calculus and differential equations in the context of scientific modeling. .NH Mathematical Foundations .PP Mathematics deals with collections (\fIsets\fR) of entities (\fIelements\fR or \fIobjects\fR) and their properties, defined by assumptions (\fIpostulates, \fIaxioms\fR), and well-defined modifying procedures (\fImanipulations, derivations, operations\fR) on the entities. Collections of operations (\fIderivations, proofs\fR) lead to conclusions (\fItheorems\fR). Mathematics attempts to deal with the most general (abstract) nature of things by using symbolism to represent objects and by reducing the number of assumptions and fundamental operations to a minimum. Mathematical objects may or may not represent actual systems, but internal consistency (freedom from contradiction) is a desirable feature. No system is perfect and at any level there may be ambiguity and redundancy.\** .FS Bertrand Russell, co-author of \fIPrincipia Mathematica\fR once stated, "...mathematics may be defined as the subject which we will never know what we are talking about, nor whether what we are saying is true." .FE .PP Mathematical operations are used to derive consequences (theorems, proofs) and to simplify and transform complex statements into equivalent, but simpler statements. Beginning with a statement of undefined \fIterms\fR, deductive mathematical reasoning associates the terms through \fIdefinitions\fR, postulates associations among the terms and definitions through \fIaxioms\fR, and applies rules of \fIoperations\fR to arrive at \fIconclusions\fR. A \fBformal axiomatic mathematical system \fRconsists of \fIa set of elements, one or more operations on the elements, and a set of assumptions about the elements and their operations\fR. Mathematical systems are not deduced in a vacuum, however, and the distillation of an abstract formal system involves observation and creative thinking. 
.PP The development of mathematical systems has a long history, going back more than two millennia to Euclid's \fIElements\fR in the West and \fINine Chapters on the Mathematical Art\fR in the East (Yun 1987). Many of the great mathematicians since that time have contributed to the foundations of mathematics, including Plato (theory of abstract ideas, 300 BC), Blaise Pascal (formal proofs, 1660 AD), Gottfried Leibniz (symbolic logic, 17??), and George Boole (algebra of logic, 1860 AD), and there is no reason to believe that the final word has been spoken. At the turn of the twentieth century, David Hilbert envisioned a formal axiomatic foundation for all of mathematics based on three logical premises: \fIconsistency\fR (no statement can be both true and false), \fIcompleteness\fR (every valid statement is either true or false), and \fIdecidability\fR (there must be a procedure for deciding the truth or falsity of any statement). Consistency and completeness together should imply decidability. However, Kurt G\*:odel (incompleteness, 1931) and Alan Turing (halting problem, 1936) showed that there are some statements in some systems which cannot be decided in a finite sequence of steps, thus eroding the foundation of a universal formal axiomatic system for mathematics. Given the lack of a suitable alternative at this point, we will nevertheless assume the existence of some formal axiomatic systems based on a foundation of sets, logic and groups so that we can proceed with applications of mathematics to practical problems. Set concepts allow us to declare terms and define properties. Logic allows us to manipulate objects. Group theory provides a platform of abstract axiomatic foundations leading to applications. A close connection between such mathematical structures and computer structures will become evident. .SH Sets .PP Set theory was developed by Georg Cantor in the last quarter of the nineteenth century to establish a basis for discussing abstract objects in terms of undefined terms and relationships. .PP \fIUndefined Terms\fR. A \fBset\fR is a collection of distinguishable entities, called \fBelements\fR or \fBmembers\fR, or \fBobjects\fR. The elements could be anything, such as material objects (physics), abstract numbers (arithmetic), declarative statements (logic), coordinate points (geometry), etc., even (recursively) collections of sets not including themselves (set theory). The concepts of set, member and the relationship between member and set are formally undefined, but assumed to be understood. A convenient notation uses upper case letters for sets (e.g. S) and lower case letters for members (e.g. m). A given set can be specified \fIexplicitly\fR by listing the elements (concept of membership): S $==$ {$m sub 1 , m sub 2 , ...$}, or \fIimplicitly\fR by stating some characteristic property of the elements (class concept): S $==$ {m | m has property p}, where the symbols $==$ {} may be translated "is specified by the collection ...", and the symbol | translates to "such that". .PP \fIDefinitions\fR. \fBRelationships\fR connect objects together. The \fBmembership relation \(mo\fR connects members to sets such that the notations: m \(mo S and m \(nm S mean "m is a member, or element of set S", and "m is not a member, or element of set S", respectively. An \fBordered pair\fR of elements is a set of two elements arranged in a specified sequence, {a,b}. Ordered n-tuples may be constructed by induction from pairs of ordered pairs. A \fBunary operation $bold '$\fR relates one member to another member.
A \fBbinary operation $bold omicron$\fR relates pairs of members to a third member. \fBSet equivalence\fR between two sets means both sets have the same members. An \fBequivalence relation ($bold = $)\fR for set elements is defined in terms of three properties: .sp .KS .B1 .LP \fISet member equivalence properties:\fR .RS .IP 1. 3 Reflexivity: $a~=~a$ .IP 2. 3 Symmetry: if $a~=~b$, then $b~=~a$ .IP 3. 3 Transitivity: if $a~=~b$ and $b~=~c$, then $a~=~c$ .sp .RE .sp .B2 .KE .sp .SH Logic .PP The word logic connotes reason, correctness and a premise-to-conclusion process. Informal logic employs human natural language declarative sentences to form a chain of reasoning leading to a conclusion. Formal \fBsymbolic logic\fR employs strings of symbols representing statements and connectives arranged according to rules of development (grammar). The elements of a \fBstatement calculus\fR consist of simple statements, or \fIsentences ($S sub 1 , S sub 2 , ...$)\fR, operated on by one (unary) modifier, \fInot\fR (\(no), and four (binary) connectives: \fIand\fR (\(AN), \fIor\fR (\(OR), \fIif ... then\fR ($=>$), and \fIif and only if\fR ($<=>$). The goal of statement calculus is to deduce the validity of a compound statement from the truth attributes of its components. A formal symbolic logic system was developed by Alfred North Whitehead and Bertrand Russell (\fIPrincipia Mathematica\fR, 1910). .PP \fBPredicate calculus\fR extends statement calculus to provide a theory of inference and validation through \fIpredicate statements, quantifiers\fR and \fIprocedures\fR. Predicate statements play the role of mathematical functions acting on terms (constants and variables), while quantifiers specify which values of the variables are being asserted. .PP These elements of logical systems have analogues in the statements, operators, syntax and algorithms of computer programming languages.
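.PP As a small preview of the symbolic tools introduced later in this chapter, the statement calculus can be explored by brute force. The MACSYMA sketch below (output omitted) simply tabulates the connectives for every truth assignment of two statements p and q; it is illustrative only and assumes nothing beyond the built-in constants true and false and the logical operators not, and, and or.
.KS
.nf
\f4
(c1) /* Truth table for the statement calculus: each line lists p, q, not p, p and q, p or q */
     for p in [true, false] do
       for q in [true, false] do
         print(p, q, not p, p and q, p or q);
.R
.fi
.KE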
.SH Groups .PP A \fBgroup\fR, G, is a mathematical system having an equivalence relation ($=$), one unary operation $prime$ (inverse), and one binary operation $omicron$ (combination), satisfying four "\fIaxioms\fR" (givens), or more precisely, one \fIproperty\fR and three \fIdefinitions\fR for the symbols {$e, prime , ()$}: .sp .KS .B1 .LP \fIGroup Axioms:\fR .RS .IP 1. 3 Closure: $a~omicron~b~\(mo~G$ .IP 2. 3 (universal) Identity: $a~omicron~e~=~e~omicron~a~=~a$ .IP 3. 3 (specific) Inverse: $a prime~omicron~a~=~a~omicron~a prime~=~e$ .IP 4. 3 Associativity: $(a~omicron~b)~omicron~c~=~a~omicron~(b~omicron~c)$ .RE .LP An \fBAbelian group\fR has the further property: .RS .IP 5. 3 Commutativity: $a~omicron~b~=~b~omicron~a$ .sp .5 .RE .sp .B2 .KE .sp .PP A straightforward derivation shows that the identity element and the inverse of each element are unique, so every group contains at least one element, the identity. Groups are among the simplest mathematical systems because they involve only one binary operation. Nevertheless, there are many examples of groups, as demonstrated in Table 3.1 below.
.TS
box center tab(|);
c s s s
l l l l.
\fITable 3.1. Examples of Groups
_
\fBGroup|Combination Operation ($bold omicron$\fB)|Identity Element (e)|Inverse Operation ($bold prime$\fB)\fR
_
Integers|addition (+)|zero (0)|subtraction ($-$)
Positive reals|multiplication ($times$)|unity (1)|division (\(di)
Invertible functions|composition ($omicron$)|identity (i)|inversion ($\ sup -1$)
Space|transformation|identity (I)|inverse transformation
.TE
.PP Group theory finds applications in treating symmetry properties of fundamental objects in physics (particles and fields) and chemistry (atoms and molecules). .PP Affine transformations of vector spaces consist of translations, rotations, reflections, and stretches, and, collectively, form a group.
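.PP The rotations of the plane give a concrete check of these axioms. The short MACSYMA sketch below (the notation is introduced later in this chapter; output is omitted) verifies closure, identity and inverse for rotation matrices; associativity is inherited from matrix multiplication. The function name R and the angles a and b are choices made only for this illustration.
.KS
.nf
\f4
(c1) /* Plane rotations as a group under composition (matrix multiplication) */
     R(t) := matrix([cos(t), -sin(t)], [sin(t), cos(t)]);
(c2) /* Closure: the product of two rotations is the rotation through the summed angle */
     matrixmap(trigreduce, R(a) . R(b));      /* equals R(a+b) */
(c3) /* Identity: R(0) is the unit matrix */
     R(0);
(c4) /* Inverse: R(-a) undoes R(a), giving the unit matrix */
     matrixmap(trigsimp, R(a) . R(-a));
.R
.fi
.KE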
.SH Algebraic structures .PP A \fBring\fR is a set with two operations, commonly called \fIaddition\fR and \fImultiplication\fR, which forms an abelian group with respect to addition and in which multiplication is associative and distributes over addition; multiplicative inverses and commutativity of multiplication are not required. A \fBfield\fR adds commutative multiplication and a multiplicative inverse for every non-zero element. These systems have obvious roots in number theory and arithmetic, but apply to a wide range of systems. The set of integers forms a commutative ring with respect to the operations of arithmetic (Peano). The rational (fractional) and real (rational plus irrational) numbers are examples of ordered fields, the complex (real plus imaginary) numbers form a field which cannot be ordered, and the polynomial (power) functions and the set of square matrices form commutative and non-commutative rings, respectively. .SH Numbers and Arithmetic .PP Numbers are related to counting, an iterative process of associating a symbol with each member of a sequence. Decimal integers consist of the sequence of symbols {... \-3, \-2, \-1, 0, +1, +2, +3, ...}, with the understanding of place notation for multidigit numbers as described in Eqn. (1.2). .PP Unary (one-argument) operations on numbers include negation (replacing + with \-) and its inverse (replacing \- with +). Binary (two-argument) operations on numbers (and common symbolic representations) include addition (+) and its inverse, subtraction (\-). Iterated (repeated) addition with the same argument is called \fImultiplication\fR (x, *) and its inverse, \fIdivision\fR (\(di, /, fraction). Iterated multiplication with the same argument is called \fIexponentiation\fR and is represented by a symbol with a number denoting the number of iterations (^, **, superscript). Note that iteration connotes a procedure, or sequence of operations. .PP One notation for division uses ratios or fractions, with numerators written on top of denominators. \fIProper fractions\fR have a numerator less than the denominator, while the opposite holds for \fIimproper fractions\fR. A division procedure generates an equivalent alternative representation to fractions. It employs a period (.), or "decimal point" symbol, for fractions less than unity. Three possibilities exist for decimal fractions: they either terminate at some point, repeat after some point, or never repeat. Terminating or repeating decimal fractions correspond to integer fractions, called \fIrational numbers\fR; non-terminating, non-repeating decimal fractions correspond to \fIirrational numbers\fR. \fIReal numbers\fR comprise the complete set of rational numbers and irrational numbers. A famous early mathematical proof demonstrates that the number representing the length of the diagonal of a unit square (sides of length unity) is an irrational number (i.e., cannot be represented as the ratio of two integers). A more recent landmark in the history of mathematics is Cantor's proof of the nondenumerability of the reals (i.e., the set of all real numbers cannot be put into one-to-one correspondence with the positive-integer, or "natural" numbers). .PP Collections of operations and operands are called \fBexpressions\fR. Equivalency between separate expressions is denoted by \fBequations\fR, using the special symbol = to connect the equivalent expressions. .SH Boolean Algebras .PP A \fBBoolean Algebra\fR is a mathematical system (set of objects) with an equivalence relation $=$, one unary operation $prime$ (complementation), two binary operations AND (\(AN) and OR (\(OR), and two identity elements zero (z) and unity (u), satisfying four pairs of axioms: .sp .KS .B1 .LP \fIBoolean Algebra Axioms:\fR .RS .IP 1. 3 Identity: $a~\(AN~z~=~a$ $a~\(OR~u~=~a$ .IP 2. 3 Complement: $a~\(AN~a prime~=~u$ $a~\(OR~a prime~=~z$ .IP 3. 3 Distributivity: $a~\(AN~(b~\(OR~c)~=(a~\(AN~b)~\(OR~(a~\(AN~c)$ $a~\(OR~(b~\(AN~c)~=(a~\(OR~b)~\(AN~(a~\(OR~c)$ .IP 4. 3 Commutativity: $a~\(AN~b~=~b~\(AN~a$ $a~\(OR~b~=~b~\(OR~a$ .sp .RE .sp .B2 .KE .sp .PP Notice the symmetry between \(AN and \(OR and the identity elements z and u. Any statement (theorem) will have a valid \fIdual\fR statement with the operators and identity elements interchanged. .PP A variety of practical theorems can be derived from the Boolean axioms, including laws of associativity and cancellation. Familiar examples of Boolean systems include set operations, number arithmetic, symbol algebra and geometry transformations.
.TS
box center tab(|);
c s s s s s
l l l l l l.
\fITable 3.2. Examples of Boolean Algebras
_
\fBSystem|$bold \(AN$|$bold \(OR$|z|u|$bold prime$\fR
_
\fRReals|addition ($roman +$)|multiplication ($times$)|zero (0)|unity (1)|1 - x, (x $times$ x = x)
Sets|union ($union$)|intersection ($inter$)|empty (\(es)|universe (U)|complement ($A bar$)
Logic|or (\(OR)|and (\(AN)|impossibility ($A~\(AN~A prime$)|certainty ($(A~\(AN~A prime ) prime$)|not (\(no)
.TE
.PP In the case of sets, the "universe" refers to the set of all elements of interest. The logic example refers to the statement calculus described above, with equality having the interpretation of "congruence" (equal validity or truth). .PP Arithmetic identifies the operations of + and * with the boolean operations \(AN and \(OR, respectively. The Boolean identities u and z are identified as 1 and 0, respectively, and the respective complements are recognized as division and subtraction, denoted $1 over a$ and $1~-~a$. .NH Linear Systems .PP \fBLinear systems\fR are algebraic systems involving elements to the first power at most. .SH Matrices .PP A \fBmatrix\fR is a two-dimensional array of elements, or \fBtableau\fR: .EQ bold A~==~left [ pile { {a sub 11} above {.} above {.} above {.} above {a sub i1} above {.} above {.} above {.} above {a sub m1} } ~~pile { {a sub 1j} above {.} above {.} above {.} above {a sub ij} above {.} above {.} above {.} above {a sub mj} } ~~pile { {.} above {.} above {.} above {.} above {.} above {.} above {.} above {.} above {.} } ~~pile { {a sub 1n} above {.} above {.} above {.} above {a sub in} above {.} above {.} above {.} above {a sub mn} } right ] .EN where i is called the \fBrow index\fR and j the \fBcolumn index\fR.
Special cases of matrices include \fBscalars\fR (single element $a sub ij$), \fBsquare matrices\fR (m = n), \fBsymmetric matrices\fR ($a sub ji~=~a sub ij$), \fBtriangular matrices\fR ($a sub {j>i}~=~0$ or $a sub {i>j}~=~0$), \fBdiagonal matrices\fR ($a sub {j != i}~=~0$), \fBthe null matrix\fR ($a sub ij~=~0$), \fBthe identity (or unit) matrix\fR (diagonal with $a sub ii~=~1$), and \fBvectors\fR, which are one-dimensional arrays: .EQ A vec~==~[a sub 1 ... a sub i ... a sub n ]~~or~~left [ pile { {a sub 1} above {.} above {.} above {.} above {a sub i} above {.} above {.} above {.} above {a sub n} } right ] .EN .PP Useful functions of matrices include the \fBtranspose, trace, determinant\fR and \fBadjoint\fR: .EQ A sup T~~< = >~~a sub ij sup T~=~a sub ji .EN .EQ Tr(A)~==~sum from i=1 to n {a sub ii} .EN .EQ det(A)~==~sum from j=1 to n {(-1) sup {i+j} a sub ij minor (A sub ij )} .EN where the sum is the expansion along any fixed row i, and the \fBminor\fR $A sub ij$ is the (n - 1) x (n - 1) determinant formed by deleting the ith row and jth column. .EQ adj(A) sub ij~==~(-1) sup {i+j} minor (A sub ji ) .EN The inverse operation is formally .EQ A sup -1~==~adj(A) over det(A) .EN .PP Additional special cases of matrices include \fBsingular\fR (det(A) = 0), \fBHermitian\fR ($A sup \(dg = A$), and \fBunitary\fR ($A sup \(dg = A sup -1$) matrices, where $A sup \(dg$ denotes the complex conjugate transpose. .PP Matrices form groups with respect to addition and to multiplication by scalars (linear vector spaces), where the operations are: .EQ C~=~A~+~B,~~where~~c sub ij~=~a sub ij~+~b sub ij .EN .EQ B~=~cA,~~where~~b sub ij~=~c~*~a sub ij .EN Matrix multiplication is defined for \fIconformable\fR rectangular matrices (the first has as many columns as the second has rows) as .EQ C~=~AB,~~where~~c sub ij~=~sum from k=1 to n {a sub ik b sub kj} .EN Matrix multiplication is not necessarily commutative. .PP A set of linear equations in the variables $x sub i$ can be represented in terms of a matrix and vector relationship: .EQ pile { {a sub 11 x sub 1} above {.} above {.} above {.} above {a sub i1 x sub 1} above {.} above {.} above {.} above {a sub m1 x sub 1} } ~~pile { {.} above {.} above {.} above {.} above {.} above {.} above {.} above {.} above {.} } ~~pile { {a sub 1n x sub n} above {.} above {.} above {.} above {a sub in x sub n} above {.} above {.} above {.} above {a sub mn x sub n} } pile { {~=~} above {.} above {.} above {.} above {~=~} above {.} above {.} above {.} above {~=~} } pile { {b sub 1} above {.} above {.} above {.} above {b sub i} above {.} above {.} above {.} above {b sub m} } .EN as .EQ bold A x vec~=~b vec .EN with formal solution: .EQ x vec~=~bold {A sup -1} b vec .EN (provided A is not singular). .LP The set of linear equations geometrically represents a \fBlinear transformation\fR. .EQ delim ## .EN
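.PP The matrix functions and the formal solution of a linear system defined above can be carried out mechanically. The following minimal sketch, written in the MACSYMA notation introduced in the next section (output omitted), applies them to an arbitrary 2 x 2 illustration; the matrix A, the vector b and the unknowns x and y are assumptions made only for this example.
.KS
.nf
\f4
(c1) /* An illustrative 2 x 2 matrix and right-hand side */
     A: matrix([2, 1], [1, 3]);
(c2) b: matrix([5], [10]);
(c3) /* The functions defined above */
     transpose(A);
(c4) determinant(A);
(c5) adjoint(A);
(c6) invert(A);
(c7) /* Formal solution of the linear system (gives x = 1, y = 3) */
     invert(A) . b;
(c8) /* The same solution directly from the equations */
     linsolve([2*x + y = 5, x + 3*y = 10], [x, y]);
.R
.fi
.KE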
.NH Symbolic Mathematical Computation .PP In the early days of the development of high-level computer languages it was soon recognized that general purpose digital electronic computers designed to manipulate bits of numerical information could just as easily manipulate bits of more general symbolic information. Computer programming languages quickly separated into two divisions, emphasizing either numerical or symbolic manipulation, with FORTRAN (FORmula TRANslation) and LISP (LISt Processing) being representative. Symbolic computation can be used to automate much of the analytic mathematical manipulation in science. .PP In this section we will explore some well-developed symbolic languages for mathematics. It is sometimes claimed that all of grade school and much of college mathematics, including algebra, calculus and differential equations, has been automated with the development of computer symbolic mathematics. For demonstration purposes one of the rather well-developed symbol manipulation programs, MACSYMA, will be used. .PP Computer programs written to perform mathematical symbolic manipulation communicate in familiar mathematical terminology, suggesting that learning to compute symbolically should be a relatively easy process. One important feature of symbol manipulation programs is their interactive capability (i.e., immediate response to a single command or set of commands). This not only allows for instantaneous editing and correcting but provides freedom to explore various pathways leading to the desired result expressed in a designated format. Input is in the form of high-level commands which accept arguments and produce output. Because symbolic computation programs are written in high-level languages, such as C and LISP, it is usually possible to incorporate programs written in general-purpose languages for advanced applications. Although symbolic manipulation programs are designed specifically to produce analytic formulas, numerical approximation features are usually included for non-analytic situations. .SH Commands .PP The transcript of the first interactive session below introduces some notation and syntax of simple symbolic manipulation commands. Each input line is automatically numbered for future reference and causes a response by the system. Comments (delimited by the slash and star notation borrowed from the PL/I language) are not processed. Numbers, names, simple mathematical operations and library function calls are syntactically similar to other languages. One significant difference is the symbol used for assignment. Since the equals sign (=) must be reserved for expressing equations in symbolic form, another symbol is needed for "replacement of" or "assignment to" variables. In MACSYMA the assignment symbol is the colon (:). .KS .nf \f4 (c1) /* Arithmetic */ /* Note each command (or set of commands) is terminated with a semi-colon. */ /* (Note also that expressions are automatically evaluated, reduced and expressed in a "typeset" form.) */ 2 + 3*4 -5/6; .EQ (d1) 79 over 6 .EN (c2) /* Powers */ 2^10; .EQ (d2) 1024 .EN (c3) /* Variables */ x^-y; .EQ (d3) 1 over { x sup y } .EN (c4) /* Variable assignment */ x:2; .EQ (d4) 2 .EN (c5) y:10; .EQ (d5) 10 .EN (c6) /* Evaluate the expression given on line d3 with x and y "bound" to numbers. */ ev(d3); .EQ (d6) 1 over 1024 .EN (c7) /* % is a special symbol used to refer to the result of the previous computation. */ ev(%,numer); .EQ (d7) 0.0009765625 .EN (c8) /* % is also used to name the following three mathematical constants: */ %e^(%i*%pi); .EQ (d8) ^-^ 1 .EN (c9) /* Factorial notation is also understood */ 100!; .EQ 93326215443944152681699238856266700490715968264381621 .EN .EQ 46859296389521759999322991156089414639761565182862536 .EN .EQ (d9) 97920827223758251185210916864000000000000000000000000 .EN (c10) /* Note that the value assigned to x above has not been forgotten. */ log(x)^3; .EQ (d10) log ( 2 ) sup 3 .EN (c11) /* This could be easily checked by referencing the symbol x. */ x; .EQ (d11) 2 .EN (c12) /* In general, assignment using the symbol : defines a "shorthand" label for referencing the expression on the right of the colon.
*/ eq1:u+v-4=0; .EQ (d12) ^-^ 4 ^+^ u ^+^ v ^=^ 0 .EN (c13) eq2:u-v+2=0; .EQ (d13) 2 ^+^ u ^-^ v ^=^ 0 .EN (c14) eq3:eq1+eq2; .EQ (d14) ^-^ 2 ^+^ 2 \^ u ^=^ 0 .EN (c15) /* Solve the simple equation labeled "eq3": */ solve(eq3,u); .EQ (d15) [ u ^=^ 1 ] .EN (c16) /* Solve a harder equation: */ solve(log(z)=1,z); .EQ (d16) [ z ^=^ e ] .EN (c17) /* If an attempt had been made to solve for the variable x, an error would have occurred because x is still assigned the number 2. The "binding" of x to 2 can be removed, however, with the "remvalue" (or the "kill") command: */ remvalue(x); .EQ (d17) [ x ] .EN (c18) x; .EQ (d18) x .EN (c19) /* (Expressions which do not evaluate to anything are merely echoed.) */ /* Now it is fine to solve for x as an unknown. */ solve(log(x)=1,x) ; .EQ (d19) [ x ^=^ e ] .EN (c20) /* Finally, note that if the input is terminated with a $ instead of a ; the result is still computed, but its display is suppressed. */ eq1:u+(4/2)*v+5=0$ (c21) eq1; .EQ (d21) 5 ^+^ u ^+^ 2 \^ v ^=^ 0 .EN (c22) /* End the session */ exit(); .R .fi .KE .SH Functions .PP The real power of symbol manipulation begins to appear with operational "functions", which invoke transparent routines for performing sophisticated mathematical algorithmic procedures. A standard demonstration is the factorization of a polynomial. .KS .nf \f4 (c1) /* Construct a large polynomial with the command expand: */ (x+1)^10; .EQ (d1) ( 1 ^+^ x ) sup 10 .EN (c2) expand(%); .EQ (d2) 1 ^+^ 10 \^ x ^+^ 45 \^ x sup 2 ^+^ 120 \^ x sup 3 ^+^ 210 \^ x sup 4 ^+^ 252 \^ x sup 5 ^+^ 210 \^ x sup 6 ^+^ 120 \^ x sup 7 ^+^ 45 \^ x sup 8 ^+^ 10 \^ x sup 9 ^+^ x sup 10 .EN (c3) /* Now factor the expanded polynomial and recover the original expression: */ factor(%); .EQ (d3) ( 1 ^+^ x ) sup 10 .EN (c4) exit(); .R .fi .KE .PP Clearly a new realm of computing capability has been demonstrated here with the introduction of callable routines that understand mathematical symbol manipulation. This example illustrates the ability to decode symbolic expressions and manipulate them according to predetermined rules, in this case the rules of algebra. .PP One drawback of symbolic programming compared to numerical programming can be illustrated by the preference for integer arithmetic over floating-point arithmetic. In the next example, unnecessary complexity is introduced by attempting to force floating-point computation. .KS .nf \f4 (c1) /* Solve a polynomial equation in one variable. */ solve(x^2-1=0,x); Solution: .EQ (e1) x ^=^ ^-^ 1 .EN .EQ (e2) x ^=^ 1 .EN .EQ (d2) [ e1 , e2 ] .EN (c3) /* (The brackets indicate that the answer is displayed as a "list" of two solutions) */ /* Now turn on a MACSYMA flag which causes floating point conversion. */ numer:true; .EQ (d3) true .EN (c4) /* Solve the same equation as before. */ solve(x^2-1=0,x); RAT replaced 6.283185307179586 by 732133415/116522652 = 6.283185307179586 RAT replaced 3.141592653589793 by 732133415/233045304 = 3.141592653589793 Solution: .EQ (e4) x ^=^ e sup { 3.141592653589793 \^ i } .EN .EQ (e5) x ^=^ e sup { 6.283185307179586 \^ i } .EN .EQ (d5) [ e4 , e5 ] .EN (c6) /* RAT is a library function; it was called automatically and used to convert the 16 significant-figure decimal approximation of pi to an equivalent rational fraction.
RAT can also be called directly: */ rat(%pi); RAT replaced 3.141592653589793 by 732133415/233045304 = 3.141592653589793 .EQ (d6) 732133415 over 233045304 .EN (c7) exit(); .R .fi .KE .SH Algebra .PP Algebraic manipulations will be demonstrated by temperature scale conversions, quantum theory and chemical equation balancing. .KS .nf \f4 (c1) /* Convert Fahrenheit temperatures to Celsius. */ /* The implicit conversion formula is */ eq:F=(9/5)*C+32; .EQ (d1) F ^=^ 32 ^+^ { 9 \^ C } over 5 .EN (c2) /* The usual solution is to substitute a given temperature (here 40\(de F) into the conversion formula and solve the resulting equation in one unknown. */ subst(40,F,eq); .EQ (d2) 40 ^=^ 32 ^+^ { 9 \^ C } over 5 .EN (c3) solve(%,C); Solution: .EQ (e3) C ^=^ 40 over 9 .EN .EQ (d3) [ e3 ] .EN (c4) /* Another way is to solve two simultaneous equations (here for 50\(de F). */ solve([F=50,eq]); Solution .EQ (e4) F ^=^ 50 .EN .EQ (e5) C ^=^ 10 .EN .EQ (d5) [ [ e4 , e5 ] ] .EN (c6) /* Finding the temperature at which F=2C can also be done either way but solving two simultaneous equations may be mathematically more illuminating. */ solve([F=2*C,eq]); Solution .EQ (e6) C ^=^ 160 .EN .EQ (e7) F ^=^ 320 .EN .EQ (d7) [ [ e6 , e7 ] ] .EN (c8) exit(); .R .fi .KE .PP The old quantum theory is a form of classical mechanics modified to provide quantized orbits. Many problems may be solved using elementary algebra. A simple example obtains the quantized orbit properties of two oppositely charged particles (the Bohr atom). .KS .nf \f4 (c1) /* Simple Bohr atom */ momentum:m*v*r=n*hbar; .EQ (d1) m \^ r \^ v ^=^ hbar \^ n .EN (c2) force:m*v^2/r=z*e^2/r^2; .EQ (d2) { m \^ v sup 2 } over r ^=^ { e sup 2 \^ z } over { r sup 2 } .EN (c3) energy:E=(1/2)*m*v^2-z*e^2/r; .EQ (d3) E ^=^ { m \^ v sup 2 } over 2 ^-^ { e sup 2 \^ z } over r .EN (c4) solve([momentum,force,energy],[r,v,E]); .EQ (d4) [ [ r ^=^ { hbar sup 2 \^ n sup 2 } over { e sup 2 \^ m \^ z } , v ^=^ { e sup 2 \^ z } over { hbar \^ n } , E ^=^ ^-^ { e sup 4 \^ m \^ z sup 2 } over { 2 \^ hbar sup 2 \^ n sup 2 } ] ] .EN (c5) exit(); .R .fi .KE .PP New quantum mechanics solves eigenvalue problems realized as differential or matrix equations. However, one-dimensional problems, at least, can be reduced to elementary algebra using ladder techniques. Traditional textbooks treat eigenvalue problems as a branch of differential equations. However, the rather artificial constraints that must be placed on the differential-equation form of the eigenvalue problem to obtain physically meaningful solutions (e.g., boundary behavior) do not lend themselves to automatic mathematical manipulation. Symbolic programming can be useful in obtaining complicated intermediate formulas used in electronic structure theory. .PP Chemical reactions are balanced conveniently using symbolic algebra. Mathematically the process can be achieved by solving systems of linear algebraic equations. The atomic theory ensures that the variables involved will be either integers or rational fractions of integers. In this case symbolic manipulation shows a decided advantage over algorithms which are restricted to floating-point arithmetic. .KS .nf \f4 (c1) /* Balance: aCu\d2\uS + bH\u+\d +cNO\d3\u\u-\d \(-> dCu\u2+\d + eNO + fS\d8\u + gH\d2\uO */ /* The conservation equation for each element can be assigned a short-hand label using a name that reflects the element being conserved (Q stands for charge). */ Cu:2*a=d$ (c2) S:a=8*f$ (c3) H:b=2*g$ (c4) N:c=e$ (c5) O:3*c=e+g$ (c6) Q:b-c=2*d$ (c7) /* The linear equations are to be solved simultaneously.
The library routine "solve" expects a [list of equations] and a [list of unknowns], each contained in square brackets. There is one more unknown than equations, which reflects the ambiguity in balanced chemical equations. */ coefs:solve([Cu,S,H,N,O,Q],[b,c,d,e,f,g]); .EQ (d7) [ [ b ^=^ { 16 \^ a } over 3 , c ^=^ { 4 \^ a } over 3 , d ^=^ 2 \^ a , e ^=^ { 4 \^ a } over 3 , f ^=^ a over 8 , g ^=^ { 8 \^ a } over 3 ] ] .EN (c8) /* Fractions may be cleared by finding the lowest common multiple of the denominators. One way to accomplish this is to define a routine which takes a list of numbers and computes the l.c.m. stepwise with the MACSYMA function for finding the greatest common divisor, "ezgcd" */ l:makelist(denom(rhs(part(coefs[1],i))),i,1,6); .EQ (d8) [ 3 , 3 , 1 , 3 , 8 , 3 ] .EN (c9) (lcm:l[1], for n:2 thru length(l) do lcm:lcm*l[n]/first(ezgcd(lcm,l[n]))); .EQ (d9) done .EN (c10) coefs:multthru(lcm,makelist(rhs(part(coefs[1],i)),i,1,6)); .EQ (d10) [ 128 \^ a , 32 \^ a , 48 \^ a , 32 \^ a , 3 \^ a , 64 \^ a ] .EN (c11) exit(); .R .fi .KE .SH Matrices .PP H\*:uckel Molecular Orbital (HMO) theory is a simple approximation to accurate electronic structure theory, which in turn forms the basis of understanding chemical reactivities and states. HMO is a simple model of chemical bonding based on the topological properties of molecules. .PP The energy values of the molecule are represented by the eigenvalues of the "adjacency" or bond matrix, reflecting the topology of the bonds (arranged in a chain in our example). The (i,j) element of the adjacency matrix is equal to unity if a bond exists between atoms i and j; otherwise the element is zero. Since an atom is not bonded to itself, diagonal elements of the adjacency matrix are equal to zero. .PP A typical example is the molecule called butadiene, which for our purposes can be thought of as a chain of four carbon atoms with the links representing the chemical bonds between the atoms. The adjacency matrix for butadiene is expressed according to a numbering scheme which identifies the atoms along the chain (1,2,3,4). In the H\*:uckel matrix constructed below, the diagonal elements carry the Coulomb integral a and the bonded pairs the resonance integral b, so its eigenvalues are those of the adjacency matrix scaled by b and shifted by a. .PP Mathematically HMO theory employs linear algebra techniques. The matrix operations involved quickly become so complex as to exclude all but the simplest examples from the textbooks and exercises. The statement of the problem is transparently simpler in a symbolic language; however, the solution of the problem is more efficient using a numerical program. .KS .nf \f4 (c1) /* HMO for butadiene */ /* Construct the H\*:uckel matrix. */ mat:matrix([a,b,0,0],[b,a,b,0],[0,b,a,b],[0,0,b,a]); .EQ (d1) left [ matrix { rcol { a above b above 0 above 0 } rcol { b above a above b above 0 } rcol { 0 above b above a above b } rcol { 0 above 0 above b above a } } right ] .EN (c2) /* The eigen package is not on-line and needs to be loaded from the library. Eigen returns a list of eigenvalues, followed by a list of their degeneracies. */ load(eigen); [load /usr/mac/share/eigen.l] .EQ (d2) /usr/mac/share/eigen.l .EN (c3) eigenvalues(mat); .EQ (d3) [ [ { 2 \^ a ^+^ ( 1 ^-^ sqrt 5 ) \^ b } over 2 , { 2 \^ a ^+^ ( 1 ^+^ sqrt 5 ) \^ b } over 2 , { 2 \^ a ^+^ ( ^-^ 1 ^-^ sqrt 5 ) \^ b } over 2 , { 2 \^ a ^+^ ( ^-^ 1 ^+^ sqrt 5 ) \^ b } over 2 ] , [ 1 , 1 , 1 , 1 ] ] .EN (c4) exit(); .R .fi .KE .PP Larger molecules may be treated in a similar fashion.
It would not be out of the question to consider an extension of this program which would receive chemical composition and structural information as input and produce HMO energies and molecular properties as output. .SH Calculus .PP Differential and integral calculus was invented for temporal processes in science (specifically mechanics at first), so examples abound. We will show some examples from statistical mechanics. .PP To illustrate applications to classical statistical mechanics, we will obtain the most probable velocity of a Maxwellian distribution function of an ideal gas as an extremum. .KS .nf \f4 (c1) /* Most probable velocity of a Maxwell distribution */ n(v):=4*%pi*n[0]*[m/(2*%pi*k*T)]^(3/2)*exp(-m*v^2/(2*k*T))*v^2; .EQ (d1) n ( v ) ~:=~ 4 \^ pi \^ n sub 0 \^ [ m over { 2 \^ pi \^ k \^ T } ] sup { 3 over 2 } \^ exp ( ^-^ { m \^ v sup 2 } over { 2 \^ k \^ T } ) \^ v sup 2 .EN (c2) ratsimp(diff(n(v),v)); .EQ (d2) [ ^-^ { ( ^-^ 2 \^ sqrt 2 \^ n sub 0 \^ T \^ k \^ m over { T \^ k } sup { 3 over 2 } \^ v ^+^ sqrt 2 \^ n sub 0 \^ m \^ m over { T \^ k } sup { 3 over 2 } \^ v sup 3 ) \^ e sup { ^-^ { m \^ v sup 2 } over { 2 \^ T \^ k } } } over { sqrt pi \^ T \^ k } ] .EN (c3) solve(%,v); .EQ (d3) [ v ^=^ ^-^ sqrt 2 \^ sqrt { { T \^ k } over m } , v ^=^ sqrt 2 \^ sqrt { { T \^ k } over m } , v ^=^ 0 ] .EN (c4) exit(); .R .fi .KE .PP A second example from statistical mechanics derives the Stefan-Boltzmann and Wien radiation laws from the Planck radiation distribution function. .KS .nf \f4 (c1) /* Stefan-Boltzmann law from the Planck radiation distribution function */ assume(c>0,h>0,k>0,T>0); .EQ (d1) [ c > 0 , h > 0 , k > 0 , T > 0 ] .EN (c2) rho(nu):=(8*%pi*h/c^3)*nu^3/(exp(h*nu/(k*T))-1); .EQ (d2) rho ( nu ) ~:=~ { { 8 \^ pi \^ h } over { c sup 3 } \^ nu sup 3 } over { exp ( { h \^ nu } over { k \^ T } ) ^-^ 1 } .EN (c3) integrate(rho(nu),nu); .EQ (d3) { 8 \^ pi \^ h \^ ( ^-^ { nu sup 4 } over 4 ^+^ { 6 \^ T sup 4 \^ k sup 4 \^ ( { h sup 3 \^ nu sup 3 \^ log ( 1 ^-^ e sup { { h \^ nu } over { T \^ k } } ) } over { 6 \^ T sup 3 \^ k sup 3 } ^+^ { h sup 2 \^ nu sup 2 \^ li sub 2 [ e sup { { h \^ nu } over { T \^ k } } ] } over { 2 \^ T sup 2 \^ k sup 2 } ^-^ { h \^ nu \^ li sub 3 [ e sup { { h \^ nu } over { T \^ k } } ] } over { T \^ k } ^+^ li sub 4 [ e sup { { h \^ nu } over { T \^ k } } ] ) } over { h sup 4 } ) } over { c sup 3 } .EN (c4) E=limit(%,nu,0); .EQ (d4) E ^=^ { 8 \^ pi sup 5 \^ T sup 4 \^ k sup 4 } over { 15 \^ c sup 3 \^ h sup 3 } .EN (c5) /* (limit doesn't evaluate to zero as nu goes to infinity) */ /* Wien displacement law */ solve(diff(rho(nu),nu),h*nu/(k*T)); .EQ (d5) [ { h \^ nu } over { T \^ k } ^=^ log ( { 3 \^ T \^ k } over { 3 \^ T \^ k ^-^ h \^ nu } ) ] .EN (c6) /* (The implicit solution fixes h nu /(k T) at a constant value, so the frequency of the distribution maximum is directly proportional to temperature.) */ exit(); .R .fi .KE .PP Obtaining and using statistical mechanical partition functions employs mathematical manipulations which should be straightforward for symbolic algebra programs. The following examples obtain limiting values for the heat capacity of an Einstein solid, and analytical and numerical forms for the entropy of a monatomic ideal gas, starting with the quantized energy expressions. .KS .nf \f4 (c2) /* Heat capacity of an Einstein solid.
*/ assume(h>0,v>0,k>0,T>0,N>0); .EQ (d2) [ h > 0 , v > 0 , k > 0 , T > 0 , N > 0 ] .EN (c3) depends([q,E],T); .EQ (d3) [ q ( T ) , E ( T ) ] .EN (c4) q:sum(exp(-(n+1/2)*h*v/(k*T)),n,0,inf); .EQ (d4) { e sup { ^-^ { h \^ v } over { 2 \^ T \^ k } } } over { 1 ^-^ e sup { ^-^ { h \^ v } over { T \^ k } } } .EN (c5) E:factor(k*T^2*diff(log(q^(3*N)),T)); .EQ (d5) { 3 \^ N \^ h \^ v \^ ( 1 ^+^ e sup { { h \^ v } over { T \^ k } } ) } over { 2 \^ ( ^-^ 1 ^+^ e sup { { h \^ v } over { T \^ k } } ) } .EN (c6) Cv:factor(diff(E,T)); .EQ (d6) { 3 \^ N \^ h sup 2 \^ v sup 2 \^ e sup { { h \^ v } over { T \^ k } } } over { T sup 2 \^ k \^ ( ^-^ 1 ^+^ e sup { { h \^ v } over { T \^ k } } ) sup 2 } .EN (c7) limit(Cv,T,0); .EQ (d7) 0 .EN (c8) /* (Use Taylor series expansions to get upper limit.) */ tlimit(Cv,T,inf); .EQ (d8) 3 \^ N \^ k .EN (c9) exit(); .R .fi .KE .KS .nf \f4 (c1) /* Entropy of a monatomic ideal gas (Sackur-Tetrode equation). */ /* One degree of translation */ E(n):=n^2*h^2/(8*m*a^2); .EQ (d1) E ( n ) ~:=~ { n sup 2 \^ h sup 2 } over { 8 \^ m \^ a sup 2 } .EN (c2) /* Define the classical mechanical partition function as an integral. */ /* Let the integrator know that the parameters have positive values: */ assume(T>0,h>0,k>0,m>0,a>0,g[e]>0)$ (c3) q[tr1]:rootscontract(integrate(exp(-E(n)/(k*T)),n,0,inf)); .EQ (d3) { a \^ sqrt { ( 2 \^ pi \^ T \^ k \^ m ) } } over h .EN (c4) /* Cube and substitute V for a-cubed (g[e] is the electronic degeneracy). */ q[tr3]:g[e]*subst(V,a^3,q[tr1]^3); .EQ (d4) { 2 \^ sqrt 2 \^ pi sup { 3 over 2 } \^ T sup { 3 over 2 } \^ V \^ g sub e \^ k sup { 3 over 2 } \^ m sup { 3 over 2 } } over { h sup 3 } .EN (c5) /* Q is the molar partition function. */ Q:(q[tr3]*%e/N)^N$ (c6) logexpand:super$ (c7) S:k*log(Q)+k*T*diff(log(Q),T)$ (c8) S:subst(N*k*T/P,V,subst(M/N,m,S)); .EQ I (d8) 0.0u { 3 \^ N \^ k } over 2 ^+^ k \^ ( N ^+^ { 3 \^ log ( 2 ) \^ N } over 2 ^+^ { 3 \^ log ( pi ) \^ N } over 2 ^+^ { 3 \^ N \^ ( log ( M ) ^-^ log ( N ) ) } over 2 .EN .EQ I " " 428u ^+^ { 3 \^ N \^ log ( T ) } over 2 ^+^ N \^ log ( g sub e ) ^-^ 3 \^ N \^ log ( h ) ^+^ { 3 \^ N \^ log ( k ) } over 2 .EN .EQ I " " 428u ^+^ N \^ ( ^-^ log ( P ) ^+^ log ( T ) ^+^ log ( k ) ) ) .EN (c9) S:ev(S,N=6.0220e23,h=6.626e-27,k=1.3807e-16)$ (c10) /* Standard entropy in calories: */ S[0]:expand(ev(S,T=298.16,P=1.0132e6,numer)/4.184e7); .EQ I (d10) 0.0u 25.99181902933839 ^+^ 2.980846821223709 \^ log ( M ) ^+^ 1.98723121414914 \^ log ( g sub e ) .EN (c11) /* Entropy of argon in calories (observed value is 36.983 cal/mol-deg): */ S[Ar]=ev(S[0],g[e]=1,M=39.948); .EQ (d11) S sub Ar ^=^ 36.98392600214311 .EN (c12) exit(); .R .fi .KE .PP Had an attempt been made to substitute R/k for N in the analytical entropy expression, an incorrect numerical value would have resulted. This is due to the fact that N does not always occur multiplied by k, which causes superfluous R variables to be introduced into the entropy expression. This illustrates the care which must be given in treating algebraic variables and expressions when using computer algebra. In examples like this difficulties are avoided by paying strict attention to the expected functional dependency on independent variables; entropy can depend on k or R, but should not be expected to depend on both. On the other hand, replacing atomic mass m with molar mass M through M/N does not increase the number of independent variables. 
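.PP The pitfall can be sketched in a line or two of MACSYMA (the expressions below are illustrative only, not taken from the session above): where N and k occur as a product, the substitution N = R/k introduces R cleanly, but a term containing N alone acquires a spurious factor of 1/k, leaving an expression that appears to depend on both R and k.
.KS
.nf
\f4
(c1) /* Substituting R/k for N in a term where N and k occur together... */
     subst(R/k, N, N*k*T);          /* yields R T */
(c2) /* ...and in a term where N occurs alone */
     subst(R/k, N, N*log(V));       /* yields R log(V)/k */
.R
.fi
.KE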
.SH Differential Equations .PP Chemical kinetics naturally involves differential equations, the solutions of which are referred to as \fIintegrated rate expressions\fR. We will take for our example the steady-state approximation, which is used to reduce systems of differential equations associated with complex reaction mechanisms to tractable form. It is pedagogically useful to compare steady-state solutions with accurate solutions under various conditions to assess the validity of the steady-state approximation. Consider the simplest case of reactant producing product through a single intermediate in first-order steps: .ce A \(-> B \(-> C. A direct approach to this problem can be made using symbolic differential equation solvers. .KS .nf \f4 (c1) /* Consecutive first-order kinetics demonstrates the steady-state approximation */ /* Initial values of concentrations */ atvalue(a(t),t=0,a[0])$ (c2) atvalue(b(t),t=0,0)$ (c3) atvalue(c(t),t=0,0)$ (c4) /* The kinetics equations are: */ eq1:diff(a(t),t) = -k[1]*a(t); .EQ (d4) d over { d t } a ( t ) ^=^ ^-^ k sub 1 \^ a ( t ) .EN (c5) eq2:diff(b(t),t) = k[1]*a(t)-k[2]*b(t); .EQ (d5) d over { d t } b ( t ) ^=^ k sub 1 \^ a ( t ) ^-^ k sub 2 \^ b ( t ) .EN (c6) eq3:diff(c(t),t) = k[2]*b(t); .EQ (d6) d over { d t } c ( t ) ^=^ k sub 2 \^ b ( t ) .EN (c7) /* load the diff. eq. solver and solve */ load(desoln); desoln.l being loaded. [load desoln.l] .EQ (d7) desoln.l .EN (c8) desolve([eq1,eq2,eq3],[a(t),b(t),c(t)]); .EQ (d8) [ a ( t ) ^=^ a sub 0 \^ e sup { ^-^ k sub 1 \^ t } , b ( t ) ^=^ { a sub 0 \^ k sub 1 \^ e sup { ^-^ k sub 1 \^ t } } over { ^-^ k sub 1 ^+^ k sub 2 } ^-^ { a sub 0 \^ k sub 1 \^ e sup { ^-^ k sub 2 \^ t } } over { ^-^ k sub 1 ^+^ k sub 2 } , c ( t ) ^=^ a sub 0 ^-^ { a sub 0 \^ k sub 2 \^ e sup { ^-^ k sub 1 \^ t } } over { ^-^ k sub 1 ^+^ k sub 2 } ^+^ { a sub 0 \^ k sub 1 \^ e sup { ^-^ k sub 2 \^ t } } over { ^-^ k sub 1 ^+^ k sub 2 } ] .EN (c9) /* (Save these solutions for future reference) */ truesoln:%$ (c10) /* Now solve for the steady-state approximate solution: */ atvalue(bs(t),t=0,0)$ (c11) atvalue(cs(t),t=0,0)$ (c12) eq4:diff(as(t),t) = -k[1]*as(t); .EQ (d12) d over { d t } as ( t ) ^=^ ^-^ k sub 1 \^ as ( t ) .EN (c13) eq5:0 = k[1]*as(t)-k[2]*bs(t); .EQ (d13) 0 ^=^ k sub 1 \^ as ( t ) ^-^ k sub 2 \^ bs ( t ) .EN (c14) eq6:diff(cs(t),t) = k[2]*bs(t); .EQ (d14) d over { d t } cs ( t ) ^=^ k sub 2 \^ bs ( t ) .EN (c15) /* The differential equation solver can also handle this system of mixed differential and algebraic equations: */ sssoln:desolve([eq4,eq5,eq6],[as(t),bs(t),cs(t)]); .EQ (d15) [ as ( t ) ^=^ as ( 0 ) \^ e sup { ^-^ k sub 1 \^ t } , bs ( t ) ^=^ { as ( 0 ) \^ k sub 1 \^ e sup { ^-^ k sub 1 \^ t } } over { k sub 2 } , cs ( t ) ^=^ as ( 0 ) ^-^ as ( 0 ) \^ e sup { ^-^ k sub 1 \^ t } ] .EN (c16) /* Now let's compare the steady-state concentration of species B with the actual B concentration at various times, for the special case of k\d2\u = 10k\d1\u, and a\d0\u = 1 */ /* Calculate the ratio of B(steady-state) to B(actual) for the above conditions */ ratio(k2t):=(subst(k2t/k[2],t,(subst(.1*k[2],k[1],subst(1,as(0),rhs(sssoln[2])))/ subst(.1*k[2],k[1],subst(1,a[0],rhs(truesoln[2])))))); .EQ (d16) ratio ( k2t ) ~:=~ substitute ( k2t over { k sub 2 } , t , { substitute ( 0.1 \^ k sub 2 , k sub 1 , substitute ( 1 , as ( 0 ) , rhs ( sssoln sub 2 ) ) ) } over { substitute ( 0.1 \^ k sub 2 , k sub 1 , substitute ( 1 , a sub 0 , rhs ( truesoln sub 2 ) ) ) } ) .EN (c17) /* This ratio should approach (9/10) at large values of
k\d2\ut: */ value:0.1; .EQ (d17) 0.1 .EN (c18) for i:1 thru 4 do (display(value,ev(ratio(value),numer)),value:10*value); .EQ value ^=^ 0.1 .EN .EQ ev ( ratio ( value ) , numer ) ^=^ 10.4567490889257 .EN .EQ value ^=^ 1.0 .EN .EQ ev ( ratio ( value ) , numer ) ^=^ 1.516605975364507 .EN .EQ value ^=^ 10.0 .EN .EQ ev ( ratio ( value ) , numer ) ^=^ 0.9001110825323516 .EN .EQ value ^=^ 100.0 .EN .EQ ev ( ratio ( value ) , numer ) ^=^ 0.9 .EN .EQ (d18) done .EN (c19) exit(); .R .fi .KE .PP Systems of first-order differential equations can also be solved by Laplace transform techniques and by matrix algebra (Perrin, 1970). .KS .nf \f4 (c1) /* Three consecutive kinetic equations, using matrix methods. */ /* Set up the rate coefficient matrix K such that -dC/dt = K.C */ Kmat:matrix([k[1],0,0],[-k[1],k[2],0],[0,-k[2],0]); .EQ (d1) left [ matrix { rcol { k sub 1 above ^-^ k sub 1 above 0 } rcol { 0 above k sub 2 above ^-^ k sub 2 } rcol { 0 above 0 above 0 } } right ] .EN (c2) /* Assign a column matrix of the initial concentrations: */ C[0]:matrix([a[0]],[0],[0]); .EQ (d2) left [ matrix { rcol { a sub 0 above 0 above 0 } } right ] .EN (c3) /* simtran returns two lists; the first has sublists of eigenvalues and corresponding degeneracies, and the second has eigenvectors. It also produces matrices of eigenvectors such that leftmatrix.Kmat.rightmatrix = a diagonal eigenvalue matrix. */ eigvals:first(first(simtran(Kmat))); .EQ (d3) [ k sub 2 , k sub 1 , 0 ] .EN (c4) /* Simplify the eigenvector matrices */ Xinv:radcan(leftmatrix)$ (c5) X:radcan(rightmatrix)$ (c6) /* Construct the matrix function exp(-t*eigvals) */ f[i,j]:=if i=j then exp(-t*eigvals[i]) else 0$ (c7) fmat:genmatrix(f,3,3); .EQ (d7) left [ matrix { rcol { e sup { ^-^ k sub 2 \^ t } above 0 above 0 } rcol { 0 above e sup { ^-^ k sub 1 \^ t } above 0 } rcol { 0 above 0 above 1 } } right ] .EN (c8) /* coefmat transforms initial concentrations into concentrations at time t (combine simplifies the result): */ coefmat:combine(X.fmat.Xinv)$ (c9) /* The matrix of concentrations as functions of time is: */ C:coefmat.C[0]; .EQ (d9) left [ matrix { rcol { a sub 0 \^ e sup { ^-^ k sub 1 \^ t } above { a sub 0 \^ ( k sub 1 \^ e sup { ^-^ k sub 1 \^ t } ^-^ k sub 1 \^ e sup { ^-^ k sub 2 \^ t } ) } over { ^-^ k sub 1 ^+^ k sub 2 } above a sub 0 \^ ( 1 ^+^ { ^-^ k sub 2 \^ e sup { ^-^ k sub 1 \^ t } ^+^ k sub 1 \^ e sup { ^-^ k sub 2 \^ t } } over { ^-^ k sub 1 ^+^ k sub 2 } ) } } right ] .EN (c10) /* The steady-state solutions correspond to k2>>k1 and t>>1/k2: */ assume(a[0]>0,k[1]>0,k[2]>0)$ (c11) css:factor(limit(limit(C,k[2]-k[1],k[2]),k[2]*t,inf)); .EQ (d11) left [ matrix { rcol { a sub 0 \^ e sup { ^-^ k sub 1 \^ t } above { a sub 0 \^ k sub 1 \^ e sup { ^-^ k sub 1 \^ t } } over { k sub 2 } above a sub 0 \^ e sup { ^-^ k sub 1 \^ t } \^ ( ^-^ 1 ^+^ e sup { k sub 1 \^ t } ) } } right ] .EN (c12) exit(); .R .fi .KE .PP .EQ delim $$ .EN .SH \fIReferences\fR .IP 1. \fISets, Logic and Axiomatic Theories\fR, Robert R. Stoll, W. H. Freeman and Company, San Francisco, 1961. .IP 2. \fIA Survey of Modern Algebra\fR, 3rd ed., Garrett Birkhoff and Saunders MacLane, The Macmillan Company, New York, 1965. .IP 3. \fIThe Mathematics of Physics and Chemistry\fR, 2nd ed., Henry Margenau and George Moseley Murphy, D. Van Nostrand, Princeton, 1956. .IP 4. \fIMathematics for Chemists\fR, C. L. Perrin, Wiley, 1970. .IP 5. \fIMathematica, A System for Doing Mathematics by Computer\fR, 2nd ed., Stephen Wolfram, Addison-Wesley Publishing Company, Inc., 1991. .IP 6.
\fIMaple\fR, 2nd ed. .IP 7. \fIAxiom User's Guide\fR, Springer-Verlag, 1993. .IP 8. \fIIntroduction to Modern Mathematics\fR, Herbert Meschkowski, Translated by A. Mary Tropper, George G. Harrap & Co., London, 1968. .OH '''' .EH ''''