**Symmetric Polynomials** are those polynomial expressions

\( f(x_1, ... , x_n) \in R[x_1, ... , x_n] \)

in the polynomial ring over \(R\) which remain fixed under every possible permutation of their variables. Given \(n\) variables there are \(n!\) (\(n\)-factorial) such permutations. For instance, the polynomial \(f (x_1, x_2) = x_1 ^ 2 + x_2 ^ 2 \) is symmetric in \(x_1\) and \(x_2\) because permuting these gives \(x_2 ^ 2 + x_1 ^ 2\), and clearly this is the same as \(x_1 ^ 2 + x_2 ^ 2\) since addition is assumed to be commutative.
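This permutation test can be checked mechanically. Here is a small sketch in Python that probes symmetry numerically at random sample points; the helper `looks_symmetric` and its sampling strategy are my own illustration, not part of any standard library:

```python
import itertools
import random

random.seed(0)  # deterministic sampling for reproducibility

def looks_symmetric(f, n, trials=50):
    """Numerically test whether f(x_1, ..., x_n) is unchanged by every
    permutation of its arguments, at several random points."""
    for _ in range(trials):
        point = [random.uniform(-10, 10) for _ in range(n)]
        values = {round(f(*perm), 6) for perm in itertools.permutations(point)}
        if len(values) > 1:          # some permutation changed the value
            return False
    return True

f = lambda x1, x2: x1**2 + x2**2          # symmetric
g = lambda x1, x2: 2*x1**2 + 3*x2**2      # a != b, so not symmetric

print(looks_symmetric(f, 2))   # True
print(looks_symmetric(g, 2))   # False
```

This is only a numerical spot check, of course, not a proof; but it catches non-symmetric expressions immediately.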

However, \(g (x_1, x_2) = ax_1 ^ 2 + bx_2 ^ 2 \; \) is in general *not* symmetric in \(x_1\) and \(x_2\) unless \(a\) happens to equal \(b\).

The **Elementary Symmetric Polynomials** constitute a fundamental building block out of which all of the symmetric polynomials can be formed. More precisely, we define the \(k^{th}\) *elementary symmetric polynomial in \(n\) variables* as follows:

\(e_k (x_1, ... , x_n) \) = the sum over all distinct products of \(k\) distinct variables:

\[e_k (x_1, ... , x_n) = \sum \limits_{1 \leq j_1 < j_2 < ... < j_k \leq n} x_{j_1}x_{j_2} ... x_{j_k}\]

As an example, if \(n = 4\), then

\(e_1 (x_1 , x_2 , x_3 , x_4) = x_1 + x_2 + x_3 + x_4\)

\(e_2 (x_1 , x_2 , x_3 , x_4) = x_1x_2 + x_1x_3 + x_1x_4 + x_2x_3 + x_2x_4 + x_3x_4\)

\(e_3 (x_1 , x_2 , x_3 , x_4) = x_1x_2x_3 + x_1x_2x_4 + x_1x_3x_4 + x_2x_3x_4\)

\(e_4 (x_1 , x_2 , x_3 , x_4) = x_1x_2x_3x_4\)
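The pattern in these examples, a sum over all \(k\)-element subsets, is easy to evaluate at concrete values. A minimal Python sketch, assuming we only want numerical evaluation rather than symbolic expansion (the function name `elementary` is my own):

```python
from itertools import combinations
from math import prod

def elementary(k, values):
    """e_k evaluated at the given values: the sum, over every k-element
    subset of the values, of the product of that subset."""
    return sum(prod(c) for c in combinations(values, k))

vals = [1, 2, 3, 4]
print([elementary(k, vals) for k in range(1, 5)])
# [10, 35, 50, 24]  i.e. e_1 = 10, e_2 = 35, e_3 = 50, e_4 = 24
```

For \(n = 4\) and \(k = 2\) the sum has \(\binom{4}{2} = 6\) terms, matching the six products written out above.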

Notice that given some

\(f (x) = (x - x_1)(x - x_2) ... (x - x_n) \)

\(= (-1)^ne_n + (-1)^{n-1}e_{n-1}x + ... + (-1)e_1x^{n-1} + x^n \)

the elementary symmetric polynomials in the roots \(x_1, ... , x_n\) are simply, up to a change of sign, the coefficients of the polynomial \(f(x)\).
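This relationship between coefficients and the \(e_k\) can be verified numerically. The sketch below (my own illustration) expands \((x-1)(x-2)(x-3)(x-4)\) by repeated multiplication and compares the coefficients against \((-1)^k e_k\):

```python
from itertools import combinations
from math import prod

def elementary(k, values):
    """e_k evaluated at the given values."""
    return sum(prod(c) for c in combinations(values, k))

roots = [1, 2, 3, 4]

# Expand f(x) = (x - 1)(x - 2)(x - 3)(x - 4); coefficients listed
# from x^4 down to the constant term.
coeffs = [1]
for r in roots:
    new = coeffs + [0]                 # multiply by x
    for i in range(1, len(new)):
        new[i] -= r * coeffs[i - 1]    # ... then subtract r * (old poly)
    coeffs = new

print(coeffs)                                           # [1, -10, 35, -50, 24]
print([(-1)**k * elementary(k, roots) for k in range(5)])  # the same list
```

The coefficient of \(x^{4-k}\) is exactly \((-1)^k e_k(1, 2, 3, 4)\), as the expansion above predicts.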

The **Fundamental Theorem of Symmetric Polynomials** states that every polynomial \( f(x_1, ... , x_n) \in R[x_1, ... , x_n] \) that is symmetric in \(x_1, ... , x_n\) is equal to some further polynomial \( g(e_1, ... , e_n) \), with coefficients in \(R\), evaluated at the elementary symmetric polynomials in \(x_1, ... , x_n\) . In order to prove this fact, we shall introduce a *lexicographical ordering* on the monomials which form these polynomials.

Every polynomial \(f (x_1, ... , x_n)\) is composed of terms (monomials) of the form

\(c_0x_1^{i_1} ... x_n^{i_n}.\)

We say that

\(c_0x_1^{i_1} ... x_n^{i_n} < c_1x_1^{j_1} ... x_n^{j_n}\)

if \(c_1 \ne 0\) and there is some \(0 \leq k \leq n-1\) such that \(i_{n-l} = j_{n-l}\) for \(l = 0, 1, ... , k-1\) and \(i_{n-k} < j_{n-k}.\)

Consider the following illustration of this ordering:

\(4x_1^4x_2^3x_3^5 < 6x_1^4x_2^4x_3^5\)

This is because \(0 \neq c_1 = 6\) and \(5 = i_3 = j_3\) but \(3 = i_2 < j_2 = 4 .\)
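In code, this ordering amounts to comparing exponent tuples starting from the last variable and working backwards, which is exactly what tuple comparison on the reversed exponents does. A one-function sketch (the name `lex_less` is my own):

```python
def lex_less(i_exps, j_exps):
    """Compare two monomials by their exponent tuples, using the ordering
    above: compare the exponent of x_n first, then x_{n-1}, and so on."""
    return tuple(reversed(i_exps)) < tuple(reversed(j_exps))

# The example from the text: x1^4 x2^3 x3^5 < x1^4 x2^4 x3^5,
# since the x3 exponents tie (5 = 5) and then 3 < 4 on x2.
print(lex_less((4, 3, 5), (4, 4, 5)))   # True
print(lex_less((4, 4, 5), (4, 3, 5)))   # False
```

Note that the coefficients play no role in the comparison beyond being nonzero, so they are omitted here.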

Our approach will be to repeatedly subtract from \(f (x_1, ... , x_n) \) a product of elementary symmetric polynomials in such a way that the greatest monomial, with respect to the lexicographical ordering introduced above, strictly decreases at each step. This process always terminates after a finite number of iterations, and we will be done. Taking stock of all the subtracted products then yields the sought-after polynomial in \(e_1, ... , e_n\) .

**Proof.** Let \(m = cx_1^{i_1} ... x_n^{i_n}\) be the greatest monomial in \(f (x_1, ... , x_n)\) . Then it must be the case that \(i_1 \leq i_2 \leq i_3 \leq ... \leq i_n\), because otherwise \(m\) couldn't be the greatest monomial: by symmetry \(f\) also contains the monomial with the same exponents sorted into weakly increasing order, and that monomial would be greater in the lexicographical ordering.

Now define

\(r = ce_1^{i_n - i_{n-1}}e_2^{i_{n-1} - i_{n-2}} ... e_{n-1}^{i_2 - i_1}e_n^{i_1} .\)

The largest monomial in \(r\) is \(m\), because the largest monomial in each \(e_i\) is

\[x_{n - i + 1}x_{n - i + 2} ... x_n\]

and so the largest monomial in \(r\) equals

\[cx_n^{i_n - i_{n-1}}(x_{n-1}x_n)^{i_{n-1} - i_{n-2}} ... (x_2x_3 ... x_n)^{i_2 - i_1}(x_1x_2x_3 ...x_n)^{i_1} = cx_1^{i_1}x_2^{i_2} ... x_n^{i_n} = m .\]

Therefore \(f_1 = f - r\) is again symmetric, and its greatest monomial is strictly smaller than \(m\) in the lexicographical ordering. Repeating this step, and writing \(r_j\) for the product subtracted at the \(j^{th}\) stage (so that \(f_{j+1} = f_j - r_j\)), the sequence

\[f_0 = f, \; f_1, \; f_2, \; ... \; , f_j, \; f_{j+1}, \; ... \; , f_s\]

of symmetric polynomials terminates with \(f_s \in R\) (the coefficient ring of \(R[x_1, ... , x_n]\) ) for some positive integer \(s\), since there is no infinite strictly decreasing sequence of monomials of bounded degree. Then if we let \(r_0 \; = \; r\), we have

\[f_s + r_0 + r_1 + \; ... + \; r_{s-1}\; = \; f = \; g(e_1 , ... , e_n) \]

and we are done. \( \blacksquare \)
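The proof is constructive, so it can be run. Below is a sketch in plain Python of the whole reduction; the dict-of-exponent-tuples representation and all function names are my own choices for illustration, and the code assumes its input really is symmetric (otherwise the loop need not terminate):

```python
from itertools import combinations

# Polynomials in x_1..x_n are dicts mapping exponent tuples to coefficients.

def e_poly(k, n):
    """The elementary symmetric polynomial e_k in n variables."""
    return {tuple(1 if i in s else 0 for i in range(n)): 1
            for s in combinations(range(n), k)}

def multiply(p, q):
    out = {}
    for ep, cp in p.items():
        for eq, cq in q.items():
            key = tuple(a + b for a, b in zip(ep, eq))
            out[key] = out.get(key, 0) + cp * cq
    return {k: v for k, v in out.items() if v != 0}

def subtract(p, q):
    out = dict(p)
    for e, c in q.items():
        out[e] = out.get(e, 0) - c
    return {k: v for k, v in out.items() if v != 0}

def power(p, k, n):
    out = {(0,) * n: 1}
    for _ in range(k):
        out = multiply(out, p)
    return out

def symmetric_to_elementary(f, n):
    """Express a symmetric polynomial f as g(e_1, ..., e_n): returns a dict
    mapping exponent tuples (a_1, ..., a_n) of e_1^{a_1} ... e_n^{a_n} to
    coefficients, following the reduction in the proof."""
    g = {}
    while any(e != (0,) * n for e in f):
        # Greatest monomial: compare the exponent of x_n first, then x_{n-1}...
        m = max(f, key=lambda e: tuple(reversed(e)))
        c, i = f[m], list(m)             # exponents are weakly increasing
        # Exponents of e_1..e_n in r = c * e_1^{i_n - i_{n-1}} ... e_n^{i_1}.
        a = [i[n - k] - i[n - k - 1] for k in range(1, n)] + [i[0]]
        g[tuple(a)] = g.get(tuple(a), 0) + c
        r = {(0,) * n: c}
        for k in range(1, n + 1):
            r = multiply(r, power(e_poly(k, n), a[k - 1], n))
        f = subtract(f, r)               # strictly smaller greatest monomial
    if f:                                # leftover constant term, if any
        g[(0,) * n] = g.get((0,) * n, 0) + f[(0,) * n]
    return g

# x1^2 + x2^2 + x3^2 reduces to e_1^2 - 2 e_2:
print(symmetric_to_elementary({(2, 0, 0): 1, (0, 2, 0): 1, (0, 0, 2): 1}, 3))
# {(2, 0, 0): 1, (0, 1, 0): -2}
```

Running it on the power sum \(x_1^3 + x_2^3 + x_3^3\) likewise recovers the classical identity \(e_1^3 - 3e_1e_2 + 3e_3\).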

Lagrange used these elementary symmetric polynomials in his 1770 *Réflexions sur la résolution algébrique des équations* to systematize the solution of univariate polynomial equations. For instance, consider the expression \((x_1 + \omega x_2 + \omega^2 x_3)^3\), where \(x_1, \; x_2, \; x_3\) are the solutions of the cubic polynomial equation \( x^3 + \alpha_2x^2 + \alpha_1x + \alpha_0 = 0 \; \) and \( \omega \) is some primitive 3rd root of unity.

This expression assumes only two distinct values under the six permutations of \(x_1, \; x_2, \; x_3\): it remains the same under even permutations and assumes a different value after odd permutations (you can prove this for yourself; it requires only routine and monotonous algebra, which I've omitted for brevity). Write

\(A = (x_1 + \omega x_2 + \omega^2 x_3)^3\) and \(B = (x_1 + \omega^2 x_2 + \omega x_3)^3.\)

Adding these two expressions together gives the symmetric polynomial expression

\(A + B = (x_1 + \omega x_2 + \omega^2 x_3)^3 + (x_1 + \omega^2 x_2 + \omega x_3)^3.\)

This can be computed from the elementary symmetric polynomials. We can do likewise with the product of the two expressions

\(AB = (x_1 + \omega x_2 + \omega^2 x_3)^3 (x_1 + \omega^2 x_2 + \omega x_3)^3.\)
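Both claims, that \(A + B\) and \(AB\) are fully symmetric in \(x_1, x_2, x_3\) while \(A\) alone is not, can be spot-checked numerically. A sketch with arbitrary sample roots (the chosen values and helper names are my own illustration):

```python
import cmath
import itertools

w = cmath.exp(2j * cmath.pi / 3)       # a primitive cube root of unity

def A_of(x1, x2, x3):
    return (x1 + w*x2 + w**2*x3)**3

def B_of(x1, x2, x3):
    return (x1 + w**2*x2 + w*x3)**3

def key(z):
    """Round a complex value so nearly-equal results land in one set entry."""
    return (round(z.real, 6), round(z.imag, 6))

roots = (1.7, -0.4, 2.9)               # arbitrary sample roots

sums, prods, a_values = set(), set(), set()
for p in itertools.permutations(roots):
    sums.add(key(A_of(*p) + B_of(*p)))
    prods.add(key(A_of(*p) * B_of(*p)))
    a_values.add(key(A_of(*p)))

print(len(sums), len(prods))   # 1 1  -> A + B and AB are fully symmetric
print(len(a_values))           # 2    -> A alone takes two distinct values
```

The two values taken by \(A\) under the six permutations are precisely \(A\) (even permutations) and \(B\) (odd permutations), which is why the symmetric combinations are their sum and product.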

These two expressions, \(A + B\) and \(AB\), are the elementary symmetric polynomials in \(A\) and \(B\), and so \(A\) and \(B\) are the roots of the quadratic equation

\( y^2 - (A+B)y + AB= (y - A)(y - B) = 0 \)

which we can explicitly solve using the well-known Quadratic Formula

\[y = {-b \pm \sqrt{b^2-4ac} \over 2a}\]

where \(a = 1\), \(b = -(A+B)\), and \(c = AB\).

Thus, since \(A\) and \(B\) are the only roots of this quadratic equation, we have found an expression for each of them involving only square roots (as well as the usual arithmetical operations) of expressions in primitive roots of unity and the coefficients of the original cubic polynomial, that is \( \alpha_2, \alpha_1, \alpha_0 \).


Finally, since \(x_1 + x_2 + x_3 = - \alpha_2\), where \(\alpha_2\) is the coefficient of \(x^2\) in the cubic polynomial equation, we obtain a 3-by-3 system of linear equations

\[\begin{array}
{lcl} x_1 + \omega x_2 + \omega^2 x_3 & = & \sqrt [3] {A} \\
x_1 + \omega^2 x_2 + \omega x_3 & = & \sqrt [3] {B} \\ x_1 + x_2
+ x_3 & = & - \alpha_2 \end{array} \]

which we can solve to obtain the solutions of the cubic (since \(A\) and \(B\) each have three cube roots and there are two choices of primitive root \(\omega\), we try all eighteen possible combinations and keep those that actually yield solutions \(x_1, x_2, x_3\) of the cubic).
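The whole procedure can be exercised end to end on a cubic with known roots. In the sketch below, \(A\) and \(B\) are built directly from the known roots \(1, 2, 3\) (an arbitrary choice, for checking purposes; in Lagrange's method they would come from the coefficients), recovered via the quadratic formula, and then the cube-root combinations are tried, keeping only those that satisfy the cubic:

```python
import cmath
import itertools

# Roots chosen in advance so the recovery step can be verified.
x1, x2, x3 = 1.0, 2.0, 3.0
a2 = -(x1 + x2 + x3)               # alpha_2 = -6
a1 = x1*x2 + x1*x3 + x2*x3         # alpha_1 = 11
a0 = -(x1 * x2 * x3)               # alpha_0 = -6

w = cmath.exp(2j * cmath.pi / 3)   # a primitive cube root of unity

A = (x1 + w*x2 + w**2*x3)**3
B = (x1 + w**2*x2 + w*x3)**3

# Solve y^2 - (A+B)y + AB = 0 with the quadratic formula.
disc = cmath.sqrt((A + B)**2 - 4*A*B)
yA, yB = ((A + B) + disc) / 2, ((A + B) - disc) / 2

def cube_roots(z):
    r, t = abs(z) ** (1/3), cmath.phase(z) / 3
    return [r * cmath.exp(1j * (t + 2*cmath.pi*k/3)) for k in range(3)]

def is_root(x):
    return abs(x**3 + a2*x**2 + a1*x + a0) < 1e-6

# Solving the 3x3 linear system for (x1, x2, x3) in terms of u = A^(1/3)
# and v = B^(1/3) gives x1 = (u + v - a2)/3, x2 = (w^2 u + w v - a2)/3,
# x3 = (w u + w^2 v - a2)/3. Try every pair of cube roots and keep the
# candidate triples that actually satisfy the cubic.
solutions = set()
for u, v in itertools.product(cube_roots(yA), cube_roots(yB)):
    cand = [(u + v - a2) / 3,
            (w**2*u + w*v - a2) / 3,
            (w*u + w**2*v - a2) / 3]
    if all(is_root(x) for x in cand):
        solutions.update(round(x.real, 6) for x in cand)

print(sorted(solutions))   # [1.0, 2.0, 3.0]
```

Only the cube-root pairs whose product matches the required symmetric combination survive the check; each surviving pair reproduces the same set of roots, just in a permuted order.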

Sources: Wikipedia's article "Elementary symmetric polynomial"

*Galois Theory*, 3rd ed., by Ian Stewart