K-Algebra - Meaning and background of the concept

I am trying to understand the concept of a k-algebra without much success.

Can someone please give me a clear explanation of the background, definition and use of the concept?

Also, I would be extremely grateful for some examples of k-algebras.

Peter

Re: K-Algebra - Meaning and background of the concept

ok, you know how fields can be considered "two groups in one" (one is the additive group, and one is the multiplicative group of non-zero elements)? well a k-algebra is a similar idea of "two structures in one".

on one hand, we have that a k-algebra is a vector space over k. this is the same thing as a k-module (which is FREE over any basis).

on the other hand, we have that a k-algebra forms a ring (usually associative and with unity). the ring structure and the vector space structure have to be COMPATIBLE.

this means: the ring multiplication is bilinear.

another way to look at this is that you have a ring, A, together with a ring-homomorphism η:k-->Z(A) (the center of A).

if A is not a trivial ring (just 0), then η is injective (because k is a field, and field-homomorphisms are always monomorphisms), so normally η(k) is identified with k.

this allows us to define a vector space on A by:

vector addition is just the ring addition (for any ring A, (A,+) is an abelian group).

scalar multiplication is defined like so: for c in k, and a in A:

ca = η(c)a (where the RHS is the ring multiplication of A).

******************

some typical examples:

let k be a field, and let E be any field containing k as a subfield. define for c in k, and a in E: ca to be the product in E. for example, the complex numbers C are an R-algebra. this is a 2-dimensional R-algebra.

let k be a field, and let k[x] be the ring of polynomials over k. then k[x] is a k-algebra. this is an infinite-dimensional k-algebra.
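to make the k[x] example concrete, here is a little python sketch (my own illustration -- the function names and the coefficient-list representation are made up) of polynomial arithmetic over k = Q, checking that the ring product is compatible with scalar multiplication:

```python
from fractions import Fraction

def poly_add(p, q):
    # pad the shorter coefficient list with zeros, then add coefficient-wise
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_mul(p, q):
    # the coefficient of x^n in pq is the sum of a_i * b_j over i + j = n
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def scalar(c, p):
    return [c * a for a in p]

p = [Fraction(1), Fraction(2)]                # 1 + 2x
q = [Fraction(0), Fraction(1), Fraction(3)]   # x + 3x^2
c = Fraction(5)

# bilinearity: c(pq) = (cp)q = p(cq)
assert scalar(c, poly_mul(p, q)) == poly_mul(scalar(c, p), q) == poly_mul(p, scalar(c, q))
```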

let k be a field, and let V be any vector space over k. then Hom(V,V) = End(V), the set of all k-linear mappings from V to V, is a k-algebra. if V is finite-dimensional, of dimension n, then End(V) has dimension n^{2}. for the finite-dimensional case, this can be identified with Mat_{nxn}(k), the set of all nxn matrices with entries in k.
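here is a quick numerical illustration (my own, using numpy) of identifying End(V) with Mat_{nxn}(k) for V = R^2: composing linear maps corresponds to multiplying their matrices.

```python
import numpy as np

S = np.array([[1.0, 2.0], [0.0, 1.0]])   # a shear of the plane
T = np.array([[0.0, -1.0], [1.0, 0.0]])  # rotation by 90 degrees
v = np.array([3.0, 4.0])

# applying T, then S, to v gives the same answer as applying the product S @ T
assert np.allclose(S @ (T @ v), (S @ T) @ v)
```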

let G be any group, and let k be any field. then the group-algebra k[G], consisting of formal k-linear combinations of elements of G (together with a "polynomial-like" multiplication) is a k-algebra.
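a tiny python model of the group-algebra idea (illustrative only -- representing an element of Q[G] as a dict from group elements to coefficients is my own choice), for G = Z/3Z:

```python
from fractions import Fraction

def ga_mul(u, v, n=3):
    # "polynomial-like" multiplication: coefficients multiply in Q,
    # group elements combine via the group operation (addition mod 3)
    out = {}
    for g, a in u.items():
        for h, b in v.items():
            k = (g + h) % n
            out[k] = out.get(k, Fraction(0)) + a * b
    return {g: c for g, c in out.items() if c != 0}

e, g1, g2 = 0, 1, 2                          # the three elements of Z/3Z
u = {e: Fraction(1), g1: Fraction(2)}        # 1e + 2g
v = {g2: Fraction(3)}                        # 3g^2
# (1e + 2g)(3g^2) = 3g^2 + 6g^3 = 6e + 3g^2, since g^3 = e
assert ga_mul(u, v) == {e: Fraction(6), g2: Fraction(3)}
```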

the fact that k-algebras are vector spaces over k lets us use the tools of linear algebra to investigate them. the fact that k-algebras are also rings lets us use ring-concepts in investigating them.

for example, for any k-algebra, we can speak of its group of units. for the k-algebra k[x], this is just k* (the non-zero constant polynomials). for the k-algebra End_{k}(V), this is GL_{k}(V), the general linear group of V. again, if V is finite-dimensional, this corresponds to the INVERTIBLE nxn matrices.

one can also form "function algebras". for example, one has the R-algebra C[a,b], consisting of continuous functions f:[a,b]--->R, where the ring-structure is inherited from R:

(f+g)(x) = f(x) + g(x)

(fg)(x) = f(x)g(x)

(cf)(x) = c(f(x)) <--this is the "scalar multiplication".

this example can easily be generalized to functions f:S-->k, where S is any set, and k is any field. often we are interested in some sub-ring of k^{S} (continuous, differentiable, linear, etc.) and often S has additional properties (such as a topology, or is a vector space itself).
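the pointwise operations on a function algebra are easy to play with directly; a small python sketch (my own illustration) for real-valued functions:

```python
# pointwise ring operations and scalar multiplication on functions S -> R
def f_add(f, g):
    return lambda x: f(x) + g(x)

def f_mul(f, g):
    return lambda x: f(x) * g(x)

def f_scale(c, f):
    return lambda x: c * f(x)

f = lambda x: x + 1
g = lambda x: x * x

h = f_mul(f_add(f, g), f)                # the function (f + g) * f
# check pointwise at x = 2: (3 + 4) * 3 = 21
assert h(2.0) == 21.0
```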

historically, the study of k-algebras in general (as opposed to specific instances of them) is relatively recent, going back only to around the beginning of the 20th century. some of the development of k-algebras probably came about as an attempt to realize various structures as matrix algebras; for example, it is well-known that the complex numbers can be realized as a sub-algebra of Mat_{2x2}(R) as:

$\displaystyle a+bi \leftrightarrow \begin{bmatrix}a&-b\\b&a \end{bmatrix}$

in a similar vein, the quaternions form a 4-dimensional algebra over R (they are also a 2-dimensional vector space over C, though not a C-algebra in the sense above, since C is not central in the quaternions). a 2x2 complex matrix that describes a quaternion is:

$\displaystyle \begin{bmatrix} A&B\\-B^*&A^* \end{bmatrix}$

where A* = the complex conjugate of A. each entry can be viewed as a 2x2 "block matrix", yielding a 4x4 real matrix.
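the correspondence a+bi <-> [[a,-b],[b,a]] is easy to verify numerically; here is a quick check (my own, using numpy) that it respects both the ring structure and the R-vector space structure:

```python
import numpy as np

def to_mat(z):
    # a + bi  <->  [[a, -b], [b, a]]
    a, b = z.real, z.imag
    return np.array([[a, -b], [b, a]])

z, w = 1 + 2j, 3 - 1j
# multiplication of complex numbers matches matrix multiplication...
assert np.allclose(to_mat(z) @ to_mat(w), to_mat(z * w))
# ...and addition matches matrix addition
assert np.allclose(to_mat(z) + to_mat(w), to_mat(z + w))
```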

a nifty feature of End(V), for finite-dimensional V: we have a monoid-homomorphism det:End(V)-->k, to the multiplicative monoid (k,*), which preserves units: T is in U(End(V)) (that is: GL(V)) iff det(T) is in U(k) = k* (that is: if det(T) ≠ 0).
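a quick numerical spot-check of this (my own illustration, using numpy): det is multiplicative, and a matrix is invertible exactly when its determinant is non-zero.

```python
import numpy as np

S = np.array([[2.0, 1.0], [0.0, 3.0]])
T = np.array([[1.0, 4.0], [2.0, 5.0]])

# det(ST) = det(S)det(T): det is a monoid-homomorphism to (k, *)
assert np.isclose(np.linalg.det(S @ T), np.linalg.det(S) * np.linalg.det(T))

# S is a unit of End(V) precisely because det(S) = 6 is non-zero:
assert np.allclose(S @ np.linalg.inv(S), np.eye(2))
```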

in general, we can investigate the sub-structure of a k-algebra A, by considering the ideals of A as a ring. such ideals are automatically subspaces of A since:

u in J and v in J means u+v is in J (ideals are closed under addition)

u in J and c in k means cu is in J (here we are implicitly considering c in A via the monomorphism η).

0 is in any ideal J of A.

note that in the algebra k[x], the ideal generated by x is considerably bigger than the subspace generated by x. so in k-algebras, you sometimes have to be careful specifying "how you're decomposing it". the "dual nature" of k-algebras leads to a rich and varied theory. one of the things k-algebras are useful for is "representation theory". a representation of a k-algebra A consists of two things:

1. a vector space V over k

2. an action on V by A via endomorphisms (equivalently: an algebra homomorphism A --> End_{k}(V)).

concretely (when dim(V) = n), this lets us think of elements of A as nxn matrices, so that instead of doing "abstract algebra" in A, we can do "concrete (matrix) arithmetic" in Mat_{nxn}(k). this, in turn, is equivalent to turning V into an A-module (you may already be familiar with regarding V as a k[x]-module, by picking a linear transformation T, and setting p(x).v = p(T)(v), or regarding V as a k[G]-module via a homomorphism φ:G-->GL(V) and setting:

(a_{1}e + a_{2}g_{1} +...+ a_{n}g_{n-1}).v = a_{1}v + a_{2}φ(g_{1})(v) +...+ a_{n}φ(g_{n-1})(v) ).
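the k[x]-module construction p(x).v = p(T)(v) can be sketched in a few lines of python (my own illustration, using numpy, for V = R^2):

```python
import numpy as np

T = np.array([[0.0, 1.0], [1.0, 0.0]])   # the "swap coordinates" map, T^2 = I

def act(coeffs, v):
    # coeffs = [a0, a1, ...] represents p(x) = a0 + a1*x + ...; return p(T)(v)
    out = np.zeros_like(v)
    Tk_v = v.copy()                      # starts as T^0 v = v
    for a in coeffs:
        out = out + a * Tk_v
        Tk_v = T @ Tk_v                  # advance to the next power of T
    return out

v = np.array([1.0, 2.0])
# p(x) = 1 + x^2; since T^2 = I, we get p(T)v = v + v = 2v
assert np.allclose(act([1.0, 0.0, 1.0], v), 2 * v)
```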

Re: K-Algebra - Meaning and background of the concept

Thank you for such a helpful post

Just working through it in detail now.

Peter

Re: K-Algebra - Meaning and background of the concept

Deveno,

You write:

"on one hand, we have that a k-algebra is a vector space over k. this is the same thing as a k-module (which is FREE over any basis)."

What do you mean by "FREE over any basis"?

Also, can you explain "compatible" further?

Peter

Re: K-Algebra - Meaning and background of the concept

ok, a free R-module over a ring R is a module M with a subset S such that:

1. S generates M (as finite R-linear combinations of elements of S).

2. S is R-linearly independent

S is called a BASIS for M. for example: consider the group (ZxZ,+). this can be considered a Z-module in a natural way:

n.(a,b) = (na,nb)

the set S = {(1,0),(0,1)} clearly generates ZxZ, since (a,b) = (a,0) + (0,b) = a(1,0) + b(0,1).

moreover, if a(1,0) + b(0,1) = (0,0), then (a,b) = (0,0), whence a = 0, and b = 0.

if R = k, a field, and V is a vector space over k (which is what a k-module is), V is uniquely determined (up to isomorphism) by the size of its basis.

in general, the free R-module generated by S, is a module M such that:

S ⊆ M, and given any module N, and any function f:S-->N, there is a unique R-module homomorphism F:M-->N such that F(s) = f(s) for all s in S (F can be thought of as "the unique R-module homomorphism obtained by extending f to all of M via R-linearity").

the following statement is true: the R-module M is free over a subset S iff S is a basis for M.

your original question in this context is: if S is a basis for M, why is M free over S? let's look at that in greater detail:

suppose we are given that S is a basis for an R-module M. since <S> = M, we can write any m in M as:

$\displaystyle m = \sum_j a_js_j$ for some a_{j}'s in R, and s_{j}'s in S (we are only considering FINITE R-linear combinations).

now suppose we are given a function (just an ordinary set-function) f:S-->N to some R-module N.

define F:M-->N by:

$\displaystyle F(m) = F\left(\sum_j a_js_j \right) = \sum_j a_jf(s_j)$

it is routine (but tedious) to verify that F(m+m') = F(m) + F(m'), and F(am) = aF(m), so F is R-linear, and thus an R-module homomorphism. clearly for any s in S, F(s) = F(1s) = 1f(s) = f(s). so F is one possible homomorphism that fits the bill. are there any others?
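the construction of F can be sketched numerically (my own illustration, using numpy, with R = R and M = R^2): the coordinates of m in the basis S determine F(m).

```python
import numpy as np

# a basis S = {s1, s2} of R^2, and an arbitrary set-function f: S -> R^3
s1, s2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])
f = {0: np.array([1.0, 0.0, 0.0]), 1: np.array([0.0, 2.0, 0.0])}

def F(m):
    # solve for the (unique) coordinates a with m = a1*s1 + a2*s2,
    # then extend f by linearity: F(m) = a1*f(s1) + a2*f(s2)
    a = np.linalg.solve(np.column_stack([s1, s2]), m)
    return a[0] * f[0] + a[1] * f[1]

# F agrees with f on the basis...
assert np.allclose(F(s1), f[0]) and np.allclose(F(s2), f[1])
# ...and is R-linear
u, v = np.array([2.0, 3.0]), np.array([-1.0, 4.0])
assert np.allclose(F(u + v), F(u) + F(v))
```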

suppose we have an R-module homomorphism G:M-->N with G(s) = f(s), for all s in S. then, given m in M:

$\displaystyle F(m) - G(m) = F\left(\sum_j a_js_j\right) - G\left(\sum_j a_js_j\right)$

$\displaystyle = \sum_j a_jf(s_j) - \sum_j G(a_js_j) = \sum_j a_jf(s_j) - \sum_j a_jG(s_j)$ (because G is R-linear)

$\displaystyle = \sum_j a_jf(s_j) - \sum_j a_jf(s_j) = 0$ since G(s_{j}) = f(s_{j}), by definition of G.

therefore, F = G. now, you may wonder, where did we use the R-linear independence of S? it's sort of "hidden" in the well-definedness of F:

recall that we defined F by defining F on R-linear combinations of the s_{j}. so, what if:

$\displaystyle m = \sum_j a_js_j = \sum_k b_ks_k$ for two different subsets $\displaystyle A = \{s_{j_1},\dots,s_{j_r}\}$ and $\displaystyle B = \{s_{k_1},\dots,s_{k_t}\}$ of S?

well, we can rewrite each R-linear combination as an R-linear combination of elements in AUB (which is still a finite R-linear combination of elements in S, since A and B are finite), by adding "0-terms" if necessary,

and get:

$\displaystyle 0 = m - m = \sum_u a_us_u - \sum_u b_us_u = \sum_u (a_u - b_u)s_u$.

by the R-linear independence of S, each a_{u}-b_{u} = 0.

if s_{u} only occurs in A (that is, we added a "0 term" for b_{u}) this means that a_{u} must have been 0 in the first place.

if s_{u} only occurs in B (that is we padded out our expression in terms of a's by using a "0 term" for a_{u}), we see that b_{u} must likewise have been 0.

if, however, s_{u} is in A∩B, we see the a-terms and the b-terms must match exactly, so that in fact, these are the SAME linear combination, so defining F on R-linear combinations of S uniquely defines it.

if you think about it, we defined F "in the only way possible". R-linear independence removes any ambiguity in our possible definition. this is NOT true for an arbitrary generating set X for M: for example, consider M = Z, and the generating set:

X = {2,3}. for any n, it is quite possible to find distinct pairs (k,m) of integers with n = 2k + 3m. for example, 1 = 3 - 2 = 9 - 8 (the first is k = -1, m = 1; the second is k = -4, m = 3), or 16 - 15 (k = 8, m = -5).

the set {2,3} is not Z-linearly independent: we have 2(3) + 3(-2) = 0, even though neither coefficient is 0. in this example, if we choose N = Z as well, with f:X-->N given by f(2) = f(3) = 1, we get (trying to use a similar F):

F(1) = F(2k + 3m) = kf(2) + mf(3) = -f(2) + f(3) = -1 + 1 = 0, using k = -1, m = 1

F(1) = F(2k + 3m) = kf(2) + mf(3) = -4f(2) + 3f(3) = -4 + 3 = -1, using k = -4, m = 3

F(1) = F(2k + 3m) = kf(2) + mf(3) = 8f(2) - 5f(3) = 8 - 5 = 3, using k = 8, m = -5, showing our definition of F is not "well-defined" (we get different values for F depending on how we pick k and m).
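the failure of well-definedness is just arithmetic, so it is easy to check directly (a throwaway python snippet, my own illustration):

```python
# with f(2) = f(3) = 1, the candidate value "F(1) = k*f(2) + m*f(3) = k + m"
# depends on which pair (k, m) with 2k + 3m = 1 we happen to pick
pairs = [(-1, 1), (-4, 3), (8, -5)]
assert all(2 * k + 3 * m == 1 for k, m in pairs)   # each pair represents 1

values = [k * 1 + m * 1 for k, m in pairs]         # candidate "F(1)" values
assert len(set(values)) == 3                       # three different answers!
```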

*********************************

now, in general an R-module need not HAVE a basis, and even if it does, two bases need not have the same cardinality. but for all commutative rings R, if we HAVE a basis, the cardinality of the basis is an invariant. if R = k, a field, then not only is the size of any basis of a k-module invariant, but every k-module also HAS a basis (the proof of this for arbitrary k-modules (vector spaces) involves the axiom of choice, and is a bit involved).

my apologies for the length of the exposition above. suffice it to say, modules are "almost like vector spaces" (except the scalars come from a ring), and when the ring is a field, they ARE vector spaces. so if you start with a basis set B of a vector space V, and create the free k-module over B, you wind up with something isomorphic to V. in purely linear algebraic terms, if B is a basis, span(B) = V (this sort of "hides" the fact that B is LI).

********************************

regarding "compatibility", the product in a k-algebra A is required to be BILINEAR (over k). this means two things:

1) (u)(v + w) = uv + uw

(u + v)(w) = uw + vw (these are, of course, just the distributive laws for a ring) for all u,v,w in A.

2) (cu)(v) = (u)(cv) = c(uv) (the scalar multiplication of A as a vector space respects the multiplication of A as a ring), for all c in k, and u,v in A.

note that if we defined A in terms of a field monomorphism η:k-->Z(A), then:

c.(uv) = η(c)(uv) = (η(c)u)(v) = (c.u)(v) by associativity of multiplication in A, while:

c.(uv) = η(c)(uv) = (η(c)u)(v) = (uη(c))(v) = (u)(η(c)v) = (u)(c.v), by associativity, and the fact that η(c), being in Z(A), commutes with u.

(the "dot" is normally omitted in the scalar product, because of these rules).
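a numerical spot-check of the compatibility rules (my own illustration, using numpy) in the matrix algebra Mat_{2x2}(R), where η(c) = cI is central, so the scalar can slide past either factor:

```python
import numpy as np

c = 3.0
u = np.array([[1.0, 2.0], [3.0, 4.0]])
v = np.array([[0.0, 1.0], [1.0, 1.0]])

# (cu)(v) = (u)(cv) = c(uv)
assert np.allclose((c * u) @ v, u @ (c * v))
assert np.allclose((c * u) @ v, c * (u @ v))
```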

Re: K-Algebra - Meaning and background of the concept

Thank you again - extremely helpful

Will now work through the post carefully!

Peter