Math Help - Simple proof for a property of Vector space

  1. #1
    Junior Member
    Joined
    Jul 2012
    From
    New Delhi
    Posts
    52
    Thanks
    3

    Simple proof for a property of Vector space

    For a ∈ R and x ∈ V (a vector space), show that
    ax = 0 => a = 0 or x = 0.

    I am not able to prove it mathematically.
    If I go by taking a ≠ 0 and x ≠ 0
    and divide ax by a, I get x = 0,
    but this does not work for dividing by x, as division is not defined for vectors.

    Some help please?
    thanks.

  2. #2
    MHF Contributor
    Joined
    Oct 2009
    Posts
    5,540
    Thanks
    780

    Re: Simple proof for a property of Vector space

    Quote Originally Posted by pratique21 View Post
    If I go by taking a ≠ 0 and x ≠ 0
    and divide ax by a, I get x = 0,
    but this does not work for dividing by x, as division is not defined for vectors.
    Why would you want to divide by x? Suppose ax = 0 and a ≠ 0. Then, as you say, (1/a)(ax) = (1/a)0, from which x = 0. From pure logic, (ax = 0 => a = 0 or x = 0) is equivalent to (ax = 0 and a ≠ 0 => x = 0). Note that, strictly speaking, you need to prove that a0 = 0, where 0 is the zero vector, since this is not one of the axioms, e.g., in Wikipedia.

  3. #3
    Junior Member
    Joined
    Jul 2012
    From
    New Delhi
    Posts
    52
    Thanks
    3

    Re: Simple proof for a property of Vector space

    Okay. But we are supposed to show that ax = 0 => a = 0 or x = 0.
    Here we have shown that if ax = 0, then at least x = 0 (by taking a ≠ 0).
    Don't we need to show that a = 0 (if x ≠ 0)? How to do that?
    And yes, I am aware of the fact that a0 = 0 needs to be proved.

  4. #4
    MHF Contributor
    Joined
    Oct 2009
    Posts
    5,540
    Thanks
    780

    Re: Simple proof for a property of Vector space

    Quote Originally Posted by emakarov View Post
    From pure logic, (ax = 0 => a = 0 or x = 0) is equivalent to (ax = 0 and a ≠ 0 => x = 0).
    If you had agreed to this claim, then this would have settled the question. If ∧ denotes "and", ∨ denotes "or" and → denotes "implies", then we have the following equivalences.

    (A ∧ ¬B → C) is equivalent to (A → (¬B → C)),

    (¬B → C) is equivalent to ¬¬B ∨ C,

    ¬¬B is equivalent to B.

    Therefore, ((A ∧ ¬B) → C) is equivalent to (A → (B ∨ C)). If A is "ax = 0", B is "a = 0" and C is "x = 0", this says that

    ((ax = 0 ∧ ¬(a = 0)) → x = 0) is equivalent to (ax = 0 → (a = 0 ∨ x = 0)).

    Quote Originally Posted by pratique21 View Post
    Okay. But we are supposed to show that ax = 0 => a = 0 or x = 0.
    Here we have shown that if ax = 0, then at least x = 0 (by taking a ≠ 0).
    Don't we need to show that a = 0 (if x ≠ 0)?
    No, we don't need to show this because (x ≠ 0 → a = 0) is equivalent to (a ≠ 0 → x = 0), which we have already shown (the second formula is the contrapositive of the first one, modulo the equivalence between ¬¬(x = 0) and x = 0).
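    If it helps to see these propositional facts concretely, both equivalences can be checked by brute force over all eight truth assignments. A small Python sketch (just an illustration, not part of the proof; the helper implies simply encodes material implication):
    Code:
    from itertools import product

    def implies(p, q):
        # material implication: p -> q is false only when p is true and q is false
        return (not p) or q

    for A, B, C in product([False, True], repeat=3):
        # ((A and not B) -> C) agrees with (A -> (B or C)) on every assignment
        assert implies(A and not B, C) == implies(A, B or C)
        # (not C -> B) and (not B -> C) are contrapositives, hence equivalent
        assert implies(not C, B) == implies(not B, C)
    print("both equivalences hold on all 8 assignments")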

  5. #5
    Junior Member
    Joined
    Jul 2012
    From
    New Delhi
    Posts
    52
    Thanks
    3

    Re: Simple proof for a property of Vector space

    wow..that's wonderful.
    I get it now. thanks a lot.
    I am quite illogical it seems.

  6. #6
    MHF Contributor

    Joined
    Mar 2011
    From
    Tejas
    Posts
    3,397
    Thanks
    760

    Re: Simple proof for a property of Vector space

    mathematicians, when presented with a statement like: A → (B v C), like to prove instead: (A & B) v (A → C). these statements have the exact same truth-table (which i urge you to verify, if only just this once; a brute-force check is sketched a little further down), so are logically equivalent. in words, the proofs go like this:

    suppose A. if B, we are done, so suppose not B. but then (and some actual proof goes here) C. bam! (ok, the halmos square, or QED is actually used instead of "bam!", but that's because mathematicians are a humorless sort).

    to a casual reader, it looks like what "ought to be proved" is something like (A → B) v (A → C). one "sees" the A → C part of the proof, and thinks: "hey, what about the A → B part? isn't half the proof missing?". and indeed, this third statement:

    (A → B) v (A → C) is also logically equivalent to the other two forms (all three are only false when: A is true, and B and C are both false). but for this last statement to be true, only ONE of the two statements A → B, or A → C needs to be proven. the only thing we need to "watch out for" is when one of them is false (like when A is true, and B is false), and then the other one better be true.
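    (if you want to take up that urging without writing the truth tables by hand, here is one way to brute-force it, sketched in python; the implies helper is just material implication, and nothing below is specific to vector spaces:)
    Code:
    from itertools import product

    def implies(p, q):
        return (not p) or q

    for A, B, C in product([False, True], repeat=3):
        f1 = implies(A, B or C)              # A -> (B v C)
        f2 = (A and B) or implies(A, C)      # (A & B) v (A -> C)
        f3 = implies(A, B) or implies(A, C)  # (A -> B) v (A -> C)
        assert f1 == f2 == f3
        # each form is false exactly when A is true and B, C are both false
        assert (not f1) == (A and not B and not C)
    print("all three forms have the same truth table")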

    now, in the present example you have two choices:

    given ax = 0, you can either assume a ≠ 0, and then prove x = 0, OR:
    given ax = 0, you can assume that x ≠ 0, and then prove a = 0.

    you do not have to do BOTH. the first one is "easier" since then we can multiply by 1/a to get:


    (1/a)(ax) = (1/a)(0) = 0 (this is where proving c0 = 0 comes in)
    ((1/a)(a))x = 0 (the passage from the first line to the second is one of the vector space axioms: a(bx) = (ab)x)
    1x = 0
    x = 0 (1x = x is another vector space axiom).
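    (purely as a sanity check, and not a proof of anything: the two axioms used in that chain, plus the c0 = 0 fact, can be spot-checked numerically, say in R^3 with numpy, assuming numpy is at hand:)
    Code:
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=3)   # an arbitrary vector in R^3
    a, b = 2.5, -1.3         # arbitrary nonzero scalars
    zero = np.zeros(3)

    assert np.allclose(a * (b * x), (a * b) * x)  # a(bx) = (ab)x
    assert np.allclose(1 * x, x)                  # 1x = x
    assert np.allclose(a * zero, zero)            # c0 = 0
    # so if ax were the zero vector, (1/a)(ax) = ((1/a)a)x = 1x = x would be 0 too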

    out of perversity, let's do the "hard" one:

    ax = 0, x ≠ 0.

    suppose that (by way of contradiction) a ≠ 0.

    now ax + x = x (since ax = 0) thus:

    (1/a)(ax + x) = (1/a)(ax) + (1/a)x = ((1/a)(a))x + (1/a)x = 1x + (1/a)x = x + (1/a)x, so:

    x + (1/a)x = (1/a)x. subtracting (1/a)x from both sides gives:

    x = 0, contradiction (you see the hoops we had to jump through because we can't "divide by x"). therefore, a must be 0.

    the proof of:

    ax = 0, a ≠ 0, so x = 0

    is "easier" because we have "more things we can do to a" (a field is a richer structure than an abelian group).

    of course, this proof is not complete without showing that:

    c0 = 0, for all c. but this is easy:

    c0 = c(0 + 0) (since 0 = 0 + 0, because x = 0 + x for all x, including x = 0)
    c0 = c0 + c0 (since in any vector space c(x + y) = cx + cy).
    0 = c0 (subtracting c0 from both sides. ok, actually adding -(c0), but sheesh....)

  7. #7
    MHF Contributor
    Joined
    Oct 2009
    Posts
    5,540
    Thanks
    780

    Re: Simple proof for a property of Vector space

    Quote Originally Posted by Deveno View Post
    mathematicians, when presented with a statement like: A → (B v C), like to prove instead: (A & B) v (A → C)... in words, the proofs go like this:

    suppose A. if B, we are done, so suppose not B. but then (and some actual proof goes here) C.
    I am not sure: are you proving (A & B) v (A → C)? Then why do you start with "suppose A" if the statement is not an implication and you are not using the law of excluded middle for A?

    Quote Originally Posted by Deveno View Post
    now, in the present example you have two choices:

    given ax = 0, you can either assume a ≠ 0, and then prove x = 0, OR:
    given ax = 0, you can assume that x ≠ 0, and then prove a = 0.
    These statements are A /\ ~B -> C and A /\ ~C -> B, but not (A & B) v (A → C).

  8. #8
    MHF Contributor

    Joined
    Mar 2011
    From
    Tejas
    Posts
    3,397
    Thanks
    760

    Re: Simple proof for a property of Vector space

    Quote Originally Posted by emakarov View Post
    I am not sure: are you proving (A & B) v (A → C)? Then why do you start with "suppose A" if the statement is not an implication and you are not using the law of excluded middle for A?

    These statements are A /\ ~B -> C and A /\ ~C -> B, but not (A & B) v (A → C).
    why do mathematicians, in statements of the form "B follows from A", assume that A is true? (answer: because if A isn't true, we could prove anything we like, which tells us absolutely...nothing).

    now the statements A&(A→B) and A&B are equivalent. if we're given A (in this case A is: ax = 0), then A→B is the same as A&B. in other words:

    A&[(A→B) v (A→C)] = A&[(A&B) v (A→C)].
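    (both identities are easy to confirm by brute force, if one cares to; a quick python check in the same spirit as before:)
    Code:
    from itertools import product

    def implies(p, q):
        return (not p) or q

    for A, B, C in product([False, True], repeat=3):
        assert (A and implies(A, B)) == (A and B)  # A&(A->B) is equivalent to A&B
        assert (A and (implies(A, B) or implies(A, C))) == (
            A and ((A and B) or implies(A, C))
        )  # the bracketed identity above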

    we're not interested in the truth of A. it's *handed* to us. we're only interested in the stuff in the square brackets. the phrase "suppose A" is used loosely (and OFTEN) in literature as meaning: "we take A to be true".

    so let's look at (A&B) v (A→C). what am i saying here? i'm only interested in when this is TRUE. if both A and B are true, i don't have to show anything. if anything, i'm applying the law of the excluded middle to B, not to A. that is:

    if "ax = 0 and a = 0" is true, then:

    "if ax = 0, then a = 0 or x = 0" is ALSO true.

    the ONLY time that (A&B) v (A→C) is false, is if BOTH parts of the disjunction are false. if ~B is true, then A&B is false, in which case, the only hope of salvaging some true statement is to prove C is true (given A).

    let me go into some more detail:

    we want to prove:

    A → (B v C), where:

    A is: ax = 0
    B is: a = 0
    C: x = 0.

    is there any DOUBT that A → (B v C) is the same as: (A → B) v (A → C)? we can eliminate the implications by writing X→Y as: ~X v Y.

    then A → (B v C) = ~A v (B v C) = (~A v ~A) v (B v C) = (~A v (~A v B)) v C = ((~A v B) v ~A) v C = (~A v B) v (~A v C) = (A→B) v (A→C).

    so we COULD prove (A→B) v (A→C) INSTEAD of A → (B v C). but (A&B) v (A→C) is equivalent to THAT. so we can prove that instead. is any of this untrue so far??

    it's COMMON PRACTICE in mathematics when proving statements of the form C v D, to prove instead: ~C → D.

    well, for A&B, GIVEN A, ~(A&B) means ~B. so yes, when passing to a proof of (A&B) v (A→C) in the presence of A, one naturally writes ~B → (A→C).

    ok, from the start:

    "assume ax = 0" this is just what we are given. the whole rigamarole of C, C→D, therefore D, is useless formalism, i can't remember when a proof of anything outside of formal systems even MENTIONS "modus ponens". if you cannot see that if C is true, and C entails D that D is also true, then have a machine write your proofs for you. we can build up any natural language statement into a complex weave of formal boolean statements that say the same thing, but that OBSCURES the truth, it does not REVEAL it.

    "if a = 0, we are done." why? because if A is true, and B is true, than BOTH A&B and A→B are true. there is quite literally, nothing to prove. whether or not x = 0 happens to be true, makes no difference ("or" statements only require one part to be true).

    "so assume a ≠ 0." again, since A is true, this falsifies both A&B and A→B. now we HAVE to prove A→C (that is: ax = 0 implies x = 0).

    now we do just that.

    what infuriates me, about your entire post is it does just the OPPOSITE of what i'm trying to explain:

    when we have a proof like: given A, prove B or C, we do NOT have to prove both A→B AND A→C. we just have to prove ONE holds. often one of these 2 is harder than the other. we dispense with that by considering B either true, or not true. if A is true, it makes no difference whatsoever if i say A→B, or A&B. B plays the same role either way. in the case that B is true, we've "proved one half". in the case that B is not true, we need to prove "the other half".

    i didn't just "make up" this style of proof for the purposes of this thread. it's COMMON. for example, it's used frequently to show that a given integer is prime (using Euclid's lemma). the point i'm trying to make is NOT:

    "certain logical propositions are formally equivalent"

    but rather:

    with OR proofs, negate one implication and prove the other. yes, one can show:

    ~(~A v B) = A&(~B), and A&(A→B) therefore is (A&(~A))v(A&B) = (false)v(A&B) = A&B, but that's not how i think.

    why do mathematicians identify, in informal proof-writing, A&(A→B) and A&B? who knows. better yet, who cares? if i express a proof of (A→B) v (A→C) by:

    (A&B) v (A→C), and show ~B→(A→C), do i really have to justify this?

  9. #9
    MHF Contributor
    Joined
    Oct 2009
    Posts
    5,540
    Thanks
    780

    Re: Simple proof for a property of Vector space

    Quote Originally Posted by Deveno View Post
    when we have a proof like: given A, prove B or C, we do NOT have to prove both A→B AND A→C.
    I agree.

    Quote Originally Posted by Deveno View Post
    if i express a proof of (A→B) v (A→C) by:

    (A&B) v (A→C), and show ~B→(A→C), do i really have to justify this?
    Talking about (A&B) v (A→C) is misleading because you are actually proving ~B→(A→C), or A -> ~B -> C, which is almost the same thing. This is confirmed by the fact that you start the proof (in post #6) by assuming A and also by explicitly writing the fact you are proving: "given ax = 0, you can either assume a ≠ 0, and then prove x = 0." There are three cases when one assumes A in a proof: (1) when the claim has the form A -> ...; (2) when one uses the law of excluded middle (LEM) for A; and (3) when one proves a lemma of the form A -> ... Case (1) agrees with the fact that you are proving A -> ~B -> C; (2) is not the case here and (3) is more complicated than this problem is worth, especially when you are trying to explain it to somebody.

    I am not arguing that A -> B \/ C is not equivalent to (A&B) v (A→C). I am just doubting that "mathematicians, when presented with a statement like: A → (B v C) like to prove instead: (A & B) v (A → C)." If you really started proving (A & B) v (A → C), you would start the proof either by LEM or by choosing one of the disjuncts to prove. Instead, it is easier and more natural to prove A -> ~B -> C, which is also what you did.

  10. #10
    Junior Member
    Joined
    Jul 2012
    From
    New Delhi
    Posts
    52
    Thanks
    3

    Re: Simple proof for a property of Vector space

    I don't know if I am correct here, but when we multiply ax = 0 by 1/a,
    don't we get
    1 * x = 0,
    which means that to show x = 0, we are actually using the property that we are supposed to prove, namely: since 1 ≠ 0, x = 0.
    Thanks.

  11. #11
    MHF Contributor
    Joined
    Oct 2009
    Posts
    5,540
    Thanks
    780

    Re: Simple proof for a property of Vector space

    1 * x = x is one of the axioms of a vector space.

  12. #12
    MHF Contributor

    Joined
    Mar 2011
    From
    Tejas
    Posts
    3,397
    Thanks
    760

    Re: Simple proof for a property of Vector space

    Quote Originally Posted by emakarov View Post
    I agree.

    Talking about (A&B) v (A→C) is misleading because you are actually proving ~B→(A→C), or A -> ~B -> C, which is almost the same thing. This is confirmed by the fact that you start the proof (in post #6) by assuming A and also by explicitly writing the fact you are proving: "given ax = 0, you can either assume a ≠ 0, and then prove x = 0." There are three cases when one assumes A in a proof: (1) when the claim has the form A -> ...; (2) when one uses the law of excluded middle (LEM) for A; and (3) when one proves a lemma of the form A -> ... Case (1) agrees with the fact that you are proving A -> ~B -> C; (2) is not the case here and (3) is more complicated than this problem is worth, especially when you are trying to explain it to somebody.

    I am not arguing that A -> B \/ C is not equivalent to (A&B) v (A→C). I am just doubting that "mathematicians, when presented with a statement like: A → (B v C) like to prove instead: (A & B) v (A → C)." If you really started proving (A & B) v (A → C), you would start the proof either by LEM or by choosing one of the disjuncts to prove. Instead, it is easier and more natural to prove A -> ~B -> C, which is also what you did.
    not my intention to "mislead". my sole point is, the logical equivalence of many of the statements we've discussed in a FORMAL system is often "swept under the rug" in an INFORMAL proof (which MOST proofs actually are). so in proving something like:

    (A&B) v (A→C) one often tacitly assumes ~(A&B) (which because of A, is ~B) and thus proceeds to prove A→C.

    this is asserting that X v Y is the same as ~X → Y (which is the same as asserting X → Y is the same as ~X v Y (as one can see by replacing X with ~X), which as i understand it is a BASIC logical equivalence (implication has a disjunctive normal form)).

    (A→B) v (A→C) has a certain "symmetry" with regard to B and C. but the proofs of this form do not. in fact they are largely concerned with proving just one of A→B or A→C (perhaps it would satisfy you more, if i said: "mathematicians prefer to prove instead: ~(A→B) → (A→C)." i suppose that is a fair critique).

    not to belabor the point (ok, i'm belaboring the point...it's not a lie, it's irony), but "under the surface" in my mind:

    A&(A→B)
    A&B

    are the same, and express the same thing: in the presence of A, the truth of "one half of (A→B) v (A→C)" (the A→B part) only depends on B. as is typical for ME (and probably for other people, judging from the literature) i naturally break this down into 2 cases:

    B
    ~B

    the first case, which "verifies" A&B (equivalently A&(A→B)), is hardly worthy of mention, it goes without saying (hence the paraphrase "there is nothing to prove").

    in the case at hand: if ax = 0, and a = 0, then certainly one of:

    a = 0 or x = 0

    is in fact, true (the a = 0 part...and *maybe* the x = 0 part, as well, but who cares?).

    the other case (~B, so ~(A&B), or if you prefer A&(~(A→B))) is the only "sticking point". informally, if we have a dilemma, and we can take care of one horn, we're good. so if the first horn won't gore us, cool: but if not, let's be sure we can take care of the second horn. this is a "meta-strategy", it's almost entirely INTERNAL.

    this strategy CAN be proved formally. and perhaps it is worth-while to assure oneself of this, at least once. but after that, one needn't worry about it again.

    main over-arching considerations:

    in (algebraic) structures defined axiomatically, the LEM is tacitly assumed. for if not, the axioms don't actually define anything (for example, if we don't have either a = b, or a ≠ b, the usefulness of "=" in equations is greatly diminished).

    → (implication) comes in many forms: A only if B, ~A v B, "sufficiency for B", ~(A&~B), "follows from", true statements A are a subset of true statements B, and so on (i am speaking "loosely" here, assuming some "universe of discourse" which is well-defined...in this example, we may take such a universe to be the vector space V, or more generally the category of all vector spaces over a given field F). in informal proofs, these are often substituted freely, without warning to the reader that this is being done.

    of course, all of this may be bewildering to the original poster, who just wanted to prove something about vector spaces.

    ok, from the top:

    suppose ax = 0 (given!!!!!!)

    now we either have ax = 0 and a = 0, OR:

    (we don't, since either A&B = true or A&B = false; and since A is true, if ~(A&B) (that is: A&B = false), then ~B (indeed A&(~(A&B)) → ~B is a tautology, but gosh...we're kind of getting away from the matter at hand), so we need to show that ~B → (A→C)).

    we must show that ax = 0 implies x = 0 (when a ≠ 0). (all the stuff in parentheses above is usually NOT stated).

    (point taken: the statement (A&B) v (~B → (A→C)) is not the same "form" as (A&B) v (A→C), but B = true, automatically makes the second part of the disjunction "true", and in general, mathematicians are not concerned with statements of the form "false → (anything)". so there's a kind of "automatic contraction" that happens, a tacit assumption that B = false. in general, when proving implications, we (mathematicians? myself personally? you decide...) always assume the antecedent is true. i don't know if this is actually written down anywhere, it just "happens".)

    perhaps, one can see this as an inherent ambiguity in using the english word "or" in informal proofs: are we using the law of the excluded middle, or logical disjunction? it's often not spelled out, and one has to "parse" this internally, from context.

    indeed, one often also sees the equivalent form:

    A →[(A&B) v (A&~B&C)] which again doesn't "look like" any of the "logical propositions" we've been discussing so far. here, perhaps, it's clearer where the "asymmetry" comes from: C only occurs in the "last test phrase", to "falsify this" A has to be true, and B has to be false, and C has to be false.

    but again, i don't manipulate logical expressions in my head when i do these things. i "disregard" the case B = true as trivial, and focus on (A&~B)→C

    (which gives us "yet another form to play with" A → {(A&B) v [(A&~B)→C]}).
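    (and for completeness, these last two forms also brute-force-check against A → (B v C); another quick python sketch:)
    Code:
    from itertools import product

    def implies(p, q):
        return (not p) or q

    for A, B, C in product([False, True], repeat=3):
        base = implies(A, B or C)                              # A -> (B v C)
        f4 = implies(A, (A and B) or (A and not B and C))      # A -> [(A&B) v (A&~B&C)]
        f5 = implies(A, (A and B) or implies(A and not B, C))  # A -> {(A&B) v [(A&~B)->C]}
        assert base == f4 == f5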

    why did i just write (A&~B)→C there, when i said in my original post : (A&B) v (A→C)? hmm...it's how i think: i'm only after the TRUE statements. when some "propositional formula" involving OR evaluates to "false", i "cross it off the list" as "not relevant". so yes, i'm secretly carrying a "~B" premise over to A→C, because if B (and we already KNOW A) then A&B, so true, so if ~(A&B), but A, then obviously ~B, in which case i must prove A→C. so i'm "skipping some internal steps" when i "paraphrase this" as (A&B) v (A→C). my goal is to show the following:

    A = true (given)
    ~(~B&~C) = true (the ONE scenario in which all of these expressions that have been floating around become FALSE is ~B&~C).

    so, yes, i prove true = not not false "all the time", and i don't even mention this fact. it's like demorgan's rules are "hard-wired" into my brain.

    **********
    my apologies to you, pratique21, and also to emakarov. i understand your concern about thinking that:

    1x = 0 → x = 0 looks like "circular logic", in that we are using the result ax = 0 → x = 0, when a ≠ 0, for the specific case a = 1. however, as emakarov points out:

    in a vector space, it is an AXIOM that 1x = x, which comes to our rescue here.

    as far as emakarov's objections go: they are valid, i'm often not being exactly precise about each and every "link" in the chain of reasoning. if two logical statements are equivalent, i'm prone to use whatever formulation (in words) suits me at the time, without explanation.
