
Math Help - Löwenheim-Skolem and induction on complexity

  1. #16
    MHF Contributor
    It seems that the original poster's question is why the constructed model is an interpretation, i.e., closed under functional symbols:
    My logic professor told me to show that the interpretation I created had a countable domain, was "closed under f" (if there was one function symbol f), and was such that the restriction of T would make A true.
    I assume that the phrase I emphasized means that the functional symbol f is in the language. Is it so? If yes, this has been shown by clic-clac.
    This is the idea: given some subset of the domain, you consider the submodel generated by that subset.
    Also, the domain of the generated submodel can be characterized by an equation of the form D^* = D_0 \cup \{f(d_1,\dots,d_n) \mid f \text{ a function symbol},\ d_1,\dots,d_n \in D^*\}, where f stands for its interpretation in the model (here D^* is the same as <D_0> above). Note that D^* occurs on both sides, which shows that it is closed.

    To describe this in more detail, let's define an algebra as an interpretation with all relational symbols removed. I.e., an algebra is a domain plus interpretations of constants and functional symbols on this domain (let's from now on include constants in functional symbols). So an algebra is closed under functions by definition. Let me denote the interpretation of a functional symbol f or a predicate R by [[f]] or [[R]], respectively.

    The suggested proof of the Löwenheim-Skolem theorem consists of three parts. In the first, we take the algebra part \langle D, [[\cdot]]^f\rangle of the given uncountable interpretation T=\langle D, [[\cdot]]^f, [[\cdot]]^R\rangle and build its countable subalgebra \langle D_0,[[\cdot]]_0^f\rangle (which is an algebra and therefore closed under functions). In the second part, we build a series of interpretations T_n=\langle D_n, [[\cdot]]_n^f, [[\cdot]]_n^R\rangle and take their union T_\omega=\langle D_\omega, [[\cdot]]_\omega^f, [[\cdot]]_\omega^R\rangle. Finally, in the third part we show that for any formula A in the original language, T\models A iff T_\omega\models A. In fact, the first and second parts are pretty similar because the initial subalgebra \langle D_0,[[\cdot]]_0^f\rangle can also be built by taking a countable union of algebras. So, the first step builds a series of algebras to make them closed under functional symbols, and the second step builds a series of interpretations in order to make them closed under \exists.

    The first step has been described a couple of times by clic-clac, so I am wondering if this was your question about closure. I can write it in more detail if you need.
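    To make the closure idea concrete, here is a small Python sketch; the domain, the function f and the seed set D_0 below are invented for illustration, but it shows what "closed under f" means in practice for a single unary symbol.
    Code:
    # Close a subset D_0 of a domain under (the interpretation of) a unary
    # function symbol f by iterating until nothing new appears.
    # The domain, f and the seed set are made up for this example.
    D = set(range(24))              # the full domain (stands in for the big model)

    def f(x):                       # the interpretation of the function symbol f
        return (2 * x) % 24         # chosen so that the closure stays inside D

    def closure(seed):
        """Return the least subset of D containing `seed` and closed under f."""
        closed = set(seed)
        frontier = set(seed)
        while frontier:
            new = {f(x) for x in frontier} - closed
            closed |= new
            frontier = new
        return closed

    D0 = {3}                        # the seed D_0
    D_star = closure(D0)            # D^* = <D_0>, the generated subdomain
    print(sorted(D_star))           # [0, 3, 6, 12]
    assert all(f(x) in D_star for x in D_star)   # closed: f never leaves D^*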

  2. #17
    Newbie
    First of all: this has been a very interesting and beneficial discussion for me. Nice to see one more person join us.

    I've taken my time to read your explanations, clic-clac, and I understand your point about closure and functions now. I blame my previous inability to understand what you said on being unfamiliar with the "<E>"-style notation, and on my uncertainty about what closure of a domain under a function actually means. Your "translation" into another approach was very useful too, thanks!
    My side note was not that important.

    emakarov: We are not supposed to prove Löwenheim-Skolem via the completeness theorem for first order logic, which is also a part of this course.

    Your model theoretic description seems like it might be really beneficial for me, but I have little clue how to read the notation you use. Remember I'm not a math student, so I'm not familiar with that notation. I believe I understand the first two steps about closure intuitively now, and understand somewhat how to describe it in syntax.

    To clarify one thing: what kinds of objects are in a union of interpretations?

    Also, if we have done the first two steps correctly, the last step seems to flow pretty effortlessly from our construction? So how would one go about proving this; would it require a lengthy proof? Being a beginner, I don't have a feeling for how much one has to write or make explicit, etc...

  3. #18
    MHF Contributor
    All right, let's introduce some notation. I chose most of it pretty arbitrarily, so you can come up with your own.

    An algebra is a triple consisting of a domain D, a set of functional symbols with associated arities \mathcal{L} (including constants) and a mapping [[\cdot]] that associates a function [[f]] on D of the corresponding arity to each functional symbol f\in\mathcal{L}. (This definition of an algebra as a triple is pretty standard.) If \mathfrak{A}=\langle D,\mathcal{L},[[\cdot]]\rangle is an algebra, we can add a subscript \mathfrak{A} to D, \mathcal{L}, and [[\cdot]] to identify them unambiguously.

    I am not going to specify how we represent a function [[f]]: e.g., we can do this by writing an algorithm, or as a set of argument-value pairs (the standard representation of functions in set theory). However, we can define [[f_1]]\subseteq[[f_2]] to be the case when [[f_1]] is a function on some D_1, [[f_2]] is a function on D_2, D_1\subseteq D_2, and [[f_2]] agrees with [[f_1]] on all arguments from D_1. If we represent functions as sets of pairs, then the relation defined above holds precisely when [[f_1]]\subseteq[[f_2]] as sets. In that case, we can define the union [[f_1]]\cup [[f_2]] as the corresponding set union. If [[f_1]]\subseteq[[f_2]], then [[f_1]]\cup [[f_2]] is still a function, i.e., it assigns exactly one value to each argument. These notations can be extended from two functions to a countable family of functions provided they form an increasing chain: [[f_1]]\subseteq[[f_2]]\subseteq[[f_3]]\subseteq\dots.
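    If it helps, these conventions can be mirrored with finite data (the particular dicts below are invented for illustration): a finitely presented function is a dict of argument-value pairs, [[f_1]]\subseteq[[f_2]] means agreement on the smaller set of arguments, and the union of an increasing chain is again single-valued.
    Code:
    # Functions as sets of argument-value pairs (Python dicts); a sketch only.
    def is_subfunction(f1, f2):
        """[[f1]] subseteq [[f2]]: f2 is defined and agrees with f1 wherever f1 is."""
        return all(x in f2 and f2[x] == f1[x] for x in f1)

    def union(*fs):
        """Union of functions given as dicts; single-valued when they form a chain."""
        result = {}
        for f in fs:
            for x, y in f.items():
                if x in result and result[x] != y:
                    raise ValueError("not a chain: conflicting values at %r" % (x,))
                result[x] = y
        return result

    # an increasing chain [[f_1]] subseteq [[f_2]] subseteq [[f_3]]
    f1 = {0: 1}
    f2 = {0: 1, 1: 2}
    f3 = {0: 1, 1: 2, 2: 3}
    assert is_subfunction(f1, f2) and is_subfunction(f2, f3)
    print(union(f1, f2, f3))        # {0: 1, 1: 2, 2: 3}, still a function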

    An interpretation is also a triple \langle D,\mathcal{L},[[\cdot]]\rangle where, by abuse of notation, the language \mathcal{L}, besides functional symbols, now includes predicate symbols with their arities, and the mapping [[\cdot]] is extended to \mathcal{L} by mapping a predicate symbol R to some subset of D\times\dots\times D (the number of D's equals the arity of R). Again, if T=\langle D,\mathcal{L},[[\cdot]]\rangle is an interpretation, we can add the subscript T to D, \mathcal{L}, and [[\cdot]]. It is clear that if we remove predicate symbols and their values under [[\cdot]] from an interpretation, we get an algebra.

    We are going to construct an interpretation whose domain is built entirely of terms of the language. Such interpretations are called syntactic, or Herbrand, and the corresponding algebras are called free. So, if \mathcal{L} is a language, let Terms(\mathcal{L}) denote the set of all closed terms of \mathcal{L}. To construct an algebra with the domain Terms(\mathcal{L}), we map a functional symbol f of arity n into the function [[f]], which, given terms t_1,\dots,t_n of \mathcal{L}, returns the term f(t_1,\dots,t_n). Then it is clear that \langle Terms(\mathcal{L}),\mathcal{L},[[\cdot]]\rangle is an algebra. It is also clear that the cardinality of the domain does not exceed the cardinality of the language, provided the latter is infinite.

    This concludes the first step, where, given a new language \mathcal{L}' that, perhaps, extends an old language \mathcal{L} with some new constants, we construct the free algebra generated by \mathcal{L}'. I mentioned that constructing the free algebra generated by \mathcal{L}' can be done in stages. We define D^0 to be the set of constants of \mathcal{L}', and D^{i+1}=\{f(t_1,\dots,t_n)\mid f\in\mathcal{L}', t_1,\dots,t_n\in D^i\}. Then define D^\omega=\bigcup_{i=0}^\infty D^i; this is the domain of the algebra. Function interpretations are defined as before.
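    Here is a small sketch of this staged construction for a made-up language with one constant 0, a unary symbol s and a binary symbol +. Since D^\omega itself is infinite, the sketch only computes a finite prefix of the chain D^0\subseteq D^1\subseteq\dots (terms are represented as plain strings).
    Code:
    # Build the term (free) algebra in stages; language and depth are invented.
    import itertools

    constants = ["0"]
    functions = {"s": 1, "+": 2}          # functional symbol -> arity

    def next_stage(terms):
        """Keep D^i and add every application of a symbol to arguments from D^i."""
        new = set(terms)
        for f, arity in functions.items():
            for args in itertools.product(sorted(terms), repeat=arity):
                new.add(f + "(" + ",".join(args) + ")")
        return new

    def syntactic(f):
        """The syntactic interpretation [[f]]: it just builds a bigger term."""
        return lambda *args: f + "(" + ",".join(args) + ")"

    D_i = set(constants)                  # D^0: the constants of the language
    for i in range(2):                    # a finite prefix of D^0, D^1, D^2, ...
        D_i = next_stage(D_i)

    print(len(D_i))                       # number of closed terms of depth <= 2
    print(syntactic("+")("s(0)", "0"))    # +(s(0),0)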

    I'll make the next post about step 2 hopefully later today.

  4. #19
    MHF Contributor
    If T=\langle D,\mathcal{L},[[\cdot]]\rangle is an interpretation, then, in addition to the relation [[f_1]]\subseteq[[f_2]] and the operation [[f_1]]\cup[[f_2]] for functional symbols f_1,f_2\in\mathcal{L}, we can define [[R_1]]\subseteq[[R_2]] and [[R_1]]\cup[[R_2]] for predicate symbols R_1,R_2\in\mathcal{L}. (Unlike for functions, the union of predicates is always defined.) So if T_1=\langle D_1,\mathcal{L}_1,[[\cdot]]_1\rangle and T_2=\langle D_2,\mathcal{L}_2,[[\cdot]]_2\rangle are two interpretations, we define T_1\subseteq T_2 to hold iff \mathcal{L}_1\subseteq\mathcal{L}_2, D_1\subseteq D_2, as well as [[f]]_1\subseteq[[f]]_2 and [[R]]_1\subseteq[[R]]_2 for every functional symbol f\in\mathcal{L}_1 and every predicate symbol R\in\mathcal{L}_1.

    Below, if T_i is an interpretation, I will add the subscript i to the components of T_i, i.e., I will assume that T_i=\langle D_i,\mathcal{L}_i,[[\cdot]]_i\rangle .

    Another notation: When P is a unary predicate symbol and t is a term, I will write P(t) for the atomic formula. If A is a possibly compound formula with a free variable x and t is a term, I will write A[x] as a synonym for A to emphasize x and A[t] for the result of substitution of t for x. The same convention applies to predicates of other arities and formulas with several free variables.
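    Since we will be substituting closed terms a lot, here is a toy sketch of the A[x]/A[t] notation, with an invented representation of formulas as nested tuples. Closed terms contain no free variables, so no renaming of bound variables is needed.
    Code:
    # Substitution of a term for a free variable; the representation is made up.
    def subst(formula, var, term):
        """A[t]: replace free occurrences of `var` in `formula` by `term`."""
        tag = formula[0]
        if tag == "atom":                        # ("atom", "R", ("x", "y", ...))
            _, name, args = formula
            return ("atom", name, tuple(term if a == var else a for a in args))
        if tag == "not":                         # ("not", body)
            return ("not", subst(formula[1], var, term))
        if tag in ("and", "or", "->"):           # ("and", left, right), etc.
            return (tag, subst(formula[1], var, term), subst(formula[2], var, term))
        if tag == "exists":                      # ("exists", "x", body)
            _, bound, body = formula
            if bound == var:                     # var is not free under this binder
                return formula
            return ("exists", bound, subst(body, var, term))
        raise ValueError("unknown connective: %r" % (tag,))

    A = ("exists", "y", ("atom", "C", ("x", "y")))   # A[x] = exists y. C(x,y)
    print(subst(A, "x", "s(0)"))                     # A[s(0)] = exists y. C(s(0),y)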

    So, step 2. Suppose we have an interpretation T=\langle D,\mathcal{L},[[\cdot]]\rangle where D is infinite and may be uncountable. We are going to build a countable syntactic model T_\omega whose language is an extension of \mathcal{L}. In fact, we are going to construct a sequence T_0\subseteq T_1\subseteq T_2\subseteq\dots of interpretations and define T_\omega as \bigcup_{i=0}^\infty T_i. We are also going to extend the language of T to all \mathcal{L}_i and \mathcal{L}_\omega without changing D or the interpretation of the old symbols.

    The language of T_0 is \mathcal{L}, and the algebra part of T_0 is the free algebra generated by \mathcal{L}, as described in the previous post. Interpretations of all predicate symbols are empty in T_0.

    Suppose we have already constructed T_i and extended T to \mathcal{L}_i, i.e., the language of T_i. To obtain T_{i+1}, we consider all formulas of the form \exists x\,A[x_1,\dots,x_n,x] in \mathcal{L}_i. If for some a_1,\dots,a_n\in D_i (recall that each a_k is a closed term of \mathcal{L}_i since T_i is a syntactic model) we have T\models\exists x\,A[a_1,\dots,a_n,x], then there exists an a\in D such that T\models A[a_1,\dots,a_n,a]. In this case we create a fresh constant, which I will denote by \bar{a}. So if C is the set of these fresh constants obtained for all possible existential formulas, we define \mathcal{L}_{i+1}=\mathcal{L}_i\cup C and D_{i+1}=Terms(\mathcal{L}_{i+1}). We also extend the interpretation of T by putting [[\bar{a}]]=a for all \bar{a}\in C.

    To complete T_{i+1}, we need to define [[R]]_{i+1} for each predicate symbol R in \mathcal{L}_{i+1}. In fact, the set of non-constant functional symbols and predicate symbols is the same in all T_i; only their interpretations change. We define [[R]]_{i+1}=\{(a_1,\dots,a_n)\mid a_k\in D_{i+1},\ T\models R(a_1,\dots,a_n)\}. (Now that we have extended T, each closed term a_k of \mathcal{L}_{i+1}, including those built from the newly added constants, has a denotation in T.)

    It is easy to see that T_i\subseteq T_{i+1}, so the union of all T_i is well-defined; we denote it by T_\omega and also extend T to \mathcal{L}_\omega. This completes step 2.
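    To see the witness machinery in action, here is a toy, heavily simplified Python sketch of step 2. The "big" model, the single formula family \exists y\,R(x,y) and all names are invented; also, a fresh constant is only added for witnesses that do not already have a name, just to keep the toy example finite (in the real construction a fresh constant is introduced for every existential requirement). The point is only to show how witnesses found in T become constants of the next stage, and why several stages can be needed.
    Code:
    # A finite stand-in for step 2, restricted to formulas "exists y. R(x,y)".
    D = set(range(10))                     # stands in for the (possibly uncountable) domain
    R = {(a, a + 1) for a in range(9)}     # T |= R(a,b) iff b = a + 1

    # The stage-i language: constants only, mapped to their denotations in D.
    denote = {"c0": 0}                     # L_0 has one constant c0 with [[c0]] = 0

    def witness_for_exists_R(a):
        """If T |= exists y. R(a,y), return some witness b in D, else None."""
        for b in sorted(D):
            if (a, b) in R:
                return b
        return None

    stage = 0
    while True:
        # For every closed term (here: constant) of the current language whose
        # denotation satisfies exists y. R(-, y), name a witness by a fresh constant.
        fresh = {}
        for name, val in denote.items():
            w = witness_for_exists_R(val)
            if w is not None and w not in denote.values():
                fresh["w%d_%s" % (stage, name)] = w
        if not fresh:
            break                          # nothing new: the toy chain has stabilised
        denote.update(fresh)
        stage += 1

    # [[R]] of the final stage: pairs of *terms* whose denotations are related in T
    # (this mirrors the definition of [[R]]_{i+1} above).
    R_syntactic = {(s, t) for s in denote for t in denote
                   if (denote[s], denote[t]) in R}

    print("stages needed:", stage)                       # 9 for this toy model
    print("constants and their denotations:", denote)
    print("number of syntactic R-pairs:", len(R_syntactic))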

    In step 3 we prove that for every formula A in \mathcal{L}_\omega, T_\omega\models A iff T\models A (the equivalence, not just one direction, is important). In particular, this is true for all formulas in \mathcal{L}=\mathcal{L}_0. We also adopt the convention that the universal quantifier \forall is not a primitive connective, but an abbreviation for \neg\exists\neg.

    The proof is by induction on A. The cases of atoms and boolean connectives are easy. To show that T\models\exists x\,A[x] implies T_\omega\models\exists x\,A[x], use the witness constant \bar{a} that was added to some \mathcal{L}_{i+1} (so that T\models A[\bar{a}]) and apply the induction hypothesis.
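    In case it helps, this is how I would spell out the \exists case in both directions (recall that \forall is treated as \neg\exists\neg, so \exists is the only quantifier case). If T\models\exists x\,A[x] with A[x] in some \mathcal{L}_i, then by construction a witness constant \bar{a} was added to \mathcal{L}_{i+1} with T\models A[\bar{a}]; since A[\bar{a}] is less complex than \exists x\,A[x], the induction hypothesis gives T_\omega\models A[\bar{a}], hence T_\omega\models\exists x\,A[x]. Conversely, if T_\omega\models\exists x\,A[x], then, because D_\omega consists of closed terms and T_\omega is syntactic, there is a closed term t of \mathcal{L}_\omega with T_\omega\models A[t]; the induction hypothesis gives T\models A[t], and therefore T\models\exists x\,A[x].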

    It is late now and there are probably tons of errors. Finding them is left as an exercise.

    OK, now that I have written everything, I believe that if this text were written by someone else and I saw it for the first time, I would be lost in details. Writing this proof helped me understand the high-level idea maybe for the first time. So, the best thing is not to memorize this only to forget it several days later, but to get to the point where you can write it yourself. Then you'll really "get it" and remember it for a long time.

  5. #20
    Newbie
    Thanks a lot. I'm really starting to catch on now. I can rely on the fact that for any sentence in L there exists a sentence in Skolem form that is equisatisfiable. So I guess the different cases you put forward at the end can be left out, emakarov?

    In my simplified example I have... For all x there exists a y such that C(x,y) is true. Now as I said I can see that if I choose an object in let's say D3, I am guaranteed that there will exist an object in D4 satisfying the formula. Correct? And I can see that for any object in Di there will exist an object in Di+1 that satisfies the formula. Does the proof end at this stage then?

  6. #21
    MHF Contributor
    I can rely on the fact that for any sentence in L there exists a sentence in Skolem form that is equisatisfiable.
    I agree that a formula is satisfiable iff its Skolem form is satisfiable. However, I have not used Skolem forms anywhere in this construction.

    Further, from your first post:
    Let A be a sentence in Skolem form... For simplicity let's assume that A has the form ∀x∃yC(x,y).
    The formula ∀x∃yC(x,y) is not in Skolem form. Its Skolem form would be ∀xC(x,f(x)) for some fresh function symbol f.
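    To see why a function symbol rather than a constant is needed: the existential quantifier in ∀x∃yC(x,y) lies under the universal one, so the witness for y may depend on x, and Skolemization therefore introduces a function symbol, giving ∀xC(x,f(x)). A single constant suffices only when the existential quantifier is outermost; for example, ∃y∀xC(x,y) Skolemizes to ∀xC(x,c).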

    Finally,
    So I guess the different cases you put forward at the end can be left out, emakarov?
    Are you talking about step 3, and by "cases" do you mean the induction on A? I don't see how you can dispense with induction. Also, what I was proving is that for two concrete models T and T_\omega, the latter is an elementary submodel of the former (i.e., roughly speaking, they validate the same formulas). When one says that a formula's Skolem form is satisfiable when the formula itself is, one cannot control the model where the Skolem form is true; we only know that such a model exists if a model for the original formula exists.

    In my simplified example I have... For all x there exists a y such that C(x,y) is true.
    We know that this holds in the given uncountable model T.
    Now as I said I can see that if I choose an object in let's say D3...
    What object in D3?
    And I can see that for any object in Di there will exist an object in Di+1 that satisfies the formula.
    What formula? The formula \forall x\,\exists y\,C[x,y] is closed; it does not need an additional object to satisfy it.

    Let's think about this a little more; I will try to give an example or an intuitive idea later.

  7. #22
    MHF Contributor
    I think I see now what you mean.
    I can see that if I choose an object in let's say D3, I am guaranteed that there will exist an object in D4 satisfying the formula.
    Suppose A is a formula in \mathcal{L}_i and T\models\exists x\,A[x]. Then T\models A[a] for some a\in D, so we introduce a constant \bar{a}, add \bar{a} to \mathcal{L}_i to obtain \mathcal{L}_{i+1} and make T_{i+1} and T agree on all atoms in the language \mathcal{L}_{i+1}. And so you are saying that now T_{i+1}\models A[\bar{a}].

    That's the idea, though it's not entirely correct. We don't have the power to ensure that T_{i+1}\models A[\bar{a}]. What we can do is make sure that T\models B implies T_{i+1}\models B for all atomic formulas B in \mathcal{L}_{i+1}, but not for all formulas in that language, and A[\bar{a}] may well be compound. The reason is that we need to come up with an interpretation T_{i+1}, and an interpretation, besides an algebra, is determined by the truth values of atomic formulas. These truth values we can change to our liking, but the truth value of the compound formula A[\bar{a}] is fixed once we are done with atoms. And in general it is not clear how to tweak the atoms to make a compound formula with many nested quantifiers true. Also, our way of tweaking the atoms might invalidate some other necessary formula.

    Unfortunately, I don't have a good example of this situation now because it is hard to think about all existential formulas that we consider during step i in order to add the necessary constants to \mathcal{L}_i. It seems that there are a lot of such formulas and a lot of witness constants that they define. So when we add all of them and make sure that all atomic formulas in the extended language agree with T, it seems that T_{i+1} should be pretty similar to T. However, I don't currently see a reason why every formula in \mathcal{L}_{i+1} that is true in T should also be true in T_{i+1}.

    It's a shame, really, that I don't have an illuminating example because how can we study this topic and prove things in general when we have not seen a single example of how this works in practice? Something is not right about this type of learning.

    Does the proof end at this stage then?
    The construction of the model does indeed end at step 2, when we take the union of all T_i to get T_\omega. And in general we do have to take an infinite union, because the introduction of new domain elements at step i can make it necessary to add even more witnesses at step i+1. However, even if we have taken care of all existential formulas (which is not clear because, as I said, we only determined the truth values of atomic formulas), the proof is not finished, because there may be all kinds of other compound formulas in the original theory (for which we have to build a countable model). That's what the induction in step 3 is for.
