
CHAPTER FOUR: BACKWARD λ-CONVERSION AND CROSS-CATEGORIAL CONNECTIVES

4.1. Backward λ-conversion.

The aim of this section is to show how, given assumptions about the syntax, the syntax-semantics relation, and certain basic type assignments, backward λ-conversion helps you to find the compositional interpretation of the expressions and hence helps you in defining the grammar.

We will start with the following example:

     (1) John doesn't walk.

Our aim will be to find a compositional semantics for this example, and to write a grammar.

Let us start by assuming a simple syntax.

    We assume the following syntactic categories: NP, VP, NEG, S

We assume the following syntactic rules:

NP + VP ==> S

    NEG + VP ==> VP

And we assume the following lexical items:

      NP        VP        NEG
       |         |          |
     john      walk        not

We're ignoring everything having to do with do and with subject-verb agreement, and assume that for our purposes here generating John not walk is good enough. Given this, we generate our sentence with the following structure:

              S
            /   \
          NP     VP
          |     /  \
        John  NEG   VP
               |     |
              not   walk

We have now made some assumptions about the syntax; let us now make some assumptions about the relation between syntactic categories and semantic types.

    First let us make Montague's Fixed Type Assignment assumption [Montague 1974]:


     Fixed Type Assignment:
     we associate with every syntactic category one and only one semantic type, and all expressions of that syntactic category are semantically interpreted as expressions of that semantic type.

Secondly, let us assume, as before, that intransitive verbs are interpreted as predicates of type <e,t>, and that sentences are interpreted as expressions of type t. So we associate type <e,t> with VP and type t with S.

    This fixes the semantic types corresponding to the above tree as follows:

                t
             /     \
            ?      <e,t>
            |     /     \
          John   ?      <e,t>
                 |        |
                not      walk
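
These type assignments can also be made concrete in a typed programming language. The following Haskell sketch is our own illustration (it is not part of the fragment): it renders type e as a small invented domain of entities and type t as Bool, so that the type <e,t> assigned to VP becomes an ordinary function type.

    -- Type e: a small illustrative domain of entities (John and Mary are
    -- invented stand-ins, not part of the fragment).
    data Entity = John | Mary deriving (Eq, Show)

    -- Type t: truth values.
    type T = Bool

    -- The type <e,t> assigned to VP: functions from entities to truth values.
    type VPMeaning = Entity -> Bool

    -- The type t assigned to S.
    type SMeaning = Bool

    -- A sample constant of type <e,t>, standing in for WALK.
    walk :: VPMeaning
    walk John = True
    walk Mary = False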

Next let us assume that all the syntactic rules that we have given are interpreted semantically as functional application.

We then have to decide which expression is going to be the function and which is going to be the argument.

We have two syntactic rules:

    NP + VP ==> S

    NEG + VP ==> VP

    As before, the grammar produces pairs of the form <α,α'> where α is a syntactic tree and α' its translation in type logic.

We then have four possibilities for our rules:

R1. <NP,NP'> + <VP,VP'> ==> < [S NP VP] , VP'(NP') >

R2. <NP,NP'> + <VP,VP'> ==> < [S NP VP] , NP'(VP') >

R3. <NEG,NEG'> + <VP,VP'> ==> < [VP NEG VP] , NEG'(VP') >

R4. <NEG,NEG'> + <VP,VP'> ==> < [VP NEG VP] , VP'(NEG') >


Of these four, the last rule, R4, is incompatible with our assumptions so far. We have assumed that the type of VP is <e,t>. But if NEG is the argument of VP, its type would have to be e, and the type of the output VP would be t. But we have already fixed the type of VP to be <e,t>.

Hence we know that our grammar will contain rule R3 and not R4:

R3. <NEG,NEG'> + <VP,VP'> ==> < [VP NEG VP] , NEG'(VP') >

This fixes the type of NEG. Clearly, its type has to be <<e,t>,<e,t>>.
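
In the illustrative Haskell sketch above, the type reasoning that selects R3 over R4 can be replayed directly; the names below are our own.

    -- The type <<e,t>,<e,t>> that rule R3 forces on NEG:
    -- functions from VP meanings to VP meanings.
    type NEGMeaning = (Entity -> Bool) -> (Entity -> Bool)

    -- Rule R3 (NEG is the function): well-typed.
    applyR3 :: NEGMeaning -> VPMeaning -> VPMeaning
    applyR3 neg' vp' = neg' vp'

    -- Rule R4 would instead apply the VP meaning to the NEG meaning.  Since a
    -- VP meaning has type Entity -> Bool, the NEG meaning would then have to
    -- be a bare Entity and the resulting "VP" a Bool, contradicting the type
    -- <e,t> already fixed for VP; such a definition would not type-check.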

    So we have gotten the following information:

                t
             /     \
            ?      <e,t>
            |     /           \
          John  <<e,t>,<e,t>>  <e,t>
                      |          |
                     not        walk

Given that we have decided to follow Montague here in assuming the Fixed Type Assignment assumption, and given the fact that we have assumed that the type assigned to VP is <e,t>, both rules R1 and R2 are still compatible with this particular example. Rule R1 tells us that the type assigned to NP is e; rule R2 tells us that the NP is the function, giving t as output type, which can only mean that the type assigned to NP is <<e,t>,t>.

    This particular example doesn't decide between these two, but other considerations do. The grammar that we are going to make should not only generate the above sentence, but should work generally, for other NPs as well. That is, we want a grammar that would also be capable of generating every boy walks.

But that means that, under our assumptions, rule R1 would require the NP every boy also to be of type e.

    But that is semantically problematic, because as an expression of type e, it would have to denote an individual, and, of course, quantified NPs do not denote individuals.

These considerations lead us to disregard rule R1.

Note: we only disregard rule R1 given our assumption of Fixed Type Assignment and our choice of letting the type of VP be <e,t>. As we will see later, it is quite possible (and in fact rather common) to assume a grammar that has rule R1 rather than (or besides) rule R2. But such a grammar gives up the FTA or the type assignment <e,t> to VP, or both.

     Here we are making both assumptions, so the rule for combining a subject with an intransitive verb phrase is rule R2:


R2. <NP,NP'> + <VP,VP'> ==> < [S NP VP] , NP'(VP') >

This fixes all the types in our example as follows:

                  t
              /       \
      <<e,t>,t>       <e,t>
          |          /           \
        John   <<e,t>,<e,t>>    <e,t>
                     |            |
                    not          walk
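
In the same illustrative sketch, the type <<e,t>,t> now assigned to NP comes out as follows; a quantified NP such as every boy fits this type, but not type e, since it checks a property against all the boys rather than naming an individual (boys and everyBoy are invented names).

    -- The type <<e,t>,t> assigned to NP: functions from predicates
    -- (type <e,t>) to truth values.
    type NPMeaning = (Entity -> Bool) -> Bool

    -- An invented set of boys, just to give the quantifier something to range over.
    boys :: [Entity]
    boys = [John]

    -- "every boy" as an NP meaning: true of a predicate iff every boy satisfies it.
    everyBoy :: NPMeaning
    everyBoy p = all p boys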

    Now, we could finish our grammar easily by adding the following lexical entries:

     < [VP walk] , WALK >    where WALK ∈ CON<e,t>

     < [NEG not] , NOT >    where NOT ∈ CON<<e,t>,<e,t>>

     < [NP John] , JOHN >    where JOHN ∈ CON<<e,t>,t>

and get as a semantic representation for the sentence:

     JOHN(NOT(WALK))

    The problem is that, as a semantic representation, this is not informative enough.

Let us assume that John is an individual and that the constant j ∈ CONe denotes him. Our constant JOHN denotes a set of properties, and nothing so far requires that there is any connection between that set of properties and the individual John. Similarly, our constant NOT denotes some function from properties to properties, but nothing yet tells us what the relation is between the property NOT(WALK) and the property WALK.

    That is, this representation does not yet support the entailments that we intuitively associate with the sentence.

Our task is not just to stipulate meanings for our lexical items, but rather to find out what they are. In other words, our semantics is not explicit enough if it doesn't explain the relation between the meaning of john at type <<e,t>,t> and the individual John, which is denoted by our constant j ∈ CONe. Similarly, we need to explain the relation between not at type <<e,t>,<e,t>> and sentential negation ¬.


Given that we have assumed that walk is interpreted as an expression of type <e,t>, a property (and we're not here interested in trying to lexically decompose its meaning), we will assume that we interpret it as a constant of type <e,t>. Hence, we do assume the lexical entry for walk:

     < [VP walk] , WALK >    where WALK ∈ CON<e,t>

Assuming that our constant j ∈ CONe denotes John, we do know what the meaning of the whole sentence should be, namely: ¬WALK(j).

    Thus we have the following semantic information:

                  t   ¬WALK(j)
              /       \
      <<e,t>,t>       <e,t>
          |          /           \
        JOHN   <<e,t>,<e,t>>    <e,t>
                     |            |
                    NOT          WALK

    Our task now is to find the correct interpretations of the other expressions.

    We know that semantically, the meaning corresponding to the topnode should be:

     S' = NP'(NEG'(VP'))

Since VP' = WALK and S' = ¬WALK(j), we know that NP' and NEG' should be expressions that satisfy the following equation:

     ¬WALK(j) = NP'(NEG'(WALK))

Here is where backward λ-conversion comes in.

¬WALK(j) is an expression that has the constant j sitting in the middle. Now obviously, whatever the interpretation of NP' is going to be, it should be responsible for the introduction of the constant j. That means that j should be part of the NP' expression.

This means that we are looking for an expression in which j is not the argument of WALK, but is part of an expression that takes an expression with WALK in it as an argument. So we need to pull j out of the expression ¬WALK(j). We do that through backward λ-conversion:

¬WALK(j)


= [λx.¬WALK(x)](j)    [backward λ-conversion]

where x ∈ VARe.

As we have seen, these two expressions are equivalent by λ-conversion. λx.¬WALK(x) is an expression of type <e,t>, which is very promising, because we need an expression of exactly that type to be the interpretation of the VP not walk.
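
In the illustrative Haskell sketch, this first backward λ-conversion step can be checked concretely: the two definitions below (with invented names) compute the same truth value.

    -- The target meaning ¬WALK(j), written directly.
    target :: Bool
    target = not (walk John)

    -- The same meaning after backward λ-conversion: the predicate λx.¬WALK(x),
    -- of type <e,t>, applied to j.
    notWalk :: Entity -> Bool
    notWalk = \x -> not (walk x)

    targetAgain :: Bool
    targetAgain = notWalk John    -- reduces to not (walk John), i.e. to target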

However, in the expression [λx.¬WALK(x)](j), j is an argument of the predicate λx.¬WALK(x), while what we need is a representation where it sits inside a function that takes the expression representing the contribution of negation and WALK as its argument.

We bring the expression into that form by applying backward λ-conversion once more. We can replace the whole predicate λx.¬WALK(x) (of type <e,t>) by a variable P of type <e,t>, abstract over this variable P, and apply the result to λx.¬WALK(x):

[λx.¬WALK(x)](j)

= [λP.P(j)](λx.¬WALK(x))    [backward λ-conversion]

In this expression λP.P(j) is an expression of type <<e,t>,t> and λx.¬WALK(x) an expression of type <e,t>.

We have shown that applying the first to the second results in an expression with the same meaning as ¬WALK(j), hence we can add the following information to our interpretation tree:

                  t   ¬WALK(j)
              /       \
      <<e,t>,t>       <e,t>   λx.¬WALK(x)
          |          /           \
       λP.P(j)  <<e,t>,<e,t>>    <e,t>
                     |             |
                    NOT          WALK

This means that we add as our lexical entry for john:

     < [NP john] , λP.P(j) >    where j ∈ CONe, P ∈ VAR<e,t>
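
The entry for john can be mirrored in the illustrative Haskell sketch; applying it to the negated predicate reproduces the target meaning (johnGQ is an invented name).

    -- λP.P(j): the meaning the entry above assigns to "john".
    johnGQ :: (Entity -> Bool) -> Bool
    johnGQ = \p -> p John

    -- Applying it to λx.¬WALK(x) gives back the target meaning ¬WALK(j).
    targetViaNP :: Bool
    targetViaNP = johnGQ (\x -> not (walk x))    -- reduces to not (walk John)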

We are left with finding the interpretation for the negation.

This task is exactly the same as the task we had before: we need an interpretation for NEG' that satisfies the following equation:

     VPtop' = NEG'(VPbot')

Given what we know now, we need to solve the following equation:

     λx.¬WALK(x) = NEG'(WALK)


λx.¬WALK(x) is an expression that contains the negation in it; we need to write it in a form where we have a functional expression containing the contribution of negation, which applies to WALK.

We get that through backward λ-conversion: we replace WALK in λx.¬WALK(x) by a variable P, abstract over that variable, and apply the result to WALK:

λx.¬WALK(x)

= [λPλx.¬P(x)](WALK)    [backward λ-conversion]

In this expression λPλx.¬P(x) is an expression of type <<e,t>,<e,t>>, which is what we were after. We complete the tree:

                  t   ¬WALK(j)
              /       \
      <<e,t>,t>       <e,t>   λx.¬WALK(x)
          |          /           \
       λP.P(j)  <<e,t>,<e,t>>    <e,t>
                     |             |
              λPλx.¬P(x)         WALK

    and we add the following lexical entry:

     < [NEG not] , λPλx.¬P(x) >    where P ∈ VAR<e,t>, x ∈ VARe
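
The solution λPλx.¬P(x) for predicate negation likewise has a direct counterpart in the illustrative Haskell sketch (negVP is an invented name).

    -- λPλx.¬P(x): predicate negation, of type <<e,t>,<e,t>>.
    negVP :: (Entity -> Bool) -> (Entity -> Bool)
    negVP = \p x -> not (p x)

    -- Applying it to walk gives λx.¬WALK(x), as the equation above requires.
    notWalking :: Entity -> Bool
    notWalking = negVP walk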

    Collecting the grammar together we get:

SYNTACTIC CATEGORIES:

    NP, VP, NEG, S

TYPE ASSIGNMENT:

    NP     <<e,t>,t>
    VP     <e,t>
    NEG    <<e,t>,<e,t>>
    S      t

LEXICAL ENTRIES:

     < [NP john] , λP.P(j) >    where j ∈ CONe, P ∈ VAR<e,t>

     < [VP walk] , WALK >    where WALK ∈ CON<e,t>

     < [NEG not] , λPλx.¬P(x) >    where P ∈ VAR<e,t>, x ∈ VARe

RULES:

    R2. <NP,NP'> + <VP,VP'> ==> < [S NP VP] , NP'(VP') >

    R3. <NEG,NEG'> + <VP,VP'> ==> < [VP NEG VP] , NEG'(VP') >

and we can check that we get the right interpretation by doing the derivation:

< [NEG not] , λPλx.¬P(x) > + < [VP walk] , WALK > ==>

     < [VP [NEG not] [VP walk]] , [λPλx.¬P(x)](WALK) >

[λPλx.¬P(x)](WALK)

     = λx.¬WALK(x)    [λ-conversion]

So this is the same as:

< [VP [NEG not] [VP walk]] , λx.¬WALK(x) >

< [NP john] , λP.P(j) > + < [VP [NEG not] [VP walk]] , λx.¬WALK(x) > ==>

     < [S [NP John] [VP [NEG not] [VP walk]]] , [λP.P(j)](λx.¬WALK(x)) >

[λP.P(j)](λx.¬WALK(x))

     = [λx.¬WALK(x)](j)    [λ-conversion]

     = ¬WALK(j)    [λ-conversion]

So we get:


< [S [NP John] [VP [NEG not] [VP walk]]] , ¬WALK(j) >
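
As a final check, the whole derivation can be replayed in the illustrative Haskell sketch, reusing the definitions of walk, negVP and johnGQ given earlier: the compositionally built meaning agrees with the directly written ¬WALK(j).

    -- Composing the meanings bottom-up, following rules R3 and R2:
    --   R3:  NEG'(VP')  corresponds to  negVP walk
    --   R2:  NP'(VP')   corresponds to  johnGQ (negVP walk)
    sentenceMeaning :: Bool
    sentenceMeaning = johnGQ (negVP walk)

    -- Sanity check: the composed meaning equals ¬WALK(j) written directly.
    main :: IO ()
    main = print (sentenceMeaning == not (walk John))    -- prints True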

    Note that, disregarding for the moment the question of whether these interpretations are 'intuitive', they do make sense.

We interpret predicate negation as λPλx.¬P(x). This is an operation that takes any predicate Q and maps it onto the property that you have iff you don't have Q. Hence it maps WALK onto the property that you have iff you don't have the property WALK.

This is precisely what doesn't does, so it is the right interpretation for doesn't.

Also the interpretation of john as λP.P(j) makes sense.

λP.P(j) is the set of all properties that John has.

[λP.P(j)](WALK) expresses that the property WALK is one of the properties in that set, hence is one of the properties that John has, hence John walks. Similarly, [λP.P(j)](λx.¬WALK(x)) expresses that not walking is one of the properties in the set of all properties that John has, which means that not walking is a property that John has, which means that John doesn't have the property of walking, hence John doesn't walk.

Let us now look at the following example:

     (2) Every girl walks.

    We add to our syntax the categories N for nouns and DET for determiners and the obvious syntactic rule:

DET + N ==> NP

We furthermore make the assumption that the type assigned to common nouns is also <e,t>, like the type assigned to VP, and we assume that girl is interpreted as a constant GIRL ∈ CON<e,t>.

Assuming that the semantic operation corresponding to our syntactic rule is functional application, it follows that the determiner applies as a function to the common noun. The reason is that, whatever we decide the type of NPs to be, it is obviously not going to be t; yet if the common noun, of type <e,t>, were the function, the resulting NP would have to be of type t.

Thus we add to the grammar:

     < [N girl] , GIRL >    where GIRL ∈ CON<e,t>

<DET,DET'> + <N,N'> ==> < [NP DET N] , DET'(N') >

The syntactic structure of every girl walks becomes:

               S
            /     \
          NP       VP
         /   \      |
       DET    N    walk
        |     |
      every  girl

We argued before that the type of NPs could not generally be e, because NPs do not generally denote individuals (quantificational NPs do not). Hence we were forced to assume that the type of NPs is <<e,t>,t>, the type of sets of sets. Sets of sets are called generalized quantifiers; hence <<e,t>,t> is the type of generalized quantifiers, and NPs denote generalized quantifiers. Generalized quantifier theory, then, is the theory of noun phrase interpretations.

Given that the type of NPs is <<e,t>,t>, it follows from the above rule that the type of determiners is <<e,t>,<<e,t>,t>>.

This means that determiners denote relations between sets (of type <e,t>). In other words: the determiner every in every girl walks expresses a relation between the set of girls and the set of walkers.
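
In the illustrative Haskell sketch, the determiner type <<e,t>,<<e,t>,t>> and the 'relation between sets' view come out as follows. The definition of every given here, the subset relation over an invented finite domain, is offered only as a preview; the text has not yet derived any determiner meanings.

    -- The type <<e,t>,<<e,t>,t>> assigned to DET: a determiner takes a noun
    -- meaning and returns a generalized quantifier, i.e. it relates two sets.
    type DETMeaning = (Entity -> Bool) -> (Entity -> Bool) -> Bool

    -- An invented finite domain to quantify over.
    domain :: [Entity]
    domain = [John, Mary]

    -- "every" as the subset relation: every P is Q iff every entity that
    -- satisfies P also satisfies Q.
    every :: DETMeaning
    every p q = all q (filter p domain)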

    Given the above considerations, the types and interpretations in our example are fixed in the following way:

                          t   ∀x[GIRL(x) → WALK(x)]
                   /               \
            <<e,t>,t>              <e,t>   WALK
            /        \
  <<e,t>,<<e,t>,t>>   <e,t>   GIRL
           |
         every

    The topnode represents what we want the meaning of the sentence to be: i.e. whatever semantic representation we will be able to come up with, it should be equivalent to this.

    Our task now is to find the meanings of the remaining nodes: the NP and the DET. We find these, again, through backward λ-conversion:

The NP meaning has to satisfy the following equation:

     ∀x[GIRL(x) → WALK(x)] = NP'(WALK)
