CHAPTER FOUR: BACKWARD λ-CONVERSION AND CROSS-CATEGORIAL
4.1. Backward λ-conversion.
The aim of this section is to show how, given assumptions about the syntax, the
syntax-semantics relation, and certain basic type assignments, backward λ-conversion helps you
to find the compositional interpretation of expressions, and hence helps you in defining the grammar.
We will start with the following example:
(1) John doesn't walk.
Our aim will be to find a compositional semantics for this example, and to write a grammar.
Let us start by assuming a simple syntax.
We assume the following syntactic categories: NP, VP, NEG, S
We assume the following syntactic rules:
NP + VP ==> S
NEG + VP ==> VP
And we assume the following lexical items:
NP      VP      NEG
 |       |       |
john    walk    not
We ignore everything having to do with do-support and subject-verb agreement, and assume that for our purposes here generating John not walk is good enough. Given this, we generate our sentence with the following structure:

[S [NP John] [VP [NEG not] [VP walk]]]
We have now made some assumptions about the syntax; let us now make some assumptions about
the relation between syntactic categories and semantic types.
First let us make Montague's Fixed Type Assignment assumption [Montague 1974]:
Fixed Type Assignment:
we associate with every syntactic category one and only one semantic type, and all
expressions of that syntactic category are semantically interpreted as expressions of that type.
Secondly, let us assume, as before, that intransitive verbs are interpreted as predicates of type <e,t>.
This fixes some of the semantic types corresponding to the above tree: both VP nodes are of type <e,t>; the types of NP, NEG and S are still to be determined.
Next, let us assume that all the syntactic rules that we have given are interpreted semantically as functional application.
We then have to decide which expression is going to be the function and which is going to be the argument.
We have two syntactic rules:
NP + VP ==> S
NEG + VP ==> VP
As before, the grammar produces pairs of the form <α,α'> where α is a syntactic tree and α' its translation in type logic.
We then have four possibilities for our rules:

R1: NP + VP ==> S,    S' = VP'(NP')
R2: NP + VP ==> S,    S' = NP'(VP')
R3: NEG + VP ==> VP,  VP' = NEG'(VP')
R4: NEG + VP ==> VP,  VP' = VP'(NEG')

Of these four, the last rule R4 is incompatible with our assumptions so far. We have assumed that the type of VP is <e,t>: an expression of that type, applied to an argument, yields an expression of type t, never another expression of type <e,t>, so VP'(NEG') cannot itself be the interpretation of a VP. Hence we know that our grammar will contain rule R3 and not R4:

R3: NEG + VP ==> VP,  VP' = NEG'(VP')
This fixes the type of NEG: clearly, its type has to be <<e,t>,<e,t>>, the type of functions from VP denotations to VP denotations.
So we have gotten the following information: VP is of type <e,t>, NEG is of type <<e,t>,<e,t>>, and the negation rule is R3.
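The type reasoning used here can be sketched in a few lines of Python (an illustration, not part of the formal system in the text): semantic types are represented as nested pairs, and functional application is the only mode of combination.

```python
# Semantic type <a,b> is represented as the tuple (a, b); e and t are strings.

def apply_type(fun, arg):
    """Return the result type of applying fun to arg, or None if ill-typed."""
    if isinstance(fun, tuple) and fun[0] == arg:
        return fun[1]
    return None

VP = ("e", "t")                 # intransitive verbs: <e,t>

# Rule R3: NEG' applies to VP'. For the result to be a VP again,
# NEG must be of type <<e,t>,<e,t>>.
NEG = (VP, VP)
assert apply_type(NEG, VP) == VP

# Rule R4 (VP' applies to NEG') cannot work: a function of type <e,t>
# yields something of type t, never another VP of type <e,t>.
assert apply_type(VP, "e") == "t"
```

Running the assertions confirms that only R3 lets the mother node come out as a VP of type <e,t>.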
Given that we have decided to follow Montague here in assuming the Fixed Type Assignment assumption, and given the fact that we have assumed that the type assigned to VP is <e,t>, there are two options left for the subject-predicate rule: R1, on which the NP is of type e and the VP applies to it, and R2, on which the NP is of type <<e,t>,t> and applies to the VP.
This particular example doesn't decide between these two, but other considerations do. The grammar that we are going to make should not only generate the above sentence, but should work generally, for other NPs as well. That is, we want a grammar that would also be capable of generating every boy walks.
But that means that under our assumptions, rule R1 would require the NP every boy also to be
of type e.
But that is semantically problematic, because as an expression of type e, it would have to denote an individual, and, of course, quantified NPs do not denote individuals.
These considerations lead us to disregard rule R1.
Note: we only disregard rule R1 given the Fixed Type Assignment
assumption and our choice of letting the type of VP be <e,t>.
Here we are making both assumptions, so the rule for combining a subject with an intransitive verb phrase is rule R2:
This fixes all the types in our example as follows: S is of type t, NP of type <<e,t>,t>, NEG of type <<e,t>,<e,t>>, and both VPs of type <e,t>.
Now, we could finish our grammar easily by adding the following lexical entries:

< john , JOHN >   where JOHN ∈ CON_<<e,t>,t>
< not , NOT >     where NOT ∈ CON_<<e,t>,<e,t>>
< walk , WALK >   where WALK ∈ CON_<e,t>

and get as a semantic representation for the sentence:

JOHN(NOT(WALK))
The problem is that, as a semantic representation, this is not informative enough.
Let us assume that John is an individual and that the constant j ∈ CON_e denotes him. Our constant JOHN denotes a set of properties, and nothing so far requires that there is any connection between that set of properties and the individual John. Similarly, our constant NOT denotes some function from properties into properties, but nothing yet tells us what the relation is between the property NOT(WALK) and the property WALK.
That is, this representation does not yet support the entailments that we intuitively associate with the sentence.
Our task is not just to stipulate meanings for our lexical items, but rather to find out what they are. In other words, our semantics is not explicit enough if it doesn't explain the relation between the meaning of john at type <<e,t>,t> and the individual John.
Given that we have assumed that walk is interpreted as an expression of type <e,t>, we take the constant WALK ∈ CON_<e,t> as its interpretation.
Assuming that our constant j ∈ CON_e denotes John, we do know what the meaning of the
whole sentence should be, namely: ¬WALK(j).
Thus we have the following semantic information: the topnode S should be interpreted as ¬WALK(j), and the bottom VP node as WALK.
Our task now is to find the correct interpretations of the other expressions.
We know that semantically, the meaning corresponding to the topnode should be:

S' = NP'(NEG'(VP'))

Since VP' = WALK and S' = ¬WALK(j), we know that NP' and NEG' should be expressions that satisfy the following equation:

¬WALK(j) = NP'(NEG'(WALK))
Here is where backward λ-conversion comes in.
¬WALK(j) is an expression that has the constant j sitting in the middle. Now obviously, whatever the interpretation of NP' is going to be, it should be responsible for the introduction of the constant j.
That means that j should be part of the NP' expression.
This means that we are looking for an expression in which j is not the argument of WALK, but is part of an expression that takes an expression with WALK in it as an argument. So we need to pull j out of the expression ¬WALK(j). We do that through backward λ-conversion:

¬WALK(j) = [λx.¬WALK(x)](j)   [backward λ-conversion]

where x ∈ VAR_e.
As we have seen, these two expressions are equivalent by λ-conversion. λx.¬WALK(x) is an
expression of type <e,t>.
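The equivalence can be checked concretely with Python lambdas over a toy model (the domain and the extension of WALK are illustrative assumptions, not part of the text's formal language):

```python
# Sketch of the equivalence ¬WALK(j) = [λx.¬WALK(x)](j).
WALK = lambda x: x in {"mary"}   # a toy extension for WALK: only Mary walks
j = "john"                       # the individual denoted by the constant j

direct = not WALK(j)                      # ¬WALK(j)
converted = (lambda x: not WALK(x))(j)    # [λx.¬WALK(x)](j)
assert direct == converted
```

Applying λx.¬WALK(x) to j simply substitutes j for x, which is why forward λ-conversion takes us back to ¬WALK(j).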
However, in the above expression j is an argument of the predicate λx.¬WALK(x), while what we need is a representation where it sits inside a function that takes the expression representing the contribution of negation and WALK as its argument.
We bring the expression into that form by once more applying backward λ-conversion: we replace the
whole predicate λx.¬WALK(x) (of type <e,t>) by a variable P of that type, abstract over the
variable P, and apply the result to λx.¬WALK(x):

[λx.¬WALK(x)](j) = [λP.P(j)](λx.¬WALK(x))   [backward λ-conversion]
In this expression λP.P(j) is an expression of type <<e,t>,t>, and λx.¬WALK(x) is an expression of type <e,t>.
We have shown that applying the first to the second results in an expression with the same meaning as ¬WALK(j), hence we can add the following information to our interpretation tree:
This means that we add as our lexical entry for john:

< john , λP.P(j) >   where j ∈ CON_e, P ∈ VAR_<e,t>
We are left with finding the interpretation for the negation.
This task is exactly the same as the task we had before: we need an interpretation for NEG' that satisfies the following equation:

VP_top' = NEG'(VP_bot')
Given what we know now, we need to solve the following equation:
λx.¬WALK(x) = NEG'(WALK)
λx.¬WALK(x) is an expression that contains the negation in it; we need to write it in a form
where we have a functional expression containing the contribution of negation, which applies to WALK.
We get that through backward λ-conversion:
we replace WALK in λx.¬WALK(x) by a variable P, abstract over that variable, and apply the
result to WALK:
λx.¬WALK(x) = [λPλx.¬P(x)](WALK)   [backward λ-conversion]
In this expression λPλx.¬P(x) is an expression of type <<e,t>,<e,t>>, which is exactly the type we were
after. We complete the tree:
and we add the following lexical entry:

< not , λPλx.¬P(x) >   where P ∈ VAR_<e,t>, x ∈ VAR_e
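The entry λPλx.¬P(x) can be modelled as a curried Python function that takes a property P (type <e,t>) and returns the property that holds of x iff P does not (the extension of WALK is again an illustrative assumption):

```python
# λPλx.¬P(x) as a curried function: property in, negated property out.
NOT = lambda P: lambda x: not P(x)
WALK = lambda x: x in {"mary"}    # toy extension: only Mary walks

doesnt_walk = NOT(WALK)           # λx.¬WALK(x)
assert doesnt_walk("john") is True
assert doesnt_walk("mary") is False
```

Note that NOT(WALK) is itself a function of type <e,t>, so the result can serve as a VP denotation again, exactly as rule R3 requires.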
Collecting the grammar together we get:

Syntactic categories: NP, VP, NEG, S

Type assignment:
NP  → <<e,t>,t>
VP  → <e,t>
NEG → <<e,t>,<e,t>>
S   → t

Lexicon:
< john , λP.P(j) >     where j ∈ CON_e
< walk , WALK >        where WALK ∈ CON_<e,t>
< not , λPλx.¬P(x) >   where P ∈ VAR_<e,t>, x ∈ VAR_e

Rules:
R2: NP + VP ==> S,    S' = NP'(VP')
R3: NEG + VP ==> VP,  VP' = NEG'(VP')
and we can check that we get the right interpretation by doing the derivation:
By rule R3 we combine not and walk:

< [VP [NEG not] [VP walk]] , NEG'(VP') >

NEG'(VP') = [λPλx.¬P(x)](WALK)
          = λx.¬WALK(x)   [λ-conversion]

So this is the same as:

< [VP [NEG not] [VP walk]] , λx.¬WALK(x) >
By rule R2 we combine John with this VP:

< [S [NP John] [VP [NEG not] [VP walk]]] , NP'(VP') >

NP'(VP') = [λP.P(j)](λx.¬WALK(x))
         = [λx.¬WALK(x)](j)   [λ-conversion]
         = ¬WALK(j)           [λ-conversion]

So we get:

< [S [NP John] [VP [NEG not] [VP walk]]] , ¬WALK(j) >
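The whole derivation can be replayed as a sketch in Python: each lexical entry is a function of the right semantic type, and both rules are functional application. The extensions are toy assumptions; only the equivalences matter.

```python
WALK = lambda x: x in {"mary"}          # <e,t>: toy extension
j = "john"                              # the individual denoted by j
john = lambda P: P(j)                   # <<e,t>,t>, i.e. λP.P(j)
not_ = lambda P: lambda x: not P(x)     # <<e,t>,<e,t>>, i.e. λPλx.¬P(x)

# R3: VP' = NEG'(VP')
vp = not_(WALK)                         # λx.¬WALK(x)
# R2: S' = NP'(VP')
s = john(vp)                            # [λP.P(j)](λx.¬WALK(x))

assert s == (not WALK(j))               # = ¬WALK(j)
```

The assertion confirms that composing the lexical meanings with the two rules yields exactly the truth value of ¬WALK(j).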
Note that, disregarding for the moment the question of whether these interpretations are 'intuitive', they do make sense.
We interpret predicate negation as λPλx.¬P(x). This is an operation that takes any property P and maps it onto the property that you have iff you don't have P.
Hence it maps WALK onto the property that you have iff you don't have the property WALK.
This is precisely what doesn't does, so it is the right interpretation for doesn't.
Also the interpretation of john as λP.P(j) makes sense.
λP.P(j) is the set of all properties that John has.
[λP.P(j)](WALK) expresses that the property WALK is one of the properties in that set, hence is a property that John has, hence John walks. Similarly, [λP.P(j)](λx.¬WALK(x)) expresses that not walking is one of the properties in the set
of all properties that John has, which means that not walking is a property that John has, which
means that John doesn't have the property of walking, hence John doesn't walk.
Let us now look at the following example:
(2) Every girl walks.
We add to our syntax the categories N for nouns and DET for determiners and the obvious syntactic rule:
DET + N ==> NP
We furthermore make the assumption that the type assigned to common nouns is also <e,t>. We take GIRL ∈ CON_<e,t> as the interpretation of girl.
Assuming that the semantic operation corresponding to our syntactic rule is functional application, it follows that the determiner applies as a function to the common noun. The reason is that if the noun, of type <e,t>, applied to the determiner, the resulting NP would be of type t; and whatever we decide the type of NPs to be, it is obviously not going to be t.
Thus we add to the grammar:
< girl , GIRL >   where GIRL ∈ CON_<e,t>
The syntactic structure of every girl walks becomes:

[S [NP [DET every] [N girl]] [VP walks]]
We argued before that the type of NPs could not in general be e, because NPs do not in general denote individuals (quantificational NPs do not). Hence we were forced to assume that the type of NPs is <<e,t>,t>.
Given that the type of NPs is <<e,t>,t> and the type of Ns is <e,t>, the determiner must be of type <<e,t>,<<e,t>,t>>.
This means that determiners are relations between sets (type <<e,t>,<<e,t>,t>>): a determiner takes the noun denotation and the VP denotation, both of type <e,t>, and yields a truth value.
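A determiner as a relation between sets can be sketched as follows; the finite domain and the extensions of GIRL and WALK are illustrative assumptions, and λPλQ.∀x[P(x) → Q(x)] is the standard interpretation of every.

```python
domain = {"ann", "bea", "carl"}
GIRL = lambda x: x in {"ann", "bea"}          # <e,t>
WALK = lambda x: x in {"ann", "bea", "carl"}  # <e,t>

# every' = λPλQ.∀x[P(x) → Q(x)], type <<e,t>,<<e,t>,t>>
every = lambda P: lambda Q: all(Q(x) for x in domain if P(x))

every_girl = every(GIRL)          # the NP meaning, type <<e,t>,t>
assert every_girl(WALK) is True   # ∀x[GIRL(x) → WALK(x)]
```

Note that every(GIRL) is itself of type <<e,t>,t>: it is a function from VP denotations to truth values, exactly the NP type argued for above.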
Given the above considerations, the types and interpretations in our example are fixed in the following way:

S    : t                    ∀x[GIRL(x) → WALK(x)]
NP   : <<e,t>,t>            ?
DET  : <<e,t>,<<e,t>,t>>    ?
N    : <e,t>                GIRL
VP   : <e,t>                WALK
The topnode represents what we want the meaning of the sentence to be: i.e. whatever semantic representation we will be able to come up with, it should be equivalent to this.
Our task now is to find the meanings of the remaining nodes: the NP and the DET. We find these, again, through backward λ-conversion:
The NP meaning has to satisfy the following equation: