Syntax-Driven Computational Semantics


In keeping with the approach used in chapter 15, we will begin by augmenting our context-free grammar with semantic attachments. These attachments are instructions that specify how to compute the meaning representation of a construction from the meaning of its constituent parts.

There are myriad ways to instantiate this style of rule-to-rule approach. Our semantic attachments could, for example, take the form of arbitrary programming language fragments. We could then construct a meaning representation for a given derivation by passing the appropriate fragments to an interpreter in a bottom-up fashion and storing the resulting representation as the value for the associated non-terminal. Such an approach would allow us to create any meaning representation we might like. Unfortunately, the unrestricted power of this approach would also allow us to create representations that have no correspondence at all with the kind of formal logical expressions described in the last chapter. Moreover, this approach would afford us very little guidance as to how to go about designing the semantic attachments to our grammar rules.

For this reason, more principled approaches are typically used to instantiate the rule-to-rule approach. We introduce two such constrained approaches in this chapter. The first makes direct use of FOL and the lambda-calculus notation introduced in chapter 17; it essentially uses a logical notation to guide the creation of logical forms in a principled fashion. The second approach, described later in Section 18.4, is based on the feature-structure and unification formalism introduced in chapter 15.

The semantic attachment for the verb needs to provide the name of the predicate, specify its arity, and provide the means to incorporate an argument once it’s discovered. We can make use of lambda-expressions to accomplish these tasks.

VP -> Verb    {Verb.sem}

Verb -> closed     {lambda x. Closed(x)}

The semantic attachments to our grammar rules consist primarily of lambda-reductions, whereby one element of an attachment serves as a functor and the rest serve as arguments to it. As we will see, the real work resides in the lexicon, where the bulk of the meaning representations are introduced.
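
As a rough illustration (not part of the original grammar fragment), these attachments can be prototyped with ordinary functions in a general-purpose language. The Python sketch below builds FOL formulas as strings; the names verb_sem and vp_sem are illustrative choices, not part of any library.

```python
# Sketch: semantic attachments as Python lambdas building FOL strings.

# Verb -> closed   {lambda x. Closed(x)}
verb_sem = lambda x: f"Closed({x})"

# VP -> Verb   {Verb.sem}: a non-branching rule simply copies the child's
# semantics up to the parent.
vp_sem = verb_sem

# Lambda-reduction: the functor is applied to an argument term.
print(vp_sem("Rhumba"))   # -> Closed(Rhumba)
```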

Let’s begin by replacing our earlier target representation with one that is more in keeping with the neo-Davidsonian representations introduced in the last chapter, and by considering an example with a more complex noun phrase as its subject.

Every restaurant closed.

ALLx Restaurant(x) -> EXISTe Closed(e) and ClosedThing(e,x)

Clearly, the semantic contribution of the subject noun phrase in this example is much more extensive than in the previous one. The first step is to determine exactly what we’d like the meaning representation of Every restaurant to be. Let’s start by assuming that Every invokes the ALL quantifier and that restaurant specifies the category of concepts we’re quantifying over, which we call the restriction of the noun phrase.

That is, we’re trying to say something about all restaurants; what is being said about them is traditionally referred to as the NP’s nuclear scope. In this case, the nuclear scope of this noun phrase is closed.

ALLx Restaurant(x) -> Q(x)

Ultimately, what we need to do to make this expression meaningful is to replace Q with the logical expression corresponding to the nuclear scope. Fortunately, the lambda-calculus can come to our rescue again. All we need do is permit lambda-variables to range over FOL predicates as well as terms.

lambda Q. ALLx Restaurant(x) -> Q(x)
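
To see how this works end to end, here is a small Python sketch (illustrative names, string-based formulas) in which the lambda-variable Q ranges over a predicate; applying the subject NP’s semantics to the VP’s semantics yields the target representation given earlier.

```python
# Sketch: higher-order lambda-reduction, with Q ranging over a predicate.

# Every restaurant:  lambda Q. ALLx Restaurant(x) -> Q(x)
every_restaurant = lambda Q: f"ALLx Restaurant(x) -> {Q('x')}"

# closed (neo-Davidsonian):  lambda x. EXISTe Closed(e) and ClosedThing(e,x)
closed = lambda x: f"EXISTe Closed(e) and ClosedThing(e,{x})"

# The sentence rule applies the NP's semantics to the VP's semantics.
print(every_restaurant(closed))
# ALLx Restaurant(x) -> EXISTe Closed(e) and ClosedThing(e,x)
```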

In walking through these examples, we have introduced three techniques that instantiate the rule-to-rule approach to semantic analysis introduced at the beginning of this section:

  1. Associating complex, function-like lambda-expressions with lexical items
  2. Copying semantic values from children to parents in non-branching rules
  3. Applying the semantics of one of the children of a rule to the semantics of the other children of the rule through lambda-reduction

These techniques serve to illustrate a general division of labor that guides the design of semantic attachments in this compositional framework. In general, the lexical rule introduces quantifiers, predicates, and terms into the meaning representation being created.

Quantifier Scope Ambiguity and Underspecification

Consider, for example, the sentence Every restaurant has a menu and the following representation for it:

ALLx Restaurant(x) -> EXISTy (Menu(y) and EXISTe (Having(e) and Haver(e,x) and Had(e,y)))

This example illustrates that expressions containing quantified terms can give rise to ambiguous representations even in the absence of syntactic, lexical, or anaphoric ambiguities. This is known as the problem of quantifier scoping.
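
The representation above corresponds to the reading in which each restaurant has its own menu. The alternative reading, in which a single menu is shared by all the restaurants, gives the existential wide scope (rendered here in the same notation):

EXISTy Menu(y) and ALLx (Restaurant(x) -> EXISTe (Having(e) and Haver(e,x) and Had(e,y)))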

The interpretation that is produced is based on the order in which the lambda-expressions are reduced as dictated by the grammar and its semantic attachments. To fix this, we need the following capabilities.

  • The ability to efficiently create underspecified representations that embody all possible readings without explicitly enumerating them
  • A means to generate, or extract, all of the possible readings from the representation
  • The ability to choose among the possible readings

The following sections outline approaches to the first two problems. The solution to the last, and most important, problem requires the use of context and world knowledge and, unfortunately, remains largely unsolved.

Store-and-Retrieve Approaches

One way to address the quantifier scope problem is to rethink our notion of what the semantic expressions attached to syntactic constituents should consist of.

To provide this kind of functionality, we introduce the notion of Cooper storage (Cooper, 1983) and once again leverage the power of lambda-expressions. Recall that in our original approach to semantic analysis, we assigned a single FOL formula to each node in a parse tree. In the new approach, we replace this single semantic attachment with a store. The store includes a core meaning representation for a node along with an indexed list of quantified expressions gathered from the nodes below it in the tree. These quantified expressions take the form of lambda-expressions that can be combined with the core meaning representation to incorporate the quantifiers in the right way.
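
The following Python sketch is a toy illustration of this idea, not Cooper’s actual notation: a core representation contains indexed placeholders, the store holds the quantifier lambda-expressions, and pulling them out of the store in different orders yields the different fully scoped readings of Every restaurant has a menu.

```python
from itertools import permutations

# Sketch of Cooper storage: a core representation with indexed placeholders
# (s1, s2) paired with the quantifier expressions gathered from the tree.
# Names and string-based formulas are purely illustrative.
core = "EXISTe Having(e) and Haver(e, s1) and Had(e, s2)"
store = {
    "s1": lambda body: f"ALLx Restaurant(x) -> ({body.replace('s1', 'x')})",
    "s2": lambda body: f"EXISTy Menu(y) and ({body.replace('s2', 'y')})",
}

# Retrieval: each order in which the stored quantifiers are applied to the
# core yields a different fully scoped reading.
for order in permutations(store):
    reading = core
    for index in order:
        reading = store[index](reading)
    print(reading)
```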

Constraint-based Approaches

Unfortunately, this storage-based approach suffers from two problems. First, it only addresses the problem of scope ambiguities introduced by quantified noun phrases. However, a wide array of syntactic constructions and lexical items also introduce similar ambiguities.

Even if we could extend the store-and-retrieve approach to handle additional sources of ambiguity, there is a second, more critical shortcoming. Although it allows us to enumerate all the possible scopings for a given expression, it doesn’t allow us to impose additional constraints on those possibilities.

The solution to these problems lies in a change of perspective. Instead of taking what is essentially a procedural focus on how to retrieve fully specified representations from stores, we ought to focus instead on how to effectively represent underspecified representations, including any constraints that a final representation must satisfy.

There are a number of current approaches that address the underspecification problem from this constraint-based perspective. The hole semantics (Bos, 1996) approach we describe here is representative of the field.

In the hole semantics approach, we replace the lambda-variables with holes. Instead of using lambda-reductions to fill these holes, we first add labels to all the candidate FOL sub-expressions. In a fully specified formula, all holes will be filled with labeled sub-expressions. Of course, we can’t fill holes with just any labeled expression, so we add dominance constraints between holes and labels that restrict which labels can fill which holes.

A plugging is a one-to-one mapping from labels to holes that satisfies all the given constraints.

To implement this approach, we have to define a language to associate labels and holes with FOL expressions and to express dominance constraints between holes and labels.
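
A tiny Python illustration of what such a language might look like follows. The representation here (labeled string fragments, holes, dominance constraints, and a brute-force enumeration of pluggings) is an assumption made for exposition; the actual hole-semantics formalism of Bos (1996) is richer than this sketch.

```python
from itertools import permutations

# Labeled FOL fragments, holes h0..h2 (h0 is the top), and dominance
# constraints saying which labels must end up below which holes.
fragments = {
    "l1": "ALLx Restaurant(x) -> h1",                       # universal quantifier
    "l2": "EXISTy Menu(y) and h2",                          # existential quantifier
    "l3": "EXISTe Having(e) and Haver(e,x) and Had(e,y)",   # core event predication
}
holes = ["h0", "h1", "h2"]
constraints = [("l3", "h1"), ("l3", "h2")]  # the core must sit below both quantifiers

def labels_below(hole, plugging, seen=None):
    """All labels reachable from a hole under a given plugging."""
    seen = set() if seen is None else seen
    label = plugging[hole]
    if label not in seen:
        seen.add(label)
        for h in holes:
            if h in fragments[label]:
                labels_below(h, plugging, seen)
    return seen

def realize(hole, plugging):
    """Build the fully specified formula rooted at a hole."""
    formula = fragments[plugging[hole]]
    for h in holes:
        if h in formula:
            formula = formula.replace(h, "(" + realize(h, plugging) + ")")
    return formula

# A plugging is a one-to-one mapping between holes and labels; keep the ones
# that use every fragment and satisfy every dominance constraint.
for labels in permutations(fragments):
    plugging = dict(zip(holes, labels))
    if labels_below("h0", plugging) == set(fragments) and \
       all(lab in labels_below(hole, plugging) for lab, hole in constraints):
        print(realize("h0", plugging))
```

Running this prints exactly the two admissible scopings; adding a further constraint such as ("l2", "h1") would rule out the reading in which the existential outscopes the universal.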

This constraint-based approach to underspecification addresses many of the problems with the store-and-retrieve approach that we raised at the beginning of this section. First, the approach is not specific to any particular grammatical construction or source of scope ambiguity, since we can label, or designate as holes, essentially arbitrary pieces of FOL formulas. Second, and perhaps more importantly, dominance constraints give us the power to express constraints that can rule out unwanted interpretations. These constraints can derive from specific lexical and syntactic knowledge and can be expressed directly in the semantic attachments to lexical entries and grammar rules.

Unification-Based Approaches to Semantic Analysis

Feature-structures and the unification operator provide an effective way to implement syntax-driven semantic analysis.

In this unification-based approach, our FOL representation and lambda-based semantic attachments are replaced with complex feature structures and unification equations.

Consider a representation like the following (for a sentence such as Rhumba closed):

EXISTe Closing(e) and Closed(e, Rhumba)

Our first task is to show that we can encode representations like this within a feature-structure framework. The most straightforward way to approach this task is to simply follow the BNF-style definition of the FOL statements given in chapter 17. The relevant elements of this definition stipulate that FOL formulas come in three varieties:

  1. Atomic formulas consisting of predicates with the appropriate number of term arguments;
  2. Formulas conjoined with other formulas by and, or, and -> operators;
  3. Quantified formulas that consist of a quantifier, variables, and a formula.
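
As a rough illustration of how these three varieties might be encoded as feature structures (the attribute names below, and the use of nested Python dictionaries in place of attribute-value matrices, are our own illustrative choices):

```python
# Sketch: FOL formulas encoded as feature structures (nested dicts).

# 1. Atomic formula: Closed(e, Rhumba)
atomic = {"PRED": "Closed", "ARG0": "e", "ARG1": "Rhumba"}

# 2. Conjoined formula: Closing(e) and Closed(e, Rhumba)
conjoined = {
    "OP": "and",
    "FORMULA1": {"PRED": "Closing", "ARG0": "e"},
    "FORMULA2": atomic,
}

# 3. Quantified formula: EXISTe Closing(e) and Closed(e, Rhumba)
quantified = {"QUANT": "EXIST", "VAR": "e", "BODY": conjoined}
```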

One obvious problem with this approach is that it fails to generate all the possible ambiguous representations arising from quantifier scope ambiguities. Fortunately, the approaches to underspecification described earlier can be adapted to the unification-based approach.

Integration of Semantics into the Earley Parser

In Section 18.1 we suggested a simple pipeline architecture for a semantic analyzer in which the results of a complete syntactic parse are passed to a semantic analyzer. The motivation for this arrangement stems from the fact that the compositional approach requires a complete syntactic parse before it can proceed. It is, however, also possible to perform semantic analysis in parallel with syntactic processing. This is possible because in our compositional framework, the meaning representation for a constituent can be created as soon as all of its constituent parts are present. This section describes just such an approach to integrating semantic analysis into the Earley parser from chapter 13.

Note the importance of performing feature-structure unification prior to semantic analysis. This ensures that semantic analysis will be performed only on valid trees and that the features needed for semantic analysis will be present. The primary advantage of this integrated approach over the pipeline approach lies in the fact that APPLY_SEMANTICS can fail in a manner similar to the way that unification can fail. If a semantic ill-formedness is found in the meaning representation being created, the corresponding state can be blocked from entering the chart. In this way, semantic considerations can be brought to bear during syntactic processing.
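
As a sketch of where this step would sit, the completer of an Earley-style parser can call the semantic step whenever a constituent is finished and simply refuse to advance states whose semantics fail. The parser API names below (State, chart, apply_semantics, and so on) are assumptions for illustration, not an actual implementation.

```python
# Sketch: extending an Earley-style completer with a semantic step.

FAILURE = object()   # sentinel marking semantic ill-formedness

def apply_semantics(state):
    """Compose the .sem values of the state's completed children.
    Returns FAILURE if the resulting representation is ill-formed."""
    ...  # grammar-specific composition, omitted in this sketch

def completer(state, chart):
    sem = apply_semantics(state)
    if sem is FAILURE:
        return                    # block the state from entering the chart
    state.sem = sem
    for waiting in chart.states_expecting(state.lhs):
        chart.enqueue(waiting.advance_over(state))
```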

Unfortunately, this also illustrates one of the primary disadvantages of integrating semantics directly into the parser: considerable effort may be spent on the semantic analysis of orphan constituents that do not, in the end, contribute to a successful parse.

Idioms and Compositionality

As innocuous as it seems, the principle of compositionality runs into trouble fairly quickly when real language is examined. There are many cases in which the meaning of a constituent is not based on the meaning of its parts, at least not in the straightforward compositional sense.

The most straightforward way to handle idiomatic constructions like these is to introduce new grammar rules specifically designed to handle them.
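
For example, a rule along the following lines (using the classic idiom kick the bucket; the exact attachment is illustrative) mixes lexical items directly into the rule and attaches a meaning unrelated to the literal parts:

VP -> kick the bucket    {lambda x. Die(x)}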

Note that an Earley-style analyzer with this rule will produce two parses when this phrase is encountered: one representing the idiom and one representing the compositional meaning.

To summarize, handling idioms requires at least the following changes to the general compositional framework:

  • Allow the mixing of lexical items with traditional grammatical constituents
  • Allow the creation of additional idiom-specific constituents to handle the correct range of productivity of the idiom
  • Permit semantic attachments that introduce logical terms and predicates that are not related to any of the constituents of the rule.
