[xtag-meeting]: this week: Thursday 10:30am IRCS Fishbowl
To: xtag-meeting@cis.upenn.edu
Subject: [xtag-meeting]: this week: Thursday 10:30am IRCS Fishbowl
From: Anoop Sarkar <anoop@linc.cis.upenn.edu>
Date: Wed, 06 Mar 2002 11:30:53 EST
Sender: owner-xtag@linc.cis.upenn.edu

At this week's XTAG meeting, William Schuler will present a talk
about his dissertation work. The abstract follows.

Place: IRCS Fishbowl
Time: 10:30am
Date: 03/07/2002

Tractable environment-based disambiguation

The standard `pipeline' approach to natural language processing, in
which inputs are morphologically and syntactically resolved to a
single unambiguous representation before being interpreted, can
achieve respectable results on content like newspaper text or dictated
speech, where no machine-readable contextual information is readily
available to provide semantic guidance for disambiguation; but it is a
poor fit for applications such as natural language interfaces, where a
large amount of contextual information is available in the form of the
objects, states, and processes in the application's run-time
environment. This information is effectively ignored in the pipeline
architecture because it is not made available (through semantic
interpretation) until disambiguation decisions have already been made.

This talk will describe work on a practical natural language interface
architecture that disambiguates input sentences by incrementally
calculating the objects, states, and processes in the application's
environment that each hypothesized constituent could denote.
Disambiguation decisions are then based on the results of these
calculations (e.g., to ensure that certain kinds of constituents
always refer to something in the environment).
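
As a rough sketch of this kind of environment-based pruning (toy data
and invented names, not the actual system described in the talk), each
hypothesized analysis can be checked against the run-time environment
and discarded when one of its referential constituents denotes
nothing:

    # Hypothetical sketch: prune hypothesized analyses whose referential
    # constituents denote nothing in the run-time environment.
    # Names and data structures are illustrative only.

    # A toy environment: objects with properties the interface can inspect.
    ENVIRONMENT = [
        {"id": "block1", "type": "block", "color": "red"},
        {"id": "block2", "type": "block", "color": "green"},
        {"id": "table1", "type": "table", "color": "brown"},
    ]

    def denotation(constituent, environment):
        """Objects a noun-phrase-like constituent could refer to,
        given its semantic constraints."""
        return [obj for obj in environment
                if all(obj.get(feature) == value
                       for feature, value in constituent["constraints"].items())]

    def prune(analyses, environment):
        """Keep only analyses in which every referential constituent
        denotes at least one object in the environment."""
        return [a for a in analyses
                if all(denotation(c, environment) for c in a["referential"])]

    # Two competing readings of an ambiguous input: one mentions a
    # "red table" (denotes nothing here), the other a "red block".
    analyses = [
        {"reading": "move the red table",
         "referential": [{"constraints": {"type": "table", "color": "red"}}]},
        {"reading": "move the red block",
         "referential": [{"constraints": {"type": "block", "color": "red"}}]},
    ]

    print([a["reading"] for a in prune(analyses, ENVIRONMENT)])
    # -> ['move the red block']
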
Since this environment-based disambiguation architecture requires the
semantic interpretation of every possible constituent structure that
can be assigned to every possible string of recognized words in order
to inform disambiguation decisions, it must also employ some mechanism
for eliminating redundant calculations or it will quickly succumb to a
combinatorial explosion. The approach described in this talk combines
methods for reasoning about underspecified semantic representations,
taken from the study of formal linguistic semantics, with
structure-sharing representations taken from the study of parsing, in
order to define denotations for `structurally-underspecified'
constituents that subsume several possible subconstituent structures,
allowing semantic interpretation to be efficiently performed on a
compact representation of structural ambiguity called a shared forest.
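
As a loose illustration of the structure-sharing idea (toy data and
invented node names, not the representation used in the talk),
denotations can be computed bottom-up over forest nodes and memoized,
so that a subconstituent shared by several bracketings is interpreted
only once:

    # Hedged sketch: interpret a packed ("shared") forest for the
    # structurally ambiguous phrase "old blocks and tables", whose two
    # bracketings (old (blocks and tables)) and ((old blocks) and tables)
    # are packed into one node that shares its subconstituents.
    from functools import lru_cache

    FOREST = {
        "adj_old":  {"leaf": {"block1", "table1"}},    # old things
        "n_blocks": {"leaf": {"block1", "block2"}},    # blocks
        "n_tables": {"leaf": {"table1", "table2"}},    # tables
        "np_blocks_and_tables": {"alts": [("union", ("n_blocks", "n_tables"))]},
        "np_old_blocks": {"alts": [("intersect", ("adj_old", "n_blocks"))]},
        "np_old_blocks_and_tables": {
            "alts": [("intersect", ("adj_old", "np_blocks_and_tables")),
                     ("union", ("np_old_blocks", "n_tables"))]},
    }

    OPS = {
        "intersect": lambda sets: frozenset.intersection(*sets),
        "union": lambda sets: frozenset.union(*sets),
    }

    @lru_cache(maxsize=None)
    def denote(node_id):
        """Denotation of a forest node, computed once and cached, so a
        subconstituent shared by many parses is interpreted only once."""
        node = FOREST[node_id]
        if "leaf" in node:
            return frozenset(node["leaf"])
        # A packed node's denotation subsumes its alternative bracketings;
        # here, simply the union of what each alternative could denote.
        return frozenset().union(*(OPS[op]([denote(c) for c in children])
                                   for op, children in node["alts"]))

    print(sorted(denote("np_old_blocks_and_tables")))
    # -> ['block1', 'table1', 'table2']
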
Additional trade-offs between expressivity and complexity are
described within this framework in order to localize certain semantic
dependencies (in particular, between the restrictor and scope
arguments of quantifier expressions) to a single processing step, and
thereby avoid deriving certain kinds of partial quantifier
constituents and the exponential second-order sets they denote,
ensuring that calculated denotations will always be of tractable size.
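
To see why this localization matters, consider the following hedged
sketch (illustrative names only): a partial constituent like `every
block', interpreted on its own, denotes a second-order set containing
every superset of the blocks, which grows exponentially with the size
of the environment, whereas evaluating the quantifier over both its
restrictor and its scope in a single step needs only two first-order
sets:

    # Hypothetical comparison of the two strategies.
    from itertools import chain, combinations

    def partial_every(restrictor, domain):
        """Second-order denotation of 'every R' taken on its own: all
        subsets of the domain containing every R-object; there are
        2^(|domain| - |R|) of them."""
        rest = sorted(domain - restrictor)
        subsets = chain.from_iterable(combinations(rest, k)
                                      for k in range(len(rest) + 1))
        return [restrictor | set(s) for s in subsets]

    def every(restrictor, scope):
        """Single-step evaluation: 'every R is S' iff R is a subset of S."""
        return restrictor <= scope

    blocks = {"block1", "block2"}
    red_things = {"block1", "block2", "ball1"}
    domain = {"block1", "block2", "ball1", "table1"}

    print(len(partial_every(blocks, domain)))  # 4 sets here; exponential in general
    print(every(blocks, red_things))           # True: every block is red
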
The talk will conclude with a description of an implemented system
incorporating these ideas (as well as other practical optimizations
for faster processing of rich grammar formalisms and argument-first
shared forest traversals that avoid calculating complete denotations
for 3-D spatial relations) into a broad-coverage speech interface for
instructing human-like agents in a simulated 3-D environment, using
that same environment to guide disambiguation decisions.