In this paper we present the results of a collaborative effort to design and implement a system for cooperative material handling by a small team of human and robotic agents in an unstructured indoor environment. Our approach makes fundamental use of human agents' expertise for aspects of task planning, task monitoring, and error recovery. Our system is neither fully autonomous nor fully teleoperated: it is designed to make effective use of human abilities within the present state of the art of autonomous systems, and to allow for and promote cooperative interaction between distributed agents with various capabilities and resources. By robotic agents we mean systems that are each equipped with at least one sensing modality and possess some capability for self-orientation and/or mobility; they are not required to be homogeneous with respect to either capabilities or function.
Our research stresses both theoretical issues and testbed experimentation. The theoretical issues include the coordination principles and techniques that are fundamental to the basic functioning of such a cooperative multi-agent system. We have constructed a testbed facility for experimenting with distributed multi-agent architectures. The required modular components of this testbed are currently operational and have been tested individually. Our current research focuses on the integration of agents in a scenario for cooperative material handling.
We describe an approach for mobile robot localization based on geometric features extracted from ultrasonic data. As is well known, a single sonar measurement using a standard Polaroid sensor, though yielding relatively accurate information regarding the range of a reflective surface patch, provides scant information about the location in azimuth or elevation of that patch. This lack of sufficiently precise localization of the reflective patch hampers any attempt at data association, clustering of multiple measurements or subsequent classification and inference.
In previous work we proposed a multi-stage approach to clustering which aggregates sonic data accumulated from arbitrary transducer locations in a sequential fashion. It is computationally tractable and efficient despite the inherent exponential nature of clustering, and is robust in the face of noise in the measurements. It therefore lends itself to applications where the transducers are fixed relative to the mobile platform, where remaining stationary during a scan is both impractical and infeasible, and where dead-reckoning errors can be substantial.
In the current work we apply this feature extraction algorithm to the problem of localization in a partially known environment. Feature-based localization boasts advantages in robustness and speed over several other approaches. We limit the set of extracted features to planar surfaces. We describe an approach for establishing correspondences between extracted and map features. Once such correspondences have been established, a least squares approach to mobile robot pose estimation is delineated. It is shown that once correspondence has been found, the pose estimation may be performed in time linear in the number of extracted features. The decoupling of the correspondence matching and estimation stages is shown to offer advantages in speed and precision.
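To make the estimation stage concrete, the sketch below is a hypothetical illustration, not the system's actual code, and its interfaces are invented: given already-matched planar features in 2-D, the heading is obtained by averaging the angle differences of corresponding plane normals, and the translation follows from one linear equation per matched plane, so the cost grows linearly with the number of features.

\begin{verbatim}
import numpy as np

def estimate_pose(map_planes, observed_planes):
    """Estimate a 2-D robot pose (x, y, theta) from matched planar features.

    map_planes:      list of (n, d): unit normal n (2-vector) and offset d,
                     so wall points p satisfy n . p = d in the world frame.
    observed_planes: corresponding (m, e) expressed in the robot frame.
    Correspondences are assumed to have been established already.
    """
    # Orientation: each pair of matched normals votes for the heading;
    # combine the votes with a circular mean.
    angles = [np.arctan2(n[1], n[0]) - np.arctan2(m[1], m[0])
              for (n, _), (m, _) in zip(map_planes, observed_planes)]
    theta = np.arctan2(np.mean(np.sin(angles)), np.mean(np.cos(angles)))

    # Translation: with the robot at t, a wall n . p = d observed at
    # offset e gives the linear constraint n . t = d - e.
    A = np.array([n for (n, _) in map_planes])
    b = np.array([d - e for (_, d), (_, e) in zip(map_planes, observed_planes)])
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t[0], t[1], theta
\end{verbatim}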
Since the clustering algorithm aggregates sonic data accumulated from arbitrary transducer locations, there are no constraints on the trajectory to be followed for localization except that sufficiently large portions of features be ensonified to allow clustering. Preliminary experiments indicate the usefulness of the approach, especially for accurate estimation of orientation.
We describe a novel approach to the extraction of geometric features from sonic data. As is well known, a single sonar measurement using a standard Polaroid sensor, though yielding relatively accurate information regarding the range of a reflective surface patch, provides scant information about the location in azimuth or elevation of that patch. This lack of sufficiently precise localization of the reflective patch hampers any attempt at data association, clustering of multiple measurements or subsequent classification and inference. The problem is particularly apparent in uncertain environments with unknown geometry, such as is found underwater. Moreover, the underwater environment precludes the usual (office-environment) simplification of two-dimensionality.
We propose a multi-stage approach to clustering which aggregates sonic data accumulated from arbitrary transducer locations in an on-line fashion. It is computationally tractable and efficient despite the inherent exponential nature of clustering, and is robust in the face of noise in the measurements. It therefore lends itself to applications where the transducers are fixed relative to the mobile platform, where remaining stationary during a scan is both impractical and infeasible, and where dead-reckoning errors can be substantial. The approach may be used both for map-building during exploration and for feature identification during navigation.
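As a purely schematic illustration of sequential aggregation (this is not the algorithm of the paper, and the data layout is invented), the sketch below absorbs each new return into the nearest existing cluster when it falls within a residual threshold and otherwise seeds a new candidate feature, so data taken from arbitrary transducer poses never need to be re-clustered from scratch.

\begin{verbatim}
import numpy as np

def cluster_returns(returns, threshold=0.25):
    """Schematic on-line clustering of sonar returns.

    returns:   iterable of (sensor_pos, beam_dir, rng) taken at arbitrary,
               known transducer poses; beam_dir is a unit vector.
    threshold: maximum distance (in metres) between a return and a cluster
               centroid for the return to be absorbed by that cluster.
    """
    clusters = []  # each cluster is a list of estimated reflection points
    for sensor_pos, beam_dir, rng in returns:
        point = np.asarray(sensor_pos) + rng * np.asarray(beam_dir)
        # Try to absorb the new return into the closest existing cluster.
        best, best_dist = None, threshold
        for c in clusters:
            dist = np.linalg.norm(point - np.mean(c, axis=0))
            if dist < best_dist:
                best, best_dist = c, dist
        if best is not None:
            best.append(point)
        else:
            clusters.append([point])   # seed a new candidate feature
    return clusters
\end{verbatim}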
Functionality of an object defines its purpose in an environment. In this paper we introduce a representation for functionality and present a methodology for its recovery. The representation for an object must include not only its intrinsic (material) properties but also its functional properties (how it is used). In addition, the representation should include information on how the functional properties, as well as the intrinsic properties, are to be recovered. Hence, we define the representation of functionality to be active. In order to recover the functionality of an object, we have developed a formalism based on Discrete Event System Theory and on the paradigm of Active Perception. This formalism allows us to express an investigative task in the form of a finite automaton for controlling and observing the interactions involved in the task. Our current investigation focuses on manipulatory interactions and, thus far, on the functionality of piercing. We envision an expandable system having at its disposal a score of investigative procedures and, through them, the ability to classify and recognize an object based on its functional attributes.
A characterization of the Perceptron Laser Range Finder from Perceptron, Inc. (Farmington Hills, MI) is presented along with techniques for calibration and use. Emphasis is placed on using the range finder in a controlled environment and for applications which require high accuracy.
In the ten years since we put forward the idea of active perception (Bajcsy 1985, Bajcsy 1988) we have found that cooperative processes of various kinds and at various levels are often called for. In this paper we suggest that a proper understanding of cooperative processes will lead to a foundation for intelligent behavior and demonstrate the feasibility of this approach for some of the difficult and open problems in the understanding of intelligent behaviors.
Keywords: cooperation, hybrid systems, modeling agents, perception-action cycle, signal-to-symbol transformation.
The correctness of real-time distributed systems depends not only on the function they compute but also on their timing characteristics. Furthermore, these characteristics are strongly influenced by the delays due to synchronization and resource availability. Process algebras have been used successfully to define and prove correctness of distributed systems. More recently, there has been a lot of activity to extend their application to real-time systems. The problem with most current approaches is that they ignore resource constraints and assume either maximum parallelism (i.e., unlimited resources) or pure interleaving (i.e., a single resource). The Algebra of Communicating Shared Resources (ACSR) is a process algebra designed for the formal specification and manipulation of distributed systems with resources and real-time constraints. A dense time domain provides a more natural way of specifying systems compared to the usual discrete time. Priorities provide a measure of urgency for each action and can be used to ensure that deadlines are met. In ACSR, processes are specified using resource-bound, timed actions and instantaneous synchronization events. Processes can be combined using traditional operators such as nondeterministic choice and parallel execution. Specialized operators allow the specification of real-time behavior and constraints. The semantics of ACSR is defined as a labeled transition system. Equivalence between processes is based on the notion of strong bisimulation. A sound and complete set of algebraic laws can be used to transform almost any ACSR process into a normal form.
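Since process equivalence in ACSR rests on strong bisimulation over a labeled transition system, the following hypothetical sketch (it is not part of ACSR or any associated tool) shows the standard greatest-fixpoint computation of strong bisimilarity on a small finite transition system: start from the full relation and repeatedly remove pairs that violate the transfer condition.

\begin{verbatim}
def strong_bisimulation(states, transitions):
    """Largest strong bisimulation on a finite labeled transition system.

    states:      collection of state names.
    transitions: collection of (source, label, target) triples.
    Returns the largest strong bisimulation as a set of state pairs.
    """
    states = list(states)
    succ = {}
    for s, a, t in transitions:
        succ.setdefault((s, a), set()).add(t)
    labels = {a for _, a, _ in transitions}

    # Start from the full relation and discard pairs violating the
    # bisimulation transfer condition until a fixpoint is reached.
    rel = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            ok = all(
                all(any((t, u) in rel for u in succ.get((q, a), set()))
                    for t in succ.get((p, a), set()))
                and
                all(any((t, u) in rel for t in succ.get((p, a), set()))
                    for u in succ.get((q, a), set()))
                for a in labels
            )
            if not ok:
                rel.discard((p, q))
                changed = True
    return rel
\end{verbatim}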
Graphical Communicating Shared Resources, GCSR, is a formal language for specifying real-time systems including their functional and resource requirements. A GCSR specification consists of a set of nodes that are connected with directed, labeled edges, which describe possible execution flows. Nodes represent instantaneous selection among execution flows, or time- and resource-consuming system activities. In addition, a node can represent a system subcomponent, which allows modular, hierarchical, and thus scalable system specifications. Edges are labeled with instantaneous communication actions or with time to describe the duration of activities in the source node. GCSR supports the explicit representation of resources and priorities to resolve resource contention. The semantics of GCSR is given by the Algebra of Communicating Shared Resources, a timed process algebra with an operational semantics that makes GCSR specifications executable. Furthermore, the process algebra provides behavioral equivalence relations between GCSR specifications. These equivalence relations can be used to replace a GCSR specification with an equivalent specification inside another, and to minimize a GCSR specification in terms of the number of nodes and edges. The paper defines the GCSR language, describes GCSR specification reductions that preserve the specification behaviors, and illustrates GCSR with example design specifications.
Scientific data of importance to biologists in the Human Genome Project resides not only in conventional databases, but in structured files maintained in a number of different formats (e.g. ASN.1 and ACE) as well as sequence analysis packages (e.g. BLAST and FASTA). These formats and packages contain a number of data types not found in conventional databases, such as lists and variants, and may be deeply nested. We present in this paper techniques for querying and transforming such data, and illustrate their use in a prototype system developed in conjunction with the Human Genome Center for Chromosome 22. We also describe optimizations performed by the system, a crucial issue for bulk data.
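Purely as an illustration of the kind of query involved (the record shapes and field names below are invented, and are not those of ASN.1, ACE, or the prototype system), a query over deeply nested data with lists and variants might flatten the nested lists and dispatch on a variant tag as follows.

\begin{verbatim}
def dna_titles(entries):
    """Illustrative query over nested records containing lists and variants.

    Each entry is assumed (hypothetically) to look like
        {"id": ..., "seqs": [{"kind": "dna" or "protein", "title": ...}, ...]}
    where the "kind" field plays the role of a variant tag.
    """
    return [
        (entry["id"], seq["title"])
        for entry in entries
        for seq in entry["seqs"]          # flatten the nested list
        if seq["kind"] == "dna"           # select one branch of the variant
    ]
\end{verbatim}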
In computer vision for object recognition or autonomous navigation, shadows are a frequent occurrence. However, shadows in an image can make it difficult to partition the image into regions corresponding to physical objects. Consequently, shadows must be accounted for in images. Despite this, relatively little work in image understanding has addressed this problem of recognizing shadows. This is in large part because shadows are difficult to identify. They cannot be infallibly recognized until a scene's geometry and lighting are known. However, this dissertation presents a number of cues which together strongly suggest the identification of a shadow and which can be examined without a high computational cost.
The techniques developed are: a color model for shadows and a color image segmentation method that recovers single-material surfaces as single regions regardless of whether the surface is partially in shadow; a method to recover the penumbra and umbra of a shadow; a method for determining whether some object could be obstructing a light source; and a set of tests to determine that shadows that appear to be cast on planar surfaces are in fact on those surfaces. Although some cues require the recovery of information about scene geometry, this dissertation shows how these cues can be applied without doing absolute depth recovery for surfaces or objects in the scene -- with the exception of an estimate of the light source position, which is part of the obstruction cue.
These cues either depend on, or have their reliability improved by, the examination of some well-understood shadows in a scene. Consequently, the observer is equipped with an extendible probe for casting its own shadows. Any visible shadows cast by the probe are easily identified because they will be new to the scene. These actively obtained shadows allow the observer to determine experimentally the number, location, and rough extent of the light sources in the scene, and give the observer information about the likely spectral changes due to shadows.
The system has been tested against a variety of indoor and outdoor scenes.
To facilitate the transport of audio and video data across emerging Asynchronous Transfer Mode (ATM) networks, a simple, low cost, audio/video ATM appliance, the AVATAR, has been developed. This appliance is capable of handling uncompressed bidirectional audio and NTSC video connections.
The intended applications for this device include TeleMentoring (an NSF-sponsored exploration of distance mentoring), video conferencing, and network monitoring. Our experience has shown that AVATAR is an effective, low cost means of providing multimedia connectivity between sites within the Aurora Gigabit testbed.
Mether is a Network Shared Memory (NSM). It allows applications on autonomous computers connected by a network to share a segment of memory.
NSMs offer the attraction of a simple abstraction for shared state, i.e., shared memory. NSMs have a potential performance problem in the cost of remote references, which is typically solved by grouping memory into larger units such as pages, and caching pages. While Mether employs grouping and caching to reduce the average memory reference delay, it also removes the need for many remote references (page faults) by providing a facility with relaxed consistency requirements.
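The toy sketch below is not Mether's interface; it only illustrates the two ideas just mentioned: memory is grouped into pages that are cached locally, and a relaxed read mode is willing to serve a possibly stale cached page rather than pay for a remote reference.

\begin{verbatim}
class PageCache:
    """Toy network-shared-memory client: page-grained caching with an
    optional relaxed (possibly stale) read mode."""

    PAGE_SIZE = 8192

    def __init__(self, fetch_remote):
        self.fetch_remote = fetch_remote   # callback: page number -> bytes
        self.cache = {}                    # page number -> cached bytes

    def read(self, address, consistent=True):
        page, offset = divmod(address, self.PAGE_SIZE)
        if consistent or page not in self.cache:
            # Strict mode (or a cold cache) pays for a remote reference.
            self.cache[page] = self.fetch_remote(page)
        # Relaxed mode returns whatever is cached, avoiding the remote fault.
        return self.cache[page][offset]
\end{verbatim}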
Applications ported from a multiprocessor supercomputer with shared memory to a 16-workstation Mether configuration showed a cost/performance advantage of over 300 in favor of the Mether system. While Mether is currently implemented for Sun-3 and Sun-4 systems connected via Ethernet, other characteristics (such as a choice of page sizes and a semaphore-like access mode useful for process synchronization) should suit it to a wide variety of networks. A reimplementation for an alternate configuration employing packet-switched networks is in progress.
This technical report documents the work I did during my six-month visit, from August 1, 1994 to February 1995, at the GRASP Laboratory, University of Pennsylvania, Philadelphia, USA. This report has evolved from working notes into what is presented here. I hope that this does not decrease its readability.
Although moving human observers actively fixate their eyes on points in the world, computer vision algorithms designed for the estimation of structure-from-motion or egomotion typically do not make use of this constraint. The main contribution of this work is to precisely specify the form of the optical flow field for a fixating observer. In particular, we show theoretically that the use of a hemispherical (retinal) imaging surface generates an optical flow field of a particularly simple form. The predictions of this theory are tested using the first actual hemispherical lens-camera system in computer vision, involving a 180 degree field of view lens. A further contribution is the finding that the sign of flow at the retinal periphery can be used to predict collisions.
A description of a model for accumulating segmentation information over multiple images is presented. The model combines geometric information with segmentation information to improve predicted appearance of objects. An extensive example using polyhedral object representations is described. Statistical results are shown over a set of 40 real images, verifying the underlying assumptions of the segmentation model concerning spatially distributed segmentation information over object edges. Evidence is accumulated in support of the hypothesis that segmentation varies significantly because of the interference between scene objects as well as viewpoint and illumination.
In order to take advantage of autonomous robotic systems, and yet ensure successful completion of all feasible tasks, we propose a mediation hierarchy in which an operator can interact at all system levels. Robotic systems are not robust in handling unmodeled events. Reactive behaviors may be able to guide the robot back into a modeled state so that it can continue. Reasoning systems may simply fail. Once a system has failed, it is difficult to restart the task from the failed state; instead, the rule base is revised, programs are altered, and the task is retried from the beginning.
Human-machine interfaces have been developed for applications in areas such as nuclear power plants, aviation, and telerobotics; however, these systems are generally not autonomous, with the operator instead providing a ``supervisory'' role. Typically, the human operator controls the entire task execution.
One aspect of the system we are developing permits the human operator, when necessary, to interact with all levels of a system to assist with process errors. This interaction encompasses all areas of a semi-autonomous system, from the processes which would be considered fully autonomous to those considered telerobotic.
Our system, MASC (Multiple Agent Supervisory Control), permits the agents to work autonomously until the human supervisor is requested to take control or detects a problem. Our design strategy is to develop a general system which is applicable to various robotic systems. We combine the advantages of autonomous systems with the human's ability to control a system through a human-machine interface. MASC provides the supervisor with tools to interact with all the robotic system processing levels. These interactions may correct corrupted data or erroneous process decisions which would typically cause an autonomous system to enter an unstable state. Through this interaction we aim to create a more comprehensive semi-autonomous system which will successfully complete the execution of task assignments.
We have defined four hierarchical levels of supervisory interaction with the various robotic system levels. MASC permits the supervisor to specify task assignments, teleoperate agents, display sensory data, override process conclusions and reconfigure the system during sundry sensory and agent failures.
One goal in certain classes of networked multimedia applications, such as full-feedback remote control, is to provide end-to-end guarantees. To achieve guarantees, all resources along the path(s) between the source(s) and sink(s) must be controlled. Resource availability is checked by the admission service during the call establishment phase. Current admission services control only network resources such as bandwidth and network delay. To provide end-to-end guarantees, networked applications also need operating system resources and I/O devices at the endpoints. All such resources must be included in a robust admission process. By integrating the end-point resources, we observed several dependencies which force changes in admission algorithms designed and implemented for control of a single resource.
We have designed and implemented the multi-level admission service within our Omega architecture which controls the availability of end-point resources needed in remote control multimedia applications such as telerobotics.
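A minimal sketch of such a multi-resource admission test appears below; the resource names and units are hypothetical and this is not the Omega implementation. A new call is admitted only if every end-point and network resource it needs still has sufficient capacity, and the reservation is committed only after all checks pass.

\begin{verbatim}
def admit(call, available):
    """Admit a call only if every resource it needs can be reserved.

    call:      dict mapping a resource name (e.g. 'net_bandwidth_kbps',
               'cpu_percent', 'io_buffers') to the amount requested.
    available: dict mapping resource names to remaining capacity.
    Returns True and commits the reservation, or False and changes nothing.
    """
    if any(available.get(r, 0) < amount for r, amount in call.items()):
        return False                # some resource would be over-committed
    for r, amount in call.items():  # commit only after every check passes
        available[r] -= amount
    return True
\end{verbatim}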
This is a collection of five papers that concern applications of ideas from proof theory to problems in the semantics of types and concurrency.
We will examine the problem of distinguishing between database instances and values in models which incorporate object-identities and recursive data-structures. We will show that the notion of observational distinguishability is intricately linked to the languages available for querying a database. In particular we will show that, given a simple query language incorporating a test for equality of object-identities, database instances are indistinguishable if they are isomorphic, and that, in a language without any operators on object-identities, database instances are indistinguishable if a bisimilarity relation holds between them. Further, such a bisimulation relation may be computed on values, but doing so requires the ability to recurse over all the object-identities in an instance.
We will then show that systems of keys give rise to observational distinguishability relations which lie between these two extremes. We show that a system of keys satisfying certain restrictions provides us with an efficient means of comparing values, while avoiding the need to compare object identities directly.
The central theme of this paper is to study the properties and expressive power of data-models which use type systems with extents in order to represent recursive or self-referential data-structures. A standard type system is extended with classes which represent the finite extents of values stored in a database. Such an extended type system expresses constraints about a database instance which go beyond those normally associated with the typing of data-values, and takes on an important part of the functionality of a database schema. Recursion in data-structures is then constrained to be defined via these finite extents, so that all values in a database have a finite representation.
The idea of extending a type system with such classes is not new. In particular, [2] introduced a type system and data models equivalent to those used here. However, such existing work focuses on the expressive power of systems which allow the dynamic creation of recursive values, while we are concerned more with the properties of querying and manipulating databases containing known static extensions of data-values.
The paper consists of three parts. In part I we look at the problem of expressing transformations and constraints over a model based on object identities. A declarative language based on Horn-clause logic is introduced in which we can express a very general family of constraints and transformations. A normal form for transformations is defined and it is shown that transformation specifications expressed in the language which satisfy certain syntactic restrictions can be converted into equivalent transformation specifications in normal form. The normal form transformations can then be converted into an appropriate DBPL for implementation.
In part II we present a more detailed study of data-models based around such an extended type system. A second data model, based on regular trees, is introduced. It is shown that this second data-model is a finer model than the first, object-identity based model, and that under certain assumptions about the operators available on object identities, the second data model is observably equivalent to the first. It is also shown that, under different assumptions about the operators on object identities, any two non-isomorphic instances in the second model are observationally distinguishable, and that other assumptions yielding useful observational properties between these two extremes are also possible.
In part III we study the evaluation of recursive function definitions over such data-structures. We show that, in general, such function definitions may have many possible solutions, and identify the desirable or intuitive solutions as those which can be computed constructively. We also show that, by making use of the known finite domains of such functions, we can compute these solutions in a manner which is guaranteed to terminate.
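As a rough illustration of the idea (this is not the formalism of the paper), the sketch below evaluates a recursively defined function, here reachability over a possibly cyclic object graph, by iterating its defining equation over a known finite extent until a fixpoint is reached; termination is guaranteed because the table of results can only grow within the finite extent.

\begin{verbatim}
def reachable(extent, refs):
    """For every object identity in a finite extent, compute the set of
    identities reachable from it by iterating the defining equation
        reach(o) = {o} union (reach(o') for each o' in refs(o))
    to a fixpoint.  refs(o) is assumed to return identities within the
    extent.  Because the extent is finite, the table can only grow a
    bounded number of times, so the loop terminates even when the
    object graph is cyclic."""
    reach = {o: {o} for o in extent}
    changed = True
    while changed:
        changed = False
        for o in extent:
            new = {o}.union(*(reach[p] for p in refs(o)))
            if not new <= reach[o]:
                reach[o] |= new
                changed = True
    return reach
\end{verbatim}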
I have deferred conclusions and discussion of further work until part IV, where I briefly state the additional work I believe needs to be done in these areas and directions for future research. In addition there are many omitted proofs and technical details that need to be written up, and perhaps a need for further examples.
New cell-switched network technologies and multimedia peripherals enable distributed applications with strict real-time requirements such as remote control with feedback. Time-bounded network communications services are necessary, but not sufficient, to meet application-to-application real-time requirements. Real-time communication must be coupled with real-time computing support at the network end-points. An end-point architecture for the computation/communications coupling must be flexible and robust to support a diversity of applications.
The OMEGA architecture, when coupled with cell-switched networks (or others which can make bandwidth and delay guarantees), can approximate the behavior of dedicated microcontrollers connected by dedicated circuits in support of an application. The essence of the OMEGA architecture is resource reservation and management within the set of multimedia endpoints. Communication is preceded by a call set-up period where requirements, expressed in terms of Quality of Service (QoS) parameters, are negotiated, and guarantees are made at several logical levels, such as between applications and the network subsystem, applications and the operating system, and the network subsystem and the operating system. This establishes customized connections and allocation of resources appropriate to the application requirements and OS/network capabilities. To facilitate this resource management process, a new paradigm called the ``QoS Brokerage'' is used. This paradigm requires new services and protocols across all layers of the protocol stack (i.e., the higher layers of B-ISDN), as well as rearchitecting the application/network interface.
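The fragment below is only a schematic of the brokerage idea; the parameter names and interfaces are invented and do not reflect the OMEGA API. Application-level QoS is translated into network- and operating-system-level requirements, each level must accept its share of the reservation, and the broker reports the negotiated deal back to the application.

\begin{verbatim}
def broker_call(app_qos, network, op_sys):
    """Schematic QoS brokerage: translate, negotiate, and report a deal.

    app_qos: e.g. {'frame_rate': 30, 'frame_bytes': 64000, 'latency_ms': 20}
    network: object with admit(bandwidth_bps, delay_ms) -> bool
    op_sys:  object with reserve(cpu_period_ms, buffers) -> bool
    """
    # Translation: application parameters become resource-level requirements.
    bandwidth_bps = app_qos['frame_rate'] * app_qos['frame_bytes'] * 8
    cpu_period_ms = 1000.0 / app_qos['frame_rate']

    # Negotiation: every level must agree before the call is established.
    if not network.admit(bandwidth_bps, app_qos['latency_ms']):
        return None                  # caller may retry with a weaker request
    if not op_sys.reserve(cpu_period_ms, buffers=2):
        return None
    return {'bandwidth_bps': bandwidth_bps, 'cpu_period_ms': cpu_period_ms}
\end{verbatim}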
A prototype of OMEGA has been implemented and tested with a master/slave telerobotics application using a dedicated 155 Mbps ATM LAN. This application employs media with highly diverse QoS requirements and therefore provides a good platform for testing how closely one can approximate a dedicated circuit and controller with workstation hosts and cell-switching. Experience with this implementation has helped to identify new challenges to extending these techniques to a larger domain of applications and systems, and raises several new research questions.
To acquire the complete description of an arbitrary object's surface using a range camera, multiple images of the object from different viewpoints must be combined. This paper presents a solution to the problem of determining how to position the range camera so as to sample more of the object's surface while making sure the new image can be registered and integrated into the (incomplete) model of the object. Requirements for the solution to this problem are 1) all portions of the surface which can be scanned will be scanned; 2) the solution will generate a set of views which when combined will converge to the object's surface in a minimum number of images; 3) all surfaces of the object should be scanned with at least a given minimum confidence; 4) the solution will be robust in the sense that if it chooses a viewing position for the camera the resulting image will be successfully registered and integrated with the (incomplete) model or it will announce that now such view exists. We present a framework for such a solution and argue that it satisfies conditions 1 and 2. This sets the foundation to extend the present solution to satisfy conditions 3 and 4.
In this paper we study the existence, structure and computation of minimax and near-minimax rules under zero-one loss for a restricted location parameter of an absolutely continuous distribution.
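For concreteness, one standard formulation of this setting (a generic textbook statement, not necessarily the exact formulation used in the paper) takes the zero-one loss
\[
L_\varepsilon(\theta, d) = \begin{cases} 0, & |d - \theta| \le \varepsilon, \\ 1, & |d - \theta| > \varepsilon, \end{cases}
\qquad
R(\theta, \delta) = P_\theta\bigl( |\delta(X) - \theta| > \varepsilon \bigr),
\]
and calls a rule \(\delta^*\) minimax over the restricted parameter set \(\Theta \subset \mathbb{R}\) if it attains \(\inf_\delta \sup_{\theta \in \Theta} R(\theta, \delta)\); a near-minimax rule comes within a prescribed amount of this value.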
Database transformations arise in many different settings including database integrations, evolution of database systems, and implementing user views and data-entry tools. This paper surveys approaches that have been taken to problems in these settings, assesses their strengths and weaknesses, and develops requirements on a formal model for specifying and implementing database transformations.
We also consider the problem of ensuring the correctness of database transformations. In particular, we demonstrate that the usefulness of correctness conditions such as information preservation is hindered by the interactions of transformations and database constraints, and by the limited expressive power of established database constraint languages. We conclude that more general notions of correctness are required, and that there is a need for a uniform formalism for expressing both database transformations and constraints, and for reasoning about their interactions.
Finally we introduce WOL, a declarative language for specifying and implementing database transformations and constraints. We briefly describe the WOL language and its semantics, and argue that it addresses many of the requirements of a formalism for dealing with general database transformations.
This paper explores and reasons about the interplay between symbolic and continuous representations. We first provide some historical perspective on signal and symbol integration as viewed by the Artificial Intelligence, Robotics and Computer Vision communities. The domain of autonomous robotic agents residing in dynamically changing environments anchors well different aspects of this integration and allows us to look at the problem in its entirety. Models of reasoning, sensing and control actions of such agents determine three different dimensions for discretization of the agent-world behavioral state space. The design and modeling of robotic agents, where these three aspects have to be closely tied together, provide a good experimental platform for addressing the signal-to-symbol transformation problem. We present some experimental results from the domain of cooperating mobile agents involved in tasks of navigation and manipulation.
Gigabit per second (Gbps) bandwidths can now be delivered by networks to workstation hosts. This dissertation addresses the problem of delivering this bandwidth through the network subsystem of a host workstation to a user's application. A central focus is the identification and removal of performance ``bottlenecks.'' The two approaches which I have explored are the optimization of a single data path and the combination of multiple data paths between the network and applications running on the workstation.
Asynchronous Transfer Mode (ATM) Segmentation and Reassembly (SAR) was initially thought to be a bottleneck. I designed a scalable hardware architecture which is host-independent and able to perform SAR at rates of over 1 Gbps. I have implemented it in a 160 Mbps host interface for the IBM RISC System/6000 workstation and at 640 Mbps for HP 9000/700 workstations. It is now clear that relatively simple hardware can perform SAR at high speeds, and that the overall system bottlenecks lie elsewhere, such as in the host's memory, I/O subsystems, and operating system.
Bottlenecks can also be removed by using parallel data paths. These paths can be transparently combined to obtain a virtual resource that is indistinguishable from a single, higher performance data path. The techniques used are similar to the ``striping'' concept which has been applied to disk subsystems. I developed a precise terminology for describing striping within a networking context and performed a systematic analysis of striping at various layers in a networking protocol stack.
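As a toy illustration of the striping idea (not the dissertation's design; the path abstraction is invented), the sender below distributes packets round-robin over several parallel paths and tags each with a sequence number, and the receiver merges the sub-streams back into a single in-order stream so the bundle behaves like one higher-performance data path.

\begin{verbatim}
import itertools

def stripe(packets, paths):
    """Distribute a packet stream round-robin over parallel data paths,
    tagging each packet with a sequence number.  Each path is modeled
    here simply as a list that collects (seq, payload) pairs."""
    rr = itertools.cycle(paths)
    for seq, payload in enumerate(packets):
        next(rr).append((seq, payload))

def unstripe(paths):
    """Merge the striped sub-streams back into a single in-order stream,
    so the bundle of paths looks like one higher-bandwidth data path."""
    merged = [pkt for path in paths for pkt in path]
    return [payload for _, payload in sorted(merged)]
\end{verbatim}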
My results using these two approaches have advanced our understanding of architectural techniques for designing and implementing network subsystems. The impact of the results is already widespread and will further increase as ATM becomes a dominant network technology.
The OMEGA architecture is an end-point architecture for provision of end-to-end QoS guarantees. The architecture principles, design, high-level implementation issues, and telerobotics experimental setup for validation of OMEGA concepts are described in my thesis (MS-CIS-95-31).
In this document, I will concentrate on the structure of the software modules, their naming and content. The complete code is part of this document. There are approximately 10,000 lines of code.
In trauma, many injuries impact anatomical structures, which may in turn affect physiological processes - not only those processes within the structure, but also ones occurring in physical proximity to them. Our goal with this research is to model mechanical interactions of different body systems and their impingement on underlying physiological processes. We are particularly concerned with pathological situations in which body system functions that normally do not interact become dependent as a result of mechanical behavior. Towards that end, the proposed TRAUMAP system (Trauma Modeling of Anatomy and Physiology) consists of three modules: (1) a hypothesis generator for suggesting possible structural changes that result from the direct injuries sustained; (2) an information source for responding to operator querying about anatomical structures, physiological processes, and pathophysiological processes; and (3) a continuous system simulator for simulating and illustrating anatomical and physiological changes in three dimensions. Models that can capture such changes may serve as an infrastructure for more detailed modeling and benefit surgical planning, surgical training, and general medical education, enabling students to visualize better, in an interactive environment, certain basic anatomical and physiological dependencies.
High-speed cell switched networks and new computer peripherals supporting various media types suggest novel applications. In particular, these technologies enable the class of long-lived distributed multimedia applications with strict real-time requirements. Until now, a dedicated circuit was required to meet these requirements, as today's packet switched network and end-point technologies (e.g., workstation operating systems) cannot support this application class.
This thesis argues that a proper software architecture coupled with cell switched networks can approximate the behavior of a dedicated circuit. The use of software to control multiplexing offers the additional benefit of programmable service customization. This thesis proposes one architecture, and demonstrates its success experimentally.
My approach negotiates requirements and guarantees between the application and the network, including the operating system (OS), in a call set-up period prior to use of the network by the application. I establish customized virtual circuits and allocate resources appropriate to the application requirements and OS/network capabilities. These customizations can be renegotiated dynamically. To facilitate this resource management process I have developed a new paradigm, called ``QoS Brokerage,'' to support Quality of Service. This paradigm requires new services and protocols across all layers of the protocol stack, as well as rearchitecting the application/network interface.
I used an application-driven approach to define requirements for the experimental evaluation of our architecture and the brokerage process. This forced an examination of requirement representations and an investigation of the differences between application and network models for QoS, control flows, and guarantee provision in networks. The architecture addresses the important issues of locating essential functions and decision-making points in such a system. The architecture also provides services such as tuning, translation, admission, and QoS negotiation/renegotiation, which support the required functionalities. The transmission protocol stack is implemented as a set of generic services, embedded in our Real-Time Application Protocol (RTAP) and Real-Time Network Protocol (RTNP), which are tuned according to the deal achieved by the QoS Broker.
As a driving application I used a master/slave telerobotics application, for which I implemented the Omega communication system, with the functionalities, services, and protocols outlined above, over a dedicated 155 Mbps ATM LAN. This application employs media with diverse requirements and provides a good platform for testing how closely one can approximate a dedicated circuit and controller with workstation hosts and cell-switching. The results show that my software architecture can support this complex control system. It is the first such system to accomplish this goal for telerobotics over ATM. As a consequence of this implementation, I have identified the key barriers to extending these techniques to a larger domain.
In this thesis I study FORUM as a specification language. FORUM is a higher-order logic based on the logical connectives of Linear Logic. As an initial example, I demonstrate that FORUM is well suited for specifying concurrent computations by specifying the higher-order pi-calculus. Next, I focus on the problem of specifying programming languages with higher-order functions and imperative features such as assignable variables, exceptions, and first-class continuations. I provide a modular and declarative specification of an untyped programming language, UML, which contains the above-mentioned features. Further, I use the proof theory of FORUM to study program equivalence for the functional core of UML augmented with assignable variables. Using my compositional specifications in FORUM, I prove equivalence of programs that have been challenging for other specification languages. Finally, I study the operational semantics of DLX, a prototypical RISC machine. I specify the sequential and pipelined operational semantics of DLX with important optimizations such as call-forwarding and early branch resolution, and prove them to be equivalent. Furthermore, I study the problem of code equivalence via the FORUM specification and, in particular, analyze the problem of code rescheduling for DLX.
In this dissertation, we examine complete search algorithms for SAT, the satisfiability problem for propositional formulas in conjunctive normal form. SAT is NP-complete, easy to think about, and one of the most important computational problems in the field of Artificial Intelligence. From an empirical perspective, the central problem associated with these algorithms is to implement one that runs as quickly as possible on a wide range of hard SAT problems. This in turn requires identifying a set of useful techniques and programming guidelines. Another important problem is to identify the techniques that do not work well in practice, and to provide qualitative reasons for their poor performance whenever possible. This dissertation addresses all four of these problems.
Our thesis is that any efficient SAT search algorithm should perform only a few key techniques at each node of the search tree. Furthermore, any implementation of such an algorithm should perform these techniques in quadratic time total down any path of the search tree, and use only a linear amount of space. We have justified these claims by writing POSIT (for PrOpositional SatIsfiability Testbed), a SAT tester which runs more quickly across a wide range of hard SAT problems than any other SAT tester in the literature on comparable platforms. On a Sun SPARCStation 10 running SunOS 4.1.3 U1, POSIT can solve hard random 400-variable 3-SAT problems in about 2 hours on the average. In general, it can solve hard n-variable random 3-SAT problems with search trees of size O(2^{n/18.7}).
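For readers unfamiliar with the basic procedure, the sketch below is a bare-bones, textbook DPLL-style complete search with unit propagation; it is far simpler than POSIT and omits its pruning heuristics, branching rules, and data structures.

\begin{verbatim}
def dpll(clauses, assignment=None):
    """Bare-bones complete SAT search: unit propagation plus branching.

    clauses: list of lists of nonzero ints, -v meaning "not v" (DIMACS style).
    Returns a satisfying assignment (dict: variable -> bool) or None.
    """
    assignment = dict(assignment or {})

    def simplify(cls, lit):
        out = []
        for c in cls:
            if lit in c:
                continue                      # clause satisfied
            reduced = [l for l in c if l != -lit]
            if not reduced:
                return None                   # empty clause: conflict
            out.append(reduced)
        return out

    # Unit propagation: repeatedly assign literals forced by unit clauses.
    while True:
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        assignment[abs(lit)] = lit > 0
        clauses = simplify(clauses, lit)
        if clauses is None:
            return None

    if not clauses:
        return assignment                     # every clause is satisfied
    var = abs(clauses[0][0])                  # naive branching rule
    for lit in (var, -var):
        reduced = simplify(clauses, lit)
        if reduced is not None:
            result = dpll(reduced, {**assignment, abs(lit): lit > 0})
            if result is not None:
                return result
    return None
\end{verbatim}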
In addition to justifying these claims, this dissertation describes the most significant achievements of other researchers in this area, and discusses all of the widely known general techniques for speeding up SAT search algorithms. It should be useful to anyone interested in NP-complete problems or combinatorial optimization in general, and it should be particularly useful to researchers in either Artificial Intelligence or Operations Research.
VERSA is a tool for the automated analysis of resource-bound real-time systems using the Algebra of Communicating Shared Resources (ACSR). This document serves as an introduction to the tool for beginning users, and as a reference for process and command syntax, examples of usage, and tables of operators, built-in functions, and algebraic laws. Two detailed examples demonstrate the application of VERSA to canonical examples from the literature.
This version of the VERSA user's guide reflects the 95.09.10 version of the tool. The latest version of VERSA is available by anonymous ftp from ftp.cis.upenn.edu in directory pub/rtg, and through the World-Wide Web via the Penn Real-Time Group home page at http://www.cis.upenn.edu/~rtg/home.html.