In earlier work, we presented a process algebra, PACSR, that uses a notion of resource failure to capture probabilistic behavior in reactive systems. PACSR also supports an operator for resource hiding. In this paper, we carefully consider the interaction between these two features from an axiomatic perspective. For this purpose, we introduce a subset of PACSR, called "PACSR-lite," that allows us to isolate the semantic issues surrounding resource hiding in a probabilistic setting, and provide a sound and complete axiomatization of strong bisimulation for this fragment.
Cognitive modeling is a provocative new paradigm that paves the way towards intelligent graphical characters by providing them with logic and reasoning skills. Cognitively empowered self-animating characters will see widespread use in the near future in the interactive game, multimedia, virtual reality, and production animation industries. This review covers three recently published papers from the field of cognitive modeling for computer animation. The approaches and techniques employed are very different. The cognition model in the first paper is built on top of Soar, which is intended as a general cognitive architecture for developing systems that exhibit intelligent behavior. The second paper uses an active plan tree and a plan library to achieve fast and robust reactivity to environmental changes. The third paper, based on an AI formalism known as the situation calculus, develops a cognitive modeling language called CML and uses it to specify a behavior outline or "sketch plan" to direct the characters in terms of goals. Instead of presenting each paper in isolation and then comparatively analyzing them, we take a top-down approach by first classifying the field into three different categories and then placing each paper into the proper category. We hope that this provides a more cohesive, systematic view of the cognitive modeling approaches employed in computer animation.
The design principle of restricting local autonomy only where necessary for global robustness has led to a scalable Internet. Unfortunately, this scalability and capacity for distributed control have not been achieved in the mechanisms for specifying and enforcing security policies.
The STRONGMAN system described in this paper demonstrates three new approaches to providing efficient local policy enforcement complying with global security policies. First is the use of a compliance checker to provide great local autonomy within the constraints of a global security policy. Second is a mechanism to compose policy rules into a coherent enforceable set, e.g., at the boundaries of two locally autonomous application domains. Third is the "lazy instantiation" of policies to reduce the amount of state that enforcement points need to maintain.
We demonstrate the use of these approaches in the design, implementation and measurements of a distributed firewall.
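To make the third idea concrete, here is a minimal Python sketch of lazy instantiation, assuming a hypothetical compliance-checker interface (all names are illustrative, not STRONGMAN's actual API); enforcement-point state for a flow is created only when that flow first appears:

```python
# Hypothetical sketch of "lazy instantiation": an enforcement point only
# materializes a policy decision for a flow when that flow first appears,
# rather than pre-installing every rule from the global policy.
class LazyEnforcementPoint:
    def __init__(self, compliance_checker):
        self.check = compliance_checker   # stands in for a global-policy checker
        self.cache = {}                   # flow -> cached decision (lazy state)

    def allow(self, flow, credentials):
        if flow not in self.cache:
            # State is created only on first use of this flow.
            self.cache[flow] = self.check(flow, credentials)
        return self.cache[flow]

# Example: a trivial checker standing in for a real compliance engine.
ep = LazyEnforcementPoint(lambda flow, creds: "admin" in creds)
print(ep.allow(("10.0.0.1", 80), {"admin"}))  # True; decision is now cached
```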
In today's highly interconnected computing environments, users regularly use insecure software. Many popular applications, such as Netscape Navigator and Microsoft Word, are targeted by hostile applets or malicious documents, which can compromise the integrity of the system. Current operating systems are unable to protect their users from these kinds of attacks, since the hostile software runs with the user's privileges and permissions.
We introduce the notion of SubOS, a process-specific protection mechanism. Under SubOS, any application that might deal with incoming, possibly malicious objects behaves like an operating system: it views those objects the same way an operating system views users, assigning them sub-user ids and restricting their access to system resources.
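As a toy illustration of the idea (not the actual SubOS implementation, which operates inside the operating system), the following Python sketch assigns each incoming object a sub-user id and checks its accesses against a per-sub-user permission set; all names are hypothetical:

```python
import itertools

_ids = itertools.count(1000)
permissions = {}                       # sub-user id -> allowed resources

def admit(obj, allowed):
    """Assign a fresh sub-user id to an incoming object, like an OS
    creating a new restricted user account for it."""
    sid = next(_ids)
    permissions[sid] = set(allowed)
    return sid

def access(sid, resource):
    """Mediate every access the object makes on the system's behalf."""
    if resource not in permissions[sid]:
        raise PermissionError(f"sub-user {sid} may not touch {resource}")
    return f"{resource} opened for sub-user {sid}"

applet = admit("hostile_applet.class", allowed={"/tmp/scratch"})
print(access(applet, "/tmp/scratch"))      # permitted
# access(applet, "/home/user/.profile")    # would raise PermissionError
```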
This paper describes a method for automatic generation of tests from specifications written in Statecharts. These tests are to be applied to an implementation to validate the consistency of the implementation with respect to the specification. For test coverage, we adapt the notions of control-flow coverage and data-flow coverage used traditionally in software testing to Statecharts. In particular, we redefine these notions for Statecharts and formulate the test generation problem as finding a counterexample during the model checking of a Statecharts specification. The ability to generate a counterexample allows test generation to be automatic.
To illustrate our approach, we show how to translate Statecharts to SMV, after defining the semantics of Statecharts using Kripke structures. We then describe how to formulate various test coverage criteria in CTL, and show how the SMV model checker can be used to generate only executable tests.
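For concreteness, one common way to cast coverage as model checking is via "trap properties": assert in CTL that a coverage target is never exercised, so that any counterexample is precisely a test reaching it. A schematic version is below (the paper's exact encoding may differ):

```latex
% Schematic "trap property" for a coverage target, e.g. covering transition
% t = (s, e, s'): assert that t is never exercised. Any counterexample the
% model checker returns is an execution reaching s and taking t, i.e. a test.
\[
  \varphi_t \;=\; \mathit{AG}\,\neg\bigl(\mathit{state}=s \;\wedge\;
  \mathit{event}=e \;\wedge\; \mathit{EX}\,(\mathit{state}=s')\bigr)
\]
```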
This report focuses on two different aspects of modular robots: the enumeration of distinct configurations of a modular robot, and the generation of gaits for hybrid robots with wheels and legs. Given a particular set of modules, a robot formed from them can attain a number of different configurations based on the relative attachment of the modules. The distinct configurations possible are enumerated for a locomotion system consisting of a base with multiple ports where wheel or leg modules can be attached.
Given a particular configuration of the modular robot, we would like to generate a set of inputs that would drive the robot from an initial position to a desired position. The method used for this must be applicable to different kinds of modules that may be used for locomotion. The method presented here involves generating a set of constant inputs that will drive a drift-free system from an initial to a final desired position. Simulation results are generated for translation and rotation of the robot and motion along a Lie Bracket direction (sideways motion) for the hybrid mobile robot.
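As an illustration of motion along a Lie bracket direction, the following Python sketch uses the standard unicycle model (a stand-in for the actual module dynamics, which the paper treats in generality): a four-phase sequence of constant inputs produces a net sideways displacement of order eps squared, the first-order bracket effect.

```python
import numpy as np

# Unicycle: x' = v*cos(th), y' = v*sin(th), th' = w. Neither input actuates
# sideways motion directly; the Lie bracket of the two input vector fields
# points sideways, and the constant-input sequence (+v, +w, -v, -w) yields
# a net lateral displacement of order eps^2.
def flow(q, v, w, T, dt=1e-4):
    x, y, th = q
    for _ in range(int(T / dt)):
        x += v * np.cos(th) * dt
        y += v * np.sin(th) * dt
        th += w * dt
    return np.array([x, y, th])

eps = 0.1
q = np.zeros(3)
for v, w in [(eps, 0), (0, eps), (-eps, 0), (0, -eps)]:
    q = flow(q, v, w, 1.0)
print(q)  # small net displacement, mostly along the bracket direction (~ -eps^2 in y)
```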
We describe a general framework for controlling and coordinating a group of nonholonomic mobile robots equipped with range sensors, with applications ranging from scouting and reconnaissance, to search and rescue and manipulation tasks. We first describe a set of control laws that allows each robot to control its position and orientation with respect to neighboring robots or obstacles in the environment. We then develop a coordination protocol that allows the robots to automatically switch between the control laws to follow a specified trajectory. Finally, we describe two simple trajectory generators that are derived from potential field theory. The first allows each robot to plan its reference trajectory based on the information available to it. The second scheme requires sharing of information and results in a trajectory for the designated leader. Numerical simulations illustrate the application of these ideas and demonstrate the scalability of the proposed framework for a large group of robots.
In this paper we focus on systems whose goal is the distribution or exchange of resources such as software packages, scientific data, or multimedia files. To accomplish this goal efficiently, distributed systems find and retrieve resources using combinations of techniques such as brokering, proxying, caching, publish/subscribe, advertising, chaining, referral, recruiting, etc. The resulting architectures are complex and scale with difficulty.
We propose two ideas that combine to make such systems much more scalable. One is to encapsulate queries into processes and to use process migration as the basic primitive for cooperation between sites. With this, a generic installer can run at each site, implementing any and all of the techniques mentioned above. The other idea is to describe resource access as "delayed" queries embedded in metadata. Standard distributed database techniques can be applied to the metadata with the additional side-effect of spawning appropriate query processes.
We describe a distributed query language that can be obtained by adding our process manipulation primitives to any query language. We discuss installation strategies and we present an algorithm for the site-generic installer on which such a language is based. We also show how to apply query rewriting techniques that allow the installer to perform optimizations. Moreover, we give algorithms that ensure that the global amount of installation associated with a given resource retrieval task is bounded.
In this paper we present a new algorithm for structure from motion from point correspondences in images taken with uncalibrated catadioptric cameras with parabolic mirrors. We assume that there are three unknown intrinsic parameters: the combined focal length of the mirror and lens, and the intersection of the optical axis with the image plane. We introduce a new representation for images of points and lines in catadioptric images which we call the circle space. This circle space includes imaginary circles, one of which is the image of the absolute conic. We formulate the epipolar constraint in this space and establish a new 4x4 catadioptric fundamental matrix. We show that the image of the absolute conic belongs to the kernel of this matrix. This enables us to prove that Euclidean reconstruction is feasible from two views with constant parameters and from three views with varying parameters. In both cases, this is one view fewer than is necessary with perspective cameras.
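Schematically, and with a lifting convention that may differ in detail from the paper's, the constraint has the following form:

```latex
% Sketch: an image point q = (x, y) is lifted into the four-dimensional
% circle space, e.g. as \hat{q} = (x^2 + y^2, x, y, 1)^T, and the epipolar
% geometry of two parabolic catadioptric views becomes a bilinear constraint
\[
  \hat{q}_2^{\top} \, F \, \hat{q}_1 \;=\; 0,
\]
% with F the 4x4 catadioptric fundamental matrix; the lifted image of the
% absolute conic lies in the kernel of F.
```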
In this paper we describe a scheme for localizing teams of three or more robots using the imagery obtained from onboard omnidirectional cameras. We derive a method for recovering the relative position and orientation of the robots in three dimensions up to a scale factor solely from image measurements. We demonstrate this method with a team of three robots, and show results from localization experiments. We also discuss preliminary results obtained by applying this technique to a map building task.
Large-scale distributed applications, such as electronic commerce and online marketplaces (e.g., auction services), combine network access with multiple storage and computational elements. The distributed responsibility for resource control creates new security and privacy issues, caused by the complexity of the operating environment. In particular, policies at multiple layers and locations force conventional mechanisms such as firewalls and compartmented file storage into roles where they are clumsy and failure-prone.
We propose a new approach, virtual private services. Our approach relies on two functional divisions. First, we split policy specification and policy enforcement, providing local autonomy within the constraints of the global security policy. Second, we create virtual security domains, each with its own security policy. Every domain has an associated set of privileges and permissions restricting it to the resources it needs to use and the services it must perform. Virtual private services ensure that security and privacy policies are adhered to by coordinating policy enforcement points.
Our prototype implementation under OpenBSD demonstrates low performance overhead on a variety of latency- and throughput-oriented micro- and macro-benchmarks.
Moiré interferometry is a common tool when depth estimation from a single image frame is desired. It uses the interference of periodic patterns to measure the topology of a given surface. The actual depth estimation relies on consistent phase estimation from the interferogram. We show that this task can be simplified by estimating the image phase and the local image orientation simultaneously, followed by a processing step, introduced in this paper, called orientation unwrapping. We compare two methods for extracting image phase and orientation and present experimental results.
We have previously proposed chase and backchase as a novel method for using materialized views and integrity constraints in query optimization. In this paper, we show that the method is usable in realistic optimizers by extending it to bag and mixed (i.e., bag-set) semantics as well as to grouping views, and by showing how to integrate it with standard cost-based optimization. We understand materialized views broadly, including user-defined views, cached queries and physical access structures (such as join indexes, access support relations, and gmaps). Moreover, our internal query representation supports object features, hence the method applies to OQL and (extended) SQL:1999 queries. Chase and backchase supports a very general class of integrity constraints, and is thus able to find execution plans using views that do not fall within the scope of other methods. In fact, we prove completeness theorems which show that our method will find the best plan in the presence of common and practically important classes of constraints and views, even when bag and set semantics are mixed. We report on a series of experiments that demonstrate the practicality of our new ideas.
The Internet embodies a new paradigm of distributed open computer networks, in which participants--users and software agents--are not cooperative, but self-interested, with private information and goals. A fundamental challenge in these systems is to design mechanisms that: (a) compute optimal system-wide solutions to distributed problems despite the self-interest of individual participants, and (b) place reasonable computational demands on participants and on the network infrastructure. In my dissertation I propose iBundle, an iterative auction for the combinatorial allocation problem (CAP), in which agents demand bundles of items. Computationally, iBundle is important because it solves problems without agents revealing, or even computing, complete information about their value for all possible bundles. I take a two-stage approach to design. First, I assume that agents follow a myopic best-response strategy and prove that iBundle implements a primal-dual algorithm for the CAP. Second, I extend iBundle for a few more rounds and use primal-dual information to adjust prices to Vickrey payments after termination, which makes myopic best-response a sequentially rational strategy for agents. Experimental results confirm the theoretical properties of the extended iBundle algorithm, and also demonstrate that simple approximation techniques within iBundle can reduce computation by orders of magnitude with little reduction in allocative efficiency.
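The following Python sketch is a highly simplified rendition of the iBundle price dynamics under myopic best response (toy values, crude tie-breaking, and brute-force winner determination); it is meant only to convey the structure of an ascending bundle auction, not iBundle's exact rules:

```python
import itertools

# Toy setup: per-agent bundle values; eps is the minimal price increment.
values = {
    1: {frozenset("A"): 6, frozenset("AB"): 7},
    2: {frozenset("B"): 6, frozenset("AB"): 7},
    3: {frozenset("AB"): 10},
}
eps = 1.0
prices = {a: {b: 0.0 for b in values[a]} for a in values}  # ascending prices

while True:
    # Myopic best response: each agent demands a utility-maximizing bundle.
    bids = {}
    for a in values:
        u, best = max((values[a][b] - prices[a][b], b) for b in values[a])
        if u >= 0:
            bids[a] = best

    def feasible(agents):
        got = [bids[a] for a in agents]
        return sum(len(b) for b in got) == len(frozenset().union(*got)) if got else True

    # Revenue-maximizing provisional allocation (brute force over agent subsets).
    alloc = max((s for r in range(len(bids) + 1)
                 for s in itertools.combinations(bids, r) if feasible(s)),
                key=lambda s: sum(prices[a][bids[a]] for a in s))
    losers = set(bids) - set(alloc)
    if not losers:
        break            # every active bidder is allocated: terminate
    for a in losers:     # losing agents face higher prices next round
        prices[a][bids[a]] += eps

print({a: sorted(bids[a]) for a in alloc})
print({a: prices[a][bids[a]] for a in alloc})
```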
In this paper, we consider the potential for reasoning about animations in the language of hybrid dynamical systems (i.e., systems with both continuous and discrete dynamics). We begin by directly applying hybrid systems theory to animation, using a general-purpose hybrid system specification tool to generate multi-agent animations; this application also illustrates that hybrid system models can provide systematic modular ways to incorporate low-level behavior into a design for higher-level behavioral modeling. We then apply the logical framework of hybrid systems to animation: we formally state properties of animation systems that may not be readily expressed in other frameworks; and we mechanically check a collision-avoidance property for a simple race-like game. This hybrid systems-oriented approach could improve our ability to reason about virtual worlds, thus improving our ability to create intelligent virtual agents.
The notion of bisimulation in theoretical computer science is one of the main complexity reduction methods for the analysis and design of labeled transition systems. Bisimulations are special quotients of the state space that preserve many important properties, and, in particular, reachability. In this paper, the framework of bisimilar transition systems is applied to various transition systems that are generated by linear control systems. Given a discrete-time or continuous-time linear system, and a finite or linear observation map, we characterize linear quotient maps that result in quotient transition systems that are bisimilar to the original system. Interestingly, the characterizations for discrete-time systems are more restrictive than for continuous-time systems, due to the existence of an atomic time step. We show that computing the coarsest bisimulations, which results in maximum complexity reduction, corresponds to computing the maximal controlled or reachability invariant subspace inside the kernel of the observation map. These results establish strong connections between complexity reduction concepts in control theory and computer science.
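The subspace computation mentioned at the end is classical; a minimal numpy/scipy sketch of the invariant-subspace algorithm (ISA) for the maximal (A,B)-controlled invariant subspace inside ker C, with an illustrative example system, might look like this:

```python
import numpy as np
from scipy.linalg import null_space

# ISA: V_0 = ker C,  V_{k+1} = ker C  intersect  A^{-1}(V_k + im B),
# iterated to a fixed point. The abstract relates this subspace to the
# coarsest bisimulation. Matrices below are illustrative examples.
def maximal_controlled_invariant(A, B, C, tol=1e-10):
    V = null_space(C)                                   # V_0 = ker C
    while True:
        W = np.hstack([V, B])                           # spans V_k + im B
        P = np.eye(A.shape[0]) - W @ np.linalg.pinv(W)  # projector onto its
                                                        # orthogonal complement
        # x in A^{-1}(V_k + im B)  iff  P A x = 0; intersect with ker C:
        Vn = null_space(np.vstack([C, P @ A]), rcond=tol)
        if Vn.shape[1] == V.shape[1]:
            return Vn                                   # fixed point reached
        V = Vn

A = np.array([[0., 1., 0.], [0., 0., 1.], [0., 0., 0.]])  # integrator chain
B = np.array([[0.], [0.], [1.]])
C = np.array([[0., 0., 1.]])
V = maximal_controlled_invariant(A, B, C)
print(V.shape[1], "dimensional; C V ~ 0:", np.allclose(C @ V, 0))
```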
We are interested in the theoretical foundations of the optimization of conjunctive regular path queries (CRPQs). The basic problem here is deciding query containment both in the absence and presence of constraints. Containment without constraints for CRPQs is EXPSPACE-complete, as opposed to only NP-complete for relational conjunctive queries. Our past experience with implementing similar algorithms suggests that staying in PSPACE might still be useful. Therefore we investigate the complexity of containment for a hierarchy of fragments of the CRPQ language. The classifying principle of the fragments is the expressivity of the regular path expressions allowed in the query atoms. For most of these fragments, we give matching lower and upper bounds for containment in the absence of constraints. We also introduce for every fragment a naturally corresponding class of constraints in whose presence we show both decidability and undecidability results for containment in various fragments. Finally, we apply our results to give a complete algorithm for rewriting with views in the presence of constraints for a fragment that contains Kleene-star and disjunction.
XPath is a W3C standard that plays a crucial role in several influential query, transformation, and schema standards for XML. Motivated by the larger challenge of XML query optimization, we investigate the problem of containment of XPath expressions under integrity constraints. We study two fragments of XPath expressions called regular, respectively wildcard XPath and a class XIC of XML integrity constraints based on regular XPath expressions. XICs can express many database-style constraints, including key and foreign key constraints specified in the XML Schema standard proposal, as well as many of the constraints implied by DTDs. We identify the subclass of one-step XICs under which containment of both regular and wildcard XPath expressions is decidable and give tight complexity bounds for the containment problem. We also show that even modest additions of XICs that are not one-step make the same problem undecidable. In particular, the addition of some simple (but non-one-step) constraints implied by many common DTDs leads to undecidability. Containment of regular XPath expressions in the presence of DTDs alone remains open, as does containment of fully-fledged XPath expressions, even in the absence of integrity constraints.
The automated detection of humans in computer vision, as well as the realistic rendering of people in computer graphics, necessitates improved modeling of human skin color. In this paper we describe the acquisition and modeling of skin reflectance data densely sampled over the entire visible spectrum. The data collected through a spectrograph allow us to explain skin color (and its variations) and to discriminate between human skin and dyes designed to mimic human skin color. We study the approximation of these data using several sets of basis functions. Our study shows that skin reflectance data can best be approximated by a linear combination of Gaussians or their first derivatives. This result has a significant practical impact on optical acquisition devices: the entire visible spectrum of skin reflectance can now be captured with a few filters of optimally chosen central wavelengths and bandwidths.
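As a sketch of the approximation procedure (with illustrative wavelengths and bandwidths, and a synthetic spectrum standing in for measured data), fitting a linear combination of Gaussians reduces to linear least squares:

```python
import numpy as np

# Fit a spectrum as a linear combination of Gaussian basis functions.
wl = np.linspace(400, 700, 301)                    # visible range, nm
centers = np.linspace(420, 680, 5)                 # chosen central wavelengths
sigma = 40.0                                       # chosen bandwidth, nm
G = np.exp(-0.5 * ((wl[:, None] - centers[None, :]) / sigma) ** 2)

# Stand-in "measured" reflectance spectrum (synthetic, for illustration).
spectrum = 0.3 + 0.4 * np.exp(-0.5 * ((wl - 600) / 60.0) ** 2)

coeffs, *_ = np.linalg.lstsq(G, spectrum, rcond=None)
approx = G @ coeffs
print("max abs error:", np.abs(approx - spectrum).max())
```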
Sharing of files is a major application of computer networks, with examples ranging from LAN-based network file systems to wide-area applications such as use of version control systems in distributed software development. Identification, authentication and access control are much more challenging in this complex large-scale distributed environment. In this paper, we introduce the Distributed Credential Filesystem (DisCFS). Under DisCFS, credentials are used to identify both the files stored in the file system and the users that are permitted to access them, as well as the circumstances under which such access is allowed. As with traditional capabilities, users can delegate access rights (and thus share information) simply by issuing new credentials. Credentials allow files to be accessed by remote users that are not known a priori to the server. Our design achieves an elegant separation of policy and mechanism which is mirrored in the implementation. Our prototype implementation of DisCFS runs under OpenBSD 2.8, using a modified user-level NFS server. Our measurements suggest that flexible and secure file sharing can be made scalable at a surprisingly low performance cost.
Humans use gestures in most communicative acts. How are these gestures initiated and performed? What kinds of communicative roles do they play and what kinds of meanings do they convey? How do listeners extract and understand these meanings? Will it be possible to build computerized communicating agents that can extract and understand the meanings and accordingly simulate and display expressive gestures on the computer in such a way that they can be effective conversational partners? All these questions are easy to ask, but far more difficult to answer. In this thesis we try to address these questions regarding the synthesis and acquisition of communicative gestures.
Our approach to gesture is based on the principles of movement observation science, specifically Laban Movement Analysis (LMA) and its Effort and Shape components. LMA, developed in the dance community over the past seventy years, is an effective method for observing, describing, notating, and interpreting human movement to enhance communication and expression in everyday and professional life. Its Effort and Shape components provide us with a comprehensive and valuable set of parameters to characterize gesture formation. The computational model (the EMOTE system) we have built offers the power and flexibility to procedurally synthesize gestures based on predefined key pose and time information plus Effort and Shape qualities.
To provide real quantitative foundations for a complete communicative gesture model, we have built a computational framework in which the observable characteristics of gestures (not only key pose and timing but also the underlying motion qualities) can be extracted from live performance, either from 3D motion capture data or from 2D video data, and correlated with observations validated by LMA notators. Experiments of this sort have not been conducted before and should be of interest to the computer animation and computer vision communities; they also provide a powerful and valuable methodological tool for creating personalized, communicating agents.
The two-way aspect model is a latent class statistical mixture model for performing soft clustering of co-occurrence data observations. It acts on data such as document/word pairs (words occurring in documents) or movie/person pairs (people seeing certain movies) to produce an estimate of their joint distribution. This document describes our software implementation of the aspect model, available under the GNU Public License (included with the distribution). We call this package PennAspect. The distribution is packaged as Java source and class files. The software comes with no guarantees of any kind. We welcome user feedback and comments. To download PennAspect, visit: http://www.cis.upenn.edu/datamining/software_distPennAspect/index.html.
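For readers who want the model in code form, here is a minimal numpy EM sketch of the two-way aspect model P(d,w) = sum_z P(z)P(d|z)P(w|z) on a small synthetic count matrix; PennAspect's Java implementation differs in scale and details:

```python
import numpy as np

# EM for the two-way aspect model on a document-word count matrix n(d, w).
rng = np.random.default_rng(0)
n = rng.integers(0, 5, size=(6, 8)).astype(float)    # toy co-occurrence counts
D, W, Z = n.shape[0], n.shape[1], 2                  # 2 latent classes

pz = np.full(Z, 1.0 / Z)                             # P(z)
pd_z = rng.dirichlet(np.ones(D), Z)                  # P(d|z), shape (Z, D)
pw_z = rng.dirichlet(np.ones(W), Z)                  # P(w|z), shape (Z, W)

for _ in range(100):
    # E-step: responsibilities P(z | d, w), shape (Z, D, W)
    joint = pz[:, None, None] * pd_z[:, :, None] * pw_z[:, None, :]
    post = joint / joint.sum(axis=0, keepdims=True)
    # M-step: re-estimate parameters from expected counts
    ez = post * n[None, :, :]
    pz = ez.sum(axis=(1, 2)) / n.sum()
    pd_z = ez.sum(axis=2) / ez.sum(axis=(1, 2))[:, None]
    pw_z = ez.sum(axis=1) / ez.sum(axis=(1, 2))[:, None]

loglik = (n * np.log((pz[:, None, None] * pd_z[:, :, None]
                      * pw_z[:, None, :]).sum(axis=0))).sum()
print("log-likelihood:", loglik)
```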
In this paper we present an "active", or programmable, packet system, SNAP (Safe and Nimble Active Packets) that can be used to provide scalable distributed network management. We begin by arguing that mobile agent approaches have asymptotic scalability advantages over centralized polling or extensible stationary agent approaches. Unfortunately, most existing mobile agent platforms are too heavyweight; either they consume too many resources to be practical or are simply too slow to provide accurate real-time management.
SNAP's active packets, on the other hand, are lightweight enough to serve as practical mobile agents in these scenarios. We briefly describe SNAP and its in-kernel implementation, and describe how SNAP can be used for detecting distributed denial-of-service (DDoS) attacks. Our example SNAP program requires only 20 instructions and can be carried within a single Ethernet frame. Finally, we present performance data that confirm SNAP's lightweight implementation by comparing SNAP with simple SNMP and ICMP ECHO (ping) applications. These comparisons show that SNAP can offer the flexibility advantages of mobile agents with the light weight of more conventional approaches.
Monitoring the global state of a network is a continuing challenge for network operators and users. It has become still harder with increases in scale and heterogeneity. Monitoring requires collecting status information from each node and constructing the global picture at a monitoring point. GNOSIS, the Global Network Operations Status Information System, achieves a global view by careful extraction and presentation of locally available node data. The GNOSIS model improves on the traditional polling model of monitoring schemes by (1) collecting accurate data, (2) decreasing the granularity with which network applications can detect change in the network, and (3) displaying status information in near real-time.
We define the Network Snapshot as the basic unit of information capture and display in GNOSIS. A Network Snapshot is a visualization of locally available state collected during a common time interval. A sequence of these Network Snapshots over time represents the evolution of network state.
In this paper, we motivate the need for a network monitoring system that can detect global problems in spite of both scale and heterogeneity. We present three design criteria for a global monitoring system: Accuracy, Continuity, and Timeliness. Finally, we present the GNOSIS architecture and demonstrate how it better detects network problems that are currently of concern. The goal of GNOSIS is to present a stream of consistent, accurate local data in a timely manner.
Network monitoring is a vital part of modern network infrastructure management. Existing techniques either present a restricted view of network behavior and state, or do not efficiently scale to higher network speeds and heavier monitoring workloads. Considering these shortcomings we present a novel architecture for programmable packet-level network monitoring. Our approach allows users to customize the monitoring function at the lowest possible level of abstraction to suit a wide range of monitoring needs: we use operating system mechanisms that result in a programming environment providing a high degree of flexibility, retaining fine-grained control over security, and minimizing the associated performance overheads. We present the implementation of this architecture as well as a set of experimental applications.
Memory coherence is a commonly accepted correctness criterion for distributed shared-memory computing platforms. Coherence is formulated assuming a static architecture in which all processors can communicate with one another. In this paper, we argue that the classical notion is not appropriate for ad-hoc networks consisting of mobile devices with constantly changing communication topology. We introduce and formalize a new correctness criterion, called group coherence, as a suitable abstract specification for shared-memory computing architectures over ad-hoc networks. We show that two existing systems, the Coda file system and Lampson's global naming scheme, satisfy our definition. Finally, we propose a timestamp-based extension of the popular Snoopy cache coherence protocol for caching in ad-hoc networks, and show it to be group coherent.
The vast majority of information exchanged over public telecommunication networks has been voice. The traditional public telephony and ISDN networks utilize circuit-switched networks. Circuit switching establishes a dedicated path (circuit) between the sender and receiver and provides a contention-free environment with fixed bandwidth and controlled latency. However, the capacity of a given circuit is not shared with other users, thereby reducing the system's overall efficiency.
In contrast, a packet-switched network (such as the Internet) sends data through a network by splitting them into packets that are routed independently over shared network links. This is a suitable environment for non-interactive data, wherein the efficiency of a shared network is more desirable than guarantees of bandwidth, latency, and jitter. Unfortunately, packet networks in general are ill suited for carrying voice packets; attempting to do so without proper engineering and design usually leads to poor voice quality.
The goal of Voice over Internet Protocol (VoIP) is to provide the efficiency of a packet-switched network while rivaling the quality of a circuit-switched network. This paper will describe VoIP technology and its four key components: signaling, encoding, transport, and gateway control. In addition, it will focus on audio coders/decoders and their impact on voice quality. A discussion of wireless networks and their effect on VoIP will also be presented, preceding a description of the elements of voice quality.
Many proprietary technologies for VoIP are available, and it is expected that several of these protocols will be adopted as standards as the technology matures, perhaps forming a single standard that is an amalgamation of current schemes. The Internet will also be widely used for facsimile calls and videoconferencing as standards evolve; however, these topics are outside the scope of this paper. Also outside the scope of this paper is transporting voice over other protocols, such as ATM.
This document is intended to introduce the key elements of the Nuprl Proof Development System (Nuprl, for short) from the perspective of a Nuprl user, as opposed to the perspective of someone intimately involved in developing or extending Nuprl. As such, it may be more appropriate than other Nuprl-related documents for readers who are primarily concerned with uses of Nuprl and not fine details of Nuprl's mathematical foundation. It introduces and illustrates key Nuprl concepts (such as types, terms, display forms, and tactics) in the framework of a model of calculational predicate logic inference.
In this paper, we address the problem of finding the minimal number of viewpoints outside a polyhedron in two or three dimensions such that every point on the exterior of the polyhedron is visible from at least one of the chosen viewpoints. This problem, which we call the minimum fortress guard problem (MFGP), is the optimization version of a variant of the art-gallery problem (sometimes called the fortress problem with point guards) and has practical importance in surveillance and image-based rendering. Solutions in the vision and graphics literature are based on image quality constraints and are not concerned with the number of viewpoints needed. The corresponding question for art galleries (the minimum number of viewpoints in the interior of a polygon needed to see the interior of the polygon), which we call the minimum art-gallery guard problem (MAGP), has been shown to be NP-complete. A simple reduction from this problem shows the NP-completeness of MFGP. Instead of relying on heuristic searches, we address the approximability of the camera placement problem. It is well known (and easy to see) that this problem can be cast as a hitting set problem. While the approximability of generic instances of the hitting set problem is well understood, Bronnimann and Goodrich presented improved approximation algorithms for the problem in the case that the input instances have bounded Vapnik-Chervonenkis (VC) dimension.
In this paper we explore the VC-dimension of set systems associated with the camera placement problem described above. We show a constant bound on the VC dimension in the two-dimensional case, but a tight logarithmic bound in the three-dimensional case. In the two-dimensional case we are also able to present an algorithm that uses at most one more viewpoint than the optimal solution when the viewpoints are restricted to lie on a circumscribing circle, a restriction that is justified in practice.
Recent advances in computing have enabled fast reconstructions of dynamic scenes from multiple images. However, the efficient coding of changing 3D data has hardly been addressed. Progressive geometric compression and streaming are based on static data sets which are mostly artificial or obtained from accurate range sensors. In this paper, we present a system for efficient coding of 3D data given in the form of 2 1/2-D disparity maps. Disparity maps are spatially coded using wavelets and temporally predicted by computing optical flow. The resulting representation of a 3D stream then consists of spatial wavelet coefficients, optical flow vectors, and disparity differences between the predicted and incoming images. The approach also has very useful by-products: disparity predictions can significantly reduce the disparity search range and, if appropriately modeled, increase the accuracy of depth estimation.
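A toy Python sketch of the coding pipeline (with a synthetic disparity map, an assumed-known flow in place of real flow estimation, and crude coefficient thresholding in place of real quantization) illustrates the predict-then-wavelet-code structure:

```python
import numpy as np
import pywt  # PyWavelets

# Predict the incoming disparity map by warping the previous one with the
# flow, then wavelet-code the residual. Flow estimation and entropy coding
# are omitted; all data below are synthetic.
rng = np.random.default_rng(2)
prev = np.cumsum(rng.normal(size=(64, 64)), axis=1)  # stand-in disparity map
curr = np.roll(prev, 2, axis=1) + 0.05 * rng.normal(size=(64, 64))

pred = np.roll(prev, 2, axis=1)                      # temporal prediction
residual = curr - pred                               # what remains to be sent

coeffs = pywt.wavedec2(residual, "haar", level=3)    # spatial wavelet coding
arr, slices = pywt.coeffs_to_array(coeffs)
arr[np.abs(arr) < 0.02] = 0                          # crude "quantization"
rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                    "haar")
print("nonzero coefficients:", np.count_nonzero(arr), "of", arr.size)
print("max reconstruction error:", np.abs(rec - residual).max())
```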
In this paper we present a new scheme for the representation of object surfaces. The purpose is to model a surface efficiently in a coarse-to-fine hierarchy. Our scheme is based on the combination of spherical harmonic functions and wavelet networks on the sphere. The coefficients can be estimated from scattered data sampled from a star-shaped object's surface. Spherical harmonic functions are used to model the coarse structure of the surface, while spherical Gabor wavelets are used for the representation of fine-scale detail. Theoretical background on wavelets on the sphere is provided, as well as a discussion of implementation issues concerning convolutions on the sphere. Results are presented which show the efficiency of the proposed representation.
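A minimal Python sketch of the coarse stage only (fitting spherical harmonic coefficients to scattered radius samples of a synthetic star-shaped surface by least squares; the spherical wavelet detail stage is omitted):

```python
import numpy as np
from scipy.special import sph_harm

# Least-squares fit of real spherical harmonic coefficients to scattered
# samples r(theta, phi) of a star-shaped surface. Data are synthetic.
rng = np.random.default_rng(1)
theta = np.arccos(rng.uniform(-1, 1, 500))       # polar angle in [0, pi]
phi = rng.uniform(0, 2 * np.pi, 500)             # azimuth in [0, 2*pi)
r = 1.0 + 0.3 * np.cos(2 * theta)                # synthetic star-shaped radius

L = 4                                            # maximum degree (coarse model)
cols = []
for l in range(L + 1):
    for m in range(-l, l + 1):
        Y = sph_harm(m, l, phi, theta)           # scipy order: (m, l, az, polar)
        cols.append(Y.real if m >= 0 else Y.imag)  # simple real-valued basis
A = np.stack(cols, axis=1)
coeffs, *_ = np.linalg.lstsq(A, r, rcond=None)
print("residual RMS:", np.sqrt(np.mean((A @ coeffs - r) ** 2)))
```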
Data defined on spherical domains occur in various applications, such as surface modeling, omnidirectional imaging, and the analysis of keypoints in volumetric data. The theory of spherical signals lacks important concepts like the Gaussian function, which is ubiquitous in planar image processing. We propose a definition of a spherical Gaussian function as the Green's function of the spherical diffusion process. This allows us to introduce a linear scale space on the sphere. We apply this new filter to the smoothing of 3D object surfaces.
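Concretely, the Green's function of the diffusion (heat) equation on the sphere has a closed spherical harmonic expansion, and applying the diffusion semigroup attenuates degree-l coefficients exponentially:

```latex
% The spherical Gaussian as the heat kernel of diffusion on S^2: a zonal
% function with Legendre expansion; the diffusion semigroup e^{t\Delta}
% scales each degree-l spherical harmonic coefficient by exp(-l(l+1)t).
\[
  G_t(\theta) \;=\; \sum_{l=0}^{\infty} \frac{2l+1}{4\pi}\, e^{-l(l+1)t}\, P_l(\cos\theta),
  \qquad
  \widehat{\bigl(e^{t\Delta} f\bigr)}_{lm} \;=\; e^{-l(l+1)t}\, \hat{f}_{lm}.
\]
```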