Fwd: Mini-Workshop on Hypertree Decompositions
Mini-Symposium
(Sponsored by Wolfgang Pauli Institute, Austria)
Place: Seminar room 184-3 (KBS), Favoritenstraße 9-11 / 184-3, staircase
1, 3rd floor (when you leave the elevator, turn right)
Date: Wednesday, 29th June 2005, 10:30 - 12:30
Talks:
10:30 - 11:30
Data, Schema, Ontology and Logic Integration
Joseph Goguen, University of California at San Diego, USA
11:45 - 12:30
Computing Cores for Data Exchange: New Algorithms and Practical
Solutions
Georg Gottlob, Vienna University of Technology, Austria
Abstracts:
Data, Schema, Ontology and Logic Integration:
This paper gives a general definition of a "kind of schema" (often
called a "meta-model" in the literature, but here called a "species")
along with general definitions for the schemas of a species, and for the
databases, constraints, and queries over a given schema of a
species. This leads naturally to a general theory of data
translation and integration over arbitrary schemas of arbitrary species,
based on schema morphisms, and to a similar general theory of ontology
translation and integration over arbitrary logics. Institutions
provide a general notion of logic, and Grothendieck flattening provides a
general tool for integrating heterogeneous schema species and logics,
which in turn enables integration of theories, such as ontologies, over
different logics.
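To make the idea of translation along schema morphisms concrete, here is a minimal illustrative sketch (not from the paper; names and the dictionary-based encoding are assumptions): for purely relational schemas, a schema morphism maps each source relation symbol to a target relation symbol, and a database over the target schema translates contravariantly back to the source schema by taking reducts.

```python
def reduct(morphism, t_db):
    """Translate a database over the target schema back along a schema
    morphism (contravariant reduct): each source relation r is
    interpreted by the tuples of its image relation morphism[r].
    Schemas are encoded simply as relation names; databases as dicts
    mapping relation names to sets of tuples."""
    return {r: set(t_db.get(morphism[r], ())) for r in morphism}

# A morphism sending the source relation Person to the target
# relation Employee pulls Employee's tuples back as Person's tuples.
f = {"Person": "Employee"}
target_db = {"Employee": {("ann",), ("bob",)}}
source_db = reduct(f, target_db)
```

This direction reversal (schemas map forward, databases map backward) is the pattern the abstract's institution-based framework generalizes far beyond plain relational signatures.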
Computing Cores for Data Exchange: New Algorithms and Practical
Solutions:
Data Exchange is the problem of inserting data structured under a
source schema into a target schema of different structure (possibly with
integrity constraints), while reflecting the source data as accurately as
possible. We study computational issues related to data exchange in the
setting of Fagin, Kolaitis, and Popa (PODS '03). We use the technique of
hypertree decompositions to derive improved algorithms for computing the
core of a relational instance with labeled nulls, a problem we show to be
fixed-parameter intractable with respect to the block size of the input
instances. We show that computing the core of a data exchange problem is
tractable for two large and useful classes of target constraints. The
first class includes functional dependencies and weakly acyclic inclusion
dependencies. The second class consists of full tuple-generating
dependencies and arbitrary equality-generating dependencies. Finally, we
show that computing cores is NP-hard in the presence of a system
predicate NULL(x), which is true if x is a null value.
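The core of an instance with labeled nulls is its smallest endomorphic image: a minimal subinstance onto which the whole instance maps by a homomorphism that fixes constants. The following brute-force Python sketch illustrates the definition only; it is not the paper's algorithm (which exploits block structure to stay tractable), and the convention that nulls are strings beginning with an underscore is an assumption for this example.

```python
from itertools import product

def is_null(v):
    # Assumed convention: labeled nulls are strings starting with "_".
    return isinstance(v, str) and v.startswith("_")

def endomorphic_images(inst):
    """Yield images of endomorphisms of `inst`: value maps that fix
    constants, send each null somewhere in the domain, and map every
    tuple (relname, v1, v2, ...) onto a tuple already in `inst`."""
    dom = {v for t in inst for v in t[1:]}   # skip relation names
    nulls = sorted(v for v in dom if is_null(v))
    for img in product(sorted(dom), repeat=len(nulls)):
        h = dict(zip(nulls, img))
        mapped = {(t[0],) + tuple(h.get(v, v) for v in t[1:]) for t in inst}
        if mapped <= inst:                   # h is a homomorphism
            yield mapped

def core(inst):
    """Shrink the instance by proper endomorphic images to a fixpoint;
    the fixpoint is a retract of the input, i.e. its core."""
    inst = set(inst)
    shrunk = True
    while shrunk:
        shrunk = False
        for image in endomorphic_images(inst):
            if image < inst:                 # strictly smaller image
                inst, shrunk = image, True
                break
    return inst

# R(a, _n) is subsumed by R(a, b) via the homomorphism _n -> b,
# so the core keeps only the null-free tuple.
result = core({("R", "a", "_n"), ("R", "a", "b")})
```

The search is exponential in the number of nulls, which is exactly why a parameter such as the block size of the input instance (bounding how many nulls interact) matters for the algorithms discussed in the talk.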