Datalog is a declarative logic programming language. While it is syntactically a subset of Prolog, Datalog generally uses a bottom-up rather than top-down evaluation model. This difference yields significantly different behavior and properties from Prolog. It is often used as a query language for deductive databases. Datalog has been applied to problems in data integration, computer networking, program analysis, and more.
Example
A Datalog program consists of facts, which are statements that are held to be true, and rules, which say how to deduce new facts from known facts. For example, here are two facts that mean xerces is a parent of brooke and brooke is a parent of damocles:
parent(xerces, brooke).
parent(brooke, damocles).
The names are written in lowercase because strings beginning with an uppercase letter stand for variables. Here are two rules:
ancestor(X, Y) :- parent(X, Y).
ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).
The :- symbol is read as "if", and the comma is read as "and", so these rules mean:
- X is an ancestor of Y if X is a parent of Y.
- X is an ancestor of Y if X is a parent of some Z, and Z is an ancestor of Y.
The meaning of a program is defined to be the set of all of the facts that can be deduced using the initial facts and the rules. This program's meaning is given by the following facts:
parent(xerces, brooke).
parent(brooke, damocles).
ancestor(xerces, brooke).
ancestor(brooke, damocles).
ancestor(xerces, damocles).
Some Datalog implementations don't deduce all possible facts, but instead answer queries:
?- ancestor(xerces, X).
This query asks: Who are all the X that xerces is an ancestor of? For this example, it would return brooke and damocles.
Comparison to relational databases
The non-recursive subset of Datalog is closely related to query languages for relational databases, such as SQL. More formally, non-recursive Datalog corresponds precisely to unions of conjunctive queries, or equivalently, negation-free relational algebra. For example, the following Datalog program and the SQL schema and view below it express the same query:
s(x, y).
t(y).
r(A, B) :- s(A, B), t(B).
CREATE TABLE s (
  z0 TEXT NOT NULL,
  z1 TEXT NOT NULL,
  PRIMARY KEY (z0, z1)
);
CREATE TABLE t (
  z0 TEXT NOT NULL PRIMARY KEY
);
INSERT INTO s VALUES ('x', 'y');
INSERT INTO t VALUES ('y');
CREATE VIEW r AS
SELECT s.z0, s.z1
FROM s, t
WHERE s.z1 = t.z0;
Syntax
A Datalog program consists of a list of rules (Horn clauses). If constant and variable are two countable sets of constants and variables respectively, and relation is a countable set of predicate symbols, then the following BNF grammar expresses the structure of a Datalog program:
<program> ::= <rule> <program> | ""
<rule> ::= <atom> ":-" <atom-list> "."
<atom> ::= <relation> "(" <term-list> ")"
<atom-list> ::= <atom> | <atom> "," <atom-list> | ""
<term> ::= <constant> | <variable>
<term-list> ::= <term> | <term> "," <term-list> | ""
Atoms are also referred to as literals. The atom to the left of the :- symbol is called the head of the rule; the atoms to the right are the body. Every Datalog program must satisfy the condition that every variable that appears in the head of a rule also appears in the body (this condition is sometimes called the range restriction or safety condition).
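For instance, the following rule (a hypothetical illustration, not part of the examples above) violates the range restriction, because the head variable Y does not appear in the body:
bad(X, Y) :- s(X).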
There are two common conventions for variable names: capitalizing variables, or prefixing them with a question mark ?.
Note that under this definition, Datalog does not include negation or aggregates; see the section on extensions below for more information about those constructs.
Rules with empty bodies are called facts. For example, the following rule is a fact:
r(x) :- .
The set of facts is called the extensional database or EDB of the Datalog program. The set of tuples computed by evaluating the Datalog program is called the intensional database or IDB. In the ancestor example above, the parent facts form the EDB and the derived ancestor tuples form the IDB.
Syntactic sugar
Many implementations of logic programming extend the above grammar to allow writing facts without the :-, like so:
r(x).
Some also allow writing 0-ary relations without parentheses, like so:
p :- q.
These are merely abbreviations (syntactic sugar); they have no impact on the semantics of the program.
Semantics
There are three widely used approaches to the semantics of Datalog programs: model-theoretic, fixed-point, and proof-theoretic. These three approaches can be proven equivalent.
An atom is called ground if none of its subterms are variables. Intuitively, each of the semantics define the meaning of a program to be the set of all ground atoms that can be deduced from the rules of the program, starting from the facts.
Model theoretic
A rule is called ground if all of its atoms (head and body) are ground. A ground rule R1 is a ground instance of another rule R2 if R1 is the result of a substitution of constants for all the variables in R2. The Herbrand base of a Datalog program is the set of all ground atoms that can be made with the constants appearing in the program. The minimal Herbrand model of a Datalog program is the smallest subset of the Herbrand base such that, for each ground instance of each rule in the program, if the atoms in the body of the rule are in the set, then so is the head. The model-theoretic semantics define the minimal Herbrand model to be the meaning of the program.
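For example, substituting constants from the ancestor program above into its first rule yields ground instances such as:
ancestor(xerces, brooke) :- parent(xerces, brooke).
ancestor(brooke, xerces) :- parent(brooke, xerces).
The body of the first instance holds in the minimal Herbrand model, so the model must also contain its head ancestor(xerces, brooke); the body of the second does not hold, so that instance imposes no requirement.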
Fixed-point
The immediate consequence operator for a Datalog program P is the map on subsets of the Herbrand base of P that adds all of the new ground atoms that can be derived from the rules of the program in a single step. The least-fixed-point semantics define the least fixed point of this operator to be the meaning of the program; this coincides with the minimal Herbrand model.
The fixpoint semantics suggest an algorithm for computing the minimal model: Start with the set of ground facts in the program, then repeatedly add consequences of the rules until a fixpoint is reached. This algorithm is called naïve evaluation.
Proof-theoretic
[[Image:Proof tree for Datalog transitive closure computation.svg|thumb|Proof tree showing the derivation of the ground atom path(x, z) from the program
edge(x, y).
edge(y, z).
path(A, B) :-
edge(A, B).
path(A, C) :-
path(A, B),
edge(B, C).
]]
The proof-theoretic semantics defines the meaning of a Datalog program to be the set of facts with corresponding proof trees. Intuitively, a proof tree shows how to derive a fact from the facts and rules of a program.
One might be interested in knowing whether or not a particular ground atom appears in the minimal Herbrand model of a Datalog program, perhaps without caring much about the rest of the model. A top-down reading of the proof trees described above suggests an algorithm for computing the results of such queries. This reading informs the SLD resolution algorithm, which forms the basis for the evaluation of Prolog.
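For example, the query ?- ancestor(xerces, damocles). against the ancestor program above can be answered top-down roughly as follows (a sketch of an SLD-style derivation):
?- ancestor(xerces, damocles).
?- parent(xerces, Z), ancestor(Z, damocles).    (by the second rule)
?- ancestor(brooke, damocles).                  (Z = brooke, by the fact parent(xerces, brooke))
?- parent(brooke, damocles).                    (by the first rule)
The last goal matches the fact parent(brooke, damocles), so the query succeeds.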
Evaluation
There are many different ways to evaluate a Datalog program, with different performance characteristics.
Bottom-up evaluation strategies
Bottom-up evaluation strategies start with the facts in the program and repeatedly apply the rules until either some goal or query is established, or until the complete minimal model of the program is produced.
Naïve evaluation
Naïve evaluation mirrors the fixpoint semantics for Datalog programs. Naïve evaluation uses a set of "known facts", which is initialized to the facts in the program. It proceeds by repeatedly enumerating all ground instances of each rule in the program. If each atom in the body of the ground instance is in the set of known facts, then the head atom is added to the set of known facts. This process is repeated until a fixed point is reached, and no more facts may be deduced. Naïve evaluation produces the entire minimal model of the program.
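For example, naïve evaluation of the ancestor program above proceeds as follows (a sketch; iteration 0 shows the initial facts, and each later line shows the facts newly added in that iteration):
Iteration 0: parent(xerces, brooke), parent(brooke, damocles)
Iteration 1: ancestor(xerces, brooke), ancestor(brooke, damocles)
Iteration 2: ancestor(xerces, damocles)
Iteration 3: no new facts; the fixed point has been reached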
Semi-naïve evaluation
Semi-naïve evaluation is a bottom-up evaluation strategy that can be asymptotically faster than naïve evaluation. Rather than re-evaluating every rule against all known facts in each iteration, it only considers rule instances whose bodies use at least one fact that was newly derived in the previous iteration, avoiding much redundant rederivation.
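This is commonly realized by rewriting recursive rules to join against "delta" relations containing only the facts derived in the previous iteration. A sketch for the recursive ancestor rule (the Δ prefix is expository notation, not part of Datalog syntax):
Δancestor(X, Y) :- parent(X, Z), Δancestor(Z, Y).
In each iteration, the rule is evaluated only against the Δancestor facts produced by the previous iteration; any results already known are discarded, and the remaining new facts become the next Δancestor.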
Performance considerations
Naïve and semi-naïve evaluation both evaluate recursive Datalog rules by repeatedly applying them to a set of known facts until a fixed point is reached. In each iteration, rules are only run for "one step", i.e., non-recursively. As mentioned above, each non-recursive Datalog rule corresponds precisely to a conjunctive query. Therefore, many of the techniques from
database theory used to speed up conjunctive queries are applicable to bottom-up evaluation of Datalog, such as
- Index selection
- Query optimization, especially join order
- Selection of data structures used to store relations; common choices include hash tables and B-trees, while other possibilities include disjoint set data structures (for storing equivalence relations), bries (a variant of tries), binary decision diagrams, and even SMT formulas
Many such techniques are implemented in modern bottom-up Datalog engines such as Soufflé. Some Datalog engines integrate SQL databases directly.
Bottom-up evaluation of Datalog is also amenable to parallelization. Parallel Datalog engines are generally divided into two paradigms:
- In the shared-memory, multi-core setting, Datalog engines execute on a single node. Coordination between threads may be achieved using locking or lock-free data structures. The shared-memory setting may be further divided into single instruction, multiple data (SIMD) and multiple instruction, multiple data (MIMD) paradigms:
  - Datalog engines that execute on graphics processing units fall into the SIMD paradigm.
  - Datalog engines using OpenMP are instances of the MIMD paradigm.
- In the shared-nothing setting, Datalog engines execute on a cluster of nodes. Such engines generally operate by splitting relations into disjoint subsets based on a hash function, performing computations (joins) on each node, and then exchanging newly generated tuples over the network. Examples include Datalog engines based on MPI, Apache Hadoop, and Apache Spark.
Top-down evaluation strategies
SLD resolution is sound and complete for Datalog programs.
Magic sets
Top-down evaluation strategies begin with a query or goal. Bottom-up evaluation strategies can answer queries by computing the entire minimal model and matching the query against it, but this can be inefficient if the answer only depends on a small subset of the entire model. The magic sets algorithm takes a Datalog program and a query, and produces a more efficient program that computes the same answer to the query while still using bottom-up evaluation.
A variant of the magic sets algorithm has been shown to produce programs that, when evaluated using semi-naïve evaluation, are as efficient as top-down evaluation.
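As an illustrative sketch (one common form of the rewriting, applied to the ancestor program and the query ?- ancestor(xerces, X); the relation name magic_ancestor is chosen here for exposition):
magic_ancestor(xerces).
magic_ancestor(Z) :- magic_ancestor(X), parent(X, Z).
ancestor(X, Y) :- magic_ancestor(X), parent(X, Y).
ancestor(X, Y) :- magic_ancestor(X), parent(X, Z), ancestor(Z, Y).
The magic_ancestor relation collects the constants that can flow into the first argument of ancestor from the query, so bottom-up evaluation only derives ancestor facts whose first argument is relevant to the query.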
Complexity
The decision problem formulation of Datalog evaluation is as follows: Given a Datalog program P, split into a set of facts (the EDB) E and a set of rules R, and a ground atom A, is A in the minimal model of P? In this formulation, there are three variations of the computational complexity of evaluating Datalog programs:
- The data complexity is the complexity of the decision problem when E and A are inputs and R is fixed.
- The program complexity is the complexity of the decision problem when R and A are inputs and E is fixed.
- The combined complexity is the complexity of the decision problem when E, R, and A are inputs.
With respect to data complexity, the decision problem for Datalog is P-complete (see Theorem 4.4 in ). P-completeness for data complexity means that there exists a fixed Datalog query for which evaluation is P-complete. The proof is based on a Datalog metainterpreter for propositional logic programs.
With respect to program complexity, the decision problem is EXPTIME-complete. In particular, evaluating Datalog programs always terminates; Datalog is not Turing-complete.
Some extensions to Datalog do not preserve these complexity bounds. Extensions implemented in some Datalog engines, such as algebraic data types, can even make the resulting language Turing-complete.
Extensions
Several extensions have been made to Datalog, e.g., to support negation, aggregate functions, inequalities, to allow object-oriented programming, or to allow disjunctions as heads of clauses. These extensions have significant impacts on the language's semantics and on the implementation of a corresponding interpreter.
Datalog is a syntactic subset of Prolog, disjunctive Datalog, answer set programming, DatalogZ, and constraint logic programming. When evaluated as an answer set program, a Datalog program yields a single answer set, which is exactly its minimal model.
Many implementations of Datalog extend Datalog with additional features; see the list of Datalog engines below for more information.
Aggregation
Datalog can be extended to support aggregate functions.
Several notable Datalog engines implement aggregation, with syntax and semantics that vary between systems.
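For example, in the Soufflé dialect an aggregate may appear as an equality in a rule body. The following is a sketch only: the relation total_parents is hypothetical, and Soufflé's required .decl declarations are omitted.
total_parents(N) :- N = count : { parent(_, _) }.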
Negation
Adding negation to Datalog complicates its semantics, leading to whole new languages and strategies for evaluation. For example, the language that results from adding negation with the stable model semantics is exactly answer set programming.
Stratified negation can be added to Datalog while retaining its model-theoretic and fixed-point semantics (an example of a stratified program appears after the following list). Notable Datalog engines that implement stratified negation include:
- LogicBlox
- Soufflé
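For example, the following stratified program (a sketch; the relations node, edge, path, and unreachable are illustrative, and negation is written here with !, as in Soufflé) computes the pairs of nodes with no path between them. The negated subgoal is safe because path can be fully computed before unreachable:
path(X, Y) :- edge(X, Y).
path(X, Y) :- path(X, Z), edge(Z, Y).
unreachable(X, Y) :- node(X), node(Y), !path(X, Y).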
Comparison to Prolog
Unlike in Prolog, statements of a Datalog program can be stated in any order. Datalog does not have Prolog's cut operator. This makes Datalog a fully declarative language.
In contrast to Prolog, Datalog
- disallows complex terms as arguments of predicates, e.g., p(1, 2) is admissible but not p(f(1), 2),
- disallows negation,
- requires that every variable that appears in the head of a clause also appear in a literal in the body of the clause.
This article deals primarily with Datalog without negation (see also the section on negation above). However, stratified negation is a common addition to Datalog; the following list contrasts Prolog with Datalog with stratified negation. Datalog with stratified negation
- also disallows complex terms as arguments of predicates,
- requires that every variable that appears in the head of a clause also appear in a positive (i.e., not negated) atom in the body of the clause,
- requires that every variable appearing in a negative literal in the body of a clause also appear in some positive literal in the body of the clause.
Expressiveness
Datalog generalizes many other query languages. For instance, conjunctive queries and union of conjunctive queries can be expressed in Datalog. Datalog can also express regular path queries.
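For instance, a regular path query asking for pairs of nodes connected by a path whose edge labels match the regular expression a*b can be written as the following program (an illustrative sketch; the relations a, b, and q are hypothetical):
q(X, Y) :- b(X, Y).
q(X, Y) :- a(X, Z), q(Z, Y).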
When we consider ordered databases, i.e., databases with an order relation on their active domain, the Immerman–Vardi theorem implies that the expressive power of Datalog is precisely that of the class PTIME: a property can be expressed in Datalog if and only if it is computable in polynomial time.
The boundedness problem for Datalog asks, given a Datalog program, whether it is bounded, i.e., whether the maximal recursion depth reached when evaluating the program on an input database can be bounded by some constant. In other words, this question asks whether the Datalog program could be rewritten as a nonrecursive Datalog program, or, equivalently, as a union of conjunctive queries. Solving the boundedness problem on arbitrary Datalog programs is undecidable, but it can be made decidable by restricting to some fragments of Datalog.
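For example, the following program is syntactically recursive but bounded (an illustrative sketch with hypothetical relations likes, trendy, and buys): its fixpoint is reached after at most two iterations on any input database, so it can be rewritten without recursion.
buys(X, Y) :- likes(X, Y).
buys(X, Y) :- trendy(X), buys(Z, Y).
It is equivalent to the nonrecursive program obtained by replacing the recursive subgoal buys(Z, Y) with likes(Z, Y).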
Datalog engines
Systems that implement languages inspired by Datalog, whether compilers, interpreters, libraries, or embedded domain-specific languages, are referred to as Datalog engines. Datalog engines often implement extensions of Datalog, extending it with additional data types, foreign function interfaces, or support for user-defined lattices. Such extensions may allow for writing non-terminating or otherwise ill-defined programs.
Here is a short list of systems that are either based on Datalog or provide a Datalog interpreter:
Free software/open source
List of Datalog engines that are free software and/or open source (columns: Name, Year of latest release, Written in, Licence, Data sources, Description, Links).
Non-free software
- FoundationDB provides a free-of-charge database binding for pyDatalog, with a tutorial on its use.
- Leapsight Semantic Dataspace (LSD) is a distributed deductive database that offers high availability, fault tolerance, operational simplicity, and scalability. LSD uses Leaplog (a Datalog implementation) for querying and reasoning and was created by Leapsight.
- LogicBlox, a commercial implementation of Datalog used for web-based retail planning and insurance applications.
- Profium Sense is a native RDF compliant graph database written in Java. It provides Datalog evaluation support of user-defined rules.
- .QL, a commercial object-oriented variant of Datalog created by Semmle for analyzing source code to detect security vulnerabilities.
- SecPAL, a security policy language developed by Microsoft Research.
- Stardog is a graph database, implemented in Java. It provides support for RDF and all OWL 2 profiles, providing extensive reasoning capabilities, including Datalog evaluation.
- StrixDB: a commercial RDF graph store, SPARQL compliant with Lua API and Datalog inference capabilities. It can be used as an httpd (Apache HTTP Server) module or standalone (although beta versions are under the Perl Artistic License 2.0).
Uses and influence
Datalog is quite limited in its expressivity. It is not Turing-complete, and doesn't include basic data types such as integers or strings. This parsimony is appealing from a theoretical standpoint, but it means Datalog per se is rarely used as a programming language or knowledge representation language. [Lifschitz, Vladimir. "Foundations of logic programming." Principles of knowledge representation 3 (1996): 69-127: "The expressive possibilities of Datalog are much too limited for meaningful applications to knowledge representation."] Most Datalog engines implement substantial extensions of Datalog. However, Datalog has a strong influence on such implementations, and many authors don't bother to distinguish them from Datalog as presented in this article. Accordingly, the applications discussed in this section include applications of realistic implementations of Datalog-based languages.
Datalog has been applied to problems in data integration, information extraction, computer networking, security, cloud computing and machine learning. Google has developed an extension to Datalog for big data processing.
Datalog has seen application in static program analysis. The Soufflé dialect has been used to write pointer analyses for Java and a control-flow analysis for Scheme. Datalog has been integrated with SMT solvers to make it easier to write certain static analyses. The Flix dialect is also suited to writing static program analyses.
Some widely used database systems include ideas and algorithms developed for Datalog. For example, the SQL:1999 standard includes recursive queries, and the Magic Sets algorithm (initially developed for the faster evaluation of Datalog queries) is implemented in IBM's DB2.
History
The origins of Datalog date back to the beginning of logic programming, but it became prominent as a separate area around 1977 when Hervé Gallaire and Jack Minker organized a workshop on logic and databases. David Maier is credited with coining the term Datalog.
See also
- Answer set programming
- Conjunctive query
- DatalogZ
- Disjunctive Datalog
- Flix
- SWRL
- Tuple-generating dependency (TGD), a language for integrity constraints on relational databases with a similar syntax to Datalog
Notes