Type Inference in a Database Programming Language*

Atsushi Ohori    Peter Buneman
Department of Computer and Information Science/D2, University of Pennsylvania, Philadelphia, PA 19104-6389

∗ Appeared in Proceedings of ACM Conference on LISP and Functional Programming, Utah, 1988, pages 174–183. This research was supported in part by grants NSF IRI8610617, NSF MCS 8219196-CER, ARO DAA6-29-84-k-0061, and by funding from AT&T's Telecommunications Program at the University of Pennsylvania and from OKI Electric Industry Co., Japan. Revised September 1988.

Abstract


We extend an ML-like implicit type system to include a number of structures and operations that are common in database programming including sets, labeled records, joins and projections. We then show that the type inference problem of the system is decidable by extending the notion of principal type schemes to include conditions on substitutions. Combined with Milner’s polymorphic let constructor, our language also supports type inheritance.

1 Introduction

A database can be regarded as a collection of descriptions of real-world objects. One way to represent such descriptions in programming languages is to use labeled records found, for example, in Standard ML [HMM86]. A database is then represented as a set of records. In practice both the records and the set are very large and contain a great deal of redundancy, a problem that is solved by first projecting descriptions onto various small partial descriptions and then representing a database as a collection of sets of these partial descriptions. Larger descriptions are then obtained by joining these partial descriptions when needed. A join and a projection are therefore essential for database programming. Wand observed [Wan87] the importance of extending a record with some new information and introduced an operation with on records: e with l := e′ adds a new field l if e does not contain an l field, otherwise it obliterates the existing information. In database programming, we prefer to have a join that combines information so that, for example, the join of [ Name = [ Firstname = 'John' ] ] and [ Name = [ Lastname = 'Doe' ] ] yields [ Name = [ Firstname = 'John', Lastname = 'Doe' ] ]. If two descriptions are inconsistent then the join of the two yields an exception.

In this paper we will show that an ML-like type system can be properly extended to include labeled records, join, projection and sets without sacrificing its useful property of having a solvable type inference problem. We first extend the notion of type schemes to conditional type schemes that include conditions on substitutions, and then develop an algorithm to construct a principal conditional type scheme for any typable term, which establishes the decidability of the type inference problem. It should be noted that this extension is necessary to infer correct typings for terms even if we only extend the language with field selectors. For example, the term λx.(x.Name), which extracts the Name field from a record, does not have a conventional principal type scheme. The problem is seen in Standard ML [HMM86], which cannot find a type for functions whose arguments are partial matches for records, such as fun f{Name = x, ...} = x. In our system, this term is given a type scheme t1 → t2 with the condition [ Name : t2 ] ∈ t1. A substitution instance θ(t1 → t2) is a type of the term if θ has the property that θ(t1) contains the field Name : θ(t2). As observed by Cardelli [Car84] and Wand [Wan87], in order to support type inheritance we need to capture the polymorphic nature of such field selectors. This extension is therefore also needed to support type inheritance in an ML-like implicit type system.

Ideas of representing typings with constraints can also be found in [Sta88, Mit84]. Wand proposed [Wan87] a type inference algorithm for a type system containing labeled records based on the unification method. However, as we shall see, his algorithm cannot infer all possible types for some terms. Cardelli and Wegner proposed [CW85] the notion of bounded universal quantification to capture the polymorphic nature of field selectors. However, as we shall see, bounded quantification is too general for this purpose and their type system exhibits an anomaly: type information is lost by applying a function containing a field selector. The approach described here, which is based upon techniques in [Wan87], has the advantage that it not only solves these problems but also allows us to infer types of terms containing sets, joins and projections in a uniform way.

2 Database types and database terms

Before giving a formal definition of our language, we informally show how sets, join and projection are introduced in the language. Since sets, join and projection require decidable equality on terms, they cannot be introduced on arbitrary terms. For this reason, we identify a subset of terms, the database terms, on which sets, join and projection are defined. Database terms are terms that are constructed from constants (on which we assume computable equality) by record and set constructors. In addition to the usual constants of base types, we include a special constant nullι for each base type ι to represent the undefined or null value, which is often needed in database programming. The following is an example of a database term containing sets:

  r1 = {[ Pname = 'Nut', Supplier = {[ Sname = 'Smith', City = 'London' ], [ Sname = 'Blake', City = 'Paris' ]} ],
        [ Pname = 'Bolt', Supplier = {[ Sname = 'Blake', City = 'Paris' ], [ Sname = 'Adams', City = 'Athens' ]} ]}

Types of database terms are database types and are also constructed from base types by record and set type constructors. For example, r1 is given the type:

  τ1 = {[ Pname : string, Supplier : {[ Sname : string, City : string ]} ]}

We can think of database terms as partial descriptions that are ordered with respect to their information content. By defining a partial order on database terms that represents this ordering, we can generalize join and projection to work over arbitrary database terms. To get such an ordering, we first define the pre-order ≼ by

  nullι ≼ cι          for all atomic values c of base type ι
  c ≼ c               for all atomic values c
  [ R1 ] ≼ [ R2 ]     if for each (l = e) ∈ R1 there is (l = e′) ∈ R2 such that e ≼ e′
  S1 ≼ S2             if ∀s ∈ S2. ∃s′ ∈ S1. s′ ≼ s

The rule for sets is defined to capture the properties of sets in database programming. Readers are referred to [BJO91] for the importance of this ordering for sets in database programming. ≼ fails to be anti-symmetric because of this rule. However, we can use the induced equivalence relation {(a, b) | a ≼ b and b ≼ a} to define a partial order and an equality rule for database terms. For each equivalence class, we can define a canonical representative satisfying the property that if it contains a set expression S then there are no distinct s, s′ ∈ S such that s ≼ s′. We then regard ≼ as the partial order on canonical representatives and define the join as the least upper bound operator of this partial order. The equality on database terms is the above equivalence relation, and it is readily computable. As an example of the join of two set terms,

  r2 = {[ Pname = 'Nut', Supplier = {[ City = 'Paris' ]}, Qty = 100 ],
        [ Pname = 'Bolt', Supplier = {[ City = 'Paris' ]}, Qty = 200 ]}

  r1 ⊔≼ r2 = {[ Pname = 'Nut', Supplier = {[ Sname = 'Blake', City = 'Paris' ]}, Qty = 100 ],
              [ Pname = 'Bolt', Supplier = {[ Sname = 'Blake', City = 'Paris' ]}, Qty = 200 ]}

The relevance of this operation to database programming is shown by the fact [BJO91] that if r1 and r2 are relations in the relational data model then r1 ⊔≼ r2 is the natural join r1 ⋈ r2. Because of this connection, we use ⋈ as the syntax for this least upper bound operation.
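To make the least upper bound construction concrete, the following is a minimal OCaml sketch of the information ordering and join on database terms. It is only an illustration under simplifying assumptions that are our own, not the paper's definitions: atomic values are untyped strings, null is not tagged with its base type, and sets are kept as lists whose canonical form is recomputed after each join.

(* A sketch of the pre-order and the join (generalized natural join) on database terms. *)
type dbterm =
  | Null                                   (* null, for atomic positions only *)
  | Atom of string                         (* an atomic constant with decidable equality *)
  | Rec of (string * dbterm) list          (* labeled record, labels assumed distinct *)
  | Set of dbterm list                     (* database set *)

exception Inconsistent                     (* raised when two descriptions have no upper bound *)

(* leq d1 d2: d2 is at least as informative as d1 *)
let rec leq d1 d2 =
  match d1, d2 with
  | Null, (Null | Atom _) -> true
  | Atom a, Atom b -> a = b
  | Rec r1, Rec r2 ->
      List.for_all
        (fun (l, e1) ->
           match List.assoc_opt l r2 with Some e2 -> leq e1 e2 | None -> false)
        r1
  | Set s1, Set s2 ->
      (* S1 ≼ S2 iff every element of S2 refines some element of S1 *)
      List.for_all (fun s -> List.exists (fun s' -> leq s' s) s1) s2
  | _ -> false

(* join d1 d2: least upper bound; raises Inconsistent when none exists *)
let rec join d1 d2 =
  match d1, d2 with
  | Null, (Null | Atom _ as d) | (Atom _ as d), Null -> d
  | Atom a, Atom b -> if a = b then Atom a else raise Inconsistent
  | Rec r1, Rec r2 ->
      let joined =
        List.map
          (fun (l, e1) ->
             match List.assoc_opt l r2 with
             | Some e2 -> (l, join e1 e2)     (* shared field: join the contents *)
             | None -> (l, e1))               (* field only in r1: keep it *)
          r1 in
      let only2 = List.filter (fun (l, _) -> not (List.mem_assoc l r1)) r2 in
      Rec (joined @ only2)
  | Set s1, Set s2 ->
      (* pairwise joins of consistent elements, then drop dominated elements *)
      let pairs =
        List.concat_map
          (fun a -> List.filter_map (fun b -> try Some (join a b) with Inconsistent -> None) s2)
          s1 in
      Set (List.filter
             (fun d -> not (List.exists (fun d' -> leq d d' && not (leq d' d)) pairs))
             pairs)
  | _ -> raise Inconsistent

On representations of the relations r1 and r2 above, join pairs up the mutually consistent tuples and discards inconsistent combinations, which is exactly how the natural join behaves in the example.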

Along with joins of expressions, we also define joins of types as the least upper bound with respect to the ordering ≤ on their structures, defined by:

  ι ≤ ι               for all base types ι
  [ T1 ] ≤ [ T2 ]     if for each (l : τ) ∈ T1 there is (l : τ′) ∈ T2 such that τ ≤ τ′
  {τ} ≤ {τ′}          if τ ≤ τ′
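The ordering and the corresponding least upper bound on types admit the same kind of sketch as the one given above for terms. The representation and the names dbtype, subtype, type_join and NoJoin below are our own, not part of the paper.

(* A sketch of the structural ordering ≤ on database types and of its least upper bound. *)
type dbtype =
  | Base of string                         (* a base type such as "int" or "string" *)
  | RecT of (string * dbtype) list         (* record type, fields as an association list *)
  | SetT of dbtype                         (* set type *)

exception NoJoin

(* subtype t1 t2: every field required by t1 is present (and refined) in t2 *)
let rec subtype t1 t2 =
  match t1, t2 with
  | Base b1, Base b2 -> b1 = b2
  | RecT f1, RecT f2 ->
      List.for_all
        (fun (l, t) ->
           match List.assoc_opt l f2 with Some t' -> subtype t t' | None -> false)
        f1
  | SetT t, SetT t' -> subtype t t'
  | _ -> false

(* type_join t1 t2: the least upper bound; raises NoJoin when the types are incompatible *)
let rec type_join t1 t2 =
  match t1, t2 with
  | Base b1, Base b2 -> if b1 = b2 then Base b1 else raise NoJoin
  | RecT f1, RecT f2 ->
      let labels = List.sort_uniq compare (List.map fst f1 @ List.map fst f2) in
      RecT (List.map
              (fun l ->
                 match List.assoc_opt l f1, List.assoc_opt l f2 with
                 | Some a, Some b -> (l, type_join a b)
                 | Some a, None -> (l, a)
                 | None, Some b -> (l, b)
                 | None, None -> assert false)
              labels)
  | SetT a, SetT b -> SetT (type_join a b)
  | _ -> raise NoJoin

Applied to representations of the types τ1 and τ2 used in the examples, type_join returns the record type with the union of their fields, joining the shared Supplier field, which is what the example of a join of two types below shows.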

The following is an example of the join of two types.

  τ2 = {[ Pname : string, Supplier : {[ City : string ]}, Qty : int ]}

  τ1 ⊔≤ τ2 = {[ Pname : string, Supplier : {[ Sname : string, City : string ]}, Qty : int ]}

Note the relationship between joins of expressions and joins of types: if e3 = e1 ⋈ e2 and e1 : τ1, e2 : τ2, then e3 : τ1 ⊔≤ τ2. This property allows us to statically type-check expressions containing joins. Projections are also generalized to projections on types. πτ is a mapping from any expression of type τ′ such that τ ≤ τ′ to expressions of type τ. The following is an example of a projection:

  π{[ Pname : string, Supplier : {[ Sname : string ]} ]}(r1) = {[ Pname = 'Nut', Supplier = {[ Sname = 'Smith' ], [ Sname = 'Blake' ]} ],
                                                                [ Pname = 'Bolt', Supplier = {[ Sname = 'Blake' ], [ Sname = 'Adams' ]} ]}

3 Definition of the language

At this point, for convenience, we shall alter the syntax for records that we used in the previous section and adopt a common syntax for labeled structures, whether they are terms or types. Let L be a fixed set of labels (ranged over by l). For some language A, the set of A-records is the language defined by the following abstract syntax:

  ρA ::= ε | ρA(l : a)

where ε is the empty string, a stands for terms in A, and the l in ρA(l : a) does not appear in ρA. Records are unordered collections of labeled fields and the following equality holds for A-records:

  (li1 : ai1) · · · (lin : ain) = (lj1 : aj1) · · · (ljn : ajn)    (1)

where (lj1 : aj1) · · · (ljn : ajn) is a permutation of (li1 : ai1) · · · (lin : ain). Instead of treating this equality explicitly, we assume a linear order < on L and that any A-record is implicitly converted to a canonical representative (l1 : a1) · · · (ln : an) satisfying l1 < · · · < ln. An A-record ρ can be regarded as a function from a finite subset of L to A. We write dom(ρ) for the domain of ρ and ρ(l) for the value of ρ at l ∈ dom(ρ).

3.1 Types

The set of types Type (ranged over by τ) of the language is defined by the following abstract syntax:

  τ ::= ι | [ ρτ ] | {τ} | τ → τ

where ι denotes a set of base types BaseType, ρτ denotes Type-records and { } is the set type constructor. The set of database types Dtype (ranged over by δ) is the set of types that do not contain the function type constructor (→):

  δ ::= ι | [ ρδ ] | {δ}

where ρδ denotes Dtype-records. The ordering relation ≤ defined in the previous section is a partial order on Dtype. This ordering is based on the inclusion relation on fields in record types, and we note the following properties, which we shall need to construct the type inference algorithm:

(1) ≤ has the bounded join property, i.e. for any subset A ⊆ Dtype, if there is some b ∈ Dtype such that ∀a ∈ A. a ≤ b, then the least upper bound ⊔≤ A exists.

(2) There is a polynomial-time algorithm to test whether δ1 ≤ δ2.

(3) There is a polynomial-time algorithm, taking δ1, δ2 ∈ Dtype, that yields δ1 ⊔≤ δ2 if it exists and reports failure otherwise.

In what follows, we regard ≤ as a relation on Type.

3.2 Terms

Let Var be a fixed set of variables (ranged over by x) and Const be a given set of typed constants (ranged over by cτ). The set of terms Term of the language (ranged over by e) is then defined by the following abstract syntax:

  e ::= x | nullι | cτ | [ re ] | e.l | {e, . . . , e} | e ∪ e | e \ e | e ⋈ e | πτ(e) | λx.e | e∗ | (e e) | (e)

where re are Term-records, .l, ∪ and \ are the field selection, union and difference operators respectively, and ∗ is the extension of a function to sets.
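As a concrete illustration of the two grammars above, the following OCaml datatypes sketch one possible abstract syntax for Type and Term. The constructor names are our own; a database type is any ty value not containing TyFun.

(* A sketch of abstract syntax trees for Type (section 3.1) and Term (section 3.2). *)
type ty =
  | TyBase of string                       (* base type ι *)
  | TyRec of (string * ty) list            (* [ ρτ ] *)
  | TySet of ty                            (* {τ} *)
  | TyFun of ty * ty                       (* τ → τ *)

type term =
  | Var of string                          (* x *)
  | Null of string                         (* nullι, tagged with its base type *)
  | Const of string * ty                   (* typed constant cτ *)
  | Record of (string * term) list         (* [ l1 = e1, ..., ln = en ] *)
  | Dot of term * string                   (* e.l, field selection *)
  | SetLit of term list                    (* {e1, ..., en} *)
  | Union of term * term                   (* e ∪ e *)
  | Diff of term * term                    (* e \ e *)
  | Join of term * term                    (* e ⋈ e *)
  | Project of ty * term                   (* πτ(e) *)
  | Lam of string * term                   (* λx.e *)
  | Ext of term                            (* e∗, extension of a function to sets *)
  | App of term * term                     (* (e1 e2) *)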

3.3 Typing rules for terms

We want to allow only those terms that have a typing under some type assignment to their free variables. A type assignment A is a function from a finite subset X of Var to Type. We write A, {x : τ} for the function A′ such that dom(A′) = dom(A) ∪ {x}, A′(x) = τ and A′(y) = A(y) for y ∈ dom(A), y ≠ x. A typing is then defined as a triple (A, e, τ), written as A ⊢ e : τ, meaning that the term e has type τ under the assignment A. For any term e, A ⊢ e : τ is a typing iff it is derivable in the following proof system, usually called typing rules:

Axioms

  (var)     A ⊢ x : τ           if A(x) = τ
  (const)   A ⊢ cτ : τ
  ([ ])     A ⊢ [ ] : [ ]
  ({})      A ⊢ {} : {δ}        for any δ

Inference Rules

  (record)  A ⊢ [ r ] : [ ρ ]   A ⊢ e : τ   ⟹   A ⊢ [ r(l : e) ] : [ ρ(l : τ) ]
  (dot)     A ⊢ e : [ ρ ]   l ∈ dom(ρ)   ⟹   A ⊢ e.l : ρ(l)
  (set)     A ⊢ {e1, . . . , en} : {δ}   A ⊢ e : δ   ⟹   A ⊢ {e1, . . . , en, e} : {δ}
  (union)   A ⊢ e1 : {δ}   A ⊢ e2 : {δ}   ⟹   A ⊢ e1 ∪ e2 : {δ}
  (dif)     A ⊢ e1 : {δ}   A ⊢ e2 : {δ}   ⟹   A ⊢ e1 \ e2 : {δ}
  (join)    A ⊢ e1 : δ1   A ⊢ e2 : δ2   δ3 = δ1 ⊔≤ δ2   ⟹   A ⊢ e1 ⋈ e2 : δ3
  (π)       A ⊢ e1 : δ1   δ2 ≤ δ1   ⟹   A ⊢ πδ2(e1) : δ2
  (abs)     A, {x : τ1} ⊢ e : τ2   ⟹   A ⊢ λx.e : τ1 → τ2
  (ext)     A ⊢ e : δ1 → δ2   ⟹   A ⊢ e∗ : {δ1} → {δ2}
  (app)     A ⊢ e1 : τ1 → τ2   A ⊢ e2 : τ1   ⟹   A ⊢ (e1 e2) : τ2
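To show how these rules read as an algorithm when all types are already known, the following OCaml sketch checks a few of them ((const), (record), (dot), (set), (join) and (app)). It reuses the ty/term datatypes from the sketch in section 3.2; the helpers tjoin and typeof and the error handling are our own, and this is a checker for fully determined types rather than the inference algorithm of section 4, so rules such as (abs) that require guessing a type are omitted.

exception Type_error of string

(* least upper bound of two database types, as in section 3.1, property (3) *)
let rec tjoin t1 t2 =
  match t1, t2 with
  | TyBase b1, TyBase b2 when b1 = b2 -> TyBase b1
  | TySet a, TySet b -> TySet (tjoin a b)
  | TyRec f1, TyRec f2 ->
      let labels = List.sort_uniq compare (List.map fst f1 @ List.map fst f2) in
      TyRec (List.map
               (fun l ->
                  match List.assoc_opt l f1, List.assoc_opt l f2 with
                  | Some a, Some b -> (l, tjoin a b)
                  | Some a, None -> (l, a)
                  | None, Some b -> (l, b)
                  | None, None -> assert false)
               labels)
  | _ -> raise (Type_error "join undefined")

let rec typeof (env : (string * ty) list) (e : term) : ty =
  match e with
  | Var x ->                                                (* (var) *)
      (try List.assoc x env with Not_found -> raise (Type_error "unbound variable"))
  | Const (_, t) -> t                                       (* (const) *)
  | Null b -> TyBase b
  | Record fields ->                                        (* ([ ]) and (record), iterated *)
      TyRec (List.map (fun (l, e) -> (l, typeof env e)) fields)
  | Dot (e, l) ->                                           (* (dot) *)
      (match typeof env e with
       | TyRec fields ->
           (try List.assoc l fields with Not_found -> raise (Type_error "missing field"))
       | _ -> raise (Type_error "not a record"))
  | SetLit (e :: es) ->                                     (* (set); the empty set would need a guessed δ *)
      let d = typeof env e in
      List.iter (fun e' -> if typeof env e' <> d then raise (Type_error "heterogeneous set")) es;
      TySet d
  | Join (e1, e2) -> tjoin (typeof env e1) (typeof env e2)  (* (join) *)
  | App (e1, e2) ->                                         (* (app) *)
      (match typeof env e1 with
       | TyFun (t1, t2) when typeof env e2 = t1 -> t2
       | _ -> raise (Type_error "bad application"))
  | _ -> raise (Type_error "not handled in this sketch")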

3.4 Equations between terms

In addition to the usual equality rules (α) and (β) for function abstraction and function application, we need to define equality rules for the other term constructors we have introduced in our language. These rules provide an operational semantics of the language. Here we only define the rules for joins and projections; definitions for the other term constructors are straightforward. The set of database terms Dterm (ranged over by d) is defined as:

  d ::= cδ | nullι | [ rd ] | {d, . . . , d}

where rd denotes Dterm-records and δ denotes database types. The pre-order relation ≼ defined in the previous section is a pre-order on Dterm. ≼ induces an equivalence relation ≅≼, i.e. a ≅≼ b iff a ≼ b and b ≼ a. Define ⊑ as the partial order on equivalence classes induced by ≼, i.e. E1 ⊑ E2 iff there are e1 ∈ E1, e2 ∈ E2 such that e1 ≼ e2. It is easy to check that ⊑ has the bounded join property. We regard a database term as a representative of the corresponding equivalence class. For a term d ∈ Dterm we say that d is in canonical form if either (1) d = cδ or d = nullι, or (2) d = [ (l1 : d1) · · · (ln : dn) ] and each di is in canonical form, or (3) d = {d1, . . . , dn}, each di is in canonical form, and there are no distinct di, dj ∈ {d1, . . . , dn} such that di ≼ dj. It can then be shown that each equivalence class is uniquely represented (up to permutations of elements in sets) by a term in canonical form. For a set of database terms X, we write join(X) for the canonical representative of the equivalence class ⊔⊑ X if it exists. For finite X, it is easy to define an algorithm to compute join(X). The equality for join is then defined as:

  e1 ⋈ e2 = join({e1, e2})

The intended meaning of πδ(d) is to project d onto the set of values having the type δ. We therefore define the equality for projection as:

  πδ(d) = join({v | v has the type δ, v ⊑ d})

The algorithm to compute the term equal to the right-hand side of the above equation can also be defined.

4 The type inference problem

The static type-checking of a term e determines whether there are some A, τ such that A ⊢ e : τ. Since our language is implicitly typed, there may be more than one such pair (A, τ). In order to develop a type-checking algorithm, we need to determine the set {(A, τ) | A ⊢ e : τ} for any given term e. This problem is often called the type inference problem. Milner solved this problem for the type system of ML [Mil78]. A derivation system for typings of the language is presented in [DM82]. By defining a language of type expressions containing type variables, it is shown that for any term e in the language, a principal type scheme A ⊢ e : T satisfying the condition that {(A, τ) | A ⊢ e : τ} = {(A, τ) | (A ↑dom(A), τ) is a ground instance of (A, T)} can be constructed, where A is an assignment of type expressions to variables, T is a type expression and A ↑X is the restriction of A to X. This is based on the following properties of the language:

1. Each typing rule is represented by type metavariables ranging over arbitrary types without any condition. This means that the set of all instances of typing rules applicable to a given term can be represented by using type variables. Using this property, a desired set of typings for a compound term is specified by a set of equations between type expressions.

2. Type expressions are freely generated by the set of type variables. This guarantees that a most general solution of a set of equations between type expressions can be obtained by using Robinson's unification method [Rob65].

Unfortunately, this method is not directly applicable to our type system, because of the typing rules (dot), (join) and (π). These rules come with conditions of the forms l ∈ dom(ρ), δ3 = δ1 ⊔≤ δ2 and δ2 ≤ δ1 respectively, and therefore a set of instances of these rules cannot be represented by type expressions. This is equivalent to saying that the corresponding term constructors .l, ⋈ and πδ( ) do not have principal type schemes. Wand [Wan87] tried to solve this problem for labeled records and labeled disjoint sums by decomposing type expressions into two languages, TE and RE. Here we restrict attention to labeled records; labeled disjoint sums can be understood similarly. Then TE and RE are defined as follows:

  TE ::= t | ι | TE → TE | [ RE ]
  RE ::= γ | ø | RE(l : TE)

where t, γ are variables ranging over TE, RE respectively and ø is a constant symbol denoting the empty record. RE satisfies the following equality rules:

  R(l : T1)(l : T2) = R(l : T2)                    (2)
  R(l1 : T1)(l2 : T2) = R(l2 : T2)(l1 : T1)        (3)

His type system contains the following typing rules for records:

  (with)    A ⊢ e1 : [ ρ ]   A ⊢ e2 : τ   ⟹   A ⊢ e1 with l := e2 : [ ρ(l : τ) ]
  (dot′)    A ⊢ e : [ ρ(l : τ) ]   ⟹   A ⊢ e.l : τ

where τ and ρ range over arbitrary types and type-records respectively. The rule for e.l is now represented by meta-variables without any condition. The necessary condition is correctly captured by the equality rule (2). Therefore the primitive operators with l := and .l do have principal type schemes [ γ ] × t ↦ [ γ(l : t) ] and [ γ(l : t) ] ↦ t respectively in TE and RE. Since RE is equipped with the equality rule (2), any ground instances of these type schemes are types of these operators. However, RE is no longer freely generated. Because of this fact, equations between TE and RE do not in general have a most general solution. For example, the following term has a typing under his set of type inference rules but his unification-based algorithm cannot find it:

  (λx.λy. z (y(x with l := 1)) (y[ l = 1 ])) [ l = true ]

4.1 Conditional type schemes

We solve the type inference problem of our language by extending the notion of type schemes to conditional type schemes. Informally, a conditional type scheme is a type scheme A ⊢ e : T with a set C of conditions on substitutions representing the conditions associated with the typing rules (dot), (join) and (π). The set of typings of a term e is then identified by the set of all (A, τ) such that there is a substitution θ that satisfies C and (A ↑dom(A), τ) = θ(A, T). For the type-checking purpose, we need to know whether the set represented by (A, C, T) is empty or not. We therefore require the set of conditions C to be satisfiable. Note that our notion of a conditional type scheme is stronger than the similar notion proposed in [Sta88], where a constraint set is not necessarily satisfiable. Let Tvar be a set of type variables (ranged over by t). Define type expressions Texp as:

  T ::= t | ι | [ R ] | {T} | T → T

where R denotes T-records. A type expression is ground if it does not contain type variables. We identify ground type expressions and types. A substitution θ is a function from Tvar to Texp such that θ(t) ≠ t for only finitely many t. Abusing notation, we write dom(θ) for the set {t ∈ Tvar | θ(t) ≠ t}. For substitutions θ, µ, θ ◦ µ is the composition of the two, defined as θ ◦ µ(t) = θ(µ(t)). In what follows, we identify θ with its extension to Texp. We write {T1/t1, . . . , Tn/tn} for the substitution which maps ti to Ti. If θ is a substitution and A is a structure containing type expressions T1, . . . , Tn, we write θ(A) for the structure obtained from A by replacing Ti with θ(Ti). If θ(t) does not contain type variables for all t ∈ dom(θ) then θ is a ground substitution. We also say that θ is a ground substitution for A if θ(t) does not contain type variables for any t that appears in A. A condition is one of the forms [ (l : T1) ] ∈ T2, T1 = T2 ⊔≤ T3 or T1 ≤ T2, corresponding respectively to the conditions of the typing rules (dot), (join) and (π). A substitution θ satisfies a condition c:

1. if c ≡ [ (l : T1) ] ∈ T2 and θ(T1), θ(T2) are ground instances of T1, T2 respectively and θ(T2) = [ · · · (l : θ(T1)) · · · ];

2. if c ≡ T1 = T2 ⊔≤ T3 and θ(T1), θ(T2), θ(T3) are ground instances of T1, T2, T3 respectively and θ(T1) = θ(T2) ⊔≤ θ(T3);

3. if c ≡ T1 ≤ T2 and θ(T1), θ(T2) are ground instances of T1, T2 respectively and θ(T1) ≤ θ(T2).

A substitution θ satisfies the set of conditions C iff θ satisfies each c ∈ C. We also say that θ is a model of C if θ satisfies C. Let C be a set of conditions, A be an assignment of type expressions to variables, and T be a type expression. A conditional type scheme of a term e is now defined as a 4-tuple (C, A, e, T), written C, A ⊢ e : T, such that (1) C is satisfiable and (2) for any ground substitution θ for A and T, if θ satisfies C then θ(A) ⊢ e : θ(T) is a typing. A conditional type scheme C, A ⊢ e : T is principal if for any typing A ⊢ e : τ there is a ground substitution θ for A and T such that θ satisfies C and A ↑dom(A) = θ(A), τ = θ(T). If the satisfiability condition on C is dropped then C, A ⊢ e : T is called a principal conditional type pre-scheme.
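The following OCaml sketch shows one way these notions could be represented. The names texp, condition, subst, scheme and the association-list encodings of substitutions and assignments are our own illustration, not definitions from the paper.

(* A sketch of a representation of type expressions, conditions and conditional type schemes. *)
type texp =
  | TVar of string                         (* type variable t *)
  | TBase of string                        (* base type ι *)
  | TRec of (string * texp) list           (* [ R ] *)
  | TSet of texp                           (* {T} *)
  | TFun of texp * texp                    (* T → T *)

(* the three forms of conditions, for the rules (dot), (join) and (π) *)
type condition =
  | HasField of string * texp * texp       (* [ (l : T1) ] ∈ T2 *)
  | IsJoin of texp * texp * texp           (* T1 = T2 ⊔≤ T3 *)
  | Leq of texp * texp                     (* T1 ≤ T2 *)

type subst = (string * texp) list          (* finite map from type variables to type expressions *)

type scheme = {
  conds : condition list;                  (* C *)
  assign : (string * texp) list;           (* A, a type assignment to term variables *)
  body : texp;                             (* T *)
}

(* apply a substitution to a type expression *)
let rec apply (s : subst) (t : texp) : texp =
  match t with
  | TVar v -> (match List.assoc_opt v s with Some u -> u | None -> t)
  | TBase _ -> t
  | TRec fields -> TRec (List.map (fun (l, u) -> (l, apply s u)) fields)
  | TSet u -> TSet (apply s u)
  | TFun (a, b) -> TFun (apply s a, apply s b)

(* A ground substitution satisfies HasField (l, T1, T2) when θ(T2) is a record type
   containing the field l : θ(T1); checks for IsJoin and Leq would use the join and
   ordering algorithms of section 3.1 in the same way. *)
let satisfies_hasfield (s : subst) (l, t1, t2) =
  match apply s t2 with
  | TRec fields -> List.assoc_opt l fields = Some (apply s t1)
  | _ -> false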

4.2 Algorithm to construct a principal conditional pre-scheme

From the definition, a term e has a typing iff there is a principal conditional type pre-scheme C, A ⊢ e : T and C is satisfiable. The type inference problem is then solved by first constructing a principal conditional type pre-scheme C, A ⊢ e : T and then checking the satisfiability of C (Theorem 1 and Theorem 3).

Theorem 1 There is an algorithm P, taking any term e, that yields either failure or (C, A, T) such that if P(e) = (C, A, T) then (C, A, T) is a principal conditional type pre-scheme, otherwise e has no typing.

Proof To define the algorithm P, we need a unification algorithm [Rob65]:

Theorem 2 (Robinson) There is an algorithm U which, given a pair of terms a, b freely generated by a set of variables, either returns a substitution σ or failure such that:

1. If U(a, b) returns σ then σ(a) = σ(b).

2. If there is a substitution µ such that µ(a) = µ(b) then U(a, b) returns some σ and there is another substitution ν such that µ = ν ◦ σ.

Since a sequence of pairs (a1, b1), . . . , (an, bn) can also be regarded as a pair of terms (< a1, . . . , an >, < b1, . . . , bn >), the unification algorithm U is extended to a set of pairs and returns a substitution that simultaneously unifies all pairs. Such an algorithm U exists for Texp, since Texp under the equality (1) is isomorphic to a free algebra generated by Tvar with the function symbols →, { }, ι (a 0-ary function symbol for each base type ι) and [ (l1 : ) · · · (ln : ) ] (an n-ary function symbol for each finite sequence l1 < · · · < ln, li ∈ L, 0 ≤ n). If A1, A2 are assignments of type expressions to variables, we write match(A1, A2) to denote the set {(A1(x), A2(x)) | x ∈ dom(A1) ∩ dom(A2)}. The algorithm P is defined by induction on the structure of the term. Here we provide some of the cases; the others are similar.

Algorithm P

P(e) = (C, A, T) where

(1) If e ≡ x then C = ∅, A = {x : t} (t fresh), T = t.

(2) If e ≡ (e1 e2) then let P(e1) = (C1, A1, T1), P(e2) = (C2, A2, T2) and U(match(A1, A2) ∪ {(T1, T2 → t)}) = σ (t fresh); then C = σ(C1) ∪ σ(C2), A = σ(A1) ∪ σ(A2) and T = σ(t).

(3) If e ≡ λx.e1 then let P(e1) = (C1, A1, T1); then if x ∈ dom(A1) then C = C1, A = A1 ↑dom(A1)\{x} and T = A1(x) → T1; otherwise C = C1, A = A1 and T = t → T1 (t fresh).

(4) If e ≡ [ r(l : e1) ] then let P([ r ]) = (C1, A1, [ ρ ]), P(e1) = (C2, A2, T2) and U(match(A1, A2)) = σ; then C = σ(C1) ∪ σ(C2), A = σ(A1) ∪ σ(A2) and T = σ([ ρ(l : T2) ]).

(5) If e ≡ e1.l then let P(e1) = (C1, A1, T1); then C = C1 ∪ {[ (l : t) ] ∈ T1} (t fresh), A = A1 and T = t.

(6) If e ≡ e1 ⋈ e2 then let P(e1) = (C1, A1, T1), P(e2) = (C2, A2, T2) and U(match(A1, A2)) = σ; then C = σ(C1) ∪ σ(C2) ∪ {t = σ(T1) ⊔≤ σ(T2)} (t fresh), A = σ(A1) ∪ σ(A2) and T = t.

(7) If e ≡ πδ(e1) then let P(e1) = (C1, A1, T1); then C = C1 ∪ {δ ≤ T1}, A = A1 and T = δ.
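As an illustration of how the algorithm accumulates conditions, the following OCaml sketch implements cases (1), (5) and (7) over a small expression fragment, reusing the texp and condition representation sketched in section 4.1. Cases with two subterms, such as (2), (4) and (6), would in addition unify the assumptions shared by the two recursive results with Robinson's algorithm, as described above; they are omitted here, and the expression fragment and the fresh-variable counter are our own.

(* A sketch of three cases of the algorithm P. *)
type expr =
  | EVar of string                         (* x *)
  | EDot of expr * string                  (* e.l *)
  | EProj of texp * expr                   (* πδ(e), with δ given as a ground texp *)

let counter = ref 0
let fresh () = incr counter; TVar (Printf.sprintf "t%d" !counter)

(* principal e = (C, A, T): a conditional type pre-scheme for e *)
let rec principal (e : expr) : condition list * (string * texp) list * texp =
  match e with
  | EVar x ->                              (* case (1): fresh variable, empty condition set *)
      let t = fresh () in
      ([], [ (x, t) ], t)
  | EDot (e1, l) ->                        (* case (5): add the condition [ (l : t) ] ∈ T1 *)
      let c1, a1, t1 = principal e1 in
      let t = fresh () in
      (HasField (l, t, t1) :: c1, a1, t)
  | EProj (delta, e1) ->                   (* case (7): add the condition δ ≤ T1 *)
      let c1, a1, t1 = principal e1 in
      (Leq (delta, t1) :: c1, a1, delta)

For example, principal (EDot (EVar "x", "Name")) returns the pre-scheme whose condition set is { [ (Name : t2) ] ∈ t1 }, whose assignment is {x : t1} and whose body is t2, matching the conditional type scheme for the field selector given in the introduction.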

In cases (2), (4) and (6), σ(A1)(x) = σ(A2)(x) for all x ∈ dom(A1) ∩ dom(A2), therefore σ(A1) ∪ σ(A2) is well defined. In case (4), since [ r(l : e1) ] is a well defined term, r does not contain l. It is then easily shown by a simple induction that ρ does not contain l, therefore σ([ ρ(l : T2) ]) is well defined.

We now show the correctness of P by showing the property that (1) if P(e) = (C, A, T) then for any ground substitution θ for C, A and T, if θ satisfies C then θ(A) ⊢ e : θ(T) is a typing of e, and (2) if e has a typing A ⊢ e : τ then P(e) = (C, A, T) and there is a ground substitution θ for C, A, T that satisfies C and A ↑dom(A) = θ(A) and τ = θ(T). The cases other than e1.l, e1 ⋈ e2 and πδ(e1) do not generate conditions and the desired property is shown by using Theorem 2 as was done in [DM82]. Here we only show the case for e1 ⋈ e2; the cases for e1.l and πδ(e1) are similar.

Suppose P(e1 ⋈ e2) = (C, A, t) and θ satisfies C. Then since σ(C1) ⊆ C and σ(C2) ⊆ C, θ also satisfies both σ(C1) and σ(C2). Then by the definition of satisfiability, θ ◦ σ satisfies both C1 and C2. By the induction hypothesis, e1 and e2 have typings θ ◦ σ(A1) ⊢ e1 : θ ◦ σ(T1) and θ ◦ σ(A2) ⊢ e2 : θ ◦ σ(T2) respectively. But θ ◦ σ(Ai) = θ(A) ↑dom(Ai), i = 1, 2. Thus e1, e2 also have typings θ(A) ⊢ e1 : θ(σ(T1)) and θ(A) ⊢ e2 : θ(σ(T2)) respectively. Since t = σ(T1) ⊔≤ σ(T2) ∈ C, θ(t) = θ(σ(T1)) ⊔≤ θ(σ(T2)). Then by the rule (join), θ(A) ⊢ e1 ⋈ e2 : θ(t) is a typing.

Conversely, suppose there is a typing A ⊢ e1 ⋈ e2 : τ. Then by the typing rules, there are some τ1, τ2 such that τ = τ1 ⊔≤ τ2 and A ⊢ e1 : τ1 and A ⊢ e2 : τ2 are typings. Then by the induction hypothesis, P(e1) = (C1, A1, T1), P(e2) = (C2, A2, T2) and there are some θ1, θ2 such that they respectively satisfy C1, C2 and A ↑dom(A1) = θ1(A1), τ1 = θ1(T1), A ↑dom(A2) = θ2(A2), τ2 = θ2(T2). Then by the property of U, U(match(A1, A2)) returns some σ such that σ(A1(x)) = σ(A2(x)) for all x ∈ dom(A1) ∩ dom(A2) and there are θ1′, θ2′ satisfying θ1 = θ1′ ◦ σ, θ2 = θ2′ ◦ σ. Thus P(e1 ⋈ e2) returns (C, A, t) where C = σ(C1) ∪ σ(C2) ∪ {t = σ(T1) ⊔≤ σ(T2)} and A = σ(A1) ∪ σ(A2). Now let θ = θ1′ ∪ θ2′ ∪ {t : τ}, regarding θ1′, θ2′ as graphs. Since θ1′(x) = θ2′(x) for all x ∈ dom(θ1′) ∩ dom(θ2′) and t is fresh, θ is a well defined substitution. Since τ = τ1 ⊔≤ τ2 and θ(σ(T1)) = τ1 and θ(σ(T2)) = τ2, θ satisfies C. By the property of σ, A is well defined and A(x) = σ(A1)(x) for all x ∈ dom(A1) and A(x) = σ(A2)(x) for all x ∈ dom(A2). Then by the assumption on A1, A2 and the definition of θ, A ↑dom(A1) = θ(σ(A1)), A ↑dom(A2) = θ(σ(A2)) and hence A ↑dom(A) = θ(A). Finally, by definition, τ = θ(t).

4.3 Satisfiability of conditions

The algorithm P(e) constructs only a pre-scheme (C, A, T), i.e. A ⊢ e : τ is a typing iff there is a ground substitution θ such that θ satisfies C and A ↑dom(A) = θ(A) and τ = θ(T). That P(e) succeeds and returns (C, A, T) does not imply that e has a typing. In order to decide whether e has a typing, we need to check whether there is some θ that satisfies C.

Theorem 3 We can effectively decide whether a set C of conditions is satisfiable.

Proof We prove this theorem by analyzing the possible structures of substitution instances of type variables. Represent a type expression T as a labeled tree whose nodes are labeled by type constructor symbols NS = Tvar ∪ BaseType ∪ {record, set, function} and whose edges are labeled by symbols ES = L ∪ {element, domain, range} identifying the arguments of type constructors. Then, since the labels of edges from the same node are all distinct, a path α ∈ ES∗ appearing in a tree uniquely determines a node symbol. We define the address of a node as the path from the root to the node, and the address of an edge as the path from the root to the edge (including the edge itself). If a node is labeled by one of {record, set, function}, we call it a constructor node. Let x ∈ ES ∪ NS, α ∈ ES∗ and K be a tree. We write (α, x, K) for the occurrence of the symbol (either node symbol or edge symbol) x at the address α in K. In what follows we regard type expressions as their tree representations. The proof of the theorem uses the following lemma.

Lemma 1 If C has a model then C also has a model θ such that for any t ∈ dom(θ), θ(t) only contains labels that appear in C; if θ(t) contains a base type ι then either ι appears in C or it is some fixed ιo; and the number of all constructor symbol occurrences in θ(t) is at most the product of the total number of type variable occurrences in C and the total number of constructor symbol occurrences in C.

Proof Suppose C has a model η. Let {t1, . . . , tm} be the set of type variables occurring in C. Also let {T1, . . . , Tn} be the set of all type expressions appearing in C, i.e. Ti is one of T, T′, T″ in a condition [ (l : T) ] ∈ T′, T = T′ ⊔≤ T″ or T ≤ T′ in C. We first define a matching relation ∼ on occurrences of symbols in T1, . . . , Tn, η(t1), . . . , η(tm) as follows: (α1, s1, K1) ∼ (α2, s2, K2) iff there are Ti, Tj such that (1) K1 ≡ Ti or K1 is a subtree of η(Ti), and K2 ≡ Tj or K2 is a subtree of η(Tj), (2) Ti, Tj appear in the same condition c in C, and (3) if (β1, s1, η(Ti)), (β2, s2, η(Tj)) are respectively the occurrences of (α1, s1, K1) and (α2, s2, K2) in η(Ti) and η(Tj), then β1 = β2. Since η is a model of C, if (α1, s1, K1) ∼ (α2, s2, K2) then s1 ≡ s2. Let ∼+ be the transitive closure of ∼. We next construct a substitution θ from η using ∼+. Define θ such that dom(θ) = dom(η) and θ(t) is a tree obtained from η(t) by (1) removing all occurrences of edges labeled with (α, l, η(t)) (l ∈ L) that are not related by ∼+ to any occurrence of a label in any Ti, (2) replacing by ιo all base type occurrences that are not related by ∼+ to some base type occurrence in any Ti, (3) replacing by ιo all subtrees whose root nodes are constructor symbol occurrences that are not related by ∼+ to any occurrence of a constructor symbol in any Ti, and (4) removing all occurrences of edges labeled with (α1, l, K1) (l ∈ L) satisfying the following condition: let (α1, l, K1), (α2, l, K2), . . . , (αk, l, Kk), (αk+1, l, Tj) be the shortest chain in ∼ (such a chain always exists); then the sequence K1, . . . , Kk has a repetition of subsequences of the form Ki1, . . . , Kil, Ki1, . . . , Kil. From the definition of satisfiability, the three transformations (1)–(3) preserve the satisfiability of C. For the transformation (4), if there is an occurrence of a symbol (β, l, K) such that (α1, l, K1) ∼ (β, l, K) and (β, l, K) is not removed, then it can be shown that K1, K are subtrees of θ(Ti), θ(Tj) for some Ti, Tj such that either (i) Ti ≤ Tj ∈ C or (ii) Tj = Ti ⊔≤ Tk ∈ C and Tk contains a subtree K′ such that (β, l, K) ∼ (γ, l, K′) for some occurrence (γ, l, K′) in K′. Therefore the transformation (4) also preserves the satisfiability of C and θ is also a model of C. From the transformations (1) and (2), θ clearly satisfies the first two conditions of the lemma. From the transformations (3) and (4), it can also be shown that for each t ∈ dom(θ) an injective mapping from the set of all occurrences of constructor symbols in θ(t) to the space {t1, . . . , tm} × {c1, . . . , cp} can be defined, where {c1, . . . , cp} is the set of all occurrences of constructor symbols in C.

We now conclude the proof of the theorem. From the lemma, C is satisfiable iff C has a model θ satisfying the properties described in the lemma. But there are only finitely many such substitutions θ. Thus satisfiability is decided by deciding the satisfiability of each θ(C). Since θ(C) does not contain type variables, its satisfiability can be decided by checking the inclusion of fields, computing joins of types, and checking the ordering relation between two types. Checking the inclusion of fields is clearly decidable. As we have noted in section 3.1, the other two are also decidable. Combining Theorem 1 and Theorem 3, we now have:

Corollary 1 For any term e, we can effectively decide whether there are A, τ such that A ⊢ e : τ.

5 Complexity of the type inference

The following result shows that it is unlikely that there is an efficient algorithm for complete type inference.

Theorem 4 Given a term e, it is NP-complete to test whether e has a typing.

Proof Membership in NP follows from the analysis of our construction of pre-schemes and satisfiability checking. From Theorem 1 and Theorem 3, a non-deterministic algorithm can check whether a given term has a typing by constructing a pre-scheme, guessing a satisfying substitution among the finite set of substitutions satisfying the conditions of Lemma 1, and then checking the satisfiability of the ground instances of the conditions. The complexity of constructing a pre-scheme is the same as that of constructing a principal type scheme for a term in ML without let. For such a term, it is known that there is a polynomial-time algorithm that constructs a polynomial-size representation of a principal type scheme using the technique described in [DKM84] (also personal communication from Paris Kanellakis). Using a similar method, a polynomial-size representation of a pre-scheme can be constructed in polynomial time. It is also easy to check that the result of applying a substitution satisfying the conditions of Lemma 1 to the set of conditions of a pre-scheme is still polynomial in size. Then, from the observation of section 3.1, the satisfiability of the instances of conditions can be checked in polynomial time.

Completeness is shown by a reduction from MONOTONE 3SAT [Gol74]: given a 3CNF Boolean formula whose clauses each consist either of all negated literals (a negative clause) or of all un-negated literals (a positive clause), test whether there is a satisfying truth assignment. Let F = {c1, . . . , cm} be the given set of clauses and {x1, . . . , xn} be the set of all literals that appear (either un-negated or negated) in F. We construct a term eF such that F has a truth assignment iff eF has a typing. We use the following constants: f : int → int, g : bool → bool. We use four variables xtrue, xfalse, xint, xbool for each literal x, one label #x for each x, one label #c for each c, and labels l, #1, #2, #3, #4. For each x, let Mx be the term

  Mx ≡ [ #1 = f((xtrue ⋈ xint).l), #2 = g((xfalse ⋈ xbool).l), #3 = (xtrue ⋈ xfalse).l, #4 = (xint ⋈ xbool).l ]

For each clause c, if c consists of un-negated literals {x, y, z} then let Nc be the term

  Nc ≡ f(((xtrue ⋈ ytrue) ⋈ ztrue).l)

otherwise c consists of negated literals {x, y, z} and we let Nc be the term

  Nc ≡ g(((xfalse ⋈ yfalse) ⋈ zfalse).l)

Now define the desired term eF as the following record:

  eF ≡ [ #x1 = Mx1, . . . , #xn = Mxn, #c1 = Nc1, . . . , #cm = Ncm ]

The translation from F to eF is clearly polynomial. We next show the desired property of eF. Suppose eF has a typing A ⊢ eF : τ. By the typing rules, each Mx and Nc has a typing under A. By the definition of Mx, if Mx has a typing under A then either A(xtrue) is a record type containing the field (l : int) or A(xfalse) is a record type containing the field (l : bool), and not both. Define a truth assignment M such that M(x) = true iff A(xtrue) is a record type containing the (l : int) field. By the definition of Nc and the rule (join), for a positive clause {x, y, z}, if N{x,y,z} has a typing under A then at least one of A(xtrue), A(ytrue), A(ztrue) has the field (l : int), and for a negative clause {x, y, z}, if N{x,y,z} has a typing under A then at least one of A(xfalse), A(yfalse), A(zfalse) has the field (l : bool). By the definition of M this implies that M satisfies F.

Conversely, suppose F is satisfied by an assignment M. Define a type assignment A as follows: if M(x) = true then A(xtrue) = [ (l : int) ], A(xfalse) = [ ], A(xint) = [ ], A(xbool) = [ (l : bool) ]; otherwise A(xtrue) = [ ], A(xfalse) = [ (l : bool) ], A(xint) = [ (l : int) ], A(xbool) = [ ]. It is then easy to check that eF has the following type under A:

  [ #x1 : τ1, . . . , #xn : τn, #c1 : τ1′, . . . , #cm : τm′ ]

where τi is [ #1 : int, #2 : bool, #3 : int, #4 : bool ] if M(xi) = true and [ #1 : int, #2 : bool, #3 : bool, #4 : int ] otherwise, and τj′ = int if cj is a positive clause and τj′ = bool otherwise.
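Since the translation from F to eF is itself a small algorithm, the following OCaml sketch shows one way to build the terms Mx, Nc and eF, reusing the term datatype from the sketch in section 3.2. It is an approximation under assumptions of our own: the constants f and g are represented as free variables, variable and label names are generated by string concatenation, and clause labels are numbered rather than named after the clauses.

(* A sketch of the reduction from MONOTONE 3SAT. *)
type clause = { positive : bool; lits : string list }   (* all un-negated or all negated *)

(* the record M^x for a literal x *)
let m_x x =
  let v s = Var (x ^ "_" ^ s) in
  Record [ "#1", App (Var "f", Dot (Join (v "true",  v "int"),  "l"));
           "#2", App (Var "g", Dot (Join (v "false", v "bool"), "l"));
           "#3", Dot (Join (v "true", v "false"), "l");
           "#4", Dot (Join (v "int",  v "bool"),  "l") ]

(* the term N^c for a clause c with exactly three literals *)
let n_c c =
  let suffix = if c.positive then "_true" else "_false" in
  let h = if c.positive then "f" else "g" in
  match List.map (fun x -> Var (x ^ suffix)) c.lits with
  | [a; b; d] -> App (Var h, Dot (Join (Join (a, b), d), "l"))
  | _ -> invalid_arg "expected exactly three literals"

(* the record e_F for a formula with literals lits and clauses clauses *)
let e_f lits clauses =
  Record (List.map (fun x -> ("#" ^ x, m_x x)) lits
          @ List.mapi (fun i c -> ("#c" ^ string_of_int i, n_c c)) clauses)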

6 Subtyping vs. polymorphism

Cardelli proposed [Car84] a type system with subtyping to support type inheritance. The subtype relation < is used in his system through the rule

  (Sub)   A ⊢ e : τ   τ < τ′   ⟹   A ⊢ e : τ′

Because of this property, his system captures the polymorphic nature of the field selection operators .l. For example, the term λx : [ l : ι ].x.l can be applied to any record type that contains an l : ι field. However, as pointed out in [CW85], in his original system type information is lost in function applications. For example, M ≡ (λx : [ l : ι ].x)[ l = cι, l′ = cι′ ] is reduced to [ l = cι, l′ = cι′ ] but ∅ ⊢ M : [ l : ι, l′ : ι′ ] is not provable. In order to fix this problem, Cardelli and Wegner proposed [CW85] the notion of bounded quantification and tried to capture the polymorphic nature of .l by explicit polymorphism with a condition that a type parameter ranges only over the types that are subtypes of a certain type. However, bounded quantification is too general for this purpose. To see this, consider the raw term λx.x.l. In their new system, it corresponds to the second order term M1 ≡ (all t2)(all t1 < [ l : t2 ])λx : t1.x.l with the type ∀t2. ∀t1 < [ l : t2 ]. t1 → t2, where (all t) is type abstraction and ∀t1 < [ l : t2 ] denotes quantification over all types that are subtypes of [ l : t2 ]. However, this condition does not capture the exact nature of x.l, since t2 can be a record type and t1 can be a record type that contains l : t3 such that t3 is a subtype of t2 but not equal to t2. The precise specification of the relationship between t1 and t2 is that t1 is a record type containing an l : t2 field. Because of this fact, their new system still has the same problem. For example, by type applications we get the term M2 ≡ M1 [ l2 : ι1 ][ l1 : [ l2 : ι1, l3 : ι2 ] ] with the type [ l1 : [ l2 : ι1, l3 : ι2 ] ] → [ l2 : ι1 ], which is reduced to λx : [ l1 : [ l2 : ι1, l3 : ι2 ] ].x.l1. Then M3 ≡ (M2 [ l1 = [ l2 = cι1, l3 = cι2 ] ]) is reduced to [ l2 = cι1, l3 = cι2 ] but ∅ ⊢ M3 : [ l2 : ι1, l3 : ι2 ] is not provable.

As Wand pointed out [Wan87], by combining Milner's polymorphic let constructor [Mil78], an implicitly typed language containing labeled records can deal with type inheritance. For example, in let f = λx.x.l in M, f can be applied to any record whose type contains at least an l field. Our language shares this property. An advantage of this treatment of inheritance is that the problem associated with subtyping described above does not appear, since we exactly capture the condition associated with the operation .l.

7