Assignment 0:
Judgments, Rules, and Church’s λ-Calculus
15-312: Principles of Programming Languages (Fall 2023)
Welcome to 15-312! In this assignment, we will explore foundational concepts in programming
languages through the lens of Church’s λ-calculus.
In this first assignment, we are introducing the use of inductive definitions of judgments about
terms in the λ-calculus. The λ-calculus is a remarkably simple and elegant model of computation
formulated by Church in the early 1930s that is based on the concept of a mathematical variable,
which is given meaning by substitution. λ-terms may be thought of as functions acting on λ-terms,
including the term itself: the λ-calculus is inherently self-referential, which is the source
of its expressive power. In fact, the λ-calculus can express all functions that can be defined by a
Turing machine. It is expected that you will find the λ-calculus itself to be somewhat mysterious,
mystifying, and non-obvious. Your goal is not to master the λ-calculus as a model of computation,
but rather to understand it as a microscopic programming language exhibiting core concepts of
central importance.
Although this is the first assignment, it is advisable to take it seriously. This assignment is the
“on-ramp” to the course. It involves a lot of unfamiliar concepts and requires good programming
skills. We recommend you start early; please don’t hesitate to ask for help on Piazza and/or at
office hours if you get stuck!
1 Course Mechanics
The purpose of this question is to ensure that you get familiar with this course’s policies.
Academic Integrity Go to the Academic Integrity page on the course Canvas to understand the
whiteboard policy for collaboration regarding the homework assignments, the late policy regarding
timeliness of homework submissions, and the use of Piazza. As in any class, you are responsible for
following our collaboration policy; violations will be handled according to university policy.
Submission Please read Appendix A to understand how written and code submissions for this
class work.
Task 1.1 (5 pts). Decide whether each of the following statements is consistent with course
policies. Briefly explain your answers.
1. Jeanne, Daniel, and Nitin split a pizza while talking about their homework, and by the end
of lunch, their pizza box is covered with notes and solutions. Daniel throws out the pizza box
and the three go to class, writing their solutions up separately that evening.
2. Mia and Yue write out a solution to Problem 4 on a whiteboard in Wean Hall. Then, they
erase the whiteboard and run to Gates. Five minutes later, each student types up the solution
on their laptop, sitting at separate tables.
3. Pressed for time, Matthew notices that many of the test cases for an implementation task can
be passed with a trivial solution. After submitting his code to the autograder, he is relieved
to learn that he is guaranteed at least a 72% score on this task.
2 Judgments and Rules
Throughout the semester, we will define programming languages and other concepts via judgments,
which can be thought of as mathematical relations. In this section, we will consider simple
judgments about lists of natural numbers, where natural numbers are defined by the judgment n nat in
PFPL, Chapter 2. We will usually abbreviate natural numbers via numeric literals; that is, instead
of s(s(z)), we simply write 2. First, we define what it means to be a list via the judgment l list,
which can be thought of as a unary relation.
Judgments in this class are usually defined by induction using inference rules. In general, an
inference rule has the following form, where n ≥ 0:

J1   ...   Jn
-------------
      J

Each judgment Ji is a premise (i.e., assumption) and the judgment J is the conclusion.
Lists of natural numbers are defined inductively as follows:
1. nil is a list.
2. If x is a natural number and l is a list, then cons(x ; l ) is a list.
Using the judgment l list as "l is a list", we may express this definition using inference rules:

-------- L:Nil
nil list

x nat    l list
--------------- L:Cons
cons(x; l) list
The annotations L:Nil and L:Cons are the names of the respective rules. Such names are useful
to refer to the rules in the text but are not part of the actual rule.
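To connect the rules to code, here is one way the same inductive definition might be rendered as Standard ML datatypes. This is only an illustrative sketch (these declarations are not part of the assignment's starter code); each rule corresponds to one constructor, L:Nil to Nil and L:Cons to Cons.

(* n nat:  z is a natural number, and s(n) is a natural number whenever n is. *)
datatype nat = Z | S of nat

(* l list:  nil is a list, and cons(x; l) is a list whenever x nat and l list. *)
datatype natlist = Nil | Cons of nat * natlist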
2.1 Sublists
Using the following inference rules, we may define the judgment l1 ⊑ l2 , intended to mean that
list l1 is an ordered sublist of list l2.
--------- SL:Nil
nil ⊑ nil

l1 ⊑ l2
------------------------- SL:Keep
cons(x; l1) ⊑ cons(x; l2)

l1 ⊑ l2
---------------- SL:Drop
l1 ⊑ cons(x; l2)
By composing these rules into a tree structure, we may provide a derivation of a particular fact,
working from the bottom up and satisfying all premises. For example, we can prove that
cons(1; cons(2; nil)) ⊑ cons(1; cons(5; cons(3; cons(1; cons(2; nil)))))
via the following derivation tree:
--------- SL:Nil
nil ⊑ nil
--------------------------- SL:Keep
cons(2; nil) ⊑ cons(2; nil)
------------------------------------ SL:Drop
cons(2; nil) ⊑ cons(1; cons(2; nil))
--------------------------------------------- SL:Drop
cons(2; nil) ⊑ cons(3; cons(1; cons(2; nil)))
------------------------------------------------------ SL:Drop
cons(2; nil) ⊑ cons(5; cons(3; cons(1; cons(2; nil))))
------------------------------------------------------------------------ SL:Keep
cons(1; cons(2; nil)) ⊑ cons(1; cons(5; cons(3; cons(1; cons(2; nil)))))
Notice that no judgments are left unjustified, as there are no premises for rule SL:Nil. We call
such a rule an axiom or a leaf rule.
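Because each rule is syntax-directed, the rules also suggest an algorithm for checking the judgment. The following Standard ML sketch (again not part of the assignment; it uses the nat and natlist datatypes from the sketch above) decides whether l1 ⊑ l2 is derivable by asking, at each step, which rule could have concluded the judgment.

(* sub (l1, l2): is the judgment "l1 is an ordered sublist of l2" derivable? *)
fun sub (Nil, Nil) = true                        (* SL:Nil *)
  | sub (Nil, Cons (_, l2)) = sub (Nil, l2)      (* only SL:Drop can apply *)
  | sub (Cons _, Nil) = false                    (* no rule has this conclusion *)
  | sub (Cons (x, l1), Cons (y, l2)) =
      (x = y andalso sub (l1, l2))               (* SL:Keep *)
      orelse sub (Cons (x, l1), l2)              (* SL:Drop *)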
Task 2.1 (10 pts). Give a different derivation tree showing the same judgment.
You may wish to modify our given LaTeX code, which can be found in written/assn0.tex.
To prove theorems about our judgments, we use a technique called rule induction. Rule induction is
a generalization of structural induction, allowing us to consider all derivation trees for a particular
judgment. If we go by induction over the judgment defining a data structure, such as l list , rule
induction is in fact just structural induction.
Theorem 2.1. If l list, then l ⊑ l.
Task 2.2 (10 pts). Prove Theorem 2.1 by rule induction on l list (i.e., by structural induction
on l). As a reminder, in each case, you should argue for the existence of a derivation showing l ⊑ l.
This theorem is fairly straightforward and involves a familiar proof technique, going by cases on
how a list was constructed. However, thanks to the generality of rule induction, we can go by cases
on how any judgment was derived. For example, we can go by induction on a derivation tree
for a sublist judgment, considering all possible cases for how this derivation tree could have been
constructed. We consider a worked example here:
Theorem 2.2. If l1 ⊑ l2 and l2 ⊑ l3, then l1 ⊑ l3.
We provide the following proof, annotated with footnotes.
Proof. Assume l1 ⊑ l2 and l2 ⊑ l3.[1][2] We prove by rule induction on l2 ⊑ l3, showing in each case
that l1 ⊑ l3.[3] The cases we have to consider correspond to all rules that could have been used at
the root of the derivation tree of the judgment l2 ⊑ l3.
[1] The structure of a proof should follow the structure of the theorem. Since the theorem is of the form "if P then Q", we should assume P and show Q.
[2] By assuming l1 ⊑ l2 and l2 ⊑ l3, we are really assuming that we have derivation trees for both l1 ⊑ l2 and l2 ⊑ l3.
[3] In general, if we have multiple assumptions, which one(s) we should induct on, and in what order we should induct on them, depends on the proof itself.
Case
--------- SL:Nil
nil ⊑ nil

Here, we know l2 = nil and l3 = nil. It remains to show that l1 ⊑ l3; in other words,
l1 ⊑ nil. By assumption, l1 ⊑ l2. Since we know l2 = nil, we have l1 ⊑ nil, which is
precisely what we needed to show.

Case
l2′ ⊑ l3′
--------------------------- SL:Keep
cons(x; l2′) ⊑ cons(x; l3′)

Here, we know l2 = cons(x; l2′) and l3 = cons(x; l3′).[4] Observe that by the premise, we know
l2′ ⊑ l3′. It remains to show that l1 ⊑ l3; in other words, l1 ⊑ cons(x; l3′). By assumption,
l1 ⊑ l2, so l1 ⊑ cons(x; l2′).[5] We go by rule induction on l1 ⊑ cons(x; l2′).

Sub-case
--------- SL:Nil
nil ⊑ nil

Here, we know l1 = nil and cons(x; l2′) = nil. However, cons(x; l2′) and nil are
different, so this case is vacuous.[6]

Sub-case
l1′ ⊑ l2′
--------------------------- SL:Keep
cons(x; l1′) ⊑ cons(x; l2′)

Here, we know l1 = cons(x; l1′) (and l2 = cons(x; l2′), which we already knew). Observe
that by the premise, we must know l1′ ⊑ l2′. Since we know l1′ ⊑ l2′ and l2′ ⊑ l3′ via sub-
derivations, we may invoke the inductive hypothesis to get that l1′ ⊑ l3′. To show l1 ⊑ l3,
i.e. cons(x; l1′) ⊑ cons(x; l3′), we may use the rule SL:Keep:

l1′ ⊑ l3′
--------------------------- SL:Keep
cons(x; l1′) ⊑ cons(x; l3′)

The premise is justified by our previous reasoning.

Sub-case
l1 ⊑ l2′
----------------- SL:Drop
l1 ⊑ cons(x; l2′)

Observe that by the premise, we must know l1 ⊑ l2′. Since we know l1 ⊑ l2′ and l2′ ⊑ l3′
via sub-derivations, we may invoke the inductive hypothesis to get that l1 ⊑ l3′.
To show l1 ⊑ l3, i.e. l1 ⊑ cons(x; l3′), we may use the rule SL:Drop:

l1 ⊑ l3′
----------------- SL:Drop
l1 ⊑ cons(x; l3′)

The premise is justified by our previous reasoning.

In all three sub-cases, we showed that l1 ⊑ l3, as desired, so this case holds.

[4] Observe that we may rename variables in the rule as we wish.
[5] Notice that here, there is no immediate reasoning to do; it depends on how we know l1 ⊑ cons(x; l2′). Therefore, we consider nested rule induction, breaking down our derivation of l1 ⊑ cons(x; l2′).
[6] As a reminder, we say a case is vacuous when it is immediately contradictory for structural reasons. In this example, we clearly explained the vacuousness, but in your proofs, you may simply state that a particular case (or all other cases not considered) are vacuous, so long as it is true.

Case
l2 ⊑ l3′
----------------- SL:Drop
l2 ⊑ cons(x; l3′)

Here, we know l3 = cons(x; l3′). Observe that by the premise, we must know l2 ⊑ l3′. It
remains to show that l1 ⊑ l3; in other words, l1 ⊑ cons(x; l3′). Since we assumed l1 ⊑ l2 and
we have a sub-derivation showing l2 ⊑ l3′, we may invoke the inductive hypothesis to get that
l1 ⊑ l3′. To show l1 ⊑ cons(x; l3′), we may use the rule SL:Drop:

l1 ⊑ l3′
----------------- SL:Drop
l1 ⊑ cons(x; l3′)

The premise is justified by our previous reasoning.

In all three cases, we showed that l1 ⊑ l3, as desired. Thus, the claim holds.
In this proof, we were fairly verbose about justifying the reasoning. You need not be this verbose
in the proofs you submit, although you should be just as careful in structuring your proofs.
A close look at our proof of Theorem 2.2 reveals that it is constructive. A constructive proof does
not only show that an object exists but also how to construct it. Almost all the proofs in this
course are constructive. In our case, the proof shows how to construct a derivation of the judgment
l1 ⊑ l3 from derivations of the judgments l1 ⊑ l2 and l2 ⊑ l3. Consider for example the case of
SL:Drop in the proof. In this case, the root of the derivation tree of l2 ⊑ l3 looks as follows.

   T1
l2 ⊑ l3′
----------------- SL:Drop
l2 ⊑ cons(x; l3′)

In particular, l3 = cons(x; l3′) and we have a derivation tree T1 for l2 ⊑ l3′. The proof then shows
how to construct a derivation tree for l1 ⊑ l3. First, per the induction hypothesis, the proof recursively
shows us how to construct a derivation tree T2 for l1 ⊑ l3′ (using T1 and the derivation tree for
l1 ⊑ l2). Then we can apply the rule SL:Drop to get a derivation tree for l1 ⊑ l3:

   T2
l1 ⊑ l3′
----------------- SL:Drop
l1 ⊑ cons(x; l3′)
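To make the constructive reading concrete, we can represent derivations as data and write the proof of Theorem 2.2 as a program that transforms derivations. The datatype deriv below is our own illustration (it is not part of the assignment), but the function trans follows the proof's case analysis exactly.

(* A derivation of a sublist judgment, recording the list element used at
   each SL:Keep / SL:Drop step. *)
datatype 'a deriv =
    SLNil                      (* SL:Nil:  concludes that nil is a sublist of nil *)
  | SLKeep of 'a * 'a deriv    (* SL:Keep: from l1 sub l2, concludes cons(x; l1) sub cons(x; l2) *)
  | SLDrop of 'a * 'a deriv    (* SL:Drop: from l1 sub l2, concludes l1 sub cons(x; l2) *)

(* trans (d12, d23): from derivations of l1 sub l2 and l2 sub l3,
   construct a derivation of l1 sub l3, as in the proof of Theorem 2.2. *)
fun trans (d12, SLNil) = d12                                       (* case SL:Nil *)
  | trans (d12, SLDrop (x, d23')) = SLDrop (x, trans (d12, d23'))  (* case SL:Drop *)
  | trans (SLKeep (x, d12'), SLKeep (_, d23')) =                   (* case SL:Keep, sub-case SL:Keep *)
      SLKeep (x, trans (d12', d23'))
  | trans (SLDrop (x, d12'), SLKeep (_, d23')) =                   (* case SL:Keep, sub-case SL:Drop *)
      SLDrop (x, trans (d12', d23'))
  | trans (SLNil, SLKeep _) =                                      (* vacuous sub-case *)
      raise Fail "impossible for derivations of matching judgments"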
Task 2.3 (10 pts). Consider the following derivation:
--------- SL:Nil
nil ⊑ nil
------------------ SL:Drop
nil ⊑ cons(2; nil)
------------------------------------ SL:Keep
cons(1; nil) ⊑ cons(1; cons(2; nil))
Recall our earlier derivation tree, as well:
--------- SL:Nil
nil ⊑ nil
--------------------------- SL:Keep
cons(2; nil) ⊑ cons(2; nil)
------------------------------------ SL:Drop
cons(2; nil) ⊑ cons(1; cons(2; nil))
--------------------------------------------- SL:Drop
cons(2; nil) ⊑ cons(3; cons(1; cons(2; nil)))
------------------------------------------------------ SL:Drop
cons(2; nil) ⊑ cons(5; cons(3; cons(1; cons(2; nil))))
------------------------------------------------------------------------ SL:Keep
cons(1; cons(2; nil)) ⊑ cons(1; cons(5; cons(3; cons(1; cons(2; nil)))))
We abbreviate:
l1 = cons(1; nil)
l2 = cons(1; cons(2; nil))
l3 = cons(1; cons(5; cons(3; cons(1; cons(2; nil)))))
Combined, these trees show that l1 ⊑ l2 and l2 ⊑ l3. Using these two derivations, what derivation
of l1 ⊑ l3 does our proof of Theorem 2.2 find? Briefly (in a few sentences) explain your
reasoning.
Hint. There are two different ways to derive l1 ⊑ l3; be careful to use the derivation of l2 ⊑ l3
above, rather than the one you constructed in a previous task.
2.2 List Containment
Now, consider another judgment, x ∈ l, expressing that x is an element of list l:

--------------
x ∈ cons(x; l)

x ∈ l
--------------
x ∈ cons(y; l)
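Read as an algorithm (in the same sketchy spirit as the sublist checker above, and again not required for this assignment), the containment judgment is just a recursive membership test:

(* mem (x, l): is the judgment "x is an element of l" derivable? *)
fun mem (_, Nil) = false                            (* no rule concludes membership in nil *)
  | mem (x, Cons (y, l)) = x = y orelse mem (x, l)  (* the first rule, or recurse by the second *)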
The following theorem relates the sublist judgment to the containment judgment:
Theorem 2.3. If l1 ⊑ l2 and x ∈ l1, then x ∈ l2.
Task 2.4 (25 pts). Prove Theorem 2.3 by rule induction.
Hint. Go by rule induction on l1 ⊑ l2. Some cases may be vacuous. In some cases, you will need
to use an inner rule induction on x ∈ l1.
3 Binding and Scope
3.1 Free Variables
Recall the definition of the judgment x ∈ FV(M) from the λ-calculus supplement. We could define
x ∉ FV(M) to hold whenever x ∈ FV(M) does not hold. However, we can also define the judgment
x ∉ FV(M) inductively, without using x ∈ FV(M).
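For intuition only, here is how the given judgment x ∈ FV(M) can be read as a recursive check over a naive named representation of λ-terms. The datatype tm below is hypothetical (it is not the representation you will use in Section 4), and this sketch says nothing about how to define x ∉ FV(M) by rules, which is your task.

(* A naive named representation of lambda-terms, for illustration only. *)
datatype tm = Var of string | Lam of string * tm | Ap of tm * tm

(* freeIn x m: does x occur free in m?  (The judgment "x in FV(M)".) *)
fun freeIn x (Var y)       = x = y
  | freeIn x (Lam (y, m))  = x <> y andalso freeIn x m
  | freeIn x (Ap (m1, m2)) = freeIn x m1 orelse freeIn x m2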
Task 3.1 (20 pts). Recall the definition of the judgment x ∈ FV(M) from the PFPL Supplement
(http://www.cs.cmu.edu/~rwh/pfpl/supplements/ulc.pdf).

1. Give an inductive definition of the judgment x ∉ FV(M) by exhibiting a set of rules
stating positively that x is not free in M.
Be careful! Do not define the judgment x ∉ FV(M) to be the logical negation of x ∈ FV(M), as they are
two separately defined judgments. As a matter of fact, you are not allowed to refer to the
judgment x ∈ FV(M) in your rules. This idea of logical complementation accurately specifies
the behavior of the judgment, but doesn't inform its implementation.

As such, restated in terms of this semantic specification, define a judgment x ∉ FV(M) such that:

(a) for all x and M, x ∉ FV(M) is derivable if x ∈ FV(M) is not derivable;
(b) for all x and M, x ∉ FV(M) is not derivable if x ∈ FV(M) is derivable.
2. Prove that if x var and M tm, then either x ∈ FV(M) or x ∉ FV(M). In other words, prove that
at least one of these two statements is true. You are not required to prove that exactly
one of them is true, although your definition of the judgment x ∉ FV(M) should satisfy that.
Proceed by rule induction on M tm, making use of the fact that if x var and y var, then either
x = y or x ≠ y.
3.2 Substitution
Recall the definition of {M/x}N = P from the PFPL Supplement. Unfortunately, {M/x}N = P
is undefined for certain terms M and N , where the substitution could capture variables. To avoid
this issue, we have introduced α-equivalence to make substitution a total operation that is defined
for all terms.
Theorem 3.1. For all M tm, x var, and N tm, there exist N′ tm and P′ tm such that N′ ≡α N
and {M/x}N′ = P′.[7]

[7] Moreover, it is the case that if N′′ ≡α N′ and {M/x}N′′ = P′′, then P′′ ≡α P′.
Task 3.2 (10 pts). Give some M, N, N′, and P′ such that:

1. there does not exist a P such that {M/x}N = P;
2. N′ ≡α N;
3. {M/x}N′ = P′.
Briefly explain why these properties hold for your solution.
4 Implementing the λ-Calculus
We have observed that it is convenient to work up to α-equivalence, ignoring the names of bound
variables. In this section, we will build a tool in Standard ML for representing α-equivalence classes
of λ-terms.[8]
The tool you will build can be generalized to support arbitrary abstract binding trees (ABTs) as
defined in PFPL, Chapter 1. By implementing a language as simple as the untyped λ-calculus
(abbreviated as ULC in the code), you will grapple with fundamental ideas; then, in the rest of the
assignments, we will provide you with analogous ABT structures for more complex languages.
4.1 Variables
A variable in a λ-term is a placeholder for a λ-term, which can be plugged in for the variable
using substitution. To work with variables, we introduce the TEMP signature, partially reproduced
below. Here, “temp” stands for “temporary”, since the exact identity of a variable is only kept
until the variable is bound, as we will see in the next section.
var/temp.sig (excerpt)

signature TEMP =
sig
  type t

  (* Creates a new, globally unique temp. *)
  val new : string -> t

  (* Tests whether two temps are equal. *)
  val equal : t * t -> bool

  (* Provides a string representation of the globally unique temp. *)
  val toString : t -> string

  structure Dict : DICT where type key = t
end
Values of type t represent variables. A key idea is the notion of a new or fresh variable –
one that has not been seen before in some given context. There are several ways to guarantee
freshness relative to a context, but one simple way is to just ignore the context and make sure
every freshly generated variable is globally unique. This is achieved by using global state be-
hind the scenes. As regards your ABT implementation, this shows up in the above signature as
val new : string -> t , where the string value being passed in is entirely up to you (the
user), as it’s primarily there for human readability.
[8] If you are ambitious and have the time, it is a very good exercise to try to implement abstract binding trees
yourself, without the benefit of any theoretical understanding or the techniques suggested here, and watch yourself
suffer and fail miserably.
For example, if Var : TEMP, then Var.toString (Var.new "x") might give "x1", whereas
running it again would give "x2", and running with "y" would give "y3".
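For concreteness, one simple way to realize such a structure is with a single global counter, as in the sketch below. This is only an illustration of the idea (the course-provided implementation may differ, and it also supplies the Dict substructure, which we omit here):

structure Temp (* would ascribe to TEMP, minus the omitted Dict substructure *) =
struct
  type t = string * int

  (* Global state: every call to new bumps this counter, so temps are globally unique. *)
  val counter = ref 0

  fun new (name : string) : t =
    (counter := !counter + 1; (name, !counter))

  (* Only the counter determines identity; the name is just for readability. *)
  fun equal ((_, n1) : t, (_, n2) : t) = n1 = n2

  fun toString ((name, n) : t) = name ^ Int.toString n
end

With this sketch, Temp.toString (Temp.new "x") evaluates to "x1", a second call gives "x2", and a call with "y" gives "y3", matching the behavior described above.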
You will only need to work with an abstract structure ascribing to the signature.
4.2 α-Equivalence Classes of λ-Terms
Now, we introduce the signature for λ-terms. This is a version of an interface that has been
developed here at CMU over numerous compiler development efforts, designed to help users avoid
common errors.
lang-ulc/ulc.abt.sig

signature ULC =
sig
  structure Term :
  sig
    type termVar
    type term
    type t = term

    structure Var : TEMP where type t = termVar

    datatype view = Var of termVar | Lam of termVar * term | Ap of term * term

    val Var' : termVar -> term
    val Lam' : termVar * term -> term
    val Ap'  : term * term -> term

    val into : view -> term
    val out  : term -> view
    val aequiv : term * term -> bool
    val toString : term -> string

    val subst : term -> termVar -> term -> term
  end
end
First, observe that ULC contains a single sub-structure, Term . In languages with more than one
sort (e.g., a sort for expressions and a sort for types), we will have one sub-structure per sort.
4.2.1 Variables
In the Term structure, we have structure Var : TEMP ; this will provide us with all the necessary
operations on variables. We also have type termVar , synonymous with Var.t .
4.2.2 Terms and Views
Next, we introduce type term and type t, which are synonymous.[9] Values of type Term.t
will be (α-equivalence classes of) λ-terms. Importantly, type t is abstract : its representation will
be internal to the structure.
Access to the representation type t is mediated by a view, which is a non-recursive datatype
view with a pair of functions into and out . The idea behind a view is that it is a type whose
values represent a one-step unfolding of a value of the abstract type t . The function out does
this one-step unfolding of a λ-term to create a view, and into puts a view back together into a term.

The function out has type term -> view, taking a genuine λ-term and making it visible to the
user. Let m : term; we describe the behavior of out m as follows:
1. Suppose m represents x, where x is a variable. Then, out m would return Var x , where
the value x : Variable.t represents the variable x.
2. Suppose m represents the lambda abstraction λ(x. M′). Then, out m would return
Lam (y, m'), where y represents a fresh variable y and m' represents the λ-term {y/x}M′,
which is well-defined because y is (globally) fresh.
In other words, the out operation renames the variable bound by an abstractor when unpack-
ing it! This is one of the essential features of this interface, and implementing this correctly
will save you many hours of tracking down bugs involving binding and scope.
3. Suppose m represents ap(M1 ;M2 ). Then, out m should return Ap (m1, m2) , where m1
represents the λ-term M1 and m2 represents the λ-term M2. Observe that m1 and m2 are
of type term , not of type view . This is what we mean by “one-step unfolding”.
The function into has type view -> term , folding a one-step unfolding (that is, a view) back
into a λ-term.
1. Suppose x : Variable.t represents the variable x. Then, into (Var x) will return an
internal form representing the variable term x.
2. Suppose x : Variable.t represents the variable x and m : term represents the λ-termM .
Then, into (Lam (x, m)) will return an internal form representing the λ-term λ(x . M ).
3. Suppose m1 : term represents the λ-term M1 and m2 represents the λ-term M2. Then,
into (Ap (m1, m2)) will return an internal form representing the λ-term ap(M1 ;M2 ).
For convenience, we provide functions which simply compose constructors with into :
val Var' = into o Var
val Lam' = into o Lam
val Ap'  = into o Ap
[9] In SML, it is common practice to define type foo and type t = foo in structure Foo, so the signature
can use foo while clients of the structure may use Foo.t.
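As a small example of programming against this interface (a sketch, assuming a binding structure Term = ULC.Term is in scope, as in the test session shown later), the function below counts the ap nodes in a term by unfolding it one step at a time with out:

(* countAps m: the number of ap nodes in the term represented by m. *)
fun countAps (m : Term.term) : int =
  case Term.out m of
    Term.Var _       => 0
  | Term.Lam (_, m') => countAps m'     (* note: out freshly renames the bound variable *)
  | Term.Ap (m1, m2) => countAps m1 + countAps m2 + 1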
Task 4.1 (5 pts). Suppose v is a value of type view . Will out (into v) be equal to v ? (Here,
equality does not mean α-equivalence.) If so, explain why; if not, give a simple counterexample.
Think carefully about what effect the renaming out can perform before answering this question.
4.2.3 Auxiliary Functions
We can test α-equivalence with the aequiv function, which returns true if its two arguments
are α-equivalent, and false otherwise.
Additionally, we can use the subst function to substitute λ-terms in for free variables. Suppose
m1 represents M1, x represents x, and m represents M. Then, subst m1 x m will return
{M1/x}M, which by Theorem 3.1 must be defined since m represents an α-equivalence class of
terms.

Finally, we have a convenience function toString : term -> string for producing a string
representation of a term. Note that toString uses out, so the variables in the string representation
will be different on each call of toString.
4.3 Implementing ABTs
One of the inconveniences of a naive representation of λ-terms is that α-equivalent terms can have
multiple representations, so implementing aequiv and other helper functions becomes tricky. We
will look at a more sophisticated representation, called locally nameless form or de Bruijn indexed
form, which avoids this problem, so that each α-equivalence class is represented with a single data
value, and thus α-equivalence can be tested for with a simple structural traversal.
The main idea is to observe that variables in a λ-term serve two roles.
1. First, they can appear free — that is, they can be a name for a yet-to-be-determined term.
2. Second, they can appear bound, in which case the variable occurrences simply refer back to
the location of the λ abstraction.
For example, consider the following λ-term:
λ(x . λ( y . ap(x ; ap( y ; z ) ) ) )
The use of x refers back to the variable from the outer λ abstraction; similarly, the use of y refers
back to the variable from the inner λ-abstraction. However, z is a free variable. The only reason we
need the names of the bound variables x and y is to distinguish one abstraction site from another;
the names themselves are irrelevant from a semantic point-of-view. (This is just another way of
saying that we want to identify terms up to α-equivalence, of course.)
In a locally nameless representation, we distinguish these two roles in our data structure, in order to
make α-equivalence implementable as structural equality on terms. The trick in this representation
is to exploit a structural invariant of λ-terms: in any λ abstraction λ(x .M ), the only occurrences
of x are within M , up to α-equivalence.
As a result of this fact, we have a unique path “upward” from each bound variable to the abstractor
that bound it. We can represent this path as a number that tells us how many binders we have to
hop over before we reach the one we’re interested in. Thus, we define type term as follows:
datatype term
= BV of int
| FV of Variable.t
| LAM of term
| AP of term * term
Here, BV stands for “bound variable” and stores the (non-negative) number of binders between the
variable and the binding site, and FV stands for “free variable” and stores a genuine variable. Notice
the difference between datatype term (recursive) and datatype view (non-recursive).
Consider the following diagram of λx.x(λy.(λz.x)(y)(z(λw.y))).

[Diagram: each λ abstraction is drawn in a box, and each variable occurrence is labeled with its
representation; for example, the occurrence of x applied at the top level is labeled x := BV 0,
while the free occurrence of z is labeled z := FV z.]
We've put a box around every λ abstraction and labeled each bound variable with its bound variable
number. We can calculate the bound variable number by looking at each path from an abstraction
to its use sites and counting the number of abstractions crossed along the way:
Path                                      Variable #
λx. → ap → x                              0
λx. → ap → λy. → ap → ap → λz. → x        2
λy. → ap → ap → y                         0
λy. → ap → ap → λw. → y                   1
An important fact to notice about these paths is that even for the same binder, each occurrence
of its bound variable can have a different bound variable number, depending on the number of
abstractions we crossed over to reach that variable occurrence.
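For instance, the smaller term λ(x . λ( y . ap(x ; ap( y ; z ) ) ) ) from the beginning of this section would be written in this representation roughly as follows (a sketch; it assumes a structure Variable ascribing to TEMP, matching the FV constructor above):

(* lam(x. lam(y. ap(x; ap(y; z)))) in locally nameless form:
   inside the inner lambda, x is 1 binder away, y is 0 binders away, and z stays free. *)
val z = Variable.new "z"
val example = LAM (LAM (AP (BV 1, AP (BV 0, FV z))))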
Task 4.2 (40 pts). Implement the rest of structure ULC in lang-ulc/ulc.abt.sml using a
locally nameless representation. Specifically, you should implement into , out , aequiv , and
subst according to the specifications above.
Hint. When implementing this structure, you will find it helpful to define two recursive helper
functions bind (for into ) and unbind (for out ).
• bind should take a variable x : Variable.t, some i : int, and a term m : term,
returning an updated version of m with all (free) occurrences of x bound, assuming the
abstraction site is i layers away.

• unbind should take a variable x : Variable.t, some i : int, and a term m : term,
returning an updated version of m with the bound variable i layers away replaced with the free
variable x.
Hint. Note that one of the invariants of the locally nameless representation is that the case BV n
only occurs once you’ve gone beneath a binder. So, it can’t happen at the top level of a term.
Hint. Your implementations of aequiv and subst should be fairly simple, since they need not
worry about variable capture due to the locally nameless representation.
To test your code, you can load lang-ulc/sources.cm:

smlnj -m lang-ulc/sources.cm
- structure Term = ULC.Term;
- val x = Term.Var.new "x";
val x = - : Term.Var.t
- val i = Term.Lam' (x, Term.Var' x);
val i = - : ?.ULC.term
- Term.toString i;
val it = "(Lam (x2, x2))" : string
- Term.toString i;
val it = "(Lam (x3, x3))" : string
- Term.aequiv (i, i);
val it = true : bool
- Term.aequiv (i, Term.Lam' (x, Term.Var' (Term.Var.new "x")));
val it = false : bool
- val y = Term.Var.new "y";
val y = - : Term.Var.t
- Term.toString (Term.subst (Term.Lam' (x, Term.Var' y)) x
    (Term.Ap' (Term.Var' x, Term.Var' x)));
val it = "(Ap ((Lam (x6, y5)), (Lam (x7, y5))))" : string
5 β-Equivalence: Calculation in the λ-Calculus
In the λ-calculus, β-equivalence tells us how to compute! In this section, you will code in the
λ-calculus and implement a simple interpreter based on β-equivalence. Note that you may attempt
the following subsections in either order, depending on your preference.
5.1 Church Encodings
At first glance, it would seem that the λ-calculus is too simple to be useful. While it has (higher-
order) functions, what about data structures like booleans? Surprisingly, booleans (and all other
data structures!) need not be included as primitives, since as we saw in lecture, they can be defined
inside the λ-calculus.
5.1.1 Booleans
Consider the following encoding of booleans:
true ≜ λt.λf.t
false ≜ λt.λf.f
if(M; M1; M0) ≜ M(M1)(M0)

Here, if(M; M1; M0) is the ABT form of the concrete syntax if M then M1 else M0. Observe
that the booleans themselves are “active” data: given a “then” branch M1 and an “else” branch
M0, the boolean itself is an algorithm that picks between them.
How do we argue this encoding is correct? Our specification of booleans should require that:
if(true; M1; M0) ≡β M1
if(false; M1; M0) ≡β M0
We can validate these laws using the axioms of ≡β, stated in the PFPL Supplement; we give
the following derivations as a chain of β-equivalences, since ≡β is postulated to be reflexive and
transitive.
if(true; M1; M0) ≜ true(M1)(M0)
                 ≜ (λt.λf.t)(M1)(M0)
                 ≡β (λf.M1)(M0)
                 ≡β M1

if(false; M1; M0) ≜ false(M1)(M0)
                  ≜ (λt.λf.f)(M1)(M0)
                  ≡β (λf.f)(M0)
                  ≡β M0
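If you would like to experiment with these encodings using the Term structure from Section 4, they can be built with the primed constructors. This is just a sketch (the variable names and bindings below are our own, not part of the starter code):

val t = Term.Var.new "t"
val f = Term.Var.new "f"

(* Church booleans: true selects its first argument, false selects its second. *)
val true'  = Term.Lam' (t, Term.Lam' (f, Term.Var' t))
val false' = Term.Lam' (t, Term.Lam' (f, Term.Var' f))

(* if(M; M1; M0): apply M to M1 and then to M0. *)
fun if' (m, m1, m0) = Term.Ap' (Term.Ap' (m, m1), m0)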
5.1.2 Pairs
We can also define pairs in the λ-calculus:
⟨M1, M2⟩ ≜ λk.k(M1)(M2)
M · l ≜ M(λx1.λx2.x1)
M · r ≜ M(λx1.λx2.x2)

Here, M · l and M · r mean "get the left/right component of pair M".
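As with the booleans, these encodings can also be built against the Term interface. The sketch below (again our own illustration, not part of the starter code) generates a fresh temp at each call, so the bound variable cannot collide with anything free in the arguments:

(* <M1, M2> = lam(k. ap(ap(k; M1); M2)) *)
fun pair (m1, m2) =
  let val k = Term.Var.new "k"
  in Term.Lam' (k, Term.Ap' (Term.Ap' (Term.Var' k, m1), m2)) end

(* M.l and M.r apply M to the appropriate projection function. *)
fun projL m =
  let val (x1, x2) = (Term.Var.new "x1", Term.Var.new "x2")
  in Term.Ap' (m, Term.Lam' (x1, Term.Lam' (x2, Term.Var' x1))) end

fun projR m =
  let val (x1, x2) = (Term.Var.new "x1", Term.Var.new "x2")
  in Term.Ap' (m, Term.Lam' (x1, Term.Lam' (x2, Term.Var' x2))) end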
To validate this encoding, we specify that the following equivalences should hold:
⟨M1,M2⟩ · l ≡β M1
⟨M1,M2⟩ · r ≡β M2
Task 5.1 (10 pts). Validate the encoding of pairs by giving a derivation of the given equivalences
via chains of β-equivalences. Please make sure to justify each β-equivalence step by citing the
corresponding rule (as presented in the λ-calculus supplement).
5.1.3 Options
Now, it’s your turn!
Task 5.2 (15 pts). In tests/option.lc , define the following operations:
none ≜ ...
some(M) ≜ ...
case(M; M0; M1) ≜ ...