Smallstep: Small-step operational semantics
The evaluators we have seen so far (e.g., the ones for
aexps, bexps, and commands) have been formulated in a
"big-step" style -- they specify how a given expression can
be evaluated to its final value (or a command plus a store to
a final store) "all in one big step."
This style is simple and natural for many purposes, but it has some shortcomings. In particular, suppose we enriched the language of while programs with boolean variables in addition to the numeric ones. In this language, a command might fail to map a given starting state to any ending state for two quite different reasons: either because the execution gets into an infinite loop or because, at some point, the program tries to do an operation that makes no sense, such as taking the successor of a boolean variable, and none of the evaluation rules can be applied.
These two outcomes -- nontermination vs. getting stuck in an erroneous configuration -- need to be treated differently: we want to allow the first (permitting the possibility of infinite loops is the price we pay for the convenience of being able to program with general looping constructs like while) but prevent the second, for example by adding some form of typechecking to the language. (Indeed, this will be a major topic for the rest of the course.)
As a first step, it is useful to introduce a different way of defining how programs behave -- replacing the "big-step" eval relation with a "small-step" relation that specifies, for a given program, how just the FIRST atomic step of computation is to be performed.
To save space in the discussion, let's work with an
incredibly simple language containing just constants and
addition.
Inductive tm : Type :=
  | tm_const : nat -> tm
  | tm_plus : tm -> tm -> tm.
Tactic Notation "tm_cases" tactic(first) tactic(c) :=
first;
[ c "tm_const" | c "tm_plus" ].
Module SimpleArith1.
Here is a standard big-step evaluator for this language, in
exactly the same style as we've been using up to this
point.
Inductive eval : tm -> nat -> Prop :=
  | E_Const : forall n,
      eval (tm_const n) n
  | E_Plus : forall t1 t2 n1 n2,
      eval t1 n1 ->
      eval t2 n2 ->
      eval (tm_plus t1 t2) (plus n1 n2).
End SimpleArith1.
Here is a slight variation (still in "big-step" style) where
the final result of evaluating a term is also a term.
Inductive eval : tm -> tm -> Prop :=
  | E_Const : forall n1,
      eval (tm_const n1) (tm_const n1)
  | E_Plus : forall t1 n1 t2 n2,
      eval t1 (tm_const n1)
      -> eval t2 (tm_const n2)
      -> eval (tm_plus t1 t2) (tm_const (plus n1 n2)).
Tactic Notation "eval_cases" tactic(first) tactic(c) :=
first;
[ c "E_Const" | c "E_Plus" ].
Module SimpleArith2.
Now, here is the small-step variant.
Inductive step : tm -> tm -> Prop :=
  | ST_PlusConstConst : forall n1 n2,
      step (tm_plus (tm_const n1) (tm_const n2))
           (tm_const (plus n1 n2))
  | ST_Plus1 : forall t1 t1' t2,
      (step t1 t1')
      -> step (tm_plus t1 t2)
              (tm_plus t1' t2)
  | ST_Plus2 : forall n1 t2 t2',
      (step t2 t2')
      -> step (tm_plus (tm_const n1) t2)
              (tm_plus (tm_const n1) t2').
A few things to notice:
- We are defining just a single reduction step, in which
one tm_plus node is replaced by its value.
- Each step finds the leftmost tm_plus node that is "ready
to go" (both of its operands are constants) and reduces it.
The first rule tells how to reduce this tm_plus node
itself; the other two rules tell how to find it.
- A term that is just a constant cannot take a step.
Tactic Notation "step_cases" tactic(first) tactic(c) :=
first;
[ c "ST_PlusConstConst" | c "ST_Plus1" | c "ST_Plus2" ].
A couple of examples of reasoning with the step
relation...
If t1 can take a step to t1', then tm_plus t1 t2 steps
to tm_plus t1' t2:
Example test_step_1 :
step
(tm_plus
(tm_plus (tm_const 0) (tm_const 3))
(tm_plus (tm_const 2) (tm_const 4)))
(tm_plus
(tm_const (plus 0 3))
(tm_plus (tm_const 2) (tm_const 4))).
Proof.
apply ST_Plus1. apply ST_PlusConstConst. Qed.
Right-hand sides of sums can take a step only when the
left-hand side is finished: if t2 can take a step to t2',
then tm_plus (tm_const n) t2 steps to tm_plus (tm_const n)
t2':
Example test_step_2 :
step
(tm_plus
(tm_const 0)
(tm_plus
(tm_const 2)
(tm_plus (tm_const 0) (tm_const 3))))
(tm_plus
(tm_const 0)
(tm_plus
(tm_const 2)
(tm_const (plus 0 3)))).
Proof.
(* FILL IN HERE *) Admitted.
☐
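One way the proof can go (a possible solution, under a primed name so it does not clash with the exercise above): the step happens inside the right-hand summand, so we descend with ST_Plus2 twice and then reduce with ST_PlusConstConst.
Example test_step_2' :
  step
    (tm_plus
      (tm_const 0)
      (tm_plus
        (tm_const 2)
        (tm_plus (tm_const 0) (tm_const 3))))
    (tm_plus
      (tm_const 0)
      (tm_plus
        (tm_const 2)
        (tm_const (plus 0 3)))).
Proof.
  apply ST_Plus2. apply ST_Plus2. apply ST_PlusConstConst. Qed.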
One interesting property of the step relation is that, like
the evaluation relation for our language of While programs, it
is deterministic: for each t, there is at most one t'
such that step t t' is provable. Formally, this is the same
as saying that step is a partial function.
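Recall that a relation is a partial function when related outputs are unique -- roughly:
(* partial_function R  :=  forall x y1 y2, R x y1 -> R x y2 -> y1 = y2 *)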
Proof sketch: We must show that if x steps to both y1 and
y2 then y1 and y2 are equal. Consider the last rules
used in the derivations of step x y1 and step x y2.
- If both are ST_PlusConstConst, the result is immediate.
- It cannot happen that one is ST_PlusConstConst and the
other is ST_Plus1 or ST_Plus2, since this would imply
that x has the form tm_plus t1 t2 where both t1 and
t2 are constants (by ST_PlusConstConst) AND one of t1
or t2 has the form tm_plus ....
- Similarly, it cannot happen that one is ST_Plus1 and the
other is ST_Plus2, since this would imply that x has
the form tm_plus t1 t2 where t1 has both the form
tm_plus t1 t2 and the form tm_const n.
- The cases when both derivations end with ST_Plus1 or ST_Plus2 follow by the induction hypothesis.
Theorem step_deterministic :
partial_function step.
Proof.
unfold partial_function. intros x y1 y2 Hy1 Hy2.
generalize dependent y2.
step_cases (induction Hy1) Case.
Case "ST_PlusConstConst". intros y2 Hy2. step_cases (inversion Hy2) SCase.
SCase "ST_PlusConstConst". reflexivity.
SCase "ST_Plus1". inversion H2.
SCase "ST_Plus2". inversion H2.
Case "ST_Plus1". intros y2 Hy2. step_cases (inversion Hy2) SCase.
SCase "ST_PlusConstConst".
rewrite <- H0 in Hy1. inversion Hy1.
SCase "ST_Plus1".
rewrite <- (IHHy1 t1'0).
reflexivity. assumption.
SCase "ST_Plus2". rewrite <- H in Hy1. inversion Hy1.
Case "ST_Plus2". intros y2 Hy2. step_cases (inversion Hy2) SCase.
SCase "ST_PlusConstConst". rewrite <- H1 in Hy1. inversion Hy1.
SCase "ST_Plus1". inversion H2.
SCase "ST_Plus2".
rewrite <- (IHHy1 t2'0).
reflexivity. assumption. Qed.
End SimpleArith2.
Before we move on, let's take a moment to slightly generalize
the way we state the definition of single-step reduction.
It is useful to think of the step relation as defining a sort of abstract machine for evaluating programs:
- At any moment, the state of the machine is a term.
- A step of the machine is an atomic unit of computation
-- a single "add" operation, in the case of the present language.
- The final states of the machine are ones where there is no more computation to be done.
We can execute a term t as follows:
- Take t as the starting state of the machine.
- Repeatedly use the step relation to find a sequence of
machine states such that each steps to the next.
- When no more reduction is possible, "read out" the final state of the machine as the result of execution.
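Intuitively, the final states of the machine are terms of the form tm_const n for some n. We call such terms values.
Inductive value : tm -> Prop :=
  | v_const : forall n, value (tm_const n).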
Having introduced the idea of values, we can use it in the
definition of the step relation to write the ST_Plus2 rule in
a slightly more intuitive way:
Inductive step : tm -> tm -> Prop :=
  | ST_PlusConstConst : forall n1 n2,
      step (tm_plus (tm_const n1) (tm_const n2))
           (tm_const (plus n1 n2))
  | ST_Plus1 : forall t1 t1' t2,
      (step t1 t1')
      -> step (tm_plus t1 t2)
              (tm_plus t1' t2)
  | ST_Plus2 : forall v1 t2 t2',
      (value v1)                          (* <----- *)
      -> (step t2 t2')
      -> step (tm_plus v1 t2)
              (tm_plus v1 t2').
Tactic Notation "step_cases" tactic(first) tactic(c) :=
first;
[ c "ST_PlusConstConst" | c "ST_Plus1" | c "ST_Plus2" ].
As a sanity check on this change, let's re-verify determinacy.
Proof sketch: We must show that if x steps to both y1 and y2 then y1 and y2 are equal. Consider the final rules used in the derivations of step x y1 and step x y2.
- If both are ST_PlusConstConst, the result is immediate.
- It cannot happen that one is ST_PlusConstConst and the
other is ST_Plus1 or ST_Plus2, since this would imply
that x has the form tm_plus t1 t2 where both t1 and
t2 are constants (by ST_PlusConstConst) AND one of
t1 or t2 has the form tm_plus ....
- Similarly, it cannot happen that one is ST_Plus1 and
the other is ST_Plus2, since this would imply that x
has the form tm_plus t1 t2 where t1 both has the form
tm_plus t1 t2 and is a value (hence has the form
tm_const n).
- The cases when both derivations end with ST_Plus1 or ST_Plus2 follow by the induction hypothesis. ☐
Most of this proof is the same as the one above. But to get
maximum benefit from the exercise you should try to write it
from scratch and just use the earlier one if you get
stuck.
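The statement is the same as before:
Theorem step_deterministic :
  partial_function step.
Proof.
  (* FILL IN HERE *) Admitted.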
☐
A fundamental property of this language is that every term is
either a value or it can "make progress" by stepping to some
other term. This property is called progress.
Proof sketch: By induction on t.
- If t is a constant, then it is a value.
- If t = tm_plus t1 t2, then by the IH t1 and t2 are
either values or can take steps under step.
- If t1 and t2 are both values, then t can take a
step, by ST_PlusConstConst.
- If t1 is a value and t2 can take a step, then so can
t, by ST_Plus2.
- If t1 can take a step, then so can t, by ST_Plus1.
Theorem progress : forall t,
value t \/ (exists t', step t t').
Proof.
tm_cases (induction t) Case.
Case "tm_const". left. apply v_const.
Case "tm_plus". right. inversion IHt1.
SCase "l". inversion IHt2.
SSCase "l". inversion H. inversion H0.
exists (tm_const (plus n n0)).
apply ST_PlusConstConst.
SSCase "r". inversion H0 as [t' H1].
exists (tm_plus t1 t').
apply ST_Plus2. apply H. apply H1.
SCase "r". inversion H as [t' H0].
exists (tm_plus t' t2).
apply ST_Plus1. apply H0. Qed.
This property can be extended to tell us something very
interesting about values: they are exactly the terms that
cannot make progress in this sense. To state this fact,
let's begin by giving a name to terms that cannot make
progress: We'll call them normal forms.
Definition normal_form (X:Type) (R:relation X) (t:X) : Prop :=
~ exists t', R t t'.
Implicit Arguments normal_form [X].
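For example, a bare constant is a normal form for step -- there is no rule for reducing a tm_const. (A quick sanity check; the example name is ours.)
Example const_is_nf : normal_form step (tm_const 4).
Proof.
  unfold normal_form. intros contra.
  destruct contra as [t' P]. inversion P. Qed.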
We've actually defined what it is to be a normal form for an
arbitrary relation R over an arbitrary set X, not just
for the particular reduction relation over terms that we are
interested in at the moment. We'll re-use the same
terminology for talking about other relations later in the
course.
We can use this terminology to generalize the observation we
made in the progress theorem: normal forms and values are
actually the same thing.
Note that we state and prove this result as two different lemmas, rather than using an if-and-only-if (<->). That's because it will be easier to apply the separate lemmas later on; as noted before, Coq's facilities for dealing "in-line" with <-> statements are a little awkward.
Lemma value_is_nf : forall t,
value t -> normal_form step t.
Proof.
intros t H. unfold normal_form. intros contra. inversion H.
rewrite <- H0 in contra. destruct contra as [t' P]. inversion P. Qed.
Proof sketch: This is a corollary of progress.
Lemma nf_is_value : forall t,
normal_form step t -> value t.
Proof.
intros t H.
unfold normal_form in H.
assert (value t \/ exists t', step t t') as G.
SCase "Proof of assertion". apply progress.
inversion G.
SCase "l". apply H0.
SCase "r". apply ex_falso_quodlibet. apply H. assumption. Qed.
Why are these last two facts interesting? For two reasons:
- Because value is a syntactic concept -- it is defined
by looking at the form of a term -- while normal_form is
a semantic one -- it is defined by looking at how the term
steps. It is not obvious that these concepts should
coincide!
- Indeed, there are lots of languages in which the concepts of normal form and value do not coincide.
(* -------------------------------------------------- *)
We might, for example, accidentally define value so that it
includes some terms that are not finished reducing.
Module Temp1.
(* Open an inner module so we can redefine value and step. *)
Inductive value : tm -> Prop :=
| v_const : forall n, value (tm_const n)
| v_funny : forall t1 n2, (* <---- *)
value (tm_plus t1 (tm_const n2)).
Inductive step : tm -> tm -> Prop :=
| ST_PlusConstConst : forall n1 n2,
step (tm_plus (tm_const n1) (tm_const n2))
(tm_const (plus n1 n2))
| ST_Plus1 : forall t1 t1' t2,
(step t1 t1')
-> step (tm_plus t1 t2)
(tm_plus t1' t2)
| ST_Plus2 : forall v1 t2 t2',
(value v1)
-> (step t2 t2')
-> step (tm_plus v1 t2)
(tm_plus v1 t2').
Lemma value_not_same_as_normal_form :
exists t, value t /\ ~ normal_form step t.
Proof.
(* FILL IN HERE *) Admitted.
☐
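One possible witness (a sketch, with a name of our own choosing): tm_plus (tm_const 1) (tm_const 1) is a value by v_funny, yet it still reduces by ST_PlusConstConst.
Lemma temp1_value_can_step :
  exists t, value t /\ ~ normal_form step t.
Proof.
  exists (tm_plus (tm_const 1) (tm_const 1)).
  split.
    apply v_funny.
    unfold normal_form. intros H. apply H.
    exists (tm_const (plus 1 1)). apply ST_PlusConstConst. Qed.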
Alternatively, we might accidentally define step so that it
permits something designated as a value to reduce further.
Module Temp2.
Inductive value : tm -> Prop :=
| v_const : forall n, value (tm_const n).
Inductive step : tm -> tm -> Prop :=
| ST_Funny : forall n, (* <---- *)
step (tm_const n)
(tm_plus (tm_const n) (tm_const 0))
| ST_PlusConstConst : forall n1 n2,
step (tm_plus (tm_const n1) (tm_const n2))
(tm_const (plus n1 n2))
| ST_Plus1 : forall t1 t1' t2,
(step t1 t1')
-> step (tm_plus t1 t2)
(tm_plus t1' t2)
| ST_Plus2 : forall v1 t2 t2',
(value v1)
-> (step t2 t2')
-> step (tm_plus v1 t2)
(tm_plus v1 t2').
Lemma value_not_same_as_normal_form :
exists t, value t /\ ~ normal_form step t.
Proof.
(* FILL IN HERE *) Admitted.
☐
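Again, a possible witness (sketch, with our own name): tm_const 1 is a value by v_const, but ST_Funny lets it take a step.
Lemma temp2_value_can_step :
  exists t, value t /\ ~ normal_form step t.
Proof.
  exists (tm_const 1).
  split.
    apply v_const.
    unfold normal_form. intros H. apply H.
    exists (tm_plus (tm_const 1) (tm_const 0)). apply ST_Funny. Qed.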
Finally, we might accidentally define value and step so
that there is some term that is not a value but that cannot
take a step in the step relation. Such terms are said to
be stuck.
Module Temp3.
Inductive value : tm -> Prop :=
| v_const : forall n, value (tm_const n).
Inductive step : tm -> tm -> Prop :=
| ST_PlusConstConst : forall n1 n2,
step (tm_plus (tm_const n1) (tm_const n2))
(tm_const (plus n1 n2))
| ST_Plus1 : forall t1 t1' t2,
(step t1 t1')
-> step (tm_plus t1 t2)
(tm_plus t1' t2).
Note that ST_Plus2 is missing.
Lemma value_not_same_as_normal_form :
exists t, ~ value t /\ normal_form step t.
Proof.
(* FILL IN HERE *) Admitted.
☐
Here is another very simple language whose terms, instead of
being just plus and numbers, are just the booleans true and
false and a conditional expression...
Inductive tm : Type :=
| tm_true : tm
| tm_false : tm
| tm_if : tm -> tm -> tm -> tm.
Inductive value : tm -> Prop :=
| v_true : value tm_true
| v_false : value tm_false.
Inductive step : tm -> tm -> Prop :=
| ST_IfTrue : forall t1 t2,
step (tm_if tm_true t1 t2)
t1
| ST_IfFalse : forall t1 t2,
step (tm_if tm_false t1 t2)
t2
| ST_If : forall t1 t1' t2 t3,
step t1 t1'
-> step (tm_if t1 t2 t3)
(tm_if t1' t2 t3).
Which of the following propositions are provable? (This is
just a thought exercise, but for an extra challenge feel free
to prove your answers in Coq.)
Definition bool_step_prop1 :=
step tm_false tm_false.
(* FILL IN HERE *)
Definition bool_step_prop2 :=
step
(tm_if
tm_true
(tm_if tm_true tm_true tm_true)
(tm_if tm_false tm_false tm_false))
tm_true.
(* FILL IN HERE *)
Definition bool_step_prop3 :=
step
(tm_if
(tm_if tm_true tm_true tm_true)
(tm_if tm_true tm_true tm_true)
tm_false)
(tm_if
tm_true
(tm_if tm_true tm_true tm_true)
tm_false).
(* FILL IN HERE *)
☐
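For instance, the third proposition is provable: the guard is itself a conditional that takes an ST_IfTrue step underneath ST_If. One possible proof (the example name is ours):
Example bool_step_prop3_holds :
  bool_step_prop3.
Proof.
  unfold bool_step_prop3. apply ST_If. apply ST_IfTrue. Qed.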
Just as we proved a progress theorem for plus expressions, we
can do so for boolean expressions, as well.
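A possible statement is the direct analogue of the one for plus expressions (the name bool_progress is ours):
Theorem bool_progress : forall t,
  value t \/ (exists t', step t t').
Proof.
  (* FILL IN HERE *) Admitted.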
☐
☐
Suppose we want to add a "short circuit" to the step relation
for boolean expressions, so that it can recognize when the
then and else branches of a conditional are the same
value (either tm_true or tm_false) and reduce the whole
conditional to this value in a single step, even if the guard
has not yet been reduced to a value. For example, we would
like the proposition bool_step_prop4 below to be provable.
Write an extra clause for the step relation that achieves
this effect and prove bool_step_prop4.
Inductive step : tm -> tm -> Prop :=
| ST_IfTrue : forall t1 t2,
step (tm_if tm_true t1 t2)
t1
| ST_IfFalse : forall t1 t2,
step (tm_if tm_false t1 t2)
t2
| ST_If : forall t1 t1' t2 t3,
step t1 t1'
-> step (tm_if t1 t2 t3)
(tm_if t1' t2 t3)
(* FILL IN HERE *)
.
☐
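For reference, one possible shape for such a clause (a sketch only; the constructor name and exact formulation are ours):
(*  | ST_ShortCircuit : forall t1 v,
        (value v)
        -> step (tm_if t1 v v) v           *)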
To check that your previous answer is correct, prove that the
following step is now possible.
Definition bool_step_prop4 :=
step
(tm_if
(tm_if tm_true tm_true tm_true)
tm_false
tm_false)
tm_false.
Example bool_step_prop4_holds :
bool_step_prop4.
Proof.
(* FILL IN HERE *) Admitted.
☐
It can be shown that the determinism and progress theorems
for the step relation in the lecture notes also hold for the
definition of step given above. After we add the clause
ST_ShortCircuit...
- Does step_deterministic still hold? Write yes or no and
briefly (1 sentence) explain your answer.
Optional exercise: prove your answer correct in Coq.
(* FILL IN HERE *)
- Does a progress theorem hold? Write yes or no and briefly
(1 sentence) explain your answer.
Optional exercise: prove your answer correct in Coq.
(* FILL IN HERE *)
- In general, is there any way we could cause progress to fail if we took away one or more constructors from the original step relation? Write yes or no and briefly (1 sentence) explain your answer.
☐
Until now, we've been working with the single-step reduction
relation step, which formalizes the individual steps of an
abstract machine for executing programs. It is also
interesting to use this machine to reduce programs to
completion, to find out what final result they yield. This
can be formalized in two steps.
- First, we define a multi-step reduction relation
stepmany, which relates terms t and t' if t can
reach t' by any number (including 0) of single reduction
steps.
- Then we can define a "result" of a term t as a normal form that t can reach by some number of reduction steps. Formally, we write normal_form_of t t' to mean that t' is a normal form reachable from t by many-step reduction.
To begin, let's review a bit of terminology from the basic
theory of relations, which you probably saw at some point in
a discrete math course.
The reflexive, transitive closure of a relation R is the
smallest relation that contains R and that is both
reflexive and transitive. Formally, it is defined like
this in the Relations module of the Coq standard library:
Module Excerpt_From_Coq_Relations_Library.
Inductive clos_refl_trans (A: Type) (R: relation A) : relation A :=
  | rt_step : forall x y, R x y -> clos_refl_trans A R x y
  | rt_refl : forall x, clos_refl_trans A R x x
  | rt_trans : forall x y z,
      clos_refl_trans A R x y ->
      clos_refl_trans A R y z ->
      clos_refl_trans A R x z.
End Excerpt_From_Coq_Relations_Library.
Implicit Arguments clos_refl_trans [A].
Tactic Notation "rt_cases" tactic(first) tactic(c) :=
first;
[ c "rt_step" | c "rt_refl" | c "rt_trans" ].
For example, the reflexive and transitive closure of the
next_nat relation coincides with the le relation.
Inductive next_nat (n:nat) : nat -> Prop :=
| nn : next_nat n (S n).
Theorem next_nat_closure_is_le : forall n m,
(n <= m) <-> ((clos_refl_trans next_nat) n m).
Proof.
intros n m. split.
Case "->".
intro H. induction H.
apply rt_refl.
apply rt_trans with m. apply IHle. apply rt_step. apply nn.
Case "<-".
intro H. rt_cases (induction H) SCase.
SCase "rt_step". inversion H. apply le_S. apply le_n.
SCase "rt_refl". apply le_n.
SCase "rt_trans".
apply le_trans with y.
apply IHclos_refl_trans1.
apply IHclos_refl_trans2. Qed.
The above definition of reflexive, transitive closure is
natural -- it says, explicitly, that the reflexive and
transitive closure of R is the least relation that includes
R and that is closed under rules of reflexivity and
transitivity. But it turns out that this definition is not
very convenient for doing proofs -- the "nondeterminism" of
the rt_trans rule can sometimes lead to tricky inductions.
Here is a more useful definition...
Inductive refl_step_closure (X:Type) (R: relation X)
: X -> X -> Prop :=
| rsc_refl : forall (x : X),
refl_step_closure X R x x
| rsc_step : forall (x y z : X),
R x y
-> refl_step_closure X R y z
-> refl_step_closure X R x z.
Implicit Arguments refl_step_closure [X].
This new definition "bundles together" the rt_step and
rt_trans rules into the single rule rsc_step. The left-hand
premise of this rule is a single use of R, leading to a
much simpler induction principle.
Before we go on, we should check that the two definitions do indeed define the same relation...
First, we prove two lemmas showing that rsc mimics the behavior of the two "missing" constructors of clos_refl_trans.
Tactic Notation "rsc_cases" tactic(first) tactic(c) :=
first;
[ c "rsc_refl" | c "rsc_step" ].
Theorem rsc_R : forall (X:Type) (R:relation X) (x y : X),
R x y -> refl_step_closure R x y.
Proof.
intros X R x y r.
apply rsc_step with y. apply r. apply rsc_refl. Qed.
Theorem rsc_trans :
forall (X:Type) (R: relation X) (x y z : X),
refl_step_closure R x y
-> refl_step_closure R y z
-> refl_step_closure R x z.
Proof.
(* FILL IN HERE *) Admitted.
☐
Theorem rtc_rsc_coincide :
forall (X:Type) (R: relation X) (x y : X),
clos_refl_trans R x y <-> refl_step_closure R x y.
Proof.
(* FILL IN HERE *) Admitted.
☐
Now we're ready to define the multi-step reduction relation.
(Note that we use Notation instead of Definition here.
This means that stepmany will be automatically unfolded by
Coq, which will simplify some of the proof automation later
on.)
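Notation stepmany := (refl_step_closure step).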
A few examples...
Lemma test_stepmany_1:
stepmany
(tm_plus
(tm_plus (tm_const 0) (tm_const 3))
(tm_plus (tm_const 2) (tm_const 4)))
(tm_const (plus (plus 0 3) (plus 2 4))).
Proof.
apply rsc_step with
(tm_plus
(tm_const (plus 0 3))
(tm_plus (tm_const 2) (tm_const 4))).
apply ST_Plus1. apply ST_PlusConstConst.
apply rsc_step with
(tm_plus
(tm_const (plus 0 3))
(tm_const (plus 2 4))).
apply ST_Plus2. apply v_const.
apply ST_PlusConstConst.
apply rsc_R.
apply ST_PlusConstConst. Qed.
Here's another proof for the same example that uses eapply
to avoid explicitly constructing all the intermediate terms.
Lemma test_stepmany_1':
stepmany
(tm_plus
(tm_plus (tm_const 0) (tm_const 3))
(tm_plus (tm_const 2) (tm_const 4)))
(tm_const (plus (plus 0 3) (plus 2 4))).
Proof.
eapply rsc_step. apply ST_Plus1. apply ST_PlusConstConst.
eapply rsc_step. apply ST_Plus2. apply v_const. apply ST_PlusConstConst.
eapply rsc_step. apply ST_PlusConstConst.
apply rsc_refl. Qed.
☐
Lemma test_stepmany_3:
stepmany
(tm_plus (tm_const 0) (tm_const 3))
(tm_plus (tm_const 0) (tm_const 3)).
Proof.
(* FILL IN HERE *) Admitted.
☐
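Since stepmany allows zero steps, one possible proof (under a primed name) is just reflexivity of the closure:
Lemma test_stepmany_3' :
  stepmany
    (tm_plus (tm_const 0) (tm_const 3))
    (tm_plus (tm_const 0) (tm_const 3)).
Proof.
  apply rsc_refl. Qed.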
Lemma test_stepmany_4:
stepmany
(tm_plus
(tm_const 0)
(tm_plus
(tm_const 2)
(tm_plus (tm_const 0) (tm_const 3))))
(tm_plus
(tm_const 0)
(tm_const (plus 2 (plus 0 3)))).
Proof.
(* FILL IN HERE *) Admitted.
☐
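One possible proof (under a primed name): both reductions happen inside the right-hand summand, underneath the value tm_const 0.
Lemma test_stepmany_4' :
  stepmany
    (tm_plus
      (tm_const 0)
      (tm_plus
        (tm_const 2)
        (tm_plus (tm_const 0) (tm_const 3))))
    (tm_plus
      (tm_const 0)
      (tm_const (plus 2 (plus 0 3)))).
Proof.
  eapply rsc_step.
    apply ST_Plus2. apply v_const.
    apply ST_Plus2. apply v_const.
    apply ST_PlusConstConst.
  eapply rsc_step.
    apply ST_Plus2. apply v_const.
    apply ST_PlusConstConst.
  apply rsc_refl. Qed.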
Now we define normal forms and relate them to values.
If t steps to t' in zero or more steps and t' is a
normal form, we say that "t' is a normal form of t."
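Definition normal_form_of (t t' : tm) :=
  (stepmany t t' /\ normal_form step t').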
We have already seen that single-step reduction is
deterministic -- i.e., a given term can take a single step in
at most one way. It follows from this that, if t can reach
a normal form, then this normal form is unique -- i.e.,
normal_form_of is a partial function. In other words, we
can actually pronounce normal_form_of t t' as "t' is the
normal form of t."
Theorem normal_forms_unique:
partial_function normal_form_of.
Proof.
unfold partial_function. unfold normal_form_of. intros x y1 y2 P1 P2.
destruct P1 as [P11 P12]. destruct P2 as [P21 P22].
generalize dependent y2.
(* We recommend using this initial setup as-is! *)
(* FILL IN HERE *) Admitted.
☐
Indeed, something stronger is true for this language (though
not for all programming languages!): the reduction of ANY
term t will eventually reach a normal form -- i.e.,
normal_form_of is a total function. Formally, we say the
step relation is normalizing.
Definition normalizing (X:Type) (R:relation X) :=
forall t, exists t',
(refl_step_closure R) t t' /\ normal_form R t'.
Implicit Arguments normalizing [X].
To prove that step is normalizing, we need a few lemmas.
First, we observe that, if t reduces to t' in many steps,
then the same sequence of reduction steps is possible when
t appears as the left-hand child of a tm_plus node, and
similarly when t appears as the right-hand child of a
tm_plus node whose left-hand child is a value.
Lemma stepmany_congr_1 : forall t1 t1' t2,
stepmany t1 t1'
-> stepmany (tm_plus t1 t2) (tm_plus t1' t2).
Proof.
intros t1 t1' t2 H. rsc_cases (induction H) Case.
Case "rsc_refl". apply rsc_refl.
Case "rsc_step". apply rsc_step with (tm_plus y t2).
apply ST_Plus1. apply H.
apply IHrefl_step_closure. Qed.
Lemma stepmany_congr_2 : forall t1 t2 t2',
value t1
-> stepmany t2 t2'
-> stepmany (tm_plus t1 t2) (tm_plus t1 t2').
Proof.
(* FILL IN HERE *) Admitted.
☐
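One possible proof (under a primed name), mirroring stepmany_congr_1 but using ST_Plus2 together with the value hypothesis:
Lemma stepmany_congr_2' : forall t1 t2 t2',
  value t1
  -> stepmany t2 t2'
  -> stepmany (tm_plus t1 t2) (tm_plus t1 t2').
Proof.
  intros t1 t2 t2' V H. rsc_cases (induction H) Case.
  Case "rsc_refl". apply rsc_refl.
  Case "rsc_step". apply rsc_step with (tm_plus t1 y).
    apply ST_Plus2. apply V. apply H.
    apply IHrefl_step_closure. Qed.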
Before proving it formally, let's sketch the proof that
step is normalizing.
Theorem:
forall t, exists t',
stepmany t t' /\ normal_form step t'
Proof sketch: By induction on terms. There are two cases to consider:
- t = tm_const n for some n. Here t doesn't take a
step, and we have t' = t. We can derive the left-hand
side by reflexivity and the right-hand side by observing
(a) that values are normal forms (by value_is_nf) and
(b) that t is a value (by v_const).
- t = tm_plus t1 t2 for some t1 and t2. By the IH,
both t1 and t2 normalize to normal forms t1' and
t2', respectively. Recall that normal forms are values
(by nf_is_value); we know that t1' = tm_const n1 and
t2' = tm_const n2, for some n1 and n2. We can
combine the stepmany derivations for t1 and t2 to
prove that:
stepmany (tm_plus t1 t2) (tm_const (plus n1 n2)).
It is clear that our choice of t' = tm_const (plus n1 n2) is a value, which is in turn a normal form.
Theorem step_normalizing :
normalizing step.
Proof.
unfold normalizing.
tm_cases (induction t) Case.
Case "tm_const".
exists (tm_const n).
split.
SCase "l". apply rsc_refl.
SCase "r". apply value_is_nf. apply v_const.
Case "tm_plus".
destruct IHt1 as [t1' H1]. destruct IHt2 as [t2' H2].
destruct H1 as [H11 H12]. destruct H2 as [H21 H22].
apply nf_is_value in H12. apply nf_is_value in H22.
inversion H12 as [n1]. inversion H22 as [n2].
rewrite <- H in H11.
rewrite <- H0 in H21.
exists (tm_const (plus n1 n2)).
split.
SCase "l".
apply rsc_trans with (tm_plus (tm_const n1) t2).
apply stepmany_congr_1. apply H11.
apply rsc_trans with
(tm_plus (tm_const n1) (tm_const n2)).
apply stepmany_congr_2. apply v_const. apply H21.
apply rsc_R. apply ST_PlusConstConst.
SCase "r".
apply value_is_nf. apply v_const. Qed.
Having defined the operational semantics of our tiny
programming language in two different styles, it makes sense
to ask whether these definitions actually define the same
thing! They do, but it is not completely straightforward to
show this, or even to see how to state it exactly, since one
of the relations only goes a small step at a time while the
other proceeds in large chunks.
Lemma eval__value : forall t1 t2,
eval t1 t2
-> value t2.
Proof.
intros t1 t2 HE.
(eval_cases (inversion HE) Case); apply v_const. Qed.
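The first direction of the correspondence is stated like this (its proof is left as an exercise):
Theorem eval__stepmany : forall t v,
  eval t v -> stepmany t v.
Proof.
  (* FILL IN HERE *) Admitted.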
You'll want to use the congruences and some properties of
rsc.
☐
Write an informal version of the proof of eval__stepmany.
(* FILL IN HERE *)
☐
☐
Theorem step__eval : forall t t' v,
step t t'
-> eval t' v
-> eval t v.
Proof.
(* FILL IN HERE *) Admitted.
☐
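One possible proof (under a primed name, not necessarily the intended one), by induction on the step derivation and inversion of the big-step derivation:
Theorem step__eval' : forall t t' v,
  step t t'
  -> eval t' v
  -> eval t v.
Proof.
  intros t t' v Hs. generalize dependent v.
  induction Hs as [n1 n2 | t1 t1' t2 H IH | v1 t2 t2' Hv H IH];
    intros v HE; inversion HE; subst.
  (* ST_PlusConstConst: t' is a constant, so v is that constant; rebuild with E_Plus *)
  apply E_Plus; apply E_Const.
  (* ST_Plus1: the small step was in the left summand; use the IH there *)
  apply E_Plus. apply IH. assumption. assumption.
  (* ST_Plus2: the small step was in the right summand; use the IH there *)
  apply E_Plus. assumption. apply IH. assumption. Qed.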
Theorem stepmany__eval : forall t v,
normal_form_of t v -> eval t v.
Proof.
intros t v Hnorm.
unfold normal_form_of in Hnorm.
inversion Hnorm as [Hs Hnf]; clear Hnorm.
(* v is a normal form -> v = tm_const n for some n *)
apply nf_is_value in Hnf. inversion Hnf. clear Hnf.
(rsc_cases (induction Hs) Case); subst.
Case "rsc_refl".
apply E_Const.
Case "rsc_step".
eapply step__eval. eassumption. apply IHHs. reflexivity. Qed.
Bringing it all together into a crisp connection, we can
simply say that v is the normal form of t iff t
evaluates to v.
Corollary stepmany_iff_eval : forall t v,
normal_form_of t v <-> eval t v.
Proof.
split.
Case "->". apply stepmany__eval.
Case "<-". unfold normal_form_of. intros E. split. apply eval__stepmany. assumption.
apply value_is_nf. eapply eval__value. eassumption. Qed.
Define a Fixpoint that evaluates tms. Prove that it is
equivalent to the existing semantics.
Hint: we just proved that eval and stepmany are equivalent, so logically it doesn't matter which you choose. One will be easier than the other, though!
(* FILL IN HERE *)
☐
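One natural shape for such a function (a sketch; the name evalF is ours) computes the number that a term denotes:
Fixpoint evalF (t : tm) : nat :=
  match t with
  | tm_const n => n
  | tm_plus a1 a2 => plus (evalF a1) (evalF a2)
  end.
The equivalence proof can then relate evalF to the big-step relation, e.g. by showing eval t (tm_const (evalF t)) for every t.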
We've considered the arithmetic and conditional expressions
separately. This exercise explores how the two interact.
Module Combined.
Inductive tm : Type :=
| tm_const : nat -> tm
| tm_plus : tm -> tm -> tm
| tm_true : tm
| tm_false : tm
| tm_if : tm -> tm -> tm -> tm.
Tactic Notation "tm_cases" tactic(first) tactic(c) :=
first;
[ c "tm_const" | c "tm_plus" |
c "tm_true" | c "tm_false" | c "tm_if" ].
Inductive value : tm -> Prop :=
| v_const : forall n, value (tm_const n)
| v_true : value tm_true
| v_false : value tm_false.
Inductive step : tm -> tm -> Prop :=
| ST_PlusConstConst : forall n1 n2,
step (tm_plus (tm_const n1) (tm_const n2))
(tm_const (plus n1 n2))
| ST_Plus1 : forall t1 t1' t2,
(step t1 t1')
-> step (tm_plus t1 t2)
(tm_plus t1' t2)
| ST_Plus2 : forall n1 t2 t2',
(step t2 t2')
-> step (tm_plus (tm_const n1) t2)
(tm_plus (tm_const n1) t2')
| ST_IfTrue : forall t1 t2,
step (tm_if tm_true t1 t2)
t1
| ST_IfFalse : forall t1 t2,
step (tm_if tm_false t1 t2)
t2
| ST_If : forall t1 t1' t2 t3,
step t1 t1'
-> step (tm_if t1 t2 t3)
(tm_if t1' t2 t3).
Tactic Notation "step_cases" tactic(first) tactic(c) :=
first;
[ c "ST_PlusConstConst" | c "ST_Plus1" | c "ST_Plus2" |
c "ST_IfTrue" | c "ST_IfFalse" | c "ST_If" ].
Earlier, we separately proved for both plus- and if-expressions
- that the step relation was a partial function (i.e., it was
deterministic), and
- a progress lemma, stating that every term is either a value
or can take a step.
Prove or disprove these two properties for the combined language.
(* FILL IN HERE *)
☐