# Basic: Basic Proofs in Separation Logic

Set Implicit Arguments.

From SLF Require Import LibSepReference.

Import ProgramSyntax DemoPrograms.

Implicit Types n m : int.

Implicit Types p q : loc.


# A First Taste

## Parsing of Programs

fun p →

let n = !p in

let m = n + 1 in

p := m

There is no need to learn how to write programs in this custom syntax:
source code is provided for all the programs involved in this course.
To simplify the implementation of the framework and the reasoning about
programs, we make throughout the course the simplifying assumption that
programs are written in "A-normal form": all intermediate expressions must
be named using a let-binding.
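To illustrate the constraint (a made-up example, not taken from the course
files), the expression p := !p + 1 is not in A-normal form, because the
subexpressions !p and !p + 1 are left unnamed. Naming each intermediate
result with a let-binding yields exactly the body of the increment function
shown above:

```ocaml
(* not in A-normal form: intermediate results are unnamed *)
p := !p + 1

(* A-normal form: every intermediate expression is let-bound *)
let n = !p in
let m = n + 1 in
p := m
```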

## Specification of the Increment Function

In the specification above, p denotes the "location", that is, the address
in memory, of the reference cell provided as argument to the increment
function. Locations have type loc in the framework.
The precondition is written p ~~> n. This Separation Logic predicate
describes a memory state in which the contents of the location p is the
value n. In the present example, n stands for an integer value.
The behavior of the operation incr p consists of updating the memory state
by incrementing the contents of the cell at location p, that is, setting its
contents to n+1. Thus, the memory state posterior to the increment
operation is described by the predicate p ~~> (n+1).
The result value returned by incr p is the unit value, which does not
carry any useful information. In the specification of incr, the
postcondition is of the form fun _ ⇒ ..., indicating that there is no
need to bind a name for the result value.
The general pattern of a specification admits the following scheme.
Note that we have to write p ~~> (n+1) using parentheses around n+1,
because p ~~> n+1 would get parsed as (p ~~> n) + 1.

- Quantification of the arguments of the functions---here, the variable p.
- Quantification of the "ghost variables" used to describe the input state---here, the variable n.
- The application of the predicate triple to the function application incr p---here, the term being specified by the triple.
- The precondition describing the input state---here, the predicate p ~~> n.
- The postcondition describing both the output value and the output state. The general pattern is fun r ⇒ H', where r names the result and H' describes the final state. Here, r is just an underscore symbol, and the final state is described by p ~~> (n+1).
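Instantiating this scheme for the increment function yields the statement of
the lemma triple_incr, whose proof is carried out next. The statement below
is reconstructed from the variant triple_incr' shown later in this chapter;
the two differ only in the use of the <{ }> brackets around the function
call.

```coq
Lemma triple_incr : forall (p:loc) (n:int),
  triple <{ incr p }>
    (p ~~> n)
    (fun _ => (p ~~> (n+1))).
```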

## Verification of the Increment Function

Proof.

xwp begins the verification proof.

xwp.

The proof obligation is displayed using a custom notation of the form
PRE H CODE F POST Q. In the CODE section, one should be able to somewhat
recognize the body of incr. Indeed, if we ignore the back-ticks and
perform the alpha-renaming from v0 to n and v1 to m, the CODE
section reads like:

<[ Let n := App val_get p in

Let m := App val_add n 1 in

App val_set p m ]>

which is somewhat similar to the original source code, but displayed using a
special syntax whose meaning will be explained later on, in chapter WPgen.
The remainder of the proof performs essentially a symbolic execution of the
code. At each step, one should not attempt to read the full proof
obligation, but instead only look at the current state, described by the
PRE part (here, p ~~> n), and at the first line only of the CODE part,
which corresponds to the next operation to reason about. Each of the
operations involved here is handled using the tactic xapp.
First, we reason about the operation !p that reads into p; this read
operation returns the value n.

xapp.

Second, we reason about the addition operation n+1.

xapp.

Third, we reason about the update operation p := n+1, thereby updating the
state to p ~~> (n+1).

xapp.

At this stage, the proof obligation takes the form H1 ==> H2. It requires
us to check that the final state matches what is claimed in the
postcondition. We discharge it using the tactic xsimpl.
xsimpl.

Qed.

This completes the verification of the lemma triple_incr, which
establishes a formal specification for the increment function. Before moving
on to another function, we associate the lemma triple_incr with the
function incr in a hint database called triple. We do so using the
command shown below. If at some point we verify a function that includes a
call to incr, the xapp tactic will be able to automatically invoke the
lemma triple_incr.

#[global] Hint Resolve triple_incr : triple.

To minimize the amount of syntactic noise in specifications, we leverage an
advanced feature of Coq's coercion mechanism. Concretely, instead of writing
the specification in the form triple <{ incr p }> ..., we write it in the
form triple (incr p) ..., that is, with just parentheses. Thanks to the
coercion mechanism, explained in more detail in chapter Rules, when
Coq sees a "program value" incr being applied to an argument p, it
automatically interprets this as a "program function call" of incr to p.
Thus, the specification of the increment function can be written as follows.

Lemma triple_incr' : ∀ (p:loc) (n:int),

triple (incr p)

(p ~~> n)

(fun _ ⇒ (p ~~> (n+1))).

Proof.

(* Here, to view coercions, use Set Printing Coercions. *)

xwp. xapp. xapp. xapp. xsimpl.

Qed.


The existence of implicit coercions might be a little confusing at times,
yet coercions make specifications so much more readable that it would be a
pity to not exploit them.
The reader may be curious to know what the notation PRE H CODE F POST Q
stands for, and what the x-tactics are doing. Everything will be explained
throughout the course. This chapter and the next one focus on presenting the
features of Separation Logic, and on showing how x-tactics can be used to
verify programs in Separation Logic.

## A Function with a Return Value

The specification takes the form triple (example_let n) H (fun r ⇒ H'),
where r, of type val, denotes the output value.
The precondition H describes what we need to assume about the input state.
For this function, we need not assume anything, hence we write \[] to
denote the empty precondition. The program might have allocated data prior
to the call to the function example_let, however this function will not
interfere in any way with this previously-allocated data.
The postcondition describes what the function produces. More precisely, the
postcondition specifies both the output that the function returns, and the
data from memory that the function has allocated, accessed, or updated. The
function example_let does not interact with the state, thus the
postcondition could be described using the empty predicate \[].
Yet, if we write just fun (r:val) ⇒ \[] as postcondition, we would have
said nothing about the output value r produced by a call example_let.
Instead, we would like to specify that the result r is equal to 2*n. To
that end, we write the postcondition fun r ⇒ \[r = 2*n]. Here, we use the
predicate \[P], which allows embedding "pure facts" of type Prop in
preconditions and postconditions.
The equality r = 2*n actually resolves to r = val_int (2*n), where
val_int is a coercion that turns the Coq integer 2*n into the
corresponding value of type val in the programming language.
If you do not know what a coercion is, just ignore the previous sentence.

The proof script is quite similar to the previous one: xwp begins the
proof, xapp performs symbolic execution, and xsimpl simplifies the
entailment. Ultimately, we need to check that the expression computed,
(n + 1) + (n - 1), is equal to the specified result, that is, 2*n. To
prove this equality, we invoke the tactic math, which is a variant of the
tactic lia provided by the TLC library. Recall from the preface that this
course leverages TLC for enhanced definitions and tactics.
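Putting the pieces together, the specification discussed above takes the
following form. The lemma name triple_example_let is a reconstruction; the
statement itself follows directly from the discussion of the precondition
and postcondition.

```coq
Lemma triple_example_let : forall (n:int),
  triple (example_let n)
    \[]
    (fun r => \[r = 2*n]).
```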

Proof.

xwp. xapp. xapp. xapp. xsimpl. math.

Qed.


#### Exercise: 1 star, standard, especially useful (triple_quadruple)

Specify and verify the function quadruple to express that it returns
4*n. Hint: follow the pattern of the previous proof.

(* FILL IN HERE *)

☐


#### Exercise: 2 stars, standard, especially useful (triple_inplace_double)

Specify and verify the function inplace_double. Hint: follow the pattern
of the first example, namely triple_incr.

(* FILL IN HERE *)

☐


## Increment of Two References

The specification of this function takes the form
triple (incr_two p q) H (fun _ ⇒ H'), where the underscore symbol denotes
the result value. We do not bother binding a name for that unit value.
The precondition describes two reference cells: p ~~> n and q ~~> m. To
assert that the two cells are distinct from each other, we separate their
description with the operator \*. Thus, the precondition is
(p ~~> n) \* (q ~~> m), or simply p ~~> n \* q ~~> m. The operator \*
is called the "separating conjunction" of Separation Logic. It is also known
as the "star" operator.
The postcondition describes the final state in a similar way, as
p ~~> (n+1) \* q ~~> (m+1). This predicate reflects the fact that both
references have their contents increased by one unit.
The specification triple for incr_two is thus as follows. The proof
follows the same pattern as in the previous examples.

Lemma triple_incr_two : ∀ (p q:loc) (n m:int),

triple (incr_two p q)

(p ~~> n \* q ~~> m)

(fun _ ⇒ p ~~> (n+1) \* q ~~> (m+1)).

Proof using.

xwp. xapp. xapp. xsimpl.

Qed.


We will make use of the function incr_two later in this chapter, thus we
register the specification triple_incr_two in the triple database.

#[global] Hint Resolve triple_incr_two : triple.

Separation Logic expressions such as p ~~> n or \[] or H1 \* H2 are
called "heap predicates", because they correspond to predicates over
"heaps", i.e., predicates over memory states.

## Aliased Arguments

A call to aliased_call p increases the contents of p by 2. This
property can be specified as follows.

Lemma triple_aliased_call : ∀ (p:loc) (n:int),

triple (aliased_call p)

(p ~~> n)

(fun _ ⇒ p ~~> (n+2)).


If we attempt the proof, we get stuck. The tactic xapp reports its failure
by issuing a proof obligation of the form \[] ==> (p ~~> ?m) \* _. This
proof obligation requires us to show that, from the empty heap predicate,
one can extract a heap predicate p ~~> ?m describing a reference at
location p with some integer contents ?m.

Proof using.

xwp. xapp.

Abort.


On the one hand, the precondition of the specification triple_incr_two,
with q = p, requires providing p ~~> ?n \* p ~~> ?m. On the other hand,
the current state is described as p ~~> n. When trying to match the two,
the internal simplification tactic xsimpl is able to cancel out one
occurrence of p ~~> n from both expressions, but then it remains to
match the empty heap predicate \[] against (p ~~> ?m). The issue here is
that the specification triple_incr_two is specialized for the case of
"non-aliased" references.
One thing we can do is to state and prove an alternative specification for
the function incr_two to cover the case of aliased arguments. The
precondition of this alternative specification mentions a single reference,
p ~~> n. Its postcondition asserts that the contents of that reference is
increased by two units. This alternative specification is stated and proved
as follows.

Lemma triple_incr_two_aliased : ∀ (p:loc) (n:int),

triple (incr_two p p)

(p ~~> n)

(fun _ ⇒ p ~~> (n+2)).

Proof using.

xwp. xapp. xapp. xsimpl. math.

Qed.


By exploiting the alternative specification for incr_two, we are able to
prove the specification of the function aliased_call. In order to indicate
to the tactic xapp that it should not invoke the lemma triple_incr_two
registered for incr_two, but instead invoke the lemma
triple_incr_two_aliased, we provide that lemma as argument to xapp.
Concretely, we write xapp triple_incr_two_aliased.

Lemma triple_aliased_call : ∀ (p:loc) (n:int),

triple (aliased_call p)

(p ~~> n)

(fun _ ⇒ p ~~> (n+2)).

Proof using.

xwp. xapp triple_incr_two_aliased. xsimpl.

Qed.


Taking a step back, it may appear somewhat disappointing that we need two
different specifications for the same function, depending on whether its
arguments are aliased or not. There exist advanced features of Separation
Logic that allow handling the two cases through a single specification.
However, for such a simple function, it is easiest to just state and prove
the two specifications separately.

## A Function that Takes Two References and Increments One

We can specify this function by describing its input state as
p ~~> n \* q ~~> m, and describing its output state as
p ~~> (n+1) \* q ~~> m. Formally:

Lemma triple_incr_first : ∀ (p q:loc) (n m:int),

triple (incr_first p q)

(p ~~> n \* q ~~> m)

(fun _ ⇒ p ~~> (n+1) \* q ~~> m).

Proof using.

xwp. xapp. xsimpl.

Qed.


The second reference plays absolutely no role in the execution of the
function. Thus, we could equally well consider a specification that mentions
only the existence of the first reference.

Lemma triple_incr_first' : ∀ (p q:loc) (n:int),

triple (incr_first p q)

(p ~~> n)

(fun _ ⇒ p ~~> (n+1)).

Proof using.

xwp. xapp. xsimpl.

Qed.


Interestingly, the specification triple_incr_first, which mentions the two
references, is derivable from the specification triple_incr_first', which
mentions only the first reference. To prove the implication, it suffices to
invoke the tactic xapp with argument triple_incr_first'.

Lemma triple_incr_first_derived : ∀ (p q:loc) (n m:int),

triple (incr_first p q)

(p ~~> n \* q ~~> m)

(fun _ ⇒ p ~~> (n+1) \* q ~~> m).

Proof using.

intros. xapp triple_incr_first'. xsimpl.

Qed.


More generally, in Separation Logic, if a specification triple holds, then
this triple remains valid when we add the same heap predicate to both the
precondition and the postcondition. This is the "frame" principle, a key
modularity feature that we'll come back to later on in the course.
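As a preview, the frame rule can be sketched as follows: from a triple for a
term t, one derives a triple where an arbitrary heap predicate H' is added
to both the precondition and the postcondition. The exact statement and its
proof appear in chapter Rules; the version below is only a sketch, using the
names from this framework.

```coq
Parameter triple_frame : forall (t:trm) (H H':hprop) (Q:val->hprop),
  triple t H Q ->
  triple t (H \* H') (fun r => Q r \* H').
```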

## Transfer from one Reference to Another

Definition transfer : val :=

<{ fun 'p 'q ⇒

let 'n = !'p in

let 'm = !'q in

let 's = 'n + 'm in

'p := 's;

'q := 0 }>.


#### Exercise: 1 star, standard, especially useful (triple_transfer)

State and prove a lemma called triple_transfer, to specify the behavior of transfer p q in the case where p and q denote two distinct references.
(* FILL IN HERE *)

☐


#### Exercise: 1 star, standard, especially useful (triple_transfer_aliased)

State and prove a lemma called triple_transfer_aliased specifying the behavior of transfer when it is applied twice to the same argument. It should take the form triple (transfer p p) _ _.
(* FILL IN HERE *)

☐


## Specification of Allocation

Parameter triple_ref : ∀ (v:val),

triple <{ ref v }>

\[]

(fun r ⇒ \∃ p, \[r = val_loc p] \* p ~~> v).


The pattern fun r ⇒ \∃ p, \[r = val_loc p] \* H occurs whenever a
function returns a pointer. To improve concision for this frequent pattern,
we introduce a specific notation, of the form funloc p ⇒ H.

Notation "'funloc' p '=>' H" :=

(fun (r:val) ⇒ \∃ p, \[r = val_loc p] \* H)

(at level 200, p name, format "'funloc' p '=>' H").


Using this notation, the specification triple_ref can be reformulated more
concisely, as follows.
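Concretely, the reformulated specification, stated as triple_ref' further
below in this chapter, reads:

```coq
Parameter triple_ref' : forall (v:val),
  triple <{ ref v }>
    \[]
    (funloc p => p ~~> v).
```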

The tool CFML, which builds on techniques similar to those described in this
course, leverages type-classes to generalize the notation funloc to all
return types. Yet, in order to avoid technical difficulties associated with
type-classes, we will not go for the general presentation, but instead
exploit the funloc notation, specific to the case where the return type is
a location. For other types, we can quantify over the result value
explicitly.

## Allocation of a Reference with Greater Contents

The precondition of ref_greater asserts the existence of a cell p ~~> n.
The postcondition of ref_greater asserts the existence of two cells,
p ~~> n and q ~~> (n+1), where q denotes the location returned by the
function. The postcondition is thus written
funloc q ⇒ p ~~> n \* q ~~> (n+1), which is a shorthand for
fun (r:val) ⇒ \∃ q, \[r = val_loc q] \* p ~~> n \* q ~~> (n+1).

Lemma triple_ref_greater : ∀ (p:loc) (n:int),

triple (ref_greater p)

(p ~~> n)

(funloc q ⇒ p ~~> n \* q ~~> (n+1)).

Proof using.

xwp. xapp. xapp. xapp. intros q. xsimpl. auto.

Qed.


#### Exercise: 2 stars, standard, especially useful (triple_ref_greater_abstract)

State another specification for the function ref_greater with a postcondition that does not reveal the contents of the fresh reference q, but instead only asserts that it is greater than the contents of p. To that end, introduce in the postcondition an existentially quantified variable called m, with m > n. This new specification, to be called triple_ref_greater_abstract, should be derived from triple_ref_greater, following the proof pattern employed in triple_incr_first_derived.
(* FILL IN HERE *)

☐

☐

## Power of the Frame Rule with Respect to Allocation

Parameter triple_ref' : ∀ (v:val),

triple <{ ref v }>

\[]

(funloc p ⇒ p ~~> v).

Parameter triple_ref_with_frame : ∀ (p':loc) (v':val) (v:val),

triple (val_ref v)

(p' ~~> v')

(funloc p ⇒ p ~~> v \* p' ~~> v').


The separating conjunction that appears in the above postcondition captures
the fact that the location p is distinct from p'. As illustrated here,
the frame rule indirectly captures the property that any piece of freshly
allocated data is distinct from any piece of previously existing data. This
property may seem obvious, yet in the work on program verification prior to
Separation Logic, it has proved to be challenging to capture.

## Deallocation in Separation Logic

A call to that function can be specified using an empty precondition and a
postcondition asserting that the final result is equal to n+1. The
postcondition has no reason to mention the reference used internally by the
function. Let us investigate how we get stuck on the last step when trying
to prove that specification.

Lemma triple_succ_using_incr_attempt : ∀ (n:int),

triple (succ_using_incr_attempt n)

\[]

(fun r ⇒ \[r = n+1]).

Proof using.

xwp. xapp. intros p. xapp. xapp. xsimpl. { auto. }

Abort.


We get stuck with the unprovable entailment p ~~> (n+1) ==> \[], where the
left-hand side describes a state with one reference, whereas the right-hand
side describes an empty state. There are three possibilities to work around
the issue.
The first solution consists of extending the postcondition to account for
the existence of the reference p. This yields a provable specification.

Lemma triple_succ_using_incr_attempt' : ∀ (n:int),

triple (succ_using_incr_attempt n)

\[]

(fun r ⇒ \[r = n+1] \* \∃ p, (p ~~> (n+1))).

Proof using.

xwp. xapp. intros p. xapp. xapp. xsimpl. { auto. }

Qed.


However, while the specification above is provable, it is totally
unsatisfying. Indeed, the piece of postcondition \∃ p, p ~~> (n+1) is
of absolutely no use to the caller of the function. Worse, the caller will
get its own heap predicate polluted with \∃ p, p ~~> (n+1), with no
way of throwing away this predicate.
A second solution is to alter the code of the program to include an explicit
free operation, written free p, for deallocating the reference. This
operation does not exist in OCaml, but let us nevertheless assume it, so as
to demonstrate how Separation Logic supports reasoning about explicit
deallocation.

Definition succ_using_incr :=

<{ fun 'n ⇒

let 'p = ref 'n in

incr 'p;

let 'x = ! 'p in

free 'p;

'x }>.


This program may be proved correct with respect to the intended
postcondition fun r ⇒ \[r = n+1], without the need to mention p. In the
proof, shown below, the key step is the last call to xapp. This call is
for reasoning about the operation free p, which consumes the heap
predicate p ~~> _. At the last proof step, we invoke the tactic xval for
reasoning about the return value.

Lemma triple_succ_using_incr : ∀ n,

triple (succ_using_incr n)

\[]

(fun r ⇒ \[r = n+1]).

Proof using.

xwp. xapp. intros p. xapp. xapp.

xapp. (* reasoning about the call free p *)

xval. (* reasoning about the return value, named x. *)

xsimpl. auto.

Qed.


The third solution for handling the garbage collection of data consists of
considering a generalized version of Separation Logic in which specific
classes of heap predicates may be freely discarded from the current state,
at any point during the proofs. This variant is described in the chapter
Affine.

## Combined Reading and Freeing of a Reference

#### Exercise: 2 stars, standard, especially useful (triple_get_and_free)

Prove the correctness of the function get_and_free.
Lemma triple_get_and_free : ∀ p v,

triple (get_and_free p)

(p ~~> v)

(fun r ⇒ \[r = v]).

Proof using. (* FILL IN HERE *) Admitted.

☐


#[global] Hint Resolve triple_get_and_free : triple.

## Nondeterminism: Specifying Random Output Values

Parameter triple_rand : ∀ n,

n > 0 →

triple (val_rand n)

\[]

(fun r ⇒ \∃ m, \[r = val_int m] \* \[0 ≤ m < n]).


Consider the function two_dices, which simulates the throw of two dice
and returns their sum.

Definition two_dices : val :=

<{ fun 'u ⇒

let 'n1 = val_rand 6 in

let 'n2 = val_rand 6 in

let 's = 'n1 + 'n2 in

's + 2 }>.

#### Exercise: 2 stars, standard, optional (triple_two_dices)

Prove the correctness of the function two_dices. Hint: you'll need to use xapp triple_rand, because xapp is not able to discharge the side-condition n > 0 automatically. Use the tactic math for handling arithmetic proof obligations.
Lemma triple_two_dices :

triple <{ two_dices () }>

\[]

(fun r ⇒ \∃ n, \[r = val_int n] \* \[2 ≤ n ≤ 12]).

Proof using. (* FILL IN HERE *) Admitted.

☐


## Axiomatization of the Mathematical Factorial Function

The factorial of 0 and 1 is equal to 1, and the factorial of n for
n > 1 is equal to n * facto (n-1). Note that we purposely leave
unspecified the value of facto on negative arguments.
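The declaration of facto itself does not appear above. Given that this
section closes with End Facto., the axiomatization is presumably wrapped in
a module, with facto declared as an abstract function over integers, along
the following lines:

```coq
Module Facto.

Parameter facto : int -> int.
```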

Parameter facto_init : ∀ n,

0 ≤ n ≤ 1 →

facto n = 1.

Parameter facto_step : ∀ n,

n > 1 →

facto n = n * (facto (n-1)).

End Facto.


## A Partial Recursive Function, Without State

if n ≤ 1

then 1

else n * factorec (n-1)

Definition factorec : val :=

<{ fix 'f 'n ⇒

let 'b = 'n ≤ 1 in

if 'b

then 1

else let 'x = 'n - 1 in

let 'y = 'f 'x in

'n * 'y }>.


A call to factorec n can be specified as follows:

- the initial state is empty,
- the final state is empty,
- the result value r is such that r = facto n, when n ≥ 0.

In case the argument is negative (i.e., n < 0), we have two choices:

- either we explicitly specify that the result is 1 in this case,
- or we rule out this possibility by requiring n ≥ 0.

Let us follow the second approach, in order to illustrate the specification
of partial functions.
There are two possibilities for expressing the constraint n ≥ 0:

- either we use as precondition \[n ≥ 0],
- or we use the empty precondition, that is, \[], and we place an assumption (n ≥ 0) → _ in front of the triple.

The two presentations are totally equivalent. We prefer the second
presentation, which tends to improve both the readability of specifications
and the conciseness of proof scripts. In that style, the specification of
factorec is stated as follows.
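Following the second presentation, the specification reads as shown below.
This is a reconstruction of the lemma statement, which the exercises later
refer to as triple_factorec'; it places the assumption n ≥ 0 in front of
the triple.

```coq
Lemma triple_factorec' : forall n,
  n >= 0 ->
  triple (factorec n)
    \[]
    (fun r => \[r = facto n]).
```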

In general, we prove specifications for recursive functions by exploiting a
strong induction principle statement ("well-founded induction") that allows
us to assume, while we try to prove the specification, that the
specification already holds for any "smaller input". The (well-founded)
order relation that defines whether an input is smaller than another one is
specified by the user. In the present example of factorec, we use the
well-founded relation downto 0, which is defined as follows:
downto 0 m n asserts that 0 ≤ m < n holds.
Let's walk through the proof script in detail, to see in particular how to
set up the induction, how we exploit it for reasoning about the recursive
call, and how we justify that the recursive call is made on a smaller input.

Proof using.

We set up a proof by induction on n to obtain an induction hypothesis for
the recursive calls. The tactic induction_wf, provided by the TLC library,
helps setting up well-founded inductions. It is exploited as follows.

intros n. induction_wf IH: (downto 0) n.

Observe the induction hypothesis IH. By unfolding downto as done in the
next step, we can see that this hypothesis asserts that, given the current
argument n, the specification of factorec can be exploited for any m
such that 0 ≤ m < n.

unfold downto in IH. (* optional *)

We may then begin the interactive verification proof.

intros Hn. xwp.

We reason about the evaluation of the boolean condition n ≤ 1.

xapp.

The result of the evaluation of n ≤ 1 in the source program is described
by the boolean value isTrue (n ≤ 1), which appears in the CODE section
after Ifval. The operation isTrue is provided by the TLC library as a
conversion function from Prop to bool. The use of such a conversion
function (which leverages classical logic) greatly simplifies the process of
automatically performing substitutions after calls to xapp. We next perform the case analysis on the test n ≤ 1.

xif.

Doing so gives two cases. In the "then" branch, we can assume n ≤ 1.

{ intros C.

Here, the return value is 1.

xval. xsimpl.

We check that 1 = facto n when n ≤ 1.

In the "else" branch, we can assume n > 1.

{ intros C.

We reason about the evaluation of n-1

xapp.

We reason about the recursive call, implicitly exploiting the induction
hypothesis IH with n-1.

xapp.

We justify that the recursive call is indeed made on a smaller argument than
the current one, that is, a nonnegative integer smaller than n.

{ math. }

We justify that the recursive call is made to a nonnegative argument, as
required by the specification.

{ math. }

We reason about the multiplication n * facto(n-1).

xapp.

We check that n * facto (n-1) matches facto n.

This completes our first proof of a recursive function.

## A Recursive Function with State

if m > 0 then (

incr p;

repeat_incr p (m - 1)

)

In the embedded language, a conditional without an else branch is written
if t1 then t2 end. The keyword end avoids ambiguities in cases where this
construct is followed by a semicolon.

Definition repeat_incr : val :=

<{ fix 'f 'p 'm ⇒

let 'b = 'm > 0 in

if 'b then

incr 'p;

let 'x = 'm - 1 in

'f 'p 'x

end }>.


The specification for repeat_incr p requires that the initial state
contains a reference p with some integer contents n, that is, p ~~> n.
Its postcondition asserts that the resulting state is p ~~> (n+m), which
is the result after incrementing m times the reference p. Observe that
this postcondition is only valid under the assumption that m ≥ 0.

Lemma triple_repeat_incr : ∀ (m n:int) (p:loc),

m ≥ 0 →

triple (repeat_incr p m)

(p ~~> n)

(fun _ ⇒ p ~~> (n + m)).


#### Exercise: 2 stars, standard, especially useful (triple_repeat_incr)

Prove the specification of the function repeat_incr, by following the template of the proof of triple_factorec'. Hint: begin the proof with intros m. induction_wf IH: ..., without first introducing n; otherwise the induction principle obtained is too weak.
Proof using. (* FILL IN HERE *) Admitted.

☐


Lemma triple_repeat_incr' : ∀ (p:loc) (n m:int),

m ≥ 0 →

triple (repeat_incr p m)

(p ~~> n)

(fun _ ⇒ p ~~> (n + m)).

Proof using.


First, we introduce all variables and hypotheses.

intros p n m Hm.

Next, we generalize those variables that are not constant during the
recursion. We use the TLC tactic gen, which is a shorthand for generalize dependent.

gen n Hm.

Then, we set up the induction.

induction_wf IH: (downto 0) m. unfold downto in IH.

Finally, we re-introduce the generalized hypotheses.

intros.

The rest of the proof is exactly the same as before.

Abort.

## Trying to Prove Incorrect Specifications

Lemma triple_repeat_incr_incorrect : ∀ (p:loc) (n m:int),

triple (repeat_incr p m)

(p ~~> n)

(fun _ ⇒ p ~~> (n + m)).

Proof using.

intros. revert n. induction_wf IH: (downto 0) m. unfold downto in IH.

intros. xwp. xapp. xif; intros C.

{ (* In the 'then' branch: m > 0 *)

xapp. xapp. xapp. { math. } xsimpl. math. }

{ (* In the 'else' branch: m ≤ 0 *)

xval.


At this point, we are requested to justify that the current state p ~~> n
matches the postcondition p ~~> (n + m), which amounts to proving
n = n + m.

xsimpl.

Abort.

When the specification features the assumption m ≥ 0, we can prove this
equality: being in the else branch means that m ≤ 0, hence m = 0. Without
the assumption m ≥ 0, however, the value of m could very well be negative,
in which case the equality n = n + m is unprovable. As a user, the proof
obligation (m ≤ 0) → (n = n + m) gives us a strong hint that either the
code or the specification does not handle the case m < 0 properly. This
concludes our example attempt at proving an incorrect specification.
There exists a valid specification for repeat_incr that does not constrain
m but instead specifies that, regardless of the value of m, the state
evolves from p ~~> n to p ~~> (n + max 0 m). The corresponding proof
script exploits two characteristic properties of the function max.

Lemma max_l : ∀ n m,

n ≥ m →

max n m = n.

Proof using. introv M. unfold max. case_if; math. Qed.

Lemma max_r : ∀ n m,

n ≤ m →

max n m = m.

Proof using. introv M. unfold max. case_if; math. Qed.

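The same two facts hold of Coq's standard Z.max, which can serve as a sanity check outside the TLC library. This is a standalone sketch: the course's max is TLC's, assumed here to agree with Z.max on these cases.

```coq
From Coq Require Import ZArith Lia.
Open Scope Z_scope.

(* Standard-library counterparts of max_l and max_r;
   lia knows the characteristic properties of Z.max. *)
Lemma Zmax_l : forall n m, n >= m -> Z.max n m = n.
Proof. intros. lia. Qed.

Lemma Zmax_r : forall n m, n <= m -> Z.max n m = m.
Proof. intros. lia. Qed.
```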

Here is the most general specification for the function repeat_incr.

#### Exercise: 2 stars, standard, optional (triple_repeat_incr')

Prove the general specification for the function repeat_incr, covering also the case m < 0.
Lemma triple_repeat_incr' : ∀ (p:loc) (n m:int),

triple (repeat_incr p m)

(p ~~> n)

(fun _ ⇒ p ~~> (n + max 0 m)).

Proof using. (* FILL IN HERE *) Admitted.

☐


## A Recursive Function Involving Two References

if !q > 0 then (

incr p;

decr q;

step_transfer p q

)

Definition step_transfer :=

<{ fix 'f 'p 'q ⇒

let 'm = !'q in

let 'b = 'm > 0 in

if 'b then

incr 'p;

decr 'q;

'f 'p 'q

end }>.


The specification of step_transfer is essentially the same as that of the
function transfer presented previously, the only difference being that here
we assume the contents of q to be nonnegative.

Lemma triple_step_transfer : ∀ p q n m,

m ≥ 0 →

triple (step_transfer p q)

(p ~~> n \* q ~~> m)

(fun _ ⇒ p ~~> (n + m) \* q ~~> 0).

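To build intuition for this postcondition, it helps to trace a run by hand, for hypothetical initial contents p ~~> 2 and q ~~> 3:

```coq
(* Hand-execution sketch (initial values are hypothetical):
     call 1:  !q = 3 > 0   incr p; decr q    p ~~> 3, q ~~> 2
     call 2:  !q = 2 > 0   incr p; decr q    p ~~> 4, q ~~> 1
     call 3:  !q = 1 > 0   incr p; decr q    p ~~> 5, q ~~> 0
     call 4:  !q = 0       stop              p ~~> 5, q ~~> 0
   The final state is p ~~> (2 + 3) \* q ~~> 0, as the
   postcondition predicts with n = 2 and m = 3. *)
```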

#### Exercise: 2 stars, standard, especially useful (triple_step_transfer)

Verify the function step_transfer. Hint: to set up the induction, follow the pattern shown in the proof of triple_repeat_incr'.
Proof using. (* FILL IN HERE *) Admitted.

☐


# Summary

- "Heap predicates", which are used to describe memory states in Separation Logic.
- "Specification triples", of the form triple t H Q, which relate a term t, a precondition H, and a postcondition Q.
- "Verification triples", of the form PRE H CODE F POST Q, are triples of a specific form, produced by the framework.
- "Entailments", of the form H ==> H' or Q ===> Q', which assert that a pre- or post-condition is weaker than another one.
- Custom proof tactics, called "x-tactics", which are specialized tactics for discharging these proof obligations.

- p ~~> n, which describes a memory cell at location p with contents n,
- \[], which describes an empty state,
- \[P], which also describes an empty state, and moreover asserts that the proposition P is true,
- H1 \* H2, which describes a state made of two disjoint parts, one satisfying H1 and another satisfying H2,
- \∃ x, H, which is used to quantify variables in postconditions.

- xwp to begin a proof,
- xapp to reason about an application,
- xval to reason about a return value,
- xif to reason about a conditional,
- xsimpl to simplify or prove entailments (H ==> H' and Q ===> Q').

- math, which is a variant of lia for proving mathematical goals,
- induction_wf, which sets up proofs by well-founded induction,
- gen, which is a shorthand for generalize dependent, a tactic also useful to set up induction principles.

# Historical Notes

(* 2023-11-29 09:22 *)