Commit 31da8ade authored by Ralf Jung

README editing

parent d96a7447
...@@ -136,7 +136,8 @@ The directory structure is as follows.
For example, [opt/ex1.v](theories/opt/ex1.v) provides the proof that the
optimized program refines the behavior of the unoptimized program, where the
optimized program simply replaces the unoptimized one's `ex1_unopt` function
with the `ex1_opt` function. This proof also contains comments relating it
to the proof sketch in Section 3.4 of the paper.
For this proof, we need to show that (1) `ex1_opt` refines `ex1_unopt`, and
(2) all other unchanged functions refine themselves.
...@@ -144,7 +145,8 @@ The directory structure is as follows.
The proof of (2) is the reflexivity of our simulation relation for
well-formed programs, provided in [theories/sim/refl.v](theories/sim/refl.v).
- For `example1` (Section 3.4 in the paper, except that we already generalized
`y` to arbitrary code as described at the end of that section),
see [opt/ex1.v](theories/opt/ex1.v); a rough Rust-level sketch of this
optimization is given below.
`example1_down` did not appear in the paper, but we verified it in
[opt/ex1_down.v](theories/opt/ex1_down.v).
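To make the shape of this optimization concrete, here is a rough Rust-level sketch of what [opt/ex1.v](theories/opt/ex1.v) verifies (our paraphrase of the paper's `example1`; the names `ex1_unopt`, `ex1_opt`, `f`, and `g` mirror the Coq file, but this Rust code is illustrative only and is not part of the paper or the artifact, which works on its own core language rather than Rust source):

```rust
// Rust-level reading of ex1 (a sketch, not code from the paper or the artifact).
// `f` and `g` stand for unknown code; under Stacked Borrows, any access they
// make to `*x` through some other pointer would invalidate the fresh tag that
// `x` received when it was retagged, making the later uses of `x` UB.
fn ex1_unopt(x: &mut i32, f: impl Fn(), g: impl Fn()) -> i32 {
    f();      // unknown code
    *x = 42;  // write through the unique (retagged) mutable reference
    g();      // more unknown code; it cannot have written `*x` without UB
    *x        // the load the optimization wants to eliminate
}

// The optimized version forwards the stored value across the call to `g`.
fn ex1_opt(x: &mut i32, f: impl Fn(), g: impl Fn()) -> i32 {
    f();
    *x = 42;
    g();
    42        // justified: `*x` still holds 42 in any UB-free execution
}

fn main() {
    let mut cell = 0;
    let a = ex1_unopt(&mut cell, || {}, || {});
    let b = ex1_opt(&mut cell, || {}, || {});
    assert_eq!(a, b);
    println!("both versions return {a}");
}
```

The point of the sketch is the same as in the proof: the fresh tag that `x` receives from the retag lets us conclude that the unknown code cannot have accessed `*x` without triggering undefined behavior, so the final load can be replaced by the constant.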
...@@ -159,7 +161,6 @@ The directory structure is as follows.
### SIMULATION FRAMEWORK
The simulation framework developed in this project is a standard simulation via coinduction, and is in fact a simplified version of [Hur et al.'s POPL'12 work](https://doi.org/10.1145/2103621.2103666).
In that sense it is not novel.
The only difference is that our framework depends on the *Iris* Coq library.
However, note that this kind of simulation proof has not been done in
Iris before, and here we in fact do **not** use the Iris logic to build our simulation.
...@@ -185,17 +186,17 @@ corecursive call of `behave` in the `inftau` case.
The theorem `sim_prog_sim_classical` (in [sim/program.v](theories/sim/program.v)) proves that,
if the program `prog_src` simulates `prog_tgt`, i.e. every function of `prog_src`
is in the simulation relation with the corresponding function of `prog_tgt`
(where "corresponding" means they have the same name in both programs),
then the possible behaviors of `prog_tgt` are *included* in the possible
behaviors of `prog_src`.
As undefined behavior subsumes any behavior, `sim_prog_sim_classical` states
that if `prog_src` does not get stuck (no undefined behavior), then any
observation of `prog_tgt` is also an observation of `prog_src`.
More specifically, if `prog_src` does not get stuck, then
* if `prog_tgt` can terminate to a value, `prog_src` must also be
able to terminate to the same value, and
* if `prog_tgt` can diverge, `prog_src` must also be able to diverge, and
* `prog_tgt` does not get stuck, because that is not an observation of `prog_src`.
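Summarizing the three points above in conventional set notation (our paraphrase; the Coq theorem is stated via the coinductive `behave` predicate, not in this form):

```latex
% Paraphrase of sim_prog_sim_classical; the notation is ours, not the development's.
% Behaviors here are termination to a value or divergence; stuckness is excluded.
\mathit{prog\_src}\ \text{has no undefined behavior}
  \;\Longrightarrow\;
  \mathrm{Beh}(\mathit{prog\_tgt}) \subseteq \mathrm{Beh}(\mathit{prog\_src})
```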
#### Simulation relations
The main simulation relation is defined (in the standard way of [paco] proofs) for language expressions in `sim_local_body` (in [sim/local.v](theories/sim/local.v)), and is then lifted to the *function simulation* relation `sim_local_fun`.
...@@ -211,16 +212,6 @@ The resource *r* allows us to state abstractly which part of the global state
the pair `(src,tgt)` focuses on.
It is a rely-guarantee setup that allows us to reason locally about functions' bodies, as long as we maintain the invariant on the global state.
#### `FnEntry` or `Default` Retagging?
In the optimization directory ([opt](theories/opt)), one can find different examples, some of which use `FnEntry` retagging, while others use `Default` retagging.
Here, `FnEntry` is used to retag arguments passed into a *Rust* function.
Note that "functions" in our language do **not** always correspond to Rust functions.
Functions in our language can also be used to capture some arbitrary expressions in Rust.
In those cases, when creating new pointers, the `Default` retag is used.
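As a rough Rust-level illustration of where the two retag kinds apply (our own sketch with hypothetical names, not code from the artifact): the argument of a genuine Rust function is retagged with `FnEntry`, which additionally installs a protector for the duration of the call, while a reference created in the middle of ordinary code corresponds to a `Default` retag without a protector.

```rust
// Hypothetical illustration of the two retag kinds (not code from the artifact).
fn callee(x: &mut i32) {
    // `x` is an argument of a real Rust function, so it is retagged with
    // `FnEntry` on entry; this kind of retag also installs a protector
    // that keeps the new tag valid for the duration of the call.
    *x += 1;
}

fn main() {
    let mut local = 41;
    // Creating a new mutable reference inside ordinary code corresponds to a
    // `Default` retag: a fresh tag is pushed onto the location's borrow stack,
    // but no protector is installed.
    let r = &mut local;
    callee(r);
    println!("{local}"); // prints 42
}
```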
#### Concurrency support
We do not target concurrency at all. This is left for future work.
## Publicly available repositories
===============================
......
...@@ -11,11 +11,12 @@ Definition ex1_unopt : function :=
let: "x" := new_place (&mut int) "i" in
(* retag_place reborrows the pointer value stored in "x" (which is "i"),
then updates "x" with the new pointer value. A [Default] retag is
sufficient for this; we don't need the protector. *)
retag_place "x" (RefPtr Mutable) int Default ;;
(* The unknown code is represented by a call to an unknown function "f" or
"g". *)
Call #[ScFnPtr "f"] [] ;;
(* Write 42 to the cell pointed to by the pointer in "x" *)
...@@ -60,7 +61,7 @@ Proof.
intros sarg ->.
sim_apply sim_simple_let=>/=.
apply: sim_simple_result.
(* ## Step (1) in the proof sketch:
retagging with a fresh tag `tg_n` on top of "x"'s stack. *)
(* Retag a local place *)
sim_apply sim_simple_let=>/=.
...@@ -93,7 +94,7 @@ Proof.
sim_apply sim_body_write_unique_1; [solve_sim..|].
intros ?? Htop. simplify_eq/=.
sim_apply sim_body_let. simplify_eq/=.
(* ## Step (2) in the proof sketch:
If "x" had been accessed by "f", then "f" must have used a tag different
from `tg_n`, which in turn implies that `tg_n` would have been popped off
"x"'s stack. However, since the write `*{int} "x" <- #[42]` does not have
...@@ -116,7 +117,7 @@ Proof.
intros rf' frs' frt' ??? ? _ _ FREL'. simplify_eq/=.
apply: sim_simple_result. simpl.
sim_apply sim_simple_let=>/=.
(* ## Step (3) in the proof sketch:
With similar reasoning to Step (2), "g" must not have accessed "x". *)
(* Copy local (left). We drop to complex just because simple does not support this yet. *)
intros σs σt.
......