
\section{Program Logic}
\label{sec:program-logic}

This section describes how to build a program logic for an arbitrary language (\cf \Sref{sec:language}) on top of the base logic.
In the following, we assume that some language $\Lang$ has been fixed.

\subsection{Dynamic Composable Resources}
\label{sec:composeable-resources}

The base logic described in \Sref{sec:base-logic} works over an arbitrary CMRA $\monoid$ defining the structure of the resources.
It turns out that we can generalize this further and permit picking CMRAs ``$\iFunc(\Prop)$'' that depend on the structure of assertions themselves.
Of course, $\Prop$ is just the syntactic type of assertions; for this to make sense we have to look at the semantics.

Furthermore, there is a composability problem with the given logic: if we have one proof performed with CMRA $\monoid_1$, and another proof carried out with a \emph{different} CMRA $\monoid_2$, then the two proofs are actually carried out in two \emph{entirely separate logics} and hence cannot be combined.

Finally, in many cases just having a single ``instance'' of a CMRA available for reasoning is not enough.
For example, when reasoning about a dynamically allocated data structure, every time a new instance of that data structure is created, we will want a fresh resource governing the state of this particular instance.
While it would be possible to handle this problem whenever it comes up, it turns out to be useful to provide a general solution.

The purpose of this section is to describe how we solve these issues.

\paragraph{Picking the resources.}
The key ingredient that we employ on top of the base logic is to impose some additional, fixed structure on the resources.
To instantiate the program logic, the user picks a family of locally contractive bifunctors $(\iFunc_i : \COFEs \to \CMRAs)_{i \in \mathcal{I}}$.
(This is in contrast to the base logic, where the user picks a single, fixed CMRA that has a unit.)

From this, we construct the bifunctor defining the overall resources as follows:
\begin{align*}
  \textdom{ResF}(\cofe^\op, \cofe) \eqdef{}& \prod_{i \in \mathcal I} \nat \fpfn \iFunc_i(\cofe^\op, \cofe)
\end{align*}
We will motivate both the use of a product and the finite partial function below.
$\textdom{ResF}(\cofe^\op, \cofe)$ is a CMRA by lifting the individual CMRAs pointwise, and it has a unit (the product of empty finite partial functions).
Furthermore, since the $\iFunc_i$ are locally contractive, so is $\textdom{ResF}$.
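To get some intuition for this construction, consider the (purely illustrative) special case of a family consisting of a single constant functor $\iFunc_1(\cofe^\op, \cofe) \eqdef \monoid$ for some fixed CMRA $\monoid$ with a unit; constant functors are trivially locally contractive.
In this case we obtain
\[ \textdom{ResF}(\cofe^\op, \cofe) \cong \nat \fpfn \monoid, \]
\ie essentially the situation of the base logic with a fixed CMRA, except that finitely many instances of $\monoid$ can now be allocated dynamically (see the rules below).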

Now we can write down the recursive domain equation:
\[ \iPreProp \cong \UPred(\textdom{ResF}(\iPreProp, \iPreProp)) \]
$\iPreProp$ is a COFE defined as the fixed-point of a locally contractive bifunctor.
This fixed-point exists and is unique\footnote{We have not proven uniqueness in Coq.} by America and Rutten's theorem~\cite{America-Rutten:JCSS89,birkedal:metric-space}.
We do not need to consider how the object is constructed. 
We only need the isomorphism, given by
\begin{align*}
  \Res &\eqdef \textdom{ResF}(\iPreProp, \iPreProp) \\
  \iProp &\eqdef \UPred(\Res) \\
	\wIso &: \iProp \nfn \iPreProp \\
	\wIso^{-1} &: \iPreProp \nfn \iProp
\end{align*}

Notice that $\iProp$ is the semantic model of assertions for the base logic described in \Sref{sec:base-logic} with $\Res$:
\[ \Sem{\Prop} \eqdef \iProp = \UPred(\Res) \]
Effectively, we just defined a way to instantiate the base logic with $\Res$ as the CMRA of resources, while providing a way for $\Res$ to depend on $\iPreProp$, which is isomorphic to $\Sem\Prop$.

We thus obtain all the rules of \Sref{sec:base-logic}, and furthermore, we can use the maps $\wIso$ and $\wIso^{-1}$ \emph{in the logic} to convert between logical assertions $\Sem\Prop$ and the domain $\iPreProp$ which is used in the construction of $\Res$ -- so from elements of $\iPreProp$, we can construct elements of $\Sem{\textlog M}$, which are the elements that can be owned in our logic.

\paragraph{Proof composability.}
To make our proofs composable, we \emph{generalize} them over the family of functors.
This is possible because we made $\Res$ a \emph{product} of all the CMRAs picked by the user, and because we can actually work with that product ``pointwise''.
So instead of picking a \emph{concrete} family, proofs are carried out for an \emph{arbitrary} family of functors, plus a proof that this family \emph{contains the functors they need}.
Composing two proofs is then merely a matter of conjoining the assumptions they make about the functors.
Since the logic is entirely parametric in the choice of functors, there is no trouble reasoning without full knowledge of the family of functors.

Only when the top-level proof is completed do we ``close'' it by picking a concrete family that contains exactly those functors the proof needs.

\paragraph{Dynamic resources.}
Finally, the use of finite partial functions lets us have as many instances of any CMRA as we could wish for:
Because there can only ever be finitely many instances already allocated, it is always possible to create a fresh instance with any desired (valid) starting state.
This is best demonstrated by giving some proof rules.

So let us first define the notion of ghost ownership that we use in this logic.
Assuming that the family of functors contains the functor $\iFunc_i$ at index $i$, and furthermore assuming that $\monoid_i = \iFunc_i(\iPreProp, \iPreProp)$, given some $\melt \in \monoid_i$ we define:
\[ \ownGhost\gname{\melt:\monoid_i} \eqdef \ownM{(\ldots, \emptyset, i:\mapsingleton \gname \melt, \emptyset, \ldots)} \]
This is ownership of the tuple (an element of the product over all the functors) that has the empty finite partial function in all components \emph{except for} the component corresponding to index $i$, where we own the element $\melt$ at index $\gname$ in the finite partial function.

We can show the following properties for this form of ownership:
\begin{mathparpagebreakable}
  \inferH{res-alloc}{\text{$G$ infinite} \and \melt \in \mval_{M_i}}
  {  \TRUE \proves \upd \Exists\gname\in G. \ownGhost\gname{\melt : M_i}
  }
  \and
  \inferH{res-update}
    {\melt \mupd_{M_i} B}
    {\ownGhost\gname{\melt : M_i} \proves \upd \Exists \meltB\in B. \ownGhost\gname{\meltB : M_i}}

  \inferH{res-empty}
  {\text{$\munit$ is a unit of $M_i$}}
  {\TRUE \proves \upd \ownGhost\gname\munit}
  
  \axiomH{res-op}
    {\ownGhost\gname{\melt : M_i} * \ownGhost\gname{\meltB : M_i} \provesIff \ownGhost\gname{\melt\mtimes\meltB : M_i}}

  \axiomH{res-valid}
    {\ownGhost\gname{\melt : M_i} \Ra \mval_{M_i}(\melt)}

  \inferH{res-timeless}
    {\text{$\melt$ is a discrete COFE element}}
    {\timeless{\ownGhost\gname{\melt : M_i}}}
\end{mathparpagebreakable}

Below, we will always work within (an instance of) the logic as described here.
Whenever a CMRA is used in a proof, we implicitly assume it to be available in the global family of functors.
We will typically leave the $M_i$ implicit when asserting ghost ownership, as the type of $\melt$ will be clear from the context.
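As a small example of how these rules combine (a derived fact, not an additional rule), suppose $\melt \in \mval_{M_i}$ and $\melt \mupd_{M_i} B$.
Then we can allocate a fresh instance and immediately perform a frame-preserving update on it:
\[ \TRUE \proves \upd \Exists\gname. \Exists \meltB \in B. \ownGhost\gname{\meltB : M_i} \]
The proof first applies \ruleref{res-alloc} (with $G \eqdef \nat$), then \ruleref{res-update} underneath the existential quantifier, and finally collapses the nested updates using monotonicity and transitivity of $\upd$ from the base logic.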



\subsection{World Satisfaction, Invariants, Fancy Updates}
\label{sec:invariants}

To introduce invariants into our logic, we will define the weakest precondition such that it explicitly threads through the proof that all invariants are maintained throughout program execution.
However, in order to be able to access invariants, we will also have to provide a way to \emph{temporarily disable} (or ``open'') them.
To this end, we use tokens that manage which invariants are currently enabled.

We assume that the following four CMRAs are available:
\begin{align*}
  \textmon{State} \eqdef{}& \authm(\maybe{\exm(\State)}) \\
  \textmon{Inv} \eqdef{}& \authm(\nat \fpfn \agm(\latert \iPreProp)) \\
  \textmon{En} \eqdef{}& \pset{\nat} \\
  \textmon{Dis} \eqdef{}& \finpset{\nat}
\end{align*}
The last two are the tokens used for managing invariants, while $\textmon{Inv}$ is the monoid used to manage the invariants themselves.
Finally, $\textmon{State}$ is used to provide the program with a view of the physical state of the machine.

Furthermore, we assume that instances named $\gname_{\textmon{State}}$, $\gname_{\textmon{Inv}}$, $\gname_{\textmon{En}}$ and $\gname_{\textmon{Dis}}$ of these CMRAs have been created.
(We will discuss later how this assumption is discharged.)

\paragraph{World Satisfaction.}
We can now define the assertion $W$ (\emph{world satisfaction}) which ensures that the enabled invariants are actually maintained:
\begin{align*}
  W \eqdef{}& \Exists I : \nat \fpfn \Prop.
  \begin{array}{@{} l}
    \ownGhost{\gname_{\textmon{Inv}}}{\authfull
      \mapsingletonComp {\iname}
        {\aginj(\latertinj(\wIso(I(\iname))))}
        {\iname \in \dom(I)}} * \\
    \Sep_{\iname \in \dom(I)} \left( \later I(\iname) * \ownGhost{\gname_{\textmon{Dis}}}{\set{\iname}} \lor \ownGhost{\gname_{\textmon{En}}}{\set{\iname}} \right)
  \end{array}
\end{align*}

\paragraph{Invariants.}
The following assertion states that an invariant with name $\iname$ exists and maintains assertion $\prop$:
\[ \knowInv\iname\prop \eqdef \ownGhost{\gname_{\textmon{Inv}}}
  {\authfrag \mapsingleton \iname {\aginj(\latertinj(\wIso(\prop)))}} \]

\paragraph{Fancy Updates and View Shifts.}
Next, we define \emph{fancy updates}, which are essentially the same as the basic updates of the base logic (\Sref{sec:base-logic}), except that they also have access to world satisfaction and can enable and disable invariants:
\[ \pvs[\mask_1][\mask_2] \prop \eqdef W * \ownGhost{\gname_{\textmon{En}}}{\mask_1} \wand \upd\diamond (W * \ownGhost{\gname_{\textmon{En}}}{\mask_2} * \prop) \]
Here, $\mask_1$ and $\mask_2$ are the \emph{masks} of the fancy update, defining which invariants have to be (at least!) available before and after the update.
We use $\top$ as the symbol for the largest possible mask, $\nat$, and $\bot$ for the smallest possible mask, $\emptyset$.
We will write $\pvs[\mask] \prop$ for $\pvs[\mask][\mask]\prop$.
%
Fancy updates satisfy the following basic proof rules:
\begin{mathpar}
\infer[fup-mono]
{\prop \proves \propB}
{\pvs[\mask_1][\mask_2] \prop \proves \pvs[\mask_1][\mask_2] \propB}

\infer[fup-intro-mask]
{\mask_2 \subseteq \mask_1}
{\prop \proves \pvs[\mask_1][\mask_2]\pvs[\mask_2][\mask_1] \prop}

\infer[fup-trans]
{}
{\pvs[\mask_1][\mask_2] \pvs[\mask_2][\mask_3] \prop \proves \pvs[\mask_1][\mask_3] \prop}

\infer[fup-upd]
{}{\upd\prop \proves \pvs[\mask] \prop}

\infer[fup-frame]
{}{\propB * \pvs[\mask_1][\mask_2]\prop \proves \pvs[\mask_1 \uplus \mask_\f][\mask_2 \uplus \mask_\f] \propB * \prop}

\inferH{fup-update}
{\melt \mupd \meltsB}
{\ownM\melt \proves \pvs[\mask] \Exists\meltB\in\meltsB. \ownM\meltB}

\infer[fup-timeless]
{\timeless\prop}
{\later\prop \proves \pvs[\mask] \prop}
%
% \inferH{fup-allocI}
% {\text{$\mask$ is infinite}}
% {\later\prop \proves \pvs[\mask] \Exists \iname \in \mask. \knowInv\iname\prop}
%
% \inferH{fup-openI}
% {}{\knowInv\iname\prop \proves \pvs[\set\iname][\emptyset] \later\prop}
%
% \inferH{fup-closeI}
% {}{\knowInv\iname\prop \land \later\prop \proves \pvs[\emptyset][\set\iname] \TRUE}
\end{mathpar}
(There are no rules related to invariants here; those rules will be discussed later, in \Sref{sec:namespaces}.)
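As a first example of how these rules combine (a derived rule, not a new primitive), fancy updates can always be introduced:
\[ \prop \proves \pvs[\mask] \prop \]
This follows from \ruleref{fup-intro-mask} (picking $\mask_1 \eqdef \mask_2 \eqdef \mask$) together with \ruleref{fup-trans}, or alternatively from \ruleref{fup-upd} and the introduction rule for basic updates of the base logic.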

We can further define the notions of \emph{view shifts} and \emph{linear view shifts}:
\begin{align*}
  \prop \vsW[\mask_1][\mask_2] \propB \eqdef{}& \prop \wand \pvs[\mask_1][\mask_2] \propB \\
  \prop \vs[\mask_1][\mask_2] \propB \eqdef{}& \always(\prop \wand \pvs[\mask_1][\mask_2] \propB)
\end{align*}
These two are useful when writing down specifications and for comparing with previous versions of Iris, but for reasoning, it is typically easier to just work directly with fancy updates.
Still, just to give an idea of what view shifts ``are'', here are some proof rules for them:
\begin{mathparpagebreakable}
\inferH{vs-update}
  {\melt \mupd \meltsB}
  {\ownGhost\gname{\melt} \vs \exists \meltB \in \meltsB.\; \ownGhost\gname{\meltB}}
\and
\inferH{vs-trans}
  {\prop \vs[\mask_1][\mask_2] \propB \and \propB \vs[\mask_2][\mask_3] \propC}
  {\prop \vs[\mask_1][\mask_3] \propC}
\and
\inferH{vs-imp}
  {\always{(\prop \Ra \propB)}}
  {\prop \vs[\emptyset] \propB}
\and
\inferH{vs-mask-frame}
  {\prop \vs[\mask_1][\mask_2] \propB}
  {\prop \vs[\mask_1 \uplus \mask'][\mask_2 \uplus \mask'] \propB}
\and
\inferH{vs-frame}
  {\prop \vs[\mask_1][\mask_2] \propB}
  {\prop * \propC \vs[\mask_1][\mask_2] \propB * \propC}
\and
\inferH{vs-timeless}
  {\timeless{\prop}}
  {\later \prop \vs \prop}

% \inferH{vs-allocI}
%   {\infinite(\mask)}
%   {\later{\prop} \vs[\mask] \exists \iname\in\mask.\; \knowInv{\iname}{\prop}}
% \and
% \axiomH{vs-openI}
%   {\knowInv{\iname}{\prop} \proves \TRUE \vs[\{ \iname \} ][\emptyset] \later \prop}
% \and
% \axiomH{vs-closeI}
%   {\knowInv{\iname}{\prop} \proves \later \prop \vs[\emptyset][\{ \iname \} ] \TRUE }
%
\inferHB{vs-disj}
  {\prop \vs[\mask_1][\mask_2] \propC \and \propB \vs[\mask_1][\mask_2] \propC}
  {\prop \lor \propB \vs[\mask_1][\mask_2] \propC}
\and
\inferHB{vs-exist}
  {\All \var. (\prop \vs[\mask_1][\mask_2] \propB)}
  {(\Exists \var. \prop) \vs[\mask_1][\mask_2] \propB}
\and
\inferHB{vs-always}
  {\always\propB \proves \prop \vs[\mask_1][\mask_2] \propC}
  {\prop \land \always{\propB} \vs[\mask_1][\mask_2] \propC}
 \and
\inferH{vs-false}
  {}
  {\FALSE \vs[\mask_1][\mask_2] \prop }
\end{mathparpagebreakable}
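To see how these rules reduce to the laws of fancy updates, here is a sketch of the derivation of \ruleref{vs-frame} (shown only as an illustration): unfolding the definition of $\vs$, we may assume the persistent hypothesis $\always(\prop \wand \pvs[\mask_1][\mask_2] \propB)$ as well as $\prop * \propC$, and have to establish $\pvs[\mask_1][\mask_2] (\propB * \propC)$.
Applying the hypothesis to $\prop$ reduces the goal to
\[ \propC * \pvs[\mask_1][\mask_2] \propB \proves \pvs[\mask_1][\mask_2] (\propC * \propB), \]
which is exactly \ruleref{fup-frame} with frame mask $\mask_\f \eqdef \emptyset$ (up to commutativity of $*$).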

\subsection{Weakest Precondition}

Finally, we can define the core piece of the program logic, the assertion that reasons about program behavior: the weakest precondition, from which Hoare triples will be derived.

\paragraph{Defining weakest precondition.}
We assume that everything making up the definition of the language, \ie values, expressions, states, the conversion functions, the reduction relation and all their properties, is suitably reflected into the logic (\ie it is part of the signature $\Sig$).

\begin{align*}
  \textdom{wp} \eqdef{}& \MU \textdom{wp}. \Lam \mask, \expr, \pred. \\
        & (\Exists\val. \toval(\expr) = \val \land \pvs[\mask] \pred(\val)) \lor {}\\
        & \Bigl(\toval(\expr) = \bot \land \All \state. \ownGhost{\gname_{\textmon{State}}}{\authfull \state} \vsW[\mask][\emptyset] {}\\
        &\qquad \red(\expr, \state) * \later\All \expr', \state', \vec\expr. (\expr, \state \step \expr', \state', \vec\expr) \vsW[\emptyset][\mask] {}\\
        &\qquad\qquad \ownGhost{\gname_{\textmon{State}}}{\authfull \state'} * \textdom{wp}(\mask, \expr', \pred) * \Sep_{\expr'' \in \vec\expr} \textdom{wp}(\top, \expr'', \Lam \any. \TRUE)\Bigr) \\
%  (* value case *)
  \wpre\expr[\mask]{\Ret\val. \prop} \eqdef{}& \textdom{wp}(\mask, \expr, \Lam\val.\prop)
\end{align*}
If we omit the mask, it defaults to $\top$.

This ties the authoritative part of \textmon{State} to the actual physical state of the reduction witnessed by the weakest precondition.
The fragment will then be available to the user of the logic, as their way of talking about the physical state:
\[ \ownPhys\state \eqdef \ownGhost{\gname_{\textmon{State}}}{\authfrag \state} \]
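As an example of how this works out (a sketch, relying on the usual frame-preserving update of the authoritative CMRA over an exclusive element), whoever owns \emph{both} the authoritative part and the fragment can change the physical state to any $\state'$:
\[ \ownGhost{\gname_{\textmon{State}}}{\authfull \state} * \ownPhys\state \proves \pvs[\mask] \bigl(\ownGhost{\gname_{\textmon{State}}}{\authfull \state'} * \ownPhys{\state'}\bigr) \]
This follows from \ruleref{res-op}, \ruleref{res-update} and \ruleref{fup-upd}, and it is what happens behind the scenes when a language-specific proof rule changes the physical state.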

\paragraph{Laws of weakest precondition.}
The following rules can all be derived:
\begin{mathpar}
\infer[wp-value]
{}{\prop[\val/\var] \proves \wpre{\val}[\mask]{\Ret\var.\prop}}

\infer[wp-mono]
{\mask_1 \subseteq \mask_2 \and \vctx,\var:\textlog{val}\mid\prop \proves \propB}
{\vctx\mid\wpre\expr[\mask_1]{\Ret\var.\prop} \proves \wpre\expr[\mask_2]{\Ret\var.\propB}}

\infer[fup-wp]
{}{\pvs[\mask] \wpre\expr[\mask]{\Ret\var.\prop} \proves \wpre\expr[\mask]{\Ret\var.\prop}}

\infer[wp-fup]
{}{\wpre\expr[\mask]{\Ret\var.\pvs[\mask] \prop} \proves \wpre\expr[\mask]{\Ret\var.\prop}}

\infer[wp-atomic]
{\physatomic{\expr}}
{\pvs[\mask_1][\mask_2] \wpre\expr[\mask_2]{\Ret\var. \pvs[\mask_2][\mask_1]\prop}
 \proves \wpre\expr[\mask_1]{\Ret\var.\prop}}

\infer[wp-frame]
{}{\propB * \wpre\expr[\mask]{\Ret\var.\prop} \proves \wpre\expr[\mask]{\Ret\var.\propB*\prop}}

\infer[wp-frame-step]
{\toval(\expr) = \bot \and \mask_2 \subseteq \mask_1}
{\wpre\expr[\mask_2]{\Ret\var.\prop} * \pvs[\mask_1][\mask_2]\later\pvs[\mask_2][\mask_1]\propB \proves \wpre\expr[\mask_1]{\Ret\var.\propB*\prop}}

\infer[wp-bind]
{\text{$\lctx$ is a context}}
{\wpre\expr[\mask]{\Ret\var. \wpre{\lctx(\ofval(\var))}[\mask]{\Ret\varB.\prop}} \proves \wpre{\lctx(\expr)}[\mask]{\Ret\varB.\prop}}
\end{mathpar}
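These laws can be combined into further derived rules; for instance (a derived rule, stated here only as an illustration), \ruleref{wp-frame} and \ruleref{wp-mono} yield a ``wand'' form of monotonicity that is often convenient in practice:
\[ \wpre\expr[\mask]{\Ret\var.\prop} * (\All\var. \prop \wand \propB) \proves \wpre\expr[\mask]{\Ret\var.\propB} \]
To derive it, frame $\All\var. \prop \wand \propB$ using \ruleref{wp-frame} and then apply \ruleref{wp-mono} with the entailment $(\All\var. \prop \wand \propB) * \prop \proves \propB$.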

We will also want rules that connect weakest preconditions to the operational semantics of the language.
In order to cover the most general case, those rules end up being more complicated:
\begin{mathpar}
  \infer[wp-lift-step]
  {}
  { {\begin{inbox} % for some crazy reason, LaTeX is actually sensitive to the space between the "{ {" here and the "} }" below...
        ~~\pvs[\mask][\emptyset] \Exists \state_1. \red(\expr_1,\state_1) * \later\ownPhys{\state_1} * {}\\\qquad~~ \later\All \expr_2, \state_2, \vec\expr. \Bigl( (\expr_1, \state_1 \step \expr_2, \state_2, \vec\expr) * \ownPhys{\state_2} \Bigr) \wand \pvs[\emptyset][\mask] \Bigl(\wpre{\expr_2}[\mask]{\Ret\var.\prop} * \Sep_{\expr_\f \in \vec\expr} \wpre{\expr_\f}[\top]{\Ret\any.\TRUE}\Bigr)  {}\\\proves \wpre{\expr_1}[\mask]{\Ret\var.\prop}
      \end{inbox}} }
\\\\
  \infer[wp-lift-pure-step]
  {\All \state_1. \red(\expr_1, \state_1) \and
   \All \state_1, \expr_2, \state_2, \vec\expr. \expr_1,\state_1 \step \expr_2,\state_2,\vec\expr \Ra \state_1 = \state_2 }
  {\later\All \state, \expr_2, \vec\expr. (\expr_1,\state \step \expr_2, \state,\vec\expr)  \Ra \wpre{\expr_2}[\mask]{\Ret\var.\prop} * \Sep_{\expr_\f \in \vec\expr} \wpre{\expr_\f}[\top]{\Ret\any.\TRUE} \proves \wpre{\expr_1}[\mask]{\Ret\var.\prop}}
\end{mathpar}

We can further derive some slightly simpler rules for special cases, \ie specialized forms of the lifting axioms for the operational semantics.
\begin{mathparpagebreakable}
  \infer[wp-lift-atomic-step]
  {\atomic(\expr_1) \and
   \red(\expr_1, \state_1)}
  { {\begin{inbox}~~\later\ownPhys{\state_1} * \later\All \val_2, \state_2, \vec\expr. (\expr_1,\state_1 \step \ofval(\val_2),\state_2,\vec\expr)  * \ownPhys{\state_2} \wand \prop[\val_2/\var] * \Sep_{\expr_\f \in \vec\expr} \wpre{\expr_\f}[\top]{\Ret\any.\TRUE} {}\\ \proves  \wpre{\expr_1}[\mask_1]{\Ret\var.\prop}
  \end{inbox}} }

  \infer[wp-lift-atomic-det-step]
  {\atomic(\expr_1) \and
   \red(\expr_1, \state_1) \and
   \All \expr'_2, \state'_2, \vec\expr'. \expr_1,\state_1 \step \expr'_2,\state'_2,\vec\expr' \Ra \state_2 = \state_2' \land \toval(\expr_2') = \val_2 \land \vec\expr = \vec\expr'}
  {\later\ownPhys{\state_1} * \later \Bigl(\ownPhys{\state_2} \wand \prop[\val_2/\var] * \Sep_{\expr_\f \in \vec\expr} \wpre{\expr_\f}[\top]{\Ret\any.\TRUE} \Bigr) \proves \wpre{\expr_1}[\mask_1]{\Ret\var.\prop}}

  \infer[wp-lift-pure-det-step]
  {\All \state_1. \red(\expr_1, \state_1) \\
   \All \state_1, \expr_2', \state'_2, \vec\expr'. \expr_1,\state_1 \step \expr'_2,\state'_2,\vec\expr' \Ra \state_1 = \state'_2 \land \expr_2 = \expr_2' \land \vec\expr = \vec\expr'}
  {\later \Bigl( \wpre{\expr_2}[\mask_1]{\Ret\var.\prop} * \Sep_{\expr_\f \in \vec\expr} \wpre{\expr_\f}[\top]{\Ret\any.\TRUE} \Bigr) \proves \wpre{\expr_1}[\mask_1]{\Ret\var.\prop}}
\end{mathparpagebreakable}
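As a concrete reading of \ruleref{wp-lift-pure-det-step}, consider the common special case of an expression $\expr_1$ that reduces purely and deterministically to some $\expr_2$ without forking off any threads, \ie $\vec\expr$ is empty.
The big separating conjunction over $\vec\expr$ is then trivial, and the rule boils down to
\[ \later \wpre{\expr_2}[\mask_1]{\Ret\var.\prop} \proves \wpre{\expr_1}[\mask_1]{\Ret\var.\prop} \]
which is how pure reduction steps (\eg $\beta$-reduction in a lambda calculus) are typically lifted into the logic.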


\paragraph{Adequacy of weakest precondition.}

The purpose of the adequacy statement is to show that our notion of weakest preconditions is \emph{realistic} in the sense that it actually says something about the behavior of the program.
There are two properties we are looking for: First of all, the postcondition should reflect actual properties of the values the program can terminate with.
Second, a proof of a weakest precondition with any postcondition should imply that the program is \emph{safe}, \ie that it does not get stuck.

To express the adequacy statement for functional correctness, we assume we are given some set $V \subseteq \Val$ of legal return values.
Furthermore, we assume that the signature $\Sig$ adds a predicate $\pred$ which reflects $V$ into the logic:
\[\begin{array}{rMcMl}
  \Sem\pred &:& \Sem{\Val\,} \nfn \Sem\Prop \\
  \Sem\pred &\eqdef& \Lam \val. \Lam \any. \setComp{n}{\val \in V}
\end{array}\]
The signature can of course state arbitrary additional properties of $\pred$, as long as they are proven sound.

The adequacy statement now reads as follows:
\begin{align*}
 &\All \mask, \expr, \val, \pred, \state, \state', \tpool'.
 \\&( \ownPhys\state \proves \wpre{\expr}[\mask]{x.\; \pred(x)}) \Ra
 \\&\cfg{\state}{[\expr]} \step^\ast
     \cfg{\state'}{[\val] \dplus \tpool'} \Ra
     \\&\val \in V
\end{align*}

The adequacy statement for safety says that our weakest preconditions imply that every expression in the thread pool either is a value, or can reduce further.
\begin{align*}
 &\All \mask, \expr, \pred, \state, \state', \tpool'.
 \\&( \ownPhys\state \proves \wpre{\expr}[\mask]{x.\; \pred(x)}) \Ra
 \\&\cfg{\state}{[\expr]} \step^\ast
     \cfg{\state'}{\tpool'} \Ra
     \\&\All\expr'\in\tpool'. \toval(\expr') \neq \bot \lor \red(\expr', \state')
\end{align*}
Notice that this is stronger than saying that the thread pool can reduce; we actually assert that \emph{every} non-finished thread can take a step.

\paragraph{Hoare triples.}
It turns out that weakest precondition is actually quite convenient to work with, in particular when performing these proofs in Coq.
Still, for a more traditional presentation, we can easily derive the notion of a Hoare triple:
\[
\hoare{\prop}{\expr}{\Ret\val.\propB}[\mask] \eqdef \always{(\prop \wand \wpre{\expr}[\mask]{\Ret\val.\propB})}
\]

We only give some of the proof rules for Hoare triples here, since we usually do all our reasoning directly with weakest preconditions and use Hoare triples only to write specifications.
\begin{mathparpagebreakable}
\inferH{Ht-ret}
  {}
  {\hoare{\TRUE}{\valB}{\Ret\val. \val = \valB}[\mask]}
\and
\inferH{Ht-bind}
  {\text{$\lctx$ is a context} \and \hoare{\prop}{\expr}{\Ret\val. \propB}[\mask] \\
   \All \val. \hoare{\propB}{\lctx(\val)}{\Ret\valB.\propC}[\mask]}
  {\hoare{\prop}{\lctx(\expr)}{\Ret\valB.\propC}[\mask]}
\and
\inferH{Ht-csq}
  {\prop \vs \prop' \\
    \hoare{\prop'}{\expr}{\Ret\val.\propB'}[\mask] \\   
   \All \val. \propB' \vs \propB}
  {\hoare{\prop}{\expr}{\Ret\val.\propB}[\mask]}
\and
% \inferH{Ht-mask-weaken}
%   {\hoare{\prop}{\expr}{\Ret\val. \propB}[\mask]}
%   {\hoare{\prop}{\expr}{\Ret\val. \propB}[\mask \uplus \mask']}
% \\\\
\inferH{Ht-frame}
  {\hoare{\prop}{\expr}{\Ret\val. \propB}[\mask]}
  {\hoare{\prop * \propC}{\expr}{\Ret\val. \propB * \propC}[\mask]}
\and
% \inferH{Ht-frame-step}
%   {\hoare{\prop}{\expr}{\Ret\val. \propB}[\mask] \and \toval(\expr) = \bot \and \mask_2 \subseteq \mask_2 \\\\ \propC_1 \vs[\mask_1][\mask_2] \later\propC_2 \and \propC_2 \vs[\mask_2][\mask_1] \propC_3}
%   {\hoare{\prop * \propC_1}{\expr}{\Ret\val. \propB * \propC_3}[\mask \uplus \mask_1]}
% \and
\inferH{Ht-atomic}
  {\prop \vs[\mask \uplus \mask'][\mask] \prop' \\
    \hoare{\prop'}{\expr}{\Ret\val.\propB'}[\mask] \\   
   \All\val. \propB' \vs[\mask][\mask \uplus \mask'] \propB \\
   \physatomic{\expr}
  }
  {\hoare{\prop}{\expr}{\Ret\val.\propB}[\mask \uplus \mask']}
\and
\inferH{Ht-false}
  {}
  {\hoare{\FALSE}{\expr}{\Ret \val. \prop}[\mask]}
\and
\inferHB{Ht-disj}
  {\hoare{\prop}{\expr}{\Ret\val.\propC}[\mask] \and \hoare{\propB}{\expr}{\Ret\val.\propC}[\mask]}
  {\hoare{\prop \lor \propB}{\expr}{\Ret\val.\propC}[\mask]}
\and
\inferHB{Ht-exist}
  {\All \var. \hoare{\prop}{\expr}{\Ret\val.\propB}[\mask]}
  {\hoare{\Exists \var. \prop}{\expr}{\Ret\val.\propB}[\mask]}
\and
\inferHB{Ht-box}
  {\always\propB \proves \hoare{\prop}{\expr}{\Ret\val.\propC}[\mask]}
  {\hoare{\prop \land \always{\propB}}{\expr}{\Ret\val.\propC}[\mask]}
% \and
% \inferH{Ht-inv}
%   {\hoare{\later\propC*\prop}{\expr}{\Ret\val.\later\propC*\propB}[\mask] \and
%    \physatomic{\expr}
%   }
%   {\knowInv\iname\propC \proves \hoare{\prop}{\expr}{\Ret\val.\propB}[\mask \uplus \set\iname]}
% \and
% \inferH{Ht-inv-timeless}
%   {\hoare{\propC*\prop}{\expr}{\Ret\val.\propC*\propB}[\mask] \and
%    \physatomic{\expr} \and \timeless\propC
%   }
%   {\knowInv\iname\propC \proves \hoare{\prop}{\expr}{\Ret\val.\propB}[\mask \uplus \set\iname]}
\end{mathparpagebreakable}
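To illustrate how such rules are obtained (a proof sketch, not an additional rule), consider \ruleref{Ht-frame}: unfolding the definition of Hoare triples, the premise is $\always(\prop \wand \wpre\expr[\mask]{\Ret\val.\propB})$ and the goal is $\always(\prop * \propC \wand \wpre\expr[\mask]{\Ret\val.\propB * \propC})$.
Since the premise is persistent, it suffices to additionally assume $\prop * \propC$, apply the premise to $\prop$, and conclude with
\[ \propC * \wpre\expr[\mask]{\Ret\val.\propB} \proves \wpre\expr[\mask]{\Ret\val.\propC * \propB}, \]
which is \ruleref{wp-frame} (the postcondition matches the goal up to commutativity of $*$).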

\subsection{Invariant Namespaces}
\label{sec:namespaces}

In \Sref{sec:invariants}, we defined an assertion $\knowInv\iname\prop$ expressing knowledge (\ie the assertion is persistent) that $\prop$ is maintained as invariant with name $\iname$.
The concrete name $\iname$ is picked when the invariant is allocated, so it cannot possibly be statically known -- it will always be a variable that's threaded through everything.
However, we hardly care about the actual, concrete name.
All we need to know is that this name is \emph{different} from the names of other invariants that we want to open at the same time.
Keeping track of the $n^2$ mutual inequalities that arise with $n$ invariants quickly gets in the way of the actual proof.

To solve this issue, instead of remembering the exact name picked for an invariant, we will keep track of the \emph{namespace} the invariant was allocated in.
Namespaces are sets of invariant names, following a tree-like structure:
Think of the name of an invariant as a sequence of identifiers, much like a fully qualified Java class name.
A \emph{namespace} $\namesp$ then is like a Java package: it is a sequence of identifiers that we think of as \emph{containing} all invariant names that begin with this sequence. For example, \texttt{org.mpi-sws.iris} is a namespace containing the invariant name \texttt{org.mpi-sws.iris.heap}.

The crux is that all namespaces contain infinitely many invariant names, and hence we can \emph{freely pick} the namespace an invariant is allocated in -- no further, unpredictable choice has to be made.
Furthermore, we will often know that namespaces are \emph{disjoint} just by looking at them.
The namespaces $\namesp.\texttt{iris}$ and $\namesp.\texttt{gps}$ are disjoint no matter the choice of $\namesp$.
As a result, there is often no need to track disjointness of namespaces; we just have to pick the namespaces in which we allocate our invariants accordingly.

Formally speaking, let $\namesp \in \textlog{InvNamesp} \eqdef \List(\nat)$ be the type of \emph{invariant namespaces}.
We use the notation $\namesp.\iname$ for the namespace $[\iname] \dplus \namesp$.
(In other words, the list is ``backwards''. This is because cons-ing to the list, like the dot does above, is easier to deal with in Coq than appending at the end.)
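For example, merely unfolding this notation, the namespace $\namesp.\iname_1.\iname_2$ stands for the list
\[ [\iname_2] \dplus ([\iname_1] \dplus \namesp) = [\iname_2, \iname_1] \dplus \namesp, \]
so the most recently appended identifier sits at the head of the list.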

The elements of a namespace are \emph{structured invariant names} (think: Java fully qualified class name).
They, too, are lists of $\nat$, the same type as namespaces.
In order to connect this up to the definitions of \Sref{sec:invariants}, we need a way to map structured invariant names to $\nat$, the type of ``plain'' invariant names.
Any injective mapping $\textlog{namesp\_inj}$ will do; such a mapping has to exist because $\List(\nat)$ is countable.
Whenever needed, we (usually implicitly) coerce $\namesp$ to its encoded suffix-closure, \ie to the set of encoded structured invariant names contained in the namespace: \[\namecl\namesp \eqdef \setComp{\iname}{\Exists \namesp'. \iname = \textlog{namesp\_inj}(\namesp' \dplus \namesp)}\]

We will overload the notation for invariant assertions for using namespaces instead of names:
\[ \knowInv\namesp\prop \eqdef \Exists \iname \in \namecl\namesp. \knowInv\iname{\prop} \]
We can now derive the following rules (this involves unfolding the definition of fancy updates):
\begin{mathpar}
  \axiomH{inv-persist}{\knowInv\namesp\prop \proves \always\knowInv\namesp\prop}

  \axiomH{inv-alloc}{\later\prop \proves \pvs[\emptyset] \knowInv\namesp\prop}

  \inferH{inv-open}
  {\namesp \subseteq \mask}
  {\knowInv\namesp\prop \vs[\mask][\mask\setminus\namesp] \later\prop * (\later\prop \vsW[\mask\setminus\namesp][\mask] \TRUE)}

  \inferH{inv-open-timeless}
  {\namesp \subseteq \mask \and \timeless\prop}
  {\knowInv\namesp\prop \vs[\mask][\mask\setminus\namesp] \prop * (\prop \vsW[\mask\setminus\namesp][\mask] \TRUE)}
\end{mathpar}
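As an example of how these rules are used (a derived rule, obtained by combining \ruleref{inv-open} with \ruleref{fup-frame}, \ruleref{wp-frame} and \ruleref{wp-atomic}; we state it only as an illustration), an invariant can be opened around a physically atomic expression: assuming $\namesp \subseteq \mask$ and $\physatomic{\expr}$, we have
\[ \knowInv\namesp\prop * \bigl(\later\prop \wand \wpre\expr[\mask \setminus \namesp]{\Ret\var. \later\prop * \propB}\bigr) \proves \wpre\expr[\mask]{\Ret\var.\propB} \]
The proof opens the invariant to obtain $\later\prop$ together with the closing view shift, verifies the atomic expression at the smaller mask, and finally uses the closing view shift on the $\later\prop$ returned in the postcondition.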

\subsection{Accessors}

The two rules \ruleref{inv-open} and \ruleref{inv-open-timeless} above may look a little surprising, in the sense that it is not clear at first sight how they would be applied.
The rules are the first \emph{accessors} that show up in this document.
Accessors are assertions of the form
\[ \prop \vs[\mask_1][\mask_2] \Exists\var. \propB * (\All\varB. \propB' \vsW[\mask_2][\mask_1] \propC) \]

One way to think about such assertions is as follows:
Given some accessor, if during our verification we have the assertion $\prop$ and the mask $\mask_1$ available, we can use the accessor to \emph{access} $\propB$ and obtain the witness $\var$.
We call this \emph{opening} the accessor, and it changes the mask to $\mask_2$.
Additionally, opening the accessor provides us with $\All\varB. \propB' \vsW[\mask_2][\mask_1] \propC$, a \emph{linear view shift} (\ie a view shift that can only be used once).
This linear view shift tells us that in order to \emph{close} the accessor again and go back to mask $\mask_1$, we have to pick some $\varB$ and establish the corresponding $\propB'$.
After closing, we will obtain $\propC$.
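For example, merely instantiating this pattern, \ruleref{inv-open} is an accessor with a trivial witness ($\var$ and $\varB$ range over a singleton domain): picking $\propB \eqdef \propB' \eqdef \later\prop$, $\propC \eqdef \TRUE$, $\mask_1 \eqdef \mask$ and $\mask_2 \eqdef \mask \setminus \namesp$ yields
\[ \knowInv\namesp\prop \vs[\mask][\mask\setminus\namesp] \later\prop * (\later\prop \vsW[\mask\setminus\namesp][\mask] \TRUE). \]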

Using \ruleref{vs-trans} and \ruleref{Ht-atomic} (or the corresponding proof rules for fancy updates and weakest preconditions), we can show that it is possible to open an accessor around any view shift and any \emph{atomic} expression.
Furthermore, in the special case that $\mask_1 = \mask_2$, the accessor can be opened around \emph{any} expression.
For this reason, we also call such accessors \emph{non-atomic}.

The reason accessors are useful is that they let us talk about ``opening X'' (\eg ``opening invariants'') without having to care what X is opened around.
Furthermore, as we construct more sophisticated and more interesting things that can be opened (\eg invariants that can be ``cancelled'', or STSs), accessors become a useful interface that allows us to mix and match different abstractions in arbitrary ways.

%%% Local Variables:
%%% mode: latex
%%% TeX-master: "iris"
%%% End: