The base logic is parameterized by an arbitrary camera $\monoid$ having a unit $\munit$.
By \lemref{lem:camera-unit-total-core}, this means that the core of $\monoid$ is a total function, so we will treat it as such in the following.
This defines the structure of resources that can be owned.
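As a concrete illustration (this example is ours, not part of the surrounding development), the natural numbers under addition form such a camera: every element is valid, and the presence of the unit forces the core to be constantly zero, since composing $\mcore{n}$ with $n$ must again yield $n$:
\[
\monoid \eqdef (\nat, {+}), \qquad \munit \eqdef 0, \qquad \mcore{n} \eqdef 0 .
\]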
As usual for higher-order logics, you can furthermore pick a \emph{signature} $\Sig=(\SigType, \SigFn, \SigAx)$ to add more types, symbols and axioms to the language.
...
...
\infer{\vctx\proves\wtt{\melt}{\textlog{M}}}
{\vctx\proves\wtt{\ownM{\melt}}{\Prop}}
\and
\infer{\vctx\proves\wtt{\melt}{\type}\and\text{$\type$ is a camera}}
{\vctx\proves\wtt{\mval(\melt)}{\Prop}}
\and
\infer{\vctx\proves\wtt{\prop}{\Prop}}
...
...
}
\end{mathparpagebreakable}
\subsection{Proof Rules}
\label{sec:proof-rules}
The judgment $\vctx\mid\prop\proves\propB$ says that with free variables $\vctx$, proposition $\propB$ holds whenever assumption $\prop$ holds.
Most of the rules will entirely omit the variable contexts $\vctx$.
In this case, we assume the same arbitrary context is used for every constituent of the rules.
%Furthermore, an arbitrary \emph{boxed} proposition context $\always\pfctx$ may be added to every constituent.
Axioms $\vctx\mid\prop\provesIff\propB$ indicate that both $\vctx\mid\prop\proves\propB$ and $\vctx\mid\propB\proves\prop$ are proof rules of the logic.
\judgment{\vctx\mid\prop\proves\propB}
...
...
{\upd\plainly\prop\proves\prop}
\end{mathpar}
The premise in \ruleref{upd-update} is a \emph{meta-level} side-condition that has to be proven about $a$ and $B$.
%\ralf{Trouble is, we do not actually have $\in$ inside the logic...}
Cancellable invariants are useful, for example, when reasoning about data structures that will be deallocated: Every reference to the data structure comes with a fraction of the token, and when all fractions have been gathered, \ruleref{CInv-cancel} is used to cancel the invariant, after which the data structure can be deallocated.
Sometimes it is necessary to maintain invariants that we need to open non-atomically.
Clearly, for this mechanism to be sound we need something that prevents us from opening the same invariant twice, something like the masks that avoid reentrancy on the ``normal'', atomic invariants.
...
...
The idea behind the \emph{boxes} is to have a proposition $\prop$ that is actually split into a number of pieces, each of which can be taken out and put back in separately.
In some sense, this is a replacement for having an ``authoritative PCM of Iris propositions itself''.
It is similar to the pattern involving saved propositions that was used for the barrier~\cite{iris2}, but more complicated because there are some operations that we want to perform without a later.
Roughly, the idea is that a \emph{box} is a container for a proposition $\prop$.
A box consists of a bunch of \emph{slices} which decompose $\prop$ into a separating conjunction of the propositions $\propB_\sname$ governed by the individual slices.
Each slice is either \emph{full} (it right now contains $\propB_\sname$), or \emph{empty} (it does not contain anything currently).
The proposition governing the box keeps track of the state of all the slices that make up the box.
The crux is that opening and closing of a slice can be done even if we only have ownership of the box ``later'' ($\later$).
The interface for boxes is as follows:
The two core propositions are: $\BoxSlice\namesp\prop\sname$, saying that there is a slice in namespace $\namesp$ with name $\sname$ and content $\prop$; and $\ABox\namesp\prop{f}$, saying that $f$ describes the slices of a box in namespace $\namesp$, such that all the slices together contain $\prop$. Here, $f$ is of type $\nat\fpfn\BoxState$ mapping names to states, where $\BoxState\eqdef\set{\BoxFull, \BoxEmp}$.
\begin{mathpar}
\inferH{Box-create}{}
{\TRUE\vs[\namesp]\ABox\namesp\TRUE\emptyset}
...
...
\newcommand\SliceInv{\textlog{SliceInv}}
The above rules are validated by the following model.
In this section we discuss some additional constructions that we define within and on top of the base logic.
These are not ``extensions'' in the sense of changing the proof power of the logic; they just form useful derived principles.
\subsection{Derived Rules about Base Connectives}
We collect here some important and frequently used derived proof rules.
\begin{mathparpagebreakable}
\infer{}
...
...
Noteworthy here is the fact that $\prop\proves\later\prop$ can be derived from Löb induction, and $\TRUE\proves\plainly\TRUE$ can be derived via $\plainly$ commuting with universal quantification ranging over the empty type $0$.
\subsection{Persistent Propositions}
We call a proposition $\prop$ \emph{persistent} if $\prop\proves\always\prop$.
These are propositions that ``do not own anything'', so we can (and will) treat them like ``normal'' intuitionistic propositions.
Of course, $\always\prop$ is persistent for any $\prop$.
Furthermore, by the proof rules given in \Sref{sec:proof-rules}, $\TRUE$, $\FALSE$, $t = t'$ as well as $\ownGhost\gname{\mcore\melt}$ and $\mval(\melt)$ are persistent.
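For example, persistence justifies duplication: if $\prop$ is persistent, then (as a sketch, assuming the usual rule $\always\prop \land \propB \proves \prop * \propB$ is available, as in standard presentations of Iris)
\[
\prop \proves \prop * \prop ,
\]
obtained by first deriving $\prop \proves \always\prop \land \prop$ from persistence and then applying that rule.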
...
...
\subsection{Timeless Propositions and Except-0}
One of the troubles of working in a step-indexed logic is the ``later'' modality $\later$.
It turns out that we can somewhat mitigate this trouble by working below the following \emph{except-0} modality:
\[\diamond\prop\eqdef\later\FALSE\lor\prop\]
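Two simple consequences follow directly from this definition, using nothing but the standard disjunction rules (a quick sanity check, not new primitive rules):
\[
\prop \proves \diamond\prop
\qquad
\diamond\diamond\prop \proves \diamond\prop
\]
The first holds by picking the right disjunct; for the second, note that $\later\FALSE \lor (\later\FALSE \lor \prop) \proves \later\FALSE \lor \prop$.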
This modality is useful because there is a class of propositions which we call \emph{timeless} propositions, for which we have
The base logic described in \Sref{sec:base-logic} works over an arbitrary camera $\monoid$ defining the structure of the resources.
It turns out that we can generalize this further and permit picking cameras ``$\iFunc(\Prop)$'' that depend on the structure of propositions themselves.
Of course, $\Prop$ is just the syntactic type of propositions; for this to make sense we have to look at the semantics.
Furthermore, there is a composability problem with the given logic: if we have one proof performed with camera $\monoid_1$, and another proof carried out with a \emph{different} camera $\monoid_2$, then the two proofs are actually carried out in two \emph{entirely separate logics} and hence cannot be combined.
Finally, in many cases just having a single ``instance'' of a camera available for reasoning is not enough.
For example, when reasoning about a dynamically allocated data structure, every time a new instance of that data structure is created, we will want a fresh resource governing the state of this particular instance.
While it would be possible to handle this problem whenever it comes up, it turns out to be useful to provide a general solution.
The purpose of this section is to describe how we solve these issues.
\paragraph{Picking the resources.}
The key ingredient that we will employ on top of the base logic is to give some more fixed structure to the resources.
To instantiate the logic with dynamic higher-order ghost state, the user picks a family of locally contractive bifunctors $(\iFunc_i : \OFEs^\op\times\OFEs\to\CMRAs)_{i \in\mathcal{I}}$.
(This is in contrast to the base logic, where the user picks a single, fixed camera that has a unit.)
From this, we construct the bifunctor defining the overall resources as follows:
Here, $\iPreProp$ is a COFE defined as the fixed-point of a locally contractive bifunctor, which exists and is unique up to isomorphism by \thmref{thm:america_rutten}, so we obtain some object $\iPreProp$ such that:
Now we can instantiate the base logic described in \Sref{sec:base-logic} with $\Res$ as the chosen camera:
\[\Sem{\Prop}\eqdef\UPred(\Res)\]
We obtain that $\Sem{\Prop}=\iProp$.
Effectively, we just defined a way to instantiate the base logic with $\Res$ as the camera of resources, while providing a way for $\Res$ to depend on $\iPreProp$, which is isomorphic to $\Sem\Prop$.
We thus obtain all the rules of \Sref{sec:base-logic}, and furthermore, we can use the maps $\wIso$ and $\wIso^{-1}$ \emph{in the logic} to convert between logical propositions $\Sem\Prop$ and the domain $\iPreProp$ which is used in the construction of $\Res$ -- so from elements of $\iPreProp$, we can construct elements of $\Sem{\textlog M}$, which are the elements that can be owned in our logic.
\paragraph{Proof composability.}
To make our proofs composable, we \emph{generalize} them over the family of functors.
This is possible because we made $\Res$ a \emph{product} of all the cameras picked by the user, and because we can actually work with that product ``pointwise''.
So instead of picking a \emph{concrete} family, proofs assume they are given an \emph{arbitrary} family of functors, plus a proof that this family \emph{contains the functors they need}.
Composing two proofs is then merely a matter of conjoining the assumptions they make about the functors.
Since the logic is entirely parametric in the choice of functors, there is no trouble reasoning without full knowledge of the family of functors.
Only when the top-level proof is completed do we ``close'' it by picking a concrete family that contains exactly those functors the proof needs.
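Schematically, writing $F_1$ and $F_2$ for the sets of functors required by two proofs (hypothetical names, used only for this sketch), the two proofs take the shapes
\[
\forall \Sigma.\; F_1 \subseteq \Sigma \Rightarrow \dots
\qquad
\forall \Sigma.\; F_2 \subseteq \Sigma \Rightarrow \dots
\]
and their composition takes the shape $\forall \Sigma.\; F_1 \cup F_2 \subseteq \Sigma \Rightarrow \dots$, which is finally closed by picking $\Sigma \eqdef F_1 \cup F_2$.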
\paragraph{Dynamic resources.}
Finally, the use of finite partial functions lets us have as many instances of any camera as we could wish for:
Because there can only ever be finitely many instances already allocated, it is always possible to create a fresh instance with any desired (valid) starting state.
This is best demonstrated by giving some proof rules.
So let us first define the notion of ghost ownership that we use in this logic.
Assuming that the family of functors contains the functor $\Sigma_i$ at index $i$, and furthermore assuming that $\monoid_i =\Sigma_i(\iPreProp, \iPreProp)$, given some $\melt\in\monoid_i$ we define:
This is ownership of the pair (element of the product over all the functors) that has the empty finite partial function in all components \emph{except for} the component corresponding to index $i$, where we own the element $\melt$ at index $\gname$ in the finite partial function.
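Writing $[\gname \mapsto \melt]$ for the singleton finite partial function and $[i \mapsto \dots]$ for the tuple that is $\emptyset$ in all other components (notation assumed for this sketch only), the definition can be rendered as
\[
\ownGhost\gname{\melt : \monoid_i} \eqdef \ownM{\bigl[\,i \mapsto [\gname \mapsto \melt]\,\bigr]} .
\]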
We can show the following properties for this form of ownership:
For further information, visit the Iris project website at \url{http://plv.mpi-sws.org/iris/}.
\end{abstract}
\clearpage\begingroup
\tableofcontents
\endgroup
\clearpage\begingroup
\section{Iris from the Ground Up}
In \citetitle{iris-ground-up}~\cite{iris-ground-up}, we describe Iris~3.1 in a bottom-up way.
That paper is hence much more suited as an introduction to the model of Iris than this reference, which mostly contains definitions, not explanations or examples.
The following differences between Iris as described in \citetitle{iris-ground-up} and the latest version documented here are worth mentioning:
\begin{itemize}
\item As an experimental feature, we added the \emph{plainly modality} $\plainly$.
\item As an experimental feature, weakest preconditions take a \emph{stuckness} $\stuckness$ as parameter, indicating whether the program may get stuck or not.