@@ -149,7 +149,7 @@ Notice also that the core of an RA is a strict generalization of the unit that a
We further define $\melt\mupd\meltB\eqdef\melt\mupd\set\meltB$.
\end{defn}
The proposition $\melt\mupd\meltsB$ says that every element $\maybe{\melt_\f}$ compatible with $\melt$ (we also call such elements \emph{frames}) must also be compatible with some $\meltB\in\meltsB$.
Notice that $\maybe{\melt_\f}$ could be $\mnocore$, so the frame-preserving update can also be applied to elements that have \emph{no} frame.
Intuitively, this means that whatever assumptions the rest of the program is making about the state of $\gname$, if these assumptions are compatible with $\melt$, then updating to $\meltB$ will not invalidate any of these assumptions.
Since Iris ensures that the global ghost state is valid, this means that we can soundly update the ghost state from $\melt$ to a non-deterministically picked $\meltB\in\meltsB$.
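As a concrete illustration of this definition, consider the exclusive RA (we write $\exinj$ for its injection; the macro name is our assumption, not taken from the surrounding text). The only frame compatible with an exclusive element is $\mnocore$, and $\mnocore$ is compatible with every valid element, so the following sketch of a frame-preserving update goes through:

```latex
% A sketch, not part of the surrounding text: updating an exclusive element.
% The only \maybe{\melt_\f} compatible with \exinj(\melt) is \mnocore,
% and \mnocore is also compatible with \exinj(\meltB), so the update is
% frame-preserving for any valid \meltB.
\[ \exinj(\melt) \mupd \exinj(\meltB) \]
```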
@@ -218,7 +218,7 @@ In writing $\vctx, x:\type$, we presuppose that $x$ is not already declared in $
The judgment $\vctx\mid\prop\proves\propB$ says that with free variables $\vctx$, proposition $\propB$ holds whenever assumption $\prop$ holds.
Most of the rules will entirely omit the variable contexts $\vctx$.
In this case, we assume the same arbitrary context is used for every constituent of the rules.
%Furthermore, an arbitrary \emph{boxed} proposition context $\always\pfctx$ may be added to every constituent.
Axioms $\vctx\mid\prop\provesIff\propB$ indicate that both $\vctx\mid\prop\proves\propB$ and $\vctx\mid\propB\proves\prop$ are proof rules of the logic.

The idea behind the \emph{boxes} is to have a proposition $\prop$ that is actually split into a number of pieces, each of which can be taken out and put back in separately.
In some sense, this is a replacement for having an ``authoritative PCM of Iris propositions itself''.
It is similar to the pattern involving saved propositions that was used for the barrier~\cite{iris2}, but more complicated because there are some operations that we want to perform without a later.

Roughly, the idea is that a \emph{box} is a container for a proposition $\prop$.
A box consists of a bunch of \emph{slices} which decompose $\prop$ into a separating conjunction of the propositions $\propB_\sname$ governed by the individual slices.
Each slice is either \emph{full} (it currently contains $\propB_\sname$) or \emph{empty} (it currently contains nothing).
The proposition governing the box keeps track of the state of all the slices that make up the box.
The crux is that opening and closing of a slice can be done even if we only have ownership of the box ``later'' ($\later$).

The interface for boxes is as follows:
The two core propositions are: $\BoxSlice\namesp\prop\sname$, saying that there is a slice in namespace $\namesp$ with name $\sname$ and content $\prop$; and $\ABox\namesp\prop{f}$, saying that $f$ describes the slices of a box in namespace $\namesp$, such that all the slices together contain $\prop$. Here, $f$ is of type $\nat\fpfn\BoxState$, mapping names to states, where $\BoxState\eqdef\set{\BoxFull, \BoxEmp}$.
@@ -42,9 +42,9 @@ We collect here some important and frequently used derived proof rules.
Noteworthy here is the fact that $\prop\proves\later\prop$ can be derived from Löb induction, and $\TRUE\proves\plainly\TRUE$ can be derived via $\plainly$ commuting with universal quantification ranging over the empty type $0$.

\subsection{Persistent Propositions}
We call a proposition $\prop$ \emph{persistent} if $\prop\proves\always\prop$.
These are propositions that ``don't own anything'', so we can (and will) treat them like ``normal'' intuitionistic propositions.
Of course, $\always\prop$ is persistent for any $\prop$.
Furthermore, by the proof rules given in \Sref{sec:proof-rules}, $\TRUE$, $\FALSE$, $t = t'$ as well as $\ownGhost\gname{\mcore\melt}$ and $\mval(\melt)$ are persistent.
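A useful derived consequence, sketched here (it follows from $\prop\proves\always\prop$ together with the standard rules for $\always$), is that persistent propositions can be duplicated at will:

```latex
% Sketch: if \prop is persistent, then \prop entails \always\prop; the box
% makes the proposition duplicable, and eliminating \always on one copy gives:
\[ \prop \proves \prop * \prop \]
```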
...
@@ -52,15 +52,15 @@ Persistence is preserved by conjunction, disjunction, separating conjunction as
\subsection{Timeless Propositions and Except-0}
One of the troubles of working in a step-indexed logic is the ``later'' modality $\later$.
It turns out that we can somewhat mitigate this trouble by working below the following \emph{except-0} modality:
\[\diamond\prop\eqdef\later\FALSE\lor\prop\]
This modality is useful because there is a class of propositions which we call \emph{timeless} propositions, for which we have
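The rule itself is elided here; as a hedged restatement of the standard formulation, a proposition $\prop$ is timeless when a later on it can be stripped underneath except-0:

```latex
% Standard formulation of timelessness (our restatement; the elided original
% may phrase this as a proof rule rather than a plain entailment):
\[ \later\prop \proves \diamond\prop \]
```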
@@ -26,10 +26,10 @@ For the remaining base types $\type$ defined by the signature $\Sig$, we pick an
\]
For each function symbol $\sigfn : \type_1, \dots, \type_n \to\type_{n+1}\in\SigFn$, we pick a function $\Sem{\sigfn} : \Sem{\type_1}\times\dots\times\Sem{\type_n}\nfn\Sem{\type_{n+1}}$.

\judgment[Interpretation of propositions.]{\Sem{\vctx\proves\term : \Prop} : \Sem{\vctx}\nfn\UPred(\monoid)}
Remember that $\UPred(\monoid)$ is isomorphic to $\monoid\monra\SProp$.
We are thus going to define the propositions as mapping camera elements to sets of step-indices.
\begin{align*}
\Sem{\vctx\proves t =_\type u : \Prop}_\venv&\eqdef
@@ -13,7 +13,7 @@ The counterexample assumes a higher-order logic with separating conjunction, mag
\begin{thm}
\label{thm:counterexample-1}
If there exists a type $\GName$ and a proposition $\_\Mapsto\_ : \GName\to\Prop\to\Prop$ associating names $\gamma : \GName$ to propositions and satisfying:
The type $\GName$ should be thought of as the type of ``locations'' and $\gname\Mapsto P$ should be read as stating that location $\gname$ ``stores'' proposition $P$.
Notice that these are immutable locations, so the maps-to proposition is persistent.
The rule \ruleref{sprop-alloc} is then thought of as allocation, and the rule \ruleref{sprop-agree} states that a given location $\gname$ can only store \emph{one} proposition, so multiple witnesses covering the same location must agree.
%Compared to saved propositions in prior work, \ruleref{sprop-alloc} is stronger since the stored proposition can depend on the name being allocated.
...
@@ -36,15 +36,15 @@ The rule \ruleref{sprop-alloc} is then thought of as allocation, and the rule \r
The conclusion of \ruleref{sprop-agree} is usually guarded by a $\later$.
The point of this theorem is to show that said later is \emph{essential}, as removing it introduces inconsistency.
%
The key to proving \thmref{thm:counterexample-1} is the following proposition:
Intuitively, $A(\gname)$ says that the saved proposition named $\gname$ does \emph{not} hold, \ie we can disprove it.
Using \ruleref{sprop-persist}, it is immediate that $A(\gname)$ is persistent.
Now, by applying \ruleref{sprop-alloc} with $A$, we obtain a proof of $\prop\eqdef\gname\Mapsto A(\gname)$: this says that the proposition named $\gname$ is the proposition saying that it, itself, does not hold.
In other words, $\prop$ says that the proposition named $\gname$ expresses its own negation.
Unsurprisingly, this leads to a contradiction, as is shown in the following lemma:
\begin{lem}\label{lem:saved-prop-counterexample-not-agname} We have $\gname\Mapsto A(\gname)\proves\always\lnot A(\gname)$ and $\gname\Mapsto A(\gname)\proves A(\gname)$. \end{lem}
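Combining the two entailments of the lemma yields the contradiction directly. A sketch of the derivation (our reconstruction, using only the lemma and elimination of $\always$; the elided proof in the text may proceed differently):

```latex
\begin{align*}
\gname\Mapsto A(\gname) &\proves \always\lnot A(\gname) \land A(\gname)
  && \text{by the two parts of the lemma} \\
&\proves \lnot A(\gname) \land A(\gname)
  && \text{eliminating $\always$} \\
&\proves \FALSE
\end{align*}
```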
@@ -9,8 +9,8 @@ So in the following, we assume that some language $\Lang$ was fixed.
\label{sec:composeable-resources}
The base logic described in \Sref{sec:base-logic} works over an arbitrary camera $\monoid$ defining the structure of the resources.
It turns out that we can generalize this further and permit picking cameras ``$\iFunc(\Prop)$'' that depend on the structure of propositions themselves.
Of course, $\Prop$ is just the syntactic type of propositions; for this to make sense we have to look at the semantics.

Furthermore, there is a composability problem with the given logic: if we have one proof performed with camera $\monoid_1$, and another proof carried out with a \emph{different} camera $\monoid_2$, then the two proofs are actually carried out in two \emph{entirely separate logics} and hence cannot be combined.
...
@@ -45,11 +45,11 @@ We do not need to consider how the object $\iPreProp$ is constructed, we only ne
\wIso^{-1}&: \iPreProp\nfn\iProp
\end{align*}
Notice that $\iProp$ is the semantic model of propositions for the base logic described in \Sref{sec:base-logic} with $\Res$:
\[\Sem{\Prop}\eqdef\iProp=\UPred(\Res)\]
Effectively, we just defined a way to instantiate the base logic with $\Res$ as the camera of resources, while providing a way for $\Res$ to depend on $\iPreProp$, which is isomorphic to $\Sem\Prop$.
We thus obtain all the rules of \Sref{sec:base-logic}, and furthermore, we can use the maps $\wIso$ and $\wIso^{-1}$ \emph{in the logic} to convert between logical propositions $\Sem\Prop$ and the domain $\iPreProp$ which is used in the construction of $\Res$ -- so from elements of $\iPreProp$, we can construct elements of $\Sem{\textlog M}$, which are the elements that can be owned in our logic.

\paragraph{Proof composability.}
To make our proofs composable, we \emph{generalize} our proofs over the family of functors.
...
@@ -120,7 +120,7 @@ The last two are the tokens used for managing invariants, $\textmon{Inv}$ is the
We assume that at the beginning of the verification, instances named $\gname_{\textmon{State}}$, $\gname_{\textmon{Inv}}$, $\gname_{\textmon{En}}$ and $\gname_{\textmon{Dis}}$ of these cameras have been created, such that these names are globally known.

\paragraph{World Satisfaction.}
We can now define the proposition $W$ (\emph{world satisfaction}) which ensures that the enabled invariants are actually maintained:
\begin{align*}
W \eqdef{}&\Exists I : \InvName\fpfn\Prop.
\begin{array}[t]{@{} l}
...
@@ -133,7 +133,7 @@ We can now define the assertion $W$ (\emph{world satisfaction}) which ensures th
\end{align*}

\paragraph{Invariants.}
The following proposition states that an invariant with name $\iname$ exists and maintains proposition $\prop$:
@@ -246,11 +246,11 @@ Still, just to give an idea of what view shifts ``are'', here are some proof rul
\subsection{Weakest Precondition}
Finally, we can define the core piece of the program logic, the proposition that reasons about program behavior: weakest precondition, from which Hoare triples will be derived.

\paragraph{Defining weakest precondition.}
We assume that everything making up the definition of the language, \ie values, expressions, states, the conversion functions, reduction relation and all their properties, is suitably reflected into the logic (\ie they are part of the signature $\Sig$).
We further assume (as a parameter) a predicate $\stateinterp : \State\to\iProp$ that interprets the physical state as an Iris proposition.
This can be instantiated, for example, with ownership of an authoritative RA to tie the physical state to fragments that are used for user-level proofs.
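As a concrete (hypothetical) instance of that remark, assuming an authoritative camera over states with the customary $\authfull$ notation (an assumption on our part) and the globally known name $\gname_{\textmon{State}}$ from above, one could take:

```latex
% A sketch: the state interpretation owns the authoritative view of the
% physical state; user-level proofs own fragments that must agree with it.
\[ \stateinterp(\sigma) \eqdef \ownGhost{\gname_{\textmon{State}}}{\authfull \sigma} \]
```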
\begin{align*}
...
@@ -442,7 +442,7 @@ We only give some of the proof rules for Hoare triples here, since we usually do
\subsection{Invariant Namespaces}
\label{sec:namespaces}
In \Sref{sec:invariants}, we defined a proposition $\knowInv\iname\prop$ expressing knowledge (\ie the proposition is persistent) that $\prop$ is maintained as an invariant with name $\iname$.
The concrete name $\iname$ is picked when the invariant is allocated, so it cannot possibly be statically known -- it will always be a variable that is threaded through everything.
However, we hardly care about the actual, concrete name.
All we need to know is that this name is \emph{different} from the names of other invariants that we want to open at the same time.
...
@@ -468,7 +468,7 @@ In order to connect this up to the definitions of \Sref{sec:invariants}, we need
Any injective mapping $\textlog{namesp\_inj}$ will do; and such a mapping has to exist because $\List(\nat)$ is countable and $\InvName$ is infinite.
Whenever needed, we (usually implicitly) coerce $\namesp$ to its encoded suffix-closure, \ie to the set of encoded structured invariant names contained in the namespace: \[\namecl\namesp\eqdef\setComp{\iname}{\Exists\namesp'. \iname=\textlog{namesp\_inj}(\namesp' \dplus\namesp)}\]
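One consequence of this definition, sketched here for illustration: extending a namespace only shrinks its closure, since every extension of $\namesp' \dplus \namesp$ is in particular an extension of $\namesp$:

```latex
% Follows directly from the definition of the suffix-closure above:
% an element of the left-hand set has the form
% namesp_inj(namesp'' ++ namesp' ++ namesp), which is also in the right-hand set.
\[ \namecl{(\namesp' \dplus \namesp)} \subseteq \namecl{\namesp} \]
```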
We will overload the notation for invariant propositions to use namespaces instead of names:
We can now derive the following rules (this involves unfolding the definition of fancy updates):
\begin{mathpar}
...
@@ -489,11 +489,11 @@ We can now derive the following rules (this involves unfolding the definition of
The two rules \ruleref{inv-open} and \ruleref{inv-open-timeless} above may look a little surprising, in the sense that it is not clear at first sight how they would be applied.
The rules are the first \emph{accessors} that show up in this document.
One way to think about such propositions is as follows:
Given some accessor, if during our verification we have the proposition $\prop$ and the mask $\mask_1$ available, we can use the accessor to \emph{access} $\propB$ and obtain the witness $\var$.
We call this \emph{opening} the accessor, and it changes the mask to $\mask_2$.
Additionally, opening the accessor provides us with $\All\varB. \propB' \vsW[\mask_2][\mask_1] \propC$, a \emph{linear view shift} (\ie a view shift that can only be used once).
This linear view shift tells us that in order to \emph{close} the accessor again and go back to mask $\mask_1$, we have to pick some $\varB$ and establish the corresponding $\propB'$.
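Assembling this description into a single formula, a generic accessor can be sketched in the following shape (our reconstruction from the surrounding prose, using this section's view-shift notation):

```latex
% Opening yields a witness \var together with \propB and a one-shot (linear)
% view shift for closing, which restores mask \mask_1 in exchange for \propB':
\[ \prop \vs[\mask_1][\mask_2] \Exists\var. \propB *
     (\All\varB. \propB' \vsW[\mask_2][\mask_1] \propC) \]
```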