# Iris issues

Issue tracker: https://gitlab.mpi-sws.org/iris/iris/-/issues

## [#1: Thread-local state, user-controlled fork](https://gitlab.mpi-sws.org/iris/iris/-/issues/1) (Ralf Jung, 2016-08-23)

Change `core_lang` to get rid of evaluation contexts and let the user control what fork does. This will invalidate the Bind rule; I will create another issue for that.

## [#2: Bring back the Bind rule](https://gitlab.mpi-sws.org/iris/iris/-/issues/2) (Ralf Jung, 2019-11-19)

Once #1 is done: provide a module `ectx_lang` for languages with an evaluation-context interface, and show that this implies a bind rule. This will be similar to how `core_lang` looks right now, but not identical - in particular, the user will be in control of fork.
@swasey I recall you changed the `core_lang` axioms some time ago to actually make some sense. These changed axioms will then become part of `ectx_lang`. Are they suitably canonical, or is there something we should discuss there? Do we know that an actual Coq implementation of some lambda calculus can satisfy these axioms? Once we have `ectx_lang` in the development, how much work would it take to port your old AutoSubst-based language to that interface, so that we have a (nice?) example of an instantiated language?
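To make the shape of such an interface concrete, here is a rough Coq sketch of what an evaluation-context interface could look like. This is a hypothetical illustration with made-up names and only a first guess at the axioms, not the actual `ectx_lang` proposal:

```coq
(* Hypothetical sketch of an evaluation-context language interface;
   all names are invented and the axioms are only a first guess. *)
Section ectx_lang_sketch.
  Context (expr val ectx : Type).
  Context (to_val : expr -> option val).
  Context (empty_ctx : ectx) (comp_ctx : ectx -> ectx -> ectx).
  Context (fill : ectx -> expr -> expr).

  Record ectx_axioms : Prop := {
    (* Filling with the empty context is a no-op. *)
    fill_empty : forall e, fill empty_ctx e = e;
    (* Nested filling corresponds to context composition. *)
    fill_comp : forall K1 K2 e, fill K1 (fill K2 e) = fill (comp_ctx K1 K2) e;
    (* A filled non-value stays a non-value; this is the kind of fact
       a derived bind rule would rely on. *)
    fill_not_val : forall K e, to_val e = None -> to_val (fill K e) = None
  }.
End ectx_lang_sketch.
```

The point of such an interface is that the bind rule would be proved once and for all from these axioms, instead of being a primitive of each language instance.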
## [#3: Prove adequacy of observational view shifts](https://gitlab.mpi-sws.org/iris/iris/-/issues/3) (Ralf Jung, 2016-11-24; assigned to David Swasey)

Prove an adequacy theorem that uses an observational view shift to establish that all the physical states during the execution of a program satisfy some property.
@swasey If you still want to do this, assign this issue to yourself, please.
One thing I have been wondering is whether this result should/could somehow be generalized to the case where some *code* has to run before the invariant we talk about is actually established. I have some ideas here that should be fleshed out on a whiteboard.
## [#4: Re-do the basics: Cofe's and (CM)RA's](https://gitlab.mpi-sws.org/iris/iris/-/issues/4) (Ralf Jung, 2016-01-18; assigned to Robbert Krebbers)

Slim down the algebraic hierarchy to contain only COFEs, RAs, and CMRAs, all as unbundled typeclasses and bundled canonical structures. Instead of a general notion of monotone function, we only really need one functor that maps a CMRA to monotone, step-indexed predicates over that CMRA. Maybe call that uPred? That's kind of what it is, though it's not the same uPred that we had originally - that one could only deal with RAs.
Then we don't need the notion of a canonical partial order on a type - often, it's not even clear which one that should be. This also gives us the opportunity to define the metric and order on that uPred such that we can have "own(a) => valid(a)", as opposed to right now, where we have "own(a) \vs valid(a)". It's not entirely clear yet whether that latter part will actually work out without significant pain.
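For illustration, the "unbundled typeclasses plus bundled canonical structures" pattern could look roughly as follows. This is a toy sketch with invented names, not the actual hierarchy:

```coq
(* Unbundled: the operation and the laws are separate typeclasses,
   so lemmas can be stated with exactly the assumptions they need. *)
Class Op (A : Type) := op : A -> A -> A.
Class RA (A : Type) `{Op A} : Prop := {
  ra_assoc : forall x y z : A, op (op x y) z = op x (op y z);
  ra_comm  : forall x y : A, op x y = op y x
}.

(* Bundled: a structure packaging a carrier with its operation and
   laws, usable as a first-class object (e.g. as a functor argument). *)
Structure ra := Ra {
  ra_car :> Type;
  ra_op : Op ra_car;
  ra_laws : @RA ra_car ra_op
}.
```

The unbundled classes keep lemma statements lightweight, while the bundled structure is what a functor like the uPred construction would take as input.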
## [#5: Come up with a story for non-atomic, non-pure expressions](https://gitlab.mpi-sws.org/iris/iris/-/issues/5) (Ralf Jung, 2015-11-06)

We currently have very poor support for expressions that depend on the state (either changing it, or only being safe for some states) and that do not reduce to a value in a single step - that is, expressions that are not atomic. We cannot use the atomic rule around them, so once ownership of the physical state has been put into an invariant, such expressions cannot be given a triple. Ouch.
@robbertkrebbers mentioned that such expressions do come up in languages he has studied. For example, calling a function will both push a stackframe (which is part of the global state) and expand the function body - the operation is hence neither atomic nor pure.
The best proposal on the table so far is to further strengthen (and complicate) the lifting lemmas, and make it possible for them to pull the physical state out of an invariant. That's kind of ugly though: it would mean that these lemmas deal with both the "physical state change" aspect of the reduction and the "atomicity" aspect. The atomic rule of consequence can probably not be derived from this, and would remain a primitive. That primitive rule would also be much nicer to use: the lifting lemmas "open" an invariant only around the first reduction step of an expression, not around the entire triple, so we don't even have nice notation for them.
@janno @swasey what do you think?
## [#6: Add & update appendix](https://gitlab.mpi-sws.org/iris/iris/-/issues/6) (Ralf Jung, 2016-03-17)

I think we should have a version of the appendix that's in sync with the Coq development right in the repository. The latest public version is outdated, and I have sent various snapshots of the appendix to various people. Pointing them to this git repository would make it possible for them to stay in sync with updates, while still making it clear that this is all in development and not polished.
As a prerequisite for this to be useful, we have to sync the appendix and the Coq sources again - at least the first part, describing the model. I am not sure yet whether we want the rest of the appendix in here as well. It's less related to the Coq development, but it's more what people are actually interested in. @janno suggested some kind of hackathon, where we sit together and check that definitions actually match. That should probably happen after #4 is done. @swasey, what do you think?
## [#7: Support stored propositions](https://gitlab.mpi-sws.org/iris/iris/-/issues/7) (Ralf Jung, 2016-01-20)

It would be nice to have support for "stored propositions" as described in <http://www-users.cs.york.ac.uk/~miked/publications/verifying_custom_sync_constructs.pdf>.
@kaspersv suggested they could be hacked into Iris without changing the model. Let me quote his mail (with some parts of my mail also inline):
>> And finally, of course, there's the question whether this could be done
>> in Iris. I am sure we could have the following proof rule:
>>
>> \inv^i (P) * \inv^i (Q) => \later (P <=> Q)
>
>Indeed, we can even strengthen it to inv^j (P) * inv^j (Q) => later (P = Q)
>because the interpretation of inv is slightly stronger in Iris than rintrp is
>in iCAP. In particular, inv^j (P) asserts that the invariant is n-equal to P,
>while iCAP only asserts that the region interpretation is n-equal to
>the given predicate when applied to any future world of the current
>world.
>
>> but we can't easily use the trick you did, with the 2-state STS, in
>> order to not actually store a P in the invariant. Did you think about this?
>
>Yes, we did think about an encoding, using two mutually exclusive
>ghost resources gh(0) and gh(1). The idea would be to create an
>invariant (P * gh(0)) \/ gh(1) and transfer gh(1) to the region. Then
>the question boils down to:
>
>(P * gh(0)) \/ gh(1) = (Q * gh(0)) \/ gh(1) |- P = Q
>
>I don’t think it is possible to prove this internally and we weren’t
>sufficiently motivated to check if it holds in the semantics, but it
>seems like it should.
As an alternative, @janno and I tossed around the idea of letting the user choose an arbitrary, locally non-expansive functor to construct the ghost monoid. That would allow the user to make use of my agreement construction, with "Ag(\later Prop)" being in the ghost monoid. This would then let the user do stored propositions without hacks like the above. @robbertkrebbers also seemed to like that.

However, we would lose the axiom that ghost ownership is timeless, and I see no easy way to regain it for the part of the functor that has a discrete metric. So, probably, we would want to keep both a discrete monoid (where ownership is timeless) and a functor (where crazy things can happen) in the interface, and pair them together ourselves - which raises the question of whether that's worth it. The model would end up with a 4-tuple monoid instead of the 3-tuple it has currently (invariants, ghost, physical).

Right now, I cannot imagine using any other monoid with propositions embedded. Of course it would be nice to have, for example, STSs with states indexed by propositions, but actually doing that would require making the STS a functor, and showing all the functorial properties, local non-expansiveness, and so on - probably it would be much easier to go with stored propositions and index the STS by the name instead. For that reason, right now, I'd prefer the solution suggested by @kaspersv, if it works.
@jtassaro: I mentioned this yesterday during our conversation.
## [#8: Modular global monoid](https://gitlab.mpi-sws.org/iris/iris/-/issues/8) (Ralf Jung, 2016-03-17)

Right now, Iris is instantiated with a single global monoid. As described in the appendix, we can easily use the product monoid to have multiple different monoids available in Iris, but this still requires knowing the exact set of all monoids we will need up front, before instantiating the logic. That's not nice.
I see two possible solutions.
### The "any" monoid
We could have a monoid of the form "\Sigma (A: Type) {RA A}. A". This is a monoid that can be "any" monoid. However, doing that in Coq would get very hairy. We would have to require equality of the types and RAs in the composition, and somehow transport the actual data in A over that equality to even compose the pieces. I think FCSL is doing something like this, but they assume all sorts of classical axioms. We would probably want at least function extensionality and Axiom K.
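To see where the hairiness comes from, consider this toy sketch (all names made up). Composing two elements of the "any" monoid is only meaningful when their carriers coincide, so the data has to be transported along a type equality:

```coq
(* Toy sketch: an element of the "any" monoid packages a type, its RA
   structure, and an element of that type. *)
Class RA (A : Type) := ra_op : A -> A -> A.

Record any_ra := AnyRa { car : Type; car_ra : RA car; elem : car }.

(* Transport along a type equality. *)
Definition transport {A B : Type} (e : A = B) (a : A) : B :=
  match e in (_ = T) return T with eq_refl => a end.

(* Composition needs a proof that the carriers agree - and we would
   additionally need the two RA structures to agree. Reasoning about
   such equalities between types is where axioms like K and functional
   extensionality tend to creep in. *)
Definition any_op (x y : any_ra) (H : car x = car y) : any_ra :=
  AnyRa (car y) (car_ra y)
        (@ra_op (car y) (car_ra y) (transport H (elem x)) (elem y)).
```

Even this sketch punts on the hard part: making the equality proofs line up when proving the monoid laws.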
### The "all" monoid
When I was visiting @robbertkrebbers, we talked about indexed products a lot, as they pose some interesting challenges when done in Coq, where we don't want to assume the axiom of choice. We agreed that, for example, it would be nice to have the set of all STS graphs + token assignments in the index set, so that we could actually have *all* STS monoids at once, rather than having to add every single STS monoid to the global monoid individually.
That had me thinking: could it be possible to index the product by monoids? Essentially, replace the \Sigma above by a \Pi. That would be the monoid of all monoids, with the index defining the monoid structure at that position. We just do everything pointwise, so there should be no trouble with inhomogeneous equality or anything like that. As long as we make sure that a module always uses the *same* index, and nobody reasons about equality of indices, we should be good - right? Or did I miss anything here?
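A toy sketch of the pointwise idea (again with made-up names); since every position carries its own monoid structure, composition never has to compare types:

```coq
(* Toy sketch: the index set consists of (codes for) monoids. *)
Class RA (A : Type) := ra_op : A -> A -> A.

Record ra_index := RaIndex { idx_car : Type; idx_ra : RA idx_car }.

(* An element of the "all" monoid picks an element at every index;
   composition is pointwise, so no transport along type equalities
   is ever needed. *)
Definition all_ra := forall i : ra_index, idx_car i.

Definition all_op (x y : all_ra) : all_ra :=
  fun i => @ra_op (idx_car i) (idx_ra i) (x i) (y i).
```

Contrast this with the \Sigma version: here each position is homogeneous by construction, so the monoid laws should follow pointwise from the laws at each index.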
@janno, @swasey, we also talked about this problem at some point, so you're probably also interested in this.
## [#9: Formalize a realistic language](https://gitlab.mpi-sws.org/iris/iris/-/issues/9) (Ralf Jung, 2016-01-18)

To gain confidence in the axioms we use for abstracting away the language, and in our lifting lemmas, it'd be great to have a formalization of the lambda calculus we used in the paper (or maybe one with a heap; I don't care strongly either way).
@swasey I think you're the one who did most work in this direction.
## [#10: Defining RAs with a partial composition operation](https://gitlab.mpi-sws.org/iris/iris/-/issues/10) (Ralf Jung, 2016-01-18)

Right now, if your RA has a partial composition operation and you'd like to make a case distinction on something that's not decidable, you have to use some rather hideous trick. For example, in the case of STSs, composition is defined only if the intersection of the state sets is non-empty. To define this in Coq, the STS monoid actually carries around a `valid: Prop` piece of "data", and equality of monoid elements is defined as a quotient, equalizing all invalid elements. @robbertkrebbers convinced me this also comes up for some other monoid - I believe it was the fractions - but actually, I can't tell any more why this should come up there. Robbert, could you remind me?
It may be a good idea to axiomatize a class of "DRAs" (disjoint RAs) that have, instead of a validity predicate, a binary *disjointness* predicate. Composition would still have to be total, but commutativity (and potentially associativity) would only have to hold if the involved elements are disjoint. We could then have a functor from DRAs to RAs, using the above construction with the `valid: Prop` and the quotient. This way, people could define such monoids much more easily, but we wouldn't have to suffer the pain of using DRAs in the model. (Which would be really annoying; every time we use commutativity, we'd have to prove disjointness.)
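Such a DRA interface might look roughly like this (a hypothetical sketch; all names invented here):

```coq
(* Hypothetical DRA interface: composition is total, but the algebraic
   laws are guarded by a binary disjointness predicate. *)
Class DraOp (A : Type) := dra_op : A -> A -> A.
Class DraDisj (A : Type) := disj : A -> A -> Prop.

Class DRA (A : Type) `{DraOp A} `{DraDisj A} : Prop := {
  dra_disj_comm : forall x y, disj x y -> disj y x;
  (* Commutativity only for disjoint elements... *)
  dra_comm : forall x y, disj x y -> dra_op x y = dra_op y x;
  (* ...and associativity only under the relevant disjointness
     side conditions. *)
  dra_assoc : forall x y z,
    disj x y -> disj (dra_op x y) z ->
    dra_op (dra_op x y) z = dra_op x (dra_op y z)
}.
```

The functor to RAs would then pair each element with a validity proposition and quotient by it, as in the STS construction described above, so the model never has to discharge disjointness side conditions.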
## [#11: Structured Invariant identifiers](https://gitlab.mpi-sws.org/iris/iris/-/issues/11) (Ghost User, 2016-03-17)

I'm starting to get pretty annoyed working with global invariant names in Iris. In particular, if I have a library that creates multiple invariants, I can parameterise its specification with an infinite set of names that the library can use. However, it is very annoying to refer to particular distinct elements of this set in a precise way.
I believe proof outlines would benefit if we had an invariant identifier resource, unused(mask), that expressed that no invariants existed with an identifier in mask. This would allow the allocator to choose a name for invariants upon allocation, by consuming an unused resource:
> unused({ i }) * \later P \vs inv(i, P)
and of course the unused resource could be split arbitrarily:
> unused(mask_1 \uplus mask_2) <=> unused(mask_1) * unused(mask_2)
Libraries would then be parameterised by a single invariant identifier i and require the user to transfer ownership of unused({ concat(i, i') }) to the library upon instantiation. This would allow the library to use any invariant identifier starting with i for its invariants. In particular, it could choose to create two invariants with identifiers concat(i, "0") and concat(i, "1"), which would automatically be distinct in proof outlines.
I don't believe it is possible to define the unused resource in Iris, so I propose to extend Iris with a primitive invariant identifier resource.
Any comments or ideas for better solutions?
## [#12: Prove stronger stepping frame rule](https://gitlab.mpi-sws.org/iris/iris/-/issues/12) (Ralf Jung, 2016-08-22)

Right now, we have a "stepping frame rule" that looks as follows:
```text
{ P } e { Q } e is not a value
------------------------------------
{ \later R * P } e { R * Q }
```
Essentially, this allows us to step away a later when framing around any non-value. (We also sometimes call this the "atomic frame rule", since atomic expressions are never values. But the one above is just as valid, and obviously stronger.)
However, I think I will need for my Rust stuff a stronger version of this rule, that allows me to strip away a later *inside an invariant* from any non-value. This comes up because I have non-atomic reads and writes that are, well, non-atomic, and I still want to strip away a later from something that is located inside an invariant[1]. This rule is rather awkward to write down in terms of Hoare triples and view shifts:
```text
R0 E1^==>^E2 \later R1 R1 E2^==>^E1 R2 { P } e { Q }_E1
e is not a value E2 <= E1
------------------------------------------------------------------
{ R0 * P } e { R2 * Q }_E1
```
It looks nicer in terms of primitive view shifts and weakest-pre:
```text
wp_E1 e Q * vs_E1^E2(\later vs_E2^E1(R))
e is not a value E2 <= E1
-------------------------------------
wp_E1 e (Q * R)
```
When we re-do the weakest-pre proofs for V2.0, I think this is the rule we should show. (And derive the old one, of course.)
[1] If you are curious: The invariant has two states "vs(P)" and "P", with an STS making sure that one can only go from the 1st state to the 2nd. The rule I need allows me to open this invariant alongside the non-atomic access, obtain "\later (vs(P) \/ P)", strip away the later, view shift to "P", and put that back into the invariant; taking out the witness that the view shift has been performed. I do not need any resource from the invariant to justify the memory access, so it's fine for that one to be non-atomic. But I *do* need to perform this view shift, because immediately after the access I will want to open the invariant again and know I get "\later P", without any more skips.
Cc: @robbertkrebbers @swasey @janno
## [#13: Bring back unsafe triples](https://gitlab.mpi-sws.org/iris/iris/-/issues/13) (Ralf Jung, 2018-02-02)

They were lost in the transition from Iris 1.1 to Iris 2.0. @swasey needs them.

## [#14: Evaluation-context language](https://gitlab.mpi-sws.org/iris/iris/-/issues/14) (Ralf Jung, 2016-03-29)

We should have a typeclass for an evaluation-context based language, an instance to convert these into a general Iris language, and some specialized lifting lemmas for such languages.

## [#15: Unify frame-preserving updates and local updates](https://gitlab.mpi-sws.org/iris/iris/-/issues/15) (Ralf Jung, 2016-10-27)

The two seem to have a lot in common; see e.g. `frac.v`.

## [#16: Document variable naming conventions](https://gitlab.mpi-sws.org/iris/iris/-/issues/16) (Ralf Jung, 2019-11-01)

We should
* [x] Convert naming.txt to markdown
* [ ] document the naming conventions for single-letter variable names
* [ ] maybe make them more consistent between Coq and LaTeX.
* [x] Document postfixes like C, T, R, G
Cc @robbertkrebbers

# Issue #17: Sum CMRA / CMRA without core
(Ralf Jung, 2016-06-21)
We would like to support a sensible CMRA structure on sums. For example, we could use a sum CMRA to express `one_shot X` as `exclC unitC + ag X`. We could also express the disposable monoid using sums.
Right now, the CMRA structure on sums does not make much sense because both summands need to have a core. It is thus impossible to do a frame-preserving update from `inl a` to `inr b`. For sums to work out nicely, we need a way to have summands without a unit.
@robbertkrebbers and I agreed that the most promising way forward is to get rid of the core in the CMRA axioms, and instead add a typeclass for "CMRAs that also have a core". There should not be significant performance trouble, because the `iFunctor` will bundle the core of the user CMRA. The main open question is whether the loss of reflexivity of the CMRA inclusion will be a problem.
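To make the obstruction concrete, here is a brute-force sketch (in Python, not Iris code; the carrier and encoding are ad hoc) of the frame-preserving-update condition on a tiny sum RA resembling `one_shot`: an exclusive token on the left, agreement on `{0, 1}` on the right. Once the Excl summand has no unit, the `inl`-to-`inr` update the issue wants is valid, because no frame composes with the exclusive token.

```python
# Illustrative brute-force check of the frame-preserving-update condition
# (a ~~> b iff every frame compatible with a stays compatible with b) on a
# tiny sum RA resembling one_shot: Excl(unit) + Ag({0,1}). NOT Iris code.

CARRIER = [("inl", "ex"), ("inr", 0), ("inr", 1)]  # "bot" marks invalidity

def op(a, b):
    """Composition: the exclusive token composes with nothing; agreement
    elements compose iff they agree; mixed inl/inr composition is invalid."""
    if a[0] == "inr" and b[0] == "inr" and a[1] == b[1]:
        return a
    return "bot"

def valid(a):
    return a != "bot"

def frame_preserving(a, b):
    """a ~~> b: every frame f compatible with a is also compatible with b."""
    return all(valid(op(b, f)) for f in CARRIER if valid(op(a, f)))

# The "one shot": no frame composes with the exclusive token, so updating it
# to either agreement element is (vacuously) frame-preserving -- exactly the
# inl-to-inr update that the current sum CMRA cannot express.
assert frame_preserving(("inl", "ex"), ("inr", 0))
# Agreement, once established, cannot be changed:
assert not frame_preserving(("inr", 0), ("inr", 1))
```

The interesting case is the first assertion: it goes through only because the exclusive token rules out every frame, which is the whole point of the `one_shot` pattern.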
Cc @robbertkrebbers

# Issue #18: [Maybe] Remove invariants
(Ralf Jung, 2016-08-22)
I am reasonably sure that, now that we have higher-order ghost state, we can remove invariants and masks from core Iris and instead just have an assertion annotated at WP that is preserved throughout execution.
Cc @kaspersv @robbertkrebbers
## Changes to the model
Worlds are removed from the global resources.
World satisfaction becomes *significantly* simpler: Masks are removed, and we are left with
```text
\sigma |= r := {n | r \in V_{n} /\ r.\pi = ex(\sigma)}
```
(Or maybe we still have to make this trivial at step-index 0, I am not sure.)
This simplification is the main pay-off of the change. Of course, the complexity is not just gone; it is moved from the model to the logic -- following the usual pattern that we have seen over and over again with Iris.
Masks are just removed from primitive view shifts.
For WP, of course masks are removed, but we need to add something instead: We need to add a way for an assertion to be preserved at every single step of execution. So, we get something like
```text
wp_I(e, \Lam v. Q) = \Lam r n. \All r_f, \sigma, m, ...
(\Exists r_I. I r_I (m+1) /\ (m+1) \in \sigma |= (r * r_I * r_f)) ...
=> \Exists s_1, s_2. (\Exists r_I. I r_I m /\ m \in \sigma |= (s_1 * s_2 * r_I * r_f))
...
```
In other words, besides the frame r_f, we demand that there is some resource r_I satisfying I, both before and after the step has been taken. This replaces the former existential quantification over some map defining the resources backing the invariants, which was part of world satisfaction.
## Changes to the logic
Remove the invariant assertions and all rules related to invariants or masks.
We obtain a new form of the atomic rule of consequence. This is how it looks for Hoare triples:
```text
P * I ==> P' * I'
{P'} e {Q'}_I'
Q' * I' ==> Q * I
e atomic
-------------------
{P} e {Q}_I
```
I have yet to figure out the most elegant way to write this as a WP rule.
We also need a weakening rule, maybe like this:
```text
I <=> I' * I_f
{P'} e {Q'}_I'
-------------------
{P} e {Q}_I
```
All the other rules force the I to be the same in all constituents.
## Recovering invariants
To recover invariants, we first define the `I` that we typically use for triples. It is pretty much exactly the part we removed from world satisfaction, but expressed *inside* the logic:
```
W(E) := \Exists S \in \finpowerset(N). own ... (\authfull S) *
\Sep_{i \in S \cap E} \Exists P : iProp. \later P * i |=> P
```
In other words, there is some ghost state tracking which invariants are allocated. This ghost state has the structure `Auth(\finpowerset(N))`, where we use union (*not* disjoint!) as composition on N. Furthermore, for all invariants that exist and that are enabled in the mask `E`, there is a saved proposition defining which is the invariant with that index, and the invariant is enforced.
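As a sanity check on the choice of union rather than disjoint union, here is a minimal Python model (not Iris code; the `Auth` layer and validity are elided) of the finite-set RA. Union is idempotent, so every element is its own core, and knowledge that a name is allocated is freely duplicable.

```python
# Illustrative model (not Iris code) of the RA of finite sets of invariant
# names, with *union* as composition. Idempotence makes fragments duplicable.

def op(a, b):
    return a | b          # composition is plain union, NOT disjoint union

def core(a):
    return a              # every element is its own core: op(a, core(a)) == a

# Duplicable: owning the fragment {i} yields {i} * {i}.
assert op({3}, {3}) == {3}
assert op({3}, core({3})) == {3}

# Growing the set with a fresh name still composes with any fragment a
# client may hold (the full frame-preservation argument lives in the Auth
# construction, which is elided here).
S = {1, 2}
assert op(S | {3}, {2}) == {1, 2, 3}
```

With disjoint union instead, the first assertion would fail, and `own (\authfrag {i})` could not serve as persistent knowledge that invariant `i` exists.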
We can then recover a WP / Hoare triple with mask `E` by using `I := W(E)`. Furthermore, we define
```
P |={E1,E2}=> Q := P * W(E1) ==> Q * W(E2)
inv i P := own ... (\authfrag {i}) * i |=> P
```
We can then prove the old atomic rule of consequence, as well as the invariant-related view shifts, inside the logic.
We can go even further and, for example, allow creating an invariant that is *open*:
```text
E infinite
-------------------
W(E) ==> \Exists i \in E. inv i P * W(E \ {i})
```
This was not possible previously as there was no way to bind the `i` in the new mask.
## Open questions
Right now (in current Iris), the forked-off thread actually has a different mask than the local thread. I think to model this, we need a WP with *two* `I` annotations: One for the current thread, one for all forked-off threads. Maybe there has to be some relationship between these two. I am not sure how exactly this would look.
The weakening rule looks a little strange, and may not even be strong enough. What is the `I_f` to weaken `W(E)` to a smaller mask?

# Issue #19: [Maybe] Make uPred work on OFEs
(Ralf Jung, 2016-11-22)
I spent some time last week talking with Santiago, a student of Andrew Appel, trying to figure out the exact relation between their model and ours. We did not reach a definitive conclusion. In particular, we could not tell where exactly our CMRA requirements of non-expansiveness and the extension axiom appear in their model.
However, one thing became clear: They do have a separation algebra on what Santiago called a "juicy heap", which is a heap that can store either values or assertions (so it combines our physical state and invariant maps). Separation algebras are relational, so they had no trouble with equality not being decidable. (And Santiago confirmed my belief that working with this relational composition is a *huge* pain in many other places.) However, they also did not have to go to the lengths of having a limit construction corresponding to what our agreement CMRA does. This is not entirely surprising: we know we do not actually need these limits, and in their approach, all these proof obligations come up and are discharged much more "on demand", and much later. (For example, they have to be really, really careful when defining the meaning of what corresponds to our invariant assertion. If they screw up, they will only notice much later, when some proof rule does not work out. On the other hand, all we have to check is non-expansiveness of the definition.)
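To make the contrast concrete, here is a toy Python model (neither system's actual definitions; heaps are just dicts) of functional versus relational heap composition:

```python
# Illustrative sketch: functional vs relational heap composition.
# Iris-style: composition is a partial *function* (defined or not).
# VST-style separation algebras: a three-place *relation* join(a, b, c),
# which never has to decide equality but forces existential reasoning
# ("there exists a c such that join(a, b, c)") throughout.

def op(h1, h2):
    """Functional composition: defined iff the domains are disjoint."""
    if h1.keys() & h2.keys():
        return None                     # composition undefined
    return {**h1, **h2}

def join(h1, h2, h3):
    """Relational composition: holds iff h3 is a disjoint union of h1, h2."""
    return not (h1.keys() & h2.keys()) and h3 == {**h1, **h2}

a, b = {"x": 1}, {"y": 2}
assert op(a, b) == {"x": 1, "y": 2}     # the function hands you the result
assert join(a, b, {"x": 1, "y": 2})     # the relation only lets you assert it
assert op(a, {"x": 3}) is None          # overlapping domains: undefined
```

The functional version gives you a term to compute with; the relational one only lets you assert that some composition exists, which is where the existential bookkeeping Santiago described as painful comes from.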
It would be nice if we would not have to unnecessarily complicate our model with this limit construction. That would also give us a better position when comparing with Appel's stuff ;-)
Cc @robbertkrebbers
I guess this is blocked on "sorting the canonical structure mess" ;-)

# Issue #20: wp_store/wp_load run slowly with multiple points-to assumptions in context
(Joseph Tassarotti, 2016-06-20)
The wp_store and wp_load tactics appear to be very sensitive to the order of points-to assumptions in the context. Consider the following snippet, placed after heap_e_spec in tests/heap_lang.v:
```coq
Definition heap_e2 : expr [] :=
let: "x" := ref #1 in
let: "y" := ref #1 in
'"x" <- !'"x" + #1 ;; !'"x".
Lemma heap_e2_spec E N :
nclose N ⊆ E → heap_ctx N ⊢ WP heap_e2 @ E {{ v, v = #2 }}.
Proof.
iIntros {HN} "#?". rewrite /heap_e2. iApply (wp_mask_weaken N); first done.
wp_alloc l. wp_let.
wp_alloc l'. wp_let.
time wp_load.
Abort.
```
The wp_load takes 109 seconds to complete on my machine. The problem appears to be in the call to iAssumptionCore in wp_load, which tries to find an assumption `l ↦ ?v` in the context. Because of the order of assumptions, it will first try to unify with `l' ↦ #1`. For some reason, this does not fail instantaneously. You can check that this is the source of the delay with the following example:
```coq
Remark bad_unify (l l': loc): True.
Proof.
  (evar (v: Z);
   let v0 := eval unfold v in v in
   unify (l ↦ #v0)%I (l' ↦ #1)%I).
Abort.
```
The obvious work-around is to just re-order the two points-to facts in the context, or to use "remember" to "hide" one of them temporarily, but this is obviously brittle. I am not an expert on how unification works under the hood, so it is hard for me to diagnose why it is so slow here.
First, does this occur for others? I am using coq 8.5pl1 and ssreflect ad273277ab38b. If so, is there some more robust work-around?