
New atomic shifts

Closed Ghost User requested to merge greatest_fix into master
1 unresolved thread

New version of atomic WP that doesn't require any laters.

Also includes the previous PR that updates the Rust version and brings the old pcas_spec up to date (hopefully, the new one fulfills the same intent).

Edited by Ralf Jung

Merge request reports

Approval is optional

Closed by Ralf Jung 6 years ago (Jun 18, 2018 8:57pm UTC)

Merge details

  • The changes were not merged into master.

Activity

  • Ghost User added 1 commit

    • a934edb7 - Clean up atomic sync proof.

  • Ghost User added 1 commit

  • Ghost User changed the description

  • This is nice, but there is one odd thing. The current version only works because we are using a shallow embedding of the logic; it would not work in a deep embedding. The problem is that you have an existential quantifier that ranges over a turnstile:

     ∃ P (_ : ∀ x, P x ⊢ F P x)

    This you cannot write down in the deep embedding (a.k.a. the pen-and-paper version of the Iris logic), because the turnstile is only available at the meta level, and not in the logic.

    To figure out how this stuff actually works, I tried to make a version that works entirely in the logic; see below. The advantage here is also that Fix_coind is entirely in the logic.

    Definition mono {M A} (F : (A → uPred M) → (A → uPred M)) :=
      ∀ P Q, ((□ ∀ x, P x -∗ Q x) -∗ ∀ x, F P x -∗ F Q x)%I.

    Definition Fix {M A} (F : (A → uPred M) → (A → uPred M)) (x : A) : uPred M :=
      (∃ P, □ (∀ x, P x -∗ F P x) ∧ P x)%I.

    Section Fix.
      Context {M : ucmraT} {A} (F : (A → uPred M) → (A → uPred M)) (Hmono : mono F).

      Lemma Fix_implies_F_Fix x : Fix F x ⊢ F (Fix F) x.
      Proof.
        iDestruct 1 as (P) "[#Hincl HP]".
        iApply (Hmono P (Fix F)).
        - iAlways. iIntros (y) "Hy". iExists P. by iSplit.
        - by iApply "Hincl".
      Qed.

      Lemma F_Fix_implies_Fix x : F (Fix F) x ⊢ Fix F x.
      Proof.
        iIntros "HF". iExists (F (Fix F)).
        iIntros "{$HF} !#"; iIntros (y) "Hy". iApply (Hmono with "[] Hy").
        iAlways. iIntros (z). by iApply Fix_implies_F_Fix.
      Qed.

      Corollary Fix_unfold x : Fix F x ⊣⊢ F (Fix F) x.
      Proof. apply (anti_symm _); auto using Fix_implies_F_Fix, F_Fix_implies_Fix. Qed.

      Lemma Fix_coind (P : A → uPred M) (x : A) :
        □ (∀ y, P y -∗ F P y) -∗ P x -∗ Fix F x.
      Proof. iIntros "#HP Hx". iExists P. by iIntros "{$Hx} !#". Qed.
    End Fix.
  • Wow, impressive! Having this entire repo ported to the new definition is certainly a very valuable data point. I am however still worried that there will be a problem with the elimination stack. The logically atomic spec for that stack has never been formalized AFAIK, but there is a formalization of a weaker spec for a similar (or the same) stack at http://iris-project.org/pdfs/2017-case-study-concurrent-stacks-with-helping.pdf.

    Strangely enough, the appendix at http://plv.mpi-sws.org/iris/appendix.pdf still requires timeless(P) to solve this problem. I thought we had published an updated version where we instead cleverly used laters in the atomic triples, but it seems I was wrong. Here's the latest on-paper definition:

    [Screenshot attachment: Screenshot_20170818_142909]

    Notice the later in the triples, and no timeless(P) in the shifts.

    Here's the proof of the stack: stack-proof.pdf

    The interesting step is on the last page, after let b = cas(o.state, 0, 1) in. However, it seems to me like the physical step provided by the CAS is actually sufficient here to strip away the later we obtain from opening the invariant, and hence the new atomic triples suggested here should work.

    I wonder if it's worth spending the week or so that it would take to formalize the elimination stack?
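
    (To spell out why the physical step helps, roughly: the lifting lemma for a primitive step only requires the proof obligation after the step under a ▷, so the ▷ P obtained from opening the invariant around the CAS can be used via monotonicity of the later,

        P ∗ R ⊢ Q   implies   ▷ P ∗ ▷ R ⊢ ▷ Q,

    and that outer ▷ is exactly what the step discharges.)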

  • @robbertkrebbers I know about this other coind fixpoint.

    The difference is that you are using the internal notion of monotonicity, which is slightly stronger than the external notion of monotonicity. I am not sure this makes any difference in practice. I.e., is there any "useful" monotone predicate transformer that is monotone for the weak notion but not for the strong notion?
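
    For concreteness, the two notions being compared are roughly the following; mono is the internal notion from the snippet above, and mono_ext is a name I am introducing only for this comparison:

        (* Meta-level ("external") monotonicity: a Coq statement about ⊢. *)
        Definition mono_ext {M A} (F : (A → uPred M) → (A → uPred M)) : Prop :=
          ∀ P Q : A → uPred M, (∀ x, P x ⊢ Q x) → ∀ x, F P x ⊢ F Q x.

    mono F implies mono_ext F (from ∀ x, P x ⊢ Q x we get ⊢ □ (∀ x, P x -∗ Q x), to which mono F can be applied), but not conversely, since the hypothesis of mono_ext is only available at the meta level.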

  • @abizjak See the discussion above -- it seems we (finally ;) want to be able to take monotone fixpoints in Iris. The logic is of course powerful enough to encode this, but there is a stronger fixpoint theorem we could have that requires a proof of monotonicity outside of the logic, rather than inside the logic. This all works fine in Coq and lets us use Setoids to prove monotonicity, which is nice. However, it uses a turnstile embedded into the logic, violating non-expansiveness.

    My current inclination is to go with the weaker internal fixpoint, because it's good enough and the conservative choice. Still, I'd like to hear your opinion on just how "bad" it would be to use the external fixpoint.

  • Ralf Jung @jung started a thread on the diff

        How much more annoying? And how much to we gain in terms of things
        becomign easier to prove? *)
        Definition synced R (f f': val) :=
    -     (∀ P Q (x: val), {{ R ∗ P }} f x {{ v, R ∗ Q v }} →
    -        {{ P }} f' x {{ Q }})%I.
    +     (∀ Φ (x: val), (R -∗ WP f x {{ v, R ∗ Φ v }}) -∗
    +        WP f' x {{ Φ }})%I.
  • This is significantly changing the spec. With your definition, f' may only call f once. I guess that makes sense, but still this makes mk_syncer stronger, i.e., it makes it potentially harder to prove sth. to be a syncer. OTOH the proofs in this repo all still work, so I guess we are good.

  • Author Contributor

    Yes, this was the change I referenced earlier, and I discussed it with Janno and JH before making it. I believe it is justifiable (and even desirable). And, as you note, existing actual syncer definitions all still work.

  • Ralf Jung resolved all discussions

  • One argument in favour of the "external" fixpoints is that it is easier to prove external monotonicity than internal monotonicity, because we have some automation (solve_proper, f_equiv).

    One argument against the "external" fixpoints is that the fixpoint combinator may not be non-expansive (i.e., close predicate transformers should give rise to close fixpoints).

    An important observation: the coinduction principles are different. More precisely, the one we get externally seems weaker. However, when the predicate transformer is internally monotone, it turns out the two coinduction principles are equivalent (both are spelled out below).

    This leads me to a question: in the case the predicate transformer is internally monotone, is it the case that the two fixpoints coincide? The fact that the coinduction principles are equivalent seems to indicate that the fixpoints do coincide. If it is indeed true, then I think I would be in favour of using the external fixpoint, because of the additional generality and of the tactical support.
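
    To make the comparison concrete, the two coinduction principles are roughly the following (Fix_coind is the internal one from the snippet above; gfp_ext is just a name for whatever we call the external fixpoint):

        (external)  if  ∀ y, P y ⊢ F P y  holds at the meta level, then  ∀ x, P x ⊢ gfp_ext F x
        (internal)  □ (∀ y, P y -∗ F P y) -∗ P x -∗ Fix F x

    The internal hypothesis is a single (persistent) Iris proposition, so it can be established under whatever assumptions are around in an ongoing proof; the external one has to be a closed meta-level entailment, which is why the external principle looks weaker.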

  • This leads me to a question: in the case the predicate transformer is internally monotone, is it the case that the two fixpoints coincide? The fact that the coinduction principles are equivalent seems to indicate that the fixpoints do coincide. If it is indeed true, then I think I would be in favour of using the external fixpoint, because of the additional generality and of the tactical support.

    Isn't it the case that for (internal) monotone functions, the greatest fixpoint is unique (and same for the least fixpoint, though of course the two can be different)?

  • Yes, this should be the case, and this is why I am asking the question. But there are enough subtle details here that only a Coq proof would convince me that they are indeed equivalent ;).
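
    For reference, here is the argument one would try to formalize (a sketch only, assuming internal monotonicity and writing gfp_ext for the external fixpoint; it relies on the unfolding lemmas and the two coinduction principles, which is exactly where the subtle details live):

        Fix F y ⊢ F (Fix F) y  for every y (unfolding), so the external coinduction principle gives  Fix F x ⊢ gfp_ext F x;
        gfp_ext F y ⊢ F (gfp_ext F) y  for every y, hence  ⊢ □ (∀ y, gfp_ext F y -∗ F (gfp_ext F) y),  so Fix_coind gives  gfp_ext F x ⊢ Fix F x.

    Internal monotonicity enters twice: it implies the external monotonicity needed for gfp_ext to unfold, and it is what Fix_implies_F_Fix uses.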

  • Let me summarize the discussion thus far, so that you can point out if I missed something.

    There are two ways to define greatest fixed points, the "external" and "internal" ones. The internal one is the Knaster-Tarski fixed point theorem, whereas the external one is somewhat strange, and partly stated and proved at the meta-level.

    The only difference (in the end) is in the assumption: the external one requires a (technically) weaker notion of monotonicity. However, if the functional F is internally monotone then the two fixed points coincide, which follows quite directly from their respective coinduction principles and the fact that the internal notion of monotonicity implies the external one.

    I cannot comment on the practicality of the two approaches. As Robbert said, the external one does rely on the specific (shallow) embedding of Iris in Coq. If one is to do it on paper then one must mix reasoning at the meta-level with reasoning in the logic. The internal fixed point is the canonical one as far as I am concerned.

    The external construction can be made into a construction inside the logic using an additional "provability" modality (which is something we used before), but that is not well-behaved, so I would advise against adding it. It is not just non-expansiveness that is broken (which I don't think is that much of an issue). The modality does not work well with (term) substitution either.

    Regarding practical examples, it seems to me that if you are constructing the functional F inside the logic then it is going to be monotone provided all occurrences of P in F P are non-negative with regard to implication and wand, which is going to be the same for the external notion of monotonicity. Presumably some infrastructure could be made to simplify proofs of monotonicity in such common cases.
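
    To illustrate the common case, here is a hypothetical functional in which P occurs only positively (under ∗ and ▷); F_ex and the lemma are only for illustration, and external monotonicity is a one-liner:

        Definition F_ex {M} (Q : nat → uPred M) (P : nat → uPred M) (n : nat) : uPred M :=
          (Q n ∗ ▷ P (S n))%I.

        (* External monotonicity of F_ex: the positive occurrence of P propagates directly. *)
        Lemma F_ex_mono_ext {M} (Q P P' : nat → uPred M) :
          (∀ n, P n ⊢ P' n) → ∀ n, F_ex Q P n ⊢ F_ex Q P' n.
        Proof. intros HP n. unfold F_ex. iIntros "[$ HP]". iNext. by iApply HP. Qed.

    The internal version should go through the same way, just with the steps wrapped under the □ and the wands; this is the kind of goal the proposed infrastructure would discharge automatically.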

  • The internal one is the Knaster-Tarski fixed point theorem, whereas the external one is somewhat strange, and partly stated and proved at the meta-level.

    Actually, they are both the Knaster-Tarski fixed point theorem. The difference stems from which lattice you are using: in the internal fixed point, we are considering the lattice of propositions as internal objects of the logic, while in the external fixed point, we are considering the lattice of (the interpretations of) the propositions in the meta-logic.

    It is not just non-expansiveness that is broken (which I don't think is that much of an issue). The modality does not work well with (term) substitution either.

    Isn't it the same problem, actually? You can perform substitutions only in non-expansive objects, no?

  • To sum this up, it seems @robbertkrebbers, @abizjak and I prefer using the internal fixed point, mostly to be "conservative" wrt. this actually working in the on-paper theory, while @jjourdan (and presumably @pythonsq) prefers the external fixed point because it is easier to use.

  • Ralf Jung changed title from Greatest fix to New atomic shifts

  • I submitted https://gitlab.mpi-sws.org/FP/iris-coq/merge_requests/60 to get a fixpoint combinator into Iris. Once that has landed, this PR should be updated to use that combinator rather than proving it inline.

  • Summarizing offline discussion of the current status:

    • This should be updated to use the fixpoints from https://gitlab.mpi-sws.org/FP/iris-coq/merge_requests/60 which has just landed.
    • We should figure out why we had the timeless condition in the Iris 1.0 paper (which later developed into the \later in the definition of atomic triples). One interesting benchmark is the elimination stack; @pythonsq expressed interest in trying to get that one formalized in Coq. I will dig commit logs and other sources to try and find out when and why we added this condition in the first place.
  • I did some digging to try and find out the origin of the timeless requirement. It turns out it's not related to the stack at all; it is related to the MCAS library. MCAS is short for "multiple compare-and-swap"; this is an example in the TaDA paper that I used to develop logically atomic shifts for Iris (back in 2014). The library has a single lock mediating access to all locations covered by it, so from an algorithmic perspective, it is all rather trivial. The interesting aspect is giving it a logically atomic spec.

    In an earlier version of that library, I tried to make it possible to remove a location from the MCAS library without acquiring a lock. To this end, I had to prove that the location we are removing is not currently locked, so the function (e.g. write) acquiring the lock had to put the atomic shift it got (providing ownership of the location) into an invariant -- this is helping, effectively. When write releases the lock again, we prove another atomic shift (unlock itself has a logically atomic spec). This atomic shift has to get P out of the helping invariant and then run the linearization point, i.e., run the view shift provided by the client of write. If P is neither timeless nor of the form \later ?, this cannot work -- we cannot get P out of an invariant (getting \later P) and then run a P ==> ... view shift in the same step.

    Later, I decided this was all not worth the hassle and changed this "getting a location out of the MCAS" to just acquire and release the lock. Not very satisfying, but at least it worked. So in the appendix submitted to POPL, as far as I can tell, there was no reason any more to even have P be timeless. This seems odd, given that some of our coauthors were rather concerned about this limitation; either I did not realize back then that timelessness is not actually required any more or I defended timelessness purely based on a spec that we did not use any more. Or there is something I am missing here. ;)
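
    To spell out the gap: opening the helping invariant only yields \later P, while the client's shift at the linearization point consumes P. Timelessness bridges exactly that step:

        timeless(P)  gives  \later P ==> P,  which composes with the client's  P ==> Q  to  \later P ==> Q.

    Without it (and without a \later baked into the shift, or a physical step to strip the \later), there is simply no way to get from \later P to P.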

  • Oh, there's another possible reason: In the stack, if we cannot make use of the fact that CAS takes a step, we also need the \later P. For the Iris 1.0 paper, we set things up such that the underlying machine just has message-passing channels, and then we implemented a heap with a logically atomic spec on top of that. A logically atomic spec for CAS does not let us take a step. So, with the setup from the Iris 1.0 paper, the stack does indeed need the \later P.

    So, the general concern here is that if we use helping and the underlying synchronization primitive just has a logically atomic spec, then we do need things to be of the form \later P. That seems like a pretty severe problem; helping is a core feature and layering logically atomic specs is one of their key properties.

  • Superseded by !7 (merged)

  • closed
