Avoid using specific `G` classes for locks, always use the projection `lockG` of `lock`.

Open Robbert Krebbers requested to merge robbert/lock_G_Σ into master
4 unresolved threads

This MR is a follow-up to !979 (merged).

Always preferring the projections mirrors the way we already use the projections `acquire`/`release`/etc. of `lock`.

This MR also adjusts the `subG` instance to provide `lockG spin_lock` instead of `spin_lockG`, and tests that these conditions are resolved automatically when using adequacy.

TODO: I have only done this for the spin lock; the same remains to be done for the ticket lock, RW lock, and atomic heap.
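For readers unfamiliar with the pattern, a minimal before/after sketch of the intended style (the client sections below are illustrative, not code from this MR; `spin_lockG`, `lockG`, and `spin_lock` are the names this MR is about):

```coq
(* Before: a client fixes the lock-specific ghost-state class. *)
Section client_before.
  Context `{!heapGS Σ, !spin_lockG Σ}.
  (* ... client code and specs ... *)
End client_before.

(* After: a client only mentions the projection [lockG] of the
   generic [lock] interface, instantiated with a concrete lock. *)
Section client_after.
  Context `{!heapGS Σ, !lockG spin_lock Σ}.
  (* ... client code and specs ... *)
End client_after.
```

The payoff is uniformity: just as clients write `acquire`/`release` rather than `spin_lock.acquire`, they state their ghost-state assumption via the one projection `lockG`, so swapping the lock implementation only changes the instantiating record.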

101 97 lock.newlock_spec _ _ _ _ := newlock_spec;
102 98 lock.acquire_spec _ _ _ _ := acquire_spec;
103 99 lock.release_spec _ _ _ _ := release_spec |}.
100
101 Definition spin_lockΣ : gFunctors := #[GFunctor (exclR unitO)].
102
103 Global Instance subG_spin_lockΣ {Σ} : subG spin_lockΣ Σ → @lockG spin_lock Σ.
  • This is exactly what I wanted to avoid -- now everyone declaring a lock instance has to also declare such an instance. I think that's worse than the status quo.

  • I am not sure I understand; this instance was already there. It merely got moved to the bottom of the file.

  • That's completely breaking our usual pattern for how to work with the global ghost state.

    Also this won't work as nicely for primitive_atomic_heap where there currently simply is no instance, but you will have to add one.

  • In particular if/when we finally have a macro to declare the spin_lockG and spin_lockΣ and subG_spin_lockΣ stuff, then I'm not sure how that would work with your approach.

  • Ralf Jung @jung started a thread on the diff
  • 538 eauto.
    537 wp_smart_apply (release_spec with "[$Hlock $Hlocked Hloc]"); by eauto.
    539 538 Qed.
    540 End lock.
    541 Section spin_lock.
    542 Local Existing Instance spin_lock.
    539 End lock_gen.
    543 540
    544 Definition spin_lock_client : val :=
    545 λ: "loc" "lock",
    546 acquire "lock";;
    547 "loc" <- #42;;
    548 release "lock".
    541 (** Make sure the lock type class works to write clients and
    542 specifications for specific locks (here: spin lock). *)
    543 Section lock_gen.
  • Ralf Jung @jung started a thread on the diff
  • 559 wp_smart_apply (acquire_spec with "Hislock") as "[Hlocked [%v Hloc]]".
    556 unfold lock_client_spin. wp_alloc l as "Hl".
    557 wp_smart_apply (newlock_spec (∃ n : Z, l ↦ #n) with "[Hl]")
    558 as (lk γ) "#Hlock"; first by eauto.
    559 wp_smart_apply (acquire_spec with "Hlock") as "(Hlocked & %v & Hloc)".
    560 560 wp_store.
    561 wp_smart_apply (release_spec with "[$Hlocked Hloc]").
    562 { iFrame "Hislock". eauto. }
    563 eauto.
    561 wp_smart_apply (release_spec with "[$Hlock $Hlocked Hloc]"); by eauto.
    562 Qed.
    563 End lock_gen.
    564
    565 (** Making sure the [lockG] conditions are resolved when using adequacy. For
    566 the generic client, we need to instantiate it with a specific lock for that to
    567 make sense. *)
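The adequacy test alluded to here can be sketched roughly as follows (an illustrative sketch under assumed names such as `client`; it is not the exact code in the MR):

```coq
(* With [subG_spin_lockΣ] registered as an instance, composing
   [spin_lockΣ] into the global functors makes the
   [lockG spin_lock Σ] assumption of the client spec solvable by
   type-class search. *)
Definition clientΣ : gFunctors := #[heapΣ; spin_lockΣ].

Lemma client_adequate σ :
  adequate NotStuck (client #()) σ (λ _ _, True).
Proof.
  apply (heap_adequacy clientΣ).
  (* The [lockG spin_lock Σ] side condition is discharged
     automatically via [subG_spin_lockΣ]. *)
Abort. (* sketch only *)
```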
  • Ralf Jung @jung started a thread on commit fa555c7b
  • 305 305 Tactic failure: wp_cmpxchg_suc: cannot find 'CmpXchg' in
    306 306 (#() #()).
    307 "wp_spin_lock_client"
    308 : string
    309 1 goal
    310
    311 Σ : gFunctors
    312 heapGS0 : heapGS Σ
    313 spin_lockG0 : spin_lockG Σ
    314 loc : locations.loc
    315 lock : val
    316 γ : lock_name
    317 ============================
    318 "Hislock" : is_lock γ lock (∃ v : val, loc ↦ v)
    319 --------------------------------------□
    320 WP let: "lock" := lock in acquire "lock";; #loc <- #42;; release "lock"
  • Ralf Jung mentioned in merge request !981 (merged)
