- Aug 06, 2016
Robbert Krebbers authored
- Aug 05, 2016
Robbert Krebbers authored
Also make those for introduction and elimination more symmetric:
  !%    pure introduction          %        pure elimination
  !#    always introduction        #        always elimination
  !>    later introduction         > pat    timeless later elimination
  !==>  view shift introduction    ==> pat  view shift elimination
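For instance, a minimal sketch of how these patterns might be used (the goal shape and the hypothesis names are illustrative assumptions, not taken from the development):

  (* Assuming a goal of the shape □ ▷ (P -★ Q): introduce the always
     modality, then the later, then name the premise. *)
  iIntros "!# !> HP".
  (* Assuming "H" is a pure hypothesis: eliminate it into the Coq context. *)
  iDestruct "H" as %Hpure.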
Robbert Krebbers authored
This commit features:
- A simpler model. The recursive domain equation no longer involves a triple containing invariants, physical state and ghost state, but just ghost state. Invariants and physical state are encoded using (higher-order) ghost state.
- (Primitive) view shifts are formalized in the logic and all properties about them are proven in the logic instead of the model. Instead, the core logic features only a notion of raw view shifts, which internalizes performing frame-preserving updates.
- A better-behaved notion of mask-changing view shifts. In particular, we no longer have side conditions on transitivity of view shifts, and we have a rule for introduction of mask-changing view shifts |={E1,E2}=> P with E2 ⊆ E1, which allows one to postpone performing a view shift.
- The weakest precondition connective is formalized in the logic using Banach's fixpoint. All properties about the connective are proven in the logic instead of directly in the model.
- Adequacy is proven in the logic and uses a primitive form of adequacy for uPred that only involves raw view shifts and laters.

Some remarks:
- I have removed binary view shifts. I did not see a way to describe all rules of the new mask-changing view shifts using them.
- There is no longer a need for the notion of "frame shifting assertions", and these are thus removed. The rules for Hoare triples are therefore also stated in terms of primitive view shifts.

TODO:
- Maybe rename primitive view shift into something more sensible.
- Figure out a way to deal with closed proofs (see the commented out stuff in tests/heap_lang and tests/barrier_client).
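For example, the introduction rule for mask-changing view shifts mentioned above takes roughly the following shape (a sketch of the usual statement; the exact lemma name and formulation in this commit are not given here):

  E2 ⊆ E1  →  (P ⊢ |={E1,E2}=> |={E2,E1}=> P)

That is, one may shrink the mask from E1 to E2 immediately and postpone the obligation to restore it.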
- Jul 13, 2016
Robbert Krebbers authored
The intro pattern {H} also meant clear (both in ssreflect and in the logic part of the introduction pattern).
- May 31, 2016
Robbert Krebbers authored
Set the precedence of -★ to be the same as that of →. This is a fairly intrusive change, but it at least makes notations more consistent, and often shorter because fewer parentheses are needed. Note that view shifts already had the same precedence as →.
- May 24, 2016
Robbert Krebbers authored
Changes:
- We no longer have a different syntax for specializing a term H : P -★ Q whose premise P or conclusion Q is persistent. There is just one syntax, and the system automatically determines whether either P or Q is persistent.
- While specializing a term, always modalities are automatically stripped. This gets rid of the specialization pattern !.
- Make the syntax of specialization patterns more consistent. The syntax for generating a goal is [goal_spec], where goal_spec is one of the following:
    H1 .. Hn : generate a goal using hypotheses H1 .. Hn
    -H1 .. Hn : generate a goal using all hypotheses but H1 .. Hn
    # : generate a goal for the premise in which all hypotheses can be used. This is only allowed when specializing H : P -★ Q where either P or Q is persistent.
    % : generate a goal for a pure premise.
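For example, with a wand "H" : P -★ Q in context, the goal_spec forms described above could be used as follows (a hedged sketch; the hypothesis names are made up):

  iSpecialize ("H" with "[H1 H2]").   (* prove P using only H1 and H2 *)
  iSpecialize ("H" with "[-H3]").     (* prove P using every hypothesis except H3 *)
  iSpecialize ("H" with "[#]").       (* keep all hypotheses; requires P or Q to be persistent *)
  iSpecialize ("H" with "[%]").       (* P is pure; prove it as a plain Coq goal *)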
- May 07, 2016
Robbert Krebbers authored
- May 02, 2016
Robbert Krebbers authored
iSpecialize and iDestruct. These tactics now all take an iTrm, which is a tuple consisting of: a) a lemma or the name of a hypothesis, b) arguments to instantiate it with, and c) a specialization pattern.
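A hedged sketch of what such an iTrm looks like in practice (the hypothesis and variable names are made up, and the $! notation for supplying arguments is assumed):

  iDestruct ("H" $! x with "[H1]") as "[H2 H3]".
  (* "H"   : a hypothesis (or a lemma)               *)
  (* $! x  : an argument to instantiate it with      *)
  (* [H1]  : specialization pattern for its premise  *)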
- Apr 19, 2016
Robbert Krebbers authored
Robbert Krebbers authored
Ralf Jung authored
Robbert Krebbers authored
Robbert Krebbers authored
Ralf Jung authored
Ralf Jung authored
Robbert Krebbers authored
That way, we do not have useless type annotations of the form "v : language.val heap_lang" cluttering up our goals. Note that we could decide to eta-expand everywhere (as we do for ∀ and ∃), and use the notation "WP e {{ Q }}" for "wp e ⊤ (λ _, Q)".
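To contrast the two styles discussed above (a sketch; the first is the notation as it stands, the second the hypothetical eta-expanded variant):

  WP e {{ Φ }}        (* Φ : val → iProp is applied directly; no binder in the notation *)
  WP e {{ Q }}        (* hypothetical eta-expanded form, standing for wp e ⊤ (λ _, Q) *)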
Ralf Jung authored
- Apr 11, 2016
Robbert Krebbers authored
- Mar 11, 2016
Ralf Jung authored
- Mar 10, 2016
Ralf Jung authored
Ralf Jung authored
Robbert Krebbers authored
Thanks to Amin Timany for the suggestion.
- Mar 07, 2016
Ralf Jung authored
Ralf Jung authored
Add both non-expansive and contractive functors, and bundle them for the general Iris instance as well as the global functor construction. This allows us to move the \later in the user-defined functor to any place we want. In particular, we can now have "\later (iProp -> iProp)" in the ghost CMRA.
- Mar 04, 2016
Ralf Jung authored
- Mar 02, 2016
Robbert Krebbers authored
This cleans up some ad-hoc stuff and prepares for a generalization of saved propositions.
- Feb 25, 2016
Robbert Krebbers authored
Ralf Jung authored
- Feb 19, 2016
Robbert Krebbers authored
Robbert Krebbers authored
* Put level of the triple at 20, so we can write things like ▷ {{ P }} e @ E {{ Φ }} without parentheses.
* Use high levels for P, e and Φ.
* Allow @ E to be omitted in case E = ⊤.
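Concretely, both of the following now parse as intended (a small sketch restating the changes above):

  ▷ {{ P }} e @ E {{ Φ }}      (* no extra parentheses needed around the triple *)
  {{ P }} e {{ Φ }}            (* @ E omitted, so the mask defaults to ⊤ *)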
- Feb 18, 2016
Ralf Jung authored
Robbert Krebbers authored
This avoids ambiguity with P and Q, which we were using before for both uPreds/iProps and indexed uPreds/iProps.
Ralf Jung authored
- Feb 13, 2016
Robbert Krebbers authored
Also, make our redefinition of done more robust under different orders of importing modules.
- Feb 12, 2016
- Feb 11, 2016
Robbert Krebbers authored
Also do some minor cleanup.
- Feb 10, 2016
Ralf Jung authored
Robbert Krebbers authored
It is now slightly below implication. In order to do this, I had to change the notation from P ={E1,E2}=> Q to P >{E1,E2}=> Q, because the prefix ={n}= is already used at level 70 for the distance of the metric.
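A sketch of the two notations involved (assuming the distance notation has the form x ={n}= y, as the level-70 conflict above suggests):

  P >{E1,E2}=> Q      (* the view shift, now just below implication *)
  x ={n}= y           (* the pre-existing level-70 notation for the distance of the metric *)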
- Feb 09, 2016
Robbert Krebbers authored