remove proof-term from JobReady type-class
My attempt to merge !351 (closed) in a sequence of atomic commits.
assigned to @kbedarka
added 4 commits

- 2db7f22f...92af0b6f - 2 commits from branch RT-PROOFS:master
- adf6450a - remove proof-term from JobReady type-class
- 61ba0b57 - [to squash]
@bbb, POET is failing, but the rest is ready
> (** Note that we do not explicitly require that a valid schedule satisfies [jobs_must_arrive_to_execute] or [completed_jobs_dont_execute] because these properties are implied by [jobs_must_be_ready_to_execute]. *)

I appreciate the effort you have put into this, thank you.
I'm getting doubts whether this is the right design, though. The whole notion of `valid_schedule` is getting subverted by this. Since `ready` no longer has any relationship to `pending`, it is now possible that jobs that are ready have not yet arrived. Similarly, the fact that completed jobs remain ready (the whole point of this exercise) seems really unnatural and hard to explain to someone who doesn't already know the motivation behind this change.

So I wonder: do we really need this?
If I understand correctly, the only reason to introduce this change is to accommodate completion overhead, where a job remains scheduled after it has completed. Now, completion overhead is something we definitely want to support, but perhaps this is the wrong approach.
Let's rewind a bit: Why should we consider a completed job that causes completion overhead to be "scheduled"? True, the processor isn't available to other jobs, but it is also odd to say that the completed job is scheduled. In reality, it's some part of the system (e.g., the kernel or some runtime environment) executing cleanup and bookkeeping activities.
Why did we want to consider a completed job going through completion overhead to be "scheduled" anyway? To avoid having to give up on work conservation and priority compliance. However, in the meantime, the discussion on that side has evolved to accepting that we'll need weaker notions of conditional work conservation and conditional priority compliance that give us an "escape hatch" for transient conditions during which the invariants are violated in a real system, such as when processing completion overhead.
Which leads me to the question: If we don't consider any job to be scheduled during completion overhead, do we still need this quite invasive patch at all?
In other words, perhaps a better approach is to realize that, while completion overhead is a form of interference (which can be captured as abstract interference), it is not HEP interference (i.e., it is not the case that some higher-or-equal-priority job is scheduled when this kind of interference is incurred).
Yes, that's not unreasonable and was also my first interpretation.
The "it's still scheduled" view corresponds to overhead-accounting by execution-time inflation. In that case, the cost is also increased accordingly, so no complications arise with the typical invariants.
But we are trying to model overheads precisely (without parameter inflation). If we still consider the job scheduled, we end up in this hybrid scenario, resulting in mismatches down the line.
Doesn't it make everything easier to simply say that `scheduled_at` is false for all jobs during job completion?

Ultimately, this change is needed for Kimaya's project; so, if not merging this MR works for Kimaya, I'm okay with it. I'd still like to debate a few points you bring up.
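As a rough sketch of what this alternative would mean formally (hypothetical names, not taken from the MR or the Prosa codebase), the proposal amounts to an invariant along these lines:

```coq
(* Hypothetical sketch: while the system is processing completion
   overhead, the processor is busy with cleanup/bookkeeping, but no
   job counts as scheduled. The predicate [in_completion_overhead]
   is assumed for illustration only. *)
Definition no_job_scheduled_during_completion_overhead
           {Job : JobType} {PState : ProcessorState Job}
           (sched : schedule PState) :=
  forall (j : Job) (t : instant),
    in_completion_overhead sched t ->
    ~~ scheduled_at sched j t.
```

Under such an invariant, completion overhead would show up purely as interference (no job receives service), rather than as service delivered to an already-completed job.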
> The whole notion of `valid_schedule` is getting subverted by this. Since `ready` no longer has any relationship to `pending`, it is now possible that jobs that are ready have not yet arrived.

We can simply add `ready_implies_arrived` to `valid_schedule`.

> Similarly, the fact that completed jobs remain ready (the whole point of this exercise) seems really unnatural and hard to explain to someone who doesn't already know the motivation behind this change.
Just saying "a job might still run for a few cycles after completion due to completion overhead" sounds convincing to me.
> The "it's still scheduled" view corresponds to overhead-accounting by execution-time inflation. In that case, the cost is also increased accordingly, so no complications arise with the typical invariants.
I'd argue that the "it's still scheduled and receives service" view corresponds to overhead-accounting by execution-time inflation. Since we don't do execution-time inflation and use SBFs, it does not have anything to do with parameter inflation techniques.
> But we are trying to model overheads precisely (without parameter inflation). If we still consider the job scheduled, we end up in this hybrid scenario, resulting in mismatches down the line.

I may lack the context, but I'm not quite sure what mismatches you're referring to. If the current document is not outdated, then you still consider a job to be scheduled during some other overheads (i.e., `SelectionOverhead` and `DispatchOverhead`). So regardless of our decision on `CompletionOverhead`, you are still in the hybrid scenario.

(I replied to Kimaya's thread on Mattermost.)
As I said on Mattermost, if we keep going in this direction, rolling the assumption into `valid_schedule` seems most appealing to me. Then it shouldn't explicitly feature in most high-level results, won't get repeated all over the place, and can still be avoided in lower levels where necessary.

From my PoV, the linked document doesn't reflect a final design yet.