Sorry for the late reply. Yes, I agree: the general problem is that of dealing with finer behavioral differences that depend on the schedule. I understand that, in a sense, the notion of job is meant to abstract as much as possible over such differences. Nonetheless, this abstraction may have a different impact in the two cases we are considering. In the case of cost, it seems natural to approximate the behavior by taking the maximum. In the case of critical sections, the problem may be more than their length: it may also be their placement and even their number. There, it seems harder to think of a reasonable approximation.
I think there are two aspects that can be distinguished in my proposal, anyway. One is the extension of the processor-state class. The other is taking non-determinism into account, specifically when jobs belong to a task with known program code, and that code has more than one possible control-flow branch.
Concerning the latter, I had in mind simple examples, such as the following. Let T be a periodic task with deadlines longer than the period. The task code contains 5 critical sections in total, and it allows for 3 possible branches, each associated with a subset of the sections, say:
branch 1 - sections 1, 2, 3
branch 2 - sections 4, 3
branch 3 - sections 1, 5
Therefore a job can meet at most 3 sections. Each section is specified in terms of the received service. Jobs are non-preemptive, except at the start of each section.
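To make the example concrete, here is a minimal sketch (all names are hypothetical, chosen just for this message) that encodes the branches as subsets of section identifiers and checks the "at most 3 sections per job" bound:

```python
# Hypothetical encoding of the example task: 5 critical sections in total,
# 3 possible control-flow branches, each executing a subset of the sections.
critical_sections = {1, 2, 3, 4, 5}

branches = [
    {1, 2, 3},  # branch 1
    {4, 3},     # branch 2
    {1, 5},     # branch 3
]

# A job follows exactly one branch, so the number of sections it can meet
# is bounded by the size of the largest subset.
max_sections_per_job = max(len(b) for b in branches)
print(max_sections_per_job)  # prints 3
```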
My proposal is essentially to take the branching explicitly into account, i.e., by associating the task with a list of subsets. An alternative could be to treat the three branches as jobs of three different tasks (but those tasks would not, in general, be periodic). Another possibility would be to abstract away from the differences between the branches, but this seems difficult to me, given the preemption points. Yet another possibility might be to ignore the preemption points, assume jobs can become blocked (either in the suspend-resume or in the spin-blocking sense), and charge an additional worst-case cost to account for the blocking. This could superficially simplify the problem, especially if jobs tend to run on different cores and most of the interference comes from concurrent requests. But the problem would reappear as before as soon as we tried a finer-grained analysis of the blocking.
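To show what I mean by "associating the task with a list of subsets", here is a rough sketch of the data one would attach to a task under my proposal, together with the coarse alternative of charging a single worst-case blocking term. Everything here (the record fields, the per-section bound) is an assumption made for illustration, not a fixed design:

```python
from dataclasses import dataclass

@dataclass
class Section:
    start: int   # received service at which the section begins (a preemption point)
    length: int  # service consumed inside the section

@dataclass
class Task:
    period: int
    deadline: int                  # may exceed the period
    branches: list[list[Section]]  # one entry per control-flow branch

# The coarse alternative: forget placement and number per branch, and just
# charge one worst-case blocking term per section a job might meet.
def worst_case_blocking(task: Task, per_section_bound: int) -> int:
    most_sections = max(len(b) for b in task.branches)
    return most_sections * per_section_bound
```

For the example task above, with a hypothetical per-section blocking bound of 2 service units, this would charge 3 * 2 = 6 units per job, regardless of which branch the job actually takes. That regardless is exactly where the precision is lost, and why the finer branch-aware analysis would be needed anyway.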