Non-Monotonic Refinement
The properties of a realistic artefact are usually a compromise between what one ideally wanted and what one was able to realise. Often, one did not even really understand what one wanted. Often, one did not fully understand what was possible. Often, →specifications and →blueprints did not accompany the development process. Sometimes preliminary realisations – "rapid prototypes" – were adapted until the developer intuitively felt that the design was correct. If a correctness theorem is provable for such a system, it will first have to be discovered.
Unlike in Monotonic Refinement, the decisions taken in such a chaotic development process are not limited to design decisions. Many decisions are adaptations of the original requirement.
The quality of such a realistic development process can be improved if the seemingly chaotic process is guided by the canonical form of the Correctness Theorem. And if it has not been guided by it, it can be re-engineered as if it were. This helps...
- to choose the right abstractions for a formal model that can be used for verification,
- to define the boundary between the artefact and its environment,
- to find a meaningful →decomposition,
- not to mix up levels,
- to take design decisions not before they are really understood,
- to define interfaces when it is understood why they are necessary rather than in the beginning of the process,
- to document all decisions in a logical place,
- to understand the impact of design decisions on the overall properties.
What we want to have in the end is a →model satisfying the quality criteria for models: truthfulness, compactness, understandability and traceability.
Introduction
The method starts from a vague idea about desired properties and derives a realisable blueprint together with a realistic specification. The method can be used in these cases:
- There is an idea of desired properties, but no artefact is known that has these properties. In this case the main result will be a blueprint of a new artefact.
- We have an artefact without blueprint and some idea what its intended properties are (→reverse engineering). In this case the main result will be a specification of the artefact.
- We have an incomplete artefact (for example a plant without control program) and an idea of the desired properties. In this case the main result will be a specification of the missing part.
We will use predicate logic as a specification and blueprint language. The disadvantage that specifications are cumbersome to write and read is compensated by the advantage that we can see the essence and that we do not have to fix things like mathematical spaces, interfaces, etc. before we understand them.
Goal
For a given, more or less vague idea of the purpose or desired properties of a not necessarily existing artefact, find:
- a formula A⇒C where C is an acceptable specification of the goal and A a set of acceptable assumptions about the environment,
- a finite, hopefully small number of formal specifications ai⇒ci, for each of which a realisation can be obtained (viz. a →decomposition into realisable parts),
- a formalisation N of some domain properties of the environment,
- such that N ∧ (∀ i. ai⇒ci) ⇒ (A⇒C) can be proved.
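For a finite propositional instance, the proof obligation N ∧ (∀ i. ai⇒ci) ⇒ (A⇒C) can be checked by exhaustive enumeration. The following sketch does this for a hypothetical heater example (the propositions, N, the single part specification and the goal are all invented for illustration, not taken from the source):

```python
from itertools import product

def implies(p, q):
    """Material implication: p => q."""
    return (not p) or q

def check_correctness_theorem():
    """Check N ∧ (a1⇒c1) ⇒ (A⇒C) in every state of a tiny instance.

    Hypothetical instance:
      N        : domain knowledge     heater_on => warm
      a1 => c1 : part specification   power => heater_on
      A  => C  : overall goal         power => warm
    """
    for power, heater_on, warm in product([False, True], repeat=3):
        N = implies(heater_on, warm)
        part = implies(power, heater_on)
        goal = implies(power, warm)
        if not implies(N and part, goal):
            return False   # a counterexample state exists
    return True

print(check_correctness_theorem())  # True: the decomposition is correct
```

For realistic specifications the state space is not finite and a theorem prover or model checker takes the place of this brute-force loop, but the shape of the obligation is the same.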
The method
The theoretical foundation, explanation and examples can be found in System Design as a Creative Mathematical Activity (Mader, Wupper). Here is a summary of the method, instantiated for case 3, where the specification of the control for an existing plant has to be derived.
After the preliminary choices of a →view and of the languages and tools to be used for verification, the verification model can be constructed. The method proposed here is not meant to replace existing methods and tools; rather, it should be applied in conjunction with whatever method you like to use. The purpose of our method is to break down the arguments used for the construction of the verification model into such small and understandable, logically ordered pieces that these together give confidence in the truthfulness of the exercise. Our method is not confined to a specific formalism or tool, but is based on the following assumption:
Whichever languages, methods and tools are used for formal verification, the essence of the formal part of the verification of →embedded systems consists in finding out, by means of computer support, whether P, X |= S, where S = (A→C), the specification of the goal, is a formula defining a desired property of the artefact under discussion; X is a specification of the control program; and P = (a1→c1, a2→c2, ..., an→cn) is a set of formulae specifying something we know about the physical parts of the artefact that are to be controlled. A formula, however, cannot directly define a physical phenomenon. The semantics of a formula defines a mathematical object, which hopefully corresponds somehow to the intended physical phenomenon. We reserve the term specification of a physical phenomenon for a formula that defines a truthful mathematical model of that phenomenon. Finding a verification model for an embedded system means finding the specifications P, X and S. Many valuable papers on formal verification start by presenting some P, X and S small and readable enough to be useful. Here, we are interested in the decisions that went into finding them. These decisions should be documented together with each step.
This is the method step by step. We will use a washing machine as a running example. We hope that the reader sees how the informal specifications we suggest can be formalised in her favourite language.
a1→c1, a2→c2, ... , an→cn |= An→Cn
Only when we have obtained this complete picture do we start with the difficult part: conflicting resources. The specification of one of the processes in our example requires that the water stays in the tumbler for a certain period, while the specification of another requires it to be pumped out while no new water streams in. A physical tumbler cannot implement both specifications unless it has valves that can be opened and closed. Likewise, engines and pumps must have switches to activate and deactivate them.
From these adapted specifications, An→Cn can no longer be proved. This is where the control program comes in:
a*1→c1, a*2→c2, ... , a*n→cn, X |= An→Cn
This can be difficult, but it brings us into a well-understood problem area: scheduling. The solution may require further weakening of An→Cn, as the overall process takes more time because sub-processes cannot be executed in parallel.
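A minimal sketch of what such a scheduling control X might produce, assuming a hypothetical discrete-time washing machine with an inlet and an outlet valve (the phase names, durations and tick granularity are all invented for illustration):

```python
def control_schedule():
    """Serialize the two conflicting sub-processes on the shared tumbler.

    'Soak' needs the water held for 3 ticks; 'drain' needs 2 ticks with
    the inlet closed. The control resolves the conflict by ordering them.
    """
    schedule = []
    # Phase 1: soak -- fill briefly, then hold the water (both valves closed).
    schedule.append({"inlet": "open", "outlet": "closed"})      # fill
    schedule += [{"inlet": "closed", "outlet": "closed"}] * 3   # hold water
    # Phase 2: drain -- outlet open while the inlet stays closed.
    schedule += [{"inlet": "closed", "outlet": "open"}] * 2     # pump out

    # Safety condition of the shared resource: never both valves open.
    assert all(not (t["inlet"] == "open" and t["outlet"] == "open")
               for t in schedule)
    return schedule

print(len(control_schedule()))  # 6
```

The serialized process takes 6 ticks, longer than either sub-process alone; this is exactly the kind of extra duration that forces the weakening of An→Cn mentioned above.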
Example
Here is a nearly trivial example of a formula that occurs and is non-monotonically adapted during the method.
Assume that a part – let us call it Sf – in the system under development has to compute a function f. The first version of the specification of the behaviour of that part is:
DEF Sf := ∀t: TIME. ∀v:ℜ. o(t, input, v) ⇒ o(t, output, f(v))
Here, input and output are the interface points of that part. Predicate o states that at a certain moment (here: t), at a certain interface point, a certain value occurs and can be observed. When we want to implement this part as a computer running a program that uses machine arithmetic, it can only handle input in the range [minreal..maxreal], the result will be inaccurate, and the computer will sample its input with a period δ. So we cannot really implement specification Sf. We can only implement something like:
DEF Pf := ∀t: N*δ. ∀v:[minreal..maxreal]. o(t, input, v) ⇒ (∃g:[f(v)-ε..f(v)+ε]. o(t+δ, output, g))
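A sketch of what an implementation satisfying something like Pf could look like. The concrete values of δ, ε and [minreal..maxreal], and the choice of sin for f, are assumptions made purely for illustration:

```python
import math

DELTA = 0.01                  # sampling period δ (assumed)
EPS = 1e-6                    # arithmetic error bound ε (assumed)
MINREAL, MAXREAL = -1e6, 1e6  # representable input range (assumed)

def f(v):
    """The ideal function; sin stands in for f here."""
    return math.sin(v)

def realistic_part(t, v):
    """Realistic counterpart of Sf: bounded domain, rounded result,
    and output one sampling period later than the input."""
    if not (MINREAL <= v <= MAXREAL):
        raise ValueError("input outside [minreal..maxreal]")
    g = round(f(v), 6)        # machine arithmetic: rounding error <= EPS
    return (t + DELTA, g)     # output occurs DELTA seconds late

# Check the Pf obligation at one sample point:
t_out, g = realistic_part(0.0, 1.0)
assert t_out == 0.0 + DELTA           # delayed by exactly one period
assert abs(g - f(1.0)) <= EPS         # within epsilon of the ideal f(v)
```

The two asserts are the two conjuncts of Pf at a single observation; a verification tool would have to discharge them for all t and all v in the bounded domain.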
If Pf occurs at the left hand side of a Correctness Theorem we will have to weaken the right hand side correspondingly.
This example shows that the ideal part Sf and the realistic part Pf differ in three respects:
- Computations consume time.
  - In this case, the output occurs δ seconds too late w.r.t. the ideal specification Sf.
- Computations are not precise in time and value.
  - In this case, the input can be δ seconds too old, and the arithmetic introduces an error of ε.
- A physical part can never handle arbitrary, unbounded quantities.
  - In this case, the domain is [minreal..maxreal]. This is not limited to computers! The same problem would occur when we implement a reactor in a chemical plant: it can only contain a certain amount of substance.
The first and second can be neglected under certain circumstances: for simple artefacts (without feedback and without computational loops) one can hope that delays and rounding errors do not accumulate, and control theory can show that they can indeed be neglected. One could also use a specification formalism (think of interval arithmetic) that keeps track of possible accumulations and gives an error message when the effect cannot be neglected.
The third can never be neglected. Every physical object has a limited capacity. There is no artefact that can handle unbounded quantities. So even if we decide to neglect imprecisions, we will end up with a specification like:
DEF Qf := ∀t: TIME. ∀v:[minreal..maxreal]. o(t, input, v) ⇒ o(t, output, f(v))
If this specification occurs on the left hand side of the correctness theorem for an aeroplane, it can turn out that we will have to adapt the right hand side, the overall specification. We may, for example, have to assume that air turbulence is bounded.
Even the best, well-designed coffee machine has a tank of limited capacity. I can replace it by a bigger machine or by a whole coffee factory, but that will not remove limitations altogether. So we had better learn to treat them correctly.
Due to inherent limitations of physical parts, quantifiers in ideal specifications will have to be replaced by bounded quantifiers as soon as we decide which real part to use in an artefact. These bounds will have repercussions on other parts and on the overall performance.
Every engineer outside computer science knows this. But textbooks and courses on programming and software engineering traditionally ignore this issue and give students the idea that computers can safely be treated as unlimited. (Have we not all written recursive or iterative programs for the computation of factorials? How many factorials fit in an INTEGER word? About twenty? Why write a program where a short table would be smaller and much quicker?) When a rocket then explodes because a computer program cannot handle a big value, newspapers write about "programming errors", while the programmers have done their job as taught. Le Lann pointed out that we had better speak of systems engineering errors: the components of a system simply do not "fit", because some have the wrong capacity.
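The factorial remark can be made concrete: in a 64-bit signed INTEGER word only the first 21 factorials (0! to 20!) are representable, so a short table indeed suffices. A Python sketch (the word sizes are the usual two's-complement bounds; the function and table names are invented):

```python
INT32_MAX = 2**31 - 1   # largest value in a 32-bit signed word
INT64_MAX = 2**63 - 1   # largest value in a 64-bit signed word

def representable_factorials(limit):
    """Return all factorials that fit in a word of the given capacity."""
    table, fact, n = [], 1, 0
    while fact <= limit:
        table.append(fact)   # table[n] == n!
        n += 1
        fact *= n
    return table

FACT_TABLE = representable_factorials(INT64_MAX)
print(len(FACT_TABLE))   # 21 entries: 0! .. 20!  (21! overflows)

def factorial(n):
    # The bounded quantifier made explicit: defined only for n in [0..20].
    return FACT_TABLE[n]   # raises IndexError outside the domain
```

The table makes the bound an explicit part of the interface, whereas the recursive program silently promises more than the hardware can keep; that mismatch is precisely the "wrong capacity" systems engineering error mentioned above.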
Case studies
This is a link to a report on a case study which follows this method: