In the example depicted at right, if Operational tests were defined at both output flags (OUT1 & OUT2), then (using common cause isolation) the eXpress diagnostics could isolate every possible failure combination to a fault group containing a single failed function.
If function A were to fail by itself, then both Operational tests would fail, and the diagnostics would isolate either to a fault group containing only A (using common cause isolation) or to a fault group containing both A and B (using multiple-failure isolation). The same fault group would be isolated if functions A and B were to malfunction simultaneously.
The eXpress diagnostic engine does not attempt to isolate to a fault group containing all failed items, but rather to the “best” (usually the smallest) fault group containing at least one failed item. Finally, if function B were to fail by itself, then the Operational test at OUT1 would fail, but the test at OUT2 would pass, resulting in the isolation of a fault group containing only B.
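To make the two isolation modes concrete, here is a minimal Python sketch of the set logic described above. The coverage sets mirror the example (the test at OUT1 sees functions A and B; the test at OUT2 sees only A); the names and the set arithmetic are illustrative only, not the actual eXpress isolation algorithm.

```python
# Illustrative only: a simplified stand-in for the eXpress isolation engine.
COVERAGE = {
    "OUT1": {"A", "B"},   # functions upstream of the test at OUT1
    "OUT2": {"A"},        # functions upstream of the test at OUT2
}

def isolate(outcomes: dict[str, bool], single_failure: bool) -> set[str]:
    """Return the fault group implied by a set of test outcomes.

    outcomes maps test name -> True (passed) / False (failed).
    single_failure=True models common cause isolation; False models
    multiple-failure isolation.
    """
    failed = [COVERAGE[t] for t, ok in outcomes.items() if not ok]
    exonerated = set().union(*(COVERAGE[t] for t, ok in outcomes.items() if ok))
    if not failed:
        return set()                             # no failures observed
    if single_failure:
        suspects = set.intersection(*failed)     # one cause explains all failures
    else:
        suspects = set.union(*failed)            # every failed test implicates
    return suspects - exonerated                 # passing tests prove items good

# A fails alone: both tests fail.
assert isolate({"OUT1": False, "OUT2": False}, single_failure=True) == {"A"}
assert isolate({"OUT1": False, "OUT2": False}, single_failure=False) == {"A", "B"}
# B fails alone: OUT1 fails, OUT2 passes.
assert isolate({"OUT1": False, "OUT2": True}, single_failure=True) == {"B"}
```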
Although Operational tests are inherently symmetric, there are nevertheless two situations in which an Operational test is treated asymmetrically. The first, as with all other tests, is when the Operational test is used as a refinement test.
The second situation in which an Operational test may be treated asymmetrically involves object states. If an Operational test requires that one or more objects be in a particular state, then the diagnostics may reach different conclusions when the test fails than when it passes (depending on whether the state control dependencies can be inferred to be good). Outside these two situations, Operational tests are always interpreted symmetrically.
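The following is a rough illustration of one way object states can introduce this asymmetry; it is a simplification under assumed semantics, not the actual eXpress inference rules, and the state-control item SW1 is hypothetical. On a failure, the items controlling the required state join the suspect set unless they can already be inferred good; on a pass, only the covered functions are exonerated.

```python
# Simplified illustration, not the actual eXpress inference rules.
# SW1 (a hypothetical state-control item) is suspect on failure unless
# its dependencies can already be inferred good.

def interpret(coverage: set[str], state_controls: set[str],
              passed: bool, controls_known_good: bool):
    """Return (implicated, exonerated) for one state-dependent test."""
    if passed:
        return set(), set(coverage)                 # pass: exonerate coverage
    if controls_known_good:
        return set(coverage), set()                 # fail: coverage is suspect
    return set(coverage) | set(state_controls), set()  # fail: controls suspect too

print(interpret({"A"}, {"SW1"}, passed=False, controls_known_good=False))
# ({'A', 'SW1'}, set())  -> the state-control item is also suspect
```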
It is important to remember that tests described as Operational tests within eXpress must both detect malfunctions and prove operation for all output functions upstream of the specified test location (the only exception, as we shall see, is when the Operational test is defined for a particular combination of object states). This means that if encapsulation (sequence-independence) is desired for a given Operational test, then that test must be capable not only of detecting any failure within the covered functions but also of proving all of these functions good when it passes.
For many tests, this could lead to highly complex testing procedures. Another option is to allow test procedures to be strategy-specific: tests can be written to prove good only those functions that they must exonerate in a particular diagnostic strategy or test sequence. This is a good option when a substantial number of the functions potentially proven by a test will already have been exonerated by the time the test is performed within a given diagnostic test sequence.
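The sketch below (illustrative names, not an eXpress API) shows why strategy-specific procedures can be simpler: within a fixed test sequence, each procedure only needs to prove good the covered functions that earlier tests have not already exonerated.

```python
# Illustrative sketch: compute what each test in a fixed sequence must
# still prove good, assuming every earlier passing test has already
# exonerated its own coverage. Names reuse the earlier example.

def residual_obligations(sequence: list[tuple[str, set[str]]]) -> dict[str, set[str]]:
    exonerated: set[str] = set()
    obligations: dict[str, set[str]] = {}
    for name, coverage in sequence:
        obligations[name] = coverage - exonerated   # only the not-yet-proven
        exonerated |= coverage                      # a pass proves all coverage
    return obligations

# If OUT2 runs before OUT1, OUT1's procedure need only prove B good:
print(residual_obligations([("OUT2", {"A"}), ("OUT1", {"A", "B"})]))
# {'OUT2': {'A'}, 'OUT1': {'B'}}
```

An encapsulated (sequence-independent) procedure for OUT1, by contrast, would have to prove both A and B good regardless of where it falls in the sequence.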
On the other hand, when a test procedure is customized to a particular diagnostic sequence, it may not be meaningfully deployable within other diagnostic scenarios. The reduction in short-term expenditures during initial test development could be offset by later expenses as test procedures are redeveloped for compatibility with future diagnostic sequences. In short, strategy-specific test procedures can reduce the overall development effort, but they should be used only when the savings in test engineering justify the sacrifice of test reusability.