- “I want a 100% guarantee that the blasting required for this project won’t damage the foundation of my house.”
Let’s face it. When it comes to anything that could threaten our property, our family, or our health, we all want a 100% guarantee. Problem is, the one constant in life is that there are no absolute guarantees.
This rule applies to software as much as to anything else. Just try to create a software system that is both reasonably useful and absolutely dependable. It’s well-nigh impossible. Unfortunately, the same rule also applies to software validation methods: no method is absolutely foolproof. The more complex a software system becomes, the more this rule applies.
A difficult pill to swallow? You bet. But acknowledging it is key to designing a system that successfully achieves functional safety.
Which brings me to Chris Hobbs’ latest paper, “Building Functional Safety into Complex Software Systems, Part II.” For Chris, functional safety must be built into a software system from day one. Moreover, all work should follow from the premise that software always contains faults and that these faults may lead to failures. We must, as a result, include multiple lines of defense when designing a system.
All this begins with the best available expertise and a crystal-clear definition of the system’s dependability requirements — what Chris refers to as “sufficient dependability.” This definition is essential: it not only provides an accurate measure for validating the system’s functional safety, but also eliminates vague (and therefore meaningless) requirements.
We must also follow rigorous standards and practices throughout the system design and development, and implement a comprehensive validation program that includes not only traditional state-based testing at the module level, but also statistical testing and design verification.
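To make the distinction between those two kinds of testing concrete, here is a minimal sketch in Python. The `clamp` function and both test cases are hypothetical illustrations of mine, not taken from Chris’s paper: the first test exercises fixed, known states; the second feeds the module thousands of random inputs and checks an invariant, which gives probabilistic rather than exhaustive confidence.

```python
import random
import unittest

def clamp(value, lo, hi):
    """Hypothetical module under test: restrict value to the range [lo, hi]."""
    return max(lo, min(hi, value))

class ClampTests(unittest.TestCase):
    def test_known_states(self):
        # Traditional state-based testing: fixed inputs, known expected outputs.
        self.assertEqual(clamp(5, 0, 10), 5)
        self.assertEqual(clamp(-3, 0, 10), 0)
        self.assertEqual(clamp(42, 0, 10), 10)

    def test_random_inputs_stay_in_range(self):
        # Statistical testing: many randomized inputs checked against an
        # invariant the module must always satisfy.
        rng = random.Random(1234)  # fixed seed so failures are reproducible
        for _ in range(10_000):
            v = rng.uniform(-1e6, 1e6)
            result = clamp(v, 0.0, 100.0)
            self.assertTrue(0.0 <= result <= 100.0)

if __name__ == "__main__":
    unittest.main()
```

Neither approach alone is sufficient: the state-based test documents precise expected behavior, while the statistical test probes the input space far more broadly than hand-picked cases ever could.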
I’m just scratching the surface of Chris’s paper. For the full story, download the paper here.