Recently, the Baltimore Sun published an article on how the FDA is using static analysis tools to uncover software defects in medical devices. Software defects now account for about 20% of medical-device recalls, so it's no surprise that the FDA has become interested in tools that can: a) detect existing defects, and b) prevent new defects from occurring.
The article has generated responses from editors of medical-device publications, developers of medical software, a ZDNet blogger, and even medical malpractice lawyers.
Nobody has a problem with the FDA uncovering errors that could harm or kill someone. Some folks seem worried, however, that the FDA will force medical-device companies to adopt static analysis as a standard development practice. Their objections fall into several categories, including:
Static analysis tools are expensive — I’m no expert on the total cost of buying, learning, and maintaining static analysis tools. But, in their defense, they can detect a variety of bugs early in the development cycle. And the earlier you catch bugs, the cheaper it is to fix them.
Static analysis tools generate too many false positives — Static analysis tools can report a software error when, in fact, none exists. Some argue that this shortcoming wastes precious development time. However, vendors of static analysis tools say that their products now implement highly advanced techniques to minimize this problem.
Static analysis can’t always predict how software will behave when it’s actually running — Agreed, but I doubt that anyone believes static analysis tools should be used alone. At QNX, we recently published a whitepaper on how developers can combine static analysis tools and runtime analysis tools to achieve higher product quality.
What about you? Do you use static analysis tools? Do you develop them? Do you think the FDA would be justified in forcing medical-device companies to use static analysis? Or are there other, better ways to ensure software quality?