Disputes over failed software construction projects raise interlinked technical and legal issues which are complex, costly, and time-consuming to unravel - whatever the financial size of the claims and counterclaims, the facts and circumstances of the contract between the parties, or the conduct of the software development. Software contracts are often terminated, with the software rejected amidst a considerable range and variety of complaints expressed by both supplier and customer. These include allegations of incomplete or inadequate delivery, software or database errors, faulty design, operational or performance deficiencies, shifting user or business requirement specifications, poor project management, delays and cost over-runs.
Forensic Systems Analysis
One of the most important issues on which an expert is asked to give an opinion in such software development or implementation cases is: what was the quality of the delivered software and was it fit for purpose? To answer this, and other equally crucial technical questions posed in such disputes, CASTELL Consulting has developed a range of rigorous analytical techniques, Forensic Systems Analysis, for assessing failed, stalled, delayed or generally troublesome software projects. These are intended to be objectively justifiable and properly unbiased techniques, founded on sound software engineering principles, and impartial, favouring neither customer nor supplier, software user nor software developer.
Some examples of cases in which we have applied these techniques for our clients are:
- A case concerning an in-store EFTPOS system for a major national retailer, where the crucial issue was whether or not the software supplier was likely to fix many outstanding errors and have the system ready to roll-out in time for the Christmas sales. What was the objective technical evidence of the software house's "bug find and fix" performance? Were the bugs escalating, or was the software converging onto a stable, performing system? Or were the constant changes in customer specification to blame for the delays and inability of the software to pass a critical acceptance test?
- A case concerning a national University Consortium, similarly focusing on the apparent inability of a main module of the software system to reach a state capable of passing formal Repeat Acceptance Tests, with faults appearing at each attempt at "final" testing. How serious were these faults, and were earlier faults, thought to have been fixed, constantly re-appearing? Was the customer justified in terminating the contract on the grounds of a "reasonable opinion" that the software supplier would not resolve all the alleged faults in a "timely and satisfactory manner"? Was the supplier's counter-claim for a large financial amount for "software extras" valid, and could that explain the inability of the software to converge onto an "acceptable" system?
- The celebrated case of a real-time computer-aided mobilising system for a large metropolitan ambulance brigade, where the preoccupation was with the response times of the software in a clearly life-or-death application. How well were the desired response, availability, reliability and recovery targets for the software contractually defined; and what was the evidence of the system's actual performance under varying load conditions?
Forensic Systems Analysis methodology examples: EFLAT and FORBAT
EFLAT - Expert's Fault Log Analysis Task - Material Defect
EFLAT, developed over a dozen years of careful debate with many instructing solicitors and learned Counsel, uses what is now regarded as a sound protocol for testing whether any given software fault is truly a material defect, in the sense relevant to a material or fundamental breach, and hence termination, of a contract.
This protocol is essentially that, to be a material defect, an alleged software fault must
- be of large consequential effect; and
- be impossible, or take (or have taken) a long time, to fix; and
- be incapable of any practical workaround.
The customer is entitled to propose what is a "large" consequential effect; and the supplier, equally, may put forward an appropriate sizing for a "long" time to fix - each from the standpoint of his own business/technical knowledge and experience, and in the context of the particular contract/project. Both views ought to be evidentially supportable. Both views - and, also, whether or not there is indeed a practical workaround - would be the subject of expert scrutiny and opinion.
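The conjunctive nature of the three-limb protocol above can be sketched in code. This is purely an illustrative sketch: the field names, and the reduction of each limb to a simple boolean, are assumptions made for clarity; in practice each limb is a matter of evidence and expert opinion, not a flag.

```python
# Hypothetical sketch of the EFLAT "material defect" rule. Field names are
# illustrative assumptions; each boolean stands in for an evidenced finding.
from dataclasses import dataclass

@dataclass
class Fault:
    description: str
    large_consequential_effect: bool   # as proposed and evidenced by the customer
    long_or_impossible_to_fix: bool    # as sized and evidenced by the supplier
    practical_workaround_exists: bool  # subject to expert scrutiny and opinion

def is_material_defect(fault: Fault) -> bool:
    """All three limbs of the protocol must hold together (conjunctively)."""
    return (fault.large_consequential_effect
            and fault.long_or_impossible_to_fix
            and not fault.practical_workaround_exists)
```

Note that a fault failing any one limb, for example one with a practical workaround, does not qualify, however serious it may otherwise appear.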
EFLAT constitutes a careful re-running of the appropriate Acceptance Tests, under expert observation, with each Test Incident Report ("TIR") raised during the test rigorously and dispassionately assessed according to the material defect rule.
FORBAT - FORensic Bug Analysis Task
The pattern of build-up of "bugs" during software development, and their resolution, are important indicators of the progress of software construction and testing. Such indicators are unfortunately often misread by both the software customer and the software developer: in particular, the dramatic increase in TIRs during systems testing can be badly misinterpreted. The point is that systems testing (usually the responsibility of the software developer) is meant to find bugs and fix them - it is not being done properly if there is not a large build-up in recorded TIRs. This contrasts with acceptance testing (usually the responsibility of the customer) where "zero", or only a small number of non-serious bugs, is a not unreasonable expectation, particularly as acceptance testing should be undertaken with the appropriate attitude - for acceptance, not rejection, of the software proffered for testing.
FORBAT uses a number of quantitative analysis techniques, and software metrics, to give an objective graphical presentation of the true "bug find and fix" performance of the software house, readily understandable, with a little explanation, to non-technical lawyers and judges. The insights which spring out of these presentations are usually vivid (and incidentally can come as something of a surprise to the parties themselves):
Illustration of a typical FORBAT "bug find and fix performance" graph, and conclusions
"This graph represents the cumulative number of TIRs outstanding at any point in time, and is formed by subtracting the cumulative total number of resolved TIRs from the cumulative total number of reported TIRs on a day-by-day basis. A trendline has been added to the graph to indicate the overall tendency of the underlying data, but otherwise the graph is an objective portrayal of basic project statistics.
These results are consistent with what is reasonably to be expected in a well-run software development project environment. It is usual in such an environment to see a build-up of reported "errors" or "bugs" during systems testing as a major release approaches. It is also usual to see a steep drop-off in the number of these bugs outstanding just before each major release. They illustrate that here the process to "find and fix bugs" was one in which the software house became increasingly proficient, with the software itself becoming more and more stable, and fewer and fewer faults detectable."
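The day-by-day calculation behind such a graph, cumulative reported TIRs minus cumulative resolved TIRs, can be sketched as follows. The daily counts here are invented sample figures for illustration only, not data from any actual case.

```python
# Illustrative sketch of the FORBAT "bugs outstanding" calculation: the
# daily reported/resolved counts below are invented sample data.
from itertools import accumulate

reported_per_day = [5, 8, 12, 9, 4, 2, 1]   # TIRs raised each day (sample)
resolved_per_day = [0, 3, 7, 10, 9, 5, 4]   # TIRs resolved each day (sample)

cum_reported = list(accumulate(reported_per_day))
cum_resolved = list(accumulate(resolved_per_day))

# Outstanding TIRs at the end of each day:
# cumulative reported minus cumulative resolved.
outstanding = [r - f for r, f in zip(cum_reported, cum_resolved)]
```

Plotted over time, a rise during systems testing followed by a falling tail, as in this sample, is the signature of software converging onto a stable system; a curve that keeps climbing tells the opposite story.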
These techniques, together with the related disciplines of auditing, monitoring and researching of the devices and data of computers, servers and related equipment to provide low-level discovery of hidden information from storage media in a legal context (usually given the generic name "computer forensics"), can be used not only in civil litigation but also:
- in criminal cases (particularly where the probity of computer evidence is an issue);
- to assess the fragile status and troublesome characteristics of specific "problem software projects/contracts" before they stall, fail, or sink into litigation;
- more generally, as a positive and rigorous "litigation sensitive" Software and Security Quality Assurance and Project Management Audit Method, equally applicable to software construction and implementation projects as to forensically-sensitive corporate information security, "lost documents", or other similar audits.
Dr. Stephen Castell CITP CPhys FIMA MEWI MIoD, Chartered IT Professional, is Chairman of CASTELL Consulting. He is an internationally acknowledged independent computer expert who has been involved in a wide range of computer litigation over many years. He is an Accredited Member, Forensic Expert Witness Association, Los Angeles Chapter.
©Copyright 2003 - All Rights Reserved
DO NOT REPRODUCE WITHOUT WRITTEN PERMISSION BY AUTHOR.