Background & History

Root Cause Analysis of Schedule Slippage

SCRAM is the result of a collaborative effort among Adrian Pitman of the Australian Department of Defence, Angela Tuffley of RedBay Consulting in Australia, and Betsy Clark and Brad Clark of Software Metrics Inc. in the United States. SCRAM evolved out of their years of consulting and, specifically, their experience in assessing projects in trouble. While these projects invariably experienced cost overruns, schedule slippage was consistently the major trigger for concern and the focus of the assessments.

The most frequently asked questions about these projects are when they will actually be delivered and what they will ultimately cost.

SCRAM utilises a framework for organising the information gathered during an assessment, identifying the root causes of slippage, communicating the reasons behind the schedule slippage, and structuring recommendations to address those root causes. This framework, referred to as the Root Cause Analysis of Schedule Slippage (RCASS), depicts the major areas affecting schedule and the relationships between them. RCASS is based on work by Barry Boehm[1] and by John McGarry et al.,[2] and has been expanded and refined as it has been used on SCRAM Reviews.

Once RCASS was in place, Schedule Health Checks were added to SCRAM to evaluate the construction and logic of the program schedule. A Monte Carlo analysis was also added to estimate the probability of achieving a given planned delivery date.
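To make the idea concrete, the sketch below illustrates how such an analysis works in principle: task durations are drawn repeatedly from three-point estimates, and the fraction of trials that finish by the planned date approximates the probability of on-time delivery. The task names, duration estimates, planned date, and the assumption of a simple serial task chain are all hypothetical, not taken from SCRAM itself.

    import random

    # Hypothetical tasks: (name, optimistic, most likely, pessimistic) duration
    # estimates in working days. Both the tasks and the numbers are illustrative.
    tasks = [
        ("requirements", 20, 30, 50),
        ("design",       30, 45, 80),
        ("build",        60, 90, 160),
        ("test",         25, 40, 90),
    ]

    planned_days = 230  # hypothetical planned delivery date, in working days
    trials = 100_000

    on_time = 0
    for _ in range(trials):
        # Draw one duration per task from a triangular distribution and total
        # them, assuming for simplicity that the tasks run strictly in sequence.
        total = sum(random.triangular(lo, hi, mode) for _, lo, mode, hi in tasks)
        if total <= planned_days:
            on_time += 1

    print(f"Probability of delivery within {planned_days} days: {on_time / trials:.1%}")

In practice such a simulation runs over the schedule's full network of dependencies rather than a serial chain, which is one reason the Schedule Health Check of the schedule's construction and logic matters before any probabilities are computed.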

In 2010, the SCRAM Process Reference / Assessment Models (PR/AM) were developed as models conformant with ISO/IEC 15504 (Information technology – Process assessment), in an effort to maximise consistency and provide an objective framework for assessment. This effort was undertaken to support SCRAM as it is rolled out to a wider audience of users. In 2017, the PR/AM was revised to become the Schedule Risk Management and Assessment Guide (SRMAG).

[1] Barry Boehm, "Section 2: Risk Management Practices: The Six Basic Steps," in Software Risk Management, IEEE Computer Society Press, 1989.

[2] John McGarry, David Card, Cheryl Jones, Beth Layman, Elizabeth Clark, Joseph Dean, and Fred Hall, Practical Software Measurement: Objective Information for Decision Makers, Addison-Wesley, 2001.