ABSTRACT

Title of Document: SUPERPATH: A NON-COMPUTERIZED PROBABILISTIC SCHEDULING METHODOLOGY USING FIRST PRINCIPLES OF THE U. S. NAVY'S PROGRAM EVALUATION REVIEW TECHNIQUE

Charles Christopher Smith, Doctor of Philosophy, 2008

Directed By: Professor Greg B. Baecher, Department of Civil and Environmental Engineering

The Superpath method, a non-computerized probabilistic scheduling methodology relying upon first principles of the Program Evaluation Review Technique (PERT) and a summary level, event-centric network, is presented. Like PERT, a network scheduling methodology developed by the United States Navy during the development of the Polaris submarine launched ballistic missile program of the 1950s, Superpath employs probabilistic techniques within a network-based time management platform, allowing for the assessment of program or project performance against calendar dates certain. The proposed methodology relies upon the identification of easily identifiable, distinguishable, unambiguous and measurable events, along with either short term probabilistic or deterministic time estimates for the single summary paths, or Superpaths, that lie between them. A fundamental tenet of the Superpath approach is that the topology of several key events or reference points in time, vice hundreds or thousands of activities, forms a valid basis from which to perform a schedule analysis. Once this simplified network is constructed, conventional deterministic calculations are performed to determine the slack along each Superpath, affording the capability of evaluating the probability of the project's on-time completion. Superpath also accommodates the nearness of non-critical superpaths to the critical path, a limitation of PERT and modern-day CPM.

Superpath differs from the Critical Path Method (CPM), perhaps the most common network scheduling methodology as of 2008 and one widely employed within the projects and programs of public and private industry. CPM is a network based methodology that relies upon a relatively large number of work activities and multiple types of network relationships as the basic network. CPM is task focused, whereas Superpath focuses on discrete events and considers the interstitial space between events in far less detail. Complex relationships, sequential activities, "logic lags," constraints, calendars and the daily unit of measure are also aspects of CPM not found in Superpath. Generally, probabilistic analysis of time is not part of a CPM methodology, which relies upon a deterministic treatment of time using a single estimated duration for each activity.

SUPERPATH: A NON-COMPUTERIZED PROBABILISTIC SCHEDULING METHODOLOGY USING FIRST PRINCIPLES OF THE U. S. NAVY'S PROGRAM EVALUATION REVIEW TECHNIQUE

By Charles Christopher Smith

Dissertation submitted to the Faculty of the Graduate School of the University of Maryland, College Park, in partial fulfillment of the requirements for the degree of Doctor of Philosophy, 2008

Advisory Committee:
Professor Greg Baecher, Chair
Professor Miroslaw Skibniewski
Professor Robert Friedel
Professor Kenneth O'Connell
Professor Gerald Galloway
Professor Lewis Link

© Copyright by Charles Christopher Smith 2008

Dedication

For Jean McCullough Smith and Eddyth Carter Smith

"You can do anything you put your mind to."
Acknowledgements

I would like to thank the chairman of my dissertation examining committee, Professor Greg Baecher, for not only agreeing to guide me through the dissertation process, but encouraging me to question both the subject matter and the fundamental tenets of project management. His insights, guidance and patience along the circuitous path that my research took are still difficult for me to fathom, and without them I would be in a very different place. I am forever grateful.

I would also like to thank the members of my committee for their support and accommodation of my research within their busy schedules: Professors Gerry Galloway and Ed Link for their insights into the collaborative process in highly complex, large scale engineering efforts, now seemingly lost in today's scheduling profession, and Professors Miroslaw Skibniewski and Kenneth O'Connell for their perspectives in technology and CPM scheduling. I would also like to thank Professor Robert Friedel of the University of Maryland's History Department for not only agreeing to serve on my committee as Dean's Representative, but providing me with invaluable ideas and research leads that proved pivotal in weaving the numerous parts of my work into a single fabric and stimulating ideas for further research. I would also like to thank Al Santos and Elyse Beaulieu of the Department of Civil and Environmental Engineering for their guidance, and the University of Maryland's Zupnik Scholarship for support during my graduate studies.

I would like to thank a very large number of people with whom I was able to work while serving in the U. S. Navy's Civil Engineer Corps, many more than I can mention in this short space: Admiral Christopher Weaver, Vice Admiral Michael Cowan, Rear Admiral Thomas Dames, Captain Guy Mehula, Captain Ron Hertwig and Captain Mark Libonate were all my mentors, whether they knew it or not, and deeply influenced my thoughts and development, which made its way into this research. Gary Horne and Vann Marshburn were also tremendous influences on my career, and to this day, when the questions get difficult, I still ask myself what they would do.

My five years in Hill International's Construction Claims Group in Washington, DC were also a tremendous influence on my work. I want to express my deepest thanks to Hill's Founder, Chairman and Chief Executive Officer Irv Richter for both supporting my graduate education and fostering an environment at Hill where all the tools and minds were available to tackle what seemed to be the most challenging problems imaginable. For Hill's Claims Group President Roy Mitchell, whose personal interest in my development and guidance both during and after my time at Hill has proven invaluable in every regard. For Mark Anderson, for "taking a chance on an unknown kid" and never hesitating to pick me up, dust me off and keep me in the game as I experienced what seemed to be the darkest sides of industry. For Richard Lamb, whose belief in my potential exceeded anything I could ever imagine and whose ability to foresee events well before they occurred made a tremendous impression on my work. In addition to these men, I would also like to thank several expert witnesses who allowed me to work with them at close quarters, particularly Gordon H. Smith, Marvin Weinstein, Tony Delhomme and Bob Dieterle.

I would also like to thank my family and friends who are still there for me despite my silence and neglect. My parents Jean and Christine Smith and sister Sonja. My Uncle and Aunt Steve and Marianna Canby.
Apirath Phanasantiphap, Alec Amarlikit, Dan Feinblum, Supat Sirirat and Be Pompruk. Margaret and Bob Pursell and Mike and Lauren Ruehring for always being there when I called. Diana Temple for providing me with guidance on referencing and formatting, and Nelly Bain for assisting with document assembly. I want to thank Navy Commander Marko Medved, my much smarter Annapolis roommate, who remains in the service of his country. Finally, I want to thank my wife Elizabeth and daughter Katie for their love, encouragement, sacrifices and understanding during this endeavor.

Table of Contents

Dedication
Acknowledgements
Table of Contents
List of Tables
List of Figures
CHAPTER 1  The Last Gantt Chart
  1.1 Chapter Overview
  1.2 Henry Lawrence Gantt
  1.3 The Hanford Works
  1.4 History, Science and Human Motivation
CHAPTER 2  The Establishment of the U. S. Navy Fleet Ballistic Missile Program
  2.1 Chapter Overview
  2.2 The Role of Scientists and Private Industry during the Cold War
  2.3 The Balancing Effect of Mutually Assured Destruction
  2.4 The Fleet Ballistic Missile Program
  2.5 The Polaris Submarine Launched Ballistic Missile
  2.6 The Success of the U. S. Navy's Fleet Ballistic Missile Program
CHAPTER 3  The Performance Evaluation Research Task
  3.1 The Performance Evaluation Research Task
  3.2 The Declassification and Publication of the PERT Methodology
  3.3 Fundamental Concepts of PERT
    3.3.1 "The Flow Plan"
    3.3.2 "Elapsed Time Estimates"
    3.3.3 "Organization of Data"
    3.3.4 "The Analysis"
    3.3.5 "Computation of 'Expected Times' for Events"
    3.3.6 "Computation of the 'Latest Time' for Events"
    3.3.7 "Computation of 'Slack' in the System"
    3.3.8 "Identifying the Network's Critical Path"
    3.3.9 "Probability of Meeting an Existent Schedule"
  3.4 Chapter Summary
CHAPTER 4  A Paternity Dispute Over Network Scheduling
  4.1 Chapter Overview
  4.2 The Development of Computer Technology
  4.3 The Work of Sperry-Rand on Behalf of the U. S. Department of Defense
  4.4 The Declassification of the Performance Evaluation Review Technique
  4.5 Other Efforts Towards a Network Based Scheduling Method
  4.6 The "Extra Cash Value" of Network Scheduling Methods
  4.7 The Kelley-Walker "Critical-Path Planning and Scheduling"
    4.7.1 "1. Project Structure"
    4.7.2 "2. Calendar Limits on Activities"
  4.8 The Perpetuation of Legacy
  4.9 Chapter Summary
CHAPTER 5  The Development of Network Scheduling Methods, 1959-2008
  5.1 Chapter Overview
  5.2 PERT's De-Classification, Celebration, Rise and Demise
  5.3 The Osmosis of PERT and Kelley-Walker CPM
  5.4 The Precedence Diagramming Method
  5.5 The Influence of Information Technology on Network Scheduling
  5.6 Chapter Summary
CHAPTER 6  The Earned Value Management System
  6.1 Chapter Overview
  6.2 The ANSI/EIA-748-A Earned Value Management System (EVMS)
  6.3 The History of Earned Value Management, 1967 to 2008
  6.4 U. S. Government Requirements for EVMS
  6.5 The Limitations of EVMS Management
    6.5.1 Non-Prescriptive Requirements for EVMS
    6.5.2 Detachable Value
    6.5.3 Substitute Value
    6.5.4 The Costing Lag
    6.5.5 Critical Path Value
    6.5.6 The Duration of the Analytical Period
    6.5.7 Banana Curves
  6.6 Chapter Summary
CHAPTER 7  The Rationale for An Event-Centric Network for Time Management
  7.1 The Loss of Basic PERT Principles in Modern Construction Scheduling
  7.2 Debates on the Mathematics of PERT
    7.2.1 Use of the Beta Distribution to Model the Opinions of Competent Engineers
    7.2.3 Use of the Normal Distribution to Calculate T_E
    7.2.4 Addressing The Proximity and Variability of Non-Critical Paths
  7.3 The Adverse Effect of Negative Float
  7.4 Projects, Programs, Humans and the CPM Schedule
  7.5 History as Rationale for a "New" Approach to Time Management
  7.7 Chapter Summary
CHAPTER 8  The Superpath Methodology
  8.1 The Concept and Appearance of Superpath
  8.2 The Rationale for Superpath
  8.3 The Superpath Methodology (Deterministic Solution)
    8.3.1 The Flow Plan
    8.3.2 Elapsed-Time Estimates
    8.3.3 Organization of Data
    8.3.4 The Analysis
    8.3.5 Computation of "Expected Time" for Events
    8.3.6 Computation of "Latest Time" for Events
    8.3.7 Computation of "Slack" in the System
    8.3.8 Identify the "Critical Path" in the Network
    8.3.9 The Display of Information
  8.4 The Superpath Methodology (Probabilistic Solution)
    8.4.1 The Elicitation of Time Estimates for Each Super-Arrow
    8.4.2 Evaluating Delays to the Overall Project or Program
  8.5 Superpath's Relationship to the Critical Path Method
  8.6 Chapter Summary
CHAPTER 9  Case Study
  9.1 Chapter Overview
  9.2 Pre-Construction History
  9.3 Six Companies' Mobilization to the Site of Hoover Dam
  9.4 Overview of the Superpath Review on the Hoover Dam Project
  9.5 Deterministic Superpath Review for the Hoover Dam Project
    9.5.1 Selection of Superpath Events
    9.5.2 Expression of Relationships and Criticality of Superpath Events
    9.5.3 Identification of Early and Late Event Positions
  9.6 Probabilistic Superpath Review for the Hoover Dam Project
  9.7 Chapter Summary
CHAPTER 10  Discussion
  10.1 Chapter Overview
  10.2 Contributions
    10.2.1 Contribution #1: On The Role of Conflict and Games in Project Management
    10.2.2 Contribution #2: The Importance of Project Management Histories
    10.2.3 Contribution #3: The Problematic Theory of CPM's Parallel Development
    10.2.4 Contribution #4: Describing the Loss of the Activity-Event Juxtaposition
    10.2.5 Contribution #5: The Rationale for an Event-Centric Network
    10.2.6 Contribution #6: Expressions on the Limitations of Earned Value
    10.2.7 Contribution #7: The Summarized Event Centric Network
    10.2.8 Contribution #8: The Non-Computerized Solution to Probabilistic Scheduling
  10.3 Close
Appendix A  Correspondences of Albert Einstein and President F. D. Roosevelt
Appendix B  The 41 Fleet Ballistic Missile Submarines of the U. S. Fleet, 1967
Appendix C  Actual Production Milestones for George Washington Class Submarine
Appendix D  Tables for the Normal Probability Distribution
Appendix E  Draft Letter from J. W. Mauchly to Remington Rand Executive
Appendix F  C/SCSC versus ANSI/EIA EVMS Standard
Glossary
Bibliography

List of Tables

Table 7-1 Comparison of Results of Three Solution Types
Table 7-2 Comparison of Results of Three Solution Types (Weekly Units)
Table 9-1 Superpath Events Related to the Diversion Tunnels
Table 9-2 Superpath Events Related to Cofferdams
Table 9-3 Superpath Events Related to the Colorado River, Dam and Reservoir
Table 9-4 Superpath Events for Intake Tunnels and Towers
Table 9-5 Superpath Events for Spillways, Powerhouse and Needle Works
Table 9-6 Assignment of Chips Under the "No Complications" Scenario
Table 9-7 Probability of Occurrence for Three Scenarios
Table 9-8 Assignment of Chips for Three Scenarios
Table 9-9 Cumulative Probabilities for Three Long Term Scenarios

List of Figures

Figure 1-1 "First Gantt Chart Plotted for Artillery Ammunition"
Figure 1-2 "BONUS RECORD OF GIRLS WORKING IN A FOLDING ROOM"
Figure 1-3 "First Gantt Chart to be Published"
Figure 1-4 DuPont Executive's 1942 Schedule for Hanford Works
Figure 1-5 DuPont Gantt Chart For Area 105-B, Hanford Works
Figure 1-6 Post Attack Conditions at Hiroshima, Japan, August 1945
Figure 1-7 Vanderbilt Hall, Grand Central Terminal, New York City
Figure 1-8 FDR's Sketch of Future Naval Hospital on Display at Bethesda
Figure 1-9 President Roosevelt Lays Cornerstone at Bethesda in 1940
Figure 2-1 J. Robert Oppenheimer and MGEN Leslie R. Groves, USA
Figure 2-2 "Little Boy" and "Fat Man" Bombs of the Manhattan Project
Figure 2-3 The Hiroshima Bomb of August 6, 1945 and "Enola Gay" B-29
Figure 2-4 Soviet Scientists Andrei Sakharov and Igor Kurchatov
Figure 2-5 Photographs of Soviet Atomic Weapons Testing
Figure 2-6 The Mariner Freighter U. S. S. Compass Island
Figure 2-7 "Comparison of Missiles of the Fleet Ballistic Missile Program"
Figure 2-8 Vice Admiral William F. Raborn, Special Projects Office
Figure 2-9 "A Polaris Fleet Ballistic Missile Submarine"
Figure 2-10 U. S. S. GEORGE WASHINGTON (SSBN-598)
Figure 2-11 Photograph of the first launch of Polaris, July 20, 1960
Figure 3-1 Critical Path Illustration, Polaris Program
Figure 3-2 PERT System Flow Plan
Figure 3-3 Illustration of elapsed time estimate (t_e) for Single Activity
Figure 3-4 The Three "Elapsed-Time" Estimates for Individual PERT Activity
Figure 3-5 "Estimating the elapsed-time distribution"
Figure 3-6 Symmetric and Asymmetric Beta Distributions
Figure 3-7 Expected Time
Figure 3-8 First Appearance of the Term "Critical Path"
Figure 3-9 PERT "Diagram Showing Sequenced Events"
Figure 3-10 PERT's "List of Sequenced Events"
Figure 3-11 Illustration of Three Paths Within a Simplified Network
Figure 3-12 "Outputs from Analysis"
Figure 3-13 "Determination of slack by calculating T_L"
Figure 3-14 "Critical Path in System Flow Plan"
Figure 3-15 "Event Identification File" from the Polaris PERT Schedule
Figure 3-16 "Estimate of Probability of Meeting Scheduled Date, T_OS"
Figure 4-1 The UNIVAC Computer System
Figure 4-2 Advertisement for UNIVAC Computer
Figure 4-3 Major Contractors Within the U. S. Navy's Polaris Program
Figure 4-4 Photograph of Mauchly with "SkedFlo, Model MCX-30"
Figure 4-5 "Fig. 1 - Typical project diagram"
Figure 4-6 "Fig. 1. System flow plan."
Figure 4-7 "Fig. 5. List of Sequenced Events"
Figure 4-8 Rear Admiral Grace Hopper
Figure 5-1 The PERT Flow Plan of the Polaris Program (1958-1965)
Figure 5-2 CPM Arrow Diagramming Method (1959-Early 1980s)
Figure 5-3 Precedence Diagram Method (1961-Present)
Figure 5-4 "Diagramming Methods - Arrow Diagramming"
Figure 5-5 "Diagramming Methods - Precedence Diagramming"
Figure 5-6 "Initial Rough Network Diagram"
Figure 5-7 Fulkerson's AON and AOA Network Models
Figure 5-8 "PERT System in Operation"
Figure 5-9 "PERT data-processing flow chart"
Figure 5-10 An 80-Column IBM computer punch card from the early 1970s
Figure 5-11 A Typical IBM Mainframe Computer Setup in 1972
Figure 6-1 Earned Value System Parameters
Figure 6-2 The Detachable Value Concept
Figure 6-3 As Planned Schedule as of "t_0" versus Performance as of "t_4"
Figure 6-4 The Age of Information for Earned Value vs. Actual Costs
Figure 6-5 Earned Schedule
Figure 6-6 Granular Differences in the Comparison-To-Baseline Approach
Figure 6-7 Earned Value to Date vs. Baseline PV and Most Recent PV
Figure 6-8 "Banana Curves" - Dual Profiles for Planned Value
Figure 6-9 Extrapolating for the Estimate at Completion
Figure 7-1 Diagramming Methodologies of PERT and CPM
Figure 7-2 "PDM" & "Gantt" Views Using Primavera Suretrak
Figure 7-3 Schedule Granularity Over Time
Figure 7-4 The "Simple" Relationships Found Within PERT
Figure 7-5 Expected Time
Figure 7-6 "Estimate of Probability of Meeting Scheduled Date, T_OS"
Figure 7-7 A Simple Network Based Upon James E. Kelley's Table
Figure 7-8 Project Status as of Project Start, 06 MAR 08
Figure 7-9 Project Status as of 24 MAR 08 without Finish Constraint
Figure 7-10 Project Status as of 24 MAR 08 with Finish Constraint
Figure 7-11 Utility Profile Set Forth By Daniel Bernoulli
Figure 7-12 Project Cost Curves for Owner And Contractor
Figure 7-13 A Utility Model of Project and Program Managers
Figure 8-1 A Superpath Network of 16 Events and Super-Arrows
Figure 8-2 The Transition to Superpath's "Stargaze" or "Night" View
Figure 8-3 The "Night View" of "Early" and "Late" Events
Figure 8-4 The "Day View" of "Early" and "Late" Events
Figure 8-5 The "Night View" of "Early" and "Late" Events
Figure 8-6 Early Gaze (top to bottom)
Figure 8-7 Late Gaze (top to bottom)
Figure 8-8 "Range Gaze" Showing Early and Late Extremes
Figure 8-9 Simple Network Based Upon Kelley's Tabular Approach
Figure 8-10 Conceptual Illustration of a Simplified PERT Network
Figure 8-11 Assignment of Chip Distribution for a Single Super-Arrow
Figure 8-12 Probabilistic Superpath
Figure 8-13 Near Term Probabilistic Approach ("Bow Wave" Method)
Figure 8-14 Small CPM Schedule Network
Figure 8-15 Superpath Events Overlaying a Small CPM Schedule
Figure 8-16 View of Early Event Dates Before Criticality is Considered
Figure 8-17 View of Early Event Dates With Super Arrows Assigned
Figure 8-18 View of Early and Late Event Dates With Super Arrows
Figure 8-19 View of Early and Late Event Dates With Super Arrows
Figure 8-20 Project Status as of 24 MAR 08 with Finish Constraint
Figure 9-1 Photograph of Hoover Dam, Power House and Tunnel Outlets
Figure 9-2 U.S. Bureau of Reclamation's Plan View of Hoover Dam
Figure 9-3 U.S. Bureau of Reclamation's Section of Hoover Dam
Figure 9-4 Section View of Hoover Dam
Figure 9-5 Plan and Elevation of Hoover Dam
Figure 9-6 Hoover Dam Superpath Network as of November 12, 1932
Figure 9-7 Hoover Dam Superpath Network as of November 12, 1932
Figure 9-8 Critical Path For Probabilistic Superpath Analysis
Figure 9-9 Near Term Probabilistic Evaluation as of November 12, 1932
Figure 9-10 "Crowe's" Assignment of Odds to Three Scenarios
Figure 9-11 "Crowe's" Expression of Odds Using the Roulette Felt Format
Figure 9-12 Cumulative Probability Curve for the Short Term Period
Figure 9-13 Aerial Photograph of Hoover Dam

CHAPTER 1

The Last Gantt Chart

1.1 Chapter Overview

This chapter describes time management methodologies employed by U.S. industry at the turn of the last century and by the U.S. military during World Wars I and II, methodologies that involved the works of Henry Lawrence Gantt, a Maryland native who worked extensively in private industry as a management consultant studying production and scheduling, and later on behalf of the War Department. During 1917, while working in Washington, DC on behalf of the U.S. Army Ordnance Bureau, he devised a new charting technique for the planning and production of artillery ammunition for the Frankford Arsenal in Philadelphia. Gantt's depiction, which would become known as the "Gantt Chart," was employed at Hoover Dam in the early 1930s and within the Manhattan Project of World War II, arguably two of the most challenging projects of the twentieth century. The U.S.S.R., struggling to recover from World War I, used the new Gantt chart approach to plan its economic recovery during the 1920s. Why is it that in 2008 our society places little value on these works of Henry Lawrence Gantt? This chapter, and portions of this research, will address this question.

1.2 Henry Lawrence Gantt

"Joe, come along with me. I have the whole world by the tail. I have got the best mechanism yet devised for controlling production and purchase programs."

"In this enthusiastic way Gantt told Professor Joseph W. Roe of the application of the Gantt Chart in war work. The time was the summer of 1917. The place was on F Street in Washington.
After this outburst Gantt took Roe into one of the office buildings which had been commandeered by the Ordnance Bureau, and in a room on the upper floor exhibited a chart which was being worked upon by a young lieutenant. He realized even then the value and power of his new management mechanism, and with an intensity of manner and speech, which was a part of him, said: 'We have all been wrong in scheduling on a basis of quantities. The essential element in the situation is time, and this should be the basis in laying out any program.'" (Alford 1934)

Figure 1-1 "First Gantt Chart Plotted for Artillery Ammunition" (Alford 1934)

During the course of his professional career, Henry Lawrence Gantt made many charts and analyses as a consulting engineer to both private industry and the U.S. Government. A native of Maryland, Gantt would attend Johns Hopkins University and Stevens Institute of Technology and spend his early professional years in the mills of the steel industry, starting at Midvale Steel and working at other plants as a consulting engineer through the late 1800s. His later work consisted primarily of management consulting and included collaborations with Frederick W. Taylor. (Alford 1934)

The content of Gantt's 1910 publication Work, Wages and Profits is consistent with Gantt's 1917 outburst "we have been...scheduling on a basis of quantities." This earlier approach is evident in the many charts, analyses and commentary contained within this work that were the results of the analyses of textile settings (e.g. weaving, winding bobbins, winding yarn, inspecting cloth, folding, winding armatures and sewing sheets and pillowcases).

Figure 1-2 "BONUS RECORD OF GIRLS WORKING IN A FOLDING ROOM" (Gantt 1974)

Prior to 1917, the works of Gantt, Frederick W. Taylor (Scientific Management) and Frank W. Gilbreth (Motion Study), each a management pioneer of the period, expressed performance measurement methodologies in a producer versus product-to-date format. (Taylor 1911), (Gilbreth 1911) Perhaps the most important points within the methodologies of these men, according to Taylor, and points lost in the heated debates of the time, where many viewed "Taylorism" as simply a management method designed to maximize production and corporate profits at the expense of the worker, were expressed within the opening pages of his principal publication.

"It would seem to be so self-evident that maximum prosperity for the employer, coupled with maximum prosperity for the employ(ee), ought to be the two leading objects of management, that even to state this fact should be unnecessary...(t)he majority of these men believe that the fundamental interest of employ(ee)s and employers are necessarily antagonistic. Scientific management, on the contrary, has for its very foundation the firm conviction that the true interest of the two are one and the same; that prosperity for the employer cannot exist through a long term of years unless it is accompanied by prosperity for the employ(ee), and vice versa; and that it is possible to give the workman what he most wants - high wages - and the employer what he wants - a low labor cost - for his manufactures." (Taylor 1911)

But perhaps the common thread within Scientific Management and the work of these three men is the careful observation and study of the minutiae of managerial approach, worker methods and motions.
An illustration of this perspective is provided within the many figures within these works depicting flow plans of people and product through manufacturing plants and the breakdown of worker motions from movement, to movement, to movement, almost like the sequential frames of a motion picture, a fairly recent invention at the time.

Perhaps due to an over-reliance on the Critical Path Method by the project management community in 2008, today the "Gantt Chart" is generally little more than an arrangement of bars over a horizontal time axis, one that is impractical as a time management device because it does not convey the interrelationships between individual bars. Industry publications define the Gantt Chart this way:

"Gantt Chart. See Bar Chart." (PMI 2004)

"The Bar Chart, also called a Gantt chart, is graphically the most simple of the scheduling methods. It is understood by most project people and can be produced quicker than any of the other scheduling methods...can provide a quick, visual overview of a project, but they tend to neglect the management detail necessary to make complicated coordination decisions." (Gould 1997)

But as his 1917 work for the U.S. Army demonstrated, Gantt's work deliberately presented a larger amount of information. Alford presents the following explanation of the information found in the first "Gantt Chart" to be published, which appears in Figure 1-3.

"The long line in each case carried forward to October indicates the accumulated needs as expressed in orders. The vertical figure at the end of each month gives the total requirements up to that date. The amount written horizontally in each monthly space is the amount to be supplied during that month. As plotted, the monthly divisions are equal in length. The amount indicated by this legend, however, varies according to the monthly needs. So the plotting is to a uniform scale as regards the element of time, and to a variable scale, month by month, as regards amount. Once the principle upon which the chart is constructed is comprehended, it is read at a glance and shows exactly the condition of the requirements and deliveries on the item under consideration." (Alford 1934)

Figure 1-3 "First Gantt Chart to be Published" (Alford 1934)

The time-based Gantt charts would be adopted by industry in the United States as well as internationally. "The claim is made that when the Soviet 'Five-Year Plan' of industrial development was organized, it was completely plotted on Gantt charts." (Alford 1934)

Was there more to Gantt's charts than modern industry recognizes? True, there is no open expression of relationships between Gantt's bars, but when viewed from a different perspective these charts might be recognized as far more powerful tools. In Gantt's practice, much of which was within manufacturing or "shop" settings, familiarity with the process was perhaps the most essential element of his work. If Gantt was intimately familiar with the subject matter process before he created and utilized his chart for analytical purposes, were the "missing" inter-activity relationships even necessary? And if, as his works suggest, Gantt could perform his analyses without creating, monitoring and measuring hundreds or thousands of activities, relying instead on fewer than two dozen in most cases, does intimate knowledge of a process together with a smaller number of activities make such expressions unnecessary? Stated another way, did these "simple bar charts" actually contain relationships within the larger analysis of which they were a part, relationships that were simply not expressed on paper? Perhaps where intimate knowledge exists about a process, hundreds or thousands of activities are not necessary and a "simple" bar chart approach can be implemented successfully.
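Alford's reading rules for the first published Gantt chart can be restated in a few lines of code. The following is a minimal sketch, assuming hypothetical monthly quantities (the actual 1917 figures are not reproduced here): each monthly division occupies the same width regardless of the amount written within it, and the running total corresponds to the "accumulated needs" line carried forward across the chart.

    # A minimal sketch of the reading rules Alford describes for Figure 1-3.
    # The monthly quantities below are hypothetical, for illustration only.
    monthly_requirements = {
        "May": 4000, "June": 6500, "July": 5000,
        "August": 8000, "September": 7500, "October": 9000,
    }

    cumulative = 0
    for month, amount in monthly_requirements.items():
        cumulative += amount
        # Uniform scale for time: every month occupies the same width.
        # Variable scale for amount: the figure written in the space changes.
        # The cumulative figure is the "total requirements up to that date."
        print(f"{month:>9} | this month: {amount:>5} | required to date: {cumulative:>6}")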
1.3 The Hanford Works

How was it that the Manhattan Project, an ultra top secret program to research, develop and produce an atomic device and delivery system during World War II, was able to accomplish its objectives within a relatively short period of time?[1] And if we are to acknowledge that today's CPM networks have resulted in tremendous efficiencies in many applications, how much more impressive was the Hanford accomplishment? Or is it important to reserve judgment with respect to the superiority of today's CPM scheduling while examining the Manhattan Project's methods of time management to see what they offer? Original project documents of E. I. du Pont de Nemours and Company (Du Pont), the prime contractor for the Manhattan Project, suggest an effective summary level scheduling system based upon Henry Lawrence Gantt's Time Chart was employed during the construction of one of the principal project sites at Hanford in eastern Washington state. (E. I. du Pont de Nemours & Company 1945), (Greenewalt 1942)

[1] Du Pont reports the actual start of the Hanford Works project as March 22, 1943 and the actual finish as March 31, 1945. E. I. du Pont de Nemours & Company, Inc. (1945). "Construction, Hanford Engineering Works, History of the Project." E. I. du Pont de Nemours & Company, Inc., Wilmington, 1-1385.

The Hanford Works was established to produce plutonium for one of two atomic weapons being produced under the program. The other bomb would be uranium based and its production would occur at the Clinton Works near Oak Ridge, Tennessee. The development and testing of delivery systems and bomb casings would occur at Los Alamos, New Mexico. (Groves 1962) The bombs would be tested at the Trinity Site at Alamogordo, New Mexico, where the first test explosion occurred "at 5:30 A.M., Mountain War Time, July 16, 1945." (Gelb et al. 1988) Two bombs, one of each type, would be dropped from bomber aircraft on the Japanese cities of Hiroshima and Nagasaki on August 6th and 9th, leading to the unconditional surrender of Japan and the end of World War II.

Ignoring the scientific and technical accomplishments of the program, to the extent that is possible, the relatively short time window for design and construction of the Hanford facility, and its on-time completion while major aspects of the technical requirements were as yet unknown or in development, is an important consideration. Perhaps also significant is that the project was managed using a series of bar charts, with different amounts of information and layout in each. The notes of DuPont executive C. H. Greenewalt provide a summary level bar chart, which lays out the major tasks of the Hanford project, shown in Figure 1-4. It is perhaps significant to note that the dashed lines may represent the concept of what would later be described as CPM network "float."

Figure 1-4 DuPont Executive's 1942 Schedule for Hanford Works (Greenewalt 1942)

More detailed schedules were prepared for the project by field personnel. A copy of a schedule for Area 105-B, the site of one of the project's reactors, is provided under Figure 1-5.

Figure 1-5 DuPont Gantt Chart For Area 105-B, Hanford Works (E. I. du Pont de Nemours & Company 1945)

The chart appears to be consistent with the factually intensive work of Henry Gantt, but also provides a very clear picture of the progress of individual work elements. The upper half of each horizontal black bar represents the planned duration and performance period of a task, and the lower half of each black bar represents the actual start and finish dates. By comparing the positions of the upper and lower black bar halves, one is able to discern if the work was performed early, on time, or late. For work in progress, the lower bar is extended to the right a specific number of days to the revised planned completion date and is not shaded (i.e. the white portion of the bar).
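The comparison the Area 105-B chart supports can also be expressed as a short sketch. The task names and dates below are hypothetical stand-ins (the actual chart values are not transcribed here); the classification logic follows the upper-half/lower-half reading just described.

    from datetime import date

    # Hypothetical planned vs. actual finish dates for completed work
    # elements, standing in for the upper (planned) and lower (actual)
    # bar halves on a Hanford-style chart. Dates are illustrative only.
    tasks = {
        "Excavation":  {"planned_finish": date(1943, 10, 9),
                        "actual_finish":  date(1943, 10, 2)},
        "Foundations": {"planned_finish": date(1943, 12, 4),
                        "actual_finish":  date(1943, 12, 18)},
    }

    def finish_status(planned_finish: date, actual_finish: date) -> str:
        """Classify a completed task by comparing the two bar halves."""
        if actual_finish < planned_finish:
            return "early"
        if actual_finish == planned_finish:
            return "on time"
        return "late"

    for name, bars in tasks.items():
        status = finish_status(bars["planned_finish"], bars["actual_finish"])
        print(f"{name}: finished {status}")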
Clearly, the Hanford project was rapidly executed with a successful result, and although it is not clear to what extent the project's scope might have changed during the course of the project, the early cost estimates for this work appear to have fallen within the contemporaneous definition of "accurate." A comparison of Mr. Greenewalt's handwritten initial cost estimate of March 8, 1943 and the cost of work report contained within the contractor's document "Du Pont Project 9536 History of the Project" shows that the early cost estimate was exceeded by less than 6%. (Greenewalt 1942), (E. I. du Pont de Nemours & Company 1945)

The DuPont Corporation deliberately performed its work on the Manhattan project "at cost" with no profit, for reasons that could only be described as "complex utilitarian issues" related to the firm's history and the overall risk and future public perception of the Manhattan Project endeavor.[2] (Kinnane 2002) DuPont had been openly accused of being "merchants of death" during the early twentieth century, as much of its efforts involved the production of explosives used during World War I. In response, DuPont diversified its products and services beyond explosives, and by the beginning of the Second World War was quite reticent about becoming involved in such a military endeavor as the Manhattan Project. (Groves 1962), (Kinnane 2002) The decision to earn no profit on this work was surely in response to these concerns, as was the firm's absence as a primary contractor in the defense industry since World War II. Notwithstanding these points, at Hanford the efficacy of a Gantt type chart was demonstrated on perhaps one of the largest and most technically complex projects of the 20th century.

[2] Final cost estimates for DuPont's work at Hanford totaled $308,885,798, some 5.46% higher than Mr. Greenewalt's initial estimate (implying an initial figure of roughly $293 million, since $308,885,798 / 1.0546 is approximately $292,900,000).

1.4 History, Science and Human Motivation

The constructs of this research, which are presented in the follow-on chapters, are based upon three basic thoughts: (1) that mankind is prone to either ignore or be unaware of relevant historical matter within a particular pursuit, and where this existing matter might be the source of, or be synthesized into, new advancements, a benefit might result; (2) that mankind's pursuit and advancement of the sciences has not been without peril to society, and the harnessing of current information technology for self-serving interests is possible within modern project management systems such as the critical path method (CPM) and earned value management systems (EVMS); and (3) that the motivations and behavior of individuals or organizations are indeed far more complex than those expressed within one-dimensional models of project and program management.
Figure 1-6 Post Attack Conditions at Hiroshima, Japan, August 1945

A relaxed attitude of specific project and program management communities is illustrated by four examples encountered by the student during the course of, or before, this research:

1. The Washington Metropolitan Area Transit Authority (WMATA), the public entity that oversaw the construction of the subway system in greater Washington, DC in the 1970s and 1980s, was approached by the student in 2007 about possible sources of schedule data from their original tunneling and train station projects. WMATA indicated that their construction records, at least the ones that had been retained to date, were in the process of being removed from their offices to be possibly destroyed, due to a mandated shift of construction responsibilities to multiple local governments in and around the District of Columbia.

2. Some of the project records of New York City's original Pennsylvania Station, at its time a landmark construction project in Mid-Town Manhattan at the turn of the last century, appear to be in existence, but are filed within the recesses of an office building in Harrisburg, Pennsylvania. (O'Conner 2008) For the Pennsylvania Railroad Company, the builders of this station, this concern was also addressed in the mid-1970s, when historians Daniel J. Collins and James O. Morris "voiced concerns" over the possible destruction or loss of historical project records, resulting in the Federal Railroad Administration's intervention. (Swanson and Gibb 1978) Also, the grand station was destroyed in the 1960s to make room for Madison Square Garden. Today, all train travelers are ushered down a set of stairs and escalators below this entertainment complex to board their trains in a cramped environment less ornate or spacious than the adjacent 34th Street subway station.

3. The renovation of Grand Central Terminal on Manhattan's 42nd Street during the mid-1990s revealed a grime- and paint-covered celestial map within the high ceiling over the terminal's main concourse. The illuminated map of the celestial zodiac had long since escaped the memory of station managers, travelers and New Yorkers. How was it possible to forget such a stunning ceiling, one that hovered over thousands of travelers in one of the most heavily traversed locations in the world, every day, for ninety years?

Figure 1-7 Vanderbilt Hall, Grand Central Terminal, New York City (National Geographic 2008)

The walking tour of the terminal now provides the following discussion, although any discussion of the "re-discovery" is absent from the current literature of the Metropolitan Transportation Authority or the terminal's real estate developers:

"The most notable feature of the Main Concourse is the great astronomical mural, from a design by the French painter Paul Helleu, painted in gold leaf on cerulean blue oil. Arching over the 80,000 square-foot Main Concourse, this extraordinary painting portrays the Mediterranean sky with October-to-March zodiac and 2,500 stars. The 60 largest stars mark the constellations and are illuminated with fiber optics, but used to be lit with 40 watt light bulbs that workers changed regularly by climbing above the ceiling and pulling the light bulbs out from above. Soon after the Terminal opened, it was noted that the section of the zodiac depicted by the mural was backwards. For several decades lively controversy raged over why this was so. Some of the explanations offered were that it just looked better, or it didn't fit into the ceiling any other way. The actual reason is that Paul Helleu took his inspiration from a medieval manuscript, published in an era when painters and
Some of the explanations offered were that it just looked better, or it didn?t fit into the ceiling any other way. The actual reason is that Paul Helleu took his inspiration from a medieval manuscript, published in an era when painters and 16 cartographers depicted the heavens as they would have been seen from outside the celestial sphere.? (Grand Central 2008) 4. The Department of Defense?s Base Realignment and Closure (BRAC) Commission of 2005 recommended, and the U.S. Congress mandated: (1) the closure of Walter Reed Army Medical Center in Washington, DC; (2) the renaming of National Naval Medical Center Bethesda after Army Physician Walter Reed, and; (3) the construction of a new major Army military hospital at Fort Belvoir, Virginia. But in these three actions, these parties neglected an opportunity to both honor the history of the military services, Dr. Reed and President Roosevelt. Bethesda was the vision of President Franklyn Delano Roosevelt (FDR) and his original sketch on White House stationary is on display in an exhibit within the lobby of the main tower, a historical fact that was also a part of the formal education of multiple BRAC staffers. 3 (U. S. Department of Defense 2005) 3 The military members of this staff included members of the Navy Civil Engineer Corps who, by attending the U.S. Navy?s Naval School for Civil Engineer Corps Officers (CECOS), would have learned of FDR?s famous sketch during their coursework as junior officers. A copy of this sketch was prominently mounted on a wall of the school?s classroom facility at Port Hueneme, California during the student?s own coursework at CECOS during the early 1990s. 17 Figure 1-8 FDR?s Sketch of Future Naval Hospital on Display at Bethesda (Student 2006) Had the BRAC Commission simply named the new Fort Belvoir Hospital after Doctor Reed and renamed the Bethesda complex after President Roosevelt while closing the former Walter Reed facility, it could have: (1) connected Dr. Reed?s name to a major landmark within his home state of Virginia; 4 (2) recognized President Roosevelt?s role in the construction of the military hospital at Bethesda; and 4 Doctor Walter Reed (1851-1902) was born in Belroi, Virginia. His commission would produce research that would lead to the proof that yellow fever was spread by mosquitoes and not through the soiled bed linens or other fomites of fever sufferers. This discovery would contribute to the successful construction of the Panama Canal by an American effort between 1904 and 1914. (Wikipedia 2006)Wikipedia. (2006). "Walter Reed." 18 (3) Named the Bethesda facility after a former Commander-in-Chief, a concept that would be in greater harmony with the newly minted joint command that would be occupying the complex upon implementation of the new legislation. 5, 6 Figure 1-9 President Roosevelt Lays Cornerstone at Bethesda in 1940 (Student 2006) 5 The term ?joint command? often describes a command consisting of the U.S. Army, Navy, Air Force and Marine Corps. 6 The student outlined the problem and proposed solution in a January 2006 letter to the Hon. Representative Chris Van Hollen (Maryland). Congressman Van Hollen telephoned the student to express interest in the idea but indicated any revision to the BRAC legislation would be effectively impossible, but that other avenues might exist to observe this approach. The student also provided letters to the Hon. Senator Paul Sarbanes and former President Clinton. 
The idea made it to the desk of then Secretary of Defense Donald Rumsfeld but has not been enacted.

The second and third conceptual underpinnings of this research relate to man's use of technology for both "good" and "bad," and how highly sophisticated electronic management platforms do or do not recognize and/or model the perspectives of individuals or individual parties. Human history is replete with examples of the destructive power of the combination of the human mind, science and project management. But are we applying modern time management technologies to their best use? Modern project management scheduling methods, which often involve intricate expressions of relationships between many thousands of critical elements using computer technology, are not detached from this human behavior. Recognizing that monetary recoveries in post-project litigation are based to a large degree upon a CPM schedule, it is fair to suggest that this connection has provided sufficient motivation for some to scuttle the possible harmony of these networks purely for self-serving interests. EVMS, meanwhile, may also be manipulated to provide a false sense of security to project participants, allowing projects to proceed to later points before the indicators "catch" adverse performance, at which time it becomes less possible to abandon the project or program without greater upheaval and cost. These activities, where not fraudulent, might be considered beyond the realm of ethical behavior in most circles, but only if those circles fully understand and are provided with the ability to monitor that which may be observable only within the deepest levels of these two management platforms or the human mind. That the corruption of CPM and EVMS systems is even possible on the largest and most visible projects of the U.S. Government provides much of the basis for this study.

It is fair to consider the interplay between project participants as a game or conflict. "It should be clear from the discussions of Chapter I that a theory of rational behavior - i.e. of the foundations of economics and of the main mechanisms of social organization - requires a thorough study of the 'games of strategy.'" (von Neumann and Morgenstern 1953)

The 1960 publication The Strategy of Conflict, authored by current University of Maryland Professor Thomas Schelling, contains an essay entitled "The Retarded Science of International Strategy." In it is described how there exists:

"a main dividing line...between those that treat conflict as a pathological state and seek its causes and treatment, and those that take conflict for granted and study the behavior associated with it. Among the latter there is a further division between those that examine the participants in a conflict in all their complexity - with regard to both 'rational' and 'irrational' behavior, conscious and unconscious, and to motivations as well as to calculations - and those that focus on the more rational, conscious, artful kind of behavior." (Schelling 1960)

Within the realm of project and program management, this research takes conflict for granted and studies the behavior associated with it, with a focus on "the more rational, conscious, artful kind of behavior." (Schelling 1960) This is perhaps a break from conventional treatments or discussions of troubled projects and programs, which are disposed towards Schelling's "pathological" approach.[7]
The student's Superpath, then, becomes less about a simplified visual network of project events than a perspective based upon Schelling's theory. That is, as interpreted by the student, if conflict can be taken for granted, then it is the acute focus on the "more rational, conscious, artful kind of behavior" found within our projects and programs that is worthwhile. To demonstrate an appreciation for, and the utility of, Professor Schelling's theory becomes the overarching purpose of the student's study.

[7] It is fair to consider the U.S. Government Contract Disputes Act of 1978, Sarbanes-Oxley Act of 2002, and False Claims Act of 1863 as individual examples of attempts to treat business behavior as a pathological condition. Some overly prescriptive contract requirements may also satisfy this concept.

CHAPTER 2

The Establishment of the U. S. Navy Fleet Ballistic Missile Program

2.1 Chapter Overview

"The Fleet Ballistic Missile Program, and particularly its Polaris component, is generally considered to be one of the nation's most successful weapon development projects. Certainly, it is the largest and one of the most important weapon projects ever undertaken by the Department of the Navy." (Sapolsky 1972)

2.2 The Role of Scientists and Private Industry during the Cold War

During World War II, the United States had relied heavily upon private industry and scientists to design, plan and produce its weapons systems for use against the Axis powers. This approach was a departure from previous wars, wherein various War Department Bureaus oversaw and even manned these same functions. Gantt's work for the U. S. Army Ordnance Bureau during World War I is but one example of the pre-World War II approach. The Manhattan Project, which saw the development of the world's first atomic weapons during World War II, relied heavily upon the leadership of scientists such as J. Robert Oppenheimer, Enrico Fermi and others, while under the overall leadership of the U. S. Army Corps of Engineers and Major General Leslie R. Groves.

Figure 2-1 J. Robert Oppenheimer and MGEN Leslie R. Groves, USA (Wikipedia 2008)

Figure 2-2 "Little Boy" and "Fat Man" Bombs of the Manhattan Project (Wikipedia 2008)

This organizational structure was different from previous wars. Similar operations research approaches were taken in other programs such as the B-29 Superfortress, Liberty Ship construction, North Atlantic convoy operations and anti-submarine warfare. Both the B-29 and the Manhattan Project bombs were deployed at Hiroshima and Nagasaki, Japan in early August 1945, bringing the unconditional surrender of Japan and ending the Second World War.

Figure 2-3 The Hiroshima Bomb of August 6, 1945 and "Enola Gay" B-29 (Wikipedia 2008)

Following World War II, the United States retained many aspects of its "scientific approach" to weapons production and government contracting. Stephen B. Johnson's book The Secret of Apollo describes the Army Air Forces' efforts:

"During World War II, scientists vastly increased the fighting capability of both Allied and Axis powers. The atomic bomb, radar, jet fighters, ballistics missiles, and operations research methods applied to fighter and bomber tactics all had significant impact on the war. Recognizing the contributions of scientists, Gen. H. H. 'Hap' Arnold, commander of the Army Air Forces, advocated maintaining the partnership between military officers and scientists after the war's end."
(Johnson 2002)

The World War II Manhattan Project and the strategic significance of atomic weaponry did not go unnoticed by other nations. Great Britain and Canada were both members of the Manhattan Project, providing scientists to Oppenheimer's team. Germany and Japan, meanwhile, had their own atomic weapons programs during the war. The German program was by far the most advanced, having begun several years before the Manhattan Project's start in 1942. It was indeed the United States' awareness of the German program, from warnings provided by several scientists in the late 1930s, including Albert Einstein, and the declaration of war on Japan on December 8, 1941, which prompted the start of the Manhattan Project. (Bodanis 2000) A copy of Albert Einstein's 1939 letter to President Franklin D. Roosevelt is provided under Appendix A, as is FDR's reply.

The German program was considered a viable threat to the United States and Great Britain. But in February 1943, a successful raid by the Norwegian Resistance and the British S.A.S. on the Germans' deuterium production facility at Vemork in occupied Norway destroyed the plant and ended the feasibility of German bomb production. Also effective was the bombing of a Norwegian ferry while underway over a very deep portion of Lake Tinnsjo in early 1944 which, while killing approximately one dozen civilians, sent a relatively large and critical supply of concentrated heavy water destined for Germany to the bottom. (Bodanis 2000)

Soviet advances in atomic weapons lagged the American efforts, but the American bombings of Hiroshima and Nagasaki motivated the Soviets to establish their own atomic weapons capability. "The USSR's highest priority at that time was to produce an atomic bomb to counter the American nuclear power. The all-out research and industrial effort, directed by physicist Igor Kurchatov and assisted by atomic espionage, would be completed with the first Soviet nuclear test on 29 August 1949." (American Institute of Physics 2008) From their super-secret facility in Sarov, the Soviet scientists were also able to develop a program that produced a prototype hydrogen weapon in 1953 and a full-scale bomb in 1955. Soviet firsts in military and space technology in the late 1940s and 1950s included the first successful tests or launches of an inter-continental ballistic missile and the artificial Sputnik satellite.

Figure 2-4 Soviet Scientists Andrei Sakharov and Igor Kurchatov

First Atomic Bomb: August 29, 1949
First Hydrogen Device: August 12, 1953
First Hydrogen Bomb: November 22, 1955

Figure 2-5 Photographs of Soviet Atomic Weapons Testing

The relationship and conflict of the scientist is perhaps best illustrated by statements made by two leading scientists of the period.

"On the morning of 7 August I left the house for the bakery and stopped by the newspaper displayed on the newspaper stand. I was struck by the report of Truman's announcement: on 6 August 1945 at 8 a.m. an atomic bomb of the enormous destructive power of 20 thousand tons of TNT was dropped on Hiroshima. My knees buckled. I realized that my life and the life of very many people, maybe all of them, had suddenly changed. Something new and terrible had entered our lives, and it had come from the side of the Grand Science - the one that I worshipped."

Andrei Sakharov, 1950 (American Institute of Physics 2008)
"If the radiance of a thousand suns were to burst into the sky, that would be like the splendor of the Mighty One...I am become Death, the shatterer of worlds...In some sort of crude sense which no vulgarity, no humor, no overstatements can quite extinguish, the physicists have known sin; and this is a knowledge which they cannot lose."8

J. Robert Oppenheimer, 1945 (Gelb et al. 1988)

8 Portions of this quote were relayed by Oppenheimer as passages from the Hindu work Bhagavad-Gita. According to Oppenheimer, these words came to him as he witnessed the Trinity test bomb at Alamogordo, New Mexico on July 16, 1945. (Gelb et al. 1988)

2.3 The Balancing Effect of Mutually Assured Destruction

By the mid-1950s, a delicate balance of power existed, with each side possessing enough nuclear armaments to destroy the other, and the world, many times over. The strategic balance was described as one of mutually assured destruction, or "MAD," but required that a first strike by one side would not incapacitate the other side and prevent it from responding in kind. Despite these capabilities, the United States did lag the Soviets in numbers of weapons.

"The Soviet military threat lies not only in their present military capabilities - formidable as they are - but also in the dynamic development and exploitation of their military technology...(t)hey have developed a spectrum of A- and H-bombs and produced fissionable material sufficient for at least 1500 nuclear weapons. They created from scratch a long range air force with 1500 B-29 type bombers; they then substantially re-equipped it with jet aircraft, while developing a short-range air force of 3000 jet bombers. In the field of ballistic missiles they have weapons of 700 n. m. range, in production for at least a year; successfully tested 950 n. m. missiles; and probably surpassed us in ICBM development...(a)t the same time, they have maintained and largely re-equipped their army of 175 line divisions, while furnishing large quantities of military equipment to their satellites and Red China." (Scientific Advisory Board 1957)

Although it has since been acknowledged that the Gaither Committee deliberately suppressed the number of U. S. ICBMs and inflated the Soviet numbers to create a more pronounced "missile gap," throughout the mid to late 1950s it was believed by most that the Soviets held a sizeable advantage over the United States with respect to the number of ICBMs, and a strategic advantage overall. (Sapolsky 1972)

The United States' decision to develop a "fleet" ballistic missile (i.e. a ballistic missile launched from a ship or submarine) did not occur until several years into the ICBM race. The thinking in the mid-1950s was that if the U. S. could develop an alternative ICBM launch platform, it might reduce the Soviets' quantitative strategic advantage. Ultimately, this strategy would involve a missile system employing the ships and submarines of the U. S. Navy. By the time the Navy presented its first proposal for this "Fleet Ballistic Missile" (FBM) program to the Department of Defense in 1955, there were already four U. S. ICBM programs well under development by the Army (Jupiter) and Air Force (Thor, Atlas and Titan). (Sapolsky 1972) Missing from most historical accounts, however, is the fact that the Soviets were already developing their own submarine launched ballistic missile, which became operational in 1956, four years before the first successful launch of an American Fleet Ballistic Missile.
(Polmar 1978)

Notwithstanding the earlier Soviet FBM capability, the idea of an underwater missile launch involving a submarine was not new. In 1942 the German Army Weapons Department facility at Peenemünde developed and fired missiles from depths of 30 to 50 feet using U-Boats with missile sleds in tow. (Sapolsky 1972) Implementation of this weapons system, however, was never realized, succumbing to a bitter inter-service dispute between the service that controlled the missile program (the German Army) and the service that was to deliver it (the German Navy). Also, the German U-Boat commanders were bitterly against accepting the mission of missile transport and launching, finding that such a break from the traditional missions of attacking surface ships, convoy raiding, reconnaissance and mining was not in keeping with their purpose. Interestingly, this same adverse reaction to a submarine launched missile would also be seen from U. S. Navy submariners during the early stages of the FBM program in the 1950s, some fifteen years later. (Sapolsky 1972)

As both the Soviets and Americans recognized, a "fleet" ballistic missile (FBM) launched from surface ships or submarines had unique strategic advantages. Unlike land based missile silos, FBM submarines would have mobility, allowing them to operate closer to their target area, thereby reducing fuel requirements, missile flight times and overall rocket size. The submarines, further, could remain on station for weeks or months without relief and be undetectable, particularly in the case of a nuclear fueled submarine, which did not require the periodic surfacing to recharge shipboard battery systems required of diesel-electric submarines.9 Perhaps most importantly, the knowledge that a first strike would face immediate counter attack from offshore ships and submarines would presumably serve as a deterrent to an attack from the other side. The FBM strategy was in keeping with the "mutually assured destruction" (MAD) approach that had been adopted by both east and west.

9 The Soviets' first ballistic missile launching submarine was a diesel-electric ZULU class submarine. All U. S. ballistic missile submarines were nuclear powered. The Soviets would begin commissioning nuclear ballistic missile submarines after the Americans.

"...because the SLBM force could not effectively (be) (sic) taken out by the enemy, it was the ideal retaliation weapon, and therefore very important for the Cold War concept of MAD (Mutually Assured Destruction)." (Parsch 2008)

2.4 The Fleet Ballistic Missile Program

In the fall of 1955, the Navy proposed plans for its Fleet Ballistic Missile program to President Eisenhower. While rejecting the Navy's request for its own missile program, Eisenhower directed the Navy to enter a joint venture with either the Army or the Air Force within one of their four existing missile programs. The Navy selected the Army's Jupiter missile program as its host platform over the Air Force controlled Thor, Atlas and Titan missiles for reasons related to size, fuel type, range and program maturity. In November 1955 the Secretary of the Navy stood up the "Special Projects Office" to oversee the development of a modified Jupiter missile that would be suitable for the Navy's purposes. The Jupiter-S team consisted of members of the U. S. Army's Jupiter program, which included their "German rocket team" at Huntsville, Alabama led by Wernher von Braun, and a team of Army contractors headed by the Chrysler Corporation as program manager. The original "as planned"
Navy FBM program was to see testing of the "Jupiter-S"10 in 1958, deployment aboard a Mariner freighter11 in 1960, submarine test launching in 1963 and deployment aboard a submarine in 1965. (Sapolsky 1972)

10 The Navy's Jupiter-S missile was to be a revised version of the Army's Jupiter missile. Revisions to the fuel type and missile dimensions would be necessary to make the missile suitable for shipboard operations. (Sapolsky 1972)

11 The U. S. Navy operated two Mariner freighters, U.S.S. Observation Island and U.S.S. Compass Island, which were commissioned on December 5, 1958 and December 3, 1956, respectively. Plans to procure additional Mariners were cancelled following the Navy's decision, sometime in 1956, to focus on the submarine as the FBM launch platform. These two ships would be used extensively for test launches and the development of the inertial navigation system for the Polaris missile. (Polmar 1978)

Figure 2-6 The Mariner Freighter U. S. S. Compass Island (Polmar 1978)

Several concerns surfaced within the first year of the FBM program that would ultimately doom the Mariner freighter and the Jupiter-S ICBM. First was the belief that submarines possessed a significant strategic advantage over surface ships, as they were virtually undetectable both before and after launch. This advantage was so significant that the submarine became the exclusive launch platform sometime in 1956. Second was that the conversion of the Jupiter from a liquid fuel to a solid fuel would be extremely complicated, if not practicably impossible.12 And finally, the still massive size of the Jupiter-S, shrunken from 60 to 44 feet in height but now weighing 160,000 lbs for its nautical application, was simply too large.13 The immense size of the missile would reduce the missile's range, increase the fuel requirements and increase the size of the submarine to one well beyond that of any that had been built up to that time. And if constructed, this large submarine would only be capable of carrying four Jupiter-S missiles, a number low enough to detract from the strategic significance of the launch platform. (Sapolsky 1972)

12 It was thought that liquid fueled rockets would be less conducive to a submarine application due to concerns over air quality and the volatility associated with liquid rocket fuel. These were both key considerations for a submarine application of an FBM.

13 The first Polaris missile type (the Polaris A-1) weighed 28,800 lbs, precisely 18% of the weight of the Jupiter-S.

Figure 2-7 "A Comparison of the Missiles of the Fleet Ballistic Missile Program" (Sapolsky 1972)

How the situation was handled by the Navy provides an insight into how the role of scientists and the competition between the military services factored into the FBM program. Reeling from its loss of a strategic nuclear weapons mission to the Air Force in 1949,14 the Navy chose the Army's Jupiter missile over the three Air Force options. This choice could have been related to lingering bitter feelings between the two services or, as Sapolsky suggests, to the belief that the Army's grasp on a nuclear mission was tenuous with only one missile program, and that the Army would therefore promote the Navy's ICBM for at least its own welfare if nothing else.

During the first year of the program, the Navy also held a "summer study" where scientific leaders were invited to a retreat at Nobska Point at Woods Hole, Massachusetts to critically evaluate the thinking behind the Jupiter-S ICBM program.
The results of this study, though apparently unintended, would doom the Jupiter-S. On the problem of missile weight, Dr. Edward Teller of the Lawrence Radiation Laboratory made the "basic" suggestion that, if the missile was not to be in operation until 1965, it would be reasonable to assume that a warhead of lesser weight and greater yield would be in place by that time. Therefore a smaller rocket, significantly smaller than the Jupiter-S, would be advisable. This development does not appear to have been deliberate on the part of the Navy; in fact, an effort was made to downplay the results of these discussions in the final report of the workshop. But the results of the summer session, combined with the other concerns already in play by the middle of 1956, would doom the Jupiter-S ICBM. (Sapolsky 1972)

In September 1956 it was formally concluded that the conversion of the Jupiter-S from liquid to solid fuel would not be feasible, and in December 1956 the Navy received formal authorization to terminate its joint venture with the Army and proceed with its own FBM program for the development of the Polaris missile.15

14 The Navy's plans to build a strategic nuclear "Super Carrier" were stricken by Secretary of Defense Louis A. Johnson in 1949, who instead selected the Air Force's proposal to build and use B-36 "Peacemaker" bombers for the U. S. strategic nuclear air mission. Something known as the "Revolt of the Admirals" ensued. James Forrestal, Johnson's predecessor as Secretary of Defense, would soon suffer a nervous breakdown and take his own life while convalescing at the National Naval Medical Center in Bethesda, Maryland.

15 The Chrysler Corporation would file a complaint against the U. S. Government. The complaint was settled in the courts in 1960. (Sapolsky 1972)

2.5 The Polaris Submarine Launched Ballistic Missile

The mission of the Navy's Special Projects Office was to oversee all research, development, testing and production necessary to achieve a functional intercontinental ballistic missile launched from a submarine within eight years (1957 to 1965). This Submarine Launched Ballistic Missile (SLBM) was first of a kind technology, notwithstanding Soviet progress in the area. The complexities were new and not insignificant, such as those related to missile launch:

"Because starting a rocket motor inside a submarine was considered too dangerous, a so-called 'cold launch' method was developed, where the missile is ejected from the vertical launch tube by gas pressure before the motor is ignited. The first launch of a Polaris AX test vehicle in September 1958 was unsuccessful, and the first fully successful flight only occurred in April 1959, after 5 other failures..." (Parsch 2008)

The Polaris program represented approximately 10% of the Navy's annual budget, a staggering proportion both then and now.16 The external focus on the program would have been intense, and the Special Projects Office recognized that the reputation of a well managed program would be an important ingredient to the success of Polaris. Key to the success of the Polaris program would be the support of Admiral Arleigh A. "31-Knot" Burke, who was appointed as the Chief of Naval Operations in mid-1955, immediately prior to the authorization of the FBM program.

16 The President's Budget Request for FY-2009 seeks $149.2 Billion as the Navy's annual budget for Fiscal Year 2009. (Garamone 2008) Garamone, J. (2008). "Bush Sends Budget to Congress." The Journal, National Naval Medical Center Bethesda.
The head of the Special Projects Office was Vice Admiral William Raborn, who proved especially successful at both managing the Polaris program and fostering the program's reputation as a breeding ground for successful management innovation. This reputation for innovation and competence would serve to deflect some questioning by congressional and inter-service opponents of the program. One aspect of Admiral Raborn's approach to Congress was to often have the program's scientific experts provide congressional testimony rather than simply testifying himself. This was atypical for the period, and possibly remains so today. (Sapolsky 1972)

Figure 2-8 Vice Admiral William F. Raborn, Special Projects Office (Wikipedia 2008)

Innovations attributed to the Special Projects Office include enhancement of the Line of Balance method for evaluating worker productivity, establishment of the Program Evaluation Review Technique for time management, the creation of a "Reliability Management Index" for the treatment of uncertainty, and SPAN, which formally integrated the schedule networks of multiple programs. (Sapolsky 1972) Also of interest is the method of codifying subjective judgments during the evaluation of the program elements: the four terms "Good Shape," "Minor Weakness," "Major Weakness" and "Critical Weakness," which are similar to the approach used within contemporaneous risk management systems. (Sapolsky 1972)

While not managed by the Special Projects Office, the development of Polaris had to be kept in lock step with the design and construction of the U. S. S. GEORGE WASHINGTON class submarine, the Polaris launch platform. The George Washington submarine program was managed by Vice Admiral Hyman G. Rickover's nuclear submarine community. In order to minimize the production time, these submarines were built via a modification of the smaller SKIPJACK class attack submarine, a platform already in production. Upon the President's authorization of the FY-1958 Supplemental Shipbuilding Bill on February 11, 1958, three SKIPJACK submarines were "converted during construction" to become SLBM submarines.17 This re-design "...provided for the addition of almost 130 feet in length to accommodate two rows of eight missile tubes, auxiliary machinery, missile fire control and inertial navigation systems."18 The first George Washington class submarine (U. S. S. GEORGE WASHINGTON) was commissioned on December 30, 1959. (Polmar 1978)

17 The first of these three submarines had to be renamed. What was to have been U. S. S. SCORPION was renamed U. S. S. GEORGE WASHINGTON. The second and third submarines authorized by the President were unnamed at the time of the authorization and became U. S. S. PATRICK HENRY and U. S. S. THEODORE ROOSEVELT. (Polmar 1978)

18 The additional 130 feet increased the overall length of the submarine from 251' 8" to 381' 8".

Figure 2-9 "A Polaris Fleet Ballistic Missile Submarine" (Sapolsky 1972)
Figure 2-10 U. S. S. GEORGE WASHINGTON (SSBN-598)

"In September 1959, the first Polaris A-1X tactical prototype missile, which included the inertial navigation system, was successfully launched, and tests of the A-1X continued through 1960...The first successful underwater launch of a Polaris missile was accomplished by (the U. S. S. GEORGE WASHINGTON)...on 20 July 1960." (Parsch 2008)

2.6 The Success of the U. S.
Navy's Fleet Ballistic Missile Program

Figure 2-11 Photograph of the first launch of Polaris, July 20, 1960 (Parsch 2008)

"Suddenly the blue-green Gulf Stream erupted with convulsive fury. Like a giant marlin in a cascade of brine, a grey, bottle-shaped monster leaped into the afternoon. For an instant it hung against the sky - silent, ominous, streaming foam. Then it came alive with unearthly racket. Its tail belched flame, and it climbed into its new element with incredible ease. Arcing high into the thin, cold reaches of space, the first ballistic missile ever to be fired from a submerged submarine swung surely toward the south and east. Polaris, named for the mariner's bright pole star, needed no such guidance now. Brief seconds after it broached the water off Cape Canaveral last week and screamed down the Atlantic missile range, it was on its own - and it was on target.

"Some 40 ft. below the roiling water, a grinning redhead, wearing the two stars of a rear admiral - Rear Admiral William Raborn Jr., boss of the Navy's Polaris project - gave orders to get ready for a second shot before a proud, succinct message was sent to President Eisenhower in Newport: 'Polaris, from out of the deep to target. Perfect.'" (Time 1960)

Polaris was to achieve its first operational launch by 1965. It beat that milestone by five years. By 1967 there were forty-one U. S. Navy submarines delivered and on patrol, each with sixteen Polaris missiles. While a technical and schedule success, that no Polaris missiles were ever fired in anger is perhaps their most important contribution. Such hopes were expressed by Admiral Raborn in a second message, sent to Admiral Arleigh Burke shortly after his message to President Eisenhower: "This new star of peace hoisted a trail of missile smoke from salt water to space as a signal of a bright new addition to seapower, a new strategic use of the world's oceans which will be felt around the world and across and behind the iron and bamboo curtains." (Time 1960)

CHAPTER 3
The Program Evaluation Research Task

3.1 The Program Evaluation Research Task

In The Secret of Apollo, Stephen B. Johnson provides insight into the struggle of federal program managers for effective management and control of technical and cost variables in the mid to late 1950s and the 1960s.

"...as ballistic missiles and air-defense systems failed in the late 1950s, military officers and aerospace industry leaders had to heed congressional calls for greater reliability and more predictable cost. Managers responded by applying extensive cost-accounting practices, while engineers performed more rigorous testing and analysis. The result was not a 'low cost' design but a more reliable product whose cost was high but predictable. Engineers gained credibility through successful missile performance, and managers gained credibility through successful prediction of cost. Because of the high priority given to and the visibility of space programs, congressional leaders in the 1960s did not mind high costs, but they would not tolerate unpredictable costs or spectacular failures." (Johnson 2002)

Johnson also makes an interesting point concerning the role that management specialists from private industry would play within missile programs: "the military needed better cost control and technical reliability in its missile programs. Military officers and scientists were not particularly adept in these matters. However, managers and engineers were."
(Johnson 2002)

Admiral Raborn might have agreed with this statement, as the PERT team assembled in June 1957 by Gordon Pehrson, the Director of the Plans and Programs Division of the Navy Special Projects Office, deliberately included experts in Operations Research in addition to managers and engineers. Pehrson had been given the task of fully developing the SPO's management system. The PERT team would later describe themselves in publication as "a team of operations analysts." (Malcolm et al. 1959)

But from the very beginnings of the Polaris program, the senior leadership's emphasis on innovative management practices was already obvious. In mid-1956, Admiral Raborn took key members of his team to the corporate headquarters of several major corporations to explore existing management practices, with the hope of gleaning innovative, or cutting edge, methodologies being employed in private industry. Visits were made to corporations that included the Chrysler Corporation and E. I. DuPont de Nemours, but the results proved disappointing. Admiral Raborn would describe the content of these visits as "nothing of value," leading him to conclude that the private industry innovations were "reputations unearned." (Sapolsky 1972)

Several aspects of the management of the SPO do stand out as noteworthy. The SPO's facility contained a management center capable of housing dozens of team members for weekly meetings while at the same time displaying performance information for each program element. Sapolsky's description of this space seems to conjure images of a U. S. Navy ship's Combat Information Center, although no connection is made by the author.

In late 1957 Pehrson discussed formally establishing the management methodologies for time, resource and cost management within the Polaris program with members from the Chicago office of the consulting firm Booz Allen Hamilton. These discussions would ultimately result in the award of a Navy contract to that firm in early 1958, but not before the basic PERT concept had already been mapped out during meetings prior to the start of the PERT contract, most likely in November or December of 1957. The Lockheed Corporation was also given a role in the formalization of PERT, but Booz Allen Hamilton was the lead contractor. (Sapolsky 1972)

The timing of the PERT study, roughly one month after the successful launch and orbit of the Soviet satellite Sputnik I (October 4, 1957) and at about the same time as the Gaither Committee's report to President Eisenhower, is likely relevant. It is not unimaginable that there were additional pressures on the SPO and Pehrson to develop and formalize innovative management concepts within the U. S. weapons programs in response to Soviet progress in nuclear weapons delivery technology. (Sapolsky 1972)

"The pressure never let up - and then, suddenly, it increased. In August 1957 the Soviets fired their first ICBM, and the oceans narrowed from thousands of miles to 30 minutes. The continental U.S. came within reach of a distant enemy firing from his own shore. On Oct. 4 that same year, Sputnik I soared into orbit. Official Washington, once it got over the shock, set about finding effective ways to respond to the increased Russian capabilities." (Time 1960)

A sobering point was that the Soviet rocket used to launch Sputnik I into orbit, the R-7 booster, was the same rocket used within the Soviet ICBM program.
It is doubtful that the significance of a successful launch of an R-7 rocket would have been lost on western military strategists of the time; as of the Sputnik launch, the U. S. had yet to demonstrate its own ICBM capability.

The initial members of the PERT group, in addition to Pehrson, were the SPO's Willard Fazar, Head of the Special Projects Program Evaluation Branch; Donal (sic) Malcolm, John Roseboom (sic) and Dr. Charles Clark of Booz Allen & Hamilton; and Richard Young and Everett Lennen of the Lockheed Corporation. Vice Admiral William F. Raborn, the head of the Special Projects Office, has been widely acknowledged as instrumental not only in the creation of PERT, but in fostering a management environment at SPO that allowed for the creation of such theories. (Sapolsky 1972), (O'Brien 1999) Others involved included J. W. Pocock, W. F. Whitmore, L. T. E. Thompson, P. Waterman and R. Miner. (Malcolm et al. 1959)

Work began under the PERT contract on January 27, 1958, and a report that formalized the PERT methodology was to be completed within three months. Although details of PERT and the work of this team would not be presented to the public until early 1959, the PERT team has noted that "the general model specification upon which the analysis is based" took only one month. (Malcolm et al. 1959) This would mean that the PERT methodology would have been formally recorded no later than April 1958, and possibly as early as February 1958.

"Project PERT...was set up to develop, test, and implement a methodology for providing management with integrated and quantitative evaluation of: (a) progress to date and the outlook for accomplishing the objectives of the FBM program, (b) validity of established plans and schedule for accomplishing the program objectives, and (c) effect of changes proposed in established plan." (Malcolm et al. 1959)

"Four features characterize PERT: a network that graphically describes the interrelationship of steps (called events) involved in developing a specific end item; three time estimates for reaching each event in the network - the most optimistic, the most likely, and the most pessimistic times for completing an activity; a formula for calculating the probability distribution of the 'expected' time for completing the activity; and an identification of the longest expected time sequence through the network, which is labeled 'the critical path' since the end item will not be realized until the path is completed." (Sapolsky 1972)

Fazar elected to name the methodology "PERT," for "Program Evaluation Research Task," sometime prior to January 1958. This name choice allowed him to memorialize his Program Evaluation Branch of the Special Projects Office while at the same time finding a name that would be described as "cute, catchy and bold." Sometime between February 6, 1958 and April 29, 1959, the words "Research Task" would be replaced by "Review Technique." (Malcolm et al. 1959) The resulting title, "Program Evaluation Review Technique," has remained consistent to the present day, although current industry's version of PERT is fundamentally different from the original form.

3.2 The Declassification and Publication of the PERT Methodology

By the spring of 1959 the U. S. Navy had allowed public dissemination of the basic details surrounding the formalized network based schedule methodology that had been developed by Fazar's group.
A journal article authored by Malcolm, Roseboom (sic), Clark and Fazar, titled "Application of a Technique for Research and Development Program Evaluation," was received for publication by Operations Research on April 27, 1959 and published later that year. It described the PERT techniques and methodology to the general public, but also described the conditions that created the requirement for them:

"At the time of the initiation of the study reported in this paper, the position of the Plans and Programs Division was as follows: A schedule for the system development was at hand, encompassing thousands of activities extending years into the future. This schedule had been set up partially to conform to time deadlines set in the light of an urgent requirement for the completed weapon system. This forced some activities to be compressed into uncomfortably short time intervals. Slippages of schedule dates sometimes occurred. As the Program Evaluation Branch studied the slippages and prospects for future slippages, it appeared that the capacity to predict future progress was more limited than desired. The importance of the issues at stake is great."

The PERT team's article mentioned performing a survey of "current practices" for the prediction of schedule slippages within "huge development programs," only to find them inadequate for the purposes of the Fleet Ballistic Missile program.

3.3 Fundamental Concepts of PERT

In describing the PERT approach, Malcolm et al. note five fundamental concepts of PERT. They are: (1) that "the most important requirement for project evaluation...(is) the provision of detailed, well-considered estimates of the time constraints on future activities;" (2) that "the qualifications of a person making such an estimate must include a thorough understanding of the work to be done;" (3) that "time estimates for some activities...are highly uncertain. This uncertainty must be exposed;" (4) that "each activity...should have a probability distribution of the times that the activity might require;" and (5) that there must be a "precise knowledge of the sequencing required or planned in the performance of activities." With these five fundamentals, it is possible to construct a network of events and calculate "the time at which each milestone...can be expected," along with its uncertainty.

The journal article by Malcolm et al. introduces the term "critical path," and it is believed that this is the first time the term was used publicly: "One can select the 'critical path' of those activities that can not be delayed without jeopardy to the entire program." In the case of the Polaris program, it was now possible to identify such a path, allowing program managers to respond appropriately to critical and non-critical work items.

Figure 3-1 Critical Path Illustration, Polaris Program (Malcolm et al. 1959)

By 1962 PERT was being hailed as "the hottest new trend in theory and practice of management" and had become a formal contract requirement on all research and development projects of the U.S. Department of Defense and the National Aeronautics and Space Administration. (Business Week 1962)

Malcolm et al. also provide an interesting discussion regarding the treatment of "three kinds" of variables:

"The status of a developmental program at any given time is a function of several variables. These variables are essentially of three kinds: Resources, in the form of dollars, or what 'dollars' represent -
manpower, materials, and methods of production; technical performance of systems, subsystems, and components; and time." (Malcolm et al. 1959)

Upon identifying these "three kinds" of variables, the authors express that ideally a system would measure all three (resources, technical performance and time), arriving at some sort of "optimum" balance of each. In most cases it is likely that each project participant would like to minimize the dollar value of resources utilized and also the overall duration of the project. Technical performance, meanwhile, would appear to be a variable that would be maximized by all participants. Ignoring, at least temporarily, that not all project participants always have these same objectives in common, the authors then describe the need for a single variable integrating resources, technical performance and time so that it might be maximized or minimized.

"Ideally, we should like to evaluate a given actual schedule in terms of all three variables. In this way it would be possible to arrive at an 'optimum' schedule that would properly balance resources, performance, and time. The existence and determination of such an optimum requires that some criterion be analytically maximized or minimized. To do this it is necessary to establish a criterion that integrates time, resources, and performance into meaningful utility." (Malcolm et al. 1959)

Although the authors go on to state that the identification of such an integrated utility function went beyond the PERT study and that "an approach dealing only with the time variable was selected," this concept is significant. The FBM program's decision to focus exclusively on the time variable appears to be due both to the likely impossibility of performing such an integration in a short period of time and to the limited computing capabilities of the period.

"Suffice to say, it was determined that no such criterion was available and that the data-processing problems associated with a plan of some 10,000 events would preclude its practical implementation in any case." (Malcolm et al. 1959)

This decision does not suggest that the FBM program ignored technical performance and resources. FBM, instead, recognized that a time based system could accommodate both the cost and technical variables.

The following subsections present the original PERT methodology as described by Malcolm et al. in the original journal article, "Application of a Technique for Research and Development Program Evaluation," appearing in Operations Research in the fall of 1959.

3.3.1 "The Flow Plan"

Malcolm et al. describe a "flow plan" as the model of the network of events and activities "necessary to achieve the end objective," T0, where T0 is the point in time when the end objective is achieved. Their original illustration is provided under Figure 3-2.

Figure 3-2 PERT System Flow Plan (Malcolm et al. 1959)

The system flow plan is the basic platform upon which the PERT analysis is based and speaks to the fifth fundamental concept of PERT described by the authors (that there must be a "precise knowledge of the sequencing required or planned in the performance of activities..."). Malcolm's article also describes PERT's reliance upon unambiguous, distinguishable events, and it is significant that each activity is "bookended" with an event (i.e. an event is found at both the start and finish of every activity within the network).

"An 'event,'
depicted by circled numbers...is defined as a distinguishable, unambiguous point in time that coincides with the beginning and/or end of a specific task or activity in the R and D process...Events must be defined unambiguously." (Malcolm et al. 1959)

In this regard, it is fair to describe PERT as an "event centric" network. The PERT flow plan includes only "finish-to-start" relationships between the events, a characteristic that is not shared by most contemporaneous applications of the critical path method.

Malcolm's flow plan also has several other interesting characteristics that provide a possible insight into PERT. First, the ratio of events to activities is very close to one-to-one, a very low ratio by today's CPM standards. Second, the event numbering system descends rather than increases with successive events. To the extent this was employed by the Polaris team, it would have provided the team with the ability to gauge its "distance" from the end objective, and perhaps motivate members involved within late performing activities. What is not clear from this numbering system is how the team would have numbered newly identified events as the project moved forward in time. Today's network methodologies do not have this problem, as they most often increase activity numbers going forward and typically employ a "skip" numbering system (i.e. they go up by fives or tens). Perhaps the answer is that by relying upon a finite set of well defined events from the start of the project, vice an evolving set of task activities that could not be fully defined at the start of a project, Polaris would see far fewer revisions to its events and numbering system.

3.3.2 "Elapsed Time Estimates"

"With the flow plan laid out graphically and authenticated as representing the work and activities to be performed, elapsed time estimates for each activity are obtained from competent engineers." (Malcolm et al. 1959)

Figure 3-3 Illustration of elapsed time estimate (te) for a Single Activity

The authors' description of the assignment of an estimate of elapsed time (te) for each activity between numbered events within the system flow plan provides several interesting points related to the first and second fundamental concepts of PERT. Combined, these two principles were "the provision of detailed, well-considered estimates of the time constraints on future activities (and that) the qualifications of a person making such an estimate must include a thorough understanding of the work to be done." With regard to the second concept, it seems safe to assume that the abilities and knowledge of these "competent engineers" were of the first order and were a large contributor to the overall success of the adaptation of PERT to the Polaris program.

To accomplish the first concept, the engineers were required to provide three estimates of "elapsed-time" to the PERT team for the activity or activities in their charge. These three estimates were for the optimistic, likely and pessimistic time scenarios, each of which was explicitly defined for each activity. These three estimates were assigned the letters a, m and b, respectively.

Figure 3-4 The Three "Elapsed-Time" Estimates for an Individual PERT Activity (Malcolm et al. 1959)

Using these three estimates, it was then important to assign a probability distribution stretching between the "optimistic" (a) and "pessimistic" (b) values with a peak at the "likely" (m) time estimate, which was based upon the subjective judgment of the competent engineer(s). This "likely"
value was "free" to take any position between the optimistic and pessimistic values. As such, the resulting probability distribution could be asymmetric. This part of the PERT process recognized what the authors described as the third and fourth fundamental concepts of PERT, which held: (3) that "time estimates for some activities...are highly uncertain. This uncertainty must be exposed;" and (4) that "each activity...should have a probability distribution of the times that the activity might require."

Although the distributions would likely be of a different shape for each activity, some characteristics were common to all. The "extreme" optimistic and pessimistic values had "relatively little" chance of occurrence and therefore had a small probability associated with them. Figure 3-5 provides an illustration of what the authors describe as the "elapsed-time distribution" for an individual activity.

Figure 3-5 "Estimating the elapsed-time distribution" (Malcolm et al. 1959)

Although criticized in later articles, the choice of the beta distribution should be considered appropriate for this application. This distribution is capable of modeling any curve between points "a" and "b" resulting from the three time estimates from competent engineers. (Grubbs 1962), (Bildson and Gillespie 1962) Illustrations of the beta distribution under both symmetric and asymmetric scenarios are provided in Figure 3-6.

Figure 3-6 Symmetric and Asymmetric Beta Distributions (Clemen and Reilly 2001)

The density function of the beta distribution, in the r and n parameterization used by Clemen and Reilly, is given by:

f(q) = [Γ(n) / (Γ(r) Γ(n - r))] q^(r-1) (1 - q)^(n-r-1), for 0 ≤ q ≤ 1

(Clemen and Reilly 2001) The terms "r" and "n" are "parameters that determine the shape of the density function." (Clemen 2001)

Glavinich provides a discussion of the beta distribution: "The beta distribution...was selected by the PERT development team and is still the basis for PERT today for the following reasons: (1.) It is a unimodal or 'single-peak' distribution; (2.) It has finite, non-negative end points; (3.) It is non-symmetrical and the mode can be skewed toward either the smallest or largest anticipated duration."

With the "likely" elapsed-time estimate free to take any value between the optimistic and pessimistic estimates, the beta distribution was chosen as the probability distribution for every activity within the PERT flow plan for Polaris.
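To make the preceding concrete, the short sketch below shows how a single activity's three elicited estimates reduce to the two numbers carried forward into the network computations. It is a minimal Python sketch, not a reconstruction of the Navy's implementation: the activity name and estimate values are hypothetical, and the expressions te = (a + 4m + b)/6 and σ²(te) = ((b - a)/6)² are the standard PERT approximations to the mean and variance of the fitted beta distribution.

```python
from dataclasses import dataclass

@dataclass
class Activity:
    """One PERT activity with its three elapsed-time estimates, in weeks."""
    name: str
    a: float  # optimistic estimate
    m: float  # most likely estimate
    b: float  # pessimistic estimate

    def expected_time(self) -> float:
        # PERT's beta-based approximation of the mean elapsed time, te
        return (self.a + 4 * self.m + self.b) / 6

    def variance(self) -> float:
        # PERT's approximation of the variance of te: ((b - a) / 6) ** 2
        return ((self.b - self.a) / 6) ** 2

# Hypothetical activity, with estimates as elicited from a "competent engineer"
act = Activity("guidance subsystem test", a=4, m=6, b=14)
print(act.expected_time())       # 7.0 weeks, pulled above m by the long pessimistic tail
print(round(act.variance(), 2))  # 2.78 weeks squared
```

Note how an asymmetric set of estimates moves the expected time away from the most likely value, which is precisely the information a single-point estimate would have hidden.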
With these distributions, the PERT team could compute an expected time (te) and variance (σ²(te)) for every activity. Figure 3-7 illustrates the separate concepts of the three time estimates and the value and variance of expected time.

Figure 3-7 Expected Time (te) for Activity Finish (Malcolm et al. 1959)

Together, the "expected" time (te) and the variance of the "expected" time (σ²) were used in the network computations that followed and allowed the determination of a "critical path," or longest path through a network, a term introduced to the general public by Malcolm et al. It is important to note here that the PERT analysis has transitioned from an evaluation of probability distributions to a deterministic network analysis in which there is only one duration, the "expected" time (te), associated with each activity. The resulting PERT network, therefore, is now indistinguishable from a CPM network, which is also a network of activities with deterministic assessments of individual activity durations. In his 1963 article "Monte Carlo Methods and the PERT Problem," Richard van Slyke of the University of California, Berkeley noted that the PERT team's approximation of "the stochastic problem by a problem of the deterministic form" (i.e. identifying "expected" durations for each activity in lieu of solving for the probability distribution) was most likely attributable to both the uncertainty within the three-time estimate and the computer limitations of the period. (van Slyke 1963)

Although a somewhat bitter dispute would erupt over the "invention" of network scheduling methods, two key things should be recognized. First, that PERT was indeed converted to a deterministic network during the solution phase; and second, that D. G. Malcolm et al. introduce the term "critical path" in several places within their 1959 article "Application of a Technique for Research and Development Program Evaluation." It is believed that these examples represent the first use of this term in any public forum.

"One can select the 'critical path' of those activities that cannot be delayed without jeopardy to the entire program." (Malcolm et al. 1959)

"It is noted that, for some of the events, a zero slack condition exists. This indicates that the expected and latest times for these events are identical. If the zero-slack events are joined together, they will form a path that will extend from the present to the final event. This path can be looked upon as 'the critical path.' Should any event on the critical path slip beyond its expected date of accomplishment, then the final event can be expected to slip a similar amount." (Malcolm et al. 1959)

Figure 3-8 First Appearance of the Term "Critical Path" (Malcolm et al. 1959)

There are some psychological aspects to the elicitation of elapsed time estimates that merit discussion. That Polaris relied upon the inputs of separate individuals, presumably a relatively high number of experts, is significant and surely presented a separate challenge to those involved with management of the PERT network. According to Malcolm et al., it was felt that by forcing the engineers to provide three time estimates for each activity, and not one, it would "disassociate the engineer from his built-in knowledge of the existing schedule and...provide more information concerning the inherent difficulties and variability in the activity being estimated." The psychological basis for such a consideration would be described more than a decade later by psychologists Amos Tversky and Daniel Kahneman in their landmark study "Judgment under Uncertainty: Heuristics and Biases," which appeared in the journal Science in 1974. Herein the authors describe how representativeness, availability, and adjustment and anchoring can create biased assessments under uncertain situations. (Tversky and Kahneman 1974) That this process was deliberately organized, methodical and serious is perhaps illustrated by the fact that the process of eliciting these time estimates from the engineers was referred to as "the interrogation process," perhaps a tongue-in-cheek term. It is clear from these considerations that the PERT methodology considered factors beyond a "simple" network of events and activities; it at least gave some consideration to human behavior.

3.3.3 "Organization of Data"

With elapsed time estimates for each network activity, and a flow plan providing an insight into the predecessors and successors of each activity, the Polaris program managers would construct a "diagram showing sequenced events" with arrows expressing the activities and the finish-to-start relationships.
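It is worth noting how little data such a flow plan actually requires. The sketch below, which uses hypothetical event numbers in the descending style noted earlier, represents a flow plan as a dictionary of finish-to-start activity arrows and then collapses each arrow to its deterministic (te, variance) pair for the analysis; it illustrates the structure only and is not a reconstruction of the Polaris network.

```python
# A PERT flow plan as a dictionary of directed, finish-to-start activity arrows.
# Keys are (predecessor event, successor event) pairs; values are the three
# elapsed-time estimates (a, m, b) in weeks. Event numbers are hypothetical and
# descend toward the end objective, as in the authors' illustration.
flow_plan = {
    (59, 56): (3, 5, 9),
    (59, 55): (2, 3, 4),
    (56, 53): (4, 6, 14),
    (55, 53): (1, 2, 3),
    (53, 50): (5, 8, 11),   # event 50 is the end objective
}

def expected_time(a, m, b):
    return (a + 4 * m + b) / 6

def variance(a, m, b):
    return ((b - a) / 6) ** 2

# Collapse each activity to its deterministic (te, variance) pair.
edges = {pair: (expected_time(*est), variance(*est)) for pair, est in flow_plan.items()}
for (i, j), (te, var) in sorted(edges.items(), reverse=True):
    print(f"event {i} -> event {j}: te = {te:.2f} weeks, variance = {var:.2f}")
```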
In the authors' diagram, events were to be recorded sequentially in a single horizontal line, providing "a pattern which (would) lend itself to analytical treatment...(t)his (was) the equivalent of graphically collapsing the network."

Figure 3-9 PERT "Diagram Showing Sequenced Events" (Malcolm et al. 1959)

PERT managers worked right-to-left while constructing this diagram, starting with the last event and recording each predecessor event and connecting activity arrows. No event was recorded in this diagram until all of its successors had been listed. Next, using the collapsed flow plan diagram as a visual aid, a table of events and elapsed time estimates was constructed. This table included the means and variances of the elapsed time estimates for every activity before and after a particular event.

Figure 3-10 PERT's "List of Sequenced Events" (Malcolm et al. 1959)

3.3.4 "The Analysis"

Before proceeding with the solution, the authors note that "many difficult analytical situations presented themselves" within the network. One such example was illustrated for the elapsed time evaluation of the three paths between Event A and Event D in Figure 3-11.

Figure 3-11 Illustration of Three Paths Within a Simplified Network (Malcolm et al. 1959)

The authors state that path "a-d-e" would be correlated by some amount with the other two paths, "a-c" and "b-e," because path "a-d-e" shares activity "a" with path "a-c" and activity "e" with path "b-e." Rather than solving a correlated solution,19 an effort they describe as "exorbitant," the Polaris engineers chose a simpler approach.

"...a simplified analysis has been utilized. In this analysis the time constraints of all paths leading up to an event are considered, and the greatest of these expected values is assigned to the event. The variance of this expected value is the sum of the variances associated with each expected value along the longest path...(i)t was felt that, considering the nature of the input data, the utility of other outputs possible from the data and the need for speedy, economical implementation, the method described above was completely satisfactory." (Malcolm et al. 1959)

19 Performing such a calculation across the 10,000 activities within the Polaris PERT network would not have been practicable, and was a likely impossibility given the computer limitations of the period.

In other words, the expected time of event D would be given by the largest mean value of elapsed time among the three paths "a-c," "a-d-e" and "b-e." Although the mean and variance of event D were now assigned, the authors were silent on how competing paths with lower means but significantly larger variances were to be treated. Later discussions of PERT would describe this step as a "basic error."

"As is well known, the effect of activity chains terminating in a particular event are ignored in estimating variance in event-occurrence time unless the chain has the largest mean duration. Even though a chain might have much larger variance than the longest time chain, this variance is in no way accounted for in estimating event variance until the chain becomes critical." (Bildson and Gillespie 1962)

This point will be addressed in later portions of the study.
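The consequence of this simplified merge rule is easy to demonstrate numerically. In the hypothetical sketch below, two chains terminate at the same event. The retained chain has the larger mean, but the discarded chain's variance is so much larger that it would finish beyond the retained mean roughly 43% of the time, information the rule throws away entirely. The numbers are illustrative only and do not come from the Polaris network.

```python
import math

# Two hypothetical activity chains terminating at the same event. Chain "a-c"
# has the larger mean; chain "b-e" has a slightly smaller mean but a much
# larger variance (values in weeks and weeks squared).
chains = {
    "a-c": {"mean": 40.0, "var": 4.0},
    "b-e": {"mean": 39.0, "var": 36.0},
}

# PERT's simplified rule: the event inherits the mean AND the variance of the
# chain with the greatest expected value; every other chain is discarded.
longest = max(chains.values(), key=lambda c: c["mean"])
print(f"event mean = {longest['mean']}, event variance = {longest['var']}")

# The discarded chain's spread is large enough that it will often finish
# later than the retained mean, which the merge rule cannot see.
z = (longest["mean"] - chains["b-e"]["mean"]) / math.sqrt(chains["b-e"]["var"])
p_exceeds = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
print(f"P(chain b-e finishes after week {longest['mean']:.0f}) = {p_exceeds:.2f}")  # ~0.43
```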
3.3.5 "Computation of 'Expected Times' for Events"

The next stage of the PERT methodology continues the tabular treatment of events and activities, calculating the "expected time" (TE) of occurrence for each event. This expected time is expressed in calendar time and is found within columns two and three of Figure 3-12. Starting at the bottom of the table with the "time now"20 event, the PERT engineers provided the following methodology:

"Starting at the time 'now' (bottom of the list) examine all the activities leading from this event and choose the one with the longest expected time. List this expected time and its associated variance and then proceed forward into the network (up the page...) adding elapsed times to expected times established for previous events." (Malcolm et al. 1959)

20 The terms "Time Now" and "X-Now" are used by the authors to describe the point in time from which the PERT analysis is conducted. This terminology can be considered equivalent to the terms "Data Date" or "Status Date." These latter terms are found in contemporaneous versions of scheduling software manufactured by Primavera Systems, Inc. and the Microsoft Corporation.

Figure 3-12 "Outputs from Analysis" (Malcolm et al. 1959)

Once completed, this exercise populates the second and third columns with an event's mean expected time of occurrence and its variance. The process of moving "up" in the table is equivalent to a "forward pass" through the network and produces what could also be described as the date of "early occurrence" for specific events.

3.3.6 "Computation of the 'Latest Time' for Events"

Next, the equivalent of a "backward pass" through the network is performed, moving top-to-bottom within columns four and five of the table within Figure 3-12 to identify the latest time (TL) that each event can occur without affecting the expected time of the latest occurring event (Event 50 in the authors' example).

3.3.7 "Computation of 'Slack' in the System"

With both TE and TL, a calculation of "slack"21 is made for each event and entered within the sixth column of Figure 3-12. "Slack can be taken as a measure of scheduling flexibility that is present in a flow plan, and the slack for an event also represents the time interval in which it might reasonably be scheduled." (Malcolm et al. 1959) Slack for each event is computed by subtracting the time of early occurrence (TE)22 from the time of latest possible occurrence (TL) and is given by the following formula:

Slack = TL - TE

The concept of slack was illustrated by the authors in Figure 3-13, wherein Event 33 may occur as early as week 61 (TE = 61) or as late as week 63 (TL = 63).

Figure 3-13 "Determination of slack by calculating TL" (Malcolm et al. 1959)

21 The term "Slack" is introduced by the PERT engineers within this article and remains in use to this day. It is fundamental to both the PERT and Critical Path Method methodologies. "Float" is perhaps a more common term used within today's industries.

22 The term "Expected" differs from the terminology used by the critical path method. In CPM the word "Early" is used. The mathematical calculations, notwithstanding this difference in terminology, are otherwise identical.
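A compact sketch of the forward pass, backward pass and slack computations of Sections 3.3.5 through 3.3.7 follows, reusing the expected times from the hypothetical flow plan sketched earlier. It mirrors the logic of the authors' table rather than any code of the period, and it flags the zero-slack events in anticipation of the next subsection.

```python
# Expected elapsed times te (weeks) from the earlier hypothetical flow plan.
edges = {
    (59, 56): 5.33, (59, 55): 3.0,
    (56, 53): 7.0,  (55, 53): 2.0,
    (53, 50): 8.0,
}
events = {e for pair in edges for e in pair}
preds = {j: [i for (i, k) in edges if k == j] for j in events}
succs = {i: [j for (k, j) in edges if k == i] for i in events}

# Forward pass: TE(event) = greatest cumulative expected time over all
# incoming paths. Descending event numbers happen to give a topological
# order here, so a single sweep suffices.
TE = {e: 0.0 for e in events if not preds[e]}   # the "time now" event(s)
for e in sorted(events, reverse=True):
    if preds[e]:
        TE[e] = max(TE[i] + edges[(i, e)] for i in preds[e])

# Backward pass: TL(event) = latest time that does not delay the final event.
final = next(e for e in events if not succs[e])
TL = {final: TE[final]}
for e in sorted(events):
    if succs[e]:
        TL[e] = min(TL[j] - edges[(e, j)] for j in succs[e])

# Slack = TL - TE; zero-slack events form the critical path.
for e in sorted(events, reverse=True):
    slack = TL[e] - TE[e]
    flag = "  <- zero slack (critical)" if abs(slack) < 1e-9 else ""
    print(f"event {e}: TE = {TE[e]:5.2f}  TL = {TL[e]:5.2f}  slack = {slack:5.2f}{flag}")
```

On this toy network the zero-slack events are 59, 56, 53 and 50, the direct analogue of the chain the authors mark with an "X" in Figure 3-14.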
3.3.8 "Identifying the Network's Critical Path"

With the computation of slack it is possible to connect the events and activities that have identical values for TE and TL, and thereby a slack value of zero. This path was dubbed the "critical path" and is introduced by the authors; it was perhaps the greatest contribution of PERT. Malcolm et al. describe this concept as follows:

"...for some events, a zero slack condition exists. This indicates that the expected and latest times for these events are identical. If the zero-slack events are joined together, they will form a path that will extend from the present to the final event. This path can be looked upon as 'the critical path.' Should any event on the critical path slip beyond its expected date of accomplishment, then the final event can be expected to slip a similar amount." (Malcolm et al. 1959)

The "critical path" for the authors' example flows through events 59, 56, 53, 51 and 50, which are noted with an "X" within Figure 3-14.

Figure 3-14 "Critical Path in System Flow Plan" (Malcolm et al. 1959)

While effectively supporting the PERT methodology, the tabular approach presented by the authors appears to complicate the calculation process unnecessarily. A manual calculation using the prescribed tabular approach to assemble the information found within Figures 3-11 and 3-12 was conducted by the student and took approximately one hour. When the same calculation was performed by hand using a network flow plan to calculate and record this same information, the solution was completed in less than ten minutes. The latter approach using the flow plan is consistent with the manual (handwritten) "forward pass" and "backward pass" exercises that are part of most introductions to the critical path method. It is not clear whether the tabular approach was simply used as a means to present the methodology, or whether it was in fact employed by the Polaris engineers. Perhaps the tabular approach reflects the computer limitations of the time; the PERT reports produced by the computers were likely incapable of producing anything but tabular printouts such as the one for the Polaris's ballistic shell shown in Figure 3-15.

Figure 3-15 "Event Identification File" from the Polaris PERT Schedule (Malcolm et al. 1959)

In any event, this point is minor and speaks only to human machinations related to a far broader concept.

3.3.9 "Probability of Meeting an Existent Schedule"

Finally, the authors use the completed "study" embodied within the previous sections to evaluate the probability that a "pre-existing schedule" can be achieved for specific events. The term "pre-existing schedule" is perhaps too restrictive to describe what can now be performed with the available information. A more appropriate expression might be the modern day term "target schedule," which describes a second schedule against which the current schedule is compared. This "target schedule" could be a scenario of some sort or another and would not have to be "pre-existing." This is a minor point. What the authors are describing is how to make a probabilistic determination of future performance against a target baseline, based upon performance to date.

In the authors' example, Event 50 is evaluated probabilistically. Column seven within Figure 3-12 contains the expected occurrence dates of each event from the pre-existing schedule (TS). Event 50 was to have occurred in week 82 (TS = 82) according to the pre-existing schedule, but the results of the "current" PERT analysis indicate that it will occur in week 92 (TE = 92). The PERT methodology now assesses the probability of meeting the original "target" of event 50 occurring in week 82 in light of performance to date, which could be described as a ten week slippage against the pre-existing schedule. The authors describe this process:

"Utilizing the central-limit theorem, it may be assumed that the probability distribution of time for accomplishing an event can be closely approximated with the normal probability density."
The solution provides that there is a 5.26% probability that Event 50 will occur on or before Week 82. This calculation is performed for each event within the network and is provided by Malcolm et al. within the rightmost column of their table within Figure 3-12.

3.4 Chapter Summary

This chapter provided a detailed discussion of the U. S. Navy's PERT application prepared by the Special Projects Office, Booz Allen Hamilton and the Lockheed Corporation. It is believed to represent the first time-based network and the first use of the term "critical path."

CHAPTER 4 A Paternity Dispute Over Network Scheduling

4.1 Chapter Overview

James E. Kelley, Jr. and Morgan R. Walker's presentation at the Joint IRE-AIEE-ACM Computer Conference held in Boston, Massachusetts on December 1-3, 1959 is recognized in most contemporary accounts as the public advent of the network scheduling technique known as the "critical path method." Herein Kelley and Walker, who had both just established a new management consulting firm called Mauchly Associates, asserted that they had invented the concept of network scheduling while working on construction projects of the DuPont Corporation between 1956 and 1959. At the time these two men were employees of Sperry Rand and DuPont, respectively. Because the U. S. Navy had published the details of the similar PERT network scheduling methodology only eight months before this December 1959 conference, Kelley and Walker's presentation carefully asserted that CPM was indeed different from PERT and had been developed "in parallel." A detailed review and analysis of the Navy's original PERT publication authored by Malcolm et al. and of corporate records belonging to the DuPont Corporation indicates that Kelley and Walker may have overstated their contribution. These documents indicate: (1) that the term "critical path," the very term used by Messrs. Kelley and Walker to describe their critical path methodology, originated from within the U. S. Navy's Polaris missile program and was not used to describe any of the workings of the Kelley-Walker team prior to 1959; and (2) that the Kelley-Walker presentation in December 1959 represented the combination of the Navy PERT network and the DuPont cash curve methodologies into a single methodology described by the presenters as the "critical path method." To date these observations and distinctions have not been made by either the scientific community or the project management industry, which continue to credit Kelley and Walker with the introduction of the term "critical path" and split credit with the Navy for the network "Flow Plan" representation.
One of the authors, James E. Kelley, Jr., continued to publicly assert his claim of invention as recently as 2003. (Kelley 2003) This chapter provides the basis for the thesis that the scientists of the Polaris program, whose ranks included at least James Kelley's and John W. Mauchly's employer, Sperry Rand, if not the two men themselves, are the originators of the network scheduling methodology that is now called the Critical Path Method. Beginning with World War Two, the chapter provides an overview of the development of the first large scale computers necessary to support the PERT and CPM calculations, along with a small number of the people and organizations involved in those inventions. The project and program management methodologies employed within the Polaris program and the DuPont construction projects are also discussed before illustrating how modern industry credits Kelley and Walker with the invention of network scheduling methodologies.

4.2 The Development of Computer Technology

Harvard University was involved with the development and operation of mainframe computers during World War Two. The Navy "Mark I," the world's first large scale digital computer, was constructed in the basement of Harvard University's Cruft Laboratory in 1944 under the U. S. Navy's Bureau of Ordnance Computation Project. The Navy Computation Project was overseen by Harvard Associate Professor Howard Aiken, who was also a Commander in the Naval Reserve. Harvard's computer work would provide the "complex calculations necessary to accurately aim new Navy guns" during the latter stages of World War II. (Billings 1989) The Mark I would see applications to missile guidance and the Manhattan Project. Grace Hopper, a Navy Lieutenant and one of eight overworked Mark I operators at Harvard, provided a sense of the operational tempo and importance of the work: "There was a rush on everything, and we didn't realize what was really happening... All of a sudden we had self-propelled rockets, and we had to compute where they were going and what they were going to do. The development of the atomic bomb also required a tremendous amount of computation, as did acoustic and magnetic mines." (Billings 1989)

Joining Harvard in the race to develop computers were academic institutions that included the Massachusetts Institute of Technology and Princeton University. The University of Pennsylvania's Moore School of Electrical Engineering was also involved, conducting several highly secretive wartime projects for the U. S. Army and U. S. Navy involving computer development. Much of the Moore School work was related to ballistics and the development of automated solutions for Army artillery and naval gunnery systems. Sometime in 1942, shortly after the U. S. entry into World War II, John W. Mauchly accepted a position as an Adjunct Professor at the Moore School and began teaching and working on some of these classified military ballistics projects. At the time, Mauchly was a member of the faculty of Ursinus College near Philadelphia, having obtained a Doctor of Philosophy in Physics from The Johns Hopkins University in 1932. (Goldschmidt and Akera 2008)

One of the Moore School efforts was Project PX, a classified Army project for the development of a mainframe computing system that would later become known as ENIAC. Although Mauchly was not afforded the opportunity to be one of the university's researchers on this project, he was involved, at least peripherally, as a consultant to the other Moore School researchers.
The involvement of Princeton University's John von Neumann is credited with enhancing the ENIAC to hold a stored program, the first computer to do so. (Goldschmidt and Akera 2008) Although interaction with von Neumann was deliberately limited, Army Lieutenant Herman Goldstine distributed a paper describing the content of von Neumann's design. It is believed that Mauchly received a copy of these notes and was also able to participate in the follow-on work on Project PX, which reflected von Neumann's approach, contributing to his understanding of the ENIAC's operation. (Goldschmidt and Akera 2008) During his time at the Moore School, Mauchly befriended Presper Eckert, a key contributor to the success of Project PX and a teaching assistant for a course in which Mauchly was a student.

On February 14, 1946, the ENIAC was unveiled to the public, some six months after the end of the war. At approximately the same time, Mauchly and Eckert requested and received permission from the University of Pennsylvania to file a patent for the ENIAC in their own names. (Goldschmidt and Akera 2008) The patent was filed successfully, but shortly thereafter the university required that Mauchly and Eckert relinquish their rights to the ENIAC patent to the institution. Mauchly and Eckert refused and resigned from the university effective March 31, 1946. For Mauchly, this was not the first controversy over intellectual property. Iowa State College Professor John V. Atanasoff had developed an electronic computing device and allowed Mauchly to visit him in Iowa in 1941. During this visit Atanasoff showed Mauchly his work towards a computing device, and the content of this event influenced Mauchly considerably in his follow-on work on the ENIAC and UNIVAC. (Goldschmidt and Akera 2008)

Figure 4-1 The UNIVAC Computer System (Goldschmidt and Akera 2008)

Although the ENIAC patent would ultimately be invalidated by the U. S. courts in 1973, Mauchly and Eckert were able to use it to successfully develop and market the UNIVAC I (Universal Automatic Computer) under the newly formed Eckert-Mauchly Corporation between 1946 and 1950. (Goldschmidt and Akera 2008) With initial successes that included contracts with the U. S. Census Bureau and the Columbia Broadcasting System, the Eckert-Mauchly Corporation was able to attract significant attention and promise. In 1949, Grace Hopper left the Navy's active duty service to become an employee of the Eckert-Mauchly Corporation. She would remain in the U. S. Naval Reserve, where she was an active member of the Navy's computer technology program for the next 37 years. For reasons that included the death of Eckert-Mauchly's principal financier and worsening, uncooperative relationships with both the scientific and military communities, the firm was sold to the Remington Rand Corporation in 1950. (Goldschmidt and Akera 2008; Billings 1989)

Figure 4-2 Advertisement for UNIVAC Computer (Goldschmidt and Akera 2008)

A synopsis of the professional career of Mauchly, opining on his professional accomplishments and his relationship with the scientists of his period, is provided by the Special Collections of the University of Pennsylvania: "In designing a general purpose computer, Mauchly had built a machine that inherently served more applications than he could possibly envision. In the wake of World War II, the digital electronic computer took on a military significance that an individual scientist like Mauchly could not be trusted to oversee.
Postwar planning for computer development fell to scientific advisors and military strategists who dealt with such technologies as the hydrogen bomb, supersonic combat aircraft, anti-aircraft missiles, and the nation's strategic air defense system. While Mauchly continued to try to advise the Univac Division of Remington Rand on the various applications of computer systems, the larger marketing and development staff of the corporation supplanted the usefulness of his knowledge." (Goldschmidt and Akera 2008)

4.3 The Work of Sperry Rand on Behalf of the U. S. Department of Defense

In 1955, five years after the sale of Eckert-Mauchly to Remington Rand, Remington Rand merged with the Sperry Corporation to become Sperry Rand. (Wikipedia 2008) In addition to Grace Hopper, who had remained with Remington Rand following the 1950 sale of Eckert-Mauchly, Sperry Rand executives included retired General Douglas MacArthur, the Supreme Allied Commander in the Pacific during World War II, and Major General Leslie M. Groves, Chief of the Manhattan Engineer District (i.e., the "Manhattan Project"). With these three personnel alone, combined with the firm's immediate involvement in major military programs, it is plausible that any innovative management practices found on the major programs of World War II could find their way into the work of Sperry Rand. With Groves on board, it is likewise plausible that innovative management practices from the earlier Manhattan Project could find their way into the work of Sperry Rand employees, provided of course that the information was not classified or otherwise inappropriate for common corporate use. This was no different for any other contractor or scientific body working on these projects and programs.

The deep connections between the U. S. Government and Sperry Rand are important in this conversation for two reasons. First, that a management system such as CPM could have been developed "independently" by an individual such as Kelley within large corporations immersed in massive federal workloads is indeed questionable. Second, it is likely that Sperry Rand, as an employer of many former participants in these government programs or their military sponsors, would have had the ability to tap into the methodologies that had been worked out by the U. S. Government on previous or ongoing projects and programs. The Polaris program alone involved over 250 contractors and 9,000 subcontractors working in various capacities over the years. (O'Brien and Plotnick 1999) Figure 4-3 provides a breakdown of the major contractors on the FBM program. The list includes the major defense contractors Westinghouse, General Electric, RCA, Lockheed and Aerojet General, Naval laboratories, and several academic institutions including the Massachusetts Institute of Technology and the Johns Hopkins University Applied Physics Laboratory. Perhaps most significantly, Sperry Rand is also one of the major contractors, responsible for "Management, Coordination and System Design" within the Navigation Branch of the Polaris project.

Figure 4-3 Major Contractors Within the U. S. Navy's Polaris Program (Sapolsky 1972)

It is difficult to believe that Sperry Rand personnel on the Polaris program were unaware of the PERT methodology. Some of their "competent engineers" were quite likely providing Fazar's staff with time estimates for the PERT network.
Given Mauchly's likely professional stature and interests within Sperry Rand, it is doubtful that he could or would have insulated himself from the techniques being employed within the large defense programs in which his firm participated, particularly something as innovative as PERT. It is also clear that Mauchly was "distinctly unhappy" with his role and position within the Remington Rand entity by 1952. See Appendix E. Mauchly's former employee, Grace Hopper, remained heavily involved in Navy computer work, both as an officer in the Naval Reserve during the 1950s and while employed with Sperry Rand. It is worth noting that the NORC computer used by the PERT team at the Naval Proving Ground, Dahlgren, Virginia was likely built and operated by Grace Hopper during either her work with Sperry Rand or her drill periods as a member of the Naval Reserve.

4.4 The Declassification of the Program Evaluation Review Technique

Admiral Raborn's decision to publicize the PERT methodology within the April 1959 edition of Operations Research is significant with regard to the claims of the Kelley-Walker presentation for two reasons. First, with its publication the intellectual content of the Polaris PERT could be harvested by Messrs. Kelley and Walker, perhaps most notably the term "critical path," but also the intuitive network illustrations and the general approach it provides. Second, if Mauchly, Kelley and/or Walker had become aware of the then-classified PERT methodology during Sperry Rand's work on Polaris between 1955 and 1959, but were prevented from divulging the details due to security requirements, they could now speak of these scheduling techniques without fear of divulging classified information to the general public. This latter point is significant because it both explains the timing of the Kelley-Walker publication and, even if one concedes that their methods were "developed in parallel," suggests that the authors were at least cognizant of an intellectual proximity between their work and the methods within the Polaris program.

4.5 Other Efforts Towards a Network Based Scheduling Method

James Fourre asserted within his 1968 article "Critical Path Scheduling: A Practical Appraisal of PERT" that PERT "was basically an outgrowth of network theory and process flow charts which have been used in industry in various forms for many years." (Fourre 1968) Fourre even refers to Henry L. Gantt's scheduling charts as "networks," but does not elaborate as to why he chose this word. Fourre does, however, describe something resembling forward and backward passes: "Using the Gantt technique in the 'forward direction,' we work from left to right, plotting activities as they must occur in time relative to other activities, and establishing a completion date for the job... Another method in the Gantt technique is to schedule in the 'backward direction,' starting with the required completion date and working backward or to the left, establishing the required dates of events to meet the schedule." (Fourre 1968) While there is no evidence to suggest that Gantt performed these operations with his charts, a "competent" engineer thoroughly familiar with his work and working with a relatively small number of bars might have done so subconsciously, as an innate process.

The 1977 work entitled Critical Path Analysis by Douglas W. Lang describes a phenomenon common to multiple historical accounts of the origins of PERT and CPM, that of the "parallel development"
of PERT-like planning techniques for time management: "the late '50s saw significant advances in planning techniques by teams working in the USA and Europe. All the teams were engaged in devising scheduling systems to enable projects to be completed in less time than hitherto, using the same or fewer resources...(u)nfortunately all the teams were working in parallel and hence each technique developed had its own characteristics." (Lang 1977) There are indeed multiple references to scientists and private contractors developing network based analyses in the mid-to-late 1950s for the purpose of optimizing schedule, cost and resource applications.

Discussions of invention, or at least of the partial credit due for the invention of network based scheduling methodologies, are not limited to the United States. In Planning and Control in Management: The German RPS System, Walter and Rainer Schleip describe their RPS System [Footnote 23] as a network scheduling theory based upon the theory of regulating circuits, reported to have been used by several companies prior to 1972. The publication is not clear as to how far prior to 1972 this methodology was established but, while making no overt claims of invention, does suggest that RPS was developed independently from CPM and PERT. The Metra-Potential Method (MPM), a network notation of dots and circles, is also purported to have been developed independently by the French Metra Group. (Schleip 1972)

[Footnote 23] The authors provide that RPS represents "Regeltechnischen Planung und Steuerung." Translated into English this is "Planning and Control Techniques for Management." (Schleip 1972)

Wernher von Braun, who in 1968 was head of NASA's Marshall Space Flight Center in Huntsville, Alabama, provides an interesting insight into this topic as one whose work apparently did not embrace network scheduling, or even systems engineering, until very late in his career, despite his team's work on what most might consider some of the most complicated and coordinated works of science of the twentieth century. According to Johnson: "by the summer of 1968, von Braun recognized that he needed to strengthen system engineering at MSFC [Footnote 24]... Von Braun explained to (Professor Philip) Tompkins that (those) who had been trained in electrical engineering thought more naturally in terms of a 'nervous system' than he, who thought of rockets as machines... Why did NASA's most experienced group of engineers take so long to embrace system engineering? Three factors contributed: the almost exclusive use of in-house capability for rocket development and testing, the extraordinary continuity of von Braun's team, and the continuity of the team's R&D project." (Johnson 2002)

[Footnote 24] Marshall Space Flight Center, Huntsville, Alabama.

Perhaps von Braun's approach, which was obviously sound, prompts the question of whether network scheduling is truly an invention, or is more accurately described as a modern-day manifestation of an ancient and innate act of prioritization and planning. The advent of the computer, perhaps, simply allowed this type of thought to occur on a larger scale.

4.6 The "Extra Cash Value" of Network Scheduling Methods

Figure 4-4 Photograph of Mauchly with "SkedFlo, Model MCX-30" (Goldschmidt and Akera 2008)

Harvey M. Sapolsky's The Polaris System Development: Bureaucratic and Programmatic Success in Government describes a "simple beginning" for PERT within the U. S. Navy's Special Projects Office.
The publication of the PERT article in 1959 stemmed from Gordon Pehrson's desire to formalize, and Admiral Raborn's desire to share, some of the management techniques that were in use within the SPO's Fleet Ballistic Missile Program. Follow-on publicity surrounding the SPO's publication of the PERT technique appears to have been intense, as were, perhaps predictably, claims of prior ownership of the invention. "With the actual birth of the PERT technique and its instant rise to fame, however, an acrimonious dispute arose over parentage. Given the size of the Polaris program, it is not surprising that many people would be in some way involved in the development and application of a management technique that was in high demand. Paternity, then, had a particular extra cash value that caused historical accuracy to be easily sacrificed in accounts of PERT's origins." (Sapolsky 1972)

The Polaris team's article provided the following point: "Since the implementation of PERT by the Special Projects Office in 1958, scores of organizations have developed an interest in PERT, several are studying the feasibility of applying PERT techniques within their own operations, and the system is already in operation in a number of industrial concerns and governmental agencies." (Malcolm et al. 1959)

Is it possible that, having successfully applied the DuPont cash curve methodology on several projects between 1957 and 1959, and with the basic concepts of the Navy PERT system now declassified, Mauchly, Kelley and Walker were able to integrate the concepts and terminology of both systems into a single presentation? It was during the eight-month period between the release of the April 1959 issue of Operations Research and the December 1959 Joint Computer Conference that Mauchly Associates Incorporated was founded and all three of these men left either Sperry Rand or DuPont for this new firm. The circumstances of their departures from their former employers have not been determined. This chapter of Mauchly's career, i.e., the claim to invention, the publication of the Kelley-Walker article and the establishment of a new business, is consistent with the behavioral patterns Mauchly exhibited while in the midst of world-renowned scientists on Project PX and with Atanasoff while affiliated with the Moore School. Grace Hopper would not follow John Mauchly this time, electing to remain with Sperry Rand until her private sector retirement in 1971.

4.7 The Kelley-Walker "Critical-Path Planning and Scheduling"

When James Kelley and Morgan Walker presented "Critical-Path Planning and Scheduling" at the Eastern Joint Computer Conference, December 1-3, 1959 in Boston, Massachusetts, it was the first published account of a methodology with the term "critical path" in the title. (Kelley and Walker 1959) But rather than presenting a new methodology, as Kelley and Walker suggest, the non-refereed paper published in the conference proceedings employed the basic concepts of cost minimization and time management found within the DuPont cash curves and the network flow plan found within the Polaris program. Perhaps most significant is Kelley and Walker's adoption of terminology found within the Navy's PERT paper, including the term "critical path," which had not appeared in any of Kelley and Walker's work prior to April 1959. While Kelley would assert that the term was indeed his own, ultimately he would credit the Polaris program as the original author.
(Sapolsky 1972) This capitulation has not found its way into many of today's publications, which still credit Kelley and Walker with the creation of the term "critical path." For John W. Mauchly and James E. Kelley, Jr., early assertions ranged from the claim that the Navy had appropriated the term "critical path" during a courtesy review of Kelley's work to the assertion that the critical path method was developed before, or in parallel with, PERT without any knowledge of the SPO's efforts. Given the presumably wide dissemination of PERT material within the Polaris team, of which Kelley's Sperry Rand was a principal contractor, this explanation is difficult to reconcile. The following subsections provide a comparison between Kelley-Walker CPM and the PERT article of Malcolm et al.

4.7.1 "1. Project Structure"

Kelley-Walker provide a network illustration of work activities consistent with that embodied within the PERT methodology: "Fundamental to the Critical-Path Method is the basic representation of a project. It is characteristic of all projects that all work must be performed in some well defined order...(t)hese relations of order can be drawn graphically. Each job in the project is represented by an arrow...(t)he result is a topological representation of a project. Fig. 1 typifies the graphical form of a project." (Kelley and Walker 1959) Kelley's Figure 1 and the Polaris System Flow Plan illustration provided by Malcolm et al. are reproduced as Figures 4-5 and 4-6.

Figure 4-5 "Fig. 1 - Typical project diagram" (Kelley and Walker 1959)

Figure 4-6 "Fig. 1. System flow plan." (Malcolm et al. 1959)

Malcolm et al. provide similar discussions within their presentation. Per Malcolm et al., the system flow plan is the basic platform upon which the PERT analysis is based, and there must be a "precise knowledge of the sequencing required or planned in the performance of activities..." (Malcolm et al. 1959) Malcolm's article also describes: "An 'event,' depicted by circled numbers...is defined as a distinguishable, unambiguous point in time that coincides with the beginning and/or end of a specific task or activity in the R and D process." There is very little, if any, substantive difference between the discussions of network topology and arrangement within these two articles, other than noting that Kelley and Walker already seem more focused on tasks or "jobs" than on "events."

4.7.2 "2. Calendar Limits on Activities"

Kelley and Walker then describe the next step as putting the plan "on a timetable to obtain a schedule. In order to schedule a project, it is necessary to assign elapsed time durations to each job." Mathematical computations are then performed for each task, producing an "Earliest Start Time," an "Earliest Completion Time," a "Latest Start Time" and a "Maximum Time Available" for each activity. Where the "maximum time available for a job equals its duration the job is called critical. A delay in a critical job will cause a comparable delay in the project completion time...If a project does contain critical jobs, then it also contains at least one contiguous path of critical jobs through the project diagram from origin to terminus. Such a path is called a critical-path." (Kelley and Walker 1959) These concepts are not substantively different from those laid out by Malcolm et al.: "One can select the 'critical path' of those activities that can not be delayed without jeopardy to the entire program." (Malcolm et al. 1959)
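Kelley and Walker's four quantities fall out of the same two passes used for the PERT events, with the bookkeeping shifted from events to jobs. The sketch below runs the computation over an invented set of jobs (the names, durations and precedence relations are hypothetical), flagging a job as critical when its maximum time available equals its duration.

# Hypothetical jobs: name -> (duration, list of predecessor jobs).
# Dictionary insertion order happens to be a valid topological order here.
jobs = {
    "a": (3, []), "b": (4, []), "c": (4, ["a"]),
    "d": (2, ["a", "b"]), "e": (6, ["c", "d"]),
}

# Forward pass: earliest start (ES) and earliest completion (EC) per job.
ES, EC = {}, {}
for j, (dur, preds) in jobs.items():
    ES[j] = max((EC[p] for p in preds), default=0)
    EC[j] = ES[j] + dur

project_end = max(EC.values())

# Backward pass: latest completion (LC) and latest start (LS) per job.
LC, LS = {}, {}
for j in reversed(list(jobs)):
    succs = [s for s, (_, ps) in jobs.items() if j in ps]
    LC[j] = min((LS[s] for s in succs), default=project_end)
    LS[j] = LC[j] - jobs[j][0]

for j, (dur, _) in jobs.items():
    max_available = LC[j] - ES[j]      # Kelley-Walker "maximum time available"
    critical = max_available == dur    # no room to float: the job is critical
    print(j, ES[j], EC[j], LS[j], LC[j], "critical" if critical else "")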
Perhaps the most significant comparison between the Kelley-Walker method and Polaris PERT is provided by the data within the Polaris tabular solution. Ignoring the "variance of elapsed time estimates" columns in Polaris PERT's "Figure 5," it is fair to note that the Kelley-Walker methodology does not extend beyond the breadth of the Polaris methodology. See Figure 4-7. If one ignores the two columns titled "Variance σte²" and considers the Polaris network simply as a set of events and activities whose durations are defined by a single number (the Mean Elapsed Time Estimate, te), the Kelley-Walker work does not represent a furtherance of these concepts.

Figure 4-7 "Fig. 5. List of Sequenced Events" (Malcolm et al. 1959)
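The reduction can be made concrete with the standard PERT estimating formulas. Collapsing the three time estimates to a single value (a = m = b) drives the variance to zero and leaves exactly the single-number durations of the Kelley-Walker network; the estimates below are invented for illustration.

# PERT's three-estimate approximations. With a = m = b they collapse to the
# single deterministic duration used by Kelley-Walker CPM.
def t_e(a, m, b):
    return (a + 4 * m + b) / 6.0      # mean elapsed time estimate

def variance(a, b):
    return ((b - a) / 6.0) ** 2       # variance of the elapsed time estimate

print(t_e(2, 5, 14), variance(2, 14))  # 6.0 weeks, variance 4.0
print(t_e(5, 5, 5), variance(5, 5))    # 5.0 weeks, variance 0.0 -> pure CPM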
4.8 The Perpetuation of Legacy

Kelley's first published work on CPM-PERT outside of conference proceedings was submitted for publication on June 1, 1960 and appeared in the May-June 1961 issue of Operations Research. (Kelley 1961) Unlike his 1959 collaboration with Walker for the Joint Computer Conference, the article does not present the invention of CPM, but rather deals with the mathematical representation of the broader concepts embodied within CPM. Herein, Kelley assumes a style consistent with that of inventor, freely using the terms "critical path" and "critical activities" and not crediting the Polaris authors or any others as the source of this terminology: "If there is a path from origin to terminus whose length equals the duration of the schedule, it is called a critical-path. All the activities in a critical-path are limiting in the sense that a delay in any one of them will cause a comparable delay in the completion of the project. Therefore, they are called critical activities." (Kelley 1961)

Kelley's 1961 article also presents the basic network scheduling concept as though it originated within CPM, despite its obvious PERT likeness: "This paper is concerned with establishing the mathematical basis of the Critical-Path Method - a new tool for planning, scheduling, and coordinating complex engineering-type projects. The essential ingredient of the technique is a mathematical model that incorporates sequence information, durations, and costs for each component of the project... The process of describing the order relations among the activities of a project is facilitated by the use of a graphical technique. Each activity in the project is denoted by an arrow that depicts the activity's existence and the direction of time-flow (time flows from the tail to the head of an arrow). The arrows are then interconnected to show the sequence relations among the activities...the nodes of the graph correspond to the events of the project." (Kelley 1961)

Kelley's article does, however, offer an algorithm for a computational approach to solving the network that is different from the PERT solution and different from an algorithm that had been published by the RAND Corporation's D. R. Fulkerson. (Bildson 1962; Fulkerson 1961) Kelley also provides some terminology contributions, describing non-critical-path activities as "floaters," meaning their total float value was greater than zero; this term is no longer in use as of 2008. His article also attempts to distinguish between CPM and PERT: PERT "is concerned primarily with monitoring progress on R and D projects...the Critical-Path Method...is concerned with the planning, scheduling, and cost-control aspects of project work." (Kelley 1961) This is a tenuous statement. Surely any basic distinction here is purely a matter of scale (i.e., between projects and programs) and cannot speak to a significant difference, whether philosophical, intellectual or mathematical, between PERT and CPM. In a 1964 article, Kelley would attempt to further establish the separation between PERT and CPM by defining different purposes for each: "...PERT was originally developed to estimate the expected occurrences of previously scheduled milestones for the Polaris program, it is now being used as a scheduling tool. The requirements of the two applications are quite different." (Kelley 1964) Overall, Kelley's distinctions are non-substantive.

The discussion of PERT and CPM as two separate concepts, and of their history, is complicated by the fact that the two methods osmosized so quickly into one methodology, now known simply as CPM. This "osmosis" is noted by Sapolsky: "(i)n practice, elements of PERT and CPM are often combined and variations of both systems are indiscriminately identified with one or the other of the acronyms." (Sapolsky 1972) This remains true to this day, but the osmosis was essentially complete by the mid-1960s.

While relatively little PERT coverage exists beyond the 1959 PERT article by Malcolm et al., exhaustive descriptions of Kelley and Walker's work are found within the works of Kelley's protégés. James O'Brien, who has authored six editions of a McGraw-Hill title that is perhaps the most widely read book on CPM in the United States, provides comments and tone consistent with the notion that the Kelley-Walker CPM was the first manifestation of a network scheduling concept: "In 1956, the E. I. DuPont de Nemours Company set up a group at its Newark, Delaware, facility to study the possible application of new management techniques to the company's engineering functions. One of the first areas considered was the planning and scheduling of construction projects... The critical path method (CPM) was developed specifically for the planning of construction... In early 1957, the Univac Applications Research Center, under the direction of Dr. John W. Mauchly, joined the effort with James E. Kelley, Jr., of Remington Rand (UNIVAC) and Morgan Walker of DuPont in direct charge at Newark. The original conceptual work was revised, and the resulting routines became the basic CPM. It is interesting that no fundamental changes in this first work have been made... The basic strength of CPM continues to be its ability to represent logical planning factors in network form. One (anonymous) review noted: 'Perhaps the most ironic aspect of the critical path method is that after you understand it, it is self-evident. Just as an algebra student can apply the rules without full appreciation of the power of the mathematical concepts, so can the individual apply CPM or its equivalent without fully appreciating the applicability of the method.'" (O'Brien 1999)

O'Brien becomes more specific when discussing the interaction between the U. S. Navy and the DuPont team, carefully establishing that the DuPont work both preceded and contributed to the Navy work. These points have not been supported by this research. "The development of CPM was enhanced when the U. S. Navy Polaris program became interested in it. The Polaris program had developed its own network system known as performance evaluation and review technique (PERT). The DuPont work is considered antecedent material for the development of PERT...
(The U. S. Navy's) search for a better management system continued throughout the fall of 1957. At that time, the Navy was cognizant of the development of CPM at DuPont." And perhaps most striking: "PERT owed much to the earlier work by Kelley and Walker. Ironically, after a courtesy review of their own work as converted into PERT, Kelley and Walker were astute enough to use the term 'critical path' as the new caption of their Kelley-Walker ('main chain') technique." (O'Brien 1999)

Effusive credits to Kelley and Walker are not limited to recent work: "The above method of depicting a project graph differs in some respects from the representation used by James E. Kelley, Jr. and Morgan R. Walker, who, perhaps more than anyone alse (sic), were responsible for the initial development of critical-path scheduling. (For an interesting account of its early history, see their paper, 'Critical-Path Planning and Scheduling,' reported in Proceedings of the Eastern Joint Computer Conference, Boston, December 1-3, 1959.) In the widely used Kelley-Walker form..." (Muth and Thompson 1963)

O'Brien also notes at least one conversation between Kelley and/or Walker and the PERT team, going on to suggest that this interaction was how the Navy was able to obtain the term "critical path." Notwithstanding that this research failed to support these statements, and that Kelley himself would acknowledge Polaris as the creator of the critical path terminology, O'Brien's accounting has other factual inaccuracies. His reference to the Special Projects Office being established in the fall of 1957 is off by two years; it was established in the fall of 1955. Sapolsky's history cites a Mauchly Associates document from 1962 titled "History of CPM and Related Systems," authored by James E. Kelley, Jr., which acknowledges that "(t)he very term CPM - Critical Path Method - it turns out, was borrowed in desperation from the better-known PERT system." (Sapolsky 1972) The 1962 Kelley document has not been located as of this date.

4.9 Chapter Summary

Today, none of the corporations that worked on the major weapons systems or management studies of the 1950s claims any connection to the invention of PERT or what is now referred to as the Critical Path Method. This includes the firms that were perhaps in closest proximity to Polaris and to Mauchly, Kelley and Walker: (1) Booz Allen Hamilton; (2) UNISYS (the name of Sperry Rand as of 2008); (3) DuPont; and (4) Lockheed Martin. Even the DuPont Corporation, which proudly speaks of inventions such as polyester, Lycra and Kevlar and of the over 34,000 patents that have resulted from its work, makes no claims or comments related to the invention of CPM on its main website. (E. I. du Pont de Nemours & Company 2008) Any mention of these firms in separate historical accounts of the CPM invention can be attributed to the published work of these three former employees of either Sperry Rand or DuPont (Mauchly, Kelley and Walker) and their protégés. That any of these entities developed a PERT-like approach to the time problem independently from the U. S. Navy's Fleet Ballistic Missile Program appears unsupported to this day, despite widespread publication to the contrary. Despite these inconsistencies, Kelley has done little to correct the history, as typified by a statement to Engineering News Record in 2003: "It's only 46 years since Morgan Walker and I first worked out CPM for DuPont..."
(Kelley 2003) While a word-mincing exercise would reveal this statement to be factually correct, statements such as these have only fostered the incorrect history on this subject that is held by modern industry. Yes, upon dissection, it is true that 46 years prior to 2003 Walker and Kelley first "worked out CPM for DuPont," but this is quite different from saying that theirs was the first iteration of the network scheduling methodology that would later become known as CPM. It was not. The work to which Kelley referred is the 1957-58 work for DuPont which, further, was neither called CPM nor a time management system; it was a methodology for minimizing the overall cost of a project using time and resources as the dependent variables. Mauchly, Walker, Kelley and O'Brien perpetuated the notion that CPM was an invention in and of itself, developed by two or three individuals rather than by the scholars, scientists, engineers and military professionals in and around military weapons programs since the beginning of World War II. It is obvious that this was a good business move, allowing the creation of a scheduling profession championed by management consulting firms and, later, the computer software industry.

John Mauchly's role in the development of computing technology is placed into perspective when contrasted with the career of Grace Hopper. Hopper worked in the field of computers for 44 years and would retire at the rank of Rear Admiral in 1986, having returned to active duty in the 1970s. Admiral Hopper is credited with numerous inventions including the creation of the COBOL programming language. (Billings 1989) There is no record of her ever filing a patent, receiving monetary compensation beyond salary, or claiming intellectual property for her contributions to her field. She was, however, bestowed the honor of having a U. S. naval warship named for her: interestingly, a guided missile destroyer of the class named in honor of the man who served as Chief of Naval Operations during the Polaris era, Admiral Arleigh "31-Knot" Burke.

Figure 4-8 Rear Admiral Grace Hopper: (left to right) LT Grace Hopper, Mark I Computer Operator, World War Two; U. S. S. Hopper (DDG-70) underway; Rear Admiral Grace Hopper at her retirement ceremony, August 14, 1986 (U. S. Navy Office of Naval Research 2008)

CHAPTER 5 The Development of Network Scheduling Methods, 1959-2008

5.1 Chapter Overview

This chapter provides an overview of the transition of PERT from a probabilistic network of activities and events applied only to very large government and corporate programs, to a platform available to essentially anyone having access to a personal computer and scheduling software costing approximately five hundred dollars. Several significant developments occurred during the forty-nine years following the publication of the PERT methodology in Operations Research in April 1959. PERT experienced explosive growth, but its immediate consumption by multiple federal agency users created differing platforms, standards and confusion; by the late 1960s, PERT was no longer in existence in its original format, if at all. Kelley-Walker CPM, meanwhile, became part of a specialized consulting industry catering to the construction industry and particularly to the large construction contractors. This industry was less focused on events and more on activities, which is logical in light of the contractor's role in identifying "how" the work will be executed.
This represented a very important change to the basic concept of PERT, which was more concerned with events than activities. Figures 5-1 and 5-2 illustrate the original Polaris flow plan and the later Activity-on-Arrow format that would become the norm for contractor-prepared CPM schedules up through the early 1980s.

Figure 5-1 The PERT Flow Plan of the Polaris Program (1958-1965)

Figure 5-2 CPM Arrow Diagramming Method (1959-Early 1980s)

In 1961 Stanford University's John W. Fondahl would conduct a classified study for the U. S. Navy's Bureau of Yards and Docks seeking a "non-computerized" solution to the network scheduling methodologies embodied within PERT and CPM. The resulting methodology was described as the Precedence Diagramming Method (PDM), wherein the activities were moved to the nodes and the arrows were reduced to bare precedence relationships. PDM would also be used in computer applications beginning sometime in 1964 and is today the predominant form of CPM software. Figure 5-3 shows, in PDM format, the same network described by the AOA diagram of Figure 5-2.

Figure 5-3 Precedence Diagram Method (1961-Present)

This chapter provides a discussion of these subjects within an overview of CPM development since 1959.

5.2 PERT's De-Classification, Celebration, Rise and Demise

As PERT emerged beyond the confines of its security classifications in the late 1950s, it was widely hailed as an innovative management concept. Through this openness PERT's individual merits were discussed, as were its weaknesses. By the early 1960s, revised versions of PERT would be found within the major procurement programs of the U. S. Government, particularly within the Department of Defense and the newly founded National Aeronautics and Space Administration, responsible for space exploration and, more specifically, by 1962 the U. S. mission to the moon. The adoption of PERT, however, would lead to changes and revisions to the original PERT developed under the Polaris program as these public and private entities adapted the concept to their individual needs. But PERT's growth would ultimately lead to its demise as, with each adaptation, the differences made for confusion among differing standards and methodologies. By the mid-1960s, PERT was essentially no longer existent in its original form or within federal government contracting.

5.3 The Osmosis of PERT and Kelley-Walker CPM

The Kelley-Walker CPM presentation, meanwhile, while not having as explosive an introduction as PERT, quickly became the basis of something of a niche industry. It was championed to construction contractors by management consulting firms as a means to minimize the cost of construction by establishing the most efficient work sequences and use of resources. The firms headed by John W. Mauchly and James J. O'Brien were particularly involved in this market. With the Polaris program essentially complete by the mid-1960s, PERT no longer required within federal contracting, and the advent of the non-network-based Cost/Schedule Control Systems Criteria in 1967, the Kelley-Walker CPM became the mainstream form of network based scheduling. And with it, a contractor-style, activity-centric approach became quite obvious within activity-on-arrow diagramming methods which did not, typically, provide event descriptions.
Any remaining recognition that the Kelley-Walker CPM represented nothing intellectually new beyond the methodology of Polaris PERT, to the extent this was even being discussed during the mid-1960s, appears to have been lost at this point. As of 2008 the term "CPM" is used to describe all of the intellectual content of PERT and Kelley-Walker CPM, and PERT is now considered a subset methodology within CPM.

5.4 The Precedence Diagramming Method

In 1961 the U. S. Navy Bureau of Yards and Docks (BuDocks) commissioned a study by Stanford University to devise a "non-computerized solution" to the concepts embodied within the PERT-CPM application. Headed by Professor John W. Fondahl, this effort, which would remain classified until sometime prior to 1964, sought three things: "(1) To present a non-computer method for obtaining the benefits of critical path scheduling that it (sic) is practical to apply to many of the projects encountered by the construction contractor. (2) To develop the possibilities inherent in a step-by-step, manual solution to overcome some of the shortcomings of computer programmed solutions. (3) To offer the reader an opportunity to understand the theory and the assumptions of the Critical Path Method by discussing them and presenting a complete solution to an illustrative problem."

Described in a supplemental report from 1964, the Stanford team noted several observations on the state of the industry as of 1961: "...there was very little detailed information readily available concerning the application of critical path techniques. Most articles dwelt on the benefits of these methods without providing useful working information. A few management consulting firms were offering workshops and furnishing instructional manuals to those who participated. However, they were reaching representatives from only a limited group from the industry, these mostly from larger organizations dealing with very complex projects." The Stanford report also described the difficulty of applying Polaris-PERT to the more modest efforts of the period: "The Special Projects Office, U.S. Navy, had published detailed reports concerning the applications of PERT. However, at that time, PERT offered a probabilistic approach better suited to controlling such undertakings as the Fleet Ballistic Missile Program rather than ordinary construction work." (Fondahl 1964)

Perhaps the most definitive product of this study was a revised network format that would come to be known as the "Precedence Diagramming Method." This methodology would place activities and events, to the extent there were any events, on the nodes of the network. The resulting network diagram would come to be known as "activity-on-node." Figures 5-4 and 5-5 provide a comparison of "Arrow Diagramming" and "Precedence Diagramming" for the same set of activities.

Figure 5-4 "Diagramming Methods - Arrow Diagramming" (Fondahl 1964)

Figure 5-5 "Diagramming Methods - Precedence Diagramming" (Fondahl 1964)

The basic concept is also illustrated by a handwritten PDM network reproduced within Fondahl's 1964 report; clearly this represents a feasible non-computerized solution.

Figure 5-6 "Initial Rough Network Diagram" (Fondahl 1964)

The Navy would allow release of the PDM methodology in late 1961, when it appeared in The Constructor, the monthly magazine of the construction industry trade group the Associated General Contractors of America.
Notwithstanding Fondahl's claims and the classified nature of the BuDocks report, the RAND Corporation's D. R. Fulkerson provides a discussion of the Activity-on-Arrow and a PDM-type representation of the same network in an article submitted to Management Science in June 1960. (Fulkerson 1961) The timing here would suggest that Fulkerson's work is the first published evidence of a PDM network. Fulkerson provides the following discussion: "Suppose the project consists of jobs 1, 2, 3, 4, 5 and that the only order relations are: 1 precedes 3, 4; 2 precedes 4; 3, 4 precede 5; and those implied by transitivity. The usual way of picturing this partially ordered set is shown in Fig. 1, where nodes correspond to jobs and directed arcs to the displayed order relations. Another way is shown in Fig. 2, where some of the arcs represent jobs, and the nodes may be thought of as events in time." (Fulkerson 1961)

Figure 5-7 Fulkerson's AON and AOA Network Models (Fulkerson 1961)

Interestingly, while noting the necessity of the dashed arc "not corresponding to any job," Fulkerson downplays its significance: "(t)his need cause no concern, since a dummy job can be added to the project to correspond to such an arc, and the assumption made that such fictitious jobs have zero completion time and zero cost. It is not difficult to see that allowing dummy jobs permits such a network representation for any project." (Fulkerson 1961)

Fondahl's 1964 report for BuDocks provides an accounting of industry feedback on the PDM idea, which Fondahl claims to have been using prior to 1958, particularly on topics describing the differences between arrow diagramming and precedence diagramming. Several reviews of the time describe these differences: "The principal advantage of the activity-on-node system is its simplicity. The avoidance of dummy activities as special devices eliminates most of the problems of networking, especially for beginners. The disadvantage of using activities-on-nodes as a networking system is primarily that it is non-standard. The other systems greatly overshadow it in general practice. The authors know of only one computer program for this system, as compared to more than 60 for the other systems." (Moder and Phillips 1964) "The (PDM) method described above by the present authors avoids the necessity (and complexity) of dummy jobs, is easier to program for a computer, and seems more straightforward in explanation and application." (Muth and Thompson 1963) Moder and Phillips go on to describe BuDocks' adoption of the PDM methodology within several of its construction contracts by 1964, under the renamed "circle and connecting arrow technique."

A detailed discussion of the pros and cons of arrow diagramming and precedence diagramming is provided by Glavinich: "Both AOA and AON networks are computationally equivalent. A comparison of AOA networks with AON networks for planning and scheduling construction projects is as follows: (1) AOA networks require restraints to maintain schedule logic. No restraints are required with AON. (2) AOA networks are more difficult to lay out than AON networks because each activity is defined by two nodes and restraints are required to maintain schedule logic. (3) AOA networks are more difficult to understand and present because each activity has two numbered nodes and restraints are required to maintain schedule logic. AON networks do, however, require a beginning and ending activity the sole purpose of which is to tie the schedule together. This activity is usually a milestone. AOA networks need only a common event node to tie the start and end of the schedule together... The AON format is computationally equivalent to the Activity-On-Arrow (AOA) format and was favored over AOA as the primary presentation format for the following reasons: (1) The AON format is used extensively in construction scheduling because nearly all microcomputer-based scheduling software supports this format. (2) The AON format is conceptually easier to understand for the novice because of its similarities to bar charts. Unlike the AOA format, each activity in an AON network can be uniquely identified with one number and the use of restraints is not required. (3) Examples of network calculations are more easily presented in the AON format because all data associated with an activity can be included within that activity's node." (Glavinich 2004)
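Fulkerson's five-job example also shows the difference between the two representations in data form. The sketch below (an illustrative encoding, not Fulkerson's or Fondahl's own notation) expresses the same precedence relations first activity-on-node, where arcs carry no information, and then activity-on-arrow, where a zero-duration dummy arc is needed to state the logic correctly.

# The same five-job project in both network forms (encoding is illustrative).

# Activity-on-node (PDM): jobs on nodes; arcs are bare precedence links.
aon = {
    "1": [], "2": [],          # job -> its direct predecessor jobs
    "3": ["1"],
    "4": ["1", "2"],
    "5": ["3", "4"],
}

# Activity-on-arrow: jobs on arcs between numbered events. Because job 4
# must follow both jobs 1 and 2 while job 3 follows only job 1, a
# zero-time, zero-cost "dummy" arc is required to express the logic.
aoa = [
    ("A", "B", "job 1"),
    ("A", "C", "job 2"),
    ("B", "C", "dummy"),       # lets job 4 depend on jobs 1 and 2
    ("B", "D", "job 3"),
    ("C", "D", "job 4"),
    ("D", "E", "job 5"),
]

def preds_from_aoa(arcs, job):
    """Jobs that must finish before `job` starts (transitive closure,
    following dummy arcs back through the event nodes)."""
    start = next(tail for tail, head, name in arcs if name == job)
    reach, stack = set(), [start]
    while stack:
        ev = stack.pop()
        for tail, head, name in arcs:
            if head == ev:
                if name != "dummy":
                    reach.add(name)
                stack.append(tail)
    return reach

print(preds_from_aoa(aoa, "job 4"))  # {'job 1', 'job 2'} - matches the AON form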
The Fulkerson-Fondahl PDM remained in place within the Navy construction program at the time of the student's attendance at NAVFAC's Naval School, Civil Engineer Corps Officers (CECOS) in early 1991. As part of the introductory courses for junior officers, students were instructed to develop a PDM network for a project involving the construction of a remote sentry guard shack using only 3-inch by 3-inch paper squares (for the activities), string (for the relationships), scissors, tape and a rather large classroom wall. That Fondahl's non-computerized solution would remain part of the BuDocks/NAVFAC curriculum for at least twenty-eight years speaks to the soundness of the theoretical concept and to the practicality of a method which had required, until his study, elaborate computer systems for implementation.

5.5 The Influence of Information Technology on Network Scheduling

The Polaris program established focus areas for performing the PERT calculation, including the design of an operating system for data management and the establishment of a computer program that would perform the necessary calculations. (Malcolm et al. 1959) Figure 5-8 provides a depiction of the management "operation," including the three inputs into the Naval Ordnance Research Calculator (NORC) at the Naval Proving Ground, Dahlgren, Virginia (1. Event File, 2. Elapsed Time Estimates and 3. Change Order Analysis) and its outputs. It is interesting to note that Polaris required that "(e)vents be defined unambiguously," a point lost within the "activity-centric" CPM schedules of 2008.

Figure 5-8 "PERT System in Operation" (Malcolm et al. 1959)

It is not clear if individual computer cards, which were the norm on mainframe computers by the mid-1960s, were used for data entry, but some evidence exists that some sort of "key punch" system was in use. See Figure 5-9.

Figure 5-9 "PERT data-processing flow chart" (Malcolm et al. 1959)

By the time Kelley-Walker presented CPM to industry in 1959, mainframe computers, presumably similar to the Navy's NORC, were available within industry, although limited to the larger corporate concerns. The data input for CPM-PERT systems consisted of computer cards that were stacked and "fed" into the computer. Fourre provides a basic description of the use of computer cards from the period.
"On larger jobs, when use of a computer is advisable, input to the computer is usually in the form of standard 80- or 90-column punch cards...(s)tandard programs have been developed to accept cards on an activity basis. Each card must contain two event numbers; the first number describes the start of the activity, the second its completion...(t)hese event numbers must be punched in certain preassigned columns of the card...a field is also available for the activity time. This field is wide enough to accept three time estimates for those interested in the probability function." (Fourre 1968)

Figure 5-10 An 80-Column IBM computer punch card from the early 1970s (Janis and Thompson 1972)

Figure 5-11 A Typical IBM Mainframe Computer Setup in 1972 (Janis and Thompson 1972)
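The "i-j" card convention Fourre describes is easily mimicked. The sketch below parses hypothetical fixed-column card images (the column layout is invented; actual programs preassigned their own columns) into the kind of arc list used by the network calculations sketched earlier in this work.

# Hypothetical fixed-column "cards": cols 1-4 start event, cols 5-8 end
# event, cols 9-12 activity time in weeks. Real programs preassigned
# their own column layouts; this one is invented for illustration.
cards = [
    "  10  20   4",
    "  10  30   6",
    "  20  40   7",
    "  30  40   3",
    "  40  50   5",
]

arcs = {}
for card in cards:
    i = int(card[0:4])      # start ("i") event number
    j = int(card[4:8])      # end ("j") event number
    t = int(card[8:12])     # elapsed time estimate
    arcs[(i, j)] = t

print(arcs)  # {(10, 20): 4, (10, 30): 6, (20, 40): 7, (30, 40): 3, (40, 50): 5}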
CPM schedules are required by both private and public sector entities on most construction projects of any magnitude. Beyond the original PERT concept introduced in 1958, the Fondahl-Fulkerson Precedence Diagramming Method from the early 1960s is perhaps the most significant development in the field of network scheduling. Together, these two inventions have greatly influenced the information technology that has supported them.

CHAPTER 6 The Earned Value Management System

6.1 Chapter Overview

The American National Standards Institute/EIA-748-A Earned Value Management System (EVMS) is intended to produce meaningful cost and schedule performance metrics through comparisons of actual progress and actual cost-to-date against a pre-existing baseline integrating a project or program's scope, cost and schedule management platforms. Since the creation of its predecessor, the Cost/Schedule Control Systems Criteria, in 1967 by the United States Air Force, EVMS has been recognized as a viable means for monitoring cost and schedule performance, and sees broad application, particularly on very large public sector procurements. As of 2007, formalized EVMS reporting is required for all U.S. Government "cost plus" contracts valued at or above $20 Million (USD). Despite this broad implementation by the U.S. Government alone, the formalized requirements for EVMS implementation found within these contracts provide little in the way of exacting procedures for conducting EVMS measurements. The ANSI/EIA-748-A EVMS standard is fairly considered more conceptual than prescriptive. U.S. Government contracts, instead, rely upon the contractor's own internal cost and schedule accounting procedures for the EVMS calculation. This offers the possibility for several cost and schedule phenomena that fall outside the purview of formalized EVMS requirements, and/or deliberate manipulations of these platforms by the contractor, to inhibit a project team's ability to accurately describe project performance.

This chapter discusses the Earned Value Management System and its limitations while providing discussions of cost and schedule subjects including: (1) the appreciable latitude in cost and schedule management practices that exists within EVMS requirements, (2) the concept of "Detachable Value," a term introduced to describe monetary amounts that can be earned freely and independently of supposedly constraining network logic, (3) a discussion of a work substitution methodology that masks EVMS metrics, (4) the influence of the timing of progress assessments and cost recording on EVMS metrics, (5) limitations on the meaningfulness of the "Earned Schedule" measurement, (6) how the length of the evaluation period influences EVMS metrics, (7) how early and late planned value profiles may be combined into a "banana curve" which can be used in EVMS calculations, and (8) how EVMS metrics return to a state of equilibrium in the latter project period, greatly affecting the utility of these measurements. Together, these discussions may provide a supplement to the ANSI/EIA-748-A and other EVMS standards within the practices of scheduling and costing.

6.2 The ANSI/EIA-748-A Earned Value Management System (EVMS)

The Earned Value Management System (EVMS) is a platform that measures a project's schedule and cost performance based upon the monetary value and timing of accomplished work, or contract scope, against a pre-existing Project Baseline.
Earned Value Management (EVM) is regarded by the U.S. Government as a tool that integrates the technical, cost, and schedule parameters of a contract, allowing for actual work performed to be compared against the project's baseline, thereby producing cost and schedule metrics. (U. S. Department of Defense 2006)

"From these basic variance measurements, the program manager (PM) can identify significant drivers, forecast future cost and schedule performance, and construct corrective action plans to get the program back on track. EVM therefore encompasses both performance measurement (i.e., what is the program status) and performance management (i.e., what we can do about it). EVM is program management that provides significant benefits to both the Government and the contractor." (U. S. Department of Defense 2006)

In 1967 the United States Department of Defense (DOD) established the "cost/schedule control system" (CS2) criteria as a means of providing early warnings of cost and schedule problems on major defense contracts. (U. S. General Accounting Office 1997) A 1997 report by the U.S. General Accounting Office described CS2 as follows:

"DOD's CS2 was established in 1967 as a tool to measure the value of work performed as compared to the actual costs, a concept referred to as earned value. Earned value goes beyond the two-dimensional approach of comparing budgeted costs to actuals. It attempts to compare the value of work accomplished during a given period with the work scheduled for that period. By using the value of work done as a basis for estimating the cost and time to complete, the earned value concept should alert program managers to potential problems sooner than expenditures alone can." (U. S. General Accounting Office 1997)

EVMS has since been adopted by the governments of foreign countries for contract oversight, particularly Sweden, Canada, France, Australia and New Zealand. (Fleming and Koppelman 2000) The U.S. GAO has also reported that the system has been embraced by private industry in the United States. It is not clear if the adoption of EVMS by private industry in the United States, to the extent this is so, is related to anything beyond the formal contract requirements that exist within U.S. Government contracts. The student has been unable to discover a private contractor that is using an earned value management system approach where no contract requirement exists.

In December 1996 the U.S. Secretary of Defense William Cohen approved revised EVMS criteria which had been developed and proposed by private industry. This updated EVMS platform was incorporated into the formal Department of Defense standard DODINST 5000.2R and also became the basis of the NSIA/EIA 748 which, as of July 1998, became the "official" EVMS platform for the United States Government. (Fleming and Koppelman 2000) A comparison of the original 35 CS2 criteria and the 32 EVMS criteria is provided in Appendix F. (Fleming and Koppelman 2000) The American National Standards Institute (ANSI) recognized EVMS as a formal standard in 1999. As of 2006 the Department of Defense required EVMS on all "cost or incentive contracts, subcontracts, intra-government work agreements, & other agreements valued at over $20 Million." In November 2006 EVMS became mandatory on all cost-plus U.S. Government contracts over $20,000,000.00. (U. S. Defense Acquisition University 2006)
Figure 6-1 Earned Value System Parameters (SPI = Schedule Performance Index; CPI = Cost Performance Index; SV = Schedule Variance; CV = Cost Variance)

ANSI/EIA-748-A, "Standard for Earned Value Management Systems," revised the EVMS terminology contained within the original CS2 standard; the revised terminology is as follows:

Planned Value (PV). Planned Value is the budgeted cost for the work scheduled to be completed on an activity or WBS component.

Earned Value (EV). Earned Value is the budgeted amount for the work actually completed on the schedule activity or WBS component.

Actual Cost (AC). Actual Cost is the total cost incurred in accomplishing work on the schedule activity or WBS component. This Actual Cost must correspond in definition and coverage to whatever was budgeted for the Planned Value and the Earned Value.

Cost Variance (CV). Cost Variance equals earned value (EV) minus actual cost (AC). The cost variance at the end of the project will be the difference between the budget at completion (BAC) and the actual amount spent. Cost Variance Formula: CV = EV - AC

Schedule Variance (SV). Schedule Variance equals earned value (EV) minus planned value (PV). Schedule variance will ultimately equal zero when the project is completed because all of the planned values will have been earned. Schedule Variance Formula: SV = EV - PV

Cost Performance Index (CPI). CPI equals the ratio of the earned value to the actual cost. Cost Performance Index Formula: CPI = EV/AC

Schedule Performance Index (SPI). SPI equals the ratio of the earned value to the planned value. Schedule Performance Index Formula: SPI = EV/PV (PMI 2000)

All three variables of the EVMS system, Planned Value, Earned Value and Actual Cost, can be captured within a project's Work Breakdown Structure (WBS) platform where cost and schedule information have been integrated to create a "cost loaded schedule." The cost loaded schedule is often already a formal contract requirement on U.S. Government projects, even where there is no formal EVMS requirement.
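The definitions above reduce to a few lines of arithmetic. The sketch below, a hypothetical illustration in Python rather than anything prescribed by ANSI/EIA-748-A, computes the two variances and two indices from the three underlying EVMS variables.

def evms_metrics(pv, ev, ac):
    # pv: Planned Value; ev: Earned Value; ac: Actual Cost.
    return {
        "CV": ev - ac,   # Cost Variance; positive means under budget
        "SV": ev - pv,   # Schedule Variance; positive means ahead of schedule
        "CPI": ev / ac,  # Cost Performance Index; above 1.0 means under budget
        "SPI": ev / pv,  # Schedule Performance Index; above 1.0 means ahead
    }

# Example: $40 planned, $38 earned, $42 spent to date.
print(evms_metrics(pv=40.0, ev=38.0, ac=42.0))
# {'CV': -4.0, 'SV': -2.0, 'CPI': 0.904..., 'SPI': 0.95}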
6.3 The History of Earned Value Management, 1967 to 2008

While the concepts of cost variance measurement date back to at least the works of Frederick Taylor and Lawrence Gantt, EVMS can be traced to the original PERT methodology developed by the Polaris program between 1955 and 1958. (Fleming and Koppelman 2000) By 1962 there were two types of PERT techniques employed by the U.S. Government: "PERT/Time" and "PERT/Cost." PERT/Time was essentially the continuation of the time management techniques described by Malcolm et al., whereas PERT/Cost incorporated the cost and resource variables that the Polaris team had identified in very early efforts but deliberately set aside.

"The status of a developmental program at any given time is a function of several variables...(r)esources...technical performance...and time. Ideally, we should like to evaluate a given actual schedule in terms of all three variables... To do this it is necessary to establish a criterion that integrates time, resources, and performance into meaningful utility. Further, it is necessary that the variables be measurable over all feasible ranges. It is beyond the scope of this paper to discuss the nature and difficulty in furthering such an approach. Suffice to say, that it was determined that no such criterion was available and that the data-processing problems associated with a plan of some 10,000 events would preclude its practical implementation in any case. Therefore an approach dealing only with the time variable was selected." (Malcolm et al. 1959)

The connection to PERT can be made because within the 1962 "PERT/Cost" platform was a specific management report titled "cost of work report" which captured the concepts of cost and schedule variances that are embodied within contemporaneous EVMS indices. Neither "PERT/Time" nor "PERT/Cost" would be in use within federal government procurements by the mid to late 1960s. (Fleming and Koppelman 2000)

By late 1967 the U.S. Air Force had prepared thirty-five criteria that were to be applied to contractor management platforms on major contracts and programs having a cost-plus incentive structure. These criteria were collectively termed the Cost/Schedule Control Systems Criteria (C/SCSC) and were based upon management techniques employed by the Air Force within the Minuteman ICBM program between 1965 and 1967. (Fleming and Koppelman 2000) See Appendix F. C/SCSC (aka CS2) was widely implemented by the U.S. Government during the 1970s, 80s and 90s.

6.4 U. S. Government Requirements for EVMS

As of October 2007, the U.S. Government requires EVMS on all "cost-plus" contracts greater than $20 million and discourages the use of EVMS on "firm-fixed-price" contracts of any amount. The EVMS requirements are formally established by the Federal Acquisition Regulation (FAR):

"FAR 34.201 Policy. An Earned Value Management System (EVMS) is required for major acquisitions for development, in accordance with OMB Circular A-11. The Government may also require an EVMS for other acquisitions, in accordance with agency procedures..." (U. S. General Services Administration 1999)

This clause continues, at least encouraging the contractor's use of the ANSI/EIA-748-A in response to the EVM requirement:

"If the offeror proposes to use a system that has not been determined to be in compliance with the American National Standards Institute/Electronics Industries Alliance (ANSI/EIA) Standard-748, Earned Value Management Systems, the offeror shall submit a comprehensive plan for compliance with these EVMS standards. Offerors shall not be eliminated from consideration for contract award because they do not have an EVMS that complies with these standards." (U. S. General Services Administration 1999)

The type of incentive structure dictates how much cost information is shared, and unshared, between owner and contractor. In cost-plus-fee contracts, the contracts that see far greater application of the Earned Value Management System by the U.S. Government, there is generally only one cost management platform for the project and both parties have full access to it at any point in time. In this "open book" arrangement the contractor is paid for its costs incurred "plus" a fee which is based upon some sort of pre-arranged calculation such as a percentage of cost incurred, a function of effort and/or time, a simple lump sum amount atop the project cost, or a combination thereof. In order to provide an incentive to the contractor to minimize the overall project cost to the owner, the contractor's fee might be capped at a monetary amount or actually increased in situations where the contractor has maintained the overall cost below a monetary target. This type of project arrangement is known as a "Guaranteed Maximum Price." There may also be penalties assessed (i.e. fee reductions) for cost or schedule overruns. Such "carrot and stick" fee structures are designed to make the contractor "whole" for costs incurred and pay a reasonable profit or fee, while at the same time ensuring that the contractor is indeed motivated to minimize the project's overall cost to the owner.
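As an illustration of such a fee arrangement, the sketch below computes a contractor's fee under a hypothetical cost-plus-incentive structure; the base fee rate, share ratio and cap are invented for the example and are not taken from any particular contract or regulation.

def incentive_fee(actual_cost, target_cost, base_fee_rate=0.06,
                  share_ratio=0.5, fee_cap=None):
    # Base fee as a percentage of target cost, adjusted up or down by
    # the contractor's share of any cost underrun or overrun.
    fee = base_fee_rate * target_cost + share_ratio * (target_cost - actual_cost)
    if fee_cap is not None:
        fee = min(fee, fee_cap)
    return max(fee, 0.0)  # fee reductions bottom out at zero

# An underrun ($9.5M actual against a $10M target) increases the fee;
# an overrun ($11M actual) reduces it below the base amount.
print(incentive_fee(9_500_000, 10_000_000))   # 850000.0
print(incentive_fee(11_000_000, 10_000_000))  # 100000.0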
A "firm-fixed-price" contract, or "lump sum," is a simpler arrangement where a contractor is paid a sum certain for work performed with no reporting, but for limited exceptions, of actual costs incurred. Here, the contractor must estimate the cost of the work and add a "contingency" amount for cost uncertainties. Should the final cost of the work be less than the contract price, the difference may be considered as profit to the contractor. Where the reverse is true, the contractor loses money and the contract does not allow for the reimbursement of the loss. This is the risk that is assigned to the contractor under a firm-fixed-price arrangement.

6.5 The Limitations of EVMS Management

Several aspects of the EVMS system are noted as potential weaknesses which offer the possibility of inaccurate representations of project performance. These limitations are discussed within subsections 6.5.1 through 6.5.8.

6.5.1 Non-Prescriptive Requirements for EVMS

Although a formal ANSI standard, the 32 criteria are deliberately non-prescriptive. As such it is reasonable to suggest that in the absence of discrete requirements across the five main areas, a contractor has latitude in managing its cost and schedule platforms for a project or program. While it was not the intent of the EVMS authors to dictate contractor means and methods, it is worth noting that the 1997 re-write of the EVMS standard relaxed several areas. EVMS Criterion Number 22, for example, struck key terminology related to comparisons between planned value and earned value. The standard language stricken in 1997 was as follows:

"Identify at the cost account level on a monthly basis using data from, or reconcilable with, the accounting system...(v)ariances resulting from the comparisons between the budgeted cost for work scheduled and the budgeted cost for work performed...together with the reasons for significant variances." (Fleming and Koppelman 2000)

This is a significant revision and allows for work substitutions to mask adverse performance. By not requiring EVMS calculations to be performed at a cost account level, holistic, program-level calculations are possible and provide a means to mask planned-but-unperformed work with prematurely performed work that was not scheduled to occur until a much later point in the schedule. This subject is discussed in greater detail in sub-section 6.5.3. Several other differences, less substantive but indicative of the relaxation of reporting requirements, include the requirement to provide information "as needed" by management rather than simply requiring that this information be submitted, as was the case in the CS2 standard. See ANSI EVMS criteria 23, 24, 25, 27 and 29, and Appendix F.

6.5.2 Detachable Value

Within the subject of granularity, or level of detail, of the cost and schedule platforms is the concept that even at the most detailed or "critical element" level, there exist multiple cost components that can, in certain cases, be "earned" independent of schedule network logic. This concept is introduced as "detachable value." An example would be a single schedule activity ("Activity 3") that has an assigned planned value (PV) of $10. Although the CPM schedule clearly shows that Activity 3 cannot commence until Activity 2 is complete, something that will not occur for several months, the contractor might pre-purchase material associated with Activity 3 several weeks, or even months, in advance. The value of the material, valued at $2 in this example, is therefore "detachable" from Activity 3. Figure 6-2 provides an illustration of this concept.

Figure 6-2 The Detachable Value Concept

The term "detachable value" is defined as follows:

Detachable Value: The monetary value within a single schedule activity that can be earned independent of the satisfaction of expressed predecessor schedule logic.

The theory surrounding this concept is based upon the notion that although CPM networks treat individual activities as discrete and absolute objects, with a discrete start and finish point, for the purposes of time management, the same standard cannot be applied to the cost components of those same activities. Although single activities with detachable value could be broken down into multiple activities, it is often impractical to impose excessive granularity on a project schedule platform simply to address the detachable value concern. Therefore it could be recognized that some individual work activities or project elements will have a portion of their total monetary value that is independent of predecessor network logic and can be earned earlier than the early start date provided by the conventional network calculation. Where this value is "detachable," it has an optimistic influence on EVMS variances and indices because it is "earned" atop the monetary amounts of earlier activities that have already been accomplished.
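A short sketch can make the accounting concrete. Below, the planned value of the hypothetical Activity 3 is split into a logic-bound portion and a detachable portion (the $2 of pre-purchasable material from the example above); only the detachable portion is creditable to earned value before the activity's predecessors are complete. The function and field names are illustrative only.

# Hypothetical split of Activity 3's $10 planned value into a portion
# bound to predecessor logic and a portion that is "detachable."
activity_3 = {"pv": 10.0, "detachable_pv": 2.0}

def earnable_now(activity, predecessors_complete):
    # Before predecessor logic is satisfied, only the detachable value
    # (e.g., pre-purchased material) can be earned.
    if predecessors_complete:
        return activity["pv"]
    return activity["detachable_pv"]

print(earnable_now(activity_3, predecessors_complete=False))  # 2.0
print(earnable_now(activity_3, predecessors_complete=True))   # 10.0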
6.5.3 Substitute Value

Where planned work is not performed within its planned timeframe or is performed at a slower rate of progress, two schedule performance indicators within an Earned Value Management System (EVMS) are designed to detect and quantify these events. These two indicators are Schedule Variance (SV) and Schedule Performance Index (SPI), which compare the value of work planned to date to the earned value (the value of work performed) using the budgeted values of each work item.

Schedule Variance = Earned Value - Planned Value
SV = EV - PV
Where SV > $0, the Project is Ahead of Schedule

Schedule Performance Index = Earned Value / Planned Value
SPI = EV/PV
Where SPI > 1.00, the Project is Ahead of Schedule

It is not uncommon for a project to see the accomplishment of work planned for later periods performed ahead of schedule and/or for work that was planned for a given period to not be performed. These two events are defined by the student as "premature work" and "deferred work." The monetary value of premature work will have a positive effect on the earned value to date and thereby on the earned value schedule performance indicators SV and SPI. Deferred work will have a negative effect on these same two indicators. But where these two events occur simultaneously, because of the "lost language" within EVMS criterion #22, SV and SPI will only show the net result of these two separate events. Worse, where some, or all, of the "deferred" work is on the critical path of the project, thereby representing a project schedule delay, the introduction of non-critical premature work will partially or fully mask the deferred work. Thus, where work planned for later periods is performed ahead of schedule, EVMS schedule performance indicators are compromised to some degree.
Ultimately, as the project progresses, the schedule variance and schedule performance index will eventually reflect the adverse performance unless the contractor has been able to fully mitigate the effect of the original impact(s). This might not occur for a significant period of time, and the delay limits the project team's ability to be cognizant of the situation and take proper action where necessary.

Kerzner provides five questions to be asked during the course of a conventional EVMS variance analysis (Kerzner 2003). These five questions are:

1. How much work should be done?
2. How much work is done?
3. How much did the "is done" work cost?
4. What was the total job supposed to cost?
5. What do we now expect the total job to cost?

Kerzner's conventional approach produces the values for "Planned Value," "Earned Value," "Actual Cost" and "Estimate at Completion." What neither Kerzner's questions nor conventional EVMS methodologies consider, however, is that performed work can be freely substituted into the EVMS calculations without regard to whether it was planned to be performed then or at a much later point in the project. In the case where a planned critical activity is not performed, and a non-critical activity of equal value planned for much later in the project is performed in its stead, the EVMS indicators will not detect this substitution and will thus present a more favorable indication for the period in question. In order to measure the effects of this phenomenon it is necessary to establish a methodology for measuring both deferred work and premature work and calculating EVMS schedule performance indicators accordingly. The new methodology must provide for the early detection of work "deferrals," thereby providing the project team with an earlier indication of schedule variance than is possible using conventional EVMS calculations.

Figure 6-3 represents a project that at the outset (0%) is planned to cost one hundred dollars, with the planned value spread across ten progress periods, each with a planned value of ten dollars. The figure also provides the project status as of time t4, distinguishing earned value, planned value, planned-not-earned value and earned-not-planned value.

Figure 6-3 As-Planned Schedule as of t0 versus Performance as of t4

At time t4 the project has earned forty dollars, which represents the exact value of the work that was planned to occur as of that point. But there have been two departures from the original plan. The first is that $2 worth of work items that had been planned for the t3 to t4 period was not earned. The second is that $2 of the work that has been earned as of t4 was originally planned to occur after the t4 point. The problem becomes apparent when conventional EVMS computations as of t4 yield "perfect" EVMS metrics.

Planned Value (PV) = $10 + $10 + $10 + $10 = $40
Earned Value (EV) = $10 + $10 + $10 + $8 + $2 = $40
Schedule Performance Index (SPI) = EV / PV = $40 / $40 = 1.0
Schedule Variance (SV) = EV - PV = $40 - $40 = $0
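The masking effect can be demonstrated in a few lines. The sketch below, a hypothetical illustration rather than part of any EVMS standard, computes the conventional net SV and SPI for the t4 example alongside the separated "deferred" and "premature" amounts that a more granular methodology would report.

# Value planned and earned, by work grouping, as of t4 (hypothetical).
planned = {"periods_1_to_3": 30.0, "period_4": 10.0, "later_work": 0.0}
earned  = {"periods_1_to_3": 30.0, "period_4": 8.0,  "later_work": 2.0}

pv = sum(planned.values())   # 40.0
ev = sum(earned.values())    # 40.0
deferred  = sum(max(planned[k] - earned[k], 0.0) for k in planned)  # 2.0
premature = sum(max(earned[k] - planned[k], 0.0) for k in planned)  # 2.0

# The conventional net indicators are "perfect" even though $2 of
# planned work was deferred and $2 of later work was done prematurely.
print("SV =", ev - pv, "SPI =", ev / pv)   # SV = 0.0 SPI = 1.0
print("deferred =", deferred, "premature =", premature)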
Together, the "deferred work" (i.e. the $2 planned to have been earned prior to t4 but unearned) and the prematurely performed work (i.e. the $2 planned to have been earned after t4, but now already earned), which are coincidentally of equal value in this example, perfectly mask each other, making either event undetectable. Where critical path activities have been deferred, this phenomenon allows for a more favorable or optimistic indication of cost and schedule performance within conventional EVMS calculations for the project, at least temporarily. Ultimately, the work that is not being performed will produce adverse EVMS values, but this might not occur immediately, and there may be a lag of several weeks or months before this becomes obvious. A corollary concern is that work that is not performed when scheduled stands a greater risk of cost or schedule impact, which might be attributable to price escalation, extended financing, disruptions to other work activities resulting in productivity losses, out-of-sequence work, demobilization of subcontractors and other effects.

The "work substitution" approach might provide a means to manipulate EVMS data, as the "mixing" of work types makes it impossible to distinguish adverse and/or superior performance within a specific work area. Short of exhaustive analysis, this provides a means for a contractor to "mask" adverse performance in one area with superior performance in another. These effects are virtually undetectable without extensive analytical effort.

6.5.4 The Costing Lag

EVMS calculations are influenced by the different ages of information between earned value, which is based upon an up-to-the-moment and subjective assessment of work accomplished, and actual cost, which relies upon the contractor's recording of expenses in the project ledger. Cost recording, while perhaps more objective than assigning percentages to individual activities, often lags the progress assessment by several weeks. Thus the value for Actual Cost is deflated relative to its true amount. This presents a situation at the time of the assessment where the variables necessary to compute Earned Value (Planned Value and percent complete) are fully visible but the final actual cost amount is not known. With Actual Cost being lower than its finalized amount, the values for Cost Variance and Cost Performance Index will be more optimistic than they will be once the costing exercise is complete.

Figure 6-4 The Age of Information for Earned Value vs. Actual Costs

Stated differently, earned value, which relies upon a calculation of percent complete applied against the Planned Value of individual activities at an instantaneous point in time, might be several days or weeks ahead of the "project costing" exercise. Thus Earned Value, the numerator within the formula for the Cost Performance Index (CPI = EV / AC), is likely "inflated" relative to the actual cost denominator, which does not have all costs recorded. To the extent this is so, the cost performance index will recognize performance to date but actual costs as of an earlier point in time. The CPI would be slightly higher, or more optimistic, than it actually should be. To the extent a contractor might choose to "slowly" enter actual costs, this could also be considered a candidate for EVMS manipulation. Therefore, several processes factor heavily into the validity of the EVMS calculation, including the contractor's recording of actual costs and the assessment of current progress, the timing of the progress "snapshot" and the follow-on processing of the payment application and reporting of EVMS data.
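The optimistic bias introduced by the costing lag is easily quantified. In the hypothetical sketch below, earned value reflects progress through the assessment date while a portion of actual cost has not yet posted to the ledger; the reported CPI overstates cost performance until the unrecorded costs arrive.

ev = 100.0             # earned value as of the progress assessment
ac_recorded = 90.0     # costs posted to the ledger as of the same date
ac_unrecorded = 15.0   # costs incurred but not yet recorded (the lag)

cpi_reported = ev / ac_recorded                  # about 1.11: looks under budget
cpi_final = ev / (ac_recorded + ac_unrecorded)   # about 0.95: actually over budget
print(round(cpi_reported, 2), round(cpi_final, 2))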
In light of the FAR requirement for certified payment applications it is also important to consider that these documents must be prepared, reviewed and validated by the contractor for several days prior to their formal submission to the owner. Therefore they do not represent an exact accounting of current project status, although it would seem that the schedule and cost information might not be more than just a few days old.

6.5.5 Critical Path Value

Although not formally recognized within the ANSI standard, "Earned Schedule" (ES) can be calculated as another measure of schedule performance. Corovic describes the use of the earned value vs. planned value comparison as the basis for a separate EVMS indicator for schedule performance. (Corovic 2007) Earned schedule is calculated by simply plotting both earned value and planned value over time and then measuring the horizontal distance from the earned value curve to the planned value curve. If the earned value curve is to the right of the planned value curve, then the project is behind schedule by an amount of time represented by the length of the line segment. If the curves are reversed, then the project is ahead of schedule by an amount of time calculated the same way.

Figure 6-5 Earned Schedule

But the validity of earned schedule as a means for assessing schedule performance is extremely limited. Its utility as a measure of schedule performance is strongly, if not completely, influenced by how much monetary value has been assigned or allocated to individual activities on the critical path of the project schedule. In fact, Earned Schedule is only relevant where there is a strong, positive correlation between monetary value and schedule criticality. Since earned schedule does not account for the criticality of individual activities, projects with more money assigned to the critical path will have more meaningful earned schedule calculations, but deliberately loading value onto the critical path is not a sound scheduling practice. The term "critical path value" or "critical value" may be used to describe the total monetary value of all critical path activities. These are perhaps the only activities that should be considered in evaluations of schedule performance. If earned schedule calculations are performed using only critical path activities, an "Earned Critical Schedule" indicator could be used to evaluate schedule performance in its stead.
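The earned schedule measurement itself is a simple interpolation, as the hypothetical sketch below suggests; restricting both curves to critical path activities would yield the "Earned Critical Schedule" variant proposed above. The curve values are invented for the example.

def earned_schedule(pv_curve, ev_now, t_now):
    # pv_curve: list of (time, cumulative planned value), ascending.
    # Find the time at which the planned value curve reaches the
    # value earned to date, interpolating within the period.
    for (t0, v0), (t1, v1) in zip(pv_curve, pv_curve[1:]):
        if v0 <= ev_now <= v1 and v1 > v0:
            es = t0 + (t1 - t0) * (ev_now - v0) / (v1 - v0)
            return es, es - t_now  # negative difference = behind schedule
    return None, None

pv_curve = [(0, 0.0), (1, 10.0), (2, 20.0), (3, 30.0), (4, 40.0)]
print(earned_schedule(pv_curve, ev_now=35.0, t_now=4))  # (3.5, -0.5)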
Two fictitious sample projects can be used to demonstrate this point. Projects A and B are identical in almost every sense. Each is for the construction of a highly specialized, stand-alone building being built immediately adjacent to the other, from the same set of architectural and engineering plans, in the same county and municipality, by the same general contractor and subcontractors. Each project has a separate contract and CPM schedule associated with it, the value of each project is $300,000,000.00, and the schedules for Projects A and B are the same, consisting of the same 9,000 activities. Despite the similarities of the two projects, the schedules submitted by their different project managers have different cost loading. The value of critical activities for Project A totals $50 Million, while Project B's totals $250 Million. Using the alternative terminology, Projects A and B have "critical values" of $50 Million and $250 Million, respectively. Project B, having a higher relative amount of monetary value on the critical path, could be evaluated more effectively for time performance using earned schedule, whereas Project A, where there is very little monetary value on the critical path, could not. In light of the fact that the "critical value" for most projects is about 10% of overall project cost, this provides evidence that, as a practical matter, the earned schedule indicator is possibly not a reasonable means to assess schedule performance. Kelley noted the relatively small number of activities on the critical path in his first workings on the critical path method, which speaks to this basic limitation of Earned Schedule.

"It is of interest to note that in all 'real' projects studied to date, less than 10 per cent of the activities have been critical - even for the shortest duration schedules. This fact points out the fallacy, prevalent in project work, of embarking on an 'across-the-board' crash program when expediting the project end date is required. This is probably an illustration of Pareto's principle that 'In any series of elements to be controlled, a selected small fraction, in terms of numbers of elements, always accounts for a large fraction, in terms of effect.'" (Kelley 1961)

6.5.6 The Duration of the Analytical Period

EVMS indices are based upon a comparison of the most current cost and schedule platform to a pre-existing or "baseline" platform. As such, EVMS requires that both the most current and pre-existing platforms represent: (1) an accurate representation of contract requirements; (2) a reasonable allocation of contract costs over the contract performance period; (3) an accurate or up-to-date presentation of actual costs; (4) an accurate recording of actual progress and events to date; and (5) a reasonable plan for work yet to be performed. In most EVMS methodologies, the pre-existing platform is often the original baseline schedule that was established very early in the project, or one that is updated no more frequently than on an annual basis. This approach is consistent with the U.S. Department of Defense standard for EVMS, which correctly notes that it is the comparison of schedule and cost information contained within (1) the project's most current cost/schedule update and (2) the project's cost/schedule baseline that forms the basis of the EVMS calculations at a given point in time.

"As work is performed and measured against the baseline, the corresponding budget value is 'earned'. From this earned value metric, cost and schedule variances can be determined and analyzed. From these basic variance measurements, the program manager (PM) can identify significant drivers, forecast future cost and schedule performance, and construct corrective action plans to get the program back on track." (U. S. Department of Defense 2006)

In cases where project baselines are aged several months, or longer, it is quite possible that the project's cost/schedule baseline no longer represents the contractor's intended approach to the project. Reasons for deviations from the original "baseline" could include the clarification of the contractor's intended approach to the project, changed conditions, the addition of scope by the owner, acts of god, or other events that might have revised the contractor's means and methods. It is also possible, sometimes, that the baseline cost/schedule platform was never reasonable and/or never represented the contractor's intended approach to the project.
The distinct contrast of granularity when a comparison-to-baseline approach is adopted is illustrated in Figure 6-6. Also, differences in the type of work performed are likely to be less pronounced under an update-to-update approach.

Figure 6-6 Granular Differences in the Comparison-To-Baseline Approach

To minimize the effects of aged baselines, a methodology is proposed that steps away from comparisons between the current update and the original baseline in favor of a comparison to the most recent update to the cost loaded schedule. This approach is consistent with analysis performed by the Weyerhauser Corporation in 1999, although it is not embraced by ANSI/EIA-748-A. (Chen 1989)

Figure 6-7 Earned Value to Date vs. Baseline PV and Most Recent PV

While the project schedule and project cost management platforms change over time, they change less from month to month. Schedule granularity and scope granularity, to name just two examples, are more consistent between two schedules that are one month apart than between two that are one year apart. United States case law supports this periodic approach, which resembles more of a "windows" or "time impact analysis" methodology than the "as-planned versus as-built" methodology, which is no longer acceptable for the demonstration of time impacts. While not suggesting that existing EVMS methodology be abandoned, this supplemental approach could support the use of EVMS as a means of detecting adverse cost/schedule performance at an earlier point in time than is currently possible. Another advantage of periodic analysis is that it avoids a weakness of comparison-to-baseline analyses: EVMS metrics become compromised once the original planned completion date of the schedule has passed and the project is still underway. When this has happened, earned value is recorded but with no planned value against which to measure performance; the indicators will eventually revert to their equilibrium positions and as such are compromised.

6.5.7 Banana Curves

Although earned value management systems are applied to projects having CPM schedules, EVMS indices do not account for the fact that planned value profiles have float and value can be earned within a window of time rather than at a discrete point. Were one to recognize the two profiles for Planned Value (PV), there could indeed be two separate sets of EVMS indicators: one from the PV-Early profile and a second from the PV-Late profile. The resulting profiles create a set of curves resembling a banana.

Figure 6-8 "Banana Curves" - Dual Profiles for Planned Value

While banana curves have been used to describe project cash curves, they have not been applied to EVMS. Where EVMS is implemented within a project or program having a CPM schedule, recognizing the two "extreme" profiles (i.e. early and late) for planned value is a useful management tool.
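A sketch suggests how such dual indicators might look in practice; the early and late profiles and the earned value below are invented for the example, and nothing here is drawn from the ANSI standard. An earned value falling between the two profiles yields an SPI at or below 1.0 against the early plan and at or above 1.0 against the late plan, signaling performance within the float envelope.

# Cumulative planned value under the early-start and late-start
# schedules for five periods, plus earned value to date (hypothetical).
pv_early = [10.0, 25.0, 45.0, 70.0, 100.0]
pv_late = [5.0, 15.0, 30.0, 55.0, 100.0]
ev_to_date = 20.0
period = 1  # zero-based index: status taken at the end of period two

spi_early = ev_to_date / pv_early[period]  # 0.80: behind the early plan
spi_late = ev_to_date / pv_late[period]    # about 1.33: ahead of the late plan
if spi_early <= 1.0 <= spi_late:
    print("Earned value lies within the float envelope (the 'banana').")
print(round(spi_early, 2), round(spi_late, 2))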
6.5.8 Consideration of Dissimilar Work in Final Cost Estimates

Another complication with EVMS analyses using aged baselines is that long term projections are made using performance to date. The "Estimate at Completion" (EAC) is perhaps the best example of this point. EAC is calculated through extrapolation using cost performance to date.

Figure 6-9 Extrapolating for the Estimate at Completion

But this approach fails to account for the fact that an endeavor large enough to require the EVMS methodology is rarely performed by one subcontractor or trade, or represents a uniform type of work. Projects or programs that merit a formal EVMS requirement typically have very diverse requirements, trades and types of work. In his article "Calculating Impact Costs," which dealt with the quantification of productivity losses, Stephen Revay noted that calculations must consider how "representative" a particular work type is of another if the two are to be combined in a calculation. (Revay 1987) Applying his thought to EVMS, in order to produce meaningful "Estimate at Completion" values, the work performed up through the interim measurement point would need to be representative of the work throughout the entire project. While this is possible, it is seemingly unlikely for large, complex projects or programs having dozens if not hundreds of separate performers and, presumably, varying levels of cost and schedule performance. It is perhaps possible to define a term, "Representative Value," as the measure of how well the value of an individual project cost element represents that of the entire project or program.

6.6 Chapter Summary

The latitude in EVMS cost and schedule management allows several cost and schedule phenomena to interfere with meaningful EVMS results. The concepts of "Detachable Value," "Substitute Value," "Critical Value," the "Costing Lag," "Representative Value" and the "Return to Equilibrium" are considerations that might be of assistance to any EVMS practitioner. With these concepts it is perhaps possible to improve both the accuracy and meaningfulness of EVMS performance metrics through an awareness of these phenomena and through methodologies that include dual profiles for planned value (i.e. "banana curves"), the measurement of EVMS metrics across shorter periods of time, and measurement across fewer scope elements within the project's cost and schedule management platforms.

CHAPTER 7 The Rationale for An Event-Centric Network for Time Management

7.1 The Loss of Basic PERT Principles in Modern Construction Scheduling

As of 2008, CPM networks consist of activities and events within software that utilizes the Precedence Diagramming Method on a personal computer. Figure 7-1 illustrates the PERT, Activity-on-Arrow CPM and Precedence Diagramming Method representations of the same network of activities and events.

Figure 7-1 Diagramming Methodologies of PERT and CPM (PERT, 1959: activity-on-arrow, event-on-node, U.S. Navy Polaris Program; Kelley-Walker CPM, 1959: activity-on-arrow; Precedence Diagram, 1961: Professor John W. Fondahl, U.S. Navy Bureau of Yards and Docks)

Today's scheduling industry relies almost exclusively on what Primavera Systems Incorporated describes as "PDM" and "Gantt" views of the network. A typical illustration of each is provided in Figure 7-2.

Figure 7-2 "PDM" & "Gantt" Views Using Primavera Suretrak

Today's CPMs are typically prepared by a single CPM scheduler modeling the project on a personal computer. These schedulers are often full-time professional scheduling consultants working on multiple projects, or field personnel who have been assigned scheduling tasks as a collateral duty. This picture is quite different from the networks prepared by the Polaris team. There, a large number of personnel were involved, eliciting the time estimates from engineers who maintained "a thorough understanding of the work to be done." This "team" established and followed "explicit definitions" for each condition under an "interrogation process"
that was deliberately designed to "disassociate the engineer from his built-in knowledge of the existing schedule and...provide information concerning the inherent difficulties and variability in the activity being estimated." (Malcolm et al. 1959) Not only are such deliberations involving teams of skilled project participants less common within the industries of 2008, the professional scheduler may know very little about the technical aspects of the project itself, having instead a limited technical skill set focused more on CPM software than on how the project or program needs to be sequenced.

Today's networks, further, are typically provided at an exhaustive level of detail, perhaps far beyond the detail necessary for an owner to simply monitor and assess progress. It is not uncommon to have between 500 and 10,000 activities on building projects with values above $5,000,000.00 (USD). And while this detail has a purpose, such as allowing subcontractors to know their approximate performance periods, this level of granularity provides an opportunity for at least some project participants to "lose their way." It is not uncommon to find contracts where owners have specified a high level of detail for the schedule, requiring a minimum number of activities in the thousands. What can also be troublesome are contract requirements stating that no activity duration may exceed a specified duration of, say, three or four weeks, as is often the case for many agencies of the U.S. Government.

"Use the Critical Path Method (CPM) of network calculation to generate the project schedule. Prepare the project schedule using the Precedence Diagram Method (PDM)...(d)evelop the project schedule to an appropriate level of detail...(r)easonable activity durations are those that allow the progress of ongoing activities to be accurately determined between update periods. Less than 2 percent of all non-procurement activities shall have original durations (OD) greater than 20 work days or 30 calendar days." (U. S. Army 2007)

This is not in concert with the Project Management Institute's "Rolling Wave" concept, or most contractors' means and methods, which call for the addition of definition to the latter portion of the schedule as the project proceeds. Under "Rolling Wave," the short term activities are generally greater in number and shorter in duration, while those several months or years in the future are fewer and longer. This technique is entirely appropriate given the uncertainty associated with long term planning. To the extent that an "artificial" requirement such as the Army's is imposed, and this requirement is not unusual in the construction industry, the contractor must often make assumptions about means and methods which may as yet be unknown, and remain so for several months.

Figure 7-3 Schedule Granularity Over Time

The level of detail, or schedule granularity, is a very key point in this study and, to some extent, may be a common cause of many of the scheduling problems that the construction industry struggles with in 2008. A 2003 article in Engineering News Record (ENR) described "widespread abuses of powerful software (that) produce(d) badly flawed or deliberately deceptive schedules that look good but lack mathematical coherence or common sense about the way the industry works. The result is confusion, delayed projects and lawsuits." (Korman and Daniels 2003) Four men were identified as experts in the article: attorneys Jon Wickwire and Fred Plotnick, and professional schedulers James O'Brien and Stuart Ockman.
These men identified the advent of the PC and the accessibility of scheduling software "in the hands of inexperienced and poorly trained practitioners" as the problem, lamented the loss of the more intuitive Activity-on-Arrow method to the "de-facto standard" PDM, and suggested that the software industry, most notably Primavera Systems Incorporated, was largely to blame for the situation. Primavera's software stopped supporting an activity-on-arrow format when it dedicated itself to the Microsoft Windows operating system in 1994. Primavera's President, Richard K. Faris, indicated that the statements of these four men were "dead wrong": "Primavera can't be responsible for abuses any more than a spreadsheet company is responsible for those who use its product to draw up faulty or deceptive reports, he contends." (Korman and Daniels 2003)

The coverage of the scheduling "crisis" by ENR, which is perhaps the most widely read weekly construction journal in the United States, was something of an announcement for the newly founded College of Scheduling of the Project Management Institute, which was then and is now under the control of the four primary interviewees: Wickwire, Ockman, O'Brien and Plotnick. James E. Kelley, Jr. provided a response to the article in the follow-on issue of ENR:

"Your cover story...paints a disheartening picture of the current state of CPM schedules... It's only 46 years since Morgan Walker and I first worked out CPM for duPont and yet project people are still falling into some of the same scheduling traps warned against during CPM's childhood. The use of features like 'leads and lags,' 'multiple calendars' and 'assigned constraints' do provide some levels of schedule flexibility. In practice, their use too often leads to inconsistent schedules and misleading views of project condition." (Kelley 2003)

Clearly the industry faced a crisis, at least perhaps on a project-to-project basis, and little has happened to alter the situation in the last five years. Cost and schedule overruns on large public projects, while not new developments in 2003, are perhaps more caustic when one recognizes that the CPM can be, and is, used as a tool to "set up" a large recovery during a claims scenario (i.e. litigation). One contractor strategy apparent on a recent $200-Million U.S. Government construction project saw an unrealistically shortened initial baseline schedule submitted prior to the start of construction so that the time between planned and actual completion, were the project to finish late, would be maximized, allowing for a greater monetary recovery for time related costs. The project did indeed finish late, and the contractor's post-project claim, at $100-million (with interest), was the largest in the history of that individual agency. Another contractor on a multi-billion dollar project used the combination of a remote software setting called "progress override" and the recording of extremely small amounts of progress on only a small number of activities to obscure the fact that the project had already experienced a slippage of more than one year. While this latter example seems like something that should be detectable, given the combination of a large number of activities (over 50,000 for this project) and an understaffed government field office without monies for the hiring of a "counter-scheduler" to analyze the submission in great detail, this sort of manipulation is most often detected only by exception.
As a practical matter there are a myriad of schedule manipulations that can be performed within the depths of a large CPM, and only heavily staffed owners with the ability to hire a scheduling expert will be able to detect these strategies, which are, if not fraudulent, at least beyond the confines of respectable behavior in certain circles. Also, where a project is delivered after the contract completion date, most federal contracts penalize the contractor by imposing "liquidated damages" -- a monetary penalty for every day that the project is late. In order that contractors may both avoid liquidated damages and recover their time related costs where the project is delivered late, they must successfully prove which issue(s) or event(s) delayed the project and which party was responsible. Where the delay is the fault of the owner, a "compensable" delay allows the contractor to be paid for its time related costs and not be subject to the assessment of liquidated damages. Where the delay is the fault of neither party, an "excusable" delay excuses the assessment of liquidated damages but does not provide the contractor with compensation for time related costs. In both situations, the process will result in a formal change to the contract completion date by the contracting officer, but satisfactory supporting documentation is required. The project's schedule platform is critical to this proof, and the degree to which the contractor can successfully express its position can provide significant swings in the contractor's cost performance on a project.

Before elaborating on the various manipulative practices that can be employed within a CPM schedule, it is first important to speak to motive, specifically, why project participants might seek to engage in a deceptive scheduling practice. A contractor's time related costs, such as those for salaried personnel, workspace costs, consumable supplies and other overhead items, make up a significant amount of the contractor's overall contract amount, often more than 10% of total project cost. This value is even more significant when one looks at how much of the time related cost is attributed to the prime contractor; these amounts are often not passed on to the subcontractor due to contract provisions that preclude the sub from recovering time related costs in all situations where the prime does.

The topic of manipulative scheduling has been somewhat confined, at least in the United States, to subjects related to scheduling software. Discussions, flowing primarily from the 2003 article in ENR, have focused upon whether CPM software operators are making revisions to individual work activities, network logic or relationships, or adjustments to software settings that affect the network calculation because of actual project conditions or to present a more favorable position. One or more software platforms have been developed specifically for the purpose of itemizing changes between two electronic schedule updates, and the requirement for this software has found its way into the scheduling requirements of many federal construction contracts. (U. S. Navy 2007)

But it is the student's belief that manipulative scheduling practices can also fall outside the confines of a software platform. A contractor, for example, might choose to skip a schedule submission entirely, allowing its staff time to fully evaluate a schedule disruption or delay in order to present a stronger position at a later point in time.
It is important to suggest, therefore, that schedule manipulations can take several forms, some expressed in software and formal project records, others expressed only in the strategies, tactics, words and movements of the members of the project team.

The cost of implementing a formal CPM is not insignificant, something that appears to have remained constant through the last fifty years.

"...regarding (PERT's) use as an analytical tool, a close examination shows that the basic theoretical approach explained in most textbooks on PERT is not always practical. It is an exceptional case where PERT is useful as an analytical tool... Other drawbacks appear in the implementation of PERT networks, which in some cases require a team to keep them current; this becomes costly... Key punches for such cards are available for rental, for about $40 per month, or sale, from both IBM and UNIVAC." (Fourre 1968)

Notwithstanding the advent of the personal computer, which has enhanced accessibility, the implementation of a CPM function remains costly as of 2008. Software prices range from $400 to $3,000 for a single seat license, and hourly rates for professional schedulers range between $75 and $150, with higher rates for the more experienced. New editions of the software are issued every three to five years, owners place specific requirements on these systems, and individual contractors must invest money and time in order to keep up. Those schedulers that are typically lent to a project are somewhat detached from it, often appearing for several hours per month on small to mid-size projects to prepare updates. It is this detachment, and the "inexperienced or poorly trained" schedulers, as Mr. O'Brien
The federal agencies with BIM requirements were approached by 25 Fourth dimension technology (4-D) is an integration of a three dimensional building design with time as the fourth dimension. 4-D software allows one to witness the 3-D electronic model of the project be built piece-by-piece over time. 152 AGC at a recent forum in Washington, DC but seemed reticent to specify a single system as they had done several years before for Primavera. It is perhaps the subject of granularity that could provide a solution to many of the problems of 2008. To the extent that the industry might develop a summary level means of assessing schedules, either in concert with existing CPMs, or without them entirely, this might solve many of these problems. The extensive detail associated with the typical CPM schedule is compounded by the fact that the PDM methodology facilitates the use of four separate relationship types, multiple calendars, constraints and complex relationship modifiers such as leads and lags. The PERT methodology, which utilized a weekly vice daily bin size, juxtaposed activities and events (i.e. Event-Activity-Event-Activity-Event, and on), did not use calendars, constraints, used only finish-to-start relationships, did not use lags or offer a series of remote settings that modified the mathematical calculation was far more straightforward, at least if one limits themselves to the deterministic application of PERT. Figure 7-4 The ?Simple? Relationships Found Within PERT 153 Also of significance is that within the Polaris approach the elapsed time estimate was given in weeks, not days as is the case for most CPM applications. This point is noteworthy. In light of the fact that the three year duration of the Polaris missile program was of the same general order as many of today?s projects and programs and had over 10,000 events in the network, why was it that the competent engineers responsible for providing the activity time estimates were instructed to give their answer in weeks and not days? Or, why is it that today?s competent engineers use ?days? and not ?weeks? for estimating durations on projects of duration? Did the program managers recognize that the variability these estimates, level of detail within the flow plan and error in the estimating process all combined to make the week the ideal bin size? Were the activity durations deliberately longer in PERT (i.e. expressed in weeks and not days), thereby affording the possibly of this different scale, which would support the viability of lower level of granularity? 7.2 Debates on the Mathematics of PERT The lack of a coherent and practical probabilistic approach in today?s construction industry is lamented by Dr. Glavinich: ??the probability associated with the critical path, which is frequently claimed as one of PERT?s best features, is seldom more than a misleading, incorrect number which should, in most cases, be disregarded.? (Glavinich 2004) This reliance upon a single number is particularly interesting when one considers that the assignment of planned durations to individual construction activities is can be subjective. In this case there is often little in the way of an historical records to use, 154 such as is often the case for most contractor cost estimating routines. Glavinich supports this point before stressing its importance. ?Activity durations are normally estimated in an intuitive and subjective way. 
Estimates are usually given little systematic attention even though activity durations are the basis for the construction schedule... The need for accurate estimates of activity duration cannot be overemphasized. The construction schedule is only as good as the activity durations that make it up." (Glavinich 2004)

The probabilistic techniques in PERT were largely set aside by the major federal agencies by the mid-1960s. Polaris' choice to use a deterministic approach to a probabilistic question in the late 1950s was noticed immediately.

"The usual practice in this situation (as in other linear programming problems with random objective functions) is simply to replace each distribution by its expected value, thereby obtaining a deterministic problem" (Fulkerson 1962)

"Currently emphasis is put on the activities on the 'critical path' in the network with the activity distributions replaced by their means... in current solution methods, the output does not depend on the structure of the activity duration distributions but only on their means and variances." (van Slyke 1963)

The implementation of a Monte Carlo solution to the PERT problem was provided by Richard van Slyke of the University of California, Berkeley in 1963. (van Slyke 1963) Rather than converting the PERT problem to a deterministic approach, as Malcolm et al., Fulkerson and Kelley did, van Slyke used a Monte Carlo application which simulated the project multiple times using the actual probability distributions. (Van Slyke cites the original PERT article in Operations Research authored by Malcolm et al. as one of his references, but gives a publication year of 1957 instead of 1959; this is most likely a typographical error.)

"...in current solution methods, the output does not depend on the structure of the activity duration distributions but only on their means and variances. The Monte Carlo approach, in order to gain extra accuracy, does depend on the shape of the distribution. On the other hand, the Monte Carlo approach has greater flexibility in that any distribution can be used for activity durations: beta, normal, triangular, uniform, or discrete in any sort of mix." (van Slyke 1963)

Also debated were the "dummies," or arcs with no work assigned to them but necessary to realize a true expression of the relationships between events. Kelley laments the excessive number of dummies necessary within any network.

"It has been observed that by using these rules on 'real' projects, the resulting number of activities (including dummies) averages 1.7 times the number of events." (Kelley 1961)

"If this approach were to be adopted, the resulting project graph would be replete with fictitious or dummy activities, anywhere from n to (1/2)n(n-1) dummies in a project of n activities. Of course many of these dummies are not necessary and could be eliminated." (Kelley 1961)
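Van Slyke's Monte Carlo treatment, described above, is straightforward to sketch. The fragment below is a minimal illustration only, not van Slyke's own code: it assumes a small, hypothetical activity-on-arrow network with triangular duration distributions (any distribution could be substituted, per his quotation) and estimates the distribution of project completion time by repeated sampling.

    import random

    # Hypothetical activity-on-arrow network: arc -> (tail event, head event)
    ARCS = {"A": (1, 2), "B": (1, 3), "C": (2, 4), "D": (3, 4)}
    # (optimistic, likely, pessimistic) durations in weeks for each arc
    ESTIMATES = {"A": (2, 4, 8), "B": (3, 5, 7), "C": (1, 2, 6), "D": (2, 3, 4)}

    def simulate_once():
        """Sample every arc duration and compute the network completion time."""
        sample = {arc: random.triangular(a, b, m)          # low, high, mode
                  for arc, (a, m, b) in ESTIMATES.items()}
        early = {1: 0.0}                                   # forward pass
        for event in (2, 3, 4):                            # topological order
            early[event] = max(early[t] + sample[arc]
                               for arc, (t, h) in ARCS.items() if h == event)
        return early[4]

    runs = sorted(simulate_once() for _ in range(10_000))
    print("mean completion:", sum(runs) / len(runs))
    print("90th percentile:", runs[int(0.9 * len(runs))])

Repeating the forward pass over sampled realizations, rather than over single expected values, is the essential difference between this approach and the deterministic treatments of Malcolm et al., Fulkerson and Kelley.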
7.2.1 Use of the Beta Distribution to Model the Opinions of Competent Engineers

Perhaps the most widely debated aspect of PERT was the decision to use a beta distribution to model the probability distribution of the time estimate for an individual activity.

"With the flow plan laid out graphically and authenticated as representing the work and activities to be performed, elapsed time estimates for each activity are obtained from competent engineers... these t_e values are computed from data given by engineers responsible for performing the indicated activity..." (Malcolm et al. 1959)

Although the fundamental theory upon which PERT was based was probabilistic, faced with the limitations of the computing technology of the time, the Polaris engineers would ultimately adopt a methodology that was indeed a hybrid of both probability theory and deterministic time estimates for each activity. Almost immediately after its publication, the PERT methodology would be "picked on" for its assignment of a beta distribution to the time estimates for each activity, a step that was considered too simplistic by many mathematicians of the period, including van Slyke.

"Since, as will be seen later, only the mean and variance of the distributions are used in current calculation methods, the character of the distribution is treated somewhat cavalierly. The distribution is assumed to be a beta distribution with standard deviation equal to 1/6 the range. These are of course highly arbitrary assumptions and should not be taken too seriously." (van Slyke 1963)

In order to examine the validity of this critique it is necessary to examine the Polaris solution at the stage where the network flow plan has been constructed and three time estimates (i.e. (a) optimistic, (m) likely and (b) pessimistic) have been elicited from "competent engineer(s)." At this stage of the PERT analysis, the Polaris team sought "to translate the engineers' estimates into measures descriptive of expected elapsed time (t_e) and the uncertainty involved in that expectation, sigma(t_e)." (Malcolm et al. 1959) What Polaris did next was assume a beta distribution around the three time estimates. This allowed the calculation of the mean and variance of the distribution using a formulaic approach. Then, with a mean and a variance for each activity, the mean and variance of the expected completion time for the overall network could be readily calculated through summation. Polaris used a tabular solution, but this is the equivalent of what would now be described as the forward and backward pass operations. These processes are described in Chapter 3. (It is significant that if one focuses exclusively on the Polaris team's treatment of the mean expected time (t_e) for each activity, and ignores the portion of the methodology concerning variability, and perhaps probability theory altogether, the PERT solution equates to the deterministic network solution provided by the Kelley-Walker CPM. It is for this reason that CPM's presentation of a time network might not represent a contribution in this area.)

Figure 7-5 Expected Time (t_e) for Activity Finish

While it accomplished a solution to the network for Polaris, this was admittedly an approach of convenience. Given the state of computing technology in 1958 and the urgency to achieve a functional submarine launched ballistic missile, such a solution, while not precise, was likely "close enough" to be effective for the program.

Digging deeper, one sees other issues that go beyond the question of how well the beta distribution "fits" the theoretical distribution of the competent engineer. Other assumptions, such as the single peak, are perhaps more significant.

"It was postulated that the three estimates could be used to construct a probability distribution of the time expected to perform the activity. It was felt that such a distribution would have one peak... with the most probable time estimate, m, being representative of that value... (and with) relatively little chance that either the optimistic or pessimistic estimates, a and b, would be realized." (Malcolm et al. 1959)
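Before turning to the single-peak question, the formulaic approach referenced above can be made concrete. Under the standard PERT assumptions, the expected elapsed time is t_e = (a + 4m + b)/6 and the standard deviation is (b - a)/6, i.e. one sixth of the range, as van Slyke notes. A minimal sketch, with a hypothetical activity:

    def pert_moments(a, m, b):
        """Standard PERT reduction of optimistic (a), likely (m) and
        pessimistic (b) estimates to a mean and variance."""
        t_e = (a + 4 * m + b) / 6.0      # expected elapsed time
        sigma = (b - a) / 6.0            # std. dev. = 1/6 of the range
        return t_e, sigma ** 2           # mean and variance

    # Example: an activity estimated at 2 / 4 / 8 weeks
    t_e, var = pert_moments(2, 4, 8)
    print(t_e, var)                      # 4.33... weeks, 1.0 weeks^2

Because path variance is the sum of the activity variances under PERT's independence assumption, these two per-activity moments are all that the original method carries forward into the network calculation.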
The single peak postulate is perhaps capable of modeling "routine" activities of great granularity. But consider the example of a construction project on the western Pacific island of Guam, where it is routine for the work to be heavily dependent upon material deliveries by cargo ship. If a container ship from the west coast of the United States, where the steel is being fabricated, arrives only every other week, and the steel delivery has an equal probability of being on either ship, would the distribution not have two peaks simply due to the ship's schedule? It would be reasonable to consider this a larger problem than those described by PERT's adversaries in the early 1960s. This type of consideration is noted by Glavinich:

"Construction activity durations do not necessarily behave as beta distributions. The probability distribution for a particular construction activity will vary based on a number of factors. Due to the one-time nature of construction and all of the project-specific variables that can impact the duration of a construction activity, it is difficult to predict with certainty what the actual duration distribution should be for any construction activity... no one is sure what distribution is the best model for construction activities." (Glavinich 2004)

If we are to assume that there is no "best model" for construction activities, perhaps the more important debate is not whether the beta distribution is the "best fit" to the theoretical probability distribution of the competent engineer, but whether the competent engineer's time estimate can always be expressed with a one-peak distribution. To the extent that the beta distribution has not successfully captured an activity's probability distribution, this is more a function of the process of eliciting the subjective assessment of the competent human engineer(s) than a deficiency within the beta distribution itself. As of 2008, computing technology allows for a Monte Carlo analysis, providing the opportunity to assign individual probability distributions to each activity.

7.2.3 Use of the Normal Distribution to Calculate T_E

"Utilizing the central-limit theorem, it may be assumed that the probability distribution of time for accomplishing an event can be closely approximated with the normal probability density." (Malcolm et al. 1959)

When calculating the probability of "meeting an existent schedule," the Polaris PERT team assumed that it was fair to treat the distribution resulting from a large number of events, some 10,000 in the case of Polaris, as normal. With this normal distribution, Polaris engineers would then assess the probability of meeting a calendar date certain (T_OS). Figure 7-6 illustrates this model, where T_OE is the "expected" completion date, or week, produced by the network flow plan and T_OS is the proposed calendar date certain.

Figure 7-6 "Estimate of Probability of Meeting Scheduled Date, T_OS" (Malcolm et al. 1959)

PERT would be criticized for the assumption of a normal distribution for the purpose of evaluating the overall schedule.

"We should realize, of course, that strictly speaking we should not use normal distributions because they do not have finite ranges." (van Slyke 1963)
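The calculation illustrated by Figure 7-6 is a simple normal-tail evaluation: the probability of meeting the scheduled date T_OS given the expected completion T_OE and the accumulated path variance. A minimal sketch, with hypothetical numbers:

    import math

    def prob_meeting_date(t_oe, var, t_os):
        """P(completion <= T_OS) assuming completion ~ Normal(T_OE, var),
        per PERT's use of the central limit theorem."""
        z = (t_os - t_oe) / math.sqrt(var)
        # Standard normal CDF expressed via the error function.
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    # Hypothetical: expected completion week 36, variance 9, schedule week 39
    print(prob_meeting_date(36.0, 9.0, 39.0))   # ~0.84

It is precisely this single number, as Glavinich's criticism quoted earlier suggests, that can mislead when the normality and independence assumptions behind it do not hold.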
A discussion of constraints to the application of the central limit theorem is provided by Professor Gregory Baecher of the University of Maryland and John Christian.

"The Central Limit Theorem is not without constraints upon the underlying or component distributions. For example, Kaufman (1963) has shown in the context of oil reserve estimation that the sum of lognormal variables does not satisfy these conditions and thus the distribution of reserve estimates involving logNormally distributed pool volumes is more complex than it would be if the Central Limit Theorem applied to them." (Baecher and Christian 2003)

Kelley seemed averse to the idea of using probability distributions or elaborate mathematical solutions for large projects while also acknowledging that the PERT solution is indeed deterministic.

"...although there are neat mathematical formulas for the information sought, they are most intractable for any reasonable computation for large projects." ... "(PERT) determines first the expected duration and variance for each activity... (then) the critical path is computed deterministically." (Kelley 1964)

His article "Critical-Path Planning and Scheduling: Mathematical Basis," which appeared in the May-June edition of Operations Research, provides the results of three methods for computing the expected duration (T_E) of a project. Kelley provides a model where each activity is given three time estimates with "odds" assigned to each. In order to properly evaluate Kelley's work, which was provided in tabular form, it is helpful to construct a network from his tabular presentation of his six activity, activity-on-arrow network.

Figure 7-7 A Simple Network Based Upon James E. Kelley's Table (Kelley 1964)

Kelley's article goes on to solve his network using the methodologies of PERT, of Fulkerson and of his own, against the "true values" which were known at the time of the analysis. Fulkerson had proposed an alternative methodology based upon network flow computations in his article "A Network Flow Computation for Project Cost Curves," which appeared in Management Science in 1961. Kelley's results show a closer approximation to the "true values," in both expected duration and variance, than the Fulkerson or PERT solutions provide. His approximation, further, is the only pessimistic forecast, whereas Fulkerson and PERT express remaining durations that are shorter than the "true" duration of 9.49 months.

Table 7-1 Comparison of Results of Three Solution Types

    Method         Expected Project Duration (Months)   Variance
    Kelley         9.54                                 0.500
    True Values    9.49                                 0.533
    Fulkerson      9.38                                 0.229
    PERT           8.89                                 1.157

(Kelley 1964)

If Kelley's results are converted to weeks, consistent with the Polaris PERT, it is Fulkerson's approach that is most impressive. Fulkerson's analysis now provides the closest approximation of the true value (38 weeks vs. 38 weeks) and with an exceptionally small variance. Also noteworthy is the exactness of Kelley's variance, which now computes to a perfectly rounded number (8.00). It is not clear if Kelley first performed the analysis in weeks and then converted the results to months, or if this is significant.

Table 7-2 Comparison of Results of Three Solution Types (Weekly Units)

    Method         Months   Variance   Weeks   Variance
    Kelley         9.54     0.500      39      8.00
    True Values    9.49     0.533      38      8.53
    Fulkerson      9.38     0.229      38      3.66
    PERT           8.89     1.157      36      18.51

When one recognizes that the limited computing capabilities of the late 1950s were indeed a constraint to practical use of the probability concepts embodied within PERT, it is fair to suggest that each of these approaches was rational in light of this constraint.
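The weekly conversion in Table 7-2 follows the standard change-of-units rules for a random variable: the mean scales linearly, while the variance scales by the square of the factor, Var(cX) = c^2 Var(X). A minimal check, assuming the factor of 4 weeks per month implied by the variance column of the table:

    MONTH_TO_WEEKS = 4.0   # factor implied by the variance column of Table 7-2

    results = {  # method: (expected months, variance in months^2), Table 7-1
        "Kelley": (9.54, 0.500),
        "True Values": (9.49, 0.533),
        "Fulkerson": (9.38, 0.229),
        "PERT": (8.89, 1.157),
    }

    for method, (mean_m, var_m) in results.items():
        mean_w = mean_m * MONTH_TO_WEEKS        # mean scales by c
        var_w = var_m * MONTH_TO_WEEKS ** 2     # variance scales by c^2
        print(f"{method:12s} {mean_w:5.1f} weeks  {var_w:6.2f} weeks^2")
    # e.g. Kelley's 0.500 months^2 becomes exactly 0.500 * 16 = 8.00 weeks^2

This squared scaling is why rankings that look unremarkable in one unit (Kelley's 0.500 versus PERT's 1.157) appear so dramatic in another (8.00 versus 18.51).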
Today, the process of obtaining a composite probability distribution based upon the individual distributions of the network activities is readily performed within several software platforms, allowing this debate to be largely supplanted by technology and other forms of practice.

7.2.4 Addressing the Proximity and Variability of Non-Critical Paths

An early criticism of PERT was the lack of proper accounting for the proximity and variability of non-critical network paths. Van Slyke describes this inherent flaw of the PERT methodology (a flaw also present within modern CPM platforms).

"One of the more misleading aspects of current PERT solution methods is the implication that there is a unique critical path. In general any of a number of paths should be critical, depending on the particular realization of the random activity durations that actually occurs." (van Slyke 1963)

With deterministic approaches (i.e. single time estimates for each activity), only the most critical path will receive attention, notwithstanding the point that there might be far more variability on near "non-critical" paths. It is, perhaps, a system that provides the project team with the ability to ask and answer six basic questions at each schedule update that might provide a reasonable solution: (1) Where is the critical path? (2) How much uncertainty is associated with the activity durations along this path? (3) What other paths are close to the critical path, and what is their relative criticality (i.e. how close are they to the critical path)? (4) How much uncertainty is associated with these close paths? (5) What jumps could occur, and what is the probability of each? and (6) What is the effect of these jumps on the planned completion date for the overall project schedule?

Two measures identified by van Slyke are significant: (1) the measure of "closeness" between paths; and (2) "the number of paths that may become critical." In assessing these concerns van Slyke considers the means and variances of each path length and their correlation. While these points were described in the original PERT article, van Slyke introduces a conceptual solution to this problem: "Thus it makes sense to talk about a 'criticality index,' which is simply the probability that an arc will be on the critical path." (van Slyke 1963)

Van Slyke's criticality index is absent from contemporaneous management approaches, but it is simply an expression of the probability of an activity being on the critical path, van Slyke noting that "the ramifications and use of this parameter, which is not available using current techniques, are developed." (van Slyke 1963) To calculate this index, van Slyke described assigning a binomial distribution to an individual path and then utilizing a Monte Carlo simulation to determine the solution.

Is it appropriate to draw holistic conclusions about future schedule performance by considering only the path with the smallest value of total float? This approach has been taken in U. S. Government contracting since the deterministic solution to the PERT problem was performed in the late 1950s by the Polaris team, and it remains so within the CPM analyses of 2008. Generally, there is no accounting for variability in activity time estimates within most construction contracts of the U. S. Government, which, for all practical purposes, consider only deterministic approaches when evaluating time impacts and/or CPM schedules.
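Van Slyke's criticality index, introduced above, is easy to approximate by simulation: over many sampled realizations of the activity durations, count the fraction of runs in which each arc lies on the longest path. The sketch below is an illustration under assumed, hypothetical data (the same small network used in the earlier Monte Carlo fragment); it is a paraphrase of the idea, not van Slyke's own procedure.

    import random
    from collections import Counter

    ARCS = {"A": (1, 2), "B": (1, 3), "C": (2, 4), "D": (3, 4)}   # hypothetical
    ESTIMATES = {"A": (2, 4, 8), "B": (3, 5, 7), "C": (1, 2, 6), "D": (2, 3, 4)}
    PATHS = {("A", "C"), ("B", "D")}   # the two arc paths from event 1 to event 4

    critical_counts = Counter()
    N = 10_000
    for _ in range(N):
        sample = {arc: random.triangular(a, b, m)
                  for arc, (a, m, b) in ESTIMATES.items()}
        # The critical path in this realization is simply the longest path.
        longest = max(PATHS, key=lambda path: sum(sample[arc] for arc in path))
        for arc in longest:
            critical_counts[arc] += 1

    for arc in ARCS:
        print(arc, "criticality index ~", critical_counts[arc] / N)

An arc with a criticality index of, say, 0.35 is invisible to a deterministic analysis that reports a single critical path, yet it will drive the completion date in roughly a third of plausible futures.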
7.3 The Adverse Effect of Negative Float

Several agencies of the U. S. Government maintain a formal contract requirement within their contract documents for the imposition of a "finish constraint" on the last activity of a project schedule.

"Constraint of Last Activity Milestone: The Contractor shall include as the last activity in the contract schedule, an activity named 'End Contract'. (sic) The 'End Contract' activity shall have a mandatory finish constraint equal to the contract completion date." (USG 2007)

This finish constraint is meant to ensure that the contractor's schedule will never extend beyond the specified contract completion date. Where the contractor is behind schedule, for whatever reason, the periodic schedule updates will reflect on-time completion but float values on the critical path will have numeric values below zero. A recent construction contract solicitation by the U. S. Navy's Naval Facilities Engineering Command for a $10 million design-build project provides the following schedule management requirement regarding negative float:

"...Contract status shall be evaluated on the basis of relative float on the critical path at the time of updating with negative relative float indicating the contract is behind schedule and positive relative float indicating status ahead of schedule. (Relative float is the current status of an activity in relation to the approved schedule completion date.)" (USG 2000)

Negative float appears to be considered only a slight nuisance within these same contracts, which also state that it is still possible to identify the critical path in negative float situations. But what is lost in these situations is that, because the late start and late finish dates are forcibly constrained to fall "behind" the early dates, the project team's ability to continue to produce meaningful mathematical calculations using their CPM is compromised. The absence of meaningful "late dates" adversely impacts the ability of the contractor to plan, prioritize and optimize the execution of future work. The following example illustrates this phenomenon.

Figure 7-8 depicts a four activity schedule that is about to start. The data date (i.e. X-Now or Today) is March 6, 2008. The critical path flows through Tasks 1, 2 and 4. Task 3 is eleven workdays off the critical path (Total Float = 11d).

Figure 7-8 Project Status as of Project Start, 06 MAR 08

In this example, as of March 24, 2008, the project start milestone has been observed effective March 6, 2008, the date that the owner provided the formal Notice to Proceed. Nothing else has happened. The contractor has not performed any work and is not able to record progress on any of the four project tasks. Where no mandatory finish constraint is present, the network may extend freely to the right side of the Gantt chart and depict the new planned completion date of May 12, 2008. These points are illustrated by Figure 7-9, which provides the project status as of March 24, 2008.

Figure 7-9 Project Status as of 24 MAR 08 without Finish Constraint

Thus far the project schedule has both identified the critical path and represented that the planned completion date has slipped to May 12, 2008, while still providing meaningful start and finish dates (both early and late dates) for all activities. The non-critical Task 3 has float, and its early and late dates provide meaningful information to the project team. This picture changes, however, if we are to apply a finish constraint of April 24, 2008 and view the schedule as of March 24, 2008. See Figure 7-10.
Figure 7-10 Project Status as of 24 MAR 08 with Finish Constraint

The total float value is now negative, and the critical path may still be identified by those activities with the lowest value of total float, minus eleven days in this case. Task 3, although showing a total float of zero, remains non-critical. This is a minor nuisance and does not inhibit the team's ability to discern which activities are critical (Total Float = -11d) and which are not (Total Float > -11d). The finish constraint has prevented the late finish date of "1040 Project Complete" from moving beyond 24 APR 08, while the early finish date for this activity shows the later date of May 12, 2008. This is also not a significant issue for the critical activities as, by definition, they are driving the finish date, and therefore the early dates can provide the project team with the necessary information for project planning. The more significant problem is that the early and late dates provided for "1001 Task 3", which is non-critical by 11 days, are the same. This situation provides no useful information to allow the project team to enjoy the benefits of the eleven days of float that actually exist on Task 3. Where contractors are using the early and late start and finish dates to plan their work, the effects of this compromised mathematical calculation have a deleterious effect on a contractor's means and methods.

It might also be feasible to suggest that the mandatory finish constraint is indeed an implicit order to finish on the contract completion date, even where delays have occurred and the project is behind schedule. It is arguable, therefore, that the U. S. Government's finish constraint not only interferes with a contractor's "means and methods" but also constitutes a form of constructive acceleration by the owner. Were this opinion sustainable, the owner would be responsible to compensate the contractor for a loss of efficiency, the concept of which is illustrated by the 1978 Guide to Contract Modifications produced by the U.S. Army Corps of Engineers. Acceleration concerns aside, where late projects are subject to a finish constraint, the CPM schedule no longer provides the contractor with the ability to identify specific calendar dates for completing non-critical work activities. While the mandatory finish constraint does provide the owner with a measure of protection against an unwitting "tacit approval" of a schedule update showing late delivery, it is not without adverse effect on the project's time management platform and a contractor's means and methods.
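The float arithmetic behind this example is easy to reproduce. The sketch below is a simplified illustration: the durations (in workdays) are hypothetical, chosen to produce the 11-day relationships described above rather than taken from Figures 7-8 through 7-10. It runs a forward and backward pass, first with the late finish anchored to the computed early finish and then with an imposed earlier finish constraint, showing how the constraint drives total float negative on the critical path and erases Task 3's real flexibility.

    # Hypothetical durations (workdays) reproducing the 11-day float pattern.
    # Precedence: Task1 -> Task2 -> Task4 (critical); Task1 -> Task3 -> Task4.
    DUR = {"Task1": 10, "Task2": 20, "Task3": 9, "Task4": 5}
    PRED = {"Task1": [], "Task2": ["Task1"], "Task3": ["Task1"],
            "Task4": ["Task2", "Task3"]}
    SUCC = {"Task1": ["Task2", "Task3"], "Task2": ["Task4"],
            "Task3": ["Task4"], "Task4": []}
    ORDER = ["Task1", "Task2", "Task3", "Task4"]

    def total_float(finish_constraint=None):
        early_finish = {}
        for t in ORDER:                                  # forward pass
            es = max((early_finish[p] for p in PRED[t]), default=0)
            early_finish[t] = es + DUR[t]
        project_finish = early_finish["Task4"]
        anchor = finish_constraint if finish_constraint is not None \
                 else project_finish
        late_finish = {}
        for t in reversed(ORDER):                        # backward pass
            lf = min((late_finish[s] - DUR[s] for s in SUCC[t]), default=anchor)
            late_finish[t] = lf
        return {t: late_finish[t] - early_finish[t] for t in ORDER}

    print(total_float())                      # Task3: 11, all others: 0
    print(total_float(finish_constraint=24))  # critical tasks: -11, Task3: 0

With the constraint imposed, the critical tasks show a total float of -11 while Task 3 shows 0, even though eleven days of genuine flexibility still exist on Task 3; this is exactly the loss of meaningful late dates described above.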
7.4 Projects, Programs, Humans and the CPM Schedule

Is it possible that a single schedule and cost platform, which has been the standard contract deliverable on public and private sector construction projects for many years, can adequately reflect the needs and/or requirements of each project participant? Perhaps not. Expressed in their most primal form, contractor organizations have an innate requirement to be paid, as much and as early as possible. This revenue pays expenses, repays debt and allows the contractor to enjoy the benefits of interest on principal. Owners, meanwhile, have a desire to pay as little as possible and as late as possible, at least in most cases. How is it, then, that a single platform for cost and schedule can be a true reflection of the needs of these two parties?

Perhaps the single cost-schedule should be thought of as nothing more than a treaty reflecting agreements within the larger tug-of-war over project funds and time. And in these oft heated discussions, the reputations of a "greedy" contractor and a "miserly" owner may not be fair. Perhaps these two caricatures are indeed accurate depictions of rational behavior for both parties. This nuance, however, is not recognized in most everyday industry discussions, although it surfaces on rare exception.

"One high-level Lockheed executive, on hearing PERT described at a Special Projects meeting, banged his fist on the table and reportedly said: 'No management system is going to get me to admit that I am going to miss my scheduled delivery dates. This system is going to listen to some pessimistic Lockheed engineer say that Lockheed is likely to miss delivery but not to me. I sign the contract; I hire and fire Lockheed engineers.'" (Sapolsky 1972)

This is an explosive example of the point that the contractor's presentation of schedule information can indeed be affected by this "play" between the parties.

The 1738 paper "Specimen Theoriae Novae de Mensura Sortis" by Daniel Bernoulli presented the concept of utility theory for perhaps the very first time. The paper, whose title translates to English as "Exposition of a New Theory on the Measurement of Risk," expressed Bernoulli's belief that existing risk theory did not properly consider the specific characteristics of the individuals facing risk. As such, a single hypothesis that for "two persons encountering identical risks... the risk anticipated by each must be deemed equal in value... (and that) (n)o characteristic of the persons themselves ought to be taken into consideration" was inappropriate. (Bernoulli 1738) The topic is important in the discussion of time management platforms because projects and programs generally require that only one schedule be used by the contractor, subcontractors, owner and others. Contemporaneous scheduling software allows the single CPM schedule to be arranged, sorted, filtered and summarized in many different ways depending on the requirements of the audience.

It is within his hypothesis that Bernoulli describes how different individuals might place different "values" on certain events due to differing circumstance. He describes, for example, how a poor individual finding a lottery ticket might be more inclined to sell that ticket for a sum certain, while a very rich individual placed in the same circumstance might be well justified in preferring to play the ticket. But in another example, a wealthy prisoner needing only a small sum of money in order to purchase his freedom might place a greater value on that monetary amount than a poorer prisoner with no possibility of release or other avenue for spending once free. Bernoulli's thought was that while situations such as the latter were rare (i.e. it was more often the case that an individual's utility for monetary gain was dependent upon one's wealth), it is necessary to evaluate the circumstances of each instance in every case. (Bernoulli 1738)

Bernoulli also addresses how an individual's attitude toward monetary gain might change as one becomes wealthier. Bernoulli provides that "it is highly probable that any increase in wealth, no matter how insignificant, will always result in an increase in utility which is inversely proportionate to the quantity of goods already possessed."
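In modern notation, Bernoulli's postulate is usually rendered as a logarithmic utility function. The restatement below is a standard reading of Bernoulli (1738), supplied here for clarity rather than quoted from the dissertation's sources:

    \[ \mathrm{d}u \;=\; b\,\frac{\mathrm{d}w}{w}
       \qquad\Longrightarrow\qquad
       u(w) \;=\; b\,\ln\frac{w}{\alpha} \]

where w is wealth, alpha is the initial wealth at which utility is taken to be zero, and b is a positive constant. The marginal utility b/w is exactly the "increase in utility... inversely proportionate to the quantity of goods already possessed" of the quotation above.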
In other words, Bernoulli is stating that the "value" one places on a single wealth-enhancing event decreases as one gets richer. This is consistent with the lottery ticket example. The concept also explains why a wealthy individual might say to their investment manager, "Remember this, young man, you don't have to make me rich. I am rich already!" (Bernstein 1998) Bernoulli's work goes so far as to offer a mathematical discussion of his concept, which has become the foundation of modern day utility theory. Figure 7-11 shows utility on the vertical axis and monetary amount along the horizontal. Bernoulli selected a logarithmic profile to model the utility of an individual, which provides a smaller gain in utility as one travels to the right towards larger monetary amounts.

Figure 7-11 Utility Profile Set Forth By Daniel Bernoulli (Bernoulli 1738)

But the validity of Bernoulli's utility theory and model should not be limited to discussions pertaining to individuals. They are both relevant and visible within the behavior of organizations, both private and public, and within discussions concerning two or more different individuals. As is the case for many projects, commonly held team goals focus on delivering project scope with the highest levels of quality, "on time," and within "the budget." Other common goals generally involve some statement about safety, effective communications and points related to administrative or procedural matters. But without commenting on the validity or importance of the formal partnering process, which when administered by a highly skilled facilitator can prove to be effective, common goal statements represent a monolithic treatment of the multiple perspectives of the stakeholders. We might have the same common goals on a project, but would it not be reasonable to acknowledge that some of those common goals are held for different reasons?

Project team safety goals, for example, usually express something to the effect of a "safety first" approach to operations (the conversations might express the desire that no mishaps, injuries or deaths occur, etc.). But if one is to step beyond the commonly held notions that human life is valuable and that accidents, injuries and death are unwanted, to examine the follow-on consequences of such events, the picture becomes more complicated. In the case of an accidental worker death on a project site in the United States in the current day, both contractor and owner will likely grieve for the human being and their family in some form or another, and this is appropriate. Also, both entities will aggressively investigate how the accident occurred and take appropriate corrective measures to minimize the risk of recurrence. The owner, meanwhile, might not be immediately concerned about the effect of the accident on the contractor's OSHA rating, the contractor's ability to obtain other construction work, which will rely in part on its safety record, or the contractor's cost of doing business. A contractor, however, will care that a lower OSHA rating could adversely affect its ability to attract future clients, result in a higher cost of doing business due to higher insurance premiums, introduce the prospect of a lawsuit from the deceased's family members, and so on. "Safety First" is but one example of how a seemingly basic and commonly held goal of the project team might be held for different reasons and is indeed a conversation having multiple dimensions. The "On Time" goal is also commonly held but, in its most organic form, for different reasons.
Typically, owners want a project on time for the same reason that they want the project: because it fulfills a basic need of their organization. And while the need is fulfilled by the completion of the project, project timing can have a tremendous influence on the owner's business operations. (1) The manufacturer of a computer chip might need its chip facility in time to manufacture a child's video computer game for the December holiday shopping period. (2) The Washington Nationals professional baseball team may require their new ballpark in time for opening day in April 2008, or face the prospect of playing their home games on the road and losing considerable fan-based revenue. (3) A new resort in Key West, Florida may need to be open not only to generate enough revenue from guests to meet its financial obligations but also to meet certain travel windows such as spring break or the post-holiday travel season. (4) A levee repair may need to be completed at Lake Pontchartrain, Louisiana in advance of the start of the Atlantic hurricane season in August 2006. (5) The National Aeronautics and Space Administration might need a rocket launch complex to be fully functional before a small window of opportunity that would allow a single space probe to achieve rendezvous with multiple planets as it travels beyond the solar system. (6) The Superintendent of the United States Naval Academy might need a comprehensive dormitory renovation project to complete in early August 1998 in order to accommodate 4,000 Midshipmen who are returning from summer cruises to start their academic classes. (7) It might be important to the National Institutes of Health that a promising cure for cancer be developed as quickly as possible under contract with a renowned medical research institution in order to save lives. (8) The U. S. Navy's Office of Naval Research might need to see the rapid completion of a research project into combating the use of "improvised explosive devices" in Iraq. (9) Ad infinitum.

But do the contractors, subcontractors and suppliers, despite what is represented publicly, truly place the same import on these events? The impacts to the owner of late project delivery might dwarf the increase in the contractor's extended general conditions brought about by late delivery. And while contractors are indeed interested in meeting the delivery goals of their clients, these client objectives are not truly "organic" to their own business models. To the extent that the contractor is being compensated for additional time on the project, this goal seems artificial. Perhaps recognizing this concept, construction owners place clauses into the contract which assess "liquidated damages" for late delivery. These are established by the owner within the contract and must reflect the monetary value of the owner's loss of use due to a late delivery, rather than a penal amount. With liquidated damages in place, contractors might wish to be "On Time" for more practical reasons that are "organic" to their organizations.

Owners rarely produce their own project schedule, relying instead upon the CPM produced by the prime contractor for reporting purposes. But recognizing that a general contractor might have made a tactical decision not to fully share its schedule information, these schedules may lack the requisite detail to provide the owner with a practical means of accurately assessing schedule performance.
Rarely, and most often only once a project has already exhibited troubled symptoms, will the owner perform its own schedule assessment. When this decision is made, the trouble is often already in place and the time and effort necessary to produce an independent owner's schedule is significant.

"Within Budget" is a commonly held goal which, in its organic form, is held for different reasons. Unlike "On Time," a goal which can be measured objectively using clocks, calendars and even the critical path method, measuring performance against a budget can also present a multi-dimensional conversation. But for a truly holistic "open book" arrangement whereby each party will see the other party's cost estimates and accounting records for the project, contractor and owner budgets are indeed separate things. Even on the largest "open book" contracts, such as the multi-billion dollar "mega-projects" of the U. S. Department of Energy, the parties are generally not provided with the opportunity to see any or all records associated with the government's estimate or budget. Under firm-fixed price contracts, the owner is not permitted to see the contractor's cost records but for a limited set of circumstances. It is fair to state that instances of contractors and owners openly sharing all cost estimates and accounting data are rare, even for a mega-project.

The cost of a project to an owner who has executed a firm, fixed price contract is in fact fixed. It does not, with limited exceptions, decrease over time. The contractor, meanwhile, may enjoy the effects of such savings, or losses, where the project finishes either early or late. These basic relationships are illustrated in a variation of a "cash curve" from 1957, presented under Figure 7-12. Note the addition of an owner cost "curve" (it is indeed a straight horizontal line) as well as the extension of the contractor curve for a scenario where the owner assesses liquidated damages. The student has named this representation a "scorpion curve" due to its shape.

Figure 7-12 Project Cost Curves for Owner and Contractor (showing the contractor's cost of work under different performance scenarios, the effect of liquidated damages on the contractor's cost of work, the contract amount or "owner cost curve," and the contract completion date)

It is reasonable, therefore, to state that the owner's budget is indeed different than the contractor's budget, at least for a firm, fixed price contract. The owner-centric definition of "budget" prior to contract award would equate to the estimated cost of construction plus an allowance for modifications during the course of the project. After award, a second owner's budget is produced, consisting of the contract award amount plus an allowance for contract modifications. Contractor budgets, meanwhile, account for the contractor's estimated cost of work plus an allowance for uncertainty and profit under what is generally described as "contractor contingency." Under the firm fixed price scenario, once award and contract amount have been established, the contractor is motivated to maximize profit by minimizing the cost of the work. The owner, meanwhile, is motivated, at least "organically," to see that no contract modifications present themselves, thereby increasing the contract amount and depleting owner contingency.
The complexities of the "Within Budget" goal are compounded when one considers that the "Within Quality" goal may at times have an adverse effect on the contractor's budget (i.e. there may be a strong positive correlation between quality and the contractor's cost of work). Rarely, if ever, is a distinction made during formal partnering processes that recognizes the different bases for common goals in schedule, cost or quality.

Figure 7-13 A Utility Model of Project and Program Managers

Bernoulli's model offers a convenient means of modeling these concepts, which are related to both human and organizational behavior. Project managers might place a far higher utilitarian value on monetary gains and losses than their program manager executives. This is true for both the owner and the contractor organizations. Program managers presumably have a larger number of projects, and if one has sustained a loss, there are several others to buffer the effect. Project managers have, perhaps, much more at stake in this regard. Program managers are likely also more focused on future work with the same client and are perhaps calloused to short term fluctuations in project performance because they know that these might correct themselves as time proceeds. Also, project managers are likely younger, less settled professionally and have a shorter resume of noteworthy accomplishments, unlike program managers, who are likely far more secure in their profession. If valid, these points could suggest that the utility profile for a project manager might indeed be different than that of the program manager. This is illustrated in Figure 7-13.

Bernoulli's model can also represent the relationship between owners and contractors, say during the negotiation of a contract modification. It is interesting to note the relative scale and the different positions along the monetary axis for the two parties, despite the same monetary amount of the modification. This is because the "gain" to the contractor is simply the anticipated profit (say 6-7% of the total modification), while the cost to the owner is much greater.

Perhaps one issue that obstructs the project management community from developing an appreciation for these multiple perspectives is the owner centric model of the project life cycle (i.e. define requirements, project funding, design, procure, construct, maintain). Even publications produced by contractor trade associations do little to question this model: "A successful project is one that meets the project owner's needs and expectations and is completed on time and within budget." (Glavinich 2004) The contractor's project or program life cycle is more business related than this conventional treatment allows. Contractors have long term goals of sustained growth of their organizations, and this is achieved most often through maximizing workload and profit while observing certain constraints such as financial capacity, bonding limits and available personnel. This perspective is not captured within the one dimensional "owner-centric" model of the construction industry.

It is also important to consider the relationship between the prime contractor and its subcontractors, who are often great in number and working on other projects simultaneously. Because of this, prime contractors are particularly reticent to provide the subcontractors with a significant amount of flexibility, if any, when providing them with planned periods of performance. Because of this tendency, the subcontractors often are provided with simple static start and finish dates for their work without any information related to float or flexibility. Free float, a measure of the amount of time an activity can be delayed without delaying its immediate successor activity, would provide the subcontractor with this flexibility and allow the project team to enjoy reduced activity duration variability; a small worked example follows.
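The distinction between total float and free float can be shown with a minimal sketch (the four-task network and durations below are hypothetical): free float measures only the room before an activity delays its immediate successor, which is precisely the flexibility a prime contractor could pass to a subcontractor without exposing the rest of its schedule.

    # Hypothetical network: A -> B -> D (critical) and A -> C -> D.
    DUR = {"A": 5, "B": 10, "C": 4, "D": 3}
    PRED = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
    SUCC = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
    ORDER = ["A", "B", "C", "D"]

    early_start, early_finish = {}, {}
    for t in ORDER:                                  # forward pass
        early_start[t] = max((early_finish[p] for p in PRED[t]), default=0)
        early_finish[t] = early_start[t] + DUR[t]

    late_finish = {}
    for t in reversed(ORDER):                        # backward pass
        late_finish[t] = min((late_finish[s] - DUR[s] for s in SUCC[t]),
                             default=early_finish["D"])

    for t in ORDER:
        total = late_finish[t] - early_finish[t]
        # Free float: slack before the earliest successor's early start.
        succ_starts = [early_start[s] for s in SUCC[t]]
        free = (min(succ_starts) if succ_starts else early_finish[t]) \
               - early_finish[t]
        print(t, "total float:", total, "free float:", free)
    # Task C shows total float = free float = 6: it may slip up to six
    # days without disturbing any other party's dates.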
Oddly, the construction industry often does not use this measure, as noted by Glavinich: "The calculation and use of free float is also presented because it is reported by commercially available scheduling software even though seldom used to manage projects in the construction industry." (Glavinich 2004)

This theoretical approach to modeling stakeholder perspectives could supplement the field of project management, just as the field of behavioral economics seeks to predict rational behavior within the financial sector. With this awareness might come a better understanding of opposite stakeholders, thereby reducing the occurrence of ethical and financial collapses such as the recent ones of Enron, Fannie Mae or Adelphia Cable. Some goals are commonly held, but for independent reasons. It is, at least to some extent, the knowledge of perspective and the delicate balance of multi-dimensional relationships that may offer a profound influence on project and program performance. They at least provide a basis for identifying rational behavior.

In his 1967 book I'm OK-You're OK, the psychiatrist Dr. Thomas A. Harris, M.D. provided a presentation of human behavior described as transactional analysis.

"Transactional Analysis constructs the following classification of the four possible life positions held with respect to oneself and others:
1. I'M NOT OK - YOU'RE OK
2. I'M NOT OK - YOU'RE NOT OK
3. I'M OK - YOU'RE NOT OK
4. I'M OK - YOU'RE OK" (Harris 1967)

It is perhaps on the projects where all parties are able to understand the motivations of their business partners so as to facilitate an "OK" status (contractor making a profit, owner receiving its project on time, on budget and within quality, no one injured or killed) that project teams will enjoy the greatest possibility of success.

7.5 History as Rationale for a "New" Approach to Time Management

The historical discussions within this research serve two purposes, the first specific and discrete, the second more general in context. Both flow from a deliberate effort within the research to demonstrate not only that sophisticated management platforms such as CPM and EVMS have limitations which expose them to misrepresentation or manipulation, but that some first principles have been lost and the platforms might not fully integrate with other platforms.

The student's discussion of the 1959 presentation of PERT by Malcolm et al. demonstrated the first, purposeful and specific, use of history. These discussions were provided with the hope of determining whether the original principles of PERT could enhance contemporaneous industry discussions that have become somewhat convoluted, to the extent they are even occurring. The review of Malcolm's article was also an attempt to re-introduce the first principles of network scheduling to see if they might offer solutions. The student believes these discussions provided, at least in part, the rationale for a network based scheduling platform embodying these first principles.
PERT's simple (if not organic) finish-to-start relationships, its repetitive juxtaposition of events and activities, its use of identifiable and measurable events, its attempt to seek time estimates from many separate experts, and its deliberate attempts to remove bias ("built-in" schedule knowledge) by eliciting three time estimates are all things that seem to have been lost with the passage of time, already fifty years.

The discussions of the history of the Cost/Schedule Control Systems Criteria and the Earned Value Management System were important for the same reasons. Like PERT, the Earned Value platform has also evolved since its development within the Minuteman missile program of the 1960s. The historical discussions here were important as they provided two explanations as to how EVMS became vulnerable to manipulation and/or obfuscation. The first explanation was the private industry re-write of the C/SCSC in the mid to late 1990s, after which C/SCSC became known as EVMS. The re-write, while perhaps making the overall system terminology more intuitive for the entry level project manager or senior executive, provided greater latitude to the contractor, who could now take measurements at larger levels of detail within the WBS or cost loaded CPM. Upon study of these changes during the course of this research, concerns over "substitute value," "detachable value," and other manipulations have, in the student's opinion, weakened the overall integrity of the EVMS platform. The second explanation is that while EVMS has evolved, so has the Government's methodology for making progress payments. The student asked why it is that if major procurement projects of the U. S. Government utilize a cost loaded CPM schedule, one does not have "early" and "late" metrics for EVMS. Perhaps it is because, while EVMS has been used for some forty years, the U. S. Government's use of cost loaded CPM schedules is a more recent development which, for some federal agencies, may only date back ten or fifteen years. These points, which support the rationale for an alternative project performance measurement platform, could only have flowed from a historical review and discussion.

The student's more general use of history involved brief discussions of management processes and tools (e.g. the efficiency movement, scientific management and the Gantt chart), the increased involvement of scientists and scientific research during wartime (e.g. World War One, World War Two and the Cold War arms race) and some examples of the common notion that history can be "forgotten" or ignored (e.g. Grand Central Terminal, Bethesda Naval Hospital and the Washington subway system). The latter point, which is perhaps the most general and anecdotal in appearance, is relevant from the standpoint of project management systems and methodologies. If we might neglect or forget the most obvious characteristics of a project, be it physical beauty, origins, or original requirements, what is the likelihood that the less visible management methodologies used during their construction will be remembered? And where the source of these methodologies is no longer immediately apparent because missions have been accomplished, memories fade, and people move on to other jobs, retire or die, can these situations also provide others an opportunity to champion old advances as their own new inventions? Furthermore, does the advent of new approaches and new technology (e.g. Fondahl's PDM and advances in computing technology) also get in the way of a broader understanding of older methods that were successful and might still be successful (e.g. the heavily detailed bar chart on the Manhattan Project)?
And if humankind might be prone or encouraged to embrace new technologies at the expense of older methods, does that not also suggest the possibility that there might be sound methods that have simply been lost to the passage of time?

Also of interest in the historical discussions was the brief mention of the involvement of the scientific community and its role in creating destructive technology before, during and after World War II. How does one explain why some scientists of the period might have rationalized their participation in such projects as those related to the Manhattan Project, the ICBM, or the German V-2 rocket (e.g. Hahn, Heisenberg, Oppenheimer, Fermi, von Braun) while others refused to pursue new destructive technologies, or attempted to hide them from others within the scientific communities (e.g. Rutherford, Einstein, Meitner)? And to the extent these rationalizations can be known, are they indeed any different from those of people who might choose to misrepresent or skew financial or schedule measurement records in a modern corporate setting? These are the larger questions related to this research.

7.7 Chapter Summary

Using the student's definition, the soundest schedule platform provides an accurate recording of work to date and progress as of a given point in time, and provides a reasonable and accurate depiction of the planned approach to remaining work. To the extent the critical path method is used, relationships between individual work activities must be expressed as reasonable reflections of the intended approach and any contractual requirements. However, debates over mathematics, artificial constraints to the project schedule which interfere with the proper network calculation, the diminished involvement of skilled construction personnel in the scheduling process, excessive granularity and improper understandings of the obligations, rights and motivations of other stakeholders are perhaps key ingredients of what has been described by others as a scheduling "crisis" for CPM scheduling. To the extent that these points might be accommodated within an alternative or supplemental methodology, this might represent a contribution.

CHAPTER 8 The Superpath Methodology

"In critical and baffling situations it is always best to recur to first principles and simple action." Sir Winston Churchill (Thomas 2007)

8.1 The Concept and Appearance of Superpath

Figure 8-1 A Superpath Network of 16 Events and Super-Arrows

Superpath represents a methodology for constructing a network of easily distinguishable events and summary level "representations" of tasks for a project or program of any size, in any industry, by any individual, group or team. Perhaps the single most important requirement for conducting the Superpath review is to be physically present on-site in order to make visual confirmation of project status. Also important is a basic understanding of the project's purpose, its major and definable features of work and/or its performance requirements. A required contract completion date is not a requirement of the Superpath methodology and may be included at the discretion of the evaluator(s), but the consideration here is that if the required completion date is known it may introduce a bias to, or "anchor," the expert opinions.
The Superpath platform is a simple network constructed by first identifying the principal events of the endeavor (circles) and then applying connective arrows ("super-arrows") between each event in such a manner that, at a summary level, the event-to-event relationship is captured. It is acceptable to have more than one super-arrow between two events, where the modeler wishes to track two separate things, but this departs slightly from the overall concept, which is to maintain a very large granularity. As with PERT, the tail and head of each super-arrow is connected to an event, time is measured in weeks and only finish-to-start relationships are used. The super-arrows are deliberately not titled "tasks" or "activities" as the purpose of the network is not to focus on the individual work tasks necessary to achieve an event, but only to provide a spatial representation of the relationships and proximity of the events. The events, which are expressed as circles, can be of uniform size or may be sized or given color pursuant to some attribute such as the monetary value of the event, responsible party, area of project, type of work, or other variable. The super-arrows, meanwhile, are representative of work but are deliberately non-specific as to task content. The rationale here is that just as contractor-centric CPM "ignored" events, Superpath, as an event-centric methodology, may "ignore" activities.

Once the summary network is constructed, "conventional" PERT-CPM mathematical calculations are performed to identify the critical superpath(s) and the total float on "non-critical" superpath(s). The non-critical superpaths, which are likely "farther" off the critical path because of the summary level approach, may also be sub-divided into "near critical" and "non-critical" classes as defined by the evaluator(s). Overall, the summarized detail found within Superpath prevents many of the traditional "obfuscations" found in modern CPM schedules that are cultured through excessive granularity (complex, preferential or soft logic, multiple calendars, lags and software settings) and even allows the project team to ignore the paths completely and visualize the events as stars in the night sky. This last visualization is congruent with the "event centric" owner's perspective that has been lacking in CPM and PDM methodologies for some time. Figure 8-2, provided on the following page, introduces this graphical approach.

Figure 8-2 The Transition to Superpath's "Stargaze" or "Night" View

This methodology can be performed using either deterministic or probabilistic treatments of the super-arrow durations, and the resulting Superpath network may be compared to identical events within the contractor's CPM, or simply evaluated independently. The contributions of this approach include: (1) intuitive summary level displays of basic schedule information; (2) a reduction in the number and proximity of non-critical paths, by virtue of the fact that there will be fewer paths within the network, with greater separation; (3) providing the owner with the ability to independently assess progress without the contractor's CPM; (4) allowing the contractor not to be forced into preparing schedule information purely for the purposes of the owner's needs, which is not uncommon; and (5) avoiding the obfuscating effects of CPM software within excessively detailed schedules. Upon performing the conventional mathematical calculations found within the CPM methodology (i.e. the "forward pass" and the "backward pass"), an intuitive "range gaze" conveys the early and late points in time for each event.
Figure 8-3 The "Night View" of "Early" and "Late" Events

Where an event maintains its circular appearance, that event is on the critical superpath, which, because an imaginary path can be drawn through the circles, does not have to be drawn, but for the case where there are superpaths that are close in criticality. The Superpath network may also be viewed with the super-arrows fully visible.

Figure 8-4 The "Day View" of "Early" and "Late" Events

8.2 The Rationale for Superpath

Superpath has its origin in the need for a simplified and independent methodology for evaluating schedule performance while at the same time presenting the information in a much more intuitive fashion that is accessible, both physically and intellectually, without significant effort or cost, and without requiring the engagement of a highly experienced CPM professional or the use of a personal computer. Chapter 7 discussed the technical rationale for Superpath but did not address the human interface. The celestial appearance of Superpath is not coincidental but rather a deliberate effort to treat a summary look at a project as an act of, while not astronomy, at least something similar to stargazing. In stargazing one is typically able to discern information about the deep reaches of space by focusing on distant objects, planets or stars. One might observe where a planet or star is relative to the horizon, or to one's position, or where these objects are in relation to one another, etc. And while it is very difficult to discern any qualitative information about the space in between these stars with the naked eye, or even about the stars themselves without powerful telescopes or images beamed to earth from spacecraft, celestial approaches have provided guidance to mankind, both on land and on the sea, for many centuries.

8.3 The Superpath Methodology (Deterministic Solution)

The Superpath methodology follows the sequence of steps provided by the Polaris PERT program offered in 1959 by Malcolm et al. But for the fact that Superpath interests itself in only a small number of events (the Polaris program's PERT network had over 10,000 events as of late 1958), the process is generally the same and is detailed within subsections 8.3.1 to 8.3.9.

8.3.1 The Flow Plan

A set of events must be selected for the project or program and then placed into a logic flow plan, just as in the Polaris methodology. These events must be readily identifiable and measurable and represent discrete points in time vice things that occur over a longer period. "Building Enclosed," "Start of Foundation Excavation," and "Stormwater Permit Issued" are all examples of events. Examples of tasks related to these events, which are not part of the Superpath model, might be "Install Windows and Doors," "Excavate Foundations," and "Review of Permit by State of Maryland." The distinction between "tasks" and "events," the latter being instantaneous, is an important consideration.

Once the events are selected, the super-arrows must be drawn. These super-arrows do not represent individual tasks, but rather an overall allowance for the one or more tasks that are necessary to accomplish a particular event. And just as Kelley-Walker CPM typically did not provide event descriptions at the nodes within an activity-on-arrow diagram, Superpath does not provide activity descriptions on the super-arrows. This is because the super-arrows are only representative of the basic spatial relationships between two events and not definitive models of the various task(s) that lie between. Dummy arrows are also used in Superpath, consistent with other conventional activity-on-arrow methods. One possible paper-and-pencil record of such a flow plan is sketched below.
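For readers who prefer a concrete notation, a Superpath flow plan can be recorded as nothing more than a set of events and a set of super-arrows between them. The fragment below is a hypothetical illustration of such a record (the event names and week estimates are invented for the example); the methodology itself, of course, requires no computer.

    EVENTS = ["NTP", "Foundations Complete", "Building Enclosed",
              "Project Complete"]

    # Super-arrows: (tail event, head event, elapsed-time estimate in weeks).
    # Each arrow is an allowance for whatever tasks lie between two events,
    # deliberately non-specific as to task content.
    SUPER_ARROWS = [
        ("NTP", "Foundations Complete", 8),
        ("NTP", "Building Enclosed", 20),              # parallel summary path
        ("Foundations Complete", "Building Enclosed", 10),
        ("Building Enclosed", "Project Complete", 10),
    ]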
Dummy arrows are also used in Superpath, consistent with other conventional activity-on-arrow methods.

8.3.2 Elapsed-Time Estimates

Under a deterministic approach, the owner provides individual durations for the time along each Super-Arrow (i.e. between each connected Event, with the exception of the dummy arrows). Although this single time estimate may seem arbitrary, it is consistent with mainstream CPM scheduling, which uses a single time estimate for each activity. Section 8.4 will present a probabilistic methodology for Superpath utilizing three time estimates, and will also address the elicitation of these time estimates.

8.3.3 Organization of Data

Unlike PERT, which used a tabular approach to the solution, Superpath uses a flow plan representation, which is far more conducive to the remaining steps in the methodology, as was demonstrated within chapter three.

8.3.4 The Analysis

With the network constructed graphically, conventional forward and backward passes are performed to determine the early and late dates for the individual events. These processes were presented in chapter three. A handwritten network suffices for this stage.

8.3.5 Computation of "Expected Time" for Events

The early dates for each event are those calculated during the forward pass operation.

8.3.6 Computation of "Latest Time" for Events

The late dates for each event are those calculated during the backward pass operation.

8.3.7 Computation of "Slack" in the System

Total float for each event will have been calculated during the preceding "forward" and "backward" passes.

8.3.8 Identify the "Critical Path" in the Network

The critical Superpath is the path with the smallest value of total float. Where the project or program is projected to complete "on time," this value is zero. Although the critical superpath may be drawn with a darker, or bolder, line, it may also be omitted entirely, as the events that remain circular (i.e. where early and late dates are identical) are the ones that are on the critical superpath.

Figure 8-5 The "Night View" of "Early" and "Late" Events
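Subsections 8.3.4 through 8.3.8 amount to a small quantity of arithmetic. As an optional computed counterpart to the handwritten network (a sketch only, reusing the hypothetical `superpath_events` and `super_arrows` from the sketch in 8.3.1), the following performs the forward and backward passes and reports each event's early time, late time and slack in weeks:

```python
from collections import defaultdict

def analyze(events, super_arrows):
    """Forward and backward passes over an event network.
    `events` is assumed to be listed in a valid precedence order.
    Returns {event: (early, late, slack)} with times in weeks."""
    succs = defaultdict(list)
    for tail, head, weeks in super_arrows:
        succs[tail].append((head, weeks))

    # Forward pass: the earliest time each event can occur.
    early = {e: 0 for e in events}
    for e in events:
        for head, weeks in succs[e]:
            early[head] = max(early[head], early[e] + weeks)

    # Backward pass: the latest time each event may occur without
    # delaying the project end date.
    project_end = max(early.values())
    late = {e: project_end for e in events}
    for e in reversed(events):
        for head, weeks in succs[e]:
            late[e] = min(late[e], late[head] - weeks)

    return {e: (early[e], late[e], late[e] - early[e]) for e in events}

# Events with zero slack lie on the critical superpath.
for event, (es, ls, slack) in analyze(superpath_events, super_arrows).items():
    print(f"{event}: early={es}, late={ls}, slack={slack}")
```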
8.3.9 The Display of Information

Figures 8-6 through 8-8 provide separate "night" views of the Superpath model showing "early events," "late events" and both "early" and "late" events.

Figure 8-6 Early Gaze (top to bottom)
Figure 8-7 Late Gaze (top to bottom)
Figure 8-8 "Range Gaze" Showing Early and Late Extremes

8.4 The Superpath Methodology (Probabilistic Solution)

The Superpath network may also be approached probabilistically. Instead of eliciting a single time estimate for each Super-Arrow, separate time estimates (optimistic, likely and pessimistic) are assigned. In order to maintain a non-computerized solution, though, the question of how to construct probability distributions for each Super-Arrow, and then solve the composite distribution for the final events, must be addressed. James Kelley's 1961 article "Critical-Path Planning and Scheduling: Mathematical Basis," which was described in Chapter 7, provides a solution to this question. Herein Kelley provided a model where each activity was assigned three time estimates with "odds" assigned to each. In order to properly evaluate Kelley's work, the student constructed a network representation of his tabular solution, which is provided again under Figure 8-9.

Figure 8-9 A Simple Network Based Upon James E. Kelley's Tabular Approach

The next two sub-sections will discuss the methodology for identifying probability distributions for each Super-Arrow and the alternative concept for evaluating delays to the project or program, which focuses only on Super-Arrows that are either in progress or about to start. This latter concept is a substantial departure from existing habits within the practice of probabilistic modeling, which would re-calculate all super-arrows within the network, no matter how far into the future they are planned to occur. This methodology finds a practical approach to the complexities of events and activities wherein each event might see any of multiple durations. This probabilistic approach, albeit a modest one, addresses the complexity that burdened and essentially terminated the probabilistic components of the PERT technique. This complexity is visible within the four event network shown in Figure 8-10 which, if each event is limited to three discrete results, has eighteen possible outcomes.

Figure 8-10 Conceptual Illustration of a Simplified PERT Network (showing optimistic, likely and pessimistic points of occurrence)

8.4.1 The Elicitation of Time Estimates for Each Super-Arrow

A phased process for the elicitation of expert opinion in judgmental probabilities is provided by Baecher and Christian:

1. Motivating Phase
2. Training Phase
3. Structuring (deterministic) Phase
4. Assessing (probabilistic) Phase
5. Documenting Phase

The first two phases of this process "intend...to develop rapport with the experts and to explain why and how judgmental probabilities will be elicited and how the results will be used...(and also) has the purpose of making the experts aware of the processes and aids people typically use in quantifying judgmental uncertainties and how well calibrated judgmental probabilities are with respect to observed frequencies of assessed events in the world. The goal of this training is to encourage the experts to think critically about how they quantify judgment and to avoid the common biases encountered in quantifying judgmental probability." (Baecher and Christian 2003) These are relevant considerations not only within the discussions of the elicitation of time estimates for Superpath, but also to accent the oft-missing collaborative processes within modern CPM scheduling described in chapter seven. These first two phases are appropriate for both Superpath's deterministic and probabilistic approaches.

Superpath employs Kelley's device of assigning "odds" to individual completion times, rather than relying upon computer based solutions. Odds making, a science in its own right involving far more intricate expressions of method and result within industries such as gaming, is also simplified in Superpath using a timeline and a set of gaming chips to construct a rudimentary representation of the expert's (or experts') custom probability distribution. The chips associated with Kelley's first activity (1,2) are applied to a variation of an American Roulette table. (Kilby et al. 2005)

Figure 8-11 Assignment of Chip Distribution for a Single Super-Arrow

Although deliberately simplified, Superpath does require a formal "interrogation process," either with oneself or with a competent professional, for the inter-event space (i.e. the Super-Arrow). Because of the gaming chip approach, it is not necessary to limit the individual to assigning the chips to only three points in time, as in PERT and most contemporary simulation applications.
It is also not necessary to limit the resulting distribution to a single peak, as was the case for PERT. It is preferable, rather, to provide no guidance to the individual(s) providing the estimate, other than whatever information might be necessary to ensure that they have a full understanding of the events at either end of the super-arrow that they are studying (and/or whatever other information or discussions are appropriate under the phases identified by Baecher et al.). Allowing a multi-peaked distribution, or any shape, to result from the distribution of the poker chips by the individual or expert(s) is a fundamental tenet of the probabilistic application of Superpath. Beyond its manual solution, one overarching philosophy is that while attempting to closely model both the project and human opinion, the execution must neither alter nor influence either. Although the number could vary, thirty chips seemed to be a number large enough to create an observable distribution "shape," yet not so many as to become a nuisance or a time consuming endeavor. The process, while perhaps too simplistic for some -- a thirty sample simulation is perhaps far too small by any reasonable mathematical standard -- is sound, provided the field based application can impart a basic representation of the expert's opinion. The far greater development within this probabilistic approach is its improved accessibility. The thirty poker chip solution, which may be performed on the tailgate of a project site pickup truck, within the cargo hold of a C-17 military transport, or on the battlefield, is dubbed the "dirty thirty" to represent the grittiness and simplicity of the field based application. It may, of course, be performed without chips using only pencil and paper, or a finger in the dirt. As such it represents a feasible non-computerized solution.

To the extent a computer is available and desired, the visual display of the results of a probabilistic Superpath analysis is summarized in Figure 8-12. In this example, each event was indeed assigned three separate outcomes for the purposes of this illustration (optimistic: green, likely: yellow, pessimistic: red). Note the absence of the yellow circle for non-critical events. This is purely for visual presentation, allowing the critical Superpath to be more readily identifiable. Also of interest is the growing range between the events on the superpath as one travels through the network.

Figure 8-12 Probabilistic Superpath
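For readers who do want a computed counterpart to the "dirty thirty," the sketch below (illustrative only; the chip placements are invented for the example) converts a chip layout on a timeline into an empirical probability distribution and summary statistics for a single super-arrow.

```python
# Hypothetical "dirty thirty" chip layout for one super-arrow: the
# expert places 30 chips on candidate durations (in weeks). Note the
# deliberately permitted second peak at 14 weeks.
chips = {8: 3, 9: 6, 10: 9, 11: 4, 12: 2, 14: 4, 15: 2}  # weeks -> chips

total = sum(chips.values())                      # 30 chips here
pmf = {weeks: n / total for weeks, n in chips.items()}

mean_weeks = sum(weeks * p for weeks, p in pmf.items())
p_within_11 = sum(p for weeks, p in pmf.items() if weeks <= 11)

print(f"mean duration: {mean_weeks:.1f} weeks")          # about 10.7
print(f"P(duration <= 11 weeks): {p_within_11:.2f}")     # about 0.73
```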
8.4.2 Evaluating Delays to the Overall Project or Program

Earlier discussions of meaningful granularity and a "Rolling Wave" approach were intended as topics of discussion purely for the justification of Superpath's summary level approach. But they are also relevant in the conversation related to the probabilistic application of Superpath. If one is to continue to acknowledge the notion that there is greater uncertainty with events as one peers into the future -- something that is fair to assume and seems readily accepted -- how meaningful is it to perform probabilistic evaluations on events that are deep in the future? Say beyond two years, or even six months forward? If one can accept the notion that probabilistic analyses are less meaningful on these "far future" events, why not recognize that there exists some discrete point into the future, a horizon perhaps, beyond which a deterministic method suffices, or might even be more appropriate?

"People, even geotechnical engineers, do not enter a situation with a well-structured, mathematical conception of the probabilities of events pre-formed in their minds. The protocol of assessment must evoke such a structure. Current usage calls this process elicitation. The protocol cannot simply ask a subject to use a number for the probability for an event and expect that numbers so generated will be consistent, coherent, and well calibrated." (Baecher and Christian 2003)

But perhaps unlike the considerations within the classic engineering sciences (e.g. geotechnical, structural, mechanical, electrical, etc.), elicitations in support of a probabilistic scheduling process must account for the fact that much of the subject matter is not only unknown at the time of analysis, but its outcome is heavily dependent upon the future behavior and decisions of many humans and their organizations. Superpath embraces these concerns, adopting a hybrid approach that treats short term events probabilistically and long term events deterministically. This hybrid method measures slippages to the overall project end date by assessing the probabilities on all ongoing super-arrows at the time of the assessment and then applying the probability distribution for the most critical super-arrow to the project's end event. All other super-arrows are treated deterministically.

Figure 8-13 Near Term Probabilistic Approach ("Bow Wave" Method)

This approach can also be described by the terms Near-Term Probabilistic/Long-Range Deterministic, Over the Horizon Deterministic and/or various other combinations or permutations of these terms. The student has titled this probabilistic-deterministic approach "Bow Wave Probability." Philosophically, the most important considerations of the probabilistic techniques in Superpath are: (1) that the probabilistic solution must maintain the non-computerized capability as a primary purpose; and (2) that probabilistic schedule assessments are of limited value when conducted on events or activities that will not occur for an extended period of time. Where the division between the "near future" and "far future" falls is the choice of the "competent engineer(s)" for any given project or program, but perhaps the most intuitive method involves performing probabilistic analyses on only those super-arrows that are under way or are soon to be so.
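As a computed illustration of the "Bow Wave" idea (again an optional aid, not part of the manual method; all numbers are hypothetical), the sketch below combines a chip-based distribution for the one ongoing super-arrow with a deterministic duration for everything beyond the horizon, yielding the probability of on-time completion:

```python
# Hypothetical "Bow Wave" computation: the ongoing super-arrow is
# treated probabilistically (a chip-based PMF over its remaining
# weeks); all super-arrows beyond the horizon stay deterministic.
ongoing_pmf = {4: 0.2, 5: 0.5, 6: 0.2, 8: 0.1}   # weeks -> probability
deterministic_remainder = 40                      # weeks beyond the horizon
deadline = 46                                     # contract weeks remaining

finish_pmf = {w + deterministic_remainder: p for w, p in ongoing_pmf.items()}
p_on_time = sum(p for weeks, p in finish_pmf.items() if weeks <= deadline)

print(f"P(finish within {deadline} weeks) = {p_on_time:.2f}")  # 0.90 here
```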
8.5 Superpath's Relationship to the Critical Path Method

Superpath and CPM are each network based expressions of project elements and their relationships for the same ultimate purpose: to identify the criticality of paths through the network in an effort to monitor and manage time. Since Superpath is intended to supplement, but not replace, CPM, the platforms might co-exist on a single project. A co-existence can be illustrated by creating a Superpath network and overlaying it atop the simplified CPM network used in Chapter 7. See Figure 8-14. It must be noted, however, that a CPM schedule need not be in place in order to implement Superpath. The Case Study within Chapter 9 will demonstrate the use of Superpath without a CPM schedule.

But before proceeding with the physical comparison between a Superpath network and a CPM network that this section will provide, it is important to note that Superpath, in its most basic format, might exist only in the mind of the project manager who has broken the job down into a relatively small number of events, understands the relationships and holds rough approximations of the time requirements for the work involved. Once this is established, Superpath is no different than the approach taken in the kitchen by a master chef, or that of the General Manager of the Washington Nationals while planning and executing the move of the Montreal Expos to Washington, DC in 2005. [28] In committing the Superpath network to a physical network, it is important to note that some consider the identification and prioritization of tasks an "ancient notion," and this would explain why not everyone wants or needs a CPM style network to understand what is on a critical path. (Canby 2007)

Footnote 28: The student contacted the offices of the Montreal Expos/Washington Nationals in 2005 to determine if they would be interested in having a CPM schedule prepared for them as part of this research. The schedule would have been used to support the move of the franchise from Montreal to the District of Columbia within a relatively short period of time. Despite the numerous tasks and events requiring close monitoring by the organization during this period, there was no interest in having a CPM schedule, even at no cost.

Figure 8-14 Small CPM Schedule Network

To create the CPM-Superpath comparison for this simplified network, one would first select a set of events that will be monitored. This is a process requiring the subjective judgment of the evaluator. In this fictitious, generic comparison to a very small CPM network, the evaluator has decided that he is not significantly concerned with the transition between the completion of CPM Task 2 and the start of CPM Task 4. See Figure 8-14. In this particular instance it is because both tasks are in fact of a similar nature and are performed by the same subcontractor, who intends to flow their crews from CPM Task 2 to CPM Task 4. The start of CPM Task 2, however, is an important event and one that the evaluator deems worth monitoring. This might be because he is aware of strained conditions in the local marketplace and believes that there is a high probability that this trade subcontractor might be late in mobilizing to the project. The establishment of an event at the beginning of this task allows him to monitor this concern. This event becomes "Event 4" in the Superpath network. With Superpath Event 4 established, the assessor feels comfortable that the subcontractor will continue on through the completion of Tasks 2 and 4 without a high risk of demobilization. Therefore he elects to place another event at the end of CPM Task 4. This next event is Superpath Event 5. This assumption, too, is arbitrary, relying on the subjective judgment of the evaluator. But it does seem like a logical place to position an event, as the project has two possible areas of work prior to conclusion (Task 3 and Task 5). This reasoning illustrates the thought process in Superpath. These same types of considerations are made throughout the network, resulting in the identification of six Superpath events. See Figure 8-15.
Figure 8-15 Superpath Events Overlaying a Small CPM Schedule

Ordinarily, the CPM would not be so abbreviated and would have a far greater number of activities, so in this case the difference of scale between the detail intensive CPM platform and the summary Superpath network is not apparent. With the Superpath event network and CPM so similar in numbers of events and tasks, the conceptual difference between a superpath and a CPM path -- particularly the spacing of the network paths, which provides one the opportunity to make general and subjective classifications with respect to the criticality of the various paths -- is not illustrated. Also, since the CPM is already "in hand," the summary example shows Superpath events that basically coincide with the starts and finishes of existing CPM activities. This is not always the case and of course would not be possible where a CPM is not in place or available to the evaluator.

With the events in place, connective relationships can be applied to the network of events. This is done by considering the spatial relationships between the events and making connections that will allow the evaluator to assign critical, near critical and non critical superpaths. The Chapter 9 case study will demonstrate how this is performed, but for these purposes, it is assumed that the basic logic of the CPM is congruent with that of the subjective opinions of the evaluator. The overlay of the basic Superpath network before criticality is considered is provided under Figure 8-16. Note the slight shifting of Event 5 (to the left and down), which is intended to provide a more intuitive graphical display of the Superpath relationships than is possible using the CPM bar chart representation. This is possible because the Superpath diagram is not time scaled, allowing for some freedom in the placement of events.

Figure 8-16 View of Early Event Dates Before Criticality is Considered

A more significant difference between Superpath and CPM is illustrated by comparing the space between Superpath Events 4 and 5. Superpath has only one connective link between these two events, while CPM has two activities across this same area of the project. When magnified beyond this rather small generic example, the ability to summarily treat a large number of tasks, if not ignore them altogether, with a super arrow is a major difference between the two platforms. Super arrows do not represent discrete work, but rather are intended to be summary level expressions of the interstitial space between events. This is an abstraction on the very discrete task definitions embodied within CPM networks, and is also a key difference. This point will also be demonstrated in the Chapter 9 case study, where Superpath is applied to a very large project that today would likely have tens of thousands of CPM tasks.

With the network of events constructed and the connective logic in place, the Superpath analysis requires identifying "critical," "near critical" and "non-critical" network paths. Here, one is simply "looking" forward from a particular point in time, evaluating the current project status, what events lie in the future, how they relate to one another and which are driving an overall end date or milestone of interest. In this regard, Superpath and CPM are identical in concept. But in this comparison, the subjective judgment of the Superpath assessor will identify whether the event is "critical," "near critical," or "non-critical."
In CPM, the calculation is far more rigorous and is performed across all activities. Superpath does not require that a formal forward and backward pass exercise be performed. Presumably the evaluator is seasoned enough to identify the critical path without performing these steps, particularly once the major events have been identified and spatially arranged and the critical path is obvious. Where this is not the case, the forward and backward passes are certainly plausible steps to incorporate into the Superpath solution.

Returning to the example, assuming it is March 6, 2008 and Event 1 has just occurred ("Project Start"), Event 2 becomes the first event of interest in the Superpath network. Event 2 is estimated to complete in 10 workdays, or on March 19, 2008. With only one activity in play, the assessor is comfortable enough with the concepts of CPM to assign this event a "critical path" status within the Superpath network. A "critical" super arrow is assigned between Events 1 and 2. Moving forward in the network, the assessor must then evaluate the criticality of the path running through Events 2-4-5-6 and through Events 2-3-6. Assuming the evaluator identifies super arrow durations that are close to those within the CPM, or simply agrees with them, the path through Events 2-4-5-6 (which CPM quantifies as a 26 day duration) will be classified as "critical" and the path through Events 2-3-6 (15 days in CPM) will be classified as "non-critical" as it is not "near" path 2-4-5-6. This is again a subjective assignment, but the relative differences in time would likely provide the evaluator the ability to comfortably make this assignment. An important point here is that while the Superpath evaluator may consider CPM durations in their assessment, a reliance upon them is discouraged.

Figure 8-17 View of Early Event Dates With Super Arrows Assigned

Finally, with each event assigned a classification of critical, near critical or non-critical, it is possible to assign late positions for each event. For critical events, both early and late positions are identical. When the paths are later removed in "star gaze" view, these events will appear as single dots or circles with no connective line. Near critical and non critical path events will have their late events in a different position than their early dates. Where a formal forward and backward pass are not performed, either due to a lack of schedule information or the decision of the evaluator to maintain a summary level approach, the "gap" between early and late event dates is the same length for each near critical event. Non-critical path events are also assigned the same "gap" length, approximately twice that of the "near critical" events. These event "gaps" are illustrated in the completed Superpath network of Figure 8-18.

Figure 8-18 View of Early and Late Event Dates With Super Arrows

Now the network can be viewed with the range gaze in Figure 8-19. Note the two event locations for Superpath Event 3, which indicate that it is non-critical. Events 1, 2, 4, 5 and 6, meanwhile, are in a single location, indicating they are on the critical path of the Superpath network.
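The judgment applied to the two paths above can be restated as a small comparison of path durations. In the sketch below (illustrative only; the "near critical" threshold is an arbitrary choice left to the evaluator, not a value fixed by the methodology), the 15-day path falls well outside the chosen band around the 26-day path and is classified non-critical:

```python
# Hypothetical classification of the example's two paths by duration.
paths = {"2-4-5-6": 26, "2-3-6": 15}               # workdays per path
critical_days = max(paths.values())                 # 26 days
near_threshold = 0.9 * critical_days                # evaluator's own choice

for name, days in paths.items():
    if days == critical_days:
        status = "critical"
    elif days >= near_threshold:
        status = "near critical"
    else:
        status = "non-critical"
    print(f"path {name}: {days} days -> {status}")  # 2-3-6 -> non-critical
```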
Figure 8-19 View of Early and Late Event Dates With Super Arrows

A further advantage of the Superpath network is that the spatial relationships between events are unchanged even where no work has been performed on the project but there exists a mandatory finish constraint. The loss of distinct early and late dates for the non-critical Task 3 was described as a problem in Chapter 7 and is repeated within Figure 8-20. Note that the Superpath model at the bottom of the figure remains consistent with its appearance in Figure 8-19, allowing the evaluator to view Event 3 as non-critical in both cases.

Figure 8-20 Project Status as of 24 MAR 08 with Finish Constraint

8.6 Chapter Summary

This chapter described the Superpath methodology, a summary level network of events where basic relationships between events are expressed using connective logic. Once the network is constructed, subjective classifications of criticality are assigned to each super arrow or network path, providing a Superpath evaluator with at least a conceptual interpretation of what is driving the project completion date. The platform may be considered deterministically and/or probabilistically. It can be maintained separately from a highly detailed CPM schedule, or be integrated into that platform to some degree through the identification and usage of common milestone events. The degree to which this integration exists is at the discretion of the Superpath evaluator. Since Superpath may exist without a CPM, it is fair to consider it as a time management device that could be produced before, during or after a project.

CHAPTER 9 Case Study

Figure 9-1 Photograph of Hoover Dam, Power House and Tunnel Outlets (Adams Unknown)

9.1 Chapter Overview

This chapter describes the application of the concepts of Superpath to the Hoover Dam construction project, built on the Colorado River between the states of Arizona and Nevada during the 1930s. This project represented the largest dam, public project and government contract in the history of the United States at the time of the award on March 20, 1931. This case study demonstrates the overall concept of how even the largest and most complex projects can be reduced into a manageable network of events using either the deterministic or the short-term probabilistic approaches introduced in Chapter 8.

9.2 Pre-Construction History

Hoover Dam was authorized by the U.S. Congress and signed into law by President Calvin Coolidge on December 21, 1928 under the Boulder Canyon Project Act. Hoover Dam was to serve "four major purposes...flood control, water storage, silt control and generation of electrical energy." (U. S. Department of the Interior 1976) The project had been influenced by the loss of farmland in the 1905 flood of the Imperial Valley, which had been induced, at least in part, by man's attempts to utilize the Colorado River as a means of irrigation for "reclaimed" farmland. The 1905 flood, which breached the valley's headgate, would create the Salton Sea out of the Salton Sink, a body of water that remains in place as of 2008. (U. S. Department of the Interior 1955) It is possible to argue that Hoover Dam served a fifth purpose, which was political.
The Hoover Dam project was publicized by both Presidents Hoover and Roosevelt and during the Great Depression gave hope to millions of Americans. (Kramer et al. 2002) The naming of the dam was also influenced by political forces. At the groundbreaking ceremony, President Hoover's Secretary of the Interior made the surprise announcement that the dam would be named "Hoover Dam." (Kramer et al. 2002) Later, President Roosevelt's administration would rename the dam "Boulder Dam," a name that remained in place until 1946, when the U. S. Congress re-established its name as "Hoover Dam." The name has remained unchanged since that time.

The Imperial Valley and the growth of the west, which was constrained by the lack of a dependable supply of water and electricity, were also factors contributing to the project. Engineering studies were conducted by the U.S. Bureau of Reclamation (BuRec) and by 1918 Arthur Powell Davis, Reclamation Director and Chief Engineer, would recommend "control of the Colorado by a dam of unprecedented height" in Boulder Canyon. (U. S. Department of the Interior 1955) Further studies would ensue and it would take four more years for six of the seven states of the Colorado basin to agree to a division of the river's resources, and six more before the project would be authorized by the U.S. Congress. [29] It was not until the aftermath of the Great Flood along the Mississippi River in 1927 that the Boulder Canyon bill received enough support in the U.S. Congress for passage, in exchange for the support of the Mississippi project. (Kramer et al. 2002)

Footnote 29: Secretary of Commerce Herbert Hoover appears to have been instrumental in persuading the states to agree to the "Colorado Compact." (Kramer et al. 2002)

On January 10, 1931 the BuRec issued drawings and specifications for the Hoover Dam project. Six Companies, a Delaware corporation established on February 19, 1931 which represented a joint venture of construction giants that included Bechtel, Kaiser, Morrison-Knudsen and Utah Construction, was the successful bidder at $48,890,955.00, well below the only other responsive bids of $53.9 million (Arundel Corporation) and $58.6 million (Woods Brothers Corporation). Two non-responsive bids included those of Edwin A. Smith of Louisville, Kentucky ("$80,000 less than the lowest bid you get") and the John Bernard Simon Company ("$200 million or 'cost plus 10 percent'"). The Six Companies bid, prepared by Morrison-Knudsen's engineer Frank Crowe, was higher than the U.S. Government estimate by only $24,000. [30] (Stevens 1988) Under the terms of the contract, Six Companies would be required to re-direct the Colorado into diversion tunnels within 2-1/2 years and complete the dam in seven. Measuring from the actual Notice to Proceed (April 20, 1931), these two contractual requirements were October 20, 1933 (Colorado Re-Direction) and April 20, 1938 (Contract Complete).

Footnote 30: The Six Companies bid of $48,890,955 consisted of a 25% contractor contingency for profit atop an estimated total cost of $39,112,764. (Stevens 1988) This breakdown was not provided to the Government at the time of the project, as was and is standard for a lump sum contract.

Separate contracts were issued by the Bureau of Reclamation for the dam's plate steel piping (to Babcock & Wilcox in the amount of $11,500,000.00), electrical generation equipment (to the Allis-Chalmers Company), materiel, equipment and other construction related services. Under separate agreements, BuRec had the Union Pacific Railroad build a spur line from Las Vegas to Boulder City, from which it constructed its own rail line to the site of the dam, which would see 300 rail cars per day at the project's peak. Roads and utilities were also run for the project.
The electrical supply for the project was run from San Bernardino, California, a distance of 222 miles. (U. S. Department of the Interior 1976)

Figure 9-2 U.S. Bureau of Reclamation's Plan View of Hoover Dam (Wikipedia 2008)

The Boulder Canyon Project Act had several requirements, including that the dam project would pay for itself through the sale of power and provide a dependable water supply for irrigation, industry and domestic uses within the seven state Colorado River basin, in addition to its other requirements of flood control and silt control. The City of Los Angeles and Southern California were to be the largest consumers of the power and water, and the act also authorized the construction of the All American Canal, which would supply the Colorado's water via aqueduct to Southern California. Before this time, the supply of water to certain sections of Southern California from the Colorado River flowed through the nation of Mexico. (Stept 1999) The work related to the All American Canal and electrical transmission lines to distant areas was not part of Six Companies' contract scope.

The major features of the Hoover Dam project for which Six Companies was responsible included four diversion tunnels, two spillways, four intake towers and tunnels, the dam, two power stations and two waterworks buildings. With the exception of the dam, the aforementioned project features are either on the "Nevada side" or the "Arizona side" of the river in equal number. Many of these features are visible in Figures 9-3 through 9-5.

Figure 9-3 U.S. Bureau of Reclamation's Section of Hoover Dam (U. S. Department of the Interior 1976)

9.3 Six Companies' Mobilization to the Site of Hoover Dam

Following award and notice to proceed, Six Companies' first phase of the project involved blasting four diversion tunnels in the canyon walls of Black Canyon, on either side of the river, so the Colorado River could be re-directed. With the river out of the way, the dam site would be excavated and built upon. The company set up along the river bottom and work began on the project on a 24 hour, 7 day week work schedule. The Six Companies workers would be permitted to take two optional days off per year, December 25 and July 4, but without pay. (Stept 1999) As the tunneling operations progressed, Six Companies also constructed project support facilities that included two concrete plants, machine shops, an air compressor plant, equipment maintenance facilities, bridge crossings, cableways, a gravel screening plant, a chilled water plant, a settling works to reduce the silt content of the river water used for construction, a 2,000,000 gallon water tank and Boulder City, a worker community eight miles from the project site. (U. S. Department of the Interior 1976)

9.4 Overview of the Superpath Review on the Hoover Dam Project

This Superpath review was performed with limited access to project schedule information. This is in keeping with the concept of Superpath, which is a summary level assessment that can be based upon very limited information (it is not uncommon for independent schedule evaluations to be prepared without the knowledge of an on-site project team, particularly on very large projects or those with multiple stakeholders).
A significant amount of narrative information for Hoover Dam has been produced by the U.S. Department of the Interior's Bureau of Reclamation and other sources, and these were used as the basis for preparing a summary level Superpath network for the project. Since the construction of the dam preceded the advent of the critical path method by at least twenty-five years, there was no CPM schedule in existence.

The flow plan for Hoover Dam was established through the identification and spatial arrangement of events for the definable features of work. It consisted of 56 events with connective super-arrows and restraints (a.k.a. "dummies"). It did not include the generator installation within the power plants, as that work was performed under separate contract, with the last generator installation occurring as late as 1961. The flow plan depicted the spatial relationships between the major definable features of work for the project. Since one of the most spectacular days on the project was November 13, 1932, the day that the first two diversion tunnels were blasted open (i.e. the "last blast") and the Colorado River was re-directed, the night before this date made for an interesting point at which to perform an evaluation.

As of November 12, 1932, the critical Superpath -- in the opinion of the student -- flowed through the re-direction of the Colorado, the pumping and excavation of the dam foundation area, the cleaning of bedrock, the placement of the dam and the buildings, roads and support structures. This is a reasonable assumption and demonstrates a key point in the Superpath method. With a very basic set of events based upon definable features of work and contractor actions, it is hoped that a basic understanding of the time between events, coupled with what are hopefully relatively large amounts of float (when compared to CPM), supports subjective assignments of criticality to each superpath. Where more detailed analysis is desired, detailed forward and backward passes using estimated durations are possible and also less arduous than in "typical" CPM schedules due to the relatively small size of the network.

The assumption that the critical path runs through the dam seems most intuitive, but this must be validated. In Superpath it is fair to assign criticality to one path and then "check it" by reviewing the other paths to see if they might be "as" or "more" critical. In the case of the assumed critical path for the Hoover Dam (through the excavation and construction of the dam), the critical path "encounters" other paths at two events. The first event is where the dam has reached minimum height and the intake gates are closed, and the second event is where the project is complete. If the other paths that flow into these two events are deemed far enough away from the dam path, they are subjectively classified as either "non-critical" or "near critical." In the case of the first event, where the intake gates are closed, the other paths flowing into this event are the completion of the cofferdams and the completion of the two Nevada diversion tunnels. These must be evaluated to determine their proximity to the dam path. Knowing that the river was re-directed on November 13, 1932 and the dam reached minimum height on or before February 1, 1935, this other work (cofferdams and diversion tunnels) would have 26-1/2 months to complete before becoming critical.
Without any other information, it seems reasonable to assert that neither the two cofferdams nor the two Nevada diversion tunnels are close to the critical path, but a review of the actual dates of performance for this other work also supports this assessment. The cofferdams were completed on April 1, 1933 and the two Nevada diversion tunnels were also completed in the spring of 1933. (U. S. Department of the Interior 1976) These two other paths have approximately two years of float. This is perhaps the quintessential example of Superpath. By maintaining a macroscopic perspective, one is able to achieve large "distances" between the paths and make a reasonably reliable subjective assessment.

The same approach is taken for the second event where the dam path interacts with the rest of the network, which is at project completion, or the event titled "Roads and Buildings, Complete." See Figure 9-6. The work on other paths (intake tunnels, intake towers, spillways, power works and needle works) also does not "come close" to the assumed critical path that runs from river diversion, down to the bedrock and up through the top out of the dam. [31] This is because, with the exception of the power house and needle works, these can each progress from a very early point in time (prior to November 1932), as they are somewhat isolated from the activity at the dam, have shorter durations and only tie in to the other work at the very end of the project. The powerhouse and needleworks work, meanwhile, must wait until the bedrock is exposed and the dam is up to a specific height. But the Six Companies scope for these facilities should have a duration that is shorter than the remaining work on the dam, so it is fair to consider work in these two areas as non-critical. Again, their classification as "non-critical" or "near critical" is subjective, but seems reasonable in light of the spacing of the paths and what is known as of November 12, 1932. [32]

Footnote 31: The Six Companies contract did not include installation of the intake tunnel piping and penstocks, which was the responsibility of Babcock and Wilcox.

Footnote 32: If one assumes that the powerhouse and needleworks could not begin until the bedrock was fully exposed (which occurred in June 1933), a conservative assumption, and that these superpaths tied in to the dam work at the very end of the Six Companies project (March 1936), this would have provided thirty-three months to perform this work. Considering that the Six Companies contract was for the buildings only, this would appear to be more than enough time to accomplish this work. Installation of the generators was the responsibility of Allis-Chalmers.

Superpath can be re-done periodically, so the analysis provided in the following section provides a reasonable amount of information for this point in time. Perhaps the most significant result is that it is possible to detail a very complicated project using very few events. Also, the lack of detail inhibits critical path "jumps," because there are indeed very few paths, and perhaps more separation from the critical one than would be possible within a highly detailed schedule. The methodology used for providing this assessment is detailed within section 9.5.
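The float reasoning above reduces to simple date arithmetic. The sketch below (an optional computed aid; the dates are the as-built dates quoted above) derives both the window available to the non-dam work and the approximate cofferdam float:

```python
from datetime import date

river_redirected = date(1932, 11, 13)    # re-direction of the Colorado
intake_gates_closed = date(1935, 2, 1)   # dam at minimum height, gates closed
cofferdams_complete = date(1933, 4, 1)   # as-built cofferdam completion

window_days = (intake_gates_closed - river_redirected).days
float_days = (intake_gates_closed - cofferdams_complete).days

print(f"window before criticality: {window_days / 30.44:.1f} months")  # ~26.6
print(f"cofferdam float: {float_days / 365.25:.1f} years")             # ~1.8
```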
9.5 Deterministic Superpath Review for the Hoover Dam Construction Project

Chapter eight described the Superpath methodology, laying out four basic steps within the deterministic solution. These steps are as follows:

Step 1. Selection of Superpath Events
Step 2. Expression of Relationships Between Superpath Events
Step 3. Classification of Superpaths as Critical, Near Critical or Non-Critical
Step 4. Identification of Early and Late Event Positions

Together, these four steps represent the deterministic Superpath analysis. The following sub-sections of section 9.5 detail these four steps for Hoover Dam.

9.5.1 Selection of Superpath Events

Consideration of the project's definable features of work allows the evaluator to begin to establish a conceptual understanding of the project requirements. While it might be practical to review a project estimate or CPM schedule to identify the definable features of work, it is also possible to simply consider the most basic set of documents to understand where the project is now and where it is going. For Hoover Dam, that might mean a look at the design drawings, which would surely include detailed representations of both the existing conditions and the final dam design. Alternatively, a detailed narrative might also suffice. For an ongoing project, project records, to include project reports or a project schedule, are also possible sources of information when identifying Superpath events. It is up to the evaluator as to which documents are needed. Those more experienced with the type of project might require less information than those who might be unfamiliar or less experienced with the type of work.

In identifying the Superpath events for Hoover Dam, the student relied upon only a basic understanding of the project's definable features of work obtained during reviews of several reference documents, primarily those of the U. S. Department of the Interior. Oregon State University Professor Paul L. Kleinsorge provides what amounts to a listing of the definable features of work for the project: [33]

"The major items of the specifications called for the construction of the dam...four diversion tunnels (two of which would be used later as spillway conduits and two as penstock tunnels), a powerhouse, four intake towers each 30 feet in diameter, two outlet valve houses located on the canyon walls, two overflow spillways each about 650 feet long and connected to the outer diversion tunnels by inclined shafts from 50 to 70 feet in diameter..." (Kleinsorge 1941)

Footnote 33: Kleinsorge's work, which was not reviewed by the student until after his Superpath analysis, is provided here for purposes of clarity only. The student relied upon the plan and section views of the dam provided by the U. S. Bureau of Reclamation as the principal documents for the analysis.

The starts and finishes of all of these definable features of work were identified as Superpath events. The student also identified other events, which will be summarized later in this sub-section. Figures 9-4 and 9-5 provide plan and section views of the Hoover Dam design, allowing one to identify the major definable features of work for the project. Overall, the goal of identifying Superpath events is not simply to "tag" the start and finish of major work, but to create a small set of events which, together, will provide the evaluator the means to assess the basic relationships between the elements of a project.

Figure 9-4 Section View of Hoover Dam (U. S. Department of the Interior 1976)
Figure 9-5 Plan and Elevation of Hoover Dam (U. S. Department of the Interior 1976)

Definable features of work are not the only elements of interest when identifying Superpath events.
Also important are events that are less associated with the finished product and more with project sequence (i.e. how the various "project pieces go together"). In order to identify these other types of events, a basic understanding of how the project must be assembled, or "flow," is necessary. One such example on the Hoover Dam project was the requirement for the dam to reach a specified minimum height during construction before the intake gates to the four diversion tunnels were closed, allowing the water to begin to collect behind the dam. A second example was the requirement to remove water from between the cofferdams prior to excavating down to bedrock in the river bottom. A third example was the requirement for the intake gates to be closed after the dam reached minimum height but before the four diversion tunnels could be plugged and converted into either penstocks or spillway conduits. The student identified events titled "Dam at Minimum Height," "Water Removed from Between Cofferdams" and "Intake Gates Closed" as events that were responsive to these three important requirements. Each is an event describing a necessary action by the contractor, vice an event having a definable feature of work as its basis.

The four diversion tunnels of Hoover Dam were the first major pieces of construction work, following mobilization and other work that could be described as site set up and support facilities. Much of this work was performed under separate government contracts, particularly that which related to the establishment of roads, rails and utilities to the dam site. The student's analysis focused on the work within the Six Companies scope, so these preliminary work items were not included in the Superpath analysis. Each diversion tunnel was over one mile long and 56 feet in diameter. Each was lined with a three foot thick concrete lining, resulting in a 50 foot finished tunnel diameter. Once these were in place, cofferdams could be placed and the Colorado River could be diverted from the site of the future dam. Once the dam reached a minimum height, the intake gates could be closed, and the four diversion tunnels could be plugged with concrete at precise locations and converted to their final use as either penstocks (in the case of the two inner diversion tunnels) or spillway conduits (in the case of the two outer diversion tunnels). A listing of the student's seventeen Superpath events within the diversion tunnel category is provided under Table 9-1.

Table 9-1 Superpath Events Related to the Diversion Tunnels

Once the diversion tunnels were ready to take the flows of the Colorado River, the upper and lower cofferdams could be placed. These two cofferdams isolated the dam site from the river, allowing the trapped river water to be pumped from the future dam site. Next, excavation could commence and, once the bedrock was exposed and cleaned, concrete placement could begin. Once the dam reached a specified minimum height, gates at the entrances of the four diversion tunnels could be closed and water could begin to collect in the reservoir. Work would continue on the dam through the "topping out," installation of roadways and the completion of buildings. The student identified two separate event categories for these portions of the project. The first category contained eight Superpath events related to the cofferdams. The second category contained seven Superpath events related to the Colorado River, Dam and Reservoir.
Listings of these two event categories are provided within Tables 9-2 and 9-3.

Table 9-2 Superpath Events Related to Cofferdams
Table 9-3 Superpath Events Related to the Colorado River, Dam and Reservoir

The water supply for Hoover Dam's electrical generation plant and needle works was provided through a system of intake towers and tunnels. Each of the four intake towers sits atop the granite bedrock, approximately two hundred feet above the floor of the Colorado River prior to construction. The towers extend upward several hundred feet from these locations, two on either side of the reservoir. Between the bottom of the intake towers and the power plant, the intake water runs through steel penstocks, first thirty feet in diameter, then thirteen feet in diameter, before arriving at either one of the electrical generators in the power plant or the Needle Works. These penstocks were installed within tunnels blasted within the granite bedrock. The student identified sixteen Superpath events within the category of Intake Tunnels and Towers.

Table 9-4 Superpath Events for Intake Tunnels and Towers

For protection against overtopping, spillways were designed to carry reservoir water around the dam. There are a total of two spillways, one on each side of the reservoir. The electrical generating stations and needle works are summarized into four separate events. There are a total of eight Superpath events across these areas of the Hoover Dam project.

Table 9-5 Superpath Events for Spillways, Powerhouse and Needle Works

In total, 56 Superpath events were identified for the Hoover Dam project.

9.5.2 Expression of Relationships and Criticality of Superpath Events

Step 2 ("Expression of Relationships Between Superpath Events") and Step 3 ("Classification of Superpaths as Critical, Near Critical or Non-Critical") are combined within this subsection for the purposes of brevity. This sub-section describes the various considerations made when evaluating the spatial relationships between events and their criticality, building upon the basic work descriptions provided within the previous sub-section. Due to the existence of project performance information as of November 1932, some 18 months after contract award, November 12, 1932 is chosen as the date of the Superpath assessment.

Briefly, the choice of November 12, 1932 as the Superpath evaluation date is interesting for other reasons. The date is one day before the "last blasts" in the two Arizona diversion tunnels, the placement of cofferdams and the re-direction of the Colorado River. It is also one week after President Hoover lost the presidential election to Franklin D. Roosevelt, whose political party would also assume the leadership of the Department of the Interior and the Bureau of Reclamation. Why is this important? Because up until this point in time Six Companies had been able to successfully break a worker strike that shuttered all operations on site for one week, fully thwart subsequent efforts by the workers to organize, enjoy "the pick of the nation's labor pool" at very little expense during the Great Depression, operate gasoline powered equipment in the diversion tunnels without ventilation and in direct conflict with both Arizona and Nevada mining laws, [34] and exclude minority workers from the project while offering relatively low wages overall. (Stept 1999) Would the incoming Roosevelt administration react differently to worker issues on site? And if so, would this have an effect on the project schedule?
As of this point the project was eleven months ahead of schedule. [35] Would the new administration influence Six Companies' performance on the project? These are questions that presumably would have been on the mind of Six Companies' General Superintendent Frank Crowe and could be factored into the Superpath review if desired.

Footnote 34: Six Companies successfully argued that the project site was under federal jurisdiction and was not subject to these state laws.

Footnote 35: The Six Companies contract required that it divert the Colorado River no later than October 20, 1933. It would re-direct the river on November 13, 1932.

The Hoover Dam Superpath network is provided under Figure 9-6, reflecting a data date (i.e. "today date") of November 12, 1932. As of this point in time, two of the four diversion tunnels are within one day of being completed and are ready to assume the flow of the Colorado. The two other diversion tunnels (on the Nevada side) are further behind, but they will not be necessary for at least six more months, when the river begins to swell due to the mountain snow melt. [36] Preparatory work related to the cofferdams is also complete, consisting of placing a railroad bridge at the upper cofferdam and equipment at the lower cofferdam for rock placement once the two Arizona diversion tunnels are blasted open. Initial work has also begun in advancement of the construction of the four intake towers and tunnels and the spillways.

Footnote 36: The Colorado River receives three fourths of its flows between the months of April and June. (U. S. Department of the Interior 1976)

Figure 9-6 Superpath Network for Hoover Dam as of November 12, 1932

It is necessary to describe the spatial relationships, or paths, between the events and their criticality. One significant path flows through the "last blast" events in the two Arizona diversion tunnels (after which they are complete), the Phase 1 cofferdam operations (i.e. temporary cofferdams), the re-direction of the Colorado River, the removal of water from between the cofferdams, the excavation of the river bed until the bedrock is fully exposed, the cleaning and preparation of the bedrock for the dam, the placement of concrete, and then on through the construction of the dam and the completion of roads and buildings. There are a total of thirteen events along this path. The placement of the super arrows was based upon the student's general understanding of the project's definable features and a feasible approach to the sequence of construction. Although a critical path status is assigned subjectively to this set of activities at first, the remaining network is analyzed to determine if this classification might change. If we note that the as-built records for the project provide that the intake gates were not closed until February 1, 1935, we know that it was roughly 26-1/2 months between the re-direction of the river and the closure of the gates. This requires the Superpath evaluator to ask himself if there is any other work on the project leading up to the closure of the intake gates that might be pushing this Superpath event. In the subjective opinion of the evaluator, there is not, and the assignment of a critical status to the path through the dam construction is reasonable.

Other areas of the network include the installation of the permanent cofferdams (i.e. Phase 2 cofferdam work), which must be completed prior to the closure of the intake gates at the four diversion tunnels. Since a significant amount of work must be performed on the adjacent dam path before these gates are closed, it is the subjective opinion of the student that the Phase 2 cofferdam work may be classified as "non-critical." It is provided with the "double-dotted" super arrow (see the legend in the upper left corner of Figure 9-6) and restraints are employed to connect the finish events to the follow-on event titled "Intake Gates Closed."
Since a significant amount of work must be performed on the adjacent dam path before these gates are closed, it is the subjective opinion of the student that Phase 2 cofferdam work may be classified as ?non-critical.? It is provided with the ?double-dotted? super arrow (see legend in upper left corner of figure 9-6) and restraints are employed to connect the finish events to the follow-on event titled ?Intake Gates Closed.? Elsewhere in the network, 238 the remaining work on the two Nevada diversion tunnels, while not considered critical by the student, are deemed near-critical as one or both may be needed to accommodate the flows of the river beginning in April 1933 (it is assumed that they will complete well before that point in time). The events associated with the finish of these two activities are also connected to the ?Intake Gates Closed? event using restraints. Work on the intake tunnels and towers are punctuated with start and finish events for each of the four tunnels and the four towers. It is assumed that Six Companies will stagger the starts of each tunnel and tower slightly but the issue of which tunnel or tower should go first, is not considered to be a significant issue for the evaluator as each tower-tunnel pair flows to the exact same areas (on to diversion tunnel plugs and connections and the powerhouse and needleworks). This point is considered minor and should not affect the results of the Superpath analysis. The work associated with plugging the diversion tunnels and plugs is also not considered significant relative to the work that must occur on the dam path following the achievement of ?minimum height.? However, due to the fact that this work must await the closure of the four gates and the project is in the latter stages of construction, it is assigned a near critical path status. Work on the two spillways (Arizona Spillway and Nevada Spillway) were well away from the site of the dam, located towards the top of the canyon on either side of the river bank. Other than having to connect to the diversion tunnels in the latter stages of the project, they may be thought of, summarily anyway and during the first two thirds of the project, as isolated project elements. Because these spillways 239 could start early on in the project, will not take four years to construct and not be required to ?connect up? with the other diversion tunnels until sometime after February 1, 1935, they were assigned a ?non-critical? status by the student. Finally, the events associated with the powerhouse and needleworks were assigned a near critical status. The most significant reason why these facilities were considered non-critical is because the installation of the electrical generators and establishment of a fully functioning power plant was not part of the Six Companies scope. 37 Six Companies ?simply? had to build the power plant and needle works facilities and turn over to others. The project did, however, require that the river be re-directed before commencing with this work, so it has been assigned near critical path status. Remembering that the Colorado was re-directed on November 13, 1932 and the Bureau of Reclamation accepted the project from Six Companies on March 1, 1936, this seems to be a reasonable assignment. (U. S. Department of the Interior 1976) It is noted that two blackened circles appear in the lower right portion of the Superpath network provide under figure 9-6. 
9.5.3 Identification of Early and Late Event Positions

There are two possible approaches to assigning the single lines, or "gaps," between early and late event dates. The first approach requires assigning durations to each super arrow and performing detailed forward and backward pass calculations, just as in manual CPM applications. The second method, and the one used in the Hoover Dam case study, is to subjectively classify each path into one of three broad but uniform categories (critical, near-critical, non-critical). Within these three categories there is no further gradation of criticality. It is feasible to have more, or fewer, than three categories; this too is at the discretion of the evaluator. The legend in the upper left-hand portion of Figure 9-7 illustrates the second approach, showing the non-critical super arrow at approximately twice the length of the near-critical super arrow.
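To make the first, calculation-based approach concrete, the following minimal sketch performs the manual-style forward and backward passes on a four-event network with illustrative durations (these are not the actual Hoover Dam figures), yielding the early and late event positions and the slack that separates them.

```python
# A minimal sketch of the first approach: assign a duration (in weeks) to
# each super arrow and run conventional forward and backward passes to
# obtain early/late event positions and slack, exactly as in a manual CPM
# calculation. Durations are illustrative only.
durations = {
    ("River Re-Directed", "Bedrock Exposed"): 26,
    ("Bedrock Exposed", "First Concrete"): 10,
    ("River Re-Directed", "Spillways Complete"): 20,  # a parallel superpath
    ("Spillways Complete", "First Concrete"): 8,
}
events = {e for pair in durations for e in pair}

# Forward pass: earliest position of each event (repeated enough times to
# propagate through any ordering of the arrow list).
early = {e: 0 for e in events}
for _ in range(len(events)):
    for (u, v), d in durations.items():
        early[v] = max(early[v], early[u] + d)

# Backward pass from the network finish date.
finish = max(early.values())
late = {e: finish for e in events}
for _ in range(len(events)):
    for (u, v), d in durations.items():
        late[u] = min(late[u], late[v] - d)

for e in sorted(events):
    print(f"{e:20s} early={early[e]:2d}  late={late[e]:2d}  slack={late[e] - early[e]}")
```

Run on these illustrative numbers, the spillway path shows eight weeks of slack while the path through bedrock exposure and first concrete shows none, which is precisely the distinction the single-circle versus early/late-pair notation of the Stargaze view expresses.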
Figure 9-7 is the resulting "Stargaze" view of the Hoover Dam Superpath analysis.

Figure 9-7 Hoover Dam Superpath Network as of November 12, 1932

Note that critical Superpath events appear as a single circle, whereas the near-critical and non-critical events are represented by both early and late event dates with a connecting horizontal line. This presentation, while perhaps less impressive on letter-size paper, provides the viewer with an immediate representation of what is critical and what is not.

9.6 Probabilistic Superpath Review for the Hoover Dam Construction Project

With the Superpath network for Hoover Dam already in place, it is possible to perform a probabilistic assessment. As was described in Chapter 8, Superpath employs a probabilistic assessment for the immediate period (the short term) and then treats long term events deterministically. Once performed, the assessment provides the Superpath evaluator with the ability to make a statement concerning the likelihood of project completion by a specific calendar date. With the Superpath network in place from the deterministic solution described in the preceding section, the probabilistic analysis will also be performed as of November 12, 1932, looking forward to the end of the project. The analysis is made along the critical path of the Superpath network, which was also identified in the preceding section. This critical path flows through the "last blasts" in the two Arizona diversion tunnels, the placement of cofferdams, the re-direction of the river, the removal of trapped water, excavation to bedrock, bedrock preparation and dam construction. This path is illustrated in Figure 9-8 and forms the basis of the probabilistic assessment.

Figure 9-8 Critical Path for Probabilistic Superpath Analysis

Next, the first "event of concern" must be selected along the critical path. It is not necessary to select the first event along the critical path; were that the case, the evaluation would be limited to the river re-direction operation, something that will likely take less than two days (it actually took one). Instead, the first "event of concern" along the critical path is the one titled "Excavation Complete, Bedrock Exposed." As with other portions of the Superpath analysis, this selection relies upon the subjective judgment of the evaluator. A review of the project narratives provided by BuRec suggests this is a sound selection, as it appears there was some uncertainty about the extent of the excavation beneath the river bed at the dam site. The actual depth of the bedrock below the existing river bottom turned out to be approximately 40 feet, with the exception of a rather deep gouge in the middle of the river bed that extended down an additional 100 feet. (U. S. Department of the Interior 1976) It is not unreasonable to assume that these actual geological conditions might not have been fully known as of November 12, 1932 and were cause for concern. What if, for example, the gouge in the middle of the river had gone so deep that an entirely different manner of excavation would have been required? Or what if, once the bedrock was fully exposed, it required more preparatory work before concrete placement? To the extent these concerns were legitimate, the Superpath event "Excavation Complete, Bedrock Exposed" seemed an appropriate point at which to end the short term probabilistic assessment. This event, which separates the short term period from the long term period, is named the "horizon event" in the Superpath methodology. See Figure 9-9.

Figure 9-9 Near Term Probabilistic Evaluation as of November 12, 1932

Next, one must treat the time between the data date, November 12, 1932, and the horizon event probabilistically. It is not necessary to develop separate time estimates for each super arrow if there is more than one event between the data date and the horizon event. Rather, the entire short term period is treated as if it had only one super arrow. The student, who in this discussion is pretending he is Six Companies General Superintendent Frank Crowe ("Crowe"), first assesses the current conditions on site in this simulation. A summary look shows him that the two Arizona diversion tunnels are essentially complete and the work crews are prepared to re-direct the Colorado River on the following day. To estimate the time between today and the horizon event ("Excavation Complete, Bedrock Exposed"), "Crowe" has several pieces of information to support what will be a subjective schedule assessment. From having worked several similar dam projects for the Bureau of Reclamation, he has developed a very keen sense for the durations involved. Although Hoover Dam is much larger than any of his other projects, the operations that lie between river re-direction and the exposure of bedrock are identical, and any differences are purely a matter of scale (i.e., the operations will simply take longer). "Crowe" knows the productivity rates that can be achieved and also has productivity records from his other projects to back up his subjective opinions if questioned by Six Companies executives. In his mind the overall duration between river re-direction and the exposure of bedrock should be six months. Interestingly, "Crowe's" number has not changed from the first rough, parametric schedule estimate that he prepared during the pre-award phase, and it has compared favorably with several independent validations by Six Companies personnel. In "Crowe's" mind this number is his "most likely" time estimate. Six months from river re-direction on November 13, 1932 would mean that the horizon event would be achieved on May 13, 1933, or 26 weeks from now.
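"Crowe's" date arithmetic can be checked directly; the short snippet below simply confirms that 26 weekly bins beyond the planned re-direction date land in the week of May 13, 1933.

```python
# A quick check of the date arithmetic only: 26 weeks beyond the planned
# river re-direction of November 13, 1932 falls in mid-May 1933.
from datetime import date, timedelta

re_direction = date(1932, 11, 13)
horizon = re_direction + timedelta(weeks=26)
print(horizon)                                    # 1933-05-14, the week of May 13
print((horizon - date(1932, 11, 12)).days // 7)   # 26 weeks from the data date
```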
The weekly bin size is used for the purposes of this analysis. This is again a subjective selection, but one that seems appropriate given the length of the period and the uncertainties involved. "Crowe" also recognizes the uncertainty in the excavation operations and desires to express some range on his time estimate. Even if no significant unforeseen conditions are encountered during diversion of the river, excavation and preparation of the bedrock, he feels there is some possibility of finishing earlier, and also of finishing later, than 26 weeks. Recognizing that Six Companies operations are basically at their limits, working 24 hours a day, 7 days a week with no days off, and with as many people and as much equipment as possible at the dam site at all times, he believes it is somewhat unlikely that the work can finish significantly earlier than his "most likely" time estimate. He therefore chooses 25 weeks as the optimistic duration for the super arrow. Conversely, he believes that 28 weeks represents the "pessimistic" duration of this upcoming work. "Crowe" now has a range stretching from 25 weeks (optimistic) through 26 weeks (most likely) to 28 weeks (pessimistic). These time estimates do not yet take into account unforeseen events related to recent geotechnical concerns. This first scenario is dubbed the "No Complications" scenario. Finally, "Crowe" must distribute his chips across the weekly bins of 25, 26, 27 and 28 weeks, the results of which are displayed in Table 9-6. He chooses ten chips to express his assessment, just enough to reflect his overall intuition.

Table 9-6 Assignment of Chips Under the "No Complications" Scenario

Next, "Crowe" wants to account for the aforementioned uncertainties related to unforeseen conditions, labor unrest and all else that could go wrong, to the extent this is possible. Instead of treating these issues individually, since he knows there are more than two discrete issue areas that could affect the project, he steps to a higher level of granularity and identifies two other outcomes: Scenario 2, "Complications," and Scenario 3, "More Complications."[38] In doing so, "Crowe" is trying to express his general sense of all the issues that might factor into the time estimate without bogging down. In this approach, Superpath is allowing "Crowe" to go with his subjective assessment, if not his "gut" feel. "Crowe" believes there is a 1-in-3 chance of the "No Complications" scenario occurring and a 2-in-3 chance that the project will experience either "Complications" or "More Complications." The relatively low odds assigned to the "No Complications" scenario are based upon "Crowe's" very recent review of credible geotechnical data revealing a deeper silt pocket of approximately 140-foot depth in the middle of the existing river bed. But although he has just discovered that his excavation might need to be deeper, he has not validated this new information enough to "come off" his original estimate. He also believes the pocket might be very narrow, or the recent information might not be credible.[39] For these reasons he has not yet abandoned the "No Complications" scenario. "Crowe" also treats scenarios two and three deterministically, assigning a 4 week duration to scenario two (with no range) and a 3 week duration to scenario three (with no range). Finally, he places a 1-in-10 chance on scenario three occurring once "complications" of any sort have been realized.

38. "Crowe" could just as readily limit his probability assessment to two issues, and the approach would remain consistent from this point forward.
39. It is not known by the student when Frank Crowe and Six Companies became aware of the deep pocket of silt in the middle of the existing river that extended 140 feet below river bottom. For purposes of this simulation, it is assumed that this information was not yet known as of November 12, 1932.
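The scenario tree that Figure 9-10 (below) depicts reduces to two multiplications, sketched here with the odds just stated; the three scenario probabilities necessarily sum to one.

```python
# A sketch of the scenario tree behind Figure 9-10, using the odds stated in
# the text: a 1-in-3 chance of "No Complications," a 2-in-3 chance of some
# complications, and a 1-in-10 chance that complications escalate to
# "More Complications."
from fractions import Fraction

p_none = Fraction(1, 3)
p_any_complication = Fraction(2, 3)
p_escalate = Fraction(1, 10)

p_complications = p_any_complication * (1 - p_escalate)   # Scenario 2
p_more = p_any_complication * p_escalate                  # Scenario 3

for name, p in [("No Complications", p_none),
                ("Complications", p_complications),
                ("More Complications", p_more)]:
    print(f"{name:20s} {p} = {float(p):.3f}")
# The three probabilities sum to 1. Scenarios 2 and 3 simply shift the basic
# 10-chip distribution 4 and 4+3 weeks to the right, respectively.
```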
Accounting for these three scenarios, Figure 9-10 models "Crowe's" subjective probability assessments.

Figure 9-10 "Crowe's" Assignment of Odds to Three Scenarios

Table 9-7 provides the probability of occurrence for scenarios 2 and 3, along with their "most likely" points of occurrence (+4 weeks and +3 weeks, respectively) if realized.

Table 9-7 Probability of Occurrence for Three Scenarios

Next, "Crowe" duplicates his basic "No Complications" chip model around three points in time (weeks 26, 30 and 33) in proportions dictated by the probability of occurrence of each scenario. Some iteration is required to identify the total number of chips necessary to construct these distributions while maintaining the proper proportions. For this simulation, the total number of chips required is 150. Table 9-8 and Figure 9-11 provide the resulting chip distribution in both tabular and roulette felt format.

Table 9-8 Assignment of Chips for Three Scenarios

Figure 9-11 "Crowe's" Expression of Odds Using the Roulette Felt Format

While it might seem redundant to express the chip distribution found in Table 9-8 on a gaming felt, the student believes that humans (project managers, in the Hoover Dam scenario) are indeed able to start their work at the roulette felt and avoid the various machinations found within the preceding paragraphs of this section. Following this idea, Superpath would allow the project manager to take a set of chips and, with all the issues "on their mind," both conscious and subconscious, assemble a distribution by hand reflecting a subjective quantification of the uncertainty. Such an approach would reflect the very essence of the probabilistic solution to Superpath, but either approach might suffice. It is the opinion of the student that a project manager who is familiar with the project and its risks is capable of assembling such a distribution by hand on the roulette felt, just as he or she might be able to identify the critical path without intensive CPM analysis, or without any network diagram at all.

Recognizing that the overall goal is to provide a statement of the probability of achieving a specific project milestone by a calendar date, the results of the probabilistic assessment must be examined. Using the values for "Cumulative Probability" found on the bottom row of Table 9-8, a cumulative probability curve is constructed for the period under review. See Figure 9-12.

Figure 9-12 Cumulative Probability Curve for the Short Term Period
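Because Table 9-8 is not reproduced here in machine-readable form, the following sketch assumes an illustrative 10-chip base distribution; what matters is the mechanics. The base is replicated at weeks 26, 30 and 33 in the 5:9:1 proportion implied by the scenario probabilities (50, 90 and 10 of the 150 chips), and the running sum of chips yields a cumulative curve of the kind shown in Figure 9-12.

```python
# A sketch of the 150-chip mixture and cumulative curve. The 10-chip
# "No Complications" base below is assumed for illustration (Table 9-6
# gives the actual counts); it is replicated around weeks 26, 30 (+4) and
# 33 (+4+3) in the 5:9:1 proportion implied by probabilities 1/3, 0.6 and
# 1/15 over 150 chips.
base = {25: 2, 26: 5, 27: 2, 28: 1}   # assumed, not from the text
multipliers = {0: 5, 4: 9, 7: 1}      # week shift -> copies of the base

chips = {}
for shift, copies in multipliers.items():
    for week, n in base.items():
        chips[week + shift] = chips.get(week + shift, 0) + n * copies

total = sum(chips.values())           # 150
running = 0
for week in sorted(chips):
    running += chips[week]
    print(f"week {week}: {chips[week]:3d} chips, cumulative {running / total:.3f}")
```

Note that the cumulative value through week 28 recovers the 1-in-3 odds assigned to "No Complications," a useful check on any hand-built felt.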
Since the time between the "horizon event" and project completion is treated deterministically, it is important to consider what time value will be assigned to this later work. Perhaps the most convenient and conservative approach would be simply to retain the original planned duration between the horizon event and the finish milestone from the original as-planned schedule. If one were to retain the original planned duration reflected by the construction contract, that duration would be 48 months.[40] For the Hoover Dam simulation, however, it might also be worth considering the better-than-expected schedule performance through November 12, 1932, which might suggest that the initial schedule estimates used as the basis of the contract period were inflated. If these efficiencies were applied across the remaining 48 month contract period (the river diversion milestone, allowed roughly 30 months, was reached about 11 months early, in roughly 19 months, and 48 x 19/30 = 30.4), the remaining work would take only 30.4 months. Given the relatively large split between these two values, "Crowe" interpolates between the two estimates, taking their midpoint, (48 + 30.4)/2 = 39.2 months, as the duration between the horizon event and contract completion. Table 9-9 provides the cumulative probability of the various completion dates under these three long term scenarios.

40. The Six Companies contract was for seven years and required the re-direction of the Colorado River in 2-1/2 years, with the follow-on work to be completed in 4-1/2. Subtracting "Crowe's" six month estimate for work performed during the short term probabilistic assessment period results in a forty-eight month period beyond the horizon event.

Table 9-9 Cumulative Probabilities for Three Long Term Scenarios

With this information, it is now possible to make statements concerning the probability of on-time completion. One example, using the 30.4 month time estimate column, would be reported as follows: "Based upon a short term probabilistic analysis, the probability of Hoover Dam's completion on or before the week of December 30, 1935 is 87.3%." The same form of statement could be made for the other entries in the table.[41]

41. Hoover Dam was dedicated on September 30, 1935 and accepted by the Bureau of Reclamation on March 1, 1936.

9.7 Chapter Summary

Hoover Dam's summary network analysis presented in this chapter suggests that it is possible to represent even the largest and most complex projects in meaningful summary form. An effective summary model can be constructed and analyzed by either probabilistic or deterministic approaches for the largest of projects.

Figure 9-13 Aerial Photograph of Hoover Dam (U.S. BuRec 2008)

CHAPTER 10 Discussion

10.1 Chapter Overview

This chapter describes the contributions of this dissertation to the project and program management communities.

10.2 Contributions

The contributions of this research are based upon a limited consideration of history and of human and corporate behavior observed by the student during his research, primarily in the field of engineering and construction. The contributions result from research that was as much about treating cost and schedule platforms as extensions of the human being, the project team and the corporate entity (owner or contractor) as it was about the subject matter itself. Consistent with this notion, it became appropriate to consider these technical management platforms as forms of expression rather than inert electronics. They became no different from a conversation, a letter, an e-mail or a telegraph. Project cost and schedule platforms may therefore be as important as human speech, and perhaps more significant, because at the most inconvenient level of detail, down deep in these platforms where observations are difficult and interpretations subjective and uncertain, someone might be saying something they would prefer not to divulge in a more conventional manner. Sending notice of, or misrepresenting, a project condition for later use during litigation or change order negotiation would be the most practical and immediate concern to most in industry.
Presenting an optimistic project picture very early on, in order to pilot the project beyond a point of convenient termination, is another, particularly on the largest technical programs of the U.S. Government. The student's contributions, therefore, were the result of an approach that chose to take "conflict for granted" in order to focus "on the more rational, conscious, artful kind of behavior" that he believes will remain part of the repartee of project and program management at least for the immediate future, if not forever. These behaviors are not "pathological" expressions that lend themselves to treatment.

10.2.1 Contribution #1: On the Role of Conflict and Games in Project Management

The conceptual application to the project and program setting of pre-existing theories of conflict and games set forth by Professors Thomas Schelling, John von Neumann and Oskar Morgenstern. The formal acceptance that project management is a "game" and that conflict is inevitable, to the extent this could be publicly recognized by the U.S. Government and private industry, would represent a paradigm shift for some and might facilitate greater understanding and performance.

10.2.2 Contribution #2: The Importance of Project Management Histories

The discussion of the original works of Henry Laurence Gantt, the U.S. Army Ordnance Bureau, the U.S. Special Projects Office and the U.S. Bureau of Yards and Docks, which are all but absent from the history of operations research and project management as of 2008 in the United States. These original works were relied upon in the development of Superpath. The "discovery" of the BRAC 2005 legislation that would result in the renaming of Bethesda, and the suggestions to elected officials about alternative approaches, also flowed from this research.

10.2.3 Contribution #3: The Problematic Theory of CPM's Parallel Development

The discussion of how the U.S. Navy's deterministic solution to the probabilistic PERT problem "covered" what would later be claimed by three individual members of private industry as their own intellectual content. That these men, two of whom were senior members of a principal contractor within the U.S. Navy's Polaris program, might have developed a "critical path method" independently of the Navy is difficult to reconcile with other historical accounts of the period. This claim has gone unchecked as of 2008. Instead, the idea that CPM and PERT represent two separate concepts has been sustained by a management consulting industry with a vested financial interest in fostering this version of events, as much as by the personal preferences of these three men. That Kelley, Walker and Mauchly "paced" their announcement of the CPM invention (December 1959) until after PERT's limited declassification (sometime before late 1958) and publication (April 1959) suggests that, regardless of who fathered the network schedule concept, these three men were aware of the source of the concepts embodied within CPM. To have published the classified technique prior to the Navy would likely have been an act of treason. This represents a new theory within the history of project management and merits further research and, if supportable, presentation to the U.S. Navy.
10.2.4 Contribution #4: Describing the Loss of the Activity-Event Juxtaposition

The discussion of the significance of the loss of a network model in which long strings of activities, with very few events, make it difficult to obtain objective "fixes" on project status, particularly for owners, executives and even trades personnel who need to know the larger picture but are often not privy to complete renditions of their prime contractor's CPM schedule.

10.2.5 Contribution #5: The Rationale for an Event-Centric Network

The "activity-centric" CPM platforms of 2008 are often prepared in excruciating levels of detail, with thousands of activities, non-intuitive expressions of logic, software settings, and so on. This situation has facilitated dysfunctional and/or inaccurate schedule networks that are either unusable or platforms of manipulation. While the discussion of the symptoms is not new intellectual material, the proposed cure, a summary level event-centric network, offers the potential to eliminate the concerns over dysfunctional or nefarious CPM platforms. The event-centric network is not intended to replace the CPM, but rather to supplement it. Where contractors might thereby regain the ability to use the CPM as their own management tool, rather than as a multi-party platform governed by prescriptive owner scheduling requirements, this is a substantive contribution. The event-centric network, meanwhile, provides the owner with objective and understandable schedule information without reliance upon the contractor, although continued integration remains possible.

10.2.6 Contribution #6: Expressions on the Limitations of Earned Value

The discussion of earned value focused on the limitations, or inherent flaws, of the methodology. The introduction of the concepts of "representative value," "deferred value," "premature value," "substituted value," "detachable value" and the "costing lag" is believed to represent new concepts based upon opportunities for misrepresentation within the current EVMS framework. "Critical value" is also a term intended to express the significance of the "Earned Schedule" indicator, which, in the student's research, appears to be of limited value. While it was not the intent to introduce even more EVMS terminology, these terms may be reserved for discussions of the conceptual limitations of this standard.

10.2.7 Contribution #7: The Summarized Event-Centric Network

The establishment of a meaningful summarized network with the potential for an intuitive interface with the user on a project of any size or complexity is perhaps the most visible contribution of this research. Beyond the appearance of the network is the student's belief that the model is more useful than a highly detailed schedule because it is a better representation of an individual's understanding of the project. This would allow far more individuals to understand, manage or simply observe time in a very intuitive manner.

10.2.8 Contribution #8: The Non-Computerized Solution to Probabilistic Scheduling

The adaptation of a roulette felt, gaming chips and the "short term" probabilistic approach is significant for at least three reasons. (1) Its simplicity makes it more accessible to the experts for each super-event, regardless of their past experience with or understanding of probability theory. (2) The effort is non-computerized. (3) The approach, which combines probabilistic treatment of short term events with deterministic treatment of long term events, expresses a practical treatment of the greater uncertainty associated with far-off events while maintaining a simplified approach.
The second point suggests that the non-computerized probabilistic approach provides a modest supplement to the 1961 non-computerized, deterministic solution of Stanford University Professor John W. Fondahl for the U.S. Navy's Bureau of Yards and Docks. It is possible that the U.S. Naval Facilities Engineering Command might have an interest in this contribution.

10.3 Close

Where the situational awareness of project and program management bodies is positively influenced by any of these presentations, it is hoped that this might be the single most significant contribution of this research.

Appendix A Correspondences of Albert Einstein and President F. D. Roosevelt

(Wikipedia 2008)

"THE WHITE HOUSE, WASHINGTON, October 19, 1939. My Dear Professor, I want to thank you for your recent letter and the most interesting and important enclosure. I found this data of such import that I have convened a board... Please accept my sincere thanks. Very Sincerely Yours, Franklin Roosevelt" (Bodanis 2000)
Appendix B The 41 Fleet Ballistic Missile Submarines of the U. S. Fleet, 1967

Number Name Builder
1 SSBN 598 GEORGE WASHINGTON  General Dynamics (Electric Boat)
2 SSBN 599 PATRICK HENRY  General Dynamics (Electric Boat)
3 SSBN 600 THEODORE ROOSEVELT  Mare Island Naval Shipyard
4 SSBN 601 ROBERT E. LEE  Newport News
5 SSBN 602 ABRAHAM LINCOLN  Portsmouth Naval Shipyard
6 SSBN 608 ETHAN ALLEN  General Dynamics (Electric Boat)
7 SSBN 609 SAM HOUSTON  Newport News
8 SSBN 610 THOMAS A. EDISON  General Dynamics (Electric Boat)
9 SSBN 611 JOHN MARSHALL  Newport News
10 SSBN 616 LAFAYETTE  General Dynamics (Electric Boat)
11 SSBN 618 THOMAS JEFFERSON  Newport News
12 SSBN 619 ANDREW JACKSON  Mare Island Naval Shipyard
13 SSBN 620 JOHN ADAMS  Portsmouth Naval Shipyard
14 SSBN 617 ALEXANDER HAMILTON  General Dynamics (Electric Boat)
15 SSBN 622 JAMES MONROE  Newport News
16 SSBN 624 WOODROW WILSON  Mare Island Naval Shipyard
17 SSBN 623 NATHAN HALE  General Dynamics (Electric Boat)
18 SSBN 625 HENRY CLAY  Newport News
19 SSBN 626 DANIEL WEBSTER  General Dynamics (Electric Boat)
20 SSBN 629 DANIEL BOONE  Mare Island Naval Shipyard
21 SSBN 627 JAMES MADISON  Newport News
22 SSBN 636 NATHANAEL GREENE  Portsmouth Naval Shipyard
23 SSBN 628 TECUMSEH  General Dynamics (Electric Boat)
24 SSBN 630 JOHN C. CALHOUN  Newport News
25 SSBN 634 STONEWALL JACKSON  Mare Island Naval Shipyard
26 SSBN 631 ULYSSES S. GRANT  General Dynamics (Electric Boat)
27 SSBN 632 VON STEUBEN  Newport News
28 SSBN 635 SAM RAYBURN  Newport News
29 SSBN 633 CASIMIR PULASKI  General Dynamics (Electric Boat)
30 SSBN 641 SIMON BOLIVAR  Newport News
31 SSBN 642 KAMEHAMEHA  Mare Island Naval Shipyard
32 SSBN 640 BENJAMIN FRANKLIN  General Dynamics (Electric Boat)
33 SSBN 644 LEWIS AND CLARK  Newport News
34 SSBN 643 GEORGE BANCROFT  General Dynamics (Electric Boat)
35 SSBN 645 JAMES K. POLK  General Dynamics (Electric Boat)
36 SSBN 654 GEORGE C. MARSHALL  Newport News
37 SSBN 655 HENRY L. STIMSON  General Dynamics (Electric Boat)
38 SSBN 658 MARIANO G. VALLEJO  Mare Island Naval Shipyard
39 SSBN 656 GEORGE WASHINGTON CARVER  Newport News
40 SSBN 657 FRANCIS SCOTT KEY  General Dynamics (Electric Boat)
41 SSBN 659 WILL ROGERS  General Dynamics (Electric Boat)
(Polmar 1978)

Appendix C Actual Production Milestones for George Washington Class Submarine

Number Name Laid Down Launched Commissioned
1 SSBN 598 GEORGE WASHINGTON  11/1/1957  6/9/1959  12/30/1959
2 SSBN 599 PATRICK HENRY  5/27/1958  9/22/1959  4/9/1960
3 SSBN 600 THEODORE ROOSEVELT  5/30/1958  10/3/1959  2/13/1961
4 SSBN 601 ROBERT E. LEE  8/25/1958  12/18/1959  9/16/1960
5 SSBN 602 ABRAHAM LINCOLN  11/1/1958  5/14/1960  3/11/1961
6 SSBN 608 ETHAN ALLEN  9/14/1959  11/22/1960  8/8/1961
7 SSBN 609 SAM HOUSTON  12/28/1959  2/2/1961  3/6/1962
8 SSBN 610 THOMAS A. EDISON  3/15/1960  6/15/1961  3/10/1962
9 SSBN 611 JOHN MARSHALL  4/4/1960  7/15/1961  5/21/1962
10 SSBN 616 LAFAYETTE  1/17/1961  5/8/1962  4/23/1963
11 SSBN 618 THOMAS JEFFERSON  2/3/1961  2/24/1962  1/4/1963
12 SSBN 619 ANDREW JACKSON  4/26/1961  9/15/1962  7/3/1963
13 SSBN 620 JOHN ADAMS  5/19/1961  1/12/1963  5/12/1964
14 SSBN 617 ALEXANDER HAMILTON  6/29/1961  8/18/1962  6/27/1963
15 SSBN 622 JAMES MONROE  7/31/1961  8/4/1962  12/7/1963
16 SSBN 624 WOODROW WILSON  9/13/1961  2/22/1963  12/27/1963
17 SSBN 623 NATHAN HALE  10/2/1961  1/12/1963  11/23/1963
18 SSBN 625 HENRY CLAY  10/22/1961  11/30/1962  2/20/1964
19 SSBN 626 DANIEL WEBSTER  12/28/1961  4/27/1963  4/9/1964
20 SSBN 629 DANIEL BOONE  2/6/1962  6/22/1963  4/23/1964
21 SSBN 627 JAMES MADISON  3/5/1962  3/15/1963  7/28/1964
22 SSBN 636 NATHANAEL GREENE  5/21/1962  5/12/1964  12/19/1964
23 SSBN 628 TECUMSEH  6/1/1962  6/22/1963  5/29/1964
24 SSBN 630 JOHN C. CALHOUN  6/4/1962  6/22/1963  9/15/1964
25 SSBN 634 STONEWALL JACKSON  7/4/1962  11/30/1963  8/26/1964
26 SSBN 631 ULYSSES S. GRANT  8/18/1962  11/2/1963  7/17/1964
27 SSBN 632 VON STEUBEN  9/4/1962  10/18/1963  9/30/1964
28 SSBN 635 SAM RAYBURN  12/3/1962  12/20/1963  12/2/1964
29 SSBN 633 CASIMIR PULASKI  1/12/1963  2/1/1964  8/14/1964
30 SSBN 641 SIMON BOLIVAR  4/17/1963  8/22/1964  10/29/1965
31 SSBN 642 KAMEHAMEHA  5/2/1963  1/16/1965  12/10/1965
32 SSBN 640 BENJAMIN FRANKLIN  5/25/1963  12/5/1964  10/22/1965
33 SSBN 644 LEWIS AND CLARK  7/29/1963  11/21/1964  12/22/1965
34 SSBN 643 GEORGE BANCROFT  8/24/1963  3/20/1965  1/22/1966
35 SSBN 645 JAMES K. POLK  11/23/1963  5/22/1965  4/16/1966
36 SSBN 654 GEORGE C. MARSHALL  3/2/1964  5/21/1965  4/29/1966
37 SSBN 655 HENRY L. STIMSON  4/4/1964  11/13/1965  8/20/1966
38 SSBN 658 MARIANO G. VALLEJO  7/7/1964  10/23/1965  12/16/1966
39 SSBN 656 GEORGE WASHINGTON CARVER  8/24/1964  8/14/1965  6/15/1966
40 SSBN 657 FRANCIS SCOTT KEY  12/5/1964  4/23/1966  12/3/1966
41 SSBN 659 WILL ROGERS  3/20/1965  7/21/1966  4/1/1967
(Polmar 1978)

Appendix D Tables for the Normal Probability Distribution (Clemen and Reilly 2001)

Appendix E Draft Letter from J. W. Mauchly to Remington Rand Executive (Goldschmidt and Akera 2008)

Appendix F C/SCSC versus ANSI/EIA EVMS Standard (Fleming and Koppelman 2000)
Glossary

ANSI  American National Standards Institute
CPM  Critical Path Method
C/SCSC (also written CS2)  Cost/Schedule Control Systems Criteria
CV  Cost Variance
DOD  Department of Defense (U. S. Government)
EAC  Estimate at Completion
EV  Earned Value
EVMS  Earned Value Management System
MAD  Mutually Assured Destruction
NASA  National Aeronautics and Space Administration (U. S. Government)
PERT  Program Evaluation Research Task (prior to mid-1958)
PERT  Program Evaluation Review Technique (mid-1958 onwards)
PMBOK  Project Management Body of Knowledge
PMI  Project Management Institute
PV  Planned Value
SV  Schedule Variance

Bibliography

Adams, A. (Unknown). "Hoover Dam Photograph."
Alford, L. P. (1934). Henry Laurence Gantt, Leader in Industry, American Society of Mechanical Engineers, New York.
American Institute of Physics. (2008). "From Graduate Studies to Bomb Design, 1945-1950." American Institute of Physics, College Park.
Associated General Contractors of America. "U. S. GSA Conference Agenda." Federal Conference, Washington, DC.
Baecher, G. B., and Christian, J. T. (2003). Reliability and Statistics in Geotechnical Engineering, Wiley, Hoboken.
Bernoulli, D. (1738). "Exposition of a New Theory on the Measurement of Risk." Econometrica, 22(1), 23-36.
Bernstein, P. L. (1998). Against the Gods, Wiley, New York.
Bildson, R. A., and Gillespie, J. R. (1962). "Critical Path Planning - PERT Integration." Operations Research, 10(6), 909-912.
Billings, C. W. (1989). Grace Hopper: Navy Admiral and Computer Pioneer, Enslow, Hillside.
Bodanis, D. (2000). E=MC2: A Biography of the World's Most Famous Equation, Walker & Company, New York.
Business Week. (1962). "Shortcut for Project Planning." Business Week.
Canby, S. L. (2007). "Interview." Potomac.
Chen, A. T. (1989). "Applying Earned Value Procedure to Engineering Management." AACE Transactions.
Clemen, R. (2001). Making Hard Decisions, Duxbury, Pacific Grove.
Clemen, R. T., and Reilly, T. (2001). Making Hard Decisions, Duxbury, Pacific Grove.
Corovic, R. (2007). "Why EVM is Not Good for Schedule Performance Analyses (and how it could be...)."
E. I. du Pont de Nemours & Company, Inc. (1945). "Construction, Hanford Engineering Works, History of the Project." E. I. du Pont de Nemours & Company, Inc., Wilmington, 1-1385.
E. I. du Pont de Nemours & Company, Inc. (2008). "Corporate Web Page." Newark.
Fleming, Q., and Koppelman, J. (2000). Earned Value Project Management, Project Management Institute, Newtown Square.
Fondahl, J. W. (1964). "Methods for Extending the Range of Non-Computer Critical Path Applications." Stanford University, Stanford.
Fourre, J. P. (1968). "Critical Path Scheduling, A Practical Appraisal of PERT." AMA Bulletin, 114, 1-16.
Fulkerson, D. R. (1961). "A Network Flow Computation for Project Cost Curves." Management Science, 7(2), 167-178.
Fulkerson, D. R. (1962). "Expected Critical Path Lengths in PERT Networks." Operations Research, 10(6), 808-817.
Gantt, H. L. (1974). Work, Wages, and Profits, Hive Publishing Company, New York.
Garamone, J. (2008). "Bush Sends Budget to Congress." The Journal, National Naval Medical Center Bethesda.
Gelb, A., Rosenthal, A. M., and Siegel, M. (1988). Great Lives of the Twentieth Century, The New York Times, New York.
Gilbreth, F. B. (1911). Motion Study, A Method for Increasing the Efficiency of the Workman, Van Nostrand, New York.
Glavinich, T. E. (2004). Construction Planning and Scheduling, Associated General Contractors of America, Washington.
Goldschmidt, A., and Akera, A. (2008). "John W. Mauchly and the Development of the ENIAC Computer." Penn Library Exhibitions, University of Pennsylvania, Philadelphia.
Gould, F. (1997). Managing the Construction Process: Estimating, Scheduling, and Project Control, Prentice-Hall, Upper Saddle River.
Grand Central. (2008). "Grand Central Terminal." New York.
Greenewalt, C. H. (1942). "Unclassified Portions of C. H. Greenewalt's Notes." Du Pont Corporation, Newark.
Groves, L. R. (1962). Now It Can Be Told: The Story of the Manhattan Project, Harper, New York.
Grubbs, F. E. (1962). "Attempts to Validate Certain PERT Statistics or 'Picking on PERT'." Operations Research, 10(2).
Harris, T. (1967). I'm OK-You're OK, Avon, New York.
Janis, J. H., and Thompson, M. H. (1972). New Standard Reference for Secretaries and Administrative Assistants, Collier-MacMillan, New York.
Johnson, S. B. (2002). The Secret of Apollo, The Johns Hopkins University Press, Baltimore.
Kelley, J. E. (1961). "Critical-Path Planning and Scheduling: Mathematical Basis." Operations Research, 9(3), 296-320.
Kelley, J. E. (1964). "The Nature and Use of Critical Path Method." Industrial Management, 4-14.
Kelley, J. E. (2003). "Letter to Editors of Engineering News Record." Engineering News Record.
Kelley, J. E., and Walker, M. R. (1959). "Critical-Path Planning and Scheduling." Eastern Joint Computer Conference, Boston.
Kilby, J., Fox, J., and Lucas, A. F. (2005). Casino Operations Management, Wiley, Hoboken.
Kinnane, A. (2002). Du Pont: From the Banks of the Brandywine to Miracles of Science, E. I. du Pont de Nemours and Company, Wilmington.
Kleinsorge, P. L. (1941). The Boulder Canyon Project, Historical and Economic Aspects, Stanford University Press, Stanford.
Korman, R., and Daniels, S. H. (2003). "Critics Can't Find the Logic in Many of Today's CPM Schedules." Engineering News Record.
Kramer, S., Polin, D., and Morreale, A. (2002). "A Tale of Two Rivers." Great Projects, The Building of America, National Academy of Engineering, United States.
Lang, D. W. (1977). Critical Path Analysis, Hodder and Stoughton, Kent.
Malcolm, D. G., Roseboom, J. H., Clark, C. E., and Fazar, W. (1959). "Application of a Technique for Research and Development Program Evaluation." Operations Research, 7(5), 646-669.
Moder, J. J., and Phillips, C. R. (1964). Project Management with CPM and PERT, Reinhold, New York.
Muth, J. F., and Thompson, G. L. (1963). Industrial Scheduling, Prentice-Hall, Englewood Cliffs.
National Geographic. (2008). "Photograph." National Geographic, Washington DC.
O'Brien, J. J., and Plotnick, F. L. (1999). CPM in Construction Management, McGraw-Hill, New York.
O'Conner, M. (2008). "University of Maryland Risk Group Meeting." College Park.
Parsch, A. (2008). "Lockheed UGM-27 Polaris." Directory of U.S. Military Rockets and Missiles.
PMI. (2000). A Guide to the Project Management Body of Knowledge, Project Management Institute, Newtown Square.
PMI. (2004). A Guide to the Project Management Body of Knowledge, Project Management Institute, Newtown Square.
Polmar, N. (1978). The Ships and Aircraft of the U. S. Fleet, Naval Institute Press, Annapolis.
Revay, S. G. (1987). "Calculating Impact Costs." International Business Lawyer, 400-409.
Sapolsky, H. M. (1972). The Polaris System Development, Bureaucratic and Programmatic Success in Government, Harvard University Press, Cambridge.
Schelling, T. C. (1960). The Strategy of Conflict, Oxford University Press, London.
Schleip, W., and Schleip, R. (1972). Planning & Control in Management: The German RPS System, Peter Peregrinus, Dusseldorf.
"Defense and Survival in the Nuclear Age." Washington, D.C. Stephens, J. E. (1988). Hoover Dam, an American Adventure, University of Oklahoma Press, Norman. Stept, S. (1999). "Hoover Dam." American Experience, Public Broadcasting Servce, United States. Swanson, D. P., and Gibb, H. R. (1978). The Historical Records of the Components of Conrail, A Survey and Inventory, Eleutherian Mills Historical Foundation, Wilmington. Taylor, F. W. (1911). Scientific Management, Greenwood, Westport. Thomas, T. H. C. (2007). "C-SPAN speech." Time. (1960). "Power for Peace." Time Magazine. Tversky, A., and Kahneman, D. (1974). "Judgment Under Uncertainty: Heuristics and Biases." Science, 185(4157), 1124-1131. U. S. Army. (2007). Design Build Contract, Fort Bragg, North Carolina, Savannah. U. S. Defense Acquistion University. (2006). "Defense Acquisition University Gold Card." Fort McNair. U. S. Department of Defense. (2005). "BRAC Report." Arlington. 280 U. S. Department of Defense. (2006). "Earned Value Management Implementation Guide." D. o. Defense, ed. U. S. Department of the Interior. (1955). The Story of Hoover Dam, Washington, DC. U. S. Department of the Interior. (1976). Construction of Hoover Dam, KC Publications, Las Vegas. U. S. General Accounting Office. (1997). "Significant Changes Underway in DOD's Earned Value Management Process." U. S. G. A. Office, ed., U. S. General Accounting Officer, 31. U. S. General Services Administration. (1999). Federal Acquisition Regulation, USG. U. S. Navy. (2007). "Contract N40085-07-R-1900, SOF Marine Corps Special Operations Command (MARSOC) Complex." N. F. E. Command, ed. U. S. Navy Office of Naval Research. (2008). "File Photographs." Arlington. USG. (2000). "Contract N44255-97-C-5515, Naval Air Station Whidbey Island." N. F. E. Command, ed. USG. (2006). "Earned Value Management Implementation Guide." D. o. Defense, ed. USG. (2007). "Contract N40085-07-R-1900, SOF Marine Corps Special Operations Command (MARSOC) Complex." N. F. E. Command, ed. van Slyke, R. M. (1963). "Monte Carlo methods and the PERT Problem." Operations Research, 11(5), 839-860. von Neumann, J., and Morgenstern, O. (1953). Theory of Games and Economic Behavior, Princeton University Press, Princeton. Wikipedia. (2006). "Walter Reed."