ABSTRACT

Title: RELIABILITY CAPABILITY EVALUATION FOR ELECTRONICS MANUFACTURERS
Sanjay Tiku, Doctor of Philosophy, 2005
Directed By: Professor Michael G. Pecht, Department of Mechanical Engineering

During the last decade of the twentieth century, competitive and regulatory pressures drove all types of electronics manufacturers toward low-cost manufacturing and the evolution of a worldwide supply chain. Because reliability is a risk factor associated with profit making, it is essential that reliability be managed across all tiers of the supply chain. System integrators, who are at the top of the supply chain, generally set the requirements for system reliability. However, they cannot wait until they receive the parts or sub-assemblies to assess whether they are reliable; this can be an expensive, iterative process. An upfront evaluation of suppliers based on their ability to meet reliability requirements can provide valuable competitive advantage.

This dissertation introduces a set of key practices that can be used to assess whether an organization has the ability to design, develop and manufacture reliable electronic products. This ability is defined in terms of a reliability capability maturity metric, which is a measure of the practices within an organization that contribute to the reliability of the final product, and of the effectiveness of these practices in meeting the reliability requirements of customers. To validate the theoretical model for reliability capability evaluation, psychometric methods based on statistical multivariate correlational analysis were used. Psychometric methods are rigorous statistical tools that are used to construct theoretical instruments which measure abstract organizational variables. The result of the analysis is a list of tasks that are critical to reliability for an electronics company. Comparative weighting factors have also been obtained empirically for the reliability tasks.

The dissertation presents a procedure for evaluating and benchmarking the reliability capability of electronics companies. Five levels of maturity are defined in terms of the reliability tasks associated with each level. Evaluation results are presented for reliability capability benchmarking of an electronics company as a case study. A methodology is also presented to evaluate the reliability capability of a printed circuit board (PCB) assembly manufacturer. The methodology determines the manufacturing capability of an assembler and then evaluates the maturity of practices affecting reliability to assign a reliability capability maturity score to the assembler.

RELIABILITY CAPABILITY EVALUATION FOR ELECTRONICS MANUFACTURERS

By Sanjay Tiku

Dissertation submitted to the Faculty of the Graduate School of the University of Maryland, College Park, in partial fulfillment of the requirements for the degree of Doctor of Philosophy, 2005

Advisory Committee:
Professor Michael Pecht, Chair
Professor Mohammed Modarres
Professor Bilal Ayyub
Associate Professor Peter Sandborn
Associate Professor Patrick McCluskey

© Copyright by Sanjay Tiku 2005

Acknowledgements

This dissertation is the culmination of nearly four years of a research journey during which I have been accompanied and supported by many people. Here I express my gratitude to all of them. I am highly grateful to my advisor, Dr. Michael G. Pecht, who has provided a highly disciplined and energetic lead to follow.
Besides his guidance on academic matters, we have shared other interests that have enriched my life during the last few years. I would also like to thank Dr. Davinder Anand, Dr. Mohammed Modarres, Dr. Bilal Ayyub, Dr. Peter Sandborn, Dr. Patrick McCluskey, Dr. Gilvan Souza, and Dr. Robert Rivett for kindly consenting to be a part of my dissertation committee and providing their time, suggestions and comments. I also want to give my special thanks to Dr. Diganta Das, Dr. Michael Azarian, Dr. Sanka Ganesan, and Dr. Ji Wu, who have advised me during my research, and provided valuable inputs. I am grateful to all my friends in CALCE who come from many nationalities of the world. This microcosm of diversity has introduced me to many cultures, traditions and food. I would like to thank Yuki Fukuda, Jia Jiang, Tong Fang, Yu Chul Hwang, Eric Stelrecht, Paul Casey, Reza Keimasi, Yuliang Deng, Jake Yang, and many others who provided this enriching experience. And of course, mention must be made of the Indian crowd, a group that always sought to dissect the past to seek directions for future. My group includes Niranjan, Vidya, Anoop, Sachi, Arindam, Sathya, Nikhil, Anupam, Anshul and Rajeev. Without their being around, the last few years would have felt much longer. ii I would also like to express my thanks to all colleagues and other people in the CALCE Electronic Products and Systems Center for their valuable help and friendship including Dr. Osterman, Ania, David, Raju, Cindy, Joan, and Howard. Last and most certainly the most, I am especially indebted to my wife, Nimmi and to my parents in India for their encouragement, support, patience and love during this journey. iii Table of Contents Acknowledgements....................................................................................................... ii Table of Contents......................................................................................................... iv List of Tables .............................................................................................................. vii List of Figures............................................................................................................viii Chapter 1 RELIABILITY CAPABILITY............................................................. 1 1.1 Reliability as a competitive opportunity....................................................... 1 1.2 Capability maturity models........................................................................... 3 1.3 Key reliability practices ................................................................................ 4 1.3.1 Reliability requirements and planning.................................................. 7 1.3.2 Training and development .................................................................... 8 1.3.3 Reliability analysis.............................................................................. 10 1.3.4 Reliability testing................................................................................ 11 1.3.5 Supply chain management .................................................................. 12 1.3.6 Failure data tracking and analysis....................................................... 13 1.3.7 Verification and validation ................................................................. 14 1.3.8 Reliability improvements.................................................................... 
15 1.4 Conclusions................................................................................................. 16 Chapter 2 VALIDATION OF RELIABILITY PRACTICES AND TASKS...... 19 2.1 Introduction................................................................................................. 19 2.2 Survey questionnaire and data collection ................................................... 22 2.3 Assessing internal consistency.................................................................... 23 2.4 Assessing validity ....................................................................................... 26 2.4.1 Content validity................................................................................... 27 2.4.2 Predictive validity ............................................................................... 28 2.4.3 Construct validity................................................................................ 29 iv 2.5 Weighting factors for reliability tasks......................................................... 31 2.6 Conclusions................................................................................................. 34 Chapter 3 CAPABILITY MATURITY LEVELS............................................... 36 3.1 Introduction................................................................................................. 36 3.2 Maturity levels ............................................................................................ 37 3.2.1 Solely reactive..................................................................................... 38 3.2.2 Repeatable........................................................................................... 38 3.2.3 Defined................................................................................................ 39 3.2.4 Managed.............................................................................................. 39 3.2.5 Proactive ............................................................................................. 40 3.3 Use of radar charts for supplier selection ................................................... 45 3.4 Quantitative reliability capability evaluation using weighting factors ....... 47 3.5 Conclusions................................................................................................. 50 Chapter 4 EVALUATION PROCESS: CASE-STUDY ..................................... 52 4.1 The evaluation process................................................................................ 52 4.2 Case study: a defined company................................................................... 54 4.2.1 Evaluation results and recommendations............................................ 57 4.2.2 Benchmarking..................................................................................... 60 4.3 Conclusions................................................................................................. 61 Chapter 5 CASE-STUDY: PCB ASSEMBLY MANUFACTURER ................. 63 5.1 Introduction................................................................................................. 63 5.2 Printed circuit board assembly process....................................................... 65 5.3 PCB assembler evaluation methodology .................................................... 68 5.4 Part-1: Manufacturing compatibility evaluation......................................... 70 5.5 Part-2: Reliability capability maturity evaluation....................................... 
71 5.5.1 Practices associated with development of requirements and plans .... 72 5.5.2 Practices associated with meeting reliability requirements ................ 73 v 5.5.3 Practices associated with reliability assurance and growth ................ 73 5.6 Case-study................................................................................................... 74 5.6.1 Practices associated with development of requirements and plans .... 74 5.6.2 Practices associated with meeting reliability requirements ................ 77 5.6.3 Practices associated with reliability assurance and growth ................ 78 5.7 Case-study evaluation results...................................................................... 79 5.8 Conclusions................................................................................................. 81 Chapter 6 APPENDICES .................................................................................... 83 6.1 Appendix-1: Reliability tasks under different key practices....................... 83 6.2 Appendix-2: Structure of two questionnaires ............................................. 87 6.3 Appendix-3: Details of respondents to the survey...................................... 88 6.4 Appendix-4: Item analysis results for ninety-one tasks.............................. 89 6.5 Appendix-5: Internal consistency of a theoretical measure........................ 92 6.6 Appendix-6: Covariance matrix for average rating of key practices.......... 97 6.7 Appendix-7: Factor analysis ....................................................................... 98 6.8 Appendix-8: Principal component analysis (PCA) results ....................... 104 6.9 Appendix-9: Principal axis factoring (PAF) results.................................. 106 6.10 Appendix-10: List of reliability tasks and weighting factors.................... 108 6.11 Appendix-11: PCB assembler evaluation questionnaire........................... 112 Contributions............................................................................................................. 130 References................................................................................................................. 132 vi List of Tables Table 1: Comparison of reliability practices prescribed by different schemes................... 6 Table 2: Key reliability practices and their purpose ......................................................... 17 Table 3: Cronbach?s alpha values for different key practices........................................... 25 Table 4: Q-sorts methodology correlational results.......................................................... 29 Table 5: Summary of results from Principal Axis Factoring (PAF)................................. 32 Table 6: Weighting factors for tasks under different key practices.................................. 33 Table 7: Requirements definition at different maturity levels for key practices .............. 41 Table 8: Weighted maturity level scores for different key practices ................................ 48 Table 9: PCB assembler reliability capability evaluation questionnaire .......................... 71 Table 10: Evaluation scorecard for an assembly facility.................................................. 80 Table 11: Survey respondent details................................................................................. 88 Table 12: Item analysis results for 91 reliability tasks ..................................................... 
89
Table 13: Covariance matrix for average rating of key practices..................................... 97
Table 14: Weighting factors for reliability tasks ............................................................ 108

List of Figures
Figure 1: Developing evaluation tasks from reliability objectives ..................................... 5
Figure 2: Key reliability practices....................................................................................... 5
Figure 3: Difference between physical experimental research and empirical psychometric research ........................................................................................................... 21
Figure 4: Reliability capability model development and validation process.................... 22
Figure 5: Scree plot for PAF analysis of tasks under reliability improvements ............... 31
Figure 6: Using radar charts for supplier selection........................................................... 46
Figure 7: Weighted radar chart showing different maturity levels................................... 49
Figure 8: Radar chart showing an example reliability capability evaluation result.......... 50
Figure 9: The IPC printed circuit boards designation....................................................... 65
Figure 10: Typical printed circuit board assembly process .............................................. 66
Figure 11: PCB assembler reliability capability evaluation methodology development process ............................................................................................................ 70
Figure 12: Structure of the survey questionnaire.............................................................. 87
Figure 13: Structure of the content validity questionnaire................................................ 87
Figure 14: Sources of variance and factor analysis .......................................................... 98

Chapter 1 RELIABILITY CAPABILITY

This chapter introduces a set of key practices that can be used to assess whether an organization has the ability to design, develop and manufacture reliable electronic products. This ability is defined in terms of a reliability capability maturity metric for an organization.

1.1 Reliability as a competitive opportunity

Reliability is the ability of a product or system to perform as intended (i.e., without failure and within specified performance limits) for a specified time, in its life cycle application environment [1]. For any electronics business, time-to-profit is a key metric for establishing product design, product operation and high-level management goals, including cost, schedule, and social responsibility. Since reliability is associated with preventing or minimizing the likelihood of failure occurrences, it is a risk factor associated with profit making. Failures lead to costs that extend the time-to-profit for a product.

Failures can stain the reputation of a company¹ and cause financial losses². Financial losses can be in the form of loss of market share due to damaged consumer confidence, increases in insurance rates, costs to replace parts, claims for damages resulting from personal injury, and maintenance of a service infrastructure to handle failures [4]. Legally, most states in the US operate on the theory of strict liability. Under this law, a company can be liable for damages resulting from a defect for no reason other than that one exists, and a plaintiff does not need to prove any form of negligence to win the case [5]. A history or reputation of poor reliability can also prevent potential future customers from buying a product, even if the causes of past failures have been corrected. Therefore, to be competitive, electronics manufacturers need to know how things fail, in addition to knowing how things work.

¹ A month after its release in July 2000, Intel recalled its new 1.13 GHz Pentium III microprocessors. The chips had a hardware glitch that caused them to crash or hang under certain conditions. Apparently, pressure from AMD led Intel to push products to market faster than it had in the past, leaving less time for testing. Although fewer than 10,000 units were affected, the recall led to embarrassment and a loss of reputation for Intel at a time when competition in the microprocessor market was at its fiercest [2].
² Toshiba was sued in 1999 for selling defective laptop computers. More than five million laptops were allegedly built with a defective floppy disk drive controller chip that would randomly corrupt data without warning. Lawsuits claimed that Toshiba had known about the defects since the 1980s, but failed to correct them or notify customers. Toshiba agreed to a $2.1 billion settlement to prevent the case from going to trial [3].

The last decade of the twentieth century witnessed a rapid globalization of all businesses. Competitive and regulatory pressures have driven electronics manufacturers to low-cost manufacturing and to the evolution of a worldwide supply chain. Today, external sourcing of components and contract manufacturing is widespread, and electronics manufacturers are dependent upon worldwide suppliers who provide them with parts and subassemblies. Therefore, for any product design, it is essential that the reliability requirements be applied to all incoming sub-contracted items so that reliability can be managed across all tiers of the supply chain. The ultimate goal is that suppliers have sufficient reliability practices to satisfy the requirements of their customers.

System integrators, who are at the top of the supply chain, generally set the requirements for system reliability. Parts and manufacturing processes purchased on the market as commodities are selected based on information provided by suppliers. However, system integrators cannot wait until they receive the parts or sub-assemblies to assess whether they are reliable; this can be an expensive, iterative process. An upfront evaluation of suppliers based on their ability to meet reliability requirements can provide valuable competitive advantage. A manufacturer's capability to design for reliability and to implement a reliable design through manufacturing and testing can yield important information about the likelihood that the company will provide a reliable product.

1.2 Capability maturity models

The maturity approach to determining organizational abilities has roots in quality management. Crosby's Quality Management Maturity Grid [6] describes the typical behavior of a company, which evolves through five phases (uncertainty, awakening, enlightenment, wisdom and certainty) in its ascent to quality management excellence.
Since then, maturity models have been proposed for a wide range of activities, including software development [7]-[9], supplier relationships [10], research and development effectiveness [11][12], product development [13], innovation [14], collaboration [15], product design [16]-[18], and reliability information flows [19]-[22].

In this dissertation, a maturity model is developed for reliability capability. Reliability capability is the ability of an organization to design, develop and manufacture reliable products. To measure reliability capability, a metric called reliability capability maturity is proposed, with which electronics manufacturers can evaluate the maturity of the reliability practices of their worldwide suppliers [23]. Reliability capability maturity is a measure of the practices within an organization that contribute to the reliability of the final product, and of the effectiveness of these practices in meeting the reliability requirements of customers.

1.3 Key reliability practices

The IEEE Reliability Program Standard 1332 [24][25] defines broad guidelines for the development of a reliability program, based on three reliability objectives:
1. The supplier, working with the customer, should determine and understand the customer's requirements and product needs so that a comprehensive design specification can be generated.
2. The supplier should structure and follow a series of engineering activities so that the resulting product satisfies the customer's requirements and product needs with regard to product reliability.
3. The supplier should include activities that assure the customer that reliability requirements and product needs have been satisfied.

These objectives were used as the building blocks for developing the reliability capability model by following a hierarchical process, as shown in Figure 1. For each of the IEEE reliability objectives, key practices for evaluating reliability capability can be assigned, and each key practice can be defined in terms of the specific reliability tasks associated with it. Reliability capability evaluation for a company can then be based on the level of planning, the available resources and facilities, and the implementation of the reliability tasks applicable to that company.

[Figure 1: Developing evaluation tasks from reliability objectives — a hierarchy from reliability objectives, to reliability practices, to reliability tasks, to questions based on the tasks applicable for a company.]

Figure 2 presents eight key practices identified from a study of reliability standards from the electronics industry [26]-[32] and the reliability literature [33]-[54]. Each of the eight key reliability practices is described in the following sections [55][56].

[Figure 2: Key reliability practices — the eight key practices (1. reliability requirements and planning; 2. training and development; 3. reliability analysis; 4. reliability testing; 5. supply chain management; 6. failure data tracking and analysis; 7. verification and validation; 8. reliability improvements) organized under three groups: (A) practices associated with the development of reliability requirements and plans, (B) practices associated with meeting reliability requirements, and (C) practices associated with reliability assurance and growth.]
Table 1 provides a comparison of the key practices shown in Figure 2 with the practices of three reliability standards, which were found to identify reliability activities for military, commercial and automotive electronics. The table indicates that not all of the identified reliability practices are included in the different schemes; in particular, training and development is not included in any of them. Some of the reliability activities prescribed in these schemes are spread over more than one key practice. Also, the reliability activities listed under the different schemes differ in the detail of their description. For example, while the military standard prescribes specific activities such as FMECA and SCA for analyzing reliability, the IEC standard only mentions "identify methods for reliability evaluation".

Table 1: Comparison of reliability practices prescribed by different schemes
(Military: MIL-STD-785B [26]; Commercial electronics: IEC 56/775/NP [30]; Automotive electronics: SAE J-1938 [28])

1. Reliability requirements and planning
   - Military: Develop a reliability program plan; Allocate reliability
   - Commercial electronics: Plan and monitor a reliability program; Collect data for reliability assessment; Identify sources for reliability information
   - Automotive electronics: Identify design requirements; Finalize the initial design; Determine reliability prediction models; Ensure compatibility of parts/tooling; Evaluate design engineers' interaction with manufacturing; Determine in-process and end-of-line test requirements

2. Training and development
   - Military: --
   - Commercial electronics: --
   - Automotive electronics: --

3. Reliability analysis
   - Military: Model reliability; Identify reliability critical items; Conduct failure modes, effects and criticality analysis (FMECA); Conduct sneak circuit analysis (SCA); Analyze electronic parts and circuit tolerances; Evaluate effects of non-manufacturing activities; Make reliability predictions; Establish reliability
   - Commercial electronics: Identify methods for reliability evaluation, qualification and validation
   - Automotive electronics: Conduct fault tree analysis (FTA); Conduct failure modes and effects analysis (FMEA); Conduct sneak circuit analysis; Conduct a feasibility study for making a reliable product; Analyze "likely" performance for complex circuits

4. Reliability testing
   - Military: Screen for environmental stress (ESS); Conduct a reliability qualification test (RQT) program; Conduct a production reliability acceptance test (PRAT)
   - Commercial electronics: Identify methods for reliability evaluation, qualification and validation; Collect data for reliability assessment
   - Automotive electronics: Conduct criticality analysis test for confirmation

5. Supply chain management
   - Military: Create a parts program; Monitor and control subcontractors and suppliers
   - Commercial electronics: --
   - Automotive electronics: Specify incoming inspection for vendor quality

6. Failure tracking and reporting
   - Military: Utilize a failure reporting, analysis and corrective action system (FRACAS); Utilize a Failure Review Board (FRB); Utilize a reliability development/growth test (RDGT) program
   - Commercial electronics: Collect data for reliability assessment; Formulate a closed-loop failure analysis and corrective action plan
   - Automotive electronics: Analyze warranty returns

7. Verification and validation
   - Military: Conduct program reviews
   - Commercial electronics: --
   - Automotive electronics: Conduct technical design reviews

8. Reliability improvements
   - Military: Utilize a Failure Review Board (FRB); Utilize a reliability development/growth test (RDGT) program
   - Commercial electronics: Use reliability assessment and testing results for equipment design, system architecture, safety and business decisions; Improve reliability assessment
   - Automotive electronics: Use a reliability growth model; Formulate design change procedures
1.3.1 Reliability requirements and planning

During product development, the customer's needs and operational conditions for all phases of the product lifecycle must be understood to arrive at a set of customer reliability requirements. The different considerations for establishing reliability requirements for an electronic product include the design and operational specifications (information about the manner in which the product will be used), regulatory and mandatory requirements, the definition of failure, expected field life, criticality of the application, cost and schedule limitations, and business constraints such as potential market size.

Reliability requirements and planning incorporates the activities needed to understand customers' requirements³, to generate reliability goals for products, and to plan reliability activities to meet those goals. The inputs for generating reliability requirements for products include customer inputs, reliability data specifications for competitive products, and lessons learned from the reliability experience of previous products, including test and field failure data.

³ In this dissertation, the terms "requirements" and "goals" are used interchangeably.

Reliability planning is a continuous process, from preliminary design to product maturity, which is needed to establish and maintain plans that define reliability activities and manage the defined activities. The planning activity starts with identifying available resources, such as materials, human resources, and equipment, and determining the need for additional resources. The reliability analysis and testing needed for the product, and the logistics to obtain feedback on the implementation of these activities, can be identified within a reliability plan. The output from this key practice is a reliability plan. The reliability plan identifies and ties together all the reliability activities; it should also include a schedule and allocate resources and responsibilities. Decision criteria for altering reliability plans can also be included.

1.3.2 Training and development

Training and development enhances the specialized skills and knowledge of people so that they can perform their roles in the development of a reliable product effectively and efficiently. The aim is to ensure that employees understand the reliability plans and goals for products, and have sufficient expertise in the methods required to achieve those goals. This includes development of innovative technologies or methods to support business objectives. Training and education of employees for career advancement and job proficiency is important for employee morale. Education and training in the reliability-related technological areas also enhance the possibility of obtaining a better, more reliable product. Reliability managers must be aware of how specific reliability activities can impact or improve reliability, and business managers should appreciate the importance of reliability to ensure implementation of reliability training within a company. The presence of regular training programs indicates the willingness of business managers to spend time, effort, and money on the training of employees. Effective training requires assessment of needs, planning, instructional design, and appropriate training media. The main components of employee training include a training-development program with documented plans and means for measuring the effectiveness of the training program.
The main activity for this key practice is the development of a training plan including training needs for individual personnel with a schedule. The implementation of the plan requires procurement of training infrastructure including training instructors and training material. The different modes of imparting training include in-class training, mentoring, web-based training, guided self-study, or a formal on-the-job training program. Employees must be trained on lifecycle reliability management of products, including training in specific areas like failure analysis, root cause analysis, and corrective action 9 system. The training must develop an understanding of reliability concepts and statistical methods. 1.3.3 Reliability analysis Reliability analysis incorporates activities to identify potential failure modes and mechanisms, to make reliability predictions, and to quantify risks for critical components in order to optimize the lifecycle costs for a product. Criticality level for components can be based upon complexity, application of emerging technologies, demand for maintenance and logistics support and, most importantly, the impact of potential failure on overall product success. Prior experience and history can be helpful in this analysis. The data used to make reliability predictions may be historical, from previous testing of similar products, or from the reported field failures of similar products. Reliability analysis activities include conducting failure modes, mechanisms, and effects analysis to identify potential single points of failure, failure modes, and failure mechanisms for a product. The next step is to identify criticality of these failure modes and mechanisms. Reliability analysis also includes identification of reliability logic for products as a system, and to create reliability models at the component and the product level in order to make reliability predictions. Assessing adherence to design rules including derating, electrical, mechanical and other guidelines is also a part of reliability analysis. The outputs from this analysis are an estimate of the basic reliability of the product, expected failure modes at the system and the component level, and identification of design weaknesses to determine suitability of the existing design to avoid early-life 10 failures and its susceptibility to wear-out failures. The information from reliability analysis can be used to create a list of reliability critical parts, sub-assemblies or processes and to design reliability tests. Predictions regarding expected warranty costs and logistics support including spares provisioning can also be made. 1.3.4 Reliability testing Reliability testing can be used to explore the design limits of a product, to stress screen products for design flaws, and to demonstrate the reliability of products by running tests. The tests may be conducted according to some industry standards or to required customer specifications. The reliability testing procedures may be generic, i.e., common for all products or the tests may be custom designed for specific products. The tests may or may not be used for the verification of known failure modes and mechanisms. Detailed reliability test plans can include the sample size for tests and corresponding confidence level specifications. 
Important considerations for any type of reliability testing are establishing the nature of the test (failure or time terminated), the definition of failure, the correct interpretation of the test results, and co-relating the test results with the reliability requirements for the product. The information required for designing product specific reliability tests include the expected lifecycle conditions, the reliability plans and goals for a product, and failure modes and mechanisms identified during reliability analysis. The different types of reliability tests that can be conducted are discovery testing ? identifying design marginality or destruct limits for the product, design verification 11 testing before mass production, on-going reliability testing, MTBF testing, and accelerated testing. The output from this key practice is the data obtained from testing of different types. Test data analysis can be used to make design changes prior to mass production, to identify the failure models and model parameters, and for modification of reliability predictions for the product. Test data can also be used to create guidelines for manufacturing tests including burn-in and environmental stress screening, and to create test requirements for parts and sub-assemblies obtained from suppliers. 1.3.5 Supply chain management Supply chain management activities include monitoring a list of potential suppliers, conducting supplier assessment or audits, and selecting vendors or sub- contractors for parts or processes. Other activities include part or process qualification through review of process, quality, reliability testing, or accelerated test data from the suppliers. Activities like tracking product change notices, changes in the part traceability markings and management of part obsolescence are also included under this key practice. These activities are essential for sustaining product reliability through its lifecycle. The information required for initiating supplier selection is the parts list, bill of materials, and engineering specifications based on functional requirements for the product. The decision criteria for supplier selection include their ability to supply reliable components in a cost and schedule effective manner and their demonstrated ability to control their own supply chain. Possible control over the supplier?s reliability practices through exchange of technological expertise and sharing of information also increases the 12 possibility of achieving and maintaining product reliability. In some cases, multi-sourcing of parts may be necessary due to product manufacturing schedule and supplier capacity considerations, or due to supply fluctuations anticipated in future. An output from this key practice is a list of preferred/qualified/approved parts, vendors and sub-contractors; and a system for supplier rating. Other outputs include component qualification reports, supplier audit reports, and development of supply contracts that include contractual quality and reliability requirements. 1.3.6 Failure data tracking and analysis Failure tracking activities are used to collect manufacturing, test and field failed components, and related failure information. Failures must then be analyzed to identify the root causes of manufacturing defects and test or field failures and to generate failure analysis reports. 
The documented records for each report can include the date and lot code of the returned product, the failure point (quality testing, reliability testing or field), the return date, the failure site, the failure mode and mechanism, and recommendations for avoiding the failure mode in existing and future products. For each product category, a Pareto chart of failure causes can be created and continually updated. The failure sources that initiate failure analysis of a product include manufacturing, production testing, reliability testing, pre and post-warranty field returns, and customer complaints. Failure analysis includes statistical analyses of field return data, and analysis of the cause of failure at various levels down to the identification of the root cause of failure. 13 The outputs for this key practice are a failure summary report arranged in groups of failures of like items and similar functional failures, forward and backward traceability of failed components through date and lot code information, actual times to failure of components based on time specific part returns, and a documented summary of corrective actions implementation and effectiveness. Failure analysis reports as an output from this key practice can include failure distribution models for products including model parameters. All the lessons learned information from failure analysis reports can be included in a corrective actions database for future reference. This database can help save considerable cost in fault isolation and rework associated with problems that may be encountered in future. 1.3.7 Verification and validation Verification and validation through an internal review/audit of reliability planning, testing and analysis activities helps to ensure that planned reliability activities are implemented so that the product fulfills the specified reliability requirements. Benchmarking can be used to study the best internal practices that produce superior reliability performance and for ensuring that noncompliance is addressed. Part of the process is to understand how some practices are better and finding ways to improve others driving the needs for improved facilities, equipment, and methodologies. The inputs for this key practice are the outputs from previous practices like planning, analysis, testing and failure data tracking. The inputs include reliability plans and goals for products, potential failure modes and mechanisms identified during 14 reliability analysis, information on failure mechanisms from reliability testing, specific reliability test plans and specifications, and the corrective actions database. Verification and validation activities include comparison of identified potential problems against those experienced in field. It includes comparison of expected and field failure modes and mechanisms and comparison of reliability prediction models for a product against field failure distributions. The outputs from this key practice include an updated failure modes and mechanisms database, modification of reliability predictions and failure models for a product, and modification of warranty costs and spares provisioning estimates. Reliability test conditions may also be modified based on field information on products. 1.3.8 Reliability improvements Reliability improvements is associated with improving the basic reliability of products by using lessons learned from testing, reported field failures, technological improvements or any other information. 
This key practice primarily involves implementation of corrective actions based on failure analysis. It also involves initiating design changes in products or processes due to change in reliability requirements for products or due to changes in lifecycle application (operating and non-operating) conditions of products. Reliability improvements can be affected either by making design changes in products or by using alternate parts, processes or suppliers. Design changes can include improved design using an older technology, or implementation of developing technologies within an older design. Implementation of new modeling and analysis 15 techniques and trends that could be used to improve reliability of products can also be used. The inputs required to initiate reliability improvement in products also come from previous key practices. The information includes Pareto charts for field failure modes and mechanisms, recommendations from the corrective actions database, and documented anomalies from verification and validation. Other reasons that can initiate a reliability improvement process are changes in lifecycle usage conditions for a product or changes in the reliability requirements due to business or other considerations. The output activities from this key practice include preventing recurrence of identified failures and implementation of corrective actions from failure analysis. Corrective actions can be implemented by issuing engineering change notices, or through modifications in manufacturing and design guidelines for future products. 1.4 Conclusions In the last decade of the twentieth century, competitive and regulatory pressures have driven all types of electronics manufacturers to low-cost manufacturing and to the evolution of a worldwide supply chain. Reliability being a risk factor associated with profit making, it is essential that reliability is managed across all the tiers of the supply chain. System integrators, who are at the top of the supply chain, generally set the requirements for system reliability. However, they cannot wait until they receive the parts or sub-assemblies to assess if they are reliable. This can be an expensive iterative process. 16 An upfront evaluation of suppliers based on their ability to meet reliability requirements can provide valuable competitive advantage. Reliability capability is the ability of an organization to design, develop and manufacture reliable products. Reliability capability maturity is a measure of the practices within an organization that contribute to the reliability of the final product, and the effectiveness of these practices in meeting the reliability requirements of customers. This chapter defines eight key reliability practices that form the basis of a strategy for reliability management, and for reliability capability evaluation. The purpose of each of these reliability key practices is briefly described in Table 2 below. Table 2: Key reliability practices and their purpose Key reliability practice Purpose Reliability requirements and planning ? To understand the customer?s reliability requirements ? To generate reliability requirements for products ? To plan reliability activities to meet requirements Training and development ? To enhance the technical and specialized skills of people ? To ensure that employees understand reliability plans and goals for products ? To track or develop techniques or methods that can impact reliability Reliability analysis ? 
To conduct design analysis to identify potential failure modes and mechanisms ? To determine criticality levels of parts or sub-systems through system modeling ? To make reliability predictions for products Reliability testing ? To explore design limits for products and identify design flaws ? To demonstrate the reliability of products by running tests ? To make or modify reliability predictions for products based on testing Supply chain management ? To identify sources of parts or processes to satisfy product reliability requirements ? To manage vendors and sub-contractors ? To track change notices for sustaining a product through its lifecycle Failure data tracking and analysis ? To track failures from manufacturing, reliability testing and from field ? To conduct failure analysis and identify the root causes of failures ? To record possible corrective actions to remove the root causes of failures 17 Verification and validation ? To verify the implementation of the reliability plan ? To conduct internal or external audits of reliability activities ? To validate reliability predictions from field performance and record anomalies Reliability improvements ? To track changes in reliability requirements of products ? To improve product reliability through implementation of corrective actions ? To improve reliability through the use of new methods or techniques The key practices lay the foundation for a reliability capability maturity model that can help electronics manufacturers to assess their potential suppliers or for suppliers to assess themselves. Reliability tasks under each key practice can be used as evaluation items to assign maturity scores to electronics companies. The maturity scores thus obtained can provide a quantitative metric for grading electronics companies. Appendix-1 provides a list of 91 reliability tasks based on the description of the eight key reliability practices provided in this chapter. 18 Chapter 2 VALIDATION OF RELIABILITY PRACTICES AND TASKS A model for evaluating the reliability capability of electronics manufacturers has been proposed in the first chapter. The model consists of eight key reliability practices and ninety-one reliability tasks associated with them. In this chapter, statistical methods have been used to validate this theoretical measuring instrument for reliability capability. The result of the analysis is a list of tasks that are critical to reliability for an electronics company. Comparative weighting factors have also been obtained empirically for reliability tasks, which can be used for quantitative reliability capability evaluation. 2.1 Introduction Maturity models have been proposed for a wide range of activities, including quality management [6] software development [7][8][9], supplier relationships [10], research and development effectiveness [11][12], product development [13], innovation [14], collaboration [15], product design [16]-[18], and reliability information flows [19]- [22]. Maturity models for organizational abilities must have empirical validation. In management and marketing research, even though a relatively large number of abstract theoretical variables are used to explore the relationship among different organizational phenomenon, it has been reported that a serious shortcoming of most of these theoretical 19 measuring instruments is that they lack validation [57]. 
A study of the measurement practices reported in management research over a period of time showed a lack of validation of the instruments used to measure different management attributes [57]. Jacoby [58] noted that: "more stupefying than the sheer number of our measures is the ease with which they are proposed and the uncritical manner in which they are accepted... most of our measures are only measures because someone says that they are, not because they have been shown to satisfy standard measurement criteria (validity, reliability and sensitivity)." This is true for all the maturity models listed above as well.

Most of the maturity models or theoretical measuring instruments for organizational attributes are essentially similar "to the development of scales and sub-scales for the assessment of more abstract issues as in social science and marketing research" [59]. Quantitative techniques are already available for generating and validating lists of items which might represent such hypothesized theoretical measures. These techniques fall under the realm of a branch of science called psychometrics. Psychometric methods are rigorous statistical tools that are used to construct theoretical instruments which measure abstract organizational variables. The process of measurement involves rules for assigning numbers to objects to represent quantities of attributes [60]; the attributes of objects, as opposed to the objects themselves, are measured.

Figure 3 compares the steps in the physical experimental research process and the empirical psychometric research process [61]. In the former case, the test vehicle is a physical specimen, while in the latter, the test vehicle is a survey questionnaire. In the former, the test results constitute the output data; in the latter, the scores or ratings from respondents constitute the output data.

[Figure 3: Difference between physical experimental research and empirical psychometric research — both proceed from theory, hypothesis, and specification of indicators or variables; physical research then moves through design of experiments (creating control and experimental groups among samples), selection of test vehicles and test conditions, and conducting the experiment, while psychometric research moves through survey/correlational design (creating a questionnaire with sections and measurement items), selection of survey items and respondents, and administering the questionnaire; both conclude with collecting and analyzing data and comparing findings with the hypothesis.]

There is published research on the use of psychometric methods for developing and validating measurement instruments. Psychometric principles have been used for generating and evaluating measures of quality management practices [62][63], for measuring supply chain quality factors [64], for measuring implementation of total quality management [65], for measuring job satisfaction [66], and for measuring project management culture in organizations [67]. Psychometric methods, which are based on statistical multivariate correlational analysis, can be used to validate the theoretical measurement model proposed for reliability capability.

The fundamental objective of any measuring instrument is to produce observable scores that approximate the true scores. The measures are always inferences, and the quality of the inferences depends on the procedures that are used to develop the measures and on the evidence supporting the "goodness" of these measures [66].
Figure 4 shows the process for development and validation of the reliability capability model [63][66][68]. 2.2 Survey questionnaire and data collection The first step in generating measurement items is exploratory research including literature research and feedback from experienced professional [66]. In the previous chapter, eight key reliability practices for reliability capability evaluation were identified. An evaluation questionnaire was then created, and as a pre-test, reliability audits were conducted for two electronics companies. Based on these activities (the first three steps of the development process), 91 reliability tasks (Appendix-1) were identified for measuring reliability capability. The list of 91 tasks is based on the currently reported reliability Identify key practices critical for good reliability capability Identify reliability tasks specific to each reliability key practice Refine tasks through case-studies and feedback Develop and conduct survey with a rating scale for model validation Step 1 Step 2 Step 3 Step 4 Create weighting factors for reliability tasks Delete tasks that will improve internal consistency No Yes Step 5 Item analysis Cronbach?s alpha Step 7 Factor loadings from PCA Assess instrument ?reliability?: ? Is assignment of tasks proper? ? Are tasks internally consistent? Assess instrument ?validity?: ? Content validity ? Predictive validity ? Construct validity Step 6 Q-sort analysis Factor analysis Figure 4: Reliability capability model development and validation process 22 activities in literature. In this study, a survey questionnaire, containing 91 reliability tasks, was created as a scientific instrument [69]. The statement of each task was reviewed by researchers and reliability professionals to make them concise and unambiguous. In the survey, the respondents were required to grade each task on a Likert-type five point interval rating scale (?negligible? to ?very high?) in terms of the relevance of the task in ensuring or improving the reliability of an electronics product. The structure of the survey is shown in Appendix-2. The respondents to the survey questionnaire were chosen such that they would represent those who will eventually use or interpret the results of the instrument [60]. The questionnaire was provided for filling up to reliability professionals at a technical conference and sent out through e-mail to reliability practitioners in the electronics industry to solicit responses. In all, 211 responses were obtained from industry professionals, consultants and researchers associated with electronic reliability. These people also represent organizations of various sizes. The details of respondents are shown in Appendix-3. The obtained data was analyzed using the Statistical Package for Social Sciences (SPSS) version 13.0 [61][70][71][72] to evaluate the internal consistency and validity indices in Steps 5 and 6, and creating weighting factors in Step 7. 2.3 Assessing internal consistency Internal consistency (also called ?reliability? in psychometric parlance) refers to the stability or reproducibility of a score based on a theoretical instrument [60]. A measure is internally consistent if it will give the same results if the measurement is 23 repeated, i.e., if the measurements are stable over a variety of conditions. However, internal consistency is only a necessary, but not a sufficient condition for validity. 
Item analysis was first used to evaluate the appropriateness of the assignment of tasks to key practices [60][63], by considering the correlation of each task rating to the average rating for each key practice. A task is eliminated if it correlates more with some other key practice than the one to which it is assigned. The analysis was completed for all 91 tasks. The results are included in Appendix-4. Tasks 1-04 and 2-10 showed close correlations with two key practice scores. However, they have the maximum correlations with their assigned practices. On the other hand, 8-11 shows better correlation with TAD (0.61), compared with RIMP (0.59), and hence was excluded from further analysis. Within each key practice, one of the most commonly used coefficients for measuring internal consistency of a list of tasks under it is Cronbach?s alpha [60][63][73]. Mathematically, Cronbach?s alpha is the average of correlations between all possible split-half estimates within the key practice. The value of Cronbach?s alpha for a key practice containing ?k? reliability tasks is given by [73][74]: ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? = ? 2 2 1 1 sum i s s k k ? where is variance of each task rating, and is the variance of the average key practice rating. For each of the key practices, the square root of the coefficient alpha value is the correlation between the score that companies will get on the selected tasks (sample score) to the score that companies would get if all possible tasks corresponding to the key practice are included for evaluation (true score). Typically an alpha value of 2 i s 2 sum s 24 0.7 or more is considered adequate [60]. Additional theoretical details on this subject are included in Appendix-5. Table 3 shows the Cronbach?s alpha values for different key practices. As can be seen, all the key practices have more than adequate alpha values. Deletion of any one task does not substantially improve the alpha value for the key practice. Hence, the tasks listed under different key practices used for measuring reliability capability demonstrate internal consistency in psychometric terms. In other words, coefficient alpha values show that the specified tasks are necessary and sufficient to describe each key practice. For the entire reliability capability measuring instrument, which is a linear combination of measures of different key practices, the internal consistency can be estimated by the knowledge of Cronbach?s alpha coefficients of key practices and the covariance among their average ratings [60]. The internal consistency co-efficient for this linear combination is given by: Table 3: Cronbach?s alpha values for different key practices Key Practice Symbol Number of tasks ?? ? value Reliability requirements and planning RRP 12 0.779 Training and development TAD 10 0.827 Reliability analysis RA 11 0.838 Reliability testing RTST 13 0.851 Supply chain management SCM 15 0.897 Failure data tracking and analysis FDTA 11 0.899 Verification and validation VAV 08 0.871 Reliability improvements RIMP 11 0.856 25 2 22 1 y iii RCM r ? ??? ?? ? ?= where is the variance in the rating for the i 2 i ? th key practice; i ? is the value of the coefficient alpha for the i th key practice; and 2 y ? is the sum of all elements in the covariance matrix of average key practice ratings. Appendix-6 shows the covariance matrix for the average ratings of different key practices. An average rating is the average of the ratings provided for the different tasks listed under each key practice. 
Using these values, the internal consistency coefficient of the entire reliability capability model was found to be 0.972. This value indicates that the key reliability practices and their included tasks are necessary and sufficient to evaluate reliability capability.

2.4 Assessing validity

The validity of a measure refers to the extent to which it measures what it is intended to measure [60]. It is also the extent to which differences in scores based on the instrument reflect true differences among organizations on the characteristic that the instrument is supposed to measure, and nothing else [66]. Thus a measurement instrument is valid when the observed score matches the true score and the variation due to both systematic and random errors is very low. Validity of a measuring instrument is of three types: content (or face) validity, criterion-related (or predictive) validity, and construct validity. All three types are discussed below.

2.4.1 Content validity

A measuring instrument has content validity if the measurement items cover all aspects of the variable being measured. Content validity exists when "a measure is judged by one or more persons as containing a reasonable and representative sample of items from the construct's theoretical domain" [57]. In our case, the reliability capability model has some degree of content validity because it was constructed based on the literature and standards on the topic [26]-[54], and on evaluation by academicians and practicing reliability managers from the electronics industry.

Although content validity is usually judged subjectively by researchers rather than measured quantitatively, a quantitative approach to the assessment of content validity, the Q-sort methodology, was also used to establish content validity [57]. The Q-sort technique is a method of sorting objects into theoretical categories for statistical purposes [60]. The method requires judges to classify tasks into categories whose definitions or purposes are provided. Undergraduate or graduate students are appropriate as the panel of judges. According to Schreisheim [57], "... the only requirement for a set of judges to be considered adequate for this task is that they possess sufficient intellectual ability to perform the item rating task and that they be relatively free of serious potential biases."

For this method, a content validity questionnaire was created, the structure of which is shown in Appendix-2. The questionnaire listed the 91 reliability tasks in random order and required the judges to classify them into the eight key reliability practices. The judges were provided with a brief definition of the purpose of each key practice. In our case, there were 56 responses to the questionnaire from three groups of people: 24 responses from researchers in electronics, 16 responses from general engineering graduate students, and 16 responses from non-engineering graduate students. These 56 responses were split randomly into two segments (S1 and S2), selecting half from each of the groups above. Data was compiled on the number of times each task was classified under each key practice for each segment. As suggested by Schreisheim [57], correlation coefficients were obtained from the data for the two segments. The correlation values between the classifications under each key practice in the two segments are shown in Table 4.
Table 4: Q-sort methodology correlation results

            S2_RRP   S2_TAD   S2_RA    S2_RTST  S2_SCM   S2_FDTA  S2_VAV   S2_RIMP
  S1_RRP     0.944*  -0.066   -0.053   -0.144   -0.151   -0.346   -0.121   -0.139
  S1_TAD    -0.094    0.986*  -0.259   -0.160   -0.161   -0.225   -0.170   -0.161
  S1_RA     -0.174   -0.245    0.937*  -0.007   -0.278    0.098   -0.007   -0.134
  S1_RTST   -0.121   -0.185   -0.042    0.949*  -0.176   -0.179    0.050   -0.160
  S1_SCM    -0.111   -0.188   -0.259   -0.192    0.979*  -0.192   -0.122   -0.185
  S1_FDTA   -0.344   -0.221    0.132   -0.169   -0.217    0.976*  -0.076   -0.017
  S1_VAV    -0.154   -0.188   -0.118    0.011   -0.081    0.010    0.921*   0.023
  S1_RIMP   -0.125   -0.135   -0.204   -0.126   -0.228   -0.028    0.051    0.936*
  * Correlation is significant at the 0.01 level (2-tailed)

The results show a very good correlation between the key practice classifications for the two segments, at significance levels much lower than 0.01%, demonstrating content validity.

2.4.2 Predictive validity

The reliability capability measuring instrument will have predictive validity if the evaluation scores for different companies are correlated with the actual reliability of their products [63]. This requires correlating the field reliability of the products supplied by a company with the maturity score obtained from an evaluation. Unfortunately, this data is extremely difficult to obtain. As an alternative, we rely on content validity and construct validity instead. As per Nunnally [60], "Even though a test that is used specifically for a prediction function should be validated as such, the only recourse is to rely heavily on content validity and construct validity instead. The reason is that in many cases a test must be selected for use before there is an opportunity to perform studies in which it is correlated with a criterion. In many performance situations, the criterion measure might not be available for years, or the ones that are available are obviously biased in one way or the other or are highly unreliable."

2.4.3 Construct validity

A measuring instrument has construct validity if it measures the trait (theoretical construct) that it was designed to measure [66]. The construct validity of each key practice can be evaluated using factor analysis. Factor analysis validates a scale (key practice) by demonstrating that its constituents (reliability tasks) load on the same common factor. If all the tasks listed under a key practice load on a single factor, they measure the same trait. Factor analysis and construct validity have long been associated with each other, and construct validity is also sometimes called "factorial validity" [75][76]. Additional description of the theory behind factor analysis is provided in Appendix-7.

For the analysis of the data obtained through the survey, each key practice is treated as a separate measure of an organizational trait. Two factor analysis methods, Principal Component Analysis (PCA) and Principal Axis Factoring (PAF), were used for this verification, since there is enough evidence to suggest that nearly all factoring methods provide the same results if there are really clear groupings of variables in a correlation matrix [60][77]. Cattell's scree test criterion was used to select the number of factors to be extracted. In this test, the successive eigenvalues of the factors are plotted, and the point where the plot abruptly levels out is noted. Only the factors before the leveling-off point are extracted [71][78]. A representative scree plot for principal axis factoring of RIMP is shown in Figure 5.

[Figure 5: Scree plot for PAF analysis of tasks under reliability improvements]
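A hedged sketch of the principal component calculation behind the scree test and the loadings discussed next is given below. It works directly from the task correlation matrix with numpy; the stand-in data is random, so it only illustrates the mechanics (eigenvalues for the scree plot, loadings checked against the 0.3 cutoff), not the study's results.

```python
import numpy as np

def pca_eigen_and_loadings(ratings):
    """Eigenvalues and principal component loadings for the tasks of one key practice.
    Loadings are the eigenvectors of the task correlation matrix scaled by the square
    roots of their eigenvalues, i.e. the task-factor correlations."""
    ratings = np.asarray(ratings, dtype=float)
    corr = np.corrcoef(ratings, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]            # largest eigenvalue first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    loadings = eigvecs * np.sqrt(eigvals)
    return eigvals, loadings

rng = np.random.default_rng(1)
ratings = rng.integers(1, 6, size=(211, 11))     # stand-in: 211 respondents, 11 tasks
eigvals, loadings = pca_eigen_and_loadings(ratings)
print(eigvals)                 # plot against factor number to obtain the scree plot
# Signs of a component are arbitrary; the magnitude is what the 0.3 cutoff applies to.
print(np.abs(loadings[:, 0]) < 0.3)              # flags tasks failing the 0.3 criterion
```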
Using this criterion, it was found that only one factor could be extracted for each key practice for both types of analyses (PCA and PAF). Task 8-11 was excluded from this analysis. The outputs from factor analysis include factor loadings for each measurement item and eigenvalues for each extracted factor. The factor loadings are the correlation coefficients between the variables (measurement tasks) and the identified factors. The eigenvalue for a given factor measures the variance in all variables that is accounted for by that factor. In factor analysis, loadings of 0.3 or larger are regarded as significant [60][61][71][78].

Factor analysis using both PCA and PAF was used to determine the factor loadings, and the results from the two techniques were similar. The values of the factor loadings from the two analyses for all the tasks are included in Appendices 8 and 9. The summary of the results from principal axis factoring (Table 5) shows that tasks 1-04 and 4-06 should be deleted, since they do not have factor loadings above the recommended significance value of 0.3 on their respective factors. After eliminating tasks 1-04 and 4-06, coefficient alpha values were re-calculated for RRP and RTST, and were found to be 0.784 and 0.857, respectively.

Table 5: Summary of results from Principal Axis Factoring (PAF)

  Key practice   Range of factor loadings   Tasks with loading < 0.3
  RRP            0.238 - 0.640              1-04 (0.238)
  TAD            0.367 - 0.698              None
  RA             0.464 - 0.652              None
  RTST           0.284 - 0.753              4-06 (0.284)
  SCM            0.377 - 0.715              None
  FDTA           0.560 - 0.763              None
  VAV            0.393 - 0.778              None
  RIMP           0.495 - 0.715              None

2.5 Weighting factors for reliability tasks

The validation process resulted in a list of 88 tasks that can be used for reliability capability evaluation. The validation process, however, does not provide any information on the relative importance of these tasks for each key practice. This importance can be expressed in the form of weighting factors assigned to the reliability tasks during an evaluation. These weighting factors can also be used for assigning tasks within a key practice to different levels of maturity.

Through factor analysis, it was found that each key practice represents a single factor, or organizational trait. Since a factor is a linear combination of the variables that load significantly on it, each key practice can be written as a linear combination of the tasks that load significantly on it:

    A = w_1 a_1 + w_2 a_2 + \cdots + w_k a_k

where A is the score on a key practice, a_i are the scores on the individual tasks, and w_i is the weighting factor assigned to the i-th task. The factor loadings for the tasks under the different key practices obtained from Principal Component Analysis (PCA) can be used as the weighting factors in the above equation. For each key practice, the factor loadings were scaled such that the minimum weighting factor for any task became 1. The factor loadings and weighting factors for each reliability task are included in Appendix-10. Weighting factors were obtained for the 88 tasks, excluding tasks 1-04, 4-06 and 8-11. Table 6 provides the range of weighting factor values and the sum of the weighting factors for the tasks under each key practice. The weighting factors obtained for the 88 reliability tasks represent their relative relevance for a key practice. These factors can be used for quantitative reliability capability evaluation, and for creating quantitative comparisons among companies.
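The scaling step described above is simple enough to state in a few lines. The sketch below converts first-component PCA loadings into weighting factors (smallest weight scaled to 1) and forms the weighted key practice score A = w1 a1 + ... + wk ak; the loading values shown are illustrative, not the values in Appendix-10.

```python
import numpy as np

def weighting_factors(loadings_on_factor):
    """Scale the PCA loadings so that the smallest weighting factor in the practice is 1."""
    loadings = np.abs(np.asarray(loadings_on_factor, dtype=float))
    return loadings / loadings.min()

def key_practice_score(task_scores, weights):
    """Weighted key practice score A = w1*a1 + w2*a2 + ... + wk*ak."""
    return float(np.dot(task_scores, weights))

loadings = np.array([0.64, 0.55, 0.48, 0.34])    # illustrative loadings for four tasks
weights = weighting_factors(loadings)            # smallest weight becomes 1.0
max_score = weights.sum()                        # best-in-class score for this practice
print(weights, max_score)
```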
The sum of the weighting factors for all tasks indicates the maximum score that a company can obtain from an evaluation to be regarded as following best-in-class reliability practices. Based on these weighting factors, a company can be assigned scores from an evaluation. The obtained scores can be used to build a bar chart and a radar chart as graphical illustrations of the evaluation results and for comparative analysis among companies.

Table 6: Weighting factors for tasks under different key practices

  Key practice   Number of tasks   Range of weighting factors   Sum of weighting factors
  RRP            11                1.00 - 1.88                  17.00
  TAD            10                1.00 - 1.70                  14.57
  RA             11                1.00 - 1.32                  13.17
  RTST           12                1.00 - 1.66                  16.06
  SCM            15                1.00 - 1.78                  23.18
  FDTA           11                1.00 - 1.29                  12.79
  VAV             8                1.00 - 1.74                  12.68
  RIMP           10                1.00 - 1.35                  12.07
  Total          88                                             121.52

The weighting factors for the tasks indicate that the presence of a separate reliability department within an organization is least important for product reliability planning, whereas the presence of a reliability plan with details on reliability analysis and testing is most critical. The ranking for training and development related tasks shows that the presence of formally trained reliability engineers and the commitment of an organization to reliability training of its employees are also critical. The weighting factors for supply chain management related tasks show that product sustainment through PCN and obsolescence tracking is more important than vendor and sub-contractor selection during product development. Analysis of the weighting factors also reveals that identification of failure mechanisms is critical for improving product reliability: potential failure mechanisms should be identified during reliability analysis, reliability testing should be based on these mechanisms, and the failure distributions used to determine product reliability should be mechanism dependent. Understanding failure mechanisms for products, and the associated follow-up activities, appears at the top in most key practices.

2.6 Conclusions

This chapter used the statistical methods suggested in the field of psychometrics for validating the key reliability practices and associated reliability tasks proposed earlier for the reliability capability evaluation model. A survey questionnaire was used to obtain relevance ratings for reliability tasks divided among eight key reliability practices. Item analysis, Cronbach's alpha calculations, the Q-sort method and factor analysis were used to demonstrate the internal consistency and validity (content and construct) of the key practices and associated tasks. Factor loadings obtained from the factor analysis were subsequently used to develop weighting factors for reliability tasks useful for quantitative assessment. Item analysis resulted in the elimination of one task (8-11), since it was found to correlate better with a different key practice than the one to which it was assigned. Cronbach's alpha coefficient values were found to exceed the recommended value of 0.7 for each key practice. The internal consistency coefficient (also called the "reliability coefficient" in psychometric parlance) for the entire reliability capability measuring instrument was found to be 0.972. This value indicates that the key reliability practices and included tasks are necessary and sufficient to evaluate reliability capability. Content validity of the measuring instrument was demonstrated using the Q-sort method. Factor analysis was used to demonstrate construct validity.
Two tasks (1-04 and 4-06) were found to have factor loadings below the recommended lower limit of 0.3, and were deleted from the model. Weighting factors were then obtained for the remaining 88 tasks using factor loadings from Principal Component Analysis (PCA). The list of reliability tasks and the corresponding weighting factors is provided in Appendix-10. The 88 reliability tasks validated in this chapter can be used by decision makers and practitioners to assess the status of the reliability management practices within their organization and to direct improvements. The sum of the weighting factors for each key practice, and in turn the sum of the weighting factors across all key practices, is the maximum score against which electronics companies can be benchmarked during an evaluation of reliability capability. The weighting factors and the scoring scheme are very useful for prima facie risk assessment during supplier and sub-contractor selection. Graphical tools like bar and radar charts can be used for comparative analysis among companies.

Chapter 3 CAPABILITY MATURITY LEVELS

This chapter introduces the concept of maturity and presents the criteria for assigning different capability maturity levels to the reliability key practices discussed in the first chapter.

3.1 Introduction

Reliability is the ability of a product or system to perform as intended (i.e., without failure and within specified performance limits) for a specified time, in its life cycle application environment. To produce high value products with low life cycle costs, companies must include reliability in the product development process to reduce the probability of failures that may lead to increases in costs (warranty, schedule, market, or liability) or cause public hazards. Reliability capability is a measure of the practices within an organization that contribute to the reliability of the final product, and the effectiveness of these practices in meeting the reliability requirements of customers. The evaluation of reliability capability is based on a set of eight key reliability practices, which fulfill the objectives for a reliability program as per IEEE Standard 1332. These key practices encompass all aspects of operation in a company from the product reliability perspective. Appendix-1 lists the 91 reliability tasks that were identified as critical for reliability. This chapter illustrates the use of these tasks in assigning capability maturity levels to the different key practices.

3.2 Maturity levels

Maturity is "the state of being fully grown or developed" [59]. From a reliability perspective, maturity implies that the reliability practices within a company are well understood, are supported by documentation and training, are being continually monitored and improved by their users, and are effective and efficient. In my model, the reliability capability of a company is assigned one of five levels of maturity that represent stages in the evolutionary transition of a company. Some of the nomenclature is adapted from the Software Engineering Institute's (SEI's) Capability Maturity Model (CMM) [9]. Associated with each level are reliability tasks that should be conducted by a company, as shown in Table 7. The assignment of tasks to increasing levels of maturity is consistent with the weighting factors described in Chapter 2, i.e., tasks with higher weighting factors within each key practice are assigned as requirements at progressively higher levels of maturity.
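To make the level-assignment rule described in the next paragraph concrete (a level is attained only if its required tasks and those of all lower levels are fulfilled), here is a minimal sketch. The task identifiers and the per-level groupings are placeholders, not the actual assignments in Table 7.

```python
# Placeholder requirements: level -> set of task IDs required at that level.
REQUIRED_TASKS = {
    2: {"1-01", "1-02"},
    3: {"1-03", "1-05"},
    4: {"1-06", "1-07"},
    5: {"1-08", "1-09"},
}

def maturity_level(performed_tasks):
    """Highest level (1-5) whose requirements, and those of all lower levels,
    are satisfied; level 1 ("solely reactive") has no task requirements."""
    performed = set(performed_tasks)
    level, required_so_far = 1, set()
    for lvl in sorted(REQUIRED_TASKS):
        required_so_far |= REQUIRED_TASKS[lvl]
        if required_so_far <= performed:
            level = lvl
        else:
            break
    return level

print(maturity_level({"1-01", "1-02", "1-03", "1-05"}))   # -> 3 ("defined")
```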
The assignment of reliability tasks was reviewed by reliability researchers and reliability professionals from the electronics industry. For a company to be assigned a level of maturity, the requirements listed at that level and at all lower levels need to be fulfilled.[4] The generic definitions of the maturity levels are provided below.

[4] An exception arises when a reliability task conducted at a lower level of maturity is made redundant by a task at a higher level. For example, under reliability analysis, at level 2 only point reliability estimates are made for products, while at level 3, by making reliability predictions in the form of distributions, the need to make point estimates is precluded.

3.2.1 Solely reactive

The "solely reactive" level is defined by the absence of the qualities linked to the higher levels. Companies at this level are essentially ad hoc in their approach to reliability. These companies are characterized by a lack of written procedures and an ad hoc, or sometimes chaotic, nature of design, manufacturing and reliability practices. The reliability practices, if any exist, are constantly changed or modified as a reaction to crisis situations. Reliability performance depends primarily on the capabilities and motivation of individuals, in the absence of any effort at the organizational level. As a consequence, these companies generally produce products with unstable reliability.

3.2.2 Repeatable

The "repeatable" level is characterized by consistent and repeatable design, manufacturing and reliability practices. At this level, reliability practices are disciplined and successes can be repeated. Planning and managing of new products is based on precedents or prior experience with similar products. The company is able to satisfy written customer requirements. Practices that satisfy established standards or that have become accepted by industry are repeated. These companies are able to deliver products that can show conformance to codes, standards or requirements. However, there is little or no data on the actual reliability of the products. Reliability activities like testing are generic for all products, and are not tailored for specific applications. Reliability of the products is not assessed based on an understanding of the actual lifecycle conditions.

3.2.3 Defined

Companies at the "defined" level understand and define the reliability requirements and goals for their products. There is standardized and consistent documentation for reliability activities, and a common understanding among employees about their roles and responsibilities. At this level, specific reliability training is provided to reliability engineers and managers to ensure that employees have the knowledge and expertise to fulfill their assigned roles. These companies are responsive to test and field failures and conduct analysis of all failures. Companies at this level have established practices to satisfy initial product reliability requirements, but their practices are not mature enough to make design changes in existing products. These companies have limited ability to use feedback to initiate reliability improvements in products.

3.2.4 Managed

At the "managed" level, companies change product designs from reliability considerations. A documented reliability plan includes a schedule of product-specific reliability activities. These companies can improve reliability by changing product designs to achieve desired reliability targets.
Impact of changes in reliability requirements or the general operating environment also initiates a product design change. All the failure mechanisms affecting the reliability of the products are investigated and documented. The major improvement over a level-3 company is that defined reliability practices are used to influence product designs during development as well as during the rest of the product lifecycle. These companies are also able to successfully use their supply chain members in ensuring the reliability of products. They create and update a select list of parts and suppliers based on defined criteria, and the criticality of components used in design is quantified. These companies lay down requirements for all reliability activities, and through audits or reviews ensure that these are met. However, the lessons learned are used to make design changes for existing products only.

3.2.5 Proactive

The "proactive" level companies are the best-in-practice companies. They are characterized as being responsive, adaptive and proactively focused on continuous reliability improvement across product lines. These companies do not use experience only to correct problems; they also change the nature of the reliability practices that they use. The feedback from different stages of a product life cycle, such as predictions, simulations, testing, analysis and field performance, is disseminated widely throughout the company. The lessons learned from the feedback are incorporated at the development phase of new products. The feedback not only influences all the manufactured products but also impacts the reliability management process. In these companies, improvements can occur through incremental advances in the existing reliability practices or through innovations using new technologies and methods. Innovations in the design of products, as well as in the manufacturing processes, that exploit the best reliability engineering practices are identified and transferred throughout the company.

Table 7: Requirements definition at different maturity levels for key practices

Reliability requirements and planning; Training and development

  Level 1: Solely reactive
    - Reliability plans or requirements that exist are ad hoc, and are changed continuously.
    - Only some informal on-the-job training is provided to employees.

  Level 2: Repeatable
    - A separate reliability department exists.
    - Reliability requirements are based on customer inputs and specifications for competitive products.
    - Reliability goals are expressed as point estimates.
    - New technologies, modeling or analysis techniques that impact reliability are constantly tracked, but are not used to make any changes.
    - Some reliability training is provided to personnel, including those who are not directly associated with the product.

  Level 3: Defined
    - Reliability goals are expressed as a distribution instead of a point estimate.
    - Reliability goals are based on specific lifecycle conditions for a product.
    - Reliability engineers are trained in statistical methods for reliability prediction and data analysis.
    - Training is provided to business managers to appreciate how reliability impacts business.

  Level 4: Managed
    - Reliability goals are established for sub-assemblies and components in a product.
    - Reliability goals and plans are documented for all products, including the schedule of activities.
    - A reliability plan exists and includes a list of required resources like materials, personnel and equipment.
    - Reliability engineers are trained to identify failure modes and mechanisms in a product design.
    - Reliability engineers are trained in root cause analysis and suggesting corrective actions.
    - A generic reliability training plan or program exists.

  Level 5: Proactive
    - Reliability plan includes details on reliability analysis and testing for specific products.
    - Contingency planning is used and decision criteria for altering the reliability goals are identified.
    - Reliability plan includes a process for communicating results from reliability activities.
    - Formally trained reliability engineers are part of the reliability department.
    - Training is provided to reliability managers on how specific reliability activities can impact reliability.
    - Proactive support is provided by top management for reliability training.

Reliability analysis; Reliability testing

  Level 1: Solely reactive
    - Analysis of product design is minimal, mainly based on manufacturing issues.
    - Only some functional tests are conducted to determine product operation prior to shipping.

  Level 2: Repeatable
    - Point reliability predictions are made for products using modeling or reliability prediction handbooks.
    - Lifecycle costs of a product are optimized based on reliability vs. cost trade-offs.
    - Reliability testing is based on customer specifications.
    - Products are subjected to burn-in or screening before shipping.
    - Design verification and qualification tests are conducted for all products.

  Level 3: Defined
    - Materials used in product design are characterized.
    - Adherence to design rules is verified.
    - The warranty cost estimates and spares provisioning are made based on reliability predictions.
    - Tests to identify design limits and destruct limits are conducted for all products.
    - Reliability testing based on generic specifications is conducted for all products.

  Level 4: Managed
    - Potential failure modes and single points of failure are identified for products.
    - The criticality of components in a product design is quantified.
    - Reliability predictions are provided as distributions, and not as point estimates.
    - Detailed reliability test plans exist, including sample sizes and confidence intervals.
    - Accelerated tests are tailored for expected failure mechanisms in full lifecycle conditions for specific products.
    - Reliability test results are used to make design changes in products prior to production.

  Level 5: Proactive
    - Potential failure mechanisms are identified for products.
    - Critical failure modes and mechanisms are identified for all products.
    - Reliability analysis is used to design specific reliability tests for a product.
    - The reliability test data is analyzed to determine statistical failure distributions for products.
    - Models for specific failure mechanisms are used to make reliability predictions for products.
    - Reliability test requirements for parts supplied by vendors are modified and updated.

Supply chain management; Failure data tracking and analysis

  Level 1: Solely reactive
    - Components are procured from any source depending upon necessity.
    - Failures during functional testing are only recorded as yield data.

  Level 2: Repeatable
    - Component engineers manage the parts selection and management process.
    - Components are procured from multiple suppliers (with some certification) without any further evaluation.
    - Techniques like uprating are used for qualifying parts for use outside specifications.
    - Pareto charts based on failure sites and failure modes are created and updated regularly without any further action.

  Level 3: Defined
    - Contractual agreements containing quality and reliability requirements are signed with suppliers.
    - Vendor or supplier assessments or audits are conducted.
    - Incoming lots are rejected based on the supplier's reliability test data.
    - Pareto charts based on failure mechanisms are created and updated regularly without any further action.
    - Failure and root cause analysis is conducted on failed products from all sources, from manufacturing to field.

  Level 4: Managed
    - Incoming lots are rejected based on the supplier's manufacturing quality data.
    - Technology maturity is considered during the selection of components.
    - Approved parts and supplier lists are created and maintained based on qualification reports and audits.
    - Parts are procured only from authorized distributors and not from part brokers.
    - All manufacturing defects, production testing failures and field failures are tracked and recorded in a database.
    - Failure analysis reports detailing underlying failure mechanisms are generated for all products.
    - Failure mechanisms are correlated with specific materials or processes.

  Level 5: Proactive
    - A supplier rating system is created and maintained.
    - Product change notices are evaluated for their effect on manufacturability and product reliability.
    - Component traceability markings are tracked to identify any changes.
    - Part obsolescence is tracked to ensure continued supply of parts.
    - Reliability testing failures are tracked and recorded in a database.
    - Traceability of a failed part is ensured from manufacture to failure.
    - A database of corrective actions based on failure modes and mechanisms is maintained and updated regularly.

Verification and validation; Reliability improvements

  Level 1: Solely reactive
    - The company is in the process of getting some external certification.
    - Improvements are made only in processes and not in product designs.

  Level 2: Repeatable
    - External certifications like ISO are obtained for organizational processes, including the reliability activities.
    - Corrective actions based on field failure modes are implemented.
    - Product reliability requirements are updated due to business or marketing considerations.

  Level 3: Defined
    - Warranty cost estimates and spares provisioning are modified based on field returns.
    - Engineering change notices for reliability improvement are issued and implemented.
    - The bill of materials is modified to exclude parts that have reliability problems in the field.
    - Recurrence of identified failures is prevented in future products.

  Level 4: Managed
    - The statistical failure distributions used for reliability predictions are modified based on field failure data.
    - Reliability predictions are updated for the products based on field failure distributions.
    - Internal audits are conducted for reliability planning, analysis and testing activities.
    - New modeling and analysis techniques are evaluated and implemented to improve product reliability.
    - Changes in the lifecycle operating environment initiate a design change for a product.

  Level 5: Proactive
    - Reliability test conditions are modified for current and future products based on observed field failure mechanisms.
    - The failure modes and mechanisms database is updated based on new modes and mechanisms observed in the field.
    - New technologies are evaluated and implemented to improve product reliability.
    - Failure information is included in updates to the design rules and process control requirements.
    - Corrective actions based on field failure mechanisms are implemented.

3.3 Use of radar charts for supplier selection

The principal method of using radar charts is well established in economics and management. These charts integrate four or more scales into one radial chart that looks similar to a radar screen or a spider web, hence the name. This approach is also sometimes called the Surface Measure of Overall Performance (SMOP) approach [79]. Connecting the performance or maturity levels attained in each dimension of the radar chart by straight lines produces an angular plane figure. The surface area of this figure can be calculated to give a dimensionless indicator of the overall performance achieved in all measured dimensions.

For the reliability capability maturity model, it is not appropriate to sum up a supplier's individual scores on the different key practices into a single total, and to use this figure to compare against the scores of other suppliers. First, the customer may not require the same level of capability maturity for all key practices from its suppliers. Second, each key practice may carry an unequal weighting in terms of its contribution to the overall reliability capability. To compare suppliers, and to indicate the extent of the match between the customer's requirements and a supplier's capabilities in the various key practices, the Surface Measure of Overall Performance (SMOP), or radar chart, approach can be used [79].

The first step in using radar charts is to create a target reliability capability octagon for the customer, based on the maturity level required for each key practice (Table 7). The required maturity levels are plotted for each key practice along the eight different axes, and the plotted points are joined to form the customer's requirement octagon for reliability capability. In the second step, reliability capability octagons are drawn for each supplier following the same procedure. Figure 6 illustrates the comparison of two suppliers against a customer's requirement octagon. Supplier "A", whose octagon has the larger area of overlap with the customer's octagon, is selected.

[Figure 6: Using radar charts for supplier selection: the octagons of Supplier "A" and Supplier "B" plotted on the eight key practice axes (RRP, TAD, RA, RTST, SCM, FDTA, VAV, RIMP) against the target reliability capability octagon for the customer]

The radar chart approach has four main goals. The first is the visualization of interrelated performance measures through standardized scales. The second is to produce an effective and revealing description of selected performance dimensions using one synthetic indicator, the surface area of the radar chart. The third is the ability to analyze change in overall performance between two points in time; the increase (or decrease) of the surface area indicates the improvement (or deterioration) in total performance, independent of countervailing effects like improvement on one scale and deterioration on another. The fourth is that the shape of the radar chart and the overall surface area measure can be used for comparisons among companies.
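The surface-area and overlap comparison described above can be sketched as follows. The area formula is the standard polygon area for points on equally spaced axes; the overlap is approximated here by taking the per-axis minimum of the two octagons, which is a simplification rather than an exact polygon intersection, and the maturity levels used are made-up examples.

```python
import numpy as np

def radar_area(values):
    """Surface area (SMOP indicator) of a radar-chart polygon whose vertices lie on
    n equally spaced axes at the given distances and are joined by straight lines."""
    v = np.asarray(values, dtype=float)
    n = len(v)
    return 0.5 * np.sin(2.0 * np.pi / n) * np.sum(v * np.roll(v, -1))

def overlap_fraction(supplier, customer):
    """Approximate area overlap between a supplier octagon and the customer's target
    octagon, using the per-axis minimum, as a fraction of the customer's target area."""
    s, c = np.asarray(supplier, dtype=float), np.asarray(customer, dtype=float)
    return radar_area(np.minimum(s, c)) / radar_area(c)

# Illustrative maturity levels on the axes RRP, TAD, RA, RTST, SCM, FDTA, VAV, RIMP.
customer   = [4, 3, 4, 4, 3, 4, 3, 4]
supplier_a = [4, 4, 3, 4, 3, 4, 3, 3]
print(f"Overlap with the target octagon: {overlap_fraction(supplier_a, customer):.0%}")
```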
3.4 Quantitative reliability capability evaluation using weighting factors

The weighting factors obtained for the 88 reliability tasks (Appendix-10) represent the relative relevance that respondents assigned to them. These factors can be used for quantitative reliability capability evaluation, and for creating quantitative comparisons among companies. Based on these weighting factors, a company can be assigned scores from an evaluation. The sum of the weighting factors for all tasks indicates the maximum score that a company can obtain from an evaluation to be regarded as following best-in-class reliability practices. The obtained scores can also be used to create a bar chart and a radar chart as graphical illustrations of evaluation results and for comparative analysis among companies.

The radar charts shown in Figure 6 have equally spaced marks (level indicators) for the five maturity levels along the eight key reliability axes. In this representation, it is assumed that the tasks at all five maturity levels are equally important. However, this is not borne out by the weighting factors that were calculated for the reliability tasks in Section 2.5. The first modification to these radar charts is that the first maturity level (solely reactive) collapses to a point at the origin of the axes, since no reliability tasks are associated with it. The second modification is that the level indicators for the remaining four levels along all eight axes are no longer equally spaced. The weighting factors calculated for the reliability tasks (Appendix-10) and the assignment of tasks to maturity levels (Table 7) were used to calculate and plot the level indicators for all key practice axes. The weighted maturity level scores based on the task assignments for the different key practices are shown in Table 8. The resulting radar chart, along with the irregular octagons representing the four maturity levels, is shown in Figure 7.

Table 8: Weighted maturity level scores for different key practices

  Key practice   Level 2 "Repeatable"   Level 3 "Defined"   Level 4 "Managed"   Level 5 "Proactive"
  RRP             3.21                   6.26                11.45               17.00
  TAD             2.26                   5.11                 9.62               14.57
  RA              2.08                   5.52                 9.30               13.17
  RTST            3.18                   6.68                11.11               16.06
  SCM             3.81                   8.23                14.61               23.18
  FDTA            1.00                   4.33                 9.02               12.79
  VAV             1.00                   2.51                 7.48               12.68
  RIMP            2.10                   5.68                 8.11               12.07
  Level total    18.64                  44.32                80.7               121.52

[Figure 7: Weighted radar chart showing different maturity levels]

This radar chart, rather than the previous ones, provides a better comparison of requirements against supplier capabilities. The radar chart also provides a quantitative metric for reliability capability evaluation by comparing the weighted area (not the weighted score) that a company obtains during an evaluation with the maximum area that it could obtain, as shown in Figure 8. During an evaluation, the evaluators use a worksheet in which they rate the performance of all 88 reliability tasks within the company on a three-point rating scale:

1. No evidence of activity
2. Limited evidence of activity / little implementation history
3. Ample evidence of well-established activity

The average ratings of all evaluators and the weighting factors are used to create the reliability capability octagon for the company, which is used as an absolute measure of its reliability capability or for comparison with other companies. The company can also be provided with a list of its ten best and ten worst reliability tasks identified during the evaluation. For this purpose, the following two indices are used:
1. Performance index: The performance index is the ratio of the contribution of a task to the score for a key practice obtained during the evaluation, to the contribution of that task towards the total score for the key practice in the model. This index therefore represents the relative performance of the tasks within a key practice. Tasks are arranged in ascending order of performance index to order the reliability tasks from worst performed to best performed.

   Performance index for a task = (Score for the task / Score for the key practice to which it belongs) / (Weighting factor for the task / Sum of the weighting factors for the key practice to which it belongs)

2. Task importance: The task importance of a task is the ratio of its weighting factor to the sum of the weighting factors for all tasks. This index is used to distinguish between two or more tasks that have the same performance index. In case of a tie, tasks can be arranged in descending order of their task importance to obtain the worst-to-best classification of tasks.

   Task importance for a task = Weighting factor for the task / Sum of the weighting factors for all 88 tasks

[Figure 8: Radar chart showing an example reliability capability evaluation result: the company's weighted octagon compared with the maximum octagon on the eight key practice axes (area = 41% of the maximum)]

3.5 Conclusions

This chapter completes the description of the proposed reliability capability maturity model, consisting of eight key reliability practices and five levels of maturity. In this chapter, the five levels of maturity, along with their characteristics, have been discussed. The five levels represent stages in the evolutionary transition of a company. To assign a maturity level to a key practice, requirements in terms of reliability tasks have been enumerated. An assessment based on the key practices can place a company at one of the five maturity levels. A quantitative reliability capability assessment process and the use of radar charts for supplier selection based on maturity levels were also presented.

The reliability capability maturity model can also help to establish reliability management practices for use by designers, suppliers, customers, and independent authorities. It can produce increased customer satisfaction, provide competitive opportunities, and shorten the product development cycle. It is expected that this model can also be used to identify shortcomings in the reliability program of a company, which can be overcome by subsequent improvement actions.

Chapter 4 EVALUATION PROCESS: CASE-STUDY

This chapter presents a procedure for evaluating and benchmarking the reliability capability of electronics companies. A case study corresponding to reliability capability benchmarking of an electronics company is also presented.

4.1 The evaluation process

The reliability capability evaluation process comprises three phases. In the first phase, initial information about the process is sent to the company being evaluated. A reliability capability evaluation questionnaire is included for the company to answer and to collect evidence supporting the answers. In the second phase, evaluators visit the facility and verify the responses to the questions against the supporting evidence. The third phase involves the compilation of an evaluation report.

The first phase is initiated by sending information about the concept of reliability capability and maturity evaluation to the company being evaluated. This helps the personnel within the company to appreciate the benefits of such an evaluation and enables them to answer the questions asked during the evaluation with a positive frame of mind.
A questionnaire for the evaluation is sent at least twenty days before the evaluators visit the company. The evaluation consists of nine sub-sections: eight sections pertaining to each of the key practices essential to reliability achievement, and one section on background information about the company. A schedule for the second phase, involving the on-site evaluation, is also included. The respondents are required to provide "objective evidence"[5] in support of their responses. The evidence may be in the form of data, reports, policy drafts or other documents.

[5] Objective evidence is any piece of information that leads two or three independent evaluators to the same conclusion.

In the second phase of the evaluation, the evaluators visit the facility. The evaluation team usually includes one representative from the company. The company presents an overview of its reliability objectives and practices. The evaluation team then reviews the responses to the questionnaire and the supporting evidence. Follow-up questions are asked and additional supporting information is identified to clarify some responses and obtain the correct information. Evidence is sought and judgments are made based on:

1. Commitment to perform (leadership, resources)
2. Ability to perform (experience, training, tools)
3. Methodology used to perform (logic, framework, planning)
4. What has been performed (tasks, activities)
5. How the results of product performance are used (integration at the organizational level)

In the third phase, the company is provided with a draft evaluation report summarizing the evaluation team's observations and recommendations for reliability improvement. The company is typically given one week to review the draft report and provide comments. A final report incorporating the feedback comments and clarifications is sent to the company, usually within four weeks after the evaluation. Based on the documented information and responses received, a reliability capability level is assigned to the company.

4.2 Case study: a defined company

To assess the practicality of the reliability capability evaluation process, and as a part of the reliability capability maturity model development, four case studies were conducted. The details of one of the case studies are presented here. This section provides a brief profile of the company in terms of its reliability activities, followed by the results of the evaluation and the recommendations made.

This company is a leading manufacturer of electronic control products, providing thousands of products to customers in many countries. The warranty on the products usually ranges from 1 to 2 years, with a limited warranty of 5 years provided for some products. Most of their products are high-end products with specific reliability requirements, established based on past experience with similar products and customer feedback questionnaires. Reliability tasks are part of a quality plan, which is different for each business unit. A custom quality plan is generated for each product keeping in view the requirements of the customer. Prior to implementation, the quality plan is reviewed by a cross-functional team, including people dealing with reliability. The company has reliability testing and failure analysis facilities, although some testing work is also outsourced to leading test labs. The company does not offer specific in-house training to its employees in broad areas of reliability.
However, some of the employees have had outside training in specific topics like six-sigma, the physics-of-failure (PoF) approach, and highly accelerated life testing (HALT).

The company conducts very limited failure modes and effects analysis (FMEA) for their product designs. They believe in designing systems and using parts that are tested to work beyond the expected usage cycles in the application environment. They feel that by adopting this approach, predicting reliability for their products becomes unnecessary. However, the company does have regular meetings with their service departments to inform them about potential component failures. Yearly meetings are also held to plan for reductions in field returns and component failure rates.

The company designs most of their products for a worst-case environment for a nominal ten-year useful life, and to have cumulative failures of less than a fraction of a percent over the life of the product. Most of the products are designed to internal specifications. Internal derating guidelines and thermal imaging are used in design. Materials used in product manufacture are also characterized for their heat resistance at elevated temperature usage. Any design changes made during a product development process are followed by re-qualification of the product. An internal product testing guideline has been developed to test a product design. The guideline incorporates tests including HALT, temperature cycling, mechanical cycling, elevated temperature tests, maximum load testing, minimum load testing, and electrostatic discharge (ESD) resistance tests. A standard series of tests is conducted for all products within a business unit. The company also conducts 100% end-of-line functional testing for their products. A documented new product checklist is completed before any product goes into mass production.

The company is proficient at understanding and monitoring the life-cycle application conditions for their products. In some products, built-in software is used to assess the usage. The company also conducts simulations of the application and collects customer surveys to obtain this information. The purpose of these activities is to match application requirements with the conducted tests. The company is also currently looking at methods for stress-health monitoring.

An approved vendor list is used for parts selection. This is accompanied by regular supplier audits conducted by the quality assurance group and statistical multiple-lot sample analysis of incoming parts and materials. The sample analysis includes mechanical and electrical testing. The selection of parts is generally made by the design group; the purchasing group is only used to keep track of schedule and cost issues. Suppliers of critical parts are controlled directly by engineering; otherwise, after initial selection, purchasing maintains control to ensure scheduled supplies. The company generally prefers to single source parts, except for some commodity items that are multiple sourced. The company very rarely uses parts outside their datasheet or supplier specifications. They use an internally maintained database to specify design ratings for supplied parts. All the parts used on existing products are approved for use on other products. Repeated "failures" of parts from a supplier will initiate action at the corporate level through the quality assurance department. The action can include exclusion of a supplier from future consideration.
The company relies on its suppliers for the testing of parts and for providing information about any product changes. The company is currently in the process of developing a new system for assessing and updating the information about the impact of product change notices (PCNs) on their products. They believe in re-engineering or redesigning their products and systems rather than relying on finding obsolete parts for older systems.

The company uses a failure tracking system during and after manufacture. Manufacturing defects are tracked by corporate quality assurance, who may initiate a corrective action in some cases if defect rates are high. The post-warranty service and parts replacement provided by the company to their customers is noteworthy. Field failures are tracked even after the warranty period is over. Information on failures is obtained through a failure hotline, defective returns and warranty returns. All tracked failures are included in a database providing information on the date of manufacture and date of return. However, shipping and sale dates are not tracked. All products that are returned from the field are analyzed. If a new failure mode is found, a new unit is subjected to tests to reproduce the failure. The company uses the data from the field returns database to make improvements in their products by removing the failure causes or defective components. Field failures are tracked through successive generations of products to identify discrepancies. An improvement or deterioration initiates an investigation into the cause of the change. Some reliability tests have been redesigned based on field failures.

4.2.1 Evaluation results and recommendations

It was recommended that the company should increase the education and training of employees responsible for reliability functions in different reliability topics, including component failure mechanisms. Lessons learned from failure analysis could also be incorporated as short courses.

The company should review and update the component derating guidelines for all parts. The older derating guidelines currently used are not useful for new technologies and products. The process by which a supplier obtains derating curves for its parts also requires revision.

The company does not incorporate failure mechanism identification in their reliability tests. The testing conducted is customer driven and focused on testing the operation of the products using power cycling. Although electrical or mechanical failures may be precipitated by these tests, the company does not conduct specific tests for precipitating device-level failure mechanisms in semiconductor devices. The company must design these tests for their products, or have them conducted by their semiconductor suppliers. Generating a repository of cause-and-effect diagrams for the different failure mechanisms affecting their products would also be useful.

There is a need for a better understanding of the life tests conducted by suppliers on parts to determine the service life of these parts under the life-cycle conditions for company products. For example, lifetime information about a part at 150°C may not be enough to obtain information about its expected life at 70°C without any information about the failure mechanism. If the failure mechanism is understood, and the model for the failure mechanism is known, the qualification data from a supplier may supplement the company's test data.
A better understanding of exactly how long a product will work without failure in a particular life-cycle application environment would also be useful for adjusting the warranties of products. Mapping from application conditions to distinct failure mechanisms could be valuable to the company.

The parts database and its use should be evaluated. The database appears to be updated only if a severe problem is observed for some part. The company should routinely review the reliability test data from a part manufacturer, and should also consider not using parts for which no qualification data is provided by the manufacturer. If qualification data for a part cannot be obtained from a supplier, the supplier should be avoided.

Although some tracking is conducted for PCNs, the company should have a cross-functional team to evaluate all PCNs in terms of their impact on reliability. The team can also assess the effect of product changes in terms of the availability and expected obsolescence of parts used in existing product designs. Any issued PCNs should be mapped to potential failure mechanisms in terms of the risks associated with the change of specifications. There should be a further mapping from the PCNs to the bill of materials (BOM) for the company's products. This mapping will ensure that each business unit gets a list of the "critical" PCNs potentially affecting its products.

The company must assess the hazard rate (a possibly non-constant failure rate) of all the field return data to assess trends. This is especially important if an early wearout mechanism arises. The company should also conduct more data analysis and experimentation to assess the actual reliability of their products. This may provide the company with a product differentiation opportunity, which they are currently not utilizing.

The company currently specifies failure modes as the failure causes for semiconductor devices. Understanding the root cause of failures and the associated loads can help to effectively remove problems. A fundamental understanding of failure mechanisms should help to improve the lessons learned program. Designs should be reviewed and changed to ensure that the loads that precipitate a failure mechanism are eliminated or reduced. The company was also advised to assess any manufacturing change within the company, or any manufacturing change made by suppliers of parts, for its potential impact on reliability.

The company has engineers who stay aware of current reliability issues and conduct some studies to assess "unresolved" reliability concerns. For example, the company is addressing lead-free solder reliability challenges. However, a dedicated reliability resource would supplement the knowledge base. The company should utilize its failure analysis laboratory personnel to keep up with industry failure trends on specific parts. There is also a need to stay up to date with current reliability issues with the parts used in products.

4.2.2 Benchmarking

The company has a separate quality plan for each business unit. Reliability tasks for each business unit are part of this quality plan. It uses good quality control processes, complemented by 100% end-of-line functional testing of products. The company has also invested in reliability engineering and created an infrastructure for reliability testing and failure analysis, which is used as per the quality plan for each product.

The company does not have defined testing procedures that are conducted to evaluate or guarantee the reliability of products.
Accelerated testing to prove lifetime reliability for an intended application is not used. Any additional testing is based only on specific customer requirements. The company does not evaluate PCNs in terms of their impact on product reliability; only if a serious problem occurs is an informal discussion (usually verbal) used to determine the cause and the effect. The company does not conduct benchmarking or an internal review of its reliability practices. There is no reliability improvement plan for products, since all products are designed for a life of more than 10 years without an analysis of the actual reliability of the products. The company does not use the knowledge from failure analysis of field returns to improve designs and reliability practices across product lines; only defective components are replaced in new designs.

The characteristics of this company are typical of a company at the "defined" level. According to the characteristics listed above, the company is assigned a Level-3 maturity in its reliability capability.

4.3 Conclusions

In this chapter, a reliability capability benchmarking process was outlined. Based on this process, reliability capability evaluations were conducted for four companies, and the details of one evaluation are presented as a case study. The suggestions and recommendations made in the evaluation reports to the four companies were well received, and steps have already been initiated for improvement. In one of the companies, the reliability department has been re-organized, and more resources and personnel have been allocated to reliability activities. A revised reliability plan is being developed based on our recommendations, and training of personnel in specific reliability topics has been initiated. In a second company, with a better maturity rating, the existing data collection and root cause analysis procedures are being remodeled. The database of lessons learned is being made more comprehensive and is being made available across different product divisions so that the design teams can avoid previous mistakes.

The results of the case studies indicate that a reliability capability evaluation of a company can be conducted not only to assign a maturity level, but also to add value. It was found that an evaluation can help a company understand how to improve the reliability of its products by focusing on the set of activities identified during the benchmarking process.

Chapter 5 CASE-STUDY: PCB ASSEMBLY MANUFACTURER

In this chapter, a methodology is proposed to evaluate the reliability capability and maturity of a printed circuit board assembly manufacturer. A case study of an assembly manufacturer, where the evaluation found problems associated with reliability, is also presented.

5.1 Introduction

In general, companies that sub-contract their printed circuit board assemblies either rely on the stated abilities of the printed circuit board assembly manufacturers, or they conduct audits to ascertain the capabilities of the prospective suppliers. The IPC has developed standards for the evaluation of printed circuit board assemblies as well as assembly manufacturers. The IPC-A-610D standard provides industry-accepted workmanship criteria for electronics assemblies [80]. The IPC-1710A standard, titled "OEM Standard for Printed Board Manufacturers' Qualification Profile", sets the standard for assessing PWB manufacturers' capabilities [81].
provides guidelines to categorize an assembly manufacturer's capabilities and to provide the OEM customer with ?detailed, substantive? information in terms of manufacturing and testing capabilities (site capability), technology profile specifics, and quality profile [82]. 63 The site capability sections of IPC-1720A include information about the assembler regarding types of PCBs assembled, assembly equipment and processes, testing capabilities, product complexities and volumes handled, and overview of quality systems. The section on technology profile provides information like capacity of the assembly site, its revenue distribution among assembly types, plant layout, and approval or certifications for the assembly site. The quality profile section of IPC-1720A includes information on existence of quality programs like receiving inspection, process documentation, subcontractor control, and statistical process control. The possible responses to the questions in IPC-1720A for the first two areas have a multiple-choice format with distinct and well defined responses. However for the section on quality profile, the evaluation is subjective with possible responses like not applicable, not started, approach developed, percent deployed, and percent results. Based on the subjective information provided by assembly manufacturers through the IPC 1720A standard, a number of assemblers can satisfy a customer?s requirements. However, this standard does not provide any quantifiable metric for comparing one supplier from another in terms of their quality or reliability practices. Information on specific reliability tasks is also not included. In order to meet cost and schedule requirements, a method is needed to assess the reliability practices of the assembler. This chapter presents a methodology to evaluate reliability capability for a printed circuit board assembly manufacturer, which can be used for supplier selection. The output of the evaluation methodology is a maturity score that is assigned to an assembler with respect to activities that affect the reliability of the assemblies. A case study is provided to demonstrate the methodology. 64 5.2 Printed circuit board assembly process A printed circuit board is the main constituent for the PCB assembly. The IPC designates printed circuit boards by a number followed by an alphabet based on the component mounting and component type as per definition provided in Figure 9 [82]. When a board assembly is outsourced, it is not a product design but a process which is outsourced. The board design including the components and their layout is provided by the customer. An assembly manufacturer generally does not have or receive information about the application conditions in which assembled boards will be used. They also do not generally evaluate board design based on specific testing or suggest any design changes. However, a customer may require them to conduct tests to ensure the robustness of the assembly process. An assembler can impact the reliability of boards through its assembly process and through other value added services that it provides to its customers. 
Consequently, reliability capability evaluation of assembly manufacturers has to be based on assembly process issues and on the nature of the other value-added services that the assembler can provide.

Figure 9: The IPC printed circuit board designation. A board assembly is designated by a number, 1 (components mounted on only one side of the board) or 2 (components mounted on both sides of the board), followed by a letter: A (through-hole component mounting only), B (surface mount components only), C (simple intermixed through-hole and surface mount assembly), X (complex intermixed assembly: through-hole, surface mount, fine pitch, BGA), Y (complex intermixed assembly: through-hole, surface mount, ultra fine pitch, chip scale), or Z (complex intermixed assembly: through-hole, ultra fine pitch, COB, flip chip, TAB) [82].

Figure 10 shows the activities for the system integrator and the PCB assembly manufacturer [83]-[86]. The items marked with an asterisk (*) can be conducted by either the system integrator or the printed circuit board assembler. A brief description of the different steps is provided below. Pre-assembly inspection of boards and components and post-assembly inspection of interconnections are part of the quality assurance process. Kitting involves gathering all the necessary components for a PCB assembly; the kitting process ensures the suitability of the components to provide reliable component placement on the PCB. Component placement can be manual or automated using pick-and-place machines. Manual or automated inspection can be used before and after component soldering.

Figure 10: Typical printed circuit board assembly process. System integrator activities: requirements capture, PCB design, components procurement*, and PCB procurement*. PCBA manufacturer activities: components procurement*, PCB procurement*, component/PCB inspection and testing, kit preparation, component placement (automatic), component placement (manual if needed), post-placement inspection, soldering (wave/reflow/mixed), post-reflow inspection and cleaning, quality inspection/electrical testing, repair or rework (if required), conformal coating (if used), and reliability testing (if required).

The two major techniques used for attaching components to the PCB (and making the interconnections) are wave and reflow soldering. The former involves passing a circuit board with components assembled on it across a molten wave that adds solder to make the interconnections, while in reflow soldering, solder that is already present on the board is heated to its melting point and, when cooled, provides the attachment. The electrical testing conducted on printed circuit boards falls under three categories: bare board testing, in-circuit testing, and functional testing [87]. Bare board testing is used to test bare boards prior to assembly to find shorts or opens on inner layers or non-compliance with parametric requirements. In-circuit testers (ICT) are used to verify PCB assembly electrical functionality (continuity) and to identify any manufacturing defects (shorts and opens) for subsequent repair and rework. Functional testing is used to verify the performance of a PCB assembly when it is installed in its intended next-level assembly. Rework on a PCB assembly can include correction of defective solder joints, removal and replacement of components, or repair of the circuit board traces. Circuit board modifications may include removal of solder material causing shorts, repair of cracked or open traces, re-attachment of partially lifted pads, or addition of jumpers to create new circuit paths.
Repair or rework, although not desirable may be unavoidable for some assemblies. However, they should be conducted without damaging leads, internal function or structure of the assemblies. Any damage to the land pattern and the substrate, excessive heat exposure to adjacent components and solder joints should be avoided. Reliability testing is used to determine the suitability of assemblies for use under different applications conditions, by exposing them in a test chamber to a combination of 67 worst conditions in which the assembly is expected to operate. This requires definition of the environmental conditions, and determination of the testing parameters for different stresses like temperature, altitude, shock and vibration, humidity, or contamination. When a printed circuit board assembly activity is sub-contracted, the customer sets requirements on the assembly manufacturer. These requirements may include the primary supply requirements, the product manufacturing requirements, and the post- assembly test requirements. Primary supply requirements include cost and schedule specifications, supply documentation, and broad product performance specifications. Manufacturing requirements may include dimensions and tolerances on mechanical and electrical parameters of board characteristics, as well as acceptable assembly conditions including the equipment to be used, quality policies, suppliers? policies, and any applicable industry certifications that are needed. The assembly manufacturer may also be required to provide manufacturing process data, assembly inspection data, or other electrical and reliability testing data to the customer. All these requirements may be part of the supply agreement. These requirements along with the quoted price and schedule can influence the selection of a PCB assembly manufacturer. 5.3 PCB assembler evaluation methodology The generic reliability capability evaluation model consists of eighty-eight critical tasks listed under eight key reliability practices. Based on this model, reliability capability evaluations were conducted for four companies. The evaluation results were very satisfactory and well received by the companies. When the same list of tasks was used to evaluate printed circuit board assemblers, it was found that not all the tasks were 68 applicable for this evaluation. It was realized that this is true for all companies that do not design anything, but only act as manufacturing facilities to which work is outsourced. These companies have no control over the design of the product, and are in-effect told what to do. The circuit board assemblers procure bare circuit boards and components (either themselves or as consignment items from their customers). They are required to assemble the components on the boards using some manufacturing processes. In most cases, they do not have any idea about the functional specifications or the application conditions of the circuit board that they are assembling. Most of the activities, including inspection and testing, are customer driven based on contractual supply agreements. Out of the 88 reliability tasks, only 44 tasks were found applicable for reliability capability evaluation of PCB assemblers. It was found that for some of the applicable tasks, more than one question specific to a PCB assembler should be included. 
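The reduction from the 88 generic tasks to the 44 assembler-applicable tasks can be pictured as a simple filtering of the task model. The sketch below (in Python) is purely illustrative: the task identifiers and applicability flags shown are placeholders standing in for the full mapping, which is defined by the evaluation questionnaire rather than by this code.

```python
# Illustrative sketch of deriving the PCB-assembler question set from the
# generic reliability capability model. Task IDs and applicability flags are
# placeholders; the real model has 88 generic tasks, of which 44 apply.

GENERIC_TASKS = {
    # key practice: [(task id, applicable to a pure manufacturing facility?)]
    "Reliability requirements and planning": [("1-01", True), ("1-05", True), ("1-07", False)],
    "Supply chain management":               [("5-02", True), ("5-09", False), ("5-13", True)],
    "Failure data tracking and analysis":    [("6-01", True), ("6-03", True)],
}

def assembler_tasks(model):
    """Keep only the tasks that remain applicable when the supplier has no
    design responsibility (i.e., work is outsourced to it as a process)."""
    return {practice: [tid for tid, applicable in tasks if applicable]
            for practice, tasks in model.items()}

if __name__ == "__main__":
    subset = assembler_tasks(GENERIC_TASKS)
    for practice, tasks in subset.items():
        print(f"{practice}: {len(tasks)} applicable task(s) -> {tasks}")
```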
Following the development process shown in Figure 11, the reliability capability evaluation methodology for a circuit board assembly manufacturer was developed as a subset of the generic model. The PCB assembler evaluation methodology is divided into two parts. The first part is used to screen assembly manufacturers based on their capabilities to satisfy customer requirements. Any mismatch or non-compliance in this compatibility analysis leads to outright rejection. Full matching between the customer requirements and the manufacturer's capabilities initiates the maturity score evaluation conducted using the second part of the questionnaire. The second part is used to evaluate the reliability capability maturity score. The combined questionnaire is included in Appendix-11.

Figure 11: PCB assembler reliability capability evaluation methodology development process. The development proceeds from reliability objectives, to reliability practices, to reliability tasks, and finally to questions based on the tasks applicable for a PCB assembler.

5.4 Part-1: Manufacturing compatibility evaluation

The first part involves the evaluation of the compatibility between the basic manufacturing requirements of the customer and the capability of the assembly manufacturer. The IPC AQP questionnaire was used as a baseline for developing this part of the questionnaire. The preliminary compatibility assessment is a zero-level screen, since all the assembly manufacturers who pass the compatibility test are assumed to be at the lowest level of maturity with respect to their reliability practices. There are 31 multiple-choice questions in this part of the questionnaire. Most, but not all, of the information for this assessment can be obtained from the IPC-1720A document from the assembly manufacturer. Information which cannot be obtained from the IPC-1720A document includes specialized manufacturing, testing and repair capabilities. Specialized manufacturing includes issues like lead-free assembly, availability of specific soldering capabilities, and capabilities for specific processes like underfill dispensing and curing. Repair capabilities include the sophistication of the inspection methods used, and the capabilities for rework and repair of solder joints, removal and replacement of components, and modification and repair of circuit boards.

5.5 Part-2: Reliability capability maturity evaluation

The second part of the questionnaire is used to evaluate assembly manufacturers for the maturity of their quality and reliability practices. Reliability tasks from the reliability capability model were used as a baseline for developing this part of the questionnaire. The reliability capability maturity questionnaire consists of relevant questions from the eight key reliability practices, representing an evolutionary improvement in the manufacturing, inspection, testing, and reliability practices of an assembly manufacturer. Table 9 provides an overview of this questionnaire, including the number of applicable tasks and the number of questions under each key practice.

Table 9: PCB assembler reliability capability evaluation questionnaire

Key Practice                               Generic Tasks   Applicable Tasks   Total Questions
Reliability requirements and planning           11               6                 25
Training and development                        10               3                  5
Reliability analysis                            11               1                  1
Reliability testing                             12               3                  8
Supply chain management                         15              12                 16
Failure data tracking and analysis              11              11                 12
Verification and validation                      8               3                  3
Reliability improvements                        10               5                  7
Total                                           88              44                 77

This part contains three types of questions:
simple yes/no type questions, questions where multiple selections can be made out of the choices available, and questions where only a single selection can be made out of the many options. For the second type, the score depends on the number of choices selected as a response. For the third type, the responses are ordered and selection of a choice that is higher in order gives a higher score. The specifics about the evaluation questions are discussed in detail below. 5.5.1 Practices associated with development of requirements and plans The key practices included here are reliability requirements and planning, and training and development. Questions under these key practices are used to assess the repeatability of the manufacturing processes and planning procedures for equipment and their maintenance. The opportunities for employee training within the organization, and nature of the training programs are also evaluated. Questions regarding the existence, scope and implementation of a quality and reliability plan within an assembly manufacturer?s organization are included [83][88]. Questions address the implementation of statistical process control (SPC) for solder-paste deposition and finished solder joint quality [89][90]. Questions on visual inspection [80], automated optical inspection (AOI) [87][91], board rejection based on solder joint defects, and policies for repair, rework and modification of assemblies are also included. Existence of procedures for process issues that affect reliability is also evaluated. These include procedures for preventive maintenance of equipment and facilities, contamination control, electrostatic discharge (ESD) prevention, and tracking moisture sensitivity level (MSL) for components. 72 5.5.2 Practices associated with meeting reliability requirements The key practices included here are reliability analysis, reliability testing and supply chain management. Questions are included to assess whether the assembler conducts any functional and reliability testing of its assemblies, and whether the tests are conducted according to some industry standards or modified for meeting specific requirements for a customer [92]. The types of bond testing procedures for checking COB, TAB, QFP or flip-chip bonding are also assessed. There are questions about the existence of component engineers and to assess the criteria used for selection of suppliers for parts and materials and creation of approved parts and vendor lists. Questions on parts and materials management involve incoming inspection, rejection criteria for boards or components, handling, storage, non-conforming material policies, traceability, change notices, and obsolescence. 5.5.3 Practices associated with reliability assurance and growth The key practices included here are failure data tracking, verification and validation, and reliability improvements. Questions here are focused on evaluating an assembly manufacturer on the use of data collected from manufacturing, field and testing for implementing corrective actions for changes in process or modification of assemblies to improve reliability. There are questions on failure analysis capabilities, existence of a database for reported failures and corrective actions recommended. The existence of industry accepted certifications, a corrective action system and an internal auditing system are also evaluated. There are also questions regarding existence of a process for improvements in process reliability. 
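As a rough illustration of how responses to the three question types could be turned into numbers, the sketch below assigns full credit to a "yes", a proportional score to multiple-selection responses, and an ordered score to single-selection responses. The point values and normalization here are assumptions made for illustration; the actual scoring weights are defined by the evaluation questionnaire, not by this sketch.

```python
# Hedged sketch of scoring the three question types used in Part-2 of the
# questionnaire. Point values and normalization are illustrative assumptions.

def score_yes_no(answer: bool) -> float:
    """Yes/no question: full credit for 'yes'."""
    return 1.0 if answer else 0.0

def score_multi_select(selected: set, choices: set) -> float:
    """Multiple-selection question: the score grows with the number of
    available choices that are selected (here, the selected fraction)."""
    return len(selected & choices) / len(choices) if choices else 0.0

def score_ordered(selected_index: int, num_options: int) -> float:
    """Single-selection question with ordered responses: a choice higher in
    the ordering earns a higher score (index 0 = lowest option)."""
    return selected_index / (num_options - 1) if num_options > 1 else 0.0

if __name__ == "__main__":
    print(score_yes_no(True))                                   # 1.0
    print(score_multi_select({"SPC", "AOI"},
                             {"SPC", "AOI", "X-ray", "ICT"}))   # 0.5
    print(score_ordered(3, 5))                                  # 0.75
```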
5.6 Case-study

The purpose of this case-study was to evaluate the validity of the questions by evaluating a printed circuit board assembly facility considered to be a global leader in the Electronics Manufacturing Services (EMS) industry. This company manufactures high- and low-mix PCB and backplane assemblies in volumes that range from just a few units for prototypes to hundreds of thousands in production per year. The compatibility part of the questionnaire was not used for this case-study because there was no specific product to match requirements against capabilities. Results from the evaluation based on the second part of the questionnaire are discussed below.

5.6.1 Practices associated with development of requirements and plans

The assembly facility has a separate department dealing with reliability, called "Advanced Manufacturing Engineering," wherein more than 75% of the people working on reliability-related problems are engineers. The facility has documented procedures that are followed to control the contamination level of assembly areas and to prevent electrostatic discharge (ESD) damage to parts and assemblies. There is also a system to keep track of the moisture sensitivity level (MSL) of parts during assembly to prevent any delamination or pop-corning failures.

The assembly facility has a generic quality manual that is used for the entire facility. The quality manual includes guidelines for statistical process control (SPC), process improvement strategies, sample-based inspection, and corrective action, and includes a documented audit plan. The guidelines of the quality manual are fully implemented in all departments. Documented records are maintained for receiving inspection, process control, and equipment calibration, as well as production material rejects. A documented schedule for equipment calibration and preventive maintenance is maintained.

The SPC program uses documented control charts, which are used for continuous improvement of stable manufacturing processes. Data from periodic on-line and post-production testing, along with machine operation data, are used to determine process control. Process capabilities are calculated and used as a tool for initiating corrective actions. For example, a correction is initiated for a process when the Cpk value for the process goes below 1.0. In addition to process capability values, the percent defect, defects per unit (dpu), and defects per million opportunities (DPMO) are also used as measures for manufacturing process capabilities. The percent defect is the ratio of units failed to the total number of units inspected or tested. Defects per unit, also represented as first time yield (FTY), gives the statistical probability that any given unit can pass through a manufacturing process and reach inspection or testing without picking up a defect on the way. The DPMO index is used to compensate for the complexities of different products; it is the ratio of the number of defects per unit to the total defect opportunities per unit in a million samples of the product, and it prevents yield comparisons based on percent yield between two products with entirely different complexities or possibilities for defects [90].

Several PCB assembly characteristics, including solder paste deposition and solder joint shape parameters, are used as process control parameters. These parameters are obtained through visual inspection, automated optical inspection, or three-dimensional laser scanning of solder paste topology. Automated optical inspection is used at the post-paste-application and post-soldering stages to detect defects or non-conformities. Post-placement AOI checks are used to check for component presence, placement and orientation, as well as for any visible component or solder joint damage. The inspection procedures for the final assemblies, in particular the solder joints, are in accordance with the IPC-610 Standard, and are conducted using 3D X-ray tomography. The types of defects that the final assemblies are checked for include misalignments, solder bridging, solder open or insufficient solder, solder voiding, solder non-wetting, and tombstoning of passives. Based on solder joint inspection data, the solder paste deposition process is controlled by varying the printing speed, squeegee pressure, on-off contacts, separation speed, and separation distances.

Assemblies are verified to conform to the IPC standard IPC-A-610D Class-II specifications, unless requested otherwise [80]. IPC-A-610D is a "pictorial interpretive document that indicates various characteristics of the board and/or assembly as appropriate relating to desirable conditions that exceed the minimum acceptable characteristics indicated by the end item performance standard and reflect various out-of-control (nonconforming) conditions to assist the shop process evaluators in judging need for corrective action" [80]. A number of assembly features, including general cleanliness, component mounting, and solder joint defects, are checked during visual inspection of the final assembly.

The assembly facility has documented procedures for conducting rework and repair on assemblies identified as defective during manufacturing or functional testing. The documented procedures include procedures to prevent moisture-induced damage during assembly, to prevent ESD damage, and to control contamination during the rework process. On average, up to 25% of all assemblies undergo rework. However, there is no limitation on the number of part or site reworks that can be conducted for a single assembly. Although they re-inspect the reworked assemblies using optical, X-ray, and electrical methods, these are not marked as such, i.e., they are not differentiated from the assemblies that did not undergo any rework or repair.

The assembly facility does not have a facility-wide training program for employees, but they do provide certified training to selected personnel. The facility has staff members who are dedicated to research and development. There is also a feedback system through which suggestions for design or process improvement are solicited from employees.

5.6.2 Practices associated with meeting reliability requirements

Electrical continuity tests like in-circuit and functional testing are conducted during the assembly process. Application-specific reliability testing is conducted if requested by a customer. These tests are used to make reliability predictions if required. There are defined accept/reject criteria for each type of testing. Specific reliability test plans are not used, since most of the reliability planning and testing is customer-driven. Process failure modes and effects analysis (FMEA) is conducted to identify potential problems in manufacturing. The facility maintains an approved supplier list, which is updated through a monthly analysis program and augmented with periodic supplier performance reviews. All the suppliers are expected to follow quality management principles and have a certified parts program.
Although more than 50 percent of the parts are double sourced, the assembly facility does buy parts from brokers in certain cases. There is an established system to identify problems in parts and materials from suppliers, and to initiate and verify the corrective action undertaken. Documented procedures exist for receiving inspection, handling control, storage control, material resource planning, and non-conforming materials quarantine. Under a material 77 traceability system, the incoming parts and materials are verified through traceability markings like serial number, lot number, or date code. For non-conforming parts or materials, procedures exist for their identification, segregation from regular materials, proper disposition, and possible corrective action. A record is maintained of all the corrective actions. The assembly facility has capabilities for electronic data interchange (EDI), engineering change order process, an on-line shop floor materials control, and component kitting. The assembly facility keeps track of product change notices (PCNs) from their suppliers, and tracks potential obsolescence of parts used in their assemblies. However, the reliability department is not informed about the PCNs. The customers are kept informed about any changes made to products controlled by customer?s drawings or specifications. 5.6.3 Practices associated with reliability assurance and growth The assembly facility has a process to make improvements in assembly process based on test failures, field failures, customer feedback and lessons learned. A database of all reported failures is maintained. There is a return material authorization (RMA) system through which the failed assemblies are tracked. The system provides information on the date of manufacture of the failed assembly, its shipping and return dates, and the reason specified for return. There is also a system for documenting customer dissatisfaction, which identifies and analyses the cause of dissatisfaction, and results in the implementation of a corrective action. The assembly facility, however, does not 78 conduct regular internal or external review of its manufacturing, quality or reliability practices. The assembly facility has documented procedures for conducting in-house analysis of failed assemblies. The types of failures include the quality rejects, electrical functional failures, reliability test failures, and customer returned failures. The failure analysis reports include identified root cause, the failure mode, site and mechanism and the corrective action proposed in each case. All the failures are ranked in Pareto charts based on their failure mode, site, and mechanism. The effectiveness of each corrective action is reviewed and monitored over time. In general, all types of corrective actions are documented, and are available for review. However, most of the improvement activity is customer driven, i.e., if the customer does not ask for it, the improvement action is not followed through. 5.7 Case-study evaluation results The score-card for the facility based on the case study is shown in Table 10. As a stand alone evaluation, the evaluation can be used to identify areas of improvement. As a measure of comparing prospective suppliers, the scores provide a quantitative means for objective differentiation. This case-study also brought forward some shortcomings of this assembly facility. 
Table 10: Evaluation scorecard for an assembly facility

Key Practice                               Maximum score   Obtained score   Percentage score
Reliability requirements and planning            25             21.45            85.80
Training and development                          5              3.45            69.00
Reliability analysis                              1              1.00           100.00
Reliability testing                               8              7.00            87.50
Supply chain management                          16             14.75            92.19
Failure data tracking and analysis               12             10.00            83.33
Verification and validation                       3              2.00            66.67
Reliability improvements                          7              6.00            85.71
Total                                            77             65.65            85.26

Automated optical inspection (AOI) is not used after component placement to verify component presence or correct placement and alignment. Introducing this inspection can help to preempt defects that might otherwise become evident only in finished assemblies. Although the final assemblies are checked for a number of solder joint defects, only the presence of some defects leads to scrapping of the board. Bond testing procedures like the ball-shear test or the TAB push test are not used for checking COB, TAB, QFP or flip-chip bonding on final assemblies. Internal audits of quality, manufacturing, reliability or corrective action systems are not conducted. No training is provided in reliability areas like testing or failure analysis, which could help employees to initiate process improvements to avoid earlier failures. During the assembly rework process, the reworked boards and sites are not marked, and thermo-mechanical degradation of assemblies due to rework is not assessed. Most of the reliability planning and testing is customer driven. A general reliability plan for finished assemblies does not exist. The reliability department is not informed when any process or product change notices are issued by the suppliers, or anywhere within the assembly facility. Failure analysis of customer-returned assemblies is not conducted in most cases. This may prevent identification of many recurring defects that could be corrected through simple process changes.

5.8 Conclusions

Increased focus on core competencies and the availability of low-cost contract manufacturing facilities have made sub-contracting of PCB assembly manufacturing a common business practice. This chapter presents a methodology that can help system integrators to make an assessment of their prospective PCB assembly suppliers. The PCB assembly manufacturer reliability capability evaluation methodology consists of a first part of questions that evaluates manufacturing compatibility and is used to select assemblers for the second part of the evaluation. The second part of the questionnaire is used to calculate a reliability capability maturity score for the assembler. A case-study using the methodology was conducted for a leading PCB assembly manufacturer. Evaluation results and some of the shortcomings of this assembly manufacturer that were brought forward by the evaluation are included.

The PCB assembler reliability capability evaluation methodology presented in this chapter can be used to calculate maturity scores for assembly manufacturers in terms of their ability to manufacture reliable assemblies. The maturity score provides a quantitative means for objective differentiation between two or more prospective suppliers. The methodology can be used for PCB assembly manufacturer selection by customers that sub-contract their PCB assembly work. The proposed methodology only uses the information provided by the assembly manufacturers, and can be used to create a shortlist of prospective suppliers. A physical audit of the short-listed suppliers is recommended to make the final supplier selection.
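The percentage scores in Table 10 above are simply the obtained scores expressed as a fraction of the maximum scores, and the overall maturity score is the same ratio taken over all key practices. The short sketch below reproduces that arithmetic from the Table 10 values; the function and dictionary names are ours, introduced only for illustration, and are not part of the evaluation procedure.

```python
# Minimal sketch reproducing the percentage scores in Table 10 from the
# obtained and maximum scores for each key practice.

SCORES = {
    # key practice: (maximum score, obtained score), values taken from Table 10
    "Reliability requirements and planning": (25, 21.45),
    "Training and development":              (5, 3.45),
    "Reliability analysis":                  (1, 1.00),
    "Reliability testing":                   (8, 7.00),
    "Supply chain management":               (16, 14.75),
    "Failure data tracking and analysis":    (12, 10.00),
    "Verification and validation":           (3, 2.00),
    "Reliability improvements":              (7, 6.00),
}

def percentage(obtained: float, maximum: float) -> float:
    return 100.0 * obtained / maximum

if __name__ == "__main__":
    total_max = sum(m for m, _ in SCORES.values())
    total_obt = sum(o for _, o in SCORES.values())
    for practice, (maximum, obtained) in SCORES.items():
        print(f"{practice}: {percentage(obtained, maximum):.2f}%")
    print(f"Overall maturity score: {percentage(total_obt, total_max):.2f}%")  # 85.26%
```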
82 Chapter 6 APPENDICES 6.1 Appendix-1: Reliability tasks under different key practices 1. RELIABILITY REQUIREMENTS AND PLANNING 1-01 Presence of a reliability department 1-02 Customer inputs in the form of their requirements and expectations 1-03 Capturing reliability specifications of competitive products 1-04 Using reliability specs from old products while establishing requirements for new products 1-05 Product reliability plan that includes reliability goals and reliability activities schedule 1-06 Establishing reliability goals for sub-assemblies and components in a product 1-07 Establishing reliability goals as a distribution and not as a point estimate 1-08 Establishing reliability goals for products based on specific lifecycle conditions 1-09 While preparing a reliability plan, planning for required resources like materials, personnel and equipment 1-10 Including details on reliability analysis and testing for specific products as part of a reliability plan 1-11 Contingency planning and specification of decision criteria for altering reliability plans 1-12 Reliability plan includes a process for communicating results from reliability activities 2. TRAINING AND DEVELOPMENT 2-01 Reliability training plan or program 2-02 Formally trained reliability engineers 2-03 Top management commitment to reliability training 2-04 Business managers trained to appreciate importance of reliability to products or business 2-05 Reliability managers trained on how specific reliability activities can impact reliability 2-06 Reliability engineers trained to identify failure modes and mechanisms in a product design 2-07 Reliability engineers trained in statistical methods for reliability prediction and data analysis 2-08 Reliability engineers trained in failure analysis, root cause analysis, and corrective actions 2-09 Reliability training provided to employees not directly associated with reliability, e.g., procurement, purchasing, etc. 2-10 Tracking new technologies, modeling or analysis techniques that can impact reliability 3. RELIABILITY ANALYSIS 83 3-01 Identification of potential single points of failure and failure modes in a product design 3-02 Identification of potential failure mechanisms that can cause failures in a product design 3-03 Identification of critical failure modes and mechanisms in a product design 3-04 Quantification of risks and weaknesses for critical components in a product design 3-05 Checking adherence of a design to design rules 3-06 Making reliability point estimates using modeling or reliability prediction handbooks 3-07 Making reliability distribution predictions based on times-to-failure for potential failure mechanisms 3-08 Characterization of materials used in a product design 3-09 Using reliability predictions for specifying warranty periods and making spares provisioning 3-10 Using reliability analysis to design specific reliability tests for a product 3-11 Optimizing lifecycle costs for a product based on reliability vs. cost trade-offs 4. 
RELIABILITY TESTING 4-01 Tests to identify design margins and destruct limits for a product 4-02 Design verification or qualification tests for a product 4-03 Using reliability testing to make design changes in a product prior to production 4-04 Reliability testing based on generic specifications for all products 4-05 Reliability testing based on customer specifications 4-06 Burn-in or screening of products prior to shipping 4-07 Reliability tests tailored to specific products 4-08 Detailed reliability test plans for products including sample sizes and confidence limits 4-09 Accelerated tests based on specific failure mechanisms to determine times-to-failure 4-10 Analysis of the test data to determine statistical failure distributions 4-11 Application of failure distributions to make reliability predictions using acceleration factors 4-12 Reviewing and updating reliability qualification test requirements for components 4-13 Minimizing reliability testing by using burn-in or environmental stress screening 5. SUPPLY CHAIN MANAGEMENT 5-01 Component engineers for parts selection and supply management 5-02 Procuring parts only from authorized distributors and not from part brokers 5-03 Using manufacturing quality data for part selection and for in-coming lot rejection 5-04 Using reliability test data from suppliers for part selection and for in-coming lot rejection 5-05 Considering technology maturity of parts during part selection 5-06 Vendor or supplier assessments or audits 5-07 Using and maintaining a list of preferred/qualified/approved parts and suppliers 5-08 Using and maintaining a supplier rating system 84 5-09 Using techniques like uprating for qualifying parts for use outside their datasheet specs 5-10 Supplier contractual agreements containing quality and reliability requirements 5-11 Multiple-sourcing of parts 5-12 Tracking component traceability markings to identify any changes 5-13 Tracking part obsolescence to ensure continued supply or to make alternate supply arrangements 5-14 Review of supplier product change notices (PCNs) to assess their impact on manufacturability 5-15 Review of supplier PCNs to assess their impact on reliability 6. FAILURE DATA TRACKING AND ANALYSIS 6-01 Manufacturing defects and production testing failures tracked, and recorded in a database 6-02 Reliability testing failures tracked and recorded in a database 6-03 Field failures tracked and recorded in a database 6-04 Ensuring traceability of products from manufacture to failure 6-05 Conducting failure analysis on failed products from all sources from manufacturing to field 6-06 Creating Pareto charts based on failure modes and failure sites 6-07 Conducting root cause analysis on failed products from all sources 6-08 Generating failure analysis reports detailing underlying failure mechanisms for failed products 6-09 Creating Pareto charts based on failure mechanisms 6-10 Correlating failure mechanisms with specific materials and processes 6-11 Creating and updating a database of corrective actions based on identified failure modes and mechanisms 7. 
VERIFICATION AND VALIDATION
7-01 Obtaining certifications like ISO for all management processes including reliability
7-02 Updating reliability predictions for products based on field data for present and previous products
7-03 Modifying statistical failure distributions used for reliability predictions on the basis of field failure data
7-04 Modifying reliability test conditions for current and future products based on failure mechanisms observed in field
7-05 Updating the failure modes database to incorporate any new failure modes observed in field
7-06 Updating the failure mechanisms database to incorporate any new failure mechanisms observed in field
7-07 Verifying and modifying warranty estimates and spares provisioning based on field returns
7-08 Internal audits for reliability planning, analysis and testing activities

8. RELIABILITY IMPROVEMENTS
8-01 Bill of materials modification to exclude parts that have had reliability problems in field
8-02 Updating product reliability requirements due to business or market considerations
8-03 Making design changes, if required, to accommodate changes in lifecycle environment
8-04 Implementing corrective actions based on field failure modes
8-05 Implementing corrective actions based on field failure mechanisms
8-06 Requiring engineering change notifications for reliability improvements
8-07 Preventing recurrence of failures in future products, which have already been observed in existing products
8-08 Using field failure information to improve company design rules and process control requirements
8-09 Evaluating and implementing new modeling or analysis techniques to improve product reliability
8-10 Evaluating and implementing new technologies to improve product reliability
8-11 Support by top management for a proactive approach to reliability improvement

6.2 Appendix-2: Structure of two questionnaires

Figure 12: Structure of the survey questionnaire. For each measurement task (for example, "Establishing reliability goals for products based on specific lifecycle conditions," "Tracking part obsolescence to ensure continued supply or to make alternate supply arrangements," and "Conducting failure analysis on failed products from all sources from manufacturing to field"), respondents are asked: "Based on your experience, what is the relevance of this task for ensuring and improving product reliability?" Responses are given on a five-point scale: 1 (Negligible), 2 (Low), 3 (Medium), 4 (High), 5 (Very High).

Figure 13: Structure of the content validity questionnaire. The measurement tasks are randomly arranged, and the judges are provided the definition and the purpose of each key practice. For each task, the judges are asked: "Based on your understanding of the purpose of each key reliability practice, which key practice should this reliability task or trait belong to?" The possible responses are the eight key practices: reliability requirements and planning, training and development, reliability analysis, reliability testing, supply chain management, failure data tracking and analysis, verification and validation, and reliability improvements.

6.3 Appendix-3: Details of respondents to the survey

Table 11: Survey respondent details (number of employees, number of responses, and company/affiliation)

1 (consultants/researchers), 9 responses: Engelmaier Associates, L.C.; FS Consulting; iNEMI; Mikroelectronik Konsult AB; Ryan Computer Systems, Inc.; Shanghai Jiao Tong University

2-250 employees, 39 responses: Alpha & Omega Semiconductor; Basari Elektronik; Buehler; Curamik Electronics Inc.; First Solar, LLC; GrafTech International Ltd; Matt MacDonald; Metalor Technologies, USA; Plantronics; Pro-Dex, Inc.; QinetiQ; Quartzdyne Inc.; Serco; SPA; Trimble; Tyco Electronics; Universal Avionics

251-1000 employees, 60 responses: AMETEK Aerospace; Astec Power; Curamik Electronics Inc.; Dow Corning; EADS CCR; EMC2; Ericsson Power Modules AB; Ford Motor Company; General Dynamics AIS; Goodrich Engine Controls; Leroy Somer, Emerson; Motorola Automotive; Nokia Networks; Nortel Networks; Philips Semiconductors; Samsung Electronics; Schlumberger; Schneider Electric; Seagate Technology; Semikron Electronik GmbH, KG; Solectron Corporation; TRW Automotive; ViaSat Inc; Whirlpool

1001-2500 employees, 47 responses: Aerospace Corporation; Agere Systems; Allison Transmission / GM; BAE Systems; EMC Ireland; Emerson Process Management; GE GRC; Halliburton; Hamilton Sunstrand; Hewlett Packard; Honeywell International; Hughes Network Systems; ISRO, India; L-3 Communication Systems; Liebert Corp; Lucent Technologies; Medtronic; Raytheon; Rockwell Automation; Smiths Aerospace

>2500 employees, 49 responses: Agilent Technologies; Alcatel; BAE Systems, UK; Boeing; Cardone Industries Inc; Dell; ECI Telecom Ltd; GE Healthcare; Grundfos, Denmark; Hutchinson Technology Inc.; Lockheed Martin; MBDA UK Ltd; Motorola; NASA / GSFC; Northrup Grumman; NSWC; Raytheon Space & Airborne Sys; Rockwell Collins Inc.; Sandia Labs; Sun Microsystems; Tatung Co.;
Texas Instruments Wistron ZTE Corporation 88 6.4 Appendix-4: Item analysis results for ninety-one tasks Table 12: Item analysis results for 91 reliability tasks RRP TAD RA RTST SCM FDTA VAV RIMP 1-01 0.47 0.39 0.32 0.30 0.28 0.29 0.31 0.32 1-02 0.42 0.24 0.26 0.11 0.17 0.18 0.19 0.24 1-03 0.46 0.29 0.29 0.20 0.30 0.15 0.29 0.21 1-04 0.40 0.26 0.26 0.21 0.38 0.34 0.35 0.23 1-05 0.60 0.46 0.39 0.18 0.34 0.29 0.41 0.38 1-06 0.57 0.37 0.41 0.33 0.30 0.24 0.31 0.43 1-07 0.51 0.20 0.34 0.26 0.25 0.20 0.31 0.27 1-08 0.58 0.39 0.44 0.30 0.32 0.28 0.31 0.44 1-09 0.60 0.38 0.36 0.31 0.30 0.28 0.32 0.37 1-10 0.63 0.49 0.41 0.36 0.28 0.39 0.37 0.41 1-11 0.61 0.38 0.45 0.26 0.25 0.21 0.37 0.32 1-12 0.65 0.49 0.43 0.24 0.27 0.29 0.37 0.42 2-01 0.49 0.69 0.50 0.29 0.35 0.37 0.40 0.44 2-02 0.40 0.68 0.38 0.23 0.34 0.33 0.32 0.35 2-03 0.48 0.70 0.39 0.32 0.30 0.31 0.35 0.40 2-04 0.36 0.60 0.29 0.15 0.18 0.28 0.27 0.34 2-05 0.49 0.72 0.42 0.28 0.32 0.40 0.33 0.41 2-06 0.37 0.62 0.36 0.22 0.26 0.41 0.33 0.43 2-07 0.41 0.59 0.49 0.33 0.40 0.37 0.40 0.39 2-08 0.31 0.58 0.31 0.21 0.26 0.36 0.25 0.45 2-09 0.28 0.49 0.26 0.15 0.25 0.27 0.28 0.25 2-10 0.51 0.57 0.53 0.41 0.35 0.34 0.34 0.43 3-01 0.38 0.38 0.62 0.30 0.32 0.34 0.28 0.37 3-02 0.38 0.41 0.64 0.38 0.22 0.40 0.18 0.35 3-03 0.35 0.37 0.60 0.33 0.27 0.39 0.31 0.43 3-04 0.42 0.41 0.64 0.31 0.42 0.39 0.43 0.46 3-05 0.35 0.39 0.58 0.38 0.43 0.42 0.41 0.44 3-06 0.43 0.32 0.62 0.38 0.35 0.26 0.41 0.32 3-07 0.48 0.36 0.68 0.46 0.42 0.33 0.52 0.37 3-08 0.39 0.37 0.61 0.32 0.37 0.39 0.35 0.43 89 3-09 0.46 0.45 0.66 0.43 0.48 0.30 0.61 0.44 3-10 0.48 0.47 0.66 0.43 0.35 0.37 0.47 0.53 3-11 0.44 0.45 0.58 0.38 0.43 0.29 0.45 0.41 4-01 0.37 0.29 0.40 0.52 0.27 0.36 0.32 0.39 4-02 0.23 0.16 0.25 0.52 0.34 0.28 0.23 0.29 4-03 0.29 0.31 0.37 0.56 0.25 0.37 0.29 0.43 4-04 0.23 0.25 0.27 0.55 0.37 0.31 0.33 0.32 4-05 0.31 0.27 0.29 0.52 0.44 0.34 0.37 0.38 4-06 0.07 0.10 0.19 0.42 0.20 0.18 0.12 0.17 4-07 0.34 0.28 0.30 0.55 0.29 0.36 0.30 0.39 4-08 0.39 0.37 0.50 0.71 0.40 0.38 0.46 0.41 4-09 0.31 0.24 0.40 0.72 0.32 0.46 0.44 0.46 4-10 0.38 0.26 0.48 0.73 0.39 0.45 0.49 0.37 4-11 0.37 0.31 0.49 0.70 0.42 0.45 0.45 0.35 4-12 0.38 0.33 0.48 0.73 0.48 0.46 0.45 0.45 4-13 0.08 0.06 0.25 0.53 0.27 0.14 0.20 0.11 5-01 0.30 0.32 0.35 0.29 0.58 0.36 0.34 0.36 5-02 0.37 0.32 0.33 0.33 0.65 0.49 0.38 0.37 5-03 0.27 0.28 0.42 0.41 0.66 0.41 0.41 0.33 5-04 0.26 0.18 0.33 0.37 0.60 0.35 0.37 0.29 5-05 0.31 0.32 0.41 0.39 0.64 0.37 0.36 0.39 5-06 0.20 0.16 0.29 0.37 0.59 0.40 0.36 0.25 5-07 0.25 0.22 0.31 0.32 0.64 0.41 0.39 0.34 5-08 0.34 0.29 0.36 0.30 0.70 0.39 0.37 0.30 5-09 0.31 0.24 0.29 0.35 0.46 0.24 0.29 0.26 5-10 0.39 0.36 0.34 0.28 0.62 0.37 0.40 0.28 5-11 0.31 0.26 0.39 0.34 0.61 0.28 0.35 0.31 5-12 0.41 0.44 0.47 0.46 0.70 0.53 0.46 0.36 5-13 0.38 0.36 0.45 0.41 0.69 0.40 0.45 0.42 5-14 0.47 0.39 0.47 0.41 0.70 0.44 0.55 0.46 5-15 0.42 0.40 0.47 0.45 0.71 0.49 0.51 0.50 6-01 0.26 0.36 0.34 0.46 0.51 0.69 0.43 0.40 6-02 0.29 0.46 0.40 0.51 0.43 0.73 0.42 0.45 6-03 0.25 0.42 0.44 0.33 0.39 0.70 0.51 0.46 90 6-04 0.34 0.46 0.47 0.44 0.55 0.74 0.53 0.49 6-05 0.27 0.37 0.38 0.40 0.71 0.33 0.28 0.41 6-06 0.33 0.26 0.26 0.35 0.36 0.65 0.37 0.30 6-07 0.33 0.38 0.37 0.27 0.66 0.34 0.31 0.41 6-08 0.43 0.41 0.45 0.44 0.51 0.72 0.40 0.45 6-09 0.36 0.32 0.35 0.33 0.39 0.70 0.42 0.34 6-10 0.46 0.45 0.70 0.42 0.41 0.49 0.40 0.46 6-11 0.44 0.41 0.48 0.45 0.55 0.78 0.53 0.48 7-01 0.29 0.22 0.26 0.24 0.39 0.20 0.53 0.26 7-02 0.44 0.35 0.52 0.46 0.49 0.42 0.78 0.48 7-03 0.45 0.45 0.78 
0.46 0.40 0.54 0.47 0.52 7-04 0.50 0.45 0.47 0.41 0.36 0.49 0.76 0.57 7-05 0.44 0.58 0.76 0.42 0.45 0.48 0.47 0.58 7-06 0.42 0.44 0.47 0.41 0.42 0.58 0.76 0.60 7-07 0.46 0.40 0.55 0.38 0.51 0.34 0.71 0.50 7-08 0.52 0.36 0.50 0.47 0.56 0.47 0.74 0.54 8-01 0.33 0.33 0.37 0.35 0.41 0.30 0.43 0.66 8-02 0.39 0.33 0.42 0.44 0.45 0.37 0.55 0.60 8-03 0.37 0.34 0.35 0.35 0.33 0.32 0.46 0.67 8-04 0.24 0.36 0.32 0.28 0.25 0.31 0.39 0.61 8-05 0.35 0.48 0.38 0.33 0.30 0.43 0.41 0.72 8-06 0.45 0.37 0.40 0.30 0.33 0.36 0.39 0.66 8-07 0.41 0.48 0.45 0.33 0.29 0.38 0.37 0.66 8-08 0.47 0.43 0.49 0.42 0.42 0.51 0.48 0.72 8-09 0.51 0.40 0.57 0.52 0.44 0.46 0.57 0.69 8-10 0.46 0.44 0.55 0.47 0.40 0.46 0.48 0.74 8-11 0.47 0.61 0.44 0.33 0.33 0.42 0.43 0.59 91 6.5 Appendix-5: Internal consistency of a theoretical measure According to the theory of measurement error, any error in measurement is composed of systematic bias and random errors. To the extent that these errors are slight, a measure is said to be reliable. Internal consistency (also called ?reliability? in psychometric parlance) refers to the stability or reproducibility of a test score based on a theoretical instrument [60]. A measure is internally consistent if it will give the same results if the measurement is repeated, i.e., if the measurements are stable over a variety of conditions. Even in the absence of any measurement error, there is no guarantee of validity. Internal consistency is only a necessary, but not a sufficient condition for validity. Internal consistency is defined as the proportion of the variability in the responses to the survey that is the result of differences in the opinion of the respondents. This implies that the answers to a reliable survey will differ only because respondents have different opinions, not because the survey items are confusing or have multiple interpretations. Thus internal consistency co-efficient provides an indication of the extent of repeatability or reproducibility of the scores due to the survey. Related to the precision of internal consistency estimates, there are two sources of errors that can arise ? one concerned with the sampling of respondents and another concerned with the sampling of items, called ?population sampling? and ?content sampling? respectively. The first source of error can be minimized by selecting a large sample (>200 respondents). However, major source of measurement error remains the sampling of content [60]. This is taken care of in the domain sampling model. 92 The roots of assessing reliability lie in the domain sampling model for developing measurement instruments [60]. According to this model, any measure is composed of a random sample of items from a hypothetical domain of items, or a universe of items. The score that any subject would obtain over the whole domain is called the true score, or domain score. To the extent that the score obtained from any sample of items correlates highly with the domain scores, the sample of items would be highly reliable. The model is based on the concept of an infinitely large correlation matrix showing all correlations among items in the domain. The average correlation in the matrix would indicate the extent to which some common core exists in the items, and the dispersion of the correlation would indicate the extent to which the items varied in sharing the common core. 
If the assumption is made that all items have an equal amount of common core, the average correlation in each column would be the same as the average correlation in the whole matrix. This leads to the conclusion that the correlation of any variable with the true scores in the domain (the sum of scores on all items in the domain), also called the "reliability coefficient," equals the square root of the average correlation of the item with all other items [60]. Since the square of a correlation equals the proportion of variance in one variable explainable by the variance in another variable, the reliability coefficient also gives the percentage of true score variance explained by an observable measure. Hence, mathematically, the reliability coefficient is the ratio of the true score variance in a measure to the actual observable variance of the measure.

All the errors that occur within a survey can be encompassed by the domain-sampling model. For example, guessing causes variation in responses from item to item, reducing the overall correlation and the internal consistency of the item. Therefore, for any survey, the sampling of items from a domain can be thought of in terms of not only the physical collection of items but also the sampling of many situational factors that will influence responses to those items. All such sources of error will tend to lower the correlation among items within a scale, which is needed to estimate internal consistency.

One of the most commonly used coefficients for measuring the internal consistency of a scale is Cronbach's alpha [73][60][63]. It can be calculated for any set of items, even a subset of items. Accordingly, it is possible to identify a set of items within a scale that have the highest internal consistency. Mathematically, Cronbach's alpha is the average of the correlations between all possible split-half estimates within a scale [73]. The value of Cronbach's alpha for a scale containing k items is given by [73]:

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} s_i^2}{s_{sum}^2}\right)

where s_i^2 is the variance of each item score, and s_{sum}^2 is the variance of the scale score. The square root of coefficient alpha is the estimated correlation of a k-item scale score to the errorless true score for the whole domain of items from which the list of items is created. If there is no true score but only error in the items (error that is unique for each subject and uncorrelated across subjects), the variance of the scale score will be equal to the sum of the variances of the individual items, and hence alpha will be equal to 0. If the items are perfectly reliable and measure the same true score, then alpha is equal to 1. Typically, an alpha value of 0.7 or more is considered adequate for any scale [60].

Regardless of the number of items sampled from a domain, the internal consistency of a scale is directly related to the average correlation among those items. Hence, the assessment of internal consistency of a scale is based on the correlations between the individual items or measurements that make up the scale, relative to the variances of the items. Any sources of error, like transient personal factors and ambiguous questions, present within a measurement instrument lower the average correlation among the items of the survey [66]. For example, if one of the items on the survey is vague and the respondents have to guess its meaning, the guessing will lower coefficient alpha, and the subsequent item-score-to-scale-score correlation will suggest the item for elimination.
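For illustration, the computation of coefficient alpha can be carried out directly from a respondents-by-items matrix of ratings. The sketch below (in Python, using NumPy) applies the formula above to a small invented data set; the ratings are not survey data and are used only to exercise the calculation.

```python
# Sketch of Cronbach's alpha for a k-item scale, computed directly from
# alpha = k/(k-1) * (1 - sum(item variances) / variance(scale score)).
# The rating matrix below is invented (rows = respondents, columns = items).
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """ratings: 2-D array of shape (n_respondents, k_items)."""
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1)       # s_i^2 for each item
    scale_var = ratings.sum(axis=1).var(ddof=1)   # s_sum^2 of the total score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / scale_var)

if __name__ == "__main__":
    ratings = np.array([[5, 4, 5, 4],
                        [3, 3, 4, 3],
                        [4, 4, 4, 5],
                        [2, 3, 2, 3],
                        [5, 5, 4, 4]], dtype=float)
    print(f"alpha = {cronbach_alpha(ratings):.3f}")
```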
Coefficient alpha cannot be used directly for measuring the reliability of a linear combination of scales or items from different domains. This is because the reliability of items in a scale from the same domain depends entirely on the average correlations among the items, but this does not hold for items from different domains. For example, coefficient alpha values for the different key practices cannot be used directly to determine the "reliability" of the entire reliability capability measuring instrument, since it is a linear combination of eight domains, i.e.,

RCM = R_1 + R_2 + R_3 + R_4 + R_5 + R_6 + R_7 + R_8

Here the different R's represent the eight key practices. For this linear combination of measures from different domains, the internal consistency of the entire measure can be estimated by the knowledge of the Cronbach's alpha coefficients of each domain and the covariance among their average ratings [60]. The internal consistency coefficient for this linear combination is given by:

r_{RCM} = 1 - \frac{\sum_{i} \sigma_i^2 - \sum_{i} \alpha_i \sigma_i^2}{\sigma_y^2}

where \sigma_i^2 is the variance in the rating for the i-th domain, \alpha_i is the value of the coefficient alpha for the i-th domain, and \sigma_y^2 is the sum of all the elements in the covariance matrix of domain scores.

6.6 Appendix-6: Covariance matrix for average rating of key practices

Table 13: Covariance matrix for average rating of key practices

        RIMP   VAV    FDTA   SCM    RTST   RA     TAD    RRP
RIMP    0.290  0.254  0.184  0.183  0.164  0.184  0.174  0.155
VAV     0.254  0.467  0.234  0.266  0.207  0.233  0.187  0.193
FDTA    0.184  0.234  0.335  0.223  0.178  0.168  0.166  0.130
SCM     0.183  0.266  0.223  0.384  0.188  0.194  0.156  0.154
RTST    0.164  0.207  0.178  0.188  0.283  0.166  0.115  0.119
RA      0.184  0.233  0.168  0.194  0.166  0.273  0.173  0.164
TAD     0.174  0.187  0.166  0.156  0.115  0.173  0.275  0.162
RRP     0.155  0.193  0.130  0.154  0.119  0.164  0.162  0.219

6.7 Appendix-7: Factor analysis

The concept of factor analysis can be explained in a simplified manner using Figure 14. As a reliability task is rated by different individuals for its relevance, there will be variation in their ratings. As shown in the figure, the rating for any reliability task can be affected by common factors, specific factors, or error of measurement factors [77]. A common factor may affect the rating of all tasks within a key practice, whereas the specific and the error of measurement factors combine to form the unique factors that affect only one measurement task individually. The total variance of the rating for any task is accordingly divided into two sources: the common variance and the unique variance (specific variance plus error of measurement variance). The variation due to common factors is called the common variance or communality, and the variance due to unique factors is called the uniqueness [61][78]. Factor analysis serves to partition variables into sources of these common and unique variances.

Figure 14: Sources of variance and factor analysis. The relevance rating of a reliability task is affected by a common factor, by specific factors, and by error of measurement factors. The influences shown include basic preferences (understanding of organizational reliability management; understanding of the reliability key practices, tasks or traits), external influences (education level, job experience, current job profile, cultural effects), and transient influences (mood, interest, time pressure, error in marking). Factor analysis aims to identify the common factors explaining the covariance between the different tasks listed under a key practice.

According to factor analytic theory, it is only the common factors which account for the covariance between tasks within a key practice [77].
The unique factors do not contribute to these covariances. The degree of covariance between tasks is accordingly taken as an indicator of the degree to which the tasks are influenced by common factors. Mathematically, factor analysis is designed to simplify the correlation matrix between the task ratings and reveal the number of factors that can explain the correlations [71]. A factor is ?a dimension or construct which is a condensed statement of the relationship between a set of variables? [78]. Any linear combination of variables in a data matrix is said to be a factor of that matrix [60]. Often factors are spoken of as dimensions, and factoring is spoken of as dimensionalizing a space of variables. The primary purpose in factor analysis is to determine the number and nature of common factors, and the pattern of their influence on the measurement tasks. However, factor analysis does not create factors, but reveals them based on pattern of correlations between tasks. Factor analysis can therefore be used to validate a scale (key practice) by demonstrating that its constituents (reliability tasks) load on the same common factor. If all the tasks listed under a key practice load on a single factor, they measure the same attribute ? the higher the loadings the better the composition of the key practice. If the tasks do not load on one factor, it implies that the variables are not correlated, and only specific variance is reflected in task ratings [78]. Thus, factor analysis allows checking whether all the tasks used to measure any key practice are associated with some common core or factor to which they are significantly correlated [61]. 99 The outputs from factor analysis include factor loadings for each measurement task, and eigenvalues for each factor that is extracted. The factor loadings are the correlation coefficients between variables or measurement tasks and the identified factors. Therefore the square of any factor loading gives the proportion of variance explained in a particular variable by a factor. The eigenvalue for a factor measures the variance in all variables which is accounted for by that factor. This is obtained by summing the square of factor loadings for each factor. In factor analysis, loadings of 0.3 or larger are regarded as significant [60][61][71][78]. Factor analysis is based on the analysis of the standardized correlational matrix of tasks constituting each key practice. There are many methods of factor analyzing a matrix of correlations, the most common being the Principal Component Analysis (PCA) and Principal Axis Factoring (PAF), also called Common Factor Analysis [61][71][78]. PCA determines the factors that can account for the total variance (unique plus common) in a set of variables, whereas PAF determines the least number of factors which can account for only the common variance in a set of variables. The difference between the two approaches involves the entries on the diagonal of the matrix of correlations that is analyzed. During factor analysis, PCA uses 1?s on the diagonal whereas PAF uses estimates of extracted communalities for each variable or task. PAF is appropriate for determining the dimensionality of a set of variables (such as tasks within a key practice) specifically to test whether one factor can account for the bulk of common variance in the set. PCA is a way of identifying patterns in data, and to highlight some similarities and differences. 
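Before the formal statement of the eigenvalue problem in the next paragraphs, a small numerical sketch may help fix ideas. The snippet below (Python/NumPy, with invented ratings) extracts eigenvalues and eigenvectors from a correlation matrix of task ratings, scales the eigenvectors into factor loadings, counts the factors with eigenvalues above one, and finally illustrates the use of first-factor loadings as weights for a key practice score, as described later in this appendix. None of the numbers correspond to the survey data.

```python
# Sketch of PCA-style factor extraction from a correlation matrix of task
# ratings. The rating data are invented (rows = respondents, columns = tasks).
import numpy as np

rng = np.random.default_rng(0)
common = rng.normal(size=(200, 1))                  # a single shared influence
ratings = common + 0.5 * rng.normal(size=(200, 5))  # five correlated "task" ratings

corr = np.corrcoef(ratings, rowvar=False)           # tasks x tasks correlation matrix
eigenvalues, eigenvectors = np.linalg.eigh(corr)    # eigendecomposition (symmetric matrix)
order = np.argsort(eigenvalues)[::-1]               # order factors by explained variance
eigenvalues = eigenvalues[order]
eigenvectors = eigenvectors[:, order]

# Factor loadings: each eigenvector scaled by the square root of its eigenvalue,
# so the sum of squared loadings of a factor equals that factor's eigenvalue.
loadings = eigenvectors * np.sqrt(eigenvalues)

# Retain factors whose eigenvalue exceeds one, i.e. factors that explain more
# variance than a single variable would.
n_retained = int((eigenvalues > 1.0).sum())
print("eigenvalues:", np.round(eigenvalues, 2))
print("factors retained (eigenvalue > 1):", n_retained)
print("loadings on the first factor:", np.round(loadings[:, 0], 2))

# Illustration of the weighting idea used later in this appendix: a key-practice
# score formed by summing task scores weighted by their first-factor loadings
# (absolute values, since the sign of an eigenvector is arbitrary).
task_scores = np.array([4.0, 3.5, 5.0, 4.5, 3.0])   # invented task scores
print("weighted key-practice score:",
      round(float(np.abs(loadings[:, 0]) @ task_scores), 2))
```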
PCA is an optimum method for determining the number of factors that explain the covariance among a set of variables. The method maximizes the sum of squared loadings of each factor extracted in turn, which means that each component factor explains more variance than the loadings obtained from any other method of factoring would. Determining factors through PCA involves the classic problem in matrix algebra of finding the eigenvectors and eigenvalues from the characteristic equation of a symmetric matrix. In this study, the matrix used in PCA is the correlation or covariance matrix of reliability task ratings under the different key practices. For example, if $M$ is an (n x n) covariance matrix for a key practice, the mathematical problem corresponds to finding the eigenvalues ($\lambda_i$) and eigenvectors ($V_i$) that satisfy the matrix equation [60][78]:

$$M \cdot V_i = \lambda_i V_i, \qquad i = 1, 2, 3, \ldots, n$$

The outputs from PCA thus include eigenvalues and associated eigenvectors. Each eigenvalue and its associated eigenvector corresponds to an extracted factor of the covariance matrix, and the eigenvector with the highest eigenvalue is the principal component of the data. Each eigenvector is proportional to its corresponding column of factor loadings, and the coefficient of proportionality is the square root of the eigenvalue for that factor. Although the sum of squares of the elements of each eigenvector is 1, the sum of squares of the factor loadings for any factor is not 1; it represents the total amount of variance among the variables explained by the factor. Accordingly, this total amount of variance explained is simply the eigenvalue for the factor.

For small n (n = 2 or 3), the solution of an eigenvalue problem is easy. For larger matrices, rather than deriving the eigenvalues and eigenvectors directly from the correlation matrix, an iterative approach is used. Iterations are initiated with a trial vector, and its closeness to a statistical criterion value is determined. The iterative process is continued until the solution converges, i.e., until additional iterations produce almost identical results.

For an (n x n) symmetric matrix, theoretically n different factors can be extracted. However, all of these factors may not be significant. Various methods are used to determine the number of factors to extract when factor analyzing a set of variables. The most commonly used are Kaiser's eigenvalue criterion and Cattell's scree plot criterion. According to Kaiser's criterion, all factors that explain variance greater than the variance of a single variable should be extracted, i.e., all factors with eigenvalues higher than one. In Cattell's scree plot test, the successive eigenvalues of the factors are plotted, and the point where the plot abruptly levels out is noted; only the factors before the leveling-off point are extracted, and factor loadings are then determined for all variables on each factor [71][78].

A factor represents a linear combination of the variables that load significantly on it. Factor analysis creates a combination of variables so weighted as to account for the variance in the correlations, and the factor loadings are the weighted combination of variables that best explains the variance [78]. For each key practice $A$ that has $k$ tasks loading significantly on it, the key practice score can be written as:

$$A = w_1 a_1 + w_2 a_2 + \cdots + w_k a_k$$

where $A$ is the score for a key practice; $a_i$ is the score for a task under the key practice; and $w_i$
is the factor loading of the task on the key practice, which can be used as a weighting factor for this summation. The ability to sum the scores on different tasks using weighting factors to obtain a single score for a key practice provides a convenient way of evaluating companies on an eight-axis radar chart, as discussed in the thesis. The factor loadings for tasks under the different key practices obtained from Principal Component Analysis (PCA) can be used as the weighting factors in the above equation.

6.8 Appendix-8: Principal component analysis (PCA) results

Factor loadings are listed by task for each key practice, followed by the eigenvalue and the percentage of variance explained by the extracted factor.

Reliability requirements and planning (Eigenvalue 3.711; % var. explained 30.93):
1-01: 0.364, 1-02: 0.407, 1-03: 0.397, 1-04: 0.291, 1-05: 0.639, 1-06: 0.618, 1-07: 0.507, 1-08: 0.605, 1-09: 0.630, 1-10: 0.684, 1-11: 0.653, 1-12: 0.683

Training and development (Eigenvalue 3.999; % var. explained 39.99):
2-01: 0.669, 2-02: 0.687, 2-03: 0.709, 2-04: 0.616, 2-05: 0.733, 2-06: 0.651, 2-07: 0.612, 2-08: 0.620, 2-09: 0.430, 2-10: 0.540

Reliability analysis (Eigenvalue 4.344; % var. explained 39.49):
3-01: 0.660, 3-02: 0.690, 3-03: 0.666, 3-04: 0.660, 3-05: 0.594, 3-06: 0.567, 3-07: 0.660, 3-08: 0.602, 3-09: 0.601, 3-10: 0.668, 3-11: 0.523

Reliability testing (Eigenvalue 4.931; % var. explained 37.93):
4-01: 0.545, 4-02: 0.505, 4-03: 0.563, 4-04: 0.551, 4-05: 0.509, 4-06: 0.332, 4-07: 0.537, 4-08: 0.737, 4-09: 0.761, 4-10: 0.771, 4-11: 0.763, 4-12: 0.765, 4-13: 0.465

Supply chain management (Eigenvalue 6.259; % var. explained 41.73):
5-01: 0.574, 5-02: 0.667, 5-03: 0.662, 5-04: 0.597, 5-05: 0.654, 5-06: 0.616, 5-07: 0.661, 5-08: 0.707, 5-09: 0.415, 5-10: 0.622, 5-11: 0.594, 5-12: 0.693, 5-13: 0.694, 5-14: 0.726, 5-15: 0.737

Failure data tracking (Eigenvalue 5.536; % var. explained 50.32):
6-01: 0.711, 6-02: 0.755, 6-03: 0.715, 6-04: 0.755, 6-05: 0.704, 6-06: 0.609, 6-07: 0.657, 6-08: 0.726, 6-09: 0.664, 6-10: 0.706, 6-11: 0.784

Verification and validation (Eigenvalue 4.398; % var. explained 54.98):
7-01: 0.463, 7-02: 0.781, 7-03: 0.794, 7-04: 0.801, 7-05: 0.802, 7-06: 0.804, 7-07: 0.698, 7-08: 0.725

Reliability improvements (Eigenvalue 4.516; % var. explained 45.16):
8-01: 0.665, 8-02: 0.555, 8-03: 0.679, 8-04: 0.611, 8-05: 0.724, 8-06: 0.647, 8-07: 0.671, 8-08: 0.748, 8-09: 0.674, 8-10: 0.726

6.9 Appendix-9: Principal axis factoring (PAF) results

Reliability requirements and planning (Eigenvalue 3.021; % var. explained 25.17):
1-01: 0.303, 1-02: 0.344, 1-03: 0.332, 1-04: 0.238, 1-05: 0.581, 1-06: 0.559, 1-07: 0.438, 1-08: 0.542, 1-09: 0.573, 1-10: 0.640, 1-11: 0.602, 1-12: 0.638

Training and development (Eigenvalue 3.362; % var. explained 33.62):
2-01: 0.617, 2-02: 0.640, 2-03: 0.665, 2-04: 0.555, 2-05: 0.698, 2-06: 0.597, 2-07: 0.551, 2-08: 0.560, 2-09: 0.367, 2-10: 0.474

Reliability analysis (Eigenvalue 3.690; % var. explained 33.55):
3-01: 0.615, 3-02: 0.652, 3-03: 0.624, 3-04: 0.613, 3-05: 0.539, 3-06: 0.509, 3-07: 0.611, 3-08: 0.549, 3-09: 0.544, 3-10: 0.621, 3-11: 0.464

Reliability testing (Eigenvalue 4.345; % var. explained 33.42):
4-01: 0.492, 4-02: 0.449, 4-03: 0.507, 4-04: 0.496, 4-05: 0.454, 4-06: 0.284, 4-07: 0.483, 4-08: 0.706, 4-09: 0.739, 4-10: 0.753, 4-11: 0.742, 4-12: 0.740, 4-13: 0.409

Supply chain management (Eigenvalue 5.657; % var. explained 37.71):
5-01: 0.536, 5-02: 0.635, 5-03: 0.629, 5-04: 0.559, 5-05: 0.619, 5-06: 0.580, 5-07: 0.629, 5-08: 0.680, 5-09: 0.377, 5-10: 0.585, 5-11: 0.556, 5-12: 0.663, 5-13: 0.665, 5-14: 0.703, 5-15: 0.715

Failure data tracking (Eigenvalue 5.001; % var. explained 45.46):
6-01: 0.676, 6-02: 0.727, 6-03: 0.680, 6-04: 0.727, 6-05: 0.665, 6-06: 0.560, 6-07: 0.613, 6-08: 0.693, 6-09: 0.619, 6-10: 0.669, 6-11: 0.763

Verification and validation (Eigenvalue 3.930; % var. explained 49.12):
7-01: 0.393, 7-02: 0.739, 7-03: 0.759, 7-04: 0.774, 7-05: 0.774, 7-06: 0.778, 7-07: 0.636, 7-08: 0.667

Reliability improvements (Eigenvalue 3.922; % var. explained 39.22):
8-01: 0.616, 8-02: 0.495, 8-03: 0.629, 8-04: 0.555, 8-05: 0.685, 8-06: 0.595, 8-07: 0.624, 8-08: 0.715, 8-09: 0.628, 8-10: 0.689

6.10 Appendix-10: List of reliability tasks and weighting factors

Table 14: Weighting factors for reliability tasks
(Each row lists the reliability task or trait, its factor loading, and its weighting factor.)

1. RELIABILITY REQUIREMENTS AND PLANNING
1-01 Presence of a reliability department 0.364 1.00
1-02 Customer inputs in the form of their requirements and expectations 0.407 1.12
1-03 Capturing reliability specifications of competitive products 0.397 1.09
1-05 Product reliability plan that includes reliability goals and reliability activities schedule 0.639 1.76
1-06 Establishing reliability goals for sub-assemblies and components in a product 0.618 1.70
1-07 Establishing reliability goals as a distribution and not as a point estimate 0.507 1.39
1-08 Establishing reliability goals for products based on specific lifecycle conditions 0.605 1.66
1-09 While preparing a reliability plan, planning for required resources like materials, personnel and equipment 0.63 1.73
1-10 Including details on reliability analysis and testing for specific products as part of a reliability plan 0.684 1.88
1-11 Contingency planning and specification of decision criteria for altering reliability plans 0.653 1.79
1-12 Reliability plan includes a process for communicating results from reliability activities 0.683 1.88
1-04 Using reliability specs from old products while establishing requirements for new products DELETED

2. TRAINING AND DEVELOPMENT
2-01 Reliability training plan or program 0.669 1.56
2-02 Formally trained reliability engineers 0.687 1.60
2-03 Top management commitment to reliability training 0.709 1.65
2-04 Business managers trained to appreciate importance of reliability to products or business 0.616 1.43
2-05 Reliability managers trained on how specific reliability activities can impact reliability 0.733 1.70
2-06 Reliability engineers trained to identify failure modes and mechanisms in a product design 0.651 1.51
2-07 Reliability engineers trained in statistical methods for reliability prediction and data analysis 0.612 1.42
2-08 Reliability engineers trained in failure analysis, root cause analysis, and corrective actions 0.62 1.44
2-09 Reliability training provided to employees not directly associated with reliability, e.g., procurement, purchasing, etc. 0.43 1.00
2-10 Tracking new technologies, modeling or analysis techniques that can impact reliability 0.54 1.26

3.
RELIABILITY ANALYSIS XX XX 3-01 Identification of potential single points of failure and failure modes in a product design 0.66 1.26 3-02 Identification of potential failure mechanisms that can cause failures in a product design 0.69 1.32 3-03 Identification of critical failure modes and mechanisms in a product design 0.666 1.27 3-04 Quantification of risks and weaknesses for critical components in a product design 0.66 1.26 3-05 Checking adherence of a design to design rules 0.594 1.14 3-06 Making reliability point estimates using modeling or reliability prediction handbooks 0.567 1.08 3-07 Making reliability distribution predictions based on times-to-failure for potential failure mechanisms 0.66 1.26 3-08 Characterization of materials used in a product design 0.602 1.15 3-09 Using reliability predictions for specifying warranty periods and making spares provisioning 0.601 1.15 3-10 Using reliability analysis to design specific reliability tests for a product 0.668 1.28 3-11 Optimizing lifecycle costs for a product based on reliability vs. cost trade-offs 0.523 1.00 4. RELIABILITY TESTING XX XX 4-01 Tests to identify design margins and destruct limits for a product 0.545 1.17 4-02 Design verification or qualification tests for a product 0.505 1.09 4-03 Using reliability testing to make design changes in a product prior to production 0.563 1.21 4-04 Reliability testing based on generic specifications for all products 0.551 1.18 4-05 Reliability testing based on customer specifications 0.509 1.09 4-07 Reliability tests tailored to specific products 0.537 1.15 4-08 Detailed reliability test plans for products including sample sizes and confidence limits 0.737 1.58 4-09 Accelerated tests based on specific failure mechanisms to determine times-to-failure 0.761 1.64 4-10 Analysis of the test data to determine statistical failure distributions 0.771 1.66 4-11 Application of failure distributions to make reliability predictions using acceleration factors 0.763 1.64 4-12 Reviewing and updating reliability qualification test requirements for components 0.765 1.65 4-13 Minimizing reliability testing by using burn-in or environmental stress screening 0.465 1.00 4-06 Burn-in or screening of products prior to shipping DELETED 109 5. 
SUPPLY CHAIN MANAGEMENT XX XX 5-01 Component engineers for parts selection and supply management 0.574 1.38 5-02 Procuring parts only from authorized distributors and not from part brokers 0.667 1.61 5-03 Using manufacturing quality data for part selection and for in-coming lot rejection 0.662 1.60 5-04 Using reliability test data from suppliers for part selection and for in- coming lot rejection 0.597 1.44 5-05 Considering technology maturity of parts during part selection 0.654 1.58 5-06 Vendor or supplier assessments or audits 0.616 1.48 5-07 Using and maintaining a list of preferred/qualified/approved parts and suppliers 0.661 1.59 5-08 Using and maintaining a supplier rating system 0.707 1.70 5-09 Using techniques like uprating for qualifying parts for use outside their datasheet specs 0.415 1.00 5-10 Supplier contractual agreements containing quality and reliability requirements 0.622 1.50 5-11 Multiple-sourcing of parts 0.594 1.43 5-12 Tracking component traceability markings to identify any changes 0.693 1.67 5-13 Tracking part obsolescence to ensure continued supply or to make alternate supply arrangements 0.694 1.67 5-14 Review of supplier product change notices (PCNs) to assess their impact on manufacturability 0.726 1.75 5-15 Review of supplier PCNs to assess their impact on reliability 0.737 1.78 6. FAILURE DATA TRACKING AND ANALYSIS XX XX 6-01 Manufacturing defects and production testing failures tracked, and recorded in a database 0.711 1.17 6-02 Reliability testing failures tracked and recorded in a database 0.755 1.24 6-03 Field failures tracked and recorded in a database 0.715 1.17 6-04 Ensuring traceability of products from manufacture to failure 0.755 1.24 6-05 Conducting failure analysis on failed products from all sources from manufacturing to field 0.704 1.16 6-06 Creating Pareto charts based on failure modes and failure sites 0.609 1.00 6-07 Conducting root cause analysis on failed products from all sources 0.657 1.08 6-08 Generating failure analysis reports detailing underlying failure mechanisms for failed products 0.726 1.19 6-09 Creating Pareto charts based on failure mechanisms 0.664 1.09 6-10 Correlating failure mechanisms with specific materials and processes 0.706 1.16 6-11 Creating and updating a database of corrective actions based on identified failure modes and mechanisms 0.784 1.29 110 7. VERIFICATION AND VALIDATION XX XX 7-01 Obtaining certifications like ISO for all management processes including reliability 0.463 1.00 7-02 Updating reliability predictions for products based on field data for present and previous products 0.781 1.69 7-03 Modifying statistical failure distributions used for reliability predictions on the basis of field failure data 0.794 1.71 7-04 Modifying reliability test conditions for current and future products based on failure mechanisms observed in field 0.801 1.73 7-05 Updating the failure modes database to incorporate any new failure modes observed in field 0.802 1.73 7-06 Updating the failure mechanisms database to incorporate any new failure mechanisms observed in field 0.804 1.74 7-07 Verifying and modifying warranty estimates and spares provisioning based on field returns 0.698 1.51 7-08 Internal audits for reliability planning, analysis and testing activities 0.725 1.57 8. 
RELIABILITY IMPROVEMENTS XX XX 8-01 Bill of materials modification to exclude parts that have had reliability problems in field 0.665 1.20 8-02 Updating product reliability requirements due to business or market considerations 0.555 1.00 8-03 Making design changes, if required, to accommodate changes in lifecycle environment 0.679 1.22 8-04 Implementing corrective actions based on field failure modes 0.611 1.10 8-05 Implementing corrective actions based on field failure mechanisms 0.724 1.30 8-06 Requiring engineering change notifications for reliability improvements 0.647 1.17 8-07 Preventing recurrence of failures in future products, which have already been observed in existing products 0.671 1.21 8-08 Using field failure information to improve company design rules and process control requirements 0.748 1.35 8-09 Evaluating and implementing new modeling or analysis techniques to improve product reliability 0.674 1.21 8-10 Evaluating and implementing new technologies to improve product reliability 0.726 1.31 111 6.11 Appendix-11: PCB assembler evaluation questionnaire Part I. Manufacturing compatibility questionnaire: 1. Can you process PCBs with components mounted on both sides of the board? Yes No 2. Which of the following components do you assemble using automation? Through-hole components Surface-mount components Mixed technology (SMT and through-hole together) Fine pitch BGAs (>1.0 mm pitch) Ultra-fine pitch BGAs (<1.0 mm pitch) and Chip scale packages Flip Chips, Chip-on-board and TAB packages 3. Which of the following components do you assemble manually? Through-hole components Surface-mount components Mixed technology (SMT and through-hole together) Fine pitch BGAs (>1.0 mm pitch) Ultra-fine pitch BGAs (<1.0 mm pitch) and Chip scale packages Flip Chips, Chip-on-board and TAB packages 4. Which of the following board constructions are you assembling? Rigid printed board Flex printed board Rigid flex board Rigid back plane Molded board MCM-Ceramic modules and Hybrids MCM-Laminated modules MCM-Deposited dielectric 5. What board size diagonals are you currently assembling? <250 mm [<10.0 in.] 250 [10.0] 350 [14.0] 450 [17.5] 550 [21.5] 650 [25.5] 750 [29.5] 850 [33.5] > 850 [33.5] 112 6. Which maximum thru-hole work area are you currently assembling? <300 sq. cm [<50 sq. in] 300 [50] 600 [100] 1000 [160] 1500 [230] 2100 [330] 2800 [430] 3600 [550] >3600 [>550] 7. Which maximum SMT work area are you currently assembling? <300 sq. cm [<50 sq. in] 300 [50] 600 [100] 1000 [160] 1500 [230] 2100 [330] 2800 [430] 3600 [550] >3600 [>550] 8. Which completed end products are you currently assembling? Consumer products General purpose computers Telecommunication products Commercial aircraft products Industrial and automotive products High performance military products Outer space (Low-earth orbit and geostationary-earth orbit) Military avionics Automotive (under the hood) 9. Which of the following through-hole components are on the PCBs that you assemble? Two-leaded axial Two leaded radial Multiple leaded radial with less than 6 leads Single-Inline Packages (SIPs) Dual-Inline packages (DIPs) with 24 leads or less Dual-Inline packages (DIPs) with more than 24 leads Pin Grid Arrays (PGAs) Component Sockets Card/Edge two-piece connectors 113 10. Which of the following surface-mount components are on the PCBs that you assemble? 
Chip resistors or capacitors on a reel Bulk chip capacitors/resistors Tantalum capacitors Metal Electrode Leadless Face components (MELFs) Small Outline Diodes (SODs) Small Outline Transistors (SOTs) Small Outline ICs (SOICs) Variable Resistor Trim Pots Surface Mount Sockets/Test point connects 11. Which of the following high-pin count surface-mount components are on the PCBs that you assemble? Chip-on-tape (Molded ring) >0.4 mm pitch Chip-on-tape (Molded ring) ? 0.3 mm pitch Quad Flat Pack (QFP) >0.4 mm pitch Quad Flat Pack (QFP) ? 0.3 mm pitch Shrink Quad Flat Pack (QFP) Thin Small Outline Package (TSOP) Ball / Post Grid Array (BGA) >1.0 mm pitch Ball / Post Grid Array (BGA) ?1.0 mm pitch Land Grid Array (LGA) 12. For peripheral surface mount packages, e.g., QFPs, what is the smallest pitch of the package leads that you can assemble for an I/O count greater than 100? > 0.65 mm 0.65 mm 0.5 mm 0.4 mm 0.3 mm < 0.3 mm 13. For peripheral surface mount packages, e.g., QFPs, what is the smallest pitch of the package leads that you can assemble for packages with I/O count less than 100? > 0.65 mm 0.65 mm 0.5 mm 0.4 mm 0.3 mm < 0.3 mm 14. For surface mount array packages, e.g., BGAs or CSPs, what is the smallest pitch of the solder balls that you can assemble for an I/O count greater than 225? > 1.25 mm 1.25 mm 1.0 mm 0.8 mm 0.5 mm 114 < 0.5 mm 15. For surface mount array packages, e.g., BGAs or CSPs, what is the smallest pitch of the solder balls that you can assemble for an I/O count less than 225? > 1.25 mm 1.25 mm 1.0 mm 0.8 mm 0.5 mm < 0.5 mm 16. Which of the following cable or harness for multiple wires are on the PCBs that you assemble? High power wire with thickness ? Gauge 10 Lower power wire with thickness < Gauge 10 Electrical cable or wire Optical cable (glass) Electrical harness Optical harness Ribbon cable harness Combination harness 17. Which of the following distance wiring terminals or connectors are on the PCBs that you assemble? Solid wire Standard wire Shielded wire Coaxial wire Terminal bifurcated and turret Clip and pin terminals Crimped terminals Board connectors Backplane connectors Press-fit connectors 18. Which of the following mechanical assemblies are on the PCBs that you assemble? Mechanical hardware Shielding hardware Thermal conductive hardware Front panel hardware Jumper wires Final system assemblies (box build) 19. Which of the following circuit board attachment techniques are on the PCBs that you assemble? Hot bar soldering Focused hot air soldering Wave soldering Infrared reflow soldering Vapor phase soldering 115 Hot Air Convection Soldering Laser soldering Conductive adhesive attachment Selective soldering 20. Which of the following IC attachment techniques are on the PCBs that you assemble? Thermal wire bonding Ball bonding Ultrasonic wire bonding Beam lead chip bonding Generic tape automated bonding Custom tape automated bonding Flip-chip on ceramic or glass base Flip-chip on rigid printed boards Flip Chip on Flex Printed Boards 21. Which of the following coatings and encapsulations are on the PCBs that you assemble? Bare die glob top Flip-chip underfill Assembly (1 or 2 sides) epoxy coating Assembly (1 or 2 sides) polyurethane coating Assembly (1 or 2 sides) acrylic coating Assembly (1 or 2 sides) vacuum deposition coating 22. Which of the following cleaning technologies do you use on the PCBs that you assemble? 
No clean system/Never clean system Aqueous cleaning in-line system Aqueous cleaning static soak Modified in-line solvent clean Modified static soak solvent clean Ultrasonic agitation cleaning 23. Which of the following board types do you currently procure for the PCBs that you assemble? None - Consignment Item Single-sided Double-sided Multilayer (Rigid) Multilayer (Rigid-Flex) Metal core boards CTE boards MCM's and Hybrids PCMCIA's 24. Which of the following component types do you currently procure for the PCBs that you assemble? None - Consignment Item Passive thru-hole components Passive surface mount components 116 Surface mount I/Cs High pin-count peripheral devices High pin-count array devices Bare dies (Chips) Application Specific I/Cs 25. Which of the following solders have you the capability of using for assembling components on boards? Sn-Pb High temp 90Sn-Pb SnAg SnCu SnZn SnBi SnAgCu SnAgBi SnZnCu SnZnBi SnAgCuBi 26. Which of the following types of testing capabilities do you have? In-circuit testing Functional testing 27. What is the minimum probe point pitch for your electrical or functional testers (in mm)? No capabilities Greater than 1 1 0.8 0.65 0.5 0.4 0.3 0.2 Less than 0.2 28. What is the maximum number of probe points for any of your electrical or functional testers? No capabilities Less than 200 200 500 1000 1500 2000 2500 3000 Greater than 3000 117 29. What is the maximum number of test vectors that your electrical or functional testers can generate and use? No capabilities Less than 500 500 1000 2000 3000 4000 5000 6000 Greater than 6000 30. Which of the following types of repair are you capable of doing on the assembled boards? No capabilities/Not used Correction of defective solder joints Removal and replacement of components Circuit board modification and repair 31. Which of the following types of components do you have the capability of replacing during repair and rework? No capabilities/Not used Through-hole components PGAs/Connectors Chip components Leadless components Gull-wing components J-leaded components BGAs/CSPs Flip-chips 118 Part II. Reliability capability evaluation questionnaire: Question type Description T1 Simple yes/no type questions ? either ?yes? or ?no? is scored. T2 Multiple selections can be made out of the choices available - the score depends on the number of choices selected as a response. T3 Only a single selection can be made out of the many options - the responses are ordered and selection of a choice that is higher in order gives a higher score Reliability requirements and planning 1. Do you have people in your organization who are dedicated to ensuring reliability of assemblies in customer application conditions? [T1] 10 Yes No 2. Do you have documented contamination control protocol for assembly areas to meet customer requirements? [T1] Yes No 3. Do you have documented electrostatic discharge (ESD) policies and procedures in place for handling electronic parts and equipment? [T1] Yes No 4. Do you have a system to keep track of the Moisture Sensitivity Level (MSL) of the components during the assembly operation? [T1] Yes No 5. What best describes the status of your quality plan? [T3] Functional steering committee formed / Quality manual started Quality philosophy established and published Documented quality policy under review Generic quality manual exists for whole facility Controlled quality manuals for all departments 6. Which of the following elements of quality assurance are implemented within your organization? 
[T2] Training of personnel Quantitative methodologies for SPC Process improvement strategies Criteria for selecting total or sample-based inspection Corrective action system Documented audit plan 10 [T1] represents [Question Type] 119 7. For which of the following do you maintain documented records? [T2] Receiving inspection Process control Equipment calibration Equipment maintenance Production material rejects Training 8. Mark any/all of the following that are part of your documented Statistical Process Control implementation process? [T2] Documented plan exists Employees trained Control-charts used for process control Data from on-line inspection Data from non-destructive evaluation Data from machine operation Data from periodic testing of production samples (coupons) Processes stable and under control Continued improvement of stable processes 9. Which of the following metrics do you use to specify the performance of your processes? [T2] C pk Percent defective Defects per unit (dpu) Defects per million opportunities (DPMO) 10. For which of the following factors do you use documented procedures to control the solder-paste deposition process? [T2] Printing speed Squeegee pressure On-off contacts Separation speed Separation distance Cleaning frequency Stencil design 11. Which of the following techniques do you use to examine paste deposits (height/paste volume)? [T2] Visual inspection Two-dimensional optical inspection Automated optical inspection 3-D laser scanning 12. Which of the following solder joint characteristics do you use as assembly quality parameters? [T2] Solder height/lead thickness Pad/fillet wetting angle Fillet/lead wetting angle Fillet curvature 120 Fillet solder volume 13. Which of the following assembly features do you check during your visual inspection of the final assembly? [T2] No visual inspection conducted General cleanliness Legibility of markings Extraneous conductive material Dimensional conformance Component mounting Exterior solder joint defects Scratched charred or burned areas 14. If used, at what stages of manufacturing is the automated optical inspection (AOI) used in your assembly process? [T2] Post paste application Post component placement Post-soldering 15. What systems do you use to reduce downtime for your assembly equipment? [T2] Scheduled calibration Scheduled preventive maintenance Periodic operator retraining Spare parts provisioning Scheduling and sequencing of operations 16. Are "accept/reject" criteria defined and available for use for each of the inspection tests? [T1] Yes No 17. If used, which of the following is the assembled board checked for in the post-soldering AOI or optical inspection? [T2] Missing or superfluous components Misoriented components Through-hole pins Solder defects Lifted component leads Gold finger contamination Incorrect jumper position Improperly inserted connectors 18. For which of the following solder joint defects do you check the final assemblies for? [T2] Component mis-alignment Cold solder joint Dewetting Solder bridging Solder balling Solder voids Solder wicking Starved solder joints 121 Icicles Tombstoning 19. Do you have documented procedures for conducting post-reflow repair, rework or modification of the assemblies? [T1] Yes No 20. Do you track the number of repairs or reworks conducted on a particular site or assembly? [T1] Yes No 21. What percentage of your final assemblies have to go through some kind of rework or repair? 
[T3] Between 75 to 100% Between 50 to 75% Between 25 to 50% Between 0 to 25% None 22. Which of the following risks have documented mitigation procedures for rework? [T2] Moisture sensitivity level Thermo-mechanical degradation Electrostatic discharge Contamination control 23. Do you mark the reworked or modified boards that you supply? [T1] Yes No 24. Do you assess the reliability of the reworked assemblies? [T1] Yes No 25. What is the critical value of the capability index (C pk ) below which a corrective action is initiated for a manufacturing process? [T3] C pk ? 0.5 0.5 < C pk ? 1.0 1.0 < C pk ? 1.5 C pk ? 1.5 Training and development 26. Do you have self-improvement incentive programs for your employees? [T1] Yes No 27. Which of the following are included in your employee training program? [T2] Certified training for selected personnel New process implementation training Advanced statistical and DoE training to employees Periodic retraining of employees 122 On-going improvement program for employees 28. Do you have a system to assess the effectiveness of employee training? [T1] Yes No 29. Do you have staff dedicated to research and development activities? [T1] Yes No 30. Which of the following reliability related training courses have been offered or taken by your employees? [T2] Failure modes and effects analysis (FMEA) Material characterization Reliability testing Failure analysis methods Reliability analysis 31. Do you conduct failure mode and effect analysis (FMEA) for your assembly processes? [T1] Yes No Reliability testing 32. Which of the following electrical continuity and functionality tests do you conduct during the assembly process? [T2] Digital ICTs Analog ICTs Bed-of-nails testers Flying probe testers Double-sided simultaneous electrical testers Boundary-scan protocol testing Electro-magnetic Interference System level electrical test System level functional test 33. Which of the following tests do you have the capabilities of performing on finished assemblies? [T2] Burn-in/ESS at some temperature Burn-in/ESS with temperature cycling Burn-in/ESS with temperature cycling and humidity Power cycling on-off Interconnect Stress Test (IST) Altitude Isothermal mechanical cycling Vibration testing Mechanical shock Thermal shock 123 Salt spray 34. Which of the following cleanliness tests do you perform on the PCBs that you assemble? [T2] Ionic salt/residue test Organic contaminant impregnation test Surface insulation resistance test 35. Do you have a clear documented definition for failure for classifying assemblies as passed or failed during reliability testing? [T1] Yes No 36. Do you report reliability test results for the final assembly? [T1] Yes No 37. Which of the following bond testing procedures do you use for checking COB, TAB, QFP or flip- chip bonding? [T2] Wire-pull test Ball-shear test Die-shear test TAB push test Stud pull test Tweezer pull test 38. Do you use reliability tests for making reliability predictions for your assemblies in customer's application environment? [T1] Yes No 39. Do you conduct application specific reliability testing for PCB assemblies? [T1] Yes No Supply chain management 40. Do you have engineers who conduct part and material selection and management with respect to reliability? [T1] Yes No 41. Do you ever buy parts or materials from brokers? (Yes=0, No=1) [T1] Yes No 42. Are the suppliers required to provide their qualification and reliability test data on their parts and materials? [T1] Yes No 43. 
Which of the following are part of your control system for all in-coming parts and materials? [T2] Receiving inspection 124 Handling control Storage control Material resource planning Non-conforming material quarantine 44. What is the nature of the receiving inspection procedures for parts and materials? [T3] Documented Not documented but followed Documented and followed 45. Do you make repairs on bare printed circuit boards that are found defective? (Yes=0, No=1) [T1] Yes No 46. Which of the following are elements of your supplier control program? [T2] Approved supplier list Monthly analysis program Supplier performance reviews TQM acceptance by suppliers All key suppliers using certified parts program 47. Which of the following materials management systems do you practice? [T2] Material resource planning (MRP) system Electronic data interchange (EDI) Engineering change order process On-line shop floor materials control Kitting capability for components 48. Do you use parts or materials outside their datasheet specification limits or expiration dates? (Yes=0, No=1) [T1] Yes No 49. Which of the following are a part of current procedures for storage and timely disposition of non- conforming parts and materials? [T2] Identification Segregation from regular material Proper disposition Corrective action 50. What is the nature of the procedures for storage of limited life parts like prepreg, epoxies, solder pastes, fluxes, etc.? [T3] Documented Not documented but followed Documented and followed 51. At present, what percentage of your parts or materials is multiple sourced? [T3] None Between 0 and 25% Between 25 and 50% Between 50 and 75% 125 Do you verify the supplied parts during incoming inspection for their traceability markings, e.g., serial number, lot number, date code, etc.? [T1] Between 75 and 100% 52. Yes No 53. Do you have a part or material traceability system to track and verify in-coming parts and materials? [T1] Yes No 54. Do you keep track of the obsolescence of parts or materials used? [T1] Yes No 55. How do you handle the product change notices (PCNs) from your part or material suppliers? [T3] Not tracked Tracked internally Tracked and communicated to customers Failure data tracking and analysis 56. Do you maintain a database for reported failures? [T1] Yes No 57. For which of the following types of data do you have established procedures for collecting, summarizing and analyzing? [T2] Incoming inspection Paste deposition Solder joint geometry Solder joint defects Interconnect strength Electrical functional tests Reliability tests Customer returns 58. Which of the following are tracked in your "return material authorization" system? [T2] Customer purchase order number Number of parts returned Reason for return Assigned failure cause Corrective action proposed Interconnect strength data Electrical functional tests data Reliability tests data Customer return data 59. Do you have capabilities for failure analysis of assemblies? [T1] Yes 126 No 60. Do you perform failure analysis on assemblies failed during manufacture? [T1] Yes No 61. Do you perform failure analysis on customer returned assemblies? [T1] Yes No 62. Do you have documented procedures to conduct analysis of failures? [T1] Yes No 63. Which of the following do you rank using Pareto Charts? [T2] Failure modes Failure sites Failure mechanisms 64. Which of the following information does your failure database contain? [T2] Manufacture date Shipping date Returned date Failure mode Failure site Failure mechanisms 65. 
Do you correlate failure of assemblies with specific materials and processes? [T1] Yes No 66. Do you have an established system with your suppliers through which you identify problems in supplied parts or materials, and verify that corrective actions have been taken? [T1] Yes No 67. Do you maintain a record of all corrective actions taken? [T1] Yes No Verification and validation 68. Which of the following approvals and/or certifications do you provide on the assemblies that you assemble? [T2] J-STD-001 IPC-A-610 Class-1 IPC-A-610 Class-2 IPC-A-610 Class-3 MIL-STD-2000 UL Approval UL Level 94V0 127 UL Level 94V1 UL Level 94V2 Canadian Standards MIL-P-55110 (Rigid Boards Qualification Standard) MIL-P-50884 (Rigid/Flex Boards Qualification Standard) ISO-9003 ISO-9002 ISO-9001 BABT (British Approvals Board of Telecommunications) QS-9000 Equipment Engineering Capabilities (EEC) Pb-free Compliance Certification 69. Do you update your process FMEA based on failures at various stages of assembly? [T1] Yes No 70. For which of the following do you conduct a regular internal audit? [T2] Quality system Manufacturing processes Reliability planning Reliability testing Corrective actions system Reliability improvements 71. If you receive feedback on customer dissatisfaction, which of the following is performed? [T2] Documentation of reported dissatisfaction Identification of cause of dissatisfaction Report to concerned personnel Analysis of cause of dissatisfaction Corrective action implementation 72. Do you review and monitor the effectiveness of corrective actions? [T1] Yes No 73. When are your customers notified about changes made to products controlled by customer drawings and specifications? [T3] Not informed Informed only after change Informed before change Informed before and after change 74. Do you have a documented process to make reliability improvements in the assembly process based on lessons learned from failures of earlier assemblies? [T1] Yes No 128 75. Do you have a documented process to verify improvements in the reliability of your processes and assemblies? [T1] Yes No 76. Which of the following are used for initiating corrective and preventive action? [T2] Incoming inspection Employee input Paste deposition data Solder joint geometry data Solder joint defects data 77. Do you have documented procedures to make improvements in processes based on field failures of assemblies? [T1] Yes No 129 Contributions In the presence of a global supply chain, companies are looking for means to conduct an upfront evaluation of suppliers based on their ability to meet reliability requirements. This can provide valuable competitive advantage for them. This dissertation discusses a reliability capability evaluation model that can be used for this evaluation. The evaluation model is validated, and a quantitative evaluation method is proposed. The contributions of this dissertation are: 1. I used the concept of maturity models to develop the reliability capability maturity model for electronics manufacturers, and created an evaluation procedure for supplier selection. Eight key reliability practices have been defined in terms of their purpose, underlying reliability tasks, and outputs. Five levels of reliability capability maturity have been identified, and requirements are defined at each level of maturity for the eight key practices. 2. 
I adapted the statistical methods, based on multivariate correlation analysis, suggested in the field of psychometrics to empirically validate the reliability capability maturity model. I created a survey as a scientific instrument to solicit relevance ratings for reliability tasks from industry professional and researchers. Analysis of the survey data resulted in a listing of eighty-eight critical reliability tasks spread over eight key practices, which can be used for reliability capability evaluation. This is the first empirically validated list of critical to reliability tasks. 130 3. I developed a quantitative reliability capability evaluation process by using factor loadings from Principal Components Analysis as weighting factors for all eighty- eight reliability tasks. An evaluation using SMOP (Radar) charts based on empirically developed weighting factors can be used for quantitative discrimination between suppliers. 4. I created a procedure, which includes a questionnaire, for conducting reliability capability maturity evaluations. Based on the procedure, I conducted reliability capability evaluations for four electronics companies and results for one of the evaluations are reported in this dissertation as a case-study. 5. I created the printed circuit board assembly (PCBA) manufacturer reliability capability benchmarking methodology. The methodology consists of a manufacturing compatibility evaluation followed by reliability capability maturity score evaluation for a printed circuit board assembler. 131 References [1] Vichare, N., Rodgers, P., Pecht, M., ?In Situ Temperature Measurement of a Notebook Computer - A Case Study in Health and Usage Monitoring of Electronics,? IEEE Transactions on Device and Materials Reliability, vol. 4, no. 4, pp. 658-663, December 2004. [9] [2] Jayant, M., ?Intel Recalls Fastest Pentium,? Electronic News, September 4, 2000. [3] Pasztor, A. and Landers, P., ?Toshiba to Pay $2B Settlement on Laptops,? Wall Street Journal, p. 1, November 1, 1999. [4] Dummer, G.W.A., Tooley, M.H., and Winton, R.C., ?An Elementary Guide to Reliability,? 5th Edition, Oxford: Butterworth Heinemann, 1997. [5] Pecht, M., and Biagini, R., ?The Business, Product Liability and Technical Issues Associated with Using Electronic Parts Outside the Manufacturer's Specified Temperature Range,? Proceedings of the Pan Pacific Microelectronics Symposium, pp. 391-398, February 5-7, Maui, Hawaii, 2002. [6] Crosby, P.B., ?Quality is Still Free: Making Quality Certain in Uncertain Times,? McGraw Hill, New York, 1996. [7] Bamberger, J., ?Essence of the Capability Maturity Model,? Computer, pp. 112- 114, June 1997. [8] Bollinger, T.B., and McGowan, C., ?A Critical Look at Software Capability Evaluations,? IEEE Software, vol. 8, no. 4, pp. 25-41, July 1991. Paulk, M.C., Weber, C.V., Garcia, S.M., Chrisis, M.B., and Bush, M., ?Key Practices of the Capability Maturity Model SM , Version 1.1,? Technical Report CMU/SEI-93-TR-025, ESC-TR-93-178, Software Engineering Institute, Carnegie Mellon University, February 1993. [10] Macbeth, D., and Fergusson, N., ?Partnership Sourcing: An Integrated Supply Chain Management Approach,? Pittman publishing, London, 1994. [11] Szakonyi, R, ?Measuring R&D effectiveness ? I,? Research Technology Management, vol. 37(2), pp. 27-32, 1994a. [12] Szakonyi, R, ?Measuring R&D effectiveness ? II,? Research Technology Management, vol. 37(3), pp. 44-55, 1994b. 132 [13] McGrath, Michael E. 
(ed), ?Setting the PACE in Product Development: A Guide to Product And Cycle Time Excellence,? Butterworth-Heinemann, 1996. [14] Chiesa, V., Coughlan, P., and Voss, C., ?Development of a Technical Innovation Audit,? Journal of Product Innovation Management, vol. 13(2), pp. 105-136, 1996. [15] Fraser, P., and Gregory, M., ?A Maturity Grid Approach for the Assessment of Product Development Collaborations,? Proceedings of 9th International Product Development Management Conference, Sophia, Antipolis, 27-28 May, 2002. [16] Fraser, P., Moultrie, J., and Holdway, R., ?Exploratory Studies of a Proposed Design Maturity Model,? Proceedings of 8th International Product Development Management Conference, University of Twentie, Holland, 11-12 June, 2001. [17] Strutt, J.E., ?Reliability Capability Maturity Briefing Document,? Report no. R- 03/2/1, Reliability Engineering & Risk Management Centre, Cranfield University, UK, 2001. [18] Williams, K., Robertson N., Haritonov, C.R., Strutt, J., ?Reliability Capability Evaluation and Improvement Strategies for Subsea Equipment Suppliers,? Journal of the Society for Underwater Technology, vol. 25, no. 4, 2003. [19] Boersma, J., Loke, G., Petkova, V.T., and Sander, P.C., ?Quality of Information Flow in the Backend of a Product Development Process: a Case Study,? Quality and Reliability Engineering International, vol.20, no.4, pp. 255-263, June 2004. [20] Brombacher, A.C., ?Maturity Index on Reliability: Covering Non-technical Aspects of IEC61508 Reliability Certification,? Reliability Engineering & System Safety, vol.66, no.2, pp. 109-120, Nov. 1999. [21] Sander, P.C., and Brombacher, A.C., ?Analysis of Quality Information Flows in the Product Creation Process of High-volume Consumer Products,? International Journal of Production Economics, vol.67, no.1, pp. 37-52, Aug. 2000. [22] Sander, P.C., and Brombacher, A.C., ?MIR: the Use of Reliability Information Flows as a Maturity Index for Quality Management,? Quality and Reliability Engineering International, vol.15, no.6, pp. 439-447, Nov.-Dec. 1999. [23] Tiku, S., Pecht, M., and Strutt, J., ?Organizational Reliability Capability,? Proceedings of Canadian Reliability and Maintainability Symposium, Ottawa, Canada, October 16-17, 2003. [24] IEEE Standards Board, ?IEEE Standard Reliability Program for the Development and Production of Electronics Systems and Equipment,? IEEE Std 1332-1998, 30 June 1998. 133 [25] Pecht, M. and Ramakrishnan, A., ?Development and Activities of the IEEE Reliability Standards Group,? Journal of the Reliability Engineering Association of Japan, vol. 22, no. 8, pp. 699-706, November 2000. [26] United Stated Department of Defense, ?Reliability Program for Systems and Equipment Development and Production,? MIL-STD-785B, 15 September 15 1980. [31] [27] Bell Communications Research (Bellcore, now Telcordia), ?Generic Requirements for Assuring the Reliability of Components Used in Telecommunication Equipment,? Technical Reference TR-NWT-000357, October, 1993. [28] SAE Standards Board, ?Design/process Checklist for Vehicle Electronic Systems,? SAE Document no. 1938, revised May 1998. [29] IEEE Standards Board, ?IEEE Standard Methodology for Reliability Prediction and Assessment for Electronic Systems and Equipment,? IEEE Std 1413-1998, 15 January, 1999. [30] IEC Technical Committee ? 56, ?Process for Assessing Reliability of Equipment,? International Electrotechnical Commission (IEC) New Work Item Proposal no. 56/775/NP, Date of proposal July 2001. 
IEEE Standards Board, ?IEEE Guide for Selecting and Using Reliability Predictions based on IEEE 1413 TM ,? IEEE Std 1413.1 TM -2002, 19 February, 2003. [32] American Institute of Aeronautics & Astronautics, ?Objective-Oriented Reliability and Maintainability Program Data Product Requirements,? AIAA S-102 Draft Document, 2004. [33] Blanks, H.S., ?The challenge of quantitative reliability,? Quality and Reliability Engineering International, vol. 14, issue 3, pp. 167-176, 1998. [34] Bowles, J.B., ?A survey of reliability-prediction procedures for microelectronic devices?, IEEE Transactions on Reliability, vol.41, issue 1, pp. 2 ?12, March 1992. [35] Cartwright, J., Donahoe, D.N., Jackson, M., ?Reliability Prediction and Assessment of Electronic Systems and Equipment?, vol. 22, issue 1, pp. 127 ?128, March 1999. [36] Condra, L.W., ?Reliability Improvements with Design of Experiments,? Marcel Dekker, Inc., 2001. [37] Correia, M., and Freeman, W.J., III, ?Building the bridge between design and manufacturing?, IEEE Transactions on Components, Hybrids, and Manufacturing Technology, vol. 13, issue 2, pp. 252 ?257, June 1990. 134 [38] Evans, R., ?Real Reliability?, IEEE Transactions on Reliability, vol. 45, issue 3, pp. 357, September 1996. [39] Evans, R.A., ?Electronics reliability: a personal view?, IEEE Transactions on Reliability, vol. 47, issue 3, part 2, pp. SP329 -SP332, September 1998. [40] Foucher, B., Kennedy, R., Kelkar, N., Ranade, Y., Govind, A., Blake, W., Mathur, A., Solomon, R., ?Why a new parts selection and management program??, IEEE Transactions on Components, Packaging, and Manufacturing Technology, Part A, vol. 21, issue 2, pp. 375 ?382, June 1998. [41] Golomski, W.A., ?Reliability and quality in design?, IEEE Transactions on Reliability, vol. 44, issue 2, pp. 216 ?219, June 1995. [42] Jackson, M., Sandborn, P., Pecht, M., Davis, C.H., and Audette, P., ?A Risk- Informed Methodology for Parts Selection and Management,? Quality and Reliability Engineering International, vol. 15, issue 5, pp. 261-271, September 1999. [43] Ke, H-Y., and Hwang, C-P., ?Reliability programme management based on ISO 9000,? International Journal of Quality & Reliability Management, vol. 14, no. 3, pp. 309-318, 1997. [44] Leech, D.J., ?Proof of designed reliability?, Engineering Management Journal, vol. 5, issue 4, pp. 169 ?174, August 1995. [45] Lewis, E.E., ?Introduction to Reliability Engineering?, John Wiley and Sons, Inc., New York, 1994. [46] 'Connor, P.D.T., ?Commentary: reliability-past, present, and future?, IEEE Transactions on Reliability, vol. 49, issue 4, pp. 335 ?341, December. 2000. [47] Pecht, M., ?Parts Selection and Management?, John Wiley and Co., 2004. [48] Pecht, M., ?Product Reliability, Maintainability, and Supportability Handbook,? CRC Press, New York, NY, 1995. [49] Pecht, M., Nash, F., and Lory, J., ?Understanding and Solving the Real Reliability Assurance Problems,? Proceedings of the Annual Reliability & Maintainability Symposium, pp. 159-161, January 1995. [50] Pecht, M., Sandborn, P., Solomon, R., Das, D., Wilkinson, C., ?Life Cycle Forecasting, Mitigation Assessment and Obsolescence Strategies?, CALCE EPSC Press, College Park, 2002. 135 [51] Pecht, M.G., and Nash, F.R., ?Predicting the reliability of electronic equipment?, Proceedings of the IEEE, vol. 82, issue 7, pp. 992-1004, July 1994. [52] Syrus, T., Pecht, M., and Humphrey, D., ?Part Assessment Guidelines and Criteria for Parts Selection and Management,? IEEE Transactions on Electronics Packaging Manufacturing, vol. 
24, no. 4, pp. 339-350, October 2001. [53] Syrus, T., Pecht, M., and Uppalapati, R., ?Manufacturer Assessment Procedure and Criteria for Parts Selection and Management,? IEEE Transactions on Electronics Packaging Manufacturing, vol. 24, no. 4, pp. 351-358, October 2001. [54] Van Donk, P.K., and Sanders, G., ?Organizational Culture as a Missing Link in Quality Management,? International Journal of Quality & Reliability Management, vol. 10, no. 5, 1993. [55] Tiku, S., and Pecht, M., ?Auditing The Reliability Capability of Electronics Manufacturers,? Proceedings of IPACK 03: International Electronic Packaging Technical Conference and Exhibition, July 6-11, 2003, Maui, Hawaii, USA. [56] Tiku, S., and Pecht, M., ?Reliability Capability Assessment Methodology,? Proceedings of IMAPS Brazil 2003, the International Technical Symposium on Packaging, Assembling and Testing & Exhibition, Campinas - SP, Brazil, August 6- 8, 2003. [57] Schriesheim, C.A., et al, ?Improving Construct Measurement in Management Research : Comments and a Quantitative Approach for Assessing the Theoretical Content Adequacy of Paper-and-Pencil Survey-Type Instruments,? Journal of Management, vol. 19, no. 2, pp. 385-417, 1993. [58] Jacoby, J., ?Consumer Research: A State of the Art Review,? Journal of Marketing, vol. 42, pp. 87-96, April 1978. [59] Fraser, P, Moultrie, J., and Gregory, M., ?The Use of Maturity Models/grids as a Tool in Assessing Product Development Capability,? IEEE International Engineering Management Conference, IEMC '02, vol. 1, pp. 244 ? 249, 2002. [60] Nunnally, J., ?Psychometric Theory,? Mc-Graw Hill, New York, 1978. [61] Bryman, A., and Cramer, D., ?Quantitative Data Analysis with SPSS Release 10 for Windows: a Guide for Social Scientists,? Routledge, London, 2001. [62] Benson, P.G., Saraph, J.V., and Schroeder, R.G., ?The Effects of Organizational Context on Quality Management: An Empirical Investigation,? Management Science, vol. 37, no. 9, September 1991. 136 [63] Saraph, J.V., Benson, P.G., and Schroeder, R.G., ?An Instrument for Measuring the Critical Factors of Quality Management,? Decision Sciences, vol. 20, no. 4, pp. 810-829, 1989. [64] Kuei, C.H., Madu, C.N., and Lin, C., ?The Relationship Between Supply Chain Quality Management Practices and Organizational Performance,? International Journal of Quality and Reliability Management, vol. 18, no. 8, pp. 864-872, 2001. [65] Zhang, Z., ?Implementation of Total Quality Management: An Empirical Study of Chinese Manufacturing Firms,? PhD dissertation, University of Groningen, Netherlands, 2001. [66] Churchill, G.A., ?A Paradigm for Developing Better Measures of Marketing Research,? Journal of Marketing Research, vol. XVI, pp. 64-71, February 1979. [67] Plessis, Y du, ?The Development of an Assessment Tool for Measuring Project Management Culture in Organizations,? PhD dissertation, University of Pretoria, South Africa, 2003. [68] Hinkin, T.R., ?A Review of Scale Development Practices in the Study of Organizations,? Journal of Management, vol. 21, no. 5, pp. 967-988, 1995. [69] Berdie, D.R., and Anderson, J.F., ?Questionnaires: Design and Use,? Scarecrow Press Inc., Metuchen, N.J., 1974. [70] Antonius, R., ?Interpreting Quantitative Data with SPSS,? Sage Publications, London, 2003. [71] Foster, J. J., ?Data Analysis using SPSS for Windows: a Beginner?s Guide,? Sage Publications, London, 1998. [72] Kerr, A.W., Hall H.K., and Kozub, S.A., ?Doing Statistics with SPSS,? Sage Publications, London, 2002. 
[73] Cronbach, L.J., ?Coefficient Alpha and the Internal Structure of Tests,? Psychometrika, vol. 16, pp. 297-334, 1951. [74] Cronbach, L.J., ?My Current Thoughts on Coefficient Alpha and Successor Procedures,? Educational and Psychological Measurements, vol. 64, no. 3, pp. 341- 418, June 2004. [75] Cronbach, L.J., and Meehl, P.E., ?Construct Validity in Psychological Tests,? Psychological Bulletin, no. 52, pp. 281-302, 1955. 137 [76] Thompson, B., and Daniel, L.G., ?Factor Analytic Evidence for the Construct Validity of Scores: A historical Overview and some Guidelines,? Educational and Psychological Measurement, vol. 56, pp. 197-208, April 1996. [77] Tucker, L.R., and MacCallum, R.C., ?Exploratory Factor Analysis,? Book Manuscript, Retrieved on May 02, 2005 from http://www.unc.edu/~rcm/book/factornew.htm [78] Kline, P., ?An Easy Guide to Factor Analysis,? Routledge, London, 1994. [79] Sch?tz, H., Speckesser, S., and Schmid G., ?Benchmarking Labour Market Performance and Labour Market Policies: Theoretical Foundations and Applications,? Discussion Paper No. FS I 98 ? 205, Social Science Research Center, Berlin, June 1998. [80] IPC, ?Acceptability of Electronic Assemblies?, IPC-A-610 Standard, Rev D, Northbrook, IL, February 2005. [81] IPC, ?OEM Standard for Printed Board Manufacturers' Qualification Profile,? IPC- 1710A, Northbrook, IL, May 2004. [82] IPC, ?Assembly Qualification Profile,? IPC?1720A Standard, Northbrook, IL, May 2004. [83] Ganesan, S., and Pecht, M., ?Lead-free Electronics,? CALCE EPSC Press, 2004. [84] Marks, L., and Caterina, J.A., ?Printed Circuit Assembly Design,? McGraw-Hill, Washington D.C., 2000. [85] Noble, P.J.W., ?Printed Circuit Board Assembly,? Open University Press, Milton Keynes, 1989. [86] Pecht, M.G., ?Soldering Processes and Equipment,? John Wiley & Sons Inc., New York, 1993. [87] Arena, J., and McKenzie, R., ?Test and Inspection,? Surface Mount Technology, October 2001. [88] Watts, N, ?Establishing a PCB Quality Assurance and Reliability Program,? Electronic Packaging and Production, vol. 33, no.5, pp. 36/25-28, May 1993. [89] Hwang, J.S., ?Modern Solder Technology for Competitive Electronics Manufacturing,? McGraw Hill, Boston, 1996. 138 [90] Ngo, P., ?How to Measure PCB Assembly Process Performance,? Surface Mount Technology, pp. 62-64, October 1994. [91] Wischoffer, S., ?Is Your PCB Assembly Process on Target?,? Electronic Packaging and Production, vol.42, no.6, pp. 24-27, June 2002. [92] IPC, ?Guidelines for Accelerated Reliability Testing of Surface Mount Solder Attachments,? IPC-SM-785, Northbrook, IL, November 1992. 139