UBC Theses and Dissertations
The feasibility of using Standardized Carrier Performance Measures (SCPM) among vehicle assemblers in Canada and the United States. Carroll, Philip J. 1999


THE FEASIBILITY OF USING STANDARDIZED CARRIER PERFORMANCE MEASURES (SCPM) AMONG VEHICLE ASSEMBLERS IN CANADA AND THE UNITED STATES

by

Philip J. Carroll
B. Comm., Concordia University, 1987

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES (FACULTY OF COMMERCE AND BUSINESS ADMINISTRATION)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
April, 1999
© Philip James Carroll, 1999

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

ABSTRACT

Increasingly, shippers need accurate motor carrier performance information. Carrier selection and performance evaluation programs, carrier certification programs and quality management programs all require accurate performance information.
Traditionally, shippers do not have much experience in formally gathering and measuring such information. For those shippers and carriers who do measure performance, no standardized measuring and reporting rules exist within the industry. Over the years, the accounting profession has established standardized financial performance reporting rules based on user needs. The process used by the profession involves input from information users, exposure drafts that summarize information needs, and proposed measuring and reporting rules. Exposure drafts are criticized, modified, and recirculated. This iterative process continues until users accept the rules. In an attempt to establish standardized carrier performance measuring and reporting rules, this study completes the first iteration of this process.

This study examines the information needs of vehicle assemblers in Canada and the United States while examining the feasibility of standardized measuring and reporting within this industry segment. From this research, the study offers industry recommendations and identifies future research needs. The study finds that vehicle assemblers generally have similar performance information needs but meet these needs with different measurements. These information needs exist on two tiers. Popular delivery service attributes are on the first tier, while infrequent freight damage and loss, billing, and service availability attributes are on the second tier. Although interest exists among vehicle assemblers in exploring standardized carrier performance measures, barriers such as the confidentiality of carrier performance evaluation programs stand in the way.

TABLE OF CONTENTS

Abstract
Table of Contents
Acknowledgment

Chapter 1: Introduction
1.1 The importance of standardizing carrier performance measures
1.2 What are Standardized Carrier Performance Measures?
1.3 Issues concerning the feasibility of Standardized Carrier Performance Measures
1.4 Why focus on vehicle assemblers in Canada and the United States?
1.5 Organization of the paper

Chapter 2: Analysis of literature and industry practices
2.1 Introduction
2.2 What is the state of carrier performance information?
2.3 What is the impact of current carrier performance information on shippers and carriers?
2.4 What is the content of carrier performance information?
2.5 The content of Standardized Carrier Performance Measures
2.6 The benefits of Standardized Carrier Performance Measures
2.7 What is required for Standardized Carrier Performance Measures to exist?
2.8 SCPM implementation and operational issues
2.9 Summary of literature and industry practices

Chapter 3: Phase 1 findings and analysis: Do vehicle assemblers in Canada and the United States measure motor carrier performance in the same manner?
3.1 Introduction
3.2 Research design
3.3 Background information of participants in the survey
3.4 Phase 1 research findings
3.4.1 H1: Vehicle assemblers measure the same service attributes.
3.4.2 H2A: Vehicle assemblers process the same data to measure service attributes.
3.4.3 H2B: Vehicle assemblers convert data into information in the same manner.
3.4.4 H3: Vehicle assemblers weigh service attributes equally.
3.5 Phase 1 summary

Chapter 4: Phase 2 findings and analysis: Is the use of SCPM feasible among vehicle assemblers in Canada and the United States?
4.1 Introduction
4.2 Measurement issues
4.3 Technological issues
4.4 Management issues
4.5 Standardization process issues
4.6 Population issues
4.7 Phase 2 summary

Chapter 5: Summary and conclusions
5.1 Introduction
5.2 Research findings
5.3 Industry recommendations
5.4 Future research needs

Bibliography

ACKNOWLEDGMENT

The author wishes to thank Professors Garland Chow, Trevor Heaver (both of the University of British Columbia) and Paul Larson (University of Nevada) for their direction and support in completing this research project. The author wishes to thank Professor Brian Gibson (Georgia Southern University) for his research into the topic of carrier certification, which proved extremely helpful in planning this project. The author wishes to thank Professor Dean Uyeno (University of British Columbia) for providing vision and guidance in the early stages of this project. The author wishes to thank the participants from the 13 car and truck manufacturers who so openly and enthusiastically participated in telephone interviews. Finally, the author wishes to thank Virginia Smith for her help in editing this paper.

Chapter 1

1. Introduction

1.1 The importance of standardizing carrier performance measures

Two important shipper decisions underlie what we believe is the need for standardized carrier performance measures. These are:

- Which prospective carrier should be selected?
- Should an existing carrier continue to be used?

Answering these questions is essential yet challenging for shippers. The importance of these decisions stems from the influence transportation quality has on manufacturing and distribution in a competitive world. For example, in order to cut inventory costs, production plants using Just-In-Time (JIT) techniques require flawless, on-time delivery of components. This importance is reflected in shippers' use of carrier selection and performance evaluation models.
In order to use these models, accurate and timely carrier performance information is required. The challenge in these decisions stems from the difficulty of evaluating the quality of transportation services. Whereas the quality of a product can be measured with standard calibrated instruments, no such instruments exist to measure service quality. To measure the service quality provided by carriers, the service attributes that contribute to overall performance must be measured using some scale of reference.

The approaches used by shippers to measure carrier performance are diverse. For example, some shippers use a combination of internally generated and carrier-generated data and information to measure carrier performance. Other shippers rely solely on internally generated data, while others rely solely on carrier-generated information. Depending on the degree of formality used in measuring carrier performance, some service attributes are measured subjectively while others are measured objectively. Finally, even objective measures are inconsistent because the measurement rules used are inconsistent. To complicate matters further, the impact of transportation service failures on the operations of individual companies varies, which results in a wide diversity of measures. In other words, carriers and shippers measure transportation service quality with varying degrees of formality, using customized lists of service attributes. For example, a Less-Than-Truckload (LTL) shipper with a history of high damage claims will measure the number of these occurrences, while a Truckload (TL) shipper without such a history will not. In an environment with such inconsistencies, the cost of generating comparable performance information among carriers and shippers is significant.
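To make the inconsistency of "objective" measures concrete, the sketch below applies two hypothetical on-time rules (a 30-minute grace window versus a one-hour window; all dates, times, and rules are invented for illustration) to the same set of deliveries:

```python
from datetime import datetime, timedelta

# Hypothetical deliveries: (scheduled arrival, actual arrival).
deliveries = [
    (datetime(1999, 4, 1, 8, 0),  datetime(1999, 4, 1, 8, 20)),   # 20 min late
    (datetime(1999, 4, 1, 9, 0),  datetime(1999, 4, 1, 9, 0)),    # exactly on time
    (datetime(1999, 4, 1, 10, 0), datetime(1999, 4, 1, 11, 30)),  # 90 min late
    (datetime(1999, 4, 1, 12, 0), datetime(1999, 4, 1, 12, 45)),  # 45 min late
]

def on_time_rate(deliveries, grace):
    """Share of deliveries arriving within `grace` of the scheduled time."""
    on_time = sum(1 for sched, actual in deliveries if actual - sched <= grace)
    return on_time / len(deliveries)

# Shipper A counts a delivery on time if it arrives within 30 minutes;
# Shipper B allows a full hour. Same shipments, different "performance".
rate_a = on_time_rate(deliveries, timedelta(minutes=30))
rate_b = on_time_rate(deliveries, timedelta(hours=1))
print(f"Shipper A rule: {rate_a:.0%}")  # prints "Shipper A rule: 50%"
print(f"Shipper B rule: {rate_b:.0%}")  # prints "Shipper B rule: 75%"
```

Both shippers measure the same shipments objectively, yet they report different on-time percentages, so the resulting figures cannot be compared across evaluation programs.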
For example:

• The recent trend of megabids, in which shippers frequently invite service bids from, or send requests for proposals (RFPs) to, carriers on a wide scale, requires considerable resources from carriers. Where shippers have different performance information needs, carriers face the expensive task of generating customized performance information.

• Where shippers depend solely on carrier-generated performance information, if carrier measurement rules differ, shippers must reconcile the various information to allow for cross-carrier comparison.

• Benchmarking is the process by which similar companies evaluate operational processes and performance measures and establish "best of class" targets. Carriers wishing to benchmark face the difficult task of restating operational results in a comparable manner.

As noted by Lambert et al. (1993, 139), inconsistent and unclear carrier performance communication between shippers and carriers results in carriers' misallocation of resources. Thus, the complexity and cost of generating carrier performance information require an efficient and effective solution. We believe Standardized Carrier Performance Measures (SCPM) are part of the solution. Briefly, SCPM is a comprehensive version of the carrier performance reporting used by airlines; i.e., where airlines report on their on-time departure performance, SCPM would be used to measure the various service attributes considered important by shippers.

1.2 What are Standardized Carrier Performance Measures?

The use of SCPM is an innovative approach we put forward to improve the quality of carrier performance information. The use of SCPM requires all shippers and carriers to use the same rules for measuring and reporting carrier performance information. In other words, the use of SCPM results in the development of a consistent language for evaluating carrier performance. This approach is not new.
SCPM parallels the Generally Accepted Accounting Principles (GAAP) used by the accounting profession for many years. GAAP, a collection of rules for generating financial information, dictates how accountants must measure and report information in financial statements. The outcome of GAAP is thus consistent, comparable financial statements.

The feasibility of SCPM opens the door to the possible creation of a centralized clearinghouse to gather performance information on all carriers. Such a clearinghouse, conceivably operated as an Internet site, would make carrier performance information easily available to shippers and competing carriers. Shippers could access either their own historical information or generic summary information. Carriers benchmarking themselves against other carriers could access summary information. This information could then be downloaded and inserted into customized carrier evaluation models.

The standardization of performance measures we propose is not new in the field of logistics. Since 1996, the Supply Chain Council has developed the Supply Chain Operations Reference model (SCOR). This model identifies supply chain processes and 91 metrics (measures) that allow supply chain participants to discuss supply chain components and performance with each other using a common language. SCPM parallels this idea of a common language for logistics use, but unlike SCOR, SCPM specifically targets carriers and shippers (vehicle assemblers) evaluating carrier performance.

1.3 Issues concerning the feasibility of Standardized Carrier Performance Measures

We believe the process used by the accounting profession to develop GAAP could be used by the transportation industry to generate its own standardized performance measuring and reporting rules. To reach this goal, we must first assess this approach's feasibility.
The purpose of this study is to determine whether standardizing carrier performance measures is feasible among vehicle assemblers in Canada and the United States. In this study, we examine the systems used by vehicle assemblers to measure carrier performance. We identify similarities among carrier performance evaluation programs and examine the sources of differences among these programs. Finally, we pool these findings and examine various implementation and operational issues that we believe influence the feasibility of SCPM among vehicle assemblers in Canada and the United States.

Two questions are central to this study. These are:

Q1 Do vehicle assemblers in Canada and the United States measure motor carrier performance in the same manner?

The accounting industry shows us that standardization of measures is possible if users of the information can agree on a common set of rules to measure and report performance information. Therefore, as a starting point, it is necessary to determine the existing degree of commonality among the carrier performance evaluation programs of vehicle assemblers.

Q2 Is the use of SCPM feasible among vehicle assemblers in Canada and the United States?

The concept of standardization is not instinctive for business organizations, which seek a competitive edge; indeed, participants in this study believe that better information is a competitive edge. Thus, standardizing the measurement and reporting of carrier performance has not been considered important in the past by vehicle assemblers. Therefore, for SCPM to be adopted, certain implementation and operational conditions, as well as commonality in performance measurement, are necessary. If these conditions do not already exist or are unlikely to exist, standardization is improbable.

1.4 Why focus on vehicle assemblers in Canada and the United States?

To assess the feasibility of SCPM, we must examine the differences among the methods used by shippers in evaluating carrier performance.
These differences will ultimately determine to what level and to which market segments SCPM could apply. In planning this study, we had to determine the limits of the population to consider. Since we have to examine the process used in measuring carrier performance, we wanted to minimize the variation among participants under study. Thus, we decided to focus on a group of homogeneous shippers.

We chose the vehicle assembly segment, which includes 19 companies in Canada and the United States. These vehicle assemblers exhibit similarities such as their assembly processes, their reliance on the delivery of components from outside suppliers, and their heavy use of motor carriers for transportation. With a focus on such a population, we were able to examine the sources of differences and similarities within what is apparently a homogeneous segment. Even with such a small population, we discovered a significant difference between car and truck manufacturers, which causes them to monitor carrier performance differently.

We note that studying inbound logistics among vehicle assemblers results in a focus on motor carriers. Participants in our survey indicated that, although various modes of transportation are used to move components between component manufacturers and vehicle assemblers, motor carriers perform all component deliveries, excluding a small amount of rail delivery at some assembly plants. More significantly for vehicle assemblers, motor carriers make up the largest group of transportation service providers and accordingly get most of the attention. Therefore, in addition to focusing on vehicle assemblers, this study focuses on the process used by vehicle assemblers to monitor the performance of LTL and TL carriers.

1.5 Organization of the paper

In this study, the feasibility of SCPM among vehicle assemblers in Canada and the United States is assessed. The research paper is organized as follows.
In Chapter 2, we explain the process used to assess the feasibility of SCPM; i.e., we analyze the literature and industry practices to explain the structure of SCPM. In Chapter 3, we present Phase 1 research findings, which focus on whether vehicle assemblers in Canada and the United States measure carrier performance in the same manner. In Chapter 4, we discuss Phase 2 findings, which focus on whether implementation and operational issues exist that impede the feasibility of SCPM among vehicle assemblers in Canada and the United States. Finally, in Chapter 5, we summarize the research findings, present industry recommendations, and discuss future research needs.

Chapter 2

2. Analysis of literature and industry practices

2.1 Introduction

The purpose of this chapter is to review the relevant literature and industry practices that guided us in putting forward the concept of SCPM. In arriving at this concept, we review the current state of performance information and its shortcomings, and suggest how following the approach used to develop GAAP can provide better performance information.

2.2 What is the state of carrier performance information?

Transportation includes the movement of both goods and information (e.g., waybills to be paid). As noted by Lambert et al. (1993, 139), multi-dimensional performance measurement is the ideal path indicated by shippers for assessing transportation service quality. This approach is complex: instead of measuring a single service attribute, several service attributes are measured simultaneously. To compound the problem, difficulties arise if the shipper depends on carrier-supplied information generated using different measurements for the same service attribute. Such differences may reflect each carrier's internal approach to measuring service quality.

Chow and Poist (1984, 41) concluded that there is a lack of formal carrier performance measurement among shippers.
Some shippers have developed formal monitoring systems, but many more have not. In the same study, Chow and Poist concluded that, although some service attributes are considered important in making decisions about carrier retention, they are not necessarily measured formally. Thus, we believe that imprecise information results from informal, often subjective, measurements.

More recently, an industry survey reported by Thomas (1995) demonstrated that 60% of responding shippers have some formal carrier performance evaluation program. Although this progress is positive, these programs have been developed in isolation. Without cooperation among carriers and shippers, these programs result in a wide array of measurements. For example, different scales of reference are used to measure on-time delivery. Thus, imprecise information results from the use of subjective measurements and inconsistent scales of reference.

The SCOR model developed by the Supply Chain Council is a tool that provides common terminology and enables supply chain participants to "visualize, articulate, measure, and improve this complex management process (supply chain)".[1] To achieve this goal, the model presents common terminology, measures, and perspectives for use by supply chain participants across industries, including 91 metrics to be used in measuring supply chain processes. This model provides positive insight for this study but also falls short. The shortcomings are discussed here, while the positive insight is discussed in later sections. Presently, we find two shortcomings in the SCOR model that prevent its use at the detailed level of carrier performance measurement under study.
First, the model does not specifically address the operational relationship between shipper and carrier; i.e., the model focuses on "customers, distributors, retailers, manufacturers, and suppliers".[2] There is, however, one transportation-related metric (measure). This metric, faultless invoices, is defined as "the number of invoices issued without errors". This example highlights the second shortfall of the model: the measure is not operationally effective. Without reference to the total number of invoices issued or the actual number of incorrect invoices, this measure is not relevant to shippers or carriers. As is, the measure does not indicate the billing accuracy performance of a carrier.

This latter point presents an important challenge the SCOR model must address: performance measures must be realistic to meet the needs of decision makers. Decision makers spending millions of dollars on transportation are held accountable for their decisions. As a result, these decision makers require information that satisfies their needs. As the findings of Chow and Poist (1984) show, the level of education and experience determines the level of formality in measuring carrier performance. To us, this means that decisions are individual-based rather than process-based. Thus, as the SCOR model develops performance measures, sound decision-making processes must also be developed to satisfy the needs of decision makers.

[1] This definition of the SCOR model was summarized from a press release by the Supply Chain Council on April 12, 1996.
[2] This list of target participants in the SCOR model was circulated in a press release by the Supply Chain Council on April 12, 1996.

2.3 What is the impact of current carrier performance information on shippers and carriers?

Lambert et al. (1993, 139) identified the financial impact associated with miscommunication regarding carrier service quality.
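Returning briefly to the faultless-invoices metric discussed above: a raw count cannot indicate billing accuracy. The sketch below (carrier names and invoice volumes are invented) shows two carriers with identical billing accuracy whose raw faultless-invoice counts differ fifteen-fold:

```python
# Hypothetical monthly invoice figures for two carriers (numbers invented).
carriers = {
    "Carrier X": {"issued": 12_000, "faultless": 11_400},
    "Carrier Y": {"issued": 800, "faultless": 760},
}

def billing_accuracy(issued, faultless):
    """Faultless invoices as a share of all invoices issued."""
    return faultless / issued

for name, c in carriers.items():
    # The raw SCOR-style count makes the large carrier look far better...
    print(f"{name}: faultless count = {c['faultless']}, "
          f"accuracy = {billing_accuracy(c['issued'], c['faultless']):.1%}")
# ...yet both carriers issue faultless invoices at the same 95.0% rate.
```

Only the rate, which relates errors to total volume, tells a shipper anything about a carrier's billing performance.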
The researchers concluded that the inability of carriers to clearly understand the expectations of shippers results in inefficient allocation of carrier resources. Thus, expensive equipment and personnel are wasted as carriers allocate them incorrectly.

Generating performance information is also expensive. If there is no consistency in the measurement of performance information, no economy of scale exists to drive down information costs. Thus, shippers and carriers must each develop their own performance measurement systems at great cost. As noted by Brooks (1995, 15), multi-dimensional carrier performance measurement has become the domain of large shippers. Unfortunately, there are many more small manufacturers than large ones. As noted by Evans (1990, 26), small manufacturers (those with fewer than 500 employees) account for 98% of manufacturing firms in the United States. Since small shippers do not have the financial resources to invest in systems to gather and evaluate carrier performance information formally, they either rely on carrier-generated performance information or use subjective judgment in making carrier decisions. Based on this sub-optimal information, sub-optimal carrier decisions likely result.

Where carriers are required to provide customized information to influential shippers, increased costs arise because carriers are unable to generate and report standardized information. Two examples highlight this point. First, large influential shippers often require carriers to submit unique performance data or information; carriers must then customize their systems to meet these needs. Second, where carriers are invited to bid for the business of a prospective shipper, performance information must be submitted in the requested format. Thus, special shipper requirements prevent carriers from providing performance information in a consistent, efficient manner.

2.4 What is the content of carrier performance information?
2.4.1 Service quality defined

In their study, Chow and Poist (1984, 26) make the link between quality of service and carrier performance. This link is important because carrier performance reflects the service quality provided by carriers. According to Evans (1989, 9), quality is "the degree of conformance of a product's characteristics to what is expected by its customer". From this definition, we note that quality originates from customers' expectations. Thus, transportation service quality must be defined in measurable attributes that reflect the various tasks carriers must perform in providing transportation services. These tasks include picking up and delivering shipments, invoicing their services, and resolving any shipment damage claims.

The present state of performance measurement leads Lambert et al. (1993, 139) to conclude that carriers cannot understand shippers' service expectations. That is why we believe a language of performance measurement and reporting using SCPM can be beneficial.

2.4.2 Static versus dynamic information

Current carrier service quality programs, such as the one used by Johnson & Johnson, make the distinction between information used for "pre-selecting" carriers and information used for "on-going monitoring." In his article on Johnson & Johnson, MacDonald (1994, 38) explains how this duality is included and measured in Johnson & Johnson's carrier evaluation program. During the pre-selection stage, a shipper usually needs information that indicates a carrier's ability to provide satisfactory transportation services. This type of information, which we call static, includes terminal locations, electronic data interchange (EDI) capability, and transit times. In completing the evaluation of a prospective carrier, the shipper also examines historic, dynamic information. This information includes on-time delivery, damage claims, and billing errors for a generic shipper.
Based on this historic information, the shipper can project the ability of the carrier to provide good service. Once the shipper / carrier relationship begins, so does the on-going monitoring stage. Here the focus remains on information related to actual service delivery; that is, the focus is on dynamic information. This focus on dynamic information is echoed in a report by Andersen Consulting (1995, 15), which found that, in evaluating carrier service quality, shippers most frequently monitor "operational metrics detailing the flow of merchandise".

The distinction between the two types of information, i.e., static and dynamic, is important since it parallels the two principal financial statements in use today: the balance sheet and the income statement. The balance sheet, which lists assets and liabilities, uses static information to provide an outlook on an organization's future earnings flow. Static information provides a snapshot of the current situation. The income statement, which lists revenues and expenses, uses dynamic information to provide a picture of the financial performance of an organization during a given period. Note that, despite their different uses, the two financial statements and their information are integrated: the balance sheet for a period just ended reflects the previous period's balance sheet after adjustments have been made to reflect the results from the income statement.

In this study, we link SCPM with on-going carrier performance. Like the income statement, we link on-going performance monitoring with dynamic information. This matches the purpose of the income statement stated in the CICA Handbook (1991, Section 1000.15), i.e., the income statement provides information about "the economic performance of the entity."
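One way to picture the static/dynamic distinction is as two record types, with dynamic records rolled up per reporting period in the way an income statement summarizes a period. This is only an illustrative sketch; the field names are our own and are not drawn from any assembler's actual program:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CarrierProfile:
    """Static information: a snapshot of capability, akin to a balance sheet."""
    name: str
    terminal_locations: list[str]
    edi_capable: bool

@dataclass
class DeliveryRecord:
    """Dynamic information: one service event, akin to an income-statement entry."""
    delivered: date
    on_time: bool
    damage_claim: bool
    billing_error: bool

def period_summary(records: list[DeliveryRecord]) -> dict[str, float]:
    """Roll dynamic records up into per-period performance rates."""
    n = len(records)
    return {
        "on_time_rate": sum(r.on_time for r in records) / n,
        "claim_rate": sum(r.damage_claim for r in records) / n,
        "billing_error_rate": sum(r.billing_error for r in records) / n,
    }
```

The static profile changes rarely and supports pre-selection; the dynamic summary is recomputed each period and supports on-going monitoring.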
This is how we came to the idea of SCPM: since the income statement reports on-going performance using information generated under rules developed within GAAP, we propose to use the same structured approach used to prepare this financial statement to generate carrier performance information.

2.4.3 The cost of performance information

Performance data are generated only if someone or a system measures a task. For example, as an incorrect waybill is processed through the bill payment process, the carrier's measured performance is not affected unless the existence of the incorrect waybill is documented in a performance monitoring system. Once the service attributes to be measured are identified, the cost of generating performance information is incurred at several points: costs arise in gathering and measuring data, comparing data to standards, and generating information for evaluation. Thus, various data gathering approaches and data processing techniques have different cost structures. For example, exception reporting, such as reporting only damaged shipments, requires much less data to be collected. The challenge in generating useful carrier performance information is therefore to balance the cost of generating performance information against its relevance.

2.5 The content of Standardized Carrier Performance Measures

Since we wish to apply the same approach of structured rules to measure carrier performance as is used to measure financial performance, we need to understand the process used to develop the latter.

2.5.1 Generally Accepted Accounting Principles

The accounting profession has resolved shortcomings caused by inconsistent measurement, similar to those described in Section 2.3, by developing standardized rules for measuring and reporting financial information known as Generally Accepted Accounting Principles (GAAP).
In Canada, GAAP originated in 1946, when the first accounting bulletin was published by the Canadian Institute of Chartered Accountants (CICA). Over the years, rules have been added to GAAP as needs arose in our evolving economy. These rules spell out how each item presented in financial statements must be measured and reported. GAAP is more than just measuring and reporting rules; it also includes the process used to arrive at these rules. We discuss this process in the next section.

2.5.2 Standard setting under GAAP and SCPM

The strength of GAAP resides in the process used to develop reporting rules. This process is participatory and iterative. The process begins with a standards-setting board, such as the one within the CICA, asking users of financial information for their opinion on a specific accounting topic. These users are accountants, business people, and scholars. From this input, a position paper, called an exposure draft, is circulated among the same users. The exposure draft summarizes opinions and sets out a proposed accounting treatment, on which critical support is sought. Once sufficient agreement exists within the accounting community, an accounting regulation is officially included in GAAP. With this addition, the language of financial performance is further enhanced.

2.5.3 The use of formal measurements under GAAP

It is important to note that, in developing measurement rules, the accounting profession is biased towards formal measurements. Although users of financial information may apply subjective rules in evaluating the information, the information itself is generated using formal measurements. Formally measured financial performance resulting from GAAP differs significantly from the heavy use of subjective measurements by shippers in evaluating carrier performance. For example, in Lambert et al.
(1993, 133), as part of the process to develop the list of attributes used by shippers to select and evaluate carriers and their performance, the researchers interviewed 30 shippers. These shippers were asked which attributes should be included in the study's questionnaire. It is not mentioned whether this search was limited to formally measured attributes. Yet, 166 attributes were included in the questionnaire. We believe that such a study, if based only on formally measured attributes, would have produced a shorter list of attributes. To support our claim, Chow and Poist (1984, 29) noted that 15 of the 22 quality of service factors used to evaluate carrier performance are monitored, either formally or informally, by more than half of the participants in their study (see Table 1). From the same table, we note that, when only formal measurements are considered, none of the quality of service factors is measured by at least half the participants. Our conclusion is that, as formal measurements are adopted in carrier performance evaluation, the number of service attributes monitored decreases. Thus, the relevance of information is more a function of the ability to generate accurate data than of the length of a wish list of measurements.

Table 1
Chow and Poist (1984): Importance of quality factors in transportation selection

                                                                  Category     Recorded   Recorded formally
      Quality factor                                              designation  formally   and informally
  1.  Door-to-door transportation rates and costs                     R         45.0 %       81.3 %
  2.  Freight loss and damage experience                              C         43.7         78.7
  3.  Claims processing experience                                    C         39.8         75.3
  4.  Transit time reliability or consistency                         T         30.7         78.4
  5.  Experience with carrier in negotiating rate changes             R         27.3         68.5
  6.  Shipment tracing                                                O         26.7         70.9
  7.  Total door-to-door transit time                                 T         23.8         68.0
  8.  Quality of pick-up and delivery service                         O         22.4         64.0
  9.  Availability of single-line service to key points
      in shipper's market                                             O         22.4         63.5
 10.  Equipment availability at shipment date                         E         21.6         68.9
 11.  Shipment expediting                                             O         18.6         68.3
 12.  Experience with carrier in negotiating service changes          O         16.1         60.6
 13.  Specialized equipment to meet shipper needs                     E         16.8         53.3
 14.  Frequency of service to key points in shipper's market area     O         15.6         61.9
 15.  Physical condition of equipment                                 E         12.7         50.0
 16.  In-transit privileges                                           O         11.9         39.4
 17.  Diversion / reconsignment privileges                            O         11.1         33.9
 18.  Quality of operating personnel                                  P          6.2         48.8
 19.  Carrier image or reputation                                     M          3.1         44.4
 20.  Quality of carrier salesmanship                                 P          3.1         35.8
 21.  Reciprocity                                                     M          2.6         26.3
 22.  Gifts / gratuities offered by carrier                           M          2.0         15.0

Index to category designation: Rate related (R), Time related (T), Claims related (C), Equipment related (E), People related (P), Operational related (O), Miscellaneous (M)

2.6 The benefits of Standardized Carrier Performance Measures

In order to obtain support from industry, the use of SCPM must provide benefits to users. We believe its benefits are both qualitative and quantitative. We analyze the quantitative benefits derived from the use of SCPM briefly in Section 2.6.5. The following paragraphs examine the qualitative benefits derived from the use of SCPM. The CICA has put forward in the CICA handbook (1991, Section 1000.18) "qualitative characteristics which define and describe the attributes of information provided in financial statements that make that information useful to users. The four principal qualitative characteristics are understandability, relevance, reliability and comparability". Since carrier performance information, like financial information, affects management decision making, we believe these four attributes apply to both types of information. These are the four characteristics we use to examine the benefits of SCPM. Underlying all of them is the exclusive inclusion of formally measured information in financial statements.
2.6.1 Understandability

As stated in the CICA handbook, "for the information provided in financial statements to be useful, it must be capable of being understood by users." SCPM, which would reflect essential components of carrier performance measurement and use documented rules, would make it easier for information preparers and various users to understand performance information. For example, as discussed in Section 2.2, the measure of faultless invoices as defined in the SCOR model is vague. More precisely, a faultless invoice measure of 5000 is vague to a shipper; a faultless invoice measure of 99.0%, i.e., 5000 out of 5050 invoices submitted in the past six months, is more precise and thus more understandable.

An area where information understandability is essential is within traffic councils. Traffic councils are internal groups composed of transportation professionals with similar concerns from various divisions. As divisional boundaries may shape the transportation terms each individual uses, communication gaps may result. Thus, a carrier performance language based on common terms and commonly defined measurements is necessary to bridge these gaps.

2.6.2 Relevance

As stated in the CICA handbook, "for the information provided in financial statements to be useful, it must be relevant to the decisions made by users." Under SCPM, clear rules would spell out which service attributes are to be measured. This means that transit time information, which says little about how a carrier actually performs, would likely not be included in SCPM. In this instance, note that transit time is different from transit time reliability. The latter, as a dynamic measure of carrier performance, would be included in SCPM.
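The faultless-invoice example in Section 2.6.1 can be made concrete with a minimal sketch. The function name, rule, and figures below are our own illustrative assumptions, not a definition taken from the SCOR model or any proposed SCPM rule; the point is only that a raw count becomes understandable once it is paired with a defined base and period.

```python
# Minimal sketch: a raw count becomes understandable only when expressed
# against a defined base over a defined period. This rule is a hypothetical
# illustration, not an actual SCOR or SCPM definition.

def faultless_invoice_rate(faultless: int, total_submitted: int) -> float:
    """Percentage of faultless invoices among all invoices submitted."""
    if total_submitted == 0:
        raise ValueError("no invoices submitted in the period")
    return 100.0 * faultless / total_submitted

# 5,000 faultless invoices out of 5,050 submitted in the past six months
print(round(faultless_invoice_rate(5000, 5050), 1))  # prints 99.0
```

Any party applying the same documented rule to the same data arrives at the same figure, which is what makes the measure understandable across organizations.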
2.6.3 Reliability

As stated in the CICA handbook, "Information is reliable when it is in agreement with the actual underlying transactions and events, the agreement is capable of independent verifiability and the information is reasonably free from error and bias." Knowing which attribute to measure and how to measure it are both important. In Section 2.7.1, we examine the importance of establishing the mechanics for transforming performance data into relevant information. In our vision of SCPM, rules would spell out specifically how objective data must be processed into objective information. Reliable, objective measurements would please shippers who want to move away from the subjective measures often used in the past. For example, as reported by Bowman (1992), when Allied Signal spent 18 months developing an approach to measure carrier quality, it was determined to avoid the kind of subjective judgments that define so many quality programs. Both grass-roots pressure and industrial quality-oriented programs demand objective measurements. SCPM would provide the objective performance information these programs require. As noted by O'Sullivan (1995, 23), quality programs such as ISO 9000 are increasingly popular in the logistics field.

2.6.4 Comparability

Comparability is an important attribute of information because it allows users to evaluate two sets of numbers on a common basis, either across time or across companies. Just as two income statements prepared using GAAP can be compared, SCPM would allow shippers to compare different carriers' performance information. SCPM would also allow carriers to measure their operations systematically and clearly identify problem areas. In this way, SCPM would simplify the task of benchmarking within the carrier community. Benchmarking among shippers and carriers is difficult now because of the multitude of evaluation programs using various combinations of service attributes and measurement rules.
2.6.5 Lower the cost of generating performance information

In Section 2.3, we discussed the cost barrier small shippers face in obtaining quality performance information for monitoring carriers. We believe one of the significant benefits of GAAP has been the reduction in the cost of generating financial information. In the field of transportation, SCPM holds the promise of reducing the cost of generating carrier performance information for both shippers and carriers. For example, lower costs would result where carriers could process performance data in one common manner instead of customizing systems for various shippers.

Without SCPM, some shippers take the view that more is better, that is, the more information available the better. Unfortunately, information costs money to generate. SCPM would spell out the service attributes to be monitored, and the number of these would ideally be kept to an appropriate minimum. There are large organizations that presently monitor only a few service attributes yet obtain appropriate performance information. For example, Johnson & Johnson Hospital Services has developed a carrier performance evaluation program which monitors only four service attributes. Certainly, carriers receiving requests for proposals would appreciate being able to submit historical information consistently. As a result of SCPM, smaller shippers would not necessarily enjoy a lower cost of information but rather a greater value of information. For example, although smaller shippers presently receive performance information from some carriers, this information is not well received. Smaller shippers doubt the accuracy of this information since they are unsure of the rules used by carriers to generate it. Therefore, smaller shippers must rely on internally generated information. Under SCPM, any carrier-supplied performance information would be based on understood rules.
Thus, smaller shippers could receive accurate information at little cost to them. Low-cost access to information might also occur through the existence of standardized software, which would allow a small shipper to feed SCPM information about various carriers into a computer and generate sophisticated carrier performance evaluations.

2.7 What is required for Standardized Carrier Performance Measures to exist?

We now look at the components which would support the existence of SCPM. As explained in Section 2.5.2, the continuing development of accounting rules under GAAP comes about through an iterative process of user input and feedback. This process continues until a general consensus among users of financial information is reached. Thus, in addition to similar information needs, the will among users to compromise on individual information requirements must exist in order to reach consensus.

Unfortunately, consensus among users is not easy to obtain. Consensus occurs when various users compromise their position or needs for the benefit of the group. Compromise about financial statements means that certain pieces of information are not presented in the income statement even though a small group of users would like to see them. For example, income statements reflect depreciation expense on assets based on historical costs. Some analysts, who dispute the validity of these figures, prefer depreciation measurements based on current market values. In this instance, since GAAP does not require this disclosure, these analysts must generate their own information. We believe the same kind of difficult compromise would be needed if SCPM are to exist. That is, some information required by a few shippers would not be included in SCPM. To the detriment of SCPM, many organizations have already developed their own programs to evaluate carrier performance.
Thus, these organizations have already developed their own definitions of carrier service quality, which may differ significantly from those of other organizations. In addition, as noted by Brooks (1995, 15), these organizations would rather keep program content confidential. We suspect that in such organizations, the will to compromise may not exist. Of course, if the use of SCPM proves to be financially beneficial to these organizations, the will to participate in this process, at least in a limited manner, may resurface.

2.7.1 Measuring carrier performance

Before dealing with consensus issues, we must first understand how carrier performance is measured. The information gathering process in use by shippers must be systematically analyzed and common elements determined.

We examined two studies on performance evaluation to arrive at our measurement model. In Harrington et al. (1991), a model was developed for a pharmaceutical company to measure vendor performance. Since carriers are vendors of transportation services, we believe that this model is applicable in guiding our investigation into measuring carrier performance. In addition, this study provides insight into the approach needed to measure performance among various carriers. In Kleinsorge et al. (1992), a model was adapted which allows shippers to evaluate carrier performance. From these two models, we distilled three components we believe are critical in measuring performance. They are:

1. Listing criteria important in measuring performance. The selected criteria (service attributes) determine how a service is measured. Thus, in keeping with the cost of generating performance information as discussed in Section 2.3, the list of criteria needs to balance comprehensiveness and conciseness.

2. Defining each criterion through objective measures. This component reflects our discussion in Section 2.5.3 on the use of formal measurements in GAAP. For example, billing accuracy is a carrier service attribute.
By itself, though, it is meaningless until it is defined in measurable terms. An example of a measure of billing accuracy is the percentage of incorrect waybills to the total number of waybills submitted by a carrier.

3. Assigning a relative weight to each criterion to arrive at a performance index. In other words, not all service attributes are equal in the eyes of shippers. As demonstrated by Abshire and Premeaux (1991, 32), some service attributes are more significant than others in evaluating the quality of a carrier's service. Therefore, a recipe for carrier service quality spells out how much each performance measure figures in the overall carrier performance index.

The study by Chow and Poist (1984, 38) indicates that there are several measurement points for each service attribute. That is, there are various ways of defining each service attribute. For example, the study lists eight specific measures used by shippers to measure freight damage and loss experience. This means that shippers have various rules to evaluate similar service attributes. If a similar variety of measures exists among vehicle assemblers, we must determine its cause and impact. Resolving differences among shippers' measurements will simplify carrier performance evaluation and demonstrate their will to compromise on information needs in order to achieve standardized performance measures.

2.8 SCPM implementation and operational issues

In Section 2.7, we discussed that reaching consensus on performance measurements is the ultimate objective of SCPM. We realize, though, that initially, divergence in measurements is more likely. Thus, we believe that initial divergence should not be the only consideration in analyzing the feasibility of SCPM. SCPM is about more than measuring carrier performance; it is about developing a consistent, universal language of carrier performance.
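The three-component measurement model of Section 2.7.1 (criteria, objective measures, relative weights) can be sketched as a short scoring routine. Everything here is a hypothetical illustration under our own assumptions: the attributes, measurement rules, weights, and figures are ours, not rules proposed for SCPM.

```python
# Hypothetical sketch of the three-component measurement model:
# (1) list the criteria (service attributes),
# (2) define each criterion through an objective measure,
# (3) weight the measures into a single carrier performance index.
# Attributes, rules, weights, and figures are illustrative assumptions only.

def billing_accuracy(incorrect_waybills: int, total_waybills: int) -> float:
    # Percentage of correct waybills among all waybills submitted.
    return 100.0 * (total_waybills - incorrect_waybills) / total_waybills

def on_time_rate(on_time: int, total_shipments: int) -> float:
    # Percentage of shipments delivered within the agreed time window.
    return 100.0 * on_time / total_shipments

def claim_free_rate(claims: int, total_shipments: int) -> float:
    # Percentage of shipments that generated no loss or damage claim.
    return 100.0 * (total_shipments - claims) / total_shipments

def carrier_index(scores, weights):
    # Weighted average of attribute scores; the weights must sum to 1.
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[attr] * scores[attr] for attr in weights)

scores = {
    "billing_accuracy": billing_accuracy(25, 5050),   # about 99.5
    "on_time": on_time_rate(960, 1000),               # 96.0
    "claim_free": claim_free_rate(8, 1000),           # 99.2
}
weights = {"billing_accuracy": 0.2, "on_time": 0.5, "claim_free": 0.3}
print(round(carrier_index(scores, weights), 1))  # prints 97.7
```

Under full SCPM, the measurement functions and weights would be fixed by the standard; under partial SCPM, only the data definitions might be common while each shipper retains its own weights.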
Remember that SCPM, just like GAAP, would be developed using an iterative process which, over time, would resolve differences between shippers' and carriers' information requirements. We believe the SCPM development process is feasible if the implementation and operational issues we identified can be managed.

2.8.1 Performance reporting as a public good

The establishment of common performance reporting rules has been motivated by many forces over the years. Understanding these forces helps in examining the development of such rules. Described below are three examples of public reporting rules and the motivation that drove their creation.

Performance reporting rules enter the public arena when generating performance information becomes too onerous for users. This was the original motivation for developing accounting standards. In 19th century England, public companies reported financial results using customized rules. This situation made inter-company comparison difficult for users. As a result, small shareholders were left with no means of properly comparing company results. In order to pre-empt government-imposed financial reporting rules as a remedy, accountants decided to establish their own set of rules; thus the creation of GAAP.

Another example of public reporting is that of the airline industry in the United States. In that country, passenger airlines must report their on-time departure performance based on a specific time window. The motivation behind this reporting is to allow individual passengers to formally evaluate each airline's performance as they consider their next flight. In this case, the government concluded that individual passengers were unable to make optimal decisions due to their lack of access to proper performance information.
It is interesting to note that, although passengers did not have access to formal performance information before the legislation was enacted, regular passengers certainly must have had their own subjective rankings of airline performance.

Finally, the SCOR model is an industry initiative to establish a common language of supply chain processes and metrics. This initiative was driven not by fear of government intervention but by the need for supply chain participants to communicate among themselves using common terms. The purpose of a common language is to allow supply chain participants to work together in improving supply chain process performance. To achieve this goal, benchmarking, the practice of comparing operational results against those of like companies, is performed using a common language consistent across industries.

These three examples address three important issues with regard to the implementation of SCPM:

1. There is little risk of the government imposing standardized carrier performance measures on motor carriers with regard to vehicle assemblers. This means that vehicle assemblers who do not want to participate in SCPM face no government reprisal.

2. Organizations from various industries have already established a common language to discuss, evaluate, and configure supply chain processes. This means that there presently exists a common language of operational performance for supply chain organizations. Organizations have been able to set aside their fear of cooperating with competitors for the benefit of developing a common language in order to drive better results.

3. Unlike government-imposed reporting rules, which are based on political and economic evaluation, the existence of SCPM must be financially viable for shippers and carriers.

2.9 Summary of literature and industry practices

Carrier service quality can be defined as the degree of conformance between the expectations of shippers and the services provided by carriers.
Since carrier services are composed of several service attributes, multi-dimensional evaluation is required in order to measure carrier service quality. To accomplish this evaluation, shippers must specify the important service attributes to be measured, define each service attribute through objective measures, and assign relative weights to each service attribute to arrive at a carrier performance index. The requirement for multi-dimensional measurement of carrier service quality contributes to the present state of disarray. Carrier performance measurement ranges from informal evaluation of important service attributes to inconsistent formal measurement among shippers.

We noted a gap between research and practice with regard to the number of service attributes monitored by shippers. Studies on transportation purchasing have documented long lists of service attributes measured by shippers, but we find that formal carrier performance evaluation programs measure only a few service attributes. The SCOR model developed by the Supply Chain Council has made major progress in establishing a common language of supply chain management. This model does not yet specifically address the carrier performance information needs of shippers.

Inconsistent carrier performance measurement creates an environment where powerful, large shippers demand or generate carrier performance information in a customized format. Small shippers without the same resources and access to accurate information are left to make sub-optimal transportation purchasing decisions. From a different angle, carriers face the challenge of providing existing and prospective shippers with performance information under various sets of rules. Similarities exist between the accounting profession's goal of structuring financial performance reporting and the shipping community's goal of measuring carrier performance more precisely.
Specifically, the income statement provides users with a view of an organization's financial performance, much as shippers need a statement of a carrier's service quality performance. As an approach to structuring carrier performance monitoring, we put forward Standardized Carrier Performance Measures (SCPM), an approach based on Generally Accepted Accounting Principles (GAAP). SCPM borrows from the iterative process used by the accounting profession in developing measuring and reporting rules for financial information. This process begins with input from users of financial information and ends with an official accounting guideline. It eliminates the reporting of information which is not in demand among users.

If feasible, SCPM would exhibit the same qualitative characteristics as information under GAAP. That is, SCPM would generate performance information which is understandable, relevant, reliable, and comparable. In addition to these qualitative benefits, SCPM holds the possibility of reducing the cost of generating performance information.

In order to prove the feasibility of SCPM, we must assess the present level of measurement standardization which exists among vehicle assemblers, as well as their will to compromise on issues and work together. SCPM is not only about numbers; it is a language which participants must agree to use. Agreement among competitors depends on vehicle assemblers overcoming implementation and operational barriers. These issues are critical to the successful development of SCPM and must be addressed and resolved by both vehicle assemblers and carriers.

Chapter 3

3. Phase 1 findings and analysis: Do vehicle assemblers in Canada and the United States measure motor carrier performance in the same manner?

3.1 Introduction

In this chapter, we discuss the Phase 1 research from which we answer Q1, that is, "Do vehicle assemblers in Canada and the United States measure motor carrier performance in the same manner?"
The chapter is organized as follows. First, we explain the overall research design of the study. This includes a discussion of the survey population, the methodology, the data collection instrument, and the hypotheses tested. Second, we summarize background information on participants in the survey. Third, through hypothesis testing, we discuss the Phase 1 research findings. Finally, we gather the evidence and answer the above question (Q1).

3.2 Research design

3.2.1 Survey population

To keep this study manageable, the focus on shippers is narrow. Hall and Wagner (1996) also took this approach. Their study focused on the selection of tank truck carriers by bulk chemical shippers. This focused approach allows for the detailed analysis among a homogeneous group of participants necessary for this study. In order for the approach used to develop SCPM to apply to more industries, populations in future studies will have to be expanded to shippers from various industries. This is the approach taken in the development of the SCOR model; that is, 70 organizations across industries were surveyed by the Supply Chain Council in developing the SCOR model.

The focus of our study is on a segment of the automotive industry. The decision to focus on the automotive industry originates in an observation made in a recent study by Brooks (1995), in which she noted that 80% of automotive industry organizations responding to the study's survey have formal carrier performance evaluation programs (second highest after the chemical industry). With its few, large, and influential players, significant industry coverage is possible with few participants. The automotive industry is composed of two main activities, component manufacturing and vehicle assembly. Due to the vehicle assemblers' need to manage the inbound movement of hundreds of components on a timely basis, vehicle assembly is the activity most relevant to this study.
In addition, within the automotive industry, vehicle assemblers likely exert the most concentrated purchasing clout. Therefore, this study focuses on the inbound transportation of components of car and light and heavy truck assemblers with plants in Canada and the United States, as listed in Table 2. This table lists the 19 principal vehicle assemblers in Canada and the United States and their production volumes for 1994. From this table, we note that homogeneity is not apparent when comparing production volumes of cars versus trucks or when comparing the production volumes of individual vehicle assemblers. That is, truck assemblers have much smaller production volumes than car assemblers, and non-Big 3 car assemblers have much smaller production volumes than the Big 3 car assemblers (traditionally, Chrysler, Ford, and General Motors). Furthermore, although vehicle assemblers use all modes of transportation for inbound logistics, road transportation is the most important mode. Therefore, studying vehicle assembly essentially focuses on motor carriers.

Table 2
Car and truck manufacturers with assembly plants in Canada and the United States

Car assembler      1994 production    Truck assembler      1994 production
Chrysler           2,388,451          AM General           18,440
Ford               4,230,951          Freightliner         51,405
General Motors     5,168,859          Mack                 22,565
Autoalliance       246,991            Navistar             103,007
BMW                429*               Paccar               41,081
CAMI               165,718            Volvo Heavy Trucks   23,729
Honda              607,018            Western Star         1,535
Mitsubishi         168,726
Nissan             445,610
Nummi              363,083
Subaru-Isuzu       154,801
Toyota             370,635

Source: "Automotive News, 1995 Market Data Book" (Detroit: Crain Communications Inc., May 1995), pp. 10-13.
* North American assembly began in late 1994.

3.2.2 Methodology

Nineteen vehicle assemblers were contacted by telephone. From these, 13 agreed to participate in the study. The vehicle assemblers who participated requested confidentiality of performance evaluation program contents; therefore, in this study, we never link specific vehicle assemblers with program specifics. We believe this request does not impede the goal of the study because we can still analyze program content with general references. The 13 vehicle assemblers who agreed to participate in the study are listed in Figure 1.

The telephone calls were made during February and March 1996. The telephone surveys and discussions lasted from fifteen to thirty minutes. The individuals contacted have in-depth knowledge of their organization's carrier performance evaluation programs. These individuals are for the most part logistics or transportation analysts and managers. Participants were first asked to answer the study's questionnaire. Second, participants employed by a vehicle assembler where either a formal or semi-formal carrier performance evaluation program exists were asked to fax us a copy of the documentation describing their program's measurement rules. Four vehicle assemblers forwarded the documentation used by their programs. The remaining four vehicle assemblers with semi-formal programs had no documentation.
Figure 1
Vehicle assemblers who participated in our study

Car assemblers      Truck assemblers
Chrysler            AM General
Ford                Freightliner
General Motors      Mack
CAMI                Navistar
Nissan              Paccar
Toyota              Volvo Heavy Trucks
                    Western Star

Our reliance on documentation allowed us to meet our self-imposed relevance test as discussed in Section 2.6.2. Where documentation was forwarded, we relied solely on it to perform our analysis. Where no documentation exists, we relied on information provided by respondents.

3.2.3 Data collection instrument

In order to collect background information about the use of carriers by vehicle assemblers, a questionnaire was designed based on the information gathered during the analysis of the relevant literature. The demographic information obtained includes the following:

• Share of inbound shipments controlled - The greater the role played by the vehicle assembler in controlling inbound movements, the greater the need to manage carrier performance.

• LTL versus TL shipments - Where LTL carriers are used, it becomes challenging for vehicle assemblers to monitor the performance of a carrier across a huge volume of shipments.

• Annual motor carrier expense - When it comes to monitoring service attributes, access to large resources increases the likelihood of the existence of a formal monitoring program.

• Size of carrier base - Given the industry trend to reduce the number of transportation partners, vehicle assemblers face difficult decisions. In addition to this trend, partners work more closely and require a well-understood language of performance.

The questionnaire had three revisions during Phase 1. As the telephone interviews progressed, participants provided information about the use of third party logistics service providers that required changes to the first and second versions of the questionnaire.
These changes were required to obtain more insightful background information from participants not yet contacted. Other changes included questions about the types of motor carriers used and the use of consolidation of LTL shipments. These modifications provided us with a better understanding of carrier performance monitoring among vehicle assemblers. Table 3 shows the final version of the questionnaire.

Table 3
Standardized Carrier Performance Measures (SCPM)
Vehicle assemblers: Evaluating Motor Carrier Service Quality
QUESTIONNAIRE FOR CAR AND TRUCK MANUFACTURERS (VERSION 3)
February 19, 1996

BACKGROUND INFORMATION ABOUT USE OF CARRIERS
1. What percentage of inbound shipment movements do you control?
2. Of that percentage, how much is TL vs. LTL?
3. Do TL loads include pre-delivery consolidation?
4. What is your annual motor carrier expense for inbound movements?
5. How many carriers do you employ?

DEFINING TRANSPORTATION SERVICE ATTRIBUTES
1. Do you have a formal carrier performance monitoring system?
2. Which service attributes do you monitor? Why?
3. Do carriers provide you with performance report cards? Are they appropriate?

MEASURING SERVICE ATTRIBUTES
1. For both TL and LTL, are shipment arrivals assigned a specific estimated time of arrival (ETA) or rather a time window?
2. What time margin is considered on-time? Why?
3. What information is required to measure other service attributes?

WEIGHING SERVICE ATTRIBUTES
1. Do you assign varying weights to attribute measurements in evaluating overall performance?

GENERAL
1. Do you see future changes taking place in how motor carrier performance will be monitored in your company?
2. Can you forward documentation regarding the method used by your company in documenting the performance of your motor carriers?
In addition to background information, the questionnaire allowed us to obtain insight into the reasons certain service attributes are measured.

3.2.4 Hypotheses

Aimed at answering Q1, "Do vehicle assemblers in Canada and the United States measure motor carrier performance in the same manner?", in Phase 1 we determined how vehicle assemblers presently measure carrier performance. We achieved this by testing the four hypotheses described below.

H1: Vehicle assemblers measure the same service attributes.

This hypothesis asserts that vehicle assemblers measure the same service attributes. If the hypothesis is not rejected, standardization at this level is possible. That is, the same service attributes are measured, even if they are not measured in the same manner.

H2: Vehicle assemblers measure service attributes in the same manner.

This hypothesis is divided into two components. These are:

H2A: Vehicle assemblers process the same data to measure service attributes.
H2B: Vehicle assemblers convert data into information in the same manner.

These hypotheses assert that vehicle assemblers measure service attributes in the same manner. To standardize performance measures at this level, it is necessary that service attributes be measured in the same manner.

As vehicle assemblers convert data into relevant information, data lose their original properties. Therefore, we identify two levels of processing where hypotheses are tested. We believe that for standardization to exist, common data must be collected in measuring service attributes. Thus, we first assert that vehicle assemblers use the same data in measuring service attributes (H2A). Second, we assert that vehicle assemblers' carrier performance evaluation programs convert data into information in the same manner (H2B).
For example, from the number of shipments handled by a carrier and the number and value of damage claims, loss and damage measures can be calculated either as a ratio of claims to total shipments or simply as the number of claims. If these two hypotheses are not rejected, the door is opened to the feasibility of SCPM at the same level as GAAP; i.e., under SCPM, any vehicle assembler or carrier knowing the measurement rule would understand a carrier's claim of 96% on-time delivery.

H3: Vehicle assemblers weigh service attributes equally.

In Section 2.4.1, we discussed the need for multi-dimensional evaluation of carrier performance. In order to combine several measures into a carrier index, weights must be assigned to each measure. If this hypothesis is not rejected, the final gate opens to performance information standardization through carrier indices. The use of indices means that vehicle assemblers would evaluate carriers' performance in the same manner and would rank them accordingly.

Partial Standardized Carrier Performance Measures

The hypotheses tested in Phase 1 act as hierarchical or successive gates, which determine the level of commonality of service attribute measurement that exists. That is, performance measurement is broken down into its components: the service attribute measured, the method used to measure the service attribute, and the weight assigned to the measure within the overall carrier performance index. We believe that, although complete standardization may not be possible, partial standardization can be beneficial for vehicle assemblers and carriers. Partial SCPM would include certain measures generated in the same manner, while other rules would simply state which data should be collected for other measures. By design, our hypothesis testing structure allows for this outcome. The four hypotheses create successive gates for information standardization.
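The loss-and-damage example above (the same raw data reported either as a raw count or as a ratio, plus an on-time figure such as 96%) can be sketched as follows. The conversion rules and the monthly figures are illustrative only; no assembler's actual rules were disclosed.

```python
# Illustrative only: identical raw data (shipments handled, claims filed,
# on-time arrivals) yield different reported measures depending on the
# conversion rule chosen, which is what H2B tests.

def claims_as_count(claims):
    # Convention 1: report the raw number of loss and damage claims.
    return claims

def claims_as_ratio(claims, shipments):
    # Convention 2: report claims relative to total shipments handled.
    return claims / shipments

def on_time_rate(on_time, shipments):
    # The kind of figure behind a carrier's "96% on-time delivery" claim.
    return 100.0 * on_time / shipments

shipments, claims, on_time = 500, 4, 480  # hypothetical monthly data
print(claims_as_count(claims))             # 4
print(claims_as_ratio(claims, shipments))  # 0.008
print(on_time_rate(on_time, shipments))    # 96.0
```

Under SCPM, one of the two claims conventions would be designated the standard, so that "4 claims" and "0.008 claims per shipment" could no longer describe the same carrier in two different report cards.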
Therefore, if a service attribute is measured in the same manner, even if vehicle assemblers weigh that service attribute differently, at least this service attribute is standardized in its measurement. Thus, it is possible to have various stages of standardization for various service attribute measures.

Testing hypotheses

The hypotheses were tested using basic comparison of program components. Thus, no statistical testing of the hypotheses was performed. The following reasons support our approach to testing:
1. The amount of data collected was small. Thirteen participants provided information for hypothesis testing. This information included the list of attributes measured, the measurements, and any weights assigned to measures.
2. The nature of the data and their testing did not allow for statistical analysis. Testing the hypotheses required only basic comparison of data. The data were mostly not numeric. Thus, the testing consisted of simple comparisons among carrier performance programs to determine if their components were the same.

3.3 Background information of participants in the survey

A total of 13 vehicle assemblers participated in this study. These 13 vehicle assemblers account for 90% of the 14.6 million vehicles assembled in Canada and the United States in 1994. Most vehicle assemblers who participated in the study supplied us with their annual motor carrier expense for inbound logistics. This amount totaled $1.9 billion. Clearly, vehicle assemblers spend a lot on and expect a lot from their inbound motor carriers. Components travel to assembly plants by air, rail, and truck. For example, Asian-owned assembly plants in the United States have components shipped from Asia to California, use rail to move marine containers across the continent, and finally use trucks for local delivery. Some other assembly plants rely on boxcar and flat car rail service to bring in components.
Despite these examples, participants indicated that road transportation accounts for a large percentage of the money spent on component delivery to assembly plants.

3.3.1 Existence of a formal carrier performance evaluation program

Although the research methodology described in Section 3.2.2 requires observation of documented carrier performance evaluation programs, it was quickly noted during telephone interviews that the use of formal programs is infrequent in the industry. It seems that, despite the emphasis on JIT in the automotive industry, the level of formal carrier performance measurement in this market segment is closer to the state observed by Chow and Poist (1984), i.e., performance measurement tends to be informal and subjective. This contrasts with the state of formal measurement observed by Brooks (1995). Part of the explanation stems from Brooks' lack of definition of the automotive industry in her study. In this small population of 13 participants, it was difficult to categorize programs as simply formal or informal. Instead, participants have carrier performance evaluation programs with varying degrees of formality. Therefore, to proceed with hypothesis testing, the programs were placed in three categories:
1. Formal: These programs have well-documented procedures for collecting and measuring performance data and reporting performance information.
2. Semi-formal: These programs are not as robust as formal programs. Although rules exist for evaluating carrier performance, the approach is simplistic when compared to formal programs. The use of some objective evaluation is likely.
3. Informal: These programs have few or no systematic procedures in place. Thus, these programs require the extensive use of subjective analysis.

Figure 2 shows the distribution of the vehicle assemblers' carrier performance evaluation programs examined in this study, based on the three categories.
In Figure 2, we see that eight vehicle assemblers who answered the questionnaire have formal or semi-formal performance monitoring programs. These eight programs are used for hypothesis testing.

Figure 2
Distribution of carrier performance evaluation programs

Car assemblers surveyed (6 of a population of 12): Chrysler, CAMI, Ford, General Motors, Nissan, Toyota
Truck assemblers surveyed (7 of a population of 7): AM General, Freightliner, Mack, Paccar, Navistar, Volvo White, Western Star
Programs among the 13 respondents: 3 formal, 5 semi-formal, 5 informal; 6 assemblers in the population of 19 did not respond.

Note in Figure 2 that the link between production volume and the use of formal carrier performance evaluation programs among car assemblers is consistent with the research by Brooks (1995), i.e., large organizations have formal carrier performance evaluation programs. As a group, the same conclusion does not apply to truck assemblers: Navistar, by far the largest truck assembler, does not have a formal carrier performance evaluation program. However, when car and truck assemblers are combined, all truck assemblers are small, and thus the conclusion by Brooks (1995) does apply.

3.3.2 Share of inbound shipments controlled

Vehicle assemblers surveyed, except one, control between 85% and 100% of inbound shipments. One truck assembler with an informal program controls 85%, with the others between 95% and 100%. The one exception receives components with freight charges prepaid. Controlling shipments means that vehicle assemblers manage the movement of shipments. This includes carrier selection, scheduling pick-ups, scheduling deliveries, and paying carrier waybills. Vehicle assemblers that do not already control the movement of all shipments plan to do so soon. This indicates that vehicle assemblers are concerned about the inbound movement of components. The monitoring of on-time delivery by all vehicle assemblers reflects this concern. This attitude is expected, as the integration of JIT inbound logistics into JIT manufacturing requires a precise material flow balance.
We find that the share of inbound shipments controlled is not clearly linked with the degree of formality of carrier performance evaluation programs. That is, although the Big 3 assemblers control 100% of inbound shipments and have formal programs, most smaller vehicle assemblers also control all inbound shipments while using either semi-formal or informal programs.

3.3.3 LTL versus TL shipments

Although vehicle assemblers have production volumes requiring TL shipments, LTL shipments are still used. For example, one truck assembler surveyed indicated that only LTL carriers bring components to its assembly plant. More significant for this study is the reliance of several vehicle assemblers on the same national LTL carriers. This is significant because these carriers are likely required to supply performance information to several vehicle assemblers, each with specific information needs. If SCPM existed, such a carrier could reduce its information cost by collecting data to generate standardized performance measures. During our interviews, we noted that large car assemblers use off-site consolidation locations. These consolidation locations create full TL loads from LTL shipments. Thus, large car assemblers are not directly exposed to LTL carriers as are smaller assemblers, who receive shipments directly at the plant. Smaller vehicle assemblers often rely on LTL carriers to supply them with on-time delivery performance information. Unfortunately, several vehicle assemblers, notably truck assemblers, have little faith in the information provided. Thus, smaller vehicle assemblers find themselves in a vicious circle: they rely on carriers to report on their own performance while doubting the accuracy of that information. This scenario spells out the need for carriers and smaller vehicle assemblers to work closely in developing performance reporting rules.
3.3.4 Annual motor carrier expense (inbound components)

Participants who responded to this question indicated annual motor carrier expenses varying between $5 million and $1.6 billion. Among vehicle assemblers, there are large and smaller shippers. This translates into large vehicle assemblers monitoring carrier performance with their own formal programs while smaller vehicle assemblers rely on semi-formal and informal programs. The structure of programs among vehicle assemblers reflects the findings of Brooks (1995, 15), who concluded that smaller shippers use informal carrier performance evaluation programs while larger shippers use formal programs. With their small production volumes, none of the truck assemblers have a formal carrier performance evaluation program. We interpret this to mean that truck assemblers have the most to gain from the development and use of SCPM while the Big 3 car assemblers have the least to gain.

3.3.5 Size of carrier base

The vehicle assembler with the smallest carrier base uses six carriers (two LTL and four TL). The vehicle assembler with the largest carrier base uses 112 core carriers.5 The participants have two general profiles. Some vehicle assemblers use small carrier bases, ranging from six to 35 carriers. These carriers face the now popular task of developing partnerships based on continuous improvement. As discussed in Section 2.6.3, carrier performance evaluation programs, such as Allied Signal's, require the use of reliable information that SCPM would generate. The other vehicle assemblers still use large carrier bases. Counting core and occasional carriers, one Big 3 car assembler uses over 800 carriers. There is considerable cost involved in operating a carrier performance evaluation program where the performance of 800 carriers is monitored. Clearly, the greater the number of carriers employed, the greater the effort required to monitor carrier performance.
We found that the size of the vehicle assembler and its use of a formal carrier performance evaluation program are independent of the size of its carrier base. Echoing one of the recommendations in Lambert et al. (1993), vehicle assemblers using large carrier bases expressed great interest in reducing the number of carriers to what they consider a more competitive level. To achieve this goal, vehicle assemblers need relevant performance information to grade carriers and terminate low-performance ones. SCPM could meet this requirement.

5 Core carriers are those used by shippers on a frequent basis. Carrier use frequency differentiates core from occasional carriers.

3.3.6 Summary of background information of participants in the survey

From the discussion, we note that, unlike the scenario presented by Brooks (1995), only three of the 13 vehicle assemblers surveyed have a formal carrier performance evaluation program. Although the 13 vehicle assemblers surveyed carry well-known brand names, they do not represent the automotive industry Brooks studied. Consistent with Brooks' study is our finding that the size of the organization is related to the existence of a formal carrier performance evaluation program. Whether analyzed by production volume or by motor carrier expense, both approaches demonstrate that vehicle assemblers with large production volumes and large carrier expenses have formal carrier evaluation programs. Share of inbound shipments controlled and size of carrier base are not related to the degree of formality of carrier performance evaluation programs. Most vehicle assemblers control most shipments. In doing so, vehicle assemblers evaluate carrier performance with programs ranging from informal to formal. Similarly, large vehicle assemblers with formal programs and smaller vehicle assemblers with semi-formal and informal programs rely on small to large carrier bases.
Finally, an unproductive scenario exists between smaller vehicle assemblers and their LTL carriers. While these vehicle assemblers rely on carrier-generated performance information, unclear and inconsistent reporting rules mean they do not have great confidence in that information.

3.4 Phase 1 research findings

The results of hypothesis testing in Phase 1 are summarized in Table 4. We note from this table that none of the vehicle assemblers weigh service attributes equally. This means that vehicle assemblers use different models to calculate each carrier's performance index. Only two service attributes, assembly line stoppage and value of damage and loss claims, are measured and reported in the same manner. Note that assembly line stoppage is not a direct service attribute; yet this measure is a clear indication of the impact of carrier service disruption on the business of vehicle assemblers. These two are not the most important service attributes indicated by shippers in previous studies. The collection of raw data is similar for five service attributes. From this, it seems that, at this time, which data to collect is more common than the information derived from them. Finally, nine common service attributes are monitored among the eight programs under study. This list includes at least one service attribute from three important aspects of carrier performance that vehicle assemblers like to monitor. In the rest of this chapter, we discuss the Phase 1 findings in detail.

3.4.1 H1: Vehicle assemblers measure the same service attributes

This hypothesis asserts that vehicle assemblers monitor the same service attributes. Service attributes monitored by vehicle assemblers were tested to determine whether they are similar.

Hypothesis testing

The service attributes measured by vehicle assemblers are listed in Table 5. Performance information for service attributes which we call special requests was not tested further.
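The carrier performance index referred to above, in which several attribute measures are combined using assembler-specific weights, can be sketched as follows. The attributes, weights, and scores are hypothetical; the finding that no two assemblers weigh attributes the same way means each assembler's actual model would differ.

```python
# Hypothetical sketch of a weighted carrier performance index (H3).
# Attribute names, weights, and scores are invented for illustration.

def carrier_index(scores, weights):
    """Combine per-attribute scores (0-100 scale) into one index.
    Weights are assumed to sum to 1.0."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[attr] * scores[attr] for attr in weights)

weights = {"on_time_delivery": 0.60, "value_of_claims": 0.25, "billing_accuracy": 0.15}
carrier_a = {"on_time_delivery": 96.0, "value_of_claims": 90.0, "billing_accuracy": 80.0}
carrier_b = {"on_time_delivery": 90.0, "value_of_claims": 98.0, "billing_accuracy": 95.0}

print(round(carrier_index(carrier_a, weights), 2))  # 92.1
print(round(carrier_index(carrier_b, weights), 2))  # 92.75
```

Under these illustrative weights, carrier B edges out carrier A even though A has the better on-time record; a different weighting would reverse the ranking, which is why rejecting H3 closes the final gate to standardized carrier indices.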
Table 4
Results of service attribute hypothesis testing

Columns: H1 = same service attribute measured by programs (Section 3.4.1); H2A = service attribute measured using the same data (Section 3.4.2); H2B = data converted into information in the same manner (Section 3.4.3); H3 = service attributes weighed equally (Section 3.4.4).

Timeliness of delivery
  On-time pick-up: H1
  Updating estimated time of arrival (ETA): H1
  Transit time performance: H1
  On-time delivery: H1, H2A
  Assembly line stoppage: H1, H2A, H2B
  Service failure: none

Freight damage and loss
  Number of loss and damage claims: H1, H2A
  Timely settlement of claim: H1, H2A
  Damage free delivery: none
  Value of claims: H1, H2A, H2B

Billing
  Number of billing errors: H1
  Number of rating errors: none

Service availability
  Ability to handle shipments: none
  Carrier operating flexibility: none
  Equipment availability: none
  Geographic coverage of service: none

No program weighs service attributes equally (H3 holds for no attribute).

Occasional service attributes were tested further along with on-time delivery. The following paragraphs describe these in detail. Special requests are information needs about a service attribute which only one program requires. This means that any carrier providing data for this service attribute would be satisfying the need of only one user. This situation goes against measurement standardization. It is unlikely that all vehicle assemblers would agree to or care about a performance measure which only one vehicle assembler currently monitors. We doubt this information would be relevant in measuring carrier performance. Of course, it is possible, but not likely, that such a performance measure might have eluded other vehicle assemblers as a relevant measure.
Occasional service attributes are attributes monitored by a few programs. These service attributes may appear to be special requests, but the distinction is that at least two programs monitor occasional service attributes. If two or more vehicle assemblers measure a certain service attribute, this shows that a common information need exists. To illustrate this concept, we turn to GAAP. Although revenue calculation provides financial information used by everyone, certain special information must be presented as footnotes to the financial statements in order to meet the specific information needs of a smaller segment of users. These notes are similar to occasional performance measures. Thus, occasional performance measures, although not widely used, need to be standardized. Therefore, service attributes measured by all participants, as well as occasional performance attributes, were tested to determine whether they are measured using the same data.

Test results

In Table 5, we list 16 of the 24 service attributes tested. Missing from this table are eight service attributes we cannot list, as requested by certain vehicle assemblers.
These eight service attributes are all special requests, and some of them have very little to do with transportation.

Table 5
Number of programs monitoring each service attribute

Timeliness of delivery
  1 On-time pick-up
  2 Updating estimated time of arrival (ETA)
  3 Transit time performance
  4 On-time delivery
  5 Assembly line stoppage
  6 Service failure

Freight damage and loss
  7 Number of loss and damage claims
  8 Timely settlement of claim
  9 Damage free delivery
  10 Value of claims

Billing
  11 Number of billing errors
  12 Number of rating errors

Service availability
  13 Ability to handle shipments
  14 Carrier operating flexibility
  15 Equipment availability
  16 Geographic coverage of service

Definition of expressions:
1 Carrier picks up the shipment at the shipping point at the scheduled time
2 Carrier confirms or changes the pre-established ETA for the shipment at the assembly plant
3 Carrier meets the established transit time for carrying a shipment
4 The shipment is delivered at the agreed upon time (ETA)
5 The plant's assembly line is stopped due to the late delivery of components caused by the carrier
6 Service failure determined by the vehicle assembler
7 The number of claims submitted by the vehicle assembler to the carrier
8 The time it took the carrier to settle the claim
9 The record of the carrier in delivering shipments damage free
10 The value of claims submitted by the vehicle assembler
11 The number of carrier invoices containing errors
12 The number of times the wrong rate was used in the invoices submitted by the carrier
13 The carrier's ability to handle shipments submitted by the vehicle assembler
14 The carrier is able to meet varying demands from vehicle assemblers
15 The carrier has enough of the right equipment to meet the demand
16 The carrier serves the appropriate areas

From this table, we find that on-time delivery is the only service attribute measured by all programs. The measurement of this service attribute by all monitoring programs suggests that timely delivery of components is the main concern of vehicle assemblers. This concern is consistent with the use of JIT assembly systems by vehicle assemblers. Since vehicle assemblers operate with very little space for inventory, timely delivery of components is critical. This concern for timeliness of delivery among vehicle assemblers is consistent with previous studies on carrier selection. In addition, we find that eight service attributes fall in the occasional service attribute category and warrant some discussion. These service attributes are:

• On-time pick-up
Many will agree that in a tightly scheduled inbound movement, the likelihood of late delivery increases as the pick-up time is delayed. Also, it can be frustrating for a loading point to have carriers show up late to pick up shipments. The difficulty in measuring this service attribute in the vehicle assembly market explains its low popularity among vehicle assemblers. In this market, the traditional shipper location is reversed. Traditionally, the shipper is the party who surrenders shipments to the carrier.
In the vehicle assembly market, the shipper is the party who receives shipments. Thus, it is difficult for vehicle assemblers to monitor pick-up time. Unless component manufacturers or carriers can easily capture and communicate relevant data to vehicle assemblers, the cost of generating this information probably exceeds the benefit derived from it. Note that only one formal program monitors this service attribute. In addition to the complex collection of data, this service attribute seems redundant. Given that the objective of inbound logistics is the timely arrival of components, the focus must be on the arrival rather than the departure of shipments. This service attribute may be important for carriers, but it may not provide valuable information to vehicle assemblers. As a result of the reversal of the traditional location of the shipper, physical barriers, and the focus on delivery, this service attribute is not popular among vehicle assemblers. This finding contrasts with the study by Lambert et al. (1993, 135), where it was noted that on-time pick-up was the second most important attribute in selecting and evaluating carriers.

• Updating estimated time of arrival (ETA)
This service attribute is useful for vehicle assemblers as it allows them to adjust production schedules and arrange emergency transportation where necessary. In other words, this service attribute yields predictability within a not always predictable transportation network. Only two formal programs measure this service attribute. This is due to the large volume of data gathering it requires. In order to update ETAs, vehicle assemblers and carriers must develop automated communication systems such as Electronic Data Interchange (EDI). As with on-time pick-up, monitoring this service attribute is within the realm of larger vehicle assemblers who can afford sophisticated communication systems.
• Transit time performance
This service attribute applies to carriers with fixed transit times. In the LTL environment, carriers promise days of service between pick-up and delivery rather than delivery at a specific time. That is exactly the service level one of the truck assemblers requires from its LTL carriers: all an LTL carrier must do is deliver components any time on the day before the components are required on the assembly line. Thus, it is simple to measure performance against the carrier's transit times. Although the comparison is simple, the smaller vehicle assemblers who measure this service attribute do not have independent measurements; they rely on carriers to provide the information, and they doubt its accuracy. Large vehicle assemblers, who can afford to monitor on-time delivery, do not specifically monitor this service attribute. Although carriers are well positioned to provide the information, unless vehicle assemblers believe in its accuracy, the effort by carriers is of little use.

• Assembly line stoppage (caused by carrier problem)
This is not a direct service attribute but rather the consequence resulting from service failure. For two vehicle assemblers, this measure is the acid test for the physical inbound supply chain. The goal of carrying an object from one place to another is for it to arrive at destination when needed. If a carrier fails to deliver components when required, transportation has failed. Interestingly enough, this measure is a composite measure in that it encompasses both timeliness of delivery and freight loss and damage. No previous study reviewed notes this consequence as an important carrier selection or carrier evaluation criterion. This may be due to the fact that the focus of previous studies has been on the interaction between carriers and traditional shippers. Traditionally, shippers surrender shipments to carriers, with concerns for the receiver limited to timeliness of delivery and shipment integrity.
In our study, the shipper's location is displaced to the receipt of shipments. Thus, in addition to timeliness of delivery and shipment integrity, the focus is on the actual use of the components in the assembly process. For two vehicle assemblers, this measure passes the relevance test described in Section 2.6.2. That is, two vehicle assemblers monitor this measure in making decisions. Despite its relevance, this measure is an inexpensive substitute for measuring timeliness of delivery and freight loss and damage. Although monitoring this measure is simple, it is not precise enough to differentiate performance among carriers. For example, if a vehicle assembler used 24 carriers in a previous month and none of them caused assembly line stoppages, and that vehicle assembler monitored only this measure, all 24 carriers would rate equally. This is doubtful. Only formal programs measure this consequence. We do not believe this is due to a need for sophisticated systems; this consequence can be measured in a low-cost way.

• Number of loss and damage claims
Although three programs measure this service attribute, a popular observation among car assemblers, especially large ones, is that, over the years, improvements in handling techniques resulting from better trailers and the use of well-designed returnable containers have drastically reduced the number of claims. Loss and damage claims are so few that receiving personnel at the plant level typically deal with them with little intervention from above. Simply stated, car assemblers do not believe that it is cost beneficial for them to monitor this service attribute. This attitude contradicts Chow and Poist (1984, 33) and Abshire and Premeaux (1991, 33), who listed this service attribute as very important to shippers in evaluating carrier service quality. Compared with car assemblers, truck assemblers depend on a much greater variety of special order components, which must arrive in good condition.
Their greater use of LTL shipments and fewer returnable containers increase their exposure to loss and damage. As a result, the monitoring of this service attribute is more popular among them. From this analysis, we note that TL shippers (car assemblers) are not likely to monitor loss and damage claims while LTL shippers (truck assemblers) do. This fact is supported by a previous carrier service quality survey reported by McKee (1994), and may explain the discrepancy with prior studies. Since the studies by Chow and Poist (1984) and Abshire and Premeaux (1991) do not segregate TL and LTL carrier responses, it is difficult to isolate the rank of this service attribute among both types of shippers and carriers. It seems that this service attribute is an example of a service measure that might be standardized but not necessarily used by all vehicle assemblers.

• Timely settlement of loss and damage claims
This attribute measures the movement of information associated with transportation instead of physical movement. Timely settlement of claims measures one of the factors associated with transportation service failures. Its popularity, like that of the number of loss and damage claims, is not great: only two programs monitor this service attribute. This service attribute is ranked ninth most important in the study by Lambert et al. (1993) and third in the study by Chow and Poist (1984). In this study, only two vehicle assemblers monitor it. Chow and Poist did note that although 75.3% of respondents monitored this service attribute, only 39.8% did so formally.

• Value of loss and damage claims
As with the number of loss and damage claims, this service attribute is more important to truck assemblers than car assemblers. The two assemblers that monitor this service attribute are truck assemblers. Car assemblers are concerned about components arriving on time, and the value of missing or damaged components is a less important issue.
Truck assemblers are more concerned about this service attribute because a component's value often equates to its availability. In other words, high value components, if lost or damaged, are not easily replaced. For example, if a special order diesel engine costing $20,000 is damaged in transit, the assembly of the recipient truck can be delayed until a replacement component is manufactured and delivered.

• Number of billing errors
This service attribute is often cited in studies as important in selecting a carrier. In the study by Lambert et al. (1993, 135), billing accuracy is ranked as the fifth most important attribute shippers use in selecting LTL carriers. Despite their significant use of LTL carriers, vehicle assemblers do not reflect this finding, as they do not monitor this service attribute frequently. Only one formal and one semi-formal program monitor this service attribute. We found four trends that account for this attitude among vehicle assemblers and may explain the discrepancy with the study by Lambert et al. (1993). First, car assemblers, with stable production runs, view their inbound systems as so routine that carrier billing is a basic repetitive task with few variations and little risk of errors. Second, some vehicle assemblers require component suppliers to include transportation charges in the cost of the component. Components arrive at the assembly plant freight prepaid; thus, vehicle assemblers are not concerned about transportation billing errors. The third trend is outsourcing of bill payment. Although outsourcing this process insulates vehicle assemblers from dealing with this problem, billing errors still occur and cost money to resolve. Inevitably these costs are indirectly charged back. Finally, vehicle assemblers are increasingly using third party logistics service providers such as shipment consolidators.6 Similarly to outsourcing bill payment, this practice insulates vehicle assemblers from a multitude of carriers.
From the point of view of vehicle assemblers, billing is reduced to a simple monthly contractual remittance to a third party logistics service provider, who assumes the worry about carrier billing errors. In addition to these four trends, which may be industry specific, the discrepancy is also explained by the fact that, unlike this study, the study by Lambert et al. (1993) does not focus on formally measured service attributes.

Other service attributes

Eight service attributes in this group were tested. The details of the tests are not revealed due to their confidential nature; survey participants made this request. We can state that these attributes are all special vehicle assembler requests and that some are not relevant to actual carrier performance.

Summary of H1: Vehicle assemblers measure the same service attributes.

Of the twenty-four service attributes identified in carrier performance evaluation programs, only one, on-time delivery, is monitored by all eight carrier performance evaluation programs tested. We believe the popularity of this service attribute reflects the importance of timely delivery of components among vehicle assemblers. This finding is consistent with previous studies such as Abshire and Premeaux (1991) and Lambert et al. (1993).

6 Subsequent to the telephone surveys, General Motors and Ford have both outsourced inbound logistics to third party providers.

Unexpectedly, in the study by Lambert et al. (1993, 135), in addition to on-time delivery, on-time pick-up, accurate billing, and transit time all rank as very important to shippers in selecting and evaluating LTL carriers. We expected the importance of these service attributes to be reflected in their frequent use among vehicle assemblers in this study; that is not the case. We find an explanation for this inconsistency in the study by Chow and Poist (1984, 26), which concluded that the importance of an evaluation criterion does not mean that it is formally measured.
Thus, while this study focuses on formally measured service attributes, the study by Lambert et al. (1993) does not. In addition, with regard to on-time pick-up, vehicle assemblers are not located in the traditional shipper location. Since they are at the receiving end of shipments, it is difficult for them to access shipping point data.

At this point, we conclude that a two-tier hierarchy of service attributes exists. All participants believe on-time delivery is important (first tier performance measure). Other dimensions of transportation, such as damaged freight, inaccurate billing, and service availability, are second tier performance measures. This two-tier structure is similar to that described by Andersen Consulting staff in their 1994 transportation quality benchmarking survey (1995, 15). That is, they found that shippers are concerned first with the intact physical movement of shipments and then with the other service attributes.

Thus, focusing on the first tier performance measure, SCPM would dictate measurement and reporting rules for on-time delivery only. These rules would apply to all vehicle assemblers and carriers. Were SCPM to cover both first tier and second tier performance measures, it would include measurement and reporting rules for nine service attributes. Although the rules would exist, not all vehicle assemblers would need the information, nor would carriers have to collect the data.

The attitude toward loss and damage claims differs between car and truck assemblers. Truck assemblers are more concerned about this issue than car assemblers. This is caused by truck assemblers' use of complex, customized components. In this case, we find that transportation service failure impacts car and truck assemblers differently.

7 In the study, on-time pick-up, on-time delivery, accurate billing, claims processing, shipment status update and transit time were listed in the 18 most important attributes shippers consider when selecting carriers.
In turn, these differences result in different information needs. We note that this industry segment is not as homogeneous as we originally thought. Our earlier perception that similar carrier services, similar inputs and similar operating processes defined homogeneity is wrong. We now find that homogeneity, for our purpose, is also defined by the impact of transportation service failure on operating processes.

We note that exception based service attributes, such as assembly line stoppage caused by carrier problems and billing errors, are simple to monitor. Because of this low cost approach, monitoring such service attributes can give smaller vehicle assemblers affordable, if partial, access to quality performance information. With their need for continuous monitoring, timeliness of delivery service attributes may be out of the reach of smaller vehicle assemblers without the help of carrier input. Note also that although assembly line stoppage caused by carrier problems is unambiguous, it does not provide a discrete measure of carrier performance.

The lack of inclusion of certain performance measures in programs, such as billing accuracy, is affected by vehicle assemblers' use of third party logistics service providers. The use of these service providers insulates vehicle assemblers from the direct consequences of carrier service failures. When we look at inbound logistics as a supply chain, these third party logistics service providers may monitor service attributes that do not concern vehicle assemblers. Therefore, in examining the use of carrier performance measures, it may be short sighted on our part to examine vehicle assemblers alone rather than the whole inbound supply chain.

In this section we also note that smaller vehicle assemblers lack the ability to collect data from outside their immediate systems.
Only two formal programs measure the service attribute which requires input of carrier generated data (updating estimated time of arrival, ETA). This leads us to conclude that carriers capturing data may be a good idea, but presently vehicle assemblers tend to limit themselves to internally generated data. In our discussion of SCPM, we do not address the origin or the communication of performance data among carriers and vehicle assemblers. There is a need to address the optimal collection of data, their conversion into information, and the communication of that information within the carrier / vehicle assembler relationship.

3.4.2 H2A: Vehicle assemblers process the same data to measure service attributes.

To standardize performance measures within an industry, service attributes must be measured using the same source of data. The use of subjective data, for example, makes it impossible to replicate a measure. In testing this hypothesis, this study achieves a level of detail not examined by other studies. Therefore, it is impossible to compare the following results with those of previous studies. Despite the lack of comparative evidence, it is feasible to achieve the objective of this study without reference to previous studies.

Testing procedures

This hypothesis was tested by comparing the source of data used to measure each of the service attributes monitored by two or more programs. If we found that programs use the same source of data, we retained these for further testing.

At this stage of testing, the distinction between objective and subjective evaluation is significant. Subjective evaluation of carrier performance is composed of various information that does not use standard scales of reference. Given our self-imposed focus on formal, objective measurements, where no objective data are captured, we ignored such programs in determining whether two or more programs use the same data to measure service attributes.
Results

Table 6 lists the results of this analysis. From this table we find that five of the nine service attributes monitored use the same data. These results illustrate why we believe studying performance measures at this level of detail is important. In order to measure carrier performance, carriers collect timeliness of delivery performance data daily at great cost. This means that, if performance information needs differ among a carrier's customer base, that carrier must operate and maintain systems to collect several sets of performance data. Under SCPM, this would not happen, since the carrier would need only one system to generate the same performance data.

Table 6
Analysis of sources of data for each service attribute monitored

Service attribute | Source of data | Programs using each data source | Conclusion
On-time pick-up | Subjective score | No objective data | Different source of data used
 | Actual vs. scheduled pick-up time | 1 out of 1 |
Updating estimated time of arrival (ETA) | Updated delivery time if different from original | 1 out of 2 | Different source of data used
 | Updated delivery time on all deliveries | 1 out of 2 |
Transit time performance | Subjective score | No objective data | Different source of data used
 | Actual vs. expected transit time | 1 out of 1 |
On-time delivery | Actual vs. scheduled arrival time | 7 out of 7 | Same data used
 | Subjective score | No objective data |
Assembly line stoppage | # of occasions assembly line stopped due to delivery delays | 2 out of 2 | Same data used
Number of loss and damage claims | # of claims submitted and waybills processed | 3 out of 3 | Same data used
Timely settlement of claims | Age of each claim submitted | 2 out of 2 | Same data used
Value of claims | Value of each claim submitted | 2 out of 2 | Same data used
Number of billing errors | Subjective score | No objective data | Different source of data used
 | # of waybills with errors and waybills processed | 1 out of 1 |

Rather than focus on the service attributes monitored using the same data, as listed in Table 6, we now examine those service attributes measured using different sources of data.

• On-time pick-up

Two programs evaluate this service attribute. One program evaluates it subjectively using a scale from one to five, while the other program systematically compares the actual to the expected pick-up time. Note that the gap between the two programs is huge. The subjective measure is shallow compared to the formal program, which monitors every shipment pick-up. In other words, formally monitoring this service attribute requires extensive information technology linked to shipping points and is the domain of large vehicle assemblers and carriers.

• Updating ETA

In measuring this service attribute, one program requires the carrier to update the ETA only if a shipment will be delivered late. This means that, if during a month 20 shipments out of a possible 2000 were late, this program only needs to know whether the ETAs for the 20 late shipments were updated. This is an economical, exception based approach. The other program requires all ETAs to be updated. That is, the ETA update is compulsory whether the update modifies the ETA or merely confirms it. This program must verify that, for every shipment tendered to a carrier, it received an updated ETA from the carrier. Compared to the other approach, this represents a much greater volume of data to be collected. As with on-time pick-up, formal measurement of this service attribute requires extensive information technology linked to carriers.

• Transit time performance

Two programs evaluate this service attribute.
One program evaluates this service attribute subjectively using a scale from one to five, while the other program compares the actual to the expected transit time of shipments. As noted earlier, these data are usually supplied by carriers.

• Number of billing errors

Two programs evaluate this service attribute. One program evaluates it subjectively using a scale from one to five, while the other program requires the number of carrier bills with errors and the total number of carrier bills submitted.

Summary of H2A: Vehicle assemblers process the same data to measure service attributes.

Five of the nine service attributes monitored by two or more carrier performance evaluation programs use the same data. The five service attributes for which vehicle assemblers process the same data are on-time delivery, assembly line stoppage, number of loss and damage claims, timely settlement of loss and damage claims, and value of loss and damage claims. We note that programs which monitor service attributes objectively generally process the same data. One exception to this finding is the source of data used to measure updating ETAs: one program measures all shipments while the other measures exceptions only.

An exception based approach reduces the cost of collecting and processing the data. Such an approach can be an efficient and effective way for smaller vehicle assemblers to monitor carrier performance. Of the five service attributes listed above, only on-time delivery is not measured on an exception basis. This means that in order to monitor on-time delivery formally, smaller vehicle assemblers must work closely with carriers who have access to the information technology necessary to generate relevant information. An alternative is for smaller vehicle assemblers to monitor assembly line stoppage rather than on-time delivery.

Also noted in this section is the use of subjective data to monitor carrier performance.
Although carrier performance evaluation programs using subjective evaluation cost less to operate, we do not know the overall value of such information. Like small shippers, smaller vehicle assemblers rely more on such information since they may not be able to afford better information.

3.4.3 H2B: Vehicle assemblers convert data into information in the same manner.

This hypothesis asserts that vehicle assemblers convert data into useful information using the same mathematical and reporting rules. Such uniform conversion is needed to standardize performance measures within an industry. In testing this hypothesis, this study achieves a level of detail not examined by other studies. Therefore, it is impossible to compare the following results with those of previous studies. Despite the lack of comparative evidence, it is feasible to achieve the objective of this study without reference to previous studies.

Testing procedures

The five service attributes for which the same data are processed were tested under this hypothesis. The process used by each carrier performance evaluation program to convert the data of a service attribute into information was examined to determine whether these programs yield the same information given the same data. This hypothesis exposes the various ways programs manipulate data to generate useful information.

Results

• On-time delivery

The measurement of on-time delivery uses on-time criteria based on the time window concept. When a shipment is tendered to a carrier, the carrier is expected to deliver the shipment within a specified time window. Programs differ in defining this time window. Three programs use a true time window while the four other programs use an ETA plus or minus an allowance for variation.
Figure 3 lists the time window definitions in use:

Figure 3
Criteria used to define on-time delivery | # of programs using each definition
Use of time window | 3
ETA +/- 15 minutes | 2
ETA +/- 20 minutes | 1
ETA +/- 30 minutes | 1
Total | 7

All seven programs which monitor this service attribute objectively follow a success based approach, i.e., all programs focus on carriers successfully delivering shipments on-time rather than late. Not all programs convert data in the same manner. Six programs measure performance for this service attribute by calculating the ratio of a carrier's on-time shipments to its total number of shipments. This calculation is shown in Figure 4. The seventh carrier performance evaluation program takes a different approach and looks at each shipment's performance, which is then weighted.

Figure 4
% of shipments on-time = number of shipments on-time / total number of shipments tendered

The definition of delivery time windows by vehicle assemblers varies. If a shipment were delivered 19 minutes after its ETA, under two programs that shipment would be on-time, while under a third program it would not. Although the variation in window size seems misleading to users of performance information, the time window can act as a service level indicator. That is, the smaller the time window used by a vehicle assembler to measure on-time delivery, the greater the level of service quality the vehicle assembler expects, and carriers must deploy their resources accordingly. In other words, a carrier can decide to always meet delivery time windows no matter how demanding they are. In that case, a carrier with a 98% on-time delivery rating does not need to explain what delivery time window was used in measuring its performance.
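The Figure 4 ratio, combined with a Figure 3 style time window, can be sketched in a few lines. This is an illustration only: the shipment times are hypothetical and the function name is ours, not drawn from any of the programs studied.

```python
from datetime import datetime, timedelta

def on_time_ratio(deliveries, window_minutes=15):
    """Ratio of on-time shipments to total shipments tendered (Figure 4).

    `deliveries` is a list of (eta, actual) datetime pairs; a shipment is
    on-time if it arrives within ETA +/- `window_minutes` (Figure 3).
    """
    window = timedelta(minutes=window_minutes)
    on_time = sum(1 for eta, actual in deliveries if abs(actual - eta) <= window)
    return on_time / len(deliveries)

eta = datetime(1999, 4, 1, 10, 0)
shipments = [
    (eta, eta + timedelta(minutes=10)),  # on-time under all window sizes in Figure 3
    (eta, eta + timedelta(minutes=19)),  # on-time only under the 20- and 30-minute windows
    (eta, eta + timedelta(minutes=45)),  # late under all window sizes
]
print(on_time_ratio(shipments, window_minutes=15))
print(on_time_ratio(shipments, window_minutes=20))
```

Running this with the 15-minute and 20-minute windows yields different on-time percentages for the same three shipments, which is exactly the window-size ambiguity discussed above.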
• Assembly line stoppage

Both programs which monitor assembly line stoppage report the data in the same cumulative manner, i.e., the number of assembly line stoppages caused by each carrier is recorded.

• Number of loss and damage claims

Three carrier performance programs monitor this service attribute. Earlier we concluded that measurement of this service attribute begins with the same data. Despite this uniformity, the information derived from it varies. The following methods are used:

1. The first program monitors claims as a ratio of claims to the number of bills submitted by each carrier.
2. The second program monitors claims as a ratio of claims per 1000 bills submitted by the carrier.
3. The third program simply lists the number of claims submitted.

The difference among these programs, which do not convert data into information in the same manner, is shown in Table 7.

Table 7
Three programs converting loss and damage claims data into information using different rules

Assume the following data:
Number of claims: 12
Number of bills submitted: 2825

Calculation of performance measure according to each method:
Program 1: 12 / 2825 = 0.00425
Program 2: 12 / 2825 x 1000 = 4.25
Program 3: 12 = 12

Note that the difference between the calculation in program 1 and the calculation in program 2 is one of presentation only. It would be fairly simple to reconcile the two measurements; this reconciliation would be part of the reconciliation required after the first iteration of comparison this study completes. For the time being though, as part of Phase 1, these two programs do not convert data into information in the same manner.

• Timely settlement of claims

Two programs monitor this service attribute but do not convert data into information in the same manner. One program simply notes the age of the oldest claim to be settled while the other program calculates the average age of the claims to be settled.
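The three claim-conversion rules applied to the Table 7 data can be reproduced in a short sketch. The function names are ours; the sketch only restates the arithmetic in Table 7.

```python
def claims_ratio(claims, bills):
    """Program 1: ratio of claims to bills submitted."""
    return claims / bills

def claims_per_thousand(claims, bills):
    """Program 2: claims per 1000 bills submitted (program 1 scaled by 1000)."""
    return claims / bills * 1000

def claims_count(claims, bills):
    """Program 3: simple count of claims submitted."""
    return claims

claims, bills = 12, 2825  # the Table 7 data
print(round(claims_ratio(claims, bills), 5))
print(round(claims_per_thousand(claims, bills), 2))
print(claims_count(claims, bills))
```

The sketch makes the reconciliation point visible: programs 1 and 2 differ only by a constant factor of 1000, while program 3 discards the bill volume altogether and so cannot be derived from the other two.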
The difference between the two programs is exemplified in Table 8.

Table 8
Two programs converting outstanding loss and damage claims data into information using different rules

Assume the following data on the age of claims outstanding:

Claim # | # of days outstanding
12389 | 90
12459 | 60
12459 | 45
12567 | 45
12569 | 30
12785 | 30
Total | 300

Calculation of performance measure according to each method:
Program 1: 90 = 90 days
Program 2: 300 / 6 = 50 days

• Value of claims

Two programs monitor this service attribute. Both programs simply report the value of the claims outstanding. Therefore, both programs convert data into information in the same manner.

Summary of H2B: Vehicle assemblers convert data into information in the same manner.

The feasibility of SCPM is seriously challenged by the methods used to report performance information. While similar data are processed for each service attribute, the conversion of data into information varies. This is due to the frequent use of ratios in measuring carrier performance. The heavy use of ratios in carrier performance measurement is quite different from GAAP. Within GAAP, most of the measurement rules are for simple, cumulative information. For example, simple addition and subtraction determine revenue or net income. The use of simple addition in measuring both assembly line stoppage and value of claims explains why these are measured in the same manner.

The other three service attributes, i.e., on-time delivery, number of loss and damage claims and timely settlement of claims, all use ratios. Ratios are difficult to standardize. The only ratio for which there are rules under GAAP is the calculation of earnings per share. The earnings figure is not easily determined because extraordinary items complicate the calculation. Even worse is the determination of the number of shares, assuming different share dilution scenarios. Thus, this ratio is very complicated.
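The two settlement-age rules in Table 8 amount to taking either the maximum or the mean of the same list of claim ages. A minimal sketch, using the Table 8 data:

```python
ages = [90, 60, 45, 45, 30, 30]  # days outstanding for each claim, as in Table 8

oldest = max(ages)               # program 1: age of the oldest claim to be settled
average = sum(ages) / len(ages)  # program 2: average age of the claims to be settled

print(oldest, "days")
print(average, "days")
```

The same six claims thus produce "90 days" under one rule and "50 days" under the other, so the two programs cannot be compared without first agreeing on one conversion rule.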
To conclude, while measures based on cumulative information are straightforward and thus simple to standardize, the conversion of data into ratios, as used in carrier performance evaluation programs, is more open to variation and thus more difficult to standardize.

In this section, we also discussed the use of time windows. Although time windows vary and seem to preclude standardization, they can also act as criteria of expected service levels. Thus, their variability among vehicle assemblers does not affect measurement standardization.

3.4.4 H3: Vehicle assemblers weigh service attributes equally.

This hypothesis tests whether vehicle assemblers weigh service attributes equally. As discussed in Section 2.7.1, the goal of a vehicle assembler in assigning weights to service attributes is to determine an overall carrier performance index.

Testing procedures

In keeping with our study's focus on documented programs, we did not test programs which do not document this process formally. Of the eight formal and semi-formal programs studied, four fall into this category. That is, four carrier performance evaluation programs weigh service attributes formally in order to calculate each carrier's performance rating or index. Three of these programs are formal while the fourth is semi-formal.

In testing this hypothesis, we examined the weights assigned to the five service attribute groups in a manner similar to Chow and Poist's (1984, 29) study, instead of focusing on individual service attributes. We used a simple comparison of the weights assigned to each group to determine whether the weight assignments are the same. If the weighting is similar, then we cannot reject this hypothesis.

Results

The results appear in Table 9. From this we note that there is no consistency among programs. Very striking is the great diversity among the three formal programs. All three programs belong to car assemblers.
From this observation it is apparent that each formal program takes a different approach in determining carrier performance indices. This diversity, which results in unique combinations of service attributes monitored and calculations, may explain why each of these three programs is such a well guarded secret.

Table 9
Comparison of service attribute group weighting

Group | 1st program (formal) | 2nd program (formal) | 3rd program (formal) | 4th program (semi-formal)
Timeliness of service | 100% | 40% | 55% | 14%
Freight loss and damage | 0% | 10% | 0% | 14%
Billing accuracy | 0% | 10% | 0% | 14%
Service availability | 0% | 0% | 30% | 14%
Other attributes | 0% | 40% | 15% | 44%
Total | 100% | 100% | 100% | 100%

• Timeliness of service related attributes

Earlier we concluded that on-time delivery is the only service attribute, among the 24 tested, that is monitored by all vehicle assemblers, and that it is a first tier performance measure. Table 9 supports this conclusion. The three formal programs examined, belonging to car assemblers, assign the highest weight to this group. The fourth, semi-formal program does not. In fact, in this program, belonging to a truck assembler, timeliness of service attributes are assigned weights similar to the other groups. This is consistent with the earlier finding that for truck assemblers, timeliness of delivery is not by itself the most critical service attribute.

• Freight loss and damage related attributes and billing related attributes

Only two programs monitor service attributes in these groups. In the second and fourth programs, these groups have much lower weights than others. This supports our conclusion that there exists a two-tier hierarchy of performance measures: certain service attributes are presently not as important as the popular timeliness of service attributes.
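An overall carrier performance index is simply a weighted sum of per-group scores. The sketch below uses the group weights of the second formal program from Table 9; the per-group scores (on a 0-100 scale) are hypothetical, purely for illustration, since no program disclosed its scoring.

```python
# Group weights of the 2nd formal program in Table 9 (these sum to 1.0).
weights = {
    "timeliness of service": 0.40,
    "freight loss and damage": 0.10,
    "billing accuracy": 0.10,
    "service availability": 0.00,  # group not monitored by this program
    "other attributes": 0.40,
}

# Hypothetical carrier scores per group, 0-100 scale (illustrative only).
scores = {
    "timeliness of service": 95,
    "freight loss and damage": 80,
    "billing accuracy": 90,
    "service availability": 0,
    "other attributes": 85,
}

# Overall carrier performance index: weighted sum over the five groups.
index = sum(weights[g] * scores[g] for g in weights)
print(index)
```

Because each program in Table 9 uses different weights, the same five group scores would yield a different index under each program, which is why comparing indices across programs is meaningless without standardized weights.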
• Service availability related attributes

One program weighs this group significantly, while the others either do not monitor service attributes in this group or assign them a low weight. The attributes in this group point to the evolution of the traditional service attributes. These service attributes surface as carriers become increasingly integral to the success of vehicle assemblers. For example, in their study of bulk shippers, Hall and Wagner (1996, 239) found that service availability is an important carrier selection criterion for bulk shippers. The weights show, however, that this evolution is not consistent among vehicle assemblers.

• Other service attributes

The weight assignment in this service attribute group is quite different among programs. Service attributes in this group present a significant challenge to standardizing carrier performance measures because they are all special requests. Therefore, they cannot be standardized across programs. To make matters worse, programs weigh these attributes differently, and some of the weights are relatively high. Thus, some vehicle assemblers consider special service attributes, which may have little to do directly with transportation, to be important in selecting carriers. This may explain why Lambert et al. (1993) concluded that carriers have difficulty understanding shipper expectations. Note that the heavy weight assigned to these attributes by the fourth program may be due to its informal nature; that is, the heavy weight assignment to this group may reflect the program's less than thorough design.

Summary of H3: Vehicle assemblers weigh service attributes equally.

None of the programs weighs service attributes equally. From this, we conclude that the concept of a standardized carrier performance index has little likelihood of success. Yet, this finding contradicts previous research by Lambert et al.
(1993), which ranks the importance of various service attributes in a homogeneous structure. In the study by Lambert et al. (1993), we are presented with a list of average rankings of service attributes used by shippers in evaluating LTL carrier performance. The underlying concept is that there exists a consistent model which shippers use to evaluate overall LTL carrier performance. In other words, shippers measure similar service attributes and weigh them equally in arriving at a value used in evaluating different carriers.

Somehow we must reconcile the findings of that study and this one. We believe that the findings in the study by Lambert et al. (1993) reflect a general carrier performance evaluation model used by the average LTL shipper. In that model, the emphasis is not on how service attributes are measured but simply on the weights assigned to service attributes in arriving at an overall carrier performance index. Our study differs in two respects.

First, we focus exclusively on carrier performance evaluation performed with some degree of formality. This means vehicle assemblers provided us with the mechanics they use to measure carrier performance. It is possible that vehicle assemblers may have unknowingly developed different yet similar models. These different models, through the combination of service attributes monitored, the conversion method used, and the weights assigned, may actually yield similar carrier performance indices. If this is true, then, as implied by Lambert et al. (1993), there is a single method to evaluate overall carrier performance. There is thus a need to investigate this possibility, because vehicle assemblers may be wasting time and effort measuring carrier performance with different models that yield similar results.

Second, our study also differs from Lambert et al. (1993) in that we did not focus only on shippers who use LTL carriers.
With this approach, we noted that truck assemblers, who ship more often with LTL carriers, are concerned about damage claims while car assemblers, who ship more often with TL carriers, are not. Thus, we expect LTL shippers would weigh damage claims differently than TL shippers in arriving at carrier performance indices. We conclude that, for the situation described in the previous paragraph to be valid, only homogeneous shippers would benefit by assigning similar weights to common service attributes measured.

3.5 Phase 1 summary

On-time delivery is the only service attribute monitored by all eight carrier performance evaluation programs tested. Unfortunately, this popularity belies the fact that measuring this service attribute requires the most data collection and the most complicated ratio calculations. Other service attributes such as on-time pick-up, accurate billing, and transit time are not as popular among vehicle assemblers. We find that the importance of a service attribute in evaluating carrier performance is supported by its objective measurement.

There is a large gap between the requirements to monitor service attributes requiring continuous input and those requiring exception reporting. This means that monitoring the popular on-time delivery service attributes (continuous input) is challenging for smaller vehicle assemblers. Only large vehicle assemblers have information systems powerful enough to monitor these service attributes continuously. Smaller vehicle assemblers must depend on either judgment or information supplied by their carriers to monitor this group of service attributes. Innovative solutions are needed for smaller vehicle assemblers to gain access to quality timeliness of delivery information. Service attributes requiring exception reporting (e.g. billing errors) are less onerous to monitor.
With service attribute measurement distributed across two tiers, SCPM would either include measuring and reporting rules for the popular on-time delivery only, or include measuring and reporting rules for the nine service attributes we identified, which vehicle assemblers could choose to monitor selectively.

When looking at inbound logistics as a supply chain, vehicle assemblers, as well as their third party logistics service providers, may monitor similar service attributes which are not apparent in this study. This means that not only vehicle assemblers require performance data, but also other service providers within the supply chain. Thus, this study could have been more comprehensive by looking at complete inbound supply chains rather than focusing on vehicle assemblers only. This is an area where the SCOR model can help, since it defines performance measures beyond the immediate organization. Note that at the time of publication of this study, both General Motors and Ford had outsourced inbound logistics. Based on the research methodology used in this study, if the study were initiated today, both these vehicle assemblers would not have been included.

Findings show that performance information needs are dictated by the impact of service failure. Thus, in identifying a homogeneous industry segment for our type of study, we look not only for similar carriers, similar inputs, and similar operating processes, but also for similar consequences from similar transportation service failures. We believe that monitoring of loss and damage claims by vehicle assemblers is linked to the degree of use of LTL carriers as opposed to TL carriers.

Carrier performance monitoring does not have to be complex. By monitoring critical service failures, such as assembly line stoppage and loss and damage claims, vehicle assemblers can develop simple, low cost carrier performance evaluation programs based on exception reporting.
Such an approach would convert informal, subjective programs into formal, objective ones. Note that exception reporting is not an appropriate method for monitoring timeliness-of-delivery service attributes: as discussed, assembly line stoppage, although an indication of a service failure, is not a precise measure of timeliness of delivery. Carrier performance data measured by vehicle assemblers often originates from within operations. The reversal of the traditional location of the shipper within the vehicle assembly industry makes accessing shipping-point information difficult for smaller vehicle assemblers. Carriers and shipping locations can also provide input, as is presently the case. To achieve this, all participants must discuss the optimal structure for capturing data and communicating it to the proper users efficiently and effectively. Carriers have a key role to play in supplying information to smaller vehicle assemblers that cannot afford sophisticated information systems. Although the source of data for some service attributes is similar among the programs studied, the conversion of data into information varies, especially where ratios are calculated. SCPM could spell out the data requirements for carrier performance measurement; spelling out the actual conversion rules from data into the popular ratios would require further compromise and discussion. None of the four carrier performance evaluation programs examined weighs service attributes equally. In fact, our study reveals great variations among the three formal programs used by car assemblers. Comparing this finding with previous research leads us to question whether carrier performance evaluation programs used by similar vehicle assemblers yield different carrier performance ratings, and whether one model is better than another.
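To make the weighting question concrete, consider a small numeric sketch. The attribute names, scores, and weights below are invented for illustration; the study does not disclose the actual values used by any program. Two programs scoring the same carrier on identical measured results can arrive at noticeably different indices.

```python
# Hypothetical illustration: attribute names, scores, and weights are
# invented; they do not come from the evaluation programs studied.

def carrier_index(scores, weights):
    """Weighted average of attribute scores; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(scores[attr] * w for attr, w in weights.items())

# One carrier's measured performance on three common attributes.
scores = {"on_time": 0.90, "damage_claims": 0.70, "billing": 0.95}

# Two evaluation programs weighting the same attributes differently.
program_1 = {"on_time": 0.8, "damage_claims": 0.1, "billing": 0.1}
program_2 = {"on_time": 0.4, "damage_claims": 0.4, "billing": 0.2}
```

Program 1 yields an index of 0.885 and program 2 an index of 0.830 for identical measured performance, which is why standardizing weights, rather than just measures, is so contentious.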
It would be interesting to investigate whether various models yield significantly different carrier performance ratings. As it stands now, establishing rules to calculate carrier performance indices is impractical. Among second-tier service attributes, we noted that LTL shippers, such as truck assemblers, are very concerned about damage-claim-related service attributes, while TL shippers, such as car assemblers, are less so. Thus, these two groups of shippers weight damage-claim-related service attributes differently.

Chapter 4

4. Phase 2 findings and analysis: Is the use of SCPM feasible among vehicle assemblers in Canada and the United States?

4.1 Introduction

In this chapter, we answer Q2, that is, "Is the use of SCPM feasible among vehicle assemblers in Canada and the United States?" As described in Section 2.7, there must exist a will among vehicle assemblers and carriers to compromise on positions if industry-wide carrier performance measures are to be accepted. If this will does not exist, then standardization is not likely. In order to assess the existence and strength of this will to compromise among vehicle assemblers, we developed implementation and operational issues based on input from participants to our questionnaire and our analysis of literature and industry practices. We discuss these issues in this chapter, then gather the evidence presented in Phases 1 and 2 to answer Q2.

4.2 Measurement issues

We address these issues since they relate to the actual measurement mechanics that can cause problems for various users trying to develop a common performance language.

4.2.1 Hierarchy of service attributes

Although vehicle assemblers have defined, to varying degrees, the components of transportation service quality, Phase 1 findings show that no two define transportation service quality identically. Some of the measurement components may be the same, but the overall composition varies.
In Section 2.4.1, we discussed the multi-dimensional components of transportation quality. The image presented was one where shippers monitor the various components of transportation in arriving at an overall assessment of service quality. In the vehicle assembly industry, this approach does not clearly exist. What does exist is a hierarchy of service attributes which says, "First deliver the components on time; then I will look at loss and damage claims, billing accuracy, service availability, and other service attributes." Awareness of this hierarchy is important as we put forward likely scenarios for the carrier performance standard-setting process. For example, since timely delivery standards (first tier) interest all vehicle assemblers, we expect much discussion and confrontation of opinions on this matter: should the time windows used in measuring on-time delivery be standardized, or left to each vehicle assembler to determine? While successful efforts on this group of criteria may motivate further standardization, unsuccessful efforts may quickly stop the process. On the other hand, since second-tier attributes do not attract the same consistent concern among vehicle assemblers, it may be easier to bridge the gaps between the two or three assemblers that measure an attribute on that tier. For example, the measurement of assembly line stoppages, which is already standardized, could be formalized as an SCPM by the two programs that monitor this service failure. Thus, early successes can leverage further standardization efforts.

4.2.2 Objective preferred over subjective performance measures

Participants in the survey indicated a preference for the reporting of objectively measured service attributes. This attitude is consistent with the overall move toward formal measurement in other industries.
We believe that the one vehicle assembler with a semi-formal program relying solely on subjective measurements would not resist this trend. The people operating the program indicated that its present state reflects the lack of a better, affordable system rather than a commitment to subjective measures.

4.2.3 Specific and consistent carrier performance measurement rules

We noted a preference among smaller vehicle assemblers for carrier performance information prepared using specific rules. These assemblers do not have formal carrier performance evaluation programs and depend on carriers to generate performance information. Access to formally measured performance information would help smaller vehicle assemblers make better carrier decisions. The need for specific carrier performance measurement rules was less important to vehicle assemblers with formal programs, which already use measurement rules developed internally. This divergence in attitude indicates that smaller vehicle assemblers would likely be less resistant to the initial use of SCPM.

4.2.4 Fewer measures to monitor carrier performance

It is not surprising that vehicle assemblers prefer to monitor a few key service attributes: provided the measures are accurate and comprehensive, the fewer there are, the lower the cost of evaluating carrier performance. Table 10 lists the number of attributes monitored by the three formal and five semi-formal programs tested in Phase 1. Although there is little consistency among program dimensions, the largest number of attributes monitored by any program is 15 (including many special requests), and the majority of the programs monitor fewer than five. From this observation, we conclude that vehicle assemblers want small rather than large "instrument panels" to monitor carrier performance. As in a car, an instrument panel should include gauges (measures) that provide easy-to-read, relevant carrier performance information.
Note that the formal programs monitor no fewer than three service attributes, reflecting the multi-dimensional structure of transportation services. By comparison, three semi-formal programs monitor a single service attribute. This does not mean that these vehicle assemblers do not recognize transportation as multi-dimensional; rather, it indicates that multi-dimensional carrier performance monitoring is a complex process requiring data processing that is not yet financially viable for them.

Table 10: Number of service attributes monitored in each program

  Formal programs   # of attributes   Semi-formal programs   # of attributes
  B                 3                 A                      1
  C                 4                 D                      4
  E                 13                F                      1
                                      G                      1
                                      H                      15

A letter identifies each carrier performance evaluation program.

It seems that the fewer the service attributes monitored, the easier it might be to achieve consensus among vehicle assemblers. Yet we suspect that in a competitive industrial environment, political posturing and group dynamics would slow the standardization process.

4.2.5 Continuous versus exception-based measures

A consideration for vehicle assemblers and carriers is the source and quantity of data available to provide relevant information. Some measures are based on monitoring every transaction, while others are based only on monitoring exceptions. For example, as discussed earlier, on-time delivery is measured by vehicle assemblers on every shipment. On the other hand, assembly line stoppage, which indirectly monitors timeliness of delivery, examines late shipments only when the assembly line stops; there is no active monitoring of on-time delivery until then. It would be easier and cheaper for vehicle assemblers to focus standardization on exceptions rather than on monitoring every transaction.
More appropriate for smaller vehicle assemblers would be the monitoring of exception-based service attributes, such as those related to incorrect billing and freight damage and loss.

4.3 Technological issues

As mentioned in Section 2.3, information is not free. Fortunately, powerful data processing systems exist today to handle the volume of data required to measure carrier performance, which provides an opportunity to reduce costs. We address these issues with a focus on the management of these resources. Although information technology capabilities seem to abound in industry, acquiring and using information technology effectively is still an obstacle for vehicle assemblers monitoring carrier performance. For example, one vehicle assembler with a formal system has not yet implemented the process required to keep track of shipment delivery time, despite the fact that monitoring on-time delivery of shipments is included in its program and the measure carries significant weight within the evaluation model.

4.3.1 The role of carriers in collecting and processing data

Collecting the data required to measure service attributes such as billing and loss and damage claims requires vehicle assembler information technology resources. This requirement partly explains why so few vehicle assemblers have formal monitoring programs. While some vehicle assemblers do not have access to such resources, others simply do not want to invest resources in this process. For example, one vehicle assembler interviewed indicated that inbound logistics was being outsourced, a decision partially motivated by its lack of interest in investing in the resources (people and information technology) required to properly manage inbound logistics and the carrier performance evaluation program. A possible way to lower this barrier is to have carriers perform more of the data collecting, processing, and reporting. Many vehicle assemblers like this idea.
If carriers are encouraged to provide performance information, they will probably seek significant input on which service attributes to monitor and report. This scenario would certainly encourage standardization of measures across vehicle assemblers. As mentioned in Section 3.3.2, certain national LTL carriers presently serve several of the vehicle assemblers surveyed. To avoid duplication of effort, it would be in carriers' interest to standardize performance reporting to vehicle assemblers.

4.3.2 Carrier performance information Internet sites

Preparing and accessing performance information are challenging tasks. Fortunately, the proliferation of Internet sites provides a likely information hub. Combining SCPM with the Internet, one can envision carrier Internet sites where carriers report their performance for each vehicle assembler. Another possibility is an industry site where both carriers and vehicle assemblers provide information, with the goal of generating relevant carrier performance information accessible to both groups.

4.4 Management issues

We address these issues since the development of SCPM requires significant change from management's point of view.

4.4.1 Confidentiality of carrier performance evaluation program contents

During the telephone surveys, participants expressed one of two attitudes. A vehicle assembler with a formal program exemplifies the first. This vehicle assembler is proud of its program and is not interested in sharing its contents with competitors. The respondent believes the mix of performance measures and weights used in the program accurately represents carrier service quality, and the assembler does not want to give up what it considers a competitive advantage. Smaller vehicle assemblers with semi-formal or informal programs hold the second attitude: they are curious about the measures used by competitors.
Either they acknowledge that they are not doing a good job of monitoring carrier performance or, although they monitor it, they worry that they are not monitoring it correctly. Since they wish to access performance information of quality similar to that of formal programs, they welcome the idea of everyone measuring the same service attributes consistently. The open attitude of smaller vehicle assemblers lends further support to the development of SCPM. In fact, they would benefit the most, since SCPM would give them access to formal carrier performance information. We believe that the status of SCPM as an industry initiative would be maximized if all vehicle assemblers participated in its development. For total participation to take place, the three vehicle assemblers with formal carrier evaluation programs would have to divulge the contents of their programs. Thus, the question arises: "Why would vehicle assemblers divulge the contents of their formal carrier performance evaluation programs and give up what they consider to be a competitive advantage?" We put forward three answers. First, the programs may not measure the optimal combination of service attributes that accurately reflects carrier performance. For example, Table 5 and Table 10 demonstrate that the three formal programs do not measure the same service attributes. Despite their similar operating processes, which we believe make them homogeneous, none of them agree on which service attributes to measure. This discrepancy leads us to conclude that, at most, only one program measures the optimal combination of service attributes, leaving the other two assemblers with a false sense of competitive advantage.
Therefore, by participating in the development of SCPM, these three vehicle assemblers can ensure their input to the process and benefit from the research into the development of SCPM. To support the validity of this point, it is important to investigate whether the various carrier performance evaluation programs indeed yield different carrier evaluations. If they do not, then there is no real need for the various proprietary programs. Second, the optimal combination of service attributes does not by itself provide the final index of carrier performance. To arrive at this index, one needs to weight each service attribute. Table 9 makes it clear that each vehicle assembler weights service attributes differently in arriving at an overall carrier index. As summarized in Phase 1, trying to standardize the weights assigned to service attributes is futile. Therefore, there is no need for the three vehicle assemblers with formal programs to disclose their approach to weighting service attributes. Third, the value of information resides in the actions that result from its analysis rather than in simply holding it. In our opinion, having access to quality performance information is not an end in itself. For the information to be valuable, it must become the source of action, i.e., appropriate action plans must be developed and implemented. That is where the true competitive advantage of the three vehicle assemblers with formal programs resides: in the resources available to them to formulate and implement action plans. By participating in the development of SCPM, neither this ability nor their competitive advantage is compromised.

4.4.2 Independence of performance data and information

In Section 2.6.3, we discussed the reliability of information. One aspect not discussed in that section is the source of the data and its possible manipulation.
Depending on the structure of data collection, transformation, and reporting used, carriers and/or vehicle assemblers can be involved in the process. Presently, some vehicle assemblers collect data themselves, while others rely on the cooperation of carriers. If carriers are invited to play a greater role in collecting and processing performance data, it is inevitable that vehicle assemblers will doubt the accuracy of the information generated; one vehicle assembler already indicated this doubt. Fortunately, the accounting profession faces the same dilemma and has developed a solution. Since the responsibility of company executives to report financial results to users places them in a conflicting position, i.e., company executives are motivated to report positive information, the accounting profession has countered this pressure with information auditing. Auditing requires certified accountants or auditors to assess the accuracy of information reported. Through tests of internal reporting controls and disclosure accuracy, auditors issue reports attesting to the accuracy of the information reported, and financial statements thereby gain legitimacy with users. The same approach could apply to carrier performance information: if need be, auditors could be engaged to determine whether the internal reporting systems that generate performance information do so accurately.

4.4.3 Benefit versus cost of formal carrier performance measurement

Through our telephone interviews, vehicle assemblers expressed satisfaction with current carrier service levels. Despite this vote of confidence, most vehicle assemblers believe the use of carrier performance evaluation programs is necessary to keep up with competitors. Thus, the existence and use of these programs seems to be a competitive necessity.
Vehicle assemblers wishing to develop basic programs or upgrade semi-formal ones face difficulty in justifying the development and operating costs of such programs, although the wish to keep up with competitors makes the decision easier. Of course, there are exceptions. One smaller vehicle assembler, contrary to others, decided not to develop a formal or semi-formal carrier performance evaluation program, preferring to use personal judgment to evaluate carrier performance. From this observation, we conclude that this vehicle assembler determined that the cost of such a program could not be offset by the benefits derived from it. Unfortunately, we could not find cost/benefit analyses derived from the use of formal programs. As a rule of thumb, given the industry's $1.9 billion transportation expense, every 1% improvement in transportation represents $19 million in savings.

4.4.4 Carrier performance measurement precision

As mentioned in the previous section, most vehicle assemblers are satisfied with the quality of transportation services received. Still, this does not mean that vehicle assemblers are unconcerned about service quality improvement. We believe transportation service quality improvements are incremental rather than large. Thus, in order to measure quality improvements, the measuring tool must use a sufficiently fine scale. In our opinion, only programs that monitor carrier performance objectively allow for such accurate, incremental measurements. Therefore, only vehicle assemblers with formal programs have the measuring tools required to focus on continuous improvement. Through the use of objective measures, SCPM would provide an appropriate scale for monitoring continuous improvements.
In considering measurement precision, the decision to use indirect measures such as assembly line stoppage reduces precision in measuring timeliness of delivery. In Section 4.2.5, we discussed the cost reduction associated with the smaller amount of data this performance measure requires. The offsetting cost is a decrease in precision in differentiating the performance of various carriers. For example, if during a month all 20 carriers that regularly deliver components to an assembly plant never cause an assembly line stoppage, then all 20 carriers would be rated as 100% on time, meaning that no carrier is better or worse than another. We doubt this would be true.

4.5 Standardization process issues

We address these issues because the development of SCPM, just like that of GAAP, is not a quick and easy process. The initiative must be managed over time within a sound framework.

4.5.1 Cost of standardization of carrier performance measures

The cost involved in developing and maintaining SCPM is unknown, and this issue must be addressed. Vehicle assemblers that have already developed formal carrier performance evaluation programs will be reluctant to spend additional money on activities they have already completed. If the cost of developing SCPM exceeds its benefits, there will be little industry interest in pursuing this approach. As a cooperative and investigative process, the costs would include the following:

• Each vehicle assembler would have to form a committee to investigate the internal structure in place to evaluate carrier performance. This committee would examine both the formal and informal processes as well as the source of data and the approach used to measure service attributes.

• Each vehicle assembler would send a representative to participate in the SCPM discussions with other vehicle assemblers and carriers.
• As significant participants in and beneficiaries of SCPM, carriers would be required to contribute time to explain their present performance reporting structures and to discuss necessary changes to data collection and processing.

• Due to the geographic spread of vehicle assemblers in Canada and the United States, SCPM meetings would require a central hub for communication. These costs would have to be funded by the members of SCPM.

• Especially during the early development stage of SCPM, frequent meetings would be necessary to establish the process. Afterwards, meetings would still be needed to address developing issues such as technological changes and disciplinary action.

• The SCPM members would have to fund research into various aspects of SCPM, such as the service attributes to be measured, conversion rules, and information reporting technology.

4.5.2 Working with competitors

Although vehicle assemblers compete against each other, they have worked together in the past. For example, vehicle assemblers worked together to adapt ISO 9000 to the automotive industry, resulting in QS 9000. QS 9000 lists the procedures required by automotive manufacturers to develop processes that comply with ISO 9000 requirements. Thus, vehicle assemblers have successfully worked together on standardizing operating procedures for everyone's benefit. To support the development of SCPM, the Automotive Industry Action Group (AIAG) or carrier associations, such as the American Trucking Association (ATA), can play key roles in driving the process. One word of caution, though: QS 9000 does not spell out how to measure and monitor processes as SCPM would. SCPM deals with a much more confidential topic, as evidenced by the reluctance of large vehicle assemblers to disclose information to competitors.
Also, although we often refer to GAAP as a model, the creation of GAAP was an industry approach to prevent government intervention in setting financial reporting rules. The evaluation of carrier performance, aside from carrier safety, is not high on the government's agenda. We cannot ignore the success of the SCOR model. Like GAAP, this supply chain model has eliminated industry barriers by putting forward a conceptual framework for configuring supply chain processes regardless of industry specifics. The council and its members have made great strides in identifying supply chain components as part of the SCOR model, whose development resulted from the participation and input of various organizations from various industries. The process used to develop the SCOR model is exactly the process necessary to develop SCPM.

4.6 Population issues

As a language of performance, it is difficult to gauge the applicability of SCPM to a whole industry from the small industry segment under study. The feasibility of industry-wide SCPM is important in determining the commitment of key industry players in the early stages of development.

4.6.1 Homogeneity of motor carriers' services

Vehicle assemblers purchase transportation services from a wide array of carriers. Within this array, some carriers offer rush services, others offer precise delivery times, and still others offer day-specific delivery. Another variation is that some carriers use specialized equipment to deliver components to assembly plants, while others use basic equipment. Despite this variety, we believe that motor carrier services are basically similar, i.e., carriers are expected to deliver intact shipments on time and invoice their services in a timely and accurate manner. Supporting this claim, Abshire and Premeaux (1991) did not find it necessary to segregate the various motor carriers in evaluating the carrier selection criteria used by shippers.
Although vehicle assemblers have indicated special performance information needs, carrier performance monitoring focuses on basic tasks. None of the vehicle assemblers interviewed differentiates carrier performance monitoring between motor carrier types. Thus, a likely possibility for SCPM is the standardization of basic service attribute performance measurement, such as on-time delivery, loss and damage claims, and billing accuracy. This approach would allow vehicle assemblers to use similar performance measures to monitor the service quality of a wide array of carriers, ranging from specialized expediter carriers to TL and LTL carriers. SCPM would also greatly help carriers in preparing and providing historical performance information to prospective shippers.

4.6.2 Lack of homogeneity among vehicle assemblers

Throughout Phase 1, we noted differences among vehicle assemblers. Some differences relate to measurement issues, while others reflect operating issues. Truck assemblers have smaller assembly lines assembling vehicles with a great variety of component options. Their smaller size does not provide them with the same economies of scale for inbound logistics as the Big 3 car assemblers. Therefore, truck assemblers are concerned about bringing in smaller component shipments intact, and their carrier performance evaluations focus on timely delivery of LTL shipments and the occurrence of loss and damage claims. Car assemblers have assembly lines with enormous production runs using similar components. To meet this type of operation, car assemblers rely on trailer loads of similar components arriving in customized packaging; their focus is on the precise delivery of these components. We conclude that, based on their different operating processes, car and truck assemblers have different carrier performance information needs.
Still, we believe differences among vehicle assemblers do not automatically eliminate the possibility of standardizing performance measures. Rather, assemblers could benefit from an offering of standardized measures from which they would then choose. One must remember that carriers serve both truck and car assemblers. Since the providers of transportation services are homogeneous, and vehicle assemblers have multi-tiered performance information requirements, SCPM could accommodate these differences. We believe that focusing early development of SCPM on the needs of truck assemblers would provide the early successes necessary for further development of SCPM.

4.7 Phase 2 summary

In Phase 1, we identified just two service attributes that vehicle assemblers measure in the same manner: assembly line stoppage and value of damage claims. These two originated from a list of 24. Note that the popular on-time delivery is not included in this short list. The existence of SCPM at the multi-dimensional level we envision is not close at hand. Consider that, if this study were expanded to include other shippers, even more service attributes would be identified. For example, Hall and Wagner (1996, 239) note that bulk shippers are concerned about carrier safety and accordingly monitor this service attribute; in our study, not one vehicle assembler mentioned it. Thus, standardizing carrier performance measures would be that much more difficult. Weighing on our first iteration of the SCPM process are the issues we identified, which we believe affect the development and use of SCPM. Some issues are positive while others are negative. Measurement issues are positive. According to vehicle assemblers, timely delivery of freight is critical. In order to monitor this service attribute, assembly line stoppage can be measured easily until the more complicated measure of on-time delivery is developed.
The development of SCPM would provide vehicle assemblers with the objective measures they desire and need for their quality programs. In addition, as part of SCPM, specific measurement rules would elevate the level of understanding of carrier performance among vehicle assembler and carrier personnel. In order to balance completeness with relevance, we are hopeful SCPM would include only a few measures. In Phase 1, we found in assembly line stoppage and value of damage claims two measures that monitor areas shippers in general consider critical, namely timeliness of delivery and exceptions to shipments. Our hope is grounded in the fact that vehicle assemblers presently monitor only a few service attributes in evaluating carrier performance. Monitoring a few measures is a good way to keep the cost of carrier performance monitoring under control. In addition, measures such as assembly line stoppage and value of damage claims focus on service failure, thus reducing the amount of data required. Technology issues are also positive. We see large carriers as providers of performance data to vehicle assemblers. Carriers have information systems that can keep track of a large volume of transactions. This capability can be offered to vehicle assemblers that hesitate to invest in systems required for carrier performance monitoring, a contribution that would benefit smaller assemblers the most. As for accessing data and information over wide areas, the Internet offers the possibility of linking carriers and vehicle assemblers. Management issues are somewhat negative. Although we did not find large vehicle assemblers supportive of the idea of sharing carrier performance evaluation program contents with others, we found a will among smaller assemblers to accept formal program components offered to them. With the development of SCPM, measurement rules would be provided to smaller vehicle assemblers.
This would allow them to confidently generate data internally or request data from their carriers. At the same time, they could confidently develop formal carrier performance evaluation programs. As is presently the case, vehicle assemblers question the validity of carrier-supplied performance information. To some degree, carriers adhering to SCPM would reduce this doubt. If need be, the information itself or the systems which generate the data could be subject to a professional audit. At this time, though, we do not propose who would pay audit fees. It seems that smaller vehicle assemblers are developing formal carrier performance evaluation programs more as a competitive stance than as a well thought out decision. The value derived from the use of formal carrier performance evaluation programs and SCPM should be determined to enhance acceptance of the concept. Although the financial value of SCPM is not yet known, we believe that no comparable tool exists which allows vehicle assemblers to accurately and discretely measure changes in carrier performance. We believe benchmarking among carriers would benefit from such a precise measurement tool. To drive the SCPM process along, further research is needed to determine the financial value of SCPM. As for standardization process issues, we find these to be positive. Vehicle assemblers have recently demonstrated their ability to work together in order to adapt ISO 9000 standards to their industry. As mentioned earlier, we now propose a common evaluation tool, which large vehicle assemblers may consider beneath them. An important unknown at this time is which party will own the SCPM development process. SCPM will not materialize suddenly. Great efforts are required to move the process along, first as a concept and then as an operating structure. The AIAG was instrumental in bringing together vehicle manufacturers to develop QS-9000.
They may have a role in driving the development of SCPM. More appropriate in this role would be the Supply Chain Council. The council and its member organizations are moving rapidly in developing a comprehensive supply chain conceptual framework whose components include performance measurement. As for population issues, we feel positive about the homogeneous nature of the transportation services offered to vehicle assemblers by carriers. We do not feel the same about the homogeneity of vehicle assemblers. What we first believed was a homogeneous market segment is not. We find that the impact of transportation service failure on the operating processes of vehicle assemblers varies and consequently results in differences in the approach used by vehicle assemblers in evaluating carrier performance. These differences prevent SCPM from being applied in determining carrier indices. Although SCPM could offer a number of measures, certain groups of vehicle assemblers would presently find little value in them. Our conclusion to the question, "Is the use of SCPM feasible among vehicle assemblers in Canada and the United States?", is threefold. First, the use of SCPM in generating carrier performance indices is not practical. Each vehicle assembler has its own formal or informal recipe for assigning weights to the various service attributes measured. We recognize that assigning weights in measuring overall vendor performance is specific to each purchaser. Second, the uniform use of SCPM is also improbable. We find that differences among vehicle assembler operations result in varying carrier performance information needs. Note our use of the word uniform in the first sentence of this paragraph. We use this word to qualify our conclusion since we believe the use of SCPM is feasible for truck assemblers. We conclude that SCPM is feasible for their use for the following reasons:

• Truck assemblers, as a group, have similar information needs.
• Truck assemblers, as a group, need help the most since they have not yet clearly defined transportation service quality.

• Truck assemblers can accelerate the development of their formal carrier performance evaluation programs by leveraging standardized performance information supplied by carriers. Such an initiative would elevate the level of formality in measuring the important on-time delivery attribute among truck assemblers.

• The two service attributes we found to be measured in the same manner, the imprecise assembly line stoppage and value of freight damage and loss, reflect the aspects of transportation truck assemblers wish to use in evaluating carrier performance, namely timely delivery of freight and intact delivery of freight.

Third, the use of SCPM in defining carrier performance monitoring terminology to be used by vehicle assemblers is feasible. Similarly to the SCOR model, SCPM could be used to define performance measures in terms of reporting rules, calculations, and sources of data to be used in measuring carrier performance among vehicle assemblers. For example, SCPM could be used as a reference by vehicle assemblers in specifying to carriers their service expectations and performance information requirements. Specifically, a vehicle assembler could decide, among the selection included in SCPM, which service attributes to measure, whether to use the prescribed data source, and whether to process data in the prescribed manner. Under this conclusion, large vehicle assemblers could still decide the level at which to contribute to the setting of standards without compromising confidentiality. In this study, we find that there is little to be gained from the use of SCPM by the large vehicle assemblers with formal programs. The only incentive we identified is the two in three chance that a formal carrier evaluation program is not accurate. At this stage, their willingness to participate in the SCPM process is unlikely.
We believe that if the use of SCPM were considered across industries, similar to the breadth of the SCOR model, their willingness to participate would increase. In moving the SCPM process along, we see a two-pronged development taking place. On the one hand, in order to realize SCPM benefits in the short term, truck assemblers must form a united voice in communicating their information needs to carriers. In order to keep information cost low and information value high, truck assemblers as a group need the timeliness of delivery information which carriers can supply. On the other hand, in the long term, as the SCOR model is refined, Supply Chain Council participants will eventually define terms that vehicle assemblers will find themselves using in their communication with carriers and benchmarking partners.

Chapter 5

5. Summary and conclusions

5.1 Introduction

From the review of literature and industry practices, we conclude that shippers and carriers do not have a clear language to communicate carrier performance. This is troublesome for shippers, as they need evidence to support their carrier selection and retention decisions. Carriers face a similar challenge when attempting to support their claims about the level of service provided to present as well as prospective clients. In our opinion, previous research does not provide sufficient direction for creating a comprehensive language of carrier performance. One current attempt to help remedy this problem is the Supply Chain Operations Reference (SCOR) model developed by member organizations of the Supply Chain Council. This cross-industry model developed by leading organizations identifies processes and metrics for use in configuring and evaluating supply chain processes. The model provides a broad language of performance, which improves communication among manufacturers, wholesalers, and distributors.
However, the SCOR model falls short as a language of carrier performance as it does not focus on the management of shipper/carrier relationships. Nor does the model address in sufficient detail the actual mechanics necessary for relevant and understandable performance measures for use by transportation service purchasers. A major achievement of the SCOR model is its development based on participant input. That is, supply chain participants from various industries developed a language for their own use. From this model, we note that participation and discussion among users is critical in developing a language of carrier performance. In light of the absence of an appropriate solution to the problem of carrier performance evaluation, we put forward our own approach, which we name Standardized Carrier Performance Measures (SCPM). The design and development process of SCPM borrow from the accounting industry. That is, the concept of SCPM parallels the concept of Generally Accepted Accounting Principles (GAAP). Similarly to the development of the SCOR model, the participative development of GAAP has created a successful, widely used standardized language of financial performance, which should be emulated in measuring carrier performance. Unlike the SCOR model, GAAP provides guidance in developing the mechanics necessary to measure performance with the precision required for management evaluation. In assessing the feasibility of SCPM, we focused our research on the inbound transportation of components of vehicle assemblers in Canada and the United States. Our research design was twofold. First, we compared the content of formal and semi-formal carrier performance evaluation programs used by vehicle assemblers. We examined closely the service attributes measured, the methods used to measure them, and the weights assigned to each service attribute in arriving at an overall carrier performance index.
In this study, we identify 24 service attributes monitored by vehicle assemblers, covering timeliness of delivery, freight damage and loss, billing accuracy, and other areas. Of these 24, only nine service attributes are measured by two or more vehicle assemblers. Among the nine is the important on-time delivery. Of these nine, only two service attributes, namely assembly line stoppage and value of loss and damage claims, are measured in the same manner. Measuring these service attributes is a matter of simply adding information. Measuring on-time delivery involves complicated comparisons and ratio calculations, which none of the vehicle assemblers perform in the same manner. Finally, none of the service attributes are weighed equally in calculating an overall carrier performance index. In addition to assessing similarities among carrier performance evaluation programs, we examined various implementation and operational issues which we believe impact the feasibility of SCPM. We examined measurement, technological, management, standardization process, and population issues. From our analysis of similarities among carrier performance evaluation programs and of implementation and operational issues, we arrive at several conclusions:

1. Barriers exist to the implementation and operation of SCPM among all vehicle assemblers. These barriers make the present use of SCPM unfeasible.

2. Even if implementation and operational barriers were removed, the use of SCPM in calculating carrier performance indices would be impractical.

3. Although the implementation of SCPM among all vehicle assemblers is not presently feasible, the use of SCPM is feasible among the smaller group of truck assemblers.

4. Finally, as a reference map similar to SCOR, vehicle assemblers and their carriers could use SCPM to define certain aspects of performance measures.

In addition to these conclusions, this study contributes to the overall body of knowledge on carrier performance evaluation.
In the remainder of this chapter, we discuss these. First, we present our research findings. Then, we lay out an industry action plan based on our findings to bring about the partial implementation and operation of SCPM. Finally, in order to motivate further research on the topics covered in our study, we discuss areas which require additional research.

5.2 Research findings

5.2.1 Standardized Carrier Performance Measures for transportation

We noted a need among most vehicle assemblers to measure and report carrier performance information consistently. Based on our findings, it is feasible that SCPM can be implemented to provide vehicle assemblers and carriers a selection of consistent performance measures. These measures would likely become part of a much larger model.

5.2.2 Few key performance measures are necessary

Vehicle assemblers do not formally consider many service attributes in evaluating motor carriers. The formal and semi-formal carrier performance evaluation programs we examined monitor between one and fifteen service attributes. This finding parallels the financial materiality principle used by accountants in preparing financial statements, i.e., only relevant information which substantially affects decision making is monitored. This finding contrasts with the list of 166 factors of quality listed in the study by Lambert et al. (1993). The difference in number is partially due to the different focus of the studies. While the study by Lambert et al. (1993) examined carrier selection and performance evaluation, this study focuses on carrier performance evaluation. Another explanation for the difference is the focus of this study on formal carrier performance measures. The study by Lambert et al. (1993) did not use formal measurement as a criterion in composing the list of factors of quality. This study does.
As indicated by Chow and Poist (1984), service attributes considered important in evaluating carrier performance are not necessarily measured formally. We hope future studies on carrier performance measurement and evaluation will reflect current industry practices and focus on data from formal selection and performance evaluation programs.

5.2.3 Vehicle assemblers do not need similar carrier performance information

In this study, we find that different assembly processes between car and truck assemblers, and differences in the sophistication of carrier performance evaluation programs, result in different carrier performance information needs. Simply stated, of a core of service attributes identified, some are not commonly monitored by vehicle assemblers. This practice does not mean that, within a vehicle assembler's supply chain, a partner does not need the information. In this study, we note that some inbound logistics tasks are handled by third party logistics service providers. We suspect that the need for performance information absent at the vehicle assembler level may in fact exist at the third party provider level.

5.2.4 Vehicle assemblers calculate carrier performance indices differently

Similarly, the impact of service failure on assembly plant processes also dictates the weights assigned to service attributes in each assembler's carrier performance evaluation program. Thus, carrier performance index information supplied to a vehicle assembler by a carrier or a central source would have to be broken down into its measurement components and then inserted into a customized evaluation model to be of any use to that vehicle assembler. Therefore, it is not practical to include in SCPM the calculation of carrier performance indices.
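Why the composite index resists standardization can be shown with a small sketch. The attribute names, scores, and weights below are entirely hypothetical: two assemblers applying different weights to the same component measures arrive at different overall indices for the same carrier, which is why only the components, not the composite, are candidates for standardization.

```python
# Hypothetical component scores (0-100) reported for one carrier.
# Attribute names, scores, and all weights are illustrative assumptions,
# not taken from any actual evaluation program.
scores = {"on_time_delivery": 92.0,
          "line_stoppage": 98.0,
          "damage_claims": 85.0}

# A car assembler might weight line stoppage heavily (costly shutdowns);
# a truck assembler might weight timeliness and claims (scarce components).
weights_car_assembler = {"on_time_delivery": 0.3,
                         "line_stoppage": 0.6,
                         "damage_claims": 0.1}
weights_truck_assembler = {"on_time_delivery": 0.5,
                           "line_stoppage": 0.1,
                           "damage_claims": 0.4}

def performance_index(scores, weights):
    """Weighted sum of component measures -- one assembler's 'recipe'."""
    return sum(scores[attr] * weights[attr] for attr in scores)

print(round(performance_index(scores, weights_car_assembler), 1))    # 94.9
print(round(performance_index(scores, weights_truck_assembler), 1))  # 89.8
```

The same carrier scores 94.9 under one recipe and 89.8 under the other; a centrally supplied composite would have to be decomposed back into `scores` before either assembler could use it.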
5.2.5 Monitoring the number of loss and damage claims is linked with the use of LTL shipments

Vehicle assemblers which use mostly LTL shipments are more likely to monitor the number of loss and damage claims than vehicle assemblers who use mostly TL shipments. Over the years, vehicle assemblers, carriers, and component manufacturers have designed more protective packaging and improved trailer design. These improvements, which are more apparent in TL shipments, have drastically reduced the number of loss and damage claims. Although previous studies have not arrived at this conclusion, this finding is consistent with a previous industry survey.

5.2.6 The impact of service failure on operating processes

In this study, we found that the service attributes measured are linked to operating processes. That is, where an operating process is not affected significantly by service failure, the service attribute is not measured. Thus, value of loss and damage claims is important to truck assemblers (expensive, scarce components) whereas assembly line stoppage caused by carrier service failure is important to car assemblers (expensive cost of assembly line shutdown). We concluded that the size of the vehicle assembler is not necessarily a good indicator of the service attributes monitored. Rather, the impact of service failure, given the vehicle assembler's operating processes, is more likely to determine the service attributes monitored.

5.2.7 Implementing decisions based on information is more important than simply controlling information

Emphasis on confidentiality of information has led vehicle assemblers to develop carrier performance evaluation models secretly. Thus, each model is believed to provide a competitive edge. This advantage is based on a more accurate carrier performance evaluation resulting from the combination of service attributes measured and the weights assigned to these.
We did not attempt to determine whether these various models actually yield substantially different carrier performance indices. Further research in this area would reveal whether the confidential environment created by vehicle assemblers in their quest for the best combination of performance measures is warranted. Nonetheless, the true power of information resides in the decisions based on it and the resulting actions. The development of GAAP and the SCOR model is based on this premise. In these systems, rules describe standardized measures. When it comes to describing how to use this information, these systems are silent. Thus, until proven otherwise, vehicle assemblers should not focus on developing confidential information. Rather, vehicle assemblers and carriers should openly develop an appropriate combination of performance measures and reporting rules. With this information, vehicle assemblers should then develop communication and decision implementation systems.

5.2.8 Few automotive assemblers clearly communicate their service expectations to carriers

In this study, we demonstrate that only three vehicle assemblers with formal carrier performance evaluation programs clearly define transportation service quality. This means that the other ten smaller vehicle assemblers we studied, representing 17.6% of car assembly and 100% of truck assembly in Canada and the United States, do not clearly define transportation service quality. In their study, Lambert et al. (1993) concluded that misunderstandings about service expectations between carriers and shippers resulted in wasteful deployment of carrier resources. This lends further support to the view that a precise language of carrier performance would clarify service expectations between smaller vehicle assemblers and carriers.

5.2.9 Shippers can also be defined by the operating process they use

Previous studies have defined shippers in different terms with regard to carrier performance evaluation.
For example, Chow and Poist (1984) and Abshire and Premeaux (1991) viewed shippers as a homogeneous population in their studies. In their study, Lambert et al. (1993) defined shippers in terms of the carrier type used (LTL). In the study by Hall and Wagner (1996), a specific type of TL transport was used to define shippers (bulk). This study defines shippers along a new dimension. That is, shippers can be defined in terms of the operating process used. We note in the study that the assembly of trucks resembles a job shop operation while the assembly of cars resembles a continuous flow operation. As a result of this difference, transportation service failures affect these two types of assemblers differently. Consequently, carrier performance evaluation among the two types of assemblers differs in terms of which service attributes are monitored and how each service attribute is weighed in arriving at an overall carrier performance index. This distinction between operating processes is incorporated in the SCOR model, where job shop operations are described differently than continuous flow operations.

5.2.10 Vehicle assembler demographics and their use of formal carrier performance evaluation programs

Only large vehicle assemblers, namely the Big 3, have formal programs. All other car assemblers and all truck assemblers have either semi-formal or informal programs. As indicated by Brooks (1995), size matters. The Big 3 have enormous production volumes and transportation expenses which provide them with the economies of scale to operate formal programs. Most vehicle assemblers control all inbound shipments. Thus, the share of inbound shipments controlled by vehicle assemblers is not a good indicator of the use of a formal carrier performance evaluation program. Size of carrier fleet is also not a good indicator. Vehicle assemblers have carrier fleets of varying size independent of whether they use a formal program or not.
5.2.11 The reliance on carrier-generated performance information by small vehicle assemblers

Small vehicle assemblers rely on carriers to supply them with on-time delivery performance information. Unfortunately, small vehicle assemblers do not always believe this information. This doubt results from a lack of understanding of the rules used by carriers in measuring their own performance. These vehicle assemblers have the most to gain from the development of standardized performance measurement rules.

5.3 Industry recommendations

Presently, the concept of SCPM is simply an approach we developed. In order to proceed with the implementation of SCPM, we recommend the following action plan. This action plan could be executed either as an industry initiative or by an individual consulting with truck assemblers and carriers. This plan focuses on truck assemblers since the findings show that this group of vehicle assemblers is presently more compatible with the SCPM process.

5.3.1 Create an SCPM process

An initiative such as SCPM is not an informal process. Both the truck assembler and carrier communities must work together within a structured process. This process must support an initiative where a common will to compromise and cooperate counteracts the natural urge of competitors to isolate themselves. Thus, we believe this process should be developed and sustained by an industry umbrella organization. In the study, we identified the Automotive Industry Action Group and the American Trucking Association as possible drivers for this initiative. At the inter-industry level, the Supply Chain Council has had great success in bringing organizations together to define supply chain components. Carrier performance measurement is definitely within the scope of the SCOR model. Thus, the Supply Chain Council can play a key role in coordinating measurement standardization.
Alternatively, an organization similar in structure to the Supply Chain Council could be created to drive the SCPM process among truck assemblers and their carriers. This organization would be funded by the member truck assemblers and carriers and would pay for all expenses incurred to develop, implement, and operate SCPM.

5.3.2 Seek input from carriers

In this study, we examined carrier performance from the point of view of shippers. In our findings, we identified the critical role carriers can play in collecting data and generating performance information. In order for truck assemblers to leverage this situation, a study examining the processes used by carriers to measure their performance is necessary. This study would highlight gaps and opportunities in implementing and operating SCPM.

5.3.3 Create a committee to draft initial SCPM

Once the research on carriers is complete, enough information would be available to draft SCPM. A committee composed of truck assembler and carrier representatives would convene to determine the service attributes to be measured, the data to collect, and the rules to convert data into information.

5.3.4 Seek critical support from users

Similarly to standard setting within GAAP, users of information generated by the proposed SCPM rules must agree on them. Support for the proposed SCPM will be needed from all truck assemblers participating in the initiative. Such widespread support among carriers is not necessary, for the following reason: support from carriers which have the resources to fulfill their responsibilities according to SCPM will be critical, while support from carriers not committed to participating in SCPM will not be as critical initially. We believe that having the ATA involved in the SCPM process might encourage these smaller carriers to participate in order to maintain their competitive positioning.
5.3.5 Develop information system controls

To address the issue of information accuracy and independence, advice from professional auditors is encouraged to assess the need for internal controls in capturing data and converting them into information. In addition, a tentative auditing engagement program should be developed.

5.3.6 Develop software solutions

To help truck assemblers and both large and small carriers implement the systems necessary to generate performance information in accordance with SCPM, software solutions integrated into existing information technology systems are necessary. Thus, software vendors must be consulted to develop such software solutions. Packages developed through this initiative could then be marketed to future organizations wishing to take part in the SCPM process.

5.3.7 Create an SCPM communication system

Once operational, SCPM will allow truck assemblers and carriers to communicate clearly. In addition to the information itself, a process is needed to communicate the information. In the study, we suggested that an Internet-based solution could provide access to selected performance information between truck assemblers and carriers as well as among carriers, or even make information available to prospective shippers.

5.3.8 Leverage early successes

A key individual within this process can lead the dissemination of the early successes of this initiative. Some present successes are the standardization of the measures assembly line stoppage and value of loss and damage claims. Standardizing several other service attribute measures, including the popular on-time delivery, only requires the ratio calculation to be agreed upon by SCPM participants. Once operational, SCPM may prove attractive to other vehicle assemblers. Ultimately, membership in SCPM may become a necessity to remain competitive.

5.4 Future research needs

The future research needs described below are aimed at both industry practitioners and academics.
Industry practitioners interested in implementing SCPM must gather information about the financial impact of SCPM. Thus, various cost aspects as well as benefits associated with the use of SCPM must be researched to provide users a clearer picture of the future. Other research needs are presented to academics interested in further research on the topic of carrier performance information.

5.4.1 Separate carrier performance evaluation from carrier selection

Our research design allowed us to isolate carrier performance evaluation from carrier selection. Studies on this topic, such as Lambert et al. (1993), do not make this distinction. We believe this distinction is important since the two processes have different objectives. The objective of performance evaluation is to determine the actual level of carrier service quality. The objective of carrier selection is to evaluate the ability of a prospective carrier to provide a shipper's expected level of service quality, whether based on historical and generic performance information or on service delivery components such as size of fleet. In our study, we demonstrated that formal carrier performance evaluation programs, such as Johnson & Johnson's, do not mix the two processes. Similarly, vehicle assemblers in our study also differentiate between the stage of initial selection and ongoing performance monitoring. Therefore, to reflect industry practices, future studies should not ignore the difference between the two processes.

5.4.2 Investigate any carrier performance evaluation program content convergence

As performance monitoring programs increase in use, it will be interesting to examine whether a convergence of these programs takes place. It is possible that standardization will evolve naturally as shippers copy each other's programs. The fact that our study identified commonalities in service attributes monitored and measurement techniques among vehicle assemblers' programs already points to this likelihood.
Thus, longitudinal studies which would examine the evolution of these programs could highlight converging trends.

5.4.3 Assess the true value of performance information

We question the claim that there is a best method to measure transportation service quality. Research is needed to assess the reality of differences among programs, i.e., a sample of performance data should be used to determine whether programs generate significantly different carrier evaluations. This would establish whether there really is a best combination of carrier performance measures and weights. The findings of such research would either legitimize or eliminate the resistance to sharing program contents which presently exists among sophisticated vehicle assemblers. Research on this topic should be part of the industry action plan in order to provide support for the use of SCPM.

5.4.4 Evaluate the benefits arising from the use of Standardized Carrier Performance Measures

We discussed the benefits arising from the use of SCPM, such as lower cost of performance information and ease of understanding. The development of a proper methodology to evaluate these benefits would help shippers and carriers decide whether this approach is financially viable. Thus, the cost of generating performance information in the present disorganized environment should be compared to the cost of generating the same information under SCPM. Research on this topic should be part of the industry action plan in order to provide support for the use of SCPM.

5.4.5 Evaluate various performance data gathering strategies

Different data gathering strategies affect the cost of generating information. Presently, vehicle assemblers rely extensively on objective dynamic data in monitoring carrier performance. Dynamic data reflect actual services provided by carriers such as on-time delivery and loss and damage claims.
The data and information which support the measurement of these service attributes are expensive to generate. Lower cost alternatives to this strategy must be investigated. For example, data sampling would reduce the amount of data required to generate accurate on-time delivery performance information. Another example is the monitoring of service quality based on service failures only. Research on this topic should be part of the industry action plan in order to provide support for the use of SCPM.

5.4.6 Examine the context in which various carrier performance evaluation programs operate

Given that the size of a shipper's operations and the degree of formality of its carrier performance evaluation programs are related, it would be beneficial for future studies to examine other possible barriers holding back smaller shippers from developing more formal programs. Of course, cost comes to mind as an important barrier. Still, there may be other barriers holding back this development, such as the lack of formal operating processes. Findings in this area might result in the design of simple carrier performance evaluation programs accessible to shippers of all sizes. Research on this topic should be part of the industry action plan in order to provide support for the use of SCPM.

5.4.7 Larger study sample

Finally, this study was limited to the vehicle assembly industry in Canada and the United States, i.e., a small segment of the whole shipper community. It would be beneficial to apply the same research approach to a much larger population. For example, similar to Lambert et al. (1993), LTL shippers across industries could be surveyed. Such a population would likely include many more formal programs. This would increase the legitimacy of the findings. After all, the legitimacy of GAAP is partly due to its consistency across industries. By enlarging the population, however, risks surface.
These risks relate to the researcher's ability to understand the causes of differences among shipper programs. For example, in this study we demonstrated that vehicle assemblers are not necessarily homogeneous simply because they assemble vehicles from components.

5.4.8 Examine carrier performance comprehensively within each supply chain

This study limited its focus to vehicle assemblers. Where third party logistics providers are involved in a supply chain, their carrier performance information needs were not examined. This is a shortfall of this study that should not be repeated. Future studies on this topic must examine complete supply chains in order to provide a complete picture of carrier performance information needs. Research on this topic should be part of the industry action plan in order to provide support for the use of SCPM.

5.4.9 Study carrier performance by grouping shippers according to their operating processes

Based on the finding that the service attributes measured by vehicle assemblers reflect their operating processes, future research on the topic of carrier performance should group shippers in terms of their operating processes. Thus, shippers with job-shop-like operations could be examined as a group. Such a study could cross industry lines, and this ability to cross industry borders could create interesting benchmarking scenarios for non-competing organizations. These cross-industry scenarios are the direction that the SCOR model has taken since its inception.

Bibliography

Abshire, Roger D. and Premeaux, Shane R., "Motor carrier selection criteria: perceptual differences between shippers and carriers," Transportation Journal 31, no. 1 (1991): 31-35.

Andersen Consulting, 1994 Transportation Quality Benchmarking Survey.

Bowman, Robert, "The state of quality in logistics," Distribution 91, no. 8 (1992): 90-96.

Brooks, Mary R., Performance Evaluation in the North American Transport Industry.
Halifax: Center for International Business Studies, Dalhousie University, 1995.

Chow, Garland and Poist, Richard F., "The measurement of quality of service and the transportation purchase decision," Logistics and Transportation Review 20, no. 1 (1984): 30-43.

CICA Handbook: Accounting Recommendations. Toronto: Canadian Institute of Chartered Accountants, March 1991.

Evans, James R. and Lindsay, William M., The Management and Control of Quality. St. Paul: West Publishing Co., 1989.

Evans, Kenneth, "Purchasing motor carrier services: an investigation of the criteria used by small business," Journal of Small Business Management 28, no. 1 (1990): 39-47.

Hall, Patricia K. and Wagner, William B., "Tank truck carrier selection by bulk chemical shippers: an empirical study," The Logistics and Transportation Review 32, no. 2 (1996): 231-246.

Harrington, Thomas C., Lambert, Douglas M., and Christopher, Martin, "A methodology for measuring vendor performance," Journal of Business Logistics 12, no. 1 (1991): 83-104.

King, Carol A., "Service quality assurance is different," Quality Progress 18, no. 6 (1985): 14-18.

Kleinsorge, Ilene K., Schary, Philip B., and Tanner, Ray D., "Data envelopment analysis for monitoring customer-supplier relationships," Journal of Accounting and Public Policy 11, no. 4 (1992): 357-372.

Lambert, Douglas M., Lewis, Christine M., and Stock, James R., "How shippers select and evaluate general commodities LTL motor carriers," Journal of Business Logistics 14, no. 1 (1993): 131-143.

MacDonald, Mitchell E., "How Johnson & Johnson built its world-class program," Traffic Management 33, no. 3 (1994): 38-40.

McKee, William, "Carriers make shipper quest easier," Distribution 93, no. 8 (1994): 88-91.

O'Sullivan, Mark, "ISO 9000: A growing trend," Canadian Transportation Logistics (Feb. 1995): 23-24.

Thomas, Jim, "Getting the payoff," Distribution 94, no. 2 (1995): 50-52.

