Science, Faculty of
Computer Science, Department of
University of British Columbia
Pang, Jee Fung
Master of Science - MSc, 1986

Abstract: With the widespread use of computers in today's industry, planning system configurations at computer sites plays an increasingly important role. The process of planning system configurations or determining hardware requirements for new or existing systems is commonly known as capacity planning among performance researchers and analysts.

This thesis presents a refined capacity planning process for centralized computing systems, with special attention to characterizing user workload for capacity planning. The objective is to make the entire process simpler for the computer user community, while relieving the capacity planner or performance analyst from having to rely on guesswork for the user workload performance factors.

The process is divided into four phases, namely: data collection, data reduction, workload/user classification, and modeling and performance analysis. The second and third phases are collectively known as user workload characterization.

The main objective of our workload characterization is to avoid any guesswork on the performance factors that cannot be easily measured. The results of the workload characterization process are specifically meant to be used in analytic and simulation modeling. Three software tools required for the data reduction, workload/user classification and performance analysis phases have been developed and are discussed in the thesis.
CHARACTERIZING USER WORKLOAD FOR CAPACITY PLANNING

By JEE FUNG PANG
B.Sc., University of British Columbia, 1982

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES (DEPARTMENT OF COMPUTER SCIENCE)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
October 1986
© Jee Fung Pang, 1986

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Computer Science

Event Data ==condenser==> Reduced Data ==user_class==> Model Data

where the tools that manipulate the respective data are embedded in the arrows (in italics). The event data is collected and dumped by the system event monitor, typically onto a tape. Condenser processes and reduces the raw event data into reduced data. User_class manipulates the reduced data to produce model data useful for modeling tools. The modeling tools use the model data to do performance validations and predictions.

2.2.1 DATA COLLECTION

During this phase, a system event-driven software monitor is used to record traces of selected events whenever they occur. These traces, known as event data, are usually dumped onto a tape because their size is normally very large.
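The data flow above can be sketched as a chain of transformations. A minimal illustration in Python (function names and record fields are hypothetical; the real condenser and user_class tools operate on binary tape and disk files):

```python
# Hypothetical sketch of the pipeline: event data -> condenser -> user_class.
# Field names ("user", "cpu_ms", "interactive") are illustrative only.

def condense(event_data):
    """Data reduction: aggregate per-user statistics from raw event records."""
    reduced = {}
    for rec in event_data:                    # one record per monitored event
        stats = reduced.setdefault(rec["user"], {"cpu_ms": 0.0, "events": 0})
        stats["cpu_ms"] += rec.get("cpu_ms", 0.0)
        stats["events"] += 1
    return reduced

def user_class(reduced):
    """Workload classification: group per-user statistics into classes."""
    model = {"interactive": {"cpu_ms": 0.0, "events": 0}}
    for stats in reduced.values():            # here: a single class for all users
        model["interactive"]["cpu_ms"] += stats["cpu_ms"]
        model["interactive"]["events"] += stats["events"]
    return model

events = [{"user": 30, "cpu_ms": 14.0}, {"user": 31, "cpu_ms": 8.0}]
model_data = user_class(condense(events))
```

The model data produced at the end of the chain is what the modeling tools consume for validation and projection.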
Prior to data collection, software probes must have been placed at pre-selected locations in the system. Each event has two probes to respectively indicate the start and end of the event. Each event record includes the CPU clock and the elapsed time clock so that statistics on the event can be calculated offline during the next phase. To minimize the monitor's overhead, all the information associated with an event record must be readily available in the system.

Note that this technique of data collection is far more accurate and introduces less system overhead than other measurement tools such as sampling tools and benchmarks. Sampling tools often do not provide enough of the information required by modeling techniques; examples are CPU burst time and page fault service time. This is because such tools merely examine existing system meters and counters at the end of each sampling period. In order to obtain more detailed information, they have to perform computations during the sampling period, thus introducing extra overhead. Most benchmarks merely consist of command script files. During the benchmark runs, extra overhead results from reading and timing the scripts (e.g. using UNIX's time command). In general, benchmarks are only useful for single-user calibration. They cannot be used for multi-user environments, nor for I/O-bound environments.

There are also hardware measurement tools that can be used for data collection (see [3] for details). The most common of them are hardware monitors. Although they are more accurate and efficient than software monitors, they lack flexibility. Also, the placement of hardware probes is restricted to the hardware level (i.e. it is difficult to interact with the operating system to obtain all the required user statistics).

2.2.2 DATA REDUCTION

Event data recorded by the monitor is typically very large. Each analysis through the raw event data typically takes one or more hours.
This is due to the slow speed of tape reads (or even disk reads). As mentioned earlier, performance analysts often need to go through the data several times for analyses. It is, therefore, desirable that the data be condensed so that each analysis can be done in a few minutes. A tool known as condenser is introduced in our capacity planning process to perform the task of data reduction; the resulting data is known as the reduced data. The data reduction process involves computing and aggregating all the statistics associated with all the monitored events. Details of the data reduction phase will be given in the subsequent chapters. The resulting reduced data is dumped into a binary file to be used for workload classification. An ASCII readable form of the reduced file is also printed for the analyst's convenience. Note that all the aggregate statistics given by condenser are on a per user basis. Appendix A gives a complete description of condenser.

2.2.3 WORKLOAD/USER CLASSIFICATION

The purpose of this phase is to provide workload data that fulfills the input requirements of both the analytic and simulation modeling tools. A tool known as user_class (user classification) takes the reduced data from condenser and groups the workload of users or jobs into classes. The objective here is to group users with the same workload characteristics into the same class. A common example is to group users running the same application program into the same class. The performance analysts are given the option of classifying by selecting specific users (identified by their user numbers) or letting user_class classify the users' workloads according to a prespecified formula. Each run through the reduced data typically takes several seconds, depending on the number of users and the effective speed of the input/output operations.
The model data produced by user_class contains a representation of the user workload derived from the measured data. A tabular readable form of the model data is also given by user_class for the analyst's convenience. Appendix B contains a complete description of user_class.

2.2.4 MODELING AND PERFORMANCE ANALYSIS

During this phase, the model input data from the user classification phase is used by the analysis tools for validation and projection. The input data is divided into two categories, namely, configuration data and workload data. The configuration data consists of information on the measured and projected system configurations, while the workload data is a representation of the user workload.

During the validation process, the analyst merely sets the configuration data for both the measured and projected systems to be identical. The results given by the modeling tools should be reasonably close to those given by user_class (i.e. the measured results) before the validation process is considered successful. Although the validation process may not be necessary for a well-known and proven model of a particular system, it is often carried out to ensure that the classification of the users has been done properly. In the projection process, the configuration data for the projected system is set accordingly. The results given by the analysis tools are then analyzed and compared to the capacity planning objective. If the objective is not met, the model data is modified and the projection process is repeated. Note that during either the validation process or the projection process, the workload data need not be modified nor adjusted. The values of the workload parameters required by the modeling tools are obtained from the measured data. As a result, the analyst need not do any guesswork or use the published results of other installations as part of the workload data.
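The validation step amounts to a closeness check between measured and modeled results. A minimal sketch, assuming a simple relative-error criterion (the 10% tolerance and the metric names are our own illustration, not figures prescribed by this process):

```python
# Hypothetical validation check: every modeled metric should fall within a
# relative tolerance of the corresponding measured value before projection.

def validated(measured, modeled, tolerance=0.10):
    """Return True if each modeled metric is within `tolerance` of measurement."""
    return all(
        abs(modeled[k] - measured[k]) <= tolerance * abs(measured[k])
        for k in measured
    )

measured = {"response_time_s": 0.156, "throughput_tps": 0.50}
modeled  = {"response_time_s": 0.149, "throughput_tps": 0.52}
ok = validated(measured, modeled)   # both metrics within 10%, so True
```

Only after such a check succeeds would the analyst change the projected configuration data and rerun the model.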
In general, the choice of the modeling approach is left to the capacity planner. Depending on the objective and scope of his analysis, he can use existing modeling tools (simulation or analytic tools, or both), develop his own tools based on existing algorithms/methodologies, or implement a new model entirely from scratch. As a simple example, an analytic modeling tool known as qnets, based on the linearizer algorithm, is described in chapter 5.

CHAPTER 3
WORKLOAD CHARACTERIZATION

The phases of data reduction and workload classification in our capacity planning process are collectively called workload characterization. In general, workload characterization is the quantitative representation of the hardware and software resources utilized by users in a computer system. This chapter discusses the requirements of workload representation and how they can be fulfilled using data reduction and workload classification.

3.1 REQUIREMENTS

For our capacity planning process, the requirements of workload characterization are geared towards the requirements of both analytic and simulation modeling. These requirements are:

(1) Elapsed Time Independence. The workload representation should remain relatively invariant regardless of the measurement period, provided that the system is in steady state and the measuring period is not too short (e.g. more than 1 hour). Elapsed time independence can be achieved by representing workload on a per transaction per user basis. For example, the average CPU demand per transaction by a single user remains constant regardless of the duration of the measurement period. This assumes that the system is in, or near, steady state.

(2) Representativeness of All Resources. The resources used in the system must be properly and accurately represented. Note that this requires accurate measured data on the resources to be represented.
Data on the resources used can be measured using an event-driven monitor. Software probes are inserted at properly pre-selected areas. Workload data can then be derived from statistical information measured by these probes. The representativeness of the workload data can be verified from the results of the modeling tools.

(3) Independence of the Number of Users. The workload characteristics of a single user should not vary with the presence of other users in the measured system. By representing workload on a single user basis, the dependence on the number of users is removed. This also assumes that the workload representation does not contain statistics due to contention.

(4) Linear Dependence on Hardware Speeds. In other words, it should be possible to linearly extrapolate the workload data collected on one system configuration to that of another configuration based on the relative speeds of the hardware (see [3] for details). As long as the hardware speeds are provided to the modeling tools, the extrapolation is simple and can be done by the modeling tools. For example, if the CPU service time for a measured machine is n milliseconds, it will be n/2 on a machine that is twice as fast.

(5) Flexibility. The workload representation should allow for easy modification to reflect variations in the real system. For our purpose, we will restrict the flexibility. Our workload representation will be divided into two parts. The first part can easily be modified based on changes in the system configuration. The second part is the representation of the invariant user workloads.

(6) Compactness. This is the degree of detail with which a workload is represented. A more compact model is usually less detailed and less representative. In general, compactness is dictated by the availability of information from the measured data. Software probes are placed to monitor all the resources utilized.
Because each event is monitored, it is easy to obtain detailed information on the workload data. Our objective is to collect and represent workload to fulfill the requirements of modeling tools.

3.2 WORKLOAD REPRESENTATION

The data produced by the first three phases of our capacity planning process, namely, data collection, data reduction and workload classification, consist of three categories. They are measured statistics, configuration data and workload data. However, only condenser and user_class provide the user with a readable form of these data. The configuration data and workload data are collectively known as model data.

3.2.1 MEASURED STATISTICS

The measured statistics serve as a calibration of the measured systems. In other words, they help the analyst assess the system performance. They are dependent on system configuration and workload. The analyst can also use them directly to validate modeling tools by comparing the measured statistics with the output from the modeling tools. The measured statistics are given in two forms: system wide, and on a per user/class per transaction type basis. An explanation of transaction types will be given in section 3.3.

The system wide or global statistics are as follows:

(1) System throughput. This is equivalent to the system arrival rate for a system in steady state. It is the rate at which jobs or transactions are being serviced by the system.

(2) Device utilizations. The percentage of time each device is busy during the measurement period.

(3) Page fault rate. The rate at which page faults occur in the system.

The second form of statistics, which are on a per user/class per transaction basis, are as follows:

(1) Response times. On the average, the amount of real time it takes to complete a transaction.

(2) Throughputs. The rate at which transactions are being serviced at each service center. For terminals (which are essentially a delay service center), this is the rate at which transactions are being generated.
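Each global statistic above reduces to a simple ratio over the measurement period. A small sketch (all counts and names below are illustrative, not measured figures):

```python
# Hypothetical computation of the three global statistics from raw totals.
# elapsed_s: measurement period; device_busy_s: busy seconds per device.

def global_stats(elapsed_s, completed_transactions, device_busy_s, page_faults):
    return {
        "throughput_tps": completed_transactions / elapsed_s,  # jobs serviced/sec
        "utilization": {dev: busy / elapsed_s                  # fraction of time busy
                        for dev, busy in device_busy_s.items()},
        "page_fault_rate": page_faults / elapsed_s,            # faults/sec
    }

stats = global_stats(
    elapsed_s=1969.285,
    completed_transactions=978,
    device_busy_s={"disk1": 590.8, "disk2": 393.9},
    page_faults=24616,
)
```

The per-user/class statistics are computed the same way, but with the denominators taken per transaction rather than over the whole period.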
3.2.2 CONFIGURATION DATA

The configuration data is a representation of the system configuration. It is divided into two parts, namely the measured system configuration data and the projected system configuration data. The former set of data is for the system where data measurement was previously done. The latter set is for the system whose performance we wish to project. The components of each set of configuration data are:

(1) CPU type. The name of the processor or, alternatively, the processor speed.

(2) Memory size. The size of the memory in the system.

(3) Disks. The number of disk controllers, and the number of disks associated with each controller.

(4) User classes. The number of job classes, and the number of users in each class.

3.2.3 WORKLOAD DATA

The workload data is the actual representation of the user workload in the measured system. It is also made up of two parts. The first part represents the resource demands of the users or job classes. The second part describes the behaviour of the service centers (except the CPU processor). The demand on a resource by a user or a job class is represented by the average resource service time and the rate of demand. Note that the representation is on a per user basis (actually on a per transaction type basis as well). The resource demands represented are:

(1) CPU demand. The average CPU burst time and the number of CPU bursts per transaction.

(2) I/O demand. The average I/O service time and the number of I/Os per transaction.

(3) Page fault CPU demand. This is for CPU used to service page faults. The representation is the average page fault CPU burst time and the number of page fault CPU bursts per transaction.

(4) Think time. The average interval at which jobs are generated at the terminal. This can also be viewed as the average time a user spends "thinking" before he generates a transaction.
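The per-class resource demands listed above could be carried in a record such as the following (the field names and the sample values are our own illustration, not the tools' actual file format):

```python
# Hypothetical container for part one of the workload data: the per-class
# resource demands. Units are milliseconds throughout.

from dataclasses import dataclass

@dataclass
class ClassDemand:
    cpu_burst_ms: float        # average CPU burst time
    cpu_bursts_per_txn: float  # number of CPU bursts per transaction
    io_service_ms: float       # average true I/O service time
    ios_per_txn: float         # number of true I/Os per transaction
    pf_cpu_burst_ms: float     # average page fault CPU burst time
    pf_bursts_per_txn: float   # page fault CPU bursts per transaction
    think_time_ms: float       # average think time at the terminal

    def cpu_demand_ms(self):
        """Total non-page-fault CPU demand per transaction."""
        return self.cpu_burst_ms * self.cpu_bursts_per_txn

demand = ClassDemand(51.25, 2.0, 5.0, 2.0, 3.0, 3.0, 3715.0)
```

Representing each demand as an (average service time, rate) pair is exactly what queueing-network model inputs expect.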
The resource behaviour is also represented by the resource service time and the rate at which the resource is used. However, the representation is on a per center per transaction basis.

(1) True disk I/O. A true disk I/O operation is any I/O that is not due to a page fault. The representation is the average I/O service time and the rate of I/O requests per disk per transaction.

(2) Page fault disk I/O. A page fault disk I/O operation is any I/O that is caused by a page fault. The representation is the average page fault I/O service time and the rate of page fault I/O per disk per transaction.

3.3 TRANSACTION CLASSES

In the industrial world, computer users' perception of response time is sometimes different from that reported by performance analysts and capacity planners. As an illustration, consider an environment where there are terminal users running simple commands and large compilations simultaneously. For the analysts, the response time is often expressed in terms of the average response times of the commands and the large compilations. It may be difficult to assess the system performance based on the average response time alone. Also, from the performance point of view, the users are usually less concerned with the response times of large compilations, and are more interested in the response times of the simple commands.

The inadequacy of average response times was addressed in E. Lazowska's thesis [10]. For our purpose, we will use the following simple illustration. An average response time of 10 seconds, for example, could mean that the simple commands take an average of 10 seconds to complete when there are few large compilations. This would indicate that the system is performing poorly. On the other hand, this response time could also indicate excellent system performance if there are many large compilations. In this case, the response time for the simple commands would be very small.
As a solution to the above problem, it is necessary to classify user transactions into different classes based on their resource demands. This should not be confused with the user workload classification mentioned in the capacity planning process. Transaction classification is essentially a subclassification of the user workload. At the moment, the capacity planning tools subclassify transactions into three types, namely, micro transactions, normal transactions and large transactions. Details of these transaction types are given below, assuming a 1 MIPS machine.

A micro transaction is any transaction that utilizes less than 10 milliseconds of pure CPU usage. Examples are keystroke commands of visual editors (e.g. EMACS, vi), and trivial UNIX commands such as date and echo.

A normal transaction is any other transaction that utilizes less than 100 milliseconds of pure CPU time. Most commands and small programs in any operating system fall into this type. Typical examples are the UNIX command ls and small compilations.

A large transaction is any transaction that uses more than 100 milliseconds of CPU time. Most commercial packages, large compilations and scientific applications constitute large transactions.

CHAPTER 4
RESOURCE DEMAND REPRESENTATION

As described in the last chapter, the resources used in a system will be represented by their service times and frequency of use. This representation is the one most commonly used by performance analysts. The only resource that cannot be represented this way is the system memory. Description of the memory representation is deferred to the next chapter. This chapter describes how resource utilization data are manipulated and represented during the data reduction and workload classification phases.

4.1 EVENT TRACE

In order to produce meaningful data for condenser, the software monitor must record specific information.
This information includes:

(1) Event Group. This is used to distinguish the system events. Each event group has its own characteristics, usually very distinct from other groups. Examples of event groups are transactions, page faults and I/O events.

(2) Event Type. For the purpose of condenser, this is primarily used to define the occurrence and duration of any event group. The two possible event types are start event and end event. The former type denotes the actual start of an event group, while the latter denotes the end of an event group. Note that, in general, software monitors typically have more than two event types for various uses.

(3) Real Clock. The system real time clock that gives the elapsed time.

(4) CPU Clock. The CPU processor clock that gives the total processor time used. Most systems maintain a CPU clock for each user.

(5) Auxiliary Information. This varies depending on the event group. For example, for an I/O event, the auxiliary information should include a drive number that identifies the disk where the I/O is taking place.

In order to record the occurrence of events, software probes are placed at appropriate places in the system software. At the occurrence of each event, the probe will result in a subroutine call to record all the information related to the probe, i.e. the event group, the event type, the system real time clock, the CPU clock for the user, and some auxiliary information. More detailed information on the format and contents of an event record is given in Appendix A.

For an easier understanding of the process, an example of a typical trace will be used. Consider the following trace for a particular user:
Event  Event Group  Event Type  Real Clock (ms)  CPU Clock (ms)  Aux
  1    Transaction  start            2179              30
  2    Page fault   start            2182              32
  3    I/O          start            2185              35        Disk1
  4    I/O          end              2190              38        Disk1
  5    I/O          start            2190              38        Disk1
  6    I/O          end              2193              41        Disk1
  7    Page fault   end              2198              46
  8    I/O          start            2220              60        Disk2
  9    I/O          end              2225              63        Disk2
 10    Transaction  end              2300              85
 11    Transaction  start            6015              85
 12    I/O          start            6050             100        Disk2
 13    I/O          end              6055             103        Disk2
 14    Transaction  end              6207             241

where both clock values are given in units of milliseconds.

4.2 DATA REDUCTION

The primary function of condenser is to gather and accumulate statistics between the start and end events of all the desired event groups. Details of the required statistics were given earlier in Section 3.2. The basic operation used to calculate most statistics is to simply compute the difference in the clock values given in corresponding start and end events of a transaction group. From the trace given above, for example, the first page fault event group (given by events 2 and 7) uses 14 milliseconds of CPU and takes 16 milliseconds of real time. Note that the 14 milliseconds of CPU time also includes 6 milliseconds used to perform two page fault disk operations.

A revised approach is used to calculate the statistics in fragments and attribute them to the appropriate event groups. For simplicity, we will only consider the CPU statistic. First, we define a CPU slice to be the CPU time used between any two events. In the above example, the page fault event has 5 CPU slices. Two slices were used to perform the corresponding page fault disk operations. As a result, the page fault event uses 8 milliseconds of CPU time to perform the page fault and 6 milliseconds to perform the associated disk operations. The elapsed time (or response time) can also be calculated in a similar manner.

Note that we are also interested in other statistics such as the number of other events within a particular event group. Examples are the number of I/Os for a page fault and the number of page faults per transaction. These statistics can easily be calculated by counting the occurrences of those events within an event group.
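The slice-attribution idea can be sketched in code. A minimal version replays the page fault portion of the trace above and credits each CPU slice to the innermost active event group (the event encoding and group names are our simplification of the real record format):

```python
# Hypothetical slice attribution for one user's trace. Each event is a tuple
# (group, type, cpu_clock_ms); "pf" is a page fault, "pf_io" a page fault I/O.

def attribute_cpu(events):
    """Split CPU into page-fault handling vs. page-fault disk I/O slices."""
    totals = {"pf_cpu": 0.0, "pf_io_cpu": 0.0}
    context = []                       # stack of currently open event groups
    prev_cpu = None
    for group, etype, cpu in events:
        if prev_cpu is not None and context:
            slice_ms = cpu - prev_cpu  # CPU slice since the previous event
            if context[-1] == "pf_io":
                totals["pf_io_cpu"] += slice_ms
            elif context[-1] == "pf":
                totals["pf_cpu"] += slice_ms
        if etype == "start":
            context.append(group)
        elif etype == "end" and context:
            context.pop()
        prev_cpu = cpu
    return totals

# Page fault between CPU clocks 32 and 46, with two page fault disk I/Os:
trace = [("pf", "start", 32), ("pf_io", "start", 35), ("pf_io", "end", 38),
         ("pf_io", "start", 38), ("pf_io", "end", 41), ("pf", "end", 46)]
totals = attribute_cpu(trace)   # {"pf_cpu": 8.0, "pf_io_cpu": 6.0}
```

The 8 ms of page fault handling and 6 ms of page fault disk CPU match the fragment accounting described above; true transaction CPU and elapsed time are accumulated the same way with additional contexts.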
Aggregate statistics are then calculated by adding and averaging all the above statistics on a per transaction basis. Some of the average statistics for the two transactions given in the above table are listed below:

Number of CPU bursts = 4
Average CPU burst time = 51.25 ms
Number of CPU bursts per transaction = 2
Number of page faults = 1
Number of I/Os per transaction = 2
Average page fault burst time = 3.0 ms
Number of I/Os per page fault = 2
Average disk service time per disk = 3.0 ms
Total visits to disks = 2
Average paging disk service time per disk = 2.75 ms
Total visits to paging disk = 2
Average think time = 3715 ms
Average response time = 0.1565 seconds
Throughput = 0.4965 transactions per second
CPU utilization = total CPU used / elapsed time = 5.238%

A more detailed example of the output statistics given by condenser is given in Appendix A.

NOTE: A CPU burst is defined as the CPU time used between true I/O events (i.e. excluding I/Os due to page faults), or between the start of a transaction and the start of a true I/O event, or between the end of a true I/O and the end of a transaction event. As an example, a transaction with two true I/Os will have three CPU bursts. In general, a transaction with n true I/Os will have n+1 CPU bursts. Similarly, a page fault CPU burst is defined as the CPU time used between page fault I/Os, or between the start of a page fault and the start of a page fault I/O event, or between the end of a page fault I/O event and the end of a page fault. Hence, a page fault with n page fault I/Os will have n+1 page fault CPU bursts.

4.3 WORKLOAD CLASSIFICATION

To perform workload classification, user_class has to group the statistics of all the users in a particular class. The process of grouping the statistics is simple and straightforward. It involves adding the corresponding statistics for all the users and recalculating the average statistics.
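Sketched in code (field names are illustrative), the grouping step re-expands each user's averages into totals, sums them across the class, and re-averages:

```python
# Hypothetical grouping of the per-user CPU statistics into one class.
# Per-user averages must be weighted by their counts before re-averaging.

def group_cpu(users):
    total_cpu = sum(u["avg_burst_ms"] * u["bursts"] for u in users)
    total_bursts = sum(u["bursts"] for u in users)
    total_txns = sum(u["transactions"] for u in users)
    return {
        "avg_burst_ms": total_cpu / total_bursts,     # class average burst time
        "bursts_per_txn": total_bursts / total_txns,  # class bursts per transaction
    }

users = [
    {"avg_burst_ms": 8.2, "bursts": 200, "transactions": 20},  # user A
    {"avg_burst_ms": 9.0, "bursts": 170, "transactions": 21},  # user B
]
cls = group_cpu(users)   # avg_burst_ms = 8.57 ms, bursts_per_txn = 9.02 (rounded)
```

Weighting by counts rather than averaging the averages directly is what keeps the class statistics consistent with the underlying totals.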
As an example, let us assume that from the reduced data provided by condenser, there are two users in a class, and there is only one class. For simplicity, let us consider only the CPU statistics. User A requires an average of 8.2 milliseconds of CPU burst time, and has a total of 200 CPU bursts and 20 transactions. User B requires an average of 9.0 milliseconds of CPU burst time, and has a total of 170 CPU bursts and 21 transactions. By grouping them into a class, they used up a total of (8.2*200) + (9.0*170) or 3170 milliseconds of CPU. Also, they have a total of 370 CPU bursts and 41 transactions. The resulting average statistics will be an average CPU burst time of 8.57 milliseconds, and 9.02 CPU bursts per transaction. All other statistics, except memory statistics, can be obtained in a similar manner.

The average statistics calculated using this grouping and averaging process are used directly to represent the workload of all the users in a class, as described in Chapter 3. Detailed output and model data provided by user_class are given in Appendix B.

CHAPTER 5
MEMORY REPRESENTATION

The representation of memory demand by users has always been a problem, mainly because very few tools can accurately and efficiently collect data on memory demand. In our capacity planning process, we elect to use Chamberlain's lifetime equation [1], which has long been proven to approximate lifetime behavior. This equation is given below:

    L = 2b / (1 + (c/m)^2)

where L is the lifetime, m is the active memory held by a user, and b, c are constants of the equation. The lifetime is defined as the mean CPU time used between successive page faults in a user transaction.

5.1 DATA COLLECTION

The system monitor should provide the lifetime values and the active memory m_i for each user at every page fault. For systems where the active memory held by each user is not easily obtainable (i.e. without causing unnecessary overhead), one can set m_i to be the average available memory.
This can be done by dividing the total memory by the number of active users during a particular page fault. Note that this requires the monitor to register the number of active users at every page fault event, and it usually under-estimates m. At the moment, condenser indirectly calculates the m values based on information given in the event data. Details on the computation will be given later.

5.2 CURVE FITTING

For each user, given the values of L_i and m_i (where i ranges from one to the number of page faults), our objective is to obtain the values of b and c that fit the lifetime function as closely as possible. Because the sample size, or the number of page faults during a data collection period, is typically very large, an iterative approximation technique is needed. This means that we cannot use the standard least squares approximation, because that algorithm requires all the sample data to be available first. After some research, the best possible solution was to apply the absolute deviation technique. A few simple experiments showed that the difference in the results produced by the absolute deviation technique is within 2% of those produced by the least squares approximation method.

The second problem is that the lifetime function is not a linear equation, making any method of solving a matrix of equations (required for approximation techniques) non-trivial. However, it is possible to translate the lifetime function into a linear equation, with the b, c values obtained by substitution.
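A sketch of the linearized fit using running sums, so that the samples never need to be stored. This uses a plain least-squares solve on the substituted variables u = -L/m^2 and v = L (the absolute deviation variant used in practice differs in detail, and the data below are synthetic):

```python
import math

# Hypothetical fit of the lifetime function L = 2b/(1 + (c/m)^2).
# Substituting u = -L/m^2, v = L linearizes it to v = C*u + B with
# C = c^2 and B = 2b, which running sums can solve incrementally.

def fit_lifetime(samples):
    """samples: iterable of (L_i, m_i) pairs; returns (b, c)."""
    n = su = sv = suu = suv = 0.0
    for L, m in samples:               # running sums only; nothing is stored
        u, v = -L / (m * m), L
        n += 1
        su += u
        sv += v
        suu += u * u
        suv += u * v
    # Normal equations for v = C*u + B:
    C = (n * suv - su * sv) / (n * suu - su * su)
    B = (sv - C * su) / n
    return B / 2.0, math.sqrt(C)       # back-substitute: b = B/2, c = sqrt(C)

# Synthetic samples generated from b = 5.0 ms, c = 100.0 pages:
pts = [(2 * 5.0 / (1 + (100.0 / m) ** 2), m) for m in (50, 80, 120, 200, 400)]
b, c = fit_lifetime(pts)               # recovers b close to 5.0, c close to 100.0
```

Because only the five running sums are kept, the fit can be updated at every page fault event without retaining the full sample set.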
The entire derivation process is given below, using the absolute deviation approximation.

Given a set of data L_i and m_i, where i = 1, ..., n and n is the total number of page faults, we want to minimize the error of the following:

    sum_{i=1}^{n} [ L_i - 2b / (1 + (c/m_i)^2) ]^2

Using the substitutions:

    u_i = -L_i / m_i^2,   v_i = L_i,   C = c^2,   B = 2b,

the above lifetime equation becomes v_i - C*u_i = B, or

    v_i = C*u_i + B

Since u_i and v_i can be directly obtained from the measured values of L_i and m_i, our objective now is to minimize the error:

    sum_{i=1}^{n} [ v_i - (C*u_i + B) ]^2

For the absolute deviation approximation, the conditions below must hold:

    0 = d/dC sum_{i=1}^{n} [ v_i - C*u_i - B ]^2 = 2 sum_{i=1}^{n} (v_i - C*u_i - B)(-u_i)

and

    0 = d/dB sum_{i=1}^{n} [ v_i - C*u_i - B ]^2 = 2 sum_{i=1}^{n} (v_i - C*u_i - B)(-1)

These simplify to the following normal equations:

    C sum u_i^2 + B sum u_i = sum u_i v_i
    C sum u_i + B n = sum v_i

condenser [-input] [-output] [-reduce] [-help] [-version] [-windowbuffer] [-windowtime] [-fullhelp]

Condenser takes event data as input and reduces the data to be used by the capacity planning user-classification utility user_class. The input may be from a tape or from a disk file. The reduced file will be in binary form, but an ASCII form of the reduced file will be written in the output file. If no options are given, condenser will prompt the user for the necessary set of input options. Condenser supports the following options:

    -input, -i
    -output, -o
    -reduce, -r
    -help, -h
    -version, -v
    -fullhelp, -fh
    -windowbuffer, -wb
    -windowtime, -wt

The following paragraphs summarize all the condenser options, which can be selected in any order.

-input, -i
Takes a file name as an argument (for the input file). If this option is omitted, condenser will prompt the user for an input file name. Also, condenser will always continue to prompt the user if it fails to open the associated file. Currently, the event data on tape is assumed to be at the beginning of the tape.
APPENDIX A  CONDENSER 1.0

-reduce, -r
Takes a file name where the reduced data is to be stored. The user will be queried before an existing file is overwritten. If this option is omitted, condenser will prompt the user for a reduced file name. The reduced file must be a disk file.

-output, -o
Takes an output file name as an argument. If the output file exists, the user will be queried before it is overwritten. If this option is omitted, condenser will prompt the user for an output file name. The output file must be a disk file. Note that the output produced by condenser is essentially a readable form of the reduced data.

-help, -h
Prints the command line format on how to invoke condenser.

-windowbuffer, -wb
Turns on the buffer windowing option. The user will be prompted for the starting buffer number and the number of buffers to be processed.

-windowtime, -wt
Turns on the time windowing option. The user will be prompted for the starting time in the data and the duration desired. This time is the real time in the event data.

-version, -v
Prints the condenser version stamp plus the date and time that it was built.

-fullhelp, -fh
Prints this full help information on how to use condenser.

A sample run is given below (with the user's input in boldface):

    OK, condenser
    [CONDENSER Rev. 1.6 - 1986]
    Enter input file name ? EVENT_DATA
    Enter output file name ? EVENT_OUTPUT
    Enter reduced file name ? EVENT_REDUCE
    Monitor started on 05/25/84 15:55:03.512
    Monitor Version: 1 on 4.2BSD
    Monitor User Name: root
    User Number: 30
    CPU: VAX 11/750
    Memory: 2048 pages
    Maximum users: 32
    REMARK:
    Elapsed time = 1969.285 seconds
    Number of events processed = 13310
    Total blocks/buffers read = 93
    OK,
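Conceptually, the two windowing options act as a filter over the event stream: only events whose buffer number or real time falls inside the requested window are processed. A minimal sketch of the time-window test (hypothetical helper, not condenser's actual code):

```c
/* Keep only events whose real time lies in [start, start + duration).
 * Times are in milliseconds, matching the event record's real time
 * field.  (Illustrative helper; condenser's own check differs.)   */
static int in_time_window(double real_time_ms,
                          double start_ms, double duration_ms)
{
    return real_time_ms >= start_ms &&
           real_time_ms <  start_ms + duration_ms;
}
```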
2.2.2 User Output
Each item of the statistics printed by condenser in the output file is always made up of two values; namely, the total count and the total usage. For I/O statistics, for example, the total count is the total number of I/O operations and the total usage is the total I/O time used. A description of all the statistics is given below:

Response times
Response times for all interactive users, on a per user per transaction basis.

Think times
Think times for all interactive users, on a per user per transaction basis.

True I/O
The pure I/O operations, excluding any I/O caused by page faults. Any disk queueing statistics are also excluded. This is given on a per user per transaction basis.

Page Fault I/O
The I/O operations caused by page faults only. Any disk queueing statistics are excluded. This is given on a per user per transaction basis.

True CPU
The pure CPU usage (i.e. excluding any CPU time for page faults). This is given on a per user per transaction basis.

Page Fault CPU
The CPU usage for handling page faults only. This is given on a per user per transaction basis.

Physical Page Fault
Page faults that actually cause one or more I/O operations, given on a per user per transaction basis. It should be pointed out that the usage statistics for physical page faults are average elapsed times and not service/virtual time.

Virtual Page Fault
Page faults that do not cause any I/O operation at all. This is given on a per user per transaction basis. The usage statistics for virtual page faults are also average elapsed times and not service/virtual time.

Disk I/O
The true disk I/O operations, on a per user per disk basis. Queueing at the disks is excluded from the statistics.

Disk PF I/O
The disk page fault I/O operations, on a per user per disk basis. Queueing statistics at the disks are excluded.

Login/Logout
The login and logout of terminal users and child processes.

Lifetime Function
This is unlike all the above statistics.
The approximated b and c parameters will be printed for each user.

Transient
This is due to the event monitor being shut down before any end events are encountered (i.e. for those events that have been "started").

A sample output file is given below:

    Monitor started on 05/25/84 15:55:03.512
    Monitor Version: 1 on 4.2BSD
    Monitor User Name: root
    Monitor User Number: 30
    CPU: VAX 11/750
    Memory: 2048 pages
    Maximum users: 32
    REMARK:
    Elapsed time = 1969.285 seconds
    Number of events processed = 13310
    Total blocks/buffers read = 93
    Number of users = 7

    RESPONSE TIMES (seconds)
              MICRO          NORMAL         LARGE          OVERALL
    USER      N   AVERAGE    N   AVERAGE    N   AVERAGE    N   AVERAGE
    1         0     0.000    1     0.039    1     0.033    2     0.636
    2        69     1.093   86     7.021   57    12.270  212     6.503
    3         1     0.000   19     0.046   42     0.616   62     0.432
    TOTAL    70     1.077  106     5.705  100     7.253  276     5.092

    THINK TIMES (seconds)
              MICRO          NORMAL         LARGE          OVERALL
    USER      N   AVERAGE    N   AVERAGE    N   AVERAGE    N   AVERAGE
    1         0     0.000    1     5.567    1     2.000    2     3.783
    2        69     1.783   86     3.291   56     2.885  211     2.690
    3         1     0.052   18    22.449   42    30.794   61    27.828
    TOTAL    70     1.758  105     6.597   99    14.717  274     8.295

    TRUE I/O WITHOUT QUEUE STATISTICS (ms)
              MICRO          NORMAL         LARGE          OVERALL
    USER      N   AVERAGE    N   AVERAGE    N   AVERAGE    N   AVERAGE
    1         0     0.000    0     0.000   21    31.025   21    31.025
    2         0     0.000    0     0.000  253    17.439  253    17.439
    3         0     0.000    1    12.121  244    26.391  245    26.333
    TOTAL     0     0.000    1    12.121  518    22.207  519    22.187

    PAGE FAULT I/O WITHOUT QUEUE STATISTICS (ms)
              MICRO          NORMAL         LARGE          OVERALL
    USER      N   AVERAGE    N   AVERAGE    N   AVERAGE    N   AVERAGE
    2         0     0.000    5    22.424   68    16.488   73    16.895
    3         0     0.000    0     0.000  112    18.534  112    18.534
    TOTAL     0     0.000    5    22.424  180    17.761  185    17.887

    TRUE CPU BURST STATISTICS (ms)
              MICRO          NORMAL         LARGE          OVERALL
    USER      N   AVERAGE    N   AVERAGE    N   AVERAGE    N   AVERAGE
    1         0     0.000    1    36.864   21     5.071   22     6.516
    2        69     5.921   91    28.132  569    55.685  729    47.535
    3         1     1.024   20    38.810  415    33.597  436    33.761
    TOTAL    70     5.851  112    30.117 1005    45.506 1187    41.716

    PAGE FAULT CPU STATISTICS (ms)
              MICRO          NORMAL         LARGE          OVERALL
    USER      N   AVERAGE    N   AVERAGE    N   AVERAGE    N   AVERAGE
    2         0     0.000   10     1.229  328     1.861  338     1.842
    3         0     0.000    0     0.000  241     1.287  241     1.287
    TOTAL     0     0.000   10     1.229  569     1.618  579     1.611

    PHYSICAL PAGE FAULT STATISTICS (ms)
              MICRO          NORMAL         LARGE          OVERALL
    USER      N   AVERAGE    N   AVERAGE    N   AVERAGE    N   AVERAGE
    2         0     0.000    5    26.061   68    20.410   73    20.797
    3         0     0.000    0     0.000  112    24.729  112    24.729
    TOTAL     0     0.000    5    26.061  180    23.098  185    23.178

    VIRTUAL PAGE FAULT STATISTICS (ms)
              MICRO          NORMAL         LARGE          OVERALL
    USER      N   AVERAGE    N   AVERAGE    N   AVERAGE    N   AVERAGE
    2         0     0.000    0     0.000  192     2.857  192     2.857
    3         0     0.000    0     0.000   17     3.030   17     3.030
    TOTAL     0     0.000    0     0.000  209     2.871  209     2.871

    DISK I/O WITHOUT QUEUE STATISTICS (ms)
              DISK 8         OVERALL
    USER      N   AVERAGE    N   AVERAGE
    1       116    29.363  116    29.363
    2       253    17.439  253    17.439
    3       248    26.075  248    26.075
    23      724    14.494  724    14.494
    24      330    14.105  330    14.105
    25        4    23.485    4    23.485
    30      118    34.361  118    34.361
    TOTAL  1793    18.729 1793    18.729

    DISK PF I/O WITHOUT QUEUE STATISTICS (ms)
              DISK 8         OVERALL
    USER      N   AVERAGE    N   AVERAGE
    2        73    16.895   73    16.895
    3       112    18.534  112    18.534
    TOTAL   185    17.887  185    17.887

    LOGIN STATISTICS (seconds)
    USER NO   NO. TRANS   AVERAGE

    LOGOUT STATISTICS (seconds)
    USER NO   NO. TRANS   AVERAGE

    LIFETIME FUNCTION PARAMETERS
    USER         B           C      AVG MEM
    1        4.2064    762.8945    1679.0278
    2        0.2990     38.0003      46.5000
    3        4.7094    113.9048     167.4931

    TRANSIENT STATISTICS (ms)
              I/O                      CPU
    USER      N        TOTAL        N        TOTAL
    1        95    2754.5455       94      368.6400
    3         3      15.1515        3      145.4080
    23      724   10493.9394      740    32059.3920
    24      330    4654.5455      347    20831.2320
    25        4      93.9394        3       63.4880
    30      118    4054.5455      117     1562.6240

    TRANSIENT TIMES (seconds)
    USER   RESPONSE   THINK
    1 0 2 4 7 2 7 0 2 1 7 1 8 5 1 3 176 11

    TRANSIENT LOGIN/LOGOUT (seconds)
    USER    LOGIN    LOGOUT
    2        7361         0

2.3 Program Interfaces
The most important interfaces are the layouts of the raw data that condenser reads and writes. These data are the event data read by condenser and the reduced data that condenser writes for user_class.

2.3.1 Event Data
The event data, which can be either on a tape or on disk, must be made up of fixed-size buffers (usually 4096 bytes each). Every buffer is made up of event records. The format of each record is as follows:

    NAME                    SIZE         DATA TYPE
    Length                  2 bytes      binary short
    Event Group             1 byte       binary
    Event Type              1 byte       binary
    User Number             2 bytes      binary short
    CPU time                4 bytes      binary long
    Real time               4 bytes      binary long (microsecs)
    Auxiliary Information   Length-18    variable

The first one or two records of each set of event data must contain header information. All header records must have an associated event group of 0. The layout of the header (i.e.
auxiliary information of the header event record) is given as follows:

    NAME                  SIZE       DATA TYPE
    Event record          18 bytes   see above for layout
    Date                  6 bytes    ASCII (MMDDYY)
    Minutes               2 bytes    binary short
    Seconds               2 bytes    binary short
    Ticks                 2 bytes    binary short
    Tick Rate             2 bytes    binary short
    Monitor user number   2 bytes    binary short
    Monitor user name     32 bytes   ASCII
    UNIX version length   2 bytes    binary short
    UNIX version          16 bytes   ASCII
    Memory size           2 bytes    binary short
    Number of users       2 bytes    binary short

Following the header buffer will be the contents of the UNIX page map. Hence, the first buffer will only contain header information.

2.3.2 Reduced Data
Condenser also writes all its statistics to a file to be read in by user_class. All data are written in binary format. The statistics are preceded by condenser's header, whose format is as follows:

    NAME                      SIZE       DATA TYPE
    Month                     2 bytes    binary short
    Day                       2 bytes    binary short
    Year                      2 bytes    binary short
    Hour                      2 bytes    binary short
    Minutes                   2 bytes    binary short
    Seconds                   2 bytes    binary short
    Ticks                     2 bytes    binary short
    Monitor version           2 bytes    binary short
    Monitor user no           2 bytes    binary short
    Monitor user name         32 bytes   ASCII
    CPU timer                 2 bytes    binary short
    Real timer                2 bytes    binary short
    UNIX version stamp        16 bytes   ASCII
    CPU type/name             16 bytes   ASCII
    Memory size               2 bytes    binary short
    Number of users           2 bytes    binary short
    Number of events          2 bytes    binary short
    Number of event buffers   2 bytes    binary short
    Length of remark          2 bytes    binary short
    Elapsed time              4 bytes    double
    Maximum user number       2 bytes    binary short
    Maximum disk number       2 bytes    binary short
    Remark                    varying    ASCII

Following the header are all the statistical matrices and arrays. Their sizes are dependent on the maximum user number and maximum disk number (which are given in condenser's header).
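Because every matrix in the reduced file is stored as one flat binary array in row-major order, reading it back only requires the dimensions from condenser's header. A minimal sketch of writing and re-reading one such matrix with (i * no_of_columns) + j indexing (hypothetical helpers, not condenser's or user_class's own code):

```c
#include <stdio.h>
#include <stdlib.h>

/* element (i, j) of a flattened rows-by-cols matrix */
#define ELEM(m, i, j, cols) ((m)[(i) * (cols) + (j)])

/* Write a rows-by-cols matrix of doubles as one flat binary array. */
static void write_matrix(FILE *fp, const double *m, int rows, int cols)
{
    fwrite(m, sizeof(double), (size_t)rows * (size_t)cols, fp);
}

/* Read it back; returns NULL on allocation failure or a short read. */
static double *read_matrix(FILE *fp, int rows, int cols)
{
    size_t count = (size_t)rows * (size_t)cols;
    double *m = malloc(count * sizeof(double));
    if (m != NULL && fread(m, sizeof(double), count, fp) != count) {
        free(m);
        return NULL;
    }
    return m;
}
```

Note that this sketch assumes the reader and writer agree on byte order and on the size of a double, which holds when both run on the same machine, as condenser and user_class do.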
The details of the statistical data are given below:

    STATISTIC         DIMENSION               DATA TYPE   DESCRIPTION
    recorded          max_user                short       flags for recorded users
    resp_tot          NTRANS by max_user      double      total response times
    resp_n            NTRANS by max_user      long        number of transactions
    think_tot         NTRANS by max_user      double      total think times
    think_n           NTRANS by max_user      long        total idle transactions
    io_noq_tot        NTRANS by max_user      double      total I/O without queue usage
    io_noq_n          NTRANS by max_user      long        total I/Os without queue
    pf_io_noq_tot     NTRANS by max_user      double      total PF I/O usage
    pf_io_noq_n       NTRANS by max_user      long        total PF I/Os
    cpu_tot           NTRANS by max_user      double      total true CPU burst time
    cpu_n             NTRANS by max_user      long        total no. of true CPU bursts
    pf_cpu_tot        NTRANS by max_user      double      total PF CPU burst time
    pf_cpu_n          NTRANS by max_user      long        total no. of PF CPU bursts
    disk_noq_tot      max_drive by max_user   double      total true disk I/O usage
    disk_noq_n        max_drive by max_user   long        total no. of true disk I/Os
    pf_disk_noq_tot   max_drive by max_user   double      total PF disk I/O usage
    pf_disk_noq_n     max_drive by max_user   long        total no. of PF disk I/Os
    lftb              max_user                double      b parameter
    lftc              max_user                double      c parameter

3. Program Design

3.1 Design Overview
The general algorithm of condenser is to match start and end event types of the associated event group and calculate the appropriate statistics.
An overview of the algorithm is as follows:

    Process command line
    User input from terminal
    Get event data header
    Get page map
    Compute active memory size for each user
    Allocate storage
    while more event records
        classify Event_Group
        classify Event_Type for each Group
        if Start_Event
            store timer values
        if End_Event
            calculate statistics by current timer values - stored timer values
    Spread transient statistics
    Compute aggregate statistics
    Print event header and all statistics
    Reduce statistics for user_class
    Clean up transient statistics

3.2 Internal Data Structures

3.2.1 Constants
The following constants are used to define the dimensions of matrices and arrays.

    #define MAXUSRPLUS    257    /* maximum number of users + 1       */
    #define MAXDRIVESPLUS  17    /* maximum number of disk drives + 1 */
    #define NTRANSPLUS      4    /* number of transaction classes + 1 */

3.2.2 Types
The structures below are used to store special statistics such as memory.

    typedef struct lt_struct {        /* lifetime function structure      */
        double lftp0, lftp1, lftq1,   /* the first five are cumulative    */
               lftr0, lftr1,          /*   sums used to compute the final */
                                      /*   two parameters                 */
               lftb,                  /* "b" parameter                    */
               lftc;                  /* "c" parameter                    */
    } LFTSTRUCT;

3.2.3 Data Structures
For each set of statistical data collected (e.g. CPU usage), there are always two associated values; namely, the total usage (e.g.
total CPU usage) and the total number of transactions (e.g. total number of CPU bursts). Except for login/logout statistics, all other statistics are given on a per user per transaction basis (or per user per disk basis for disk statistics). The login/logout statistics are given merely on a per user basis.

    /* cumulative/aggregate statistics per user per transaction */
    double **cpu_tot,          /* cumulative CPU usage              */
           **resp_tot,         /* cumulative response times         */
           **think_tot,        /* cumulative think times            */
           **io_noq_tot,       /* cumulative I/O without queue      */
           **pf_io_noq_tot,    /* cumulative PF I/O without queue   */
           **pf_cpu_tot,       /* cumulative PF CPU usage           */
           **pf_tot,           /* cumulative physical page faults   */
           **vpf_tot;          /* cumulative virtual page faults    */
    long   **cpu_n,            /* total CPU bursts                  */
           **resp_n,           /* total number of command line      */
           **think_n,          /*   transactions                    */
           **io_noq_n,         /* total no. of I/Os without queue   */
           **pf_io_noq_n,      /* total PF I/Os without queue       */
           **pf_cpu_n,         /* total CPU bursts for PF           */
           **pf_n,             /* total physical page faults        */
           **vpf_n;            /* total virtual page faults         */

    /* aggregate disk statistics on per user per disk basis */
    double **disk_noq_tot,     /* aggregate disk usage              */
           **pf_disk_noq_tot;  /* aggregate PF disk usage           */
    long   **disk_noq_n,       /* total disk I/Os without queue     */
           **pf_disk_noq_n;    /* total PF disk I/Os                */

    double *login_tot,         /* aggregate login elapsed time      */
           *logout_tot;        /* aggregate logout elapsed time     */
    long   *login_n,           /* total times logged in             */
           *logout_n;          /* total times not logged in         */

The following are used to store temporary accumulative statistics on a per user basis.
    /* temporary statistics on a per user basis only */
    double *tcpu_tot,          /* accumulative CPU usage            */
           *tio_noq_tot,       /* accumulative I/O no queue usage   */
           *tpf_io_noq_tot,    /* PF I/O no queue usage             */
           *tpf_cpu_tot,       /* page fault CPU usage              */
           *tpf_tot,           /* page fault service time           */
           *tvpf_tot;          /* virtual page fault service time   */
    long   *tcpu_n,            /* no. of CPU bursts                 */
           *tio_noq_n,         /* no. of I/Os without queue         */
           *tpf_io_noq_n,      /* no. of PF I/Os without queue      */
           *tpf_cpu_n,         /* PF CPU bursts                     */
           *tpf_n,             /* no. of physical page faults       */
           *tvpf_n;            /* no. of virtual page faults        */

The event header consists of the date and time structure and other system information. These are all mapped into the following variables:

    char  month[2],            /* month of date                     */
          day[2],              /* day of date                       */
          year[2];             /* year of date                      */
    short min,                 /* minutes of date                   */
          sec,                 /* seconds of date                   */
          tick,                /* ticks of date                     */
          tic_rate,            /* tick rate in ticks/second         */
          userno;              /* event user number                 */
    char  username[32];        /* event user name                   */
                               /* END OF TIMEDAT structure          */
    short vlen;                /* UNIX version stamp length         */
    char  version[16];         /* UNIX version stamp                */
    short cpuid,               /* index to CPU names                */
          memory,              /* memory size in pages              */
          nusers,              /* number of users                   */
          mon_version,         /* monitor version number            */
          cpu_tic_rate;        /* processor tick rate (Rev 3)       */
                               /* END OF event HEADER FORMAT        */

Each event record will be mapped into the following:

    short  event_group,        /* event group number                */
           event_type,         /* event type number                 */
           user_no;            /* user number that caused the event */
    double cpu_time,           /* CPU usage in milliseconds         */
           real_time;          /* elapsed time in milliseconds      */
    short  aux_length;         /* length of the auxiliary info      */
    char  *aux_info;           /* auxiliary information             */
    short *pagemap;            /* contains UNIX HMAP page map       */

3.3 Module Design
For the modules below, a design/execution level number is included to indicate the module's position in condenser's hierarchical algorithm. A brief description of each module's algorithm is also included.

main(argc, argv) - Level 0
The main program. Does command line processing and calls the level 1 routines.

    int  argc;
    char **argv;

    Process command line options
    User_Input();      /* process user's terminal input         */
    Get_Header();      /* get event's header information        */
    Post_Init();       /* post initialization                   */
    Driver();          /* process event data here               */
    Transient();       /* spread transient statistics           */
    Aggregate();       /* compute aggregate statistics          */
    Print_Output();    /* print out all statistics              */
    Reducer();         /* reduce all statistics for user_class  */
    Cleanup();         /* clean up transient statistics         */

User_Input(ifname, ofname, rfname) - Level 1
Simply prompts the user for the appropriate file names.
Also, prompts for window ranges if the appropriate flags are turned on.

    char *ifname, *ofname, *rfname;

    if input file name (ifname) not given, prompt the user
    if output file name (ofname) not given, prompt the user
    if reduced file name (rfname) not given, prompt the user
    if window_time, prompt for time range
    if window_buffer, prompt for buffer range

Get_Header() - Level 1
Reads in the event header information and stores it in memory. The number of header records read depends on the event revision. The first record is common for all event records. Subsequent records only have useful information in the auxiliary field.

    read in first record       /* common for all event revs  */
    get size of page map       /* size of page map dumped    */
    discard current buffer     /* for compatibility only     */
    read in page map           /* raw form of page map       */

Post_Init() - Level 1
Does any initialization that requires event header information. Specifically, allocates storage for all statistical structures and initializes them to default values.

Driver() - Level 1
The actual high level driver that classifies event groups.
    while more event records
        case (event_group) of
            1: Group1()        /* user command level transaction */
            2: Group2()        /* login/logout event             */
            3: Group3()        /* page fault event               */
            4: Group4()        /* disk I/O event                 */
        end_case

Aggregate() - Level 1
Computes all aggregate statistics by adding all rows and columns of every statistical matrix and array. The algorithm is straightforward.

Print_Output() - Level 1
Prints all statistics into the output file. The algorithm is straightforward.

Reducer() - Level 1
Reduces all statistics to minimize space.

    prepare and write condenser header
    for every matrix
        convert it into an array which can be indexed using
            (i * no_of_columns) + j
        where i, j are indices of the matrix
    write out all converted matrices and arrays

Cleanup() - Level 1
Handles events that were still active when the event monitor was shut down. The algorithm at this stage is to simply dump all the transient statistics for user_class to process.

Group1() - Level 2
The user command level event group. The following is done for the user causing the event:

    if start_event
        reset temporary statistics
    else if end_event
        classify transaction according to CPU usage
        copy temporary statistics to global statistics

Group2() - Level 2
The event group is user login/logout. This includes the login/logout of terminal users, remote users, phantoms and child processes. The following is done for the event user only:

    call Group1()    /* treat login/logout events as transactions */
    if start_event
        reset login temporary statistics
        copy logout temporary statistics to global statistics
    else if end_event
        reset logout temporary statistics
        copy login temporary statistics to global statistics

Group3() - Level 2
The page fault event group. Under 4.1BSD or higher systems, we can have either a virtual or a physical page fault event. Note that the pf_on flag for each user can have one of the three values TRUE, FALSE, and TRUE_PF. We only do the following for the event user:

    if start_event
        set pf_on flag to TRUE
        reset temporary statistics
    else if end_event
        if (pf_on is TRUE_PF)
            copy temporary statistics to physical PF statistics
        else
            copy temporary statistics to virtual PF statistics
        reset pf_on to FALSE

Group4() - Level 2
The disk I/O event group. An I/O event can be caused by a page fault or be a simple I/O operation. For the user responsible for the event, we do the following:

    if start_event
        if (pf_on is TRUE)
            set pf_on = TRUE_PF
        reset temporary I/O statistics
    else if end_event
        copy temporary statistics to global statistics

Transient() - Level 2
Spreads transient statistics into uniform transactions.
    for each recorded user
        if temporary cpu usage < 2 * LARGE_LIMIT
            set no_of_trans to 1
        else    /* have a transaction every 100 seconds */
            set no_of_trans to elapsed_time / 100.0
        spread out all other statistics into global statistics,
            namely cpu, pf_cpu, io, pf_io
        add to resp_tot the sum of cpu_tot + pf_cpu_tot + io_tot
        add to think_tot any remaining idle time
        set resp_n and think_n to no_of_trans

3.4 Design Issues
The major concern of condenser's design is memory usage. Because the event header gives the maximum number of configured users, this parameter is used for dynamic storage allocation. The storage for all the statistical matrices and arrays should not be allocated until all user interaction has been completed, in order to minimize condenser's startup time. As a result, the storage is allocated in the procedure Init_Stats().

For login and logout events, condenser treats them as normal terminal transactions. For example, terminal logins/logouts are treated identically to Group 1 events. Also, a child process (including login through logout) is treated as a user command (i.e. a Group 1 event).

3.5 Standards
Condenser is developed to be used with a system monitor on 4.2BSD. If the program is to be used with monitors developed on other systems, it is important that the event data conform to the format described in earlier sections.

3.6 Implementation Language
The language used to develop condenser is the C programming language. The main reasons for using this language are the need for bit, byte, and address manipulation, and its good support on UNIX systems.

APPENDIX B  USER_CLASS 1.0

1.
Proposal

Before reading this document, the reader must be familiar with the functionality of condenser and the requirements of the CAPP package.

1.1 The Problem
All modeling tools require a measurable representation of workload as input. The tools condenser and user_class of CAPP serve to extract workload statistics from event data and produce the input data using workload characterization and classification. In other words, condenser primarily deals with workload characterization while user_class mainly performs workload classification.

In general, the size of the data produced by a system event monitor is usually too large to allow the user to repetitively go through the data to collect different sets of data. Each such set of data is typically made up of a cluster of users. The process of forming such a cluster is known as user classification.

1.2 Goals and Non-Goals
The primary goal of user_class is to allow the user to perform user classification without having to go through the raw event data again. This can easily be accomplished with the use of condenser (which can actually be viewed as an event data pre-processor as well). In other words, it is essential that the user can do user classification in a short amount of time.

As for the user classification process, we should allow the user to do arbitrary classification. This allows the user to assign specific users (by their user numbers) to different groups. User_class will also allow other kinds of user classification, such as automatic classification by workload.

The third goal of user_class is to automate the modeling phase of the capacity planning process as much as possible. To accomplish this, user_class will pipeline data to the modeling tools. In other words, the output from user_class can readily be used as input for modeling tools without user intervention or modification to the pipelined data.

Note that user_class is not a product by itself. It will only accept data from condenser.
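The arbitrary-classification goal above amounts to assigning each user number to at most one class and folding per-user totals into per-class totals. A minimal sketch under assumed names (an illustration only, not user_class's actual data structures):

```c
#define MAX_USER 257   /* matches condenser's MAXUSRPLUS */

typedef struct {
    double tot;        /* summed usage for the class     */
    long   n;          /* summed transaction count       */
} ClassStat;

/* class_of[u] gives user u's class, or -1 if unclassified;
 * each user therefore lands in at most one class. */
static void classify(const int class_of[MAX_USER],
                     const double user_tot[MAX_USER],
                     const long   user_n[MAX_USER],
                     ClassStat *cls, int nclasses)
{
    for (int c = 0; c < nclasses; c++) {
        cls[c].tot = 0.0;
        cls[c].n   = 0;
    }
    for (int u = 0; u < MAX_USER; u++) {
        int c = class_of[u];
        if (c >= 0 && c < nclasses) {
            cls[c].tot += user_tot[u];
            cls[c].n   += user_n[u];
        }
    }
}
```

A class's mean service demand for a modeling tool is then cls[c].tot / cls[c].n whenever cls[c].n is nonzero.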
2. Program Function

2.1 Terminology

User class
    A group of users that share some common characteristics. The most typical characteristic is the user's type of workload or application (e.g. EMACS users or DBMS users).

User classification
    The process whereby users are grouped into different classes. Users can be assigned to classes by user number or by workload. Each user can be in at most one class. Note that this is essentially the same as workload classification.

Model data
    The data that user_class produces to be used as input data by modeling tools. This data is essentially made up of parameter names and parameter values.

Workload characterization
    The manner in which we represent workload. In general, workload characteristics include CPU, I/O and memory demands.

2.2 User Interface

To invoke user_class, the command line must conform to the following format (with abbreviations in boldface):

    user_class [-input <file>] [-output <file>] [-reduce <file>] [-help]
               [-version] [-model <file>] [-fullhelp] [-verbose] [-force]
               [-classtype [user|workload]]

If no options are given, user_class will prompt the user for the necessary set of input options. User_class supports the following options:

    -input, -i        -output, -o       -reduce, -r       -help, -h
    -version, -v      -model, -m        -fullhelp, -fh    -verbose
    -force            -classtype, -ct

The following screens summarize all the user_class options.

-input, -i
    Takes a file name as an argument (for the input file). The input file should have been saved by a previous user_class run, and should contain the lists of users for all classes. If this option is omitted, user_class will prompt the user for classification information.

-reduce, -r
    Takes a reduced file name as an argument. This file must be produced by condenser. User_class will prompt the user for a reduced file name if this option is omitted.

-model, -m
    Takes a file name where the model data is to be stored.
    The user will be queried before an existing file is overwritten. User_class will prompt the user for a model file name if this option is omitted.

-output, -o
    Takes an output file name as an argument. If the output file exists, the user will be queried before it is overwritten. User_class will prompt the user for an output file name if this option is omitted. Note that the output file merely contains a tabular form of the model data, plus some global system statistics.

-help, -h
    Prints the command line format on how to invoke user_class.

-fullhelp, -fh
    Prints this full help information on user_class usage.

-version, -v
    Prints the user_class version stamp plus the date and time that it was built.

-verbose
    Tells user_class to print traces of its operations on the terminal.

-force
    If this option is used, the user will not be prompted for confirmation before an output file is overwritten. Such files are the model file and the output file.

-classtype, -ct
    Allows the user to do user classification. If the argument is workload, the users will be classified according to workload. Otherwise, the user can do arbitrary classification by user number. The default is classification by user number.

A sample terminal session is given below, with the user's typed input in boldface:

    user_class -classtype user
    [USER_CLASS Rev. 1.0 - 1986]
    Enter reduced file name ? ev60-3.red
    Enter output file name ? out
    Enter model output file name ? mod
    CLASSIFYING USERS:
    1 2 3 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26
    27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48
    49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 126 128 130
    Enter number of user classes ? 6
    NOTE: for the following, terminate user number list with '$'
    Enter name for class 1?
    CLASS1
    Enter user numbers for class 1? 6 12 18 24 30 36 42 48 54 60 $
    Enter name for class 2? CLASS2
    Enter user numbers for class 2? 7 13 19 25 31 37 43 49 55 61 $
    Enter name for class 3? CLASS3
    Enter user numbers for class 3? 8 14 20 26 32 38 44 50 56 62 $
    Enter name for class 4? CLASS4
    Enter user numbers for class 4? 9 15 21 27 33 39 45 51 57 63 $
    Enter name for class 5? CLASS5
    Enter user numbers for class 5? 10 16 22 28 34 40 46 52 58 64 $
    Enter name for class 6? CLASS6
    Enter user numbers for class 6? 11 17 23 29 35 41 47 53 59 $
    Enter save file name ? sav
    Reading data from 'ev60-3.red' ...

NOTES

1. If a user number appears in more than one class, user_class will assign it to the class to which it was first assigned. In other words, each user can belong to at most one class. A warning message will be printed whenever a class re-assignment is attempted.

2. If there are users without any class assigned, user_class will automatically create a dummy class for them.

2.2.1 User Input

User_class allows the user to save user classification input data into a file. This data tells user_class how to classify the users in the reduced data. The input/saved file is made up of parameter names, each followed by one or more parameter values, with each parameter on its own line. A /* delimits the start of a comment. The parameter names supported at the moment are:

class_type
    Specifies how the user classification is to be done. Possible values are user and workload. In the former case, user_class will classify the users in the data according to their user numbers. In the latter case, the classification is done using weighted functions of their workloads.
no_class
    The number of classes desired.

class_name
    A user-specified name used to identify a class (apart from the class number chosen by user_class).

user_class
    Used to enumerate a list of user numbers belonging to a class.

work_load
    Used to specify the upper limit of a class's weighted workload.

A sample copy of a saved file is given below:

    class_type user          /* user classification type
    no_class 6               /* number of user classes
    /* User classification by user numbers
    class_name 1 CLASS1
    user_class 1 6 12 18 24 30 36 42 48 54 60
    class_name 2 CLASS2
    user_class 2 7 13 19 25 31 37 43 49 55 61
    class_name 3 CLASS3
    user_class 3 8 14 20 26 32 38 44 50 56 62
    class_name 4 CLASS4
    user_class 4 9 15 21 27 33 39 45 51 57 63
    class_name 5 CLASS5
    user_class 5 10 16 22 28 34 40 46 52 58 64
    class_name 6 CLASS6
    user_class 6 11 17 23 29 35 41 47 53 59

2.2.2 User Output

User_class's output is essentially identical to that of condenser, except that the statistics are given on a per-class, per-transaction basis. However, user_class also gives system-wide statistics that are not given by condenser. They are:

System Throughput
    The system's throughput in number of transactions per second.

CPU Utilization
    The percentage of CPU time used for true CPU usage, for page faults, and in total, assuming that the system has only one CPU.

Page Fault Rate
    The number of true and virtual page faults per second, as well as the total page fault rate per second.

Disk Utilization
    The percentage of time each disk is busy doing true I/Os and page fault I/Os. The total disk busy time is also given.

Throughputs
    The throughputs (arrival rates) of all job classes in number of transactions per second.
A description of all other statistics is given below:

Response Times
    Response times for all interactive users on a per-class, per-transaction basis.

Think Times
    Think times for all interactive users on a per-class, per-transaction basis.

True CPU
    The pure CPU usage (i.e. excluding any CPU time used for page faults), given on a per-class, per-transaction basis.

Page Fault CPU
    The CPU usage for handling page faults only, given on a per-class, per-transaction basis.

True I/O
    The pure I/O operations, excluding any I/O caused by page faults. Any disk queueing statistics are also excluded. This is given on a per-class, per-transaction basis.

Page Fault I/O
    The I/O operations caused by page faults only. Any disk queueing statistics are excluded. This is given on a per-class, per-transaction basis.

Disk I/O
    The true disk I/O operations on a per-class, per-disk basis. Queueing at the disks is excluded from the statistics.

Disk PF I/O
    The disk page fault I/O operations on a per-class, per-disk basis. Queueing statistics at the disks are excluded.

Lifetime Function
    This is unlike all the above statistics. The approximated b and c parameters will be printed for all users.

A sample copy of user_class's output file is given below:

    USER_CLASS OUTPUT

    Monitor started on 03/17/86 14:20:21.163
    Monitor Version:      1 on 4.2BSD
    Monitor User Name:    root
    Monitor User Number:  75
    CPU:                  VAX 11/750
    Memory:               4096 pages
    Maximum users:        78
    Number of user classes: 3
    REMARK:

    Elapsed time = 8812.739 seconds
    Number of events processed = 122770
    Total blocks/buffers read = 817

    GLOBAL SYSTEM STATISTICS

    STATISTIC            TRUE        VIRTUAL     TOTAL
    SYSTEM THROUGHPUT    2.744 transactions/sec
    CPU UTILIZATION      38.6038%    0.3493%     38.9532%
    PAGE FAULT RATE      0.0044      0.0111      0.0156
    DISK 0               4.1738%     0.1612%     4.3350%
    DISK 1               0.1385%     0.2287%     0.3672%
    DISK 8               0.2788%     0.2142%     0.4930%
    DISK 9               0.0064%     0.0000%     0.0064%

    THROUGHPUTS (transactions/sec)
    CLASS    MICRO    NORMAL   LARGE    OVERALL
    1        1.316    1.060    0.114    2.491
    2        0.044    0.025    0.004    0.072
    3        0.000    0.050    0.131    0.181
    TOTAL    1.360    1.135    0.249    2.744

    CLASSIFICATION OF USERS
    USER CLASS   USER NUMBERS
    1            4 10 19 20 21 24 26 27 34 46
    2            49 50
    3            1 59 60 61 62 63 64 65 66 67 69 71 72 73 74 75 76 77 78

    RESPONSE TIMES (seconds)
             MICRO           NORMAL          LARGE           OVERALL
    CLASS    N      AVERAGE  N      AVERAGE  N     AVERAGE   N      AVERAGE
    1        11600  0.159    9344   0.445    1007  7.554     21951  0.620
    2        387    0.005    217    0.029    31    1.164     635    0.070
    3        0      0.000    440    0.032    1155  9.705     1595   7.037
    TOTAL    11987  0.154    10001  0.418    2193  8.597     24181  1.029

    THINK TIMES (seconds)
             MICRO           NORMAL          LARGE           OVERALL
    CLASS    N      AVERAGE  N      AVERAGE  N     AVERAGE   N      AVERAGE
    1        11601  3.165    9344   3.036    1007  9.356     21952  3.394
    2        387    7.475    217    51.862   31    110.782   635    27.687
    3        0      0.000    440    47.491   1155  109.533   1595   92.417
    TOTAL    11988  3.304    10001  6.052    2193  63.550    24182  9.904

    TRUE CPU BURST STATISTICS (msec)
             MICRO           NORMAL          LARGE            OVERALL
    CLASS    N      AVERAGE  N      AVERAGE  N       AVERAGE  N       AVERAGE
    1        0.528  3.465    0.427  21.918   0.217   137.473  1.172   35.018
    2        0.609  4.294    0.381  21.974   0.235   166.458  1.225   40.851
    3        0.000  0.000    0.314  24.482   14.690  104.858  15.004  103.175
    TOTAL    0.496  3.491    0.418  22.047   1.172   110.670  2.086   67.442

    PAGE FAULT CPU STATISTICS (msec)
             MICRO           NORMAL          LARGE           OVERALL
    CLASS    N      AVERAGE  N      AVERAGE  N      AVERAGE  N      AVERAGE
    1        0.002  2.432    0.003  5.798    0.068  8.694    0.073  8.384
    2        0.000  0.000    0.003  48.128   0.069  18.223   0.072  19.567
    3        0.000  0.000    0.065  3.659    0.199  50.667   0.263  39.139
    TOTAL    0.002  2.475    0.007  5.013    0.077  16.117   0.086  14.872

    TRUE I/O WITHOUT QUEUE TIMES (msec)
             MICRO           NORMAL          LARGE            OVERALL
    CLASS    N      AVERAGE  N      AVERAGE  N       AVERAGE  N       AVERAGE
    1        0.000  0.000    0.000  20.152   0.171   19.252   0.172   19.257
    2        0.000  0.000    0.039  23.758   0.186   22.393   0.225   22.632
    3        0.000  0.000    0.038  22.553   13.966  14.712   14.004  14.733
    TOTAL    0.000  0.000    0.004  22.384   1.082   15.400   1.086   15.428

    PAGE FAULT I/O WITHOUT QUEUE TIMES (msec)
             MICRO           NORMAL          LARGE           OVERALL
    CLASS    N      AVERAGE  N      AVERAGE  N      AVERAGE  N      AVERAGE
    1        0.002  21.212   0.003  20.952   0.090  18.268   0.095  18.425
    2        0.000  0.000    0.003  18.182   0.151  19.287   0.154  19.264
    3        0.000  0.000    0.065  19.153   0.352  19.149   0.416  19.149
    TOTAL    0.002  21.212   0.007  19.861   0.109  18.492   0.118  18.622

    TRUE PAGE FAULT TIMES (msec)
             MICRO           NORMAL          LARGE           OVERALL
    CLASS    N      AVERAGE  N      AVERAGE  N      AVERAGE  N      AVERAGE
    1        0.002  23.864   0.003  25.668   0.068  30.071   0.073  29.698
    2        0.000  0.000    0.003  19.697   0.069  51.791   0.072  50.395
    3        0.000  0.000    0.065  22.183   0.199  43.055   0.263  37.937
    TOTAL    0.002  23.864   0.007  23.524   0.077  32.814   0.086  31.830

    VIRTUAL PAGE FAULT TIMES (msec)
             MICRO           NORMAL          LARGE           OVERALL
    CLASS    N      AVERAGE  N      AVERAGE  N      AVERAGE  N      AVERAGE
    1        0.000  1.515    0.003  3.328    0.147  2.303    0.150  2.321
    2        0.002  3.030    0.076  2.399    0.447  2.113    0.524  2.157
    3        0.000  0.000    0.045  2.483    2.524  4.385    2.569  4.352
    TOTAL    0.000  1.818    0.007  2.746    0.311  3.409    0.319  3.392

    TRUE DISK I/O WITHOUT QUEUE TIMES (msec)
             DISK 0           DISK 1          DISK 8          DISK 9
    CLASS    N       AVERAGE  N      AVERAGE  N      AVERAGE  N      AVERAGE
    1        0.114   19.278   0.025  18.053   0.033  19.919   0.000  39.394
    2        0.192   21.908   0.000  0.000    0.033  26.840   0.000  0.000
    3        13.661  14.537   0.077  19.537   0.260  23.152   0.006  36.700
    TOTAL    1.010   15.061   0.028  18.327   0.048  21.199   0.000  37.778

             OVERALL
    CLASS    N       AVERAGE
    1        0.172   19.257
    2        0.225   22.632
    3        14.004  14.733
    TOTAL    1.086   15.428

    PAGE FAULT DISK I/O WITHOUT QUEUE TIMES (msec)
             DISK 0          DISK 1          DISK 8          OVERALL
    CLASS    N      AVERAGE  N      AVERAGE  N      AVERAGE  N      AVERAGE
    1        0.033  16.810   0.033  18.779   0.030  19.796   0.095  18.425
    2        0.002  51.515   0.099  19.240   0.054  18.360   0.154  19.264
    3        0.078  16.679   0.176  19.573   0.162  19.960   0.417  19.180
    TOTAL    0.035  16.832   0.044  19.017   0.040  19.789   0.118  18.629

    LIFETIME FUNCTION PARAMETERS
    CLASS    B            C
    1        2.7500e+02   1.3860e+02
    2        2.2753e+02   6.4103e+00
    3        8.2088e+02   1.0695e+02

    AVERAGE LIFETIMES
    CLASS    AVG. LTIME
    1        5.618351e+02
    2        6.909106e+02
    3        5.878784e+03

2.3 Program Interfaces

User_class interfaces with condenser and the modeling tools via files. The interface with condenser is a binary file containing all the reduced and aggregate statistics. The interface with the modeling tools is a text file containing the parameter names and values required by these tools. A detailed description of these interfaces is given in the next two subsections.

The knowledgeable user can also use the output from user_class as input to other modeling tools. In this case, the user has to know how to interpret this output and translate it into the input of whatever modeling tool is being used.

2.3.1 Condenser Interface

The reduced file written by condenser must begin with header information necessary for user_class to determine the sizes of all the data that follows. The header also contains a simplified form of the event data header that condenser obtains from the event data.
The format of the header is given below:

    NAME                    SIZE                DATA TYPE
    Month                   2 bytes             binary short
    Day                     2 bytes             binary short
    Year                    2 bytes             binary short
    Hour                    2 bytes             binary short
    Minutes                 2 bytes             binary short
    Seconds                 2 bytes             binary short
    Ticks                   2 bytes             binary short
    Monitor version         2 bytes             binary short
    Monitor user            2 bytes             binary short
    Monitor user name       32 bytes            ASCII
    UNIX version stamp      16 bytes            ASCII
    CPU type/name           16 bytes            ASCII
    Memory size             2 bytes             binary short
    Number of users         2 bytes             binary short
    Number of events        2 bytes             binary short
    Number of buffers       2 bytes             binary short
    Length of remark        2 bytes             binary short
    Elapsed time            4 bytes             double
    Maximum user number     2 bytes             binary short
    Maximum disk number     2 bytes             binary short
    Remark                  varying             ASCII
    Recorded users          2 * maxuser bytes   binary short

Following the header are all the statistical matrices and arrays. Their sizes depend on the maximum user number and maximum disk number (which are given in condenser's header).
The details of the statistical data are given below:

    STATISTIC         DIMENSION              DATA TYPE  DESCRIPTION
    recorded          max_user               short      flags for recorded users
    resp_tot          NTRANS by max_user     double     total response times
    resp_n            NTRANS by max_user     long       number of transactions
    think_tot         NTRANS by max_user     double     total think times
    think_n           NTRANS by max_user     long       total idle transactions
    io_noq_tot        NTRANS by max_user     double     total I/O without queue usage
    io_noq_n          NTRANS by max_user     long       total I/Os without queue
    pf_io_noq_tot     NTRANS by max_user     double     total PF I/O usage
    pf_io_noq_n       NTRANS by max_user     long       total PF I/Os
    cpu_tot           NTRANS by max_user     double     total true CPU burst time
    cpu_n             NTRANS by max_user     long       total no. of true CPU bursts
    pf_cpu_tot        NTRANS by max_user     double     total PF CPU burst time
    pf_cpu_n          NTRANS by max_user     long       total no. of PF CPU bursts
    disk_noq_tot      max_drive by max_user  double     total true disk I/O usage
    disk_noq_n        max_drive by max_user  long       total no. of true disk I/Os
    pf_disk_noq_tot   max_drive by max_user  double     total PF disk I/O usage
    pf_disk_noq_n     max_drive by max_user  long       total no. of PF disk I/Os
    pf_n              NTRANS by max_user     long       total true PFs
    vpf_n             NTRANS by max_user     long       total virtual PFs
    lft_b             max_user               double     b parameter
    lft_c             max_user               double     c parameter

2.3.2 Modeling Tools Interface

The model data file from user_class to be used by the modeling tools consists of parameter names and values. These parameters are the union of the input parameter requirements of these tools. The file is line-oriented, and each line must conform to the following format:

    Parameter Name   Parameter Value(s)   /* Comments

Note that each parameter name can have one or more values.
A sample subset of the model data file is given below: /\u00C2\u00BB*\u00C2\u00BB\u00C2\u00BB\u00C2\u00BB\u00C2\u00BB\u00C2\u00BB**\u00C2\u00BB\u00C2\u00BB\u00C2\u00BB\u00E2\u0080\u00A2\u00E2\u0080\u00A2\u00C2\u00BB\u00C2\u00BB\u00C2\u00BB*\u00E2\u0080\u00A2\u00C2\u00BB\u00C2\u00AB\u00C2\u00BB\u00C2\u00BB*\u00C2\u00AB*\u00E2\u0080\u00A2\u00C2\u00BB\u00E2\u0080\u00A2\u00C2\u00BB\u00C2\u00BB\u00C2\u00BB\u00E2\u0080\u00A2\u00E2\u0080\u00A2\u00C2\u00BB\u00E2\u0080\u00A2\u00C2\u00AB\u00C2\u00BB\u00E2\u0080\u00A2\u00E2\u0080\u00A2\u00C2\u00BB\u00C2\u00BB\u00C2\u00BB\u00C2\u00BB\u00C2\u00BB\u00C2\u00BB*\u00C2\u00BB\u00E2\u0080\u00A2\u00C2\u00BB\u00E2\u0080\u00A2\u00E2\u0080\u00A2*\u00C2\u00BB\u00C2\u00BB\u00C2\u00AB*/ / \u00E2\u0080\u00A2 P R O J E C T I O N P A R A M E T E R S * / s i m _ c p u _ t y p e P 8 5 0 / * C P U T y p e ( p r o j e c t i o n ) s i m _ m e m o r y _ s i z e 4 0 9 5 / * M e m o r y s i z e ( p r o j e c t i o n ) i n p a g e s s i m _ t i m e 8 8 1 3 / \u00E2\u0080\u00A2 S i m u l a t i o n t i m e ( s e c o n d s ) /\u00C2\u00BB\u00C2\u00AB\u00C2\u00BB\u00C2\u00BB*\u00E2\u0080\u00A2\u00C2\u00BB\u00C2\u00BB\u00C2\u00AB\u00E2\u0080\u00A2\u00C2\u00BB\u00C2\u00BB\u00C2\u00BB\u00C2\u00BB\u00C2\u00BB\u00E2\u0080\u00A2\u00C2\u00BB*\u00C2\u00BB\u00C2\u00BB\u00E2\u0080\u00A2\u00E2\u0080\u00A2\u00C2\u00BB\u00E2\u0080\u00A2\u00E2\u0080\u00A2\u00C2\u00BB*\u00E2\u0080\u00A2\u00C2\u00BB\u00E2\u0080\u00A2*\u00C2\u00BB\u00C2\u00BB\u00E2\u0080\u00A2*****\u00E2\u0080\u00A2**\u00C2\u00BB\u00E2\u0080\u00A2\u00E2\u0080\u00A2\u00C2\u00BB\u00C2\u00BB*\u00E2\u0080\u00A2\u00E2\u0080\u00A2\u00E2\u0080\u00A2\u00E2\u0080\u00A2\u00C2\u00BB\u00C2\u00BB*\u00C2\u00BB\u00C2\u00BB/ / \u00E2\u0080\u00A2 G E M D A T A P A R A M E T E R S * / 
    cpu_type P850              /* CPU Type (measured data)
    memory_size 4095           /* Memory size (measured data) in pages
    n_cont 2                   /* No. of disk controllers
    n_disk 0 2                 /* No. of disks in controller 0
    n_disk 2 2                 /* No. of disks in controller 2
    n_class 3                  /* Number of user classes

    /* USER CLASS 1 PARAMETERS */
    user_class 1 TERMINAL
    n_users 10                 /* number of users in class 1
    lft_b 2.7500e+02           /* lifetime function b parameter
    lft_c 1.3860e+02           /* lifetime function c parameter
    cpu 1 3.4646               /* avg CPU burst time per micro transaction
    cpu_n 1 0.5285             /* no. of CPU bursts for micro transactions
    cpu 2 21.9185              /* avg CPU burst time per normal transaction
    cpu_n 2 0.4266             /* no. of CPU bursts for normal transactions
    cpu 3 137.4730             /* avg CPU burst time per large transaction
    cpu_n 3 0.2173             /* no. of CPU bursts for large transactions
    pf_cpu 1 2.4320            /* avg PF CPU burst time per micro transaction
    pf_cpu_n 1 0.0022          /* no. of PF CPU bursts for micro transactions
    pf_cpu 2 5.7976            /* avg PF CPU burst time per normal transaction
    pf_cpu_n 2 0.0031          /* no. of PF CPU bursts for normal transactions
    pf_cpu 3 8.6937            /* avg PF CPU burst time per large transaction
    pf_cpu_n 3 0.0678          /* no. of PF CPU bursts for large transactions
    io 1 0.0000                /* avg I/O service time per micro transaction
    io_n 1 0.0000              /* no. of I/Os for micro transactions
    io 2 20.1515               /* avg I/O service time per normal transaction
    io_n 2 0.0009              /* no. of I/Os for normal transactions
    io 3 19.2520               /* avg I/O service time per large transaction
    io_n 3 0.1714              /* no. of I/Os for large transactions
    pf_io 1 21.2121            /* avg PF I/O service time per micro transaction
    pf_io_n 1 0.0022           /* no. of PF I/Os for micro transactions
    pf_io 2 20.9524            /* avg PF I/O service time per normal transaction
    pf_io_n 2 0.0032           /* no. of PF I/Os for normal transactions
    pf_io 3 18.2677            /* avg PF I/O service time per large transaction
    pf_io_n 3 0.0901           /* no. of PF I/Os for large transactions
    think 1 3.1652             /* avg think time per micro transaction
    think_n 1 11601            /* no. of trans for micro transactions
    think 2 3.0363             /* avg think time per normal transaction
    think_n 2 9344             /* no. of trans for normal transactions
    think 3 9.3563             /* avg think time per large transaction
    think_n 3 1007             /* no. of trans for large transactions
    disk 0 19.2780             /* mean I/O service time for disk 0
    disk_n 0 0.1143            /* total (ratio) I/Os for disk 0
    disk 1 18.0535             /* mean I/O service time for disk 1
    disk_n 1 0.0247            /* total (ratio) I/Os for disk 1
    disk 8 19.9188             /* mean I/O service time for disk 8
    disk_n 8 0.0330            /* total (ratio) I/Os for disk 8
    disk 9 39.3939             /* mean I/O service time for disk 9
    disk_n 9 0.0003            /* total (ratio) I/Os for disk 9
    pf_disk 0 16.8102          /* mean PF I/O service time for disk 0
    pf_disk_n 0 0.0327         /* total (ratio) PF I/Os for disk 0
    pf_disk 1 18.7786          /* mean PF I/O service time for disk 1
    pf_disk_n 1 0.0326         /* total (ratio) PF I/Os for disk 1
    pf_disk 8 19.7955          /* mean PF I/O service time for disk 8
    pf_disk_n 8 0.0301         /* total (ratio) PF I/Os for disk 8
    pf_disk 9 0.0000           /* mean PF I/O service time for disk 9
    pf_disk_n 9 0.0000         /* total (ratio) PF I/Os for disk 9

3. Program Design

3.1 Design Overview

The main objective of user_class is to collect groups of users and recompute the associated statistics for each group. A general view of the top-level algorithm is given below:

    Process command line
    Classify users into groups
    For each group of users:
        add each of their total usage statistics
        add each of their total count statistics
        the resulting class statistic is total usage / total count
    Compute aggregate statistics on a per-class basis
    Compute averages
    Output all class statistics in the output file
    Output the model data in the model file

3.2 Internal Data Structures

The following is the header of the reduced data produced by condenser.
    typedef struct header_type {
        short month,               /* month of date */
              day,                 /* day of date */
              year,                /* year of date */
              hours,               /* hours of time (military) */
              minutes,             /* minutes of time */
              seconds,             /* seconds of time */
              ticks;               /* ticks of time */
        short monitor_version,     /* Monitor version number */
              monitor_user;        /* Monitor user number */
        char  monitor_uname[32];   /* Monitor user name */
        char  os_version[16],      /* UNIX version stamp */
              cputype[16];         /* CPU name/type */
        short memory,              /* Memory size */
              nusers,              /* No. of configured users */
              nevents,             /* total no. of events */
              nbuffers,            /* total no. of buffers */
              rlen;                /* length of monitor remark */
        double elapsed;            /* elapsed time */
        short maxuser,             /* maximum user no. in data */
              maxdisk;             /* maximum disk no. in data */
    } REDUCED_HEADER;

3.3 Module Design

main(argc, argv) - Level 0

The main program. Does command line processing and calls level 1 routines.
    int argc;
    char **argv;

    Process command line options
    User_Input();      /* process user's terminal input */
    Get_Data();        /* input data from condenser */
    Classify();        /* classify the users */
    Aggregate();       /* recompute on a per class basis */
    Average();         /* calculate average values */
    Model_Output();    /* print out model data */
    Print_Output();    /* print user class output statistics */

User_Input(ifname, ofname, rfname) - Level 1

Simply prompts the user for the appropriate file names. Also prompts for window ranges if the appropriate flags are turned on.

    char *ifname, *ofname, *rfname;

    if input file name (ifname) not given
        prompt the user
    if output file name (ofname) not given
        prompt the user
    if reduced file name (rfname) not given
        prompt the user
    if window_time, prompt for time range
    if window_buffer, prompt for buffer range

Get_Data() - Level 1

Reads in all the reduced data from the reduced file.

Classify() - Level 1

Does the actual classification. The resulting statistics will be stored in the separate structures given above. During classification, the statistics of users in a common class are summed. For the lifetime function parameters, however, the average values of a single class's users are used to generate numerous sets of data points for the orthogonal approximation, which produces a resulting set of parameters.

Aggregate() - Level 1

Recomputes the aggregate statistics on a per class basis.
Average() - Level 1

Computes average values of all statistics, replacing the stored totals.

Model_Output() - Level 1

Prints the average statistics to the model data file, together with the model configuration data.

Print_Output() - Level 1

Calculates and prints global system statistics. Prints all average statistics to the output file.

3.4 Design Issues

The major concern in the design of user_class is the size of the structures for storing all statistics. Because condenser can provide information on the limits of these structures in the reduced file header, user_class can easily allocate the required space for the structures dynamically. This avoids the use of static structures (which would increase the startup time of user_class) and minimizes memory requirements.

During user classification, the b and c parameters provided by condenser for each user will be averaged for each of the 100 data points generated using the lifetime curve function.

3.5 Implementation Language

The language used to develop user_class is the C programming language. The main reasons for using this language are the need for bit and byte manipulation, the need for address manipulation, and its high portability among UNIX systems.

APPENDIX C    QNETS 1.0

1. Proposal

Before reading this document, the reader must be familiar with the model data provided by user_class. A qnets user should also be familiar with general capacity planning techniques.

This document describes the development of qnets, an analytical modeling tool that accepts model data from user_class and provides performance statistics of the system to be modeled.

1.1 Goals and Non-Goals

The goal of qnets is to be able to model as many systems as possible. Besides being general, the tool should provide results with sufficient accuracy. The input to qnets is the model data provided by user_class.
The analytical algorithm used by qnets is Linearizer (see "Linearizer: A Heuristic Algorithm for Queueing Network Models of Computing Systems" by K. M. Chandy and D. Neuse, CACM 25, 2 (February 1982), 126-134). Memory modeling has also been added to qnets (note that Linearizer itself does not model memory). Because qnets is based on Linearizer, the user should be aware of the algorithm's restrictions. In particular, he should know whether or not qnets can be used to model the system under study.

2. Program Function

2.1 Terminology

Performance indices
    The set of statistics that serves to calibrate a system's performance. Examples are utilizations, throughputs and response times.

Model validation
    The purpose of model validation is to ensure that a model representation (for example, a mathematical model) of a system correctly represents the system. The process involves validating the performance indices given by the model against the measured statistics of the system.

Workload characterization
    The manner in which we represent workload. In general, workload characteristics include CPU, I/O and memory demands.

Performance projection
    A validated model is evaluated using a representative workload to determine the performance indices of the projected system.

2.2 User Interface

To invoke qnets, the command line must conform to the following format (with abbreviations in boldface):

    qnets [-input ] [-output ] [-help] [-fullhelp] [-version] [-force]

If no options are given, qnets will prompt the user for the necessary set of input options. Qnets supports the following options:

    -input, -i
    -output, -o
    -help, -h
    -version, -v
    -fullhelp, -fh
    -force

The following screens summarize all the qnets options.

-input, -i
    Takes a file name as an argument (for the input file). The input file should be saved by a previous qnets run, and should contain lists of users for all classes. If this option is omitted, qnets will prompt the user for classification information.
-output, -o
    Takes an output file name as an argument. If the output file exists, the user will be queried before it is overwritten. Qnets will prompt the user for an output file name if this option is omitted.

-help, -h
    Prints the command line format on how to invoke qnets.

-fullhelp, -fh
    Prints this full help information on qnets usage.

-version, -v
    Prints the qnets version stamp plus the date and time that it was built.

-force
    If this option is used, the user will not be prompted for confirmation before an output file is overwritten. Such files are the model file and the output file.

A sample terminal session is given below, with the user's typed input in boldface:

    qnets
    [QNETS Rev. 1.0 - 1986]
    Enter input file name? model_data
    Enter output file name? outfile

2.2.1 User Input

The input file to qnets must be produced by user_class. Details of the input file and format are given in Appendix B.

2.2.2 User Output

The output produced by qnets consists of statistics identical to the measured statistics produced by user_class. They are as follows:

System Throughput
    The system's throughput in the number of transactions per second.

Page Fault Rate
    The number of page faults per second.

Utilizations
    The percentage of the time each service center is busy servicing jobs.

Throughputs
    The throughputs or arrival rates of all job classes in the number of transactions per second.

A sample copy of qnets's output file is given below (device 0 is the CPU):

    CPU: VAX 11/750     Memory: 4096 pages     Maximum users: 78
    Number of user classes: 3     Number of disks: 4

                         SERVICE TIMES (msec)
    CLASS   DEVICE 0   DEVICE 1   DEVICE 2   DEVICE 3   DEVICE 4
      0      35.0174    18.5243    22.2774    14.5929    39.3939
      1      40.8475    18.5392    19.2400    19.5699     0.0000
      2     103.1754    19.8468    20.8149    20.7756    36.7003

                           VISIT RATIOS
    CLASS   DEVICE 0   DEVICE 1   DEVICE 2   DEVICE 3   DEVICE 4
      0       1.1724     0.1646     0.0748     0.0793     0.0003
      1       1.2251     0.1945     0.1506     0.1143     0.0000
      2      15.0038    14.0266     0.8979     1.0161     0.0056

                        PROJECTED STATISTICS

    SYSTEM THROUGHPUT = 3.217966 trans/sec
    PAGE FAULT RATE   = 0.019027 per sec

    CLASS   THINK (sec)   RESPONSE (sec)   THRUPUT (/sec)
      0        3.3943         0.0062           2.9407
      1       27.6868         0.0098           0.0722
      2       92.4175         0.2550           0.2050

    CENTER   UTILIZATION
      0       44.1724 %
      1        5.1243 %
      2        0.7890 %
      3        0.9126 %
      4        0.0077 %

                          QUEUE LENGTHS
    CLASS   DEVICE 0   DEVICE 1   DEVICE 2   DEVICE 3   DEVICE 4
      0       9.9818     0.0094     0.0041     0.0047     0.0000
      1       1.9993     0.0003     0.0002     0.0002     0.0000
      2      18.9477     0.0442     0.0036     0.0044     0.0000

3. Program Design

3.1 Design Overview

The modeling algorithm used in qnets is known as Linearizer.
The overall design of qnets is as follows:

    Process command line
    Read in input file
    Calculate page fault rate from lifetime function
    Include disk visits due to page faults in the disk visit ratios
    Invoke Linearizer
    Output statistics

3.2 Internal Data Structures

The following are the major internal data structures used by qnets:

    #define MAXCENTER 30
    #define MAXCLASS  10

    typedef double MATRIX[MAXCLASS][MAXCENTER];
    typedef double UCMAT[5][MAXCLASS];
    typedef double DARR[MAXCLASS];
    typedef long   LARR[MAXCLASS];

    MATRIX Ser_t,      /* service times per class per device */
           Vst_r,      /* visit ratios per class per device */
           Res,        /* response times per class */
           Q,          /* queue lengths per class per device */
           disk, disk_n, pf_disk, pf_disk_n;    /* input parameters */
    UCMAT  cpu, cpu_n, io, io_n, pf_cpu, pf_cpu_n, think, think_n;
    DARR   Thk_t,      /* think times per class */
           Thpt,       /* throughputs per class */
           pfs,        /* page fault rates per class */
           N_usr,      /* size of each job class */
           lftb, lftc;

3.3 Module Design

main(argc, argv) - Level 0

The main program. Does command line processing and calls level 1 routines.
    int argc;
    char **argv;

    Process command line options
    Restore();         /* read in input file */
    Simplify();        /* compute page fault and disk VR */
    Linzr();           /* invoke Linearizer */
    Print_Output();    /* print all statistics */

Restore() - Level 1

Reads in all the input parameters from the input file. Stores all data into arrays.

Simplify() - Level 1

Aggregates transaction subclass statistics. Computes the page fault rate from the lifetime function. Computes disk visits due to page faults, and recomputes all disk visit ratios.

Print_Output() - Level 1

Prints all output statistics given by Linzr().

3.4 Design Issues

Unlike user_class and condenser, all the data structures used in qnets are static arrays. This is because the sizes of the arrays are comparatively small. If the user needs to model more service centers or more job classes than qnets currently supports, the constants MAXCENTER and MAXCLASS should be increased accordingly.

3.5 Implementation Language

The language used to develop qnets is the C programming language. The main reason for using this language is that it is portable among UNIX systems.