
Prototyping vs. Specifying
Evaluation of Data of a Software Engineering Class Project
Individual Study, Spring 1982
Thomas Seewaldt

Contents
1. Introduction
2. The source data
   2.1 Beginning questionnaire, product size, distribution of source code by function, documentation size, maintenance score
   2.2 Productivity, maintainability vs. product size and pages of listing
   2.3 Performance; performance vs. size, vs. effort, vs. maintainability, vs. size of design spec., and vs. programming experience
   2.4 Effort distribution by activity and phase
   2.5 Problem reports
3. Analysis of variance
4. Follow-up questionnaire
5. Comparison of the COCOMO prediction with the real effort
6. Interpretation of the project data
7. Comparison of the USC project with the UCLA project
8. Conclusion

Appendices
A  Awk program to count lines of Pascal source code
B  Results of calculations on the time sheets
C  Answers to the follow-up questionnaire
D  COCOMO estimation of the team projects
E  Program to enter time sheets and perform calculations on them

List of figures
1  Distribution of source code by function
2  Program size vs. development effort
3  Maintainability rating vs. program size
4  Maintainability rating vs. pages of listing
5  Performance score by group type
6  Performance score vs. program size
7  Performance score vs. development effort
8  Maintainability rating vs. performance score
9  Distribution of development effort by activity (absolute values)
10 Distribution of development effort by activity (relative values)
11 Distribution of total development effort by phase
12 Effort distribution by phase and activity (ave. group type 1)
13 Effort distribution by phase and activity (ave. group type 2)
14 Cumulative effort distribution by phase and activity (ave. group type 1)
15 Cumulative effort distribution by phase and activity (ave. group type 2)
16 Effort distribution by activity (USC vs. UCLA projects)

List of tables
1  Beginning questionnaire, product size, distribution of source code by function, documentation size, maintenance score
2  Maintainability rating
3  Productivity, maintainability vs. product size and pages of listing
4  Performance rating
5  Performance; performance vs. size, vs. effort, vs. maintainability, vs. size of design spec., and vs. programming experience
6  List of available time sheets
7  Effort distribution by activity
8  Effort distribution by phase and activity
9  Number of problem reports
10 Analysis of variance
11 Follow-up questionnaire
12 Answers to the follow-up questionnaire
13 COCOMO prediction vs. real effort
14 Comparison of USC data with UCLA data

1. Introduction

This paper presents the evaluation of data gathered during a software engineering course project in winter 1982 at the University of California, Los Angeles (UCLA). During this project, 7 groups of 2 or 3 people each developed a small software product: 4 groups wrote requirements and design documents before coding, while 3 groups built a prototype. Both the documents and the prototypes were reviewed by the lecturers, and an acceptance test took place at the end of the quarter. The data collected during this project were evaluated in spring 1982 in an individual study supervised by Barry W. Boehm and Terry Gray; the results of that study are assembled in this paper. The source data, the analysis results, and the assumptions made when collecting and analyzing the data are described in chapters 2 to 5 in order to give a solid basis for the final conclusions. Interpretation results and conclusions are gathered in chapters 6 and 8. Finally, chapter 7 compares the project results with the results of a similar project conducted at the University of Southern California in fall 1979.

2. The source data

2.1 Beginning questionnaire, product size, distribution of source code by function, documentation size, maintenance score

The first group of data in table 1 shows the team size, the programming and virtual machine experience, and the GPA of the team members. The data are taken from the background surveys filled out by each student at the beginning of the quarter. The next group of data shows the product size and the distribution of lines of code by function. The lines were counted with an awk program running under UNIX, developed for this purpose and documented in appendix A; a demonstration program showing which lines are counted is also included there. The distribution of source code by function is shown in figure 1. The next block of data shows the size of the delivered documentation material. At the end of the quarter, each student indicated which product he would prefer to maintain and ranked the products accordingly (table 2). The maintenance score is the sum of the ratings of each product multiplied by an adjustment factor, since not every product was rated by the same number of people. The lower the maintenance score, the more people preferred to maintain the product. The last group of data gives information about the development effort through certain deadlines; the procedure by which these data were collected is described in chapter 2.4.

2.2 Productivity, maintainability vs. product size and pages of listing

In table 3, different productivity measures are established. Productivity is calculated from the size of the product and the total development effort (DSI/MH total); the effort for programming, testing, and fixing (DSI/MH); and the effort for planning, designing, programming, testing, and fixing (DSI/MH). Figure 2 shows the relation between product size and total development effort graphically. A "documentation productivity" is established by comparing the delivered pages of documentation (without the draft user manual) with the documentation effort. The maintainability is compared with the size of the products and the pages of listing. To rate how much the values differ within the groups, the standard deviation is shown as an index next to the mean value; to make it comparable across groups, it is given not in absolute terms but in percent of the mean.
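As a minimal sketch of this index (the ratings below are hypothetical, and whether the original calculation used the population or the sample standard deviation is not stated):

import statistics

def percent_of_mean(values):
    # Standard deviation expressed in percent of the mean value,
    # the within-group variability index used in the tables.
    return 100.0 * statistics.pstdev(values) / statistics.mean(values)

# Hypothetical ratings of the products of one group type:
print(round(percent_of_mean([6.0, 7.5, 5.5, 7.0]), 1), "% of mean")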

The lower this percentage, the less the values are spread within the group. A more reliable score of how significant the differences between the groups are, compared with the variability within the groups, appears in chapter 3. Figures 3 and 4 show graphically the correlation between maintainability and the size of product and listing.

2.3 Performance; performance vs. size, vs. effort, vs. maintainability, vs. size of design spec., and vs. programming experience

The performance of the products was rated in an evaluation session. The rating procedure itself was not straightforward. First, a criteria matrix was assembled, similar to the criteria presented in /2/. Starting off with this matrix, it turned out that the rating process was somewhat unsatisfying. This was due mainly to the fact that the criteria forced the raters to test and rate following a very detailed procedure instead of experiencing the quality of the product as a normal user would; the approach also turned out to be very time consuming. Hence, 4 criteria were chosen according to which the products were rated after testing: functionality, frustration, learning, and tolerance. Functionality addresses the functions provided by the program, while the frustration score tries to measure how well the product behavior corresponds to the user's expectations and how easily the user can perform his task. Self-descriptiveness and ease of learning are addressed by the learning score, and the reaction to erroneous input by tolerance. In addition, a score was given for how well the program was debugged. The detailed ratings are shown in table 4. The score for normal performance in one category was 5; for performance above or below average, points were added or deducted. After testing a product, each rater scored it according to the above criteria, and the mean of these scores was taken as the product score.
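A minimal sketch of this scoring rule (the raters' point values below are hypothetical):

import statistics

# Hypothetical scores of three raters for one product; 5 is the score
# for normal performance, points are added or deducted around it.
scores = {
    "functionality": [6, 7, 5],
    "frustration":   [4, 5, 5],
    "learning":      [6, 6, 7],
    "tolerance":     [5, 4, 4],
}
product_score = {c: statistics.mean(v) for c, v in scores.items()}
print(product_score, "total:", sum(product_score.values()))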

Table 5 and figure 5 summarize the performance scores. In addition, the performance score is compared with product size, development effort, maintainability rating, pages of design specification, and average programming experience of the teams. Again, the standard deviation is given in percent of the mean in order to establish comparable values. The comparison is done for the total performance score without the bugs score (first line of each item) and with the bugs score (second line). Figure 6 shows the comparison between performance and size graphically, figure 7 the comparison between performance and development effort. The different correlations for the two group types shown in these figures are later confirmed by the analysis of variance (chapter 3). In figure 8, maintainability and performance are compared graphically; to invert the scaling on the performance axis, the performance score was subtracted from 22 and the result used as the score in figure 8.

2.4 Effort distribution by activity and phase

To monitor the effort spent on different activities, the students were given time sheets and asked to turn them in every week. The available time sheets are shown in table 6. For each missing time sheet, except those of the first week, a time sheet was assumed containing the average time that all students of the same group type spent during that week; when time sheets of the first week were missing, it was assumed that those students spent no time on the project. In order to handle the large number of time sheets, a program was developed to enter the data into the computer and to perform different calculations on them. This program is documented in appendix E; appendix B contains the results of several calculations performed on the time sheets. These results are the basis for the tables and figures presented in this chapter. Table 7 and figures 9 and 10 show the average effort distribution by activity for each group type, in total as well as in percent of the total. Table 8 and figures 11 to 15 show the effort distribution broken down by phase. Since the deadlines (SRR, PDR, prototype exercise) were on Wednesdays, Wednesday is used as the week boundary. However, in order to include every day in the calculation, the boundary of the first week was extended back to Monday (project start), and the boundary of the ninth week to Sunday (project finish).
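A minimal sketch of this substitution rule (the data layout and names are hypothetical; the actual entry and calculation program is reproduced in appendix E):

def impute_week(sheets, week):
    # sheets: reported hours per student of one group type for one week;
    # None marks a missing time sheet.
    turned_in = [h for h in sheets.values() if h is not None]
    average = sum(turned_in) / len(turned_in) if turned_in else 0.0
    for student, hours in sheets.items():
        if hours is None:
            # First week: assume no effort; otherwise use the group-type average.
            sheets[student] = 0.0 if week == 1 else average
    return sheets

print(impute_week({"s1": 12.0, "s2": None, "s3": 9.0}, week=4))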

2.5 Problem reports

In table 9 the number of problem reports is compared with the number of pages of the corresponding document. Again, the standard deviation is given in percent of the mean value.

3. Analysis of variance

In order to investigate differences between the data of the specifying and prototyping groups and between the data of the 2-person and 3-person groups, an analysis of variance was conducted. The analysis of variance is a statistical method that compares the variability of data within a group with the variability between the groups; a score is established indicating how significant the differences between the groups are. When calculating this score, not only the variability but also the size of the groups is taken into consideration: the smaller the groups, the larger the difference between them must be to be considered significant. The main data collected in chapter 2 were analyzed using the statistical package SAS, running on an IBM 3033. The analysis was conducted for two group configurations:
1. specifying groups vs. prototyping groups
2. 2-person groups vs. 3-person groups
The mean value and the significance score (PROB > F) for both group configurations and the different data are shown in table 10. In statistics, the score must be less than or equal to 0.05 to be considered significant; for our purpose, scores between 0.05 and 0.1 can indicate nearly significant differences. In table 10 the significant and nearly significant scores are underlined.
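The same kind of significance score can be reproduced with a one-way analysis of variance in current tools. The following sketch (a modern substitute, not the original SAS run) applies scipy's f_oneway to the group total efforts listed in appendix B; the returned p-value corresponds to PROB > F:

from scipy.stats import f_oneway

# Total development effort in man-hours per group (appendix B):
specifying  = [589, 498, 459, 789]   # groups 1a - 1d
prototyping = [323, 422, 232]        # groups 2a - 2c

f_stat, prob_f = f_oneway(specifying, prototyping)
print(f"F = {f_stat:.2f}, PROB > F = {prob_f:.3f}")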

4. Follow-up questionnaire

In order to collect more information about how certain factors might have influenced the project data, a follow-up questionnaire was assembled. All students who were still available (12 of the 18) were asked to fill it out. The questionnaire is shown in table 11. The answers are collected in table 12 according to the questions and the type of team the respondents were in. For instance, table 12 shows that 3 members of 3-person teams said that, had they been in a 2-person team, their product outcome would not have been different. Appendix C contains the detailed answers of the students.

5. Comparison of the COCOMO prediction with the real effort

Table 13 compares the real effort with the prediction of the COCOMO model. The first block contains the nominal and the adjusted total effort. In the second block, the effort for different activities is compared with the corresponding model prediction:
- plan with the planning and reading effort
- design with the design effort
- programming and integration with the programming, integration, and test effort
The remaining effort reported on the time sheets was spread equally over the three categories of activities. The last block of data shows the effort for these activities in percent of the total effort and compares it to the model data. The detailed COCOMO estimation printout is attached in appendix D.
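For reference, the nominal effort of the Basic COCOMO model /1/ for an organic-mode product is MM = 2.4 * KDSI^1.05, with a man-month of 152 working hours. A minimal sketch follows; the size used is illustrative, and the adjusted effort of table 13 would additionally apply the cost-driver ratings documented in appendix D:

def cocomo_nominal(kdsi):
    # Basic COCOMO, organic mode: nominal effort in man-months (MM).
    return 2.4 * kdsi ** 1.05

mm = cocomo_nominal(2.0)                          # illustrative size: 2000 DSI
print(f"{mm:.1f} MM = {mm * 152:.0f} man-hours")  # 152 man-hours per MM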

6. Interpretation of the project data

The interpretation of the project data stems not primarily from the author, but mainly from the weekly discussions held during the quarter and from a summary of conclusions by Barry W. Boehm. For the sake of completeness, these conclusions are included in this paper.

Specifying vs. prototyping

Prototyping correlates with
- smaller products (tab. 1, tab. 10)*
- less development effort (tab. 7, tab. 10)*
- no overall performance loss (tab. 5, fig. 5, tab. 10)
  - lower on functionality, tolerance
  - better on learning
  - frustration product dependent
- better maintainability rating (tab. 1, 2, 10) (but judged worse as a basis for planning add-ons)
- reduced deadline effect (mostly documentation at the end) (fig. 11-15)
  - less programming, testing, fixing at the end
- more difficult integration (tab. 10)
- no difference in "productivity" (DSI/MH) (tab. 3)
- proportionally less designing effort (fig. 10)
- little difference in code distribution by function (tab. 1, fig. 1)
- less documentation, fewer pages of documentation per MH (tab. 1, 3, 10)
- less designing and programming; more testing, reviewing, fixing (percentwise or per KDSI)
- always having something that works
- preferred by people with programming experience (tab. 1, 10)

- prototype 40-60% of the end product
- high percentage of prototype in the end product (67-95%)

* partly due to smaller average team size, but critique comments indicate a definite specification-vs-prototype effect

Smaller vs. larger teams

Smaller teams correlate with
- slightly smaller products (tab. 1, 10)*
- smaller development effort (tab. 7, 10)
- higher productivity (DSI/MH)*
- lower functionality (tab. 5, 10)
- little difference in frustration, learning, tolerance (tab. 5, 10)
- proportionally less programming and meeting effort (tab. 7, 10)
- people of larger teams would expect lower product performance if the team size were smaller (tab. 12)
- people of smaller teams would not expect higher performance if the team size were larger (tab. 12)

* perhaps due to one exceptional project

Development process, effort distribution
- deadline effect holds (fig. 11-15) -> less pronounced for prototype teams
- dominant activity is programming (37%, 30%) -> reverse of the USC results on programming vs. documentation: UCLA [35, 12], USC [14, 30]
- reasonably consistent across teams
- effort overestimated by COCOMO

Development process, other
- need more front-end effort

- need more planning, organizing
  -> critiques, USC project experiences
  -> strategy with scarce computer resources (10-20% of programming, testing, and fixing time was "busy-waiting")
- preferred organization highly people dependent
  4 critiques: democratic approach best
  3 critiques: needed a leader
- if 4 more weeks (tab. 12)
  -> better debugged product, better documentation
  -> slight product changes

Product characteristics
- model calculations take a small portion of the code (8%, 5%) (tab. 1, fig. 1)
- user interface takes most of the code (54%, 56%) (tab. 1, fig. 1)
- with 1 exception (2303), little variation in DSI per person (757-1055)
- maintainability ratings primarily a function of size
- wide variation in product architectures -> very much a reflection of developer personalities

7. Comparison of the USC project with the UCLA project

In 1978 a similar experiment was conducted at the University of Southern California (USC) /3/. Two groups of 5 and 6 people developed the same product, one coding in FORTRAN, the other in PASCAL. Both groups followed the specification approach. Unlike the UCLA teams, the USC teams had a fixed team organization. The data of the USC and UCLA projects are compared in table 14 and figure 16. The UCLA results differ from the USC results in the following points:
- the UCLA specifying groups produced larger products and had a higher productivity and effort per person*
- the UCLA teams spent more time on designing and programming, and less time on documenting, fixing, and meeting
- the UCLA specifying groups produced, on average, more documentation than the USC groups
- the UCLA teams have, on average, a higher "documentation productivity", due to the teams of type 1
- the USC teams had slightly more problem reports per page of documentation; the UCLA teams had more problem reports per page for their requirements specification
- the USC teams needed 70% of the effort predicted by COCOMO, the UCLA teams 33%*

* it looks as if the situation at UCLA was more competitive, perhaps due to the larger number of teams participating in the project

8. Conclusion

The presented class project is a basis for interesting conclusions in the area of software engineering of small products. Although the sample was small, statistically significant results could be established. In three areas the evaluation procedure could be simplified or improved. First, entering the effort directly into the computer and storing it online, instead of reporting it on paper time sheets, would save much of the time needed to code the information into machine-processable form. Second, it might be interesting to also have people experienced in maintaining software rate the maintainability of the products; their results could be compared with the students' ratings. Third, using a statistical package earlier in the evaluation process might save calculation effort. In addition, it might be fruitful to use other statistical analyses to draw and support conclusions. In particular, an analysis of the correlation between the different data might be helpful to replace the weaker approach using fractions and standard deviations.
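A minimal sketch of such a correlation analysis (the paired values below are hypothetical):

import numpy as np

# Hypothetical pairs: product size (DSI) vs. development effort (man-hours).
size   = np.array([800, 950, 1100, 1400, 1700, 2000, 2300])
effort = np.array([230, 320,  420,  460,  500,  590,  790])

r = np.corrcoef(size, effort)[0, 1]   # Pearson correlation coefficient
print(f"r = {r:.2f}")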

REFERENCES

/1/ B. W. Boehm, Software Engineering Economics, Prentice-Hall, Inc., Englewood Cliffs, 1981.

/2/ W. Dzida, S. Herda, D. Itzfeld, "User-Perceived Quality of Interactive Systems," IEEE Transactions on Software Engineering, Vol. SE-4, No. 4, July 1978, pp. 270-276.

/3/ B. W. Boehm, "An Experiment in Small-Scale Application Software Engineering," IEEE Transactions on Software Engineering, Vol. SE-7, No. 5, Sept. 1981, pp. 482-493.

APPENDIX A
Awk program to count lines of Pascal source code

# Count the delivered source instructions (DSI) of a Pascal source file.
# depth tracks comment nesting for (* ... *) and { ... } comments.
BEGIN { semicolonnr = -10; endnr = -10 }
{
    for (i = 1; i <= NF; ++i) {
        if (($i ~ /\(\*/) || ($i ~ /\{/)) { ++depth; comment = 1 }
        if (($i ~ /\*\)/) || ($i ~ /\}/)) { --depth; comment = 1 }
        if (depth == 0) {
            # Statement-opening keywords count as one instruction each.
            if (($i ~ /^if$/) || ($i ~ /^begin$/) || ($i ~ /^for$/) ||
                ($i ~ /^record$/) || ($i ~ /^case$/) || ($i ~ /^while$/)) {
                ++count; flag = 1
            }
            if (($i ~ /^end$/) || ($i ~ /^end;/)) {
                if ((NR == semicolonnr + 1) || (NR == endnr + 1)) {
                    ++count; flag = 1
                } else {
                    count += 2; flag = 1
                }
                endnr = NR
            }
            if (($i ~ /^else$/) && (NR != endnr + 1)) { ++count; flag = 1 }
            if ($i ~ /;/) {
                if (NR != endnr) { ++count; flag = 1 }
                semicolonnr = NR
            }
        }
    }
    # Blank lines and pure comment lines keep statement adjacency intact.
    if (((flag == 0) && ((depth > 0) || (comment != 0))) || (length == 0)) {
        if (NR == endnr + 1)       ++endnr
        if (NR == semicolonnr + 1) ++semicolonnr
    }
    flag = 0
    comment = 0
}
END { print "DSI of ", FILENAME, " : ", count }
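For illustration, the program would be invoked from the shell as awk -f count.awk product.p (both file names hypothetical), printing the DSI count of the given Pascal source file.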

APPENDIX B
Results of calculations on the time sheets

Title : total of all time sheets
Groups : 1a - 2c
Weeks : 1 (Monday) - 10 (Sunday)
Groups : 7
Persons : 18

               TOTAL   PER GROUP   PER PERSON   PERCENT
reading          260          37           14       8 %
planning         173          25           10       5 %
designing        307          44           17       9 %
programming     1158         165           64      35 %
documentation    413          59           23      12 %
testing          325          46           18      10 %
reviewing         57           8            3       2 %
fixing           234          33           13       7 %
meeting          256          37           14       8 %
miscellaneous    128          18            7       4 %
TOTAL           3312         473          184     100 %

Title : group type total
Groups : 1a - 1d
Weeks : 1 (Monday) - 10 (Sunday)
Groups : 4
Persons : 11

               TOTAL   PER GROUP   PER PERSON   PERCENT
reading          173          43           16       7 %
planning         121          30           11       5 %
designing        267          67           24      11 %
programming      862         216           78      37 %
documentation    276          69           25      12 %
testing          177          44           16       8 %
reviewing         30           8            3       1 %
fixing           136          34           12       6 %
meeting          183          46           17       8 %
miscellaneous    110          28           10       5 %
TOTAL           2336         584          212     100 %

Title : group type total
Groups : 2a - 2c
Weeks : 1 (Monday) - 10 (Sunday)
Groups : 3
Persons : 7

               TOTAL   PER GROUP   PER PERSON   PERCENT
reading           87          29           12       9 %
planning          52          17            7       5 %
designing         40          13            6       4 %
programming      295          98           42      30 %
documentation    137          46           20      14 %
testing          148          49           21      15 %
reviewing         26           9            4       3 %
fixing            98          33           14      10 %
meeting           74          25           11       8 %
miscellaneous     18           6            3       2 %
TOTAL            976         325          139     100 %

Title : group total
Groups : 1a
Member : 1 - 3
Weeks : 1 (Monday) - 10 (Sunday)
Persons : 3

               TOTAL   PER PERSON   PERCENT
reading           29           10       5 %
planning          44           15       7 %
designing         81           27      14 %
programming      276           92      47 %
documentation     48           16       8 %
testing           45           15       8 %
reviewing          2            1       0 %
fixing            27            9       5 %
meeting           30           10       5 %
miscellaneous      7            2       1 %
TOTAL            589          196     100 %

473

Title : group total
Groups : 1b
Member : 1 - 3
Weeks : 1 (Monday) - 10 (Sunday)
Persons : 3

               TOTAL   PER PERSON   PERCENT
reading           33           11       7 %
planning           7            2       1 %
designing         46           15       9 %
programming      162           54      33 %
documentation     82           27      17 %
testing           49           16      10 %
reviewing          3            1       1 %
fixing            42           14       8 %
meeting           49           16      10 %
miscellaneous     26            9       5 %
TOTAL            498          166     100 %

306

Title : group total
Groups : 1c
Member : 1 - 2
Weeks : 1 (Monday) - 10 (Sunday)
Persons : 2

               TOTAL   PER PERSON   PERCENT
reading           40           20       9 %
planning          30           15       6 %
designing         59           29      13 %
programming      135           67      29 %
documentation     53           27      12 %
testing           55           27      12 %
reviewing          6            3       1 %
fixing            19           10       4 %
meeting           24           12       5 %
miscellaneous     40           20       9 %
TOTAL            459          230     100 %

296

Title : group total
Groups : 1d
Member : 1 - 3
Weeks : 1 (Monday) - 10 (Sunday)
Persons : 3

               TOTAL   PER PERSON   PERCENT
reading           72           24       9 %
planning          41           14       5 %
designing         82           27      10 %
programming      289           96      37 %
documentation     92           31      12 %
testing           29           10       4 %
reviewing         20            7       2 %
fixing            48           16       6 %
meeting           80           27      10 %
miscellaneous     37           12       5 %
TOTAL            789          263     100 %

389

Title : group total
Groups : 2a
Member : 1 - 2
Weeks : 1 (Monday) - 10 (Sunday)
Persons : 2

               TOTAL   PER PERSON   PERCENT
reading           21           10       6 %
planning          22           11       7 %
designing         16            8       5 %
programming      109           55      34 %
documentation     54           27      17 %
testing           27           13       8 %
reviewing         19            9       6 %
fixing            33           16      10 %
meeting           19           10       6 %
miscellaneous      3            2       1 %
TOTAL            323          161     100 %

207

Title : group total
Groups : 2b
Member : 1 - 3
Weeks : 1 (Monday) - 10 (Sunday)
Persons : 3

               TOTAL   PER PERSON   PERCENT
reading           24            8       6 %
planning          15            5       3 %
designing         13            4       3 %
programming      147           49      35 %
documentation     50           17      12 %
testing           75           25      18 %
reviewing          3            1       1 %
fixing            39           13       9 %
meeting           44           15      10 %
miscellaneous     13            4       3 %
TOTAL            422          141     100 %

473

Title : group total
Groups : 2c
Member : 1 - 2
Weeks : 1 (Monday) - 10 (Sunday)
Persons : 2

               TOTAL   PER PERSON   PERCENT
reading           43           22      19 %
planning          15            8       6 %
designing         12            6       5 %
programming       39           19      17 %
documentation     33           17      14 %
testing           46           23      20 %
reviewing          5            3       2 %
fixing            27           13      11 %
meeting           11            6       5 %
miscellaneous      2            1       1 %
TOTAL            232          116     100 %

75 32% 29 12% 129 55% 39 46 134

Title : until SRR
Groups : 1a
Member : 1 - 3
Weeks : 1 (Monday) - 3 (Wednesday)
Persons : 3

               TOTAL   PER PERSON   PERCENT
reading           27            9      32 %
planning          32           11      39 %
designing         16            5      19 %
programming        0            0       0 %
documentation      0            0       0 %
testing            0            0       0 %
reviewing          0            0       0 %
fixing             0            0       0 %
meeting            5            2       7 %
miscellaneous      2            1       3 %
TOTAL             83           28     100 %

Title : until SRR
Groups : 1b
Member : 1 - 3
Weeks : 1 (Monday) - 3 (Wednesday)
Persons : 3

               TOTAL   PER PERSON   PERCENT
reading           15            5      33 %
planning           3            1       7 %
designing          0            0       0 %
programming        0            0       0 %
documentation      2            1       5 %
testing            0            0       0 %
reviewing          1            0       2 %
fixing             0            0       0 %
meeting           15            5      33 %
miscellaneous      9            3      20 %
TOTAL             44           15     100 %

Title : until SRR
Groups : 1c
Member : 1 - 2
Weeks : 1 (Monday) - 3 (Wednesday)
Persons : 2

               TOTAL   PER PERSON   PERCENT
reading           21           10      47 %
planning           7            3      15 %
designing          9            5      21 %
programming        0            0       0 %
documentation      4            2       9 %
testing            0            0       0 %
reviewing          0            0       0 %
fixing             0            0       0 %
meeting            2            1       3 %
miscellaneous      2            1       5 %
TOTAL             44           22     100 %

Title : until SRR
Groups : 1d
Member : 1 - 3
Weeks : 1 (Monday) - 3 (Wednesday)
Persons : 3

               TOTAL   PER PERSON   PERCENT
reading           31           10      44 %
planning           3            1       4 %
designing          2            1       3 %
programming        0            0       0 %
documentation      0            0       0 %
testing            0            0       0 %
reviewing          5            2       7 %
fixing             7            2      10 %
meeting           21            7      30 %
miscellaneous      1            0       1 %
TOTAL             70           23     100 %

Title : until PDR
Groups : 1a
Member : 1 - 3
Weeks : 1 (Monday) - 6 (Wednesday)
Persons : 3

               TOTAL   PER PERSON   PERCENT
reading           29           10      13 %
planning          39           13      17 %
designing         76           25      34 %
programming       42           14      19 %
documentation      9            3       4 %
testing            2            1       1 %
reviewing          1            0       0 %
fixing             0            0       0 %
meeting           24            8      11 %
miscellaneous      3            1       1 %
TOTAL            225           75     100 %

Title : until PDR
Groups : 1b
Member : 1 - 3
Weeks : 1 (Monday) - 6 (Wednesday)
Persons : 3

               TOTAL   PER PERSON   PERCENT
reading           20            7      12 %
planning           7            2       4 %
designing         34           11      21 %
programming        0            0       0 %
documentation     50           17      31 %
testing            0            0       0 %
reviewing          1            0       1 %
fixing             0            0       0 %
meeting           41           14      25 %
miscellaneous      9            3       5 %
TOTAL            160           53     100 %

Title : until PDR
Groups : 1c
Member : 1 - 2
Weeks : 1 (Monday) - 6 (Wednesday)
Persons : 2

               TOTAL   PER PERSON   PERCENT
reading           36           18      25 %
planning          22           11      15 %
designing         49           24      34 %
programming        0            0       0 %
documentation     14            7      10 %
testing            0            0       0 %
reviewing          3            2       2 %
fixing             0            0       0 %
meeting           13            6       9 %
miscellaneous      9            4       6 %
TOTAL            144           72     100 %

Title : until PDR
Groups : 1d
Member : 1 - 3
Weeks : 1 (Monday) - 6 (Wednesday)
Persons : 3

               TOTAL   PER PERSON   PERCENT
reading           71           24      29 %
planning          32           11      13 %
designing         42           14      17 %
programming        1            0       0 %
documentation     32           11      13 %
testing            0            0       0 %
reviewing         11            4       4 %
fixing             8            3       3 %
meeting           45           15      18 %
miscellaneous      1            0       0 %
TOTAL            242           81     100 %

Title : until prototype
Groups : 2a
Member : 1 - 2
Weeks : 1 (Monday) - 5 (Wednesday)
Persons : 2

               TOTAL   PER PERSON   PERCENT
reading           13            7      13 %
planning          12            6      12 %
designing         13            6      12 %
programming       43           21      42 %
documentation      4            2       3 %
testing            4            2       3 %
reviewing          2            1       2 %
fixing             3            2       3 %
meeting           10            5       9 %
miscellaneous      0            0       0 %
TOTAL            102           51     100 %

Title : until prototype
Groups : 2b
Member : 1 - 3
Weeks : 1 (Monday) - 5 (Wednesday)
Persons : 3

               TOTAL   PER PERSON   PERCENT
reading           20            7      15 %
planning           8            3       6 %
designing          6            2       5 %
programming       41           14      32 %
documentation      1            0       1 %
testing           23            8      18 %
reviewing          2            1       2 %
fixing             9            3       7 %
meeting           20            7      15 %
miscellaneous      0            0       0 %
TOTAL            129           43     100 %

Title : until prototype
Groups : 2c
Member : 1 - 2
Weeks : 1 (Monday) - 5 (Wednesday)
Persons : 2

               TOTAL   PER PERSON   PERCENT
reading           25           13      30 %
planning           8            4      10 %
designing          8            4      10 %
programming       19           10      23 %
documentation      0            0       0 %
testing           15            8      18 %
reviewing          0            0       0 %
fixing             3            2       4 %
meeting            6            3       7 %
miscellaneous      0            0       0 %
TOTAL             84           42     100 %

Title : activity per phase
Groups : 1a - 1d
Weeks : 1 (Monday) - 2 (Wednesday)
Groups : 4
Persons : 11

               TOTAL   PER GROUP   PER PERSON   PERCENT
reading           51          13            5      66 %
planning          17           4            2      22 %
designing          3           1            0       4 %
programming        0           0            0       0 %
documentation      0           0            0       0 %
testing            0           0            0       0 %
reviewing          1           0            0       1 %
fixing             0           0            0       0 %
meeting            5           1            0       6 %
miscellaneous      0           0            0       0 %
TOTAL             77          19            7     100 %

From 1 (Thursday)  70          17            6       9 %

Title : activity per phase
Groups : 1a - 1d
Weeks : 2 (Thursday) - 3 (Wednesday)
Groups : 4
Persons : 11

               TOTAL   PER GROUP   PER PERSON   PERCENT
reading           42          10            4      26 %
planning          28           7            3      17 %
designing         24           6            2      15 %
programming        0           0            0       0 %
documentation      6           2            1       4 %
testing            0           0            0       0 %
reviewing          5           1            0       3 %
fixing             7           2            1       4 %
meeting           38           9            3      23 %
miscellaneous     14           3            1       9 %
TOTAL            163          41           15     100 %

Title : activity per phase
Groups : 1a - 1d
Weeks : 3 (Thursday) - 4 (Wednesday)
Groups : 4
Persons : 11

               TOTAL   PER GROUP   PER PERSON   PERCENT
reading           24           6            2      26 %
planning          17           4            2      18 %
designing         36           9            3      39 %
programming        0           0            0       0 %
documentation      0           0            0       0 %
testing            0           0            0       0 %
reviewing          1           0            0       1 %
fixing             0           0            0       0 %
meeting           16           4            1      17 %
miscellaneous      0           0            0       0 %
TOTAL             93          23            8     100 %

Title : activity per phase
Groups : 1a - 1d
Weeks : 4 (Thursday) - 5 (Wednesday)
Groups : 4
Persons : 11

               TOTAL   PER GROUP   PER PERSON   PERCENT
reading           25           6            2      19 %
planning          18           4            2      13 %
designing         43          11            4      33 %
programming        0           0            0       0 %
documentation      6           2            1       5 %
testing            0           0            0       0 %
reviewing          2           1            0       2 %
fixing             0           0            0       0 %
meeting           37           9            3      28 %
miscellaneous      0           0            0       0 %
TOTAL            132          33           12     100 %

Title : activity per phase
Groups : 1a - 1d
Weeks : 5 (Thursday) - 6 (Wednesday)
Groups : 4
Persons : 11

               TOTAL   PER GROUP   PER PERSON   PERCENT
reading           13           3            1       4 %
planning          20           5            2       6 %
designing         94          23            9      31 %
programming       43          11            4      14 %
documentation     92          23            8      30 %
testing            2           1            0       1 %
reviewing          7           2            1       2 %
fixing             1           0            0       0 %
meeting           26           7            2       9 %
miscellaneous      7           2            1       2 %
TOTAL            305          76           28     100 %

Title : activity per phase
Groups : 1a - 1d
Weeks : 6 (Thursday) - 7 (Wednesday)
Groups : 4
Persons : 11

               TOTAL   PER GROUP   PER PERSON   PERCENT
reading           10           3            1       4 %
planning          16           4            1       7 %
designing         42          11            4      19 %
programming       88          22            8      39 %
documentation     16           4            1       7 %
testing           20           5            2       9 %
reviewing          2           1            0       1 %
fixing             6           2            1       3 %
meeting           17           4            2       8 %
miscellaneous      6           2            1       3 %
TOTAL            225          56           20     100 %

Title : activity per phase
Groups : 1a - 1d
Weeks : 7 (Thursday) - 8 (Wednesday)
Groups : 4
Persons : 11

               TOTAL   PER GROUP   PER PERSON   PERCENT
reading            6           2            1       2 %
planning           2           1            0       1 %
designing         13           3            1       5 %
programming      178          45           16      68 %
documentation     14           4            1       5 %
testing           20           5            2       8 %
reviewing          3           1            0       1 %
fixing             5           1            0       2 %
meeting            9           2            1       3 %
miscellaneous     12           3            1       5 %
TOTAL            262          66           24     100 %

Title : activity per phase
Groups : 1a - 1d
Weeks : 8 (Thursday) - 9 (Wednesday)
Groups : 4
Persons : 11

               TOTAL   PER GROUP   PER PERSON   PERCENT
reading            2           1            0       1 %
planning           4           1            0       1 %
designing         10           3            1       3 %
programming      201          50           18      60 %
documentation      2           1            0       1 %
testing           32           8            3      10 %
reviewing          1           0            0       0 %
fixing            33           8            3      10 %
meeting           12           3            1       3 %
miscellaneous     38          10            3      11 %
TOTAL            334          84           30     100 %

Title : activity per phase
Groups : 1a - 1d
Weeks : 9 (Thursday) - 10 (Sunday)
Groups : 4
Persons : 11

               TOTAL   PER GROUP   PER PERSON   PERCENT
reading            0           0            0       0 %
planning           0           0            0       0 %
designing          2           1            0       0 %
programming      352          88           32      47 %
documentation    139          35           13      19 %
testing          103          26            9      14 %
reviewing          9           2            1       1 %
fixing            84          21            8      11 %
meeting           23           6            2       3 %
miscellaneous     33           8            3       4 %
TOTAL            745         186           68     100 %

Until 10 (Wednesday)  682  170  62  8 %

Title : activity per phase
Groups : 2a - 2c
Weeks : 1 (Monday) - 2 (Wednesday)
Groups : 3
Persons : 7

               TOTAL   PER GROUP   PER PERSON   PERCENT
reading           19           6            3      82 %
planning           3           1            0      11 %
designing          0           0            0       0 %
programming        0           0            0       0 %
documentation      0           0            0       0 %
testing            0           0            0       0 %
reviewing          0           0            0       0 %
fixing             0           0            0       0 %
meeting            2           1            0       8 %
miscellaneous      0           0            0       0 %
TOTAL             23           8            3     100 %

Title : activity per phase
Groups : 2a - 2c
Weeks : 2 (Thursday) - 3 (Wednesday)
Groups : 3
Persons : 7

               TOTAL   PER GROUP   PER PERSON   PERCENT
reading           27           9            4      40 %
planning          11           4            2      16 %
designing          6           2            1       8 %
programming       11           4            2      16 %
documentation      1           0            0       1 %
testing            1           0            0       1 %
reviewing          0           0            0       0 %
fixing             0           0            0       0 %
meeting           11           4            2      16 %
miscellaneous      0           0            0       0 %
TOTAL             68          23           10     100 %

Title : activity per phase
Groups : 2a - 2c
Weeks : 3 (Thursday) - 4 (Wednesday)
Groups : 3
Persons : 7

               TOTAL   PER GROUP   PER PERSON   PERCENT
reading           10           3            1      10 %
planning          10           3            1      10 %
designing         14           5            2      14 %
programming       39          13            6      39 %
documentation      4           1            1       3 %
testing            6           2            1       5 %
reviewing          1           0            0       1 %
fixing             3           1            0       2 %
meeting           15           5            2      15 %
miscellaneous      0           0            0       0 %
TOTAL            101          34           14     100 %

Title : activity per phase
Groups : 2a - 2c
Weeks : 4 (Thursday) - 5 (Wednesday)
Groups : 3
Persons : 7

               TOTAL   PER GROUP   PER PERSON   PERCENT
reading            2           1            0       2 %
planning           4           1            1       3 %
designing          7           2            1       6 %
programming       53          18            8      43 %
documentation      0           0            0       0 %
testing           35          12            5      28 %
reviewing          3           1            0       2 %
fixing            13           4            2      10 %
meeting            8           3            1       6 %
miscellaneous      0           0            0       0 %
TOTAL            124          41           18     100 %

Title : activity per phase
Groups : 2a - 2c
Weeks : 5 (Thursday) - 6 (Wednesday)
Groups : 3
Persons : 7

               TOTAL   PER GROUP   PER PERSON   PERCENT
reading            6           2            1       8 %
planning           5           2            1       7 %
designing         10           3            1      13 %
programming       33          11            5      44 %
documentation      2           1            0       3 %
testing            4           1            1       5 %
reviewing          2           1            0       2 %
fixing             4           1            1       5 %
meeting            8           3            1      10 %
miscellaneous      1           0            0       1 %
TOTAL             74          25           11     100 %

Title : activity per phase
Groups : 2a - 2c
Weeks : 6 (Thursday) - 7 (Wednesday)
Groups : 3
Persons : 7

               TOTAL   PER GROUP   PER PERSON   PERCENT
reading            5           2            1       6 %
planning           5           2            1       5 %
designing          2           1            0       2 %
programming       42          14            6      48 %
documentation      2           1            0       2 %
testing           14           5            2      16 %
reviewing          3           1            0       3 %
fixing             7           2            1       8 %
meeting            9           3            1      10 %
miscellaneous      0           0            0       0 %
TOTAL             87          29           12     100 %

Title : activity per phase
Groups : 2a - 2c
Weeks : 7 (Thursday) - 8 (Wednesday)
Groups : 3
Persons : 7

               TOTAL   PER GROUP   PER PERSON   PERCENT
reading            3           1            0       2 %
planning           6           2            1       5 %
designing          2           1            0       2 %
programming       42          14            6      31 %
documentation      5           2            1       3 %
testing           34          11            5      25 %
reviewing          7           2            1       6 %
fixing            29          10            4      22 %
meeting            5           2            1       4 %
miscellaneous      0           0            0       0 %
TOTAL            132          44           19     100 %

Title : activity per phase
Groups : 2a - 2c
Weeks : 8 (Thursday) - 9 (Wednesday)
Groups : 3
Persons : 7

               TOTAL   PER GROUP   PER PERSON   PERCENT
reading            6           2            1       4 %
planning           4           1            1       2 %
designing          0           0            0       0 %
programming       50          17            7      34 %
documentation     19           6            3      13 %
testing           29          10            4      19 %
reviewing          7           2            1       5 %
fixing            26           9            4      18 %
meeting            7           2            1       5 %
miscellaneous      0           0            0       0 %
TOTAL            149          50           21     100 %

Title : activity per phase
Groups : 2a - 2c
Weeks : 9 (Thursday) - 10 (Sunday)
Groups : 3
Persons : 7

               TOTAL   PER GROUP   PER PERSON   PERCENT
reading            9           3            1       4 %
planning           5           2            1       2 %
designing          0           0            0       0 %
programming       26           9            4      12 %
documentation    105          35           15      48 %
testing           26           9            4      12 %
reviewing          3           1            0       1 %
fixing            17           6            2       8 %
meeting           11           4            2       5 %
miscellaneous     17           6            2       8 %
TOTAL            220          73           31     100 %

Until 10 (Wednesday)  187  62  27  15 %