At this stage, a complete integrated system test of the flight software with the flight hardware in operational domain scenarios is performed. Crew training for mission practice is also carried out at this time.
To manage the software production process for space shuttle flight control, descriptive data are systematically collected, maintained, and analyzed. At the beginning of the space shuttle program, global measurements were taken to track schedules and costs. But as the software and its development process matured, progressively more detailed data were collected. The detail and granularity of data dictate not only the type but also the level of analysis that can be done.
Data related to failures have been specifically accumulated in a database along with all the other corollary information available, and a procedure has been established for reliability modeling, statistical analysis, and process improvement based on this information.
A composite description of all space shuttle software of various ages is maintained through a configuration management (CM) system. The CM data include not only a change itself, but also the lines of code affected, reasons for the change, and the date and time of change.
In addition, the CM system includes data detailing scenarios for possible failures and the probability of their occurrence, user response procedures, the severity of the failures, the explicit software version and specific lines of code involved, the reasons for no previous detection, how long the fault had existed, and the repair or resolution. Although these data seem abundant, it is important to acknowledge their time dependence, because the software system they describe is subject to constant "churn."
Over the years, the CM system for the space shuttle program has evolved into a common, minimum set of data that must be retained regarding every fault that is recognized anywhere in the life cycle, including faults found by inspections before software is actually built. The database that has resulted from this evolution is amenable to evaluation by statistical methods. Trend analysis and predictions regarding testing, allocation of resources, and estimation of probabilities of failure are examples of the many activities that draw on the database.
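As a rough illustration of the kind of per-fault record such a database might hold, the sketch below defines a minimal Python data structure whose fields mirror the items listed above; the class and field names are assumptions made for illustration, not the actual shuttle CM schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class FaultRecord:
    """Illustrative per-fault record; field names are assumptions, not the shuttle schema."""
    fault_id: str
    software_version: str             # explicit software version involved
    lines_of_code: List[int]          # specific lines of code involved
    failure_scenario: str             # scenario under which the fault could lead to failure
    occurrence_probability: float     # estimated probability of that scenario occurring
    user_response_procedure: str      # prescribed user/crew response
    severity: int                     # assessed severity of the potential failure
    reason_not_detected_earlier: str  # why earlier process steps did not catch the fault
    introduced_on: datetime           # when the fault entered the software
    detected_on: datetime             # when the fault was recognized
    life_cycle_phase: str             # phase in which it was found (including inspections)
    resolution: str                   # repair or other disposition

    @property
    def latency_days(self) -> float:
        """How long the fault existed before it was detected, in days."""
        return (self.detected_on - self.introduced_on).total_seconds() / 86400.0
```

A derived quantity such as the fault latency shown here is the sort of field that supports the trend analyses and failure-probability estimates mentioned above.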
This database also continues to be the basis for defining and developing sophisticated, insightful estimation techniques such as those described by Munson. Management philosophy prescribes that process improvement is part of the process. Such proactive process improvement includes inspection at every step of the process, detailed documentation of the process, and analysis of the process itself.
The critical implications of an ill-timed failure in space shuttle flight control software require that remedies be decisive and aggressive. When a fault is identified, a feedback process involving detailed information on the fault enforces a search for similar faults in the existing system and changes the process to guard actively against such faults in flight control software development.
The characteristics of a single fault are actively documented in the following four-step reactive process-improvement protocol: (1) fix the fault; (2) find and fix any similar faults already present in the existing system; (3) eliminate the process deficiency that allowed the fault to be introduced; and (4) eliminate the process deficiency that let the fault escape earlier detection. Further scrutiny of what occurred in the process between the introduction and the detection of a fault is aimed at determining why downstream process elements failed to detect and remove the fault. Such introspective analysis is designed to improve the process and specific process elements so that if a similar fault is introduced again, these process elements will detect it before it gets too far along in the product life cycle.
The complete recording of project events in the CM system (the phase of the process, the change history of the involved lines of code, the specific line of code that included an error, the individuals involved, and so on) allows hindsight, so that the development team can approach the occurrence of an error not as a failure but rather as an opportunity to improve the process and to find other, similar errors. The dependability of safety-critical software cannot be based merely on testing the software, counting and repairing the faults, and conducting "live tests" on shuttle missions.
Testing the software for many, many years, much longer than its life cycle, would be required in order to demonstrate software failure probability levels of 10^-7 or 10^-9 per operational hour.
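The scale of that testing burden can be made concrete with a standard zero-failure demonstration argument, which is not drawn from the shuttle program itself: assuming failures arrive as a Poisson process with constant rate lambda, demonstrating lambda <= lambda_0 at confidence C with no failures observed requires at least T = -ln(1 - C) / lambda_0 hours of failure-free operation.

```python
import math

def zero_failure_test_hours(target_rate_per_hour: float, confidence: float) -> float:
    """Failure-free test time needed to demonstrate that the failure rate is no worse
    than target_rate_per_hour at the given confidence, assuming a Poisson process:
    T = -ln(1 - C) / lambda_0.
    """
    return -math.log(1.0 - confidence) / target_rate_per_hour

for rate in (1e-7, 1e-9):
    hours = zero_failure_test_hours(rate, confidence=0.99)
    print(f"{rate:g} failures/hour -> {hours:.3g} failure-free test hours "
          f"(about {hours / 8766:.3g} years)")
```

At 99% confidence, the 10^-9-per-hour target works out to roughly 4.6 billion failure-free test hours, on the order of half a million years, which is the sense in which testing alone cannot establish this level of dependability.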
A process must be established, and it must be demonstrated statistically that if that process is followed and maintained under statistical control, then software of known quality will result.
One result is the ability to predict a particular level of fault density (in the sense that fault density is proportional to failure intensity) and so to provide a confidence level regarding software quality. This approach is designed to ensure that quality is built into the software at a measurable level.
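The report does not prescribe a specific technique for demonstrating statistical control, but a common choice for count data of this kind is a u-chart of fault density across releases. The sketch below uses invented release names and numbers purely for illustration.

```python
import math

# Hypothetical (invented) data: release name, new or changed KLOC, faults found.
releases = [("Release A", 120, 18), ("Release B", 95, 12),
            ("Release C", 140, 15), ("Release D", 80, 6)]

total_faults = sum(faults for _, _, faults in releases)
total_kloc = sum(kloc for _, kloc, _ in releases)
u_bar = total_faults / total_kloc  # process-average fault density (faults per KLOC)

print(f"process average: {u_bar:.3f} faults/KLOC")
for name, kloc, faults in releases:
    u = faults / kloc                # this release's fault density
    sigma = math.sqrt(u_bar / kloc)  # u-chart standard error for this sample size
    ucl = u_bar + 3 * sigma
    lcl = max(0.0, u_bar - 3 * sigma)
    status = "in control" if lcl <= u <= ucl else "out of control"
    print(f"{name}: {u:.3f} faults/KLOC, limits [{lcl:.3f}, {ucl:.3f}] -> {status}")
```

A release whose fault density falls outside the three-sigma limits would signal that the process has drifted and that the predicted quality level can no longer be assumed.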
IBM's historical data demonstrate a constantly improving process for space shuttle flight control software. The use of software engineering methodologies that incorporate statistical analysis methods generally allows the establishment of a benchmark for obtaining a valid measure of how well a product meets a specified level of quality.
This book identifies challenges and opportunities in the development and implementation of software that contain significant statistical content. While emphasizing the relevance of using rigorous statistical and probabilistic techniques in software engineering contexts, it presents opportunities for further research in the statistical sciences and their applications to software engineering.
It is intended to motivate and attract new researchers from statistics and the mathematical sciences to attack relevant and pressing problems in the software engineering setting. It describes the "big picture," as this approach provides the context in which statistical methods must be developed. The book's survey nature is directed at the mathematical sciences audience, but software engineers should also find the statistical emphasis refreshing and stimulating. It is hoped that the book will have the effect of seeding the field of statistical software engineering by its indication of opportunities where statistical thinking can help to increase understanding, productivity, and quality of software and software production.