Test Plan Optimization Methodology Workshop

January 6-7, 1999

The Test Plan Optimization Methodology (TPOM) is a mechanism for generating test and evaluation (T&E) strategies based on mathematical optimization. TPOM serves as a life cycle planning support mechanism, allowing modeling and simulation (M&S) and test planners to provide the maximum amount of reliable and credible system information with the most efficient use of resources. TPOM was developed by Cyrus Staniec of RDA Logicon, with support and encouragement from Dr. Henry Dubin, Technical Director, Army Operational Test and Evaluation Command (OPTEC).

The OPTEC Office of Methodology and the Air Force Operational Test and Evaluation Center (AFOTEC) sponsored the Test Planning Optimization Workshop, hosted by the Test & Evaluation Research and Education Center (TEREC) at Georgia Tech on January 6-7, 1999, in Atlanta, Georgia. The workshop drew on the multi-disciplinary viewpoints of 26 participants from industry, government, and academia. The workshop objective was to examine a concept for developing T&E plans that optimize the integrated use of M&S and traditional test methods. The goals were to determine the reasonableness of the proposed concept, to examine the complexities of using the method operationally, to examine the method's structure and data supportability, and to suggest enhancements and methods of solution. The agenda was developed to support these goals: building on the DoD Simulation, Test and Evaluation Process (STEP) approach, TPOM was introduced, issues in applicability and scope were identified, alternative approaches were addressed, approaches to input parameters were explored, and areas for further development were enumerated.

A Blue Ribbon Panel of T&E experts (Dr. Dyess, AAM; Mr. Ryan, OP91; Dr. Seglie, DOT&E; and Mr. Knaur, TECOM) was invited to represent the T&E community and to consider TPOM from the perspective of utility and applicability. The Panel members provided strong recommendations to accelerate this effort.

The success of the workshop was predicated on the significant diversity of backgrounds and organizational representation among the participants. As a sign of this diversity, about one half of the workshop participants represented federal government organizations, one quarter industry, and one quarter academia. The names and organizational affiliations of all the attendees are shown in the attendee list. The participants were familiar with the general features of TPOM, having been provided a summary in a read-ahead document.

The workshop began with Cyrus Staniec briefing the method overview. The assumptions of TPOM were laid out and previous work was described, including an application of the method to a live fire test of the Paladin, an Army howitzer program. Workshop participants then presented viewpoints from their areas of expertise; for example, an expert in the application of fuzzy set theory to decision problems presented an alternate mechanism for calibrating input values and distributions. The presentations generally considered the method of optimum selection, the input parameters for TPOM, and the scope and applicability of TPOM. Subsequent discussions and reviews provided feedback and recommendations for the workshop sponsors.

The issues summary discussion early on January 7 confirmed that all of the subjects presented the previous day were regarded as genuine issues. Parallel sessions were then organized around the subjects of Input Parameters, Methodology, and Scope & Applicability. These sessions met for detailed consideration and then briefed their results.

The Input Parameter Session was concerned with the TPOM parameters of importance, credibility, resources, and benefit. Common issues for all the parameters included their definitions, uses, sources, relationships, and temporal influences. Although the parameters should be studied separately, they are strongly related, particularly importance, benefit, and credibility. Resources comprise not only dollars but also time and other constraints (e.g., model maturity, test site availability, and personnel readiness). The method for generating parameter inputs will strongly influence the parameter definitions and even their sources.
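To make these relationships concrete, the sketch below shows one way the four parameters might attach to a candidate T&E activity. The field names, the 0-to-1 value ranges, and the multiplicative coupling are illustrative assumptions, not definitions drawn from TPOM itself.

    from dataclasses import dataclass

    @dataclass
    class TEActivity:
        """One candidate test or M&S activity in a T&E plan (illustrative only)."""
        name: str
        importance: float   # weight of the evaluation issue addressed, assumed 0..1
        credibility: float  # confidence in the information source, assumed 0..1
        benefit: float      # expected information gain from the activity, assumed 0..1
        cost: float         # dollars; time, model maturity, site availability, and
                            # personnel readiness would constrain a real plan as well

        def contribution(self) -> float:
            # Assumed coupling of the three strongly related parameters noted above.
            return self.importance * self.credibility * self.benefit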

The Methodology Session touched on the various mechanisms for developing T&E strategies (mathematical optimization, fuzzy logic, and genetic algorithms) and on ways to calibrate input parameters (Bayesian methods, modeling and simulation, and fuzzy set theory). It also considered applying a value-focused model to decompose the levels of the evaluation hierarchy into a framework that defines importance and indicates the availability of information. The session observed that objective functions might vary with the application and the user.
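As one hypothetical reading of the mathematical-optimization mechanism (the activity data, the objective function, and the exhaustive search below are assumptions for illustration, not the method the session settled on), selecting activities under a resource limit can be posed as a small 0/1 knapsack problem:

    from itertools import combinations

    # Hypothetical activities: (name, importance, credibility, benefit, cost in $K).
    ACTIVITIES = [
        ("live_fire_event",    0.9, 0.95, 0.8, 400),
        ("hwil_simulation",    0.7, 0.70, 0.6, 120),
        ("constructive_model", 0.5, 0.55, 0.5,  40),
        ("field_exercise",     0.8, 0.90, 0.7, 300),
    ]
    BUDGET = 500  # total resource constraint, $K

    def plan_value(plan):
        # Objective: importance- and credibility-weighted benefit (an assumed form).
        return sum(imp * cred * ben for _, imp, cred, ben, _ in plan)

    def best_plan(activities, budget):
        # Exhaustive search is fine at this toy scale; a real tool would turn to
        # integer programming or the heuristics named above (GAs, fuzzy logic).
        best = ()
        for r in range(len(activities) + 1):
            for combo in combinations(activities, r):
                cost = sum(c for *_, c in combo)
                if cost <= budget and plan_value(combo) > plan_value(best):
                    best = combo
        return best

    print([name for name, *_ in best_plan(ACTIVITIES, BUDGET)])

Run as written, this sketch trades the single most credible live event for a mix of cheaper activities whose combined weighted benefit is higher, which is arguably the kind of trade-off between live testing and M&S that an objective function of this sort would surface.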

The Scope & Applicability Session was concerned with a better definition of the problem, a refinement of the mechanism and its features, the cost of TPOM ownership, and its implementation. The session emphasized that TPOM can only support decisions, not make them, and raised issues regarding requirements, policy, user intent, and ownership. TPOM moves the T&E community closer to a real implementation of STEP.

After the session debriefs, the Blue Ribbon Panel offered its comments. The Panel members were Dr. William Dyess, Mr. George Ryan, Dr. Ernest Seglie, and Mr. James Knaur.

Dr. Dyess made several comments to put TPOM in context, emphasizing in particular that it provides a potential mechanism for implementing STEP. He commented on the difficulties of implementation, such as the need for extensive databases, but said the effort is probably worth pursuing, since one positive outcome would be the construction of the required databases, which would have merit in their own right. He also called for a more stressing proof-of-principle demonstration than the one accomplished to date.

Mr. Ryan stressed that TPOM is a decision support system, not a decision system, and was concerned that misapplications are likely without a clear understanding of the method's basic nature. He saw potential merit in TPOM and suggested that a couple of proof-of-principle demonstrations be conducted to ensure that the method can achieve its goals across a range of programs. Mr. Ryan believes that the Navy has an interest in supporting the continued evaluation of TPOM.

Dr. Seglie found merit in TPOM on several fronts. First, though, he listed three areas that did not receive enough emphasis at the workshop: 1) using TPOM to find problems earlier in the development process, 2) involving the users in building the models, and 3) taking care not to slip into irrelevance by tackling theoretical rather than practical issues. He then said that TPOM should be focused on reducing system development cycle times, reducing life cycle costs, and finding failure modes faster in an operational context. He sees TPOM methods as potentially capable of identifying areas of ignorance so that they can be rectified earlier in the product development cycle. He concluded that the effort was worth pursuing and should be pursued.

Mr. Knaur emphasized that models and tests are inextricably linked and said that creating links between the test, simulation, and modeling communities would be a necessary step in the process. He thinks that a proof-of-principle demonstration should be drawn from operational test (OT) or Live Fire programs, because they tend to be better defined and more circumscribed; application to developmental test (DT) programs might be further off and less readily accomplished.

The Blue Ribbon Panel comments are summarized here:

· TPOM provides the promise of a rational approach to planning test and simulation strategies, particularly important in the Simulation, Test and Evaluation Process and Simulation Based Acquisition.

· The next step in the program development should be a pilot method applied to a real program in progress.

· Control the scope and applicability of the method to keep the development manageable.

· Do not overlook the value of failure mode detection as a key element of "Benefit".

The proposed next steps include convening two working groups (i.e., Methodology WG and Input Parameter WG) to consider the methodology and input parameter issues and to propose a few optimization methods and techniques for evaluation and prototyping. The pilot program would explore the use of these promising approaches.

This workshop was conducted as part of the Test Plan Optimization Methodology Evaluation research project.
