As products continue to increase in complexity and global competition becomes more intense, manufacturers face daunting challenges, including achieving profitability, time to market and product differentiation, as well as meeting the needs of a shrewd buying community.
As such, many manufacturers today rely upon performance-indicating data to assess the risks involved in releasing new products. Performance-indicating data can include computer-aided engineering (CAE) results, test data, legacy information, knowledge databases and information from other sources. However, the exponential growth of such data, along with other product-engineering information, has made it difficult to gain insight into and isolate product risks, and to share information with all enterprise stakeholders to support their decision-making.
What companies require is engineering intelligence (EI), or the ability to gather, process, analyze and present data and information about engineering-performance metrics. EI empowers decision-makers with relevant and actionable information, enabling them to make better engineering decisions in the context of overall business performance.
Most product development organizations leverage the state-of-the-art in CAE technology.
This has resulted in some unique challenges in acquiring and presenting key performance indicators (KPIs) resulting from thousands of virtual or physical tests.
Modern high-performance computing resources have enabled the use of highly sophisticated modeling techniques that generate large volumes of data. The KPIs extracted from such analyses are of high value to decision-makers. Yet KPI extraction today can be quite informal, usually dependent upon the analyst's preferences. This lack of standardized KPI extraction and reporting makes it difficult to produce concise reports. Combined with the absence of a formal classification of validation data, it becomes hard to present KPI reports that clearly point out problem areas, jeopardizing timely and correct decision-making.
The principal reason for the absence of formal classification of data is a general lack of systems tailored for managing, organizing and categorizing product performance data. While computer-aided design (CAD) and bill of materials (BOM) data have been managed in product data management (PDM) systems for a number of years, management of engineering-performance data has been largely left to CAE and testing teams. Existing PDM or enterprise resource planning (ERP) systems are not particularly suited for managing such data. In short, the lack of a data warehouse for product performance data is the main roadblock to applying business intelligence (BI) technology in this domain.
Approaching data management and organization as a best effort rather than a best practice does not result in data that is actually managed or organized. Mandating the use of product performance data management systems is critical to meeting the end goal of producing such data in the first place: bringing high-performance, high-quality products to market.
Requirements Driving Product Development
Product design encompasses considerations from a variety of requirements such as packaging, performance, manufacturing, cost of production, service and warranty claims. With increasing reliance on CAE and virtual testing for performance predictions and manufacturability issues, execution of a standard design validation plan with clear definition of performance measures becomes critical.
Lack of formal data organizing practices, exacerbated by geographically distributed work locations, makes it difficult to access KPIs on demand during development stages. This can easily cause problems to be addressed incorrectly and not in a timely manner, increasing the risks of taking a nonperforming or underperforming product to market.
Getting high-performance products to market in the shortest possible time demands rapid decision-making capabilities throughout the product development process. Timely access to data from the execution of various validation plans is needed.
Evaluating Product Performance
A variety of performance and manufacturability requirements are captured formally in a well-defined design validation plan. This plan calls out specific engineering targets to be achieved, from the complete system down to the component level. The plan also specifies validation methodology, calling for physical and virtual validation methods. Each validation has clear objectives and defines the required inputs (system characteristics or part geometry) and expected load cases in service, along with the exact KPIs.
Advances in CAE have also enabled design synthesis, which results in predicting optimized geometry for parts, given a specific load case and expected performance targets. Performance targets could be weight, compliance, fatigue life, etc. System optimization is also driven by performance requirements.
Accurate decision-making requires timely access to information and results from the execution of validation plans. In almost all environments, such information is available but generally resides on analysts' computers in different locations. The first step toward reliable access to the KPIs is to structure all validation activities using a logical classification of validations.
Logical Organization of Validation Data
When a validation, a test or an optimization is performed on any product system or component, the engineers performing such activities can completely describe all characteristics of the data. The engineers can easily identify the following:
Specific system or component being analyzed,
Load case being evaluated and
Measures resulting from the validation (KPIs).
In addition, the name of the user, the software used, the geographic location, start and finish times, etc., can easily be gleaned from users' computer desktops. In short, classifying each and every activity in a formal, logical manner is realistic, not just a possibility. Such a structure lends itself to acquiring the resulting KPIs efficiently and consistently.
The main goal here is to define a formal way of completely describing validation tasks as they suit the organization.
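As a minimal sketch, such a formal description could be captured in a simple record structure. The field names below are hypothetical illustrations of the characteristics listed above, not an actual product schema:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ValidationRecord:
    """One formally classified validation, test or optimization run.
    Field names are illustrative; each organization defines its own scheme."""
    system: str                 # specific system or component being analyzed
    load_case: str              # load case being evaluated
    kpis: Dict[str, float]      # measures resulting from the validation (KPIs)
    analyst: str = ""           # name of the user performing the work
    software: str = ""          # tool used to run the validation
    location: str = ""          # geographic location of the work

# Example record for a single virtual validation.
rec = ValidationRecord(
    system="front suspension",
    load_case="pothole impact",
    kpis={"max_stress_MPa": 412.0, "fatigue_life_cycles": 2.1e6},
    analyst="j.doe",
    software="FEA solver",
    location="Detroit",
)
```

Because every run is described with the same fields, the resulting KPIs can be harvested mechanically rather than re-discovered from ad hoc report files.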
Product Performance Dashboards
If validations are classified in a formal way, it is easy to capture all the required KPI data, logically organized to produce tailored product-performance dashboards for various stakeholders throughout the enterprise. Decision-makers can then quickly navigate through the various classification dimensions (including models, model year, components and load cases), seeking out specific problem areas with the ability to get to the bottom of issues in a very timely manner. Product performance dashboard technology, such as that shown in Figure 1, can easily be integrated with commercial and custom database management systems, enabling role-based, real-time access to all product performance KPI data.
Users can apply a variety of filters based on validation classification to narrow or broaden search parameters. A typical example could be seeking all validation failure data for a given model or model year from a certain geographic location for a specific subsystem. The application lets users drill down to get more detailed views of other related KPIs extracted for a failed load case. Further drill-downs into the actual simulation or test data can be easily enabled by providing hotlinks into the underlying, structured data repository of product performance data.
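A filtered query of the kind described above can be sketched in a few lines. The record fields here (model_year, location, subsystem, passed) are assumed for illustration:

```python
# Hypothetical flat records of classified validation results.
validations = [
    {"model_year": 2024, "location": "Detroit", "subsystem": "brakes",  "passed": False},
    {"model_year": 2024, "location": "Munich",  "subsystem": "brakes",  "passed": True},
    {"model_year": 2023, "location": "Detroit", "subsystem": "chassis", "passed": False},
]

def failures(records, **criteria):
    """Return failed validations matching every supplied classification filter."""
    return [
        r for r in records
        if not r["passed"]
        and all(r.get(k) == v for k, v in criteria.items())
    ]

# Seek all validation failures for model year 2024, Detroit, brakes subsystem.
hits = failures(validations, model_year=2024, location="Detroit", subsystem="brakes")
```

Each hit could then carry a hotlink into the underlying structured repository, enabling the drill-down into detailed simulation or test data described above.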
The software's applications can be exercised to produce a multitude of comparison charts across different dimensions of data without the need to develop any code. The tool's reporting graphical user interface (GUI) provides direct access to all dimensions captured within the solution's multidimensional model.
Role-Based Access Requirements
Product performance data are generated by engineers and analysts, used by design release engineers or product designers, and accessed by decision-makers to ensure a good-quality product is released to the market. Based on the role of an individual, the depth and breadth of data access are remarkably different.
An analyst is typically concerned with a specific component or subsystem, and deals with modeling techniques and very granular details of a specific validation. The design release engineer might typically be interested in a pass/fail evaluation for a component resulting from all validations conducted on that component. The view for the design release engineer is broader, spanning KPIs from all validations, but shallower, accessing KPIs rather than highly granular post-processed reports. Personnel with product release responsibilities need such data across multiple product lines around the globe.
Product performance dashboards satisfy users across the board by making the required data available through slicing and dicing of the data cube. An analyst, for example, could quickly compare KPIs from a specific validation across all development phases and multiple iterations, and also compare current KPIs with historical data or data from competitive products.
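The cross-phase comparison described above amounts to grouping one KPI along a classification dimension. A minimal sketch, with invented sample values:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical KPI samples: (development_phase, iteration, kpi_value).
samples = [
    ("concept", 1, 430.0), ("concept", 2, 415.0),
    ("detail",  1, 402.0), ("detail",  2, 398.0),
]

def kpi_by_phase(rows):
    """Average a KPI per development phase, as a dashboard comparison chart would."""
    buckets = defaultdict(list)
    for phase, _iteration, value in rows:
        buckets[phase].append(value)
    return {phase: mean(values) for phase, values in buckets.items()}

trend = kpi_by_phase(samples)  # e.g. {"concept": 422.5, "detail": 400.0}
```

The same grouping applied over model year, location or subsystem yields the other comparison views; a BI tool performs this aggregation over the multidimensional cube rather than in application code.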
BI software enables this narrowing or broadening of focus to provide exactly what is desired by accessing this multidimensional structure through a few mouse clicks. Access to all up-to-date, critical information is at the user's fingertips, all the time. With EI in hand, all stakeholders remain informed about KPIs and potential product risks. They can drill down into information to gain insight and isolate problems. They can also perform sophisticated analyses, generate reports and collaborate with colleagues. Overall, this results in smarter business decisions.