Background

In the United Kingdom, a major aerospace and defense manufacturer needed a system that would allow it to quickly analyze manufacturing and manufacturing repair data for a complex, highly engineered product. The product's value also demands minimal wastage: failed assemblies are diagnosed and reworked rather than scrapped. To complicate matters, the customer's test data was produced and stored in disparate formats (CSV, TXT, PDF, SQL, Access, and ATML, among several others). Key deliverables were to track manufacturing/repair touch time (a metric of rework) and DPU (defects per unit). By tracking these metrics, the customer would be able to reduce retest and rework costs and time.
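As a rough illustration of the two metrics involved, DPU is simply total defects divided by units tested, and touch time accumulates the rework effort spent per unit. The record fields and helpers below are hypothetical, not the customer's actual schema:

```python
# Hypothetical test records: each unit's observed defect count and the
# rework ("touch") time spent on it, in hours.
records = [
    {"unit": "SN001", "defects": 0, "touch_hours": 0.0},
    {"unit": "SN002", "defects": 2, "touch_hours": 3.5},
    {"unit": "SN003", "defects": 1, "touch_hours": 1.25},
]

def dpu(records):
    """Defects per unit: total defects / number of units tested."""
    return sum(r["defects"] for r in records) / len(records)

def total_touch_time(records):
    """Total rework touch time across all units, in hours."""
    return sum(r["touch_hours"] for r in records)

print(dpu(records))               # 3 defects over 3 units -> 1.0
print(total_touch_time(records))  # 0.0 + 3.5 + 1.25 -> 4.75
```

Driving both numbers down over time is what signals that retest and rework costs are actually falling.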

Transforming data into intelligence

Simply looking at yield, DPU, and touch time, however, did not provide sufficient detail for the customer to drive the desired process improvements. By using the prebuilt 'drill-down' failure rate reports in conjunction with the SPC reports (SPC histograms, X-bar R, and X-bar S charts, among others), the customer was, and still is, able to focus DPU, yield, and touch time investigations on the areas with the biggest impact on cost or throughput. By mining the source data for key attribute information, the customer was able to characterize results by test environment. The customer could then spot trends such as measurement variation between test stations, the impact of different versions of test software, and even variation introduced by test operators. By characterizing and drilling down from high-level yield, failure rate, and SPC reports to individual test record details, the customer identified improvement opportunities related to the test station, test procedure, material quality, training, or product design.
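To give a sense of how an X-bar R report flags variation such as station-to-station drift, here is a minimal sketch of the underlying control-limit calculation. The formulas and A2 constants are standard SPC textbook values; the sample measurements are invented, not the customer's data:

```python
# Minimal X-bar R control-limit calculation (standard SPC formulas).
# A2 factors for subgroup sizes 2-5, from standard SPC constant tables.
A2 = {2: 1.880, 3: 1.023, 4: 0.729, 5: 0.577}

def xbar_r_limits(subgroups):
    """Return (center line, LCL, UCL) for the X-bar chart.

    subgroups: list of equal-size measurement subgroups.
    """
    n = len(subgroups[0])
    xbar_bar = sum(sum(g) / n for g in subgroups) / len(subgroups)
    r_bar = sum(max(g) - min(g) for g in subgroups) / len(subgroups)
    return xbar_bar, xbar_bar - A2[n] * r_bar, xbar_bar + A2[n] * r_bar

# Hypothetical voltage measurements: four subgroups of three readings.
data = [[5.01, 5.03, 4.99], [5.02, 5.00, 5.04],
        [4.98, 5.01, 5.00], [5.03, 5.02, 5.01]]
cl, lcl, ucl = xbar_r_limits(data)
print(round(cl, 4), round(lcl, 4), round(ucl, 4))
```

A station whose subgroup means wander outside these limits is exhibiting more than common-cause variation, which is exactly the kind of signal that directed the customer's drill-down investigations.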

Normalizing the datasets

The customer utilized pre-built COTS analytics without customization. The trick was making sure the customer's data entered the system correctly in the first place. While the customer was undergoing a standardization transition from its disparate test data formats to the National Instruments TestStand version of ATML, it was critical that the legacy test data formats entered the same analytics database as the newer ATML data during that long-term transition. That meant intelligent data adapters had to be created that automatically interrogate and discriminate data file type and format, then execute the appropriate parsing methods to normalize the data for the IntraStage analytics engine. By developing a custom adapter for each critical legacy data format, IntraStage was able to provide a reliable, capable Extract, Transform, and Load (ETL) methodology. Coupled with IntraStage's industry-leading COTS analytics and proven database design, this provided the customer with high-value, on-demand data analysis that continues to drive process improvements at multiple secure locations.
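The adapter approach described above can be sketched roughly as follows. The file-type sniffing heuristics, parser names, and record shape here are illustrative assumptions, not IntraStage's actual implementation:

```python
import csv
import io

# Illustrative format-discriminating adapter dispatch (hypothetical, not
# the actual IntraStage code): sniff the payload, pick a parser, and
# normalize every record to one common shape for the analytics database.

def sniff_format(payload: bytes) -> str:
    """Guess the source format from the file's leading bytes."""
    head = payload.lstrip()[:64]
    if head.startswith(b"<?xml"):
        return "atml"  # TestStand ATML reports are XML documents
    if b"," in head.split(b"\n", 1)[0]:
        return "csv"
    return "txt"

def parse_csv(payload: bytes):
    """Parse a CSV test log into normalized measurement records."""
    for row in csv.DictReader(io.StringIO(payload.decode("utf-8"))):
        yield {"test": row["test"], "value": float(row["value"]),
               "status": row["status"]}

# Registry of parsers; adapters for txt, PDF, ATML, etc. would plug in here.
PARSERS = {"csv": parse_csv}

def normalize(payload: bytes):
    """Dispatch to the right parser and return normalized records."""
    return list(PARSERS[sniff_format(payload)](payload))

sample = b"test,value,status\nvoltage,5.01,PASS\n"
print(normalize(sample))  # [{'test': 'voltage', 'value': 5.01, 'status': 'PASS'}]
```

The key design point is that every adapter emits the same normalized record shape, so the downstream analytics never need to know which legacy format a measurement came from.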