Webinar: Compliance & Traceability ~ Problems Happen…Be Prepared for Them

IntraStage recently held a webinar on Compliance & Traceability…how to sleep better when mayhem happens. The content of the webinar focused on:

  • Why Traceability and Compliance are getting more important
  • Test Data and Database Best Practices
  • Demo of using IntraStage to rapidly look up Serial Numbers and pinpoint failure causes

We also have a whitepaper available on Compliance & Traceability.

Struggling with Paper-Based Test Data?

How would your car behave if its computer never got data from the engine sensors? Not very well, I imagine. But unlike a modern car, which has many sensors to determine its health, most electronics manufacturers lack a complete suite of test data "sensors" across ALL their manufacturing stations.

Today the dominant means of collecting test data on electronic products in manufacturing environments is ATE (Automatic Test Equipment). At IntraStage, though, probably 90% of our customers also have stations in manufacturing, whether repair, manual assembly, etc., where an ATE is not feasible or required. At these manufacturing stations, some operation is performed on the product and then a small amount of test data is recorded on paper.

While it is good that the test data is at least being recorded on paper, the difficulty is not being able to quickly aggregate and mine that data to determine true yields, SPC, rework, etc. A stack of paper sitting in some filing cabinet always seems to de-motivate people from doing that…go figure. 🙂

This is no longer a necessary situation, though: web-based technology can solve the problem by allowing test data to be recorded directly in paperless, web-based forms. While adoption has been slow, we feel it is inevitable that all test data will be digitized, and only then will a company have a complete view of its manufacturing metrics like yield, rework, and scrap.
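As a minimal sketch of what digitized records make possible (the record format and station names below are hypothetical, not IntraStage's actual schema), a few lines of code are enough to aggregate test results into a per-station yield:

```python
from collections import defaultdict

# Hypothetical digitized test records: one row per unit per station.
# With paper records, this aggregation would mean digging through filing cabinets.
records = [
    {"serial": "SN001", "station": "Manual Assembly", "passed": True},
    {"serial": "SN002", "station": "Manual Assembly", "passed": False},
    {"serial": "SN001", "station": "Repair",          "passed": True},
    {"serial": "SN003", "station": "Manual Assembly", "passed": True},
]

def yield_by_station(records):
    """Return pass count / total count for each station."""
    totals, passes = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["station"]] += 1
        passes[r["station"]] += int(r["passed"])
    return {station: passes[station] / totals[station] for station in totals}

print(yield_by_station(records))
# -> {'Manual Assembly': 0.666..., 'Repair': 1.0}
```

The same handful of digital records can just as easily feed SPC charts or rework counts once the paper step is removed.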

Webinar: Transforming Product Quality Test Data into KPI-Based Dashboards

IntraStage recently held a Dashboard Webinar on May 8, 2012. The content of the webinar focused on:

1. Why and How to use KPI Dashboards effectively

2. Dashboard Visual Design Best Practices

3. Building a customized IntraStage dashboard

Please check out the webinar!

A great resource, and one we believe is the gold standard on dashboard design, is Stephen Few’s book “Information Dashboard Design”.

NASA & the Challenger Disaster: The Importance of Test Data Visualization

I’m currently reading Jim Collins’ book, “How the Mighty Fall”, which describes how companies have to make difficult decisions with ambiguous data, and how hard that can be. He discusses the case of Iridium and Motorola: how an early-stage experiment progressed to a huge bet, and how the bankruptcy that followed might have been avoided. His point is that these decisions are hard because the data is rarely obvious to everyone at the time the decision is made.

To make his point, he references the book “The Challenger Launch Decision” by Diane Vaughan, and the events that led up to the decision to launch the Challenger in 1986. NASA had consulted a contractor about the cold conditions (between 25 and 30 deg F) under which the Challenger would need to launch, and whether the O-rings might have an issue. Interestingly, the contractor gave the opinion that it might not be safe, because the shuttle had never launched in such cold temperatures and the O-rings might fail and cause an explosion. The evidence cited was that the O-rings were often damaged at launches below 53 degrees. During a three-hour meeting, NASA engineers and managers argued amongst themselves about what to do, since there were also O-ring failures at 70 degrees and above, and there wasn’t any clear evidence that a launch would be unsafe.

Interestingly enough, it turns out that there WAS clear evidence available. It just wasn’t easy to visualize with NASA’s technology at the time. However, any IntraStage customer would tell you that what was needed was very easy to do in a tool like IntraStage. What they needed to chart was the number of O-ring failures vs. launch temperature. If they had produced this graph, they would have seen that EVERY launch below 66 degrees showed O-ring failures, and that this pattern diminished substantially above 66 degrees. But, as Collins summarizes, “no one laid out the data in a clear and convincing visual manner, and the trend toward increased danger in colder temperatures remained obscured throughout the late-night teleconference debate.”

The O-ring Task Force stated, “We just didn’t have enough conclusive data to convince anyone.” But the evidence was there, hidden in the test data, and NASA didn’t have the tools to visualize it. The visualization would have saved the lives of seven people, including Christa McAuliffe (the first civilian astronaut).
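For illustration, here is roughly what that missing chart takes with modern tooling. The data points below are placeholders meant to show the shape of the analysis, not the historical launch records:

```python
import matplotlib.pyplot as plt

# Placeholder data: (launch temperature in deg F, number of O-ring incidents).
# Illustrative values only, NOT the actual pre-Challenger launch records.
launches = [(53, 3), (57, 1), (63, 1), (66, 0), (70, 1), (70, 0),
            (72, 0), (75, 2), (78, 0), (80, 0)]

temps = [t for t, _ in launches]
incidents = [n for _, n in launches]

# The one plot that was never drawn: incidents against launch temperature.
plt.scatter(temps, incidents)
plt.xlabel("Launch temperature (deg F)")
plt.ylabel("O-ring incidents")
plt.title("O-ring incidents vs. launch temperature")
plt.show()
```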

That is the thing about good visualization tools. You never know exactly what report you’ll need, and you never know when you’ll need it. It might be an urgent teleconference where you are making a critical bet that, if wrong, could lead to a product recall or even loss of life. On those occasions, what you need are tools that let you, in real time, dive deep into the test data, pivot on all kinds of scenarios, and visualize trends.

Reliability Data in Life Testing – The Difficulty of Finding a Needle in a Haystack

I’m working on a project right now in which a consumer kitchen appliance is being life tested. To do so, the company has to put the product through the equivalent of 10-20 years of usage, but in a compressed timeframe (say, 4 months). This is the same process car companies use when they have professionals drive a car on difficult terrain 24/7 for several months, up to 100,000 miles or more. The goal is to quickly use the product for its design life and then see when failures start happening.

I’ve done software architecture on about 20 different life test projects in the last four years, with more than 15 different large OEMs, in five different industries (Medical Device, Cell Phone, Semiconductor, Aerospace Component, Consumer Electronics). While they are all different, certain elements are always the same.

The hardest thing is to figure out what specifically to measure, how to identify when something is a red flag, and when something should be called a FAILURE. Once things are identified as failures in a life test, everything gets easy. You plot the failures vs. the simulated years of life at which each failure occurred, often using a Weibull plot, and this information helps you predict your product’s quality. If you do this for 10 or 20 units of the same model, then you can more accurately predict how any given product (with this design and production process) will perform during its life.
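As a minimal sketch of that step (the failure times below are hypothetical, and this is a basic Weibull fit rather than a full life-data analysis), SciPy can fit and plot the distribution in a few lines:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Hypothetical failure times, in simulated years of life, for units of one model.
failure_years = np.array([6.2, 7.8, 9.1, 10.4, 11.0, 12.3, 13.5, 14.9, 16.2, 18.7])

# Fit a two-parameter Weibull distribution (location fixed at zero).
shape, loc, scale = stats.weibull_min.fit(failure_years, floc=0)

# Plot the fitted cumulative failure probability across the design life,
# with the observed failure times marked along the x-axis.
years = np.linspace(0, 20, 200)
plt.plot(years, stats.weibull_min.cdf(years, shape, loc, scale))
plt.scatter(failure_years, np.zeros_like(failure_years), marker="|")
plt.xlabel("Simulated years of life")
plt.ylabel("Cumulative probability of failure")
plt.title(f"Weibull fit: shape={shape:.2f}, scale={scale:.1f} years")
plt.show()
```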

As I said, the easy part is plotting the results after you have a failure. The hard part is figuring out what to call a failure, especially for a radically new product design where you don’t have real-world failure data to look at (BMW, for example, has many years of real data on how often its drivetrains have failed in customers’ hands). Once you have determined what constitutes a failure, you can figure out how to measure for that condition, and then much of the measurement and determination of failure can be automated with software and instruments.

Back to the root issue – what is a failure? In the case of the consumer product I’m dealing with now, one idea we are implementing is a “20 questions” kind of diagnostic for operators. As operators monitor the products during the four-month test, they may see certain things visually, and when that happens we want the system to track those issues. So, if an operator sees the display go blank, we want them to be able not only to report that the screen went blank, but also to take certain steps to see if the screen will come back. We want to track the steps they took, and whether they worked. A design engineer can then review this data and decide whether the issue constitutes a part failure that will show up on a Weibull plot.
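A minimal sketch of what one such operator record might look like (the field names, issue text, and example values are hypothetical, not this project’s actual schema):

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class OperatorObservation:
    """One '20 questions'-style diagnostic record, reviewed later by a design engineer."""
    serial: str
    issue: str                                          # e.g. "Display went blank"
    steps_taken: list = field(default_factory=list)     # diagnostic steps, in order
    recovered: bool = False                              # did the product come back?
    timestamp: datetime = field(default_factory=datetime.now)
    engineer_verdict: str = "pending"                    # later: "failure" or "no failure"

# Example: operator sees the display go blank, cycles power, and the unit recovers.
obs = OperatorObservation(
    serial="UNIT-042",
    issue="Display went blank",
    steps_taken=["Checked power cable", "Cycled power"],
    recovered=True,
)
```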

So, in these cases, trying to enumerate all of the possible issues (and the questions to ask when those issues arise) is a real challenge. When done right, the software and corresponding workflow process can successfully find those “needles in the haystack” and feed the information back to R&D, thereby improving product quality and customer satisfaction.