Remotely Monitoring Supplier Data and Manufacturing

With the disruptions to global manufacturing caused by the coronavirus, we understand that manufacturers need the ability to remotely monitor manufacturing velocity, yield, and quality at sites worldwide.

At IntraStage, our customers in the complex electronics industry already use our ability to push and pull data from remote datasets and databases into a normalized system for full product quality visibility. With that visibility, our customers can monitor data from overseas or domestic suppliers, contract manufacturers, and OEM sites, and remotely help diagnose root-cause issues in yield, performance, quality, and throughput.

We encourage our current and prospective customers to take concrete and reasonable steps to reduce the risk of infection.


Accelerating the Smart Factory: IntraStage Adopts the IPC-CFX Standard

IntraStage is pleased to announce its adoption of, and participation in, the new Connected Factory Exchange (IPC-CFX) global standard.

IPC-CFX, a free and open-source standard developed by the Connected Factory Initiative, securely connects automated and manual Industry 4.0 systems and machines. By providing a simple standard for machines and manual processes on the manufacturing line, it helps manufacturers implement machine-to-machine interfaces with secure, open, omnidirectional communication. With Industry 4.0 rapidly becoming a priority, manufacturers can jump-start the transformation of their manufacturing lines by leveraging IPC-CFX output and processes to gain better visibility, lower costs, and simpler data capture.
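
Under the hood, CFX messages are JSON payloads exchanged over AMQP 1.0. As a rough illustration of what consuming one looks like, here is a minimal Python sketch; the envelope and field names below are illustrative stand-ins, not a verbatim rendering of the CFX specification.

```python
import json

# A hypothetical IPC-CFX-style payload. Real CFX messages are JSON bodies
# carried over AMQP 1.0; the envelope and field names here are illustrative.
raw_message = """
{
  "MessageName": "CFX.Production.TestAndInspection.UnitsTested",
  "Source": "SMT-LINE-1.ICT-02",
  "TimeStamp": "2020-03-02T14:07:31Z",
  "Body": {
    "TestMethod": "InCircuitTest",
    "TestedUnits": [
      {"UnitIdentifier": "SN-0001942", "OverallResult": "Passed"},
      {"UnitIdentifier": "SN-0001943", "OverallResult": "Failed"}
    ]
  }
}
"""

def handle_units_tested(envelope: dict) -> None:
    """Route one CFX-style test message into a downstream quality system."""
    for unit in envelope["Body"]["TestedUnits"]:
        # In a real integration this record would be written to the
        # normalized database rather than printed.
        print(envelope["Source"], unit["UnitIdentifier"], unit["OverallResult"])

envelope = json.loads(raw_message)
if envelope["MessageName"].endswith("UnitsTested"):
    handle_units_tested(envelope)
```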

IntraStage’s initiative to seamlessly adopt IPC-CFX-compliant manufacturing data will give both current and new customers faster deployments that require fewer engineering resources, while providing more complete visibility of the full manufacturing floor, better asset tracking, and faster issue resolution. Leveraging the data generated by IPC-CFX-compliant equipment will unlock new improvements in manufacturing process control and data analysis.

To learn more about IPC-CFX, visit https://cfx.ipc.org/html/faq.htm


Achieving Success with a Phased Approach

Any enterprise quality management system deployment has lofty goals and a significant impact on business processes and value. That’s why it’s critical to implement in a staged approach: achieving incremental milestones based on shared deliverables that produce immediate business results, while future-proofing for further adoption.

The first step is to understand the business value of the data quality system. Using the IntraStage gap analysis team, enterprise-level customers can:

  • Identify the data silos with the highest ROI in quality improvement, process improvement, and yield improvement
    • Interviews and data analysis help pinpoint where the largest quality, process, and yield improvements are available
  • Identify the data silos with the lowest technical and process barriers to ETL
    • Raw data requires conversion and normalization into the COTS IntraStage database and system. An easy way to achieve this conversion is with one of our stock adapters (see the sketch after this list). Manual input processes can be captured via Paperless Manufacturing
  • Identify the analytics that have the highest impact in cost/labor savings
    • One factor is how long it currently takes to produce a report. Another is the expected business value (process improvement, yield improvement) that automated production of the report will deliver.
  • Architect the data ETL process so that current and future data integrates seamlessly with other data and processes
    • Understanding how individual datasets integrate with other datasets is important for understanding the full product life-cycle.
  • Map out product genealogies and life cycles
    • Understand which datasets are critical for go-live, and which will be implemented in further phases
  • Identify data silos that could be enriched (push-pull via API) with IntraStage data or vice-versa
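
To make the adapter step concrete: a stock adapter’s job is to convert a site-specific raw format into one normalized record shape. A minimal sketch, assuming a hypothetical CSV tester log (the column names, values, and limits are invented for illustration):

```python
import csv
import io

# Hypothetical raw output from a functional tester. Every adapter's job is
# the same: convert a site-specific format like this into one common shape.
raw_log = """serial,step,value,low,high
SN-0001942,5V_RAIL,5.02,4.75,5.25
SN-0001942,3V3_RAIL,3.41,3.14,3.47
"""

def normalize(rows):
    """Map raw tester rows onto a common measurement record."""
    for row in rows:
        value, low, high = (float(row[k]) for k in ("value", "low", "high"))
        yield {
            "unit_serial": row["serial"],
            "measurement": row["step"],
            "value": value,
            "low_limit": low,
            "high_limit": high,
            "passed": low <= value <= high,
        }

records = list(normalize(csv.DictReader(io.StringIO(raw_log))))
print(records)
```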

From those results, the team can develop the project plan and scorecard, where milestones and deliverables are ranked not only by ease of implementation, but also by the business value inherent in achieving them.
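
One simple way to operationalize that ranking is a weighted score per milestone. A toy sketch follows; the milestones, scores, and weights are placeholders, not IntraStage recommendations.

```python
# Toy scorecard: rank candidate milestones by a weighted blend of
# implementation ease and expected business value (1-10 scales).
EASE_WEIGHT, VALUE_WEIGHT = 0.4, 0.6   # tune to the program's priorities

milestones = [
    {"name": "Ingest ICT logs via stock adapter", "ease": 9, "value": 7},
    {"name": "Paperless forms for manual rework", "ease": 7, "value": 6},
    {"name": "Supplier API push-pull enrichment", "ease": 4, "value": 9},
]

def score(m: dict) -> float:
    return EASE_WEIGHT * m["ease"] + VALUE_WEIGHT * m["value"]

for m in sorted(milestones, key=score, reverse=True):
    print(f"{score(m):.1f}  {m['name']}")
```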

Data capture into the IntraStage database hinges on a few technical factors: the basic format of the raw source data, its shape, and its relationship to other, related datasets.

An example of a successful staged approach is starting with a single dataset for adaptation, or with paperless forms.

Defining the Modern Test Data Warehouse

Data warehouses for electronics manufacturing are quickly becoming a focal point of IT effort and funding. And considering the potential benefits for a manufacturing enterprise, it’s not difficult to understand why. When data from multiple sources becomes necessary for critical business decisions, it’s best to aggregate and store the data in a central repository for security, reliability, and availability reasons. A data warehouse can provide:

  • A centralized repository of disparate data sets, each of which can be derived from different silos.
  • An architecture that provides correlation and normalization between the different data sets from the different data silos.
  • On-demand deep dive analytics so the disparate data can immediately be analyzed in-depth without having to write a new query.
  • Analytics-agnostic access: different users can use different analytical tools (Tableau, Minitab, Excel, PowerBI, etc.) to create reports and visualizations on the normalized data.
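
To make the repository idea concrete, here is a minimal illustrative schema in SQLite, following the Event/Test/Measurement hierarchy described later in this document. This is a sketch for intuition only, not IntraStage’s actual data model.

```python
import sqlite3

# One table per level of an event/test/measurement hierarchy.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE event (
    event_id    INTEGER PRIMARY KEY,
    unit_serial TEXT NOT NULL,
    station     TEXT NOT NULL,
    started_at  TEXT NOT NULL            -- ISO-8601 timestamp
);
CREATE TABLE test (
    test_id   INTEGER PRIMARY KEY,
    event_id  INTEGER NOT NULL REFERENCES event(event_id),
    name      TEXT NOT NULL,
    passed    INTEGER NOT NULL           -- 0/1
);
CREATE TABLE measurement (
    measurement_id INTEGER PRIMARY KEY,
    test_id        INTEGER NOT NULL REFERENCES test(test_id),
    name           TEXT NOT NULL,
    value          REAL,
    low_limit      REAL,
    high_limit     REAL
);
""")
print("schema created")
```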

[Infographic: ETL and databasing information and performance from global manufacturing]

1. Centralized Data Repository

Users no longer need to know where each data set is located, hold separate security credentials, or follow a different query process for every individual data set. Easy access to data from across the enterprise allows for quicker analysis and accomplishment of business objectives.

2. Data Normalization

Normalizing data is critical for repeatable results and analysis. With uncorrelated, un-normalized data, users cannot easily and quickly organize and analyze data from different datasets, and analytic results can’t easily be collaborated on, corroborated, and validated.

3. On Demand Deep Dive Analytics

When problems occur in manufacturing, time is of the essence. Every hour spent developing queries is an hour the production line could be stopped, resulting in massive revenue loss. It’s critical that the data be accessible, usable, and support quick drill-down into the details of correlated data points.
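
For intuition, here is what a drill-down over already-normalized records can look like: from station-level failure counts straight to the individual failing measurements, with no new query to author. The records are invented for illustration.

```python
from collections import Counter

# Normalized measurement records (invented data for illustration).
records = [
    {"station": "ICT-02", "serial": "SN-0001943", "measurement": "3V3_RAIL",
     "value": 3.02, "low": 3.14, "high": 3.47},
    {"station": "ICT-02", "serial": "SN-0001951", "measurement": "3V3_RAIL",
     "value": 3.09, "low": 3.14, "high": 3.47},
    {"station": "ICT-01", "serial": "SN-0001960", "measurement": "5V_RAIL",
     "value": 5.01, "low": 4.75, "high": 5.25},
]

failures = [r for r in records if not r["low"] <= r["value"] <= r["high"]]

# Level 1: which station is producing the failures?
print(Counter(f["station"] for f in failures))   # Counter({'ICT-02': 2})

# Level 2: dive into the correlated detail behind the worst station.
for f in failures:
    if f["station"] == "ICT-02":
        print(f["serial"], f["measurement"], f["value"])
```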

4. Agnostic Access

A data scientist may use complex data analysis processes and applications, but that doesn’t mean that his or her analysis will result in more value than a spreadsheet or pivot created by someone with solid domain knowledge of the source data.

About IntraStage

IntraStage is a data warehouse uniquely suited to manufacturing data. A data warehouse requires normalization, which means the architects of the warehouse’s data model need to understand the source data and how it will be reported on and analyzed. This is not a trivial task, as it requires extensive domain knowledge of manufacturing processes and data. Rather than trying to build indexes on top of data, it’s critical to know from the beginning how reporting and deep-dive analysis will be conducted, and what the primary business cases will be. The architecture needs to account for these business cases from the lowest level of detail up (what am I doing with that data, and how does that data need to be structured and correlated to accomplish that analysis?) to ensure accuracy and speed.

In addition, by keeping IntraStage Read tables separate from IntraStage Write tables, reporting and analytics can run while data is being imported, resulting in better data flow and analysis without contention between the two sets of tables.

A high-throughput data warehouse for complex electronics manufacturers, with best-in-class pre-built reports and a design that works holistically with other analytic tools, is available today. Contact us to learn more and to receive a demo.

Metadata: Characterize, Define, and Analyze

The Value in the Details: Best Practices for Manufacturing Metadata

Metadata is defined as the characteristics of product performance, or the characteristics of the process that produced it.

These metadata points, or attributes of test, are useful indicators of quality, product performance, or process compliance. Metadata, as opposed to the information captured in a data model (which, by definition, is limited and static to enable database storage and warehousing), has values that vary over time. In addition, analysis of the data will create a need to capture more metadata points.

As an example, a test engineer in NPI could be studying the performance of an RF product. By running individual parametric measurements repeatedly under different environmental or performance conditions, the engineer would determine performance limits, test configuration, and design specification. Over the course of testing, the engineer could discover new conditions affecting product or process performance that need to be captured and analyzed but weren’t originally written into the test code. As a result, metadata points must be extensible and flexible.

Engineers use metadata to characterize the performance of a product. The types of metadata captured at different stages of a product’s life-cycle (R&D vs. NPI vs. Production vs. MRO/RMA) will necessarily differ, reflecting the differing needs of the engineers responsible for product performance and data analysis at each stage.

Metadata typically falls into a few categories:

  • Descriptions of the source of the data
    • Tester location
    • Environmental conditions (temperature, humidity, etc)
    • Hardware revision
    • Hardware ID
    • Software Revision
  • Descriptions of how the data was collected
    • Tester calibration
    • RF testing: bandwidth
    • RF testing: frequency
  • Descriptions of how the data performed
    • Loop count of a measurement
    • XY coordinates of failures
  • Descriptions of what processes were done in response to data performance
    • Failure descriptions
    • Rework descriptions

In the IntraStage data model, metadata can occur at any of three levels: Event, Test, and Measurement. In addition, since data can come from anywhere in the product’s life-cycle, it’s critical that metadata is mapped in a way that allows an investigation starting at any point in the life-cycle to draw on data from any other available stage.
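
One common way to keep metadata extensible at all three levels is a key-value bag attached to each record, so new conditions can be captured without changing the schema. A minimal sketch, with invented field names and values:

```python
from dataclasses import dataclass, field

# Extensible metadata attached at each of the three levels named above.
# The key-value bags can grow as engineers discover new conditions to
# capture; all field names and values here are illustrative.
@dataclass
class Measurement:
    name: str
    value: float
    metadata: dict = field(default_factory=dict)   # e.g. loop count

@dataclass
class Test:
    name: str
    measurements: list
    metadata: dict = field(default_factory=dict)   # e.g. tester calibration

@dataclass
class Event:
    unit_serial: str
    tests: list
    metadata: dict = field(default_factory=dict)   # e.g. tester location

m = Measurement("3V3_RAIL", 3.41, metadata={"loop_count": 12})
t = Test("PowerRails", [m], metadata={"calibration_date": "2020-02-11"})
e = Event("SN-0001942", [t], metadata={"tester_location": "Penang Line 3",
                                       "humidity_pct": 41})
print(e.metadata, t.metadata, m.metadata)
```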

Related Topics, Examples, and Links

  • Lower Costs, Faster Production Time: analyzing data from across the product lifecycle
  • WiP and Cycle Times: intelligence from supplier subcomponents, CM lower-level assemblies, and higher-level assemblies
  • New Product Introduction: normalizing and characterizing RF data with hundreds of parameters (Motorola case study)
  • On-demand characterization of complex RF data
  • Turning Data Into Action: how Medtronic integrated BlackBelt to gain a full picture of their KPIs
  • SPC Insight: full-system monitoring of product quality and process efficiency (case study)