ISO 27001 and CMMC Level 3: Security Frameworks and Considerations

When IntraStage’s CyberSecurity Council evaluated the possible security frameworks, we chose an ISO 27001 compliance path for the following reasons:

  • Global recognition: The international acceptance of the standard helps us support our customers in the UK and other nations across the globe.
  • Framework for auditing: IntraStage is committed to constant improvement and proactive measures in all of our development and customer support. We treat security processes and procedures no differently, and the ISO 27001 framework encourages ongoing auditing and risk assessment.
  • Focus on full organizational data and processes: The ISO framework sets procedures and best practices for all of the organization’s data. While our primary consideration has been and always will be the security of our customers’ data, we also recognize that the security and continuity of all of our key business processes and procedures is critical to our ability to support our customers.

IntraStage has been keenly interested in the CMMC 2.0 evolution from a five-tier system to a three-tier system, and we continue to track the evolving security guidelines to make sure that our tools and processes meet the security needs of our DoD-related customers when that standard becomes mandatory for those contractors and their downstream vendors.

Reshoring Manufacturing: Visibility into Data Drives Cost, Quality and Delivery

Infographic showing ETL and databasing of information and performance data from global manufacturing

As domestic OEMs look to reshore manufacturing to the United States, it’s critical to remember that contract manufacturing always has to balance cost, quality, and throughput. With the recent difficulties in product delivery, companies like Ford are taking the leap and developing their own resources to manufacture key subcomponents.

When you’re looking at qualifying new suppliers or contract manufacturers, your engineers will need access to product and process data. The product data will help ensure quality; the process data will help maintain throughput by identifying bottlenecks and predicting yield and quantity. Gaining and sharing this kind of visibility (including in real time) has traditionally been seen as a cost, one that overseas contract manufacturers have absorbed by devoting extra engineering resources to gathering data, normalizing it, and sending it along to partners and OEMs.
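
As a rough illustration of what that gather-and-normalize step involves, here is a minimal sketch in Python. The raw field names, the shared schema, and the sample record are all hypothetical, not a description of any particular CM’s export format.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class NormalizedTestRecord:
    """One measurement in the shared schema that partners and OEMs agree on."""
    serial_number: str
    station: str
    measurement_name: str
    value: float
    low_limit: float
    high_limit: float
    passed: bool
    tested_at: datetime

def normalize(raw: dict) -> NormalizedTestRecord:
    """Map one raw record from a CM's export into the shared schema."""
    value = float(raw["reading"])
    low, high = float(raw["lo_lim"]), float(raw["hi_lim"])
    return NormalizedTestRecord(
        serial_number=raw["sn"].strip().upper(),
        station=raw["station_id"],
        measurement_name=raw["test_name"],
        value=value,
        low_limit=low,
        high_limit=high,
        passed=low <= value <= high,
        tested_at=datetime.fromtimestamp(int(raw["ts"]), tz=timezone.utc),
    )

# A raw record as it might arrive from a CM's line (hypothetical keys and values).
print(normalize({"sn": "abc123", "station_id": "ICT-02", "test_name": "vcc_3v3",
                 "reading": "3.28", "lo_lim": "3.20", "hi_lim": "3.40", "ts": "1700000000"}))
```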


With reshoring, it’s less likely that CMs will be able to devote full-time resources to this kind of reporting. In addition, OEMs will need better visibility into supply and contract manufacturing resources in order to prevent their own line stoppages due to insufficient quantity or quality of components.

How are you working with your manufacturing partners to improve data visibility and a cooperative, mutually efficient manufacturing process?

Digital Twin Fundamentals: Start With Your Goal

Any time you’re thinking of deploying a manufacturing intelligence platform and a full digital twin, the first step is to define your goal.

I recently watched a webinar from the Digital Twin Consortium titled ‘Unlock Transformative Business Outcomes with Digital Twin Fundamentals’. According to the Consortium, part of the definition of a digital twin is that ‘…digital twins use real-time and historical data to represent the past and present and simulate predicted futures.’

Historical data in manufacturing has clear value: traceability, compliance, and issue root-cause analysis are all driven from data that have already been gathered and normalized. For predictive insights, it’s critical to remember that building an algorithm requires extensive historical data as well. The best way to predict a future issue is to examine a related issue in the past.
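
To make that concrete, here is a minimal sketch of what ‘building an algorithm’ from history can look like, assuming a hypothetical table of per-unit records carrying a firmware version, a tester software version, and a pass/fail outcome. The point is simply that a predictive model is a function fit to data you have already stored; real use would require far more history than this toy example.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: one row per unit tested.
history = pd.DataFrame({
    "firmware":  ["1.2", "1.2", "1.3", "1.3", "1.3", "1.4", "1.4", "1.4"],
    "tester_sw": ["A",   "A",   "A",   "B",   "B",   "B",   "A",   "B"],
    "passed":    [1,     1,     0,     1,     0,     1,     1,     0],
})

# One-hot encode the categorical factors so the model can use them as features.
X = pd.get_dummies(history[["firmware", "tester_sw"]])
y = history["passed"]

# Fit a simple pass/fail predictor; with real data you would want far more
# history than these eight rows before trusting any prediction.
model = LogisticRegression().fit(X, y)

# Estimate the pass probability for a unit on firmware 1.4 tested with software B.
new_unit = pd.get_dummies(pd.DataFrame({"firmware": ["1.4"], "tester_sw": ["B"]}))
new_unit = new_unit.reindex(columns=X.columns, fill_value=0)
print(model.predict_proba(new_unit)[0][1])
```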

For instance, if your goal is to improve yield using predictive analytics, you should examine what factors could affect yield, and how you currently measure and capture those factors. The easiest place to start is how you’ve identified and resolved yield issues in the past. If your yield has been affected by, say, bad non-serialized components, outdated software on a tester, or a specific firmware version on a product, what evidence in the past told you those were the root cause of yield issues? How did you prove that those non-serialized parts were failing early and often? How do you log and attribute that information?
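
One lightweight way to make that attribution possible is to store those factors on every test record and let failure rates fall out of a simple aggregation. The sketch below assumes a flat table of results with hypothetical column names; it illustrates the logging discipline rather than any particular tool.

```python
import pandas as pd

# Hypothetical test results, each logged with the factors we may later need to blame.
results = pd.DataFrame({
    "serial":        ["U1", "U2", "U3", "U4", "U5", "U6"],
    "component_lot": ["L7", "L7", "L8", "L8", "L7", "L8"],
    "firmware":      ["1.3", "1.3", "1.3", "1.4", "1.4", "1.4"],
    "passed":        [False, True, True, True, False, True],
})

# Failure rate by component lot and by firmware version. If a factor was never
# logged, this table simply cannot be built, which is the point of storing it.
for factor in ["component_lot", "firmware"]:
    failure_rate = 1 - results.groupby(factor)["passed"].mean()
    print(failure_rate.rename(f"failure_rate_by_{factor}"), end="\n\n")
```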

You can’t improve what you can’t measure; you also can’t measure what you don’t store.

Can Your Historical Performance Data Drive Better NPI?

Heatmap overlay showing failures by component, commodity level, and supplier. Applying this insight to historical components and configurations drives faster NPI for new assemblies.

Can You Use Historical Data to Drive Better New Product Design?

65 percent of electronics manufacturers are facing global shortages, especially of integrated circuits. Even older IC designs used in commercial applications like autos are being severely impacted by this shortage. OEMs need to be able to rapidly iterate on their New Product Introduction (NPI) process in order to verify that new design iterations built around the components that are available will work as expected. By using historical performance data on existing models and components, an electronics OEM can simulate how a new product design and its requirements would perform.
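
One simple form of that simulation is to replay historical parametric measurements against the new design’s proposed limits. The sketch below is a minimal illustration; the measurement name, readings, and limits are hypothetical.

```python
import pandas as pd

# Historical parametric measurements from an existing model that reuses the same part.
history = pd.DataFrame({
    "measurement": ["vout"] * 6,
    "value":       [3.29, 3.31, 3.34, 3.30, 3.38, 3.27],
})

# Proposed limits for the same measurement on the new design.
new_limits = {"vout": (3.25, 3.35)}

# Replay: what fraction of historical readings would have passed the new limits?
lo, hi = new_limits["vout"]
vals = history.loc[history["measurement"] == "vout", "value"]
simulated_yield = ((vals >= lo) & (vals <= hi)).mean()
print(f"Simulated yield against new limits: {simulated_yield:.0%}")
```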

A prime requirement for this kind of efficient NPI is having the relevant data fused into a single source of truth. To fuse this kind of critical data, electronics manufacturers will need to integrate data from manufacturing, field, and rework cycles.
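
As a minimal sketch of what that fusion can look like, assuming the three systems can at least agree on a unit serial number (the table and column names below are hypothetical):

```python
import pandas as pd

# Hypothetical extracts from three separate systems, keyed by unit serial number.
manufacturing = pd.DataFrame({"serial": ["U1", "U2", "U3"],
                              "final_test_passed": [True, True, False]})
field = pd.DataFrame({"serial": ["U2"], "field_return_reason": ["no power"]})
rework = pd.DataFrame({"serial": ["U3", "U2"], "rework_action": ["reflow U5", "replace PSU"]})

# Fuse on the serial number into one record per unit; left joins keep units
# that never came back from the field or never needed rework.
fused = (manufacturing
         .merge(field, on="serial", how="left")
         .merge(rework, on="serial", how="left"))
print(fused)
```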

Insight into these failure and rework metrics from known components and suppliers can be applied to the new model design, giving NPI engineers visibility into how new designs using these components will perform, down to the parametric level. With this insight, engineers can estimate yield, anticipate future production issues, and optimize the design of the new product.
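
As a back-of-the-envelope example of that yield estimate, assuming independent component failures and hypothetical per-component failure rates drawn from historical data for the reused parts:

```python
# Hypothetical historical failure rates for the components reused in the new design.
component_failure_rates = {
    "U5 regulator": 0.010,
    "J2 connector": 0.004,
    "C17 capacitor": 0.001,
}

# Assuming independent failures, the expected first-pass yield of the new assembly
# is the product of each component's pass rate.
expected_yield = 1.0
for rate in component_failure_rates.values():
    expected_yield *= (1.0 - rate)

print(f"Estimated first-pass yield: {expected_yield:.1%}")  # roughly 98.5%
```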

Challenge

  • Normalizing product lifecycle data as inputs to Design and NPI
  • The inherent complexity of test and repair data
  • Normalizing and aggregating defects
  • Linking PLM/CAD/MES data together
  • Integrating field service or MRO repair data with other performance data sets

Solution

  • Closed-loop DFM (Design for Manufacture) system
    • Real-time predictive yield simulation analytic
  • Component attributes included in the data model
  • Defect attributes included in the data model
  • Characterization and metrics on supplier, cost, availability
  • Leveraging Enterprise PPM modeling w/BOM
  • Closed-loop DFR (Design for Reliability) data model at:
    • Component level
    • Module level
    • System level

Benefits

  • Accelerated NPI, MFG, revisions
  • Optimized component selection
  • Optimized reliability
  • Improved service margin/profitability
  • Increased Engineering manpower efficiency
  • Reduced scrap
  • Increased RTY and throughput

Assembly and Disassembly: The Full Digital Twin

Optimize Remote Manufacturing Data: Ensure Process Quality with a Complete Digital Twin

In order to gain full visibility into the genealogy of complex electronics, you’ll need to track the assembly, disassembly, and rework cycle of parent units (higher-level assemblies) and child units (lower-level assemblies), and be able to drill down to the components on each of those units.
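
A minimal sketch of the genealogy structure this implies is below. The class, part numbers, and serials are hypothetical; a production system would also record timestamps, operators, and rework dispositions against each attach and detach event.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Unit:
    """One serialized assembly; children are lower-level assemblies or components."""
    serial: str
    part_number: str
    children: List["Unit"] = field(default_factory=list)

    def attach(self, child: "Unit") -> None:
        """Assembly: record that a child unit was installed on this parent."""
        self.children.append(child)

    def detach(self, serial: str) -> "Unit":
        """Disassembly/rework: remove a child so it can be reworked or reused elsewhere."""
        for i, child in enumerate(self.children):
            if child.serial == serial:
                return self.children.pop(i)
        raise KeyError(serial)

    def flatten(self, depth: int = 0) -> None:
        """Drill down from the parent through every component below it."""
        print("  " * depth + f"{self.part_number} ({self.serial})")
        for child in self.children:
            child.flatten(depth + 1)

# Hypothetical example: a top-level unit with one child board carrying one component.
top = Unit("SYS-001", "system")
board = Unit("PCB-042", "controller-board")
board.attach(Unit("IC-993", "fpga"))
top.attach(board)
top.flatten()
```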

BlackBelt Fusion gives you full traceability of your complex electronics product, from the highest-level assembly down to the most basic component. With this digital twin, you can improve your product quality and process fidelity.