Manufacturing Analytics
Assembly Line
LS ELECTRIC uses data to optimize power consumption with Sight Machine and Microsoft Cloud for Manufacturing
LS ELECTRIC, in collaboration with Sight Machine and Microsoft, is revolutionizing manufacturing sustainability and productivity through data optimization. By standardizing unstructured data, LS ELECTRIC gains detailed insight into energy consumption, cutting power use by 20% on some manufacturing lines, and integrates its systems for better decision-making. The partnership leverages Microsoft Cloud for Manufacturing and Microsoft Copilot to drive efficient, competitive, and sustainable operations.
LS ELECTRIC has already implemented the new solution in one of its factories and plans to roll it out to multiple facilities. The early results have been impressive: streamlined issue resolution, improved resource optimization, and greater agility in implementing and adapting to new inputs. By monitoring energy usage per production line, Dr. Wook-Dong Cho reports, “LS ELECTRIC has reduced power consumption by about 20% on certain manufacturing lines.”
Transforming Remote Monitoring with Advanced Analytics
Decoking—the removal of coke deposits from the internal surfaces of furnaces and reactors—is a vital process for maintaining efficient and safe operations. Although the exact set varies by furnace and organizational practice, the most commonly monitored parameters include furnace temperature, furnace pressure, steam and gas flow rates, decoking duration, effluent composition, coke removal rate, and coke quality.
Poor decoking has several negative consequences, such as reduced heat-transfer efficiency, which decreases furnace capacity and production rates. Additionally, lower-performing furnaces consume more energy, requiring more fuel to reach optimal temperature and maintain target production rates. Poor decoking also causes frequent maintenance shutdowns, resulting in unplanned downtime and production schedule disruptions.
One global oil and gas company deployed Seeq, an advanced analytics platform, to closely monitor its decoke procedures, reducing engineering time spent creating dashboards by 20% and improving furnace decoke performance by 10%.
Seeq Selected by Equinor for Enterprise-Wide Analytics
Seeq, a leader in industrial analytics, AI, and monitoring, and Equinor, an international energy company, announced a multi-year commercial agreement for the Seeq Industrial Analytics and AI platform to be leveraged across Equinor’s global assets to further accelerate digital transformation outcomes.
Through the agreement, Equinor will implement Seeq to empower its engineering teams to optimize production and improve energy performance across a variety of assets. Initially, the company plans to leverage Seeq to monitor well and process behavior, thereby gaining a deeper understanding of daily operations to maximize production, enhance workforce collaboration and increase efficiency.
Amitec, a Norway-based, Seeq-certified partner with deep expertise in the energy industry, will support the Seeq implementation for Equinor.
Seeq and AspenTech accelerate self-service industrial analytics on AWS
With Seeq drawing on the wealth of data stored in IP.21 running on AWS, you can clean, perform calculations on, and analyze IP.21 data, including context from relational data sources such as MES, batch, and other applications, to diagnose and predict issues and share findings across the organization. With near-real-time (NRT) expert collaboration and deeper insights, Seeq helps organizations advance toward their sustainability and operational excellence goals. By tapping into rich data from IP.21, Seeq helps substantially reduce maintenance costs and minimize downtime. Using the Seeq SaaS platform in conjunction with the AWS Cloud, you can set up advanced workflows such as machine learning, with data-driven, state-of-the-art methods already proven in critical industries. The Seeq SaaS solution is listed on AWS Marketplace, making it easier to procure, deploy, and manage your workload.
Harness the Power of AI and Stay Ahead of Unplanned Downtime
The Connected Ecosystem in Life Sciences
Learnings from the global roll-out of a self-service data analytics tool at LANXESS
Discover the transformative journey of LANXESS, a global specialty chemicals conglomerate, as it integrates TrendMiner, a cutting-edge analytics platform, into its operations. From humble beginnings at a local plant to a global digital transformation, this video explores the strategic implementation, challenges, and successes of rolling out TrendMiner across diverse and widespread facilities.
Fogwing Industrial Cloud
What is Advanced Industrial Analytics (AIA)?
The AIA applications usually integrate with internal or external platforms, data connectors, and edge-to-cloud agents that facilitate the data connectivity, modeling, and contextualization techniques required for effective analysis. Built on this data foundation, the applications then apply a range of techniques, from first-principles and physics-based models to statistical and machine learning (ML) algorithms, to provide insights of varying sophistication across the descriptive-to-prognostic spectrum.
Finally, these applications should ideally deliver value across several industrial use cases, including (but not limited to) asset performance, quality, manufacturing, productivity, process optimization, EHS, and sustainability, and target multiple user personas: engineers (industrial, process, and reliability engineers), business users (cross-functional operations, quality, supply-chain, and EHS personnel), and data scientists (data engineers, wranglers, stewards, scientists, and software engineers).
The Critical Intersection of Data Quality and AI in Industrial Operations
Fogwing Asset+ Video
Snowflake technology solution from industrial edge to the cloud
This is where the combined power of Snowflake’s data warehousing and Opto 22’s automation solutions comes into play. Data travels securely from groov products (edge hardware on the plant floor) up to Snowflake (data storage in the cloud). This combination gives you the tools needed to both collect and harness the power of big data, leveraging advanced analytics and machine learning to optimize plant floor operations and drive innovation.
With AI, ML, and anomaly detection (AD), plus the integration of large language models (LLMs), Snowflake helps you unearth patterns and insights from your data. Given the scale of data storage available in the cloud, a single human would be hard pressed to make sense of it all. But think of using simple language prompts like, “When was my peak energy consumption last quarter?” or “How many widgets did I produce between 11 AM and 3 PM on November 8, 2023?” This. Is. Powerful.
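As a rough illustration of what sits beneath a prompt like the first one above, here is a minimal sketch using the snowflake-connector-python client. The energy_telemetry table, its columns, and the connection details are hypothetical, not part of any Snowflake or Opto 22 product schema.

```python
# Minimal sketch: the SQL a prompt like "When was my peak energy
# consumption last quarter?" might resolve to. Table and column names
# are hypothetical placeholders.
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
    warehouse="ANALYTICS_WH", database="PLANT", schema="TELEMETRY",
)

PEAK_ENERGY_SQL = """
    SELECT ts, kwh
    FROM energy_telemetry
    WHERE ts >= DATE_TRUNC('quarter', DATEADD('quarter', -1, CURRENT_DATE()))
      AND ts <  DATE_TRUNC('quarter', CURRENT_DATE())
    ORDER BY kwh DESC
    LIMIT 1
"""

with conn.cursor() as cur:
    cur.execute(PEAK_ENERGY_SQL)
    ts, kwh = cur.fetchone()
    print(f"Peak energy consumption last quarter: {kwh} kWh at {ts}")
conn.close()
```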
Harnessing Machine Learning for Anomaly Detection in the Building Products Industry with Databricks
One of the biggest data-driven use cases at LP was monitoring process anomalies with time-series data from thousands of sensors. With Apache Spark on Databricks, large amounts of data can be ingested and prepared at scale to assist mill decision-makers in improving quality and process metrics. To prepare these data for mill data analytics, data science, and advanced predictive analytics, companies like LP need to process sensor information faster and more reliably than on-premises data warehousing solutions alone allow.
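As a hedged sketch of this kind of workload (not LP's actual pipeline), the PySpark snippet below flags sensor readings whose rolling z-score drifts beyond a threshold; the table and column names are invented.

```python
# Illustrative PySpark sketch: flag sensor readings whose rolling
# z-score exceeds a threshold. Table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F, Window

spark = SparkSession.builder.appName("sensor-anomalies").getOrCreate()

readings = spark.read.table("mill.sensor_readings")

# 60-row trailing window per sensor, ordered by event time
w = Window.partitionBy("sensor_id").orderBy("event_ts").rowsBetween(-60, -1)

scored = (
    readings
    .withColumn("mu", F.avg("value").over(w))
    .withColumn("sigma", F.stddev("value").over(w))
    .withColumn("zscore", (F.col("value") - F.col("mu")) / F.col("sigma"))
    .withColumn("is_anomaly", F.abs(F.col("zscore")) > 4.0)
)

# Persist only the flagged rows for downstream review
scored.filter("is_anomaly").write.format("delta").mode("append") \
      .saveAsTable("mill.sensor_anomalies")
```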
Automating Quality Machine Inspection Infused with Edge AI and Digital Twins for Device Monitoring
In this post, we will discuss an AI-based solution Kyndryl has built on Amazon Web Services (AWS) to detect pores on the welding process using acoustic data and a custom-built algorithm leveraging voltage data. We’ll describe how Kyndryl collaborated with AWS to design an end-to-end solution for detecting welding pores in a manufacturing plant using AWS analytics services and by enabling digital twins to monitor welding machines effectively.
Kyndryl’s solution flow collects acoustic, voltage, and current data from welding machines, then processes the data and runs inference at the edge to detect welding pores while providing actionable insights to welding operators. Additionally, data is streamed to the cloud for historical analysis to improve operational efficiency and product quality over time. A digital twin monitors the welding operation in real time, with warnings created to proactively manage the asset when predefined thresholds are met.
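Kyndryl's pore-detection algorithm is custom-built, so the following is only a generic sketch of the edge-inference idea: score each acoustic window by the share of spectral energy in a band of interest and alert when it crosses a calibrated threshold. The band limits and threshold are placeholder values, not Kyndryl's.

```python
# Generic edge-inference sketch only (the production algorithm is
# custom-built). Band limits and threshold are made-up placeholders.
import numpy as np

SAMPLE_RATE_HZ = 48_000
BAND = (8_000, 12_000)       # hypothetical band where pore formation shows up
ENERGY_THRESHOLD = 0.015     # hypothetical, calibrated from labeled welds

def pore_score(window: np.ndarray) -> float:
    """Fraction of spectral energy inside the band of interest."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(window.size, d=1.0 / SAMPLE_RATE_HZ)
    in_band = (freqs >= BAND[0]) & (freqs <= BAND[1])
    return spectrum[in_band].sum() / spectrum.sum()

def check_window(window: np.ndarray) -> bool:
    """True if the window looks like a pore event and should alert."""
    return pore_score(window) > ENERGY_THRESHOLD

# Example: one 100 ms window of simulated acoustic samples
window = np.random.default_rng(0).normal(size=SAMPLE_RATE_HZ // 10)
print("possible pore:", check_window(window))
```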
Canvass AI Unveils Real-Time Closed-Loop Optimization Using Prescriptive Analytics for Industrial Processes
Canvass AI announced availability of its real-time closed-loop optimization solution for process- and sub-process-level production. This capability allows operators and engineers to automate production processes across multiple industries and a wide range of manufacturing processes, such as fermentation, distillation, and co-generation power.
The Canvass AI closed-loop optimization solution comprises predefined data mappings, learning models, configuration files, AI workflows, and a setpoint optimizer. Using this framework, engineers and operators can confidently apply virtual control to physical processes, allowing them to constantly adapt to changing production conditions. Canvass AI’s solution overlays legacy OT investments such as APC and DCS to optimize process performance closer to the operating specification limits, maximizing quality and output.
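To make the setpoint-optimizer concept concrete, here is an illustrative sketch (not Canvass AI's implementation) that searches for setpoints minimizing predicted energy use while holding predicted quality above a limit; the surrogate models, bounds, and limits are stand-ins.

```python
# Illustrative setpoint-optimization sketch: minimize predicted energy
# subject to a quality constraint and operating-spec bounds. The fitted
# surrogate models and all numbers are placeholders.
import numpy as np
from scipy.optimize import minimize

def predicted_energy(x):
    temp, flow = x
    return 0.02 * temp**2 + 1.5 * flow          # stand-in surrogate model

def predicted_quality(x):
    temp, flow = x
    return 0.4 * temp - 0.1 * flow              # stand-in surrogate model

QUALITY_MIN = 60.0
bounds = [(120.0, 200.0), (10.0, 50.0)]         # operating-spec limits
constraints = [{"type": "ineq",
                "fun": lambda x: predicted_quality(x) - QUALITY_MIN}]

result = minimize(predicted_energy, x0=np.array([160.0, 30.0]),
                  bounds=bounds, constraints=constraints)
print("recommended setpoints (temp, flow):", result.x.round(2))
```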
Automate plant maintenance using MDE with ABAP SDK for Google Cloud
Analyzing production data at scale is always a challenge, especially when the data comes from multiple production facilities with thousands of assets in production pipelines. To help solve this challenge, our Manufacturing Data Engine is designed to help manufacturers manage end-to-end shop floor business processes.
Manufacturing Data Engine (MDE) is a scalable solution that accelerates, simplifies, and enhances the ingestion, processing, contextualization, storage, and usage of manufacturing data for monitoring, analytical, and machine learning use cases. This suite of components can help manufacturers accelerate their transformation with Google Cloud’s analytics and AI capabilities.
How Data-Powered 3D Printers Will Change Manufacturing
Similar to how autonomous vehicles collect and apply data to continuously improve a car’s ability to drive, connected 3D printers can use collected data for artificial intelligence-powered automation. During each print job, 3D printers produce large quantities of data that are sent to and stored in the cloud. The print job data—ripe for AI, machine learning, and automation-based product features—can then be fed to algorithms, which printers and users can access through the cloud. Among other things, these data help businesses make decisions about what parts to print and how best to print them, while improving the quality of print jobs.
Enhancing CTQ Parameters with Traciviss - Traceability Software
In the dynamic landscape of the Indian automotive industry, efficiency, precision, and quality are paramount. Our client, a distinguished commercial automotive manufacturer based in Ennore, Chennai, sought to optimize their production process and enhance their critical-to-quality (CTQ) parameters—Nut Runners Availability, Torque Quality, and Vision Cot Pin Identification. To achieve this, they engaged Traciviss, an AI-driven traceability software solution, in conjunction with Rockwell PLC integration provided by MXHub Technocare. This innovative partnership aimed to streamline the production line, reduce errors, and automate the entire axle assembly process.
Launch of Smart Manufacturing Cell Transforms Rochester Operations
L3Harris is driving toward fully controlled and paced production of tactical radios with the launch of its first Smart Manufacturing Cell in its Rochester, New York, facilities, which streamlines assembly processes so the company can continue to meet customer demands and delivery schedules for critical communication devices.
The answer to the company’s current and future needs was the implementation of Smart Manufacturing Cell production. SMC is an Industry 4.0-level assembly process where control technologies, such as LightGuide augmented reality, Mountz precision torque drivers and Cognex® machine vision inspection, are integrated into one common platform by WorkSmart Systems. This capability delivers a line-agnostic station where different products with the same process can be built without requiring device-specific configurations when switching between production lines. Further, the system itself collects data including who worked on a specific unit and at what time for troubleshooting and root-cause analysis of potential defects found later in internal testing.
Using Industrial Automation to Monitor Vertical Farms
The adoption of artificial intelligence and machine learning algorithms now allows analysis of the vast amounts of data collected from sensors to enable predictive analytics. Farmers can make more informed decisions about managing crops, optimizing resource usage, and predicting yields.
AeroFarms and Nokia discussed how to build a system to monitor a vertical farm where leafy greens including arugula, bok choy, and kale are grown. A typical facility can produce more than 1 million kilograms of leafy greens annually. A 13,000-square-meter facility such as the AeroFarms one in Danville is so large that workers can’t physically check all the plants. “Because the growth cycle in indoor farming is much shorter than outdoor farming, it is very important to know what’s going on at all times and not to miss anything,” Klein says. “If you fail to detect something, you will miss a huge opportunity. You might be at the end of your growth cycle, and you can’t take corrective measures in terms of the production yield, or the quality or quantity of produce.”
Transforming Semiconductor Yield Management with AWS and Deloitte
Together, AWS and Deloitte have developed a reference architecture to enable the aforementioned yield management capabilities. The architecture, shown in Figure 1, depicts how to collect, store, analyze and act on the yield related data throughout the supply chain. The following describes how the modernized yield management architecture enables the six capabilities discussed earlier.
🧠 Data Driven Optimization - AI, Analytics IIoT and Oden Technologies
If you can predict that offline quality test in real time, so that you know, in real time, that you’re making good products, it reduces the risk of improving the process in real time. We actually use that type of modeling to then prescribe the right set points for the customer to reach whatever outcome they want to achieve. If they want to lower the cost, lower the material consumption, lower the energy consumption, or increase the speed, then we actually give them the input parameters they need to use in order to get a more efficient output.
And then the last step, which is more exploratory, which we’re working on now is also generating work instructions for the operators, kind of like an AI support system for the operator. Because still, and we recognize this, the big bottleneck for a lot of manufacturers is talent. Talent is very scarce, it’s very hard to hire a lot of people that can perform these processes, especially when they say that it’s more of an art than a science. We can lower the barrier to entry for operators to become top performers, through recommendations, predictions and generative AI for how to achieve high performance. By enabling operators to leverage science more than art or intuition, we can really change the game in terms of how we make things.
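A minimal sketch of this predict-then-prescribe pattern, with invented variable names and synthetic data: fit a model that predicts the offline quality test from live process inputs, then search the inputs for the most productive operating point that still meets the quality target.

```python
# Predict-then-prescribe sketch under assumed names and synthetic data;
# not Oden's implementation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.uniform([100, 5], [200, 25], size=(500, 2))   # [temp, line_speed]
y = 50 + 0.2 * X[:, 0] - 0.8 * np.abs(X[:, 1] - 15) + rng.normal(0, 1, 500)

# Predict: model the offline quality test from live process inputs
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Prescribe: among candidate setpoints that keep predicted quality >= 85,
# pick the one with the highest line speed (i.e., most throughput).
temps = np.linspace(100, 200, 41)
speeds = np.linspace(5, 25, 41)
grid = np.array([(t, s) for t in temps for s in speeds])
quality = model.predict(grid)
feasible = grid[quality >= 85]
best = feasible[np.argmax(feasible[:, 1])]
print(f"suggested setpoints: temp={best[0]:.0f}, speed={best[1]:.1f}")
```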
Predictive Maintenance for Semiconductor Manufacturers with SEEQ powered by AWS
There are challenges in creating predictive maintenance models, such as siloed data, the offline nature of data processing and analytics, and having the necessary domain knowledge to build, implement, and scale models. In this blog, we will explore how using Seeq software on Amazon Web Services can help overcome these challenges.
The combination of AWS and Seeq pairs a secure cloud services platform with advanced analytics innovation. Seeq on AWS can access time series and relational data stored in AWS data services including Amazon Redshift, Amazon DynamoDB, Amazon Simple Storage Service (S3), and Amazon Athena. Once connected, engineers and other technical staff have direct access to all the data in those databases in a live streaming environment, enabling exploration and data analytics without needing to go through the steps to extract data and align timestamps whenever more data is required. As a result, monitoring dashboards and running reports can be set to auto generate and are easily shared among groups or sites. This enables balancing machine downtimes and planning ahead for maintenance without disrupting schedules or compromising yields.
Advanced Analytics at BASF with TrendMiner
Through an insightful case study on monitoring instrument air pressure and flare flows, Rooha Khan highlights how TrendMiner’s platform effectively optimizes manufacturing processes. Witness the tangible value BASF has discovered by harnessing the capabilities of industrial data analysis and monitoring, and be prepared to embrace the transformative possibilities of digitalization.
Minimize Manufacturing Data Management Costs
As Intel manufactures hundreds of millions of complex products every year, Intel IT collects and stores terabytes of manufacturing data to support continual engineering data analysis. As the volume, velocity and complexity of the data increases, it is imperative that we maintain this decision support system at the lowest possible cost. Additionally, we need to be able to assess the cost for future scaling needs. Therefore, we decided to evaluate the scalability, performance and cost of several Intel® architecture-based massively parallel processing (MPP) relational database management systems (RDBMS). We found that industry standard benchmarks did not closely resemble our manufacturing data and did not measure the metrics that were important to us. Therefore, we created a custom MPP RDBMS benchmark that helped us choose a cost-optimized solution.
We used this custom benchmark to complete a comprehensive technical proof of concept (PoC) with several industry-leading MPP RDBMS vendors whose products run on Intel® architecture. We are confident that this benchmark enabled us to choose the best Intel® Xeon® processor-based MPP RDBMS solution while keeping manufacturing data management costs under control. Also, based on the evaluation results, the vendors we worked with have improved their products, strengthening the entire industry ecosystem. And, with the release of the 4th Gen Intel® Xeon® Scalable processors and associated accelerators, we’re expecting that RDBMS vendors will make their products even more cost competitive. By sharing our benchmark methodology, we hope to help other companies to understand their data better and select a data management system that meets their needs.
Using Data Models to Manage Your Digital Twins
A continuously evolving industrial knowledge graph is the foundation of creating industrial digital twins that solve real-world problems. Industrial digital twins are powerful representations of the physical world that can help you better understand how your assets are impacting your operations. A digital twin is only as useful as what you can do with it, and there is never only one all-encompassing digital twin. Your maintenance view of a physical installation will need to be different from the operational view, which is different from the engineering view for planning and construction.
Manufacturing Process Optimization in Times of Adversity
For the current era, we can usefully define manufacturing process optimization like this:
- Digitally connected plant teams learning and implementing data-driven strategies that impact their manufacturing processes to minimize cost and maximize production toward peak operational efficiency.
- Using data-to-value technologies that integrate seamlessly with their legacy systems and progressively automate an end-to-end, continuous improvement, production loop — freeing manufacturers from a reactive troubleshooting paradigm so they can layer in further innovations toward the smart factory.
Through the above process, machine learning workflows can solve today’s data-readiness and production process optimization issues while future-proofing operations. By easing cost pressures and driving up revenue via data-driven production efficiencies (and with increasingly data-mature plant personnel), the C-suite is free to develop strategies with innovation managers. Together, they can combat the broader external challenges many manufacturers face today.
⭐ Hunting For Hardware-Related Errors In Data Centers
The data center computational errors that Google and Meta engineers reported in 2021 have raised concerns regarding an unexpected cause: manufacturing defect levels on the order of 1,000 DPPM. Specific to a single core in a multi-core SoC, these hardware defects are difficult to isolate during data center operations and manufacturing test processes. In fact, silent data errors (SDEs) can go undetected for months because the precise inputs and local environmental conditions (temperature, noise, voltage, clock frequency) have not yet been applied.
For instance, Google engineers noted ‘an innocuous change to a low-level library’ started to give wrong answers for a massive-scale data analysis pipeline. They went on to write, “Deeper investigation revealed that these instructions malfunctioned due to manufacturing defects, in a way that could only be detected by checking the results of these instructions against the expected results; these are ‘silent’ corrupt execution errors, or CEEs.”
Engineers at Google further confirmed their need for internal data, “Our understanding of CEE impacts is primarily empirical. We have observations of the form, ‘This code has miscomputed (or crashed) on that core.’ We can control what code runs on what cores, and we partially control operating conditions (frequency, voltage, temperature). From this, we can identify some mercurial cores. But because we have limited knowledge of the detailed underlying hardware, and no access to the hardware-supported test structures available to chip makers, we cannot infer much about root causes.”
Our connected future: How industrial data sharing can unite a fragmented world
The rapid and effective development of the coronavirus vaccines has set a new benchmark for today’s industries, but it is not the only one. Increasingly, savvy enterprises are starting to share industrial data strategically and securely beyond their own four walls, to collaborate with partners, suppliers and even customers.
Worldwide, almost nine out of 10 (87%) business executives at larger industrial companies cite a need for the type of connected data that delivers unique insights to address challenges such as economic uncertainty, unstable geopolitical environments, historic labor shortages, and disrupted supply chains. In fact, executives report in a global study that the most common benefits of an open and agnostic information-sharing ecosystem are greater efficiency and innovation (48%), higher employee satisfaction (45%), and staying competitive with other companies (44%).
The future is now: Unlocking the promise of AI in industrials
Many executives remain unsure where to apply AI solutions to capture real bottom-line impact. The result has been slow rates of adoption, with many companies taking a wait-and-see approach rather than diving in.
Rather than endlessly contemplate possible applications, executives should set an overall direction and road map and then narrow their focus to areas in which AI can solve specific business problems and create tangible value. As a first step, industrial leaders could gain a better understanding of AI technology and how it can be used to solve specific business problems. They will then be better positioned to begin experimenting with new applications.
Manufacturing needs MVDA: An introduction to modern, scalable multivariate data analysis
In most settings, a qualitative or semi-quantitative process understanding exists. Through extensive experimentation and knowledge transfer, subject-matter experts (SMEs) know a generally acceptable range for each process parameter, which is used to define the safe operating bounds of a process. In special cases, using bivariate analysis, SMEs understand how a small number of variables (no more than five) interact to influence outputs.
Quantitative process understanding can be achieved through a holistic analysis of all process data gathered throughout the product lifecycle, from process design and development, through qualification and engineering runs, to routine manufacturing. Data comes from time series process sensors, laboratory logbooks, batch production records, raw material certificates of analysis (COAs), and lab databases containing results of offline analysis. A process SME’s first reaction to a dataset this complex is that any analysis should be left to those with a deep understanding of machine learning and all the other big data buzzwords. However, this is the ideal opportunity for multivariate data analysis (MVDA).
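To give a small taste of MVDA on a generic (synthetic) batch dataset, the sketch below projects many correlated process variables onto a few principal components and uses Hotelling's T² to flag batches that drift from normal operation; the data and control limit are illustrative.

```python
# MVDA sketch on synthetic batch data: PCA plus Hotelling's T-squared
# to flag batches that drift from normal operation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
X = rng.normal(size=(100, 20))            # 100 batches x 20 process variables
X[95:] += 3.0                             # a few drifting batches at the end

Xs = StandardScaler().fit_transform(X)
pca = PCA(n_components=3).fit(Xs)
scores = pca.transform(Xs)

# Hotelling's T^2: squared distance in the reduced space, scaled by the
# variance captured on each component
t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)
limit = np.percentile(t2[:95], 99)        # empirical control limit
print("batches outside the multivariate limit:", np.where(t2 > limit)[0])
```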
Solution Accelerator: Multi-factory Overall Equipment Effectiveness (OEE) and KPI Monitoring
The Databricks Lakehouse provides an end-to-end data engineering, serving, ETL, and machine learning platform that enables organizations to accelerate their analytics workloads by automating the complexity of building and maintaining analytics pipelines through open architecture and formats. This facilitates the connection to high-velocity Industrial IoT data using standard protocols like MQTT, Kafka, Event Hubs, or Kinesis to external datasets, like ERP systems, allowing manufacturers to converge their IT/OT data infrastructure for advanced analytics.
Using a Delta Live Tables pipeline, we leverage the medallion architecture to ingest data from multiple sensors in a semi-structured format (JSON) into our bronze layer, where data is replicated in its natural format. The silver layer transformations parse the key fields of the sensor data that need to be extracted and structured for subsequent analysis, and ingest preprocessed workforce data from ERP systems needed to complete the analysis. Finally, the gold layer aggregates sensor data using structured streaming stateful aggregations, calculates OT metrics such as OEE and TA (technical availability), and combines the aggregated metrics with shift-based workforce data, allowing for IT-OT convergence.
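The following condensed Delta Live Tables sketch mirrors that bronze/silver/gold flow; the paths, schema fields, and ideal-rate constant are simplified placeholders rather than the accelerator's actual code.

```python
# Condensed DLT sketch of the medallion flow described above; paths,
# schema fields, and the ideal rate are placeholders. `spark` and `dlt`
# are provided by the Delta Live Tables runtime.
import dlt
from pyspark.sql import functions as F

IDEAL_RATE_UPM = 60.0  # hypothetical ideal units per minute

@dlt.table(comment="Bronze: raw sensor JSON replicated as received")
def sensor_bronze():
    return (spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/mnt/raw/sensors"))

@dlt.table(comment="Silver: key fields parsed for analysis")
def sensor_silver():
    return dlt.read_stream("sensor_bronze").select(
        "device_id",
        F.col("ts").cast("timestamp").alias("ts"),
        F.col("good_units").cast("long").alias("good_units"),
        F.col("total_units").cast("long").alias("total_units"),
        F.col("runtime_min").cast("double").alias("runtime_min"),
        F.col("planned_min").cast("double").alias("planned_min"),
    )

@dlt.table(comment="Gold: per-device OEE over 1-hour windows")
def oee_gold():
    agg = (dlt.read_stream("sensor_silver")
           .withWatermark("ts", "2 hours")
           .groupBy("device_id", F.window("ts", "1 hour"))
           .agg(F.sum("good_units").alias("good"),
                F.sum("total_units").alias("total"),
                F.sum("runtime_min").alias("run"),
                F.sum("planned_min").alias("planned")))
    return (agg
            .withColumn("availability", F.col("run") / F.col("planned"))
            .withColumn("performance",
                        F.col("total") / (F.col("run") * F.lit(IDEAL_RATE_UPM)))
            .withColumn("quality", F.col("good") / F.col("total"))
            .withColumn("oee", F.col("availability") *
                        F.col("performance") * F.col("quality")))
```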
Luxury goods manufacturer gets a handle on production capacity from FourJaw
Machine monitoring software from FourJaw has driven a 14% uplift in machine utilisation at a luxury goods manufacturer. Fast-growing brass cabinet hardware manufacturer Armac Martin used data from FourJaw’s machine monitoring platform to increase its production capacity and meet a surge in demand for its product range.
Armac Martin’s Production Director, Rob McGrail, said: “When we were looking for a machine monitoring software supplier, a key criteria for us was not just about the ease of deployment and software functionality, but it was equally important that they were based locally, in the UK and that they had a good level of customer support, both for deployment and on-going customer success. FourJaw ticked all of these boxes”.
Using AI to increase asset utilization and production uptime for manufacturers
Google Cloud created purpose-built tools and solutions to organize manufacturing data, make it accessible and useful, and help manufacturers quickly take significant steps on this journey by reducing the time to value. In this post, we will explore a practical example of how manufacturers can use Google Cloud manufacturing solutions to train, deploy and extract value from ML-enabled capabilities to predict asset utilization and maintenance needs. The first step to a successful machine learning project is to unify the necessary data in a common repository. For this, we will use Manufacturing Connect, the factory edge platform co-developed with Litmus, to connect to manufacturing assets and stream asset telemetry to Pub/Sub.
The following scenario is based on a hypothetical company, Cymbal Materials, a fictitious discrete manufacturer that runs 50+ factories in 10+ countries. 90% of Cymbal Materials’ manufacturing processes involve milling, which is performed using industrial computer numerical control (CNC) milling machines. Although their factories implement routine maintenance checklists, unplanned and unknown failures happen occasionally. Moreover, many Cymbal Materials factory workers lack the experience to identify and troubleshoot failures due to labor shortages and high turnover in their factories. Hence, Cymbal Materials is working with Google Cloud to build a machine learning model that can identify and analyze failures on top of Manufacturing Connect, Manufacturing Data Engine, and Vertex AI.
The art of effective factory data visualization
How United Manufacturing Hub Is Introducing Open Source to Manufacturing and Using Time-Series Data for Predictive Maintenance
The United Manufacturing Hub is an open-source Helm chart for Kubernetes, which combines state-of-the-art IT/OT tools and technologies and brings them into the hands of the engineer. This allows us to standardize the IT/OT infrastructure across customers and makes the entire infrastructure easy to integrate and maintain. We typically deploy it on the edge and on-premise using k3s as light Kubernetes. In the cloud, we use managed Kubernetes services like AKS. If the customer is scaling out and okay with using the cloud, we recommend services like Timescale Cloud. We are using TimescaleDB with MQTT, Kafka, and Grafana. We have microservices to subscribe to the messages from the message brokers MQTT and Kafka and insert the data into TimescaleDB, as well as a microservice that reads out data and processes it before sending it to a Grafana plugin, which then allows for visualization.
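A minimal sketch of the kind of ingestion microservice described here, subscribing to MQTT and inserting readings into TimescaleDB; the topic, payload fields, and table are illustrative rather than the exact UMH schema.

```python
# Minimal MQTT-to-TimescaleDB ingestion sketch; topic, payload fields,
# and table name are illustrative, not the UMH schema.
import json
import paho.mqtt.client as mqtt   # pip install paho-mqtt
import psycopg2                   # pip install psycopg2-binary

conn = psycopg2.connect("dbname=factory user=umh password=... host=localhost")
conn.autocommit = True

def on_message(client, userdata, msg):
    payload = json.loads(msg.payload)
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO process_values (time, asset, name, value) "
            "VALUES (to_timestamp(%s), %s, %s, %s)",
            (payload["timestamp_ms"] / 1000.0, payload["asset"],
             payload["name"], payload["value"]),
        )

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("ia/#")          # illustrative topic filter
client.loop_forever()
```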
We are currently positioning the United Manufacturing Hub with TimescaleDB as an open-source Historian. To achieve this, we are currently developing a user interface on top of the UMH so that OT engineers can use it and IT can still maintain it.
Leveraging Operations Data to Achieve 3%-5% Baseline Productivity Gains with Normalized KPIs
Traditional code-based data models are too cumbersome, cost-prohibitive and resource-intensive to support an enterprise data model. In a code-based environment, it can take six months just to write and test the code to bring a single plant’s operating data into alignment with enterprise data pipelines. By contrast, a no-code solution like the Element Unify platform allows all IT/OT/ET data sources to be quickly tagged and brought into an Asset Hierarchy. The timeframe for a single plant to bring its operating data into alignment with the enterprise data architecture and data pipelines drops from six months to two to four weeks.
Digital transformation tools improve plant sustainability and maintenance
Maintenance is inherent to all industrial facilities. In pneumatic systems, valves wear out over time, causing leakage that leads to excessive compressed air consumption. Some systems can have many valves, which can make identifying a faulty one challenging. Leak troubleshooting can be time-consuming and, with the ongoing labor shortage and skills gap, maintenance personnel may already be stretched thin. There may not be enough staff to keep up with what must be done, and historical knowledge may not exist. When production must stop for repairs, it can be very expensive. For mid-sized food and beverage facilities, unplanned downtime costs around $30,000 per hour.
Finding Frameworks For End-To-End Analytics
New standards, guidelines, and consortium efforts are being developed to remove these barriers to data sharing for analytics purposes. But the amount of work required to make this happen is significant, and it will take time to establish the necessary level of trust across groups that historically have had minimal or no interactions.
For decades, test program engineers have relied upon the STDF file format, which is inadequate for today’s use cases. STDF files cannot dynamically capture adaptive test limits, and they are unable to assist in real-time decisions at the ATE based upon current data and analytically derived models. In fact, most data analytic companies run a software agent on the ATE to extract data for decisions and model building. With ATE software updates, the agent often breaks, requiring the ATE vendor to fix each custom agent on every test platform. Emerging standards, TEMS and RITdb, address these limitations and enable new use cases.
But with a huge amount of data available in manufacturing settings, an API may be the best approach for sharing sensitive data from point of origin to a centralized repository, whether on-premise or in the cloud.
Improving asset criticality with better decision making at the plant level
The industry is beginning to see reliability, availability and maintainability (RAM) applications that integrally highlight the real constraints, including other operational and mechanical limits. A RAM-based simulation application provides fault-tree analysis, based on actual material flows through a manufacturing process, with stage gates, inventory modeling, load sharing, standby/redundancy of equipment, operational phases, and duty cycles. In addition, a RAM application can simulate expectations of various random events such as weather, market dynamics, supply/distribution logistical events, and more. In one logistics example, a coker unit’s bottom pump was thought to be undersized and constraining the unit’s production. Changing the pump to a larger size did not fix the problem: further investigation showed there were not enough trucks on the train to carry the product away, which kept the unit from operating at full capacity.
Renault Group and Atos launch a unique service to collect large-scale manufacturing data and accelerate Industry 4.0
Renault Group and Atos launch ID@scale (Industrial Data @ Scale), a new service for industrial data collection to support manufacturing companies in their digital journey towards Industry 4.0. “ID@S” (Industrial Data @ Scale) will allow manufacturers to collect and structure data from industrial equipment at scale to improve operational excellence and product quality. Developed by the car manufacturer and already in operation within its factories, ID@scale is now industrialized, modularized and commercialized by the digital leader Atos.
More than 7,500 pieces of equipment are connected, with standardized data models representing over 50 different manufacturing processes from screwdriving to aluminum injection, including car frame welding, machining, painting, stamping, in addition to new manufacturing processes for electric motors and batteries. Renault Group is already saving 80 million euros per year and aims to deploy this solution across the remainder of its 35 plants, connecting over 22,000 pieces of equipment, by 2023 to generate savings of 200 million euros per year.
Advanced analytics improve process optimization
With advanced analytics, the engineers collaborated with data scientists to create a model comparing the theoretical and operational valve-flow coefficient of one control valve. Conditions in the algorithm were used to identify periods of valve degradation in addition to past failure events. By reviewing historical data, the SMEs determined the model would supply sufficient notification time to deploy maintenance resources so repairs could be made prior to failure.
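A simplified sketch of that comparison, using the standard liquid sizing relation Cv = Q * sqrt(SG / dP); the vendor curve and degradation threshold below are placeholders, not the model the article describes.

```python
# Valve health sketch: back-calculate the operational flow coefficient
# from live data and compare it to a (placeholder) theoretical curve.
import math

def operational_cv(flow_gpm: float, dp_psi: float, sg: float = 1.0) -> float:
    """Liquid sizing relation: Cv = Q * sqrt(SG / dP)."""
    return flow_gpm * math.sqrt(sg / dp_psi)

def theoretical_cv(percent_open: float) -> float:
    """Placeholder vendor curve: linear characteristic, Cv = 120 wide open."""
    return 120.0 * percent_open / 100.0

def degradation_ratio(flow_gpm, dp_psi, percent_open, sg=1.0):
    return operational_cv(flow_gpm, dp_psi, sg) / theoretical_cv(percent_open)

# Example reading taken at 60% open
ratio = degradation_ratio(flow_gpm=150.0, dp_psi=8.0, percent_open=60.0)
if ratio < 0.85:                  # hypothetical maintenance threshold
    print(f"possible valve degradation: Cv ratio {ratio:.2f}")
else:
    print(f"valve healthy: Cv ratio {ratio:.2f}")
```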
Batch Optimization using Quartic.ai
Aarbakke + Cognite | Boosting production, maintenance, and quality
Battery Analytics: The Game Changer for Energy Storage
Battery analytics refers to getting more out of the battery using software, not only during operation but also when selecting the right battery cell or designing the overall system. For now, the focus will be on opportunities to optimize the in-field operation of battery storage systems.
The TWAICE cloud analytics platform provides insights and solutions based on field data. The differentiating factor is the end-to-end approach with analytics at its heart. After processing and mapping the data, the platform’s analytics layer runs different analytical algorithms: electrical, thermal and aging models as well as machine learning models. This variety of analytical approaches is the key to balancing differences in input data quality, and it is also the basis for the wide and expanding range of solutions.
Where And When End-To-End Analytics Works
To control a wafer factory operation, engineering teams rely on process equipment and inspection statistical process control (SPC) charts, each representing a single parameter (i.e., univariate). With the complexities of some processes, interactions between multiple parameters (i.e., multivariate effects) can result in yield excursions. This is when engineers leverage data to make decisions on subsequent fab or metrology steps to improve yield and quality.
“When we look at fab data today, we’re doing that same type of adaptive learning,” McIntyre said. “If I start seeing things that don’t fit my expected behavior, they could still be okay by univariate control, but they don’t fit my model in a multi-variate sense. I’ll work toward understanding that new combination. For instance, in a specific equipment my pump down pressure is high, but my gas flow is low and my chamber is cold, relatively speaking, and all (parameters) individually are in spec. But I’ve never seen that condition before, so I need to determine if this new set of process conditions has an impact. I send that material to my metrology station. Now, if that inline metrology data is smack in the center, I can probably disregard the signal.”
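A toy numeric illustration of that multivariate point: in the sketch below, each parameter of a new run is within its univariate limits, yet the combination breaks the correlation learned from history, which a Mahalanobis distance exposes immediately. All numbers are invented.

```python
# Toy illustration: each parameter is in spec univariately, but the
# combination is far from anything seen before. Numbers are invented.
import numpy as np

rng = np.random.default_rng(3)
# Historical in-spec runs: pressure and gas flow are positively correlated
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
history = rng.multivariate_normal(mean=[10.0, 5.0], cov=cov, size=2000)

mu = history.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(history, rowvar=False))

def mahalanobis(x):
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# New run: high pump-down pressure but LOW gas flow. Each value alone is
# within 3 sigma, but the pair breaks the learned correlation.
new_run = np.array([12.0, 3.5])
print("per-variable z-scores:", (new_run - mu) / history.std(axis=0))
print("Mahalanobis distance:", round(mahalanobis(new_run), 2))
```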
The Hidden Factory: How to Expose Waste and Capacity on the Shop Floor
Without accurate production data, managers simply cannot hope to find the hidden waste on the shop floor. While strict manual data collection methods can take job shops to a certain degree, the sophisticated manufacturer is leveraging solutions that collect, aggregate, and standardize production data autonomously. With this data in hand, accurate benchmarks can be set (they may be quite surprising) and areas of hidden capacity, as well as waste-generators, can be far more easily identified.
How to Use Data in a Predictive Maintenance Strategy
Free-text and label-correction engines are a solution for cleaning up missing or inconsistent work order and parts order data. Pattern recognition algorithms can replace missing items such as funding center codes. They can also fix work order (WO) descriptions to match the work actually performed. This can often yield a 15% shift in root-cause binning over non-corrected WO and parts data.
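As a minimal sketch of one label-correction idea (far simpler than the pattern-recognition engines described above), the snippet below maps free-text work order descriptions onto the closest entry in a hypothetical standard failure-mode vocabulary.

```python
# Minimal label-correction sketch: snap free-text WO descriptions to a
# hypothetical standard vocabulary with difflib fuzzy matching.
from difflib import get_close_matches

STANDARD_FAILURE_MODES = [
    "bearing failure", "seal leak", "motor overload",
    "belt misalignment", "sensor fault",
]

def correct_label(free_text: str) -> str:
    """Return the nearest standard failure mode, or flag for review."""
    match = get_close_matches(free_text.lower().strip(),
                              STANDARD_FAILURE_MODES, n=1, cutoff=0.6)
    return match[0] if match else "NEEDS MANUAL REVIEW"

for wo in ["brng failure", "seal leek", "replaced pump impeller"]:
    print(f"{wo!r} -> {correct_label(wo)}")
```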
With programmable logic controller-generated threshold alarms (like an alarm that is generated when a single sensor exceeds a static value), “nuisance” alarms are often generated and then ignored. These false alarms quickly degrade the culture of an operating staff as their focus is shifted away from finding the underlying problem that is causing the alarm. In time, these distractions threaten the health of the equipment, as teams focus on making the alarm stop rather than addressing the issue.
Toward smart production: Machine intelligence in business operations
Our research looked at five different ways that companies are using data and analytics to improve the speed, agility, and performance of operational decision making. This evolution of digital maturity begins with simple tools, such as dashboards to aid human decision making, and ends with true MI, machines that can adjust their own performance autonomously based on historical and real-time data.
Connecting an Industrial Universal Namespace to AWS IoT SiteWise using HighByte Intelligence Hub
Merging industrial and enterprise data across multiple on-premises deployments and industrial verticals can be challenging. This data comes from a complex ecosystem of industrial-focused products, hardware, and networks from various companies and service providers, which drives the creation of data silos and isolated systems that propagate a one-to-one integration strategy.
HighByte Intelligence Hub addresses this challenge. It is a middleware solution for a universal namespace that helps you build scalable, modern industrial data pipelines in AWS. It allows users to collect data from various sources, add context to the data being collected, and transform it into a format that other systems can understand.
Rub-A-Dub-Dub...It's All About the Data Hub
If these terms leave you more confused than when you started reading, join the club. I am an OT guy, and so much of this was new to me. And it’s another reason to have a good IT/OT architect on your team. The bottom line is that these terms support the various perspectives that must be addressed in connecting and delivering data, from architecture and patterns to services and translation layers. Remember, we are not just talking about time-series or hierarchical asset data. Data such as time, events, alarms, units of work, units of production time, materials and material flows, and people can all be contextualized. And this is the tough nut to crack as the new OT Ecosystem operates in multiple modes, not just transactional as we find in the back office.
How to Build Scalable Data and AI Industrial IoT Solutions in Manufacturing
Unlike traditional data architectures, which are IT-based, manufacturing sits at an intersection of hardware and software that requires an OT (operational technology) architecture. OT has to contend with processes and physical machinery. Each component and aspect of this architecture is designed to address a specific need or challenge in dealing with industrial operations.
The Databricks Lakehouse Platform is ideally suited to manage large amounts of streaming data. Built on the foundation of Delta Lake, it can handle the large quantities of data streams delivered in small chunks from many sensors and devices, providing ACID compliance and reducing job failures compared with traditional warehouse architectures. The Lakehouse platform is designed to scale with large data volumes. Manufacturing produces multiple data types, from semi-structured (JSON, XML, MQTT) to unstructured (video, audio, PDF), all of which the platform fully supports. By merging all these data types onto one platform, only one version of the truth exists, leading to more accurate outcomes.
How to Reduce Tool Failure with CNC Tool Breakage Detection
There are several active technologies used in CNC machining that enable manufacturers to realize these benefits. The type of system used for tooling breakage detection may consist of one or more of the following technologies.
They’re often tied to production monitoring systems and ideally IIoT platforms that can analyze tooling data in the cloud to better predict breakages in the future. One innovation in the area of non-contact technologies is the use of high-frequency data that helps diagnose, predict and avoid failures. This technology is sensorless and uses instantaneous real-time data pulled at an extremely high rate to build accurate tool failure detection models.
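The commercial systems described above are sensorless and model-based, so the following is only a first-pass illustration of the underlying idea: flag a sudden spindle-load excursion relative to a trailing baseline in high-frequency data. The signal and thresholds are simulated.

```python
# First-pass breakage-detection sketch: flag a sudden spindle-load spike
# relative to a trailing baseline. All values are simulated.
import numpy as np

def detect_breakage(load: np.ndarray, window: int = 200, k: float = 6.0):
    """Return sample indices where the load jumps k sigmas above the
    trailing-window baseline."""
    hits = []
    for i in range(window, load.size):
        base = load[i - window:i]
        if load[i] > base.mean() + k * base.std():
            hits.append(i)
    return hits

# Simulated high-frequency spindle load with a spike at sample 5_000
rng = np.random.default_rng(5)
signal = rng.normal(10.0, 0.2, 10_000)
signal[5_000:5_010] += 5.0            # tool snap: abrupt load excursion
print("breakage suspected at samples:", detect_breakage(signal)[:5])
```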
Sight Machine, NVIDIA Collaborate to Turbocharge Manufacturing Data Labeling
The collaboration connects Sight Machine’s manufacturing data foundation with NVIDIA’s AI platform to break through the last bottleneck in the digital transformation of manufacturing – preparing raw factory data for analysis. Sight Machine’s manufacturing intelligence will guide NVIDIA machine learning software running on NVIDIA GPU hardware to process two or more orders of magnitude more data at the start of digital transformation projects.
Accelerating data labeling will enable Sight Machine to quickly onboard large enterprises with massive data lakes. It will automate and accelerate work and lead to even faster time to value. While similar automated data mapping technology is being developed for specific data sources or well documented systems, Sight Machine is the first to use data introspection to automatically map tags to models for a wide variety of plant floor systems.
Machining cycle time prediction: Data-driven modelling of machine tool feedrate behavior with neural networks
Accurate prediction of machining cycle times is important in the manufacturing industry. Usually, Computer-Aided Manufacturing (CAM) software estimates machining times from the commanded feedrate in the toolpath file, using basic kinematic settings. Typically, these methods do not account for toolpath geometry or toolpath tolerance and therefore considerably underestimate machining cycle times. Removing the need for machine-specific knowledge, this paper presents a data-driven feedrate and machining cycle time prediction method that builds a neural network model for each machine tool axis. In this study, datasets composed of the commanded feedrate, nominal acceleration, toolpath geometry and the measured feedrate were used to train a neural network model. Validation trials using a representative industrial thin-wall structure component on a commercial machining center showed that this method estimated the machining time with more than 90% accuracy. The results show that neural network models can learn the behavior of a complex machine tool system and predict cycle times. Further integration of the methods will be critical to the implementation of digital twins in Industry 4.0.
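A hedged sketch of the paper's general approach, with invented feature names and synthetic data: learn the achieved feedrate per toolpath segment from the commanded feedrate, nominal acceleration, and local geometry, then integrate segment times to estimate the cycle time.

```python
# Feedrate-learning sketch with invented features and synthetic data;
# the paper trains one model per machine tool axis, simplified here to one.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 5_000
# Features per toolpath segment: [commanded feedrate (mm/min),
# nominal accel, segment length (mm), local curvature]
X = rng.uniform([500, 0.5, 0.1, 0.0], [10_000, 5.0, 20.0, 2.0], size=(n, 4))
# Synthetic ground truth: the machine undershoots on short, curvy segments
achieved = X[:, 0] / (1.0 + 0.5 * X[:, 3] + 2.0 / (X[:, 2] + 0.5))
y = achieved + rng.normal(0, 20, n)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                     random_state=0).fit(X, y)

def cycle_time_s(segments: np.ndarray) -> float:
    """Sum of per-segment times using the predicted feedrate (mm/min)."""
    feed = np.clip(model.predict(segments), 1.0, None)
    return float(np.sum(segments[:, 2] / (feed / 60.0)))

test_path = rng.uniform([500, 0.5, 0.1, 0.0], [10_000, 5.0, 20.0, 2.0], (50, 4))
print(f"predicted cycle time: {cycle_time_s(test_path):.1f} s")
```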
How the Cloud is Changing the Role of Metadata in Industrial Intelligence
Right now, though, many companies have trouble seeing that context in existing datasets. Much of that difficulty owes to the original design of operational technology (OT) systems like supervisory control and data acquisition (SCADA) systems or data historians. Today, the story around the collection of data in OT systems is much the same: descriptive points about the data, its metadata, could paint a far more holistic view of asset performance, yet they are rarely preserved.
As many process businesses turn to a data lake strategy to leverage the value of their data, preserving metadata in the movement of OT data to the cloud environment represents a significant opportunity to optimize the maintenance, productivity, sustainability, and safety of critical assets. The loss of metadata has been among the most severe limiting factors in the value of OT data. By one estimate, industrial businesses are losing out on 20-30 percent of the value of their data through regular compression of metadata or losses in their asset hierarchy models. With an expertise shortage sweeping across process-intensive operations, many companies will need to digitize and conserve institutional knowledge, beginning with their own data.
Automation Within Supply Chains: Optimizing the Manufacturing Process
Is Clip A ‘Slack’ For Factories?
Clip aims to bring data gathering and analytics, information sharing, and collaboration onto a single platform. The system connects all intelligent industrial equipment in a production facility, together with workers who can access all information and adjust operations through computers and portable devices.
It’s an ambitious undertaking, one that requires guaranteeing a very high degree of interoperability to ensure that people, machines and processes can communicate with each other seamlessly, and that all key systems such as Material Requirements Planning (MRP), Enterprise Resource Planning (ERP) and others can directly access up-to-date information from machines and processes. This higher level of automation, if implemented right, can unlock a new level of efficiency for manufacturing companies.
Build a Complete Analytics Pipeline to Optimize Smart Factory Operations
2021 Assembly Plant of the Year: GKN Drives Transformation With New Culture, Processes and Tools
All-wheel drive (AWD) technology has taken the automotive world by storm in recent years, because of its ability to effectively transfer power to the ground. Today, many sport utility vehicles use AWD for better acceleration, performance, safety and traction in all kinds of driving conditions. GKN’s state-of-the-art ePowertrain assembly plant in Newton, NC, supplies AWD systems to BMW, Ford, General Motors and Stellantis facilities in North America and internationally. The 505,000-square-foot facility operates multiple assembly lines that mass-produce more than 1.5 million units annually.
“Areas of improvement include a first-time-through tracking dashboard tailored to each individual line and shift that tracks each individual failure mode,” says Tim Nash, director of manufacturing engineering. “We use this tool to monitor improvements and progress on a daily basis.
“Overhaul of process control limits has been one of our biggest achievements,” claims Nash. “By setting tighter limits for assembly operations such as pressing and screwdriving, we are able to detect and reject defective units in station vs. a downstream test operation. This saves both time and scrap related to further assembly of the defective unit.”
“When we started on our turnaround journey, our not-right-first-time rate was about 26 percent,” adds Smith. “Today, it averages around 6 percent. A few years ago, cost of non-quality was roughly $23 million annually vs. less than $3 million today.”
Digital Transformation in the Beverage Manufacturing and Bottling
How W Machine Uses FactoryWiz Machine & Equipment Monitoring
Industry 4.0 and the Automotive Industry
“It takes about 30 hours to manufacture a vehicle. During that time, each car generates massive amounts of data,” points out Robert Engelhorn, director of the Munich plant. “With the help of artificial intelligence and smart data analytics, we can use this data to manage and analyze our production intelligently. AI is helping us to streamline our manufacturing even further and ensure premium quality for every customer. It also saves our employees from having to do monotonous, repetitive tasks.”
One part of the plant that is already seeing benefits from AI is the press shop, which turns more than 30,000 sheet metal blanks a day into body parts for vehicles. Each blank is given a laser code at the start of production so the body part can be clearly identified throughout the manufacturing process. This code is picked up by BMW’s iQ Press system, which records material and process parameters, such as the thickness of the metal and oil layer, and the temperature and speed of the presses. These parameters are related to the quality of the parts produced.
Big Data Analytics in Electronics Manufacturing: is MES the key to unlocking its true potential?
In a modern SMT fab, every time a stencil is loaded or a squeegee makes a pass, data is generated. Every time a nozzle picks and places a component, data is generated. Every time a camera records a component or board inspection image, data is generated. The abundance of data in the electronics industry is a result of the long-existing and widespread process automation and proliferation of sensors, gauges, meters and cameras, which capture process metrics, equipment data and quality data.
In SMT and electronics, the main challenge isn’t the availability of data. Rather, it is the ability to look at the data generated by the process as a whole; to make sense of the data pertaining to each shop-floor transaction; to use this data to generate information from a single point of truth instead of from disparate, unconnected point solutions; and to use the resulting insight to make decisions that ultimately improve process KPIs, OEE, productivity, yield, compliance and quality.
2021 IW Best Plants Winner: IPG Tremonton Wraps Up a Repeat IW Best Plants Win
“If you wrapped it and just wound it straight, it would look like a record, with peaks and valleys,” says Richardson. So instead, the machines rotate horizontally, like two cans of pop on turntables. Initially, IPG used a gauge that indicated whether the film was too thick or too thin. “That was OK,” says Richardson, “but it didn’t get us the information we needed.”
Working with an outside company, IPG Tremonton upgraded the gauge to one that could quantify the thickness of the cut plastic in real time as the machine operates.
The benefits of the tinkering were twofold. First, the upgrade gave operators the ability to correct deviations on the fly. Second, “we found that we had some variations between a couple of our machines,” Richardson says. Using the new gauge on both machines revealed that one of them was producing film “a few percentage points thicker” than its twin. “We [were] basically giving away free product,” Richardson recalled. The new sensor gave IPG the information it needed to label film more accurately.
AWS IoT SiteWise Edge Is Now Generally Available for Processing Industrial Equipment Data on Premises
With AWS IoT SiteWise Edge, you can organize and process your equipment data in the on-premises SiteWise gateway using AWS IoT SiteWise asset models. You can then read the equipment data locally from the gateway using the same application programming interfaces (APIs) that you use with AWS IoT SiteWise in the cloud. For example, you can compute metrics such as Overall Equipment Effectiveness (OEE) locally for use in a production-line monitoring dashboard on the factory floor.
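For reference, a dashboard OEE metric builds on the standard identity OEE = availability × performance × quality; the toy numbers below are illustrative, and a SiteWise deployment would compute this in the gateway from asset-model properties rather than in a script.

```python
# Standard OEE identity with illustrative shift numbers.
def oee(runtime_min, planned_min, ideal_rate_upm, total_units, good_units):
    availability = runtime_min / planned_min          # uptime vs. plan
    performance = total_units / (runtime_min * ideal_rate_upm)  # speed
    quality = good_units / total_units                # first-pass yield
    return availability * performance * quality

# Shift example: 420 planned minutes, 390 run, ideal rate 60 units/min
print(f"OEE = {oee(390, 420, 60, 21_000, 20_370):.1%}")
```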
Transforming quality and warranty through advanced analytics
For companies seeking to improve financial performance and customer satisfaction, the quickest route to success is often a product-quality transformation that focuses on reducing warranty costs. Quality problems can be found across all industries, and even the best companies can have weak spots in their quality systems. These problems can lead to accidents, failures, or product recalls that harm the company’s reputation. They also create the need for prevention measures that increase the total cost of quality. The ultimate outcomes are often poor customer satisfaction that decreases top-line growth, and additional costs that damage bottom-line profitability.
To transform quality and warranty, leading industrial companies are combining traditional tools with the latest artificial-intelligence (AI) and machine-learning (ML) techniques. The combined approach allows these manufacturers to reduce the total cost of quality, ensure that their products perform, and meet customer expectations. The impact of a well-designed and rigorously executed transformation thus extends beyond cost reduction to include higher profits and revenues as well.
Survey: Data Analytics in the Chemical Industry
Seeq recently conducted a poll of chemical industry professionals—process engineers, mechanical and reliability engineers, production managers, chemists, research professionals, and others—to get their take on the state of data analytics and digitalization. Some of the responses confirmed behaviors we’ve witnessed first-hand in recent years: the challenges of organizational silos and workflow inefficiencies, and a common set of high-value use cases across organizations. Other responses surprised us; read on to see why.
Early And Fine Virtual Binning
ProteanTecs enables manufacturers to bin chips virtually, in a straightforward and inexpensive way based on Deep Data. By using a combination of tiny on-chip test circuits called “Agents” and sophisticated AI software, chip makers can find relationships between any chip’s internal behavior and the parameters measured during the standard characterization process. Those relationships can be used to measure similar chips’ internal characteristics at wafer sort to precisely predict how chips would perform during Final Test, even before the wafer is scribed.
AI Solution for Operational Excellence
Falkonry Clue is a plug-and-play solution for predictive production operations that identifies and addresses operational inefficiencies from operational data. It is designed to be used directly by operational practitioners, such as production engineers, equipment engineers or manufacturing engineers, without requiring the assistance of data scientists or software engineers.
Efficiency of production plants: how to track, manage and resolve micro-stops
Why are the micro-stops listed above not tracked by companies? Conversations with many business owners and maintenance managers show that everyone is aware of the problem but underestimates the impact of these stops on overall production efficiency. These stoppages are almost never logged by operators: the personnel at the machine are busy meeting production targets and therefore do not consider it important to stop and record a micro-stop. How often do you hear that the time needed to log a downtime event is greater than the downtime itself!