Adapting to the new normal

The coronavirus pandemic is disrupting business on an unprecedented scale. From reduced workforces to social distancing measures and working-from-home policies, the pandemic has redefined what can be considered normal business practice.

Nobody knows how long the disruption caused by the global pandemic will last or what shape the recovery will take. The pandemic has forced many businesses to adjust to a new digital reality far sooner than they might have planned, and there’s no guarantee that, once it is over, things will return to how they were before. For example, it is generally accepted that the explosion of Chinese e-commerce was a direct result of the SARS epidemic in 2003. That was their new normal.

One way manufacturers can adapt to the new normal of 2020 is by investing in artificial intelligence (AI) and big data technologies, which can help bridge the gap caused by the pandemic's impact on the workforce. However, when it comes to big data in manufacturing, the same two questions keep popping up. First, what qualifies as big data? Second, how can it benefit the manufacturers who adopt it? Both questions can be answered by remembering the three Vs: volume, velocity and variety.

The three Vs

The first V, volume, refers to the amount of data handled in big data systems. It comes as no surprise that big data is large in volume and relies on massive datasets, often measured in petabytes or even zettabytes, to operate. To put that in perspective, one petabyte is one million gigabytes, which is the combined storage capacity of 15,625 base-model (64 GB) iPhone 11s. This scale might seem unfathomable, but these large datasets aren't as difficult to collate as you might think.
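As a quick sanity check on that arithmetic, a few lines of Python reproduce the figure, assuming the 64 GB base model:

```python
# Back-of-the-envelope check: how many 64 GB iPhone 11s hold one petabyte?
PETABYTE_IN_GB = 1_000_000   # 1 PB = 1,000,000 GB (decimal units)
IPHONE_11_BASE_GB = 64       # base-model storage capacity

phones_per_petabyte = PETABYTE_IN_GB // IPHONE_11_BASE_GB
print(f"{phones_per_petabyte:,} iPhone 11s per petabyte")  # 15,625
```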

The increasing prominence of smart technology, like smart sensors, on the factory floor means that manufacturers can capture large volumes of data from almost any type of machine. Variables such as temperature, humidity, pressure, vibration and changes in operation can be used to monitor individual components, such as motors. Data analytics tools can use this information to predict when a component is likely to fail, meaning maintenance can be planned in advance, minimising costly unplanned downtime.
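To make the idea concrete, here is a minimal sketch of that kind of failure prediction in Python. The readings, part names and three-sigma rule of thumb are illustrative assumptions, not a production condition monitoring algorithm:

```python
from statistics import mean, stdev

# Hypothetical vibration readings (mm/s RMS) from a motor-mounted sensor
baseline = [2.1, 2.0, 2.2, 2.1, 2.0, 2.3, 2.1, 2.2]   # healthy operation
recent   = [2.2, 2.5, 2.9, 3.4, 3.8]                   # latest samples

# Flag the component when recent readings drift above baseline mean + 3 sigma,
# a common rule of thumb for detecting deterioration in condition monitoring
threshold = mean(baseline) + 3 * stdev(baseline)

if mean(recent) > threshold:
    print(f"Schedule maintenance: vibration {mean(recent):.2f} mm/s "
          f"exceeds threshold {threshold:.2f} mm/s")
```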

The second V, velocity, refers to the speed at which data is generated and the time it takes for this data to be processed for use. For example, modern smart sensor technology uses non-contact, high-speed laser sensors that can detect issues that traditional accelerometers cannot. Because these laser sensors can rapidly identify everything accelerometers can, as well as monitoring characteristics such as joint domain and modal analysis, condition monitoring capabilities are vastly improved.
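Handling that velocity usually means summarising readings as they arrive rather than storing everything for later analysis. The sketch below, with a hypothetical 10 kHz stream and a synthetic signal standing in for real sensor output, computes a rolling RMS on the fly:

```python
import math
from collections import deque

def rolling_rms(samples, window=1000):
    """Yield the RMS of the last `window` samples as each new one arrives.

    A stand-in for the on-the-fly processing a high-speed sensor stream
    needs: readings are summarised as they are generated, not batched up.
    """
    buf = deque(maxlen=window)
    for s in samples:
        buf.append(s)
        if len(buf) == window:
            yield math.sqrt(sum(x * x for x in buf) / window)

# Hypothetical 10 kHz stream (here, a synthetic 50 Hz sine wave)
stream = (math.sin(2 * math.pi * 50 * t / 10_000) for t in range(50_000))
for i, rms in enumerate(rolling_rms(stream)):
    if i % 10_000 == 0:
        print(f"sample {i + 1000}: RMS = {rms:.3f}")  # ~0.707 throughout
```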

The final V, variety, refers to the different types of data involved in big data processes. Equipment status, parts condition, inventory and product service life are just some of the variables that create the complex web of data that manufacturers must manage. Managing this data requires multiple integrated systems to create an all-encompassing view of the facility. For example, parts condition monitoring data might identify when a machine component is showing signs of failure, and this can be automatically cross-referenced with the facility's inventory data to see if a replacement is available.
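A minimal sketch of that cross-referencing step, using hypothetical part numbers, health scores and inventory records in place of real integrated systems:

```python
# Hypothetical outputs from two separate systems: condition monitoring
# flags deteriorating parts, while the inventory system tracks spares
failing_parts = [
    {"machine": "press-04", "part_no": "BRG-6205", "health": 0.32},
    {"machine": "cnc-11",   "part_no": "MTR-90KW", "health": 0.41},
]
inventory = {"BRG-6205": 3, "MTR-90KW": 0}   # part_no -> spares in stock

# Cross-reference: can a flagged part be replaced from stock, or must
# a spare be ordered before maintenance can be scheduled?
for part in failing_parts:
    in_stock = inventory.get(part["part_no"], 0)
    action = "schedule replacement" if in_stock else "order spare first"
    print(f"{part['machine']}: {part['part_no']} "
          f"(health {part['health']:.0%}) -> {action}, {in_stock} in stock")
```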
