1 ZB = 10²¹ bytes = 1 billion terabytes. The world’s digital data reached 2.8 zettabytes (ZB) in 2012. As astounding as that sounds, what good does that amount of data actually do us? According to a Digital Universe study, only 5% of data is actually analysed. Does that mean big data is slightly overrated? Here, Jonathan Wilkins, marketing and development manager at European Automation, argues the case for big data applications in industrial automation.
Big data refers to data sets so large and complex that they cannot be captured, curated, managed or processed by commonly used software tools within an acceptable period of time. Traditionally, the fields that regularly ran up against the limits of large data sets were meteorology, genomics, complex physics simulations and biological and environmental research. More recently, big data problems have spread to other sectors, including industry.
The proliferation of data production and data-gathering technologies, such as mobile devices, software logs and wireless sensor networks, has caused the world’s technological capacity to store information to roughly double every 40 months since the 1980s.
In light of the mass adoption of such technological breakthroughs, big data has now extended its reach to sectors like manufacturing and industrial automation. Nevertheless, the potential big data holds for industrial automation players remains largely untapped.
However, obsolete component supplier European Automation has sensed that its clients are making a definite move in this direction. In recent years, there has been a significant rise in orders for automation solutions compatible with big data applications, including high demand for sensors, PLCs and technology that can interpret large sets of information. Similarly, the PLCs and industrial computers European Automation sells are becoming more complex and powerful.
This reflects an initial stage of the big data phenomenon in industrial automation – purchasing the technologies that allow the collection of substantial amounts of data.
Counterintuitively, there has also been a move back towards thin client computing in an industrial context, allowing greater data processing by a central unit – much like an old-fashioned mainframe.
Big data challenges
Here’s something that will really make your head spin: as of 2012, 2.5 exabytes (2.5 × 10¹⁸ bytes) of data were being created every day. The size of these new data sets is incredibly difficult to grasp – just think of the technology required to actually manage them.
After deciding to exploit the potential of big data, companies face three main problems: how to collect the data, where to store it and, most importantly, how to make sense of it. Collecting the data is no longer a major issue, as there is a vast range of automatic identification and data capture technologies that can be implemented in industrial automation equipment and building management systems. The rise of cloud computing has also resolved many of the issues around storing data.
At the moment, the biggest challenge for industrial automation companies that plan to use big data is finding the right technology to help them interpret large and diverse data sets and turn them from noise into practical, actionable information.
Analysing big data
In recent years, several software solutions have been developed for industrial big data applications. One way of processing large data sets in parallel, across many nodes in a cluster, is the MapReduce programming model. It is composed of a map function, which filters and transforms the data, and a reduce function, which collates the intermediate results gathered across several locations into a single output.

As opposed to traditional software, which painstakingly moves data over a network to be processed centrally, MapReduce moves the processing to where the data is stored. This smart approach is tailor-made for big data sets.
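The two phases described above can be sketched with the canonical word-count example. This is a minimal single-process simulation of the model, not a distributed implementation: the function names and sample documents are illustrative only.

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    """Map: filter/transform the input, emitting (key, value) pairs --
    here, (word, 1) for every word in one document."""
    return [(word.lower(), 1) for word in document.split()]

def reduce_phase(pairs):
    """Reduce: collate all values for each key into a single result."""
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

documents = ["sensor reading high", "sensor reading normal", "sensor fault"]

# On a real cluster, each map task runs on the node holding its slice of
# the data; here the two phases are simply chained in one process.
mapped = chain.from_iterable(map_phase(doc) for doc in documents)
counts = reduce_phase(mapped)
print(counts["sensor"])  # 3
```

The key design point is that only the small intermediate (key, value) pairs travel between phases, rather than the raw data itself.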
A popular open-source implementation of MapReduce is Apache Hadoop, a software framework that can process large data sets across clusters of computers using simple programming models. It allows companies to collate data and analyse it by asking specific questions. Google, Dell, IBM, Microsoft and Oracle already offer commercial implementations of Hadoop and provide support for them. While Hadoop itself is complex to use, the October launch of Hadoop 2.0 – now on general release – has encouraged start-ups and established companies alike to build tools that change that. This represents an important step towards practical big data analysis solutions.
Big data applications
With the aid of this new generation of software, the shop floor becomes a highly automated environment, ideal for leveraging big data.
Predictive diagnosis is one of the main big data applications for manufacturing. Identifying patterns in product quality, manufacturing data and service reports represents just some of the uses of big data at the cutting edge of production engineering.
For example, manufacturers will be able to analyse years of past data and examine production variations to see whether they were followed by outages. This type of application can enable extremely useful predictive diagnosis. Big data also holds the potential for manufacturers to uncover looming business trends and improve responsiveness to them.
Internally, one of the biggest benefits of using big data is the ability to collect information from the system and identify poor performance or signs of equipment deterioration. Automation and manufacturing problems can then be detected, diagnosed and resolved without interrupting the production process. Power or plant condition monitoring can also predict failures and equipment fatigue by analysing historical data. As such, big data applications have the potential to significantly reduce maintenance and downtime costs.
Finally, industrial automation companies can use big data from process measurement sources to identify where production improvements can be made. The information collected from meters, drivers, intelligent starters or motor control centres can allow the optimisation of complex system behaviours. By observing causal factors for quality issues, process variability and energy efficiency through the manufacturing process, big data analysis becomes the basis for gaining a competitive advantage.
To achieve this, companies have to find a highly personalised solution that turns big data into a valuable source of market insight information, predictive analysis and operation management.
Getting ready for big data
Faced with the big data challenge, we at European Automation have been making a conscious effort to focus on collecting and analysing the right data. The first step is to determine what we want to find out and which information is relevant to that particular question.
For example, if the purpose is to learn which areas of our service need improving, we go to our customer survey and feedback forms. If, on the other hand, we want to know which types of automation parts to stock up on, we look at sales records to identify key products and check our lean warehouse database.
European Automation has taken the first steps towards developing a high-performance data analysis algorithm, but we’re still only at the beginning of the road. However, we are starting to see the advantages of using big sets of data to gain priceless market insight.
Looking at our sales and the trending products, we estimate that in the next couple of years, more industrial automation companies will look in the direction of big data to increase production efficiency, become more agile and boost sales. Big data is the factor that, much like the Dude’s rug in The Big Lebowski, will tie all these things together.