Complex event processing
Complex event processing (CEP) is a fundamental paradigm for a software system that self-adapts to environmental changes. It was introduced to follow, analyze, and react to incoming events that require near real-time responses, through early detection of and reaction to emerging scenarios.
A CEP architecture has to handle data from multiple, heterogeneous sources, apply complex business rules, and drive outbound actions. The technique tracks, analyzes, and processes data as events happen, which makes it well suited to Big Data, because it is designed to manage data "on-the-fly".
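To make the idea concrete, here is a minimal sketch of the core CEP pattern: deriving a higher-level "complex" event from a sliding window of simple events. The event shape, the "overheat" pattern, and all thresholds are hypothetical illustrations, not part of any specific product.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Event:
    source: str   # e.g. a sensor id
    value: float
    ts: float     # timestamp in seconds

def detect_spike(events, window_s=10.0, threshold=80.0, min_hits=3):
    """Emit a complex 'overheat' event when at least `min_hits`
    readings above `threshold` arrive within a sliding time window."""
    window = deque()
    alerts = []
    for ev in events:
        window.append(ev)
        # drop events that fell out of the time window
        while window and ev.ts - window[0].ts > window_s:
            window.popleft()
        hits = [e for e in window if e.value > threshold]
        if len(hits) >= min_hits:
            alerts.append(("overheat", ev.ts))
            window.clear()  # avoid re-firing on the same readings
    return alerts
```

The point is that no single reading triggers the reaction; the complex event emerges from the relationship between several simple events over time.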
The amount and complexity of data are growing
CEP utilizes data generated continuously, everywhere in a factory, from sources such as sensors, PLCs, location tracking data, AOI, etc. This data is generally different from traditional data sources: it is not prepared or cleansed in any way, so it tends to be messy, intermittent, unstructured and dynamic. The data needs to be handled asynchronously, which means that the architecture should facilitate multiple simultaneous processes on a single message or trigger.
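As an illustration of what "multiple simultaneous processes on a single message" can mean, the sketch below fans one incoming message out to several concurrent handlers using Python's asyncio. The handler names and the message format are hypothetical.

```python
import asyncio

# Hypothetical handlers; each consumes the same message independently.
async def update_dashboard(msg):
    await asyncio.sleep(0.01)   # simulate I/O latency
    return f"dashboard:{msg}"

async def check_thresholds(msg):
    await asyncio.sleep(0.01)
    return f"threshold:{msg}"

async def archive(msg):
    await asyncio.sleep(0.01)
    return f"archive:{msg}"

async def dispatch(msg):
    # Fan a single message out to all handlers running concurrently.
    return await asyncio.gather(
        update_dashboard(msg), check_thresholds(msg), archive(msg)
    )

results = asyncio.run(dispatch("sensor-42:73.5"))
```

Because the handlers run concurrently, a slow consumer (archiving, for instance) does not delay the latency-sensitive ones.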
The number of devices, the volume and variety of sources, and the frequency of the data are all growing, and traditional approaches to computing, storing and transporting them will not scale.
Complex events in real time
In many use cases, latency matters. The delay between a data event and the reaction often must be near real-time; as data volume grows, throughput suffers and delays become unacceptable. Latency must be low, typically less than a few milliseconds, and sometimes less than one millisecond, between the time an event arrives and the time it is processed.
Traditional approaches to centralizing all data and running analytics (even in the cloud) are unsustainable for real-time use cases. Traditional and cloud-based data management and analytics can pose security challenges as they are physically outside of the data center’s security perimeter.
As machine learning intelligence becomes more commonplace in devices out in the field, those devices become more complex, requiring more CPU and memory and drawing more power. The increasing complexity slows processing down and leads to results being discarded, since by the time results from the device are gathered, more recent data is already desired.
Walking the bridge towards complex event processing
There is a need to bridge the gap between the traditional approach and solutions built on new Big Data technologies such as CEP. Leaders across all industries are looking for ways to extract real-time insight from their massive data resources and act at the right time. Bridging this gap will enable your company to become a truly Industry 4.0-ready factory, so that business outcomes can be maximized by making better-informed, more automated decisions and delivering better service and higher quality to your customers.
Our REACH (Real-time Event-Based Analytics and Collaboration Hub) provides such a bridge for you. Let's build a Proof of Concept together to pick your low-hanging fruit and deliver tangible business benefits within a couple of months!
(img source: Steinberg, A., & Bowman, C., Handbook of Multisensor Data Fusion, CRC Press 2001)
Edge vs Fog vs Cloud
In our previous technical blog post we revealed the technology behind the term "fog computing". As we discussed, at a very high level fog computing is about bringing cloud functionality closer to the data while transporting the data a bit away from the edge. Where the data meets the computing functionality, we are talking about fog computing.
Now it is time to make a comparison between Edge, Fog and Cloud computing.
In edge computing, physical devices, machines and sensors are connected directly to data-processing devices. These devices process the data, especially aggregating, transforming or running non-performance-intensive algorithms. This technology is typically used when the data is not yet in a digitally recognizable format: the edge nodes convert these non-digital values into digital ones and transport them to the fog layer for further analysis. It is also possible to run non-performance-intensive pre-calculations on the edge nodes, but this is generally not preferred, because we lose the possibility of running complex algorithms on the raw data in the fog layer.
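A minimal sketch of such an edge node might look like the following: it digitizes raw readings and forwards only a compact summary to the fog layer. The linear temperature sensor, the ADC parameters and the summary shape are all hypothetical assumptions for illustration.

```python
def adc_to_celsius(raw, adc_max=4095, v_ref=3.3):
    """Convert a raw ADC count to a temperature, assuming a
    hypothetical linear analog sensor (10 mV per degree C)."""
    volts = raw / adc_max * v_ref
    return volts / 0.010

def aggregate(readings):
    """Edge-side pre-processing: digitize a batch of raw samples and
    summarize them before shipping the summary to the fog layer."""
    temps = [adc_to_celsius(r) for r in readings]
    return {
        "count": len(temps),
        "min": min(temps),
        "max": max(temps),
        "mean": sum(temps) / len(temps),
    }
```

Note the trade-off mentioned above: once only the summary is forwarded, the fog layer can no longer run algorithms over the raw samples.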
As we discussed in our previous post, fog computing is about running complex, performance-intensive algorithms on the data collected from the edge nodes. These algorithms use more resources and/or require data that is not available on the edge nodes, so it is not possible to run them there. In the fog layer it is also possible to generate control signals and transport them back to the edge nodes, making the system ready to control any machine or device based on complex, calculated events.
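As a toy illustration of this, the sketch below generates control commands from a fleet-wide view that no single edge node has: it compares each node's summary against the average across all nodes. The node names, summary fields and thresholds are hypothetical.

```python
def control_signals(summaries, limit=75.0):
    """Fog-side rule: compare aggregated readings from several edge
    nodes and emit throttle commands for machines running hotter than
    both a fixed limit and the fleet-wide mean."""
    fleet_mean = sum(s["mean"] for s in summaries.values()) / len(summaries)
    commands = []
    for node, s in summaries.items():
        if s["mean"] > limit and s["mean"] > fleet_mean:
            # command to be transported back down to the edge node
            commands.append({"target": node, "action": "throttle"})
    return commands
```

The key point is that the rule needs data from every node at once, which is exactly the kind of computation that cannot live on a single edge device.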
Cloud computing is about buying resources from cloud providers and putting the data into the cloud for analysis. In IoT scenarios it is unlikely that all raw data will be pushed into the cloud. Within the cloud computing layer we have to distinguish between private and public cloud systems. Public cloud does not mean that your data will be available for public access; it means that the resources and services can be ordered by anyone who pays. Private cloud systems are more about connecting several locations into a centralized architecture, such as a data centre, with the whole architecture owned by the company. Both private and public cloud systems depend on (external) networks, so in an Industry 4.0 architecture it is advisable to use them only for cross-company aggregated data analysis.
As you can see, neither Edge, Fog nor Cloud computing alone can cover a complex Industry 4.0 system. Our implementation experience shows that the fog computing layer is the most important layer in such an architecture, but it must live in perfect harmony with the Edge and Cloud systems.
In this context, "Fog" means that the "Cloud" moves down, closer to the ground, to the machines, sensors and legacy systems.
Fog and Cloud computing are complementary to each other. As the amount of data increases, transmitting it all to the cloud can lead to challenges such as high latency, unpredictable bandwidth bottlenecks and distributed coordination of systems and clients.
Fog computing brings computing and applications closer to the data, saving bandwidth on billions of devices and enabling real-time processing and analysis on huge datasets and streams.
Our product, REACH, is a Fog computing platform that delivers real-time event stream processing capabilities by using distributed computing, analytics-driven storage and networking functions that reside closer to the data-producing sources.