Edge Computing and AI

AI isn't the sole turning point in IT right now. Another revolution is the "Internet of Things", or IoT. The idea behind IoT is to give an IP address to tiny sensors and devices that aren't smartphones, tablets, workstations or cars. The smart home, if it is ever to become mainstream, will require IoT to replace the current status quo: dozens of proprietary standards that don't interoperate.

But IoT's real benefit is in industry and services. The much-quoted Rolls-Royce jet engine that uses IoT sensors is just one example. Another is using such sensors to detect how many plates remain in a cafeteria line and to trigger a refill when they run low.

Sensor data is worthless, however, if it isn't processed, or at least stored for later processing. And depending on the device in question, the data stream it produces can be immense, as with the jet engine (multiple terabytes per hour of flight).

Currently, the companies offering cloud computing (e.g. Microsoft, Google, Amazon and some outliers) are pushing for IoT data to be pumped directly into their respective clouds for analysis. This can be problematic, however. Sticking with the example of the jet engine: the amount of data generated is not only extreme, it is also very difficult to route that information quickly enough into the cloud (in this case Microsoft's). The only live connection available to a jetliner in flight is a satellite uplink: very expensive and potentially not fast enough.
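To see why, consider a quick back-of-the-envelope calculation (the 1 TB/hour figure and the satcom bandwidth are assumptions for illustration):

```python
# Back-of-the-envelope check: can a satellite uplink carry the raw data?
data_per_hour_bytes = 1e12                      # assume 1 TB of sensor data per flight hour
required_bps = data_per_hour_bytes * 8 / 3600   # sustained bit rate needed
print(f"Required uplink: {required_bps / 1e9:.1f} Gbit/s")   # -> 2.2 Gbit/s

satellite_bps = 10e6                            # optimistic ~10 Mbit/s aeronautical satcom (assumed)
print(f"Shortfall: {required_bps / satellite_bps:.0f}x")     # -> 222x too slow
```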

Enter what IBM terms "Edge Computing", while Cisco prefers "Fog Computing" and Microsoft speaks of the "intelligent cloud". Which term finally catches on is probably yet to be determined, but my preference is IBM's, as "fog" implies "being in the cloud and not realizing it" to me.

It is actually quite surprising to me that this topic is only now dawning on the various cloud firms. Look at it this way: the human visual system doesn't pump raw sensor data via the optic nerve into the brain, as this would flood the brain with information, rendering it useless for other tasks (such as writing articles like this). Nature realized long ago that raw sensor data needs to be pre-processed before being handed off to the brain for interpretation, which is why all animals with complex vision systems developed a visual cortex.

Consequently, it only seems reasonable that there is no need to dump terabytes of raw data into the "brain" of a cloud without first reducing it to sensible batches of concentrated goodness. Bring on the AI!

Some time ago, I wrote about an exciting new sensor under development that uses input from various sources (sound, light, movement) to determine whether Ann has left the water running at the kitchen sink (again) or the dog is crapping on the living room floor (again). This is "edge computing" at its finest, and its most compact: the sensors are not strewn about the house, feeding data into some remote AI. Instead, they all sit on one compact logic board, which in turn feeds intelligent information to whatever backend system you have (like an app), such as "The water is running in the kitchen sink and no one is there".
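A minimal sketch of what such on-board fusion might look like; all sensor names, types and thresholds here are made up for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Readings:
    sound_db: float         # microphone level
    motion_detected: bool   # PIR sensor
    water_flow_lpm: float   # flow meter on the kitchen supply line

def infer_event(r: Readings) -> Optional[str]:
    """Reduce raw readings to one high-level event string (or nothing)."""
    if r.water_flow_lpm > 0.5 and not r.motion_detected:
        return "The water is running in the kitchen sink and no one is there"
    return None

# Only the condensed event ever leaves the board:
event = infer_event(Readings(sound_db=48.0, motion_detected=False, water_flow_lpm=3.2))
if event:
    print(event)   # in reality: pushed to the backend/app
```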

Going back to the jet engine example, this is clearly a case where the consolidation of raw data into at least semi-intelligent output is an absolute imperative. My guess, to be honest, is that the story of a jet engine pumping terabytes of data per hour into Azure is a case of journalistic copy-catting. It's the same effect that led half of all Germans interested in Formula 1 to call the best slot at the start the "Pool Position" (instead of pole position): some well-known journalist garbled the term while writing up a race report, the report was "borrowed" multiple times by other journalists who couldn't be bothered to write their own, and a short time later you heard "Pool Position" not only from your friends but also from race commentators on TV!

It is unlikely that the engineers at Rolls-Royce put together a system that generates so much data that it can't be analyzed as it happens (which is the main idea behind pumping it into the cloud in the first place). Going by the article from 2016, there are 25 sensors feeding data such as fuel flow, temperature and pressure from various parts of the engine into the data stream.

However, whether the data stream is terabytes or megabytes per hour, feeding the raw data into the cloud just doesn't make sense. A deep learning system on the engine itself is more than capable of analyzing the data from the 25 sensors mentioned in the article and feeding far more concentrated information into the cloud for final analysis. The reason for going to these lengths on a jet engine (and the same will hold for a car, a high-speed passenger train or your house) is to save energy and enable predictive maintenance.
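As a rough illustration of what "concentrating" could mean, here is a hypothetical sketch that boils a minute of raw readings from 25 sensors down to a handful of numbers before anything touches the uplink (the sampling rate, statistics and threshold are all assumptions):

```python
import numpy as np

def condense(window: np.ndarray) -> dict:
    """Summarize one minute of raw readings, shape (samples, 25), into a few numbers."""
    mean = window.mean(axis=0)
    std = window.std(axis=0)
    # Crude drift indicator: how far the latest sample strays from the window average
    z = np.abs((window[-1] - mean) / (std + 1e-9))
    return {
        "per_sensor_mean": mean.round(2).tolist(),   # 25 numbers instead of 150,000
        "max_deviation": float(z.max()),
        "needs_attention": bool(z.max() > 4.0),      # threshold is a placeholder
    }

# One minute at 100 Hz: 6,000 samples x 25 sensors in, a few hundred bytes out
summary = condense(np.random.randn(6000, 25))
```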

The solution probably lies in multiple deep learning modules, each analyzing a subset of sensors for key indicators that can be relayed to the cloud for individual analysis. Even more important, of course, is to use aggregated data from as many jet engines, cars, trains and houses as possible to feed an AI that can make decisions based on the pre-chewed data from an entire airplane fleet, for example. This is where a cloud-based system "shines", though more and more of this "fleet analysis" will likely be run in the small deep learning centers of specialised companies.
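To make the idea concrete, a toy sketch of what the cloud-side fleet view might do with those pre-chewed summaries (the field names and the alert rule are invented for illustration):

```python
from collections import defaultdict

def fleet_alerts(summaries: list) -> dict:
    """summaries: dicts like {'engine': 'RR-001', 'module': 'fuel', 'flagged': True}"""
    flagged = defaultdict(list)
    for s in summaries:
        if s["flagged"]:
            flagged[s["module"]].append(s["engine"])
    # The same module flagged on several engines at once hints at a fleet-wide
    # cause (a bad fuel batch, systematic wear) rather than one failing part.
    return {module: engines for module, engines in flagged.items() if len(engines) > 1}

print(fleet_alerts([
    {"engine": "RR-001", "module": "fuel", "flagged": True},
    {"engine": "RR-014", "module": "fuel", "flagged": True},
    {"engine": "RR-009", "module": "oil",  "flagged": False},
]))   # {'fuel': ['RR-001', 'RR-014']}
```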
