As the Internet of Things (IoT) reaches a hype level not seen since the Internet-driven tech bubble of the late '90s, many pundits believe the model for IoT will be smart sensors communicating directly into machine clouds. As a friend recently said to me, "the cost of sensor technology and SIM cards has fallen so low that it makes no sense to do anything else." I appreciated his conviction. I just didn't buy his argument. In fact, I can think of seven reasons why edge computing is a critical element for the advancement of the IoT.
Let's take a deep dive into each reason.
One: The edge is the muffler for data exhaust. Data exhaust refers to the messages that ultimately have no real value. Machine data generated by sensors will be plentiful. But volume doesn't necessarily mean value. In fact, nearly all messages are inconsequential. For instance, over a 30-day period:
- Take the temperature in every room within a thousand-room building every second.
- At the same time, poll CO2, luminosity, noise, and occupancy.
The end result? You would be compiling 12.96 billion messages each month. But the temperature (or other readings) just doesn't change that often. And even when it does change, the shift often isn't enough to trigger any relevant threshold, resulting in 99 percent or more of the data ingested in this use case being thrown out.
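The arithmetic behind that monthly estimate is simple enough to verify, using the figures from the scenario above:

```python
# Message-volume estimate for the scenario above: 1,000 rooms x 5
# readings (temperature, CO2, luminosity, noise, occupancy), sampled
# once per second, for 30 days.
rooms = 1_000
readings_per_room = 5
seconds_per_day = 60 * 60 * 24
days = 30

messages = rooms * readings_per_room * seconds_per_day * days
print(f"{messages:,}")  # 12,960,000,000 -- roughly 13 billion per month
```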
Moreover, you should want to throw it out, because a high data exhaust rate means a poor signal-to-noise ratio. If you can filter out the inconsequential messages while they're still at the edge, you increase the value of the payload you deliver to central processing. You also spend less money transporting and storing the data. The bottom line: your analysis of the resulting centralized data is more powerful, because the analytics are not unduly burdened by orders of magnitude of needless data.
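As a rough sketch of this kind of edge-side filtering (the threshold, sensor names, and class are illustrative, not from any particular product), a gateway might forward a reading only when it differs meaningfully from the last value it sent upstream:

```python
# Illustrative edge filter: forward a sensor reading only when it
# differs from the last forwarded value by more than a threshold.
class EdgeFilter:
    def __init__(self, threshold):
        self.threshold = threshold
        self.last_sent = {}  # sensor_id -> last forwarded value

    def should_forward(self, sensor_id, value):
        last = self.last_sent.get(sensor_id)
        if last is None or abs(value - last) > self.threshold:
            self.last_sent[sensor_id] = value
            return True
        return False  # inconsequential change: dropped at the edge

f = EdgeFilter(threshold=0.5)
readings = [21.0, 21.1, 21.2, 22.0, 22.1]
forwarded = [v for v in readings if f.should_forward("room-101/temp", v)]
print(forwarded)  # [21.0, 22.0] -- only the meaningful changes survive
```

Even a crude filter like this illustrates the muffler effect: five raw readings become two forwarded messages, and the payload that reaches central processing carries mostly signal.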
Two: Time can matter. IoT implementations will ultimately be as much (or more) about the utility value of the data as about closed-loop message-response systems -- but that in no way reduces the importance of response time. In fact, the more effective you are at leveraging historical data, the more likely you are to achieve the best message response possible.
However, edge processing can provide faster response in most instances. There are many use cases where this doesn’t really matter, such as turning on the lights, closing the garage door, or checking the vending machine status. Those are all examples where the latency of the system will not be impacted by moving the response closer to the point of ingestion. But, when the reaction time becomes more critical, such as in a car or any moving object responding to surrounding conditions, latency is extremely important. Moreover, as we move forward it is almost certain that some systems in motion (again, like cars) will be interacting with other systems in motion (like other cars) where the broadcasting is proximity based (e.g., mesh networking) and processing on the edge will be critical.
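One simple way to picture this split is an edge dispatcher that reacts locally to latency-critical events while still forwarding everything upstream for historical analysis. The event names and the criticality rule below are assumptions for the sketch, not a standard:

```python
# Illustrative edge dispatch: actuate locally on latency-critical
# events, and queue every event for the central cloud regardless.
CRITICAL_EVENTS = {"obstacle_detected", "collision_warning"}

def handle_event(event, actuate_locally, enqueue_for_cloud):
    if event["type"] in CRITICAL_EVENTS:
        actuate_locally(event)    # millisecond-scale local response
    enqueue_for_cloud(event)      # historical record still flows upstream

local_actions, cloud_queue = [], []
handle_event({"type": "obstacle_detected"},
             local_actions.append, cloud_queue.append)
handle_event({"type": "vending_status"},
             local_actions.append, cloud_queue.append)
print(len(local_actions), len(cloud_queue))  # 1 2
```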
Three: Configurability, while sometimes overlooked, is important. Francis daCosta’s book, “Rethinking the Internet of Things,” makes a great case for edge processing. One of the more nuanced elements he discusses is the notion of configurability. Specifically, the idea that every sensor needs to be IPv6 addressable has real issues. For one, many sensors don’t actually need their own IPv6 address, but rather, can sit behind a specific address (like an edge device). Secondly, the energy, memory, and related compute power required for an IPv6 protocol is overkill for many sensors and associated use cases. By configuring the devices behind the edge, the sensors themselves can be smaller, simpler and cheaper.
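A minimal sketch of that arrangement (the gateway address, sensor IDs, and local protocol are invented for illustration): many simple sensors report through one routable edge device rather than each carrying a full IPv6 stack.

```python
# Illustrative gateway: simple sensors sit behind a single addressable
# edge device; only the gateway needs a routable (IPv6) address.
class EdgeGateway:
    def __init__(self, address):
        self.address = address   # the only routable address
        self.sensors = {}        # local, non-routable sensor IDs

    def register(self, sensor_id):
        self.sensors[sensor_id] = None

    def report(self, sensor_id, value):
        # Sensors speak a minimal local protocol; the gateway speaks IP.
        self.sensors[sensor_id] = value

    def snapshot(self):
        return {"gateway": self.address, "readings": dict(self.sensors)}

gw = EdgeGateway("2001:db8::1")
gw.register("temp-1")
gw.register("co2-1")
gw.report("temp-1", 21.5)
print(gw.snapshot())
```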
Four: A secure IoT implementation is a happy one. There are cases to be made on both sides here. Some argue (I think effectively) that you compromise the security of your network whenever you open up a connection for protocol translation. But the idea that all sensors are endpoints for host-to-host communications likely multiplies the potential entry points, which could also compromise the security of the network. Others argue that "dumber" devices can be less (or far less) vulnerable, and the thesis that simpler devices behind an edge can provide better security for the network also has merit. It will be very interesting to see how this debate evolves over time (which it certainly will).
Five: Governance will become more important. While we live in a truly global society, we still must conform to many national and regional regulations. How you keep data in the UK will not be the same as how you keep data in France, Brazil or the United States. There are certainly use cases where this will not matter, but if you are McDonald's, The Gap, or GE, it will matter. There are many aspects to governance, ranging from privacy to ownership to stewardship of the data. These considerations will likely be made easier and more practical with edge processing.
Six: The proper architecture will be needed to accommodate market demands. This is a big statement. The idea that the IoT will evolve as nothing more than a bunch of closed-loop message-response systems is implausible. For a number of reasons, the market will demand architectures that abstract the ingestion of messages from the utilization of that data, and because of this, it is unwise to tie a particular message to a specific use case.
In the early days of computing, hierarchical databases were mainly established to support specific applications, and in time, people began to realize that the underlying data had value to multiple applications (this gave rise to the System R team at IBM and the onset of relational databases). For example, if a restaurant has 10 distinct sensor systems, each with its own closed-loop message-response system, those messages would flow directly to the respective equipment vendors. Specifically, the lighting data would go to the lighting vendor, the kitchen equipment data to the kitchen equipment vendor, and so forth.
While it does make sense for the suppliers of those systems to have the data, it makes no sense for the primary constituent (in this case, the restaurant operator) not to have it. In fact, that data is likely to be combined at the first receiver point (the restaurant), where it is viewed in the context of the other silos (and may even be combined with other data, like street traffic, weather, demographic information, etc.). That data, now cleansed and enriched, can be the fuel for a range of applications including corporate, in-restaurant operational optimization, supply chain, and perhaps a variety of applications for third-party constituents, such as government regulators, suppliers and more. It is likely that some form of an event-driven architecture (e.g., publish and subscribe) will evolve as the de facto standard.
What this all boils down to is, from the edge you can establish the proper permissions so that the right data still flows to the right constituents. The argument against this might be the feared loss of control by the equipment suppliers (i.e., the lighting vendor loses control of being able to do a firmware update to control millions of devices in the field through their device cloud), but this does not have to be the case. Edge processing does not require the vendors to relinquish that control, it simply requires the configurations at the edge to accommodate that need, which should be easy to do.
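A toy publish-and-subscribe broker makes the permission idea concrete. The topic names, constituents, and `EdgeBroker` class below are illustrative assumptions, not any vendor's API:

```python
# Minimal pub/sub sketch with per-topic permissions enforced at the
# edge: each constituent may subscribe only to topics it is granted.
from collections import defaultdict

class EdgeBroker:
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> callbacks
        self.permissions = defaultdict(set)   # subscriber -> topics

    def grant(self, subscriber, topic):
        self.permissions[subscriber].add(topic)

    def subscribe(self, subscriber, topic, callback):
        if topic not in self.permissions[subscriber]:
            raise PermissionError(f"{subscriber} may not read {topic}")
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)

broker = EdgeBroker()
broker.grant("lighting-vendor", "lighting")
broker.grant("restaurant-operator", "lighting")
broker.grant("restaurant-operator", "kitchen")

vendor_inbox, operator_inbox = [], []
broker.subscribe("lighting-vendor", "lighting", vendor_inbox.append)
broker.subscribe("restaurant-operator", "kitchen", operator_inbox.append)
broker.publish("lighting", {"lamp": 12, "status": "dim"})
broker.publish("kitchen", {"fryer": 2, "temp_f": 350})
```

Here the lighting vendor still receives its lighting messages, while the operator sees the kitchen data too -- the right data flows to the right constituents without any vendor losing its feed.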
Seven: Cost. There are many instances where edge processing will not save you money, but there are others where it will, and in a big way. The savings are a function of network traffic, central storage, additional compute power, and personnel, and they are especially large when your message stream has a high exhaust rate.
While it seems that an edge would always be a physical, on-premise device, this is not always going to be true. In some instances, an edge might be in the cloud, yet still not be the central processing point. There is growing evidence that edge processing will become more and more important to the Internet of Things, and it continues to evolve. As with most technological evolution, the market will drive adoption, and in the case of edge processing, the trajectory seems obvious when you think about how people will ultimately derive value from these systems.