Busting three cloud and edge computing myths
As more sensors, mobile devices and powerful applications drive data at the edge of our networks, edge computing continues to lead industry discussions and IT investments. In just three years, more than 40 percent of enterprise IT organizations are expected to be running an edge computing strategy, up from one percent last year, according to Gartner.
As we make this transition, millions of machines and objects are connecting to the Internet for the first time, challenging legacy architecture and changing how we look at our cloud infrastructures. More companies are putting computing resources at the edge of the network for close proximity to the devices that generate data and insights.
Edge computing is the processing of data at the edge of the network, where information is generated and where a remote primary data center or cloud has limited reach. By placing compute resources next to the devices that collect data, we can dramatically improve our responses to events like cybersecurity breaches, or take advantage of real-time changes in the market and in consumer behavior.
Edge computing applications must work in real time and cannot wait for the cloud. Think about sensors in elevators that detect data around temperature fluctuations, energy consumption and even when the doors open and close. Or consider how driverless cars react to changes in traffic and road conditions.
This data is processed and analyzed on or near the device where it’s created, instead of traveling to a remote cloud or data center with latency and connectivity issues that could lead to missed opportunities or significant safety issues.
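The pattern described above, act on data locally and forward only summaries or alerts upstream, can be sketched roughly as follows. The threshold, sensor names and function names here are illustrative assumptions, not from any specific product:

```python
# Minimal sketch of edge-side processing: handle each reading locally and
# queue only summaries/alerts for the cloud. All names and thresholds here
# are hypothetical, for illustration only.

TEMP_ALERT_C = 60.0  # illustrative alert threshold

def process_reading(reading, local_actions, upstream_queue):
    """Act on a sensor reading at the edge; queue only what the cloud needs."""
    if reading["temp_c"] >= TEMP_ALERT_C:
        # Immediate local response, no cloud round trip
        local_actions.append(("shut_down_motor", reading["sensor_id"]))
        # Only the exceptional event travels upstream
        upstream_queue.append({"event": "overtemp", **reading})
    # Normal readings are aggregated locally rather than sent individually
    return reading["temp_c"]

local_actions, upstream = [], []
temps = [process_reading(r, local_actions, upstream) for r in [
    {"sensor_id": "elev-7", "temp_c": 41.5},
    {"sensor_id": "elev-7", "temp_c": 62.3},
]]
# Periodic summary is all the cloud sees for routine operation
upstream.append({"sensor_id": "elev-7", "avg_temp_c": sum(temps) / len(temps)})
```

The point of the sketch is the split: the time-critical action happens on the device side, while the cloud receives only the compact events and aggregates it actually needs.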
So as organizations begin eyeing a move to edge computing, various misperceptions are clouding their potential migrations. Here are three myths to consider as they relate to edge computing resources.
Myth #1 - Edge Computing requires massive resources
Edge computing requires on-premises resources outside a typical data center. But those resources can be minimal. A full data center, or even a small one, is not necessary to connect and process data at the edge of a network.
Edge computing infrastructure can be as small as a single IoT device or as large as a micro data center of multiple compute appliances. Think of it in the context of remote office or branch office computing. Edge computing can also sit adjacent to manufacturing systems, medical equipment, point-of-sale systems and IoT devices.
Myth #2 - Network performance is important but not critical
Network performance within or near a data center is often taken for granted because of standard high-availability connectivity and power systems. At the edge, that reliability is an absolute necessity.
Building out an edge network means changing the way you manage and run your data centers. Your systems are no longer in large, easy-to-access buildings with on-site operations teams. You have to build something that’s more like a cellular network, with hardware deployed in modular housings on remote sites that take time to reach.
Edge computing might require multiple network providers and multiple connectivity points to support full loads from the edge data center. The diversity and redundancy are critical so that if there’s a failure or loss of a network provider, you can still deliver the same high-quality service. That may mean mixing wired and wireless connectivity to ensure access even when one router is down. With edge computing, compute sources can run from cellular base stations or nearby metropolitan networks.
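The provider diversity described above amounts to a simple failover policy: prefer the primary link, fall back to the others when it is down. A rough sketch, with simulated link health and made-up provider names:

```python
# Sketch of uplink failover at an edge site: try connectivity options in
# priority order (wired providers first, wireless last) and use the first
# healthy one. The health probe is simulated; a real site would ping a
# health endpoint per link. Provider names are hypothetical.

def pick_uplink(uplinks, is_up):
    """Return the first healthy uplink in priority order."""
    for link in uplinks:
        if is_up(link):
            return link
    raise RuntimeError("no uplink available")

# Priority order: two wired providers, then a wireless fallback
uplinks = ["fiber-provider-a", "fiber-provider-b", "lte-fallback"]

down = {"fiber-provider-a"}  # simulate losing the primary provider
chosen = pick_uplink(uplinks, lambda link: link not in down)
```

With the primary fiber provider out, traffic lands on the second wired provider; only if both wired links fail does the site drop to wireless, which keeps service up at reduced capacity.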
Myth #3 - Edge computing is easy
It’s hardly surprising that some vendors will tell you they can offer an easy path to this new form of networking, which combines compute and storage at the edge. To be fair, edge computing is simpler than managing a large, complex data center, but elements of it still demand careful IT planning.
Edge computing environments are small enough to operate without a dedicated IT staff. But to operate in a low-maintenance fashion, the infrastructure needs to be easy to implement and manage, as well as easily connected to the primary data center or cloud as needed.
Edge data centers aren’t one-size-fits-all: an installation could be anything from a single server to a self-contained rack to 20 or 30 racks. Whatever the size, they need the right equipment. Rather than viewing an edge data center as an inexpensive, small-scale add-on, think of each individual node as a data center that must be designed and tested to support business needs.
Most data centers require onsite staff working in shifts to maintain equipment. That’s not possible with edge computing, because you’re managing multiple small data centers in a diverse set of locations, on top of your primary data center assets.
This arrangement requires remote monitoring and a significant amount of automation. Redundant hardware might be needed to address access issues. Edge computing applications need to be self-healing or capable of failing over to nearby nodes or data centers to maintain service levels.
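The monitor-and-fail-over loop described above can be sketched as follows. The node names, health flags and load numbers are invented for illustration; a real system would poll health endpoints and hand workloads to an orchestrator:

```python
# Sketch of self-healing across edge nodes: scan every site, and when one
# is unhealthy, reassign its workload to the least-loaded healthy peer.
# All node data here is simulated, for illustration only.

NODES = {
    "site-a": {"healthy": True,  "load": 2},
    "site-b": {"healthy": False, "load": 3},  # simulated node failure
    "site-c": {"healthy": True,  "load": 1},
}

def heal(nodes):
    """Move load off unhealthy nodes onto the least-loaded healthy peer."""
    moved = {}
    for name, info in nodes.items():
        if not info["healthy"] and info["load"] > 0:
            # Pick the healthy node with the most spare capacity
            target = min((n for n, i in nodes.items() if i["healthy"]),
                         key=lambda n: nodes[n]["load"])
            nodes[target]["load"] += info["load"]
            info["load"] = 0
            moved[name] = target
    return moved

moved = heal(NODES)  # site-b's workload shifts to the idlest healthy site
```

In practice this loop would run continuously from a remote monitoring system, which is what lets a fleet of unstaffed edge sites maintain service levels without anyone driving out to the rack.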
So far the industry has yet to establish a solid body of best practices in this space. We’re all still somewhat in trial-and-error mode, but once we crack the code and perfect this approach, the computing landscape will be an entirely different world.