
Joe Weinman
Author, Cloudonomics and Digital Disciplines
USA

A new distributed computing architecture is emerging, one that is simultaneously an evolution of and a revolution in today’s approaches, with implications for computing, networks, data centers, and interconnection.

Although this new architecture might be described in many ways, perhaps the simplest is to call it a “hybrid multicloud fog.”  It represents a hybrid of private and public clouds, multiple public clouds, and a hybrid of cloud and fog. The hybrid cloud-fog includes various elements ranging from hyperscale facilities with tens of thousands of servers at one end of the spectrum, to billions or eventually a trillion things at the other, with various intermediate layers (the “fog”) reaching from centralized clouds to a distributed edge.  Some may wish that there were simply a single approach, say, running everything in a hyperscale public cloud facility. However, for most applications there are benefits and trade-offs to private and public, to centralized and distributed, and to technology and/or vendor diversity, so typically the optimal approach uses the best of both (or all) worlds across multiple dimensions.

For example, private infrastructure can be optimal for stable workloads, custom performance-engineered solutions, and proximity to legacy systems and data; public can be optimal for varying workloads, proximity to cloud-based data and services—including cloud functions such as AWS Lambda—and relative immunity from DDoS attacks. Centralization can be good for aggregate workload smoothing due to statistical effects, simplicity of management, and latency reduction to cloud-based data and services; dispersion can be good for latency reduction to edge devices, backhaul network bandwidth reduction, and business continuity / survivability.

An estimate of a trillion things sounds hyperbolic, since a variety of analysts have been revising estimates downward and calling even conservative ones overly optimistic.  However, as a rough estimate, let’s assume that half of these trillion things are not specific to any individual or household: industrial sensors, robots, drones, shared autonomous vehicles, and the like, placed in factories, mines, and farms, or in cities to control traffic lights, for video surveillance, parking meters, and so forth.  That leaves 500 billion devices for individuals.  As the world rapidly closes in on 8 billion people, that implies just over 60 devices per person.  While that may seem high—and perhaps it is at this moment in regions of the world with more fundamental near-term goals such as running water and reliable electricity—consider that it is not unusual for someone to have multiple devices such as laptops, desktops, tablets, smartphones, and now, smartwatches.  Then add in multiple surveillance cameras, smart TVs, smart speakers, Wi-Fi light bulbs, Wi-Fi outlets, connected kitchen appliances, and the next wave of connected everything—wearables and implantables such as connected pacemakers—and pretty soon a trillion seems low.
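The arithmetic above is easy to verify; here is the back-of-the-envelope calculation, where the even split between shared and personal devices is the rough assumption stated in the text:

```python
# Back-of-the-envelope check of the devices-per-person estimate.
total_things = 1_000_000_000_000   # one trillion connected things
shared_things = total_things // 2  # industrial, municipal, and other non-personal devices
personal_things = total_things - shared_things  # 500 billion personal devices

world_population = 8_000_000_000   # roughly 8 billion people

devices_per_person = personal_things / world_population
print(devices_per_person)  # 62.5 -- "just over 60 devices per person"
```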

These drivers are being complemented by a variety of emerging technology enablers.  As 5G mobile deployments begin, bandwidth will increase and latency will decrease.  Software-Defined Wide-Area Networks (SD-WANs) will enable greater flexibility.  New offerings will simplify integration and management of private and public, centralized and edge resources.  For example, AWS Outposts offers, among other things, APIs compatible with its public cloud services, but in configurations as small as a single server.  This means that you could conceivably “run AWS” in an enterprise data center, a cell site, factory, farm, mine, or your living room, either alone or as a hybrid with public cloud services.

The implications of all of these trends are significant.  One concerns the nature of networks and interconnection.  Network traffic is likely to bifurcate into asymmetric downstream and upstream flows.  One type of flow is already prevalent: global content streaming—of HD, 4K, and soon, 8K video content—which already accounts for a substantial portion of bandwidth.

According to the recently released Sandvine Global Internet Phenomena Report, video is 58 percent of the downstream traffic volume globally, thanks to “traditional” services such as Netflix, Amazon Prime, and YouTube.  This is true even though efficient compression algorithms are used, content delivery infrastructure (such as Netflix’s Open Connect Appliance) reduces core network traffic by placing the entire library of content at the edge, and some network providers manage bandwidth to permit only standard definition video.  Web traffic, the rapidly growing segment of online gaming, file sharing, social media, and so forth comprise the balance.

The trillion things—or at least the billions of them—will complement these main flows, both as a destination for centrally sourced data, and as a source for data to be sent upstream to the fog and the cloud.  Bandwidth, latency, security, and management needs will vary greatly.  For instance, a video surveillance stream might be multiple megabits per second, but a smart meter might only generate a few bytes per hour.  And, because upstream and downstream traffic flows are highly asymmetric, so are the network approaches that support them.  To illustrate, everyone watching a hit movie is watching the same version, so it can be distributed to and stored at the edge. However, everyone’s smart meter or retail sales data is unique, so it will typically need to be carried all the way to the center (except for cases such as compression or other pre-processing, distributed queries processed at the edge or in the device, or edge intelligence, such as offered by Swim.ai).

Consequently, total global traffic continues to increase, but average bandwidth per connection might actually decrease as many of these low-bandwidth devices are brought online.  The net result, however, is a substantial increase in aggregate traffic bandwidth, number of interconnections, and interconnection bandwidth. This is due to the factors above as well as additional drivers including VR and AR, blockchain, data sovereignty, and global trade flows.

The Cisco Visual Networking Index predicts that global IP traffic will increase at a 26 percent CAGR over the next few years.  However, according to the Equinix Global Interconnection Index, Interconnection Bandwidth will grow at a CAGR of 48 percent. In other words, the capacity being deployed to exchange traffic at interconnection facilities is growing almost twice as fast as the volume of traffic. This is a function of both more interconnections and more bandwidth capacity per interconnection. This larger scale, more interconnected, heterogeneous, intermittently connected global architecture will present new challenges in terms of addressing, monitoring, management, operations, administration, billing, security, flexibility, intelligence, performance, and reliability.
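Those two growth rates diverge faster than they might appear to, because the gap compounds.  A quick calculation shows the cumulative effect; the CAGRs come from the reports cited above, while the five-year window and the normalized starting value of 1.0 are illustrative assumptions:

```python
def compound_growth(start, cagr, years):
    """Value after compounding at a constant annual growth rate (CAGR)."""
    return start * (1 + cagr) ** years

traffic = compound_growth(1.0, 0.26, 5)       # global IP traffic at a 26% CAGR
interconnect = compound_growth(1.0, 0.48, 5)  # interconnection bandwidth at a 48% CAGR

print(round(traffic, 2))       # ~3.18x growth over five years
print(round(interconnect, 2))  # ~7.1x growth over five years
```

So while the annual rate is not quite double, after five years interconnection bandwidth has grown more than twice as much as traffic in cumulative terms.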

To put it simply, we are moving to an even richer, denser, more distributed, more tightly interconnected global fabric, which will offer new opportunities for innovation but also new challenges for enterprises, governments, cloud providers, network service providers, and consumers.

To learn more, join me in Honolulu at PTC’19: From Pipes To Platforms, Tuesday, January 22nd, for my center stage discussion “A Trillion Things About the Cloud,” with Adrian Cockcroft, VP Cloud Architecture Strategy, Amazon Web Services, and James Staten, VP & Principal Analyst – CIOs & Digital Transformation, Forrester Research.
