By Todd Johnson, Chairman and President of Kollective
This is Part 1 in a series of articles on what I think you should be aware of in terms of your network infrastructure as it relates to your global, internal communications plans. While technology advancements are opening up possibilities for more effective, streamlined communications inside the firewall, the set of factors that must be taken into account to achieve these goals is getting no less complex.
I provide specifics on the “control and adapt” features of software-defined network solutions, such as the software-defined enterprise content delivery network (SD ECDN). These solutions are important because they offer cost reduction, improved capabilities over hardware-based solutions, and the ability to satisfy the growing communications needs of both business and IT user communities.
Software-Defined Network Solutions
Software-defined is a term increasingly used to describe solutions that deliver, in software, network capabilities that have historically been delivered via hardware devices. The ability of software-defined solutions to be deployed in a fraction of the time, and at a fraction of the cost, of their hardware-based predecessors has led to the rapid adoption of a wide range of software-defined technologies in today’s enterprise environment.
Software-defined enterprise content delivery network (SD ECDN) technology, for example, leverages existing infrastructure, notably storage and network bandwidth on end-user devices. It is highly secure, delivering content via a multi-layered, crypto-protected mesh that dynamically adapts to network and other consequential changes. SD ECDN is sometimes referred to as peer-to-peer technology; however, it encompasses much more than peer-based delivery and is, more accurately, a network, grid, or mesh solution.
What I Mean by “Control”
Regardless of how smart a particular network delivery technology may be, it cannot determine business priorities or make the right tradeoffs between competing resources on its own, all of the time.
No matter how refined the default settings of a solution may be, configurable “control” mechanisms must exist for network administrators to be able to respond to unique circumstances or requirements often present in large, complex, global networks. The ability to rise to the challenge of the uniqueness of a given network is one of the most powerful attributes a software-based content delivery solution can have.
Important control mechanisms to consider when evaluating network delivery infrastructure include:
- Locality settings
- Use case optimization
- Peering rules
Locality Settings
Locality settings can be used to manage a wide range of attributes that govern how the software behaves. A locality is typically defined as one or more ranges of IP addresses that have been selected to share a common set of requirements.
Within a locality, the controllable elements typically affect:
- Use case optimization
- Peering rules
These parameters can be configured to accommodate the uniqueness of a network in, for example, remote facilities in places like South America, Eastern Europe, or smaller Asian countries. They can also be used to handle the unique needs of a manufacturing environment versus an environment designed for back-office operations.
These elements can also be grouped by methodologies other than locality, such as by user role. Localities, however, are a common approach and a straightforward way to describe the application of the control feature set.
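As a sketch of how locality-based configuration might work, an agent's IP address can be matched against the CIDR ranges that define each locality. The locality names, ranges, and settings keys below are hypothetical illustrations, not an actual product schema:

```python
# Hypothetical sketch: resolving an agent's locality from configured
# IP ranges, then returning that locality's shared settings profile.
import ipaddress

# Each locality maps a set of CIDR ranges to a shared settings profile.
LOCALITIES = {
    "sa-remote-offices": {
        "ranges": ["10.40.0.0/16", "10.41.0.0/16"],
        "settings": {"max_hops": 3, "wan_peering": False},
    },
    "eu-east-plants": {
        "ranges": ["10.50.0.0/15"],
        "settings": {"max_hops": 4, "wan_peering": False},
    },
}

# Agents outside any configured locality fall back to defaults.
DEFAULT_SETTINGS = {"max_hops": 4, "wan_peering": True}

def settings_for(agent_ip: str) -> dict:
    """Return the settings profile for the locality containing agent_ip."""
    ip = ipaddress.ip_address(agent_ip)
    for loc in LOCALITIES.values():
        if any(ip in ipaddress.ip_network(r) for r in loc["ranges"]):
            return loc["settings"]
    return DEFAULT_SETTINGS

print(settings_for("10.40.12.7"))  # falls inside "sa-remote-offices"
```

The same lookup pattern could key on user role or any other grouping attribute instead of IP ranges.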
Use Case Optimization
Control mechanisms can also be put in place so software-defined systems can adapt to different use cases such as:
- Live video
- Video on Demand (VoD)
- Pre-delivered background content
For live streaming, the software should act somewhat aggressively in the way it sources content from peers and the way it competes for bandwidth, and settings can be configured accordingly.
VoD is different: response time is less critical, and viewer sensitivity to launch delay is reduced. For this use case, the software should be configurable to optimize for a different outcome, such as being more respectful of other network traffic and more passive with regard to bandwidth contention.
Lastly, when using subscriptions or proactive content targeting to pre-deliver content, the software should be set to be more passive. Efforts to pre-position content are typically made with plenty of lead time and are often launched overnight from their geography of origin. In these cases, content delivery should be set to maximize east/west traffic while allowing little to no contention on the north/south routes.
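The three use cases above might be captured as configuration profiles along these lines. The profile names and parameter values are illustrative assumptions, not an actual product schema:

```python
# Hypothetical sketch of per-use-case delivery profiles: live streaming
# competes for bandwidth aggressively, VoD is more deferential, and
# pre-delivered background content yields almost entirely.
USE_CASE_PROFILES = {
    "live": {
        "peer_search": "aggressive",  # widen the search for peer sources
        "bandwidth_contention": 0.9,  # fraction of bandwidth to compete for
        "prefer_east_west": False,    # launch delay matters most
    },
    "vod": {
        "peer_search": "moderate",
        "bandwidth_contention": 0.5,
        "prefer_east_west": True,
    },
    "pre_delivery": {
        "peer_search": "passive",
        "bandwidth_contention": 0.1,  # little to no north/south contention
        "prefer_east_west": True,     # maximize LAN (east/west) transfer
    },
}

def profile(use_case: str) -> dict:
    """Look up the delivery profile for a given use case."""
    return USE_CASE_PROFILES[use_case]
```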
Peering Rules
Examples of peering rules that can be set for both the user and the serving source include:
- How far the agent searches to look for peer sources for a given piece of content
- Whether a given agent is allowed to act as a content source
- How many sources are allowed to be active as content servers to an agent
The connection between one network device and the next is called a hop. The number of such hops, known as the hop count, is a measure of the network distance between two devices and has a bearing on latency. Setting peering rules that include hop counts and/or latency gives administrators control over how the software behaves while still allowing it to optimize within the boundaries of the established rules.
Some systems use both hop counts (the number of network switching or gateway points traversed on a specific path to content peers) and latency configurations together. Network administrators should have the ability to set how widely a specific group of agents is allowed to look for effective peers for the specific content item requested. The hop-count method approximates whether an agent can peer within the LAN or not. Peering within 3 or 4 hops usually stays within a LAN environment, so allowing higher hop counts typically means an agent will peer across a WAN segment, which, in most cases, is not desirable.
The latency method measures round-trip times. Latency can also be a good proxy for distance in a network topology, as not all hops are created equal.
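Combining the two methods, a peer-selection rule might look like the following sketch. The `Peer` structure, the threshold values, and the `select_peers` helper are hypothetical illustrations of the idea, not a real API:

```python
# Hypothetical sketch: filtering candidate peers with hop-count and
# latency rules, then capping how many sources may serve an agent.
from dataclasses import dataclass

@dataclass
class Peer:
    address: str
    hops: int          # network hops from the requesting agent
    latency_ms: float  # measured round-trip time

def select_peers(candidates, max_hops=3, max_latency_ms=20.0, max_sources=4):
    """Keep peers that look LAN-local (few hops, low RTT), nearest first."""
    eligible = [
        p for p in candidates
        if p.hops <= max_hops and p.latency_ms <= max_latency_ms
    ]
    eligible.sort(key=lambda p: (p.hops, p.latency_ms))
    return eligible[:max_sources]

peers = [
    Peer("10.1.0.5", hops=2, latency_ms=3.0),   # same LAN
    Peer("10.1.0.9", hops=3, latency_ms=6.5),   # same LAN
    Peer("10.9.4.2", hops=7, latency_ms=45.0),  # across a WAN link: excluded
]
print([p.address for p in select_peers(peers)])
```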
Determinations can be made as to whether a given agent is allowed to act as a content source and how many sources are allowed to be active servers to an agent. By limiting the maximum allowed bandwidth for downloading and serving, network administrators control the agent’s use of network resources.
Many factors influence bandwidth limits, and these can be governed by the highest-priority asset being downloaded, allowing the agent to aggressively stream live and on-demand content at the same bitrate.
Bandwidth limits may be disabled by default so that the agent may run unfettered except when throttling is deemed necessary as a result of user activity, or to avoid network congestion.
Bandwidth limits include:
- Maximum download and serving bandwidth settings, which limit the bandwidth used for all downloading and for serving other peers, including on-demand content and live streams.
- WAN download and serving bandwidth levels, which govern the maximum allowed download or serving bandwidth when downloading from, or serving, a source detected as being on the other side of a wide-area network link.
- Absolute limits, which control the maximum download and serving bandwidth the agent is allowed to use for all types of downloads and serving, including foreground downloads.
- Throttle limits, which control the maximum bandwidth for the agent while downloading and serving in throttled mode. The agent ramps down to these limits as quickly as possible when throttling is triggered and returns to normal bandwidth usage once throttling is no longer required. If more than one limit applies, the most restrictive limit is honored.
- Device management settings, which maintain the balance between increased performance and the prevention of adverse impact to resources across all elements of the system. Tools can be put in place to ensure there is a proper balance on the endpoints as well.
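The rule that the most restrictive limit is honored when several limits apply can be sketched in a few lines. The helper name and kilobit-per-second units are my own illustration:

```python
# Hypothetical sketch: when multiple bandwidth limits apply at once,
# the most restrictive (smallest) configured limit is honored.
def effective_limit_kbps(*limits):
    """Return the smallest configured limit; None means 'unlimited'."""
    configured = [limit for limit in limits if limit is not None]
    return min(configured) if configured else None

# Absolute limit 10 Mbps, WAN limit 2 Mbps, throttle-mode limit 500 kbps:
print(effective_limit_kbps(10_000, 2_000, 500))  # the throttle limit wins
```

This also reflects the default-off behavior described above: with no limits configured, the function reports no cap at all.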
I focused specifically on the “control” mechanisms of effective software-defined network infrastructure technology in this article. In the next article in this series, I’ll delve into what I call the “adapt” mechanisms, which are equally important to know about and consider.