
Sea Change Series: Scale in the Mega Data Center

WinterGreen Research announces that it has published a new study, Scale in the Mega Data Center: Market Shift to Non-Blocking Network Inside Data Center Building. Next-generation mega data center technology leverages scale to implement cloud computing that outperforms most current offerings.

Scale is a vital part of the technology used to support next-generation data centers. The study is targeted to C-level executives who need to move quickly and surely to improve IT. Automating IT depends on understanding the business market opportunity from an independent perspective. Vendors are smart, but they are committed to the technology they are selling; the Sea Change Series from WinterGreen Research provides a perspective not available anywhere else.

Extreme scale is what brings enough pathways inside a mega data center to create a non-blocking (Clos) networked server architecture. A non-blocking network architecture benefits the business because it permits launching thousands of virtual servers on demand at the application layer. In this manner, innovation can happen quickly.
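The role scale plays can be sketched in a few lines. The following Python is an illustrative model of a two-tier leaf-spine (Clos) fabric, not anything from the report: a leaf switch is non-blocking when its uplink capacity to the spines matches its server-facing capacity, and the number of equal-cost paths between servers on different leaves grows with the number of spine switches. All function names and port counts are hypothetical.

```python
# Sketch of the non-blocking condition in a two-tier leaf-spine (Clos) fabric.
# All names and parameters are illustrative assumptions, not from the report.

def is_non_blocking(server_ports, uplink_ports, port_gbps_down, port_gbps_up):
    """A leaf switch is non-blocking (1:1 oversubscription) when its
    uplink capacity to the spines matches its server-facing capacity."""
    downlink_capacity = server_ports * port_gbps_down
    uplink_capacity = uplink_ports * port_gbps_up
    return uplink_capacity >= downlink_capacity

# 48 x 10G server ports against 6 x 40G uplinks: 480G down, 240G up,
# a 2:1 oversubscribed (blocking) leaf.
print(is_non_blocking(48, 6, 10, 40))   # False
# Doubling the uplinks to 12 x 40G makes the leaf non-blocking.
print(is_non_blocking(48, 12, 10, 40))  # True

# In a two-tier Clos fabric, traffic between servers on different leaves
# has one equal-cost path per spine, so path diversity grows with scale.
def equal_cost_paths(num_spines):
    return num_spines

print(equal_cost_paths(4))   # 4 paths in a small fabric
print(equal_cost_paths(48))  # 48 paths at mega data center scale
```

The point of the sketch is that non-blocking behavior is a capacity-planning property: it only emerges once there are enough spine switches and uplinks, which is why extreme scale is a precondition.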

Using a mega data center, DevOps teams and automated processes can request and deploy additional resources without backing the Dell truck up to the data center every week to provide on-demand capacity. Automated deprovisioning frees surplus resources, and because those resources are virtual, there are no stacks of surplus hardware to dispose of as underutilized capital assets.
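The provision-and-free cycle described above can be sketched as a simple reconciliation loop. This Python class is a hypothetical illustration of the pattern, assuming made-up utilization thresholds; it is not any vendor's API.

```python
# Illustrative sketch of automated provisioning/deprovisioning logic,
# the kind a DevOps pipeline might run against a cloud provider's API.
# The class, thresholds, and doubling policy are assumptions for clarity.

class AutoScaler:
    def __init__(self, min_servers=2, max_servers=1000,
                 scale_up_at=0.75, scale_down_at=0.25):
        self.min_servers = min_servers
        self.max_servers = max_servers
        self.scale_up_at = scale_up_at      # utilization that triggers growth
        self.scale_down_at = scale_down_at  # utilization that triggers release
        self.servers = min_servers

    def reconcile(self, utilization):
        """Request more virtual servers under load; free surplus ones
        when demand falls -- no trucks, no idle hardware to dispose of."""
        if utilization > self.scale_up_at:
            self.servers = min(self.servers * 2, self.max_servers)
        elif utilization < self.scale_down_at:
            self.servers = max(self.servers // 2, self.min_servers)
        return self.servers

scaler = AutoScaler()
print(scaler.reconcile(0.90))  # burst of demand: 2 -> 4 servers
print(scaler.reconcile(0.90))  # still hot: 4 -> 8 servers
print(scaler.reconcile(0.10))  # demand gone: 8 -> 4, capacity freed
```

The design choice worth noting is that deprovisioning is symmetric with provisioning: surplus capacity is returned automatically rather than accumulating as an underutilized asset.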

Modern data centers are organized into processing nodes that manage different applications at a layer above the infrastructure. Data is stored permanently and operated on in place. These are the two technologies to check for when choosing a data center. These architectural features provide economies of scale that greatly reduce IT spend while delivering better-quality IT.

Once scale is in place, economies of scale kick in. When negotiating for cloud capability, managers need to verify that sufficient multiple pathways are available to reach any node in a non-blocking manner. A non-blocking architecture is more efficient than other IT infrastructure and better supports innovation for apps and smart digitization. Not all cloud architectures offer this business benefit.


This is the 698th report in a series of primary market research reports that provide senior executive analysis of communications, telecommunications, the Internet, computers, software, telephone equipment, health equipment, and energy. Automated processes and significant growth potential are priorities in topic selection. The project leaders take direct responsibility for writing and preparing each report, and they have significant experience preparing industry studies. They are supported by a team in which each person has specific research tasks and proprietary automated-process database analytics. Forecasts are based on primary research and proprietary databases.

The primary research is conducted by talking to customers, distributors, and companies. Survey data alone is not enough to make an accurate assessment of market size, so WinterGreen Research also examines the value of shipments and average prices to arrive at market assessments. Our track record for accuracy is unsurpassed in the industry. We are known for developing accurate market shares and projections; this is our specialty.

The analyst process concentrates on getting good market numbers. This involves looking at the markets from several different perspectives, including vendor shipments. The interview process is an essential aspect as well. The study contains granular analysis of shipments by vendor, and addenda are prepared after the study is published when appropriate.

Forecasts reflect analysis of market trends in the segment and related segments. Unit and dollar shipments are analyzed through consideration of the dollar volume of each market participant in the segment. Installed-base analysis and unit analysis are based on interviews and an information search. Market share analysis includes conversations with key customers of products, industry segment leaders, marketing directors, distributors, leading market participants, opinion leaders, and companies seeking to develop measurable market share.

SEA CHANGE SERIES: SCALE IN THE MEGA DATA CENTER
SEA CHANGE SERIES: SCALE IN THE MEGA DATA CENTER EXECUTIVE SUMMARY
Amazon, Google, Microsoft, Facebook
IT Is Better When New Sets Of Servers Can Be Spun Up With The Push Of A Button
Aim to Realign IT Cost Structure
Scale Matters
Table of Contents
1. EFFECT OF SCALE IN THE MEGA DATA CENTER
1.1 Facebook Mega Datacenter Physical Infrastructure
1.1.1 Facebook Automation of Mega Data Center Process
1.1.2 Facebook Altoona Data Center Networking Fabric
1.1.3 Facebook Altoona Cloud Mega Data Center
1.1.4 Facebook Altoona Data Center Innovative Networking Fabric Depends on Scale
1.1.5 Facebook Fabric Operates Inside the Data Center
1.1.6 Facebook Fabric
1.1.7 Exchange Of Data Between Servers Represents A Complex Automation Of Process
2. APPLICATIONS CUSTOMIZED FOR EACH USER
2.1 Customized Back-End Service Tiers And Applications
2.2 Machine-To-Machine Management of Traffic Growth
2.2.1 Facebook Data Center Fabric Network Topology
2.2.2 Building-Wide Connectivity
2.3 Highly Modular Design Allows Users To Quickly Scale Capacity In Any Dimension
2.3.1 Back-End Service Tiers And Applications
2.3.2 Scaling Up As a Basic Function Of The Mega Data Center Network
2.3.3 Facebook Fabric Next-Generation Data Center Network Design: Pod Unit of Network
3. MEGA DATA CENTER SERVER PODS
3.1 Server Pods Permit An Architecture To Implement High-Performance Connectivity
3.1.1 Facebook Sample Pod: Unit of Network
3.1.2 Non-Blocking Network Architecture
3.2 Data Center Auto Discovery
3.3 Facebook Large-Scale Network
3.3.1 Rapid Deployment Architecture
3.3.2 Facebook Expedites Provisioning And Changes
4. GOOGLE MEGA DATA CENTER SCALE
4.1 Google Douglas County Mega Data Center
4.1.1 Google Data Center Efficiency Measurements
4.1.2 Google Programmable Access To Network Stack
4.1.3 Google Software Defined Networking (SDN)-Supports Scale and Automation
4.1.4 Google Compute Engine Load Balancing
4.1.5 Google Compute Engine Load Balanced Requests Architecture
4.1.6 Google Compute Engine Load Balancing Scaling
4.2 Google Switches Provide Scale-Out: Server And Storage Expansion
4.2.1 Google Uses Switches Deployed in Fabrics
4.2.2 Google Mega Data Center Multi-pathing
4.2.3 Google Mega Data Center Multipathing: Routing Destinations
4.2.4 Google Clos Topology Network Capacity Scalability
4.2.5 Google Aggregation Switches Are Lashed Together Through a Set Of Non-Blocking Spine Switches
4.3 Google Network Called Jupiter
5. MICROSOFT MEGA DATA CENTER SCALE
5.1 Microsoft Cloud Data Center Multi-Tenant Containers
5.2 Microsoft Azure Running Docker Containers
5.2.1 Microsoft Data Center, Dublin, 550,000 Sf
5.2.2 Microsoft Builds Intelligent Cloud Platform
5.3 Microsoft Server Products And Cloud Services
5.3.1 Microsoft Crafts Homegrown Linux For Azure Switches
5.3.2 Microsoft Azure Has Scale
5.4 Microsoft Azure Stack Hardware Foundation
5.5 Microsoft Azure Stack Key Systems Partners: Cisco Systems, Lenovo, Fujitsu, and NEC
5.6 Microsoft Gradual Transformation From A Platform Cloud To A Broader Offering Leveraging Economies of Scale
5.7 Microsoft Contributing to Open Systems
5.8 Microsoft Mega Data Center Supply Chain
5.9 Microsoft Leverages Open Compute Project to Bring Benefit to Enterprise Customers
5.9.1 Microsoft Assists Open Compute to Close The Loop On The Hardware Side
5.10 Microsoft Project Olympus Modular And Flexible
5.11 Microsoft Azure
5.11.1 Microsoft Azure Active Directory Has Synchronization
5.12 Microsoft Azure Has Scale
6. MEGA DATA CENTER DIFFERENT FROM THE HYPERSCALE CLOUD
6.1 Hyperscale Cloud Computing Addresses The Issues Of Economies Of Scale
6.1.1 Mega Data Center Scaling
6.1.2 Mega Data Center Automatic Rules and Push-Button Actions
6.1.3 Keep It Simple Principle
7. AMAZON CAPEX FOR CLOUD 2.0 MEGA DATA CENTERS
7.1 Amazon Capex Dedicated To Support Datacenter
7.2 AWS Server Scale
7.3 Amazon North America
7.4 Amazon North America List of Locations
7.5 Innovation a Core Effort for Amazon
7.6 Amazon Offers the Richest Services Set
7.6.1 AWS Server Scale
7.7 On AWS, Customers Architect Their Applications
7.8 AWS Scale to Address Network Bottleneck
7.9 Networking A Concern for AWS Solved by Scale
7.10 AWS Regions and Network Scale
7.11 AWS Datacenter Bandwidth
7.12 Amazon (AWS) Regional Data Center
7.12.1 Map of Amazon Web Service Global Infrastructure
7.12.2 Rows of Servers Inside an Amazon (AWS) Data Center
7.12.3 Amazon Capex for Mega Data Centers
7.13 Amazon Addresses Enterprise Cloud Market, Partnering With VMware
7.13.1 Making Individual Circuits And Devices Unimportant Is A Primary Aim Of Fabric Architecture
8. CLOS NETWORK ARCHITECTURE TOPOLOGY
8.1 Google Clos Network Architecture Topology Allows Building a Non-Blocking Network Using Small Switches
You Have To Hit A Certain Scale Before Clos Networks Work
8.1.1 Clos Network
8.2 Digital Data Expanding Exponentially, Global IP Traffic Passes Zettabyte (1000 Exabytes) Threshold
9. SUMMARY: ECONOMIES OF SCALE
WINTERGREEN RESEARCH
WinterGreen Research Methodology
List of Figures
SEA CHANGE SERIES: SCALE IN THE MEGA DATA CENTER
Mega Data Center: Scale Supports Non-Blocking Network Inside Building and More Efficient Processing
SEA CHANGE SERIES: SCALE IN THE MEGA DATA CENTER EXECUTIVE SUMMARY
Figure 1. Slow Growth Companies Do Not Have Data Center Scale
Figure 2. Mega Data Center Fabric Implementation
1. EFFECT OF SCALE IN THE MEGA DATA CENTER
Figure 3. Facebook Schematic Fabric-Optimized Datacenter Physical Topology
Figure 4. Facebook Automation of Mega Data Center Process
Figure 5. Facebook Altoona Positioning Of Global Infrastructure
Figure 6. Facebook Equal Performance Paths Between Servers
Figure 7. Facebook Data Center Fabric Depends on Scale
Figure 8. Facebook Fabric Operates Inside the Data Center, Fabric Is The Whole Data Center
Figure 9. Fabric Switches and Top of Rack Switches, Facebook Took a Disaggregated Approach
Figure 10. Exchange Of Data Between Servers Represents A Complex Automation Of Process
2. APPLICATIONS CUSTOMIZED FOR EACH USER
Figure 11. Samsung Galaxy J3
Figure 12. Facebook Back-End Service Tiers And Applications Account for Machine-To-Machine Traffic Growth
Figure 1. Facebook Data Center Fabric Network Topology
Figure 13. Implementing Building-Wide Connectivity
Figure 14. Modular Design Allows Users To Quickly Scale Capacity In Any Dimension
Figure 15. Facebook Back-End Service Tiers And Applications Functions
Figure 16. Using Fabric to Scale Capacity
Figure 17. Facebook Fabric: Pod Unit of Network
3. MEGA DATA CENTER SERVER PODS
Figure 18. Server Pods Permit An Architecture Able To Implement Uniform High-Performance
Figure 19. Non-Blocking Network Architecture
Figure 20. Facebook Automation of Cloud 2.0 Mega Data Center Process
Figure 21. Facebook Creating a Modular Cloud 2.0 mega data center Solution
Figure 22. Facebook Cloud 2.0 mega data center Fabric High-Level Settings Components
Figure 23. Facebook Mega Data Center Fabric Unattended Mode
Figure 24. Facebook Data Center Auto Discovery Functions
Figure 25. Facebook Automated Process Rapid Deployment Architecture
4. GOOGLE MEGA DATA CENTER SCALE
Figure 26. Google Douglas County Cloud 2.0 Mega Data Center
Figure 27. Google Data Center Efficiency Measurements
Figure 28. Google Andromeda Cloud High-Level Architecture
Figure 29. Google Andromeda Software Defined Networking (SDN)-Based Substrate Functions
Figure 30. Google Compute Engine Load Balancing Functions
Figure 31. Google Compute Engine Load Balanced Requests Architecture
Figure 32. Google Compute Engine Load Balancing Scaling
Figure 33. Google Traffic Generated by Data Center Servers
Figure 34. Google Mega Data Center Multipathing: Implementing Lots And Lots Of Paths Between Each Source And Destination
Figure 35. Google Mega Data Center Multipathing: Routing Destinations
Figure 36. Google Builds Own Network Switches And Software
Figure 37. Google Clos Topology Network Capacity Scalability
Figure 38. Schematic Fabric-Optimized Facebook Datacenter Physical Topology
Figure 39. Google Jupiter Network Delivers 1.3 Pb/Sec Of Aggregate Bisection Bandwidth Across Datacenter
5. MICROSOFT MEGA DATA CENTER SCALE
Figure 40. Microsoft Azure Cloud Software Stack Hyper-V hypervisor
Figure 41. Microsoft Azure Running Docker Containers
Figure 42. Microsoft Data Center, Dublin, 550,000 Sf
Figure 43. Microsoft Azure Stack Block Diagram
Figure 44. Microsoft Azure Stack Architecture
Figure 45. Microsoft Data Centers
Figure 46. Microsoft Open Hardware Design: Project Olympus
Figure 47. Microsoft Open Compute Closes That Loop On The Hardware Side
Figure 48. Microsoft Olympus Product
Figure 49. Microsoft Azure Has Scale
6. MEGA DATA CENTER DIFFERENT FROM THE HYPERSCALE CLOUD
Figure 50. Mega Data Center Cloud vs. Hyperscale Cloud
7. AMAZON CAPEX FOR CLOUD 2.0 MEGA DATA CENTERS
Figure 51. Amazon Web Services
Figure 52. Amazon North America Map
Figure 53. Amazon North America List of Locations
Figure 54. Woods Hole Bottleneck: Google Addresses Network Bottleneck with Scale
Figure 55. Example of AWS Region
Figure 56. Example of AWS Availability Zone
Figure 57. Example of AWS Data Center
Figure 58. AWS Network Latency and Variability
Figure 59. Amazon (AWS) Regional Data Center
Figure 60. A Map of Amazon Web Service Global Infrastructure
Figure 61. Rows of Servers Inside an Amazon (AWS) Data Center
8. CLOS NETWORK ARCHITECTURE TOPOLOGY
Figure 62. Clos Network
Figure 63. Data Center Technology Shifting
Figure 64. Data Center Technology Shift
9. SUMMARY: ECONOMIES OF SCALE
WINTERGREEN RESEARCH
