Explore
Pick one of these common workloads to follow its journey through our infrastructure.
Follow the journey a query takes through our infrastructure to return its answer.
We keep your data protected even while it’s in transit. Find out how.
See how Kubernetes Engine automatically scales your app globally.
BigQuery gives you massive computational resources to search vast datasets in seconds. Let’s follow the complex journey each query takes through our infrastructure.
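To make the starting point of that journey concrete, here is a minimal sketch using the google-cloud-bigquery Python client. It assumes Application Default Credentials are already set up, and the table it queries is one of BigQuery's public sample datasets, used purely as an example.

```python
# Minimal sketch: submit a query with the google-cloud-bigquery client library.
# Assumes Application Default Credentials (e.g. `gcloud auth application-default login`).
from google.cloud import bigquery

client = bigquery.Client()

# The table below is one of BigQuery's public sample datasets, used here as an example.
query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 10
"""

# client.query() hands the SQL to the BigQuery service; .result() waits for the answer.
for row in client.query(query).result():
    print(row["name"], row["total"])
```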
Data encryption is just one of the many layers of security provided by Google Cloud. Learn more about the steps we take to protect your data in transit, including encryption by default.
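One quick way to see encryption in transit for yourself is to open a TLS connection to a Google API endpoint and inspect what was negotiated. The sketch below does exactly that; the hostname is an arbitrary example endpoint, and this is an illustration rather than part of Google's documentation.

```python
# Illustration only: open a TLS connection to a Google API endpoint and show that
# the transport is encrypted. The hostname is an arbitrary example endpoint.
import socket
import ssl

host = "bigquery.googleapis.com"
context = ssl.create_default_context()  # verifies the server certificate chain

with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print("negotiated protocol:", tls.version())   # e.g. TLSv1.3
        print("cipher suite:", tls.cipher()[0])
```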
Kubernetes changed software development by standardizing application packaging models, and Google Kubernetes Engine took it a step further by automating it as a fully-managed service. Ever wonder what happens after you deploy your app on Kubernetes Engine? Let’s follow the journey it takes from your laptop to the cloud.
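As a hypothetical example of the kind of deployment that kicks off that journey, here is a sketch using the official Kubernetes Python client against a GKE cluster. It assumes your kubeconfig already points at the cluster (for example after `gcloud container clusters get-credentials ...`), and the app name and container image are placeholders.

```python
# Hypothetical sketch: create a Deployment on a GKE cluster with the official
# Kubernetes Python client. The app name and image below are placeholders.
from kubernetes import client, config

config.load_kube_config()  # uses the cluster your kubeconfig points at

container = client.V1Container(
    name="hello-app",
    image="gcr.io/my-project/hello-app:v1",
    ports=[client.V1ContainerPort(container_port=8080)],
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="hello-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "hello-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-app"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

# Submit the Deployment; Kubernetes Engine then schedules the pods onto nodes.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```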
Google’s network operates like one global computer, ensuring a continuous flow of data to and from all corners of the planet.
While we don’t operate our own wind farms, we do buy enough wind and solar electricity annually to offset every unit of electricity our operations consume globally, including both our data centers and offices.
High-voltage power comes in and is converted to a medium voltage suitable for the power equipment attached to the data center. Our data centers use 50% less energy than the typical data center. We keep the temperature at 80°F, use outside air for cooling, and build custom servers. We also share detailed performance data so other businesses can learn from our best practices.
Data flows to and from our data centers over a Google-operated private fiber optic backbone network. Each data center campus has multiple redundant fiber connections so no single failure or fiber cut will cause you to lose your connection to the data center.
The power management distribution center connects to both the substation and generators, so it can distribute power from either. It also converts the medium voltage received from the substation to a low voltage suitable for distribution inside the data center building.
Each of our global regions is made up of multiple zones. Zones are isolated from one another within the region, so a problem in one zone will not affect another zone. For fault-tolerant applications with high availability, it’s a good idea to deploy your applications across multiple zones in a region to help protect against unexpected failures.
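The page doesn't show what "deploy across multiple zones" looks like in practice, so here is a purely conceptual Python sketch that round-robins replicas over a region's zones. The zone names follow the real region/zone naming scheme but are only examples; a real setup would use something like a regional managed instance group or a regional GKE cluster.

```python
# Conceptual sketch only: spread replicas across a region's zones so that the loss
# of any single zone leaves most replicas running. Zone names are examples.
from itertools import cycle

zones = ["us-central1-a", "us-central1-b", "us-central1-c"]
replicas = [f"my-app-{i}" for i in range(6)]  # hypothetical replica names

placement: dict[str, list[str]] = {}
for replica, zone in zip(replicas, cycle(zones)):
    placement.setdefault(zone, []).append(replica)

print(placement)

# Simulate losing one zone: the replicas in the remaining zones keep serving.
surviving = {z: r for z, r in placement.items() if z != "us-central1-a"}
print("replicas still available:", sum(len(r) for r in surviving.values()))
```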
We also design custom chips, including Titan and Cloud TPUs.
Titan is a secure, low-power microcontroller designed with Google hardware security requirements and scenarios in mind. These chips allow us to securely identify and authenticate legitimate Google devices at the hardware level.
Cloud TPUs were designed to accelerate machine learning workloads with TensorFlow. Each Cloud TPU provides up to 180 teraflops of performance, providing the computational power to train and run cutting-edge machine learning models.
Google has a long history of building our own networking gear, and perhaps unsurprisingly, we build our own network load balancers as well, which have been handling most of the traffic to Google services since 2008. Maglev is our software network load balancer that enables Google Compute Engine load balancing to serve a million requests per second with no pre-warming.
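The page doesn't describe how Maglev spreads those requests across backends, but Google's published Maglev paper describes a consistent-hashing lookup table. The sketch below is a toy version of that table-building idea; the hash functions and table size are made up for illustration and are not what Maglev actually uses.

```python
# Toy version of a Maglev-style consistent-hashing lookup table (illustrative only).
import hashlib

def _h(s: str, seed: str) -> int:
    # Stable hash helper; Maglev itself uses different hash functions.
    return int(hashlib.md5((seed + s).encode()).hexdigest(), 16)

def maglev_table(backends, table_size=101):
    """Build a lookup table. table_size should be a prime larger than len(backends)."""
    n = len(backends)
    offsets = [_h(b, "offset") % table_size for b in backends]
    skips = [_h(b, "skip") % (table_size - 1) + 1 for b in backends]
    nexts = [0] * n
    table = [None] * table_size
    filled = 0
    while filled < table_size:
        for i in range(n):
            # Walk backend i's preferred permutation until an empty slot is found.
            while True:
                slot = (offsets[i] + nexts[i] * skips[i]) % table_size
                nexts[i] += 1
                if table[slot] is None:
                    table[slot] = backends[i]
                    filled += 1
                    break
            if filled == table_size:
                break
    return table

def pick_backend(table, flow: str) -> str:
    # Hash the connection's 5-tuple into a bucket; the same flow always lands
    # on the same backend, with no per-connection state on the balancer.
    return table[_h(flow, "flow") % len(table)]

table = maglev_table(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print(pick_backend(table, "198.51.100.7:51514->203.0.113.10:443/tcp"))
```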
For over a decade, we’ve been building our own network hardware and software to connect all of the servers in our data centers together. In that time, we’ve increased the capacity of a single data center network more than 100x. Our current generation—Jupiter fabrics—can deliver more than 1 Petabit/sec of total bisection bandwidth. To put this in perspective, this provides capacity for 100,000 servers to exchange information at 10Gb/s each, enough to read the entire scanned contents of the Library of Congress in less than 1/10th of a second.
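The arithmetic behind that capacity claim is easy to check with the numbers quoted above:

```python
# Quick check of the bisection-bandwidth figures quoted above.
servers = 100_000
per_server_gbps = 10                      # Gb/s per server
total_gbps = servers * per_server_gbps
print(total_gbps / 1_000_000, "Pb/s")     # 1.0 Pb/s of bisection bandwidth
```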
The Networking Room is where the zone's Jupiter cluster network connects to the rest of Google Cloud through Google’s backbone network.
The electricity that powers a data center ultimately turns into heat. Most data centers use chillers or air conditioning units to cool things down, requiring 30-70% overhead in energy usage. At Google data centers, we often use the “free cooling” provided by the climate through a water system.
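As a rough illustration of what that overhead range means for total facility power (the IT load below is a made-up number; only the 30-70% range comes from the text above):

```python
# Illustration of what 30-70% cooling overhead means for total facility power.
it_load_mw = 10.0  # hypothetical IT load
for overhead in (0.30, 0.70):
    total_mw = it_load_mw * (1 + overhead)
    print(f"{overhead:.0%} overhead -> {total_mw:.1f} MW total for {it_load_mw:.0f} MW of IT load")
```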
The cooling plant receives hot water from the data center floor and cold water from the cooling towers, transferring the waste heat from the hot water to the cold water. The cooled water returns to the data center floor to extract more heat from the equipment there, and the hot water flows to the cooling towers to be cooled. This allows us to use the ‘free cooling’ provided by the climate.
We've designed custom cooling systems for our server racks that we've named “Hot Huts” because they serve as temporary homes for the hot air that leaves our servers—sealing it away from the rest of the data center floor. Fans on top of each Hot Hut pull hot air from behind the servers through water-cooled coils. The chilled air leaving the Hot Hut returns to the ambient air in the data center, where our servers can draw the chilled air in, cooling them down and completing the cycle.
Cold water runs from the cooling plant to the data center floor, where it is used to extract heat from inside the Hot Huts. The warm water is then returned to the cooling plant, where the waste heat is removed and the water is cycled back to the data center floor.
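For a feel for the physics of that loop, here is a back-of-the-envelope sketch using the standard heat-capacity relation Q = m·c·ΔT. Every number in it is illustrative rather than an actual Google figure.

```python
# Back-of-the-envelope: water flow needed to carry away server heat, via Q = m * c * dT.
# Every number here is illustrative, not an actual Google figure.
heat_to_remove_w = 1_000_000   # 1 MW of waste heat from the data center floor
c_p_water = 4186               # J/(kg*K), specific heat of water
delta_t_k = 10                 # assumed temperature rise across the Hot Hut coils

flow_kg_per_s = heat_to_remove_w / (c_p_water * delta_t_k)
print(f"{flow_kg_per_s:.1f} kg/s of water, roughly {flow_kg_per_s * 60:.0f} L/min")
```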