Explore Google Cloud infrastructure through common workloads

Pick one of these common workloads to follow its journey through our infrastructure.

Data analytics

Analyze your data, fast

BigQuery gives you massive computational resources to search vast datasets in seconds. Let’s follow the complex journey each query takes through our infrastructure.

Data encryption

Protecting your data in transit

Data encryption is just one of the many layers of security provided by Google Cloud. Learn more about the steps we take to protect your data in transit, including encryption by default.

Container operations

Write once. Scale globally.

Kubernetes changed software development by standardizing application packaging models, and Google Kubernetes Engine took it a step further by offering Kubernetes as a fully managed service. Ever wonder what happens after you deploy your app on Kubernetes Engine? Let’s follow the journey it takes from your laptop to the cloud.

Google Cloud

A global fiber network, connecting you to the world

Google’s network operates like one global computer, ensuring continuous flow of data to and from all corners of the planet.


Windmills

While we don’t operate our own wind farms, we do buy enough wind and solar electricity annually to offset every unit of electricity our operations consume, globally—including both our data centers and offices.

Substation

High-voltage power comes in and is converted to a medium voltage suitable for the power equipment attached to the data center. Our data centers use 50% less energy than the typical data center. We keep the temperature at 80°F, use outside air for cooling, and build custom servers. We also share detailed performance data so other businesses can learn from our best practices.

Fiber Cables

Data flows to and from our data centers over a Google-operated private fiber optic backbone network. Each data center campus has multiple redundant fiber connections so no single failure or fiber cut will cause you to lose your connection to the data center.

PMDC

The power management distribution center connects to both the substation and generators, so it can distribute power from either. It also converts the medium voltage received from the substation to a low voltage suitable for distribution inside the data center building.

Zone

Each of our global regions is made up of multiple zones. Each zone is isolated from the others within the region, so a problem in one zone will not affect another. For fault-tolerant applications with high availability, it’s a good idea to deploy your applications across multiple zones in a region to help protect against unexpected failures.

Custom Chips (TPU and Titan)

We also design custom chips, including Titan and Cloud TPUs.

Titan is a secure, low-power microcontroller designed with Google hardware security requirements and scenarios in mind. These chips allow us to securely identify and authenticate legitimate Google devices at the hardware level.

Cloud TPUs were designed to accelerate machine learning workloads with TensorFlow. Each Cloud TPU provides up to 180 teraflops of performance, providing the computational power to train and run cutting-edge machine learning models.

Maglev Load Balancers

Google has a long history of building our own networking gear, and perhaps unsurprisingly, we build our own network load balancers as well, which have been handling most of the traffic to Google services since 2008. Maglev is our software network load balancer that enables Google Compute Engine load balancing to serve a million requests per second with no pre-warming.
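
Maglev spreads connections over backends using a form of consistent hashing, described in Google’s published Maglev paper. As a rough illustration of that idea only, and not Google’s implementation, here is a simplified Python sketch that builds a Maglev-style lookup table and picks a backend for a connection:

```python
import hashlib

def _h(s: str, seed: str) -> int:
    """Deterministic integer hash used only for this illustration."""
    return int.from_bytes(hashlib.md5((seed + s).encode()).digest(), "big")

def build_maglev_table(backends, table_size=65537):
    """Build a Maglev-style lookup table (simplified from the Maglev paper)."""
    offsets = {b: _h(b, "offset") % table_size for b in backends}
    skips = {b: _h(b, "skip") % (table_size - 1) + 1 for b in backends}
    next_idx = {b: 0 for b in backends}
    table = [None] * table_size
    filled = 0
    while filled < table_size:
        for b in backends:
            # Each backend claims its next preferred slot that is still empty.
            while True:
                slot = (offsets[b] + next_idx[b] * skips[b]) % table_size
                next_idx[b] += 1
                if table[slot] is None:
                    table[slot] = b
                    filled += 1
                    break
            if filled == table_size:
                break
    return table

def pick_backend(table, connection_id: str):
    """Hash a connection identifier (a stand-in for the 5-tuple) onto the table."""
    return table[_h(connection_id, "conn") % len(table)]

table = build_maglev_table(["backend-a", "backend-b", "backend-c"])
print(pick_backend(table, "10.0.0.1:12345->203.0.113.7:443"))
```

Because each backend fills the table in its own preferred order, adding or removing a backend disturbs only a small fraction of the slots, so most existing connections keep mapping to the same backend.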

Jupiter Network Equipment

For over a decade, we’ve been building our own network hardware and software to connect all of the servers in our data centers together. In that time, we’ve increased the capacity of a single data center network more than 100x. Our current generation—Jupiter fabrics—can deliver more than 1 Petabit/sec of total bisection bandwidth. To put this in perspective, this provides capacity for 100,000 servers to exchange information at 10Gb/s each, enough to read the entire scanned contents of the Library of Congress in less than 1/10th of a second.
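
Those figures are easy to sanity-check with a little arithmetic:

```python
servers = 100_000
per_server_gbps = 10

total_gbps = servers * per_server_gbps       # 1,000,000 Gb/s
total_pbps = total_gbps / 1_000_000
print(f"{total_pbps:.0f} Pb/s of bisection bandwidth")   # 1 Pb/s

# At that rate, a tenth of a second is enough to move:
terabytes = total_pbps * 0.1 * 1e15 / 8 / 1e12
print(f"{terabytes:.1f} TB in 0.1 s")                    # 12.5 TB
```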

Networking Room

The Networking Room is where the zone's Jupiter cluster network connects to the rest of Google Cloud through Google’s backbone network.

Cooling Plant

The electricity that powers a data center ultimately turns into heat. Most data centers use chillers or air conditioning units to cool things down, requiring 30-70% overhead in energy usage. At Google data centers, we often use the “free cooling” provided by the climate through a water system.

The cooling plant receives hot water from the data center floor and cold water from the cooling towers, transferring the waste heat from the hot water to the cold water. The cooled water returns to the data center floor to extract more heat from the equipment there, while the hot water flows to the cooling towers to be cooled again.

Hot Hut

We've designed custom cooling systems for our server racks that we've named “Hot Huts” because they serve as temporary homes for the hot air that leaves our servers—sealing it away from the rest of the data center floor. Fans on top of each Hot Hut pull hot air from behind the servers through water-cooled coils. The chilled air leaving the Hot Hut returns to the ambient air in the data center, where our servers can draw the chilled air in, cooling them down and completing the cycle.

Water Pipes

Cold water runs from the cooling plant to the data center floor, where it is used to extract heat from inside the "hot huts". The warm water is then returned to the cooling plant where the waste heat is removed, and the water is cycled back to the data center floor.


Data analytics

It all starts with a SQL query.

Ever wonder how many steps BigQuery takes to execute your query in a matter of seconds? Let’s follow your query and see what happens.
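
If you want to send a query of your own, a minimal sketch with the google-cloud-bigquery Python client looks roughly like this. It assumes your credentials and project are already configured, and it queries a public sample dataset purely as an example:

```python
from google.cloud import bigquery

client = bigquery.Client()   # picks up application-default credentials

query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 10
"""

job = client.query(query)        # the query starts its journey here
for row in job.result():         # blocks until BigQuery returns the answer
    print(row.name, row.total)
```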

Data analytics

Your query is routed to the nearest point of presence.

Because Google has hundreds of points of presence located all over the world, your nearest PoP is likely to be quite close.

Data analytics

Your query travels to the nearest data center.

Because the query travels on our private fiber network, it follows a well-provisioned, direct path from the PoP to the data center.

Data analytics

Inside the data center, your query is routed to a cluster.

When Google first started, no one made data center networking gear that could meet our needs, so we built our own, which we call Jupiter. Our data center networks are built for modularity, constantly upgraded, and managed for availability so we can meet the needs of billions of global users. Most importantly, the same data center networks that power all of our internal infrastructure and services also power Google Cloud Platform.

Data analytics

A processor node turns your query into an execution plan.

It takes more than hardware to make your queries run fast. BigQuery requests are powered by Dremel, our massively scalable, interactive, ad-hoc query system, which turns your SQL query into an execution plan. Dremel is what we use inside Google—and it’s available to all Google Cloud customers through BigQuery. In a matter of milliseconds, BigQuery can scale to thousands of CPU cores dedicated to processing your task—no manual operations necessary.
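
Once a job finishes, you can peek at the execution plan it produced. A small sketch using the google-cloud-bigquery Python client, which exposes the plan stages on the completed job:

```python
from google.cloud import bigquery

client = bigquery.Client()
job = client.query("SELECT COUNT(*) FROM `bigquery-public-data.usa_names.usa_1910_2013`")
job.result()                      # wait for the query to finish

for stage in job.query_plan:      # one entry per stage of the execution plan
    print(stage.name, stage.status, len(stage.steps), "steps")
```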

Data analytics

Your data is loaded, processed, and shuffled.

To create the execution plan, Dremel decomposes your query into a series of steps. For each of those steps, Dremel reads the data from storage, performs any necessary computations, and then writes it to the in-memory shuffler. Dremel is widely used at Google—from Search to YouTube to Gmail—so BigQuery users get the benefit of continuous improvements in performance, durability, efficiency and scalability.
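
Conceptually, that read, compute, and shuffle pattern looks like the toy sketch below for a simple GROUP BY count. This is only an illustration of the pattern, not Dremel’s actual implementation:

```python
from collections import defaultdict

# Toy input: rows already split across "workers" (storage shards).
shards = [
    ["apple", "banana", "apple"],
    ["banana", "cherry", "apple"],
]

# Step 1: each worker reads its shard and computes partial counts.
partials = []
for shard in shards:
    counts = defaultdict(int)
    for word in shard:
        counts[word] += 1
    partials.append(counts)

# Step 2: the shuffle groups partial results by key across workers.
shuffled = defaultdict(list)
for counts in partials:
    for word, n in counts.items():
        shuffled[word].append(n)

# Step 3: a final stage merges the partials into the answer.
result = {word: sum(ns) for word, ns in shuffled.items()}
print(result)   # {'apple': 3, 'banana': 2, 'cherry': 1}
```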

Data analytics

That was fast! The answer to your query is assembled and sent back to you.

It can take just milliseconds for BigQuery to send you the answer. We also write the answer to your query to storage, so the next time you run the same query the system remembers it and returns the result even faster. Although there are many steps associated with BigQuery, what’s remarkable is that it can move through them in fractions of a second. In the end, one of the greatest benefits of BigQuery isn’t just that it gives you enormous computing scale for everyday SQL queries, but that it does it without you ever needing to worry about things like software, virtual machines, networks or disks.
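
That “remembering” shows up in the client as cached results. A quick sketch with the Python client checks whether a repeated query was answered from cache:

```python
from google.cloud import bigquery

client = bigquery.Client()
sql = "SELECT COUNT(*) FROM `bigquery-public-data.usa_names.usa_1910_2013`"

first = client.query(sql)
first.result()
second = client.query(sql)        # same query again, shortly afterwards
second.result()

print("first run from cache:", first.cache_hit)    # usually False
print("second run from cache:", second.cache_hit)  # usually True
```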


Data encryption

Your data is encrypted from the moment you connect to Google Cloud.

When you send a request to a Google Cloud service like Google Cloud Storage or Gmail, that request is first routed by a globally distributed system called Google Front End (GFE). The GFE encrypts your traffic, and provides load balancing and DDoS attack prevention.
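
You can see that first encrypted hop for yourself. The sketch below uses Python’s standard library to open a TLS connection to a Google endpoint and print what was negotiated; any public Google Cloud hostname works, storage.googleapis.com is just an example:

```python
import socket
import ssl

host = "storage.googleapis.com"
context = ssl.create_default_context()

with socket.create_connection((host, 443)) as raw:
    with context.wrap_socket(raw, server_hostname=host) as tls:
        print("TLS version:", tls.version())        # e.g. TLSv1.3
        print("Cipher suite:", tls.cipher()[0])     # negotiated with the GFE
        print("Certificate subject:", dict(x[0] for x in tls.getpeercert()["subject"]))
```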

Data encryption

Your data is routed to the nearest point of presence.

Because Google has more than 100 points of presence located all over the world, the nearest PoP is likely to be quite close.

Data encryption

Your data travels to a regional data center location.

When your data travels from the point of presence to a regional data center location, it’s authenticated by default. Not all data in transit inside Google is protected the same way—if your data leaves the physical boundaries of our network, we encrypt it.

Data encryption

Custom hardware & authenticated identity secure every service.

Custom servers, storage and chips—like our Titan security chip—mean we know exactly what hardware is running in our infrastructure, and can verify its origin and identity at startup. Titan signs the boot loader and verifies machine peripherals. Verified boot and signing processes at the chip level allow us to give every service in the infrastructure a unique cryptographic identity, which it can use to authenticate to other services. This means we can verify whether two services should be communicating with each other.
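
To make the idea of a per-service cryptographic identity concrete, here is a purely illustrative sketch using an Ed25519 key pair from the Python cryptography package. It shows the sign-and-verify pattern, not Google’s actual protocol, and the service names are made up:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Each service holds a private key; its peers know the matching public key.
# (Illustrative only: in Google's infrastructure, identities are rooted in
# hardware via Titan and managed by the platform, not by application code.)
frontend_key = Ed25519PrivateKey.generate()
frontend_identity = frontend_key.public_key()

request = b"GET /photos/1234 on behalf of user 42"
signature = frontend_key.sign(request)

# The storage service verifies the caller's identity before responding.
try:
    frontend_identity.verify(signature, request)
    print("caller authenticated: request accepted")
except InvalidSignature:
    print("unknown caller: request rejected")
```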

Data encryption

To store data, we divide it into sub-file chunks across multiple machines.

Google encrypts data before it is written to disk. Encryption is inherent in our storage systems—rather than added on afterward. Data for storage is split into chunks, and each chunk is encrypted with a unique data encryption key. Each chunk is then distributed across our storage systems, and is replicated in encrypted form for backup and disaster recovery.

Data encryption

We generate data encryption keys.

A data encryption key (DEK) is generated for each chunk of data using Google’s common cryptographic library. Two chunks will not have the same encryption key, even if they are part of the same Google Cloud Storage object, owned by the same customer, or stored on the same machine. This partitioning of data, with each chunk using a different key, means the "blast radius" of a potential data encryption key compromise is limited to that single data chunk.
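
A minimal sketch of the scheme, chunking plus a unique DEK per chunk, using AES-GCM from the Python cryptography package. It illustrates the idea described above, not Google’s internal code, and the chunk size is arbitrary:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

CHUNK_SIZE = 64 * 1024  # illustrative; real chunk sizes are an internal detail

def encrypt_object(data: bytes):
    """Split data into chunks and encrypt each chunk with its own DEK."""
    encrypted_chunks = []
    for start in range(0, len(data), CHUNK_SIZE):
        chunk = data[start:start + CHUNK_SIZE]
        dek = AESGCM.generate_key(bit_length=256)   # unique DEK per chunk
        nonce = os.urandom(12)
        ciphertext = AESGCM(dek).encrypt(nonce, chunk, None)
        encrypted_chunks.append({"dek": dek, "nonce": nonce, "ciphertext": ciphertext})
    return encrypted_chunks

chunks = encrypt_object(b"example object contents" * 10_000)
print(len(chunks), "chunks, each with its own data encryption key")
```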

Data encryption

The data encryption keys are further protected with key encryption keys.

These data encryption keys are then further “wrapped” with the storage system’s key encryption key (KEK). This is done mainly for performance—so that a key unwrapping operation is very efficient, and the number of keys we need to centrally manage is much smaller.
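
Continuing that sketch, each chunk’s DEK can be wrapped with a key encryption key so that only the wrapped form is stored next to the data. Again this is illustrative: the KEK here is just random bytes, whereas in practice it lives inside the key management service:

```python
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

# `chunks` comes from the previous sketch.
kek = os.urandom(32)   # stand-in for a KEK held by the key management service

for chunk in chunks:
    # Store only the wrapped DEK alongside each encrypted chunk.
    chunk["wrapped_dek"] = aes_key_wrap(kek, chunk.pop("dek"))

# Unwrapping later requires access to the KEK.
recovered_dek = aes_key_unwrap(kek, chunks[0]["wrapped_dek"])
```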

Data encryption

The key encryption keys are stored inside Google’s key management service.

Google’s internal key management service is globally distributed, and was built with resiliency and security in mind. KEKs are not exportable by design, and all encryption and decryption with these keys must be done within the key management service. This means the key management service can be a central point of enforcement.
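
Customers see the same central-point-of-enforcement pattern in Cloud KMS, the customer-facing service: you send material to the service to be encrypted or decrypted, and the key itself never leaves. A sketch with the google-cloud-kms Python client; the project, key ring, and key names are placeholders:

```python
from google.cloud import kms

client = kms.KeyManagementServiceClient()
key_name = client.crypto_key_path("my-project", "global", "my-ring", "my-kek")

response = client.encrypt(request={"name": key_name, "plaintext": b"a data encryption key"})
print(response.ciphertext)   # the key used for wrapping never leaves the service
```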

Data encryption

Retrieving your encrypted data starts with ID’ing the requested chunks.

Here’s what happens in reverse when you access your encrypted data. A user makes a request for their data to a service, like Gmail or Drive, which authenticates and authorizes the user. The service makes a request to the storage system, which verifies the service’s permissions and retrieves the requested chunks of data. For each data chunk, the storage system retrieves the wrapped data encryption key and sends it to the key management service.

Data encryption

Keys are decrypted and data is decrypted and reassembled.

The key management service authenticates the storage system using its identity and verifies that it is authorized to use the key encryption key associated with the service. It then unwraps each data encryption key and passes the unwrapped key back to the storage system, which uses it to decrypt the data chunks and put your data back together.
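
Finishing the earlier storage sketch, the reverse path unwraps each DEK and decrypts each chunk before reassembly. In Google’s infrastructure the unwrap step happens inside the key management service after it authenticates the storage system; here everything runs in one place purely for illustration:

```python
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.keywrap import aes_key_unwrap

# `chunks` and `kek` come from the earlier storage sketches.
plaintext_chunks = []
for chunk in chunks:
    dek = aes_key_unwrap(kek, chunk["wrapped_dek"])   # done inside the KMS in practice
    plaintext_chunks.append(
        AESGCM(dek).decrypt(chunk["nonce"], chunk["ciphertext"], None)
    )

original = b"".join(plaintext_chunks)   # the object, reassembled for the caller
```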

Data encryption

For your eyes only. Your data is returned to you, encrypted all the way.

After completing all these steps in fractions of a second, your data is reassembled and sent back to you. Google encrypts your data stored at rest by default, without any action required from you—and all of this almost instantaneously.


Container operations

Start with deploying your containerized app on Kubernetes Engine.

Once your app’s been written, it’s ready to be deployed with Kubernetes Engine where you can scale globally, roll out new code without downtime, and automatically stay up-to-date with new releases.
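
That deployment step is only a few lines. Here is a sketch using the official Kubernetes Python client against a GKE cluster you are already authenticated to; the image and names are placeholders:

```python
from kubernetes import client, config

config.load_kube_config()   # assumes `gcloud container clusters get-credentials` was run
apps_v1 = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "hello-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-app"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="hello-app",
                    image="gcr.io/my-project/hello-app:v1",   # placeholder image
                    ports=[client.V1ContainerPort(container_port=8080)],
                )
            ]),
        ),
    ),
)

apps_v1.create_namespaced_deployment(namespace="default", body=deployment)
```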

Container operations

As demand for your app increases, new containers are created.

As soon as you deploy your app on Kubernetes Engine, it scales automatically as demand increases, activating new containers on more nodes so that your app can handle anything.
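
One common way to tell Kubernetes Engine when to add pods is a Horizontal Pod Autoscaler. A sketch continuing the Python-client example above, with illustrative thresholds:

```python
from kubernetes import client, config

config.load_kube_config()
autoscaling_v1 = client.AutoscalingV1Api()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="hello-app"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="hello-app",
        ),
        min_replicas=3,
        max_replicas=50,
        target_cpu_utilization_percentage=70,   # add replicas when average CPU stays above 70%
    ),
)

autoscaling_v1.create_namespaced_horizontal_pod_autoscaler(namespace="default", body=hpa)
```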

Container operations

Our global load balancer goes to work.

As demand increases, Kubernetes Engine's global load balancer automatically distributes traffic amongst all the containers. If your app gets high traffic in certain parts of the world, Kubernetes Engine will detect that and divert resources to those regions.

Container operations

Your resources automatically scale up and down as needed.

Kubernetes Engine's cluster autoscaler automatically resizes clusters, scaling them up or down based on demand. This means you’ll only pay for resources that are needed at any given moment, and you’ll automatically get additional resources when demand increases.

Container operations

If a node fails, Kubernetes Engine will repair it for you.

With node auto-repair enabled, Kubernetes Engine periodically checks the health of each node in your cluster and, if a node fails, automatically repairs it. This means you won’t need to repair a machine manually at 3am, or scramble on a Sunday to apply a patch.

Container operations

Kubernetes Engine’s rolling updates keep your containers up to date.

With node auto-upgrades enabled, Kubernetes Engine automatically keeps the nodes in your cluster up to date with the latest version of Kubernetes. Version updates roll out shortly after a new Kubernetes release.

Container operations

Congrats on a successful launch! Thanks to Kubernetes Engine, you’re meeting user demand globally.

With cluster management, auto-scaling, auto-repair, and auto-upgrades, Kubernetes Engine truly lets you deploy and forget. That means at the moment of your success, Kubernetes Engine will automatically scale up without any effort on your part. You won’t be turning users away.

Google Cloud’s global network serves 17 regions, 52 zones, and over 100 points of presence. Let’s take a closer look at the infrastructure that makes it possible.