Google Axion Processors
Arm-Based CPU
At Google, engineers continually push the limits of computing to tackle large-scale problems such as global video distribution, information retrieval and, of course, generative AI. Doing so requires rethinking systems design in close cooperation with service developers, and that rethinking is behind Google’s significant investment in custom silicon. Google is now introducing the latest iteration of this effort: Google Axion Processors, the company’s first custom Arm-based CPUs designed for the data center. Axion, which delivers industry-leading performance and energy efficiency, will become available to Google Cloud customers later this year.
Axion is only the latest in Google’s line of custom silicon. Google has launched five generations of Tensor Processing Units since 2015, and in 2018 its first Video Coding Unit (VCU) delivered a 33x improvement in video transcoding efficiency. In 2021, Google invested in system-on-a-chip (SoC) designs and released the first of three generations of Tensor mobile processors, further expanding its custom computing efforts.
Even though Google’s investments in compute accelerators have transformed what its systems can do, general-purpose computing is and will remain a critical part of customers’ workloads. Analytics, information retrieval, and machine learning training and serving all demand enormous amounts of compute. The pace of CPU improvement has slowed in recent years, which affects customers and users trying to meet sustainability goals, reduce infrastructure costs, and maximise performance. Amdahl’s Law suggests that as accelerators continue to advance, general-purpose compute will dominate cost and constrain the capabilities of the infrastructure unless Google makes matching investments to keep pace.
Google Bigtable
Axion processors combine Google’s silicon expertise with Arm’s highest-performing CPU cores to deliver instances with up to 30% better performance than the fastest general-purpose Arm-based instances currently available in the cloud, as well as up to 50% better performance and up to 60% better energy efficiency than comparable current-generation x86-based instances. That is why Google has already begun running services such as Bigtable, Spanner, BigQuery, Blobstore, Pub/Sub, Google Earth Engine, and the YouTube Ads platform on current-generation Arm-based servers, and plans to deploy and scale these and other services on Axion soon.
Superior performance and efficiency, backed by Titanium
Built around the Arm Neoverse V2 CPU, Axion processors deliver substantial performance gains for a wide range of general-purpose workloads, including web and app servers, containerised microservices, open-source databases, in-memory caches, data analytics engines, media processing, and more.
Titanium, a system of purpose-built silicon microcontrollers and tiered scale-out offloads, provides the foundation for Axion. Titanium offloads handle platform operations such as networking and security, leaving Axion processors with more capacity and better performance for customer workloads. Titanium also offloads storage I/O processing to Hyperdisk, Google’s recently launched block storage service that decouples performance from instance size and can be dynamically provisioned in real time.
Titanium
A system of purpose-built silicon security microcontrollers and tiered scale-out offloads that improves the reliability, security, life-cycle management, and performance of Google’s infrastructure.
Google Cloud, powered by Titanium
Titanium underpins Hyperdisk block storage, networking, the newest compute instance types (C3, A3, and H3), and more on Google Cloud, at no additional cost.
The system includes:
- Titan security microcontrollers: purpose-built chips that give Google Cloud’s infrastructure a hardware root of trust
- Titanium adaptor: a dedicated offload card that provides hardware acceleration for virtualization services and frees up resources for workloads by moving processing off the host CPU
- Titanium offload processors (TOPs): silicon devices distributed across the data centre that provide a scalable, adaptable way to offload network and I/O operations from the host CPU
Enhanced infrastructure performance
Titanium offloads processing from the host hardware, freeing up compute and memory resources for your applications:
- Hyperdisk Extreme block storage with up to 500K IOPS per instance, the highest among leading hyperscalers (a provisioning sketch follows this list)
- Network bandwidth of 200 Gbps or more
- Full line-rate network encryption that provides security without sacrificing speed
- Consistent, bare-metal-like performance for the most performance-sensitive workloads
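Because Hyperdisk decouples performance from capacity and from instance size, the IOPS for a volume are provisioned explicitly rather than inherited from the disk’s size. The sketch below illustrates that idea with the Google Cloud Compute Go client (cloud.google.com/go/compute/apiv1); the project, zone, disk name, size, and IOPS values are placeholder assumptions for illustration, not values from the announcement.

```go
// Minimal sketch: provisioning a Hyperdisk Extreme volume whose IOPS are
// set independently of its capacity, using the Google Cloud Compute Go
// client. All identifiers and numbers below are illustrative placeholders.
package main

import (
	"context"
	"fmt"
	"log"

	compute "cloud.google.com/go/compute/apiv1"
	computepb "cloud.google.com/go/compute/apiv1/computepb"
	"google.golang.org/protobuf/proto"
)

func main() {
	ctx := context.Background()

	disksClient, err := compute.NewDisksRESTClient(ctx)
	if err != nil {
		log.Fatalf("failed to create disks client: %v", err)
	}
	defer disksClient.Close()

	project := "my-project"  // placeholder
	zone := "us-central1-a"  // placeholder

	req := &computepb.InsertDiskRequest{
		Project: project,
		Zone:    zone,
		DiskResource: &computepb.Disk{
			Name:   proto.String("demo-hyperdisk"),
			SizeGb: proto.Int64(500),
			// Disk type URL for Hyperdisk Extreme in the chosen zone.
			Type: proto.String(fmt.Sprintf("zones/%s/diskTypes/hyperdisk-extreme", zone)),
			// IOPS are configured separately from capacity, which is what
			// "decoupling performance from instance size" looks like in
			// practice. 100,000 is an arbitrary example value.
			ProvisionedIops: proto.Int64(100000),
		},
	}

	op, err := disksClient.Insert(ctx, req)
	if err != nil {
		log.Fatalf("disk insert request failed: %v", err)
	}
	if err := op.Wait(ctx); err != nil {
		log.Fatalf("waiting for disk creation failed: %v", err)
	}
	fmt.Println("Hyperdisk Extreme volume created")
}
```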
Seamless infrastructure life-cycle management
Titanium’s modular hardware and software make infrastructure changes easier while preserving offload capabilities, workload continuity, and security.
- Seamless upgrades for most workloads, plus advanced maintenance controls for the most critical ones
- Remote infrastructure upgrades that can be initiated from anywhere
- Dedicated networking and storage domains on the Titanium adaptor, allowing individual services to be maintained and upgraded independently of the host workload
“Building on Google’s high-performance Arm Neoverse V2 platform, Google’s announcement of the new Axion CPU represents a significant milestone in the delivery of custom silicon optimised for Google’s infrastructure. Decades of ecosystem investment, Google’s continuous innovation, and its contributions to open-source software ensure the best experience for customers running their workloads on Arm.”
Customers want not only better performance but also to operate more efficiently and meet their sustainability goals. Google Cloud data centres are 1.5 times more efficient than the industry average and deliver 3 times more computing power with the same amount of electrical power than they did five years ago. Google’s ambitious goals include running its campuses, offices, and data centres entirely on carbon-free energy around the clock, and providing resources to help customers report their carbon emissions. With Axion processors, customers can optimise for even greater energy efficiency.
Axion: out-of-the-box compatibility and interoperability
Google also has a long history of supporting the Arm ecosystem. It has worked closely with Arm and industry partners to optimize Android, Kubernetes, TensorFlow, and the Go language for the Arm architecture, all of them technologies that Google built and open-sourced.
Armv9 architecture
Axion is built on the standard Armv9 architecture and instruction set. Google has contributed to the SystemReady Virtual Environment (VE) standard, which is designed to ensure that Arm-based servers and virtual machines (VMs) can run common operating systems and software packages. This makes it easier for customers to deploy Arm workloads on Google Cloud with minimal to no code rewrites. Thanks to Google’s partnership with Arm, customers also gain access to an ecosystem of tens of thousands of cloud users already deploying workloads on Arm, along with Arm-native software from hundreds of ISVs and open-source projects.
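As a rough illustration of what “minimal to no code rewrites” means in practice, a portable Go program needs only a different build target to run on an Arm-based VM. The sketch below is a generic example, not Axion-specific code; it simply reports the architecture it was compiled for.

```go
// Minimal sketch: the same Go source runs unchanged on x86 and Arm hosts;
// only the build target differs (e.g. GOOS=linux GOARCH=arm64 go build).
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// On an Arm-based VM this prints "linux/arm64"; on an x86 VM, "linux/amd64".
	fmt.Printf("running on %s/%s with %d CPUs\n",
		runtime.GOOS, runtime.GOARCH, runtime.NumCPU())
}
```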
Axion will be available to customers across a variety of Google Cloud services, including Google Compute Engine, Google Kubernetes Engine, Dataproc, Dataflow, and Cloud Batch. Arm-compatible apps and solutions are already offered in the Google Cloud Marketplace, and the Migrate to Virtual Machines service has just added preview support for migrating Arm-based instances.
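One way a customer might check which instance shapes a zone offers, for example once Arm-based options appear there, is to list machine types through the Compute Engine Go client. The sketch below uses placeholder project and zone values; the announcement does not name the Axion machine series, so no series filter is assumed.

```go
// Minimal sketch: enumerating the machine types available in a zone with
// the Google Cloud Compute Go client. Project and zone are placeholders.
package main

import (
	"context"
	"fmt"
	"log"

	compute "cloud.google.com/go/compute/apiv1"
	computepb "cloud.google.com/go/compute/apiv1/computepb"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()

	client, err := compute.NewMachineTypesRESTClient(ctx)
	if err != nil {
		log.Fatalf("failed to create machine types client: %v", err)
	}
	defer client.Close()

	req := &computepb.ListMachineTypesRequest{
		Project: "my-project",    // placeholder
		Zone:    "us-central1-a", // placeholder
	}

	// Iterate over every machine type offered in the zone and print its shape.
	it := client.List(ctx, req)
	for {
		mt, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			log.Fatalf("listing machine types failed: %v", err)
		}
		fmt.Printf("%s: %d vCPUs, %d MB memory\n",
			mt.GetName(), mt.GetGuestCpus(), mt.GetMemoryMb())
	}
}
```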