Sunday, December 22, 2024

Boosting the Machine: How AI Chips Are Revolutionizing Tech


Nikkei Asia reported that SoftBank Group’s Arm Holdings planned to offer AI chips in 2025, competing with Apple and Nvidia.

The article suggested UK-based Arm will establish an AI chip business and create a prototype by spring 2025. Nikkei Asia reported that contract manufacturers will begin mass production in October 2025.


The article said Arm and SoftBank will cover the initial development costs, which could run to hundreds of billions of yen.

The publication reported that SoftBank is in talks with Taiwan Semiconductor Manufacturing Co. (TSMC) and others to secure production capacity for the AI chips once a mass-production framework is established.

Arm and SoftBank declined to comment, while TSMC did not immediately respond.

AI Chips

AI will shape national and international security in the years ahead, and the U.S. government is studying ways to limit the spread of AI information and technology. The computer hardware behind modern AI systems is the natural focus of such controls, because controls on general-purpose AI software, datasets, and algorithms would be ineffective. Computation at a scale inconceivable just a few years ago is central to modern AI.


Training a premier AI algorithm can take a month and cost $100 million. Such systems require chips with enormous computing capability: those packing the most transistors and optimized for specialized workloads. Scaling AI cost-effectively therefore demands leading-edge, specialized “AI chips”; doing the same work on older or general-purpose chips can cost tens to thousands of times more. Export controls are feasible because the complex supply chains needed to make cutting-edge AI chips are concentrated in the United States and a few allied democracies.
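As a rough illustration of how training bills of that scale accumulate, the cost is approximately chip count × hourly cost × wall-clock time. Every number in the sketch below is an assumption for illustration, not a figure from the report:

```python
# Illustrative cost arithmetic for a month-long training run.
# All figures below are assumptions for the sketch, not from the report.
chips = 10_000                # accelerators running in parallel
usd_per_chip_hour = 4.0       # assumed effective hourly cost per chip
hours = 30 * 24               # roughly one month of wall-clock time

total_cost = chips * usd_per_chip_hour * hours
print(f"~${total_cost:,.0f}")  # ~$28,800,000
```

Larger fleets, pricier chips, or repeated experimental runs push totals toward the $100 million figure.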

The report behind this story explains what AI chips are, how widely they have spread, and why they matter. It lays out why leading-edge, AI-specific processors are more cost-effective than older generations, surveys the semiconductor-industry and chip-design trends shaping their advancement, and summarizes the technical and economic factors that determine the cost-effectiveness of AI applications.

The study defines AI narrowly as cutting-edge, computationally intensive systems such as deep neural networks (DNNs), which are behind recent AI successes like DeepMind’s AlphaGo, the program that defeated the world Go champion. “AI chips,” as used here, are computer chips that execute AI-specific computations quickly and efficiently but handle general-purpose calculations poorly.

This piece discusses AI chips and why they are necessary for developing and deploying AI at scale; the AI chip supply chain and specific export control targets are not its focus. Future CSET reports will examine the semiconductor supply chain, national competitiveness, the prospects for China’s semiconductor industry to localize its supply chain, and policies the United States and its allies can pursue to maintain their advantages in AI chip production, along with recommendations for using those advantages to benefit AI technology development and adoption.

Industry Trends Favor AI Chips Over General-Purpose Chips

Moore’s Law observes that, thanks to steadily shrinking transistors, the number of transistors on a computer chip doubled roughly every two years from 1960 to 2010. Compounded over five decades, this made computer chips millions of times faster and more efficient.
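A quick check of that compounding, as a sketch using the classic two-year doubling period:

```python
# Back-of-envelope check of Moore's Law compounding, 1960-2010.
# The 2-year doubling period is the classic figure cited above.
years = 2010 - 1960
doublings = years / 2                # 25 doublings
growth_factor = 2 ** doublings       # ~33.5 million

print(f"{doublings:.0f} doublings -> ~{growth_factor:,.0f}x the transistors")
# 25 doublings -> ~33,554,432x the transistors
```

Twenty-five doublings over fifty years yields a factor of about 33.5 million, consistent with the “millions of times” figure.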

Modern chips use transistors only a few atoms wide. At that scale, shrinking transistors further poses engineering challenges that are increasingly hard or impossible to solve, driving up the semiconductor industry’s capital and talent costs. Moore’s Law is therefore slowing: doubling transistor density now takes longer than two years. Its rising costs are justified mainly by the chip improvements it still enables, such as greater transistor efficiency, higher transistor speed, and room for more specialized circuits.

Demand for specialized applications like AI, together with the stalling of Moore’s Law-driven CPU improvements, has disrupted the economies of scale that long favored general-purpose chips such as central processing units. As a result, CPUs are losing market share to AI chips.

AI Chip Basics

  • AI chips include graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and AI-specific application-specific integrated circuits (ASICs).
  • General-purpose chips like CPUs can handle basic AI tasks, but they are becoming less useful as AI advances.
  • Like general-purpose CPUs, AI chips gain speed and efficiency from massive numbers of tiny transistors, which switch faster and use less energy, completing more computations per unit of energy.
  • Unlike CPUs, however, AI chips also carry a range of AI-optimized design features.
  • These features dramatically speed up the identical, predictable, independent calculations that AI algorithms require.
  • They include executing many calculations in parallel rather than sequentially as CPUs do; implementing AI algorithms at low precision, which cuts the number of transistors needed for the same calculation; speeding up memory access, for example by storing an entire AI algorithm on a single AI chip; and using programming languages designed to translate AI code efficiently for execution (a minimal sketch of the first two ideas follows this list).
  • Different AI chips serve different functions: most AI algorithms are developed and refined on GPUs during “training.”
  • FPGAs are generally used for “inference,” applying trained AI algorithms to real-world data; ASICs can be built for either training or inference.
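To make the parallelism and low-precision points concrete, here is a minimal Python sketch (illustrative only, not from the report). It contrasts issuing a million independent multiply-adds one at a time with issuing them as a single batched operation, and shows how a lower-precision format halves the storage per number:

```python
import time
import numpy as np

# One million identical, independent multiply-adds: the kind of
# predictable workload the list above describes.
x = np.random.rand(1_000_000).astype(np.float32)

# Sequential, CPU-style: one calculation at a time.
t0 = time.perf_counter()
out_seq = [3.0 * v + 1.0 for v in x]
t_seq = time.perf_counter() - t0

# Batched/vectorized: the same operations issued together, the
# access pattern AI chips parallelize in hardware.
t0 = time.perf_counter()
out_vec = 3.0 * x + 1.0
t_vec = time.perf_counter() - t0

print(f"sequential: {t_seq:.3f}s  batched: {t_vec:.4f}s")

# Low precision: float16 stores each number in half the bytes of
# float32, trading accuracy most AI algorithms can tolerate for
# fewer transistors (here, less memory) per calculation.
print(x.astype(np.float16).nbytes / x.nbytes)  # 0.5
```

Even on an ordinary CPU the batched version is typically tens of times faster than the loop; dedicated AI hardware pushes the same idea much further.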

Why AI Needs Cutting-Edge Chips

Thanks to their unique features, AI chips train and run AI algorithms tens to thousands of times faster and more efficiently than CPUs. That efficiency makes state-of-the-art AI chips far more cost-effective than CPUs for AI work: an AI chip a thousand times as efficient as a CPU delivers an improvement equivalent to 26 years of Moore’s Law-driven CPU advances.
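The arithmetic behind that equivalence can be reproduced directly; note that the 26-year figure implies a doubling period of about 2.6 years, modestly slower than the classic two-year pace:

```python
import math

# The arithmetic behind the 26-year comparison above.
efficiency_gain = 1000
doublings = math.log2(efficiency_gain)   # ~9.97 doublings of efficiency

# The 26-year figure implies roughly 2.6 years per doubling,
# a slightly slower pace than the classic two-year Moore's Law.
years_per_doubling = 26 / doublings
print(f"{doublings:.2f} doublings at {years_per_doubling:.2f} years each")
# 9.97 doublings at 2.61 years each
```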

Cutting-edge AI systems also need state-of-the-art AI chips specifically. Older AI chips, built with larger, slower, and more power-hungry transistors, quickly rack up energy costs that balloon to unaffordable levels; used in place of modern chips, they impose enormous cost and speed penalties. These cost and speed dynamics make modern AI processors necessary for developing and deploying cutting-edge AI algorithms.
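A rough electricity-only comparison shows how energy usage dominates. The sketch below assumes, purely for illustration, that 100 older 200 W chips are needed to match the throughput of one modern 300 W accelerator; all numbers are assumptions, not figures from the report:

```python
# Why older chips "quickly rack up energy costs": an electricity-only
# comparison. Every number here is an assumption for the sketch.
KWH_PRICE = 0.10                 # USD per kWh, assumed

def energy_cost(chips, watts_per_chip, hours):
    """Electricity cost of running a fleet of chips for some hours."""
    kwh = chips * watts_per_chip * hours / 1000
    return kwh * KWH_PRICE

# One modern accelerator vs. the many older, less efficient chips
# assumed to be needed for the same throughput.
modern = energy_cost(chips=1, watts_per_chip=300, hours=30 * 24)
older = energy_cost(chips=100, watts_per_chip=200, hours=30 * 24)

print(f"modern: ${modern:.0f}/month, older fleet: ${older:.0f}/month")
# modern: $22/month, older fleet: $1440/month
```

Multiply that gap by the thousands of chips a serious training run uses and the older hardware becomes uneconomical quickly.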

Even with cutting-edge hardware, training an AI algorithm can cost tens of millions of dollars and take weeks; AI-related computing accounts for a large share of top AI labs’ spending. On general-purpose chips like CPUs, or on older AI chips, the same training would take orders of magnitude longer and cost orders of magnitude more, making research and deployment impractical. Inference on less advanced or less specialized chips carries similar penalties, costing more and taking orders of magnitude longer.
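To put “orders of magnitude” in concrete terms, a hedged sketch: assuming a 1,000× speed advantage for AI chips (the top of the range quoted earlier), a two-week training run stretches to decades on CPUs:

```python
# Making "orders of magnitude" concrete. The 1,000x factor is an
# assumption for the sketch, taken from the top of the
# "tens to thousands" range quoted earlier.
speedup = 1_000
weeks_on_ai_chips = 2

weeks_on_cpus = weeks_on_ai_chips * speedup
print(f"{weeks_on_cpus:,} weeks ~= {weeks_on_cpus / 52:.0f} years")
# 2,000 weeks ~= 38 years
```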

Implications for National AI Competitiveness

Cost-effective, rapid development and deployment of advanced, security-relevant AI systems requires cutting-edge AI chips. The United States and its allies hold an advantage in several of the semiconductor sectors needed to produce them. U.S. firms dominate AI chip design, including the electronic design automation (EDA) software used to design chips.

Chinese AI chip designers lag far behind and depend on U.S. EDA software. U.S., Taiwanese, and South Korean companies control the vast majority of chip fabrication plants (“fabs”) capable of making cutting-edge AI chips, although one Chinese firm has recently secured a small amount of such capacity.

Chinese AI chip designers therefore outsource manufacturing to non-Chinese fabs, which offer greater capacity and quality. U.S., Dutch, and Japanese firms likewise dominate the market for the semiconductor manufacturing equipment (SME) that fabs rely on. China’s push to build an advanced domestic chip industry could erode these advantages.

Because state-of-the-art AI chips are vital to national security, the United States and its allies must protect their edge in producing them. Future CSET reports will analyze strategies for maintaining that competitive advantage and investigate points of control for ensuring that the development and deployment of AI technology promotes global stability and broadly benefits everyone.
