As the world transitions to artificial intelligence, we are all part of silicon-enabled global growth. In the “Siliconomy,” AI-powered technologies help us with knowledge-based and physical tasks in our daily lives.
Intel Innovation introduced technologies to bring AI everywhere and make it more accessible across client, edge, network, and cloud workloads. These include easier cloud access to AI solutions, better price-performance for Intel data center AI accelerators than the competition, tens of millions of AI-enabled Intel PCs arriving in 2024, and tools for securing edge AI deployments.
To accelerate innovation, AI needs a wide spectrum of open and secure technologies. Intel’s CPUs, GPUs, accelerators, oneAPI programming model, OpenVINO developer toolkit, and AI ecosystem libraries enable customers to quickly deploy AI at scale with competitive, high-performance, open-standards solutions.
General Availability of Intel Developer Cloud
Intel released the Intel Developer Cloud, which lets developers test and deploy AI and high-performance computing applications and solutions on the newest Intel CPUs, GPUs, and AI accelerators, using cutting-edge technologies to build sophisticated, high-performance AI. The details:
The Intel Developer Cloud is built around AI-optimized CPUs, GPUs, and Intel Gaudi deep learning processors, together with open software and tools. In the coming weeks, ahead of their December launch, the Intel Developer Cloud will offer 5th Gen Intel Xeon Scalable processors (code-named Emerald Rapids), as well as the Intel Data Center GPU Max Series 1100 and 1550.
On Intel Developer Cloud, developers can design, test, and optimize AI and high-performance computing applications, and efficiently deploy AI training, model optimization, and inference workloads from small to large scale. Built on an open software foundation with oneAPI, Intel Developer Cloud offers hardware choice, freedom from proprietary programming models, and code reuse and portability.
Data Center Customer and Performance Momentum
Intel Gaudi, 4th Gen Intel Xeon, 5th Gen Intel Xeon, and the future-generation Xeon processors code-named Sierra Forest and Granite Rapids all showed AI performance improvements and industry momentum. The details:
Stability AI will be the anchor customer for Intel’s large AI supercomputer, built with Intel Xeon processors and 4,000 Intel Gaudi AI hardware accelerators.
Dell Technologies and Intel are working together to deliver AI solutions for customers at every stage of AI adoption. PowerEdge systems with Xeon processors and Gaudi accelerators will handle AI workloads from large-scale training to base-level inference.
Thanks to its built-in Intel Advanced Matrix Extensions (Intel AMX) accelerators and other software optimizations, 4th Gen Xeon delivers 3x faster response times for real-time large language model (LLM) inference in Alibaba Cloud’s model-serving platform, DashScope.

Granite Rapids will feature industry-leading Performance-cores (P-cores), providing superior AI performance compared with other CPUs and a 2x to 3x gain over 4th Gen Xeon for AI workloads.
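Accelerators like Intel AMX speed inference largely by executing low-precision (e.g., INT8) matrix multiplies in hardware with integer accumulation. The toy sketch below illustrates that idea in pure Python; the helper names and scales are purely illustrative, not Intel’s API, and real hardware operates on tiles rather than Python lists:

```python
# Toy illustration of INT8 quantized matrix multiply, the kind of
# low-precision math that matrix accelerators such as Intel AMX
# execute in hardware. Names here are illustrative, not Intel's API.

def quantize(row, scale):
    """Map floats onto the signed 8-bit range [-127, 127] via a shared scale."""
    return [max(-127, min(127, round(x / scale))) for x in row]

def int8_matmul(a_q, b_q, scale_a, scale_b):
    """Multiply two INT8 matrices with integer accumulation, then rescale."""
    n, k, m = len(a_q), len(a_q[0]), len(b_q[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            acc = 0  # integer accumulation (hardware accumulates into INT32)
            for p in range(k):
                acc += a_q[i][p] * b_q[p][j]
            out[i][j] = acc * scale_a * scale_b  # dequantize back to float
    return out

a = [[0.5, -1.0], [2.0, 0.25]]
b = [[1.0, 0.0], [0.5, -0.5]]
scale_a = max(abs(x) for r in a for x in r) / 127
scale_b = max(abs(x) for r in b for x in r) / 127
a_q = [quantize(r, scale_a) for r in a]
b_q = [quantize(r, scale_b) for r in b]
approx = int8_matmul(a_q, b_q, scale_a, scale_b)
# approx closely tracks the exact FP32 product [[0.0, 0.5], [2.125, -0.125]]
```

The speedup comes from trading a small, controlled quantization error for much cheaper integer arithmetic and lower memory traffic.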
Intel Core Ultra Processors Power New AI Experiences
The Intel Core Ultra processors, code-named Meteor Lake, will usher in the AI PC era with Intel’s first integrated neural processing unit (NPU) for power-efficient AI acceleration and local inference. Intel said Core Ultra will ship in December. The details:
Core Ultra provides connectivity-independent, low-latency AI computing with better data privacy.
Core Ultra brings an NPU to Intel client silicon for the first time. The NPU enables low-power, high-quality, and innovative PC experiences. It is well suited to workloads migrating off the CPU that need higher quality or efficiency, or that would otherwise run in the cloud because client computing was too inefficient.
The first client chiplet design enabled by Foveros packaging technology, Core Ultra marks an inflection point in Intel’s client processor roadmap. The new processor combines discrete-level Intel Arc graphics, an NPU, and Intel 4 process technology for power efficiency.
Core Ultra’s disaggregated architecture balances AI-driven task performance and power:
AI-infused media, 3D applications, and render pipelines benefit from GPU parallelism and speed.
The NPU is a low-power AI engine for AI offload and sustained AI.
The CPU is fast enough for lightweight, single-inference low-latency AI workloads.
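The three-engine split above can be pictured as a routing decision per workload. This is a hypothetical sketch of the concept only; the task profiles and function names are invented for illustration and do not reflect Intel’s actual scheduling logic:

```python
# Hypothetical sketch of routing AI tasks across a disaggregated
# CPU/GPU/NPU design. Illustrative only; not Intel's scheduler.

def pick_engine(task):
    """Choose an engine from a task profile (dict of booleans)."""
    if task["parallel"]:      # media, 3D, and render pipelines want parallelism
        return "GPU"
    if task["sustained"]:     # long-running AI offload favors the low-power NPU
        return "NPU"
    return "CPU"              # lightweight, low-latency single inference

# Invented example workloads:
jobs = {
    "video_background_blur":        {"parallel": True,  "sustained": False},
    "always_on_noise_suppression":  {"parallel": False, "sustained": True},
    "one_shot_text_classification": {"parallel": False, "sustained": False},
}
placement = {name: pick_engine(profile) for name, profile in jobs.items()}
# e.g. background blur lands on the GPU, always-on audio on the NPU,
# and a quick one-off inference stays on the CPU
```

In practice the decision also weighs battery state, thermal headroom, and engine availability, but the power/latency/parallelism trade-off is the core of it.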
Intel and Acer collaborated to bring AI to Acer’s upcoming Core Ultra computers, demonstrating how the new “Acer Parallax” software feature uses the NPU to give on-screen imagery a 3D appearance.
Empowering Edge AI
Edge computing holds huge potential, driven by the need to automate processes and analyze data with AI. Developers on client and edge platforms use Intel’s OpenVINO runtime for AI inference and deployment, and Intel makes edge AI more accessible with the OpenVINO developer toolkit. Over the last year, OpenVINO toolkit developer downloads have increased 90%. The details:
Powered by oneAPI, OpenVINO 2023.1 makes generative AI easier to use in real-world settings by letting developers write once and deploy across many devices and AI applications.
Downloadable at OpenVINO.ai, the latest update moves Intel closer to its goal of running any model on any device, anywhere.
OpenVINO 2023.1 supports the upcoming Core Ultra processors and optimizes PyTorch, TensorFlow, and ONNX models. It also adds more model-compression methods, expanded GPU support, improved memory use for dynamic shapes, and greater portability and performance across cloud, client, and edge.
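Model compression of the kind mentioned above often comes down to storing weights in fewer bits. A minimal pure-Python sketch of 8-bit weight quantization, the simplest member of that family (illustrative only; not OpenVINO’s API, and the weight values are made up):

```python
# Toy sketch of weight compression via 8-bit quantization, the family
# of techniques model-compression tooling applies at scale.
# Illustrative only; not OpenVINO's API.
import struct

weights = [0.013, -0.82, 0.44, 1.07, -0.25, 0.0]  # invented FP32 weights

# FP32 storage: 4 bytes per weight.
fp32_bytes = len(struct.pack(f"{len(weights)}f", *weights))

# INT8 storage: one signed byte per weight plus one shared FP32 scale.
scale = max(abs(w) for w in weights) / 127
quantized = [round(w / scale) for w in weights]
int8_bytes = len(struct.pack(f"{len(quantized)}b", *quantized)) + 4

# Reconstruction error stays bounded by about half the scale step.
dequantized = [q * scale for q in quantized]
max_err = max(abs(w - d) for w, d in zip(weights, dequantized))
```

As the weight count grows, the per-tensor scale overhead vanishes and the savings approach 4x versus FP32, which is why low-bit formats matter so much for fitting models on client and edge hardware.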
The Innovation Day 1 keynote featured the Fit:match AI solution, built on Intel technology, which aims to improve retail fitting rooms.
Fit:match’s 3D Concierge uses Intel RealSense depth cameras with lidar sensors, Intel Core processors, and OpenVINO. The solution scans shoppers and matches them to hundreds of products to ensure the right fit, while prioritizing security and privacy.
[…] SaaS Intel Trust Authority verifies trustworthiness. Intel Trust Authority supports remote attestation with […]