Intel today announced its 3rd Generation Intel Xeon Scalable processors, the industry’s first general-purpose server processors with built-in bfloat16 support. These CPUs make Artificial Intelligence (AI) inference and training more widely deployable on general-purpose hardware for applications including image classification, recommendation engines, speech recognition, and language modeling.
“The ability to rapidly deploy AI and data analytics is essential for today’s businesses. We remain committed to enhancing the built-in AI acceleration and software optimizations in the processors that power the data center and the intelligent edge, as well as delivering an unmatched silicon foundation to extract insight from data,” said Lisa Spelman, Corporate Vice President and General Manager, Intel Xeon and Memory Group.
The top-of-the-range part is the Intel Xeon Platinum 8380HL, which offers 28 cores and 56 threads at a base/turbo frequency of 2.90/4.30 GHz, 38.5 MB of cache, six-channel DDR4-3200 memory supporting up to 4.5 TB of RAM per socket, access to the PCI-Express 3.0 interface, and a TDP of 250 W.
Unmatched portfolio breadth, AI and analytics support ecosystem: Intel’s new data platforms, along with a thriving partner ecosystem already using Intel AI technologies, are optimized to help companies monetize their data through the deployment of intelligent AI and analytics services.
- New 3rd Generation Intel Xeon Scalable Processors: Intel is expanding its investment in built-in AI acceleration with these new 3rd Generation Intel Xeon Scalable processors through the integration of bfloat16 support into Intel DL Boost technology. bfloat16 is a compact numeric format that uses half the bits of the FP32 format yet achieves comparable model accuracy with minimal (if any) software changes. The added bfloat16 support speeds up both AI training and inference performance on the CPU. Intel-optimized distributions of the major deep learning frameworks (including TensorFlow and PyTorch) support bfloat16 and are available through the Intel AI Analytics toolkit. Intel also delivers bfloat16 optimizations in its OpenVINO toolkit and the ONNX Runtime environment to ease inference deployments. The 3rd Generation Intel Xeon Scalable processors (codenamed “Cooper Lake”) evolve Intel’s 4- and 8-socket processor offering. The processors are designed for deep learning, virtual machine (VM) density, in-memory databases, mission-critical applications, and analytics-intensive workloads. Customers refreshing aging infrastructure can expect an estimated average performance gain of 1.9x on popular workloads and up to 2.2x more VMs compared with equivalent 4-socket platforms from 5 years ago.
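To illustrate why bfloat16 requires so few software changes, the snippet below (a minimal sketch for illustration, not Intel code) emulates the format in plain Python: bfloat16 is simply the top 16 bits of an IEEE-754 float32, so it keeps float32’s full 8-bit exponent range while cutting the mantissa to 7 bits. Real hardware typically rounds to nearest-even rather than truncating as done here.

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Truncate an IEEE-754 float32 to bfloat16 by keeping the top 16 bits
    (1 sign bit, 8 exponent bits, 7 mantissa bits). Truncation is used for
    simplicity; hardware usually rounds to nearest-even."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits >> 16

def bfloat16_bits_to_float32(b: int) -> float:
    """Expand bfloat16 bits back to float32 by zero-filling the low 16 bits."""
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

# bfloat16 preserves float32's dynamic range (large values like 1e30 survive,
# unlike in FP16), but the 7-bit mantissa gives only ~2-3 decimal digits of
# precision -- enough, in practice, for many deep learning workloads.
for v in (1.0, 3.14159, 1e30):
    print(v, "->", bfloat16_bits_to_float32(float32_to_bfloat16_bits(v)))
```

Because the exponent field is identical to float32’s, converting a model from FP32 to bfloat16 rarely triggers overflow or underflow, which is why frameworks can adopt it with minimal code changes.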
- New Intel Optane Persistent Memory: As part of the 3rd Generation Intel Xeon Scalable platform, the company also announced the Intel Optane Persistent Memory 200 series, which provides customers with up to 4.5 TB of memory per socket to handle data-intensive workloads such as in-memory databases, dense virtualization, analytics, and high-performance computing.
- New Intel 3D NAND SSDs: For systems that store data on flash arrays, Intel has announced the availability of its next generation of high-capacity 3D NAND SSDs, the Intel SSD D7-P5500 and D7-P5600. The new SSDs are built with Intel’s latest triple-level cell (TLC) 3D NAND technology and an all-new low-latency PCIe controller to meet the intense I/O requirements of AI and analytics workloads, and they include advanced features to improve IT efficiency and data security.
- First AI-optimized Intel FPGA: Intel has unveiled its upcoming Intel® Stratix® 10 NX FPGAs, Intel’s first AI-optimized FPGAs, aimed at accelerating AI with high bandwidth and low latency. These FPGAs will offer customers more customizable, reconfigurable, and scalable AI acceleration for demanding computing applications such as natural language processing and fraud detection. Intel Stratix 10 NX FPGAs include high-bandwidth memory (HBM), high-performance networking capabilities, and new AI-optimized arithmetic blocks called AI Tensor Blocks, which contain dense arrays of the lower-precision multipliers commonly used in AI model arithmetic.
- oneAPI Cross-Architecture Development for Continuous AI Innovation: As Intel expands its portfolio of advanced AI products to meet the diverse needs of its customers, it is also paving the way for developers by simplifying heterogeneous programming with its portfolio of oneAPI cross-architecture tools, with the goal of accelerating performance and increasing productivity. With these tools, developers can accelerate AI workloads across Intel CPUs, GPUs, and FPGAs, future-proofing their code for current and coming generations of Intel processors and accelerators.
- Enhanced Intel Select Solutions Portfolio Addressing Major IT Requirements: Intel has enhanced its Intel Select Solutions portfolio to accelerate the implementation of the most pressing IT requirements, underscoring the value of pre-verified solutions in today’s changing business context. Today, Intel announced three new and five enhanced Intel Select Solutions focused on analytics, AI, and infrastructure. The enhanced Intel Select Solution for Genomics Analytics is being used worldwide in the search for a COVID-19 vaccine, and the new Intel Select Solution for VMware Horizon VDI on vSAN is being used to support distance learning.