
Tech giants form AI group to counter Nvidia with new interconnect standard

30 May 2024 at 16:42
[Image: abstract image of a data center with a flowchart. Credit: Getty Images]

On Thursday, several major tech companies, including Google, Intel, Microsoft, Meta, AMD, Hewlett Packard Enterprise, Cisco, and Broadcom, announced the formation of the Ultra Accelerator Link (UALink) Promoter Group to develop a new interconnect standard for AI accelerator chips in data centers. The group aims to create an alternative to Nvidia's proprietary NVLink interconnect technology, which links together multiple servers that power today's AI applications like ChatGPT.

The beating heart of AI these days lies in GPUs, which can perform massive numbers of matrix multiplications (the core operation of neural networks) in parallel. But one GPU often isn't enough for complex AI systems. NVLink can connect multiple AI accelerator chips within a server or across multiple servers. These interconnects enable faster data transfer and communication between the accelerators, allowing them to work together more efficiently on complex tasks like training large AI models.
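
To make that parallelism concrete, here is a minimal single-process sketch in plain Python/NumPy (an illustration, not Nvidia's or UALink's actual API) of a column-sharded matrix multiply: each simulated "device" computes a partial product independently, and the gather at the end stands in for the traffic that an accelerator interconnect carries.

import numpy as np

# Toy tensor parallelism: split a large matmul column-wise across "devices".
# On real hardware each shard would live on its own GPU, and the final
# gather would travel over the accelerator interconnect (NVLink today,
# perhaps UALink tomorrow); that transfer is what these standards speed up.
def sharded_matmul(x, w, num_devices=4):
    shards = np.array_split(w, num_devices, axis=1)  # one column block per device
    partials = [x @ shard for shard in shards]       # independent partial products
    return np.concatenate(partials, axis=1)          # the communication step

x = np.random.randn(8, 512)
w = np.random.randn(512, 2048)
assert np.allclose(sharded_matmul(x, w), x @ w)      # matches the unsharded result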

This linkage is a key part of any modern AI data center system, and whoever controls the link standard can effectively dictate which hardware the tech companies will use. Along those lines, the UALink group seeks to establish an open standard that allows multiple companies to contribute and develop AI hardware advancements instead of being locked into Nvidia's proprietary ecosystem. This approach is similar to other open standards, such as Compute Express Link (CXL)—created by Intel in 2019—which provides high-speed, high-capacity connections between CPUs and devices or memory in data centers.


Microsoft’s “Copilot+” AI PC requirements are embarrassing for Intel and AMD

20 May 2024 at 15:00
[Image credit: Microsoft]

Microsoft is using its new Surface launch and this week’s Build developer conference as a platform to launch its new “Copilot+” PC initiative, which comes with specific hardware requirements that systems will need to meet to be eligible. Copilot+ PCs will be able to handle some AI-accelerated workloads, like chatbots and image generation, locally instead of relying on the cloud, but new hardware will generally be required to run these workloads quickly and power-efficiently.

At a minimum, systems will need 16GB of RAM and 256GB of storage, to accommodate both the memory and on-disk storage requirements of things like large language models (LLMs); even so-called “small language models” like Microsoft’s Phi-3 still use several billion parameters. Microsoft says that all of the Snapdragon X Plus and Elite-powered PCs being announced today will come with the Copilot+ features pre-installed, and that they'll begin shipping on June 18th.
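
Some quick back-of-the-envelope math, sketched below in Python, shows why 16GB is the floor. This is an illustrative calculation, not a Microsoft specification; the roughly 3.8 billion parameter figure for Phi-3-mini comes from Microsoft's own model documentation, and the precision levels are assumptions.

# Approximate in-memory footprint of model weights alone, before the OS,
# applications, and inference activations claim their share of RAM.
def model_size_gb(params_billions, bits_per_param):
    return params_billions * bits_per_param / 8  # 1e9 params * (bits/8) bytes = GB

for bits in (16, 8, 4):
    print(f"3.8B params at {bits}-bit: {model_size_gb(3.8, bits):.1f} GB")
# Prints 7.6 GB at 16-bit, 3.8 GB at 8-bit, and 1.9 GB at 4-bit.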

But the biggest new requirement, and the blocker for virtually every Windows PC in use today, will be for an integrated neural processing unit, or NPU. Microsoft requires an NPU rated at 40 trillion operations per second (TOPS), a high-level figure that Microsoft, Qualcomm, Apple, and others use for NPU performance comparisons. Right now, that requirement can only be met by a single chip family in the Windows PC ecosystem, one that isn't even quite available yet: Qualcomm's Snapdragon X Elite and X Plus, launching in the new Surface and in a number of PCs from the likes of Dell, Lenovo, HP, Asus, Acer, and other major PC OEMs in the next couple of months. All of those chips have NPUs capable of 45 TOPS, just a shade more than Microsoft's minimum requirement.
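
For a rough sense of what the 40 TOPS floor buys, here is a hypothetical ceiling calculation. It leans on the common rule of thumb that generating one token from an N-parameter model costs about 2×N operations (one multiply-accumulate per parameter); real throughput is usually bounded by memory bandwidth, so treat these numbers as theoretical upper limits, not benchmarks.

# Theoretical token-generation ceiling for an N-parameter model on an NPU,
# assuming ~2*N ops per token and ignoring memory bottlenecks entirely.
def tokens_per_second_ceiling(params, tops):
    return (tops * 1e12) / (2 * params)

for tops in (40, 45):
    rate = tokens_per_second_ceiling(3.8e9, tops)
    print(f"{tops} TOPS -> roughly {rate:,.0f} tokens/s ceiling for a 3.8B-parameter model")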


AMD unveils their Embedded+ architecture: Ryzen Embedded with Versal together

4 April 2024 at 17:24

One area of AMD’s product portfolio that doesn’t get as much attention as the desktop and server parts is their Embedded platform. AMD’s Embedded series has been important for edge devices, including industrial, automotive, and healthcare systems, digital gaming machines, and thin clients. Today, AMD has unveiled their latest Embedded architecture, Embedded+, which combines their Ryzen Embedded processors based on the Zen+ architecture with their Versal adaptive SoCs on a single board.

↫ Gavin Bonshor at AnandTech

Machines with these chips will flood the used market a few years from now, and they’re going to be great buys for all kinds of fun projects. Because the corporate world buys these machines by the truckload, they show up on eBay at impulse prices within a few years, and sometimes you can even buy whole lots of these kinds of boxes on the cheap. They tend to be a little weird, with features and trinkets normal computers don’t come with, which is always good for some weekend fun.

Cathode Ray Dude is currently doing a YouTube series on these little things, and there’s always something weird to discover about the odd features and design choices these machines possess. If there’s interest from you, our lovely readers, I can see if I can snatch up a few weird ones from eBay and write about the kinds of fun projects you can do with them. You can usually run Linux or the embedded versions of Windows on these, and if they’re not too weird, they could probably serve as a cheap Haiku box, too.
