AMD and the future of the AI PC

The last 18 months have seen a whirlwind of industry interest around artificial intelligence (AI), including the introduction of a new class of AI-oriented systems referred to as “AI PCs.” The speed of AI’s arrival and the newness of AI PCs naturally raise questions about how much—and when—artificial intelligence is likely to matter to your organization.

While the particulars vary by industry, major software vendors in a host of fields are either developing new AI-based products or integrating AI processing into existing software suites. These updates and capabilities are already rolling out to end users, but haphazard, untracked usage across an organization isn’t the way to achieve optimal outcomes with AI.

AI deployments work best when undertaken thoughtfully, with clear goals and effective metrics for measuring whether those goals have been achieved. People often need time to experiment with and adjust to a new technology, whether that means a new data analytics platform or an internal chatbot.

What’s an AI PC?

While the exact meaning of the term varies depending on the organization, AI PCs generally contain a central processing unit (CPU), graphics processing unit (GPU), and dedicated neural processing unit (NPU). These new AI capabilities sometimes carry their own branding; AMD, for example, groups the AI processing abilities of its CPU, GPU, and NPU under the “Ryzen AI” brand.

CPUs and GPUs have existed for decades, but integrating a dedicated AI processor to handle emerging AI workloads is a recent innovation. AMD launched the first laptop processors with an on-die neural processor in 2023, and the first NPU-equipped desktop chips in 2024. While AI workloads can run on the CPU, GPU, or NPU, NPU-equipped systems can potentially execute these workloads far more efficiently, helping to reduce power consumption and freeing the CPU and GPU for other tasks.
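
To make the offloading idea concrete, the short Python sketch below shows how an application might prefer the NPU for inference and fall back to the CPU when no NPU is present. It uses ONNX Runtime and assumes a local build that exposes AMD’s Vitis AI execution provider; the provider name and the “model.onnx” file here are illustrative assumptions, not a definitive recipe from AMD.

# Illustrative sketch: prefer the NPU for inference when an NPU execution
# provider is available, otherwise fall back to the CPU. The provider name
# "VitisAIExecutionProvider" and the "model.onnx" file are assumptions for
# illustration; actual setup depends on the installed AI software stack.
import numpy as np
import onnxruntime as ort

available = ort.get_available_providers()
providers = (["VitisAIExecutionProvider", "CPUExecutionProvider"]
             if "VitisAIExecutionProvider" in available
             else ["CPUExecutionProvider"])

# Load the model; ONNX Runtime dispatches supported operators to the first
# provider in the list and falls back to later providers for the rest.
session = ort.InferenceSession("model.onnx", providers=providers)

# Build a dummy input that matches the model's declared shape (symbolic
# dimensions are replaced with 1) and run a single inference pass.
meta = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in meta.shape]
dummy = np.random.rand(*shape).astype(np.float32)
outputs = session.run(None, {meta.name: dummy})

print("Active providers:", session.get_providers())
print("Output shape:", outputs[0].shape)

The same pattern extends to the GPU: listing a GPU execution provider first sends supported operators there instead, which is what makes the CPU/GPU/NPU split practical in the first place. The same model can be pointed at whichever processor is most efficient for the job.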

Bringing hardware on-die to help reduce power and improve performance is part of how semiconductor manufacturers incorporate new features, especially when those features radically expand user access to a particular type of computing.

There are two examples of this trend that are particularly relevant to the larger conversation around AI today. In the 1980s, many consumer PCs shipped with CPUs designed to handle only integer calculations in hardware. Floating-point calculations, which involve numbers with a decimal point, were handled via software emulation or through a specialized co-processor known as a floating-point unit (FPU) that sat in its own motherboard socket. As manufacturing technology advanced, chip designers moved the FPU on-die, making it more readily available to both software developers and end users. The ability to handle floating-point math expanded the fields the PC could address, including 3D gaming and high-performance computing (HPC) workloads.

The consumer 3D graphics accelerators that emerged in the mid-to-late 1990s are another example of how integrating new capabilities and technologies can transform the PC. The first GPUs were discrete cards; motherboards with “onboard” graphics existed, but the performance of these solutions was quite low in comparison to a standalone card. Bringing graphics capabilities aboard the processor allowed semiconductor manufacturers to dramatically improve the GPU’s performance and power consumption.

Many applications, including web browsers and operating systems, now use the GPU for rendering, while the widespread proliferation of video services across the internet was partly made possible by low-power video encoders baked into modern desktop, laptop, and mobile chips. In both cases, bringing these specialized processors onboard the CPU increased consumer access to the underlying technologies, allowed for greater innovation across the PC industry, and reduced cost. Over time, the relatively staid ability to run floating-point workloads or to handle video decode in a dedicated, on-die function block has had a transformative impact on the long-term evolution of the PC. AI is likely to follow a similar trajectory.

“Transformative impact” is a big label to hang on any technology, especially one as nascent as AI, but the services and capabilities now rolling out across the industry imply the label isn’t undeserved. Historically, if you wanted to use a computer to create something complex, detailed, or nuanced, you needed to be well-versed in an application or three. The more advanced your project, the more thorough your own knowledge needed to be. This was true in 1984 and it’s still mostly true in 2024. But AI has the potential to upend this axiom by closing the knowledge gap between what a user wants to accomplish with a PC and what they already know how to achieve.

There are now a number of competing commercial services that can turn text into images, while text-to-video concepts have been demoed. Different companies are working on digital personal assistants, with implementation concepts ranging from integrated website chatbots to holistic tools that could monitor a smart home or interact with an end-user’s PC. What unites these disparate products and efforts is the idea that AI’s greater contextual awareness and the ability to translate written or spoken text into a coherent directive will lead to better computing experiences—and, by extension, more useful computers.

The exact impact AI will have on your business depends on the business. In some contexts, that might mean an AI providing document summaries, transcripts, and translation services. In others, it might mean using AI for unstructured data analysis or deploying it within a 3D modeling application to allow the end-user to create and design in plain language.

Why invest now?

Corporate PC fleets are typically refreshed on a 3-4 year timeline, which means plenty of newly minted systems today could be running AI workloads in a year or two. Companies that start evaluating how to best integrate AI into their existing systems and processes now will be better positioned to improve overall workforce productivity, outpace their competitors, and take advantage of the benefits AI offers as the technology continues to mature. This is in addition to the standard benefits of newer system deployments, including lower total cost of ownership (TCO) and better overall energy efficiency. If you are interested in comparing the latest Ryzen processor-based systems, the AMD Processor Efficiency Calculator offers power consumption estimates for a range of Ryzen and Ryzen PRO processor-based laptop computers.

One of the best ways to ensure that your PC fleet is ready to handle these workloads is to invest in PCs built with AMD Ryzen PRO processors, featuring Ryzen AI. AMD led the x86 processor market with a 10 TOPS (trillions of operations per second) NPU in 2023, and select models of the recently launched Ryzen Mobile 8040 Series and desktop Ryzen 8000G Series processors offer a 16 TOPS NPU. AMD has worked with over a hundred software vendors to provide broad ecosystem compatibility and is deeply committed to supporting AI and its emerging use cases. General software support for AI is advancing as developers integrate AI into already-established products and new, AI-based applications come to market.

AI is real. Underneath the hype and still-uncertain effects is a technology that’s already driving productivity gains and customer experience improvements. The question isn’t if AI will impact computing and business at large, but when—and which companies will be best positioned to take advantage of it. 