AMD Patent Unveils Hybrid CPU-FPGA Design Enabled by Xilinx Tech

Image: Xilinx office

Although not as versatile as a CPU for general-purpose work, an FPGA can do an amazing job of speeding up specific tasks. Whether serving as a building block for large-scale data center services or boosting AI performance, an FPGA in the hands of a skilled engineer can offload a wide range of tasks from a CPU and speed up processing. Intel has been talking a big game over the past six years about integrating Xeons with FPGAs, but that talk has yet to result in a single product reaching its lineup. A new AMD patent, however, suggests the FPGA newcomer is ready to make a move of its own.

In October, AMD announced plans to acquire Xilinx as part of a major push into the data center. On Thursday, the US Patent and Trademark Office (USPTO) published an AMD patent covering the integration of programmable execution units into a CPU. AMD makes 20 claims in the application, but the key one is that a processor can include one or more execution units that can be programmed to handle different types of custom instructions, which is exactly what an FPGA does. It may be a while before we see products based on this design, as it seems too soon for it to appear in the CPUs featured in recent EPYC leaks.

Although AMD has made waves with its chiplet designs for Zen 2 and Zen 3 processors, that approach does not appear to be in play here. The programmable unit in AMD's patent actually shares registers with the processor's floating-point and integer execution units, which would be difficult, or at least very slow, if they were not in the same package. This kind of integration should make it easy for developers to weave custom instructions into their applications; the CPU simply knows to hand them off to the programmable hardware on the processor. Those programmable units can handle atypical data types, notably the FP16 (half-precision) values used to speed up AI training and inference.
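For readers unfamiliar with the format, FP16 packs a value into 16 bits: one sign bit, five exponent bits and ten mantissa bits. The minimal C sketch below is our own illustration rather than anything from the patent; it decodes a couple of FP16 bit patterns into ordinary floats to show how the format works.

```c
#include <stdint.h>
#include <stdio.h>
#include <math.h>

/* Decode an IEEE 754 half-precision (FP16) value into a float.
 * FP16 packs 1 sign bit, 5 exponent bits and 10 mantissa bits
 * into 16 bits -- half the storage of a regular 32-bit float. */
static float fp16_to_float(uint16_t h)
{
    int sign = (h >> 15) & 0x1;
    int exp  = (h >> 10) & 0x1F;
    int mant =  h        & 0x3FF;
    float value;

    if (exp == 0)            /* zero or subnormal */
        value = ldexpf((float)mant, -24);
    else if (exp == 0x1F)    /* infinity or NaN */
        value = mant ? NAN : INFINITY;
    else                     /* normal number: implicit leading 1 */
        value = ldexpf((float)(mant | 0x400), exp - 25);

    return sign ? -value : value;
}

int main(void)
{
    /* 0x3C00 encodes 1.0, 0x4248 encodes 3.140625 (close to pi) */
    printf("%f\n", fp16_to_float(0x3C00));
    printf("%f\n", fp16_to_float(0x4248));
    return 0;
}
```

Halving the width of each value is what makes FP16 attractive for AI workloads, which are often limited by memory bandwidth rather than raw compute.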

Image: Xilinx VU19P FPGA

With multiple programmable units, each unit can be programmed with a different set of specialized instructions, allowing the processor to accelerate several instruction sets at once, and these programmable EUs can be reprogrammed on the fly. The idea is that when the processor loads a program, it also loads a bit file (bitstream) that configures the programmable execution unit to accelerate certain tasks. The processor's own decode and dispatch unit can then target the programmable unit and hand it the custom instructions that need to be processed.
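To make that flow concrete, here is a purely illustrative software model of the mechanism described above: a "bitstream" configures a programmable execution unit, and a toy decode/dispatch routine forwards a custom opcode to it. Every name and opcode here is hypothetical; the patent does not define a programming interface, and real hardware would do this in silicon rather than in C.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical software analogy of the patented flow: a program ships
 * with a bitstream that configures a programmable execution unit (EU),
 * and the decode/dispatch logic routes custom opcodes to that EU. */

typedef uint32_t (*custom_op_fn)(uint32_t a, uint32_t b);

typedef struct {
    custom_op_fn handler;   /* behaviour "programmed" by the bitstream */
} programmable_eu;

/* Stand-in for one accelerated custom instruction. */
static uint32_t fused_multiply_add(uint32_t a, uint32_t b)
{
    return a * b + a;
}

/* "Loading the bit file": configure the EU with a specific behaviour. */
static void load_bitstream(programmable_eu *eu, custom_op_fn fn)
{
    eu->handler = fn;
}

/* Toy decode/dispatch: opcode 0xC0 is the custom instruction and is
 * forwarded to the programmable EU; everything else runs "normally". */
static uint32_t dispatch(programmable_eu *eu, uint8_t opcode,
                         uint32_t a, uint32_t b)
{
    if (opcode == 0xC0 && eu->handler)
        return eu->handler(a, b);   /* hand off to the programmable EU */
    return a + b;                   /* ordinary integer path */
}

int main(void)
{
    programmable_eu eu = {0};
    load_bitstream(&eu, fused_multiply_add);      /* program the EU */

    printf("%u\n", dispatch(&eu, 0x01, 3, 4));    /* normal op: 7 */
    printf("%u\n", dispatch(&eu, 0xC0, 3, 4));    /* custom op: 15 */
    return 0;
}
```

Reconfiguring the EU for a different workload would, in this analogy, just mean loading a different bitstream, which mirrors the patent's point that the programmable units can be reprogrammed for whatever instructions a given program needs.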

AMD has been working for years on various ways to speed up AI calculations. The company first announced and released the Radeon Instinct range of AI accelerators, which were essentially big headless Radeon graphics processors with custom drivers. It doubled down in 2018 with the Radeon Instinct MI60, its first 7 nm GPU, which arrived ahead of the Radeon RX 5000 series. A shift in focus toward AI acceleration via FPGAs after the Xilinx acquisition makes sense, and we are excited to see what the company comes up with.
