Samsung Electronics has developed an artificial intelligence (AI) processor-embedded high bandwidth memory (HBM) chip that boasts low energy consumption and enhanced performance.
Using processing-in-memory (PIM) technology, the tech giant said it has integrated AI engines into its 'HBM2 Aquabolt', becoming the first in the industry to develop an HBM-PIM semiconductor. PIM embeds logic-based processing units directly within the memory. The 'HBM2 Aquabolt' is Samsung's second-generation HBM DRAM chip, used mainly for high-performance computing and AI applications, and it has been on the market since January 2018.
The company said its HBM-PIM solution more than doubles the performance of an AI system and cuts its energy consumption by 70 percent compared with the existing HBM2. The new product also supports the existing HBM interface, meaning customers can build an AI accelerator system without changing their hardware or software. An AI accelerator is computer hardware designed specifically for AI workloads.
In the standard computer architecture, also known as the von Neumann architecture, the processor and memory are separate and data is exchanged between the two. In such a configuration, latency arises, especially when large volumes of data are moved. To overcome these latency issues, Samsung said it installed an AI engine inside each memory bank, maximising parallel processing to boost performance. HBM-PIM also minimises data movement between the processor and memory, thereby improving the energy efficiency of an AI accelerator system.
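The contrast above can be illustrated with a toy model: in a von Neumann layout every operand and result crosses the processor-memory bus, while in a PIM layout each bank computes locally and only a command crosses the bus. This is a minimal sketch assuming simplified, hypothetical `VonNeumannSystem` and `PIMSystem` classes; it models bus traffic counts only and is not Samsung's actual design or API.

```python
# Toy comparison of data movement in a von Neumann layout vs a PIM layout.
# VonNeumannSystem and PIMSystem are illustrative names, not a real API.
from dataclasses import dataclass


@dataclass
class VonNeumannSystem:
    """Processor and memory are separate; every operand crosses the bus."""
    memory: list
    bus_transfers: int = 0

    def scale(self, factor):
        out = []
        for value in self.memory:        # each operand is read across the bus
            self.bus_transfers += 1
            out.append(value * factor)   # compute happens in the processor
        self.bus_transfers += len(out)   # each result is written back
        self.memory = out


@dataclass
class PIMSystem:
    """A processing engine sits in each memory bank; banks work in parallel."""
    banks: list                          # per-bank data, computed in place
    bus_transfers: int = 0

    def scale(self, factor):
        self.bus_transfers += 1          # one broadcast command, not N operands
        self.banks = [[v * factor for v in bank] for bank in self.banks]


data = list(range(16))

vn = VonNeumannSystem(memory=list(data))
vn.scale(2)                              # 16 reads + 16 writes = 32 transfers

pim = PIMSystem(banks=[data[i:i + 4] for i in range(0, 16, 4)])
pim.scale(2)                             # 1 command broadcast

print(vn.bus_transfers, pim.bus_transfers)  # 32 1
```

The results are identical in both systems; only the amount of traffic between processor and memory differs, which is the latency and energy cost that PIM targets.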
Samsung said it will install its HBM-PIM solution in its customers' AI accelerator systems for verification within the first half of the year and that it will actively cooperate with its clients to achieve the standardisation of the PIM platform and its ecosystem.