Samsung Electronics and NAVER Corporation announced a wide-reaching collaboration to develop semiconductor solutions tailored for hyperscale artificial intelligence (AI) models. Leveraging Samsung’s next-generation memory technologies like computational storage, processing-in-memory (PIM) and processing-near-memory (PNM), as well as Compute Express Link (CXL), the companies intend to pool their hardware and software resources to dramatically accelerate the handling of massive AI workloads.
“Through our collaboration with NAVER, we will develop cutting-edge semiconductor solutions to solve the memory bottleneck in large-scale AI systems,” said Jinman Han, Executive Vice President of Memory Global Sales & Marketing at Samsung Electronics. “With tailored solutions that reflect the most pressing needs of AI service providers and users, we are committed to broadening our market-leading memory lineup including computational storage, PIM and more, to fully accommodate the ever-increasing scale of data.”
Recent advances in hyperscale AI have led to an exponential growth in data volumes that need to be processed. However, the performance and efficiency limitations of current computing systems pose significant challenges in meeting these heavy computational requirements, fueling the need for new AI-optimized semiconductor solutions.
Developing such solutions requires an extensive convergence of semiconductor and AI disciplines. Samsung is combining its semiconductor design and manufacturing expertise with NAVER’s experience in the development and verification of AI algorithms and AI-driven services, to create solutions that take the performance and power efficiency of large-scale AI to a new level.
For years, Samsung has been introducing memory and storage that support high-speed data processing in AI applications, from computational storage (SmartSSD) and PIM-enabled high bandwidth memory (HBM-PIM) to next-generation memory supporting the Compute Express Link (CXL) interface. Samsung will now join with NAVER to optimize these memory technologies in advancing large-scale AI systems.
NAVER will continue to refine HyperCLOVA, a hyperscale language model with over 200 billion parameters, while improving its compression algorithms to create a lighter model with significantly greater computational efficiency.
“Combining our acquired knowledge and know-how from HyperCLOVA with Samsung’s semiconductor manufacturing prowess, we believe we can create an entirely new class of solutions that can better tackle the challenges of today’s AI technologies,” said Suk Geun Chung, Head of NAVER CLOVA CIC. “We look forward to broadening our AI capabilities and bolstering our edge in AI competitiveness through this strategic partnership.”