This development benefits data centers and artificial intelligence alike: it increases the performance of AI algorithms and allows more complex calculations to be completed in less time.
Samsung's HBM3E 12H DRAM opens a new era in high-bandwidth memory (HBM). HBM is a type of memory designed for applications that need very high bandwidth, and Samsung is making a significant advance in this area. In October, Samsung introduced HBM3E Shinebolt, an improved version of its third-generation HBM; the technology reaches speeds of 9.8Gbps per pin, which translates to 1.2 terabytes per second of bandwidth for the entire package.
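As a sanity check on the quoted figures: assuming the standard 1024-bit interface of an HBM stack (not stated in the article, but standard for HBM), 9.8Gbps per pin works out to roughly 1.2 terabytes per second:

```python
# Rough check: per-pin speed x interface width = package bandwidth.
PINS = 1024          # bits in an HBM stack interface (assumption: standard width)
GBPS_PER_PIN = 9.8   # per-pin speed quoted for HBM3E Shinebolt

bandwidth_gb_s = GBPS_PER_PIN * PINS / 8  # divide by 8: gigabits -> gigabytes
print(f"{bandwidth_gb_s:.1f} GB/s")       # ~1254.4 GB/s, i.e. about 1.2 TB/s
```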
12H refers to the number of chips stacked vertically in each module; in this case, 12 chips are stacked together. Stacking more chips is a way to fit more memory into a single module: with its 12H design, Samsung achieves 36GB of capacity, 50% more than the 8H design. Bandwidth remains constant at 1.2 terabytes per second.
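The 50% figure follows directly from the stack heights, since both designs use the same dies. Each die's capacity (3GB) is inferred here from the 36GB/12-chip figure, not stated in the article:

```python
# Per-die capacity inferred from the 12H figure: 36GB across 12 stacked dies.
DIE_GB = 36 / 12       # 3GB per die (inferred, not quoted)

cap_8h = 8 * DIE_GB    # 8-high stack
cap_12h = 12 * DIE_GB  # 12-high stack
print(f"{cap_12h:.0f}GB vs {cap_8h:.0f}GB: +{cap_12h / cap_8h - 1:.0%}")
# prints "36GB vs 24GB: +50%"
```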
TC NCF stands for Thermal Compression Non-Conductive Film and refers to the material placed between the chips.
Samsung has made this material thinner, down to a 7µm-thick layer. As a result, the 12H stack is the same height as the 8H stack, allowing the same HBM packaging to be used. Another advantage of TC NCF is that it improves thermal properties and therefore cooling. Moreover, the method used in this new HBM3E 12H DRAM also increases efficiency.
Samsung's artificial intelligence move delivers HBM3E memory
So where will this advanced memory technology be used? The answer is obvious: artificial intelligence, one of the most popular topics today. Applications such as AI require large amounts of RAM. Last year, Nvidia added Samsung to its list of high-bandwidth memory suppliers, and Nvidia is developing some remarkable designs that offer much higher performance than consumer graphics cards.
For example, Nvidia's H200 Tensor Core GPU has 141GB of HBM3E memory with a total bandwidth of 4.8 terabytes per second. That is far beyond a consumer graphics card running GDDR: the RTX 4090, for instance, has 24GB of GDDR6X at just about 1 terabyte per second.
According to reports, the H200 uses six 24GB HBM3E 8H modules for 144GB in total, of which 141GB is usable. The same capacity could be provided with only four 12H modules; alternatively, six 12H modules would yield 216GB.
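The module math above can be checked directly from the two stack capacities:

```python
# H200 memory configurations from the reported module sizes.
MODULE_8H_GB = 24   # current 8-high HBM3E stack
MODULE_12H_GB = 36  # new 12-high HBM3E stack

print(6 * MODULE_8H_GB)   # 144 -> current H200: six 8H modules (141GB usable)
print(4 * MODULE_12H_GB)  # 144 -> same total with only four 12H modules
print(6 * MODULE_12H_GB)  # 216 -> six 12H modules
```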
According to Samsung's estimates, the additional capacity of the new 12H design will speed up AI training by 34% and allow inference services to handle "11.5 times more" users.
The artificial intelligence boom will keep accelerators like the H200 in high demand, making memory supply a profitable business. So it's no surprise that companies like Micron, Samsung and SK Hynix want a piece of this pie.
Source link: https://www.teknolojioku.com/mobil/samsungun-yapay-zeka-hamlesi-hbm3e-bellek-sunuyor-65ded922b6a337196f01f665