DDR RAM May Be Replaced By Hybrid Memory Cubes Within A Few Years

Image Credits: IBM

Random Access Memory is usually thought of as the bridge between larger, cheaper magnetic or solid-state storage and the faster caches that feed processors directly with data. The trade-off between cost and speed forces server architectures to adopt a tiered approach to storage, with progressively smaller but faster caches as data approaches the very fast but comparatively expensive L1 cache.
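To make the tiered approach concrete, here is a minimal sketch of a typical server memory hierarchy. The capacities and latencies are rough order-of-magnitude illustrations, not figures for any specific CPU; the point is the pattern of each tier trading capacity for speed.

```python
# Illustrative server memory hierarchy: each tier is smaller but faster
# than the one below it. Numbers are order-of-magnitude approximations.
hierarchy = [
    # (tier,       typical capacity, approx. access latency in ns)
    ("L1 cache",   "32 KB",          1),
    ("L2 cache",   "256 KB",         4),
    ("L3 cache",   "8 MB",           30),
    ("DRAM",       "16 GB",          100),
    ("SSD",        "512 GB",         100_000),
    ("HDD",        "4 TB",           10_000_000),
]

for tier, capacity, latency_ns in hierarchy:
    print(f"{tier:<10} {capacity:>8}  ~{latency_ns:,} ns")
```

Each step down the table adds roughly an order of magnitude (or more) of latency, which is why keeping hot data in the upper tiers matters so much for server throughput.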

That hierarchy is unlikely to change any time soon, but a new memory technology has recently reached the market that may give RAM bandwidth comparable to L3 cache, with orders of magnitude more capacity. As you can imagine, that technology has the potential to have a considerable impact on data center and server performance, which is heavily dependent on data throughput.

The Hybrid Memory Cube from Micron Technology is based on the same basic technology as DDR RAM, but it uses multiple memory dies stacked on top of each other and connected by through-silicon vias (TSVs). The memory controller is embedded in a logic layer at the base of the stack, and the whole shebang sits on top of the processor. Micron recently released a 2GB HMC, which is capable of pushing data at around 120GB/s. A 4GB version due for release next year is expected to have a maximum bandwidth of 320GB/s, and future HMCs are expected to provide bandwidths in the range of 1TB/s.
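To put those bandwidth figures in perspective, a quick back-of-the-envelope comparison: the time to stream a hypothetical 64GB working set at the HMC bandwidths quoted above, against a single channel of DDR3-1600 at roughly 12.8GB/s (a common per-channel figure; real servers use multiple channels, so this is a simplified baseline, not a benchmark).

```python
# Time to stream a 64 GB working set at various memory bandwidths.
# HMC figures are from the article; the DDR3 figure is an approximate
# single-channel baseline for comparison only.
working_set_gb = 64

bandwidths_gb_s = {
    "DDR3-1600 (one channel)": 12.8,
    "HMC 2GB (current)":       120,
    "HMC 4GB (next year)":     320,
    "HMC (future)":            1000,
}

for name, bw in bandwidths_gb_s.items():
    print(f"{name:<24} {working_set_gb / bw:7.3f} s")
```

Even the current 2GB part moves the working set nearly an order of magnitude faster than the single-channel DDR3 baseline, which is the gap the rest of this article is about.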

As processors have advanced, memory speeds have lagged. Current DDR technology has trouble pushing enough data to keep modern processors fed, which is a problem in data centers, where the memory bottleneck can slow down server performance. HMCs are more than fast enough to keep processors busy into the foreseeable future.

In the short term, HMCs aren’t likely to be used in servers, but they will probably find their way into network switches and routers, which should significantly improve data throughput, as data is often held briefly in memory while routing is calculated. Faster routers mean lower latencies and all-round better performance for websites and other data-dependent services.

In servers, we’ll probably have to be content with the coming DDR4 technology for the time being, because the infrastructure is already in place. But when HMCs do hit the server rack, we can expect significant performance improvements in applications like in-memory databases, caches, web servers, and all other aspects of serving a website that involve holding data in random access memory.

Tags: data centers, RAM, System Administration
Dec 17, 2013, 3:57 pm. By: InterWorx
