
TechEnablement

Education, Planning, Analysis, Code


Understanding the Rationale behind 400 GB Flash-based DIMM Memory

May 3, 2014 by Rob Farber Leave a Comment

On January 24th, SanDisk announced shipments of ULLtraDIMM SSD storage in concert with an IBM announcement rebranding the SanDisk ULLtraDIMMs as eXFlash DIMMs. On March 21, SanDisk’s stock hit a 14-year high.


ULLtraDIMM SSD storage puts flash memory in a standard DIMM form factor that plugs into a memory socket. Under Linux, Windows, or VMware, the UEFI/BIOS recognizes the Memory Channel Storage (MCS) modules as specialized devices controlled by the MCS driver, which then manages those modules as primary storage or as a memory extension.

With ULLtraDIMM storage, the customer gains 200 GB – 400 GB per DIMM of non-volatile secondary storage that fits on a blade or inside a server. Direct access (although not currently byte-addressable) across the memory bus provides high read/write bandwidth (760 MB/s – 1 GB/s) with low latency (150 µs read, less than 5 µs write). While perfect for video capture, databases, and virtualized servers (as evidenced by the initial Diablo Technologies OS support), this performance comes at the cost of one or perhaps a pair of DIMM slots, depending on the system architecture.
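To put those figures in perspective, here is a quick back-of-envelope sketch. The constants are simply the capacity, bandwidth, and latency numbers quoted above, not measurements:

```python
# Back-of-envelope numbers from the figures quoted above (a sketch, not a benchmark).
CAPACITY_GB = 400        # per-DIMM capacity (high end of the quoted range)
WRITE_BW_MB_S = 760      # low end of the quoted 760 MB/s - 1 GB/s range
READ_LATENCY_US = 150    # quoted read latency

# Time to stream the full 400 GB through one DIMM at the low-end bandwidth
fill_seconds = (CAPACITY_GB * 1000) / WRITE_BW_MB_S
print(f"Full-capacity write at {WRITE_BW_MB_S} MB/s: {fill_seconds / 60:.1f} minutes")

# IOPS ceiling implied by the 150 us read latency for one outstanding request
iops_ceiling = 1_000_000 / READ_LATENCY_US
print(f"Latency-bound read IOPS per outstanding request: {iops_ceiling:,.0f}")
```

So a single DIMM can be filled end-to-end in under nine minutes, and the read latency alone supports several thousand serialized small reads per second before any queuing.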

How does ULLtraDIMM compare against the adage, “Real Memory for Real Performance”?

For storage-dominated workloads, DIMM-based flash storage has several advantages that make the SanDisk/IBM products an attractive option even with the loss of RAM capacity:

  1. A terabyte or three of local NAND storage per blade eliminates the need to access data across the network interface, which removes network jitter, can improve overall system performance, and allows customers to better realize the full scalability of their blade architecture.
  2. Eliminating much of the OS overhead and all of the PCIe and network jitter means that small yet very high-capacity data-capture devices (think video or A/D converters) can be designed to meet very tight Quality of Service (QoS) agreements.

In comparison, a high-performance RAID system built out of SSD devices can achieve around 12 Gb/s of throughput, but at higher latency. Thus for streaming or other latency-tolerant designs where there is space for a RAID controller and SSDs, the loss in potential RAM capacity might swing the design decision towards a more traditional storage architecture.
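A rough comparison of the raw numbers, as a sketch. Note the unit difference: the RAID figure above is quoted in gigabits per second, while the per-DIMM bandwidth is quoted in bytes:

```python
# Bandwidth comparison using only the figures quoted in this article (a sketch).
raid_gbps = 12                 # SSD RAID throughput, in gigabits per second
raid_GB_s = raid_gbps / 8      # converted to gigabytes per second
dimm_GB_s = (0.76, 1.0)        # quoted per-DIMM range in gigabytes per second

print(f"RAID: {raid_GB_s} GB/s; one ULLtraDIMM: {dimm_GB_s[0]}-{dimm_GB_s[1]} GB/s")
# Two ULLtraDIMM slots already exceed the RAID figure on raw bandwidth;
# the article's deciding factor between the designs is latency, not throughput.
```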

For the right design verticals, ULLtraDIMM storage provides compelling arguments. For example, the VMware support makes ULLtraDIMM storage modules perfect for hosting virtual machine services (like WordPress) that share a single OS image and utilize a local MySQL database. This is undoubtedly one reason why Big Blue is so bullish on the SanDisk products … because they will sell blade servers. It also reinforces IBM’s commitment of $1B to further develop flash technologies.

Micron offers a competitive Hybrid DIMM controller that contains both RAM and flash on the same device, so system integrators don’t have to sacrifice all the capacity of one or a pair of DIMM slots. In the Micron product, data moves dynamically between the two memory types according to demand, with the most frequently used data residing in the fastest memory. Think of an mmap’ed region of memory that has an ultra-fast path to storage that is not subject to software or CPU overhead.
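The mmap analogy can be made concrete in software. The sketch below uses Python’s `mmap` module against an ordinary file standing in for the flash tier: writes to the mapped region look like plain memory stores, and the OS pages hot data into RAM on demand. The Hybrid DIMM does the equivalent migration in hardware, without the page-fault and driver overhead this software path incurs:

```python
# Software analogy for RAM-plus-flash tiering: a memory-mapped file behaves
# like ordinary memory backed by slower persistent storage.
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "backing.dat")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)            # 4 KiB of "flash" backing store

with open(path, "r+b") as f:
    view = mmap.mmap(f.fileno(), 0)    # map the file into the address space
    view[0:5] = b"hello"               # a plain in-memory write...
    view.flush()                       # ...persisted back to the backing store
    view.close()

with open(path, "rb") as f:
    print(f.read(5))                   # b'hello'
```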

