TechEnablement

Education, Planning, Analysis, Code


Lustre Delivers 10x the Bandwidth of NFS on Intel Xeon Phi

September 5, 2014 by Rob Farber

Lustre on Intel Xeon Phi delivered 10x the bandwidth of NFS, as reported in the 2014 Lustre User Group (LUG) presentation “Running Native Lustre* Client inside Intel® Xeon Phi™ coprocessor” by Dmitry Eremin, Zhiqi Tao, and Gabriele Paciucci of Intel Corporation. Network file systems are essential to the current generation of Knights Corner Intel Xeon Phi coprocessors because the native file system resides in the coprocessor's RAM. Yes, saving a file on an Intel Xeon Phi coprocessor reduces the available memory on the device. The only way to avoid this issue is to use a network file system, which is why Lustre's 10x greater bandwidth is so important.
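The RAM-resident filesystem effect is easy to demonstrate on any generic Linux host: /dev/shm is a tmpfs, which behaves like the coprocessor's RAM-backed root filesystem in this respect. A minimal sketch (run on a host system, not on the coprocessor itself):

```shell
# A tmpfs such as /dev/shm lives in RAM, so every file written to it
# consumes physical memory until the file is deleted.
before=$(df --output=avail /dev/shm | tail -1)   # KiB free before
dd if=/dev/zero of=/dev/shm/demo bs=1M count=64 status=none
after=$(df --output=avail /dev/shm | tail -1)    # KiB free after writing 64 MiB
echo "tmpfs space consumed: $(( before - after )) KiB"
rm /dev/shm/demo                                 # frees the RAM again
```

Writing the 64 MiB file reduces the available tmpfs space, and with it the memory available to applications, by roughly the file's size, which is exactly why offloading storage to a network filesystem matters on the coprocessor.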

Lustre delivers 10x the performance of NFS on Intel Xeon Phi

The Lustre file system is a POSIX-compliant, open-source, parallel file system that supports the requirements of leadership-class HPC and enterprise environments. It looks and acts like any other filesystem, yet it scales to thousands of clients and petabytes of storage, and it has demonstrated over a terabyte per second of sustained I/O bandwidth. Over 60% of the TOP100 supercomputers run Lustre.
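Because Lustre is POSIX-compliant, a mounted Lustre filesystem looks like any local disk to applications. A hypothetical client mount might look like the following sketch, where the management server name `mgs`, network `tcp0`, filesystem name `lfs0`, and mount point are all placeholders, not a real configuration:

```shell
# Hypothetical Lustre client mount (all names are placeholders).
# The client contacts the management server, then stripes I/O
# across the object storage servers transparently.
mount -t lustre mgs@tcp0:/lfs0 /mnt/lustre
df -h /mnt/lustre   # appears as a single large POSIX namespace
```

From that point on, unmodified applications read and write /mnt/lustre with ordinary POSIX calls while the client spreads the traffic across the storage servers.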

For more information about the 10x result and the Intel Xeon Phi configuration, see the OpenSFS.org slides “Running Native Lustre* Client inside Intel® Xeon Phi™ coprocessor” or the following video from LUG 2014.

For more information about Lustre on many systems, see:

  • http://opensfs.org/
  • “Architecting a High Performance Lustre Storage Solution”, which discusses Intel enhancements to Lustre.

You can also contact Intel directly about Intel Enhanced Lustre as illustrated by the graphic below, or learn more in the following video.

Intel enhancements to Lustre

Note that Lustre can exploit SSD storage quite nicely in a general-purpose CPU-based cluster environment.

Lustre SSD 16-node IOR performance

Lustre 128-node SSD IOR performance using 1 MB files
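IOR is the benchmark behind results like these; a hypothetical invocation matching the 1 MB transfer size might look like the sketch below. The MPI launcher, rank count, segment count, and mount point are assumptions for illustration, not the configuration used in the plots above:

```shell
# Hypothetical IOR run: POSIX API, 1 MiB transfers and blocks,
# file-per-process (-F), write then read (-w -r), against an
# assumed Lustre mount point.
mpirun -np 128 ior -a POSIX -w -r -F -t 1m -b 1m -s 64 -o /mnt/lustre/ior_test
```

IOR reports aggregate read and write bandwidth across all ranks, which is what the charts above plot as a function of node count.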


