TechEnablement

Education, Planning, Analysis, Code


Data Transfer Using The Intel COI Library

October 30, 2014 by Rob Farber Leave a Comment

This short chapter introduces the Intel COI library, discusses the pros and cons of the different data buffer types, and provides benchmarks of transfer latency and bandwidth between the host and the coprocessor. Any non-trivial application is likely to need to share data between the host and the coprocessor, and this information is essential for choosing the right method to communicate data efficiently. An application example provides real-world context for why it is necessary to optimize communication and avoid a potential bottleneck.


The COI library is built on top of the Symmetric Communications InterFace (SCIF), which provides low-level, optimized communication between host processes and the coprocessor within the Intel® Manycore Platform Software Stack (Intel® MPSS). In contrast to the compiler-assisted offload mechanism (which also uses the COI library), calling the COI library directly lets the programmer explicitly control how data is transferred onto and off of the coprocessor. In this chapter, we introduce how to use COI buffers to transfer data, evaluate the effectiveness of the COI library in real-world applications, and use benchmarks to characterize the different types of COI buffers.
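As a rough illustration of the host-side workflow the chapter covers, the sketch below shows how a COI buffer might be created, filled with host data, and passed to a function running on the coprocessor. Treat this as pseudocode rather than a runnable program: it assumes an attached coprocessor, an already-created `COIPROCESS` handle (`proc`), a sink-side `COIFUNCTION` handle (`func`) obtained from the offloaded binary, and the COI headers that ship with Intel MPSS. Several arguments are abbreviated here, so consult the MPSS COI headers for the exact signatures.

```c
/* Sketch only: requires Intel MPSS headers and a Xeon Phi coprocessor.
 * Assumes COIPROCESS proc was created with COIProcessCreateFromFile()
 * and COIFUNCTION func was looked up in the sink-side binary. */
COIBUFFER buf;
COIPIPELINE pipeline;
COIEVENT done;
float data[1024];  /* host data to send to the coprocessor */

/* Create a "normal" COI buffer visible to the sink process. */
COIBufferCreate(sizeof(data), COI_BUFFER_NORMAL, 0 /* flags */,
                NULL, 1, &proc, &buf);

/* Copy host data into the buffer (blocking in this sketch). */
COIBufferWrite(buf, 0, data, sizeof(data),
               COI_COPY_USE_DMA, 0, NULL, NULL);

/* Run the sink-side function with the buffer as a read-only argument. */
COIPipelineCreate(proc, NULL /* CPU mask */, 0, &pipeline);
COI_ACCESS_FLAGS access = COI_SINK_READ;
COIPipelineRunFunction(pipeline, func, 1, &buf, &access,
                       0, NULL, NULL, 0, NULL, 0, &done);
COIEventWait(1, &done, -1 /* wait forever */, 1, NULL, NULL);
```

The chapter's benchmarks matter precisely because choices made in calls like these, such as the buffer type passed to `COIBufferCreate` and whether transfers use DMA, determine the latency and bandwidth the application will see.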

Chapter Author

Louis Feng

Louis Feng is a software engineer at Intel working on high-performance graphics in collaboration with DreamWorks Animation. He previously worked at Disney ImageMovers Digital and Pixar on movie-production rendering. Louis received his PhD in Computer Science from the University of California, Davis, where his research focused on tensor field visualization. His current research interests include ray tracing, photorealistic image synthesis on highly parallel architectures, and parallel programming models.

Click to see the overview article “Teaching The World About Intel Xeon Phi” that contains a list of TechEnablement links about why each chapter is considered a “Parallelism Pearl” plus information about James Reinders and Jim Jeffers, the editors of High Performance Parallelism Pearls.



© 2026 · techenablement.com