
NVIDIA Pascal in the Wild

August 15, 2016 by Rob Farber

NVIDIA CEO Jen-Hsun Huang last week hand-delivered an NVIDIA DGX-1 to OpenAI in San Francisco. “I thought it was incredibly appropriate that the world’s first supercomputer dedicated to artificial intelligence would go to the laboratory that was dedicated to open artificial intelligence,” Huang said. OpenAI’s team is working at the cutting edge of a field that promises incredible advances: imagine artificial personal assistants that coordinate our digital lives, and autonomous cars and robots that are accessible to everyone. Referring to the roughly $2 billion NVIDIA invested in developing the system, Huang quipped, “So if this is the only one ever shipped, this project would cost $2 billion.”

Signed, sealed, delivered: NVIDIA CEO Jen-Hsun Huang and the team at OpenAI sign the first DGX-1 AI supercomputer in a box.

“You can take a large amount of data that would help people talk to each other on the internet, and you can train, basically, a chatbot, but you can do it in a way that the computer learns how language works and how people interact,” said OpenAI Research Scientist Andrej Karpathy.

Historic moment: OpenAI’s researchers gather around the first AI supercomputer in a box, NVIDIA DGX-1.

Unleashing DGX-1, the First AI Supercomputer in a Box

The key to all of this is speed. Researchers today are limited by the computational power of their systems.

“Our advances depend on GPUs being fast. Speed of our computers is, in some sense, the lifeblood of deep learning,” said OpenAI Research Director Ilya Sutskever.

“One very easy way of always getting our models to work better is to just scale the amount of compute,” Karpathy said. “So right now, if we’re training on, say, a month of conversations on Reddit, we can, instead, train on entire years of conversations of people talking to each other on all of Reddit.”

“And then we can get much more data in terms of how people interact with each other. And, eventually, we’ll use that to talk to computers, just like we talk to each other.”
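
What Karpathy is describing is next-token language modeling: feed the model a stream of conversation text and train it to predict each next token, so the statistics of how people talk end up in the model’s weights. The sketch below is a minimal illustration of that idea, not OpenAI’s actual code; it assumes PyTorch, and the toy corpus, the CharLM model, and the hyperparameters are placeholders chosen for brevity.

import torch
import torch.nn as nn

# Toy stand-in for "a month of conversations on Reddit".
corpus = "hello how are you? i am fine, thanks. how about you? "
vocab = sorted(set(corpus))
stoi = {ch: i for i, ch in enumerate(vocab)}
data = torch.tensor([stoi[ch] for ch in corpus])

# Train on a GPU when one is available -- the speed Sutskever
# calls the "lifeblood of deep learning".
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

class CharLM(nn.Module):
    """Character-level language model: embed, recur, predict next char."""
    def __init__(self, vocab_size, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.head(h)  # logits over the next character at each position

model = CharLM(len(vocab)).to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = data[:-1].unsqueeze(0).to(device)  # inputs: all characters but the last
y = data[1:].unsqueeze(0).to(device)   # targets: the same stream shifted by one

for step in range(200):
    logits = model(x)
    loss = loss_fn(logits.reshape(-1, len(vocab)), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

Scaling this up the way Karpathy describes means swapping the toy string for years of real conversations, and that training loop is exactly where GPU speed matters: the DGX-1 packs eight Tesla P100 GPUs to make such loops fast enough to be practical.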

For more information, see https://blogs.nvidia.com/blog/2016/08/15/first-ai-supercomputer-openai-elon-musk-deep-learning/.


Filed Under: CUDA, Featured article, Featured news, News Tagged With: NVIDIA Tesla
