
NASA Charts Path For CFD To 2030 – Projects Future Computer Technology!

October 7, 2014 by Rob Farber

The recent NASA-sponsored report CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences is a must-read for everyone working in computational fluid dynamics and a very interesting read for anyone following computer technology. The vision, in a nutshell: “A single engineer/scientist must be able to conceive, create, analyze, and interpret a large ensemble of related simulations in a time-critical period (e.g. 24 hours), without individually managing each simulation.” The report also projects what leading-edge computing technology will look like in 2030.
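
That 24-hour ensemble requirement is as much a workflow-automation problem as a solver problem. As a rough illustration (my sketch, not something from the report), the Python snippet below launches a small parameter sweep of independent solver runs and collects their status without hand-managing each case; the cfd_solver executable and its command-line flags are hypothetical stand-ins.

```python
# Minimal ensemble-launcher sketch: run a parameter sweep of independent
# CFD cases concurrently and collect exit status, without babysitting
# each job. The solver binary "cfd_solver" and its flags are hypothetical.
import itertools
import subprocess
from concurrent.futures import ThreadPoolExecutor

mach_numbers = [0.70, 0.75, 0.80, 0.85]
angles_of_attack = [0.0, 2.0, 4.0]

def run_case(mach, alpha):
    """Launch one solver instance and return (case name, return code)."""
    case = f"M{mach:.2f}_a{alpha:.1f}"
    cmd = ["cfd_solver", "--mach", str(mach), "--alpha", str(alpha),
           "--output", f"results/{case}"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return case, result.returncode

# Each worker thread only waits on its external solver process, so a
# modest thread pool can supervise many concurrent runs.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(run_case, m, a)
               for m, a in itertools.product(mach_numbers, angles_of_attack)]
    for f in futures:
        case, rc = f.result()
        print(f"{case}: {'ok' if rc == 0 else f'failed (rc={rc})'}")
```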

The basic set of capabilities for Vision 2030 CFD must include, at a minimum:

  1. Emphasis on physics-based, predictive modeling
  2. Management of errors and uncertainties resulting from all possible sources
  3. A much higher degree of automation in all steps of the analysis process
  4. Ability to effectively utilize massively parallel, heterogeneous, and fault-tolerant HPC architectures
  5. Flexibility to tackle capability- and capacity-computing tasks in both industrial and research environments
  6. Seamless integration with multidisciplinary analyses that will be the norm in 2030
NASA Technology Development Roadmap for CFD to 2030

The NASA vision for hardware is of machines that are:

“hierarchical, consisting of large clusters of shared-memory multiprocessors, themselves including hybrid-chip multiprocessors combining low-latency sequential cores with high-throughput data-parallel cores. Even the memory chips are expected to contain computational elements, which could provide significant speedups for irregular memory access algorithms, such as sparse matrix operations arising from unstructured datasets.”
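
The sparse-matrix remark is worth unpacking. In compressed sparse row (CSR) form, every output element is assembled by gathering input values through an index array, so the memory traffic is data-dependent and irregular, exactly the access pattern that computation in or near memory could accelerate. A minimal CSR sparse matrix-vector product (illustrative only, not tied to any particular hardware):

```python
# CSR sparse matrix-vector product y = A @ x.
# The gather x[col_idx[j]] is a data-dependent, irregular memory access --
# the pattern the report singles out as a candidate for in-memory compute.
def csr_spmv(row_ptr, col_idx, values, x):
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):
        for j in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[j] * x[col_idx[j]]   # indirect load through col_idx
    return y

# 3x3 example matrix: [[4, 0, 1], [0, 2, 0], [3, 0, 5]]
row_ptr = [0, 2, 3, 5]
col_idx = [0, 2, 1, 0, 2]
values  = [4.0, 1.0, 2.0, 3.0, 5.0]
print(csr_spmv(row_ptr, col_idx, values, [1.0, 2.0, 3.0]))  # [7.0, 4.0, 18.0]
```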

The study notes:

“The wildcard in predicting what a leading edge HPC system will look like is whether one or more of several current nascent HPC technologies will come to fruition. Radical new technologies such as quantum computing, superconducting logic, low-power memory, massively parallel molecular computing, next generation “traditional” processor technologies, on-chip optics, advanced memory technologies (e.g., 3D memory) have been proposed but are currently at very low technology readiness levels (TRL). Many of these revolutionary technologies will require different algorithms, software infrastructures, as well as different ways of using results from CFD simulations.” 

In particular, the study references the paper “Quantum Algorithm for Linear Systems of Equations” (the Harrow-Hassidim-Lloyd, or HHL, algorithm for solving systems of the form Ax = b) as one nascent technology that might impact future systems.

The mesh generation goals are aggressive:

  • Streamlined CAD access and interfacing.
  • Automated, adaptive techniques including anisotropy.
  • A less burdensome, more invisible meshing process.
  • Meshes with 1 trillion grid points (see the storage estimate after this list).
  • Efficient exploitation of massively parallel hardware.
  • High order elements.
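
The trillion-point goal puts the hardware projections in perspective. A back-of-the-envelope estimate, under my own assumptions (double precision, five conserved flow variables per point, mesh connectivity not counted), of just the flow-state storage:

```python
# Back-of-the-envelope storage for the flow state on a 10^12-point mesh.
# Assumptions (not from the report): double precision, 5 conserved
# variables per grid point (density, 3 momentum components, energy).
points = 1_000_000_000_000        # 1 trillion grid points
variables_per_point = 5
bytes_per_value = 8               # IEEE double precision

state_bytes = points * variables_per_point * bytes_per_value
print(f"state vector alone: {state_bytes / 2**40:.1f} TiB")   # ~36.4 TiB
```

Roughly 36 TiB for the solution vector alone, before connectivity, gradients, or any time history.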

Current Technology Gaps

  1. Hardware system power consumption
  2. Higher levels of software abstraction
  3. Advanced programming environments
  4. Robust CFD code scalability to O(1,000,000 cores) (see the Amdahl's law sketch after this list)
  5. Lack of scalable CFD pre- and post-processing methods
  6. Lack of access to HPC resources for code development
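
To see why scaling to a million cores is listed as a gap, a textbook Amdahl's law estimate (my numbers, not the report's) shows how quickly a tiny serial fraction caps the achievable speedup:

```python
# Amdahl's law: speedup = 1 / (s + (1 - s) / N) for serial fraction s
# on N cores. Illustrative numbers only; not figures from the report.
def amdahl_speedup(serial_fraction, cores):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for s in (0.01, 0.001, 0.0001):
    print(f"serial fraction {s:.2%}: "
          f"speedup on 1,000,000 cores = {amdahl_speedup(s, 1_000_000):,.0f}x")
```

Even a 0.1% serial fraction caps a million cores below a 1,000x speedup, which is one way to read the emphasis above on software abstraction and programming environments rather than raw hardware alone.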

The study also proposes several grand challenge problems:

NASA 2030 Vision Grand Challenge Problems

Critical Flow Phenomena addressed in the study include:

  • Flow separation: e.g., smooth-body, shock-induced, blunt/bluff body
  • Laminar to turbulent boundary layer flow transition/reattachment
  • Viscous wake interactions and boundary layer confluence
  • Corner/junction flows
  • Icing and frost
  • Circulation and flow separation control
  • Turbomachinery flows
  • Aerothermal cooling/mixing flows
  • Reactive flows, including gas chemistry and combustion
  • Jet exhaust
  • Airframe noise
  • Vortical flows: wing/blade tip, rotorcraft
  • Wake hazard reduction and avoidance
  • Wind tunnel to flight scaling
  • Rotor aero/structural/controls, wake and multirotor interactions, acoustic loading, ground effects
  • Shock/boundary layer, shock/jet interactions
  • Sonic boom
  • Store/booster separation
  • Planetary retro-propulsion
  • Aerodynamic/radiative heating
  • Plasma flows
  • Ablator aerothermodynamics
