Reservoir simulation is the single most important calculation an Oil and Gas company can do. Attendees at the recent KAUST Global IT summit were told that armed guards are stationed inside the KAUST computer center when the ARAMCO simulation is running because the software is a state secret. The Oil and Gas industry has been reporting heroic simulation sizes, now ranging from billion- to trillion-cell runs.
IBM and Stone Ridge Technology, in partnership with NVIDIA, just announced a performance milestone in reservoir simulation: a run 10x faster than legacy CPU codes that consumed 1/10th the power and occupied 1/100th of the space.
The simulation ran on IBM Minsky servers coupled with NVIDIA GPUs. More specifically, it used 60 POWER processors and 120 GPU accelerators to run 10x faster than the previous record run, which utilized 700,000 processors. These results aim to demonstrate the price/performance that the IBM/NVIDIA pairing can provide for business-critical High Performance Computing (HPC) applications in simulation and exploration.
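To put those numbers in perspective, a quick back-of-envelope comparison helps. It assumes, per the original reports, that the 700,000 "processors" in the prior record run were individual CPU cores; the arithmetic below is illustrative only.

```python
# Rough density arithmetic (illustrative; assumes the cited 700,000
# "processors" were CPU cores, which is how the prior run was reported).
legacy_cores = 700_000
gpus = 120
speedup = 10.0

hardware_ratio = legacy_cores / gpus         # ~5,833x fewer compute devices
effective_ratio = hardware_ratio * speedup   # ~58,333x more work per device
print(f"{hardware_ratio:,.0f}x fewer devices, "
      f"~{effective_ratio:,.0f}x throughput per device")
```

Comparing a whole GPU against a single CPU core overstates the per-chip gap, of course, but it conveys why the density claim matters.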
Energy companies use reservoir modeling to predict the flow of oil, water and natural gas in the subsurface of the earth before they drill, so they can extract the most oil as efficiently as possible. A billion-cell simulation is extremely challenging because of the level of detail it seeks to provide. Stone Ridge Technology, maker of the ECHELON petroleum reservoir simulation software, completed the billion-cell reservoir simulation in 92 minutes using 30 IBM Power Systems S822LC for HPC servers equipped with 60 POWER processors and 120 NVIDIA® Tesla™ P100 GPU accelerators.
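For readers new to the term, a "cell" is one small volume of the gridded subsurface in which the simulator tracks quantities like pressure and fluid saturations. The toy below is a minimal NumPy sketch, nothing like ECHELON's proprietary implicit multiphase formulation, and every name in it is invented for illustration; it simply shows how per-step cost grows with cell count.

```python
import numpy as np

# Toy single-phase pressure diffusion on a tiny 2D grid -- an illustrative
# sketch only. Real simulators solve coupled, implicit multiphase equations;
# this exists to show what a "cell" is and why cell counts drive cost.

nx, ny = 100, 100             # a 10,000-cell toy grid vs. the billion-cell run
p = np.zeros((ny, nx))        # one pressure value per cell
p[ny // 2, nx // 2] = 1.0     # a pressure pulse at an (assumed) well location
alpha = 0.2                   # kept below the 0.25 explicit-scheme stability limit

for step in range(500):
    # 5-point stencil: each cell exchanges flux with its four neighbors.
    # np.roll gives periodic boundaries -- fine for a toy, wrong for a field.
    lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
           np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4.0 * p)
    p += alpha * lap          # cost per step is linear in the number of cells
```

Each timestep touches every cell, so scaling this 10,000-cell toy to a billion cells multiplies the work per step by a factor of 100,000, which is why memory bandwidth, a GPU strength, dominates the runtime.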
“This calculation is a very salient demonstration of the computational capability and density of solution that GPUs offer. That speed lets reservoir engineers run more models and ‘what-if’ scenarios than previously so they can have insights to produce oil more efficiently, open up fewer new fields and make responsible use of limited resources,” said Vincent Natoli, President of Stone Ridge Technology. “By increasing compute performance and efficiency by more than an order of magnitude, we’re democratizing HPC for the reservoir simulation community.”
“This milestone calculation illuminates the advantages of the IBM POWER architecture for data intensive and cognitive workloads,” said Sumit Gupta, IBM Vice President, High Performance Computing, AI & Analytics. “By running ECHELON on IBM Power Systems, users can achieve faster run-times using a fraction of the hardware. The previous record used more than 700,000 processors in a supercomputer installation that occupies nearly half a football field. Stone Ridge did this calculation on two racks of IBM Power Systems machines that could fit in the space of half a ping-pong table.”
This latest advance challenges the misconception that GPUs cannot be efficient on complex application codes like reservoir simulators and are suited only to simpler, more naturally parallel applications such as seismic imaging. The scale, speed and efficiency of the reported result disprove that preconception. Achieving the milestone calculation on a relatively small server infrastructure enables small and medium-size oil and energy companies to take advantage of computer-based reservoir modeling and to optimize production from their asset portfolios.
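Why were GPUs thought unsuitable? Implicit reservoir simulators spend most of their time in unstructured sparse linear algebra inside the solver, not in regular stencils. The sketch below is a plain-Python CSR sparse matrix-vector product, the kernel at the heart of such solvers. It is illustrative only (ECHELON's actual GPU kernels are proprietary), but it shows why the workload suits GPUs: every row is independent, fine-grained work.

```python
import numpy as np

def csr_spmv(indptr, indices, data, x):
    """y = A @ x with A in CSR form -- the kernel that dominates implicit
    reservoir simulation. Illustrative sketch; production GPU codes run
    thousands of rows concurrently and fuse this with preconditioning."""
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):
        start, end = indptr[row], indptr[row + 1]
        # Indirect, irregular reads of x make this bandwidth-bound --
        # but each row is independent, ideal for massive GPU parallelism.
        y[row] = data[start:end] @ x[indices[start:end]]
    return y

# Tiny usage example: a 3x3 matrix with 5 nonzeros.
indptr  = np.array([0, 2, 3, 5])
indices = np.array([0, 2, 1, 0, 2])
data    = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
print(csr_spmv(indptr, indices, data, np.array([1.0, 2.0, 3.0])))
# -> [ 7.  6. 17.]
```

The genuinely hard part, and the historical reason GPUs were dismissed here, is not the SpMV itself but the preconditioner and the strongly coupled nonlinear solves built around it.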
Billion-cell simulations are rare in industry practice; the calculation was carried out to highlight the performance difference between new, fully GPU-based codes like the ECHELON reservoir simulator and equivalent legacy CPU codes. ECHELON scales from the cluster to the workstation: while it can simulate a billion cells on 30 servers, it can also run smaller models on a single server, or even on a single NVIDIA P100 board in a desktop workstation, the latter two use cases being closer to the sweet spot for the energy industry.
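A rough memory estimate shows why the billion-cell run needed a cluster while typical models fit on one board. The per-cell figures below are assumptions for illustration (formulations vary), not published ECHELON numbers.

```python
# Back-of-envelope memory footprint (assumed figures, not ECHELON's).
cells = 1_000_000_000            # the milestone model size
doubles_per_cell = 20            # assumed: states, rock/fluid props, workspace
bytes_total = cells * doubles_per_cell * 8
print(f"~{bytes_total / 1e9:.0f} GB of state")          # ~160 GB

p100_hbm2 = 16e9                 # Tesla P100: 16 GB of HBM2 per board
print(f"minimum ~{bytes_total / p100_hbm2:.0f} P100s")  # ~10 boards

# The run used 120 GPUs -- well above this floor -- to cut time-to-solution.
# A more typical ~10-million-cell model needs only ~1.6 GB under the same
# assumptions, comfortable on a single P100 workstation.
```

Under assumptions like these, the single-server and single-board use cases, more than the headline record itself, are what put GPU-based reservoir simulation within reach of smaller operators.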