It feels like an oxymoron to speak of interpreted languages for HPC, much like ordering “jumbo shrimp”. Yet the ease of programming, coupled with the ability to augment RAD (Rapid Application Development) languages like Python, R, and Julia with high-performance back-end methods, is now being recognized by the HPC and cloud computing communities. In short, the RAD languages provide concise, powerful APIs while the underlying CUDA/C/C++/OpenCL methods provide the performance. From an Amdahl’s law perspective, the back-end methods provide the parallelism while the RAD front-end manages the serial sections of the code in the convenient form that has made RAD languages so popular. The success of projects like the PyFR CFD computational framework (in 5k lines of Python) and interest in deep-learning projects like RaPyDLI (backed by HPC heavyweights Jack Dongarra and Geoffrey Fox) bring success, credibility, and funding to RAD frameworks for HPC and cloud computing.
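The front-end/back-end division of labor can be sketched in pure standard-library Python: the interpreted code handles setup and orchestration, while the inner loop is delegated to CPython’s C-implemented builtins, which stand in here for a compiled CUDA/C/C++/OpenCL kernel. This is only an illustrative sketch (the function names are hypothetical and the timing is not a rigorous benchmark), not how any particular HPC framework is implemented.

```python
import timeit

n = 100_000
x = [1.0] * n
y = [2.0] * n

# "Serial front-end" version: the interpreted Python loop does all the work.
def py_dot(x, y):
    total = 0.0
    for a, b in zip(x, y):
        total += a * b
    return total

# "Back-end" version: the inner loop runs inside CPython's C-implemented
# sum() and map() builtins -- the same delegation pattern RAD/HPC codes
# use, just with real compiled kernels instead of builtins.
def c_backed_dot(x, y):
    return sum(map(float.__mul__, x, y))

t_py = timeit.timeit(lambda: py_dot(x, y), number=10)
t_c = timeit.timeit(lambda: c_backed_dot(x, y), number=10)
```

Both versions compute the same dot product; the point is that the Python front-end stays readable while the hot loop runs in compiled code.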
In fact, Lorena Barba believes that introductory programming should be taught in Python (see “Why I push for Python”). She is also teaching a MOOC titled “Practical Numerical Methods with Python”.
Intel is behind a push towards RAD frameworks, with publications such as “Turbocharging Open Source Python, R, and Julia-based HPC Applications”. TACC has been developing “parallel R” for the Intel Xeon Phi and rMPI (an R interface to MPI) [link].
SC14 contains several RAD workshops, such as:

- Python in HPC

Python is an established, general-purpose, high-level programming language with a large following in research and industry for applications in fields including computational fluid dynamics, finance, biomolecular simulation, artificial intelligence, statistics, data analysis, scientific visualization, and systems management. The use of Python in scientific, high performance parallel, big data, and distributed computing roles has been on the rise, with the community providing new and innovative solutions while preserving Python’s famously clean syntax, low learning curve, portability, and ease of use.

- High Performance Technical Computing in Dynamic Languages

Dynamic high-level languages such as Julia, Maple®, Mathematica®, MATLAB®, Octave, Python, R, and Scilab are rapidly gaining popularity with computational scientists and engineers, who often find these languages more productive for rapid prototyping of numerical simulation codes. However, writing legible yet performant code in dynamic languages remains challenging, which limits the scalability of code written in such languages, particularly when deployed on massively parallel architectures such as clusters, cloud servers, and supercomputers. This workshop aims to bring together users, developers, and practitioners of dynamic technical computing languages, regardless of language, affiliation or discipline, to discuss topics of common interest. Examples of such topics include performance, software development, abstractions, composability and reusability, best practices for software engineering, and applications in the context of visualization, information retrieval and big data analytics. http://jiahao.github.io/hptcdl-sc14/

For more information:
- The PyFR team notes that the Python Mako project offers a useful way to template kernel generation, which assists in platform portability (PyFR runs on CPUs and GPUs). This portability augments the existing, relatively friendly Python interface for calling low-level kernels pre-written in C++/CUDA/OpenCL and other lower-level languages and APIs.
- Julia (which can also call Python).
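The kernel-templating idea behind PyFR’s portability can be sketched with the standard library’s `string.Template`. PyFR itself uses Mako, whose syntax is richer, so this is only an illustration of the pattern; the `axpy` kernel and its parameters are hypothetical.

```python
from string import Template

# A toy kernel template: one source can be specialized per platform and
# precision at run time, similar in spirit to how PyFR generates
# C/CUDA/OpenCL kernels with Mako. (Illustrative only; this is not
# PyFR's actual template syntax.)
KERNEL_TMPL = Template(
    "${qualifier} void axpy(${dtype} a, ${dtype}* x, ${dtype}* y, int n) {\n"
    "    for (int i = 0; i < n; ++i)\n"
    "        y[i] = a * x[i] + y[i];\n"
    "}\n"
)

# Specialize for a plain C build in double precision ...
c_src = KERNEL_TMPL.substitute(qualifier="static", dtype="double")
# ... and for a CUDA build in single precision.
cuda_src = KERNEL_TMPL.substitute(qualifier="__global__", dtype="float")
```

The generated source strings would then be handed to a C compiler or to a runtime-compilation API, keeping the kernel logic in one place while targeting multiple platforms.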

