
Programming Deep-learning Neural Networks to Solve Tasks

September 8, 2014 by Rob Farber

Deep-learning neural networks can be programmed, or structured, by a human to perform one or more complex tasks. The key requirements are the ability to (1) design the network topology and (2) lock weights in the ANN (Artificial Neural Network) during training. A powerful example of structured deep-learning comes from the 1993 Farber et al. paper, "Identification of Continuous-Time Dynamical Systems: Neural Network Based Algorithms and Parallel Implementation", which implemented a fourth-order Runge-Kutta numerical integrator, discussed how to handle stiff sets of equations and perform identification of continuous-time systems, and trained "netlets" to model a set of ODEs (Ordinary Differential Equations). The paper notes that both implicit and explicit integrators can be used: succinctly, repeated iterations of a feed-forward neural network are used to train the implicit integrator, while a recurrent neural network is used during training of the explicit integrator. The paper also discusses the algorithms used and their implementation on parallel machines (SIMD and MIMD architectures). Once trained, these task-level neural networks can be incorporated into other deep-learning systems to train other neural network subsystems, as well as be integrated into conventional computational applications.

[Figure RK4_ANN: a fourth-order Runge-Kutta integrator structured as an artificial neural network]
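As a rough, hypothetical sketch of the structured-network idea (not the paper's actual implementation), the Python below wires a tiny feed-forward network into a classical RK4 step. The RK4 stage structure and its 1/6, 2/6, 2/6, 1/6 combination weights are fixed ("locked"); only the network parameters would be trained. All names and sizes here are illustrative.

```python
import numpy as np

def ann_rhs(y, W1, b1, W2, b2):
    """Tiny feed-forward network standing in for dy/dt = f(y)."""
    h = np.tanh(W1 @ y + b1)   # hidden layer
    return W2 @ h + b2         # linear output layer

def rk4_step(y, dt, params):
    """Classical fourth-order Runge-Kutta step. The stage structure
    and combination weights are locked; only the ANN parameters in
    `params` are trainable."""
    k1 = ann_rhs(y, *params)
    k2 = ann_rhs(y + 0.5 * dt * k1, *params)
    k3 = ann_rhs(y + 0.5 * dt * k2, *params)
    k4 = ann_rhs(y + dt * k3, *params)
    return y + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Example: advance a 2-D state with random (untrained) weights.
rng = np.random.default_rng(0)
params = (rng.normal(size=(8, 2)), rng.normal(size=8),
          rng.normal(size=(2, 8)), rng.normal(size=2))
y = np.array([1.0, 0.0])
for _ in range(10):
    y = rk4_step(y, 0.01, params)
print(y)
```

Training such a structure then amounts to fitting the netlet parameters so that `rk4_step` reproduces observed trajectories, while the integrator itself never changes.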

Beyond the ability to integrate and model stiff sets of equations, follow-on work by Ramiro Rico-Martínez, Raymond A. Adomaitis, and Ioannis G. Kevrekidis investigated the noninvertibility of this approach in the 2000 paper, "Noninvertibility in neural networks":

We present and discuss an inherent shortcoming of neural networks used as discrete-time models in system identification, time series processing, and prediction. Trajectories of nonlinear ordinary differential equations (ODEs) can, under reasonable assumptions, be integrated uniquely backward in time. Discrete-time neural network mappings derived from time series, on the other hand, can give rise to multiple trajectories when followed backward in time: they are in principle noninvertible. This fundamental difference can lead to model predictions that are not only slightly quantitatively different, but qualitatively inconsistent with continuous time series. We discuss how noninvertibility arises, present key analytical concepts and some of its phenomenology. Using two illustrative examples (one experimental and one computational), we demonstrate when noninvertibility becomes an important factor in the validity of artificial neural network (ANN) predictions, and show some of the overall complexity of the predicted pathological dynamical behavior. These concepts can be used to probe the validity of ANN time series models, as well as provide guidelines for the acquisition of additional training data. 
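The mechanism is easy to see in a toy example that is not from the paper: any discrete-time map that folds the state space sends two different pasts to the same present, so following it backward yields multiple trajectories. The sketch below uses the logistic map as a stand-in for a trained network mapping.

```python
import numpy as np

def logistic(x, r=3.7):
    """Forward map: every x has exactly one image."""
    return r * x * (1.0 - x)

def preimages(y, r=3.7):
    """Backward step: solve r*x*(1-x) = y for x. This is a
    quadratic, so generically there are TWO preimages; the map
    is noninvertible when followed backward in time."""
    disc = 1.0 - 4.0 * y / r
    if disc < 0:
        return []                      # no real preimage at all
    s = np.sqrt(disc)
    return [0.5 * (1.0 - s), 0.5 * (1.0 + s)]

y = logistic(0.2)      # the unique forward image of x = 0.2
print(preimages(y))    # two candidate pasts: 0.2 and 0.8
```

An ODE trajectory, by contrast, can under reasonable assumptions be integrated uniquely backward in time, which is exactly the qualitative mismatch the abstract describes.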

These techniques have numerous applications across a variety of fields, including vision research, mathematical analysis, control, and chemical engineering.

Please see my publications list for other applications: http://techenablement.com/rob-farber.

The techniques discussed in "Identification of Continuous-Time Dynamical Systems: Neural Network Based Algorithms and Parallel Implementation" can easily be applied to the farbopt teaching code, which achieves a TF/s per GPU or Intel Xeon Phi and over 13 PF/s on the ORNL Titan supercomputer thanks to the near-linear scaling of the Farber parallel mapping.
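While farbopt itself targets GPUs and Intel Xeon Phi, the structure behind that near-linear scaling can be sketched in a few lines: broadcast the parameters, map a partial objective-function evaluation over data partitions, and reduce the scalar partial errors. The CPU-only Python below is a hypothetical illustration of that map-reduce pattern, not the farbopt code; all function names are made up for this sketch.

```python
import numpy as np
from multiprocessing import Pool

def predict(params, x):
    """Stand-in for the trained netlet: one hidden layer."""
    W1, b1, W2, b2 = params
    return np.tanh(x @ W1 + b1) @ W2 + b2

def partial_error(args):
    """Each worker scores only its own data partition; just one
    scalar partial sum travels back, which is why the mapping
    scales nearly linearly with the number of workers."""
    params, x_chunk, y_chunk = args
    residual = predict(params, x_chunk) - y_chunk
    return float(np.sum(residual * residual))

def objective(params, x_parts, y_parts, pool):
    """Broadcast params, map over partitions, reduce the partials."""
    work = [(params, xp, yp) for xp, yp in zip(x_parts, y_parts)]
    return sum(pool.map(partial_error, work))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.normal(size=(4096, 3))
    y = rng.normal(size=(4096, 1))
    params = (rng.normal(size=(3, 16)), rng.normal(size=16),
              rng.normal(size=(16, 1)), rng.normal(size=1))
    with Pool(4) as pool:
        print(objective(params, np.array_split(x, 4),
                        np.array_split(y, 4), pool))
```

An optimizer then treats `objective` as an ordinary black-box function of the parameters, so the same pattern works unchanged whether the partitions live on CPU cores, GPUs, or supercomputer nodes.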


Of course, TechEnablement has applied many of the techniques discussed in our articles, such as structured deep-learning, in consulting work across a variety of fields, from manufacturing optimization to color matching and small-molecule drug design.


