
Geoffrey Fox

Distinguished professor of Informatics and Computing, and Physics at Indiana University

“Integrated Systems for Deep Learning and Data Engineering on Clouds and HPC Systems”

Thursday 16 February, 10.30am – 11.05am


AI, as exemplified today by large-scale deep learning, presents complex challenges to computer systems. It requires the adaptive execution of heterogeneous components, each of which is a cluster of parallel tasks. Further, large amounts of data need to be read, written, and often moved in small dynamic batches. Deep learning must be efficiently linked to pre- and post-processing (data engineering). This implies converged environments spanning the Java, Python, and C++ ecosystems. Further, AI must be integrated with better-understood large parallel simulations. Simulations can invoke AI to control passage through phase space (often with ensembles); alternatively, AI can train surrogates used to replace all or part of a simulation. In the latter case, there are consequent inferences, as in computational fluid dynamics or climate simulations, where AI can learn microstructure. This implies we must support systems that run well in conjunction with classic Slurm-MPI-based simulations as well as in modern cloud environments, including challenges from shared resources due to multi-tenancy. This extends the needed convergence to link HPC and cloud environments.



  • M.A. at Cambridge, 1968
  • Ph.D. in Theoretical Physics at Cambridge, 1967
  • B.A. in Mathematics at Cambridge, 1964

Fox received a Ph.D. in Theoretical Physics from Cambridge University and is now Distinguished Professor of Informatics and Computing, and Physics at Indiana University, where he is Director of the Digital Science Center, Chair of the Department of Intelligent Systems Engineering, and Director of the Data Science program at the School of Informatics, Computing, and Engineering.

He previously held positions at Caltech, Syracuse University, and Florida State University after postdoctoral appointments at the Institute for Advanced Study in Princeton, Lawrence Berkeley Laboratory, and Peterhouse College, Cambridge.

He has supervised the Ph.D. theses of 68 students and published around 1,200 papers in physics and computer science, with an h-index of 70 and over 26,000 citations.

He currently works on applying computer science, from infrastructure to analytics, in Biology, Pathology, Sensor Clouds, Earthquake and Ice-sheet Science, Image Processing, Deep Learning, Manufacturing, Network Science, and Particle Physics. The infrastructure work is built around Software Defined Systems on Clouds and Clusters; the analytics focuses on scalable parallelism.

He is involved in several projects to enhance the capabilities of Minority Serving Institutions. He has experience in online education and its use in MOOCs for areas such as Data and Computational Science.

He is a Fellow of APS (Physics) and ACM (Computing).
