SCEC 2018

Overview

Supercomputers are used to power discoveries and to reduce the time-to-results in a wide variety of disciplines such as engineering, physical sciences, and healthcare. They are globally considered vital for staying competitive in defense, in research and development across disciplines, in the financial sector, in several mainstream businesses, and even in agriculture. An integral requirement for using supercomputers, as with any other computer, is the availability of software. Scalable and efficient software is typically required to use large-scale supercomputing platforms optimally and, thereby, to effectively leverage the investments in advanced CyberInfrastructure (CI). However, developing and maintaining such software is challenging due to several factors, such as (1) the lack of well-defined processes or guidelines for writing software that can ensure high performance on supercomputers, and (2) a shortfall of trained workforce having skills in both software engineering and supercomputing. With the rapid advancement of computer architecture, the complexity of the processors used in supercomputers is also increasing, which in turn makes the task of developing efficient software for supercomputers even more challenging. To mitigate such challenges, there is a need for a common platform that brings together different stakeholders from the areas of supercomputing and software engineering. To provide such a platform, the second workshop on "Software Challenges to Exascale Computing (SCEC)" is being organized in Delhi, India, in December 2018.

The SCEC18 workshop will not only inform the participants about the challenges in large-scale HPC software development but will also steer them in the direction of building international collaborations for finding solutions to those challenges. The workshop will provide a forum through which hardware vendors and software developers can communicate with each other and influence the architecture of the next generation supercomputing systems and the supporting software stack. By fostering cross-disciplinary associations, the workshop will serve as a stepping-stone towards innovations in the future.

Benefits

Benefits of the SCEC18 workshop to the researchers and users in the academia: the workshop will provide an opportunity to disseminate their results to the public, and find potential collaborators.

Benefits of the SCEC18 workshop to software developers: the workshop will provide an opportunity to understand the future trends in the HPC hardware and develop collaborations in the code modernization and optimization disciplines.

Benefits of the SCEC18 workshop to HPC service providers: the workshop will provide an opportunity to understand the challenges that the community faces in using the HPC platforms efficiently, and connect with the user-community.

Benefits of the SCEC18 workshop to HPC hardware vendors: the workshop will provide an opportunity to understand the evolving needs of the HPC community, and network with potential customers.

Benefits of the SCEC18 workshop to students: the workshop will provide an opportunity to network with HPC and advanced software engineering professionals, faculty members, and researchers; learn about internship and career opportunities; and discuss opportunities for higher education.

The workshop proceedings will be of interest to the students, researchers, and other professionals who are working in the areas of HPC software and large-scale supercomputers.


Topics of interest include, but are not limited to:

  • Tools and techniques for code modernization
  • Generative programming techniques in HPC
  • Supporting software and middleware for HPC environments: e.g., MPI libraries
  • Tools for profiling, debugging, and parallelizing applications
  • Tools and techniques for memory and power optimization
  • Large-scale HPC applications (tuning, optimization, and implementation on HPC resources)
  • HPC Science Gateways, Containerization (HPC in the Cloud)
  • Fault-tolerance
  • Filesystems and Parallel I/O
  • High-level interfaces, libraries, compilers, and runtime systems for parallel programming
  • Domain-Specific Languages in HPC
  • Best practices for HPC software development


Invited Speakers


Dr. Dan Stanzione
Associate Vice President for Research; Executive Director, Texas Advanced Computing Center

Dr. D.K. Panda
Professor and University Distinguished Scholar of Computer Science and Engineering, Ohio State University

Dr. P. K. Sinha
Vice Chancellor and Director of International Institute of Information Technology (IIIT), Naya Raipur

Dr. Manodeep Sinha
Computational astrophysicist, Centre for Astrophysics at Swinburne University of Technology, Melbourne

Dr. Sushil Prasad
Program Director, National Science Foundation (NSF); Professor, Georgia State University

Dr. M. K. Verma
Professor, Indian Institute of Technology, Kanpur

Dr. Kishore Kothapalli
Associate Professor, International Institute of Information Technology, Hyderabad

Dr. P. (Saday) Sadayappan
Professor and University Distinguished Scholar of Computer Science and Engineering, Ohio State University

Dr. Santosh Ansumali
Associate Professor, Jawaharlal Nehru Centre for Advanced Scientific Research (JNCASR); Chief Technical Advisor, SankyaSutra Lab

Dr. M. P. Gururajan
Associate Professor, Indian Institute of Technology, Bombay; Chief Technical Advisor, SankyaSutra Lab

Mr. Ashrut Ambastha
Sr. Staff Architect at Mellanox



Agenda on December 13, 2018

08:45 AM - 09:45 AM
REGISTRATION, TEA/COFFEE

09:45 AM - 10:00 AM
Opening Remarks
Dr. Amit Majumdar, Dr. Ritu Arora

10:00 AM - 10:30 AM
Computing for the Endless Frontier
Dr. Dan Stanzione

Bio: Dr. Stanzione is the Executive Director of the Texas Advanced Computing Center (TACC) at The University of Texas at Austin. A nationally recognized leader in high performance computing, Stanzione has served as deputy director since June 2009 and assumed the Executive Director post on July 1, 2014.

He is the principal investigator (PI) for several leading projects including a multimillion-dollar National Science Foundation (NSF) grant to deploy and support TACC's Stampede supercomputer over four years. Stanzione is also the PI of TACC's Wrangler system, a supercomputer designed specifically for data-focused applications. He served for six years as the co-director of CyVerse, a large-scale NSF life sciences cyberinfrastructure in which TACC is a major partner. In addition, Stanzione was a co-principal investigator for TACC's Ranger and Lonestar supercomputers, large-scale NSF systems previously deployed at UT Austin. Stanzione previously served as the founding director of the Fulton High Performance Computing Initiative at Arizona State University and served as an American Association for the Advancement of Science Policy Fellow in the NSF's Division of Graduate Education.

Stanzione received his bachelor's degree in electrical engineering and his master's degree and doctorate in computer engineering from Clemson University, where he later directed the supercomputing laboratory and served as an assistant research professor of electrical and computer engineering.


Presentation Slides:
Computing for the Endless Frontier
Presentation Video

10:30 AM - 11:00 AM
Designing Scalable HPC, Deep Learning, Big Data and Cloud Middleware for Exascale Systems
Dr. D.K. Panda

Abstract: This talk will focus on challenges in designing HPC, Deep Learning, Big Data, and HPC Cloud middleware for Exascale systems with millions of processors and accelerators. For the HPC domain, we will discuss the challenges in designing runtime environments for MPI+X (PGAS - OpenSHMEM/UPC/CAF/UPC++, OpenMP, and CUDA) programming models by taking into account support for multi-core systems (Xeon, ARM, and OpenPower), high-performance networks, GPGPUs (including GPUDirect RDMA), and energy-awareness. Features, sample performance numbers, and best practices of using the MVAPICH2 libraries (http://mvapich.cse.ohio-state.edu) will be presented. For the Deep Learning domain, we will focus on popular Deep Learning frameworks (Caffe, CNTK, and TensorFlow) to extract performance and scalability with the MVAPICH2-GDR MPI library. For the Big Data domain, we will focus on high-performance and scalable designs of Spark and Hadoop (including HDFS, MapReduce, RPC, and HBase) and the associated Deep Learning frameworks using native RDMA support for InfiniBand and RoCE. Finally, we will outline the challenges in moving these middleware to Cloud environments using OpenStack, Docker, and Singularity.
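
The MPI+X model referred to above combines inter-node message passing with on-node threading or accelerator offload. As a minimal, generic illustration (not code from MVAPICH2), the following C sketch launches OpenMP threads inside each MPI rank:

```c
/* Minimal MPI+OpenMP hybrid sketch (illustrative only; not from MVAPICH2).
 * Compile e.g.: mpicc -fopenmp hybrid.c -o hybrid
 * Run e.g.:     mpirun -np 4 ./hybrid
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, rank, nranks;

    /* Request thread support so OpenMP threads may make MPI calls if needed. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Each rank spawns OpenMP threads for on-node parallelism. */
    #pragma omp parallel
    {
        #pragma omp critical
        printf("rank %d of %d, thread %d of %d\n",
               rank, nranks, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```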

Bio:
Dr. DK Panda is a Professor and University Distinguished Scholar of Computer Science and Engineering at the Ohio State University. He has published over 450 papers in the area of high-end computing and networking. The MVAPICH2 (High Performance MPI and PGAS over InfiniBand, Omni-Path, iWARP and RoCE) libraries, designed and developed by his research group (http://mvapich.cse.ohio-state.edu), are currently being used by more than 2,950 organizations worldwide (in 86 countries). More than 502,000 downloads of this software have taken place from the project's site. This software is empowering several InfiniBand clusters (including the 2nd, 12th, 15th, and 24th ranked ones) in the TOP500 list. The RDMA packages for Apache Spark, Apache Hadoop and Memcached together with OSU HiBD benchmarks from his group (http://hibd.cse.ohio-state.edu) are also publicly available. These libraries are currently being used by more than 290 organizations in 34 countries. More than 28,500 downloads of these libraries have taken place. High-performance and scalable versions of the Caffe and TensorFlow frameworks are available from http://hidl.cse.ohio-state.edu. Prof. Panda is an IEEE Fellow. More details about Prof. Panda are available at http://www.cse.ohio-state.edu/~panda.


Presentation Slides:
Designing Scalable HPC, Deep Learning, Big Data and Cloud Middleware for Exascale Systems
Presentation Video

11:00 AM - 11:30 AM
Corrfunc: Blazing fast correlation functions with SIMD Intrinsics
Dr. Manodeep Sinha

Abstract: One of the major computational challenges of modern astrophysics is quantifying how galaxies are grouped or clustered. Galaxy clustering is determined by a combination of universal cosmological parameters, gravity, and the physics of galaxy formation. Quantifying galaxy clustering requires computing pair-wise separations -- an inherently quadratic process. Consequently, comparing the observed clustering of galaxies to that theoretically predicted is both useful to advance our understanding of physics and extremely technically complex. Since observational studies of galaxies and the theoretical models contain millions of galaxies, computing the clustering strength becomes a bottleneck in the analysis pipeline. Here I present Corrfunc -- a suite of OpenMP-parallelized clustering codes that target current CPU micro-architecture with custom Advanced Vector Extensions (AVX512F, AVX) and Streaming SIMD Extensions (SSE) intrinsics. By design, Corrfunc is highly optimized and is at least a factor of a few faster than all existing public galaxy clustering correlation function routines. The algorithm within Corrfunc can be easily adapted to a variety of different measurements and has already been implemented for nearest neighbour searches, group finding in galaxy surveys, weak lensing measurements, etc. Corrfunc is covered by a suite of tests and extensive documentation, and is publicly available at https://github.com/manodeep/Corrfunc. Software like Corrfunc highlights how we need a combination of efficient algorithms and custom software designed and tuned for the underlying hardware to go beyond the petascale frontier into the exascale regime.
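
For readers unfamiliar with the pair-counting workload described above, the sketch below shows a brute-force, OpenMP-parallelized pair counter in C; it is only a schematic of the quadratic baseline, not Corrfunc's cell-gridded, SIMD-vectorized kernels:

```c
/* Schematic brute-force pair counter (not the optimized Corrfunc kernel).
 * Counts pairs of points whose separation is below rmax, binned in r.
 * Compile e.g.: gcc -O2 -fopenmp paircount.c -o paircount -lm
 */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define NBINS 16

void count_pairs(const double *x, const double *y, const double *z,
                 long n, double rmax, long counts[NBINS]) {
    const double rmax2 = rmax * rmax;
    for (int b = 0; b < NBINS; b++) counts[b] = 0;

    #pragma omp parallel
    {
        long local[NBINS] = {0};              /* per-thread histogram */
        #pragma omp for schedule(dynamic, 64)
        for (long i = 0; i < n; i++) {
            for (long j = i + 1; j < n; j++) {
                double dx = x[i] - x[j], dy = y[i] - y[j], dz = z[i] - z[j];
                double r2 = dx * dx + dy * dy + dz * dz;
                if (r2 < rmax2)
                    local[(int)(sqrt(r2) / rmax * NBINS)]++;
            }
        }
        #pragma omp critical                  /* merge thread-local histograms */
        for (int b = 0; b < NBINS; b++) counts[b] += local[b];
    }
}

int main(void) {
    long n = 2000;
    double *x = malloc(n * sizeof *x), *y = malloc(n * sizeof *y), *z = malloc(n * sizeof *z);
    for (long i = 0; i < n; i++) {            /* random points in a unit box */
        x[i] = (double)rand() / RAND_MAX;
        y[i] = (double)rand() / RAND_MAX;
        z[i] = (double)rand() / RAND_MAX;
    }
    long counts[NBINS];
    count_pairs(x, y, z, n, 0.1, counts);
    for (int b = 0; b < NBINS; b++) printf("bin %2d: %ld pairs\n", b, counts[b]);
    free(x); free(y); free(z);
    return 0;
}
```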

Bio:
Dr. Manodeep Sinha is a computational astrophysicist based at the Centre for Astrophysics at Swinburne University of Technology, Melbourne. Dr. Sinha is part of a large research collaboration -- the ARC Centre of Excellence for All Sky Astrophysics in 3D (ASTRO 3D) -- a $40 million research effort funded for 7 years and spread over multiple institutions across Australia. Dr. Sinha obtained a PhD from The Pennsylvania State University in 2008 and has been a postdoctoral fellow and research scientist at Vanderbilt University and Swinburne University. His research interests include studying how galaxies form and evolve in the Universe, and how to make realistic comparisons between simulated and observed galaxies. Efficient computational methods form a core requirement of his research, whether in relation to creating large cosmological simulations or recreating real-world observations. While an astrophysicist by profession, his recent research has been to create the optimized, robust software required to solve research problems in astrophysics.


Presentation Slides:
Corrfunc: Blazing fast correlation functions with SIMD Intrinsics
Presentation Video

11:30 AM - 11:50 AM
TEA & COFFEE Break

11:50 AM - 12:00 PM
Group Photo

12:00 PM - 12:10 PM
Analyzing IO Usage Patterns of User Jobs to Improve Overall HPC System Efficiency
Syed Sadat Nazrul, Cherie Huang, Mahidhar Tatineni, Nicole Wolter, Dmitry Mishin, Trevor Cooper and Amit Majumdar

12:10 PM - 12:20 PM
Scalable Software Infrastructure for Integrating Supercomputing with Volunteer Computing
Ritu Arora, Carlos Redondo and Gerald Joshua

12:20 PM - 12:30 PM
Computational Microscopy of Biomolecular Processes using High Performance Computing: Challenges and Perspectives
Divya Nayar

12:30 PM - 1:00 PM
High Performance Networks in the World of AI
Ashrut Ambastha

Abstract: This talk is aimed at professionals interested in discussing the role of current and upcoming interconnects in the field of Artificial Intelligence. We will start by analysing the latest “in-network computing” architecture of InfiniBand and then move on to discuss how these advancements are being applied to address the demands of the emerging markets of AI/Machine Learning.
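
In-network computing is aimed largely at offloading collective operations such as the allreduce that dominates data-parallel deep learning training. The sketch below is a generic MPI-level illustration of that collective (not Mellanox- or SHARP-specific code):

```c
/* Generic gradient-averaging allreduce, the collective that in-network
 * offloads (e.g. switch-based reductions) are designed to accelerate.
 * Illustrative only; not vendor-specific code.
 */
#include <mpi.h>
#include <stdio.h>

#define NGRAD 4   /* a tiny "gradient" vector for illustration */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Pretend each rank computed local gradients on its mini-batch. */
    double grad[NGRAD];
    for (int i = 0; i < NGRAD; i++) grad[i] = rank + 0.1 * i;

    /* Sum gradients across all ranks, then average in place. */
    MPI_Allreduce(MPI_IN_PLACE, grad, NGRAD, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    for (int i = 0; i < NGRAD; i++) grad[i] /= nranks;

    if (rank == 0)
        printf("averaged grad[0] = %.3f\n", grad[0]);

    MPI_Finalize();
    return 0;
}
```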

Bio:
Ashrut Ambastha is a Sr. Staff Architect at Mellanox responsible for defining network fabrics for large-scale InfiniBand clusters and high-performance datacenter fabrics. He is also a member of the application engineering team that works on product designs with Mellanox silicon devices. Prior to Mellanox, he worked for Tata Computational Research Labs in India and was involved in architecting the InfiniBand backbone for Tata’s HPC system “Eka”, which was ranked #4 in the Top500 list at SC07. Ashrut’s professional interests include network topologies, routing algorithms, and PHY signal integrity analysis/simulations. He holds an MTech in Electrical Engineering from the Indian Institute of Technology-Bombay.




1:00 PM - 2:00 PM
LUNCH BREAK & NETWORKING (ICE-BREAKING SESSION)

2:00 PM - 2:30 PM
Performance Portability Challenges for Exascale Computing
Dr. P. (Saday) Sadayappan

Abstract: The increasing trend of heterogeneity and custom architectures makes software productivity and performance portability of high-performance applications extremely challenging. Compilers can play a prominent role in addressing these software challenges. However, a fundamental challenge faced by optimizing compilers is that of modeling and minimizing data movement overheads. The cost of data movement currently dominates the cost of arithmetic/logic operations, both in terms of energy and time. While computational complexity of algorithms in terms of elementary arithmetic/logic operations is quite well understood, the same is not true of the data movement complexity of computations. More effective models of data movement complexity are needed for building effective optimizing compilers for current/emerging platforms. One promising approach is to develop domain/pattern specific optimization strategies. Examples of domain-specific optimization for tensor computations and stencil computations on GPUs will be discussed.
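
As a concrete example of the data-movement optimizations such compilers target, the sketch below shows classic loop tiling for matrix multiplication in C; it is a textbook illustration chosen for this summary, not code from the speaker's compiler frameworks:

```c
/* Classic loop tiling to improve data reuse in matrix multiplication.
 * A generic textbook illustration of reducing data movement, not code
 * from any particular compiler framework.
 */
#include <stdio.h>
#include <stdlib.h>

#define N 512
#define T 64   /* tile size; would normally be tuned to the cache */

/* C += A * B, with the i/k/j loops blocked into TxT tiles so that each
 * tile triple stays cache-resident and is reused many times. */
void matmul_tiled(const double *A, const double *B, double *C) {
    for (int ii = 0; ii < N; ii += T)
        for (int kk = 0; kk < N; kk += T)
            for (int jj = 0; jj < N; jj += T)
                for (int i = ii; i < ii + T; i++)
                    for (int k = kk; k < kk + T; k++) {
                        double a = A[i * N + k];
                        for (int j = jj; j < jj + T; j++)
                            C[i * N + j] += a * B[k * N + j];
                    }
}

int main(void) {
    double *A = calloc(N * N, sizeof *A);
    double *B = calloc(N * N, sizeof *B);
    double *C = calloc(N * N, sizeof *C);
    for (int i = 0; i < N * N; i++) { A[i] = 1.0; B[i] = 2.0; }
    matmul_tiled(A, B, C);
    printf("C[0] = %.1f (expected %.1f)\n", C[0], 2.0 * N);
    free(A); free(B); free(C);
    return 0;
}
```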

Bio:
Dr. P. (Saday) Sadayappan is a University Distinguished Scholar and Professor of Computer Science and Engineering at The Ohio State University. His research interests include compiler optimization for heterogeneous systems, domain/pattern-specific compiler optimization, and characterization of data movement complexity of algorithms. He is a Fellow of the IEEE.


Presentation Slides:

Performance Portability Challenges for Exascale Computing

2:30 PM - 2:45 PM
High Level file system and parallel I/O optimization of DNS Code
Bipin Kumar, Nachiket Manapragada and Neethi Suresh

2:45 PM - 3:00 PM
Performance Analysis of Computational Neuroscience Software NEURON on Knights Corner Many Core Processors
Pramod S. Kumbhar, Subhashini Sivagnanam, Kenneth Yoshimoto, Michael Hines, Ted Carnevale and Amit Majumdar

3:00 PM - 4:00 PM
Discussion Groups
Potential Topics: Benchmarking, Parallel Programming Tools & Environments, Science Gateways, Power Optimization, Memory Optimization, Fault Tolerance, Code Correctness, Software for Managing HPC Systems

4:00 PM - 4:20 PM
TEA & COFFEE Break

4:20 PM - 5:20 PM
Discussion Groups
Potential Topics: Benchmarking, Parallel Programming Tools & Environments, Science Gateways, Power Optimization, Memory Optimization, Fault Tolerance, Code Correctness, Software for Managing HPC Systems

5:20 PM - 5:40 PM
Group Leads Present a Summary/Short Presentation of their Discussions

NETWORKING RECEPTION - 7:00 PM to 9:00 PM


Agenda on December 14, 2018

08:30 AM - 9:15 AM
NETWORKING OVER TEA/COFFEE

09:15 AM - 09:20 AM
Opening Remarks
Mr. Vinodh Kumar Markapuram

09:25 AM - 9:55 AM
Developing IEEE-TCPP Parallel/Distributed Curriculum and NSF Office of Advanced Cyberinfrastructure CyberTraining Program
Dr. Sushil Prasad


Abstract: The NSF-supported Center for Parallel and Distributed Computing Curriculum Development and Educational Resources (CDER), in collaboration with the IEEE TC on Parallel Processing (TCPP), developed undergraduate curriculum guidelines for parallel and distributed computing (PDC) from 2010 to 2012. Our goal has been to migrate Computer Science (CS) and Computer Engineering (CE) courses in the first two years from the sequential model toward the now pervasive paradigm of parallel computing. This curriculum initiative now has over 100 early-adopter institutions nationally and internationally, including in India and its vicinity. It has heavily influenced the ACM/IEEE Taskforce on Computer Science Curricula 2013 in their PDC thrust. I will describe this initiative and its current update efforts along the key aspects of big data, energy, and distributed computing.
The US National Science Foundation Office of Advanced Cyberinfrastructure (OAC) has introduced a CyberTraining program (NSF 19-524) for education and training aimed at fully preparing the scientific workforce for the nation's research enterprise to innovate and utilize high performance computing resources, tools, and methods. The community response in its two rounds of competition has exceeded expectations. I will introduce this program, as well as research and education programs for early-career faculty such as CAREER and CRII. I will also touch on NSF's ten big ideas, including Harnessing the Data Revolution.

Bio:
Sushil K. Prasad is a Program Director at the US National Science Foundation in its Office of Advanced Cyberinfrastructure (OAC) in the Computer and Information Science and Engineering (CISE) directorate, leading its emerging research and education programs such as CAREER, CRII, Expeditions, CyberTraining, and the most recently introduced OAC-Core research. He is an ACM Distinguished Scientist and a Professor of Computer Science at Georgia State University. He is the director of the Distributed and Mobile Systems Lab, carrying out research in Parallel, Distributed, and Data Intensive Computing and Systems. He has been a twice-elected chair of the IEEE-CS Technical Committee on Parallel Processing (TCPP), and leads the NSF-supported TCPP Curriculum Initiative on Parallel and Distributed Computing for undergraduate education.


Presentation Slides:
Developing IEEE-TCPP Parallel/Distributed Curriculum and NSF Office of Advanced Cyberinfrastructure CyberTraining Program
Presentation Video

10:00 AM - 10:30 AM
A Relook at Parallel Algorithms for Graphs
Dr. Kishore Kothapalli

Abstract: Recent advances in designing and implementing parallel algorithms for a variety of graph problems have been quite successful. These success stories often come with a host of data structure and memory bandwidth optimizations apart from innovations in algorithmic techniques.
On the other hand, there is a sizeable body of work that considers the impact of specific structural properties of graphs in arriving at practically better parallel graph algorithms. These structural properties can vary from the nature of the degree distribution to the connectivity of the graph.
In this talk, we will show examples of applying the above process to algorithms for testing connectivity and computing metrics on graphs. Experimental evidence to illustrate the advantages of these algorithms will also be presented.
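
As a small, self-contained example of the kind of parallel graph primitive discussed above, the sketch below tests connectivity by label propagation over a CSR graph with OpenMP; it is a generic baseline shown for illustration, not the speaker's algorithm:

```c
/* Simple parallel connectivity test via label propagation on a CSR graph.
 * A generic baseline shown for illustration, not the speaker's algorithm.
 * Compile e.g.: gcc -O2 -fopenmp connectivity.c -o connectivity
 */
#include <omp.h>
#include <stdio.h>
#include <string.h>

/* Undirected 5-vertex example graph in CSR form with edges
 * 0-1, 1-2, 3-4, i.e. two connected components. */
#define NV 5
static const int row_ptr[NV + 1] = {0, 1, 3, 4, 5, 6};
static const int col_idx[]       = {1, 0, 2, 1, 4, 3};

int main(void) {
    int label[NV], next[NV];
    for (int v = 0; v < NV; v++) label[v] = v;   /* every vertex starts in its own component */

    int changed = 1;
    while (changed) {
        changed = 0;
        /* Each vertex adopts the smallest label among itself and its neighbours.
         * Reading from label[] and writing to next[] keeps the sweep race-free. */
        #pragma omp parallel for reduction(|| : changed)
        for (int v = 0; v < NV; v++) {
            int best = label[v];
            for (int e = row_ptr[v]; e < row_ptr[v + 1]; e++)
                if (label[col_idx[e]] < best) best = label[col_idx[e]];
            next[v] = best;
            if (best != label[v]) changed = 1;
        }
        memcpy(label, next, sizeof label);
    }

    for (int v = 0; v < NV; v++)
        printf("vertex %d -> component %d\n", v, label[v]);
    return 0;
}
```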

Bio:
Dr. Kishore Kothapalli is presently an Associate Professor at the International Institute of Information Technology, Hyderabad, where he has been working since 2006. Prior to that, he obtained his doctoral degree in Computer Science from Johns Hopkins University, USA, and his Master's degree in Computer Science from the Indian Institute of Technology, Kanpur. His current research interests are in parallel algorithms for problems on graphs, sparse matrices, and the like. He is also interested in data structures for geometric problems.


10:30 AM to 11:00 AM
Phase field modelling: current challenges and opportunities for high performance computing
Dr. M. P. Gururajan

Abstract: The properties of engineering materials are decided by their microstructure -- the structure of materials at length scales larger than atomic but smaller than the visible range. Microstructures can be tuned during the processing of materials, and, under service conditions, they continue to evolve. Thus, the formation and evolution of microstructures is one of the key areas of materials engineering. Continuum models known as phase field models have been used in the last three decades to model microstructural evolution. These models are physics based; they help us understand the reasons and mechanisms behind microstructural evolution, and they are also useful in designing microstructures of engineering materials. Thus, phase field models are of both academic and practical interest. One of the challenges in implementing phase field models is computational; in this presentation, I will make an attempt to show the need for validated and benchmarked open source phase field codes, and for large-scale computations using such codes, in order to make progress in developing advanced materials.
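
To make the computational task concrete, the sketch below advances a minimal Allen-Cahn type phase field model, assuming the textbook double-well free energy, with an explicit finite-difference step on a small periodic grid; a production phase field code would be far more elaborate:

```c
/* Minimal explicit Allen-Cahn phase field update on a 2D periodic grid.
 * Assumes the textbook double-well free energy f(phi) = (phi^2 - 1)^2 / 4,
 * so dphi/dt = -M * (phi^3 - phi - kappa * laplacian(phi)).
 * Illustrative sketch only, not a production phase field code.
 */
#include <stdio.h>
#include <stdlib.h>

#define NX 128
#define NY 128

int main(void) {
    double *phi    = malloc(NX * NY * sizeof *phi);
    double *phinew = malloc(NX * NY * sizeof *phinew);
    const double M = 1.0, kappa = 1.0, dt = 0.01, dx = 1.0;

    /* Small random initial condition around phi = 0 (a quenched mixture). */
    for (int i = 0; i < NX * NY; i++)
        phi[i] = 0.02 * ((double)rand() / RAND_MAX - 0.5);

    for (int step = 0; step < 1000; step++) {
        for (int i = 0; i < NX; i++) {
            int ip = (i + 1) % NX, im = (i + NX - 1) % NX;   /* periodic neighbours */
            for (int j = 0; j < NY; j++) {
                int jp = (j + 1) % NY, jm = (j + NY - 1) % NY;
                double p = phi[i * NY + j];
                double lap = (phi[ip * NY + j] + phi[im * NY + j] +
                              phi[i * NY + jp] + phi[i * NY + jm] - 4.0 * p) / (dx * dx);
                phinew[i * NY + j] = p - dt * M * (p * p * p - p - kappa * lap);
            }
        }
        double *tmp = phi; phi = phinew; phinew = tmp;       /* swap time levels */
    }

    printf("phi(0,0) after 1000 steps: %f\n", phi[0]);
    free(phi); free(phinew);
    return 0;
}
```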

Bio:
Dr. M. P. Gururajan is presently an Associate Professor in the Department of Metallurgical Engineering and Materials Science at the Indian Institute of Technology, Bombay, where he has been working since 2009. He obtained his M.Sc. (Engg.) and Ph.D. in Metallurgy from the Indian Institute of Science, Bangalore. His research interests include modelling of microstructural evolution, atomistic (Monte Carlo and molecular dynamics) and continuum (phase field) models, materials mechanics and thermodynamics, phase transformations, and deformation- and phase-transformation-induced microstructural changes. Apart from teaching courses in materials science, he has been actively involved in the conduct of the Technical Communication Skills course at IIT Bombay. He has offered an NPTEL online course on phase field modelling: the materials science, mathematics and computational aspects.


Presentation Slides:

Phase Field Modelling - Current Challenges and Opportunities for High Performance Computing
Movie That Accompanied the Presentation

11:00 AM - 11:30 AM
Challenges in fluid flow simulations using Exa-scale computing
Dr. M.K. Verma

Abstract: HPC has seen steep growth in hardware, but a similar growth in parallel programming is lacking. In this talk I will attempt to highlight the challenges of simulating fluid flows using a large number of processors. As a test case, I will present a spectral solver, TARANG, that has been scaled up to 196608 processors on SHAHEEN II of KAUST. I will also discuss the complexities of Fast Fourier Transform (FFT) and finite difference schemes.
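
Pseudo-spectral solvers of this kind evaluate spatial derivatives via FFTs. The sketch below shows that basic pattern for a 1D derivative using FFTW; it is a generic illustration of the method, not code from TARANG or FFTK:

```c
/* 1D spectral derivative with FFTW: d/dx of u(x) = sin(x) on [0, 2*pi).
 * A generic illustration of the pseudo-spectral pattern, not TARANG/FFTK code.
 * Compile e.g.: gcc spectral.c -o spectral -lfftw3 -lm
 */
#include <fftw3.h>
#include <math.h>
#include <stdio.h>

#define N  64
#define PI 3.14159265358979323846

int main(void) {
    fftw_complex *u  = fftw_malloc(sizeof(fftw_complex) * N);
    fftw_complex *uh = fftw_malloc(sizeof(fftw_complex) * N);
    fftw_plan fwd = fftw_plan_dft_1d(N, u, uh, FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_plan bwd = fftw_plan_dft_1d(N, uh, u, FFTW_BACKWARD, FFTW_ESTIMATE);

    for (int j = 0; j < N; j++) {             /* u(x) = sin(x) on the grid */
        u[j][0] = sin(2.0 * PI * j / N);
        u[j][1] = 0.0;
    }

    fftw_execute(fwd);                        /* physical -> spectral */

    for (int k = 0; k < N; k++) {             /* multiply each mode by i*k */
        int kk = (k <= N / 2) ? k : k - N;    /* signed wavenumber */
        if (k == N / 2) kk = 0;               /* drop the Nyquist mode for an odd derivative */
        double re = uh[k][0], im = uh[k][1];
        uh[k][0] = -kk * im;
        uh[k][1] =  kk * re;
    }

    fftw_execute(bwd);                        /* spectral -> physical (unnormalized) */

    printf("du/dx at x=0: %.6f (exact: cos(0) = 1)\n", u[0][0] / N);

    fftw_destroy_plan(fwd); fftw_destroy_plan(bwd);
    fftw_free(u); fftw_free(uh);
    return 0;
}
```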

Reference:
A. G. Chatterjee, M. K. Verma, A. Kumar, R. Samtaney, B. Hadri, and R. Khurram, Scaling of a Fast Fourier Transform and a pseudo-spectral fluid solver up to 196608 cores, J. Parallel Distrib. Comput., 113, 77 (2018).

http://turbulencehub.org (for TARANG, FFTK download)

Bio:
Dr. Mahendra K. Verma is presently a Professor of Physics at the Indian Institute of Technology, Kanpur, where he has been working since 1994. He obtained his doctoral degree from the University of Maryland, College Park, USA. He leads an interdisciplinary group working in the area of turbulence and nonlinear physics. The broad interests of his group are magnetohydrodynamic and convective turbulence, application of field-theoretic methods to turbulence, direct numerical simulation of turbulence, high performance computing, and non-equilibrium statistical mechanics. His group has developed the open-source code TARANG and demonstrated its usage for turbulence simulation at extreme scales. He is an avid teacher and has authored two books: Introduction to Mechanics, and Physics of Buoyant Flows: From Instabilities to Turbulence. He is a recipient of the prestigious Swarnajayanti fellowship of the Department of Science and Technology, India, in 2006. He was awarded the Indian National Science Academy (INSA) Teachers Award in 2016. He received the Dr. A.P.J. Abdul Kalam Cray HPC Award for the development of TARANG in 2018.



Presentation Slides:
Challenges in fluid flow simulations using Exascale computing
Presentation Video

11:30 AM - 11:50 AM
TEA & COFFEE BREAK

11:50 AM - 12:00 PM
Research Collaborator Recommendation using Hybrid Recommender System in Science Gateways through Conversational Agent
Sai Swathi Sivarathri, Yuanxun Zhang, Arjun Chandrashekara Ankathatti and Anjaneya Prasad Calyam

12:00 PM - 12:10 PM
High Impact Applications of Optimization and Statistics (Big-Data) on Multi-Petaflop Systems, enabled by MPPLab (E-Teacher) - A Parallel Application Software Composition Framework
China Vudutula and Narendra K Karmarkar

12:10 PM - 12:20 PM
High-Level Approaches for Leveraging Deep-Memory Hierarchies on Modern Supercomputers
Antonio Gómez-Iglesias and Ritu Arora

12:20 PM - 12:35 PM
Hybrid Parallelization of Particle in Cell Monte Carlo Collision (PIC-MCC) algorithm for simulation of Low temperature Plasmas
Bhaskar Chaudhury, Mihir Shah, Unnati Parekh, Hasnain Gandhi, Paramjeet Desai, Keval Shah, Anusha Phadnis, Miral Shah, Mainak Bandyopadhyay and Arun Chakraborty

12:35 PM - 12:45 PM
Semi-Automatic Code Modernization for Optimal Parallel I/O
Ritu Arora, Trung Nguyen Ba

12:45 PM - 1:00 PM
Overcoming MPI Communication Overhead for Distributed Community Detection
Naw Safrin Sattar and Shaikh Arifuzzaman

1:00 PM - 2:00 PM
LUNCH

2:00 PM - 2:25 PM
Scientific Computing at Exascale
Dr. Santosh Ansumali


Abstract:
In the last decade, scientific computing has provided a number of new insights into physical systems and is now regularly used in engineering design for real-world applications. Exascale computing should enable the execution of advanced simulation algorithms for computational physics on problems of a large size, providing new insight into complex phenomena such as turbulence. The primary challenge in utilizing the full potential of emerging computing infrastructure for such large system sizes lies in successfully handling the intensive floating point computations coupled with increased data movement operations. For example, direct numerical simulation of turbulence requires simulating a system with billions of degrees of freedom. Similarly, in the case of a complex biological reaction network, one may need to generate billions of Gaussian random variables. In this talk, I will give a few examples of physics-based ideas which can be used for the creation of efficient simulation algorithms. For example, I will show that the idea of timescale separation suggests that the sequential fraction of usual finite difference equations can be drastically reduced by introducing delayed difference equations. Furthermore, I will show that a new class of pseudo-random number generators can be formulated based on the idea of molecular chaos.
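
For context on the Gaussian-random-number workload mentioned above, the sketch below shows the classical Box-Muller transform in C; it is a standard reference method, not the molecular-chaos-based generator proposed in the talk:

```c
/* Standard Box-Muller transform: turns pairs of uniform random numbers
 * into pairs of independent standard Gaussian variates. Shown only as a
 * classical reference point; not the generator proposed in the talk.
 */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define TWO_PI 6.28318530717958647692

/* Returns one N(0,1) sample per call (the second of each pair is cached). */
static double gaussian(void) {
    static int have_cached = 0;
    static double cached;
    if (have_cached) { have_cached = 0; return cached; }

    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);  /* keep u1 away from 0 */
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double r  = sqrt(-2.0 * log(u1));
    cached = r * sin(TWO_PI * u2);
    have_cached = 1;
    return r * cos(TWO_PI * u2);
}

int main(void) {
    /* Empirical mean and variance over many samples should approach 0 and 1. */
    const long n = 1000000;
    double sum = 0.0, sumsq = 0.0;
    for (long i = 0; i < n; i++) {
        double g = gaussian();
        sum += g; sumsq += g * g;
    }
    double mean = sum / n;
    printf("mean = %.4f, variance = %.4f\n", mean, sumsq / n - mean * mean);
    return 0;
}
```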


Bio:
Dr. Santosh Ansumali is presently an Associate Professor with the Engineering Mechanics Unit of the Jawaharlal Nehru Centre for Advanced Scientific Research (JNCASR), Bengaluru, where he has been working since 2009. He is associated with SankyaSutra Lab, a start-up incubated at JNCASR, in the capacity of Chief Technical Advisor. He obtained his Ph.D. from ETH Zurich, the Swiss Federal Institute of Technology. He leads the research activities of the mesoscale simulation lab. His research interests include mesoscale simulation methods (the Lattice Boltzmann Method, Direct Simulation Monte Carlo), kinetic theory, and numerical algorithms focussing on turbulence modeling, computational fluid dynamics, and finite volume methods for population balance. He is the winner of the ETH Zurich (Switzerland) medal for an outstanding PhD thesis in 2005. He is a recipient of the Ramanujan Fellowship of the Department of Science and Technology, India, in 2009.



2:30 PM - 2:55 PM
Challenges in the Indian Approach to Exascale Computing
Dr. P.K. Sinha

Abstract: Several countries are vying for HPC (High Performance Computing) leadership because supercomputers are seen as the most powerful tool for research and innovation in all disciplines. After having crossed the 100 PetaFlops barrier, HPC players are now looking forward to achieving the Exascale target. It is predicted that the first exascale system will be built by the year 2020. Several countries, including the USA, China, Japan, France, and Russia, launched their exascale initiatives more than a decade ago. India joined this league in 2013 with the preparation of its National Supercomputing Mission (NSM) project proposal, which was approved by the Government in 2015. The Top500 list of supercomputers, announced every six months, provides a fairly good idea of the pace of developments in supercomputing and the major players. Those making inferences from this list often wonder if India continues to be among the exascale players at all. In this presentation, I shall make an attempt to answer this question and discuss the challenges in the Indian approach to exascale computing.

Bio:
Dr. Pradeep Kumar Sinha, an engineer turned academician, is the Vice Chancellor and Director of the International Institute of Information Technology (IIIT), Naya Raipur. Earlier he was with the Centre for Development of Advanced Computing (C-DAC), where he led national programs in the areas of Supercomputing, Grid Computing, and Health Informatics. He is an educationist, researcher, scientist, inventor, and internationally acclaimed author of computer textbooks. He has significantly contributed to the advancement of Science & Technology and Technical Education. His technical contributions include national projects, supercomputing systems and facilities, healthcare products and solutions, international patents, a number of technical papers, and six books in the area of Computer Science & Engineering. Books authored by him are published and marketed by several national and international publishers. Indian, American, and other universities across the globe cite his books as textbooks for their courses. On public demand, books authored by him have been translated into languages such as Japanese and Hindi. He was instrumental in commissioning the first National Supercomputing Facility at C-DAC, Pune in 1998 and led the C-DAC team that designed and engineered the facility. He also led C-DAC’s team to design and commission the PARAM Yuva II supercomputer in 2013 at C-DAC, Pune, which ranked 69th among the world’s Top 500 supercomputers in the June 2013 list. The system ranked 44th in the world, 9th in Asia Pacific, and number one in India as per the November 2013 list of the world’s Green 500 supercomputers.

He is a Fellow of the IEEE (Institute of Electrical and Electronics Engineers, USA), a Fellow of the CSI (Computer Society of India), and an ACM Distinguished Engineer (Association for Computing Machinery, USA). He has several awards to his credit, notable among them the VASVIK Research Award and the Intel Pathfinder Award. Since 2015, Dr. Sinha has been deeply involved in building IIIT-Naya Raipur into a premier academic institute of the country.


Presentation Slides:
Challenges in the Indian Approach to Exascale Computing
Presentation Video

2:55 PM - 3:10 PM
A review of dimensionality reduction in high-dimensional data using multi-core and many-core architecture
Siddheshwar Patil and Dinesh Kulkarni

3:10 PM - 3:30 PM
TEA & COFFEE BREAK

3:30 PM - 5:30 PM
Bring Your Own Code, Parallel I/O Tutorial
Dr. Amit Majumdar, Dr. Ritu Arora, and Colleagues from C-DAC, Pune

5:30 PM
Closing Remarks
Dr. Amit Majumdar, Dr. Ritu Arora, and Colleagues from C-DAC, Pune

Dinner - self-paid


Deadlines for Papers

  • Paper submission deadline (firm): November 15, 2018 (AOE) (previous deadlines: October 21, October 8, and October 1, 2018)
  • Notifications of acceptance/rejection sent by: November 25, 2018
  • Registration for the workshop to be completed by: November 27, 2018
  • Camera-ready copies of accepted papers due on: December 1, 2018

  • Information on submitting the papers to the workshop is available here.

Tentative Deadlines for the Participation Grant

  • Application deadline for the participation grant: September 25, 2018 (AOE)
  • Notification of acceptance of participation grant to be sent by: October 2, 2018
  • Hotel and flight reservations to be completed by (for the students selected from the U.S. institutions only): October 15, 2018
  • The workshop participants arrive in New Delhi, India by: December 12, 2018

  • Information on submitting the student travel grant application is available here.

Workshop Date

  • December 13-14, 2018

Committee


Organizing Committee

  • Amitava Majumdar, San Diego Supercomputer Center (SDSC), UC San Diego, La Jolla, California, USA (General Chair)
  • Ritu Arora, Texas Advanced Computing Center (TACC), UT Austin, Austin, Texas, USA (General Chair)
  • Sharda Dixit, Centre for Development of Advanced Computing (C-DAC), Pune, India (Program Co-Chair)
  • Anil Kumar Gupta, C-DAC, Pune, India (Program Co-Chair)
  • Vinai Kumar Singh, Indraprastha Engineering College, Ghaziabad, India (Event Promotion)
  • Venkatesh Shenoi, C-DAC, Pune, India (Communications Chair)
  • Vinodh Kumar Markapuram, C-DAC, Pune, India
  • Abhishek Das, C-DAC, Pune, India
  • Gaurav Rajput, Neilson Global Holdings, India
  • Sweta Anmulwar, C-DAC, Pune, India
  • Richa Jha, C-DAC, Pune, India

Technical Program Committee

  • Amit Ruhela, Ohio State University, Ohio, USA
  • Amitava Majumdar, San Diego Supercomputer Center (SDSC), UC San Diego, La Jolla, California, USA
  • Anil Kumar Gupta, C-DAC, Pune, India
  • Anirban Jana, Pittsburgh Supercomputing Center (PSC), Pittsburgh, USA
  • Aniruddha Gokhale, Vanderbilt University, Nashville, Tennessee, USA
  • Antonio Gomez, Intel, Hillsboro, Oregon, USA
  • Amarjeet Sharma, C-DAC, Pune, India
  • Damon McDougall, Texas Advanced Computing Center (TACC), UT Austin, Texas, USA
  • Devangi Parekh, University of Texas at Austin, Texas, USA
  • Dinesh Rajagopal, BULL/AtoS, Bangalore, India
  • Galen Arnold, National Center for Supercomputing Applications, Illinois, USA
  • Hari Subramoni, Ohio State University, Ohio, USA
  • Krishna Muriki, Lawrence Berkeley National Laboratory, California, USA
  • Lars Koesterke, Texas Advanced Computing Center, UT Austin, Austin, USA
  • Manu Awasthi, IIT-Gandhinagar, Gandhinagar, India
  • Mahidhar Tatineni, San Diego Supercomputer Center (SDSC), UC San Diego, La Jolla, California, USA
  • Milind Jagtap, Centre for Development of Advanced Computing (C-DAC), Pune, India
  • Nisha Agarwal, Centre for Development of Advanced Computing (C-DAC), Pune, India
  • Purushotham Bangalore, University of Alabama at Birmingham, Alabama, USA
  • Ritu Arora, Texas Advanced Computing Center (TACC), UT Austin, Austin, Texas, USA
  • Robert Sinkovits, San Diego Supercomputer Center (SDSC), UC San Diego, La Jolla, California, USA
  • Sandeep Joshi, C-DAC, Pune, India
  • Sharda Dixit, Centre for Development of Advanced Computing (C-DAC), Pune, India
  • Si Liu, Texas Advanced Computing Center, UT Austin, Austin, Texas, USA
  • Soham Ghosh, Intel, India
  • Subhashini Sivagnanam, San Diego Supercomputer Center (SDSC), UC San Diego, La Jolla, California, USA
  • Sukrit Sondhi, Fulcrum Worldwide, New Jersey, USA
  • Suresh Marru, Indiana University Bloomington, Indiana, USA
  • Tajendra Singh, University of California, Los Angeles (UCLA), California, USA
  • Venkatesh Shenoi, C-DAC, Pune, India
  • Victor Eijkhout, Texas Advanced Computing Center, UT Austin, Austin, Texas, USA
  • Vinai Kumar Singh, Indraprastha Engineering College, Ghaziabad, India

Webmaster





Current Sponsors



Atos
Mellanox


In-Kind Sponsors


Proceedings publisher: Springer
Online publicity of the event:
Handling event registrations: Neilson Global Holdings



Sponsorship Levels

CFP (Call for Papers) and Call for Abstracts


SCEC 2018 workshop proceedings will be published by Springer in their prestigious Communications in Computer and Information Science (CCIS) series.

We invite authors to submit their original and unpublished work that is not under review for another publication. Full papers (10-15 pages in length, including references) should be formatted as per the Springer-specified guidelines (details below) for the double-blind review.

We also invite short papers or extended abstracts for the lightning talks. The short papers should also be prepared following the Springer guidelines, and can be 6-9 pages in length, including references. A selected number of short papers will be included in the workshop proceedings. Short papers that are not included in the proceedings will still be archived and made publicly accessible through Figshare and a GitHub repository, and these submissions will have a DOI for future citations. Those interested in presenting lightning talks are also required to submit PDF copies of a rough draft of their slides. The PDFs of the abstract and the slides can be merged into a single PDF file and uploaded to EasyChair for peer review.

The PDF versions of the papers/extended abstracts can be submitted for review through the SCEC 2018 submission website:

https://easychair.org/conferences/?conf=scec2018

The review process is double-blind, and each paper will be reviewed by at least three committee members and/or external reviewers. The papers will be evaluated on the basis of their relevance to the workshop theme, the clarity of the content presented, the originality of the work, and the impact of the work on the community.


Springer's formatting information is available at the following link:

https://www.springer.com/us/computer-science/lncs/conference-proceedings-guidelines

Update: In April 2019, the SCEC 2018 proceedings were made available online by Springer. The volume number of the proceedings is CCIS 964.

Workshop Registration

The registration fee for the workshop is Rs. 3,200 (about US $46) for students from Indian academic institutions, and Rs. 5,000 (about US $68) for faculty from Indian academic institutions. The registration fee for participants from non-academic institutions in India is Rs. 10,000 (about US $138). The registration fee for all participants from institutions outside India is US $250. The fee can be paid by check, direct bank deposit, or credit/debit card. All workshop attendees should register in advance by filling out the registration form linked below.

Registration Form


Application Form for Travel Award

We are happy to announce the availability of funds for covering the workshop participation cost for a limited number of undergraduate/graduate students. Students from U.S. institutions who are interested in applying for this participation grant should submit the following form by September 25, 2018:

Students in the U.S.



Students from institutions outside the U.S. can apply for this participation grant by submitting the following form by September 25, 2018:


Workshop Venue

The LaLiT

Barakhamba Avenue, Connaught Place, New Delhi - 110001

Contact

For any questions regarding the workshop, please contact us at: scecforum@gmail.com