SCEC 2018

Overview

Supercomputers power discoveries and reduce the time-to-results in a wide range of disciplines, including engineering, the physical sciences, and healthcare. They are widely regarded as vital for staying competitive in defense, in research and development across disciplines, in the financial sector, in several mainstream businesses, and even in agriculture. As with any other computer, an integral requirement for using supercomputers is the availability of software. Scalable and efficient software is typically required to use large-scale supercomputing platforms optimally, and thereby to effectively leverage the investments in advanced CyberInfrastructure (CI). However, developing and maintaining such software is challenging for several reasons: (1) there are no well-defined processes or guidelines for writing software that can ensure high performance on supercomputers, and (2) there is a shortage of trained workforce with skills in both software engineering and supercomputing. With the rapid advancement of computer architecture, the processors used in supercomputers are becoming increasingly complex, which in turn makes the task of developing efficient software for supercomputers even more challenging. To mitigate these challenges, there is a need for a common platform that brings together different stakeholders from the areas of supercomputing and software engineering. To provide such a platform, the second workshop on "Software Challenges to Exascale Computing (SCEC)" is being organized in Delhi, India, in December 2018.

The SCEC18 workshop will not only inform the participants about the challenges in large-scale HPC software development but will also steer them in the direction of building international collaborations for finding solutions to those challenges. The workshop will provide a forum through which hardware vendors and software developers can communicate with each other and influence the architecture of the next generation supercomputing systems and the supporting software stack. By fostering cross-disciplinary associations, the workshop will serve as a stepping-stone towards innovations in the future.

Benefits of the Workshop

The SCEC18 workshop offers the following benefits to different groups of participants:

  • Researchers and users in academia: an opportunity to disseminate their results to the public and find potential collaborators.
  • Software developers: an opportunity to understand future trends in HPC hardware and to develop collaborations in the code modernization and optimization disciplines.
  • HPC service providers: an opportunity to understand the challenges the community faces in using HPC platforms efficiently, and to connect with the user community.
  • HPC hardware vendors: an opportunity to understand the evolving needs of the HPC community, and to network with potential customers.
  • Students: an opportunity to network with HPC and advanced software engineering professionals, faculty, and researchers; to learn about internship and career opportunities; and to discuss options for higher education.

The workshop proceedings will be of interest to students, researchers, and other professionals working in the areas of HPC software and large-scale supercomputers.


Topics of interest include, but are not limited to:

  • Tools and techniques for code modernization
  • Generative programming techniques in HPC
  • Supporting software and middleware for HPC environments: e.g., MPI libraries
  • Tools for profiling, debugging, and parallelizing applications
  • Tools and techniques for memory and power optimization
  • Large-scale HPC applications (tuning, optimization, and implementation on HPC resources)
  • HPC Science Gateways, Containerization (HPC in the Cloud)
  • Fault-tolerance
  • Filesystems and Parallel I/O
  • High-level interfaces, libraries, compilers, and runtime systems for parallel programming
  • Domain-Specific Languages in HPC
  • Best practices for HPC software development


Invited Speakers


  • Dr. Dan Stanzione, Associate Vice President for Research and Executive Director, Texas Advanced Computing Center
  • Dr. D.K. Panda, Professor and University Distinguished Scholar of Computer Science and Engineering, Ohio State University
  • Dr. P. K. Sinha, Vice Chancellor and Director, International Institute of Information Technology (IIIT), Naya Raipur
  • Dr. Manodeep Sinha, Computational Astrophysicist, Centre for Astrophysics, Swinburne University of Technology, Melbourne
  • Dr. Sushil Prasad, Program Director, National Science Foundation (NSF); Professor, Georgia State University
  • Dr. M. K. Verma, Professor, Indian Institute of Technology, Kanpur
  • Dr. Kishore Kothapalli, Associate Professor, International Institute of Information Technology, Hyderabad
  • Dr. P. (Saday) Sadayappan, Professor and University Distinguished Scholar of Computer Science and Engineering, Ohio State University



Agenda on December 13, 2018

09:30 AM - 10:00 AM
REGISTRATION, TEA/COFFEE

10:00 AM - 10:15 AM
Opening Remarks
TBD

10:15 AM - 10:45 AM
Invited Talk-1
Dr. Dan Stanzione

Abstract: Advanced cyberinfrastructure and the ability to perform large-scale simulations and accumulate massive amounts of data have revolutionized scientific and engineering disciplines. In this talk I will give an overview of the National Strategic Computing Initiative (NSCI) that was launched by Executive Order (EO) 13702 in July 2015 to advance U.S. leadership in high performance computing (HPC). The NSCI is a whole-of-nation effort designed to create a cohesive, multi-agency strategic vision and Federal investment strategy, executed in collaboration with industry and academia, to maximize the benefits of HPC for the United States. I will then discuss NSF’s role in NSCI and present three cross-cutting software programs ranging from extreme scale parallelism to supporting robust, reliable and sustainable software that will support and advance sustained scientific innovation and discovery.

Bio: Dr. Stanzione is the Executive Director of the Texas Advanced Computing Center (TACC) at The University of Texas at Austin. A nationally recognized leader in high performance computing, Stanzione has served as deputy director since June 2009 and assumed the Executive Director post on July 1, 2014.

He is the principal investigator (PI) for several leading projects including a multimillion-dollar National Science Foundation (NSF) grant to deploy and support TACC's Stampede supercomputer over four years. Stanzione is also the PI of TACC's Wrangler system, a supercomputer designed specifically for data-focused applications. He served for six years as the co-director of CyVerse, a large-scale NSF life sciences cyberinfrastructure in which TACC is a major partner. In addition, Stanzione was a co-principal investigator for TACC's Ranger and Lonestar supercomputers, large-scale NSF systems previously deployed at UT Austin. Stanzione previously served as the founding director of the Fulton High Performance Computing Initiative at Arizona State University and served as an American Association for the Advancement of Science Policy Fellow in the NSF's Division of Graduate Education.

Stanzione received his bachelor's degree in electrical engineering and his master's degree and doctorate in computer engineering from Clemson University, where he later directed the supercomputing laboratory and served as an assistant research professor of electrical and computer engineering.



10:45 AM - 11:15 AM
Designing Scalable HPC, Deep Learning, Big Data and Cloud Middleware for Exascale Systems
Dr. D.K. Panda

Abstract: This talk will focus on the challenges in designing HPC, Deep Learning, Big Data, and HPC Cloud middleware for Exascale systems with millions of processors and accelerators. For the HPC domain, we will discuss the challenges in designing runtime environments for MPI+X (PGAS - OpenSHMEM/UPC/CAF/UPC++, OpenMP, and CUDA) programming models by taking into account support for multi-core systems (Xeon, ARM, and OpenPOWER), high-performance networks, GPGPUs (including GPUDirect RDMA), and energy-awareness. Features, sample performance numbers, and best practices for using the MVAPICH2 libraries (http://mvapich.cse.ohio-state.edu) will be presented. For the Deep Learning domain, we will focus on popular Deep Learning frameworks (Caffe, CNTK, and TensorFlow) and on extracting performance and scalability with the MVAPICH2-GDR MPI library. For the Big Data domain, we will focus on high-performance and scalable designs of Spark and Hadoop (including HDFS, MapReduce, RPC, and HBase) and the associated Deep Learning frameworks using native RDMA support for InfiniBand and RoCE. Finally, we will outline the challenges in moving these middleware to Cloud environments using OpenStack, Docker, and Singularity.

Bio: Dr. DK Panda is a Professor and University Distinguished Scholar of Computer Science and Engineering at the Ohio State University. He has published over 450 papers in the area of high-end computing and networking. The MVAPICH2 (High Performance MPI and PGAS over InfiniBand, Omni-Path, iWARP and RoCE) libraries, designed and developed by his research group (http://mvapich.cse.ohio-state.edu), are currently being used by more than 2,950 organizations worldwide (in 86 countries). More than 502,000 downloads of this software have taken place from the project's site. This software is empowering several InfiniBand clusters (including the 2nd, 12th, 15th, and 24th ranked ones) in the TOP500 list. The RDMA packages for Apache Spark, Apache Hadoop, and Memcached, together with the OSU HiBD benchmarks from his group (http://hibd.cse.ohio-state.edu), are also publicly available. These libraries are currently being used by more than 290 organizations in 34 countries. More than 28,500 downloads of these libraries have taken place. High-performance and scalable versions of the Caffe and TensorFlow frameworks are available from http://hidl.cse.ohio-state.edu. Prof. Panda is an IEEE Fellow. More details about Prof. Panda are available at http://www.cse.ohio-state.edu/~panda.
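To make the MPI+X programming model referenced in the abstract concrete, here is a minimal, hypothetical hybrid MPI+OpenMP example in C. It is a generic sketch for illustration only, not code from the MVAPICH2 project; the file name hybrid_hello.c is a placeholder, and it assumes an MPI implementation and a compiler with OpenMP support are available.

    /*
     * Minimal hybrid MPI+OpenMP ("MPI+X") sketch: one MPI process per node,
     * several OpenMP threads inside each process. Hypothetical example;
     * compile with an MPI wrapper, e.g.: mpicc -fopenmp hybrid_hello.c -o hybrid_hello
     */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, nranks;

        /* Request FUNNELED support: only the master thread makes MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        #pragma omp parallel
        {
            /* Each OpenMP thread reports which MPI rank it belongs to. */
            #pragma omp critical
            printf("MPI rank %d of %d, OpenMP thread %d of %d\n",
                   rank, nranks, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }

Launched with, for example, two ranks and OMP_NUM_THREADS=4, the sketch prints one line per (rank, thread) pair; MPI libraries such as MVAPICH2 provide the compiler wrapper and launcher used to build and run programs of this kind.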


11:15 AM - 11:45 AM
Corrfunc: Blazing fast correlation functions with SIMD Intrinsics
Dr. Manodeep Sinha

Abstract: One of the major computational challenges of modern astrophysics is quantifying how galaxies are grouped or clustered. Galaxy clustering is determined by a combination of universal cosmological parameters, gravity, and the physics of galaxy formation. Quantifying galaxy clustering requires computing pair-wise separations -- an inherently quadratic process. Consequently, comparing the observed clustering of galaxies to that theoretically predicted is both useful for advancing our understanding of physics and technically very complex. Since observational studies of galaxies and the theoretical models contain millions of galaxies, computing the clustering strength becomes a bottleneck in the analysis pipeline. Here I present Corrfunc -- a suite of OpenMP-parallelized clustering codes that target current CPU micro-architectures with custom Advanced Vector Extensions (AVX512F, AVX) and Streaming SIMD Extensions (SSE) intrinsics. By design, Corrfunc is highly optimized and is at least a factor of a few faster than all existing public galaxy clustering correlation function routines. The algorithm within Corrfunc can be easily adapted to a variety of different measurements and has already been implemented for nearest neighbour searches, group finding in galaxy surveys, weak lensing measurements, etc. Corrfunc is covered by a suite of tests and extensive documentation, and is publicly available at https://github.com/manodeep/Corrfunc. Software like Corrfunc highlights how we need a combination of efficient algorithms and custom software designed and tuned for the underlying hardware to go beyond the petascale frontier into the exascale regime.

Bio: Dr. Manodeep Sinha is a computational astrophysicist based at the Centre for Astrophysics at Swinburne University of Technology, Melbourne. Dr. Sinha is part of a large research collaboration -- the ARC Centre of Excellence for All Sky Astrophysics in 3D (ASTRO 3D) -- a $40 million research effort funded for 7 years and spread over multiple institutions across Australia. Dr. Sinha obtained a PhD from The Pennsylvania State University in 2008 and has been a postdoctoral fellow and research scientist at Vanderbilt University and Swinburne University. His research interests include studying how galaxies form and evolve in the Universe, and how to make realistic comparisons between simulated and observed galaxies. Efficient computational methods form a core requirement of his research, whether in relation to creating large cosmological simulations or recreating real-world observations. While an astrophysicist by profession, his recent research has been to create the optimized, robust software required to solve research problems in astrophysics.
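As a rough illustration of the pair-counting kernel described in the abstract, the hypothetical C snippet below counts pairs of points closer than a chosen separation using a naive O(N^2) double loop parallelized with OpenMP. It is only a baseline sketch, not code from the Corrfunc repository; Corrfunc itself uses cell lists to prune distant pairs and AVX512F/AVX/SSE intrinsics in the inner loop. The function name count_pairs and the file name count_pairs.c are placeholders.

    /*
     * Hypothetical naive pair-counting kernel in C with OpenMP, illustrating
     * the quadratic pair-wise separation computation described above.
     * Compile, e.g.: cc -O2 -fopenmp count_pairs.c -o count_pairs
     */
    #include <stdio.h>
    #include <stdlib.h>

    /* Count pairs of points separated by less than rmax (no periodic box). */
    static long long count_pairs(const double *x, const double *y,
                                 const double *z, long n, double rmax)
    {
        const double rmax2 = rmax * rmax;
        long long npairs = 0;

        #pragma omp parallel for reduction(+:npairs) schedule(dynamic, 64)
        for (long i = 0; i < n; i++) {
            for (long j = i + 1; j < n; j++) {
                const double dx = x[i] - x[j];
                const double dy = y[i] - y[j];
                const double dz = z[i] - z[j];
                if (dx*dx + dy*dy + dz*dz < rmax2)
                    npairs++;
            }
        }
        return npairs;
    }

    int main(void)
    {
        const long n = 20000;
        double *x = malloc(n * sizeof *x);
        double *y = malloc(n * sizeof *y);
        double *z = malloc(n * sizeof *z);
        for (long i = 0; i < n; i++) {      /* random points in a unit box */
            x[i] = rand() / (double)RAND_MAX;
            y[i] = rand() / (double)RAND_MAX;
            z[i] = rand() / (double)RAND_MAX;
        }
        printf("pairs within r = 0.05: %lld\n", count_pairs(x, y, z, n, 0.05));
        free(x); free(y); free(z);
        return 0;
    }

Production correlation-function codes additionally bin the pair counts by separation and vectorize the distance computation over several points at a time, which is where the SIMD intrinsics mentioned in the abstract come in.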


11:45 AM - 12:00 PM
COFFEE BREAK

12:00 PM - 12:40 PM
Multiple lightning talks, 10 minutes each
TBD

12:40 PM - 2:00 PM
LUNCH BREAK & NETWORKING (ICE-BREAKING SESSION)

2:00 PM - 2:30 PM
Performance Portability Challenges for Exascale Computing
Dr. P. (Saday) Sadayappan

Abstract: The increasing trend of heterogeneity and custom architectures makes software productivity and performance portability of high-performance applications extremely challenging. Compilers can play a prominent role in addressing these software challenges. However, a fundamental challenge faced by optimizing compilers is that of modeling and minimizing data movement overheads. The cost of data movement currently dominates the cost of arithmetic/logic operations, both in terms of energy and time. While the computational complexity of algorithms in terms of elementary arithmetic/logic operations is quite well understood, the same is not true of the data movement complexity of computations. More effective models of data movement complexity are needed for building effective optimizing compilers for current and emerging platforms. One promising approach is to develop domain/pattern-specific optimization strategies. Examples of domain-specific optimization for tensor computations and stencil computations on GPUs will be discussed.

Bio: Dr. P. (Saday) Sadayappan is a University Distinguished Scholar and Professor of Computer Science and Engineering at The Ohio State University. His research interests include compiler optimization for heterogeneous systems, domain/pattern-specific compiler optimization, and characterization of the data movement complexity of algorithms. He is a Fellow of the IEEE.
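Loop tiling (cache blocking) is one standard technique that compilers and programmers use to reduce the data movement costs discussed in the abstract. The hypothetical C sketch below applies simple spatial tiling to a 5-point Jacobi stencil so that each tile of the grid is reused from cache rather than streamed repeatedly from memory. It is a generic illustration, not an example from the talk; the function name jacobi_tiled and the tile size TILE are placeholders that would normally be tuned to the cache hierarchy.

    /*
     * Hypothetical cache-blocked (tiled) 5-point Jacobi sweep in C,
     * illustrating tiling as a way to reduce data movement for stencils.
     * Compile, e.g.: cc -O2 jacobi_tiled.c -o jacobi_tiled
     */
    #include <stdio.h>
    #include <stdlib.h>

    #define TILE 64   /* placeholder tile size; tune to the cache hierarchy */

    /* One Jacobi iteration over the interior of an n x n grid, tile by tile. */
    static void jacobi_tiled(size_t n, const double *in, double *out)
    {
        for (size_t ii = 1; ii < n - 1; ii += TILE) {
            for (size_t jj = 1; jj < n - 1; jj += TILE) {
                size_t imax = (ii + TILE < n - 1) ? ii + TILE : n - 1;
                size_t jmax = (jj + TILE < n - 1) ? jj + TILE : n - 1;
                /* All points of this tile are updated while their neighborhood
                 * is still resident in cache, instead of sweeping whole rows. */
                for (size_t i = ii; i < imax; i++)
                    for (size_t j = jj; j < jmax; j++)
                        out[i*n + j] = 0.25 * (in[(i-1)*n + j] + in[(i+1)*n + j] +
                                               in[i*n + (j-1)] + in[i*n + (j+1)]);
            }
        }
    }

    int main(void)
    {
        const size_t n = 1024;
        double *a = calloc(n * n, sizeof *a);
        double *b = calloc(n * n, sizeof *b);
        for (size_t j = 0; j < n; j++)
            a[j] = 1.0;                      /* hot top boundary row */
        jacobi_tiled(n, a, b);               /* one smoothing sweep */
        printf("b[1][1] after one sweep: %f\n", b[1*n + 1]);
        free(a);
        free(b);
        return 0;
    }

The same idea generalizes to the GPU setting mentioned in the abstract, where tiles are typically mapped to thread blocks and staged through shared memory.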



2:30 PM - 3:30 PM
Hands-on Part-1, HPC topic
TBD

3:30 PM - 3:45 PM
COFFEE BREAK

3:45 PM - 5:00 PM
Hands-on Part-2, HPC topic
TBD

5:00 PM - 7:00 PM
NETWORKING RECEPTION & TEAM BUILDING EXERCISE
TBD



Agenda on December 14, 2018

09:15 AM - 09:50 AM
NETWORKING OVER TEA/COFFEE

09:50 AM - 10:00 AM
Opening Remarks
TBD

10:00 AM - 10:30 AM
Developing IEEE-TCPP Parallel/Distributed Curriculum and NSF Office of Advanced Cyberinfrastructure CyberTraining Program
Dr. Sushil Prasad

Abstract: The NSF-supported Center for Parallel and Distributed Computing Curriculum Development and Educational Resources (CDER), in collaboration with the IEEE TC on Parallel Processing (TCPP), developed undergraduate curriculum guidelines for parallel and distributed computing (PDC) from 2010 to 2012. Our goal has been to migrate Computer Science (CS) and Computer Engineering (CE) courses in the first two years from the sequential model toward the now pervasive paradigm of parallel computing. This curriculum initiative now has over 100 early-adopter institutions nationally and internationally, including in India and its vicinity, and it has heavily influenced the PDC thrust of the ACM/IEEE Taskforce on Computer Science Curricula 2013. I will describe this initiative and its current update efforts along the key aspects of big data, energy, and distributed computing.

The US National Science Foundation Office of Advanced Cyberinfrastructure (OAC) has introduced a CyberTraining program (NSF 19-524) for education and training, aimed at fully preparing the scientific workforce for the nation's research enterprise to innovate and utilize high performance computing resources, tools, and methods. The community response in its two rounds of competition has exceeded expectations. I will introduce this program, as well as research and education programs for early-career faculty such as CAREER and CRII. I will also touch on NSF's ten big ideas, including Harnessing the Data Revolution.

Bio: Sushil K. Prasad is a Program Director at the US National Science Foundation in its Office of Advanced Cyberinfrastructure (OAC) in the Computer and Information Science and Engineering (CISE) directorate, leading its emerging research and education programs such as CAREER, CRII, Expeditions, CyberTraining, and the most recently introduced OAC-Core research program. He is an ACM Distinguished Scientist and a Professor of Computer Science at Georgia State University. He is the director of the Distributed and Mobile Systems Lab, carrying out research in parallel, distributed, and data-intensive computing and systems. He has been twice elected chair of the IEEE-CS Technical Committee on Parallel Processing (TCPP), and leads the NSF-supported TCPP Curriculum Initiative on Parallel and Distributed Computing for undergraduate education.


10:30 AM - 11:00 AM
Invited Talk-6
TBD

11:00 AM - 11:45 AM
Three Paper Presentations
TBD

11:45 AM - 12:00 PM
COFFEE BREAK

12:00 PM - 1:00 PM
Multiple lightning talks and paper presentations
TBD

1:00 PM - 2:00 PM
LUNCH, NETWORKING, and GROUP PHOTO

2:00 PM - 5:00 PM
Bring Your Own Code (BYOC)/Team Project
TBD

5:00 PM - 7:00 PM
DINNER (Self-Paid)


Deadlines for Papers

  • Paper submission deadline (firm): November 15, 2018 (AOE); previous deadlines: October 1, October 8, and October 21, 2018
  • Notifications of acceptance/rejection sent by: November 25, 2018
  • Registration for the workshop to be completed by: November 27, 2018
  • Camera-ready copies of accepted papers due on: December 1, 2018

  • Information on submitting the papers to the workshop is available here.

Tentative Deadlines for the Participation Grant

  • Application deadline for the participation grant: September 25, 2018 (AOE)
  • Notification of acceptance of participation grant to be sent by: October 2, 2018
  • Hotel and flight reservations to be completed by (for the students selected from the U.S. institutions only): October 15, 2018
  • The workshop participants arrive in New Delhi, India by: December 12, 2018

  • Information on submitting the student travel grant application is available here.

Workshop Date

  • December 13-14, 2018


Committee


Organizing Committee

  • Amitava Majumdar, San Diego Supercomputing Center (SDSC), UC San Diego, La Jolla, California, USA (General Chair)
  • Ritu Arora, Texas Advanced Computing Center (TACC), UT Austin, Austin, Texas, USA (General Chair)
  • Sharda Dixit, Centre for Development of Advanced Computing (C-DAC), Pune, India (Program Co-Chair)
  • Anil Kumar Gupta, C-DAC, Pune, India (Program Co-Chair)
  • Vinai Kumar Singh, Indraprastha Engineering College, Ghaziabad, India (Logistics and Finance Chair)
  • Venkatesh Shenoi, C-DAC, Pune, India (Communications Chair)
  • Vinodh Kumar Markapuram, C-DAC, Pune, India
  • Abhishek Das, C-DAC, Pune, India

Technical Program Committee

  • Amit Ruhela, Ohio State University, Ohio, USA
  • Amitava Majumdar, San Diego Supercomputing Center (SDSC), UC San Diego, La Jolla, California, USA
  • Anil Kumar Gupta, C-DAC, Pune, India
  • Anirban Jana, Pittsburgh Supercomputing Center (PSC), Pittsburgh, USA
  • Aniruddha Gokhale, Vanderbilt University, Nashville, Tennessee, USA
  • Antonio Gomez, Intel, Hillsboro, Oregon, USA
  • Amarjeet Sharma, C-DAC, Pune, India
  • Damon McDougall, Texas Advanced Computing Center (TACC), UT Austin, Austin, Texas, USA
  • Devangi Parekh, University of Texas at Austin, Texas, USA
  • Dinesh Rajagopal, BULL/AtoS, Bangalore, India
  • Galen Arnold, National Center of Supercomputing Applications, Illinois, USA
  • Hari Subramoni, Ohio State University, Ohio, USA
  • Krishna Muriki, Lawrence Berkeley National Laboratory, California, USA
  • Lars Koesterke, Texas Advanced Computing Center, UT Austin, Austin, Texas, USA
  • Manu Awasthi, IIT-Gandhinagar, Gandhinagar, India
  • Mahidhar Tatineni, San Diego Supercomputer Center (SDSC), UC San Diego, La Jolla, California, USA
  • Milind Jagtap, Centre for Development of Advanced Computing (C-DAC), Pune, India
  • Nisha Agarwal, Centre for Development of Advanced Computing (C-DAC), Pune, India
  • Purushotham Bangalore, University of Alabama at Birmingham, Alabama, USA
  • Ritu Arora, Texas Advanced Computing Center (TACC), UT Austin, Austin, Texas, USA
  • Robert Sinkovits, San Diego Supercomputing Center (SDSC), UC San Diego, La Jolla, California, USA
  • Sandeep Joshi, C-DAC, Pune, India
  • Sharda Dixit, Centre for Development of Advanced Computing (C-DAC), Pune, India
  • Si Liu, Texas Advanced Computing Center, UT Austin, Austin, Texas, USA
  • Soham Ghosh, Intel, India
  • Subhashini Sivagnanam, San Diego Supercomputer Center (SDSC), UC San Diego, La Jolla, California, USA
  • Sukrit Sondhi, Fulcrum Worldwide, NJ, USA
  • Suresh Marru, Indiana University Bloomington, Indiana, USA
  • Tajendra Singh, University of California, Los Angeles (UCLA), California, USA
  • Venkatesh Shenoi, C-DAC, Pune, India
  • Victor Eijkhout, Texas Advanced Computing Center, UT Austin, Austin, Texas, USA
  • Vinai Kumar Singh, Indraprastha Engineering College, Ghaziabad, India

Webmaster


Current Sponsors


In-Kind Sponsors




Sponsorship Levels

CFP (Call for Papers) and Call for Abstracts


SCEC 2018 workshop proceedings will be published by Springer in their prestigious Communications in Computer and Information Science (CCIS) series.

We invite authors to submit their original and unpublished work that is not under review for another publication. Full papers (10-15 pages in length, including references) should be formatted as per the Springer-specified guidelines (details below) for double-blind review.

We also invite short papers or extended abstracts for the lightning talks. The short papers should also be prepared following the Springer guidelines and can be 6-9 pages in length, including references. A selected number of short papers will be included in the workshop proceedings. While not all of the short papers will be published in the proceedings, all of them will be archived and made publicly accessible through Figshare and a GitHub repository, and these submissions will receive DOIs for future citations. Those interested in presenting lightning talks are also required to submit PDF copies of a rough draft of their slides. The PDFs of the abstract and the slides can be merged into a single PDF file and uploaded to EasyChair for peer review.

The PDF versions of the papers/extended abstracts can be submitted for review through the SCEC 2018 submission website:

https://easychair.org/conferences/?conf=scec2018

The review process is double-blind, and each paper will be reviewed by at least three committee members and/or external reviewers. The papers will be evaluated on the basis of the relevance to the workshop theme, clarity of the content presented, originality of the work, and the impact of the work on the community.


Springer's formatting information is available at the following link:

https://www.springer.com/us/computer-science/lncs/conference-proceedings-guidelines

Workshop Registration

The registration fee for the workshop is Rs. 3,200 (about US $46) for students from Indian academic institutions and Rs. 5,000 (about US $68) for faculty from Indian academic institutions. The registration fee for participants from non-academic institutions in India is Rs. 10,000 (about US $138), and the fee for all participants from institutions outside India is US $250. The fee can be paid by check, direct bank deposit, or credit/debit card. All workshop attendees should register in advance by filling out the registration form linked below.

Registration Form


Application Form for Travel Award

We are happy to announce the availability of funds for covering the workshop participation cost for a limited number of undergraduate and graduate students. Students from U.S. institutions who are interested in applying for this participation grant should submit the following form by September 25, 2018:

Students in the U.S.



Students from institutions outside the U.S. can apply for this participation grant by submitting the following form by September 25, 2018:


Workshop Venue

The LaLit

Barakhamba Avenue, Connaught Place, New Delhi - 110001

Contact

For any questions regarding the workshop, please contact us at: scecforum@gmail.com