Conclusion
Jupyter in HPC is now commonplace. Through Jupyter, we have been able to give hundreds of HPC users a rich user interface to supercomputing. In the supercomputing context, we view Jupyter as a tool that will make it easier for our users to take advantage of supercomputing hardware and software. Some of that ease will come from us at supercomputing centers, but each HPC center is different: for Jupyter to remain useful to HPC centers and supercomputing, the project must maintain its high level of abstraction and avoid design decisions that break existing deployments or lock centers into one way of doing things. We summarize our conclusions as follows:
- Rich user interfaces like Jupyter have the potential to make interacting with a supercomputer easier still, attracting new kinds of users and helping to expand the application of supercomputing to new science domains.
- Supercomputers and the HPC centers that maintain them are not all alike. While this poses a challenge to those of us who would like to expand access to supercomputing through Jupyter, the challenge is not insurmountable.
- Even if vendors begin shipping supercomputer systems with Jupyter built in, HPC centers must keep up with the demands of their users, and those demands can evolve faster than the hardware and software cycle of supercomputing vendors. Center staff should do more than simply turn on Jupyter and walk away.
- As the ultimate rich user interface to supercomputing, Jupyter shows a great deal of promise, but it is not there yet. Realizing that promise requires:
  - that the Jupyter Project avoid design directions that break Jupyter or the Jupyter ecosystem for HPC;
  - that the Jupyter Project maintain abstraction as a core design value;
  - that HPC centers prioritize software development and contributions to open-source projects like the Jupyter Project.
- The next step is to focus on supporting users with scaling needs. We have scaled in terms of the number of users; now we want to enable those users who wish to use Jupyter to make the most of supercomputing. Achieving this also requires that tools like Dask and Spark work as seamlessly as possible with Jupyter.
Acknowledgments
This work was supported by Lawrence Berkeley National Laboratory, through the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. This work used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory.