In particular, over the past five years we have seen new kinds of workloads emerge, driven by the explosion of data from experimental and observational facilities, the accelerating application of artificial intelligence (AI), and the fusion of experimental data and simulation. NERSC has been engaged in pivoting supercomputing toward the data sciences for the past decade, and that is the context in which we begin our story with Jupyter.
In 2015, we observed with increasing regularity that users were trying to use SSH tunnels to launch and connect to their own Jupyter notebooks on Edison, a previous-generation supercomputer. One user even published a blog post \citep{a} describing how to do it. NERSC recognized that Jupyter notebooks and similar tools were part of an emerging data science landscape that we would need to engage, understand, and support. Faced with the challenge of authenticating users and launching, managing, and proxying their Jupyter notebooks, we turned to JupyterHub, which had been released just a few months earlier to do exactly those things. JupyterHub has a highly extensible, relatively deployment-agnostic design built on powerful high-level abstractions, and it is developed by a robust open-source community clearly invested in its development and propagation: strategic leverage from the perspective of an organization tasked with supporting Jupyter users.