Keep in mind that the training cluster currently supports at most one GPU per request. If you request more than one, your static console will become unresponsive for some time, and you won't be able to start your Jupyter service. The same happens if you request resources that exceed the limits of the current underlying hardware infrastructure, which are summarized in the table below:
| Resource | Max Count | Max Memory (MB) |
|---|---|---|
| vCPUs, no GPU | 15 | 48*1024 |
| vCPUs, with GPU | 5 | 40*1024 |
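As a rough illustration, the limits above can be checked before submitting a request. The helper below is a minimal sketch, not part of any real IDS API; the function and dictionary names are hypothetical.

```python
# Hypothetical helper: validate a resource request against the
# cluster limits from the table above. Names are illustrative only.

MAX_GPUS = 1
LIMITS = {
    # profile: (max vCPUs, max memory in MB)
    "no_gpu": (15, 48 * 1024),
    "with_gpu": (5, 40 * 1024),
}

def validate_request(vcpus: int, memory_mb: int, gpus: int = 0) -> bool:
    """Return True if the request fits the current hardware limits."""
    if gpus > MAX_GPUS:
        return False
    max_vcpus, max_mem = LIMITS["with_gpu" if gpus else "no_gpu"]
    return vcpus <= max_vcpus and memory_mb <= max_mem

print(validate_request(15, 48 * 1024))         # within the no-GPU limits
print(validate_request(8, 40 * 1024, gpus=1))  # too many vCPUs for a GPU request
print(validate_request(4, 40 * 1024, gpus=2))  # more than one GPU is rejected
```

A request that fails this check would, per the section above, leave the console unresponsive rather than fail cleanly, so validating up front is worthwhile.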
## Access Through IDS
IDS exposes the Jupyter controls under the 'ML Studio' section of an Application. Once you've selected a project, start the Jupyter service to access that project's notebooks. During creation, you can select the container profile to use for your Jupyter service; the default 'BasicGpu' profile provides a single K80 GPU.