Virtual environments and Docker containers
What are virtual environments?
Virtual environments allow you to create and maintain development environments that are isolated from each other. Lambda recommends using either:
- the Python venv module
- conda
Creating a Python virtual environment
Create a Python virtual environment using the venv module by running:
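A sketch of the likely command; the --system-site-packages flag matches the behavior described below:

```shell
# Create a virtual environment named NAME that can also see
# system-wide packages (Lambda Stack and Ubuntu packages).
python3 -m venv --system-site-packages NAME
```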
Replace NAME with the name you want to give to your virtual environment.
The command above creates a virtual environment that has access to Lambda Stack packages and packages installed from Ubuntu repositories.
To create a virtual environment that doesn't have access to Lambda Stack and Ubuntu packages, omit the --system-site-packages option.
Activate the virtual environment by running:
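For a venv-created environment, activation sources the environment's activate script:

```shell
python3 -m venv --system-site-packages NAME  # from the previous step
source NAME/bin/activate                     # activate the environment
```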
Replace NAME with the name you gave your virtual environment in the previous step.
Python packages you install in your virtual environment are isolated from the base environment and other virtual environments.
Locally installed packages can conflict with packages installed in virtual environments. For this reason, it’s recommended to uninstall locally installed packages by running:
To uninstall packages installed locally for your user only, run:
To uninstall packages installed locally, system-wide (for all users), run:
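Both uninstall steps can be sketched as pip pipelines; the exact flags here are assumptions, and the commands are destructive:

```shell
# DESTRUCTIVE: removes every locally installed pip package.
# Do NOT run these on Lambda GPU Cloud on-demand instances.

# Packages installed for your user only (under ~/.local):
pip list --user --format=freeze | cut -d = -f 1 | xargs -r -n 1 pip uninstall -y

# Packages installed system-wide (for all users):
pip list --format=freeze | cut -d = -f 1 | xargs -r -n 1 sudo pip uninstall -y
```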
Don't run the above uninstall commands on Lambda GPU Cloud on-demand instances!
The above uninstall commands remove all locally installed packages and, on on-demand instances, break programs including pip and JupyterLab.
See the Python venv module documentation to learn more about Python virtual environments.
Creating a conda virtual environment
To create a conda virtual environment:
Download the latest version of Miniconda3 by running:
Then, install Miniconda3 by running the command:
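The installer is a shell script, so the install command is likely:

```shell
# Run the interactive Miniconda3 installer.
sh Miniconda3-latest-Linux-x86_64.sh
```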
Follow the installer prompts. Install Miniconda3 in the default location. Allow the installer to initialize Miniconda3.
If you want to create a conda virtual environment immediately after installing Miniconda3, you need to load the changes made to your .bashrc.
You can either:
- Exit and reopen your shell (terminal).
- Run source ~/.bashrc.
For compatibility with the Python venv module, it’s recommended that you disable automatic activation of the conda base environment by running:
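conda exposes this as a configuration flag:

```shell
# Stop conda from auto-activating the base environment in new shells.
conda config --set auto_activate_base false
```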
Create a conda virtual environment using Miniconda3 by running:
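The command takes this general form (the placeholder order is illustrative; see the substitutions described below):

```shell
conda create -n NAME PACKAGES OPTIONS
```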
Replace NAME with the name you want to give your virtual environment.
Replace PACKAGES with the list of packages you want to install in your virtual environment.
(Optional) Replace OPTIONS with options for the conda create command. See the conda create documentation to learn more about available options.
For example, to create a conda virtual environment for PyTorch® with CUDA 11.8, run the below command and follow the prompts:
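A likely form of that command, using the pytorch and nvidia channels (the environment name and exact package list are illustrative):

```shell
conda create -n pytorch-cuda11.8 -c pytorch -c nvidia \
  pytorch torchvision torchaudio pytorch-cuda=11.8
```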
Activate the conda virtual environment by running:
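Activation uses conda's activate subcommand:

```shell
conda activate NAME
```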
Replace NAME with the name of the virtual environment created in the previous step.
For instance, to activate the example PyTorch with CUDA 11.8 virtual environment mentioned in the previous step, run:
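Assuming the example environment was named pytorch-cuda11.8:

```shell
conda activate pytorch-cuda11.8
```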
Once activated, you can test that the example virtual environment is working by running:
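One plausible check, assuming PyTorch was installed in the environment, is to ask PyTorch whether it can see the GPU:

```shell
# On a working GPU instance, torch.cuda.is_available() reports True.
python -c "import torch; print(torch.cuda.is_available())"
```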
You should see output similar to:
Locally installed packages can conflict with packages installed in virtual environments. For this reason, it’s recommended to uninstall locally installed packages by running:
To uninstall packages installed locally for your user only, run:
To uninstall packages installed locally, system-wide (for all users), run:
Don’t run the above uninstall commands on Lambda GPU Cloud on-demand instances!
The above uninstall commands remove all locally installed packages and, on on-demand instances, break programs including pip and JupyterLab.
See the Conda documentation to learn more about how to manage conda virtual environments.
Installing Docker and creating a container
Docker and NVIDIA Container Toolkit are preinstalled on Cloud on-demand instances.
If you're using an on-demand instance, skip step 1, below.
To create and run a Docker container:
Install Docker and NVIDIA Container Toolkit by running:
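On Ubuntu with Lambda's repositories, this is likely a single apt install (the package names here are assumptions):

```shell
sudo apt-get update
sudo apt-get install -y docker.io nvidia-container-toolkit
```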
Add your user to the docker group by running:
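One common way to do this:

```shell
# Add the current user to the docker group.
sudo usermod -aG docker "$USER"
```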
Then, exit and reopen a shell (terminal) so that your user can create and run Docker containers.
Locate the Docker image for the container you want to create. For example, the NVIDIA NGC Catalog has images for creating TensorFlow NGC containers.
Create a container from the Docker image, and run a command in the container, by running:
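The command takes roughly this form; the --gpus all flag (which gives the container access to the instance's GPUs) is an assumption:

```shell
docker run --gpus all --rm IMAGE COMMAND
```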
Replace IMAGE with the URL to the image for the container you want to create.
Replace COMMAND with the command you want to run in the container.
For example, to create a TensorFlow NGC container and run a command to get the container’s TensorFlow build information, run:
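A plausible version of that command; the NGC image tag is illustrative:

```shell
docker run --gpus all --rm nvcr.io/nvidia/tensorflow:23.05-tf2-py3 \
  python -c 'import tensorflow as tf; print(tf.sysconfig.get_build_info())'
```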
You should see output similar to the following:
See the Docker documentation to learn more about using Docker.
You can also check out the Lambda blog post: NVIDIA NGC Tutorial: Run A PyTorch Docker Container Using Nvidia-Container-Toolkit On Ubuntu.