# Tags

## 1-click clusters
- Tutorial: Getting started with training a machine learning model
- Cloud API
- Firewalls
- Guest Agent
- Importing and exporting data
- Teams
- Introduction
- Security posture
- How to serve the Llama 3.1 405B model using a Lambda 1-Click Cluster
- Using the Lambda Public Cloud dashboard
- Getting started
## Virtualization

## api
- Deploying Llama 3.2 3B in a Kubernetes (K8s) cluster
- Integrating Lambda Chat into VS Code
- Deploying models with dstack
- Using SkyPilot to deploy a Kubernetes cluster
- Using the Lambda Inference API
## automation

## distributed training

## docker
- Using Multi-Instance GPU (MIG)
- Serving the Llama 3.1 8B and 70B models using Lambda Cloud on-demand instances
## generative ai
- How to serve the FLUX.1 prompt-to-image models using Lambda Cloud on-demand instances
- Fine-tuning the Mochi video generation model on GH200
## kubernetes
- Deploying Llama 3.2 3B in a Kubernetes (K8s) cluster
- Using KubeAI to deploy Nous Research's Hermes 3 and other LLMs
- Using SkyPilot to deploy a Kubernetes cluster
## llama
- Using Multi-Instance GPU (MIG)
- Deploying Llama 3.2 3B in a Kubernetes (K8s) cluster
- Using KubeAI to deploy Nous Research's Hermes 3 and other LLMs
- Serving the Llama 3.1 8B and 70B models using Lambda Cloud on-demand instances
- Integrating Lambda Chat into VS Code
- Deploying models with dstack
- Using the Lambda Inference API
## llm
- Using Multi-Instance GPU (MIG)
- Deploying Llama 3.2 3B in a Kubernetes (K8s) cluster
- Using KubeAI to deploy Nous Research's Hermes 3 and other LLMs
- Serving the Llama 3.1 8B and 70B models using Lambda Cloud on-demand instances
- Integrating Lambda Chat into VS Code
- Deploying models with dstack
- Using the Lambda Inference API
## managed kubernetes

## on-demand cloud
- Tutorial: Getting started with training a machine learning model
- Using Multi-Instance GPU (MIG)
- Cloud API
- Firewalls
- Guest Agent
- Importing and exporting data
- Teams
- Connecting to an instance
- Creating and managing instances
- Using the Lambda Public Cloud dashboard
- Fine-tuning the Mochi video generation model on GH200
- Getting started
- Managing your system environment
- Running a PyTorch®-based benchmark on an NVIDIA GH200 instance
- Running Hugging Face Transformers and Diffusers on an NVIDIA GH200 instance
- Serving Llama 3.1 8B and 70B using vLLM on an NVIDIA GH200 instance