# Introduction

## Generative AI (GAI)

### Large language models (LLMs)
- Deploying a Llama 3 inference endpoint
- Deploying Llama 3.2 3B in a Kubernetes (K8s) cluster
- Using KubeAI to deploy Nous Research's Hermes 3 and other LLMs
- Serving Llama 3.1 405B on a Lambda 1-Click Cluster
- Serving the Llama 3.1 8B and 70B models using Lambda Cloud on-demand instances
## Linux usage and system administration
- Basic Linux commands and system administration
- Configuring Software RAID
- Lambda Stack and recovery images
- Troubleshooting and debugging
    - Using the Lambda bug report to troubleshoot your system
    - Using the `nvidia-bug-report.log` file to troubleshoot your system
## Programming
- Virtual environments and Docker containers
- Integrating Lambda Chat into VS Code
- Using the Cline AI assistant with the Lambda Inference API