# Integrating the Lambda Inference API into Vim

## Introduction

You can integrate the Lambda Inference API into Vim using vim-ai. After the Lambda Inference API is integrated, you can run code completions, have conversations, and more, all within Vim.

In this tutorial, you'll:

- Install the vim-ai plugin.
- Configure the vim-ai plugin to use the Lambda Inference API.
- Use the Lambda Inference API to generate a Docker Compose file to run Nginx.
- Use the Lambda Inference API to generate an example `nginx.conf`.
- Test the files generated by vim-ai.
## Prerequisites

To run this tutorial, you need:

- A Lambda Cloud API key.
- A Linux environment that has the following packages installed:
    - Vim with Python 3 support
    - Docker Compose v2
    - cURL
    - Git

You should also be comfortable using the Vim command line.
Note
You're billed for all usage of the Lambda Inference API.
See the Lambda Inference API page for current pricing information.
## Install and configure the vim-ai plugin

- Install the vim-ai plugin:
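    One common approach, assuming Vim 8 or later with native package support, is to clone the plugin's repository into your Vim package directory:

    ```bash
    # Install vim-ai as a native Vim package so it loads automatically on startup
    git clone https://github.com/madox2/vim-ai.git ~/.vim/pack/plugins/start/vim-ai
    ```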
- Set your Lambda Cloud API key so vim-ai can use it. Replace `<CLOUD-API-KEY>` with your actual Cloud API key.
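    vim-ai reads the key from the `~/.config/openai.token` file or the `OPENAI_API_KEY` environment variable. For example:

    ```bash
    # Store the Cloud API key in the token file that vim-ai checks by default
    mkdir -p ~/.config
    echo "<CLOUD-API-KEY>" > ~/.config/openai.token
    ```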
- Create a `~/.vimrc` file with the following lines, or add the lines to your existing `~/.vimrc` file:

    ```vim
    let g:vim_ai_chat = {
    \  "options": {
    \    "endpoint_url": "https://api.lambdalabs.com/v1/chat/completions",
    \    "model": "qwen25-coder-32b-instruct",
    \  },
    \}

    let g:vim_ai_complete = {
    \  "options": {
    \    "endpoint_url": "https://api.lambdalabs.com/v1/chat/completions",
    \    "model": "qwen25-coder-32b-instruct",
    \  },
    \}

    let g:vim_ai_edit = {
    \  "options": {
    \    "endpoint_url": "https://api.lambdalabs.com/v1/chat/completions",
    \    "model": "qwen25-coder-32b-instruct",
    \  },
    \}
    ```
Note

This tutorial uses `qwen25-coder-32b-instruct`, but you can use any model available through the Lambda Inference API. We recommend that you try different models to learn which are best for your use cases. See Using the Lambda Inference API > Listing models to learn how to retrieve a list of the models served by the Lambda Inference API.
## Generate a Docker Compose file to run Nginx

- Create a directory for this tutorial and navigate to the directory:
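    For example (the directory name is only a suggestion):

    ```bash
    # Create a working directory for the tutorial and change into it
    mkdir vim-ai-tutorial && cd vim-ai-tutorial
    ```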
- Launch Vim and begin editing a file named `docker-compose.yml`:
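    For example:

    ```bash
    vim docker-compose.yml
    ```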
- Press `:` to open the Vim command line, and then prompt vim-ai (`:AI`) to generate the Docker Compose file:

    ```
    AI Generate a Docker Compose file to run an Nginx service. The service should bind mount nginx.conf from the current directory.
    ```

    In a few seconds, you should see output similar to:
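    The generated file varies from run to run; a plausible result looks roughly like this (the image tag and port mapping are illustrative):

    ```yaml
    services:
      nginx:
        image: nginx:latest
        ports:
          - "80:80"
        volumes:
          # Bind mount nginx.conf from the current directory, read-only
          - ./nginx.conf:/etc/nginx/nginx.conf:ro
    ```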
- Open the Vim command line (`:`) again, then save the file:
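    For example, type `w` and press Enter to write the buffer to disk:

    ```
    w
    ```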
## Generate an Nginx config file

- On the Vim command line (`:`), begin editing a file named `nginx.conf`:
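    For example, `:e` opens a file for editing:

    ```
    e nginx.conf
    ```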
- Press `:` to open the Vim command line again, and then prompt vim-ai (`:AI`) to generate the `nginx.conf` file, as illustrated below. In a few seconds, you should see output similar to the example that follows.
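    The exact prompt isn't reproduced here; a prompt along these lines produces a config consistent with the connection test at the end of this tutorial:

    ```
    AI Generate an nginx.conf that listens on port 80 and returns "Hello, World!" as plain text for every request.
    ```

    As with the Docker Compose file, the generated output varies; a plausible result looks roughly like this:

    ```nginx
    events {}

    http {
        server {
            listen 80;

            location / {
                default_type text/plain;
                return 200 'Hello, World!';
            }
        }
    }
    ```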
- Using the Vim command line (`:`), save the file and quit Vim:
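    For example, `wq` writes the file and quits:

    ```
    wq
    ```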
## Test your generated config files
Warning
Always exercise caution when running code generated by the Lambda Inference API.
- Run the Nginx container:
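    Assuming Docker Compose v2, run the following from the tutorial directory:

    ```bash
    # Start the Nginx service defined in docker-compose.yml in the background
    docker compose up -d
    ```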
    After a few seconds, you should see output indicating that the container has been created and started.
- Confirm that you can connect to the Nginx service:
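    For example, make a verbose request to the service (the port matches the mapping in the Compose file above):

    ```bash
    # Request the default page and print request/response details
    curl -v http://localhost
    ```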
    You should see output similar to:

    ```
    *   Trying 127.0.0.1:80...
    * Connected to localhost (127.0.0.1) port 80 (#0)
    > GET / HTTP/1.1
    > Host: localhost
    > User-Agent: curl/7.81.0
    > Accept: */*
    >
    * Mark bundle as not supporting multiuse
    < HTTP/1.1 200 OK
    < Server: nginx/1.27.4
    < Date: Sun, 23 Feb 2025 15:55:37 GMT
    < Content-Type: text/plain
    < Content-Length: 13
    < Connection: keep-alive
    < Content-Type: text/plain
    <
    * Connection #0 to host localhost left intact
    Hello, World!
    ```