Integrating the Lambda Inference API into Vim#

Introduction#

You can integrate the Lambda Inference API into Vim using vim-ai. After the Lambda Inference API is integrated, you can run code completions, have conversations, and more, all within Vim.
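
vim-ai adds a small set of commands to the Vim command line. This tutorial uses :AI; the plugin's other commands are listed here only as a quick reference, and you should check the vim-ai README for the full list and options:

    :AI <prompt>      " generate or complete text in the current buffer
    :AIEdit <prompt>  " rewrite the current buffer or a visual selection in place
    :AIChat           " open or continue an interactive chat window
    :AIRedo           " regenerate the last AI answer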

In this tutorial, you'll:

  1. Install the vim-ai plugin.
  2. Configure the vim-ai plugin to use the Lambda Inference API.
  3. Use the Lambda Inference API to generate a Docker Compose file to run Nginx.
  4. Use the Lambda Inference API to generate an example nginx.conf.
  5. Test the files generated by vim-ai.

Prerequisites#

To run this tutorial, you need:

  * A Lambda Cloud API key.
  * Vim with Python 3 support (required by vim-ai) and Git, to install and run the plugin.
  * Docker with the Docker Compose plugin, to test the generated files.

You should also be comfortable using the Vim command line.

Note

You're billed for all usage of the Lambda Inference API.

See the Lambda Inference API page for current pricing information.

Install and configure the vim-ai plugin#

  1. Install the vim-ai plugin:

    mkdir -p ~/.vim/pack/plugins/start && \
    git clone https://github.com/madox2/vim-ai.git ~/.vim/pack/plugins/start/vim-ai
    
  2. Set your Lambda Cloud API key so vim-ai can use it. Replace <CLOUD-API-KEY> with your actual Cloud API key. The umask 177 in the command below ensures that only your user can read the token file.

    bash -c 'umask 177 && echo "<CLOUD-API-KEY>" > ~/.config/openai.token'
    
  3. Create a ~/.vimrc file with the following lines, or add the lines to your existing ~/.vimrc file:

    let g:vim_ai_chat = {
    \  "options": {
    \    "endpoint_url": "https://api.lambdalabs.com/v1/chat/completions",
    \    "model": "qwen25-coder-32b-instruct",
    \  },
    \}
    let g:vim_ai_complete = {
    \  "options": {
    \    "endpoint_url": "https://api.lambdalabs.com/v1/chat/completions",
    \    "model": "qwen25-coder-32b-instruct",
    \  },
    \}
    let g:vim_ai_edit = {
    \  "options": {
    \    "endpoint_url": "https://api.lambdalabs.com/v1/chat/completions",
    \    "model": "qwen25-coder-32b-instruct",
    \  },
    \}
    

    Note

    This tutorial uses qwen25-coder-32b-instruct, but you can use any model available through the Lambda Inference API. We recommend trying different models to learn which work best for your use cases.

    See Using the Lambda Inference API > Listing models to learn how to retrieve a list of the models served by the Lambda Inference API.
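
    As a quick check that your API key is set up correctly, you can list the available models from the shell before continuing. This assumes the Lambda Inference API follows the usual OpenAI-compatible convention of serving the model list at /v1/models with bearer-token authentication:

    curl https://api.lambdalabs.com/v1/models \
      -H "Authorization: Bearer $(cat ~/.config/openai.token)"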

Generate a Docker Compose file to run Nginx#

  1. Create a directory for this tutorial and navigate to the directory:

    mkdir ~/vim-ai-lambda-inference && cd ~/vim-ai-lambda-inference
    
  2. Launch Vim and begin editing a file named docker-compose.yml:

    vim docker-compose.yml
    
  3. Press : to open the Vim command line, and then prompt vim-ai (AI) to generate the Docker Compose file:

    AI Generate a Docker Compose file to run an Nginx service. The service should bind mount nginx.conf from the current directory.
    

    In a few seconds, you should see output similar to:

    version: '3'
    services:
      nginx:
        image: nginx:latest
        ports:
          - "80:80"
        volumes:
          - ./nginx.conf:/etc/nginx/nginx.conf:ro
    
  4. Open the Vim command line (:) again, then save the file:

    w
    
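
If the generated Compose file isn't quite what you want, you can also revise it in place with vim-ai's edit command (configured as g:vim_ai_edit above). For example, select the whole file in visual mode with ggVG, press : (Vim pre-fills the '<,'> range), and enter a prompt such as the following; the exact prompt wording is only an illustration:

    '<,'>AIEdit Change the published host port from 80 to 8080.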

Generate an Nginx config file#

  1. On the Vim command line (:), begin editing a file named nginx.conf:

    e nginx.conf
    
  2. Press : to open the Vim command line again, and then prompt vim-ai (AI) to generate the nginx.conf file:

    AI Generate a valid nginx.conf file that serves "Hello, World!" as a text file.
    

    In a few seconds, you should see output similar to:

    events {}
    http {
        server {
            listen 80;
            location / {
                return 200 "Hello, World!";
                add_header Content-Type text/plain;
            }
        }
    }
    
  3. Using the Vim command line (:), save the file and quit Vim:

    wq
    
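
At any point while a file is open, you can also ask follow-up questions about it in vim-ai's interactive chat window (configured as g:vim_ai_chat above) without leaving Vim. For example, on the Vim command line (:), a prompt like the following opens a chat window with the model's answer; the question is only an illustration:

    AIChat What does the add_header directive in this nginx.conf do?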

Test your generated config files#

Warning

Always exercise caution when running code generated by the Lambda Inference API.

  1. Run the Nginx container:

    sudo docker compose up -d
    

    After a few seconds, you should see output similar to:

    ✔ Network vim-ai-lambda-inference_default    Created
    ✔ Container vim-ai-lambda-inference-nginx-1  Started
    
  2. Confirm that you can connect to the Nginx service:

    curl -v http://localhost
    

    You should see output similar to:

    *   Trying 127.0.0.1:80...
    * Connected to localhost (127.0.0.1) port 80 (#0)
    > GET / HTTP/1.1
    > Host: localhost
    > User-Agent: curl/7.81.0
    > Accept: */*
    >
    * Mark bundle as not supporting multiuse
    < HTTP/1.1 200 OK
    < Server: nginx/1.27.4
    < Date: Sun, 23 Feb 2025 15:55:37 GMT
    < Content-Type: text/plain
    < Content-Length: 13
    < Connection: keep-alive
    < Content-Type: text/plain
    <
    * Connection #0 to host localhost left intact
    Hello, World!
    
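  3. When you're done testing, stop and remove the container and its network:

    sudo docker compose down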

Next steps#