How to use DeepSeek in Visual Studio Code

Last update: 10/03/2025

  • DeepSeek R1 is a free and open-source AI model that you can integrate into Visual Studio Code as a coding assistant.
  • There are several ways to run DeepSeek locally without relying on the cloud, including tools such as Ollama, LM Studio, and Jan.
  • To get the most out of DeepSeek, it's key to choose the right model based on your available hardware and configure it correctly in extensions like CodeGPT or Cline.

DeepSeek R1 has emerged as a powerful, free alternative to paid coding assistants. Its best asset is that it gives developers advanced AI code support without relying on cloud servers. In this article, we explain how to use DeepSeek in Visual Studio Code.

Thanks to versions optimized for local execution, you can integrate it at no additional cost. All you need are tools like Ollama, LM Studio, or Jan, together with extensions such as CodeGPT and Cline. We'll tell you everything in the following paragraphs.

What is DeepSeek R1?

As we have already explained, DeepSeek R1 is an open-source language model that competes with commercial solutions such as GPT-4 in logical reasoning, code generation, and mathematical problem solving. Its main advantage is that it can be run locally without relying on external servers, providing a high level of privacy for developers.


Depending on the available hardware, different versions of the model can be used, from 1.5B parameters (for modest computers) to 70B parameters (for high-performance PCs with advanced GPUs).


Methods to Run DeepSeek in VSCode

To achieve the best performance with DeepSeek in Visual Studio Code, it's essential to choose the right solution to run it on your system. There are three main options:

Option 1: Using Ollama

Ollama is a lightweight platform that allows you to run AI models locally. Follow these steps to install and use DeepSeek with it:

  1. Download and install Ollama from its official website (ollama.com).
  2. In a terminal, run: ollama pull deepseek-r1:1.5b (for lighter models) or a larger variant if the hardware allows it.
  3. Once downloaded, Ollama will serve the model at http://localhost:11434, making it accessible to VSCode.
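Once the model is pulled, you can verify the local endpoint from any HTTP client before wiring up an editor extension. Below is a minimal Python sketch against Ollama's `/api/generate` endpoint, using only the standard library; the prompt is illustrative, and the model tag must match what you pulled.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(model: str, prompt: str) -> bytes:
    # Ollama's generate endpoint takes a JSON body with the model name
    # and prompt; stream=False asks for a single JSON response instead
    # of a stream of partial chunks.
    payload = {"model": model, "prompt": prompt, "stream": False}
    return json.dumps(payload).encode("utf-8")


def ask_deepseek(prompt: str, model: str = "deepseek-r1:1.5b") -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `ask_deepseek("Reverse a string in Python")` should return the model's answer as plain text, confirming the server is reachable before you point VSCode at it.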

Option 2: Using LM Studio

LM Studio is another alternative for easily downloading and managing these types of language models (and also for using DeepSeek in Visual Studio Code). Here's how to use it:

  1. First, download LM Studio and install it on your system.
  2. Search for the DeepSeek R1 model in the Discover tab and download it.
  3. Load the model and enable the local server to run DeepSeek in Visual Studio Code.

Option 3: Using Jan

The third option we recommend is Jan, another viable alternative for running AI models locally. To use it, you must do the following:

  • First, download the version of Jan for your operating system.
  • Then, download DeepSeek R1 from Hugging Face and load it into Jan.
  • Finally, start the server at http://localhost:1337 and set it up in VSCode.
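Both Jan and LM Studio expose an OpenAI-compatible chat endpoint, so one client sketch covers either of them; only the base URL changes (Jan's default port 1337 is shown, LM Studio typically listens on 1234). The model identifier below is an assumption and must match the name of the model you actually loaded.

```python
import json
import urllib.request


def chat_request(base_url: str, model: str, message: str) -> urllib.request.Request:
    # Build a POST request for the OpenAI-compatible
    # /v1/chat/completions route exposed by Jan and LM Studio.
    body = {
        "model": model,
        "messages": [{"role": "user", "content": message}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )


# Jan's default local server, with a hypothetical model name:
req = chat_request("http://localhost:1337", "deepseek-r1", "Explain list comprehensions.")
```

Sending `req` with `urllib.request.urlopen(req)` while the server is running returns a standard OpenAI-style JSON response, which is exactly what the VSCode extensions consume.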

If you want to explore more about how to use DeepSeek in different environments, feel free to check out our guide on DeepSeek in Windows 11 environments.


DeepSeek Integration with Visual Studio Code

Once you have DeepSeek working locally, it's time to integrate it into Visual Studio Code. To do this, you can use extensions like CodeGPT or Cline.

Configuring CodeGPT

  1. From the Extensions tab in VSCode (Ctrl + Shift + X), search for and install CodeGPT.
  2. Open the extension settings and select Ollama as the LLM provider.
  3. Enter the URL of the server where DeepSeek runs locally.
  4. Select the downloaded DeepSeek model and save the settings.

Configuring Cline

Cline is a tool geared more toward automated code execution. To use it with DeepSeek in Visual Studio Code, follow these steps:

  1. Install the Cline extension in VSCode.
  2. Open the settings and select the API provider (Ollama or Jan).
  3. Enter the URL of the local server where DeepSeek is running.
  4. Choose the AI model and confirm the settings.

For more information on DeepSeek deployments, we recommend reading about how Microsoft integrates DeepSeek R1 into Windows Copilot, which can give you a broader perspective on its capabilities.

Tips for Choosing the Right Model

DeepSeek's performance in Visual Studio Code will largely depend on the model you choose and the capabilities of your hardware. For reference, it's worth checking out the following table:

Model  Required RAM  Recommended GPU
1.5B   4 GB          Integrated GPU or CPU
7B     8-10 GB       GTX 1660 or higher
14B    16 GB+        RTX 3060/3080
70B    40 GB+        RTX 4090

If your PC is underpowered, you can opt for smaller models or quantized versions to reduce memory consumption.
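As a rule of thumb, the table above can be encoded as a small helper that suggests a variant from the memory you have available; the thresholds mirror the table and are approximate, and the tag names match Ollama's naming convention.

```python
def suggest_deepseek_variant(free_ram_gb: float) -> str:
    """Suggest a DeepSeek R1 variant from available RAM (approximate thresholds)."""
    if free_ram_gb >= 40:
        return "deepseek-r1:70b"   # high-end GPUs such as an RTX 4090
    if free_ram_gb >= 16:
        return "deepseek-r1:14b"   # RTX 3060/3080 class hardware
    if free_ram_gb >= 8:
        return "deepseek-r1:7b"    # GTX 1660 or better
    return "deepseek-r1:1.5b"      # modest machines, integrated GPU or CPU
```

For example, `suggest_deepseek_variant(4)` returns `"deepseek-r1:1.5b"`, the variant recommended above for modest computers.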

As you can see, using DeepSeek in Visual Studio Code offers an excellent, free alternative to paid code assistants. Running it locally through Ollama, LM Studio, or Jan gives developers an advanced tool without relying on cloud services or paying monthly fees. If you set up your environment properly, you'll have a private, powerful AI assistant completely under your control.
