- DeepSeek R1 is a free and open-source AI model that you can integrate into Visual Studio Code as a coding assistant.
- There are several ways to run DeepSeek locally without relying on the cloud, including tools such as Ollama, LM Studio, and Jan.
- To get the most out of DeepSeek, it's key to choose the right model based on your available hardware and configure it correctly in extensions like CodeGPT or Cline.
DeepSeek R1 has emerged as a powerful, free alternative to paid solutions. Its greatest asset is that it gives developers advanced AI coding assistance without relying on cloud servers. In this article, we explain how to use DeepSeek in Visual Studio Code.
Thanks to versions optimized for local execution, the integration comes at no additional cost. All you need are tools like Ollama, LM Studio, or Jan, combined with extensions such as CodeGPT and Cline. We'll tell you everything in the following paragraphs:
What is DeepSeek R1?
As we already explained here, DeepSeek R1 is an open-source language model that competes with commercial solutions such as GPT-4 in logical reasoning, code generation, and mathematical problem solving. Its main advantage is that it can be run locally without relying on external servers, giving developers a high level of privacy.
Depending on the available hardware, different versions of the model can be used, from 1.5B parameters (for modest computers) to 70B parameters (for high-performance PCs with advanced GPUs).
Methods to Run DeepSeek in VSCode
To get the best performance out of DeepSeek in Visual Studio Code, it's essential to choose the right solution to run it on your system. There are three main options:
Option 1: Using Ollama
Ollama is a lightweight platform that allows you to run AI models locally. Follow these steps to install and use DeepSeek with Ollama:
- Download and install Ollama from its official website (ollama.com).
- In a terminal, run `ollama pull deepseek-r1:1.5b` (for the lightest model) or a larger variant if your hardware allows it.
- Once downloaded, Ollama will serve the model at http://localhost:11434, making it accessible to VSCode (see the quick test below).
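Before wiring it into VSCode, you can confirm the server is responding. Here is a minimal sanity check against Ollama's standard REST endpoint, assuming you pulled the 1.5B variant:

```bash
# Ask the local Ollama server (default port 11434) for a one-off completion
curl http://localhost:11434/api/generate \
  -d '{"model": "deepseek-r1:1.5b", "prompt": "Say hello", "stream": false}'
```

If the command returns a JSON response with generated text, the model is ready for the extensions described below.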
Option 2: Using LM Studio
LM Studio is another option for easily downloading and managing this type of language model (and for using DeepSeek in Visual Studio Code). Here's how to use it:
- First, download LM Studio and install it on your system.
- Search for and download the DeepSeek R1 model from the Discover tab.
- Load the model and enable the local server to run DeepSeek in Visual Studio Code (a quick test follows below).
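Once the local server is enabled, LM Studio exposes an OpenAI-compatible API, by default on port 1234; check the server panel in the app for your exact port and model identifier. A quick test might look like this (the model name is just an illustrative placeholder):

```bash
# Query LM Studio's OpenAI-compatible endpoint (default port 1234; verify in the app).
# The model name below is a placeholder: use the identifier LM Studio shows for your download.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "deepseek-r1-distill-qwen-7b", "messages": [{"role": "user", "content": "Explain closures in one sentence."}]}'
```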
Option 3: Using Jan
The third option we recommend is Jan, another viable alternative for running AI models locally. To use it, you must do the following:
- First, download the version of Jan that corresponds to your operating system.
- Then download DeepSeek R1 from Hugging Face and load it into Jan.
- Finally, start the server at http://localhost:1337 and set it up in VSCode (see the test below).
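Jan's local server also speaks the OpenAI-compatible API, so you can test it the same way. Here is a minimal check assuming the port above (the model ID is a placeholder; use the identifier Jan shows for your loaded model):

```bash
# Test Jan's local server on port 1337 (as configured above).
# "deepseek-r1" is a placeholder model ID: use the one listed in Jan.
curl http://localhost:1337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "deepseek-r1", "messages": [{"role": "user", "content": "Say hello"}]}'
```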
If you want to explore how to use DeepSeek in other environments, feel free to check out our guide on DeepSeek in Windows 11.

DeepSeek Integration with Visual Studio Code
Once you have DeepSeek working locally, it's time to integrate it into Visual Studio Code. To do this, you can use extensions like CodeGPT or Cline.
Configuring CodeGPT
- From the Extensions tab in VSCode (Ctrl + Shift + X), search for and install CodeGPT.
- Open the extension settings and select Ollama as the LLM provider.
- Enter the URL of the local server where DeepSeek is running (for example, http://localhost:11434 for Ollama).
- Select the downloaded DeepSeek model and save the settings.
Configuring Cline
Cline is a tool geared more toward automated code execution. To use it with DeepSeek in Visual Studio Code, follow these steps:
- Install the Cline extension in VSCode.
- Open the settings and select the API provider (Ollama or Jan).
- Enter the URL of the local server where DeepSeek is running.
- Choose the AI model and confirm the settings.
For more information on DeepSeek deployments, I recommend checking out how Microsoft integrates DeepSeek R1 into Windows Copilot, which can give you a broader perspective on its capabilities.
Tips for Choosing the Right Model
DeepSeek's performance in Visual Studio Code will largely depend on the model you choose and the capabilities of your hardware. For reference, see the following table:
| Model | Required RAM | Recommended GPU |
|---|---|---|
| 1.5B | 4 GB | Integrated or CPU |
| 7B | 8-10 GB | GTX 1660 or higher |
| 14B | 16 GB+ | RTX 3060/3080 |
| 70B | 40 GB+ | RTX 4090 |
If your PC is underpowered, you can opt for smaller models or quantized versions to reduce memory consumption.
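With Ollama, for example, switching model sizes is just a matter of the tag you pull. The exact tags available change over time, so check ollama.com/library/deepseek-r1 for the current list:

```bash
# Smaller variants use less RAM at some cost in quality.
# Available tags may change; verify at ollama.com/library/deepseek-r1.
ollama pull deepseek-r1:7b     # mid-range option (roughly 8-10 GB of RAM)
ollama pull deepseek-r1:1.5b   # lightest option (around 4 GB of RAM)
```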
As you can see, using DeepSeek in Visual Studio Code gives us an excellent, free alternative to paid code assistants. The ability to run it locally through Ollama, LM Studio, or Jan lets developers benefit from an advanced tool without relying on cloud services or monthly fees. If you set up your environment properly, you'll have a private, powerful AI assistant completely under your control.