
VSCode with Deepseek

Complete Guide: Set Up VSCode with Ollama

This guide will walk you through setting up VSCode with Ollama, installing models, and running them locally.


1️⃣ Install Ollama

Download and Install Ollama:

  • Visit the Ollama download page (https://ollama.com/download) to download the appropriate version for your operating system.
  • Follow the installation instructions for your platform.

Verify the installation by running:

ollama --version

This should output the version of Ollama installed on your system.
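
Beyond checking the CLI version, you can confirm that the Ollama background server is actually listening. The sketch below assumes Ollama's default local endpoint on port 11434:

# Ping the local Ollama server (default port 11434);
# a healthy server replies with "Ollama is running".
curl http://localhost:11434/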

2️⃣ Pull a Model with Ollama

After installing Ollama, you can pull and run a model with a single terminal command. For example, this guide uses DeepSeek-R1-Distill-Qwen-14B.

Check the following table to find out which version best suits your PC:

1.5B = ~3.5GB RAM
7B = ~16GB RAM
8B = ~18GB RAM
14B = ~32GB RAM
70B = ~161GB RAM
671B = ~1342GB RAM
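
Not sure how much memory your machine has? A quick terminal check will tell you (pick the command for your platform; both are standard utilities):

# Linux: show total and available memory in human-readable units
free -h

# macOS: show total physical memory in bytes
sysctl hw.memsize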

I will choose the following model:

DeepSeek-R1-Distill-Qwen-14B

ollama run deepseek-r1:14b

This will download DeepSeek-R1-Distill-Qwen-14B on the first run and then start it in an interactive chat session in your terminal.
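
Besides the interactive prompt, you can also send the running model a one-off request through Ollama's local REST API. This is a minimal sketch, assuming the default endpoint on port 11434 and an example prompt of my own:

# Send a single prompt to the local model via Ollama's REST API.
# "stream": false returns the whole answer as one JSON object.
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:14b",
  "prompt": "Explain recursion in one sentence.",
  "stream": false
}'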

3️⃣ Install Cline VSCode Extension

  1. Go to the VSCode Extension Marketplace and search for Cline.
  2. Install the Cline extension.
  3. After installation, select Ollama as the API Provider in the extension’s settings.
  4. Once installed, the extension should automatically detect the available models (if nothing appears, see the quick check after this list).
  5. Select deepseek-r1:14b (the installed model) and click Let’s Go! to proceed.
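
If Cline doesn’t list any models, you can verify what Ollama actually has installed; this is the standard listing command, run in any terminal:

# List the models installed locally; Cline reads this same
# inventory from the local Ollama server.
ollama list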

Now, you’re ready to use DeepSeek with VSCode and Ollama!

This post is licensed under CC BY 4.0 by the author.