
Ollama Guide

Guide to Using Ollama

Ollama allows you to run large language models (LLMs) such as Llama 2, Mistral, and others on your local machine.


1️⃣ Install Ollama

🔹 On macOS & Linux

Run the following command in your terminal:

curl -fsSL https://ollama.com/install.sh | sh

🔹 On Windows

  1. Download Ollama from the official download page: https://ollama.com/download
  2. Run the installer and follow the setup instructions.

2️⃣ Running Ollama

✅ Check if Ollama is installed

ollama --version

📜 List available models

ollama list
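
On a fresh install this list will be empty. To download a model without immediately starting a chat session, pull it first:

ollama pull mistral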

📥 Download and Run a Model

To run Llama 2:

ollama run llama2

To run Mistral:

ollama run mistral
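
ollama run opens an interactive chat session; type /bye to exit. You can also pass a one-off prompt directly on the command line:

ollama run mistral "Explain what a Modelfile is in one sentence."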

3️⃣ Using Ollama in a Python Script

🛠 Install Ollama’s Python package

pip install ollama

💻 Run a simple Python script

import ollama

response = ollama.chat(model='mistral', messages=[{'role': 'user', 'content': 'What is Ollama?'}])
print(response['message']['content'])
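
The client can also stream the reply token by token, which is handy for long answers. A minimal sketch using the same ollama package:

import ollama

# stream=True returns an iterator of partial responses
stream = ollama.chat(
    model='mistral',
    messages=[{'role': 'user', 'content': 'What is Ollama?'}],
    stream=True,
)
for chunk in stream:
    print(chunk['message']['content'], end='', flush=True)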

4️⃣ Customizing Models

You can create your own customized models with Modelfiles, which adapt an existing base model's behavior rather than retraining its weights.

📄 Create a custom model

  1. Create a Modelfile (a fuller example follows this list):

    FROM mistral

  2. Run:

    ollama create mymodel -f Modelfile

  3. Use it with:

    ollama run mymodel

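A Modelfile can hold more than the base model name. As a minimal sketch, here is one that also sets a system prompt and a sampling parameter (the temperature value is just an illustrative choice):

FROM mistral

# Lower temperature for more deterministic answers (illustrative value)
PARAMETER temperature 0.3

SYSTEM "You are a concise technical assistant."
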
5️⃣ Running Ollama as an API

Ollama provides an API to integrate with your applications.

🖥 Start the Ollama server

ollama serve
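
On macOS and Windows, the desktop app already runs this server in the background, so ollama serve is mainly needed on Linux or in headless setups. You can check that the server is up with:

curl http://localhost:11434

which should respond with "Ollama is running".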

📡 Send API requests using curl

curl http://localhost:11434/api/generate -d '{ "model": "mistral", "prompt": "Hello, how are you?" }'
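
By default, /api/generate streams the answer back as a series of JSON objects, one per line. To get a single JSON response instead, set "stream" to false:

curl http://localhost:11434/api/generate -d '{ "model": "mistral", "prompt": "Hello, how are you?", "stream": false }'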

🚀 Final Thoughts

  • Ollama is great for running LLMs locally without sending data to the cloud.
  • It works offline once the models are downloaded.
  • You can customize models for specific tasks using Modelfiles.


🚀 Creating Your Own Model in Ollama

Ollama lets you customize how a model behaves using a Modelfile, which layers a system prompt, parameters, and example messages on top of an existing base model.


1️⃣ Install Ollama

If you haven’t installed Ollama yet, do so first:

curl -fsSL https://ollama.com/install.sh | sh

Check if it’s installed:

ollama --version

2️⃣ Create a Custom Model Using a Modelfile

A Modelfile is a simple way to customize a model's behavior in Ollama.

📄 Step 1: Create a Modelfile

Create a new file named Modelfile:

touch Modelfile

Open it in an editor:

nano Modelfile

✍️ Step 2: Define Your Model in the Modelfile

Here’s an example of a custom model using Mistral as a base:

FROM mistral

# Add system instructions
SYSTEM "You are an AI assistant trained for research tasks."

# Add example messages to guide the model's responses
MESSAGE user "Translate the following English text to French: Hello! How are you?"
MESSAGE assistant "Bonjour ! Comment ça va ?"

3️⃣ Build Your Custom Model

After defining your Modelfile, run:

ollama create mymodel -f Modelfile

This will create a new custom model named “mymodel”.
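
You can confirm it exists by listing your local models:

ollama list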


4️⃣ Run Your Custom Model

You can now use your fine-tuned model by running:

ollama run mymodel

Or interact via API:

curl http://localhost:11434/api/generate -d '{ "model": "mymodel", "prompt": "Hello!" }'
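
If you installed the Python package earlier, the same chat call works with the custom model:

import ollama

# 'mymodel' is the custom model built above
response = ollama.chat(model='mymodel', messages=[{'role': 'user', 'content': 'Hello!'}])
print(response['message']['content'])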

5️⃣ (Optional) Modify and Improve Your Model

You can enhance your Modelfile by:

  • Adding more instructions to change behavior.
  • Including example MESSAGE pairs to steer the style of responses.
  • Using different base models like llama2, gemma, or mistral.

Example:

FROM llama2

SYSTEM "You are an AI specialized in medical diagnosis."

MESSAGE user "Describe symptoms of a common cold."
MESSAGE assistant "The common cold often includes sneezing, runny nose, and mild fever."

Rebuild after changes; re-running ollama create with the same name replaces the existing model:

ollama create mymodel -f Modelfile
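
To delete a model you no longer need:

ollama rm mymodel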

🎯 Conclusion

  • Custom models let you tailor behavior for specific tasks.
  • Modelfiles help define instructions and sample responses.
  • You can build and deploy models locally without cloud dependency.

🚀 Now you have a custom AI model in Ollama! 🎉

This post is licensed under CC BY 4.0 by the author.