Ollama

Basics

Installation

curl -fsSL https://ollama.com/install.sh | sh Install on Linux
brew install ollama Install on macOS
ollama serve Start Ollama server

Model Management

ollama pull llama3 Download model
ollama list List downloaded models
ollama show llama3 Show model info
ollama rm llama3 Remove model
ollama cp llama3 my-llama3 Copy model
ollama ps List running models
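When scripting against `ollama list`, note that a bare model name is shorthand for the `:latest` tag, which is how `ollama pull` and `ollama list` label models. A small illustrative helper (the `has_model` function is hypothetical, not part of any Ollama tooling):

```python
def has_model(installed: list[str], name: str) -> bool:
    """Check whether `name` is among installed model tags.

    A bare name like "llama3" is treated as "llama3:latest",
    matching how the Ollama CLI tags models by default.
    """
    if ":" not in name:
        name += ":latest"
    return name in installed

# Example: tags as they appear in the NAME column of `ollama list`
tags = ["llama3:latest", "mistral:7b"]
print(has_model(tags, "llama3"))   # True
print(has_model(tags, "mistral"))  # False (only the 7b tag is present)
```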

Running Models

ollama run llama3 Run and chat
ollama run llama3 "What is Python?" Single prompt
ollama run codellama "Write a Python function" Code generation
echo "Hello" | ollama run llama3 Pipe input

Popular Models

Available Models

  • llama3 - Meta Llama 3 (8B), general purpose
  • llama3:70b - Llama 3 70B, more capable
  • mistral - Mistral 7B, fast and efficient
  • mixtral - Mistral MoE 8x7B
  • codellama - Code Llama for programming
  • phi3 - Microsoft Phi-3, compact model
  • gemma - Google Gemma models
  • qwen2 - Alibaba Qwen 2
  • deepseek-coder - DeepSeek for coding
  • nomic-embed-text - Text embeddings

API

REST API

Generate completion
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
Chat completion
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Hello!" }
  ],
  "stream": false
}'
Generate embeddings
curl http://localhost:11434/api/embeddings -d '{
  "model": "nomic-embed-text",
  "prompt": "Hello world"
}'
List models
curl http://localhost:11434/api/tags
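The same endpoints can be called from Python's standard library alone. A minimal sketch assuming the default server at `localhost:11434` (the `chat` helper here is illustrative, not part of any client library):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # default Ollama server address

def build_chat_payload(model, messages, stream=False):
    """Mirror the JSON body of the /api/chat curl example above."""
    return {"model": model, "messages": messages, "stream": stream}

def chat(model, messages):
    """POST to /api/chat and return the assistant's reply text."""
    body = json.dumps(build_chat_payload(model, messages)).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/chat",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

# Requires a running server:
# chat("llama3", [{"role": "user", "content": "Hello!"}])
```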

Python Client

Install & Basic usage
# pip install ollama

import ollama

# Generate
response = ollama.generate(
    model='llama3',
    prompt='Why is the sky blue?'
)
print(response['response'])

# Chat
response = ollama.chat(
    model='llama3',
    messages=[
        {'role': 'user', 'content': 'Hello!'}
    ]
)
print(response['message']['content'])
Streaming
import ollama

# Streaming response
stream = ollama.chat(
    model='llama3',
    messages=[{'role': 'user', 'content': 'Tell me a story'}],
    stream=True
)

for chunk in stream:
    print(chunk['message']['content'], end='', flush=True)
Embeddings
import ollama

response = ollama.embeddings(
    model='nomic-embed-text',
    prompt='Hello world'
)

embedding = response['embedding']
print(f'Dimension: {len(embedding)}')
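Embedding vectors are typically compared with cosine similarity. A pure-Python sketch (swap in NumPy for real workloads):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0 (same direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal)
```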
Model management
import ollama

# List models
models = ollama.list()
for model in models['models']:
    print(model['name'])

# Pull model
ollama.pull('llama3')

# Show model info
info = ollama.show('llama3')
print(info)

JavaScript Client

Basic usage
// npm install ollama

import { Ollama } from 'ollama';

const ollama = new Ollama();

// Generate
const response = await ollama.generate({
  model: 'llama3',
  prompt: 'Why is the sky blue?'
});
console.log(response.response);

// Chat
const chatResponse = await ollama.chat({
  model: 'llama3',
  messages: [{ role: 'user', content: 'Hello!' }]
});
console.log(chatResponse.message.content);
Streaming
const stream = await ollama.chat({
  model: 'llama3',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true
});

for await (const chunk of stream) {
  process.stdout.write(chunk.message.content);
}

Customization

Modelfile

Create custom model
# Modelfile
FROM llama3

# Set parameters
PARAMETER temperature 0.7
PARAMETER top_p 0.9
PARAMETER top_k 40
PARAMETER num_ctx 4096
PARAMETER stop "<|end|>"

# Set system prompt
SYSTEM You are a helpful coding assistant.

# Create model
# ollama create mymodel -f Modelfile
From GGUF file
# Modelfile
FROM ./my-model.gguf

TEMPLATE """{{ if .System }}<|system|>
{{ .System }}<|end|>
{{ end }}{{ if .Prompt }}<|user|>
{{ .Prompt }}<|end|>
{{ end }}<|assistant|>
"""

Parameters

Generation parameters
ollama.generate(
    model='llama3',
    prompt='Hello',
    options={
        'temperature': 0.7,    # Creativity (0-2)
        'top_p': 0.9,          # Nucleus sampling
        'top_k': 40,           # Top-k sampling
        'num_predict': 128,    # Max tokens
        'num_ctx': 4096,       # Context window
        'repeat_penalty': 1.1, # Repetition penalty
        'seed': 42,            # Random seed
    }
)

Integrations

LangChain Integration

With LangChain
from langchain_community.llms import Ollama
from langchain_community.chat_models import ChatOllama
from langchain_core.messages import HumanMessage

# LLM
llm = Ollama(model="llama3")
response = llm.invoke("Hello!")

# Chat model
chat = ChatOllama(model="llama3")
response = chat.invoke([HumanMessage(content="Hello!")])

# Embeddings
from langchain_community.embeddings import OllamaEmbeddings

embeddings = OllamaEmbeddings(model="nomic-embed-text")
vector = embeddings.embed_query("Hello world")

OpenAI Compatibility

OpenAI-compatible API
# Ollama exposes an OpenAI-compatible endpoint
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama"  # Any string works
)

response = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)

Advanced

Configuration

Environment variables
# Model storage location
export OLLAMA_MODELS=/path/to/models

# Server host/port
export OLLAMA_HOST=0.0.0.0:11434

# Keep model loaded
export OLLAMA_KEEP_ALIVE=5m

# GPU settings
export OLLAMA_NUM_GPU=1
export CUDA_VISIBLE_DEVICES=0

# Debug mode
export OLLAMA_DEBUG=1

Docker

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama Run with Docker
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama Run with GPU
docker exec -it ollama ollama run llama3 Run model in container