LangChain

Basics

Installation

pip install langchain Install LangChain
pip install langchain-openai OpenAI integration
pip install langchain-anthropic Anthropic integration
pip install langchain-community Community integrations
pip install langchain-chroma Chroma vector store

LLM Basics

OpenAI
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
  model="gpt-4",
  temperature=0.7,
  api_key="your-api-key"
)

response = llm.invoke("Hello, how are you?")
Anthropic
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(
  model="claude-3-sonnet-20240229",
  api_key="your-api-key"
)
Messages
from langchain_core.messages import HumanMessage, SystemMessage, AIMessage

messages = [
  SystemMessage(content="You are a helpful assistant."),
  HumanMessage(content="What is Python?"),
]

response = llm.invoke(messages)

Prompts

Prompt Templates

Basic template
from langchain_core.prompts import PromptTemplate

template = PromptTemplate.from_template(
  "Tell me a {adjective} joke about {topic}."
)

prompt = template.format(adjective="funny", topic="programming")
Chat template
from langchain_core.prompts import ChatPromptTemplate

template = ChatPromptTemplate.from_messages([
  ("system", "You are a {role}."),
  ("human", "{input}"),
])

messages = template.format_messages(
  role="helpful assistant",
  input="What is AI?"
)
Few-shot prompts
from langchain_core.prompts import FewShotPromptTemplate

examples = [
  {"input": "happy", "output": "sad"},
  {"input": "tall", "output": "short"},
]

example_prompt = PromptTemplate.from_template(
  "Input: {input}\nOutput: {output}"
)

few_shot = FewShotPromptTemplate(
  examples=examples,
  example_prompt=example_prompt,
  prefix="Give the antonym:",
  suffix="Input: {word}\nOutput:",
  input_variables=["word"],
)

Chains

LCEL Chains

Simple chain
from langchain_core.output_parsers import StrOutputParser

chain = template | llm | StrOutputParser()

result = chain.invoke({"topic": "Python"})
With retriever
from langchain_core.runnables import RunnablePassthrough

rag_chain = (
  {"context": retriever, "question": RunnablePassthrough()}
  | prompt
  | llm
  | StrOutputParser()
)

result = rag_chain.invoke("What is RAG?")
Parallel chains
from langchain_core.runnables import RunnableParallel

chain = RunnableParallel(
  summary=summary_chain,
  translation=translation_chain,
)

result = chain.invoke({"text": "Hello world"})
Branching
from langchain_core.runnables import RunnableBranch

branch = RunnableBranch(
  (lambda x: "math" in x["topic"].lower(), math_chain),
  (lambda x: "science" in x["topic"].lower(), science_chain),
  general_chain,  # default
)

RAG

Document Loading

Text loader
from langchain_community.document_loaders import TextLoader

loader = TextLoader("document.txt")
docs = loader.load()
PDF loader
from langchain_community.document_loaders import PyPDFLoader

loader = PyPDFLoader("document.pdf")
docs = loader.load()
Web loader
from langchain_community.document_loaders import WebBaseLoader

loader = WebBaseLoader("https://example.com")
docs = loader.load()
Directory loader
from langchain_community.document_loaders import DirectoryLoader

loader = DirectoryLoader("./docs", glob="**/*.txt")
docs = loader.load()

Text Splitting

Character splitter
from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
  chunk_size=1000,
  chunk_overlap=200,
)

chunks = splitter.split_documents(docs)
Token splitter
from langchain_text_splitters import TokenTextSplitter

splitter = TokenTextSplitter(
  chunk_size=500,
  chunk_overlap=50,
)

chunks = splitter.split_documents(docs)

Vector Store

Chroma
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

vectorstore = Chroma.from_documents(
  documents=chunks,
  embedding=embeddings,
  persist_directory="./chroma_db"
)

# Retriever
retriever = vectorstore.as_retriever(
  search_type="similarity",
  search_kwargs={"k": 4}
)
FAISS
from langchain_community.vectorstores import FAISS

vectorstore = FAISS.from_documents(chunks, embeddings)

# Save and load
vectorstore.save_local("faiss_index")
vectorstore = FAISS.load_local(
  "faiss_index",
  embeddings,
  allow_dangerous_deserialization=True  # required in recent versions (index is pickled)
)

RAG Chain

Complete RAG
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

template = """Answer based on context:
{context}

Question: {question}
"""

prompt = ChatPromptTemplate.from_template(template)

def format_docs(docs):
  return "\n\n".join(doc.page_content for doc in docs)

rag_chain = (
  {"context": retriever | format_docs, "question": RunnablePassthrough()}
  | prompt
  | llm
  | StrOutputParser()
)

result = rag_chain.invoke("What is machine learning?")

Agents

Tools & Agents

Define tool
from langchain_core.tools import tool

@tool
def search(query: str) -> str:
  """Search the web for information."""
  return f"Results for: {query}"

@tool
def calculator(expression: str) -> str:
  """Evaluate a math expression."""
  return str(eval(expression))  # demo only: eval is unsafe on untrusted input
Create agent
from langchain.agents import create_tool_calling_agent, AgentExecutor

tools = [search, calculator]

prompt = ChatPromptTemplate.from_messages([
  ("system", "You are a helpful assistant."),
  ("human", "{input}"),
  ("placeholder", "{agent_scratchpad}"),
])

agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)

result = executor.invoke({"input": "What is 25 * 4?"})
ReAct agent
from langchain.agents import create_react_agent
from langchain import hub

prompt = hub.pull("hwchase17/react")
agent = create_react_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

Memory

Conversation Memory

Buffer memory
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(return_messages=True)

memory.save_context(
  {"input": "Hi, I am John"},
  {"output": "Hello John!"}
)

history = memory.load_memory_variables({})
Window memory
from langchain.memory import ConversationBufferWindowMemory

memory = ConversationBufferWindowMemory(
  k=5,  # Keep last 5 exchanges
  return_messages=True
)
Summary memory
from langchain.memory import ConversationSummaryMemory

memory = ConversationSummaryMemory(
  llm=llm,
  return_messages=True
)
With chain
from langchain.chains import ConversationChain

chain = ConversationChain(
  llm=llm,
  memory=memory,
  verbose=True
)

response = chain.invoke({"input": "Hi!"})