ChatGPT, Claude, and more on your system - Multi-provider AI chat with MCP
v3.0.0-b3 - Multi-Provider Support! 🔄

oAI is a powerful command-line interface that brings AI models directly to your terminal with advanced capabilities. Beyond simple chat, oAI now features MCP (Model Context Protocol) integration, allowing AI agents to access your local files and query SQLite databases securely - all from the comfort of your command line.
Whether you're debugging code, analyzing data, exploring databases, or just chatting with AI, oAI provides a rich, interactive experience with full conversation management, cost tracking, and intelligent file handling.
oAI now features universal web search for all providers (Anthropic Native, DuckDuckGo, Google),
a new /commands reference modal, enhanced /config screen with web search settings,
and a smart /credits command! Plus multi-provider support, Claude 4.5 models, and refined UI!
Three web search options for ALL providers: Anthropic Native (with citations, $0.01/search), DuckDuckGo (free), or Google Custom Search. Enable with /online on.
New modal screen showing all available commands, organized by category. Access instantly with /commands for quick command discovery.
Updated configuration modal now shows web search provider settings, API key status, and Google-specific configuration. Cleaner, more informative interface.
Shows real-time credit balance for OpenRouter. For other providers, provides helpful links to console billing pages. One command works for all!
- /commands reference modal - comprehensive command documentation in-app
- /config screen - shows web search provider, API key status, Google settings
- /credits command - shows balance or helpful console links per provider
- /provider command - change providers mid-session

v3.0.0-b3 is beta software. While I strive for stability, beta versions may contain bugs, incomplete features, or unexpected behavior. I actively work on improvements and appreciate your feedback.
Beta releases are ideal for testing new features and providing feedback. For production use or maximum stability, consider using the latest stable release.
📦 Latest Stable Release: v2.1
For a tested and stable version without beta features, download v2.1:
Download v2.1 Stable Release →

💡 Tip: Report bugs or issues on the GitLab issue tracker.
Persistent conversation memory with full history management. Toggle memory on/off to optimize costs.
Attach images, PDFs, and code files directly to your prompts with simple @file syntax.
Three search options for ALL providers: Anthropic Native (with citations), DuckDuckGo (free), or Google Custom Search. Enable with /online on.
Real-time token usage and cost monitoring with configurable alerts to keep your budget in check.
Save conversations to database and export to Markdown, JSON, or HTML formats.
Beautiful terminal interface with markdown rendering, syntax highlighting, and smooth streaming.
Paste code or text directly from clipboard, and copy AI responses with a single keystroke.
Customize everything from model selection to token limits, streaming, and system prompts.
MCP enables your AI to become a true agent with direct access to your local filesystem and databases. This transforms conversations from "copy-paste code to AI" into "AI directly analyzes your codebase and data."
Every folder and database requires explicit approval. MCP operates in read-only mode by default with automatic filtering of sensitive directories (.git, node_modules, venv), query validation, and timeout protection. Write mode is OFF by default and requires explicit activation. Delete operations always require user confirmation.
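The sensitive-directory filtering described above can be sketched roughly as follows. This is an illustrative helper, not oAI's actual implementation, and the directory set is only the subset named above:

```python
from pathlib import PurePosixPath

# Directory names oAI filters by default (per the list above);
# an illustrative subset, not the full rule set.
SENSITIVE_DIRS = {".git", "node_modules", "venv"}

def is_allowed(path: str) -> bool:
    """Reject any path that passes through a sensitive directory."""
    return not any(part in SENSITIVE_DIRS for part in PurePosixPath(path).parts)

def filter_paths(paths: list[str]) -> list[str]:
    """Keep only paths that are safe to expose to the AI agent."""
    return [p for p in paths if is_allowed(p)]
```

With this check, a listing like `["src/main.py", ".git/config", "node_modules/pkg/index.js"]` is reduced to just `["src/main.py"]` before anything reaches the model.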
Grant AI access to specific folders. It can intelligently navigate, read, and search your files.
You> /mcp on
You> /mcp add ~/Projects
[🔧 MCP: Files] You> List all Python files
[🔧 MCP: Files] You> Read main.py and explain it
Enable AI to create, edit, and delete files on your system. OFF by default for safety.
You> /mcp on
You> /mcp write on
[✍️ 🔧 MCP: Files] You> Create a README.md file
[✍️ 🔧 MCP: Files] You> Fix the syntax error in app.py
Connect SQLite databases for intelligent data exploration with full SQL query support.
You> /mcp add db ~/app/data.db
You> /mcp db 1
[🗄️ MCP: DB #1] You> Show all tables
[🗄️ MCP: DB #1] You> Find users created this month
You> /mcp on
✓ MCP Filesystem Server started successfully
# For file access
You> /mcp add ~/Documents
# For database querying
You> /mcp add db ~/myapp/database.db
# Use files (default)
You> /mcp files
# Switch to database
You> /mcp db 1
# In file mode
[🔧 MCP: Files] You> What Python files are in my project?
# In database mode
[🗄️ MCP: DB #1] You> How many users signed up last week?
- /mcp status to see comprehensive stats and current mode
- /mcp gitignore on|off to control file filtering
- /mcp write on - look for the ✍️ indicator
- /help mcp for the complete MCP guide
All other dependencies are automatically installed from the included requirements.txt file.
If you use the prebuilt binary (recommended), you don't need to worry about any other requirements - this is the simplest option if you're not going to develop on the code.
Download binary for your system from the release page.
Move it to a directory on your PATH (e.g. /usr/local/bin).
Run chmod +x your-path-here/oai
Run from anywhere with just oai
Follow these steps to install oAI v3.0.0-b3 on your system (if you are going to develop on the project):
git clone https://gitlab.pm/rune/oai.git
# Or download the latest zip file from the release page
cd oai
Install the package in editable mode (for development):
pip install -e .
Launch the TUI interface:
oai
Or with options:
oai --model gpt-4o --online --mcp
On first run, you will be prompted to enter your OpenRouter API key.
If you have issues with the above method, you can add an alias in your .bashrc, .zshrc, etc:
alias oai='python3 /path/to/oai/oai.py'
Replace /path/to/oai/oai.py with the actual path to your oai.py file.
Visit openrouter.ai, create a free account, and generate an API key from your dashboard. OpenRouter provides access to 300+ AI models through a single API.
After installation, simply run:
oai
This starts an interactive chat session. If no API key is configured, you'll be prompted to enter one.
You> /model
Browse and select from 300+ available models. You can search by name:
You> /model gpt
You> /model claude
# Enable MCP and add your project folder
You> /mcp on
You> /mcp add ~/Projects/myapp
# AI can now access your files
[🔧 MCP: Files] You> List all Python files in this project
🔧 AI requesting 1 tool call(s)...
→ Calling: search_files(pattern="*.py")
✓ Found 24 file(s)
AI Response: I found 24 Python files in your project:
- src/main.py
- src/utils.py
- tests/test_main.py
...
[🔧 MCP: Files] You> Read main.py and explain what it does
🔧 AI requesting 1 tool call(s)...
→ Calling: read_file(file_path="~/Projects/myapp/src/main.py")
✓ Read file (8192 bytes)
AI Response: This is the main entry point for your application...
# Add a database
You> /mcp add db ~/myapp/users.db
✓ Added database #1: users.db
# Switch to database mode
You> /mcp db 1
# Ask natural language questions
[🗄️ MCP: DB #1] You> What tables are in this database?
🔧 AI requesting 1 tool call(s)...
→ Calling: inspect_database()
✓ Inspected database (5 tables)
AI Response: This database contains 5 tables:
1. users (1,234 rows)
2. posts (5,678 rows)
3. comments (12,345 rows)
...
[🗄️ MCP: DB #1] You> How many users signed up in December 2024?
🔧 AI requesting 1 tool call(s)...
→ Calling: query_database(query="SELECT COUNT(*) FROM users WHERE created_at >= '2024-12-01'")
✓ Query returned 1 row(s)
AI Response: There were 47 users who signed up in December 2024.
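Under the hood, a query like the one above runs read-only with timeout protection (as noted in the security description). With Python's sqlite3 module, both guards can be sketched together - this is an assumption about how such a tool might be built, not oAI's actual code:

```python
import sqlite3
import time

def safe_query(db_path: str, sql: str, timeout_s: float = 5.0):
    """Run a query read-only, aborting it if it exceeds a deadline."""
    # mode=ro opens the file read-only, so any write statement fails outright
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    deadline = time.monotonic() + timeout_s
    # The progress handler fires every N VM instructions; a nonzero
    # return value interrupts the running statement
    conn.set_progress_handler(
        lambda: 1 if time.monotonic() > deadline else 0, 10_000
    )
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()
```

The `mode=ro` URI flag rejects writes at the connection level, while the progress handler bounds how long any single statement may run.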
Simply type your question or prompt:
You> Explain quantum computing in simple terms
Attach images, PDFs, or code files using the @ symbol:
You> Analyze this code @script.py
You> Summarize this document @report.pdf
You> What's in this image? @photo.jpg
Enable online mode for models that support it:
You> /online on
Online mode enabled. Model will use web search capabilities.
You> What are the latest developments in AI?
🌐 Online mode active - model has web search access
# Save current conversation
You> /save my_project_discussion
# List saved conversations
You> /list
# Load a conversation by name or number
You> /load my_project_discussion
You> /load 3
# Export to file
You> /export md conversation.md
You> /export html report.html
# Paste code from clipboard
You> /paste
# Paste with additional prompt
You> /paste Explain this code and suggest improvements
| Command | Description |
|---|---|
| /mcp on | Start MCP server for file and database access |
| /mcp off | Stop MCP server |
| /mcp status | Show comprehensive status (mode, folders, databases, stats) |
| /mcp add <folder> | Add folder for file access (auto-loads .gitignore patterns) |
| /mcp add db <path> | Add SQLite database for querying |
| /mcp list | List all allowed folders with file counts and sizes |
| /mcp db list | List all databases with table counts and details |
| /mcp db <number> | Switch to database mode (select database by number) |
| /mcp files | Switch to file mode (default) |
| /mcp remove <num> | Remove folder by number (requires confirmation) |
| /mcp remove db <num> | Remove database by number (requires confirmation) |
| /mcp gitignore on\|off | Toggle .gitignore filtering (default: on) |
| /mcp write on\|off | Enable/disable write mode (allows AI to create, edit, delete files). OFF by default, resets each session |
| Command | Description |
|---|---|
| /clear or /cl | Clear the terminal screen (or use Ctrl+L) |
| /help [command\|topic] | Show all commands or detailed help. Use /help mcp for the MCP guide |
| /commands | Show comprehensive commands reference modal with all commands organized by category |
| /memory [on\|off] | Toggle conversation memory. OFF saves costs by not sending history |
| /online [on\|off] | Enable/disable web search (Anthropic Native, DuckDuckGo, or Google - all providers supported) |
| /retry | Resend the last prompt |
| /reset | Clear conversation history and system prompt |
| /prev / /next | Navigate through conversation history |
| /paste [prompt] | Send clipboard content to AI with optional prompt |
| Command | Description |
|---|---|
| /provider [name] | Show current provider or switch to another (openrouter/anthropic/openai/ollama). Remembers last model per provider |
| /model [search] | Select or change the current model. Supports searching by name or ID. Shows image capabilities and online support |
| /info [model_id] | Display detailed information about current or specified model (pricing, capabilities, context length, function calling support) |
| Command | Description |
|---|---|
| /config | View all current configurations including provider, web search, and MCP status |
| /config provider [name] | Set default provider (openrouter/anthropic/openai/ollama) |
| /config openrouter_api_key [key] | Set OpenRouter API key |
| /config anthropic_api_key [key] | Set Anthropic API key |
| /config openai_api_key [key] | Set OpenAI API key |
| /config ollama_base_url [url] | Set Ollama server URL (default: http://localhost:11434) |
| /config search_provider [provider] | Set web search provider (anthropic_native/duckduckgo/google) |
| /config google_api_key [key] | Set Google API key for Google Custom Search |
| /config google_search_engine_id [id] | Set Google Custom Search Engine ID |
| /config api | Set or update API key (legacy - use provider-specific keys) |
| /config model [search] | Set default model that loads on startup |
| /config stream [on\|off] | Enable or disable response streaming |
| /config maxtoken [value] | Set stored max token limit (persisted) |
| /config costwarning [value] | Set cost warning threshold in USD |
| /config loglevel [level] | Set log verbosity (debug/info/warning/error/critical) |
| /config log [size_mb] | Set log file size limit (takes effect immediately) |
| /config online [on\|off] | Set default online mode for new sessions |
| /config url | Set or update the base URL for OpenRouter API |
| Command | Description |
|---|---|
| /maxtoken [value] | Set temporary session token limit |
| /middleout [on\|off] | Enable middle-out transform to compress prompts exceeding context size |
| /system [prompt\|clear] | Set session-level system prompt to guide AI behavior |
| Command | Description |
|---|---|
| /save <name> | Save current conversation to database |
| /load <name\|number> | Load a saved conversation by name or number |
| /list | List all saved conversations with numbers and timestamps |
| /delete <name\|number> | Delete a saved conversation (requires confirmation) |
| /export <format> <file> | Export conversation to file (formats: md, json, html) |
| Command | Description |
|---|---|
| /stats | Display session cost summary, token usage, and warnings |
| /credits | Show account credits (OpenRouter: live balance, others: helpful console links) |
| Method | Description |
|---|---|
| @/path/to/file | Attach files to messages: images (PNG, JPG, etc.), PDFs, and code files (.py, .js, etc.) |
| /paste [prompt] | Send plain text/code from clipboard to AI with optional prompt |
| // escape | Start with // to send a literal / character (e.g., //help sends "/help" as text) |
| exit \| quit \| bye | Quit the chat application and display session summary |
You> /mcp on
✓ MCP Filesystem Server started successfully
You> /mcp add ~/Projects/webapp
⚠️ Security Check:
You are granting MCP access to: /Users/you/Projects/webapp
This folder contains: 156 files (2.3 MB)
MCP will be able to:
✓ Read files in this folder
✓ List and search files
✓ Access subfolders recursively
✓ Automatically respect .gitignore patterns
✗ Delete or modify files (read-only)
Proceed? [Y/n]: y
✓ Added /Users/you/Projects/webapp to MCP allowed folders
[🔧 MCP: Files] You> What Python files are in the src/ directory?
🔧 AI requesting 1 tool call(s)...
→ Calling: search_files(pattern="*.py", search_path="~/Projects/webapp/src")
✓ Found 12 file(s)
AI Response: I found 12 Python files in src/:
- main.py (entry point)
- config.py (configuration)
- models.py (database models)
...
[🔧 MCP: Files] You> Read models.py and explain the User model
🔧 AI requesting 1 tool call(s)...
→ Calling: read_file(file_path="~/Projects/webapp/src/models.py")
✓ Read file (4096 bytes)
AI Response: The User model in models.py is a SQLAlchemy ORM class...
You> /mcp add db ~/webapp/production.db
⚠️ Database Check:
Adding: production.db
Size: 45.3 MB
Tables: users, orders, products, sessions (4 total)
Proceed? [Y/n]: y
✓ Added database #1: production.db
You> /mcp db 1
✓ Switched to database #1: production.db
Tables: users, orders, products, sessions
[🗄️ MCP: DB #1] You> Show me the schema for the users table
🔧 AI requesting 1 tool call(s)...
→ Calling: inspect_database(table_name="users")
✓ Inspected table: users (1,234 rows)
AI Response: The users table has the following schema:
- id (INTEGER, PRIMARY KEY)
- email (TEXT, NOT NULL)
- created_at (TIMESTAMP)
- last_login (TIMESTAMP)
...
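Schema inspection like the `inspect_database` call above maps naturally onto SQLite's `PRAGMA table_info`. A minimal sketch, assuming the same read-only connection style (the tool's real output is richer):

```python
import sqlite3

def table_schema(db_path: str, table: str):
    """Return (name, type, not_null, is_primary_key) for each column."""
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        # In real use the table name should be validated against
        # sqlite_master before being interpolated into the PRAGMA.
        # PRAGMA table_info columns: cid, name, type, notnull, dflt_value, pk
        rows = conn.execute(f"PRAGMA table_info({table})").fetchall()
        return [(r[1], r[2], bool(r[3]), bool(r[5])) for r in rows]
    finally:
        conn.close()
```

For a `users` table declared as above, this yields tuples like `("email", "TEXT", True, False)`.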
[🗄️ MCP: DB #1] You> How many orders were placed in December 2024?
🔧 AI requesting 1 tool call(s)...
→ Calling: query_database(query="SELECT COUNT(*) as order_count FROM orders WHERE created_at >= '2024-12-01' AND created_at < '2025-01-01'")
✓ Query returned 1 row(s)
AI Response: There were 347 orders placed in December 2024.
You> /model gpt-4o
Selected: GPT-4o (openai/gpt-4o)
You> Write a Python function to calculate fibonacci numbers
AI Response: Here's an efficient implementation...
You> Review this code for potential bugs @app.py
✓ Code file attached: app.py (12.3 KB)
AI Response: I've analyzed your code. Here are some observations...
You> /online on
Online mode enabled. Model will use web search capabilities.
You> What are the latest features in Python 3.13?
🌐 Online mode active - model has web search access
AI Response: Based on recent information, Python 3.13 introduces...
You> What's in this screenshot? @error.png
✓ Image attached: error.png (245.8 KB)
AI Response: This screenshot shows a Python traceback error...
You> /memory off
Conversation memory disabled. API calls will not include history (lower cost)
You> Quick question: What is 2+2?
📊 Metrics: 142 tokens | $0.0002 | 0.85s, Memory: OFF
All configuration is stored in ~/.config/oai/:
- oai_config.db - Settings, conversations, and MCP configuration
- history.txt - Command history for auto-completion
- oai.log - Application logs for debugging (rotating, configurable size)

On first run, you'll be prompted to enter your OpenRouter API key. Alternatively:
You> /config api
Enter new API key: sk-or-v1-...
You> /config model
# Choose your preferred model from the list
# Models with function calling support will work with MCP
# Enable streaming for real-time responses
You> /config stream on
# Set token limit
You> /config maxtoken 50000
# Set cost alert threshold
You> /config costwarning 0.10
# Set default online mode
You> /config online on
# Enable MCP
You> /mcp on
# Add folders for file access
You> /mcp add ~/Documents
You> /mcp add ~/Projects
# Add databases for querying
You> /mcp add db ~/app/data.db
# View status
You> /mcp status
Set custom system prompts to define AI behavior for your session:
You> /system You are a Python expert specialized in data science
Session system prompt set to: You are a Python expert...
- /stats after each session
- /memory off for simple, independent queries
- /maxtoken limits to control response length
- /config costwarning to get alerts for expensive responses
- /model to browse available models

Found a bug or have a feature request? Visit the issue tracker to let me know.
Report Issue →

Contributions are welcome! Check out the repository to get started with development.
View Repository →

Comprehensive guides and examples are available in the repository README and via the /help command.
MCP (Model Context Protocol) allows AI models to use "tools" to interact with your local system. In oAI, these tools are:
Security measures:
Requirements: Models must support function calling (GPT-4, Claude 3.5, Gemini, etc.)
Database mode allows natural language interaction with SQLite databases.

Note: All queries are executed in read-only mode with timeout protection.
oAI supports all 300+ models available through the OpenRouter API, including:
Use /model to see all available models with their capabilities (image support, online mode, function calling).
For MCP: Choose models with function calling support (indicated in model info).
Several strategies help reduce costs:
- /memory off for independent queries (doesn't send conversation history)
- /maxtoken 1000 to limit response length
- /stats and /credits regularly
- /config costwarning 0.05 to get notified of expensive responses
- /config stream off if you don't need real-time responses

oAI supports multiple file types (up to 10MB each):
Simply use @filename in your prompt to attach files.
Alternatively: Use MCP file mode to let AI read files directly without manual attachment.
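Pulling @file references out of a prompt can be done with a simple regex. This is an illustrative sketch, not oAI's actual parser, and the accepted character set is an assumption:

```python
import re

# Path-like token after '@': letters, digits, underscore, ~ . / -
# (an assumed pattern; oAI's real parser may differ)
ATTACH_RE = re.compile(r"@([\w~./-]+)")

def split_attachments(prompt: str):
    """Separate @file references from the remaining prompt text."""
    paths = ATTACH_RE.findall(prompt)
    clean = ATTACH_RE.sub("", prompt).strip()
    return clean, paths
```

For example, "Analyze this code @script.py" splits into the prompt "Analyze this code" and the attachment list ["script.py"].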
MCP supports two modes that you can switch between freely:
# Default: File mode
You> /mcp on
You> /mcp add ~/Projects
[🔧 MCP: Files] You> List Python files
# Switch to database
You> /mcp db 1
[🗄️ MCP: DB #1] You> Show tables
# Switch back to files
You> /mcp files
[🔧 MCP: Files] You> Search for config files
The prompt indicator shows your current mode. Each mode has its own set of tools available to the AI.
Online mode adds the :online suffix to your model ID, enabling web search capabilities.
This allows the AI to access current information from the web in real-time.
Requirements: Only models that support the "tools" parameter can use online mode. The application will automatically prevent you from enabling it on unsupported models.
Usage:
You> /online on
You> What happened in the news today?
🌐 Online mode active - model has web search access
Note: Online mode and MCP can be used together!
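The :online behavior described above is essentially a model-ID rewrite plus a capability check. A sketch with a hypothetical helper name (the guard mirrors the app's refusal to enable online mode on unsupported models):

```python
def effective_model_id(model_id: str, online: bool, supports_tools: bool) -> str:
    """Append the :online suffix, guarding models without tools support."""
    if online and not supports_tools:
        # matches the documented behavior: online mode can't be
        # enabled on models lacking the "tools" parameter
        raise ValueError(f"{model_id} does not support online mode")
    return f"{model_id}:online" if online else model_id
```

With online mode on, "openai/gpt-4o" is sent as "openai/gpt-4o:online".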
To update to the latest version:
cd /path/to/oai
git pull origin main
pip install -r requirements.txt --upgrade
Your configuration, saved conversations, and MCP settings are preserved during updates.
For binary users: Download the latest release and replace your existing binary.
All data is stored locally in ~/.config/oai/:
Your API key is stored securely in the database. No data is sent anywhere except to OpenRouter's API.
MCP data: Only file metadata and query results are sent to the AI model via OpenRouter. Your files and databases remain on your local system.
Yes! Use the /system command to set custom behavior for your AI:
# Set a system prompt
You> /system You are a helpful Python expert specialized in web development
# View current system prompt
You> /system
# Clear system prompt
You> /system clear
System prompts persist throughout your session until cleared or reset.
Note: MCP automatically adds context about available tools to the system prompt based on your current mode.
File Attachments (@file):
MCP File Mode:
Use Both: You can combine approaches! Attach specific files AND enable MCP for broader exploration.