oAI - OpenRouter CLI App

ChatGPT on your system - A powerful CLI for OpenRouter AI models

v2.1.0-RC1 - Now with MCP & Write Mode! 🚀

Overview

oAI is a powerful command-line interface that brings AI models directly to your terminal with advanced capabilities. Beyond simple chat, oAI now features MCP (Model Context Protocol) integration, allowing AI agents to access your local files and query SQLite databases securely - all from the comfort of your command line.

Whether you're debugging code, analyzing data, exploring databases, or just chatting with AI, oAI provides a rich, interactive experience with full conversation management, cost tracking, and intelligent file handling.


🎉 What's New in v2.1.0-RC1

✨ Latest in RC1: Write Mode - AI Can Now Modify Files!

The AI can now create, edit, and delete files on your system. Write mode gives your AI full file system write capabilities with 6 powerful tools - but it's OFF by default and requires explicit activation for security.

✍️ Write Mode

Enable with /mcp write on to let AI create, edit, delete, move, and copy files. Includes safety confirmations for destructive operations and a ✍️ indicator when active.

🔧 File Mode

Grant AI access to specific folders. It can read files, search by name or content, and list directory structures - all with automatic .gitignore filtering.

🗄️ Database Mode

Connect SQLite databases for read-only querying. AI can inspect schemas, search data, and execute complex SQL queries safely.

🔒 Secure by Design

Explicit approval required for every folder and database. Write mode OFF by default, delete operations require confirmation, and non-persistent settings that reset each session.

Complete RC1 Feature List

Core Features

💬 Smart Conversations

Persistent conversation memory with full history management. Toggle memory on/off to optimize costs.

📎 File Attachments

Attach images, PDFs, and code files directly to your prompts with simple @file syntax.

🌐 Web Search

Enable online mode for models that support it, giving your AI access to real-time web information.

📊 Cost Tracking

Real-time token usage and cost monitoring with configurable alerts to keep your budget in check.

💾 Save & Export

Save conversations to database and export to Markdown, JSON, or HTML formats.

🎨 Rich UI

Beautiful terminal interface with markdown rendering, syntax highlighting, and smooth streaming.

📋 Clipboard Support

Paste code or text directly from clipboard, and copy AI responses with a single keystroke.

🔧 Highly Configurable

Customize everything from model selection to token limits, streaming, and system prompts.

🚀 MCP (Model Context Protocol)

MCP enables your AI to become a true agent with direct access to your local filesystem and databases. This transforms conversations from "copy-paste code to AI" into "AI directly analyzes your codebase and data."

🔐 Security First

Every folder and database requires explicit approval. MCP operates in read-only mode by default with automatic filtering of sensitive directories (.git, node_modules, venv), query validation, and timeout protection. Write mode is OFF by default and requires explicit activation. Delete operations always require user confirmation.
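The query validation and read-only protection described above can be sketched roughly like this. This is a simplified illustration of the idea, not oAI's actual implementation; the function names are hypothetical:

```python
import sqlite3

# Statements that cannot modify data (CTEs start with WITH)
READ_ONLY_PREFIXES = ("select", "with", "explain")

def is_read_only_query(sql: str) -> bool:
    """Hypothetical check: allow only non-mutating statements."""
    stripped = sql.strip()
    if not stripped:
        return False
    return stripped.split(None, 1)[0].lower() in READ_ONLY_PREFIXES

def run_query(db_path: str, sql: str, max_rows: int = 1000):
    """Execute a validated query against a read-only SQLite connection."""
    if not is_read_only_query(sql):
        raise ValueError("Only read-only queries are allowed")
    # mode=ro makes SQLite itself reject writes, as a second line of defense
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True, timeout=5)
    try:
        return conn.execute(sql).fetchmany(max_rows)  # cap result size
    finally:
        conn.close()
```

Note the layering: even if a mutating statement slipped past the string check, the `mode=ro` URI connection would refuse it at the database level.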

Three Powerful Modes

🔧 File Mode (Default - Read Only)

Grant AI access to specific folders. It can intelligently navigate, read, and search your files.

Capabilities:
  • Read complete file contents (auto-truncates >50KB)
  • List directory contents recursively
  • Search by filename or content
  • Automatic .gitignore filtering
  • Virtual environment exclusion
Example Usage:
You> /mcp enable
You> /mcp add ~/Projects
[🔧 MCP: Files] You> List all Python files
[🔧 MCP: Files] You> Read main.py and explain it

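The automatic .gitignore filtering mentioned above boils down to matching path components against ignore patterns. A minimal sketch using fnmatch-style globbing (real .gitignore semantics have more rules, such as negation and anchoring, and oAI's internals may differ):

```python
from fnmatch import fnmatch
from pathlib import Path

# Example patterns a .gitignore might contain, plus defaults excluded by MCP
DEFAULT_EXCLUDES = ["*.pyc", "__pycache__", ".git", "node_modules", "venv"]

def is_ignored(path: str, patterns: list[str]) -> bool:
    """Return True if any path component matches an ignore pattern."""
    return any(fnmatch(part, pat)
               for part in Path(path).parts
               for pat in patterns)

def visible_files(paths: list[str], patterns: list[str] = DEFAULT_EXCLUDES):
    """Filter a file listing the way MCP file mode filters its results."""
    return [p for p in paths if not is_ignored(p, patterns)]
```

For example, `visible_files(["src/app.py", "venv/lib/x.py", "cache/app.pyc"])` keeps only `src/app.py`.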
✍️ Write Mode (RC1 - File Modification)

Enable AI to create, edit, and delete files on your system. OFF by default for safety.

Capabilities:
  • Create new files (write_file)
  • Edit existing files (edit_file)
  • Delete files (delete_file - requires confirmation)
  • Create directories (create_directory)
  • Move files (move_file)
  • Copy files (copy_file)
Example Usage:
You> /mcp enable
You> /mcp write on
[✍️ 🔧 MCP: Files] You> Create a README.md file
[✍️ 🔧 MCP: Files] You> Fix the syntax error in app.py
⚠️ Security Notes:
  • OFF by default - must enable with /mcp write on
  • Non-persistent setting (resets each session)
  • Delete operations always require user confirmation
  • Visual indicator (✍️) shows when write mode is active

🗄️ Database Mode

Connect SQLite databases for intelligent data exploration with full SQL query support.

Capabilities:
  • Inspect schemas (tables, columns, indexes)
  • Full-text search across all tables
  • Execute SELECT queries (JOINs, CTEs, subqueries)
  • Query validation (blocks INSERT/UPDATE/DELETE)
  • Automatic result limiting (1000 rows max)
Example Usage:
You> /mcp add db ~/app/data.db
You> /mcp db 1
[🗄️ MCP: DB #1] You> Show all tables
[🗄️ MCP: DB #1] You> Find users created this month

Quick Start with MCP

  1. Enable MCP
    You> /mcp enable
    ✓ MCP Filesystem Server started successfully
  2. Add Resources (Files or Databases)
    # For file access
    You> /mcp add ~/Documents
    
    # For database querying
    You> /mcp add db ~/myapp/database.db
  3. Switch Modes as Needed
    # Use files (default)
    You> /mcp files
    
    # Switch to database
    You> /mcp db 1
  4. Ask Natural Questions
    # In file mode
    [🔧 MCP: Files] You> What Python files are in my project?
    
    # In database mode
    [🗄️ MCP: DB #1] You> How many users signed up last week?
💡 MCP Pro Tips
  • Use /mcp status to see comprehensive stats and current mode
  • Toggle /mcp gitignore on|off to control file filtering
  • Enable write mode with /mcp write on - look for the ✍️ indicator
  • Write mode is non-persistent and resets each session for safety
  • Use /help mcp for the complete MCP guide
  • Models must support function calling for MCP to work (GPT-4, Claude, etc.)
  • Database queries are read-only - your data is never modified

Requirements

  • Python 3.10-3.13 - Available at python.org (3.14 not yet supported)
  • OpenRouter API key - Get one free at openrouter.ai
  • Function-calling model - Required for MCP features (GPT-4, Claude, Gemini, etc.)

All other dependencies are automatically installed from the included requirements.txt file.

🔧 Use the binary install (recommended)!

If you use the prebuilt binary (recommended), you don't need Python or any of the dependencies above. This is the simplest option if you don't plan to develop on the code.

Installation

Recommended option: download the optimized binary

  1. Download the binary for your system from the release page.
  2. Move it to a directory in your PATH (e.g. /usr/local/bin).
  3. Run chmod +x your-path-here/oai.
  4. Run it from anywhere with just oai.

Follow these steps to install oAI v2.1.0-RC1 from source (if you are going to develop on the project):

  1. Clone the Repository
    git clone https://gitlab.pm/rune/oai.git
    # Or download the latest zip file from the release page
    cd oai
  2. Install Dependencies

    Use the included requirements.txt file (now with MCP support):

    pip install -r requirements.txt
  3. Make the Script Executable
    chmod +x oai.py
  4. Copy to PATH

    Copy the script to a directory in your $PATH environment variable:

    Option 1: System-wide (requires sudo)

    sudo cp oai.py /usr/local/bin/oai

    Option 2: User-local (recommended)

    mkdir -p ~/.local/bin
    cp oai.py ~/.local/bin/oai
    
    # Add to PATH if not already (add to ~/.bashrc or ~/.zshrc)
    export PATH="$HOME/.local/bin:$PATH"
  5. On first run, you will be prompted to enter your OpenRouter API key.

🔧 Alternative Installation (for *nix systems)

If you have issues with the above method, you can add an alias in your .bashrc, .zshrc, etc:

alias oai='python3 /path/to/oai/oai.py'

Replace /path/to/oai/oai.py with the actual path to your oai.py file.

💡 Getting Your API Key

Visit openrouter.ai, create a free account, and generate an API key from your dashboard. OpenRouter provides access to 300+ AI models through a single API.

How to Use

Quick Start

After installation, simply run:

oai

This starts an interactive chat session. If no API key is configured, you'll be prompted to enter one.

Selecting a Model

You> /model

Browse and select from 300+ available models. You can search by name:

You> /model gpt
You> /model claude

Using MCP for File Access

# Enable MCP and add your project folder
You> /mcp enable
You> /mcp add ~/Projects/myapp

# AI can now access your files
[🔧 MCP: Files] You> List all Python files in this project
🔧 AI requesting 1 tool call(s)...
  → Calling: search_files(pattern="*.py")
  ✓ Found 24 file(s)

AI Response: I found 24 Python files in your project:
- src/main.py
- src/utils.py
- tests/test_main.py
...

[🔧 MCP: Files] You> Read main.py and explain what it does
🔧 AI requesting 1 tool call(s)...
  → Calling: read_file(file_path="~/Projects/myapp/src/main.py")
  ✓ Read file (8192 bytes)

AI Response: This is the main entry point for your application...
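Under the hood, this exchange is the standard function-calling loop of the OpenAI-compatible chat completions API that OpenRouter exposes: the model replies with tool calls, the client runs them locally, feeds the results back, and repeats until the model produces a plain-text answer. A minimal sketch of that loop (the message shapes follow the OpenAI convention; `run_tool` and the injectable `send` parameter are illustrative, not oAI's actual internals):

```python
import json
import urllib.request

API_URL = "https://openrouter.ai/api/v1/chat/completions"

def call_api(api_key: str, payload: dict) -> dict:
    """POST one chat-completions request and return the parsed JSON."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def chat_with_tools(api_key, model, messages, tools, run_tool, send=call_api):
    """Loop until the model answers in text instead of requesting a tool."""
    while True:
        reply = send(api_key, {"model": model, "messages": messages,
                               "tools": tools})
        msg = reply["choices"][0]["message"]
        messages.append(msg)
        if not msg.get("tool_calls"):
            return msg["content"]          # final answer for the user
        for call in msg["tool_calls"]:     # e.g. search_files(pattern="*.py")
            args = json.loads(call["function"]["arguments"])
            messages.append({
                "role": "tool",
                "tool_call_id": call["id"],
                "content": json.dumps(run_tool(call["function"]["name"], args)),
            })
```

This is also why MCP requires a model with function calling support: a model that cannot emit `tool_calls` never triggers the loop.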

Using MCP for Database Queries

# Add a database
You> /mcp add db ~/myapp/users.db
✓ Added database #1: users.db

# Switch to database mode
You> /mcp db 1

# Ask natural language questions
[🗄️ MCP: DB #1] You> What tables are in this database?
🔧 AI requesting 1 tool call(s)...
  → Calling: inspect_database()
  ✓ Inspected database (5 tables)

AI Response: This database contains 5 tables:
1. users (1,234 rows)
2. posts (5,678 rows)
3. comments (12,345 rows)
...

[🗄️ MCP: DB #1] You> How many users signed up in December 2024?
🔧 AI requesting 1 tool call(s)...
  → Calling: query_database(query="SELECT COUNT(*) FROM users WHERE created_at >= '2024-12-01'")
  ✓ Query returned 1 row(s)

AI Response: There were 47 users who signed up in December 2024.

Chatting with AI

Simply type your question or prompt:

You> Explain quantum computing in simple terms

Attaching Files

Attach images, PDFs, or code files using the @ symbol:

You> Analyze this code @script.py
You> Summarize this document @report.pdf
You> What's in this image? @photo.jpg

Using Web Search

Enable online mode for models that support it:

You> /online on
Online mode enabled. Model will use web search capabilities.

You> What are the latest developments in AI?
🌐 Online mode active - model has web search access

Managing Conversations

# Save current conversation
You> /save my_project_discussion

# List saved conversations
You> /list

# Load a conversation by name or number
You> /load my_project_discussion
You> /load 3

# Export to file
You> /export md conversation.md
You> /export html report.html
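An export like `/export md` boils down to walking the message list and rendering each turn. A simplified sketch of the idea (oAI's actual exporter may add metadata such as model name, timestamps, and cost):

```python
def export_markdown(messages: list[dict], title: str = "Conversation") -> str:
    """Render a chat history as a Markdown document (illustrative only)."""
    lines = [f"# {title}", ""]
    for msg in messages:
        speaker = "You" if msg["role"] == "user" else "AI"
        lines += [f"**{speaker}:**", "", msg["content"], ""]
    return "\n".join(lines)
```

The same walk with `json.dumps(messages)` or an HTML template yields the other two export formats.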

Using Clipboard

# Paste code from clipboard
You> /paste

# Paste with additional prompt
You> /paste Explain this code and suggest improvements

Command Reference

🆕 MCP Commands
  • /mcp enable: Start MCP server for file and database access
  • /mcp disable: Stop MCP server
  • /mcp status: Show comprehensive status (mode, folders, databases, stats)
  • /mcp add <folder>: Add folder for file access (auto-loads .gitignore patterns)
  • /mcp add db <path>: Add SQLite database for querying
  • /mcp list: List all allowed folders with file counts and sizes
  • /mcp db list: List all databases with table counts and details
  • /mcp db <number>: Switch to database mode (select database by number)
  • /mcp files: Switch to file mode (default)
  • /mcp remove <num>: Remove folder by number (requires confirmation)
  • /mcp remove db <num>: Remove database by number (requires confirmation)
  • /mcp gitignore on|off: Toggle .gitignore filtering (default: on)
  • /mcp write on|off: Enable/disable write mode (allows AI to create, edit, and delete files). OFF by default; resets each session

Session Commands
  • /clear or /cl: Clear the terminal screen (or use Ctrl+L)
  • /help [command|topic]: Show all commands or detailed help. Use /help mcp for the MCP guide
  • /memory [on|off]: Toggle conversation memory. OFF saves costs by not sending history
  • /online [on|off]: Enable/disable web search for models that support it
  • /retry: Resend the last prompt
  • /reset: Clear conversation history and system prompt
  • /prev / /next: Navigate through conversation history
  • /paste [prompt]: Send clipboard content to AI with an optional prompt

Model Commands
  • /model [search]: Select or change the current model. Supports searching by name or ID; shows image capabilities and online support
  • /info [model_id]: Display detailed information about the current or specified model (pricing, capabilities, context length, function calling support)

Configuration Commands
  • /config: View all current configuration, including MCP status
  • /config api: Set or update the OpenRouter API key
  • /config model [search]: Set the default model that loads on startup
  • /config stream [on|off]: Enable or disable response streaming
  • /config maxtoken [value]: Set stored max token limit (persisted)
  • /config costwarning [value]: Set cost warning threshold in USD
  • /config loglevel [level]: Set log verbosity (debug/info/warning/error/critical)
  • /config log [size_mb]: Set log file size limit (takes effect immediately)
  • /config online [on|off]: Set default online mode for new sessions
  • /config url: Set or update the base URL for the OpenRouter API

Token & System Commands
  • /maxtoken [value]: Set a temporary session token limit
  • /middleout [on|off]: Enable the middle-out transform to compress prompts exceeding the context size
  • /system [prompt|clear]: Set a session-level system prompt to guide AI behavior

Conversation Management
  • /save <name>: Save the current conversation to the database
  • /load <name|number>: Load a saved conversation by name or number
  • /list: List all saved conversations with numbers and timestamps
  • /delete <name|number>: Delete a saved conversation (requires confirmation)
  • /export <format> <file>: Export the conversation to a file (formats: md, json, html)

Monitoring & Stats
  • /stats: Display session cost summary, token usage, and warnings
  • /credits: Show remaining credits on your OpenRouter account, with alerts

Input Methods
  • @/path/to/file: Attach files to messages: images (PNG, JPG, etc.), PDFs, and code files (.py, .js, etc.)
  • /paste [prompt]: Send plain text/code from the clipboard to the AI, with an optional prompt
  • // escape: Start with // to send a literal / character (e.g. //help sends "/help" as text)
  • exit | quit | bye: Quit the chat application and display a session summary

Usage Examples

🆕 MCP File Access Example

You> /mcp enable
✓ MCP Filesystem Server started successfully

You> /mcp add ~/Projects/webapp
⚠️  Security Check:
You are granting MCP access to: /Users/you/Projects/webapp
This folder contains: 156 files (2.3 MB)

MCP will be able to:
  ✓ Read files in this folder
  ✓ List and search files
  ✓ Access subfolders recursively
  ✓ Automatically respect .gitignore patterns
  ✗ Delete or modify files (read-only)

Proceed? [Y/n]: y

✓ Added /Users/you/Projects/webapp to MCP allowed folders

[🔧 MCP: Files] You> What Python files are in the src/ directory?
🔧 AI requesting 1 tool call(s)...
  → Calling: search_files(pattern="*.py", search_path="~/Projects/webapp/src")
  ✓ Found 12 file(s)

AI Response: I found 12 Python files in src/:
- main.py (entry point)
- config.py (configuration)
- models.py (database models)
...

[🔧 MCP: Files] You> Read models.py and explain the User model
🔧 AI requesting 1 tool call(s)...
  → Calling: read_file(file_path="~/Projects/webapp/src/models.py")
  ✓ Read file (4096 bytes)

AI Response: The User model in models.py is a SQLAlchemy ORM class...

🆕 MCP Database Query Example

You> /mcp add db ~/webapp/production.db
⚠️  Database Check:
Adding: production.db
Size: 45.3 MB
Tables: users, orders, products, sessions (4 total)

Proceed? [Y/n]: y

✓ Added database #1: production.db

You> /mcp db 1
✓ Switched to database #1: production.db
Tables: users, orders, products, sessions

[🗄️ MCP: DB #1] You> Show me the schema for the users table
🔧 AI requesting 1 tool call(s)...
  → Calling: inspect_database(table_name="users")
  ✓ Inspected table: users (1,234 rows)

AI Response: The users table has the following schema:
- id (INTEGER, PRIMARY KEY)
- email (TEXT, NOT NULL)
- created_at (TIMESTAMP)
- last_login (TIMESTAMP)
...

[🗄️ MCP: DB #1] You> How many orders were placed in December 2024?
🔧 AI requesting 1 tool call(s)...
  → Calling: query_database(query="SELECT COUNT(*) as order_count FROM orders WHERE created_at >= '2024-12-01' AND created_at < '2025-01-01'")
  ✓ Query returned 1 row(s)

AI Response: There were 347 orders placed in December 2024.

Basic Chat

You> /model gpt-4o
Selected: GPT-4o (openai/gpt-4o)

You> Write a Python function to calculate fibonacci numbers
AI Response: Here's an efficient implementation...

Code Analysis with File Attachment

You> Review this code for potential bugs @app.py
✓ Code file attached: app.py (12.3 KB)

AI Response: I've analyzed your code. Here are some observations...

Web Search with Online Mode

You> /online on
Online mode enabled. Model will use web search capabilities.

You> What are the latest features in Python 3.13?
🌐 Online mode active - model has web search access

AI Response: Based on recent information, Python 3.13 introduces...

Multi-modal with Images

You> What's in this screenshot? @error.png
✓ Image attached: error.png (245.8 KB)

AI Response: This screenshot shows a Python traceback error...

Cost-Conscious Mode

You> /memory off
Conversation memory disabled. API calls will not include history (lower cost)

You> Quick question: What is 2+2?
📊 Metrics: 142 tokens | $0.0002 | 0.85s, Memory: OFF
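The metrics line shown above is derived from the API's usage report and the model's per-token pricing. A sketch of the arithmetic (the prices below are made-up placeholders, not any model's real rates):

```python
def response_cost(usage: dict, prompt_price: float, completion_price: float) -> float:
    """Compute the USD cost of one response from token counts and per-token prices."""
    return (usage["prompt_tokens"] * prompt_price
            + usage["completion_tokens"] * completion_price)

# Hypothetical pricing: $2.50 per 1M prompt tokens, $10 per 1M completion tokens
usage = {"prompt_tokens": 100, "completion_tokens": 42}
cost = response_cost(usage, 2.50 / 1_000_000, 10.0 / 1_000_000)
total = usage["prompt_tokens"] + usage["completion_tokens"]
print(f"{total} tokens | ${cost:.4f}")  # → 142 tokens | $0.0007
```

With memory off, only the new prompt counts toward `prompt_tokens`, which is exactly why the history-free mode is cheaper.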

Configuration Guide

Setting Up Your Environment

All configuration is stored in ~/.config/oai/:

First-Time Setup

  1. Enter Your API Key

    On first run, you'll be prompted to enter your OpenRouter API key. Alternatively:

    You> /config api
    Enter new API key: sk-or-v1-...
  2. Select a Default Model
    You> /config model
    # Choose your preferred model from the list
    # Models with function calling support will work with MCP
  3. Configure Preferences
    # Enable streaming for real-time responses
    You> /config stream on
    
    # Set token limit
    You> /config maxtoken 50000
    
    # Set cost alert threshold
    You> /config costwarning 0.10
    
    # Set default online mode
    You> /config online on
  4. 🆕 Set Up MCP (Optional)
    # Enable MCP
    You> /mcp enable
    
    # Add folders for file access
    You> /mcp add ~/Documents
    You> /mcp add ~/Projects
    
    # Add databases for querying
    You> /mcp add db ~/app/data.db
    
    # View status
    You> /mcp status

Advanced Configuration

⚙️ System Prompts

Set custom system prompts to define AI behavior for your session:

You> /system You are a Python expert specialized in data science
Session system prompt set to: You are a Python expert...
💰 Cost Management Tips
  • Monitor usage with /stats after each session
  • Use /memory off for simple, independent queries
  • Set appropriate /maxtoken limits to control response length
  • Configure /config costwarning to get alerts for expensive responses
  • Choose smaller models for simple tasks (use /model to browse)
  • Note: MCP tool calls add minimal cost (just function call overhead)

Support & Contributing

🐛 Report Issues

Found a bug or have a feature request? Visit our issue tracker to let us know.

🤝 Contribute

Contributions are welcome! Check out the repository to get started with development.

📖 Documentation

Comprehensive guides and examples available in the repository README and via /help command.


Frequently Asked Questions

🆕 How does MCP work and is it safe?

MCP (Model Context Protocol) allows AI models to use "tools" to interact with your local system. In oAI, these tools are:

  • File Mode: read_file, list_directory, search_files
  • Write Mode (opt-in): write_file, edit_file, delete_file, create_directory, move_file, copy_file
  • Database Mode: inspect_database, search_database, query_database

Security measures:

  • ✅ Explicit approval required for each folder/database
  • ✅ System directories automatically blocked
  • ✅ Read-only by default (write mode is opt-in; delete operations require confirmation)
  • ✅ SQL query validation (INSERT/UPDATE/DELETE blocked)
  • ✅ Query timeout (5 seconds max)
  • ✅ Result limits (1000 rows max)
  • ✅ .gitignore patterns respected
  • ✅ Virtual environments excluded

Requirements: Models must support function calling (GPT-4, Claude 3.5, Gemini, etc.)

🆕 What can I do with MCP database mode?

Database mode allows natural language interaction with SQLite databases:

Examples:
  • "Show me all tables in this database"
  • "What's the schema for the users table?"
  • "Find all records mentioning 'error'"
  • "How many orders were placed last month?"
  • "Show me users who haven't logged in for 30 days"
  • "What are the top 10 products by sales?"
Supported SQL:
  • ✅ SELECT statements
  • ✅ JOINs (INNER, LEFT, RIGHT, FULL)
  • ✅ Subqueries and CTEs
  • ✅ Aggregations (COUNT, SUM, AVG, etc.)
  • ✅ WHERE, GROUP BY, HAVING, ORDER BY, LIMIT
  • ❌ INSERT/UPDATE/DELETE (blocked for safety)

Note: All queries are executed in read-only mode with timeout protection.

What models are supported?

oAI supports all 300+ models available through the OpenRouter API, including:

  • OpenAI models (GPT-4, GPT-4o, GPT-4 Turbo, o1, o3, etc.)
  • Anthropic Claude models (Claude 3.5 Sonnet, Opus, Haiku, etc.)
  • Google models (Gemini 2.0, Gemini Pro, Flash, etc.)
  • Meta Llama models (Llama 3.1, Llama 3.3, etc.)
  • Mistral models (Mistral Large, Codestral, etc.)
  • DeepSeek models (DeepSeek V3, R1, etc.)
  • Qwen models (Qwen 2.5, QwQ, etc.)
  • Many open-source and specialized models

Use /model to see all available models with their capabilities (image support, online mode, function calling).

For MCP: Choose models with function calling support (indicated in model info).

How do I minimize costs?

Several strategies help reduce costs:

  • Disable memory: Use /memory off for independent queries (doesn't send conversation history)
  • Set token limits: Use /maxtoken 1000 to limit response length
  • Choose cheaper models: Smaller models often work well for simple tasks
  • Monitor usage: Check /stats and /credits regularly
  • Configure alerts: Set /config costwarning 0.05 to get notified of expensive responses
  • Disable streaming: Use /config stream off if you don't need real-time responses
  • MCP impact: Tool calls add minimal cost (function call overhead only)
What file types can I attach?

oAI supports multiple file types (up to 10MB each):

  • Images: PNG, JPG, JPEG, GIF, WEBP, BMP (requires model with image support)
  • Documents: PDF (requires model with document support)
  • Code files: .py, .js, .ts, .java, .cpp, .go, .rs, .php, .rb, .swift, and many more
  • Text files: .txt, .md, .json, .yaml, .yml, .xml, .csv, etc.

Simply use @filename in your prompt to attach files.

Alternatively: Use MCP file mode to let AI read files directly without manual attachment.

🆕 How do I switch between MCP modes?

MCP has two resource modes (files and databases) that you can switch between freely; write mode is a separate toggle layered on top of file mode:

# Default: File mode
You> /mcp enable
You> /mcp add ~/Projects
[🔧 MCP: Files] You> List Python files

# Switch to database
You> /mcp db 1
[🗄️ MCP: DB #1] You> Show tables

# Switch back to files
You> /mcp files
[🔧 MCP: Files] You> Search for config files

The prompt indicator shows your current mode. Each mode has its own set of tools available to the AI.

How does online mode work?

Online mode adds the :online suffix to your model ID, enabling web search capabilities. This allows the AI to access current information from the web in real-time.
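The suffix mechanics are simple: enabling online mode amounts to rewriting the model ID before each request. A sketch of the idea (the helper name is illustrative):

```python
def effective_model_id(model_id: str, online: bool) -> str:
    """Append the :online suffix when online mode is on (idempotently)."""
    base = model_id.removesuffix(":online")  # avoid doubling the suffix
    return f"{base}:online" if online else base
```

For example, `effective_model_id("openai/gpt-4o", True)` yields `"openai/gpt-4o:online"`, and turning online mode off restores the bare ID.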

Requirements: Only models that support the "tools" parameter can use online mode. The application will automatically prevent you from enabling it on unsupported models.

Usage:

You> /online on
You> What happened in the news today?
🌐 Online mode active - model has web search access

Note: Online mode and MCP can be used together!

How do I update oAI?

To update to the latest version:

cd /path/to/oai
git pull origin main
pip install -r requirements.txt --upgrade

Your configuration, saved conversations, and MCP settings are preserved during updates.

For binary users: Download the latest release and replace your existing binary.

Where is my data stored?

All data is stored locally in ~/.config/oai/:

  • oai_config.db: SQLite database containing settings, saved conversations, and MCP configuration (allowed folders, databases)
  • history.txt: Command history for auto-completion feature
  • oai.log: Application logs for troubleshooting (rotating log with configurable size)

Your API key is stored securely in the database. No data is sent anywhere except to OpenRouter's API.

MCP data: Only file metadata and query results are sent to the AI model via OpenRouter. Your files and databases remain on your local system.

Can I use custom system prompts?

Yes! Use the /system command to set custom behavior for your AI:

# Set a system prompt
You> /system You are a helpful Python expert specialized in web development

# View current system prompt
You> /system

# Clear system prompt
You> /system clear

System prompts persist throughout your session until cleared or reset.

Note: MCP automatically adds context about available tools to the system prompt based on your current mode.

🆕 What's the difference between file attachments and MCP?

File Attachments (@file):

  • You manually specify which files to include
  • Entire file content sent to AI in your message
  • Good for specific files you want to analyze
  • Works with all models (no function calling needed)

MCP File Mode:

  • AI decides which files to read based on your question
  • AI can search, list, and explore your codebase
  • Better for open-ended queries ("find the bug", "refactor this module")
  • Requires model with function calling support

Use Both: You can combine approaches! Attach specific files AND enable MCP for broader exploration.