PortalOS

Local AI command-line interface with Ollama and llama.cpp support.

What It Is

Local AI command-line interface. Runs LLMs (LLaMA, Mistral, Phi) via Ollama or llama.cpp on your machine. No cloud dependencies. Natural language commands for file operations, document processing, and system control.


The Problem

Cloud-based AI tools:

  • Send data to external servers
  • Require internet connection
  • Limited control over model behavior

Local alternatives are often fragmented or require significant setup.


The Solution

Single CLI that:

  • Runs entirely locally
  • Supports multiple model backends
  • Processes documents and files via natural language
  • Executes system commands

How It Works

  1. Select model backend (Ollama or llama.cpp)
  2. Launch PortalOS in terminal
  3. Issue natural language commands
  4. Processing happens locally
  5. Output returned in terminal
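The flow above can be sketched as a minimal request loop against Ollama's local REST API (its default endpoint is `http://localhost:11434/api/generate`). The helper names and default model below are illustrative, not PortalOS internals:

```python
import json
import urllib.request

# Ollama's default local generate endpoint
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "mistral") -> dict:
    """Assemble a non-streaming generate request for the Ollama backend."""
    return {"model": model, "prompt": prompt, "stream": False}

def run_command(prompt: str, model: str = "mistral") -> str:
    """Send one natural-language command to the local model, return its reply."""
    payload = json.dumps(build_request(prompt, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

A call like `run_command("Summarize the files in my Downloads folder")` would then return the model's answer as plain text, with nothing leaving the machine.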

Capabilities

Document Processing

  • PDF, Markdown, TXT parsing
  • OCR via Tesseract
  • Summarization
  • Key point extraction

File Operations

  • Natural language file search
  • Semantic search across content
  • Batch operations
  • Folder organization
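As a rough sketch of content search, the keyword ranker below stands in for the semantic step (a real semantic search would score vector embeddings instead of term counts); the function name and `.txt` filter are illustrative:

```python
from pathlib import Path

def search_files(root: str, terms: list[str]) -> list[tuple[Path, int]]:
    """Rank text files under `root` by how often they mention the query terms."""
    results = []
    for path in Path(root).rglob("*.txt"):
        try:
            text = path.read_text(errors="ignore").lower()
        except OSError:
            continue  # skip unreadable files
        score = sum(text.count(t.lower()) for t in terms)
        if score:
            results.append((path, score))
    return sorted(results, key=lambda r: r[1], reverse=True)
```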

System Control

  • Execute shell commands via text
  • Monitor CPU, RAM, disk
  • Task automation
  • Background process management
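A minimal sketch of the monitoring and command-execution pieces, using only the standard library. The allowlist and helper names are illustrative; PortalOS's actual safety mechanism for shell execution isn't documented here:

```python
import shutil
import subprocess

def disk_report(path: str = ".") -> dict:
    """Report disk usage for a mount point via shutil.disk_usage."""
    usage = shutil.disk_usage(path)
    return {
        "total_gb": round(usage.total / 1e9, 1),
        "free_gb": round(usage.free / 1e9, 1),
    }

ALLOWED = {"ls", "df", "uptime"}  # illustrative allowlist of safe commands

def run_shell(cmd: list[str]) -> str:
    """Execute an allowlisted shell command and capture its output."""
    if cmd[0] not in ALLOWED:
        raise PermissionError(f"{cmd[0]} is not on the allowlist")
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
```

Gating model-generated commands through an allowlist (or a user confirmation prompt) matters because the LLM, not the user, composes the shell invocation.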

Model Backends

Backend      Use Case
Ollama       Modern systems, 8GB+ RAM, easy model switching
llama.cpp    Resource-constrained devices, quantized models
Custom       Fine-tuned or domain-specific models

Supported models: LLaMA, Mistral, Phi, and others.
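Swappable backends can be modeled as classes that each build their own command line. The class names below are illustrative; the invocations follow the standard `ollama run` and llama.cpp `llama-cli` usage:

```python
from dataclasses import dataclass

@dataclass
class OllamaBackend:
    model: str = "mistral"

    def argv(self, prompt: str) -> list[str]:
        # `ollama run MODEL PROMPT` produces a single completion
        return ["ollama", "run", self.model, prompt]

@dataclass
class LlamaCppBackend:
    model_path: str       # path to a quantized GGUF model file
    n_predict: int = 256  # max tokens to generate

    def argv(self, prompt: str) -> list[str]:
        # llama.cpp's CLI takes a model file (-m), prompt (-p), and token budget (-n)
        return ["llama-cli", "-m", self.model_path, "-p", prompt,
                "-n", str(self.n_predict)]
```

Because both classes expose the same `argv` interface, the rest of the CLI never needs to know which backend is active.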


Technical Architecture

  • Modular NLP pipeline
  • Context memory management
  • Plugin support
  • Cross-platform (Linux, macOS, Windows)
  • RAG for personal knowledge bases
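The retrieval half of the RAG step can be illustrated with a toy bag-of-words ranker; a real pipeline would compare vector embeddings, but the shape is the same, and all names here are illustrative:

```python
import math
from collections import Counter

def _vec(text: str) -> Counter:
    """Toy bag-of-words vector; a real RAG system would use embeddings."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query; these would be
    prepended to the model prompt as context."""
    q = _vec(query)
    ranked = sorted(docs, key=lambda d: cosine(q, _vec(d)), reverse=True)
    return ranked[:k]
```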

Development Status

Milestone                           Target    Status
Core CLI and file operations        Q2 2025   Live
Ollama and llama.cpp integration    Q2 2025   Live
Advanced document parsing, OCR      Q3 2025   In Progress
Plugin system                       Q3 2025   In Progress
Personal RAG training               Q4 2025   Planned
Multimodal support                  Q1 2026   Planned