Talk to your files locally. Private summaries. No uploads.
Privacy-first. Franklin runs entirely on your machine — nothing uploaded.
Docker-based FastAPI + compact Web UI. TinyLlama runs on your machine by default; GPT‑4o is opt‑in.
Download Franklin Core and installers from the official download page:
Download Franklin — Download Page
Who Franklin BE Is For
- Developers wanting private local AI tools
- Homelab users running local-first setups
- Writers and researchers needing private summaries
- Privacy-focused users who avoid cloud uploads
- Builders who want a customizable AI middleware
Core Features
Local TinyLlama engine
Runs on your machine for private, low-latency inference.
Optional GPT-4o hybrid mode
Opt-in cloud fallback for more capable responses when needed.
Private file summaries
Summarize documents without uploads — everything local by default.
Local filesystem tools
List, read, and summarize files from a mounted /memory volume (a usage sketch follows this feature list).
FastAPI backend + Web UI
Simple backend and small UI for fast local development.
Plugin support
Extend functionality with custom plugins and integrations.
No cloud uploads
By default, nothing leaves your machine.
Fully offline-first design
Works even without network access using local models.
One reproducible Docker stack
Easy to install and run across platforms via Docker Compose.
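To make the filesystem and summary tools concrete, here is a minimal client sketch that asks the local backend to summarize a file from the mounted /memory volume. The /summarize route, its request and response fields, and the response shape are assumptions for illustration only; check the repository for the actual API.

```python
# Minimal sketch of a local client call to the Franklin backend.
# NOTE: the /summarize route and its JSON fields are assumptions for
# illustration; the real API may differ. Everything stays on localhost.
import json
import urllib.request

def summarize_local_file(path: str) -> str:
    """Ask the local backend to summarize a file under /memory."""
    payload = json.dumps({"path": path}).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:8000/summarize",  # assumed endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["summary"]  # assumed response shape

if __name__ == "__main__":
    print(summarize_local_file("/memory/notes/meeting.md"))
```

Because the request targets localhost, the file contents and the summary never leave the machine in local-only mode.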
Franklin vs Cloud AI
| Feature | Franklin BE | ChatGPT / Copilot |
|---|---|---|
| Local file access | Yes — direct access to mounted /memory | No — requires uploads or API bridges |
| Offline mode | Yes (TinyLlama local) | No (cloud service) |
| No subscription fees | One-time purchase available | Subscription or pay-as-you-go |
| Full privacy | Data stays on your machine | Data sent to cloud provider |
| Extendable plugins | Yes — plugin system | Limited or provider-specific |
| One-time purchase | Yes (Builder’s Edition) | Typically no |
| Data stays on your machine | Always (unless you opt in to hybrid mode) | Not by default |
Privacy & Security
- 100% local processing — models run on your machine
- No file uploads by default — your data never leaves your device
- You control your memory directory — mount whatever you want
- Optional hybrid mode — cloud only if you explicitly enable it
- No tracking, no analytics — privacy-first by design
Pricing Breakdown — $49
- Lifetime license
- All updates included
- Local-only mode (TinyLlama) included
- Hybrid mode optional (GPT-4o)
- Plugin system included
- Fully private installation — no recurring fees
Simple Install Steps
1. Install Docker
Ensure Docker & Docker Compose are installed on your machine.
2. Clone the repository
`git clone https://github.com/dfrankstudioz/franklin-be.git`
3. Run the stack
`docker compose up -d`
4. Open the UI
Open http://localhost:8000 in your browser.
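If you would rather verify the stack from a script than a browser, the short check below only touches the root URL from step 4; it assumes nothing about specific API routes.

```python
# Quick connectivity check against the local stack started in step 3.
# Only the root URL from step 4 is used; no file data leaves the machine.
import urllib.request

def franklin_is_up(url: str = "http://localhost:8000") -> bool:
    """Return True if the local Franklin UI answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    print("Franklin UI reachable:", franklin_is_up())
```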
FAQ
Does Franklin upload my files?
No. Everything stays on your machine.
Do I need an OpenAI key?
Only if you want optional GPT-4o hybrid mode. Local mode works without it.
Does Franklin run offline?
Yes. TinyLlama runs fully offline.
What OS is supported?
Windows, macOS, and Linux (via Docker).
Can I extend Franklin?
Yes, through the plugin system.
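As a rough idea of what an extension could look like, here is a hypothetical plugin sketch. The class shape, hook name, and the notion of registering it with the backend are illustrative assumptions, not Franklin's actual plugin API.

```python
# Hypothetical plugin sketch. The class shape, hook name, and registration
# step are illustrative assumptions, not Franklin's actual plugin API.
from dataclasses import dataclass

@dataclass
class WordCountPlugin:
    """Toy plugin that reports a word count for a file the backend has read."""
    name: str = "word-count"

    def on_file_read(self, path: str, text: str) -> dict:
        # Runs locally after a file is read; nothing is uploaded.
        return {"path": path, "words": len(text.split())}

# In a real setup the plugin would be registered with the backend; here the
# hook is called directly just to show the idea.
if __name__ == "__main__":
    plugin = WordCountPlugin()
    print(plugin.on_file_read("/memory/notes.txt", "hello local private world"))
```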
Is there a subscription?
No. It’s a one-time purchase.