Maid Guides

Everything you need to get Maid set up and working with your preferred AI provider — whether that's a model running on your phone, a server on your home network, or a cloud API.

Local inference

Run AI Locally with llama.cpp

Run GGUF language models entirely on your Android device — no internet, no API key, no cost per query. Covers model selection, quantization, and loading custom files.
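
For a feel of what the guide covers, here is a minimal sketch of loading a GGUF model with the llama-cpp-python bindings, a desktop stand-in for the llama.cpp engine Maid embeds. The model file name and parameters are illustrative, not Maid defaults.

```python
# Minimal sketch: load a quantized GGUF file and run one chat turn.
# Uses llama-cpp-python as a desktop stand-in for the embedded engine;
# the model path below is a placeholder for any GGUF you download.
from llama_cpp import Llama

llm = Llama(
    model_path="./qwen2.5-1.5b-instruct-q4_k_m.gguf",  # placeholder file name
    n_ctx=2048,  # context window; smaller values save RAM on phones
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```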

Self-hosted

Connect to Ollama over Your Network

Run a large model on your desktop or home server and use Maid on your phone as the chat interface. Covers installation, remote access, and the automatic Find Ollama feature.
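
The guide walks through this in detail, but at its core is a single HTTP API; below is a sketch of the call Maid makes to a remote Ollama server. The LAN address is hypothetical, and the server must be started with OLLAMA_HOST=0.0.0.0 so it listens beyond localhost.

```python
# Sketch of a chat request to a remote Ollama server over the LAN.
# 192.168.1.50 is a hypothetical desktop IP; 11434 is Ollama's default port.
import requests

resp = requests.post(
    "http://192.168.1.50:11434/api/chat",
    json={
        "model": "llama3.2",
        "messages": [{"role": "user", "content": "Hello from my phone!"}],
        "stream": False,  # return one complete response instead of a stream
    },
    timeout=120,
)
print(resp.json()["message"]["content"])
```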

Cloud · OpenAI-compatible

Use OpenAI & Compatible APIs

Connect Maid to OpenAI's cloud models or any OpenAI-compatible endpoint such as LM Studio, OpenRouter, or vLLM. Covers API key setup and custom base URLs.
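
All of these endpoints speak the same wire format, which is why one guide covers them. As a rough sketch, the request looks like the standard OpenAI client call below; only the base URL, API key, and model name change per provider.

```python
# Sketch of an OpenAI-compatible chat request. Swap base_url for
# LM Studio (http://localhost:1234/v1), OpenRouter
# (https://openrouter.ai/api/v1), or a vLLM server; model names vary.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.openai.com/v1",  # or any compatible endpoint
    api_key="sk-...",                      # placeholder key
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```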

Cloud · Anthropic

Use Claude via Anthropic API

Connect Maid to Anthropic's Claude family of models — including Claude Opus, Sonnet, and Haiku — via the official Anthropic API.
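
Anthropic's Messages API differs slightly from the OpenAI format (for example, max_tokens is required). A minimal sketch with the official Python SDK is below; the model alias is current at the time of writing and may change.

```python
# Sketch of a call to Anthropic's Messages API, the same API Maid uses.
import anthropic

client = anthropic.Anthropic(api_key="sk-ant-...")  # placeholder key

msg = client.messages.create(
    model="claude-3-5-haiku-latest",  # current alias; check Anthropic's docs
    max_tokens=256,                   # required by the Messages API
    messages=[{"role": "user", "content": "Hello, Claude!"}],
)
print(msg.content[0].text)
```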

Cloud · DeepSeek

Use DeepSeek

Connect Maid to DeepSeek's cloud API, including DeepSeek-V3 and the DeepSeek-R1 reasoning model.
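
DeepSeek's API is OpenAI-compatible, so configuring it amounts to swapping the base URL and model name, as in this sketch. The model aliases are current at the time of writing.

```python
# Sketch of a DeepSeek request via its OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com",
    api_key="sk-...",  # placeholder key
)

resp = client.chat.completions.create(
    model="deepseek-chat",  # DeepSeek-V3; use "deepseek-reasoner" for R1
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```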

Cloud · Mistral

Use Mistral AI

Connect Maid to Mistral AI's cloud API. Covers Mistral Large, Mistral Small, and Codestral, plus custom base URL support for self-hosted Mistral-compatible servers.
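
Mistral's endpoint also follows the OpenAI wire format, shown here as a raw HTTP sketch so the custom base URL support is visible; point the URL at your own server to use a self-hosted Mistral-compatible backend. Model aliases are current at the time of writing.

```python
# Sketch of the raw chat-completions request behind a Mistral call.
import requests

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",  # or your own base URL
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # placeholder key
    json={
        "model": "mistral-small-latest",  # or "mistral-large-latest", "codestral-latest"
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```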

Local inference · Vision

Vision Models with llama.cpp

Send images to a vision-capable AI model running entirely on your Android device. Covers loading a multimodal projector file alongside a GGUF vision model for fully offline image understanding.
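
As a rough desktop analogue of the pairing described above, this sketch loads a GGUF vision model together with its projector file using llama-cpp-python's LLaVA chat handler; the file names are placeholders, and Maid handles the same pairing through its UI on-device.

```python
# Sketch: pair a GGUF vision model with its multimodal projector (mmproj)
# file and ask a question about a local image. File names are placeholders.
import base64
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

handler = Llava15ChatHandler(clip_model_path="./mmproj-model-f16.gguf")
llm = Llama(
    model_path="./llava-v1.5-7b-q4_k.gguf",
    chat_handler=handler,
    n_ctx=4096,  # image tokens are numerous, so leave headroom
)

with open("photo.jpg", "rb") as f:
    data_uri = "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()

out = llm.create_chat_completion(messages=[{
    "role": "user",
    "content": [
        {"type": "image_url", "image_url": {"url": data_uri}},
        {"type": "text", "text": "What is in this picture?"},
    ],
}])
print(out["choices"][0]["message"]["content"])
```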

Self-hosted · Vision

Vision Models with Ollama

Use Maid to send images to a vision-capable model running on your desktop via Ollama. Vision support is automatic — just pull a compatible model like llama3.2-vision or llava.
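
Under the hood this is the same Ollama chat API with an images field added to the message, roughly as sketched below; Maid does the base64 encoding for you, and the LAN address is hypothetical.

```python
# Sketch: send a base64-encoded image to a vision model on a remote
# Ollama server. Ollama expects raw base64 with no data: URI prefix.
import base64
import requests

with open("photo.jpg", "rb") as f:
    img_b64 = base64.b64encode(f.read()).decode()

resp = requests.post(
    "http://192.168.1.50:11434/api/chat",  # hypothetical desktop IP
    json={
        "model": "llama3.2-vision",
        "messages": [{
            "role": "user",
            "content": "Describe this image.",
            "images": [img_b64],
        }],
        "stream": False,
    },
    timeout=300,
)
print(resp.json()["message"]["content"])
```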