# Use OpenAI & Compatible APIs
Connect Maid to OpenAI's cloud models — including GPT-4o and o3 — using your own API key. The same provider also works with any OpenAI-compatible endpoint, including LM Studio, OpenRouter, and vLLM.
## Overview
Maid's OpenAI provider sends requests to the OpenAI REST API using the standard chat completions endpoint. Because the base URL is configurable, it also works with any third-party service that implements the same API spec — which many do, including local inference servers like LM Studio and cloud aggregators like OpenRouter.
Unlike the local llama.cpp provider, this option requires an internet connection, and usage is billed per token at OpenAI's current rates. Your API key is stored encrypted on your device and is only ever sent to the endpoint you configure.
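To make the wire format concrete, here is a minimal sketch of a chat completions request body and headers as the OpenAI API spec defines them. The model name, message text, and key are placeholders, not values Maid hardcodes:

```python
import json

# Standard chat completions request body (OpenAI API spec).
# "gpt-4o" and the messages are placeholders for illustration.
payload = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "stream": True,  # chat clients typically stream tokens as they arrive
}

# The API key travels only in this header, only to the configured endpoint.
headers = {
    "Authorization": "Bearer sk-...your-key...",
    "Content-Type": "application/json",
}

body = json.dumps(payload)
```

Any service that accepts this request shape at `POST {base_url}/chat/completions` will work with the provider.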
## Get an OpenAI API key

Sign in at platform.openai.com, open the API keys page, and create a new secret key. Copy it immediately; OpenAI shows the full key only once.
## Configure Maid

In Maid, select the OpenAI provider and fill in the fields described in the reference below; only the API key and model are required.
## Configuration reference
| Field | Required | Default |
|---|---|---|
| API Key | Yes | — |
| Model | Yes | Auto-populated from your account |
| Base URL | No | https://api.openai.com/v1 |
| Custom Headers | No | — |
| Parameters | No | Provider defaults |
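Putting the table together, a filled-in configuration with defaults applied might look like the sketch below. The field names mirror the reference table; this is an illustrative shape, not Maid's actual storage schema:

```python
# Illustrative only: keys mirror the configuration reference table above,
# not Maid's on-disk format. "sk-..." stands in for a real key.
openai_provider = {
    "api_key": "sk-...",                       # required
    "model": "gpt-4o",                         # required; list populated from your account
    "base_url": "https://api.openai.com/v1",   # default
    "custom_headers": {},                      # optional
    "parameters": {},                          # optional; provider defaults when empty
}
```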
## Using compatible APIs
Because the base URL is configurable, the OpenAI provider in Maid works with any service that implements the OpenAI chat completions specification. This includes local inference servers, meaning you can run a model on your desktop and connect to it from your phone without needing to run Ollama.
| Service | Base URL | API Key |
|---|---|---|
| LM Studio | http://<host>:1234/v1 | Not required |
| OpenRouter | https://openrouter.ai/api/v1 | Your OpenRouter key |
| vLLM | http://<host>:8000/v1 | Optional |
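Whichever service you pick, requests resolve to the same path under the configured base URL. A small sketch of that resolution (the LM Studio host below is a hypothetical LAN address):

```python
def chat_completions_url(base_url: str) -> str:
    """Append the standard chat completions path to a configured base URL."""
    return base_url.rstrip("/") + "/chat/completions"

# Note that all three services expect the /v1 prefix in the base URL itself.
print(chat_completions_url("https://api.openai.com/v1"))
print(chat_completions_url("http://192.168.1.20:1234/v1"))  # hypothetical LM Studio host
print(chat_completions_url("https://openrouter.ai/api/v1"))
```

A common misconfiguration is omitting the `/v1` suffix, which sends requests to a path the server does not serve.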