Shipable natively supports integration and orchestration across the leading foundation model providers. Whether you need accuracy, speed, cost-efficiency, or compliance, you can plug in the best LLM for each job.
| Provider | Notes |
| --- | --- |
| OpenAI | GPT-4o, GPT-4, and GPT-3.5. Includes system prompt configuration, fallback logic, and token observability. |
| Anthropic | Claude 3 family, with structured memory and toolchain routing support. |
| Meta LLaMA | OSS compatibility via vLLM. Supports fine-tuned LLaMA 2/3. |
| Mistral | Lightweight, fast, and OSS-friendly; ideal for low-latency inference. |
| DeepSeek | Strong on deep code tasks and Chinese-language coverage. |
| Gemini (Google) | Used for research-heavy or vision/multimodal tasks. Optional cloud-only fallback. |
| xAI (Grok) | Grok models available via API. Used in conversational summarization chains. |
| AbacusAI | Used in vertical enterprise deployments. Native RAG and fine-tuning options. |
| Cohere | Embedding and chat models. High accuracy for enterprise summarization and multilingual NLP. |
Shipable routes every prompt dynamically based on the criteria above: accuracy, speed, cost, and compliance.
You can override any default with a single toggle or define model rules per agent.
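To make per-agent routing rules and overrides concrete, here is a minimal sketch in Python. The goal names, providers, models, and helper functions are assumptions invented for illustration; they do not represent Shipable's actual configuration API.

```python
# Illustrative only: a toy router showing goal-based defaults plus
# per-agent overrides. Names and models are hypothetical examples.
from dataclasses import dataclass, field


@dataclass
class RoutingRules:
    # Default (provider, model) for each optimization goal.
    by_goal: dict = field(default_factory=lambda: {
        "accuracy": ("openai", "gpt-4o"),
        "speed": ("mistral", "mistral-small"),
        "cost": ("meta", "llama-3-8b"),
        "compliance": ("anthropic", "claude-3-sonnet"),
    })
    # Per-agent overrides take precedence over the goal-based defaults.
    per_agent: dict = field(default_factory=dict)


def route(rules: RoutingRules, agent: str, goal: str) -> tuple:
    """Return (provider, model) for this agent and optimization goal."""
    if agent in rules.per_agent:
        return rules.per_agent[agent]
    return rules.by_goal[goal]


rules = RoutingRules()
rules.per_agent["support-bot"] = ("cohere", "command-r")  # override one agent

print(route(rules, "support-bot", "cost"))      # ('cohere', 'command-r')
print(route(rules, "research-bot", "accuracy"))  # ('openai', 'gpt-4o')
```

The point of the sketch is the precedence: an agent-level rule always wins over the platform default, which is what a single toggle or per-agent rule gives you in practice.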
Got a private endpoint, or a fine-tuned model on Replicate, Modal, SageMaker, or Ollama? Shipable supports custom LLM connectors with routing and observability built in; a minimal connector sketch follows.
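As a sketch of what a custom connector might look like, the snippet below wraps a locally hosted model served by Ollama behind a simple `generate()` method. The class name and interface are hypothetical stand-ins, not Shipable's connector contract; only the Ollama HTTP call itself follows that tool's documented local API.

```python
# Illustrative only: a minimal custom-connector sketch for a self-hosted model.
# Requires a local Ollama server (http://localhost:11434) with the model pulled.
import requests


class OllamaConnector:
    """Wraps a local Ollama endpoint so it can sit behind the same routing
    layer as hosted providers."""

    def __init__(self, model: str, base_url: str = "http://localhost:11434"):
        self.model = model
        self.base_url = base_url

    def generate(self, prompt: str) -> str:
        # Ollama's /api/generate returns the full completion when stream is False.
        resp = requests.post(
            f"{self.base_url}/api/generate",
            json={"model": self.model, "prompt": prompt, "stream": False},
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["response"]


connector = OllamaConnector(model="llama3")
print(connector.generate("Summarize provider routing in one sentence."))
```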
That’s it. You're ready to test your first agent in minutes with Shipable.