Multi-Model LLM Support

Shipable natively supports seamless integration and orchestration across the leading foundation model providers. Whether you need accuracy, speed, cost-efficiency, or compliance, you can plug in the best LLM for each job.


Out-of-the-Box Support For:

| Provider | Notes |
| --- | --- |
| OpenAI | GPT-4o, GPT-4, GPT-3.5. Includes system prompt config, fallback logic, and token observability. |
| Anthropic | Claude 3 family with structured memory and toolchain routing support. |
| Meta LLaMA | OSS compatibility via vLLM. Supports fine-tuned LLaMA 2/3. |
| Mistral | Lightweight, fast, OSS-friendly; ideal for low-latency inference. |
| DeepSeek | Strong on deep code tasks and Chinese-language coverage. |
| Gemini (Google) | Used for research-heavy or multimodal vision tasks. Optional cloud-only fallback. |
| xAI (Grok) | API-ready integration with xAI's Grok models. Used in conversational summarization chains. |
| AbacusAI | Used in vertical enterprise deployments. Native RAG and fine-tuning options. |
| Cohere | Embed and chat models. High accuracy for enterprise summarization and multilingual NLP. |
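The fallback logic mentioned in the OpenAI row can be pictured as trying models in order until one succeeds. A minimal sketch follows; the `call_model` stub and the specific model names are illustrative assumptions, not Shipable's actual API:

```python
def call_model(model: str, prompt: str) -> str:
    """Stand-in for a real provider call; raises to simulate an outage."""
    if model == "gpt-4o":
        raise RuntimeError("provider unavailable")
    return f"[{model}] response to: {prompt}"


def complete_with_fallback(prompt: str,
                           chain=("gpt-4o", "gpt-4", "gpt-3.5-turbo")) -> str:
    """Try each model in the chain in order; return the first success."""
    last_err = None
    for model in chain:
        try:
            return call_model(model, prompt)
        except RuntimeError as err:
            last_err = err
    raise RuntimeError("all models in fallback chain failed") from last_err
```

Here, an outage of the primary model is transparent to the caller: the request is simply served by the next model in the chain.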

Model Routing Engine

Shipable routes every prompt dynamically. You can override any default with a single toggle, or define model rules per agent.
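Per-agent model rules could look something like the sketch below. The rule shape (match on a task type, pick a model, fall back to a default) is an assumption for illustration, not Shipable's actual configuration schema:

```python
# Hypothetical per-agent routing rules; names are illustrative only.
DEFAULT_MODEL = "gpt-4o"

AGENT_RULES = {
    "support-bot": [
        {"task": "summarize", "model": "claude-3-haiku"},
        {"task": "code", "model": "deepseek-coder"},
    ],
}


def route(agent: str, task: str) -> str:
    """Return the model for this agent/task, or the default if no rule matches."""
    for rule in AGENT_RULES.get(agent, []):
        if rule["task"] == task:
            return rule["model"]
    return DEFAULT_MODEL
```

With rules like these, one agent can send code tasks to a code-specialized model while everything else falls through to the default.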

Use Cases

FinOps-Ready

Plug Your Own Model

Got a private endpoint? A fine-tuned model on Replicate, Modal, SageMaker, or Ollama? Shipable supports custom LLM connectors with full routing and observability built in.
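One way to think about a custom connector is as a small provider-agnostic interface that a self-hosted endpoint implements, registered so it is routable alongside built-in providers. The class, method, and registry names below are illustrative assumptions, not Shipable's SDK:

```python
from abc import ABC, abstractmethod


class LLMConnector(ABC):
    """Minimal connector shape a custom endpoint would implement."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class EchoConnector(LLMConnector):
    """Toy connector standing in for a private fine-tuned endpoint."""

    def __init__(self, name: str):
        self.name = name

    def complete(self, prompt: str) -> str:
        return f"{self.name}: {prompt}"


REGISTRY: dict[str, LLMConnector] = {}


def register(model_id: str, connector: LLMConnector) -> None:
    """Expose a custom model under an ID the router can target."""
    REGISTRY[model_id] = connector
```

Once registered under an ID, a custom model can be referenced in routing rules exactly like a hosted provider.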

That’s it. You're ready to test your first agent in minutes with Shipable.