Accessing powerful Large Language Models (LLMs) is becoming increasingly crucial for building intelligent applications and services. But as the landscape of LLMs grows, with new models and providers emerging constantly, juggling multiple APIs and integrations can quickly become a headache.
Enter the LLM gateway.
An LLM gateway acts as a single point of access, unifying diverse LLM APIs under one roof. While the core function of simplified access is paramount, advanced LLM gateways like llm.do offer much more than just a handshake with different models. They provide a foundation for building robust, flexible, and intelligent applications.
At its heart, an LLM gateway delivers on the promise of unified access. Imagine needing to switch from one provider's model to another – perhaps due to performance, cost, or specific capabilities. Without a gateway, this means modifying significant parts of your codebase, updating authentication, and adapting to different API structures.
With llm.do, this complexity melts away. You connect to a single, consistent API and simply specify the desired model within your request.
This capability is not just a convenience; it's a powerful enabler of agility and experimentation. You can easily A/B test different models, dynamically route requests based on the task at hand, and leverage the unique strengths of various LLMs without rewriting core logic.
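As a sketch of what task-based routing can look like, the helper below maps task types to model identifiers. The task categories and model choices are illustrative assumptions, not llm.do recommendations.

```typescript
// Hypothetical task-based routing: map each task type to a model identifier.
// The categories and model assignments here are illustrative assumptions.
type TaskType = 'code' | 'long-context' | 'chat'

const modelForTask: Record<TaskType, string> = {
  'code': 'openai/gpt-4o',
  'long-context': 'anthropic/claude-3-opus',
  'chat': 'x-ai/grok-3-beta',
}

function pickModel(task: TaskType): string {
  return modelForTask[task]
}

// With the llm.do SDK, a routed request would then look like:
//   const { text } = await generateText({ model: llm(pickModel('code')), prompt })
```

Because the routing logic is just a lookup on a model identifier string, swapping or A/B testing models is a one-line change to the map rather than a code rewrite.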
Unified access is only the starting point. Advanced LLM gateways layer on essential capabilities, such as centralized management, monitoring, and intelligent request routing, that elevate your AI integration.
llm.do provides a unified gateway for the world's leading large language models, including those from OpenAI, Anthropic, Google AI, xAI, and more. By abstracting away the complexity of individual provider APIs, llm.do allows developers to focus on building innovative AI-powered applications and agentic workflows.
Don't limit yourself to a single LLM. Embrace the power of access to diverse models through a unified, intelligent gateway. Standardize your interaction, simplify your architecture, and unlock the full potential of large language models with llm.do.
What is llm.do and how does it work?
llm.do simplifies accessing multiple large language models (LLMs) through a single, consistent API. Instead of integrating with individual providers, you connect to llm.do and gain access to a wide range of models, making it easy to switch or use the best model for your specific task.
Which LLMs and providers are supported by llm.do?
llm.do gives you access to models from providers including OpenAI, Anthropic, Google, and xAI. You simply specify the desired model using a standardized 'provider/model' format (e.g., 'openai/gpt-4o', 'anthropic/claude-3-opus', 'x-ai/grok-3-beta') in your API calls.
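To illustrate the identifier format, a small helper can split these strings into their provider and model parts. This helper is purely illustrative and is not part of the llm.do SDK.

```typescript
// Illustrative helper (not part of the llm.do SDK): split a
// 'provider/model' identifier into its two components.
function parseModelId(id: string): { provider: string; model: string } {
  const slash = id.indexOf('/')
  if (slash === -1) {
    throw new Error(`Expected 'provider/model' format, got '${id}'`)
  }
  return { provider: id.slice(0, slash), model: id.slice(slash + 1) }
}
```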
What are the key benefits of using llm.do?
Using llm.do standardizes your interaction with LLMs, reduces integration effort when switching models or providers, provides a single point of access for management and monitoring, and helps power robust agentic workflows that may require different models for different steps.
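To make the last point concrete, a multi-step workflow can assign a different model to each step. The step names, prompts, and model assignments below are hypothetical, chosen only to illustrate the pattern.

```typescript
// Hypothetical workflow definition: each step names the model it should use.
// Step names, prompts, and model choices are illustrative assumptions.
interface WorkflowStep {
  name: string
  model: string
  buildPrompt: (input: string) => string
}

const summarizeAndDraft: WorkflowStep[] = [
  {
    name: 'summarize',
    model: 'anthropic/claude-3-opus',
    buildPrompt: (doc) => `Summarize the following document:\n${doc}`,
  },
  {
    name: 'draft-reply',
    model: 'openai/gpt-4o',
    buildPrompt: (summary) => `Draft a reply based on this summary:\n${summary}`,
  },
]

// Each step would call the gateway with its own model, e.g.:
//   const { text } = await generateText({
//     model: llm(step.model),
//     prompt: step.buildPrompt(input),
//   })
```

Because every step goes through the same unified API, mixing models from different providers within one workflow requires no extra integration work.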
How do I integrate llm.do into my application?
Integrating llm.do is straightforward. You use our SDKs (like the example shown with the ai library) or directly interact with our unified API endpoint. You'll need an API key from llm.do to authenticate your requests.
Does llm.do integrate with the .do Agentic Workflow Platform?
Yes. llm.do is designed to be fully compatible with the .do Agentic Workflow Platform, allowing you to easily incorporate powerful LLM capabilities into your Business-as-Code services and workflows. It acts as the intelligence layer for your agents.
```typescript
import { llm } from 'llm.do'
import { generateText } from 'ai'

// Easily switch models by changing the model identifier
const { text } = await generateText({
  model: llm('x-ai/grok-3-beta'), // Or 'openai/gpt-4o', 'anthropic/claude-3-opus', etc.
  prompt: 'Write a blog post about the future of work post-AGI',
})

console.log(text)
```