# OpenAI-compatible providers
Configure OpenAI-compatible LLM providers in kgateway, such as Mistral, DeepSeek, or any other provider that implements the OpenAI API format.
## Overview
Many LLM providers offer APIs that are compatible with OpenAI’s API format. You can configure these providers in agentgateway by using the openai provider type with custom host, port, path, and authHeader overrides.
Note that when you specify a custom host override, agentgateway requires explicit TLS configuration via BackendTLSPolicy for HTTPS endpoints. This differs from well-known providers (like OpenAI) where TLS is automatically enabled when using default hosts.
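For reference, a BackendTLSPolicy for a custom host might look like the following sketch. It assumes the Gateway API `v1alpha3` BackendTLSPolicy schema and that such a policy can target an AgentgatewayBackend; the examples in this guide use the inline `policies.tls` setting on the AgentgatewayBackend instead, so treat this only as an illustration.

```yaml
# Hypothetical sketch: assumes the Gateway API v1alpha3 BackendTLSPolicy schema
# and that a BackendTLSPolicy can target an AgentgatewayBackend resource.
apiVersion: gateway.networking.k8s.io/v1alpha3
kind: BackendTLSPolicy
metadata:
  name: provider-tls
  namespace: kgateway-system
spec:
  targetRefs:
  - group: agentgateway.dev
    kind: AgentgatewayBackend
    name: mistral
  validation:
    # Trust the system CA bundle and validate the server certificate
    # against the provider hostname.
    wellKnownCACertificates: System
    hostname: api.mistral.ai
```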
## Before you begin
Set up an agentgateway proxy.
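If you do not have a proxy yet, a minimal Gateway might look like the following sketch. The name, namespace, `gatewayClassName`, and listener port are assumptions that match the `parentRefs` and the local test address (`localhost:8080`) used later in this guide; adjust them for your environment.

```yaml
# Minimal sketch of an agentgateway proxy. The class name and port are
# assumptions based on the parentRefs and test commands in this guide.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: agentgateway
  namespace: kgateway-system
spec:
  gatewayClassName: agentgateway
  listeners:
  - name: http
    port: 8080
    protocol: HTTP
```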
## Set up access to an OpenAI-compatible provider
Review the following examples for common OpenAI-compatible provider endpoints:
### Mistral AI example
Set up OpenAI-compatible provider access to Mistral AI models.
1. Get a Mistral AI API key.

2. Save the API key in an environment variable.

   ```sh
   export MISTRAL_API_KEY=<insert your API key>
   ```

3. Create a Kubernetes secret to store your Mistral AI API key.

   ```sh
   kubectl apply -f- <<EOF
   apiVersion: v1
   kind: Secret
   metadata:
     name: mistral-secret
     namespace: kgateway-system
   type: Opaque
   stringData:
     Authorization: $MISTRAL_API_KEY
   EOF
   ```
4. Create an AgentgatewayBackend resource to configure your LLM provider and reference the AI API key secret that you created earlier.

   ```sh
   kubectl apply -f- <<EOF
   apiVersion: agentgateway.dev/v1alpha1
   kind: AgentgatewayBackend
   metadata:
     name: mistral
     namespace: kgateway-system
   spec:
     ai:
       provider:
         openai:
           model: mistral-medium-2505
           host: api.mistral.ai
           port: 443
           path: "/v1/chat/completions"
     policies:
       auth:
         secretRef:
           name: mistral-secret
       tls:
         sni: api.mistral.ai
   EOF
   ```

   Review the following table to understand this configuration. For more information, see the API reference.

   | Setting | Description |
   | ------- | ----------- |
   | `ai.provider.openai` | Define the OpenAI-compatible provider. |
   | `openai.model` | The model to use, such as `mistral-medium-2505`. |
   | `openai.host` | Required: The hostname of the OpenAI-compatible provider, such as `api.mistral.ai`. |
   | `openai.port` | Required: The port number, typically `443` for HTTPS. Both `host` and `port` must be set together. |
   | `openai.path` | Optional: Override the API path. Defaults to `/v1/chat/completions` if not specified. |
   | `policies.auth` | Configure the authentication token for the OpenAI API. The example refers to the secret that you previously created. |
   | `policies.tls.sni` | The hostname for which to validate the server certificate. Must match the `host` value. |
5. Create an HTTPRoute resource that routes incoming traffic to the AgentgatewayBackend. The following example sets up a route on the `/mistral` path to the AgentgatewayBackend that you previously created. The `URLRewrite` filter rewrites the hostname to `api.mistral.ai`, and agentgateway rewrites the path to the API path that you configured in the AgentgatewayBackend, `/v1/chat/completions`.

   ```sh
   kubectl apply -f- <<EOF
   apiVersion: gateway.networking.k8s.io/v1
   kind: HTTPRoute
   metadata:
     name: mistral
     namespace: kgateway-system
   spec:
     parentRefs:
     - name: agentgateway
       namespace: kgateway-system
     rules:
     - matches:
       - path:
           type: PathPrefix
           value: /mistral
       filters:
       - type: URLRewrite
         urlRewrite:
           hostname: api.mistral.ai
       backendRefs:
       - name: mistral
         namespace: kgateway-system
         group: agentgateway.dev
         kind: AgentgatewayBackend
   EOF
   ```
6. Send a request to the LLM provider API. Verify that the request succeeds and that you get back a response from the chat completion API. If you test locally through a port-forward instead of a load balancer address, replace `$INGRESS_GW_ADDRESS` with `localhost:8080`. If the request fails, try the direct provider check that follows these steps.

   ```sh
   curl "$INGRESS_GW_ADDRESS/mistral" -H content-type:application/json -d '{
     "model": "",
     "messages": [
       {
         "role": "system",
         "content": "You are a helpful assistant."
       },
       {
         "role": "user",
         "content": "Write a short haiku about artificial intelligence."
       }
     ]
   }' | jq
   ```

   Example output:

   ```json
   {
     "model": "mistral-medium-2505",
     "usage": {
       "prompt_tokens": 20,
       "completion_tokens": 18,
       "total_tokens": 38
     },
     "choices": [
       {
         "message": {
           "content": "Silent circuits hum,\nLearning echoes through the void,\nWisdom without warmth.",
           "role": "assistant",
           "tool_calls": null
         },
         "index": 0,
         "finish_reason": "stop"
       }
     ],
     "id": "d05ef3973085435a8db8b51b580eeef8",
     "created": 1764614501,
     "object": "chat.completion"
   }
   ```
### DeepSeek example
Set up OpenAI-compatible provider access to DeepSeek models.
1. Get a DeepSeek API key.

2. Save the API key in an environment variable.

   ```sh
   export DEEPSEEK_API_KEY=<insert your API key>
   ```

3. Create a Kubernetes secret to store your DeepSeek API key.

   ```sh
   kubectl apply -f- <<EOF
   apiVersion: v1
   kind: Secret
   metadata:
     name: deepseek-secret
     namespace: kgateway-system
   type: Opaque
   stringData:
     Authorization: $DEEPSEEK_API_KEY
   EOF
   ```
4. Create an AgentgatewayBackend resource to configure your LLM provider and reference the AI API key secret that you created earlier.

   ```sh
   kubectl apply -f- <<EOF
   apiVersion: agentgateway.dev/v1alpha1
   kind: AgentgatewayBackend
   metadata:
     name: deepseek
     namespace: kgateway-system
   spec:
     ai:
       provider:
         openai:
           model: deepseek-chat
           host: api.deepseek.com
           port: 443
           path: "/v1/chat/completions"
     policies:
       auth:
         secretRef:
           name: deepseek-secret
       tls:
         sni: api.deepseek.com
   EOF
   ```
5. Create an HTTPRoute resource that routes incoming traffic to the AgentgatewayBackend. Note that kgateway automatically rewrites the endpoint to the OpenAI chat completion endpoint of the LLM provider for you, based on the LLM provider that you set up in the AgentgatewayBackend resource.

   ```sh
   kubectl apply -f- <<EOF
   apiVersion: gateway.networking.k8s.io/v1
   kind: HTTPRoute
   metadata:
     name: deepseek
     namespace: kgateway-system
   spec:
     parentRefs:
     - name: agentgateway
       namespace: kgateway-system
     rules:
     - matches:
       - path:
           type: PathPrefix
           value: /deepseek
       backendRefs:
       - name: deepseek
         namespace: kgateway-system
         group: agentgateway.dev
         kind: AgentgatewayBackend
   EOF
   ```
6. Send a request to the LLM provider API. Verify that the request succeeds and that you get back a response from the chat completion API. If you test locally through a port-forward instead of a load balancer address, replace `$INGRESS_GW_ADDRESS` with `localhost:8080`. To troubleshoot, see the status checks after these steps.

   ```sh
   curl "$INGRESS_GW_ADDRESS/deepseek" -H content-type:application/json -d '{
     "model": "",
     "messages": [
       {
         "role": "system",
         "content": "You are a helpful assistant."
       },
       {
         "role": "user",
         "content": "Write a short haiku about artificial intelligence."
       }
     ]
   }' | jq
   ```

   Example output:

   ```json
   {
     "id": "chatcmpl-deepseek-12345",
     "object": "chat.completion",
     "created": 1727967462,
     "model": "deepseek-chat",
     "choices": [
       {
         "index": 0,
         "message": {
           "role": "assistant",
           "content": "Neural networks learn,\nPatterns emerge from data streams,\nMind in silicon grows."
         },
         "finish_reason": "stop"
       }
     ],
     "usage": {
       "prompt_tokens": 20,
       "completion_tokens": 17,
       "total_tokens": 37
     }
   }
   ```
## Next steps
- Want to use endpoints other than chat completions, such as embeddings or models? Check out the multiple endpoints guide.
- Explore other guides for LLM consumption, such as function calling, model failover, and prompt guards.