Vertex AI
Configure Vertex AI as an LLM provider in agentgateway.
Before you begin
Set up an agentgateway proxy.
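For example, you can check that the Gateway for your agentgateway proxy is deployed before you continue. This sketch assumes a Gateway named `agentgateway` in the `kgateway-system` namespace, which matches the HTTPRoute later in this guide; adjust the names if your setup differs.

```sh
# Confirm the Gateway exists and has an address assigned.
kubectl get gateway agentgateway -n kgateway-system
```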
Set up access to Vertex AI
- Set up authentication for Vertex AI. Make sure to have your:
  - Google Cloud Project ID
  - Project location, such as `us-central1`
  - API key or service account credentials
- Save your Vertex AI API key as an environment variable.

  ```sh
  export VERTEX_AI_API_KEY=<insert your API key>
  ```

- Create a Kubernetes secret to store your Vertex AI API key.

  ```yaml
  kubectl apply -f- <<EOF
  apiVersion: v1
  kind: Secret
  metadata:
    name: vertex-ai-secret
    namespace: kgateway-system
  type: Opaque
  stringData:
    Authorization: $VERTEX_AI_API_KEY
  EOF
  ```
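  Optionally, you can double-check that the secret stores the key that you exported. This is a sanity check only, not a required step.

  ```sh
  # Decode the Authorization value from the secret and compare it to your API key.
  kubectl get secret vertex-ai-secret -n kgateway-system \
    -o jsonpath='{.data.Authorization}' | base64 -d
  ```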
- Create a Backend resource to configure an LLM provider that references the Vertex AI API key secret.

  ```yaml
  kubectl apply -f- <<EOF
  apiVersion: gateway.kgateway.dev/v1alpha1
  kind: Backend
  metadata:
    name: vertex-ai
    namespace: kgateway-system
  spec:
    type: AI
    ai:
      llm:
        vertexai:
          authToken:
            kind: SecretRef
            secretRef:
              name: vertex-ai-secret
          model: "gemini-pro"
          apiVersion: "v1"
          projectId: "my-gcp-project"
          location: "us-central1"
          publisher: "GOOGLE"
  EOF
  ```

  Review the following table to understand this configuration. For more information, see the API reference.
  | Setting | Description |
  |---|---|
  | `type` | Set to `AI` to configure this Backend for an AI provider. |
  | `ai` | Define the AI backend configuration. The example uses Vertex AI (`spec.ai.llm.vertexai`). |
  | `authToken` | Configure the authentication token for the Vertex AI API. The example refers to the secret that you previously created. The token is automatically sent in the `key` header. |
  | `model` | The Vertex AI model to use. For more information, see the Vertex AI model docs. |
  | `apiVersion` | The version of the Vertex AI API to use. For more information, see the Vertex AI API reference. |
  | `projectId` | The ID of the Google Cloud project that you use for Vertex AI. |
  | `location` | The location of the Google Cloud project that you use for Vertex AI, such as `us-central1`. |
  | `publisher` | The type of publisher model to use. Currently, only `GOOGLE` is supported. |
  | `modelPath` | Optional: The model path to route to. Defaults to the Gemini model path, `generateContent`. |
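  Optionally, confirm that the Backend resource was created. The fully qualified resource name is used here as a precaution to avoid clashes with other `backend` resources in your cluster.

  ```sh
  # List the Backend that you just created.
  kubectl get backends.gateway.kgateway.dev vertex-ai -n kgateway-system
  ```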
- Create an HTTPRoute resource that routes incoming traffic to the Backend. The following example sets up a route on the `/vertex` path. Note that kgateway automatically rewrites the endpoint to the appropriate chat completion endpoint of the LLM provider for you, based on the LLM provider that you set up in the Backend resource.

  ```yaml
  kubectl apply -f- <<EOF
  apiVersion: gateway.networking.k8s.io/v1
  kind: HTTPRoute
  metadata:
    name: vertex-ai
    namespace: kgateway-system
  spec:
    parentRefs:
    - name: agentgateway
      namespace: kgateway-system
    rules:
    - matches:
      - path:
          type: PathPrefix
          value: /vertex
      backendRefs:
      - name: vertex-ai
        namespace: kgateway-system
        group: gateway.kgateway.dev
        kind: Backend
  EOF
  ```
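  Optionally, check that the route was accepted by the gateway. The route status is reported through the standard Gateway API status fields, so this check works for any HTTPRoute.

  ```sh
  # Inspect the route status for an Accepted condition.
  kubectl get httproute vertex-ai -n kgateway-system -o yaml
  ```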
- Send a request to the LLM provider API. Verify that the request succeeds and that you get back a response from the API.
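  If you did not save the gateway address yet, you can look it up from the Gateway status. This sketch assumes that the `agentgateway` Gateway in `kgateway-system` exposes an external address; if you test locally instead, port-forward the gateway and use the `localhost:8080` variant shown in the second command below.

  ```sh
  # Save the external address of the gateway, assuming it is reported in the Gateway status.
  export INGRESS_GW_ADDRESS=$(kubectl get gateway agentgateway -n kgateway-system \
    -o jsonpath='{.status.addresses[0].value}')
  echo $INGRESS_GW_ADDRESS
  ```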
curl "$INGRESS_GW_ADDRESS/vertex" -H content-type:application/json -d '{ "model": "", "messages": [ { "role": "user", "content": "Write me a short poem about Kubernetes and clouds." } ] }' | jqcurl "localhost:8080/vertex" -H content-type:application/json -d '{ "model": "", "messages": [ { "role": "user", "content": "Write me a short poem about Kubernetes and clouds." } ] }' | jqExample output:
{ "id": "chatcmpl-vertex-12345", "object": "chat.completion", "created": 1727967462, "model": "gemini-pro", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "In the cloud, Kubernetes reigns,\nOrchestrating pods with great care,\nContainers float like clouds,\nScaling up and down,\nAutomation everywhere." }, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 12, "completion_tokens": 28, "total_tokens": 40 } }
Next steps
- Explore other guides for LLM consumption, such as function calling, model failover, and prompt guards.