Azure OpenAI

Configure Azure OpenAI as an LLM provider in agentgateway.

Before you begin

Set up an agentgateway proxy.

Set up access to Azure OpenAI

  1. Deploy a Microsoft Foundry Model in the Foundry portal.

  2. In the Foundry portal, go to your model deployment. From the Details tab, retrieve the endpoint and the key for the deployment. Later, you use this endpoint information to configure your Azure OpenAI backend, including the base URL, the name of your model deployment, and the API version.

    For example, the URL https://my-endpoint.cognitiveservices.azure.com/openai/deployments/gpt-4.1-mini/chat/completions?api-version=2025-01-01-preview breaks down into the following parts (the snippet after this list shows how they recombine):

    • my-endpoint.cognitiveservices.azure.com as the base URL
    • gpt-4.1-mini as the name of your model deployment
    • 2025-01-01-preview as the API version
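
    For convenience, you can capture these parts in environment variables and recombine them into the full chat completions URL. The variable names in this sketch are illustrative only; the manifests in later steps hardcode the same values.

    export AZURE_BASE_URL=my-endpoint.cognitiveservices.azure.com
    export AZURE_DEPLOYMENT=gpt-4.1-mini
    export AZURE_API_VERSION=2025-01-01-preview

    # The parts recombine into the full chat completions URL
    echo "https://${AZURE_BASE_URL}/openai/deployments/${AZURE_DEPLOYMENT}/chat/completions?api-version=${AZURE_API_VERSION}"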
  3. Store the key to access your model deployment in an environment variable.

    export AZURE_OPENAI_KEY=<insert your model deployment key>
  4. Create a Kubernetes secret to store your model deployment key.

    kubectl apply -f- <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: azure-openai-secret
      namespace: kgateway-system
    type: Opaque
    stringData:
      Authorization: $AZURE_OPENAI_KEY
    EOF
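
    Optionally, verify that the secret exists before you reference it in the next step:

    kubectl get secret azure-openai-secret -n kgateway-system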
  5. Create an AgentgatewayBackend resource to configure the Azure OpenAI LLM provider.

    kubectl apply -f- <<EOF
    apiVersion: agentgateway.dev/v1alpha1
    kind: AgentgatewayBackend
    metadata:
      name: azure-openai
      namespace: kgateway-system
    spec:
      ai:
        provider:
          azureopenai:
            endpoint: my-endpoint.cognitiveservices.azure.com
            deploymentName: gpt-4.1-mini
            apiVersion: 2025-01-01-preview
      policies:
        auth:
          secretRef:
            name: azure-openai-secret
    EOF

    Review the following settings to understand this configuration. For more information, see the API reference.

    • ai.provider.azureopenai: Defines Azure OpenAI as the LLM provider.
    • azureopenai.endpoint: The endpoint of the Azure OpenAI deployment that you created, such as my-endpoint.cognitiveservices.azure.com.
    • azureopenai.deploymentName: The name of the Azure OpenAI model deployment that you created earlier. For more information, see the Azure OpenAI model docs.
    • azureopenai.apiVersion: The version of the Azure OpenAI API to use. For more information, see the Azure OpenAI API version reference.
    • policies.auth: Configures the authentication token for the Azure OpenAI API. The example refers to the secret that you previously created. The token is automatically sent in the api-key header.
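
    Because of the policies.auth setting, agentgateway injects this key into each request to the provider for you. For comparison, the following direct call to Azure OpenAI passes the key explicitly in the api-key header. This is only an optional sanity check of your endpoint and credentials, not part of the gateway setup, and it uses the example endpoint values from above:

    curl "https://my-endpoint.cognitiveservices.azure.com/openai/deployments/gpt-4.1-mini/chat/completions?api-version=2025-01-01-preview" \
      -H "api-key: $AZURE_OPENAI_KEY" \
      -H "content-type: application/json" \
      -d '{"messages":[{"role":"user","content":"Say hello."}]}'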
  6. Create an HTTPRoute resource that routes incoming traffic to the AgentgatewayBackend. The following example sets up a route on the /azure-openai path to the AgentgatewayBackend that you previously created. Note that kgateway automatically rewrites the endpoint to the appropriate chat completion endpoint of the LLM provider for you, based on the LLM provider that you set up in the AgentgatewayBackend resource.

    kubectl apply -f- <<EOF
    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: azure-openai
      namespace: kgateway-system
    spec:
      parentRefs:
        - name: agentgateway
          namespace: kgateway-system
      rules:
      - matches:
        - path:
            type: PathPrefix
            value: /azure-openai
        backendRefs:
        - name: azure-openai
          namespace: kgateway-system
          group: agentgateway.dev
          kind: AgentgatewayBackend
    EOF
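
    Get an address for the gateway so that you can send requests to it in the next step. The following commands are a sketch that assumes a Gateway named agentgateway in the kgateway-system namespace, as in the parentRefs above; the service name and listener port can differ in your setup.

    # LoadBalancer address from the Gateway status
    export INGRESS_GW_ADDRESS=$(kubectl get gateway agentgateway -n kgateway-system -o jsonpath='{.status.addresses[0].value}')
    echo $INGRESS_GW_ADDRESS

    # Alternatively, port-forward the gateway service for local testing.
    # The service name and port 8080 are assumptions; adjust to your listener setup.
    kubectl port-forward svc/agentgateway -n kgateway-system 8080:8080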
  7. Send a request to the LLM provider API. Verify that the request succeeds and that you get back a response from the chat completion API. Use the option that matches how you access the gateway. Note that the model field can be empty, because the deployment name that you configured in the AgentgatewayBackend determines the model.

    curl "$INGRESS_GW_ADDRESS/azure-openai" -H content-type:application/json  -d '{
       "model": "",
       "messages": [
         {
           "role": "system",
           "content": "You are a helpful assistant."
         },
         {
           "role": "user",
           "content": "Write a short haiku about cloud computing."
         }
       ]
     }' | jq
    curl "localhost:8080/azure-openai" -H content-type:application/json  -d '{
       "model": "",
       "messages": [
         {
           "role": "system",
           "content": "You are a helpful assistant."
         },
         {
           "role": "user",
           "content": "Write a short haiku about cloud computing."
         }
       ]
     }' | jq

    Example output:

    {
      "id": "chatcmpl-9A8B7C6D5E4F3G2H1",
      "object": "chat.completion",
      "created": 1727967462,
      "model": "gpt-4o-mini",
      "choices": [
        {
          "index": 0,
          "message": {
            "role": "assistant",
            "content": "Floating servers bright,\nData streams through endless sky,\nClouds hold all we need."
          },
          "finish_reason": "stop"
        }
      ],
      "usage": {
        "prompt_tokens": 28,
        "completion_tokens": 19,
        "total_tokens": 47
      }
    }
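
    To extract only the assistant's reply from the response, filter it with jq. For example:

    curl -s "$INGRESS_GW_ADDRESS/azure-openai" -H "content-type: application/json" \
      -d '{"model": "", "messages": [{"role": "user", "content": "Write a short haiku about cloud computing."}]}' \
      | jq -r '.choices[0].message.content'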

Next steps

  • Want to use endpoints other than chat completions, such as embeddings or models? Check out the multiple endpoints guide.