Sunday, April 27, 2025

Apigee Extension Processor v1.0: CLB Policy Decision Point


This powerful new feature greatly extends Apigee’s reach and flexibility, making it simpler than ever to manage and secure a wider variety of backend services and modern application architectures.

The Extension Processor’s smooth integration with Cloud Run lets developers adopting modern deployment patterns apply Apigee policies to their scalable containerized applications.

The Extension Processor also opens up powerful new communication channels. With gRPC bidirectional streaming, you can now handle complex real-time interactions, enabling low-latency, highly interactive applications. For event-driven architectures, the Extension Processor also provides a mechanism to manage and secure Server-Sent Events (SSE), enabling efficient data streaming to clients.

The advantages go beyond communication protocols, however. Used in conjunction with Google token injection policies, the Apigee Extension Processor significantly streamlines secure access to your Google Cloud infrastructure. Within Apigee’s consistent security framework, you can connect to and manage access to robust data sources such as Bigtable, and use Vertex AI for your machine learning workloads.

Finally, the Extension Processor integrates with Cloud Load Balancing’s sophisticated traffic management features, providing unmatched flexibility in routing and managing a variety of traffic flows. This combination makes it possible to manage even the most complicated API landscapes.

This blog describes a way to handle gRPC streaming in Apigee, a significant challenge for today’s high-performance, real-time applications. Although gRPC is a fundamental building block of efficient microservices, its streaming nature poses difficulties for enterprises using Google Cloud’s Apigee as an inline proxy (conventional mode).

We will look at how Apigee’s data plane can enforce policies on gRPC streaming traffic as it travels through the Application Load Balancer (ALB) to the Apigee Extension Processor. This is accomplished with a service extension (specifically, a traffic extension), enabling efficient management and routing without requiring the gRPC stream to pass through the Apigee gateway directly.

Read on to explore the main components of this solution, its advantages, and a high-level walkthrough of a practical use case with a Cloud Run backend.

Apigee Extension Processor: An Overview

The Apigee Extension Processor is a traffic extension (a kind of service extension) that lets Cloud Load Balancing send callouts to Apigee as part of its API management. This allows Apigee to apply API management policies to requests before the ALB forwards them to user-managed backend services, effectively extending Apigee’s powerful API management capabilities to workloads fronted by Cloud Load Balancing.

Infrastructure and Data Flow

The diagram shows the essential components of the Apigee Extension Processor configuration:

Components of the Apigee Extension Processor configuration (image credit: Google)

The Apigee Extension Processor configuration involves several essential elements: a Service Extension, an ALB, and an Apigee instance with the Extension Processor enabled.

The numbered steps below match the numbered arrows in the flow diagram, showing the order of events:

  • The client sends a request to the ALB.
  • The ALB, acting as the Policy Enforcement Point (PEP), processes the traffic. As part of this processing, it calls out to Apigee through the configured Service Extension (traffic extension).
  • The Apigee Extension Processor, acting as the Policy Decision Point (PDP), receives the callout, applies the relevant API management policies to the request, and sends the processed request back to the ALB (PEP).
  • Once processing is complete, the ALB forwards the request to the backend service.
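The PEP/PDP division of labor in these steps can be sketched in a few lines. This is a conceptual model only, not the actual ext-proc wire protocol; all function and header names here are illustrative, not real Apigee or Envoy APIs:

```python
# Conceptual model of the PEP/PDP split. Illustrative names only.

def apigee_pdp(headers: dict) -> dict:
    """Policy Decision Point: decide, and optionally mutate headers."""
    if "x-api-key" not in headers:
        return {"allow": False, "headers": headers}
    # A policy may enrich the request before returning it to the PEP.
    mutated = dict(headers)
    mutated["x-apigee-verified"] = "true"
    return {"allow": True, "headers": mutated}

def alb_pep(headers: dict, backend) -> str:
    """Policy Enforcement Point: call out to the PDP, then route or reject."""
    decision = apigee_pdp(headers)       # steps 2-3: Service Extension callout
    if not decision["allow"]:
        return "403 Forbidden"
    return backend(decision["headers"])  # step 4: forward to backend service

backend = lambda h: "200 OK from Cloud Run"
print(alb_pep({"x-api-key": "demo"}, backend))  # 200 OK from Cloud Run
print(alb_pep({}, backend))                     # 403 Forbidden
```

The key point the sketch captures is that the ALB never hands the payload off permanently: it consults the PDP, then performs the routing itself.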

The backend service then returns the response to the ALB. Before sending the response to the client, the ALB may use the Service Extension to call out to Apigee again to enforce response policies.

Connecting the dots: Making gRPC streaming pass-through possible

Although many modern applications rely on streaming gRPC, Apigee does not currently support streaming when used as an inline proxy. This is where the Apigee Extension Processor becomes useful: the ALB handles the streaming gRPC traffic and serves as the Policy Enforcement Point (PEP), while the Apigee runtime serves as the Policy Decision Point (PDP).

Key components required to enable gRPC streaming pass-through with Apigee

The following components are required to enable gRPC streaming pass-through using the Apigee Extension Processor. See Get started with the Apigee Extension Processor for comprehensive setup instructions.

  • gRPC streaming backend service: a gRPC service that implements the required bidirectional, server, or client streaming methods.
  • Application Load Balancer (ALB): the entry point for client requests, configured to route traffic and make callouts to the Apigee Service Extension.
  • Apigee instance with the Extension Processor enabled: an Apigee instance and environment configured with the Extension Processor feature, using a targetless API proxy to process Service Extension traffic over ext-proc.
  • Service Extension configuration: a traffic extension (a kind of service extension) connecting the ALB to the Apigee runtime, preferably over Private Service Connect (PSC).
  • Network connectivity: the network must be configured so that all components can communicate (client to ALB, ALB to Apigee, and ALB to backend).
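The streaming backend in the first component above has a characteristic shape: an iterator of requests in, a generator of responses out. In a real gRPC backend this would be a servicer method generated from a .proto with grpcio; the sketch below is plain Python so the streaming shape is visible without any dependencies, and all names are illustrative:

```python
# Shape of a bidirectional-streaming handler (illustrative, no grpcio).
from typing import Iterator

def stream_logs(requests: Iterator[str]) -> Iterator[str]:
    """Echo each log line back with an acknowledgement, as it arrives."""
    for line in requests:
        yield f"ack: {line}"

# The caller drives the stream; responses interleave with requests.
for reply in stream_logs(iter(["boot", "ready"])):
    print(reply)  # ack: boot, then ack: ready
```

Because each response is yielded as its request arrives, nothing in the path may buffer the whole exchange, which is exactly why the stream must pass through the ALB rather than through the inline Apigee proxy.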

Use case: securing and managing gRPC streaming services on Google Cloud with Apigee

Imagine a customer who builds a high-performance backend service that streams gRPC data, such as real-time application logs. For scalability and ease of administration, the backend is deployed on Google Cloud Run in their main Google Cloud project. The customer now wants to expose this gRPC streaming service to its clients through a secure, well-managed API gateway. They choose Apigee for its strong API management capabilities, such as authentication, authorization, rate limiting, and other policies.

The Challenge

As noted above, gRPC streaming is not natively supported when Apigee is used in inline proxy mode. Exposing the Cloud Run gRPC service directly through a typical Apigee setup would support none of the streaming use cases: client, server, or bidirectional streaming.

Solution

The Apigee Extension Processor provides the bridge needed to handle gRPC streaming traffic bound for a backend application hosted on Cloud Run within the same Google Cloud project.

Here is a condensed flow:

Client initiation

  • The client application initiates the gRPC streaming request.
  • The request targets the public IP address or DNS name of the ALB, which acts as the entry point.
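Concretely, the client targets the ALB’s address, not Apigee or Cloud Run directly, and supplies its credentials as call metadata alongside a request generator. The sketch below models just that much; the host, header name, and helper are hypothetical, and a real client would invoke a stub generated from the service’s .proto with grpcio:

```python
# Sketch of what a streaming client assembles before opening the stream.
# Host, header name, and helper are illustrative, not real APIs.

def log_lines():
    """Request generator: the client streams log lines as they occur."""
    yield "service started"
    yield "handled request id=42"

def start_stream(alb_host: str, api_key: str):
    target = f"{alb_host}:443"            # the ALB is the entry point
    metadata = (("x-api-key", api_key),)  # inspected by Apigee policies
    # A real call would look like: stub.StreamLogs(log_lines(), metadata=metadata)
    return target, metadata, list(log_lines())

target, metadata, sent = start_stream("logs.example.com", "demo-key")
print(target, len(sent))  # logs.example.com:443 2
```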

Application Load Balancer processing and Service Extension callout

  • The incoming gRPC streaming request arrives at the ALB.
  • The ALB is configured with a backend service that points to the Cloud Run backend via a serverless Network Endpoint Group (NEG).
  • The ALB also has a Service Extension (traffic extension) configured with a specific Apigee runtime backend.
  • For matching traffic, the ALB makes the initial callout to this Service Extension.

Apigee proxy processing

The gRPC request is routed through the Service Extension to the designated Apigee API proxy.

Within this Apigee X proxy, API management policies are applied, for example authentication, authorization, and rate limiting.

Note: in this case, the Apigee proxy is a no-target proxy, meaning it is not configured with a Target Endpoint. The ALB performs the final routing.
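The no-target proxy’s job can be modeled as an ordered policy pipeline that ends by handing control back rather than forwarding to a target. The policy functions below are stand-ins for real Apigee policies such as VerifyAPIKey and Quota, not actual Apigee APIs, and the key and quota values are toy data:

```python
# Illustrative model of a no-target proxy: policies run in order, then
# control returns to the caller (the ALB). Names and data are toy examples.

VALID_KEYS = {"demo-key"}
QUOTA = {"demo-key": 3}          # calls remaining per key

def verify_api_key(ctx):
    if ctx.get("x-api-key") not in VALID_KEYS:
        raise PermissionError("invalid API key")

def enforce_quota(ctx):
    key = ctx["x-api-key"]
    if QUOTA.get(key, 0) <= 0:
        raise RuntimeError("quota exceeded")
    QUOTA[key] -= 1

POLICIES = [verify_api_key, enforce_quota]

def run_proxy(ctx):
    for policy in POLICIES:
        policy(ctx)
    return "continue"            # no target: hand the request back to the ALB

print(run_proxy({"x-api-key": "demo-key"}))  # continue
```

A rejection at any policy short-circuits the pipeline, which corresponds to the ALB returning an error to the client instead of routing to Cloud Run.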

Return to ALB

After policy processing, control returns to the ALB via the Service Extension response, because the Apigee proxy has no target.

Load balancer routing to the Cloud Run backend

In accordance with its backend service settings, the ALB routes the gRPC streaming request to the relevant backend service, which maps to the serverless NEG fronting the Cloud Run service.

The ALB manages the underlying routing to the Cloud Run instance.

Response handling

Response handling follows a similar pattern to the request flow. The backend initiates the response, which the ALB handles. Before sending the response to the client, the ALB may use the Service Extension (traffic extension) to call Apigee for policy enforcement.

This condensed use case shows how the Apigee Extension Processor applies API management policies to gRPC streaming traffic bound for an application deployed on Cloud Run in the same Google Cloud project. The ALB primarily manages routing to the Cloud Run service based on its NEG configuration.

Benefits of Leveraging the Apigee Extension Processor for gRPC Streaming

Using the Apigee Extension Processor to manage backend gRPC streaming services offers several significant benefits, bringing Apigee’s core capabilities to this new class of application:

Extending Apigee’s reach

This approach extends Apigee’s powerful API management features to gRPC streaming, a communication pattern that the core Apigee proxy does not support natively.

Making use of existing investments

This solution lets businesses that already use Apigee for their RESTful APIs manage their gRPC streaming services from within Apigee as well. Although it requires the Extension Processor, it builds on familiar API management concepts and avoids the need for additional tools.

Centralized policy management

Apigee offers a centralized platform for defining and enforcing API management policies. By integrating gRPC streaming via the Extension Processor, you can apply uniform governance and security across all of your API endpoints.

Monetization potential

If you offer gRPC streaming services as a product, you can use Apigee’s monetization features. By attaching rate plans to the API products you create in Apigee, you can charge for access to your gRPC streaming APIs.

Improved observability and traceability

Even though detailed gRPC protocol-level analytics may be limited in a pass-through scenario, Apigee still offers useful insight into the traffic reaching your streaming services, including connection attempts, error rates, and general usage patterns. This observability is essential for monitoring and troubleshooting.

With end-to-end visibility across multiple apps, services, and databases, Apigee’s distributed tracing can help you track requests through distributed systems that use your gRPC streaming services.

Business intelligence

Apigee API Analytics gathers the wealth of data passing through your load balancer and offers visualization in the user interface (UI), or the option to download data for offline analysis. This data can help you understand usage trends, identify performance bottlenecks, and make informed business decisions.

Weighing these advantages, it is clear that the Apigee Extension Processor provides a practical, workable way to add essential API management features to gRPC streaming services on Google Cloud.

Looking Ahead

The Apigee Extension Processor is an important step in extending Apigee’s capabilities, pointing toward a future in which Apigee’s policy enforcement is available on any gateway, anywhere. Using the Apigee runtime as the Policy Decision Point (PDP) and the ext-proc protocol, a range of Envoy-based load balancers and gateways will be able to function as Policy Enforcement Points (PEPs). This progress will leave organizations even better equipped to manage and secure their digital assets in increasingly distributed and heterogeneous environments.

Thota Nithya has been writing cloud computing articles for Govindhtech since April 2023.