Serverless OAuth2/OIDC server with OpenIddict 6 and AWS Aurora v2

· 10 min read
Akhan Zhakiyanov
Lead engineer

With the recent announcement of OpenIddict 6 and AWS Aurora Serverless v2's new scaling to zero capability, we have a perfect opportunity to build a cost-effective, serverless OAuth2/OpenID Connect server.

This setup will leverage AWS Lambda for compute and Aurora v2 PostgreSQL for storage, providing enterprise-grade security and scalability while maintaining optimal cost efficiency and only incurring cost when actually in use.

(Architecture diagram: serverless OpenIddict with Aurora v2 — HTTP API Gateway → Lambda → Aurora Serverless v2.)

Let's start by creating a new solution for our OAuth2/OIDC server.

Prerequisites

Make sure you have the following tools installed (all of them are used throughout this post):

- .NET 8 SDK
- Docker
- AWS CLI
- Pulumi CLI

ECR repository

Since AWS Lambda container images must be stored in Amazon ECR (other registries are not supported), we need to create an ECR repository and push a container image to it before creating the Lambda function.

You can create an ECR repository manually with AWS CLI or using Pulumi:

// index.ts
import * as awsx from "@pulumi/awsx";

const prefix = "openiddict6-serverless";

const repository = new awsx.ecr.Repository(`${prefix}-ecr-repository`, {
  name: `${prefix}`,
  imageTagMutability: 'IMMUTABLE',
});
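If you prefer the AWS CLI, the same repository can be created with a one-off command; the resulting repository URL then follows a fixed pattern, which is handy for scripting later steps. The account id and region below are placeholders:

```shell
# One-off setup with the AWS CLI (assumes credentials for the target account are configured):
#
#   aws ecr create-repository \
#       --repository-name openiddict6-serverless \
#       --image-tag-mutability IMMUTABLE
#
# ECR repository URLs follow a fixed pattern:
ACCOUNT_ID=123456789012   # placeholder AWS account id
REGION=ap-southeast-1
ECR_REPOSITORY_URL="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/openiddict6-serverless"
echo "$ECR_REPOSITORY_URL"
```

The exported ECR_REPOSITORY_URL is reused below when logging in to ECR and tagging images.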

OpenIddict 6 with ASP.NET Lambda hosting

info

Target framework is still .NET 8, because we are going to utilize public.ecr.aws/lambda/dotnet:8 for our Lambda container image.

Let's create a new solution called OpenIddict.Serverless and add a new ASP.NET API project to it:

dotnet new sln -n OpenIddict.Serverless
dotnet new webapi -n OpenIddict.Serverless.AuroraV2 -f net8.0
dotnet sln add OpenIddict.Serverless.AuroraV2/OpenIddict.Serverless.AuroraV2.csproj

Next, we need to add the necessary packages to our project:

dotnet add OpenIddict.Serverless.AuroraV2/OpenIddict.Serverless.AuroraV2.csproj package OpenIddict.AspNetCore # OpenIddict ASP.NET Core integration
dotnet add OpenIddict.Serverless.AuroraV2/OpenIddict.Serverless.AuroraV2.csproj package OpenIddict.EntityFrameworkCore # OpenIddict EntityFrameworkCore integration
dotnet add OpenIddict.Serverless.AuroraV2/OpenIddict.Serverless.AuroraV2.csproj package Npgsql.EntityFrameworkCore.PostgreSQL # PostgreSQL provider
dotnet add OpenIddict.Serverless.AuroraV2/OpenIddict.Serverless.AuroraV2.csproj package Microsoft.EntityFrameworkCore.Design # to generate migrations
dotnet add OpenIddict.Serverless.AuroraV2/OpenIddict.Serverless.AuroraV2.csproj package Amazon.Lambda.AspNetCoreServer.Hosting # to host the API in AWS Lambda environment

OpenIddict 6 stores its applications, authorizations, scopes, and tokens via EF Core, so we need to add a DbContext to our project and register OpenIddict's entity configuration on it. The EF Core migrations will be generated against this DbContext.

tip

Although not required, we are using Guid as the primary key type for the OpenIddict entities.

https://documentation.openiddict.com/integrations/entity-framework-core#use-a-custom-primary-key-type

// OpenIddictDbContext.cs
using Microsoft.EntityFrameworkCore;

namespace OpenIddict.Serverless.AuroraV2;

public class OpenIddictDbContext : DbContext
{
    public OpenIddictDbContext()
    {
    }

    public OpenIddictDbContext(DbContextOptions<OpenIddictDbContext> options)
        : base(options)
    {
    }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        if (!optionsBuilder.IsConfigured)
        {
            optionsBuilder.UseNpgsql();
            optionsBuilder.UseOpenIddict<Guid>(); // https://documentation.openiddict.com/integrations/entity-framework-core#use-a-custom-primary-key-type
        }
    }
}
...

// appsettings.json
{
  "ConnectionStrings": {
    "OpenIddictDbContext": "Host=localhost;Username=openiddict;Password=password;Database=openiddict"
  }
  ...
}
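Later, the Lambda function will override this connection string through an environment variable rather than appsettings.json. As a quick sketch (with placeholder values), .NET's configuration system maps a double underscore in an environment variable name to the ":" section separator:

```shell
# .NET configuration translates "__" in environment variable names into the ":"
# separator, so this overrides ConnectionStrings:OpenIddictDbContext from appsettings.json.
DB_HOST="localhost"       # placeholder; in AWS this is the Aurora cluster endpoint
DB_PASSWORD="password"    # placeholder; in AWS this comes from a Pulumi secret
export ConnectionStrings__OpenIddictDbContext="Host=${DB_HOST};Username=openiddict;Password=${DB_PASSWORD};Database=openiddict"
echo "$ConnectionStrings__OpenIddictDbContext"
```

This is exactly how the Pulumi infrastructure code below passes the Aurora connection string to the Lambda function.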

...
// Program.cs
builder.Services.AddDbContext<OpenIddictDbContext>(o =>
{
    o.UseNpgsql(builder.Configuration.GetConnectionString("OpenIddictDbContext"));
    o.UseOpenIddict<Guid>(); // https://documentation.openiddict.com/integrations/entity-framework-core#use-a-custom-primary-key-type
});

Once we have the DbContext, we can generate the OpenIddict database migrations (the dotnet ef command requires the dotnet-ef tool, installable with dotnet tool install --global dotnet-ef):

dotnet ef migrations add AddOpenIddict6 --context OpenIddictDbContext

After that, the only thing left is to add the OpenIddict server configuration to our project.

For demo purposes we will use ephemeral signing and encryption keys, enable the client credentials flow, and disable the HTTPS requirement.

// Program.cs
builder.Services.AddOpenIddict()
    .AddCore(o =>
    {
        o.UseEntityFrameworkCore()
            .UseDbContext<OpenIddictDbContext>()
            .ReplaceDefaultEntities<Guid>(); // https://documentation.openiddict.com/integrations/entity-framework-core#use-a-custom-primary-key-type
    })
    .AddServer(o =>
    {
        o.SetTokenEndpointUris("connect/token");

        o.AllowClientCredentialsFlow();

        o.AddEphemeralEncryptionKey()
            .AddEphemeralSigningKey();

        o.UseAspNetCore()
            .DisableTransportSecurityRequirement() // disable the HTTPS requirement because we plan to front the Lambda with an HTTP API Gateway
            .EnableTokenEndpointPassthrough();
    });

Authorization controller

The authorization controller implementation below is copied from the OpenIddict documentation to simplify the setup.

// AuthorizationController.cs
using System.Security.Claims;
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Mvc;
using Microsoft.IdentityModel.Tokens;
using OpenIddict.Abstractions;
using OpenIddict.Server.AspNetCore;

namespace OpenIddict.Serverless.AuroraV2.Controllers;

public class AuthorizationController : Controller
{
    private readonly IOpenIddictApplicationManager _applicationManager;

    public AuthorizationController(IOpenIddictApplicationManager applicationManager)
        => _applicationManager = applicationManager;

    [HttpPost("~/connect/token"), Produces("application/json")]
    public async Task<IActionResult> Exchange()
    {
        var request = HttpContext.GetOpenIddictServerRequest() ??
            throw new InvalidOperationException("The OpenID Connect request cannot be retrieved.");
        if (request.IsClientCredentialsGrantType())
        {
            // Note: the client credentials are automatically validated by OpenIddict:
            // if client_id or client_secret are invalid, this action won't be invoked.

            var application = await _applicationManager.FindByClientIdAsync(request.ClientId) ??
                throw new InvalidOperationException("The application cannot be found.");

            // Create a new ClaimsIdentity containing the claims that
            // will be used to create an id_token, a token or a code.
            var identity = new ClaimsIdentity(TokenValidationParameters.DefaultAuthenticationType, OpenIddictConstants.Claims.Name, OpenIddictConstants.Claims.Role);

            // Use the client_id as the subject identifier.
            identity.SetClaim(OpenIddictConstants.Claims.Subject, await _applicationManager.GetClientIdAsync(application));
            identity.SetClaim(OpenIddictConstants.Claims.Name, await _applicationManager.GetDisplayNameAsync(application));

            identity.SetDestinations(static claim => claim.Type switch
            {
                // Allow the "name" claim to be stored in both the access and identity tokens
                // when the "profile" scope was granted (by calling principal.SetScopes(...)).
                OpenIddictConstants.Claims.Name when claim.Subject.HasScope(OpenIddictConstants.Permissions.Scopes.Profile)
                    => [OpenIddictConstants.Destinations.AccessToken, OpenIddictConstants.Destinations.IdentityToken],

                // Otherwise, only store the claim in the access tokens.
                _ => [OpenIddictConstants.Destinations.AccessToken]
            });

            return SignIn(new ClaimsPrincipal(identity), OpenIddictServerAspNetCoreDefaults.AuthenticationScheme);
        }

        throw new NotImplementedException("The specified grant is not implemented.");
    }
}

Publish OpenIddict 6 Lambda container image to ECR

warning

https://docs.aws.amazon.com/lambda/latest/dg/images-create.html#images-reqs

Lambda provides multi-architecture base images. However, the image you build for your function must target only one of the architectures. Lambda does not support functions that use multi-architecture container images.

Below is a Dockerfile that lets us build an image for either the arm64 or x86_64 architecture.

# Dockerfile
# Note: the base stage must target the *target* platform (no --platform=$BUILDPLATFORM here),
# otherwise cross-builds would produce a final image for the wrong architecture.
FROM public.ecr.aws/lambda/dotnet:8 AS base

# The SDK stage runs on the build host's platform and cross-compiles via --arch $TARGETARCH.
FROM --platform=$BUILDPLATFORM mcr.microsoft.com/dotnet/sdk:8.0 AS build
ARG TARGETARCH
WORKDIR /src
COPY ["OpenIddict.Serverless.AuroraV2.csproj", "./"]
RUN dotnet restore --arch $TARGETARCH "OpenIddict.Serverless.AuroraV2.csproj"

WORKDIR "/src/OpenIddict.Serverless.AuroraV2"
COPY . .
RUN dotnet build --arch $TARGETARCH "OpenIddict.Serverless.AuroraV2.csproj" --configuration Release --output /app/build

FROM build AS publish
RUN dotnet publish "OpenIddict.Serverless.AuroraV2.csproj" \
    --configuration Release \
    --arch $TARGETARCH \
    --output /app/publish \
    -p:PublishReadyToRun=true

FROM base AS final
WORKDIR /var/task
COPY --from=publish /app/publish .
CMD ["OpenIddict.Serverless.AuroraV2"]

Log in to ECR before building and pushing the image.

# login to ECR
aws ecr get-login-password --region ap-southeast-1 | docker login --username AWS --password-stdin $ECR_REPOSITORY_URL

Make sure to replace $ECR_REPOSITORY_URL with the actual URL of your ECR repository.

# build and push the amd64 image
docker build --platform linux/amd64 -t $ECR_REPOSITORY_URL:1.0.0-amd64 .
docker push $ECR_REPOSITORY_URL:1.0.0-amd64

# build and push the arm64 image (cross-building on an x86_64 host requires buildx with QEMU emulation)
docker build --platform linux/arm64 -t $ECR_REPOSITORY_URL:1.0.0-arm64 .
docker push $ECR_REPOSITORY_URL:1.0.0-arm64

Infrastructure setup with Pulumi

danger

Runtime.InvalidEntrypoint
Error: fork/exec /lambda-entrypoint.sh: exec format error

In this example we are using the arm64 architecture for the Lambda function.

Make sure the Lambda architecture matches the architecture of the container image you are using, otherwise you will get the Runtime.InvalidEntrypoint error shown above. You can check an image's architecture with docker image inspect --format '{{.Architecture}}' <image>.

Let's create the remaining AWS infrastructure using Pulumi.

This setup will create a VPC with public, private and isolated subnets, security groups for both the database and Lambda function, and an Aurora Serverless v2 cluster with scaling to zero configuration.

The Lambda function will be configured to use the VPC and security groups, and an HTTP API Gateway will be created to expose the OAuth2/OIDC server.

// index.ts
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";
import * as awsx from "@pulumi/awsx";
import { ManagedPolicy } from "@pulumi/aws/iam";

const config = new pulumi.Config();
const dbClusterMasterPassword = config.requireSecret("db-cluster-master-password");
const prefix = "openiddict6-serverless";

// created previously
const repository = new awsx.ecr.Repository(`${prefix}-ecr-repository`, {
  name: `${prefix}`,
  imageTagMutability: 'IMMUTABLE',
});

const vpc = new awsx.ec2.Vpc(`${prefix}-vpc`, {
  cidrBlock: "10.0.0.0/16",
  numberOfAvailabilityZones: 2,
  natGateways: {
    strategy: 'Single',
  },
  enableDnsHostnames: true,
  enableDnsSupport: true,
  subnetStrategy: 'Auto',
  subnetSpecs: [
    {
      type: awsx.ec2.SubnetType.Private,
      name: "private",
    },
    {
      type: awsx.ec2.SubnetType.Public,
      name: "public",
    },
    {
      type: awsx.ec2.SubnetType.Isolated,
      name: "isolated",
    },
  ],
  tags: { Name: `${prefix}-vpc` },
});

const dbSecurityGroup = new aws.ec2.SecurityGroup(`${prefix}-db-sg`, {
  name: `${prefix}-db-sg`,
  vpcId: vpc.vpcId,
  description: "Security group for Aurora database",
  ingress: [
    {
      protocol: "tcp",
      fromPort: 5432,
      toPort: 5432,
      cidrBlocks: [vpc.vpc.cidrBlock],
    },
  ],
  tags: { Name: `${prefix}-db-sg` },
});

const dbSubnetGroup = new aws.rds.SubnetGroup(`${prefix}-db-subnet-group`, {
  name: `${prefix}-db-subnet-group`,
  subnetIds: vpc.isolatedSubnetIds,
  tags: { Name: `${prefix}-db-subnet-group` },
});

const dbCluster = new aws.rds.Cluster(`${prefix}-aurora-v2`, {
  engine: "aurora-postgresql",
  engineVersion: "16.4",
  databaseName: "openiddict",
  masterUsername: "openiddict",
  masterPassword: dbClusterMasterPassword,
  serverlessv2ScalingConfiguration: {
    minCapacity: 0, // can scale to zero
    maxCapacity: 1, // cap at 1 ACU
  },
  skipFinalSnapshot: true,
  vpcSecurityGroupIds: [dbSecurityGroup.id],
  dbSubnetGroupName: dbSubnetGroup.name,
  copyTagsToSnapshot: true,
  storageEncrypted: true,
  allowMajorVersionUpgrade: false,
});

const writerInstance = new aws.rds.ClusterInstance(`${prefix}-aurora-v2-writer`, {
  clusterIdentifier: dbCluster.id,
  instanceClass: 'db.serverless',
  engine: "aurora-postgresql",
  engineVersion: "16.4",
  tags: {
    Name: `${prefix}-aurora-v2-writer`,
    Role: "writer",
  },
});

const readerInstance = new aws.rds.ClusterInstance(`${prefix}-aurora-v2-reader`, {
  clusterIdentifier: dbCluster.id,
  instanceClass: 'db.serverless',
  engine: "aurora-postgresql",
  engineVersion: "16.4",
  tags: {
    Name: `${prefix}-aurora-v2-reader`,
    Role: "reader",
  },
});

const lambdaSecurityGroup = new aws.ec2.SecurityGroup(`${prefix}-lambda-sg`, {
  name: `${prefix}-lambda-sg`,
  vpcId: vpc.vpcId,
  description: "Security group for Lambda function",
  egress: [
    {
      protocol: "-1",
      fromPort: 0,
      toPort: 0,
      cidrBlocks: ["0.0.0.0/0"],
    },
  ],
  tags: { Name: `${prefix}-lambda` },
});

const lambdaRole = new aws.iam.Role(`${prefix}-lambda-role`, {
  name: `${prefix}-lambda-role`,
  managedPolicyArns: [
    ManagedPolicy.AWSLambdaBasicExecutionRole,
  ],
  assumeRolePolicy: JSON.stringify({
    Version: "2012-10-17",
    Statement: [{
      Action: "sts:AssumeRole",
      Effect: "Allow",
      Principal: {
        Service: "lambda.amazonaws.com",
      },
    }],
  }),
});

const lambdaVpcPolicy = new aws.iam.RolePolicy(`${prefix}-lambda-role-policy`, {
  name: `${prefix}-lambda-role-policy`,
  role: lambdaRole.id,
  policy: JSON.stringify({
    Version: "2012-10-17",
    Statement: [{
      Effect: "Allow",
      Action: [
        "ec2:CreateNetworkInterface",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DeleteNetworkInterface",
        "ec2:DescribeInstances",
        "ec2:AttachNetworkInterface",
      ],
      Resource: "*",
    }],
  }),
});

const lambdaFunction = new aws.lambda.Function(`${prefix}-lambda`, {
  packageType: "Image",
  imageUri: pulumi.interpolate`${repository.url}:1.0.0-arm64`,
  architectures: ['arm64'],

  // or, for the x86_64 image:
  // imageUri: pulumi.interpolate`${repository.url}:1.0.0-amd64`,
  // architectures: ['x86_64'],

  role: lambdaRole.arn,
  timeout: 10,
  memorySize: 1024,
  environment: {
    variables: {
      ConnectionStrings__OpenIddictDbContext: pulumi.interpolate`Host=${dbCluster.endpoint};Username=openiddict;Password=${dbClusterMasterPassword};Database=openiddict`,
    },
  },
  vpcConfig: {
    subnetIds: vpc.privateSubnetIds,
    securityGroupIds: [lambdaSecurityGroup.id],
  },
});

const api = new aws.apigatewayv2.Api(`${prefix}-http-api`, {
  protocolType: "HTTP",
  target: lambdaFunction.arn,
});

const lambdaPermission = new aws.lambda.Permission(`${prefix}-api-lambda-permission`, {
  action: "lambda:InvokeFunction",
  function: lambdaFunction.name,
  principal: "apigateway.amazonaws.com",
  sourceArn: pulumi.interpolate`${api.executionArn}/*/*`,
});

const integration = new aws.apigatewayv2.Integration(`${prefix}-api-lambda-integration`, {
  apiId: api.id,
  integrationType: "AWS_PROXY",
  integrationUri: lambdaFunction.arn,
  integrationMethod: "ANY",
  payloadFormatVersion: "2.0",
});

export const dbEndpoint = dbCluster.endpoint;
export const apiEndpoint = api.apiEndpoint;

After running pulumi up, the apiEndpoint stack output (also retrievable later with pulumi stack output apiEndpoint) gives you the URL for testing the OAuth2/OIDC server.

Testing

Database migrations

warning

Since the Aurora Serverless v2 cluster sits in isolated subnets and cannot be reached from outside the VPC, as a workaround we will add an API endpoint that applies the database migrations and seeds a demo client.

This is done for demo purposes only.

// DbController.cs
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;
using OpenIddict.Abstractions;

namespace OpenIddict.Serverless.AuroraV2.Controllers;

public class DbController
{
    private readonly IOpenIddictApplicationManager _applicationManager;
    private readonly OpenIddictDbContext _dbContext;

    public DbController(IOpenIddictApplicationManager applicationManager, OpenIddictDbContext dbContext)
    {
        _applicationManager = applicationManager;
        _dbContext = dbContext;
    }

    /// <summary>
    /// For demo purposes only.
    /// This API needs to be called before trying to obtain an access token with OAuth2 flows.
    /// It ensures the database is created, applies pending migrations, and seeds the demo client.
    /// </summary>
    [HttpPost("/seed-database"), Produces("application/json")]
    public async Task<IActionResult> SeedDatabaseAsync(CancellationToken ct)
    {
        // MigrateAsync creates the database if needed and applies pending EF Core migrations
        // (EnsureCreatedAsync would bypass the migrations pipeline entirely).
        await _dbContext.Database.MigrateAsync(ct);
        if (await _applicationManager.FindByClientIdAsync("serverless-demo", ct) is null)
        {
            await _applicationManager.CreateAsync(new OpenIddictApplicationDescriptor
            {
                ClientId = "serverless-demo",
                ClientSecret = "serverless-demo-secret",
                Permissions =
                {
                    OpenIddictConstants.Permissions.Endpoints.Token,
                    OpenIddictConstants.Permissions.GrantTypes.ClientCredentials
                }
            }, ct);
        }

        return new CreatedResult("/seed-database", new { clientId = "serverless-demo" });
    }
}

Testing Lambda function locally

The codebase provides a docker-compose.yml file for testing the Lambda function locally. Since the Lambda function is configured to use an HTTP API Gateway trigger, we need to craft a request payload that matches the API Gateway (payload format 2.0) request shape.

docker-compose up --build -d --wait;
echo "\n----------------------------------------"

curl -X POST "http://0.0.0.0:5085/2015-03-31/functions/function/invocations" \
-H 'Content-Type: application/json' \
-d '{"version":"2.0","routeKey":"$default","rawPath":"/seed-database","headers":{},"requestContext":{"accountId":"123456789012","apiId":"api-id","domainName":"id.execute-api.us-east-1.amazonaws.com","domainPrefix":"id","http":{"method":"POST","path":"/seed-database","protocol":"HTTP/1.1","sourceIp":"192.168.0.1/32","userAgent":"agent"},"requestId":"id","routeKey":"$default","stage":"$default","time":"12/Mar/2020:19:03:58 +0000","timeEpoch":1583348638390},"body":"","isBase64Encoded":false}';
echo "\n----------------------------------------"

curl -X POST "http://0.0.0.0:5085/2015-03-31/functions/function/invocations" -H 'Content-Type: application/json' -d '{"version":"2.0","routeKey":"$default","rawPath":"/connect/token","headers":{"Content-Type":"application/x-www-form-urlencoded","Accept":"application/json","Host":"localhost:5085"},"requestContext":{"http":{"method":"POST","path":"/connect/token"}},"body":"grant_type=client_credentials&client_id=serverless-demo&client_secret=serverless-demo-secret","isBase64Encoded":false}';
echo "\n----------------------------------------"

docker-compose down
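Hand-writing these JSON payloads gets tedious; they can be generated with a small helper (a hypothetical rie_payload function, not part of the codebase — only the fields used by the requests above are filled in):

```shell
# Build a minimal API Gateway HTTP API (payload format 2.0) event for the
# Lambda runtime interface emulator. Only method, path, and body vary.
rie_payload() {
  local method="$1" path="$2" body="$3"
  printf '{"version":"2.0","routeKey":"$default","rawPath":"%s","headers":{"Content-Type":"application/x-www-form-urlencoded"},"requestContext":{"http":{"method":"%s","path":"%s"}},"body":"%s","isBase64Encoded":false}' \
    "$path" "$method" "$path" "$body"
}

rie_payload POST /connect/token 'grant_type=client_credentials&client_id=serverless-demo&client_secret=serverless-demo-secret'
```

The output can be piped straight into the local invocation endpoint, e.g. rie_payload POST /seed-database '' | curl -s -X POST "http://0.0.0.0:5085/2015-03-31/functions/function/invocations" -H 'Content-Type: application/json' -d @-.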

Testing with HTTP API Gateway

Make sure to replace <api-id> and <region> with your actual API Gateway ID and region.

curl -X POST "https://<api-id>.execute-api.<region>.amazonaws.com/seed-database"

# response example
{
"clientId": "serverless-demo"
}


curl -X POST "https://<api-id>.execute-api.<region>.amazonaws.com/connect/token" -H "Content-Type: application/x-www-form-urlencoded" -d "grant_type=client_credentials&client_id=serverless-demo&client_secret=serverless-demo-secret"

# response example
{
"access_token": "eyJhbGciOiJSU0EtT0FFUCIsImVuYyI6IkEyNTZDQkMtSFM1MTIiLCJraWQiOiJJOE9QU09XSFNHVlZTNlRUVjhOS1FLU1ZIVUctVy1XTVhfVTUxR0QtIiwidHlwIjoiYXQrand0IiwiY3R5IjoiSldUIn0.P0aL3MBvyJGaVqy7XXOd1EFbzM8GzT3s695GKUsYZJKkSGALWtyByAB4AO1diIMlC8ZNo2n78PXJkOYToVDgY55wm9f7OekmDiEw6WINvtWldCc8XmNYcBFGSZP-4NLaViKRz1Dl5VI8MpgJmUpapSOIiiLdrLkHFX3pTp-lGSHVmYWl7KJou-DlCUJtPkF12WgU74_BLhNbhNbFDrne4-8zn014VSL53uUY9uM-YDqGDxKIG-_ujVbWePyPmjhV97S45hBHyEqj1eAe8tqkmQYrriIRIp-jeKNogDaW21V-QEaU6GAqsBFzWwMRRN9LEFbGBwKJdFngC1sVmgXKog.Sm7zoG6VP0kgRlgeQJo3kw.IW2m0tvDhVfp_55ba8EicfiT2PYMOobArJWqgSDabuqFC9jBkMJzExZgLnqbWWQd4KX6iCbFNB1COcCm64jdYSMmSY-B9dvDE8p-8m-_CBtuuwS7EmAyfkN42WDeoCtcg9N1Fl6Uvt4RemZ-PaeqNSIkAnLiL0PsynEXZjfTtdHGRAsfDRe-qAwJ_HhbnTFY5s3iZv15Lg25QBAINHL_lWUkDcIPQM_G2oQ14vanB_efGMIqOyt9HmUZ990WbQe4fGTEjr6Wt6rOIzpJewoYNONqRjI2h5YgD1csxYbu9La7GkVR_0xvgRum2PDRNaF0j41t-Gp1CHqFsfqboThAC298jRmB3Gd-Gt-RM9bz4xcpfEpdHA5q4pE4xAOhkQsRSuupVr95z0nddAevcSV3JQsRoymmEr59-OmWOoy5JwFBp72CCXUkTt3ok8rZ7iu0Xz9bgNI3OJc83W878vMO5B48UHE2Cp3ZgJiL7FLPALQUPVf0uRDz0yYbBxs0H8AkmoCZULGIw1fH4eiTV_CMTBO8x6pgpLX2hsnDvxj8WFHn_DxncnqhTm6aLFQycjyel4OuUhruCUZO8z7YIyqR1Mn1T7QzyOQgu5QVXTNWi3rCOwELRZFQvNq6xOiOuv-wa5IdPv93Te96fi-SIjQ4tnZacd4VlqcXH_cDXgaHY3XNtUc4GPziE6WnuwxF0xAaAfwybiMLMa3Irg20vnwKMs80fVfb7KggHqcWYv-feeNI8OQRf0ooNgqtX0mKcVPPFYk51ZupGcqeTMuF7Or5u4OJ5kUGUS9bvVAuqCXLxwXUJE1ORkjy7hvpQVxPht24eDWafDnpaAP2KORmoDo8TLBg8h-VUYO1gijDqDGTwZ92u73nWYV9movingHQTSNEUhUeDGDCAhmJYDm6dbfStSQ0vM1TCUIojCR6t6D-MDlVKVtL4siMbAUsWTu92Ap99aZW_4XCvV7PyrFpW4qFnvkVu5HBTD2-agvka12cZrNxNA9VJo-B5LR_IqmRazmZQJjRbmdYp5MGcrn7HQCrO4bspaTsBhMxAiiLtqS5nz3kuOhNJLIQOt-AV7dU-aoxeIOFAhqSSsr_OvYkomUazw.73ztJsNH15MKq2NiYusuSCdJwO0M1YKkNabHD7UlKEk",
"token_type": "Bearer",
"expires_in": 3600
}
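Note that the access_token above is an encrypted JWT (a JWE): its payload cannot be decoded client-side, but the JOSE header can be inspected with a small base64url helper (a hypothetical jose_header function, shown here as a sketch):

```shell
# Decode the JOSE header (first dot-separated segment) of a JWT/JWE.
jose_header() {
  seg="${1%%.*}"                       # first dot-separated segment
  pad=$(( (4 - ${#seg} % 4) % 4 ))     # base64url segments may lack '=' padding
  eq=""
  while [ $pad -gt 0 ]; do eq="${eq}="; pad=$((pad - 1)); done
  # translate base64url alphabet (-_ -> +/) and decode
  printf '%s%s' "$seg" "$eq" | tr '_-' '/+' | base64 -d
}

jose_header "eyJhbGciOiJSU0EtT0FFUCJ9.payload.tag"   # prints {"alg":"RSA-OAEP"}
```

Running it against the token above shows alg RSA-OAEP and typ at+jwt, confirming the access token is content-encrypted rather than a plain signed JWT.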

Conclusion

By combining AWS Aurora Serverless v2's scale-to-zero feature with OpenIddict 6's robust OAuth2/OIDC implementation, we've created a serverless OAuth2 server that only incurs costs when it is actually in use.

This setup gives us full control over the infrastructure and can easily be converted back to a traditional setup with ECS or EKS and RDS for PostgreSQL, or even to other cloud or on-premises solutions.

tip

Full source code is available on GitHub.