This code bundle builds a serverless ingress solution that enables Amazon VPC Lattice Services to be reached by consumers outside of the Amazon Virtual Private Cloud (VPC), both from trusted (on-premises) and non-trusted (external) locations.
This solution is deployed in two parts:
Base Solution
The base solution copies the code in this repo into your own AWS account and enables you to iterate on it - your changes, as you make them, are saved to your own Git-compliant repo, from which you can orchestrate deployment. The stack template sets up an Amazon Virtual Private Cloud across three Availability Zones, with public and private subnets in each, as well as supporting infrastructure such as AWS PrivateLink VPC endpoints (interface and gateway) for reaching AWS services privately (such as Amazon S3, Amazon CloudWatch Logs and the Amazon ECR docker image repositories), route tables and an internet gateway. The stack also creates the infrastructure needed to iterate on your code releases: an AWS CodeCommit repo for holding the code, an Amazon Elastic Container Registry (ECR) repository for storing container images, an AWS CodeBuild environment for building containers that run an open-source version of NGINX, and an AWS CodePipeline for orchestrating the solution build and delivery. Once deployed, your pipeline is ready for release.
The following depicts the base solution:
ECS Solution
The pipeline deploys the following template into your AWS account using CloudFormation. The stack template sets up 'external' access by deploying an internet-facing Amazon Network Load Balancer into the three public subnets, one per Availability Zone. It also sets up internal (hybrid) access using an internal Network Load Balancer that can only be reached from within the Amazon Virtual Private Cloud or via hybrid connections such as AWS Site-to-Site VPN or AWS Direct Connect, along with four target groups that pass traffic to the back-end compute. The stack template sets up an Amazon Elastic Container Service (ECS) cluster, an ECS task definition and an ECS service that uses AWS Fargate as the capacity provider. As Fargate tasks are deployed, they are mapped to the external and internal load balancer target groups, which are bound to two TCP listeners configured for ports 80 and 443. The ECS tasks therefore serve both internal and external traffic.
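Once the pipeline has deployed this stack, one way to sanity-check the wiring is to confirm that the ECS service is registered against the load balancer target groups and that the Fargate tasks are healthy targets. This is a minimal sketch only - the cluster and service names below are illustrative assumptions, not values defined by this template:

# Names and ARN are placeholders - look up the values created by your stack.
aws ecs describe-services \
  --cluster lattice-ingress-cluster \
  --services lattice-ingress-service \
  --query 'services[0].loadBalancers'

# Confirm the Fargate tasks are healthy targets behind one of the listeners.
aws elbv2 describe-target-health \
  --target-group-arn <one-of-the-four-target-group-arns>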
The following depicts the complete solution:
Deployment of this solution is straightforward; you must:
- Deploy the baseline stack using the stack template in any AWS Region where you are publishing Amazon VPC Lattice Services. More succinctly, you must deploy this stack as many times as you have distinct Amazon VPC Lattice Service Networks in a Region, since there is a 1:1 mapping between Service Networks and Amazon VPCs (a CLI sketch of all of these steps follows this list).
- After the baseline stack has been deployed, your CodePipeline will be waiting for you to release it. More accurately, you are required to 'enable a transition' from the source stage to the build stage. After you enable this transition, the pipeline will build the ECS infrastructure and deploy the load balancers and containers.
- Following this, you can associate the ingress VPC with the Amazon VPC Lattice Service Network of your choice. For the solution to work and to access your Lattice Services through the ingress, you will need to configure DNS resolution:
- A Hosted Zone should translate the service's domain name into the domain name of the NLB located in the Ingress VPC (CNAME record).
  - If the NLB is public, you will need to create a Route 53 Public Hosted Zone.
  - If the NLB is private and the consumer application is located in your on-premises environment, you will need to create a Route 53 Private Hosted Zone and associate it with a VPC to which you can forward your on-premises DNS requests - either using a Route 53 Resolver inbound endpoint or your own custom hybrid DNS solution.
  - If the NLB is private and the consumer application is located in another AWS Region, you will need to create a Route 53 Private Hosted Zone and associate it with the VPC where that consumer application is located.
- A Private Hosted Zone should translate the service's domain name into the VPC Lattice service-generated domain name. This Private Hosted Zone needs to be associated with the Ingress VPC.
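The following is a minimal sketch of what these steps might look like with the AWS CLI. The template file, stack, pipeline and stage names, the Service Network and VPC identifiers, the hosted zone ID and the domain names are all illustrative assumptions - substitute the values from your own deployment:

# 1. Deploy the baseline stack into a Region where you publish Lattice Services.
aws cloudformation deploy \
  --template-file baseline.yaml \
  --stack-name lattice-ingress-base \
  --capabilities CAPABILITY_IAM

# 2. Release the pipeline by enabling the inbound transition on the build stage.
aws codepipeline enable-stage-transition \
  --pipeline-name lattice-ingress-pipeline \
  --stage-name Build \
  --transition-type Inbound

# 3. Associate the ingress VPC with the target VPC Lattice Service Network.
aws vpc-lattice create-service-network-vpc-association \
  --service-network-identifier sn-0123456789abcdef0 \
  --vpc-identifier vpc-0123456789abcdef0

# 4. Point the service's domain name at the NLB with a CNAME record.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789ABCDEFGHIJ \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "myservice.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "my-nlb-0123456789abcdef.elb.us-west-2.amazonaws.com"}]
      }
    }]
  }'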
You can find a CloudFormation example that deploys a Service Network and a Service for you to test the solution. The example also creates the CNAME records explained above, but it does not create any Route 53 Hosted Zones.
Once both parts of the solution have been deployed, you should be able to perform a simple curl against your Network Load Balancer's public DNS name, or against any DNS alias records you may have created in front of it. If you have enabled authorisation on your VPC Lattice Service or Service Network, then you will need to sign your requests to the endpoint in the same Region that you deployed the stack in; the following example demonstrates how to do this using curl's --aws-sigv4 switch:
curl https://yourvpclatticeservice.name \
--aws-sigv4 "aws:amz:%region%:vpc-lattice-svcs" \
--user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" \
--header "x-amz-security-token:$AWS_SESSION_TOKEN" \
--header "x-amz-content-sha256:UNSIGNED-PAYLOAD"
A level of performance testing was performed against this solution. The specifics of the testing were as follows:
- Region tested: us-west-2
- The Amazon VPC Lattice Service published was AWS Lambda
- This was a simple Lambda function with its concurrency limit elevated to 3000 (from the 1000 base)
- External access was via a three-zone AWS Network Load Balancer, using DNS round-robin across requests
- The NLB was not configured for cross-zone load balancing (in tests, cross-zone balancing performed less well)
- Three zonal AWS Fargate tasks were bound to the Network Load Balancer
- Each task had 2048 CPU units and 4096 MB RAM
The testing harness used came from an AWS quick start solution that can be found here; additionally, the template can be found in this repo, here.
The following results show the harness performance, NLB performance, VPC Lattice performance and Lambda performance given 5000 remote users generating ~3000 requests per second, with sustained access for 20 minutes and a ramp-up time of 5 minutes.
Harness Performance
ECS Performance
Lambda Performance
VPC Lattice Performance
Clean-up of this solution is straightforward. First, remove the stack that was created by the CodePipeline - it can be identified in the CloudFormation console by the name %basestackname%-%accountid%-ecs. Once this has been removed, you can remove the parent stack that built the base stack.
NOTE: The ECR repo and the S3 bucket will remain and should be removed manually.
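A hedged sketch of the clean-up with the AWS CLI follows; the repository and bucket names are placeholders you must look up in your account:

# Remove the ECS stack created by the pipeline, then the parent stack.
aws cloudformation delete-stack --stack-name %basestackname%-%accountid%-ecs
aws cloudformation wait stack-delete-complete --stack-name %basestackname%-%accountid%-ecs
aws cloudformation delete-stack --stack-name %basestackname%

# The ECR repo and the S3 bucket are retained - empty and delete them manually.
aws ecr delete-repository --repository-name <your-ecr-repo> --force
aws s3 rb s3://<your-artifact-bucket> --force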
See CONTRIBUTING for more information.
This library is licensed under the MIT-0 License. See the LICENSE file.