AWS Workshop 04
Before you begin
We'll be using your AWS Sandbox accounts for this workshop.
- Make sure you have an `sso-session` configuration section in your `~/.aws/config`:
[sso-session j1]
sso_start_url = https://j1.awsapps.com/start
sso_region = ap-southeast-2
sso_registration_scopes = sso:account:access
If you don't, you'll need to configure your `sso-session` section with the `aws configure sso-session` wizard:
aws configure sso-session
SSO session name: j1
SSO start URL [None]: https://j1.awsapps.com/start
SSO region [None]: ap-southeast-2
SSO registration scopes [None]: sso:account:access
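To confirm the wizard wrote the section, you can grep your config for it (the `-F` flag makes grep treat the bracketed heading as a literal string rather than a character class):
grep -n -A3 -F "[sso-session j1]" ~/.aws/config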
- Next, we'll create an SSO-linked developer profile for your sandbox account:
aws configure sso
SSO session name (Recommended): j1
When prompted:
- Choose your sandbox account from the list
- Choose the SandboxDeveloper role
- Set the CLI default client Region to ap-southeast-2
- Set the CLI default output format to json
- Set the CLI profile name to pf-sandbox-developer
- Update the `cli_pager` setting in the `pf-sandbox-developer` profile:
aws configure set cli_pager "" --profile pf-sandbox-developer
- Verify that you have a `[profile pf-sandbox-developer]` section in your `~/.aws/config` file:
grep -n -A6 -F "[profile pf-sandbox-developer]" ~/.aws/config
[profile pf-sandbox-developer]
sso_session = j1
sso_account_id = YOUR_SANDBOX_ACCOUNT_ID
sso_role_name = SandboxDeveloper
region = ap-southeast-2
output = json
cli_pager =
- Add an alias to your `~/.bash_aliases` to log you in to your AWS sandbox account using your `pf-sandbox-developer` profile:
echo "alias pfsbd='aws sso login --profile pf-sandbox-developer'" >> ~/.bash_aliases
source ~/.bash_aliases
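If you're ever unsure whether your cached SSO token is still valid, a quick check is to call `sts get-caller-identity` with the profile; when the token has expired, the command fails and tells you to log in again:
aws sts get-caller-identity --profile pf-sandbox-developer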
- If you've previously completed steps 1 and 2, refresh your `pf-sandbox-developer` and `pf-sandbox-admin` tokens:
pfsbd
Attempting to automatically open the SSO authorization page in your default browser.
If the browser does not open or you wish to use a different device to authorize this request, open the following URL:
https://device.sso.ap-southeast-2.amazonaws.com/
Then enter the code:
ABCD-1234
Successfully logged into Start URL: https://j1.awsapps.com/start
- Verify that you can access resources in your AWS sandbox account using the `pf-sandbox-developer` profile:
aws ec2 describe-vpcs --profile pf-sandbox-developer --query 'Vpcs[0].VpcId'
"vpc-xxxxxxxxxxxxxxx"
We'll be using the `pf-sandbox-developer` profile for the remainder of the workshop.
- Install `jq`:
sudo apt-get install jq
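As a quick sanity check that `jq` is installed and parsing JSON (this example just echoes a field back):
echo '{"workshop": "aws-04"}' | jq -r .workshop
aws-04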
- Repeat steps 2-7 and set up a `pf-sandbox-admin` profile for your Product Factory Sandbox Administrator role.
Deploying a multi-layered application to the cloud
In this workshop we will be building and deploying our Todo application to the cloud using Amazon ECS, AWS Fargate, CloudFront and CloudFormation.
The structure of our application is as follows:
- Front-end application: This is the Bit-by-Bit web client
- Back-end application: This is the Todo HTTP API web server
- Database: This is the MongoDB instance that the Back-end application uses to persist client resources
AWS Fargate can be used in conjunction with Amazon ECS to run containers. AWS Fargate removes the need to manage servers or clusters of EC2 instances, and handles the scaling, provisioning and configuration of the underlying virtual machines.
When running our application using AWS Fargate, we need to:
- Package each of our applications into containers.
- Specify the OS, CPU and memory requirements
- Define networking and IAM policies
- Launch each application as a Fargate task.
Create Docker images
Let's create a Docker image for our Bit-by-Bit web client.
- Add a `.dockerignore` file to the root of our project. We will add the files that we do not want copied into the Docker image:
node_modules
build
- Next, we need to create a Dockerfile. Add a file called `Dockerfile` to the root of your project and add the following content:
# ==== CONFIGURE =====
# Use a Node 18 base image
FROM node:18-alpine
# Set the working directory to /app inside the container
WORKDIR /app
# Copy app files
COPY . .
# ==== BUILD =====
# Install dependencies (npm ci makes sure the exact versions in the lockfile get installed)
RUN npm ci
# Build the app
RUN npm run build
# ==== RUN =======
# Set the env to "production"
ENV NODE_ENV production
# Expose the port on which the app will be running (3000 is the default that `serve` uses)
EXPOSE 3000
# Start the app
CMD [ "npx", "serve", "build" ]
- We now need to verify that our Dockerfile is configured correctly. To do this, we will build a Docker image based on our Dockerfile and then run a container based on that image.
The `docker build` command will build an image based on our Dockerfile specification:
# Build the Docker image for the current folder
# and tag it with `bit-by-bit`
docker build . -t bit-by-bit
[+] Building 24.8s (14/14) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 655B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 59B 0.0s
=> [internal] load metadata for docker.io/library/nginx:1.24.0-alpine 1.1s
=> [internal] load metadata for docker.io/library/node:18-alpine 1.0s
=> [builder 1/5] FROM docker.io/library/node:18-alpine@sha256:47d97b93629d9461d64197773966cc49081cf4463b1b07de5a38b6bd5acfbe9d 0.0s
=> [production 1/3] FROM docker.io/library/nginx:1.24.0-alpine@sha256:4411951a38c9fb7e91813971944d390ca7a824413ea318260e2bb4da86b172a8 2.4s
=> => resolve docker.io/library/nginx:1.24.0-alpine@sha256:4411951a38c9fb7e91813971944d390ca7a824413ea318260e2bb4da86b172a8 0.0s
=> => sha256:bb5936af66b7c9c110b4c59af7d7595547529732b6691edc46c6798687f21d0a 627B / 627B 0.4s
=> => sha256:81234aecc257822106327b49f7dc428d0b5b2f67577ad4b745188672a5029f12 1.80MB / 1.80MB 1.2s
=> => sha256:f7c8639dc75ec36e3531791446072f73c7e6a786bbd6a3f03f6fb822e3576047 958B / 958B 0.7s
=> => sha256:4411951a38c9fb7e91813971944d390ca7a824413ea318260e2bb4da86b172a8 1.65kB / 1.65kB 0.0s
=> => sha256:26a1b20f194c2adab3be2d9e569ca837ddd31fe105d95dd51344e5bf829f1349 1.78kB / 1.78kB 0.0s
=> => sha256:1266a3a46e967b44a063058d2efa42097e2a55be287a62f9a6343b231f585f9d 16.32kB / 16.32kB 0.0s
=> => sha256:d0071b96733a839c3dc75a8a186bba103b8b455ce4980398e1b36e57d1850734 773B / 773B 1.0s
=> => sha256:b6b60f9051a8186751b35d6f9e0a792edcd47b24b1d4d53be0ee383e3e1ecaf1 1.40kB / 1.40kB 1.1s
=> => sha256:44286d6df86930644852ebe4f9a5909bdbe1547585fb5c12dc875dee18068db6 11.63MB / 11.63MB 1.9s
=> => extracting sha256:81234aecc257822106327b49f7dc428d0b5b2f67577ad4b745188672a5029f12 0.1s
=> => extracting sha256:bb5936af66b7c9c110b4c59af7d7595547529732b6691edc46c6798687f21d0a 0.0s
=> => extracting sha256:f7c8639dc75ec36e3531791446072f73c7e6a786bbd6a3f03f6fb822e3576047 0.0s
=> => extracting sha256:d0071b96733a839c3dc75a8a186bba103b8b455ce4980398e1b36e57d1850734 0.0s
=> => extracting sha256:b6b60f9051a8186751b35d6f9e0a792edcd47b24b1d4d53be0ee383e3e1ecaf1 0.0s
=> => extracting sha256:44286d6df86930644852ebe4f9a5909bdbe1547585fb5c12dc875dee18068db6 0.2s
=> [internal] load build context 0.0s
=> => transferring context: 4.74kB 0.0s
=> CACHED [builder 2/5] WORKDIR /app 0.0s
=> [builder 3/5] COPY . . 0.1s
=> [builder 4/5] RUN npm ci 14.3s
=> [builder 5/5] RUN npm run build 8.6s
=> [production 2/3] COPY --from=builder /app/build /usr/share/nginx/html 0.0s
=> [production 3/3] COPY nginx.conf /etc/nginx/conf.d/default.conf 0.0s
=> exporting to image 0.1s
=> => exporting layers 0.1s
=> => writing image sha256:c61adcf820f9de4fe56d9120f60cdf46b9479b4fc9384a6668e8c45182df5c7e 0.0s
=> => naming to docker.io/library/bit-by-bit
We should verify that the image was successfully created. We can list the images on our host using docker's `images` command:
docker images bit-by-bit
REPOSITORY TAG IMAGE ID CREATED SIZE
bit-by-bit latest 7f9acb2d3b97 5 minutes ago 695MB
Finally, we can start a container based on the `bit-by-bit` image we built in the step above. To access our running application within the container, we will need to map port 3000 inside the container to port 3000 on our host. To run a container we'll use docker's `run` command, passing the `--name` flag to set the name of the running container:
docker run --name bit-by-bit -p 3000:3000 -d bit-by-bit
d09a972060b7fb1bbc8f9c88118372bcd6f3ea613de43004a972746a889960e8
If you visit http://localhost:3000 you should now see the homepage for your Bit-by-Bit application.
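If you prefer to check from the terminal, `curl` should show a successful response from the container (assuming `curl` is installed on your host):
curl -I http://localhost:3000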
Let's see if we can't improve our current approach. Firstly, we don't need to use the `serve` utility to serve our app's content, as it's just a static site. When serving static content, a tool like `nginx` is better suited to the job. To use `nginx`, we need to write a configuration file. Create a file called `nginx.conf` in the root of your project and add the following content to it:
server {
listen 80;
location / {
root /usr/share/nginx/html/;
include /etc/nginx/mime.types;
try_files $uri $uri/ /index.html;
}
}
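Before baking this config into an image, you can ask nginx to validate it by mounting the file into a throwaway container. This is an optional sanity check, assuming Docker can pull the `nginx:1.24.0-alpine` image:
docker run --rm -v "$PWD/nginx.conf:/etc/nginx/conf.d/default.conf:ro" nginx:1.24.0-alpine nginx -t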
Now, let's separate out the layers required to build our application from those needed to host it. We can do this using a multi-stage build:
# ==== BUILD =====
FROM node:18-alpine as builder
# Set the working directory to /app inside the container
WORKDIR /app
# Copy app files
COPY . .
# Install dependencies (npm ci makes sure the exact versions in the lockfile gets installed)
RUN npm ci
# Build the app
RUN npm run build
# ==== RUN =====
# Bundle static assets with nginx
FROM nginx:1.24.0-alpine as production
ENV NODE_ENV production
# Copy built assets from `builder` image
COPY --from=builder /app/build /usr/share/nginx/html
# Add your nginx.conf
COPY nginx.conf /etc/nginx/conf.d/default.conf
# Expose port
EXPOSE 80
# Start nginx
CMD ["nginx", "-g", "daemon off;"]
With the updated `Dockerfile` in hand, we can now rebuild the `bit-by-bit` image:
docker build . -t bit-by-bit
[+] Building 2.4s (14/14) FINISHED
=> [internal] load .dockerignore 0.0s
=> => transferring context: 59B 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 655B 0.0s
=> [internal] load metadata for docker.io/library/nginx:1.24.0-alpine 2.4s
=> [internal] load metadata for docker.io/library/node:18-alpine 2.4s
=> [builder 1/5] FROM docker.io/library/node:18-alpine@sha256:47d97b93629d9461d64197773966cc49081cf4463b1b07de5a38b6bd5acfbe9d 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 4.58kB 0.0s
=> [production 1/3] FROM docker.io/library/nginx:1.24.0-alpine@sha256:4411951a38c9fb7e91813971944d390ca7a824413ea318260e2bb4da86b172a8 0.0s
=> CACHED [builder 2/5] WORKDIR /app 0.0s
=> CACHED [builder 3/5] COPY . . 0.0s
=> CACHED [builder 4/5] RUN npm ci 0.0s
=> CACHED [builder 5/5] RUN npm run build 0.0s
=> CACHED [production 2/3] COPY --from=builder /app/build /usr/share/nginx/html 0.0s
=> CACHED [production 3/3] COPY nginx.conf /etc/nginx/conf.d/default.conf 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:c61adcf820f9de4fe56d9120f60cdf46b9479b4fc9384a6668e8c45182df5c7e 0.0s
=> => naming to docker.io/library/bit-by-bit
Because we separated the builder and production stages in our `Dockerfile`, our final image will now only contain the contents of the `build` folder from the builder stage. Everything else from the builder stage will no longer be included in the final image.
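To see the effect, list the image again and compare its size to the 695MB node-based build from earlier. The nginx-based image should be dramatically smaller (exact numbers will vary by machine):
docker images bit-by-bit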
We need to stop our currently running container. We can use the `stop` command, supplying the name of the container we want to stop (`bit-by-bit`):
docker stop bit-by-bit
bit-by-bit
We should also remove the stopped `bit-by-bit` container using the `rm` command:
docker rm bit-by-bit
bit-by-bit
With the old container removed, we can now run our new container. This time we need to map port 80 inside the container to port 3000 on our host:
docker run --name bit-by-bit -p 3000:80 -d bit-by-bit
Visit http://localhost:3000 and you should see that your application is back up and running.
Let's clean up. Again, we need to stop our currently running container with the `stop` command, supplying the name of the container we want to stop (`bit-by-bit`):
docker stop bit-by-bit
bit-by-bit
We should also remove the stopped `bit-by-bit` container using the `rm` command:
docker rm bit-by-bit
bit-by-bit
Setting up the infrastructure
Let's create a CloudFormation template called `ecr.cfn.template.json` in the root of our project:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "ECR Repository for AWS Workshop 04",
"Resources": {}
}
The first thing we need to do is create an ECR repository to hold our `bit-by-bit` image.
The ECR repository will need to allow the ECS service principal to perform certain actions, outlined in the `RepositoryPolicyText` field in the resource definition below.
For this workshop, we'll also need to give our user permission (i.e. our currently authenticated user) to push and pull images to/from our ECR repository. In effect, this means adding ourselves as principals to the policy we've attached to the ECR repository. In the real world (what is real...), we'd build and push our images using an AWS service like CodeBuild - taking our user out of the picture entirely.
We've added a template parameter to capture the calling user's ARN. We then add the user as a principal, alongside the ECS service principal.
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "ECR Repository for AWS Workshop 04",
"Parameters": {
"CallingUserArn": {
"Description": "Calling User ARN",
"Type": "String"
}
},
"Resources": {
"WorkshopRepository": {
"Type": "AWS::ECR::Repository",
"Properties": {
"RepositoryName": "workshop-repository",
"RepositoryPolicyText": {
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowPushPull",
"Effect": "Allow",
"Principal": {
"Service": ["ecs.amazonaws.com"],
"AWS": [
{
"Ref": "CallingUserArn"
}
]
},
"Action": ["ecr:GetDownloadUrlForLayer", "ecr:BatchGetImage", "ecr:BatchCheckLayerAvailability", "ecr:PutImage", "ecr:InitiateLayerUpload", "ecr:UploadLayerPart", "ecr:CompleteLayerUpload"]
}
]
}
}
}
},
"Outputs": {
"RepositoryUri": {
"Value": {
"Fn::GetAtt": ["WorkshopRepository", "RepositoryUri"]
}
}
}
}
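Before deploying, you can optionally ask CloudFormation to check the template's syntax. Note that `validate-template` only validates the template's structure, not whether a deployment would succeed:
aws cloudformation validate-template \
--template-body file://ecr.cfn.template.json \
--profile pf-sandbox-developer;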
Let's deploy a stack using our template and pass it the ARN of our current user as a parameter. We'll use the AWS CLI's `sts get-caller-identity` command to get the current user's ARN, as shown below:
aws cloudformation deploy \
--template-file ecr.cfn.template.json \
--stack-name aws-workshop-04-ecr \
--profile pf-sandbox-developer \
--parameter-overrides CallingUserArn="$(aws sts get-caller-identity --profile pf-sandbox-developer | jq -r .Arn)";
Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - aws-workshop-04-ecr
Now that we've deployed our ECR repository, let's get the repository URI by inspecting the `RepositoryUri` output value from our stack. To do this we'll use the AWS CLI's `cloudformation describe-stacks` command:
WORKSHOP_REPOSITORY_URI=$(aws cloudformation describe-stacks \
--stack-name aws-workshop-04-ecr \
--profile pf-sandbox-developer \
--output text \
--query 'Stacks[0].Outputs[?OutputKey==`RepositoryUri`].OutputValue'); \
echo ${WORKSHOP_REPOSITORY_URI};
572058642352.dkr.ecr.ap-southeast-2.amazonaws.com/workshop-repository
Give the Docker CLI credentials for use against the ECR repository:
aws ecr get-login-password \
--profile pf-sandbox-developer | \
docker login --username AWS --password-stdin ${WORKSHOP_REPOSITORY_URI};
WARNING! Your password will be stored unencrypted in /home/tanbal/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Tag the `bit-by-bit` image:
docker tag bit-by-bit:latest ${WORKSHOP_REPOSITORY_URI};
Now we can push the `bit-by-bit` image to the ECR repository we created, using docker's `push` command:
docker push ${WORKSHOP_REPOSITORY_URI};
Using default tag: latest
The push refers to repository [541172580536.dkr.ecr.ap-southeast-2.amazonaws.com/workshop-repository]
87996a9baecd: Pushed
28a06e6eec18: Pushed
24e414d9930b: Pushed
e43c8ee2056a: Pushed
effffea84b55: Pushed
b42e5b255ff1: Pushed
d9c63c119ee8: Pushed
f33d2eed4384: Pushed
f1417ff83b31: Pushed
latest: digest: sha256:4d696e0c2475319f0b9e5e2108b14c4259d7b50d2b79b4203cc1ea05c812d0df size: 2198
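You can confirm the image landed in the repository with the `ecr list-images` command; the digest should match the one reported by `docker push` above:
aws ecr list-images \
--repository-name workshop-repository \
--profile pf-sandbox-developer;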
VPC
Next, let's create a CloudFormation template called `vpc.cfn.template.json` in the root of our project. It defines a VPC with a public subnet, a private subnet with internet access via a NAT gateway, and the routing between them:
{
"Parameters": {
"Tag": {
"Type": "String"
}
},
"Resources": {
"VPC": {
"Type": "AWS::EC2::VPC",
"Properties": {
"CidrBlock": "10.0.0.0/16",
"InstanceTenancy": "default",
"Tags": [
{
"Key": "Name",
"Value": {
"Ref": "Tag"
}
}
]
}
},
"InternetGateway": {
"Type": "AWS::EC2::InternetGateway"
},
"VPCGatewayAttachment": {
"Type": "AWS::EC2::VPCGatewayAttachment",
"Properties": {
"VpcId": {
"Ref": "VPC"
},
"InternetGatewayId": {
"Ref": "InternetGateway"
}
}
},
"PublicSubnet": {
"Type": "AWS::EC2::Subnet",
"Properties": {
"AvailabilityZone": {
"Fn::Select": [
0,
{
"Fn::GetAZs": {
"Ref": "AWS::Region"
}
}
]
},
"VpcId": {
"Ref": "VPC"
},
"CidrBlock": "10.0.0.0/24"
}
},
"PrivateSubnetInternetEnabled": {
"Type": "AWS::EC2::Subnet",
"Properties": {
"AvailabilityZone": {
"Fn::Select": [
1,
{
"Fn::GetAZs": {
"Ref": "AWS::Region"
}
}
]
},
"VpcId": {
"Ref": "VPC"
},
"CidrBlock": "10.0.1.0/24"
}
},
"RouteTable": {
"Type": "AWS::EC2::RouteTable",
"Properties": {
"VpcId": {
"Ref": "VPC"
}
}
},
"InternetRoute": {
"Type": "AWS::EC2::Route",
"Properties": {
"DestinationCidrBlock": "0.0.0.0/0",
"GatewayId": {
"Ref": "InternetGateway"
},
"RouteTableId": {
"Ref": "RouteTable"
}
}
},
"PublicSubnetRouteTableAssociation": {
"Type": "AWS::EC2::SubnetRouteTableAssociation",
"Properties": {
"RouteTableId": {
"Ref": "RouteTable"
},
"SubnetId": {
"Ref": "PublicSubnet"
}
}
},
"EIP": {
"Type": "AWS::EC2::EIP",
"Properties": {
"Domain": "vpc"
}
},
"Nat": {
"Type": "AWS::EC2::NatGateway",
"Properties": {
"AllocationId": {
"Fn::GetAtt": ["EIP", "AllocationId"]
},
"SubnetId": {
"Ref": "PublicSubnet"
}
}
},
"NatRouteTable": {
"Type": "AWS::EC2::RouteTable",
"Properties": {
"VpcId": {
"Ref": "VPC"
}
}
},
"NatRoute": {
"Type": "AWS::EC2::Route",
"Properties": {
"DestinationCidrBlock": "0.0.0.0/0",
"NatGatewayId": {
"Ref": "Nat"
},
"RouteTableId": {
"Ref": "NatRouteTable"
}
}
},
"PrivateSubnetInternetEnabledRouteTableAssociation": {
"Type": "AWS::EC2::SubnetRouteTableAssociation",
"Properties": {
"RouteTableId": {
"Ref": "NatRouteTable"
},
"SubnetId": {
"Ref": "PrivateSubnetInternetEnabled"
}
}
}
},
"Outputs": {
"VpcId": {
"Description": "The VPC ID",
"Value": {
"Ref": "VPC"
}
},
"PublicSubnetId": {
"Description": "The Public Subnet ID",
"Value": {
"Ref": "PublicSubnet"
}
},
"PrivateSubnetInternetEnabledId": {
"Description": "The Private Subnet Internet Enabled ID",
"Value": {
"Ref": "PrivateSubnetInternetEnabled"
}
}
}
}
aws cloudformation deploy \
--template-file vpc.cfn.template.json \
--stack-name aws-workshop-04-vpc \
--profile pf-sandbox-developer \
--parameter-overrides Tag="aws-workshop-04-vpc";
Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - aws-workshop-04-vpc
WORKSHOP_PUBLIC_SUBNET_ID=$(aws cloudformation describe-stacks \
--stack-name aws-workshop-04-vpc \
--profile pf-sandbox-developer \
--output text \
--query 'Stacks[0].Outputs[?OutputKey==`PublicSubnetId`].OutputValue'); \
echo ${WORKSHOP_PUBLIC_SUBNET_ID};
subnet-0c0558340445ff293
WORKSHOP_VPC_ID=$(aws cloudformation describe-stacks \
--stack-name aws-workshop-04-vpc \
--profile pf-sandbox-developer \
--output text \
--query 'Stacks[0].Outputs[?OutputKey==`VpcId`].OutputValue'); \
echo ${WORKSHOP_VPC_ID};
vpc-04babd9709e6bd499
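If you'd like to double-check the network layout before moving on, you can list the subnets created in the new VPC (a read-only check; the IDs and AZs will differ in your account):
aws ec2 describe-subnets \
--filters "Name=vpc-id,Values=${WORKSHOP_VPC_ID}" \
--profile pf-sandbox-developer \
--output table \
--query 'Subnets[].[SubnetId,CidrBlock,AvailabilityZone]';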
ECS
Let's create a CloudFormation template called `ecs.cfn.template.json` in the root of our project:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "ECS Cluster for AWS Workshop 04"
}
"Parameters": {
"RepositoryUri": {
"Description": "Repository URI",
"Type": "String"
},
"SubnetId": {
"Description": "Public Subnet Id",
"Type": "AWS::EC2::Subnet::Id"
},
"VpcId": {
"Description": "VcpId",
"Type": "AWS::EC2::VPC::Id"
}
}
Let's add an ECS Cluster to the `Resources` in our template:
"Resources": {
"WorkshopCluster":{
"Type" : "AWS::ECS::Cluster",
"Properties" : {
"CapacityProviders" : [ "FARGATE" ],
"ClusterName" : "aws-workshop-04-cluster",
"Tags": [
{
"Key": "environment",
"Value": "sandbox"
}
]
}
}
}
Next, add an execution role that can be assumed by the `ecs-tasks.amazonaws.com` service principal:
"Resources": {
"EcsTaskExecutionRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"ecs-tasks.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
},
"Description": "ECS Task Execution Role",
"ManagedPolicyArns": [
"arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
],
"RoleName": {
"Fn::Join": [
"-",
[
{
"Ref": "AWS::Region"
},
{
"Ref": "WorkshopCluster"
}
]
]
}
}
}
}
"Resources":{
"LogGroup":{
"Type": "AWS::Logs::LogGroup",
"Properties": {
"LogGroupName": "bit-by-bit"
}
}
}
ECS Task Definition:
"Resources": {
"TaskDefinition": {
"Type": "AWS::ECS::TaskDefinition",
"Properties": {
"ContainerDefinitions": [
{
"Essential": true,
"Image": {
"Ref": "RepositoryUri"
},
"LogConfiguration": {
"LogDriver": "awslogs",
"Options": {
"awslogs-group": {"Ref":"LogGroup"},
"awslogs-region": {
"Ref": "AWS::Region"
},
"awslogs-stream-prefix": "ecs"
}
},
"Name": "bit-by-bit-web-client",
"PortMappings": [
{
"ContainerPort": 80,
"HostPort": 80,
"Protocol": "tcp"
}
]
}
],
"Cpu": "256",
"ExecutionRoleArn": {
"Ref": "EcsTaskExecutionRole"
},
"Family": "task-definition-cfn",
"Memory": "512",
"NetworkMode": "awsvpc",
"RequiresCompatibilities": [
"FARGATE"
]
}
}
}
Container Security Group:
"Resources":{
"ContainerSecurityGroup": {
"Type": "AWS::EC2::SecurityGroup",
"Properties": {
"GroupDescription": "Bit-by-Bit container security group",
"SecurityGroupIngress": [
{
"IpProtocol": "tcp",
"FromPort": "80",
"ToPort": "80",
"CidrIp": "0.0.0.0/0"
}
],
"VpcId": {
"Ref": "VpcId"
}
}
}
}
ECS Service:
"Resources":{
"ECSService": {
"Type": "AWS::ECS::Service",
"Properties": {
"ServiceName": "aws-workshop-ecs-service",
"Cluster": {
"Ref": "WorkshopCluster"
},
"DesiredCount": 1,
"LaunchType": "FARGATE",
"NetworkConfiguration": {
"AwsvpcConfiguration": {
"AssignPublicIp": "ENABLED",
"SecurityGroups": [
{
"Fn::GetAtt": [
"ContainerSecurityGroup",
"GroupId"
]
}
],
"Subnets": [
{
"Ref": "SubnetId"
}
]
}
},
"TaskDefinition": {
"Ref": "TaskDefinition"
}
}
}
}
Let's deploy our ECS Cluster. For this step we'll need to use our `pf-sandbox-admin` profile:
aws cloudformation deploy \
--template-file ecs.cfn.template.json \
--stack-name aws-workshop-04-ecs \
--profile pf-sandbox-admin \
--capabilities CAPABILITY_NAMED_IAM \
--parameter-overrides \
VpcId="${WORKSHOP_VPC_ID}" \
SubnetId="${WORKSHOP_PUBLIC_SUBNET_ID}" \
RepositoryUri="${WORKSHOP_REPOSITORY_URI}";
Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - aws-workshop-04-ecs
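Before cleaning up, you may want to confirm that the service actually launched a Fargate task (use the `pf-sandbox-admin` profile instead if your developer role lacks ECS read access; the task ARN will differ in your account):
aws ecs list-tasks \
--cluster aws-workshop-04-cluster \
--profile pf-sandbox-developer;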
Finally, let's clean up our resources:
WORKSHOP_IMAGE_DIGEST="$(aws ecr list-images \
--output text \
--repository-name workshop-repository \
--query 'imageIds[0].imageDigest' \
--profile pf-sandbox-developer)";
echo "WORKSHOP_IMAGE_DIGEST = ${WORKSHOP_IMAGE_DIGEST}";
aws ecr batch-delete-image \
--repository-name workshop-repository \
--profile pf-sandbox-developer \
--image-ids imageDigest=${WORKSHOP_IMAGE_DIGEST};
{
"imageIds": [
{
"imageDigest": "sha256:4d696e0c2475319f0b9e5e2108b14c4259d7b50d2b79b4203cc1ea05c812d0df",
"imageTag": "latest"
}
],
"failures": []
}
aws cloudformation delete-stack \
--stack-name aws-workshop-04-ecr \
--profile pf-sandbox-admin;
aws cloudformation delete-stack \
--stack-name aws-workshop-04-ecs \
--profile pf-sandbox-admin;
aws cloudformation delete-stack \
--stack-name aws-workshop-04-vpc \
--profile pf-sandbox-admin;
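Note that `delete-stack` returns immediately while deletion continues in the background. If you want to block until a stack is actually gone, you can use the matching waiter, e.g. for the VPC stack:
aws cloudformation wait stack-delete-complete \
--stack-name aws-workshop-04-vpc \
--profile pf-sandbox-admin;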