AWS Workshop 02
Before you begin
We'll be using your AWS Sandbox accounts for this workshop.
- Make sure you have an sso-session configuration section in your ~/.aws/config file:
[sso-session j1]
sso_start_url = https://j1.awsapps.com/start
sso_region = ap-southeast-2
sso_registration_scopes = sso:account:access
If you don't, then you'll need to configure your sso-session section with the aws configure sso-session wizard:
aws configure sso-session
SSO session name: j1
SSO start URL [None]: https://j1.awsapps.com/start
SSO region [None]: ap-southeast-2
SSO registration scopes [None]: sso:account:access
- Next, we'll create an SSO-linked developer profile for your sandbox account:
aws configure sso
SSO session name (Recommended): j1
When prompted:
- Choose your sandbox account from the list
- Choose the SandboxDeveloper role
- Set the CLI default client Region to ap-southeast-2
- Set the CLI default output format to json
- Enter pf-sandbox-developer as the CLI profile name
- Update the cli_pager setting in the pf-sandbox-developer profile:
aws configure set cli_pager "" --profile pf-sandbox-developer
- Verify that you have a [profile pf-sandbox-developer] section in your ~/.aws/config file:
grep -n -A6 -F "[profile pf-sandbox-developer]" ~/.aws/config
[profile pf-sandbox-developer]
sso_session = j1
sso_account_id = YOUR_SANDBOX_ACCOUNT_ID
sso_role_name = SandboxDeveloper
region = ap-southeast-2
output = json
cli_pager =
- Add an alias to your ~/.bash_aliases to log you in to your AWS sandbox account using your pf-sandbox-developer profile:
echo "alias pfsbd='aws sso login --profile pf-sandbox-developer'" >> ~/.bash_aliases
source ~/.bash_aliases
- If you've previously completed steps 1 and 2, refresh your pf-sandbox-developer token:
pfsbd
Attempting to automatically open the SSO authorization page in your default browser.
If the browser does not open or you wish to use a different device to authorize this request, open the following URL:
https://device.sso.ap-southeast-2.amazonaws.com/
Then enter the code:
ABCD-1234
Successfully logged into Start URL: https://j1.awsapps.com/start
- Verify that you can access resources in your AWS sandbox account using the pf-sandbox-developer profile:
aws ec2 describe-vpcs --profile pf-sandbox-developer --query 'Vpcs[0].VpcId'
"vpc-xxxxxxxxxxxxxxx"
We'll be using the pf-sandbox-developer profile for the remainder of the workshop.
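If you'd prefer not to repeat --profile on every command, you can also export the AWS_PROFILE environment variable in your shell (the commands in this workshop still pass --profile explicitly, so this step is optional):
export AWS_PROFILE=pf-sandbox-developer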
- Install jq:
sudo apt-get install jq
AWS CLI
The AWS CLI allows you to interact with AWS services from your command-line shell. The browser-based console may be appropriate for ad-hoc tasks or experimentation, but it isn't suitable when we want our actions to be consistent and easily repeatable.
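For example, a single scripted call can combine the CLI's JSON output with the jq tool we installed above (the exact output will depend on your account):
# List the ID of every VPC in the sandbox account
aws ec2 describe-vpcs --profile pf-sandbox-developer | jq -r '.Vpcs[].VpcId'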
Deploying a Statically Hosted Website with S3
What is Simple Storage Service?
Amazon Simple Storage Service is a highly available, secure, reliable and scalable object storage service. In S3 lingo, a bucket is where you store objects. An object can be any file and the associated metadata describing it.
Using S3 is as simple as creating a bucket and uploading objects into it. Objects can then be downloaded, moved around within the bucket and deleted once they are no longer required.
With S3 you only pay for what you use.
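To make that workflow concrete, here is what those basic object operations look like with the CLI (using a hypothetical my-example-bucket rather than a bucket created in this workshop):
# Upload a local file as an object
aws s3 cp notes.txt s3://my-example-bucket/notes.txt --profile pf-sandbox-developer
# Download it again
aws s3 cp s3://my-example-bucket/notes.txt ./notes-copy.txt --profile pf-sandbox-developer
# Move it within the bucket
aws s3 mv s3://my-example-bucket/notes.txt s3://my-example-bucket/archive/notes.txt --profile pf-sandbox-developer
# Delete it once it is no longer required
aws s3 rm s3://my-example-bucket/archive/notes.txt --profile pf-sandbox-developer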
S3 buckets can function like a webserver, allowing you to host a static website. Next, we'll go through the steps required to host your Todo application from your first AWS Workshop using S3.
Create a bucket
# generate a unique bucket name (between 3-63 characters, lowercase letters, numbers, dots and hyphens only)
# uuidgen can emit uppercase characters on some systems, so force lowercase
BUCKET_NAME="aws-workshop-02-$(uuidgen | tr '[:upper:]' '[:lower:]')"
aws s3 mb s3://${BUCKET_NAME} --profile pf-sandbox-developer
make_bucket: aws-workshop-02-abaddc0e-1234-5678-1234-8c3e83fb8eb4 # <---- Your unique bucket name
Enable Static Website Hosting
After creating our bucket, we'll need to enable static website hosting. We can do this using the s3 website command, which sets the website configuration for a bucket. The --index-document parameter is a suffix that is appended to any request for a directory on your website. For example, if we request the / path, the data returned will be for the object with the key /index.html.
aws s3 website s3://${BUCKET_NAME} --index-document index.html --profile pf-sandbox-developer
Edit Block Public Access settings
By default, Amazon S3 blocks public access to buckets within our account. This is done for (obvious!) security reasons. However, to host a publicly accessible website, we will need to update the bucket's Block Public Access settings. Specifically, we will toggle off the Block all public access setting.
View public access blocks
Permissions
To perform this activity we need the s3:GetBucketPublicAccessBlock permission.
To view public access blocks on the bucket:
aws s3api get-public-access-block --bucket ${BUCKET_NAME} --profile pf-sandbox-developer
{
"PublicAccessBlockConfiguration": {
"IgnorePublicAcls": true,
"BlockPublicPolicy": true,
"BlockPublicAcls": true,
"RestrictPublicBuckets": true
}
}
Delete all public access blocks
Permissions
To perform this activity we need the s3:PutBucketPublicAccessBlock permission.
To remove all public access blocks on the bucket:
aws s3api delete-public-access-block --bucket ${BUCKET_NAME} --profile pf-sandbox-developer
# Verify that all public blocks are removed
aws s3api get-public-access-block --bucket ${BUCKET_NAME} --profile pf-sandbox-developer
An error occurred (NoSuchPublicAccessBlockConfiguration) when calling the GetPublicAccessBlock operation: The public access block configuration was not found
Add a bucket policy to make our bucket content publicly available
Policies and Permissions in IAM
You can learn more about policies and permissions here.
After updating the S3 Block Public Access settings, we need to add a bucket policy to grant public read access to our bucket. This will allow anyone on the internet read access to our bucket.
Get the current bucket policy (if any)
aws s3api get-bucket-policy --bucket ${BUCKET_NAME} --profile pf-sandbox-developer
An error occurred (NoSuchBucketPolicy) when calling the GetBucketPolicy operation: The bucket policy does not exist
Apply a bucket policy to allow public access
BUCKET_POLICY_FILE=$(mktemp /tmp/bucket-policy.XXXXXX.json)
cat <<EOF > ${BUCKET_POLICY_FILE}
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::${BUCKET_NAME}/*"
]
}
]
}
EOF
aws s3api put-bucket-policy --bucket ${BUCKET_NAME} --profile pf-sandbox-developer --policy file://${BUCKET_POLICY_FILE}
rm -rf ${BUCKET_POLICY_FILE}
And now we can view the attached policy:
aws s3api get-bucket-policy --bucket ${BUCKET_NAME} --profile pf-sandbox-developer --output text | jq
Prepare your single page application's production bundle
cd path/to/your/single/page/application
npm run build
Once the build has completed, change into the build directory and list the contents:
cd build
ls
asset-manifest.json favicon.ico index.html logo192.png logo512.png manifest.json robots.txt static
Deploying the application's production bundle to S3
We can now sync the contents of the build folder with our bucket:
aws s3 sync . s3://${BUCKET_NAME} --profile pf-sandbox-developer
upload: ./asset-manifest.json to s3://aws-workshop-02-abaddc0e-1234-5678-1234-8c3e83fb8eb4/asset-manifest.json
upload: static/js/787.f5af9790.chunk.js.map to s3://aws-workshop-02-abaddc0e-1234-5678-1234-8c3e83fb8eb4/static/js/787.f5af9790.chunk.js.map
upload: ./robots.txt to s3://aws-workshop-02-abaddc0e-1234-5678-1234-8c3e83fb8eb4/robots.txt
upload: ./favicon.ico to s3://aws-workshop-02-abaddc0e-1234-5678-1234-8c3e83fb8eb4/favicon.ico
upload: static/css/main.073c9b0a.css.map to s3://aws-workshop-02-abaddc0e-1234-5678-1234-8c3e83fb8eb4/static/css/main.073c9b0a.css.map
upload: ./logo512.png to s3://aws-workshop-02-abaddc0e-1234-5678-1234-8c3e83fb8eb4/logo512.png
upload: ./index.html to s3://aws-workshop-02-abaddc0e-1234-5678-1234-8c3e83fb8eb4/index.html
upload: ./manifest.json to s3://aws-workshop-02-abaddc0e-1234-5678-1234-8c3e83fb8eb4/manifest.json
upload: static/css/main.073c9b0a.css to s3://aws-workshop-02-abaddc0e-1234-5678-1234-8c3e83fb8eb4/static/css/main.073c9b0a.css
upload: static/js/787.f5af9790.chunk.js to s3://aws-workshop-02-abaddc0e-1234-5678-1234-8c3e83fb8eb4/static/js/787.f5af9790.chunk.js
upload: ./logo192.png to s3://aws-workshop-02-abaddc0e-1234-5678-1234-8c3e83fb8eb4/logo192.png
upload: static/js/main.4aa63aad.js.LICENSE.txt to s3://aws-workshop-02-abaddc0e-1234-5678-1234-8c3e83fb8eb4/static/js/main.4aa63aad.js.LICENSE.txt
upload: static/media/logo.6ce24c58023cc2f8fd88fe9d219db6c6.svg to s3://aws-workshop-02-abaddc0e-1234-5678-1234-8c3e83fb8eb4/static/media/logo.6ce24c58023cc2f8fd88fe9d219db6c6.svg
upload: static/js/main.4aa63aad.js to s3://aws-workshop-02-abaddc0e-1234-5678-1234-8c3e83fb8eb4/static/js/main.4aa63aad.js
upload: static/js/main.4aa63aad.js.map to s3://aws-workshop-02-abaddc0e-1234-5678-1234-8c3e83fb8eb4/static/js/main.4aa63aad.js.map
Next, let's list the contents of the bucket to verify the sync:
aws s3 ls s3://${BUCKET_NAME} --recursive --human-readable --summarize --profile pf-sandbox-developer
2023-03-16 09:28:48 605 Bytes asset-manifest.json
2023-03-16 09:28:48 3.8 KiB favicon.ico
2023-03-16 09:28:48 644 Bytes index.html
2023-03-16 09:28:48 5.2 KiB logo192.png
2023-03-16 09:28:48 9.4 KiB logo512.png
2023-03-16 09:28:48 492 Bytes manifest.json
2023-03-16 09:28:48 67 Bytes robots.txt
2023-03-16 09:28:48 1.0 KiB static/css/main.073c9b0a.css
2023-03-16 09:28:48 1.5 KiB static/css/main.073c9b0a.css.map
2023-03-16 09:28:48 4.5 KiB static/js/787.f5af9790.chunk.js
2023-03-16 09:28:48 10.3 KiB static/js/787.f5af9790.chunk.js.map
2023-03-16 09:28:48 140.7 KiB static/js/main.4aa63aad.js
2023-03-16 09:28:48 971 Bytes static/js/main.4aa63aad.js.LICENSE.txt
2023-03-16 09:28:48 364.4 KiB static/js/main.4aa63aad.js.map
2023-03-16 09:28:48 2.6 KiB static/media/logo.6ce24c58023cc2f8fd88fe9d219db6c6.svg
Total Objects: 15
Total Size: 546.1 KiB
Finally, we should test the website endpoint:
S3 website endpoints follow one of the formats below, depending on your Region:
- s3-website dash (-) Region: http://bucket-name.s3-website-Region.amazonaws.com
- s3-website dot (.) Region: http://bucket-name.s3-website.Region.amazonaws.com
To request the index.html document on our site, we need to construct a URL as follows (ap-southeast-2 uses the dash format):
# Generate the S3 website URL and copy it into our clipboard
echo "http://${BUCKET_NAME}.s3-website-ap-southeast-2.amazonaws.com" | xclip -selection c
Paste the URL into your browser and you should be greeted with your newly deployed site.
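If you'd rather check from the terminal, a quick curl request (assuming curl is installed) should return an HTTP 200 status for the site:
curl -s -o /dev/null -w "%{http_code}\n" "http://${BUCKET_NAME}.s3-website-ap-southeast-2.amazonaws.com"
200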
Clean up resources
Note: Conditions for deleting a bucket
A bucket must be completely empty of objects and versioned objects before it can be deleted. However, the --force parameter can be used to delete the non-versioned objects in the bucket before the bucket is deleted.
aws s3 rb --force s3://${BUCKET_NAME} --profile pf-sandbox-developer
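To confirm the bucket is gone, list your buckets again; the grep below should print nothing once the workshop bucket has been removed:
aws s3 ls --profile pf-sandbox-developer | grep "${BUCKET_NAME}"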
Next steps
We've seen how we can host a static site using S3, but one limitation of this approach is that S3 website endpoints only support HTTP. To serve our site's content securely over HTTPS, we need to use another AWS service: CloudFront.
Homework: Write two bash functions to deploy and un-deploy your site's content to a secure endpoint using S3 and CloudFront.
cd /path/to/your/app
SECURE_ENDPOINT=$(deploy) # Deploy the site and capture the secure endpoint
echo $SECURE_ENDPOINT
https://xxxxxxxxxx.cloudfront.net
cd /path/to/your/app
destroy # Tear down the site and all associated resources
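If you'd like a starting point, here is a rough, untested skeleton for the two functions. Everything in it beyond the function names is an assumption; in particular, creating and tearing down the CloudFront distribution (and waiting for it to deploy) is left as a TODO:
# Sketch only: fill in the CloudFront plumbing, error handling and waits yourself.
deploy() {
  local bucket="aws-workshop-02-$(uuidgen | tr '[:upper:]' '[:lower:]')"
  aws s3 mb "s3://${bucket}" --profile pf-sandbox-developer
  aws s3 sync build/ "s3://${bucket}" --profile pf-sandbox-developer
  # TODO: create a CloudFront distribution with the bucket as its origin, e.g.
  # aws cloudfront create-distribution --origin-domain-name "${bucket}.s3.amazonaws.com" --default-root-object index.html
  # then echo the distribution's DomainName as the secure endpoint.
}

destroy() {
  # TODO: disable and delete the CloudFront distribution, then remove the bucket:
  # aws s3 rb --force "s3://${bucket}" --profile pf-sandbox-developer
  :  # no-op placeholder so the function body is valid
}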