- Week 11/X — CloudFormation Part 2 - Cleanup
- Overview
- Videos for week X
- CFN CI/CD Stack
- Week X Sync tool for static website hosting
- Initialise Static Hosting
- Create GitHub Action
- CleanUp
- Messaging Alt User
- Allowing Production to upload images
- Troubleshooting
- Update Lambda
- All CFN Stacks Created
- Proof of working in Production
- cloud-project-bootcamp-validation-tool
Due to scope creep, this week will focus on cleaning up the code and ensuring it is in a stable state.
- Week X Sync tool for static website hosting
- Week X Reconnect Database and Post Confirmation Lambda
- Week X Use CORS for Service
- Week-X CICD Pipeline and Create Activity
- Week-X Refactor JWT to use a decorator
- Week-X Refactor AppPy
- Week-X Refactor Flask Routes
- Week-X Replies Work In Progress
- Week-X Refactor Error Handling and Fetch Requests
- Week-X Activity Show Page
- Week-X Cleanup
- Week X Cleanup Part 2
- Final Submissions Instructions
Create the folder structure.
cd /workspace/aws-bootcamp-cruddur-2023
mkdir -p aws/cfn/cicd
cd aws/cfn/cicd
touch template.yaml config.toml
The CI/CD stack requires a nested CodeBuild stack, so a directory needs to be created for it too.
cd /workspace/aws-bootcamp-cruddur-2023
mkdir -p aws/cfn/cicd/nested
cd aws/cfn/cicd/nested
touch codebuild.yaml
Update the files with the following code.
`aws/cfn/cicd/config.toml` structure:
[deploy]
bucket = 'cfn-tajarba-artifacts'
region = 'eu-west-2'
stack_name = 'CrdCicd'
[parameters]
ServiceStack = 'CrdSrvBackendFlask'
ClusterStack = 'CrdCluster'
GitHubBranch = 'prod'
GithubRepo = 'shehzadashiq/aws-bootcamp-cruddur-2023'
ArtifactBucketName = "codepipeline-cruddur-tajarba-artifacts"
BuildSpec = 'backend-flask/buildspec.yml'
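The config above is consumed by a deploy script for the stack. That script is not reproduced here; the following is a minimal sketch of what it could look like, assuming the values are copied from config.toml (a TOML-reading helper could load them instead) and that the nested codebuild.yaml is packaged to the artifacts bucket before deploying.

```sh
#! /usr/bin/bash
set -e

# Values mirroring aws/cfn/cicd/config.toml above
BUCKET="cfn-tajarba-artifacts"
REGION="eu-west-2"
STACK_NAME="CrdCicd"
CFN_PATH="aws/cfn/cicd/template.yaml"
PACKAGED_PATH="tmp/packaged-template.yaml"

# The CI/CD template nests codebuild.yaml, so it has to be packaged to S3 first
aws cloudformation package \
  --template-file "$CFN_PATH" \
  --s3-bucket "$BUCKET" \
  --s3-prefix cicd-package \
  --region "$REGION" \
  --output-template-file "$PACKAGED_PATH"

# Create (but do not auto-execute) a change set for the packaged template
# (remaining [parameters] values omitted for brevity)
aws cloudformation deploy \
  --stack-name "$STACK_NAME" \
  --template-file "$PACKAGED_PATH" \
  --s3-bucket "$BUCKET" \
  --region "$REGION" \
  --no-execute-changeset \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameter-overrides GitHubBranch=prod GithubRepo=shehzadashiq/aws-bootcamp-cruddur-2023
```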
- Publicly accessible bucket that was created via `./bin/cfn/frontend`
- CloudFront distribution that was created via `./bin/cfn/frontend`
Create the following scripts `static-build` and `sync` in `bin/frontend` and set them as executable
touch bin/frontend/static-build
touch bin/frontend/sync
chmod u+x bin/frontend/static-build
chmod u+x bin/frontend/sync
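A minimal sketch of what `bin/frontend/static-build` could contain is shown below; it assumes the script only needs to produce a production build of the React app (any REACT_APP_* environment variables the real script exports are omitted here as assumptions).

```sh
#! /usr/bin/bash
set -e

# Resolve the repository root relative to this script (bin/frontend/static-build)
ABS_PATH=$(readlink -f "$0")
BIN_DIR=$(dirname "$ABS_PATH")
PROJECT_PATH=$(dirname "$(dirname "$BIN_DIR")")
FRONTEND_PATH="$PROJECT_PATH/frontend-react-js"

cd "$FRONTEND_PATH"

# Install exact dependency versions and produce the static build/ folder
npm ci
npm run build
```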
Create a new file `erb/sync.env.erb` that holds the environment variables for the `bin/frontend/sync` script
touch erb/sync.env.erb
Add the following, replacing `SYNC_S3_BUCKET` and `SYNC_CLOUDFRONT_DISTRIBUTION_ID` with your own values.
SYNC_S3_BUCKET=tajarba.com
SYNC_CLOUDFRONT_DISTRIBUTION_ID=E2VH3EBBB8C06D
SYNC_BUILD_DIR=<%= ENV['THEIA_WORKSPACE_ROOT'] %>/frontend-react-js/build
SYNC_OUTPUT_CHANGESET_PATH=<%= ENV['THEIA_WORKSPACE_ROOT'] %>/tmp/changeset.json
SYNC_AUTO_APPROVE=false
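The template is rendered to sync.env by `./bin/frontend/generate-env`. If you want to check the output by hand, the `erb` command that ships with Ruby expands it the same way (this assumes THEIA_WORKSPACE_ROOT is set in your shell; the generate-env script may write the file to a different location):

```sh
# Render the ERB template, substituting ENV['THEIA_WORKSPACE_ROOT'], into sync.env
erb erb/sync.env.erb > sync.env
```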
Create the following files in the root of the repository
- Gemfile
- Rakefile
touch Gemfile
touch Rakefile
The code for these files is located respectively here Gemfile and here Rakefile.
Create the following file `./tmp/.keep` as a placeholder so the otherwise-empty `tmp` directory is kept in the repository
touch tmp/.keep
Create a `sync` script in `bin/cfn` and set it as executable
touch bin/cfn/sync
chmod u+x bin/cfn/sync
Update `bin/cfn/sync` with the deployment code for the sync role stack.
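The script body is not shown here; a minimal sketch, assuming it simply deploys the sync-role template below using the values from aws/cfn/sync/config.toml, could look like this:

```sh
#! /usr/bin/bash
set -e

# Values mirroring aws/cfn/sync/config.toml below
BUCKET="cfn-tajarba-artifacts"
REGION="eu-west-2"
STACK_NAME="CrdSyncRole"
CFN_PATH="aws/cfn/sync/template.yaml"

# Create (but do not auto-execute) a change set for the sync role stack
aws cloudformation deploy \
  --stack-name "$STACK_NAME" \
  --template-file "$CFN_PATH" \
  --s3-bucket "$BUCKET" \
  --s3-prefix sync \
  --region "$REGION" \
  --no-execute-changeset \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameter-overrides GitHubOrg=shehzadashiq RepositoryName=aws-bootcamp-cruddur-2023
```

Next, create the folder and config files for the sync CloudFormation template: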
cd /workspace/aws-bootcamp-cruddur-2023
mkdir -p aws/cfn/sync
touch aws/cfn/sync/template.yaml aws/cfn/sync/config.toml aws/cfn/sync/config.toml.example
Update `config.toml` with the following settings, which specify the bucket, region and name of the CFN stack. Replace `bucket` and `region` with your own. We also need to specify `GitHubOrg`, which in our case corresponds to our GitHub username, and the GitHub repository name.
[deploy]
bucket = 'cfn-tajarba-artifacts'
region = 'eu-west-2'
stack_name = 'CrdSyncRole'
[parameters]
GitHubOrg = 'shehzadashiq'
RepositoryName = 'aws-bootcamp-cruddur-2023'
OIDCProviderArn = ''
Update `aws/cfn/sync/template.yaml` with the template code for the `CrdSyncRole` stack (the IAM role that GitHub Actions assumes via OIDC).
Run the build script `./bin/frontend/build`; you should see output similar to the following when successful.
The build folder is ready to be deployed.
You may serve it with a static server:
npm install -g serve
serve -s build
Find out more about deployment here:
https://cra.link/deployment
Change to the frontend directory and zip the build folder
cd frontend-react-js
zip -r build.zip build/
The steps within the video recommended downloading the zip file locally and then uploading it to the S3 bucket. I instead chose to use the `s3 cp` command to copy from the `frontend-react-js` folder directly to the S3 bucket `s3://tajarba.com`
aws s3 cp build s3://tajarba.com/ --recursive
I verified everything had been copied successfully using the `s3 ls` command
aws s3 ls s3://tajarba.com
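Because the files were copied manually rather than through the sync tool, CloudFront may keep serving stale cached copies. If that happens, an invalidation can be issued against the distribution (the ID below is the one from sync.env.erb above):

```sh
# Invalidate every cached path on the CloudFront distribution
aws cloudfront create-invalidation \
  --distribution-id E2VH3EBBB8C06D \
  --paths "/*"
```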
In the root of the repository:
- Install the prerequisite Ruby gems: `gem install aws_s3_website_sync dotenv`
- Generate `sync.env` by running the updated `./bin/frontend/generate-env`
- Initiate synchronisation with `./bin/frontend/sync`
- Create the CFN Sync `CrdSyncRole` stack by running `./bin/cfn/sync`
Create a folder in the base of the repo for the GitHub Action
mkdir -p .github/workflows/
touch .github/workflows/sync.yaml
Update it with the following. Replace `role-to-assume` with the role ARN generated by the `CrdSyncRole` stack and `aws-region` with the region your stack was created in.
name: Sync-Prod-Frontend
on:
  push:
    branches: [ prod ]
  pull_request:
    branches: [ prod ]
jobs:
  build:
    name: Statically Build Files
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [ 18.x ]
    steps:
      - uses: actions/checkout@v3
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v3
        with:
          node-version: ${{ matrix.node-version }}
      # cd does not persist between run steps, so set the working directory per step
      - run: npm ci
        working-directory: frontend-react-js
      - run: npm run build
        working-directory: frontend-react-js
  deploy:
    name: Sync Static Build to S3 Bucket
    runs-on: ubuntu-latest
    # These permissions are needed to interact with GitHub's OIDC Token endpoint.
    permissions:
      id-token: write
      contents: read
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Configure AWS credentials from Test account
        uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: arn:aws:iam::797130574998:role/CrdSyncRole-Role-VW38RM6ZXJ6W
          aws-region: eu-west-2
      - name: Set up Ruby
        uses: ruby/setup-ruby@ec02537da5712d66d4d50a0f33b7eb52773b5ed1
        with:
          ruby-version: '3.1'
      - name: Install dependencies
        run: bundle install
      - name: Run sync
        run: bundle exec rake sync
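With the workflow in place, any push or merged pull request to the prod branch triggers the sync, for example (assuming main is the working branch):

```sh
git checkout prod
git merge main
git push origin prod
```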
This involved the following:
- Refactoring of code
- Reimporting code from other branches that had been missed, e.g. TimeDateCode
- Fixing CloudFormation stacks to correct missing settings
- Adding a user to ensure least-privilege access
- Refactoring to use a JWT decorator in the application
- Implementing replies
- Improving error handling
- Other quality-of-life changes
I used the following URL to message my altUser in Production: https://tajarba.com/messages/new/altshehzad
To allow image uploads in production, from my experience the following changes need to be made:
- Update the CORS policy for the avatars bucket to change the `AllowedOrigins` to the production domain (a CLI sketch follows this list)
- In the CruddurAvatarUpload Lambda, edit `function.rb` to use the production domain. Make sure not to have a trailing slash, i.e. it should be `https://tajarba.com`
- Add the `PUT` method in `/api/profile/update` under `backend-flask/routes/users.py`
- Update the CORS policy for the avatars bucket to change the `AllowedMethods` to `POST,PUT`
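A minimal sketch of applying such a CORS policy with the AWS CLI; the bucket name and the ExposeHeaders/MaxAgeSeconds values are assumptions, so substitute your own avatars bucket and settings:

```sh
# Bucket name is a placeholder; substitute your own avatars/assets bucket
aws s3api put-bucket-cors \
  --bucket assets.tajarba.com \
  --cors-configuration '{
    "CORSRules": [
      {
        "AllowedOrigins": ["https://tajarba.com"],
        "AllowedMethods": ["POST", "PUT"],
        "AllowedHeaders": ["*"],
        "ExposeHeaders": ["ETag"],
        "MaxAgeSeconds": 3000
      }
    ]
  }'
```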
- Tasks in GitPod and the AWS CLI stopped running because `AWS_ENDPOINT_URL` had been set and was causing issues
- CI/CD configuration error
- The reply function was not working due to a code overwrite error that I introduced when copy/pasting code without realising it. I ended up having to spend time debugging to figure out what the issue was
- Rollbar stopped working despite working earlier with no errors thrown
- Earlier on in the bootcamp I changed my seed script to include the BIO column, so I did not need to run the migrations script
- Gitpod.yml would not always work. To resolve this I created a bootstrap script which automated the common tasks for me. This also worked in my local environment
- To save costs in Week 10, I had torn down the CFN stacks. This meant in Week X I could no longer remember which stacks needed to exist, as I had not yet finished the documentation. Troubleshooting this consumed a lot of time
- Uploading in production was causing CORS issues. In addition to adding permissions for the `tajarba.com` domain, this was resolved by adding the `PUT` method in `/api/profile/update` under `backend-flask/routes/users.py`
- There was an issue generating the CloudFormation for validation when using https://github.com/ExamProCo/cloud-project-bootcamp-validation-tool. I resolved this by troubleshooting the code and successfully generated the required template
Error on First Run as Pipeline Execution Fails
Choose the Connection Application and click Connect ![image](https://github.com/shehzadashiq/aws-bootcamp-cruddur-2023/assets/5746804/dd3d062e-0390-4a76-a97d-85b8a3719906)
Connection created successfully ![image](https://github.com/shehzadashiq/aws-bootcamp-cruddur-2023/assets/5746804/99575532-7849-4c6c-aba5-2516b240f6b9)
The pipeline still fails, saying `[GitHub] No Branch [prod] found for FullRepositoryName [aws-bootcamp-cruddur-2023]`.
When trying to edit the pipeline, the following message is displayed: `A repository Id must be in the format <account>/<repository-name>`.
To resolve this, change the `GithubRepo` setting in `aws/cfn/cicd/config.toml` to include the account name, e.g.
GithubRepo = 'shehzadashiq/aws-bootcamp-cruddur-2023'
The pipeline was failing at the Build stage with the error:
Error calling startBuild: Project cannot be found: arn:aws:codebuild:eu-west-2:797130574998:project/CrdCicd-CodeBuildBakeImageStack-1O32P0X7I5NBCProject (Service: AWSCodeBuild; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: 3f7509fa-14e3-478b-9139-2ff5621ccc6e; Proxy: null)
The build succeeded after updating the stack with the correct CodeBuild project reference and the buildspec.yml path.
A new security group was created for the Post Confirmation Lambda.
In `CrdDbRDSSG`, I created a rule to allow connectivity, as the Lambda was previously connected to the default VPC.
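A minimal sketch of adding such a rule with the AWS CLI, assuming the database is Postgres on port 5432; both security group IDs below are placeholders for the CrdDbRDSSG group and the new Lambda security group:

```sh
# Allow the Lambda's security group to reach Postgres (5432) in the RDS security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0000000000rdsexample \
  --protocol tcp \
  --port 5432 \
  --source-group sg-0000000000lambdaexample
```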
In GitPod, docker compose started to fail. I noticed that tasks within Gitpod.yml were not working either, and the same issue happened in a new environment. The following error was shown:
Could not connect to the endpoint URL: "http://dynamodb-local:8000/"
Looking further into the error, I saw that this value had been configured as the environment variable `AWS_ENDPOINT_URL` when we were using DynamoDB locally in week 5. It was configured as `AWS_ENDPOINT_URL="http://dynamodb-local:8000"`.
This had not caused any issues previously, so I was surprised it happened now. I tried to change the URL to point to my region, following the recommendations at https://docs.aws.amazon.com/general/latest/gr/rande.html#ddb_region, e.g. AWS_ENDPOINT_URL="https://dynamodb.eu-west-2.amazonaws.com". This still caused the same issue, so to resolve it I unset the variable locally and removed it from Gitpod
gp env -u AWS_ENDPOINT_URL
unset AWS_ENDPOINT_URL
Once this had been unset I was able to run all AWS CLI commands and run docker compose.
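If the local DynamoDB endpoint is only needed occasionally, a safer pattern is to pass it per command instead of setting a global environment variable, for example:

```sh
# Target local DynamoDB explicitly for one command (assumes the container's port 8000 is published locally)
aws dynamodb list-tables --endpoint-url http://localhost:8000
```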
To automate tasks that would not run when the `.gitpod.yml` file did not work, I created a `./bin/bootstrap` script. I also created a version for my local environment, `./bin/bootstrap-local`.
#! /usr/bin/bash
set -e # stop if it fails at any point
CYAN='\033[1;36m'
NO_COLOR='\033[0m'
LABEL="bootstrap"
printf "${CYAN}====== ${LABEL}${NO_COLOR}\n"
ABS_PATH=$(readlink -f "$0")
BIN_DIR=$(dirname "$ABS_PATH")
# Connect to ECR
source "$BIN_DIR/ecr/login"
# Configure Gitpod connectivity
source "$THEIA_WORKSPACE_ROOT/bin/rds/update-sg-rule"
source "$THEIA_WORKSPACE_ROOT/bin/ddb/update-sg-rule"
# Generate environment variables
ruby "$BIN_DIR/backend/generate-env"
ruby "$BIN_DIR/frontend/generate-env"
# Install CFN section
source "$BIN_DIR/cfn/initialise.sh"
# Install SAM section
source "$BIN_DIR/sam/initialise.sh"
These errors were addressed by commenting out the following import line:
import ReactDOM from 'react-dom';
All stacks were deployed successfully
https://github.com/ExamProCo/cloud-project-bootcamp-validation-tool
Command to run to validate: bundle exec rake permit
Error generated when running the command.
Fixed by changing line 52 in `lib/cpbvt/payloads/aws/policy.rb` from `"#{general-params.run_uuid}-cross-account-role-template.yaml"` to `'cross-account-role-template.yaml'`. The original value caused an issue because, when trying to merge the files, the tool could not find the source file under the generated name and failed.