This is just a CI/CD pipeline. Not an exemplary one, not a bad one, just one.
What is the purpose of the pipeline?
It is to push the latest changes that the devs have implemented onto the test environment so that QA can test and validate the functionality.
If you zoom in a little, you will see that the code needs to pass certain quality standards before it can actually manifest as functionality in the test environment.
Code is compiled; it has to be of sufficient quality to compile cleanly. Then unit tests are run, integration tests are run, and the code gets merged to master.
Code quality measurement
The build tool used by the project is Gradle, and GitLab CI invokes the Gradle targets. The JaCoCo plugin is used to measure code coverage. On top of the JaCoCo reports, the SonarQube plugin is run to publish coverage and related data to the SonarQube server. A Cobertura report is used to display the code coverage of the MR visually alongside the MR changes, so that a reviewer can notice whether a critical piece of functionality has been covered by tests.
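The quality gate described above can be sketched as a `.gitlab-ci.yml` fragment. This is a hypothetical sketch, not the project's actual config: the job names, stage names, Gradle task names (`jacocoTestReport`, `sonar`) and report path are assumptions.

```yaml
# Hypothetical .gitlab-ci.yml fragment; names and paths are illustrative.
stages:
  - build
  - test

build:
  stage: build
  script:
    - ./gradlew assemble

test:
  stage: test
  script:
    # Run unit tests and generate the JaCoCo coverage report
    - ./gradlew test jacocoTestReport
    # Publish coverage and related data to the SonarQube server
    - ./gradlew sonar
  artifacts:
    reports:
      # GitLab renders Cobertura-format coverage inline on the MR diff.
      # Note: JaCoCo XML typically has to be converted to Cobertura
      # format first; this path assumes that conversion has happened.
      coverage_report:
        coverage_format: cobertura
        path: build/reports/coverage/cobertura.xml
```

The `coverage_report` artifact is what makes the green/red coverage annotations appear next to the changed lines in the MR view.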
Now that the code has met the standard, it gets merged to main. The codebase has a Dockerfile, which is used to generate the image from the pristine code. This image is then pushed to ECR.
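The build-and-push step might look like the following GitLab CI job. This is a sketch under assumptions: the registry URL, variable names (`ECR_REGISTRY`, `APP_NAME`) and stage name are placeholders, not taken from the project.

```yaml
# Hypothetical publish job; runs only on the main branch.
publish-image:
  stage: publish
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
  script:
    # Authenticate the Docker client against ECR
    - aws ecr get-login-password --region "$AWS_REGION" |
        docker login --username AWS --password-stdin "$ECR_REGISTRY"
    # Build the image from the Dockerfile in the codebase, then push it
    - docker build -t "$ECR_REGISTRY/$APP_NAME:$CI_COMMIT_SHORT_SHA" .
    - docker push "$ECR_REGISTRY/$APP_NAME:$CI_COMMIT_SHORT_SHA"
```

Tagging with the commit SHA keeps every main-branch image addressable later, which the scheduled deployment relies on.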
Cloudformation to deploy containerized applications to ECS
CloudFormation templates are configured to create ECS tasks pointing to the image that was pushed from the main branch. This deployment of the image is abstracted out into a separate deploy project which does not deal with the codebase at all; it only deals with the deployment of the image.
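The relevant piece of such a CloudFormation template could look like this. Resource and parameter names here are illustrative assumptions; only the shape of an ECS task definition pointing at an ECR image is the point.

```yaml
# Sketch of a CloudFormation fragment; names are placeholders.
Parameters:
  ImageUri:
    Type: String   # e.g. the ECR image URI pushed from the main branch

Resources:
  AppTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: app-test
      ContainerDefinitions:
        - Name: app
          Image: !Ref ImageUri   # the task points at the pushed image
          Memory: 512
          PortMappings:
            - ContainerPort: 8080
```

Passing the image URI in as a parameter is what lets the deploy project stay ignorant of the codebase: it only ever sees a fully built image.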
Scheduled deployment of the main branch images
Main branch images get pushed to ECR every time a developer merges an MR to the main branch. Deploying these images to the testing environment on every merge would be disruptive to the testers: functionality could change while testing is in progress. That is the reason to deploy on a schedule, which makes the environment's behavior more predictable.
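In GitLab CI terms, the deploy job can be restricted to pipeline schedules so it never fires on a merge. A minimal sketch, assuming a hypothetical `deploy-test` job in the deploy project:

```yaml
# Hypothetical job: runs only when triggered by a pipeline schedule,
# so images land in the test environment on a fixed cadence rather
# than on every merge to main.
deploy-test:
  stage: deploy
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
  script:
    # Placeholder for the CloudFormation deployment step
    - ./deploy.sh
```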
The testing team has the test environment URL bookmarked. It does not change with every deployment, so they never have to hunt for the latest environment URL.
Containerized applications would get deployed on brand new EC2 instances
This is a containerized application, and it gets deployed on a brand new EC2 instance. How is it, then, that the termination of the "machine" where the previous version of the application was running is transparent to the testing team? That is where the application load balancer comes in.
Bookmarked URL on the tester’s browser
Amazon Route53 has an entry pointing the human-readable URL to the application load balancer. The load balancer has a listener rule that directs traffic to a target group. A target group is a group of targets: targets in the sense of being the target of the request from the browser. When the tester clicks the bookmarked URL and hits enter, the arrow lands in the heart of the target.
What is the target?
The target is the EC2 instance where the latest version of the containerized application is running.
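The DNS-to-ALB-to-target path can be sketched in CloudFormation. Everything here is an illustrative assumption: the hosted zone, record name, and the `AppLoadBalancer`, `AppInstance`, and `VpcId` references are placeholders presumed to be defined elsewhere in the template.

```yaml
# Illustrative sketch of the Route53 record and the target group.
Resources:
  TestDnsRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.com.
      Name: test.example.com.        # the bookmarked URL
      Type: A
      AliasTarget:                   # alias to the load balancer
        DNSName: !GetAtt AppLoadBalancer.DNSName
        HostedZoneId: !GetAtt AppLoadBalancer.CanonicalHostedZoneID

  AppTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      Port: 8080
      Protocol: HTTP
      VpcId: !Ref VpcId
      Targets:
        - Id: !Ref AppInstance       # the EC2 instance running the latest image
```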
Targets live between schedules
The application load balancer lives forever. The targets change with every deployment: a new EC2 instance is spawned, and the latest image is deployed onto it.
Updating the application load balancer
The listener rule of the load balancer is updated to direct traffic to the latest target group. This is like a quick UPDATE statement on a database, and voila, the browser on the tester's machine gets the brand new functionality.
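That "update statement" amounts to repointing the listener's forward action. A sketch, again with placeholder resource names (`AppLoadBalancer`, `LatestTargetGroup` are assumptions):

```yaml
# Sketch of the listener whose forward action is repointed on each
# scheduled deployment; swapping TargetGroupArn to the freshly created
# target group is the whole "update".
Resources:
  HttpListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref AppLoadBalancer
      Port: 80
      Protocol: HTTP
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref LatestTargetGroup   # points at the new target group
```

Because the bookmark resolves to the load balancer, not to any instance, this one-line swap is invisible to the testers.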