Cloud Application Documentation
The build and deployment of the READinet application in the cloud are managed with Terraform and Ansible scripts. Essentially, Terraform is used to manage the AWS infrastructure and Ansible is used to manage the build and deployment of the application itself.
Prerequisites
Before deploying the application on AWS, there are a few prerequisites that need to be in place. The first is to purchase a domain name that will be used when hosting the application. Simply put, this is the URL that users will put in their browsers to get to the cloud-based application. You can purchase the domain name from any number of sources, but we used a service called Namecheap to purchase readinet.org. Once you have ownership of the name, you can create the AWS account (not documented here). Once you have the account and have admin privileges, you will want to add the domain entry in the Route 53 service in your AWS account. In Route 53, you will create a 'Hosted Zone' where you will register your new domain name (not documented here).
The rest of the instructions below were written using the AWS account created by MBARI for the development of this project (706008-readinet). To access this account, you can login with MBARI's SSO (connected to AD) or with a local IAM account (for those that don't have an MBARI AD account). If you have not been added to the account either through SSO or local IAM, talk to the MBARI team about getting connected. Then depending on how you were added, you can use the MBARI SSO Login or IAM Login instructions below to log in to the web console.
Also, you need to make sure you have Terraform and Ansible installed on your local machine.
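A quick way to confirm the tools are available before continuing is to check their versions from a terminal (the exact version numbers will differ):
# Sanity check that the required tools are installed
terraform -version
ansible --version
# The AWS CLI is also needed for the named profile setup described below
aws --version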
Login to AWS Account
MBARI SSO Login
- Go to the MBARI SSO Login Page

- Login with MBARI credentials
- This will drop you into an 'Access Portal' that will show you a list of accounts you have access to.
- Click on the arrow to the left of the 706008-readinet account, then click on the role you want to use to connect to the account (this example uses AWSAdministratorAccess)
- This then logs you into the READinet account as that role

IAM Login
In order to interact with the READinet application, you will need to create an IAM login account. The instructions below assume that you are creating a new account:
- Log into the READinet account AWS console with an account that can create new users.
- Go to the Identity and Access Management (IAM) service once you are logged in.
- Click on Users in the left navigation menu.
- Click on the Create user button to the right.
- Type in a User name and click on the check box for Provide user access to the AWS Management Console, then click Next.
- On the Set permissions screen, click on the Attach policies directly radio button, then click on the AdministratorAccess checkbox, then click on Next.
- On the Review and create screen, click on Create user; this will take you back to the list of users.
- Click on the user you just created.
- Click on the Create access key button in the Access keys section.
- Click on the Command Line Interface (CLI) radio button and then on the checkbox next to I understand the above recommendation and want to proceed to create an access key, then click Next.
- You can leave the tag description empty and click on Create access key.
- On the next screen, it is easiest to just click on the Download .csv file button so you have the access key and secret key locally on your machine, as you will need them later. MAKE SURE YOU KEEP THIS FILE WELL PROTECTED!!!
- If you want to enable console access, click on Done, then click on Enable console access and follow the instructions.
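Once the keys from the .csv file have been configured into a local profile (the mbari profile setup is described in the Terraform section below), a quick way to confirm they work is to ask AWS which identity you are authenticated as:
# Confirm the access key works and shows the expected IAM user
aws sts get-caller-identity --profile mbari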
AWS Services
The AWS services that make up the cloud application portion of the FIDO Sampler system are:
| AWS Service | Description |
|---|---|
| EC2 | There are two EC2 instances running, one for production and one for development |
| S3 | There are three S3 object storage buckets: mbari-readinet-backups, mbari-readinet-images, mbari-terraform-state |
| VPC | There is one VPC (the default one) in the Oregon region with four subnets |
| Route 53 | Links the domain name to the IP addresses so that DNS services can find the correct servers given the domain name |
| Secrets Manager | Holds one secret, which is the deploy key for GitHub |
| SNS | Sends notifications when new SD card images are deployed to S3 |
| DynamoDB | A single simple table that holds a lock Terraform checks to make sure two people are not trying to apply changes at the same time |
| Key Management Service | TBD |
| IAM | Five local accounts: david, karl, nick, scott, kgomes |
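As a quick sketch of how some of these resources can be listed from the command line (assuming the mbari CLI profile described in the Terraform section below):
# List the EC2 instances in the Oregon region with their current state
aws ec2 describe-instances --profile mbari --region us-west-2 \
  --query 'Reservations[].Instances[].[InstanceId,State.Name]' --output table
# List the S3 buckets in the account
aws s3 ls --profile mbari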
DevOps
The management of the deployed system on AWS is done using two different tools: Terraform is used to manage the AWS infrastructure components, and Ansible is used to manage what gets deployed on those components.
Terraform
Terraform is a tool that can be used to script the management of the cloud resources. This is what is known as 'infrastructure as code'. It's basically a way to build a 'recipe' that describes infrastructure. One thing to realize is that Terraform is a 'push' tool which means that the recipe is defined by an external client and then that client pushes the recipe to a provider (like AWS).
For READinet, the Terraform files are located in the source code repository under the devops/terraform folder.
In Terraform, there is what is known as 'state', which keeps track of the actual mapping between the Terraform 'recipe' and what is deployed on the provider. The state is kept in what is known as the 'backend', which for this project is an S3 bucket named mbari-terraform-state. The state file is named terraform.tfstate and is not meant to be edited by hand. The backend is identified in the main.tf file at the top level of the devops/terraform folder, in the terraform.backend block. That block also identifies a DynamoDB table named terraform-state-lock, which keeps track of whether a terraform apply is currently running. This prevents the unlikely case where two different people try to apply changes to the infrastructure at the same time.
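If you want to peek at the state and lock out of band, the following AWS CLI commands (using the mbari profile configured below) show the contents of the state bucket and the lock table:
# Show the remote state file(s) stored in the backend bucket
aws s3 ls s3://mbari-terraform-state/ --profile mbari
# Show the DynamoDB table Terraform uses for state locking
aws dynamodb describe-table --table-name terraform-state-lock --profile mbari --region us-west-2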
In order to use Terraform against the production and staging services, you will need a local AWS CLI profile named mbari. This profile should hold the access keys of an IAM admin account on the READinet AWS account, so make sure you have an IAM account created with the correct permissions and generate an access key for that account. See the IAM Login instructions above if you need to create an account. To create the named profile, run
aws configure --profile mbari
and fill in the prompts with something like what is shown below:
AWS Access Key ID [None]: XXXXXXXXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: YYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
Default region name [None]: us-west-2
Default output format [None]: json
Where the X's are your access key and the Y's are the secret key that you downloaded in the .csv file during the creation of your access keys. You can verify that this worked by running
aws s3 ls --profile mbari
and you should get something like:
2023-11-16 12:40:21 mbari-readinet-backups
2023-09-27 08:59:44 mbari-readinet-images
2023-09-27 09:00:49 mbari-terraform-state
With the CLI configured with the correct profile, you should be able to now use the Terraform files. The first thing to do is to open a terminal window and initialize Terraform by running the following in the devops/terraform directory:
terraform init
This will cause Terraform to download the appropriate provider code and get everything ready. You should see something like:
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
- dev in readinet-server
- github in github
- prod in readinet-server
Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Installing hashicorp/aws v5.18.1...
- Installed hashicorp/aws v5.18.1 (signed by HashiCorp)
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Then, you can run
terraform plan
which will read the Terraform files, examine the current state of the AWS resources, and generate a report of any differences. It should look something like:
module.dev.data.aws_vpc.main: Reading...
module.prod.data.aws_ami.debian: Reading...
module.prod.data.aws_vpc.main: Reading...
module.github.aws_secretsmanager_secret.github_deploy_key: Refreshing state... [id=arn:aws:secretsmanager:us-west-2:414844587818:secret:github_deploy_key-gi9gC4]
module.prod.aws_eip.server: Refreshing state... [id=eipalloc-0d24f325dffb5aca0]
module.dev.aws_iam_role.webserver: Refreshing state... [id=dev-webserver]
aws_dynamodb_table.tf_state_lock: Refreshing state... [id=terraform-state-lock]
module.prod.aws_iam_role.webserver: Refreshing state... [id=prod-webserver]
aws_s3_bucket.tf_state: Refreshing state... [id=mbari-terraform-state]
aws_s3_bucket.readinet_images: Refreshing state... [id=mbari-readinet-images]
module.dev.data.aws_route53_zone.main: Reading...
module.prod.data.aws_route53_zone.main: Reading...
aws_s3_bucket.readinet_backups: Refreshing state... [id=mbari-readinet-backups]
module.prod.data.aws_ami.debian: Read complete after 1s [id=ami-08a1cc220ff6e579e]
module.github.data.aws_vpc.main: Reading...
module.prod.data.aws_vpc.main: Read complete after 1s [id=vpc-02ad470bcf2b29a3c]
module.github.aws_iam_role.build_server: Refreshing state... [id=build_server]
module.dev.data.aws_vpc.main: Read complete after 1s [id=vpc-02ad470bcf2b29a3c]
module.dev.data.aws_ami.debian: Reading...
module.github.data.aws_vpc.main: Read complete after 0s [id=vpc-02ad470bcf2b29a3c]
module.github.aws_sns_topic.notifications: Refreshing state... [id=arn:aws:sns:us-west-2:414844587818:mbari-readinet-notifications]
module.dev.data.aws_ami.debian: Read complete after 0s [id=ami-08a1cc220ff6e579e]
module.dev.aws_eip.server: Refreshing state... [id=eipalloc-060e46e9d3f4f1d95]
module.github.aws_security_group.build_server: Refreshing state... [id=sg-04dd09170d092d1a0]
module.prod.aws_security_group.webserver: Refreshing state... [id=sg-02e95afbb83f50a7f]
module.dev.aws_security_group.webserver: Refreshing state... [id=sg-02a0c48f17ffe663d]
module.dev.aws_iam_instance_profile.webserver: Refreshing state... [id=dev-webserver]
module.dev.aws_iam_role_policy.webserver: Refreshing state... [id=dev-webserver:dev-webserver]
module.prod.aws_iam_instance_profile.webserver: Refreshing state... [id=prod-webserver]
module.prod.aws_iam_role_policy.webserver: Refreshing state... [id=prod-webserver:prod-webserver]
module.prod.data.aws_route53_zone.main: Read complete after 1s [id=Z05725303067Q3QRAXOIR]
module.prod.aws_route53_record.vpn: Refreshing state... [id=Z05725303067Q3QRAXOIR_vpn.prod_A]
module.prod.aws_route53_record.server: Refreshing state... [id=Z05725303067Q3QRAXOIR_prod_A]
module.github.aws_sns_topic_subscription.skillsusa_email["nstocchero@owlcyberdefense.com"]: Refreshing state... [id=arn:aws:sns:us-west-2:414844587818:mbari-readinet-notifications:85f831ad-7ae3-4168-b1a6-d2fa0f2d28c4]
module.github.aws_iam_role_policy.build_server: Refreshing state... [id=build_server:build_server]
module.github.aws_iam_instance_profile.build_server: Refreshing state... [id=build_server]
module.dev.data.aws_route53_zone.main: Read complete after 1s [id=Z05725303067Q3QRAXOIR]
module.dev.aws_route53_record.server: Refreshing state... [id=Z05725303067Q3QRAXOIR_dev_A]
module.dev.aws_route53_record.vpn: Refreshing state... [id=Z05725303067Q3QRAXOIR_vpn.dev_A]
module.dev.aws_instance.server: Refreshing state... [id=i-062e15a4bc487c834]
module.prod.aws_instance.server: Refreshing state... [id=i-0ba82e244fc486a01]
module.dev.aws_eip_association.eip_assoc: Refreshing state... [id=eipassoc-0211170ae844e6a2f]
module.prod.aws_eip_association.eip_assoc: Refreshing state... [id=eipassoc-06aa9fc9cab0b69aa]
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.
If changes are made to the Terraform files, you should first examine what changes will actually be made by running
terraform plan
which will dump a list of the changes. If everything looks good, you can apply those changes by running
terraform apply
See the How to Add a New Server section later in the document for an example where a new web server was created.
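If you want to guarantee that what gets applied is exactly the plan that was reviewed, Terraform also supports saving the plan to a file; an optional variation on the workflow above:
# Save the reviewed plan to a file...
terraform plan -out=tfplan
# ...then apply exactly that saved plan
terraform apply tfplan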
Ansible
The actual deployment of the web application, database, etc. is done using Ansible from a developer's machine. There are two targets defined, dev and prod, in the devops/ansible/hosts file. For example, to deploy to the production server, run the following from the devops/ansible directory:
ansible-playbook -i hosts server.yml -l prod
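If you want to preview what a run would change before touching the server, Ansible's dry-run mode can be used; a minimal sketch, run from the same directory:
# Report what would change (and show diffs) without modifying the server
ansible-playbook -i hosts server.yml -l prod --check --diff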
How to Add a New Server
This section shows you an example of how you can create a new EC2 server instance that will have a full stack of the cloud hosted web application. First, in the devops/terraform/readinet.tf file, I copied the module "prod" section and pasted it directly below the prod section and changed the module label and env_name to test. The entry looked like:
module "test" {
source = "./readinet-server"
env_name = "test"
}
I then ran
terraform plan
from the devops/terraform directory and it returned a detailed plan of the changes that would be made. Then I ran
terraform apply
which then applied those changes to the AWS infrastructure. I monitored the EC2 instance pages and when the instance showed that it was up and running, it was time to install the application to the cloud. This is done using Ansible. Before running ansible commands, a few changes needed to be made. In the devops/ansible/hosts file, I added a new section at the bottom of the file using the IP address that was given to the new EC2 instance (found on the web console page). The final file looked like this:
[all:vars]
ansible_connection=ssh
ansible_user=admin
letsencrypt_email=kgomes@mbari.org
[dev]
54.71.128.24 domain_name=dev.readinet.org
[prod]
54.218.178.215 domain_name=prod.readinet.org
[test]
100.21.92.143 domain_name=test.readinet.org
Then to apply the Ansible recipe to the cloud, I ran
ansible-playbook -i hosts server.yml -l test
where test identifies the host to run the scripts against. Lastly, a few steps need to be taken to initialize the application. First, SSH to the new server using:
ssh admin@100.21.92.143
and connect to the correct docker container by running:
docker exec -it web /bin/bash
Once connected, change into the correct directory by running
cd /opt/web/readinet
and finally run:
./bootstrap.sh
to initialize the application. You should now be able to load the application in your web browser by going to the new domain_name (test.readinet.org in this example).
Note
We need to document how you would add a new ssh key to allow a new user to login to the cloud server. This entails creating a key in GitHub and then editing the devops/ansible/common/tasks/main.yml file and adding the user name to the section named Set authorized keys from github.
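For reference, GitHub publishes every user's public SSH keys at a fixed URL, which is presumably what that Ansible task pulls from. For example (the username below is a placeholder, not a real account):
# GitHub serves a user's public SSH keys as plain text at this URL
curl https://github.com/some-github-user.keys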
Update Cloud Application
To connect to a server, SSH using an SSH key (SSH keys must be used; passwords aren't accepted). The admin user should be used for SSH access, and the authorized keys can be found in /home/admin/.ssh/authorized_keys. A bootstrap of the first authorized users is made in the user_data declaration of the Terraform configuration.
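For example, to connect to the production server (the key path below is an assumption; use whatever private key matches an entry in authorized_keys):
# SSH to the production server as the admin user using key-based auth
ssh -i ~/.ssh/id_ed25519 admin@prod.readinet.org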
Before updating the application from your local machine, be sure to pull any changes from the GitHub repo. Then run Ansible using the following commands:
ansible-playbook -i hosts server.yml -l prod
ansible-playbook -i hosts server.yml -l dev
On the VPN/Headscale side, a single generic readinet user was created. However, this single-user setup isn't required, and any VPN user and identity management scheme can be put in place.
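As a sketch of what per-person user management might look like on the Headscale server (commands assume a recent Headscale release; older releases call these namespaces rather than users, and the username below is a placeholder):
# Run on the Headscale server: list existing VPN users
headscale users list
# Create an additional VPN user if a per-person setup is preferred
headscale users create alice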
To Start From Scratch
This needs to be documented, but should cover how to create a new instance of the cloud infrastructure from an empty AWS account by using both Terraform and Ansible.