Quick Start
Overview
These instructions stand up the database service stack in 4 steps and a working ECS cluster to process your video in 1 step.
What, what? Yes, it really is that simple.
Prerequisites
- Bash Shell for running setup and launch scripts
- Docker - Required to launch the services
- Git - Used to check out the code
- AWS account for launching a processing cluster in the Elastic Container Service (optional)
Note
To use the ECS cluster, your AWS account must have full permission to provision EC2 instances, create buckets, and set up job queues and definitions. See the deepsea-ai setup
instructions to set up AWS permissions.
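If you plan to launch the cluster, it can help to confirm that the AWS CLI is configured against the right account before deploying. These are standard AWS CLI commands, not project scripts:
aws configure                  # enter your access key, secret key, and default region
aws sts get-caller-identity    # shows the account and identity your credentials resolve to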
Check out the code and change to that working directory:
git clone https://github.com/mbari-org/deepsea-ai-backend
cd deepsea-ai-backend
Database service stack
This will create the database, GraphQL API, and workers in development mode and load a small sample database.
1 - Setup an environment
./bin/setup_env.sh
2 - Run
./bin/docker_start.sh postgres all
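To verify the stack came up, you can list the running containers with a standard Docker command; the exact container names depend on the compose configuration:
docker ps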
3 - Add an admin user
./bin/docker_add_user.sh admin@deepsea-ai.org gulpereel true
You should see token output similar to the following: "token": "eyJhbGciOiJIUzI1NiIsI ...."
4 - Use that token to load some test data
Copy the entire token; it is abbreviated in this example for brevity.
./bin/docker_load.sh eyJhbGciOiJIUzI1NiIsI... /srv/data/DEEPSEA_AI/tracks/V4244_20191205T165023Z_h264_1min.tracks.tar.gz
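Tip: to avoid pasting the long token into every command, you can export it once and reference it as a shell variable. A convenience sketch using the same load script as above:
export TOKEN=eyJhbGciOiJIUzI1NiIsI...    # paste the full token printed in step 3
./bin/docker_load.sh "$TOKEN" /srv/data/DEEPSEA_AI/tracks/V4244_20191205T165023Z_h264_1min.tracks.tar.gz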
The database and API should now be up, with a small sample loaded to experiment with in GraphiQL.
That's it! Now open http://localhost:4000/graphql and check out the example queries and mutations.
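If you prefer the command line, you can also query the endpoint directly with curl. The introspection query below is generic GraphQL and assumes nothing about the project's schema; it simply lists the available query fields:
curl -s http://localhost:4000/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ __schema { queryType { fields { name } } } }"}'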
Loading data
Data can be loaded in a number of ways:
- From a file, e.g. to load a test file in the docker image:
docker exec deepsea-ai-worker ts-node src/worker/cli/load.ts --force \
  --token eyJhbGciOiJIUzI1NiIsI... \
  --file /srv/data/DEEPSEA_AI/tracks/V4244_20191205T165023Z_h264_1min.tracks.tar.gz
- From data in an S3 bucket, e.g.:
docker exec deepsea-ai-worker ts-node src/worker/cli/load.ts --force \
  --token eyJhbGciOiJIUzI1NiIsI... \
  --s3 s3://my-track-bucket
Note: you must have permission to read the bucket (see the checks after this list).
- From a local file or S3 bucket through a queue. This is the preferred method, as it allows the load to be queued and processed in the background:
docker exec deepsea-ai-worker ts-node src/worker/cli/load.ts \
  --token eyJhbGciOiJIUzI1NiIsI... \
  --s3 s3://my-track-bucket \
  --queue
With the --queue option, the load runs in the background; see the Bull Dashboard for its status, or tail the worker logs as shown below.
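Two optional checks that may help with the S3 and queued methods; these are standard AWS CLI and Docker commands, not project scripts. The first confirms your credentials can read the bucket, the second tails the worker logs to follow a queued load:
aws s3 ls s3://my-track-bucket
docker logs -f deepsea-ai-worker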
Cleanup
To shut down the software, run
./bin/docker_stop.sh postgres all
Your data will not be deleted; it is stored in Docker volumes. To delete the data, run
docker volume rm deepsea-ai-backend_deepsea-ai-data deepsea-ai-backend_deepsea-ai-db deepsea-ai-backend_deepsea-ai-redis
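To confirm the volume names on your machine before removing them, list them with a standard Docker command:
docker volume ls | grep deepsea-ai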
Deploy Cluster
This deploys an autoscaling detection and tracking workflow that leverages the AWS Elastic Container Service (ECS). The deployment is simplified with the Cloud Development Kit (CDK), a much simpler tool for deploying to the cloud than navigating complex Kubernetes infrastructure or very long CloudFormation templates.
Important Points
- You don't get charged for having a cluster, only when you use resources in your cluster
- By default, all resources are removed when the stack is destroyed, so be sure to retrieve any videos or track data that you might need before tearing the stack down.
1 - Build your cluster
./bin/docker_start_cluster.sh
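Once the script finishes, you can confirm the cluster was created with a standard AWS CLI command; the cluster name it reports depends on the stack configuration:
aws ecs list-clusters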