ESP Portal Installation
These notes describe the procedure for installing the ESP portal application (and its supporting services) on the server esp-portal.mbari.org.
Preparation
- I submitted a ticket to IS to have a new VM created running CentOS 8 named esp-portal.mbari.org.
- Pat made sure podman was installed, along with a utility that aliases docker to podman for convenience.
- Joe set up SSL and the proxy passes that I requested for the application (I edited them slightly to add a trailing slash), and they look like:
<VirtualHost *:80>
    ServerAdmin kgomes@mbari.org
    DocumentRoot /var/www/html
    ServerName esp-portal.mbari.org
    RewriteEngine On
    RewriteCond %{HTTPS} !=on
    RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R=301,L]
    Timeout 600
    ProxyTimeout 600
    ProxyRequests Off
    ProxyPreserveHost On
</VirtualHost>

<VirtualHost esp-portal.mbari.org:443>
    ServerAdmin kgomes@mbari.org
    DocumentRoot /var/www/html
    ServerName esp-portal.mbari.org
    #----------------------------------------------------------------
    SSLEngine on
    SSLProtocol all
    #Wildcard Certificates
    SSLCertificateFile /usr/local/keystore/wcbundle.crt
    SSLCertificateKeyFile /usr/local/keystore/wcprivate.key
    SSLCertificateChainFile /usr/local/keystore/wcbundle.crt
    #----------------------------------------------------------------
    <Location /web/>
        ProxyPass http://esp-portal.mbari.org:8081/
        ProxyPassReverse http://esp-portal.mbari.org:8081/
    </Location>
    <Location /log-parser/>
        ProxyPass http://esp-portal.mbari.org:8080/
        ProxyPassReverse http://esp-portal.mbari.org:8080/
    </Location>
    <Location /ia-service/>
        ProxyPass http://esp-portal.mbari.org:8082/
        ProxyPassReverse http://esp-portal.mbari.org:8082/
    </Location>
    <Location /couchdb/>
        ProxyPass http://esp-portal.mbari.org:5984/
        ProxyPassReverse http://esp-portal.mbari.org:5984/
        Require ip 134.89.0.0/255.255.0.0
    </Location>
</VirtualHost>
- Pod creation: for the services to communicate with each other, they all need to run in the same 'pod' (containers in a pod share a network namespace, so they can reach each other on localhost). To expose all the ports outside the pod, I needed to create the pod first, with its port mappings, and then add the containers to it. I created the pod using:
podman pod create -p 5984:5984 -p 5432:5432 -p 8080:80 -p 8081:8081 -p 8082:8080 --name=esp-pod
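- You can verify that the pod exists and that the port mappings took effect with:
podman pod ps
podman pod inspect esp-pod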
CouchDB setup
- I created the /data/couchdb/data directory that will be used by CouchDB (via a podman volume mount).
- I created the CouchDB podman container by running:
podman create --name esp-couchdb --pod esp-pod -e COUCHDB_USER=espadm -e COUCHDB_PASSWORD=xxxxxxx -v /data/couchdb/data:/opt/couchdb/data couchdb:3.1
- That created the podman container in my local account. In order to set that container to run as a service on boot, I ran the following to generate a starting service definition:
podman generate systemd --name esp-couchdb > ~/.config/systemd/user/container-esp-couchdb.service
- This creates a skeleton service file that worked OK for starting and stopping manually, but did not work at reboot. I worked with Pat to fix that, and the final service definition looks like the following (note: 653 is my user ID, and the long ID number is the container ID generated by the container creation step):
# This service starts a CouchDB server that is a database for the ESP web portal
[Unit]
Description=ESP CouchDB Service
Wants=syslog.service
Documentation=man:podman-generate-systemd(1)
[Service]
Restart=always
ExecStart=/usr/bin/podman start esp-couchdb
ExecStop=/usr/bin/podman stop -t 10 esp-couchdb
KillMode=none
Type=forking
PIDFile=/run/user/653/overlay-containers/9c4243f5b7b847bbaa3485991dab97a8b6adf5e53f2572f92e405a54015e12af/userdata/conmon.pid
[Install]
WantedBy=default.target
- I then enabled the service using:
systemctl --user daemon-reload
systemctl --user enable container-esp-couchdb.service
- You can then start and stop the service using:
systemctl --user start container-esp-couchdb.service
systemctl --user stop container-esp-couchdb.service
- And check the status of the service using:
systemctl --user status container-esp-couchdb.service
- Once the CouchDB server is running, you can get to the web page for the server at https://esp-portal.mbari.org/couchdb/_utils/
- The login is the username and password that you set via the -e environment variables when the container was created.
- If you want to view the logs from CouchDB, you need the container ID which can be found using:
podman ps
- Then run:
podman logs 6a448dc9ee2c
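- If you want to follow the logs in real time instead, podman supports the -f (follow) flag:
podman logs -f 6a448dc9ee2c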
- When first starting CouchDB, there is normally a setup step that occurs if you are using it in cluster mode. This entails creating three databases (_users, _replicator, _global_changes), and the logs will spew warnings until that is done. Since we are using it in single-instance mode, we can just create these databases from the command line to quiet those warnings using:
curl -X PUT https://espadm:xxxxxxx@esp-portal.mbari.org/couchdb/_users
curl -X PUT https://espadm:xxxxxxx@esp-portal.mbari.org/couchdb/_replicator
curl -X PUT https://espadm:xxxxxxx@esp-portal.mbari.org/couchdb/_global_changes
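- You can confirm that all three databases now exist by listing the databases on the server:
curl https://espadm:xxxxxxx@esp-portal.mbari.org/couchdb/_all_dbs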
- I rebooted the server just to make sure everything worked OK. NOTE: there was a problem with this configuration: with systemctl --user, the service won't actually start until the user logs in. To fix this, Pat ran the following, which tells systemd to start the user's services at boot without requiring a login (it worked!):
loginctl enable-linger kgomes
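- You can verify that lingering is enabled for the account with:
loginctl show-user kgomes --property=Linger
which should print Linger=yes.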
PostgreSQL server
- I created a /data/postgresql/data directory and a /data/postgresql/docker-entrypoint-initdb.d directory.
- I cd'd to the docker-entrypoint-initdb.d directory and pulled down the SQL initialization script from Bitbucket using:
curl https://bitbucket.org/mbari/esp-apps/raw/d6a1b9f17e36edba81e98082d0075a8df0a5e48a/resources/init-esp-db.sh -o init-esp-db.sh
- I then created the podman container using:
podman create --name esp-postgresql --pod esp-pod -e POSTGRES_PASSWORD=xxxxxxx -e ESP_APPS_PG_USERNAME=espadm -e ESP_APPS_PG_PASSWORD=xxxxxxx -v /data/postgresql/data:/var/lib/postgresql/data -v /data/postgresql/docker-entrypoint-initdb.d/init-esp-db.sh:/docker-entrypoint-initdb.d/init-esp-db.sh postgres:12.3
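- Once the container is up and running (after the service below is started), you can sanity-check the server and see which databases the init script created by running psql inside the container (the postgres superuser exists by default in this image):
podman exec -it esp-postgresql psql -U postgres -c '\l'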
- Then, I ran:
podman generate systemd --name esp-postgresql > ~/.config/systemd/user/container-esp-postgresql.service
- That gave me a starting point with the correct container ID and user ID. I then edited the file to look like:
# This service starts a PostgreSQL server for the ESP web portal
[Unit]
Description=ESP PostgreSQL Service
Documentation=man:podman-generate-systemd(1)
[Service]
Restart=always
ExecStart=/usr/bin/podman start esp-postgresql
ExecStop=/usr/bin/podman stop -t 10 esp-postgresql
KillMode=none
Type=forking
PIDFile=/run/user/653/overlay-containers/102c752005e45ba4e5f69254add239220e9645c52eef18e389db358fdf00eaf1/userdata/conmon.pid
[Install]
WantedBy=default.target
- Finally, I ran the following to enable it at boot time:
systemctl --user enable container-esp-postgresql.service
- Pat had to configure the firewall to allow access to ports 443, 5984 (CouchDB), and 5432 (PostgreSQL).
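- I don't have the exact commands Pat used, but with firewalld on CentOS 8 the configuration would look something like:
sudo firewall-cmd --permanent --add-port=443/tcp
sudo firewall-cmd --permanent --add-port=5984/tcp
sudo firewall-cmd --permanent --add-port=5432/tcp
sudo firewall-cmd --reload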
ESP Web Log Parser
- Next, I wanted to set up the web app that allows users to submit .log files for conversion to .out files.
- The first thing I needed to do was check out the ESP2Gscript project, since it's a private project and is needed by the web parsing script. I put it in the /data directory by running the following from /data (Pat had to install git first):
git clone https://github.com/MBARI-ESP/ESP2Gscript.git
- This created the /data/ESP2Gscript directory with the most recent master branch of the code; this directory will then be mounted in the log-parser container.
- I then created directories for the web server logs and the uploaded log files: /data/esp-web-log-parser/logs and /data/esp-web-log-parser/uploads. NOTE: For some strange reason, I had to run chmod a+w on /data/esp-web-log-parser/uploads so that the cgi-bin script could write the uploaded file. After that everything worked.
- I then needed to create the container from the image, so I ran:
podman create --name esp-web-log-parser --pod esp-pod -v /data/esp-web-log-parser/logs:/var/log/httpd -v /data/esp-web-log-parser/uploads:/data/uploads -v /data/ESP2Gscript:/home/esp/ESP2Gscript mbari/esp-web-log-parser
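- NOTE: because /data/ESP2Gscript is bind-mounted into the container, you can pick up later changes to ESP2Gscript by updating the checkout in place; the running container sees the updated files:
cd /data/ESP2Gscript
git pull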
- I then ran the following to generate a starting point for the esp-web-log-parser service:
podman generate systemd --name esp-web-log-parser > ~/.config/systemd/user/container-esp-web-log-parser.service
- I then edited the .service file to look like:
# This service runs a container that provides an HTTP service to parse ESP log files
[Unit]
Description=ESP Web Log Parser Service
Documentation=man:podman-generate-systemd(1)
[Service]
Restart=always
ExecStart=/usr/bin/podman start esp-web-log-parser
ExecStop=/usr/bin/podman stop -t 10 esp-web-log-parser
KillMode=none
Type=forking
PIDFile=/run/user/653/overlay-containers/115f010b5462a7652e472ae915f11be8cb7adf6867c55cc69195449168f015a1/userdata/conmon.pid
[Install]
WantedBy=default.target
- I started it using:
systemctl --user start container-esp-web-log-parser.service
- And it’s working!
- I enabled it to start on boot with:
systemctl --user enable container-esp-web-log-parser.service
- Another step closer!
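- As a quick smoke test, you can check that the parser answers through the Apache proxy (this just checks the proxied root; the exact cgi-bin path may differ):
curl -I https://esp-portal.mbari.org/log-parser/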
ESP IA Server
- This is the image analysis service that Brian created
- I created the container using:
podman create --name esp-ia-server --pod esp-pod mbari/esp-ia-server
- I then created the skeleton service using:
podman generate systemd --name esp-ia-server > ~/.config/systemd/user/container-esp-ia-server.service
- I edited it to look like the following:
# This is the service that starts up the ESP Image Analysis service
[Unit]
Description=ESP Image Analysis Service
Documentation=man:podman-generate-systemd(1)
[Service]
Restart=always
ExecStart=/usr/bin/podman start esp-ia-server
ExecStop=/usr/bin/podman stop -t 10 esp-ia-server
KillMode=none
Type=forking
PIDFile=/run/user/653/overlay-containers/c193aa9f63a76350e336ff9399232c8d92cc66c922278cb1b901f9d34094f780/userdata/conmon.pid
[Install]
WantedBy=default.target
- I enabled the service using:
systemctl --user enable container-esp-ia-server.service
- And lastly, I started it using:
systemctl --user start container-esp-ia-server.service
- It works! Yet another step closer!
ESP Portal
- The last piece! This is the container that runs the application server and sets up jobs to sync FTP sites and parse ESP files.
- I first needed to create the directories where everything would go so I created the following:
/data/esp-portal/data
/data/esp-portal/data/instances
/data/esp-portal/data/tmp
/data/esp-portal/logs
- I then needed to grab the config.js.template file from BitBucket by running:
cd /data/esp-portal
curl https://bitbucket.org/mbari/esp-apps/raw/d6a1b9f17e36edba81e98082d0075a8df0a5e48a/server/config.js.template -o config.js
- I then edited the config file to configure the location of the data directories and the log directories inside the container. I also edited the URLs and the hostnames to reflect how the container would need to look for the other resources inside the pod. The final script looks like (TODO kgomes - put script here)
SCRIPT PLACEHOLDER
- Next, I created the container using:
podman create --name esp-portal --pod esp-pod -v /data/esp-portal/config.js:/opt/esp/server/config.js -v /data/esp-portal/data:/esp-portal/data -v /data/esp-portal/logs:/esp-portal/logs mbari/esp-apps
- I then created the systemd skeleton using:
podman generate systemd --name esp-portal > ~/.config/systemd/user/container-esp-web-portal.service
- I then edited the container-esp-web-portal.service file to look like:
# This is the ESP web portal server and services
[Unit]
Description=ESP Web Portal
Documentation=man:podman-generate-systemd(1)
[Service]
Restart=always
ExecStart=/usr/bin/podman start esp-portal
ExecStop=/usr/bin/podman stop -t 10 esp-portal
KillMode=none
Type=forking
PIDFile=/run/user/653/overlay-containers/a6770829a6d4a2fe17d91ca000856fc3060130e4697df548a71688e012c33a52/userdata/conmon.pid
[Install]
WantedBy=default.target
- I enabled the service using:
systemctl --user enable container-esp-web-portal.service
- And then started it using:
systemctl --user start container-esp-web-portal.service
- It started and the application showed up on https://esp-portal.mbari.org/web/ !!!
Final Steps
- I created an index.html page in /var/www/html that forwards the base URL to the web application; it contains one line:
<meta http-equiv="refresh" content="0; URL=https://esp-portal.mbari.org/web/" />
- And finally, I had Joe open up port 443 to the world and we tested the /couchdb/ proxy pass IP restriction. It all works! Finally!
- Then began the long migration process of moving old data to the new server. I wanted to do this because the log parsing is now much better and there are bug fixes, so I wanted to re-parse everything.
Updates
After working on the code for the ESP portal, you will eventually commit the changes and push them to the remote repository on BitBucket. This triggers an automated build that creates a Docker image on DockerHub from the latest code (HEAD). After it is built, you can deploy it on esp-portal.mbari.org by doing the following:
- Stop the ESP portal server by issuing:
systemctl --user stop container-esp-web-portal.service
- Next, find the container ID by running:
podman ps -a
- Copy the CONTAINER ID for the docker.io/mbari/esp-apps:latest IMAGE and then remove it by running (where XXXXXX is the ID of the container):
podman rm XXXXXX
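- Alternatively, since the container was created with a name, you can remove it by name instead of by ID:
podman rm esp-portal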
- In addition, you need to remove the image that was previously downloaded by first looking for the image ID using:
podman images
- Copy the IMAGE ID of the image that is from the repository docker.io/mbari/esp-apps and remove it by running (where XXXXXX is the ID of the image to remove):
podman rmi XXXXXX
- Now, generate a new container using the same command that was used during installation:
podman create --name esp-portal --pod esp-pod -v /data/esp-portal/config.js:/opt/esp/server/config.js -v /data/esp-portal/data:/esp-portal/data -v /data/esp-portal/logs:/esp-portal/logs mbari/esp-apps
- When this is complete, it will output a long ID, which is the new container ID. Copy that ID.
- Cd into the .config/systemd/user directory and edit the container-esp-web-portal.service file. In the line that starts with PIDFile, replace the section of the path that was the old container ID with the new container ID and save it.
- Reload the systemctl files by running:
systemctl --user daemon-reload
- Then start the service by running:
systemctl --user start container-esp-web-portal.service
- The updated image should now be running.
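- The whole update sequence can also be collected into a small shell script. Here is an untested sketch that assumes the names and paths used above; the PIDFile edit still has to be done by hand with the container ID that the script prints:
#!/bin/bash
# Sketch: update the ESP portal container to the latest mbari/esp-apps image.
set -e
systemctl --user stop container-esp-web-portal.service
# Remove the old container and image by name
podman rm esp-portal
podman rmi docker.io/mbari/esp-apps
# Recreate the container (podman pulls the latest image automatically)
NEW_ID=$(podman create --name esp-portal --pod esp-pod \
  -v /data/esp-portal/config.js:/opt/esp/server/config.js \
  -v /data/esp-portal/data:/esp-portal/data \
  -v /data/esp-portal/logs:/esp-portal/logs \
  mbari/esp-apps)
echo "New container ID: ${NEW_ID}"
echo "Update the PIDFile line in ~/.config/systemd/user/container-esp-web-portal.service,"
echo "then run: systemctl --user daemon-reload && systemctl --user start container-esp-web-portal.service"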