ODSS Test Installation on Ubuntu
So, right after the production change to CentOS 8 (not fully complete; everything is running off normandy8, but I still need to migrate some data scripts), we found out that CentOS 8 will not be supported after 2021. So the next step was to try to do all of this again on an Ubuntu machine. Here we go:
Host Setup
- Pat created an Ubuntu VM and named it odss-testu.shore.mbari.org
- I logged in using 'kgomes' and then tried to 'sudo -u odssadm -i' and there was no odssadm account so I submitted a ticket.
- Pat created the account and I am now able to do `sudo -u odssadm -i`. Running `id` as odssadm gives:

  ```
  uid=11775(odssadm) gid=1001(odssadm) groups=1001(odssadm)
  ```

- Pat also enabled 'lingering' using:

  ```
  root@odss-testu:/etc# loginctl enable-linger kgomes
  root@odss-testu:/etc# loginctl enable-linger odssadm
  root@odss-testu:/etc# ls /var/lib/systemd/linger/
  kgomes  odssadm
  ```

  This allows processes started by specific user accounts to keep running ('linger') after the user logs out.

- Pat also made the observation that you need the XDG_RUNTIME_DIR variable set in the user environments to get systemctl to work at the user level. So, in the kgomes `.bashrc` and the odssadm `.bash_profile` files, I added:

  ```
  export XDG_RUNTIME_DIR=/run/user/$(id -u)
  ```
Note
One thing I noticed is that the `.bashrc` does not seem to get sourced when I switch over using the `sudo -u odssadm -i` command. Pat figured out that the export needed to be placed in `.bash_profile`: `sudo -u odssadm -i` starts a login shell, which sources `.bash_profile` rather than `.bashrc` (`.bashrc` is only sourced for interactive non-login shells).
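A quick sanity check that the lingering/XDG_RUNTIME_DIR setup is working (just a sketch; the uid value comes from the `id` output above):

```
# switch to the service account (a login shell, so .bash_profile gets sourced)
sudo -u odssadm -i

# should print /run/user/11775 for odssadm
echo $XDG_RUNTIME_DIR

# should talk to the per-user systemd instance without errors
systemctl --user status
```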
Note
While not applicable to this installation because we are not using NFS mounts, it appears Ubuntu uses the same subuid method of managing UIDs inside containers. I will know more as I progress, but there is an /etc/subuid file with number ranges in Ubuntu.
- Next, I needed to get a web server set up with proxy passes and SSL. I submitted a ticket letting them know the configuration should look just like the proxy pass setup on normandy8 that is specified in the `/etc/httpd/conf.d/odss.conf` file.
- Pat replied by installing the Apache server and updated the ticket with the following:

  ```
  root@odss-testu:~# systemctl status apache2
  ● apache2.service - The Apache HTTP Server
       Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
       Active: active (running) since Mon 2021-03-15 13:29:42 PDT; 2min 13s ago
         Docs: https://httpd.apache.org/docs/2.4/
     Main PID: 10270 (apache2)
        Tasks: 55 (limit: 4579)
       Memory: 5.8M
       CGroup: /system.slice/apache2.service
               ├─10270 /usr/sbin/apache2 -k start
               ├─10271 /usr/sbin/apache2 -k start
               └─10272 /usr/sbin/apache2 -k start

  Mar 15 13:29:42 odss-testu.shore.mbari.org systemd[1]: Starting The Apache HTTP Server...
  Mar 15 13:29:42 odss-testu.shore.mbari.org systemd[1]: Started The Apache HTTP Server.

  root@odss-testu:~# ufw status
  Status: active

  To                         Action      From
  --                         ------      ----
  22/tcp                     ALLOW       Anywhere
  80/tcp                     ALLOW       Anywhere
  443/tcp                    ALLOW       Anywhere
  3389                       ALLOW       Anywhere
  Apache Full                ALLOW       Anywhere
  ```
- Pat then handed the ticket off to Joe, who responded with:

  Hi Kevin, odss-testu is Ubuntu 20.04 and normandy8 is CentOS 8. The apache versions on each differ somewhat in how they expect things to be configured, the directory structure of /etc/apache2 vs /etc/httpd/, and the context of commands and parameters also differ somewhat. So you might run into issues where the config files need to be modified. Anyway, I've configured odss-testu.shore.mbari.org to be similar to normandy8. Here are my notes:

  ```
  - a2enmod ssl
  - a2enmod rewrite
  - a2enmod proxy
  - a2ensite default-ssl
  - Upload *.shore.mbari.org SSL Cert and Private key to /usr/local/keystore/
  - vim /etc/apache2/sites-available/default-ssl.conf --> modify certificate values
  - copied odss.conf from /etc/httpd/conf.d/ on normandy8 to /etc/apache2/conf-available/ on odss-testu
  - Update hostname in /etc/apache2/conf-available/odss.conf
  - a2enconf odss
  - Restart apache
  ```
And the web site works with SSL now. Nice!
- While this install does not use any Atlas shares, I wanted it to be exactly consistent with production, so I edited the /etc/subuid file and changed it from:

  ```
  meteor:100000:65536
  ntp:165536:65536
  xrdp:231072:65536
  postfix:296608:65536
  _chrony:362144:65536
  odssadm:427680:65536
  jgomezroot:493216:65536
  ```

  to:

  ```
  meteor:100000:65536
  ntp:165536:65536
  xrdp:231072:65536
  odssadm:300001:65536
  postfix:365537:65536
  _chrony:431073:65536
  jgomezroot:496609:65536
  ```
- I also edited the `/etc/subgid` file and made the exact same changes. Then I rebooted. The machine rebooted OK and I could log in, so I must not have broken anything.
- I manually created the `/home/odssadm/.config/systemd/user` directory.
- Next, I created a `/data` directory and made odssadm the owner (I had to do this under my kgomes account using sudo).
- Before getting started, I wanted to make sure I copied over any 'repo' data, so I ran:

  ```
  scp -r kgomes@odss-test8.shore.mbari.org:/data/repo /data/repo
  ```
ERDDAP Installation
- The first thing to do was to copy the data for ERDDAP from the current test server (odss-test8) to the new one (odss-testu)
- From odss-testu, I cd'd into the /data directory and ran:

  ```
  scp -r kgomes@odss-test8.shore.mbari.org:/data/erddap /data/erddap
  ```
- Once the data was copied over, I needed to edit the `/data/erddap/content/erddap/setup.xml` file, changing the baseUrl from odss-test8 to odss-testu.
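  For reference, a quick way to make that change from the shell (just a sketch; it assumes odss-test8 only appears in values that should be changed):

  ```
  # back up setup.xml, then swap the hostname everywhere it appears
  cp /data/erddap/content/erddap/setup.xml /data/erddap/content/erddap/setup.xml.bak
  sed -i 's/odss-test8/odss-testu/g' /data/erddap/content/erddap/setup.xml
  ```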
- At MBARI, we like to use the BitstreamVeraSans font and also install the JetPlus.cpt palette. In order to do this, I copied the fonts from the current production machine and put them in the /home/odssadm/odss-components/erddap/fonts folder. I also copied over the JetPlus.cpt file to /home/odssadm/odss-components/erddap/JetPlus.cpt. Next, I needed to add the JetPlus palette to the messages.xml file inside the container, so I copied the default messages.xml from inside the container to /home/odssadm/odss-components/erddap/messages.xml and edited it to add 'JetPlus' to the list in the `<palettes>` tag. In order to get this into a container, I had to create a custom image, which I did by creating /home/odssadm/odss-components/erddap/Dockerfile with the following contents:

  ```
  # This Dockerfile builds a custom image of ERDDAP that contains some tweaks
  # we use in the ODSS

  # First, start with the stock Axiom ERDDAP image
  FROM axiom/docker-erddap:2.11

  # Add the BitstreamVeraSans Fonts
  COPY fonts/ /usr/local/openjdk-8/jre/lib/fonts/

  # Add the JetPlus palette
  COPY JetPlus.cpt /usr/local/tomcat/webapps/erddap/WEB-INF/cptfiles/JetPlus.cpt

  # Overwrite the messages.xml file to include the new palette
  COPY messages.xml /usr/local/tomcat/webapps/erddap/WEB-INF/classes/gov/noaa/pfel/erddap/util/messages.xml
  ```
- Then I built the image using:

  ```
  podman build -t mbari/odss-erddap .
  ```

- Then I could create the container using:

  ```
  podman create --security-opt label=disable --privileged --name erddap -p 8280:8080 -v /data/erddap/content/erddap:/usr/local/tomcat/content/erddap -v /data/erddap/bigParentDirectory:/data/erddap/bigParentDirectory -v /data/erddap/data:/data/erddap/data mbari/odss-erddap
  ```

- This returned an ID of `246e59630e7bd1e7d6acf11e0e17f193c707f7a0085345317736ea2ef199bbfc`.
- Now, to run this as a service, we need to create the systemd file by running:

  ```
  podman generate systemd --name erddap > ~/.config/systemd/user/container-erddap.service
  ```

- I then edited the .service file so it looks like:

  ```
  # container-erddap.service
  # autogenerated by Podman 2.1.1
  # Mon Dec 7 11:38:23 PST 2020
  # Updated by kgomes 2020-12-07: Updated description and changed Restart to always

  [Unit]
  Description=ODSS ERDDAP Server
  Documentation=man:podman-generate-systemd(1)
  Wants=network.target
  After=network-online.target

  [Service]
  Environment=PODMAN_SYSTEMD_UNIT=%n
  Restart=always
  ExecStart=/usr/bin/podman start erddap
  ExecStop=/usr/bin/podman stop -t 10 erddap
  ExecStopPost=/usr/bin/podman stop -t 10 erddap
  PIDFile=/run/user/11290/containers/overlay-containers/708f54b2e614c1bb9fcca2c74dc3af03fbc4eb34b9011b7dd8c51b7546332dc4/userdata/conmon.pid
  KillMode=none
  Type=forking

  [Install]
  WantedBy=multi-user.target default.target
  ```
- I then enabled the service using:

  ```
  systemctl --user daemon-reload
  systemctl --user enable container-erddap.service
  ```

- You can then start and stop the service using:

  ```
  systemctl --user start container-erddap.service
  systemctl --user stop container-erddap.service
  ```

- And check the status of the service using:

  ```
  systemctl --user status container-erddap.service
  ```
- I hit a snag here, and I think it was the Apache config: I got a server error when I tried to go to https://odss-testu.shore.mbari.org/erddap, so I put in a ticket. Joe fixed it the next day; I'm not sure what he did, but it's working now.
- I got an error because the container could not seem to write to /data/erddap/bigParentDirectory, so I opened it wide open using `chmod a+w -R /data/erddap/bigParentDirectory`. That worked, but it is probably not ideal and will likely cause issues on the machine with the mount. We will see. One option is to put the bigParentDirectory on local disk, which should also improve performance (Lucene). After that, it all worked. Nice!
- Even though it is not necessary for the test machine, I wanted to check that everything was running correctly from a subuid perspective, so I ran `podman top erddap huser user` to check the process mappings and it came out as:

  ```
  odssadm@odss-testu:/data$ podman top erddap huser user
  HUSER       USER
  301000      tomcat
  ```
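  That host UID lines up with the subuid range configured earlier; this is just the mapping arithmetic, assuming podman's default rootless behavior:

  ```
  # rootless podman UID mapping for odssadm (from /etc/subuid: odssadm:300001:65536)
  #   container UID 0          -> host UID 11775 (odssadm itself)
  #   container UID N (N >= 1) -> host UID 300001 + (N - 1)
  # tomcat runs as UID 1000 inside the container, so:
  #   300001 + (1000 - 1) = 301000   <- matches the HUSER column above
  ```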
- Perfect! All this worked and ERDDAP is now available at https://odss-testu.shore.mbari.org/erddap.
THREDDS Installation
- The first thing to do was to copy the data for THREDDS from the current test server (odss-test8) to the new one (odss-testu)
- From odss-testu, I cd'd into the /data directory and ran:

  ```
  scp -r kgomes@odss-test8.shore.mbari.org:/data/thredds /data/thredds
  ```
- I got some 'permission denied' errors, but it looks like they were for files that belong to the THREDDS process (log files, etc.), so I'm not too worried about it yet.
- Now, to create the container the way I needed it, I ran:

  ```
  podman create --security-opt label=disable --name thredds --privileged -p 8180:8080 -v /data/thredds:/usr/local/tomcat/content/thredds -v /data:/data unidata/thredds-docker:4.6.15
  ```
- Now, to generate the startup script, I ran:

  ```
  podman generate systemd --name thredds > ~/.config/systemd/user/container-thredds.service
  ```

- I then edited the service file to convert it to:

  ```
  # container-thredds.service
  # autogenerated by Podman 3.0.1
  # Thu Apr 15 13:48:52 PDT 2021

  [Unit]
  Description=ODSS THREDDS Server
  Documentation=man:podman-generate-systemd(1)
  Wants=network.target
  After=network-online.target

  [Service]
  Environment=PODMAN_SYSTEMD_UNIT=%n
  Restart=always
  TimeoutStopSec=70
  ExecStart=/usr/bin/podman start thredds
  ExecStop=/usr/bin/podman stop -t 10 thredds
  ExecStopPost=/usr/bin/podman stop -t 10 thredds
  PIDFile=/run/user/11775/containers/overlay-containers/369ef18fa2909bb48f3a986d616b7a26ac07996e97fc6ee7130a7b7cc7b71a20/userdata/conmon.pid
  Type=forking

  [Install]
  WantedBy=multi-user.target default.target
  ```
- I then enabled the service using:

  ```
  systemctl --user daemon-reload
  systemctl --user enable container-thredds.service
  ```

- You can then start and stop the service using:

  ```
  systemctl --user start container-thredds.service
  systemctl --user stop container-thredds.service
  ```

- And check the status of the service using:

  ```
  systemctl --user status container-thredds.service
  ```
- As a check to make sure everything was running correctly, I ran `podman top thredds huser user` to check the process mappings and it came out as:

  ```
  odssadm@odss-testu:~/.config/systemd/user$ podman top thredds huser user
  HUSER       USER
  301000      tomcat
  ```
- Nice, and now the server is available at https://odss-testu.shore.mbari.org/thredds.
Mapserver Installation
- The first thing to do was to copy the data for mapserver from the current test server (odss-test8) to the new one (odss-testu)
- From odss-testu, I cd'd into the /data directory and ran:

  ```
  scp -r kgomes@odss-test8.shore.mbari.org:/data/mapserver /data/mapserver
  ```
- The default EPSG file for mapserver does not have the Google projection in it. To fix this, I took a copy of the epsg file from the running mapserver container (located at `/usr/share/proj/epsg`) and put it in the odss-components/mapserver directory. I then edited the file and added the following lines:

  ```
  # Google Earth / Virtual Globe Mercator
  <900913> +proj=merc +a=6378137 +b=6378137 +lat_ts=0.0 +lon_0=0.0 +x_0=0.0 +y_0=0 +k=1.0 +units=m +no_defs
  ```
I also figured out that we must have installed some custom fonts on the production machine so I copied all the fonts from /usr/share/fonts on normandy to the odss-components/mapserver/fonts directory.
- Then, to create an image with my customizations, I created a file named Dockerfile in the odss-components/mapserver directory with the following contents:

  ```
  # This creates a Docker image from camptocamp/mapserver that overrides the start-server file

  # First define the base image that this will build from
  FROM camptocamp/mapserver:7.6

  # Add the EPSG file with the Google projection
  COPY epsg /usr/share/proj/epsg

  # Add more fonts
  COPY fonts/ /usr/share/fonts/

  # Change the www-data user from uid 33 to 1000 (and gid too)
  RUN groupmod -g 1000 www-data && \
      usermod -u 1000 www-data
  ```
- Then to build the image, I ran:

  ```
  podman build -t mbari/odss-mapserver .
  ```

- Now to create the container, I ran:

  ```
  podman create --security-opt label=disable --name mapserver --privileged -p 8080:80 -v /data/mapserver:/data/mapserver mbari/odss-mapserver
  ```
- Now, to generate the startup script, I ran:

  ```
  podman generate systemd --name mapserver > ~/.config/systemd/user/container-mapserver.service
  ```

- I then edited the service file to convert it to:

  ```
  # container-mapserver.service
  # autogenerated by Podman 3.0.1
  # Fri Apr 16 07:27:54 PDT 2021

  [Unit]
  Description=ODSS Mapserver Service
  Documentation=man:podman-generate-systemd(1)
  Wants=network.target
  After=network-online.target

  [Service]
  Environment=PODMAN_SYSTEMD_UNIT=%n
  Restart=always
  TimeoutStopSec=70
  ExecStart=/usr/bin/podman start mapserver
  ExecStop=/usr/bin/podman stop -t 10 mapserver
  ExecStopPost=/usr/bin/podman stop -t 10 mapserver
  PIDFile=/run/user/11775/containers/overlay-containers/1c74d4e72ef53a6cb1315db03c395e5ed18463aa7c7f02303c443bab63ced4cb/userdata/conmon.pid
  Type=forking

  [Install]
  WantedBy=multi-user.target default.target
  ```
- I then enabled the service using:

  ```
  systemctl --user daemon-reload
  systemctl --user enable container-mapserver.service
  ```

- You can then start and stop the service using:

  ```
  systemctl --user start container-mapserver.service
  systemctl --user stop container-mapserver.service
  ```

- And check the status of the service using:

  ```
  systemctl --user status container-mapserver.service
  ```
- As a check to make sure everything was running correctly, I ran `podman top mapserver huser user` to check the process mappings and it came out as:

  ```
  odssadm@odss-testu:~/.config/systemd/user$ podman top mapserver huser user
  HUSER       USER
  11775       root
  301000      www-data
  301000      www-data
  301000      www-data
  ```
- Now, mapserver is available at https://odss-testu.shore.mbari.org/cgi-bin/mapserv and it WORKS!
MBARI Tracking Database
- As a first step, I needed to define a location on the host where the PostGIS container would store its data, and I wanted to replicate how it's done in production. In production, Pat creates a special mount to high-performance disks for this, so it needs to be mounted somewhere, and /pgdata made the most sense. So, on odss-testu, I created the /pgdata directory (I needed to use the kgomes account as it has sudo privs) and gave ownership to odssadm.
- In order to migrate the database, I needed to have the backup file available to the PostGIS container. So, while I will be using /pgdata for the database storage, I still need the PostGIS container to be able to read from /data because that is where I will be backing up the production database to. In order for that to work, I created a Dockerfile in the odss-components/mbaritracking folder that looks like this:

  ```
  # This creates a Docker image from the PostGIS image and then changes the uid/gid of the postgres account
  FROM postgis/postgis:13-3.0

  # Change the postgres user from uid 999 to 1000 (and gid too)
  RUN groupmod -g 1000 postgres && \
      usermod -u 1000 postgres
  ```
- Then I built the new image using:

  ```
  podman build -t mbari/mbari-tracking .
  ```

- Then, to create the container, I ran:

  ```
  podman create --name mbaritracking -p 5432:5432 -e POSTGRES_PASSWORD=xxxxxxx -v /pgdata:/var/lib/postgresql/data -v /data:/data mbari/mbari-tracking
  ```
- Now to create the service, I ran:

  ```
  podman generate systemd --name mbaritracking > ~/.config/systemd/user/container-mbaritracking.service
  ```

- I then edited the container-mbaritracking.service file to look like:

  ```
  # container-mbaritracking.service
  # autogenerated by Podman 3.0.1
  # Fri Apr 16 08:17:24 PDT 2021

  [Unit]
  Description=MBARI Tracking Database
  Documentation=man:podman-generate-systemd(1)
  Wants=network.target
  After=network-online.target

  [Service]
  Environment=PODMAN_SYSTEMD_UNIT=%n
  Restart=always
  TimeoutStopSec=70
  ExecStart=/usr/bin/podman start mbaritracking
  ExecStop=/usr/bin/podman stop -t 10 mbaritracking
  ExecStopPost=/usr/bin/podman stop -t 10 mbaritracking
  PIDFile=/run/user/11775/containers/overlay-containers/19d9a47e7e5d1a64f0fc0ca90ca2eb9af4a514eca6335455ca4ab6114ca3648e/userdata/conmon.pid
  Type=forking

  [Install]
  WantedBy=multi-user.target default.target
  ```
- I then enabled the service using:

  ```
  systemctl --user daemon-reload
  systemctl --user enable container-mbaritracking.service
  ```

- You can then start and stop the service using:

  ```
  systemctl --user start container-mbaritracking.service
  systemctl --user stop container-mbaritracking.service
  ```

- And check the status of the service using:

  ```
  systemctl --user status container-mbaritracking.service
  ```
- Before moving the current odss-test8 DB over, the database needs to be created here. The best thing is to connect to the container using `podman exec -it mbaritracking /bin/bash`, switch to the postgres account using `su postgres`, and then run `psql`. Run the following statements to create the mbaritracking database:

  ```
  CREATE USER odssadm WITH PASSWORD 'xxxxxxx';
  CREATE DATABASE mbaritracking WITH OWNER=odssadm TEMPLATE=template_postgis;
  \c mbaritracking
  GRANT ALL ON ALL TABLES IN SCHEMA PUBLIC TO odssadm;
  ```
- Now, for the next set of steps, I want to create RabbitMQ queues for the Ubuntu server and bind them to the exchanges at the same time that I stop the consumers on odss-test8. I do this so both sets of queues will start filling with the same location messages; I can then migrate the database at my leisure, restart the consumers, and both databases should then be in sync.
- I logged into the RabbitMQ admin page and manually created 6 queues that were connected to the vhost 'trackingvhost'. The queue names were:
- odss-testu_persist_ais (to be bound to ais/ais)
- odss-testu_persist_auvs (to be bound to auvs/auvs)
- odss-testu_persist_drifters (to be bound to drifters/drifters)
- odss-testu_persist_gliders (to be bound to gliders/gliders)
- odss-testu_persist_moorings (to be bound to moorings/moorings)
- odss-testu_persist_ships (to be bound to ships/ships)
- Then, I clicked on "Queues" and for each queue that I just created, I clicked on that queue and created a binding that matched the platform type.
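  For reference, the same queues and bindings could probably be created from the command line with rabbitmqadmin instead of the admin UI (just a sketch; I did it through the UI, and the exchange/routing-key pairs are the ones noted above, e.g. ais/ais):

  ```
  # assumes rabbitmqadmin is installed and the 'tracking' user can manage trackingvhost
  for p in ais auvs drifters gliders moorings ships; do
    rabbitmqadmin -H messaging.shore.mbari.org -u tracking -p xxxxxxx -V trackingvhost \
      declare queue name=odss-testu_persist_${p} durable=true
    rabbitmqadmin -H messaging.shore.mbari.org -u tracking -p xxxxxxx -V trackingvhost \
      declare binding source=${p} destination=odss-testu_persist_${p} destination_type=queue routing_key=${p}
  done
  ```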
- I waited for a moment to see if the queues started to accrue messages, then I ssh'd into odss-test8 and ran the following to stop the tracking consumers there:

  ```
  systemctl --user stop container-tracking-clients.service
  ```
- Now, the queues for both odss-test8 and odss-testu started filling. Yes, the ones for odss-testu will have messages that were already processed by odss-test8, but better to have duplicates than to miss any.
- Next, I wanted to back up the odss-test8 DB and migrate that to odss-testu, so I ssh'd into odss-test8, ran `podman ps` to get the container ID, and then ran `podman exec -it c49093a6a040 /bin/bash`, which gave me a root shell inside the container.
- I switched over to the postgres account using `su postgres`.
- I cd'd into `/var/lib/postgresql/data` and there was the old mbaritracking.backup file from the last time I migrated. I deleted it using `rm mbaritracking.backup` (as well as removing the mbaritracking.backup.lst file; hope that is OK :)).
- Now I need to dump the DB on odss-test8, so I ran:

  ```
  pg_dump mbaritracking > /var/lib/postgresql/data/mbaritracking.sql
  ```

- This takes a REALLY long time as it's a LOT of data.
- Once it was complete, I exited the postgres account and the container itself. The file is now located at /data/mbaritracking/postgresql/data/mbaritracking.sql on the host, but the host account does not have permission to see it. From the 'kgomes' account I ran `sudo chmod a+r /data/mbaritracking/postgresql/data`, which gives everyone read permission (and the group it belongs to is odssadm, so the odssadm account should be able to read it). I also ran `sudo chmod a+x /data/mbaritracking/postgresql/data` so I could cd into it. With these permissions, I should be able to scp the backup to odss-testu.
- I ssh'd into odss-testu as kgomes, switched over to odssadm using `sudo -u odssadm -i` (since it has write privs on /data) and ran:

  ```
  scp kgomes@odss-test8.shore.mbari.org:/data/mbaritracking/postgresql/data/mbaritracking.sql /data/mbaritracking.sql
  ```
- Now I needed to get into the container and restore the database into the tracking DB. I ran `podman ps` to get the container ID, then ran `podman exec -it 19d9a47e7e5d /bin/bash` to get inside the container. In order for the postgres account to be able to write the manifest during the restore, I needed to open up the /data directory by running `chmod a+w /data` as root. I then switched over to the postgres account using `su postgres`.
- Then, I ran:

  ```
  psql mbaritracking < /data/mbaritracking.sql
  ```

- This also takes a very long time as there is a lot of data to restore. It worked, and I removed the mbaritracking.sql file to save disk space.
- OK, now I need to re-enable the tracking consumers on odss-test8 and build them on odss-testu
- I ssh'd into odss-test8 and restarted the tracking consumers as the odssadm user using:

  ```
  systemctl --user start container-tracking-clients.service
  ```
- That immediately drained the odss-test8 queues in RabbitMQ.
MBARI Tracking Consumers
- Now it is time to create the Python consumers that will read the messages off RabbitMQ and stuff them into the tracking DB. There are quite a few environment variables for this container, so it was easier to put them in a file and point to that file when creating the container. In the /home/odssadm/odss-components/mbaritracking directory, I created a file named .env-tracking and put the following in it:

  ```
  # These are the environment variable settings for the tracking database AMQP clients
  DJANGO_SETTINGS_MODULE=settings
  TRACKING_ADMIN_NAME="Kevin Gomes"
  TRACKING_ADMIN_EMAIL=kgomes@mbari.org
  TRACKING_HOME=/opt/MBARItracking
  TRACKING_DATABASE_ENGINE=django.contrib.gis.db.backends.postgis
  TRACKING_DATABASE_NAME=mbaritracking
  TRACKING_DATABASE_USER=odssadm
  TRACKING_DATABASE_PASSWORD=xxxxxxx
  TRACKING_DATABASE_HOST=odss-testu.shore.mbari.org
  TRACKING_DATABASE_PORT=5432
  TRACKING_SECRET_KEY="dumb-key"
  AMQP_QUEUE_TYPE=ships,ais,drifters,auvs,moorings,gliders
  AMQP_HOSTNAME=messaging.shore.mbari.org
  AMQP_PORT=5672
  AMQP_USERNAME=tracking
  AMQP_PASSWORD=xxxxxxxxx
  AMQP_VHOST=trackingvhost
  MBARI_TRACKING_HOME_HOST=/data/mbaritracking
  MBARI_TRACKING_HOST_NAME=odss-testu
  PYTHONPATH=/opt/MBARItracking/amqp
  ALLOWED_HOSTS=*
  ```
- I manually created the `/data/mbaritracking/client/logs` directory that will be used to write the logs from the consumers.
- That should be all I need to start the consumers, so I then ran:

  ```
  podman create --name tracking-clients --env-file /home/odssadm/odss-components/mbaritracking/.env-tracking -p 8081:80 -v /data/mbaritracking/client:/data mbari/tracking:1.0.5
  ```
- This just creates the container but does not run it yet. To get the container to run as a service, I had to create the systemd file; Podman helps you create it by running:

  ```
  podman generate systemd --name tracking-clients > ~/.config/systemd/user/container-tracking-clients.service
  ```

- I then had to make a couple of small changes, but the final systemd file was:

  ```
  # container-tracking-clients.service
  # autogenerated by Podman 3.1.0
  # Mon Apr 19 10:29:25 PDT 2021

  [Unit]
  Description=MBARItracking Consumers
  Documentation=man:podman-generate-systemd(1)
  Wants=network.target
  After=network-online.target
  RequiresMountsFor=/home/odssadm/.local/share/containers/storage /run/user/11775/containers

  [Service]
  Environment=PODMAN_SYSTEMD_UNIT=%n
  Restart=always
  TimeoutStopSec=70
  ExecStart=/usr/bin/podman start tracking-clients
  ExecStop=/usr/bin/podman stop -t 10 tracking-clients
  ExecStopPost=/usr/bin/podman stop -t 10 tracking-clients
  PIDFile=/run/user/11775/containers/overlay-containers/4670b83bd013f63dc38d40036d7c2d7e276d28b13017c259b639117c612c8834/userdata/conmon.pid
  Type=forking

  [Install]
  WantedBy=multi-user.target default.target
  ```
- I then enabled the service using:

  ```
  systemctl --user daemon-reload
  systemctl --user enable container-tracking-clients.service
  ```

- You can then start and stop the service using:

  ```
  systemctl --user start container-tracking-clients.service
  systemctl --user stop container-tracking-clients.service
  ```

- And check the status of the service using:

  ```
  systemctl --user status container-tracking-clients.service
  ```
- Damn, that actually worked and all the odss-testu queues drained! OK, so I have odss-testu running in parallel with odss-test8 and both tracking DBs are being updated with identical information, so that is GREAT!
MongoDB
- The ODSS itself uses a catalog that is served by MongoDB. So we need to migrate the data from the old MongoDB to the new.
- Before starting, I created the /data/mongodb directory on odss-testu as user odssadm.
- Like with PostGIS, I wanted to run the mongodb user inside the container as UID 1000 so it is consistent with the way I will need to do it on the production machine due to the shared UID issue. To do this, I created a Dockerfile in odss-components/mongodb and entered:

  ```
  # This creates a Docker image from the MongoDB image and then changes the uid/gid of the mongodb account
  FROM mongo:4.4

  # Change the mongodb user from uid 999 to 1000 (and gid too)
  RUN groupmod -g 1000 mongodb && \
      usermod -u 1000 mongodb
  ```
- Now to create the image, I ran:

  ```
  podman build -t mbari/odss-mongodb .
  ```

- Now, to create the container, I ran:

  ```
  podman create --security-opt label=disable --name mongodb -p 27017:27017 -e MONGO_INITDB_ROOT_USERNAME=mongoadm -e MONGO_INITDB_ROOT_PASSWORD=xxxxx -v /data/mongodb:/data/db mbari/odss-mongodb
  ```
- Then I had to create the systemd file using:

  ```
  podman generate systemd --name mongodb > ~/.config/systemd/user/container-mongodb.service
  ```

- I then edited that file to change it to:

  ```
  # container-mongodb.service
  # autogenerated by Podman 3.1.0
  # Mon Apr 19 10:43:15 PDT 2021

  [Unit]
  Description=ODSS MongoDB Server
  Documentation=man:podman-generate-systemd(1)
  Wants=network.target
  After=network-online.target
  RequiresMountsFor=/home/odssadm/.local/share/containers/storage /run/user/11775/containers

  [Service]
  Environment=PODMAN_SYSTEMD_UNIT=%n
  Restart=always
  TimeoutStopSec=70
  ExecStart=/usr/bin/podman start mongodb
  ExecStop=/usr/bin/podman stop -t 10 mongodb
  ExecStopPost=/usr/bin/podman stop -t 10 mongodb
  PIDFile=/run/user/11775/containers/overlay-containers/0ab58e69669781747e34d029b39be0e379991928036c87e56c82d7908b6f41e3/userdata/conmon.pid
  Type=forking

  [Install]
  WantedBy=multi-user.target default.target
  ```
- Then, I needed to enable the service using:

  ```
  systemctl --user daemon-reload
  systemctl --user enable container-mongodb.service
  ```

- You can then start and stop the service using:

  ```
  systemctl --user start container-mongodb.service
  systemctl --user stop container-mongodb.service
  ```

- And check the status of the service using:

  ```
  systemctl --user status container-mongodb.service
  ```
- After starting the service, I connected to the container using:

  ```
  podman exec -it mongodb /bin/bash
  ```
- In another window, I logged into odss-test8 as myself, then changed to odssadm using `sudo -u odssadm -i`. I then ran:

  ```
  mkdir /data/tmp
  mkdir /data/tmp/mongobackup
  ```

- This created the directories on the host where I will dump the mongo database.
- Now, in order to dump the database, I had to get into the container by running:

  ```
  podman exec -it mongodb /bin/bash
  ```
- That put me in the mongodb container as root on odss-test8. Then, to back up the mongo database, I ran:

  ```
  mongodump --db=odss --out=/tmp/odss.backup --uri="mongodb://odssadm:xxxxxx@localhost/odss"
  cd /tmp
  tar -cvf odss.backup.tar odss.backup/
  ```
- This created a tar of the ODSS db inside the container. To copy the file out to the host, I exited the container using `exit` and then ran:

  ```
  podman cp mongodb:/tmp/odss.backup.tar /data/tmp/mongobackup/odss.backup.tar
  ```
- Now I need to get that backup over to odss-testu, so on odss-testu, as odssadm (outside the container), I ran:

  ```
  mkdir /data/tmp
  mkdir /data/tmp/mongobackup
  scp kgomes@odss-test8.shore.mbari.org:/data/tmp/mongobackup/odss.backup.tar /data/tmp/mongobackup/odss.backup.tar
  # (run from /data/tmp/mongobackup so the relative path resolves)
  podman cp odss.backup.tar mongodb:/tmp/odss.backup.tar
  ```
- That put the backup tar file inside the mongodb container. The next step was to actually run the restore, so first I got inside the container by running `podman exec -it mongodb /bin/bash`. Then, to do the restore, I ran:

  ```
  cd /tmp
  tar -xvf odss.backup.tar
  cd odss.backup
  mongorestore --db odss --username mongoadm --authenticationDatabase=admin odss
  ```
- This created the odss database in the local mongodb instance. So I am really close, but I need to create a user that has full rights to the DB and can then be used by the ODSS application. Traditionally, I've used the username odssadm. So, to create that user and give it full permissions on the odss database, inside the container I entered the mongo shell as user mongoadm and then ran:

  ```
  mongo -u mongoadm -p xxxxx
  use odss
  db.createUser({user:"odssadm",pwd:"xxxxxx",roles:[{role:"dbOwner",db:"odss"}]})
  quit()
  ```
- I did have to put in a ticket to add port 27017 (and 5432 for PostgreSQL) to the firewall so I could get to the databases from my Studio 3T application. It worked after that.
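  I don't know exactly what IS ran for that ticket, but on this host (which uses ufw, per the output earlier) it would presumably be something like:

  ```
  # hypothetical ufw rules to open MongoDB and PostgreSQL (IS handled the actual change)
  sudo ufw allow 27017/tcp
  sudo ufw allow 5432/tcp
  ```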
- Just to clean up, inside the container I removed the /tmp/odss.backup directory and the odss.backup.tar file. On the host, I removed the /data/tmp directory since I did not need it anymore.
- I rebooted just to make sure everything came back online.
Support Scripts
Danelle had written some supporting scripts to help with managing the ERDDAP and THREDDS data sets so that we could replicate them on the boat more easily. I migrated them to a Docker deployment and added the configuration to the Git repo in the resources/deployment/odss-utils directory. To get this all set up:
- ssh'd into the odss-testu machine as kgomes
- ran `sudo -u odssadm -i` to change over to the odssadm user
- ran `cd` to get to the home directory
- ran `cd odss` to get to the odss directory
- ran `git pull` to get the new files I had checked in
- ran `cd /data` to get to the /data directory
- ran `mkdir utils` to create the /data/utils directory
- ran `cd utils` to get into that directory
- ran `mkdir logs` to create a logs directory
- ran `vi mapdatasets.json` to create a new JSON configuration file for the utility scripts. I copied and pasted an example that I had been working with and edited it to fit the deployment on the odss-testu machine.
- ran `cd` to get back to the home directory and then ran:

  ```
  podman create --security-opt label=disable --name odss-utils -v /data/erddap/data:/data/erddap/data -v /data/mapserver:/data/mapserver -v /data/utils/logs:/var/log -v /data/utils/mapdatasets.json:/mapdatasets.json mbari/odss-utils:1.0.9
  ```
- I then ran:

  ```
  podman generate systemd --name odss-utils > ~/.config/systemd/user/container-odss-utils.service
  ```

- I then edited the service file that was generated so that it looked like the following:

  ```
  # container-odss-utils.service
  # autogenerated by Podman 3.2.3
  # Mon Oct 11 12:38:04 PDT 2021

  [Unit]
  Description=ODSS Utilities Container
  Documentation=man:podman-generate-systemd(1)
  Wants=network.target
  After=network-online.target
  RequiresMountsFor=/run/user/11775/containers

  [Service]
  Environment=PODMAN_SYSTEMD_UNIT=%n
  Restart=always
  TimeoutStopSec=70
  ExecStart=/usr/bin/podman start odss-utils
  ExecStop=/usr/bin/podman stop -t 10 odss-utils
  ExecStopPost=/usr/bin/podman stop -t 10 odss-utils
  PIDFile=/run/user/11775/containers/overlay-containers/31a5ff18e60a65e3733d6fdc091aca05664c5c764e770ca5cb96ab9d14a104ea/userdata/conmon.pid
  Type=forking

  [Install]
  WantedBy=multi-user.target default.target
  ```
- Then ran:

  ```
  systemctl --user daemon-reload
  ```

- Followed by:

  ```
  systemctl --user enable container-odss-utils.service
  ```

- You can then start and stop the service using:

  ```
  systemctl --user start container-odss-utils.service
  systemctl --user stop container-odss-utils.service
  ```

- And check the status of the service using:

  ```
  systemctl --user status container-odss-utils.service
  ```
The ODSS Itself
A general note before we get started: while I have the capability to deploy the ODSS as a container, because we need to support it over remote, low-bandwidth comms links to the boats, I find it's better to have the git repos checked out directly on the host machine and run the application on the host. So, here is what I did:
- ssh'd into odss-testu
- switched over to the odssadm account using `sudo -u odssadm -i`
- created a location for the server logs by running `mkdir /data/server`, followed by `mkdir /data/server/logs`
- cd'd to the odssadm home directory using `cd`
- checked out the master branch of the ODSS to the home directory using:

  ```
  git clone https://kgomes@bitbucket.org/mbari/odss.git
  ```
- Since it's a public repo, it did not need my password, so updates should be fine using the shared account.
- I changed into the main web application folder using `cd odss/webapp/server`
- I made a copy of the config.js.template file by running `cp config.js.template config.js`
- I edited the config.js file and changed the properties to match what I needed on this server (logs pointed to /data/server/logs, repo pointed to /data/repo).
- Pat installed node in /usr/local/nodejs, so it's not on any default path. I asked him to create symbolic links from /usr/local/nodejs/bin/* to /usr/local/bin. In the meantime, I added the directory to my PATH in .bash_profile, and I can hard-code the full paths in scripts if I need to. He also installed the quasar CLI.
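  That PATH tweak looks something like this (a sketch of what I added; adjust if the node install moves):

  ```
  # in ~/.bash_profile, so node/npm resolve until the /usr/local/bin symlinks exist
  export PATH=/usr/local/nodejs/bin:$PATH
  ```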
- I then ran `npm install` from the server directory.
- Then I ran `node app.js` to run the ODSS and it came up in the browser. However, the platform tracks were not showing, and when I clicked on the link to get the track data directly, I got an error that reads:

  ```
  Cannot find SRID (4326) in spatial_ref_sys
  ```

  Somehow that got lost in the backup/restore. From Google, I found somebody who suggested running:

  ```
  psql -d mbaritracking -f /usr/share/postgresql/13/contrib/postgis-3.0/spatial_ref_sys.sql
  ```
- From the odssadm account, I ran `podman exec -it mbaritracking /bin/bash` to get into the mbaritracking container, then switched to the postgres account using `su postgres`. I ran the above and it did a bunch of inserts. The queries now work, but they take FOREVER and the ODSS gives up on them. I'm wondering if the indexes also did not make it, since the spatial reference did not. I had never looked at the errors.txt file from the restore, and it turns out there were errors. See the MBARI Tracking Database section above for the fix.
- Now, to get the ODSS running automatically, I created the following script in /home/odssadm/bin/start-odss.sh:

  ```
  #! /bin/bash

  # This script looks for a running ODSS app.js server and starts one if it's not running
  ODSS_PID=$(pgrep -f 'node /home/odssadm/odss/webapp/server/app.js')

  # First let's see if the ODSS PID was even found
  if [ -z "$ODSS_PID" ]
  then
      echo "ODSS PID not found, will start the ODSS server"
      /usr/local/bin/node /home/odssadm/odss/webapp/server/app.js > /home/odssadm/logs/odss-cron.log 2>&1 &
  fi
  ```
- And the following crontab entry:

  ```
  # Crontab for odssadm
  # This checks to see if the ODSS server is running every minute and starts it if not
  * * * * * /home/odssadm/bin/start-odss.sh > /dev/null
  ```

- Now the ODSS will restart if it dies for some reason.
- Lastly, I need a basic web page redirect to send users from the base https://odss-testu.shore.mbari.org to https://odss-testu.shore.mbari.org/odss. The content looks like:

  ```
  <html>
    <head>
      <title>Redirecting to ODSS Test</title>
      <meta HTTP-EQUIV="REFRESH" content="0; url=https://odss-testu.shore.mbari.org/odss/">
    </head>
    <body>
      <p>Redirecting...</p>
    </body>
  </html>
  ```
- Now, to install the planning component, I decided to just build it and deploy it right into the /var/www/html directory, so under the account 'kgomes', I ran:

  ```
  cd /var/www
  sudo chown odssadm /var/www/html
  sudo -u odssadm -i
  cd /var/www/html
  mkdir odss-planning
  exit
  cd /var/www
  sudo chown root /var/www/html
  sudo -u odssadm -i
  cd
  git clone https://kgomes@bitbucket.org/mbari/odss-planning.git
  cd odss-planning
  npm install
  quasar build
  cd dist/spa
  tar zcf ../odss-planning.tgz *
  mv ../odss-planning.tgz /var/www/html/odss-planning
  cd /var/www/html/odss-planning
  tar xf odss-planning.tgz
  rm odss-planning.tgz
  ```
- AND IT WORKED! I can see it in the odss tab on odss-testu!
- I did submit a ticket to have my account (kgomes) and Carlos' (carueda) added to the odssadm group so we could manage the server more easily.
Upgrading a Container
Moving forward, you will want to upgrade containers from time to time. This is the generic process to do that.
- Log in to odss-testu.shore.mbari.org.
- Switch to the odssadm account using `sudo -u odssadm -i`.
- Run `podman ps` to see the containers that are currently running. This is just to make sure everything looks OK.
- Stop the container you want to upgrade by running (where xxxxxxx is the name of the service you want to upgrade):

  ```
  systemctl --user stop container-xxxxxxx.service
  ```

- Now you need to disable the service in systemctl by running:

  ```
  systemctl --user disable container-xxxxxxx.service
  ```

- Then you need to remove the service file from ~/.config/systemd/user by running:

  ```
  rm ~/.config/systemd/user/container-xxxxxxx.service
  ```

- Now reload systemd to make sure it's clear by running:

  ```
  systemctl --user daemon-reload
  ```

- Now you need to find the container that was running as that service by running:

  ```
  podman ps -a
  ```

- Your container should show up in the list with something like `Exited ...` under STATUS. Copy the CONTAINER ID and run:

  ```
  podman rm [CONTAINER_ID]
  ```

- This removes the container, and if you want to be really thorough you can remove the old image by running:

  ```
  podman images
  podman rmi [IMAGE ID]
  ```
- This removes all traces of the old image. Now repeat the install steps from the section above with the upgraded version number to install the upgraded container.
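For reference, the whole upgrade sequence strung together looks roughly like this (a sketch using ERDDAP as the example; substitute the service, container, and image names you are actually upgrading):

```
# stop and unregister the old user-level service
systemctl --user stop container-erddap.service
systemctl --user disable container-erddap.service
rm ~/.config/systemd/user/container-erddap.service
systemctl --user daemon-reload

# remove the old container (and optionally the old image)
podman ps -a          # note the CONTAINER ID of the exited container
podman rm [CONTAINER_ID]
podman images         # optional: note the old IMAGE ID
podman rmi [IMAGE_ID]

# then rebuild/recreate with the new version and re-run
# 'podman generate systemd' as in the install steps above
```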
Moving from Podman to Docker
So, in March of 2023, we decided to really focus on standardizing on Ubuntu and Docker as the main way to run our systems. That meant I needed to move the test server from podman to Docker before starting to focus on other deployments/upgrades. One critical thing is to make sure I capture the volume mounts correctly since I will be re-using those. Here are tables showing how the volume mounts and port mappings are set up for each container.
| Container | Host Directory | Container Directory |
|---|---|---|
| ERDDAP | /data/erddap/content/erddap | /usr/local/tomcat/content/erddap |
| ERDDAP | /data/erddap/bigParentDirectory | /data/erddap/bigParentDirectory |
| ERDDAP | /data/erddap/data | /data/erddap/data |
| THREDDS | /data/thredds | /usr/local/tomcat/content/thredds |
| THREDDS | /data | /data |
| Mapserver | /data/mapserver | /data/mapserver |
| TrackingDB | /pgdata | /var/lib/postgresql/data |
| TrackingDB | /data | /data |
| Tracking Clients | /data/mbaritracking/client | /data |
| MongoDB | /data/mongodb | /data/db |
| Support Scripts | /data/erddap/data | /data/erddap/data |
| Support Scripts | /data/mapserver | /data/mapserver |
| Support Scripts | /data/utils/logs | /var/log |
| Support Scripts | /data/utils/mapdatasets.json | /mapdatasets.json |
And here are the port mappings:

| Container | Host Port | Container Port |
|---|---|---|
| ERDDAP | 8280 | 8080 |
| THREDDS | 8180 | 8080 |
| Mapserver | 8080 | 80 |
| TrackingDB | 5432 | 5432 |
| Tracking Clients | 8081 | 80 |
| MongoDB | 27017 | 27017 |
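As a starting point for the move, the podman create commands above translate pretty directly to docker run using these same mounts and ports. Here is a sketch for two of the containers (it assumes the same custom image names get rebuilt under Docker, and uses --restart in place of the per-user systemd units; the other containers follow the same pattern from the tables above):

```
# ERDDAP, using the mounts/ports from the tables
docker run -d --name erddap --restart unless-stopped \
  -p 8280:8080 \
  -v /data/erddap/content/erddap:/usr/local/tomcat/content/erddap \
  -v /data/erddap/bigParentDirectory:/data/erddap/bigParentDirectory \
  -v /data/erddap/data:/data/erddap/data \
  mbari/odss-erddap

# Tracking database (PostGIS), same volumes/ports/environment as the podman version
docker run -d --name mbaritracking --restart unless-stopped \
  -p 5432:5432 \
  -e POSTGRES_PASSWORD=xxxxxxx \
  -v /pgdata:/var/lib/postgresql/data \
  -v /data:/data \
  mbari/mbari-tracking
```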