ODSS Installation on Normandy (production)


This page documents the steps taken during the installation and migration of the ODSS from the machine normandy to the machine normandy8. This was necessary because normandy was running CentOS 6, which had reached end of life, so we needed to move to CentOS 8. In addition, we moved the operational deployment of the ODSS to a container-based deployment to make management easier. Docker is actually 'podman' on CentOS 8, so moving from Docker to Podman was part of the migration as well.

Preparation

From what I learned in the odss-test migration, I wanted to make a few notes before starting the normandy transition. While the setup of the odss-test8 machine was not that difficult once I got the directories and locations all set up correctly, the production server was a completely different story, and this was all due to the fact that we were using an NFS mount for the data. When an NFS mount is connected to any server, the permission system keys off numeric user ids (uids), and all permissions are managed by those. So, originally, the odssadm account on normandy8 was given permissions to the NFS mount by adding its uid of 11290 to the NFS permissions. If a container started by odssadm only has one account (usually root) inside the container, the internal uid of that account is mapped to the uid of the account that launched it, in this case 11290. That means for any resource request made by the process inside the container that translates to a request on the host machine, the uid of that process matches the launching account. So, for example, if the NFS mount is then mounted in the container and the process inside the container is running as root (uid 0 inside the container), any requests made by root inside the container look like requests from the odssadm account (11290) and NFS allows them.

The problem arose when containers ran their processes under a different account than the account that first started inside the container. The classic case of this is the Apache HTTPD container. Apache always starts as root, but then spins off other processes under a non-privileged account, something like 'daemon' for example. This account has a non-zero uid inside the container, and podman has to translate that to a uid for requests made outside the container. There is a file on the host machine, /etc/subuid, that contains ranges of uids associated with each host account.

If you look in the /etc/subuid file, you will see something like 'odssadm:300001:65536', which means that there are 65536 uids, starting at 300001, available to use for this mapping. The way it seems to work is that when a non-startup account makes a request outside the container, the host OS takes the uid of the account inside the container and adds it to the starting number of the range given in the /etc/subuid file. An example is probably easier. So, let's say the odssadm account starts up an Apache HTTPD container. The container starts up a process as root, which has a uid of 0. The host system maps that uid to the odssadm uid of 11290. Then, the httpd software spins up more httpd processes, but switches them over to the daemon account, which has a uid of 1. The host then looks up the odssadm entry in the /etc/subuid file, finds the first entry to be 300001, and uses the container uid to calculate the uid mapping, which ends up being 1 -> 300001. Note our best guess is that because the root uid of 0 already maps to the host uid of the launching account (11290), the mapping starts by assuming that 1 = 300001. You can see this in action after the container is up and running: if you run 'podman top container-id huser user', you can see what host uids were mapped to accounts inside the container.

Since we don't have control over the uids that are used to start processes inside containers, we basically had to start the processes, figure out what uids they were being mapped to, create AD accounts with those uids, and then add them to a group that had r/w access to the Atlas share. To prevent this from clashing too badly with normal AD accounts, we decided to start the uids at 300000 so that there was space between those AD accounts and any others. You will see that play out in the instructions below, but at least you know the background on what we had to do to get this to work.
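The arithmetic behind this mapping can be sketched in a couple of lines of shell. The range start of 300001 comes from the /etc/subuid entry above; the tomcat uid of 1000 is an assumption (it is consistent with the 301000 host uid seen in the 'podman top' output later in this document):

```shell
# Host uid for an in-container uid, per the mapping described above:
# container uid 1 maps to the range start, so host_uid = start + (container_uid - 1)
range_start=300001

# daemon (uid 1 inside the container) -> first subordinate uid
echo $((range_start + 1 - 1))      # 300001

# tomcat (uid 1000 inside the container, an assumption) -> 301000
echo $((range_start + 1000 - 1))   # 301000
```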

Proxy Passes

The existing proxy passes for normandy are:

  1. /erddap -> normandy.mbari.org:8280/erddap
  2. /thredds -> normandy.mbari.org:8180/thredds
  3. /odss -> normandy.mbari.org:3000/odss

It might be a good idea to keep the ports the same in case they are embedded in some URLs somewhere.

Data Directories

The directories under /data (Atlas mount) are:

  1. activity
  2. biospace
  3. canon
  4. cosci
  5. erddap
  6. goc
  7. hotspot
  8. logs
  9. mapserver
  10. mongo
  11. other
  12. simz
  13. thredds
  14. tilecache-2.11
  15. tmp

Steps

Host Configuration

  1. I had Pat create kgomes and odssadm accounts (I logged into them directly, as 'sudo -u odssadm -i' did not seem to work)
  2. Web server

    1. While I was working on things, Joe installed the HTTP server, added the certificate and set up the following proxy passes in the file /etc/httpd/conf.d/odss.conf:

          <VirtualHost *:80>
          ServerName normandy8.mbari.org
          RewriteEngine On
          RewriteCond %{HTTPS} !on
          RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
          </VirtualHost>
      
          <Location /erddap>
            ProxyPass        http://127.0.0.1:8280/erddap
            ProxyPassReverse http://127.0.0.1:8280/erddap
          </Location>
      
          <Location /thredds>
            ProxyPreserveHost On
            ProxyPass        http://127.0.0.1:8180/thredds
            ProxyPassReverse http://127.0.0.1:8180/thredds
          </Location>
      
          <Location /odss>
            ProxyPass        http://127.0.0.1:3000/odss
            ProxyPassReverse http://127.0.0.1:3000/odss
          </Location>
      
          <Location /trackingdb>
            ProxyPass        http://127.0.0.1:8081/trackingdb
            ProxyPassReverse http://127.0.0.1:8081/trackingdb
          </Location>
      
          <Location /cgi-bin/mapserv>
            ProxyPass        http://127.0.0.1:8080
            ProxyPassReverse http://127.0.0.1:8080
          </Location>
      
    2. Just a quick note that I actually tweaked these after he had set them up, since I wanted to keep ERDDAP and THREDDS running on ports 8280 and 8180 respectively.

    3. Another note that in September of 2023, we got an email letting us know we had directories in the data directory being indexed by Google that we probably didn't want exposed. I added the following to the odss.conf file to block those directories:
             <Directory "/var/www/html/data/erddap">
               Require all denied
             </Directory>
    
             <Directory "/var/www/html/data/hotspot">
               Require all denied
             </Directory>
    
             <Directory "/var/www/html/data/logs">
               Require all denied
             </Directory>
    
             <Directory "/var/www/html/data/mbaritracking">
               Require all denied
             </Directory>
    
             <Directory "/var/www/html/data/mongo">
               Require all denied
             </Directory>
    
             <Directory "/var/www/html/data/server">
               Require all denied
             </Directory>
    
             <Directory "/var/www/html/data/thredds">
               Require all denied
             </Directory>
    
             <Directory "/var/www/html/data/tilecache-2.11">
               Require all denied
             </Directory>
    
             <Directory "/var/www/html/data/tmp">
               Require all denied
             </Directory>
    
             <Directory "/var/www/html/data/utils">
               Require all denied
             </Directory>
    
    4. I also added a robots.txt file in /data to prevent future indexing by Google. The contents of that file looked like:
    
            User-agent: *
            Disallow: /
    
  3. After the odssadm account was created, we had to change the /etc/subuid file to match the uid mappings we wanted for NFS permissions (see the introduction at the top of this document). I edited the /etc/subuid file directly and changed it from:

    meteor:100000:65536
    odssadm:165536:65536
    podman_user:231072:65536
    ApacheBeachRO:296608:65536
    
  4. To:

    meteor:100000:65536
    ApacheBeachRO:165536:65536
    podman_user:231072:65536
    odssadm:300001:65536
    
  5. We also made the same changes to the /etc/subgid file to match those in the /etc/subuid file.
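After editing, the odssadm entries in the two files should agree. This sketch simulates the files with temp copies so it can run anywhere; on the host you would point at /etc/subuid and /etc/subgid themselves:

```shell
# Simulated copies of the host files; on normandy8 use /etc/subuid and /etc/subgid
subuid=$(mktemp)
subgid=$(mktemp)
printf 'odssadm:300001:65536\n' > "$subuid"
printf 'odssadm:300001:65536\n' > "$subgid"

# The odssadm entries should be byte-for-byte identical in both files
cmp "$subuid" "$subgid" && echo "subuid/subgid agree"
```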

  6. I then rebooted the machine to clear up any process mappings that might have been made already.
  7. In order to get the user-level systemctl to work correctly, I added the following to the .bashrc in the odssadm account and logged out and back in:

    export XDG_RUNTIME_DIR=/run/user/$(id -u)
    
  8. Before I started, I needed to make sure any container service that I created would still run while logged out of the odssadm account. In order for this to happen, I switched over to root using 'su' and then ran:

    loginctl enable-linger odssadm
    
  9. Before I started creating any images/containers, I had to manually create the /home/odssadm/.config/systemd/user directory.
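The directory creation in the step above is a one-liner; this sketch uses $HOME rather than hard-coding /home/odssadm so it works for whichever account runs it:

```shell
# Create the per-user systemd unit directory that 'podman generate systemd'
# output will be written into
mkdir -p "$HOME/.config/systemd/user"

# Confirm it exists
ls -ld "$HOME/.config/systemd/user"
```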

ERDDAP

  1. So with ERDDAP on the production machine, the /data directory is a mount to an Atlas share that is shared by both machines, so I can't just move files around without breaking things. The only real change I had to make was to copy the non-data and non-content directories and files into a directory named bigParentDirectory and then point the new server at that location. To do this, I ran:

    mkdir /data/erddap/bigParentDirectory
    
  2. Then, to copy all the files, I ran the following steps:

    cp -r /data/erddap/a /data/erddap/bigParentDirectory
    cp -r /data/erddap/cache /data/erddap/bigParentDirectory
    cp -r /data/erddap/copy /data/erddap/bigParentDirectory
    cp /data/erddap/currentDatasets.xml /data/erddap/bigParentDirectory
    cp -r /data/erddap/dataset /data/erddap/bigParentDirectory
    cp -r /data/erddap/flag /data/erddap/bigParentDirectory
    mkdir /data/erddap/bigParentDirectory/logs
    cp -r /data/erddap/lucene /data/erddap/bigParentDirectory
    cp /data/erddap/mapdatasets.xml /data/erddap/bigParentDirectory
    cp /data/erddap/subscriptionsV1*.txt /data/erddap/bigParentDirectory
    
  3. I shut down the ERDDAP server on the current normandy using 'sudo service tomcat-erddap stop'.

  4. I then edited the /data/erddap/content/erddap/setup.xml and changed:

  5. bigParentDirectory from /data/erddap to /data/erddap/bigParentDirectory

  6. baseUrl from http://normandy.mbari.org to https://normandy8.mbari.org
  7. I added this to the end of the admin section:

    <adminInstitutionUrl>https://www.mbari.org</adminInstitutionUrl>
    
  8. At MBARI, we like to use the BitstreamVeraSans font and also install the JetPlus.cpt palette. In order to do this, I copied the fonts from the current production machine and put them in the /home/odssadm/odss-components/erddap/fonts folder. I also copied over the JetPlus.cpt file to /home/odssadm/odss-components/erddap/JetPlus.cpt. Next, I needed to add the JetPlus palette to the messages.xml file inside the container, so I copied the default messages.xml from inside the container to /home/odssadm/odss-components/erddap/messages.xml and edited it to add 'JetPlus' to the list in the <palettes> tag. In order to get this into a container, I had to create a custom image, which I did by creating /home/odssadm/odss-components/erddap/Dockerfile with the following contents:

    # This Dockerfile builds a custom image of ERDDAP that contains some tweaks
    # we use in the ODSS
    
    # First, start with the stock Axiom ERDDAP image
    FROM axiom/docker-erddap:2.02
    
    # Add the BitstreamVeraSans Fonts
    COPY fonts/ /usr/local/openjdk-8/jre/lib/fonts/
    
    # Add the JetPlus palette
    COPY JetPlus.cpt /usr/local/tomcat/webapps/erddap/WEB-INF/cptfiles/JetPlus.cpt
    
    # Overwrite the messages.xml file to include the new palette
    COPY messages.xml /usr/local/tomcat/webapps/erddap/WEB-INF/classes/gov/noaa/pfel/erddap/util/messages.xml
    
  9. Then I built the image using

    podman build -t mbari/odss-erddap .
    
  10. Then I could create the container using

    podman create --security-opt label=disable --privileged --name erddap -p 8280:8080 -v /data/erddap/content/erddap:/usr/local/tomcat/content/erddap -v /data/erddap/bigParentDirectory:/data/erddap/bigParentDirectory -v /data/erddap/data:/data/erddap/data mbari/odss-erddap
    
  11. Now, to run this as a service, we need to create the systemd file by running:

    podman generate systemd --name erddap > ~/.config/systemd/user/container-erddap.service
    
  12. I then edited the .service file so it looks like:

    # container-erddap.service
    # autogenerated by Podman 2.1.1
    # Mon Dec  7 11:38:23 PST 2020
    # Updated by kgomes 2020-12-07: Updated description and changed Restart to always
    
    [Unit]
    Description=ODSS ERDDAP Server
    Documentation=man:podman-generate-systemd(1)
    Wants=network.target
    After=network-online.target
    
    [Service]
    Environment=PODMAN_SYSTEMD_UNIT=%n
    Restart=always
    ExecStart=/usr/bin/podman start erddap
    ExecStop=/usr/bin/podman stop -t 10 erddap
    ExecStopPost=/usr/bin/podman stop -t 10 erddap
    PIDFile=/run/user/11290/containers/overlay-containers/708f54b2e614c1bb9fcca2c74dc3af03fbc4eb34b9011b7dd8c51b7546332dc4/userdata/conmon.pid
    KillMode=none
    Type=forking
    
    [Install]
    WantedBy=multi-user.target default.target
    
  13. I then enabled the service using:

    systemctl --user daemon-reload
    systemctl --user enable container-erddap.service
    
  14. You can then start and stop the service using:

    systemctl --user start container-erddap.service
    systemctl --user stop container-erddap.service
    
  15. And check the status of the service using:

    systemctl --user status container-erddap.service
    
  16. As a check to make sure everything was running correctly, I did the following:

  17. I then ran 'podman top erddap huser user' to check the process mappings and it came out as:

    [odssadm@normandy8 user]$ podman top erddap huser user
    HUSER    USER
    301000   tomcat
    
  18. Todd created an AD user with a uid of 301000 and added it to the group that should have r/w on Atlas.

  19. All this worked and ERDDAP is now available at https://normandy8.mbari.org/erddap

  20. Because it worked, I set the proxy pass on normandy to point to this one so that things needing ERDDAP would still work.

THREDDS

  1. The THREDDS install was a bit easier as the data directory for THREDDS was already mounted on normandy8 in the /data/thredds location.
  2. To create the container the way I needed it, I ran:

    podman create --security-opt label=disable --name thredds --privileged -p 8180:8080 -v /data/thredds:/usr/local/tomcat/content/thredds -v /data:/data unidata/thredds-docker:4.6.15
    
  3. Now, to generate the start up script, I ran:

    podman generate systemd --name thredds > ~/.config/systemd/user/container-thredds.service
    
  4. I then edited the service file to convert it to:

    # container-thredds.service
    # autogenerated by Podman 2.1.1
    # Mon Dec  7 12:03:42 PST 2020
    # Updated by kgomes 2020-12-07: Updated description and changed Restart to always
    
    [Unit]
    Description=ODSS THREDDS Server
    Documentation=man:podman-generate-systemd(1)
    Wants=network.target
    After=network-online.target
    
    [Service]
    Environment=PODMAN_SYSTEMD_UNIT=%n
    Restart=always
    ExecStart=/usr/bin/podman start thredds
    ExecStop=/usr/bin/podman stop -t 10 thredds
    ExecStopPost=/usr/bin/podman stop -t 10 thredds
    PIDFile=/run/user/11290/containers/overlay-containers/9d25a6a0426d7d5f672dd89cc00f1e0edb99b0ca49d5943e42c2248b589eaeaa/userdata/conmon.pid
    KillMode=none
    Type=forking
    
    [Install]
    WantedBy=multi-user.target default.target
    
  5. I then enabled the service using:

    systemctl --user daemon-reload
    systemctl --user enable container-thredds.service
    
  6. You can then start and stop the service using:

    systemctl --user start container-thredds.service
    systemctl --user stop container-thredds.service
    
  7. And check the status of the service using:

    systemctl --user status container-thredds.service
    
  8. As a check to make sure everything was running correctly, I did the following:

  9. I ran 'podman top thredds huser user' to check the process mappings and it came out as:

    [odssadm@normandy8 user]$ podman top thredds huser user
    HUSER    USER
    301000   tomcat
    
  10. Since Todd had already set up the 301000 user, it just worked!

  11. The THREDDS server is now available at https://normandy8.mbari.org/thredds

  12. I set the proxy pass on the current production server to point /thredds to this thredds server instead. Once the new server is renamed to normandy, that should just work.

Mapserver

  1. So, one thing I found through MUCH pain was that the default EPSG file for mapserver did not have the Google projection in it, which was causing all my layers to fail. To fix this, I took a copy of the epsg file from the running mapserver container (located at /usr/share/proj/epsg) and put it in the odss-components/mapserver directory. I then edited the file and added the following line:

    # Google Earth / Virtual Globe Mercator
    <900913> +proj=merc +a=6378137 +b=6378137 +lat_ts=0.0 +lon_0=0.0 +x_0=0.0 +y_0=0 +k=1.0 +units=m +no_defs
    
  2. I also figured out that we must have installed some custom fonts on the production machine so I copied all the fonts from /usr/share/fonts on normandy to the odss-components/mapserver/fonts directory.

  3. Then, to create an image with my customizations, I created a file in the odss-components/mapserver directory named Dockerfile with the following contents:

    # This creates a Docker image from camptocamp/mapserver that overrides the start-server file
    
    # First define the base image that this will build from
    FROM camptocamp/mapserver:7.6
    
    # Add the EPSG file with the Google projection
    COPY epsg /usr/share/proj/epsg
    
    # Add more fonts
    COPY fonts/ /usr/share/fonts/
    
    # Change the www-data user from uid 33 to 1000 (and gid too)
    RUN groupmod -g 1000 www-data && \
    usermod -u 1000 www-data
    
  4. Then to build the image, I ran:

    podman build -t mbari/odss-mapserver .
    
  5. Now to create the container, I ran:

    podman create --security-opt label=disable --name mapserver --privileged -p 8080:80 -v /data/mapserver:/data/mapserver mbari/odss-mapserver
    
  6. Now, to generate the start up script, I ran:

    podman generate systemd --name mapserver > ~/.config/systemd/user/container-mapserver.service
    
  7. I then edited the service file to convert it to:

    # container-mapserver.service
    # autogenerated by Podman 2.1.1
    # Mon Dec  7 12:12:25 PST 2020
    # Updated by kgomes 2020-12-07: Updated description and changed Restart to always
    
    [Unit]
    Description=ODSS Mapserver Service
    Documentation=man:podman-generate-systemd(1)
    Wants=network.target
    After=network-online.target
    
    [Service]
    Environment=PODMAN_SYSTEMD_UNIT=%n
    Restart=always
    ExecStart=/usr/bin/podman start mapserver
    ExecStop=/usr/bin/podman stop -t 10 mapserver
    ExecStopPost=/usr/bin/podman stop -t 10 mapserver
    PIDFile=/run/user/11290/containers/overlay-containers/e85fbff9d270ad46756557f4f279838e6efe0cd4d3aff62c19d671a26d130320/userdata/conmon.pid
    KillMode=none
    Type=forking
    
    [Install]
    WantedBy=multi-user.target default.target
    
  8. I then enabled the service using:

    systemctl --user daemon-reload
    systemctl --user enable container-mapserver.service
    
  9. You can then start and stop the service using:

    systemctl --user start container-mapserver.service
    systemctl --user stop container-mapserver.service
    
  10. And check the status of the service using:

    systemctl --user status container-mapserver.service
    
  11. As a check to make sure everything was running correctly, I did the following:

  12. I then ran 'podman top mapserver huser user' to check the process mappings and it came out as:

    [odssadm@normandy8 user]$ podman top mapserver huser user
    HUSER    USER
    11290    root
    301000   www-data
    301000   www-data
    301000   www-data
    301000   www-data
    301000   www-data
    301000   www-data
    301000   www-data
    301000   www-data
    
  13. As hard as it is to believe it worked! For the true test, I added a proxy pass on normandy to point /cgi-bin/mapserv to http://normandy8.mbari.org/cgi-bin/mapserv and it WORKS! Crazy!

MBARITracking PostgreSQL (PostGIS) Database

  1. As a first step, I needed to define a location on the host where the PostGIS container would store its data. I put in a ticket to have Pat create a /pgdata directory that I will use to mount into the PG container. Pat created a dedicated SCSI connection to the /pgdata share and gave ownership to odssadm.
  2. In order to migrate the database, I needed to have the backup file available to the PostGIS container. So, while I will be using the /pgdata for the process space, I still need the PostGIS container to be able to read from /data because that is where I will be backing up the production database to. So, in order for that to work, I created a Dockerfile in the odss-components/mbaritracking folder that looks like this:

    # This creates a Docker image from the PostGIS image and then changes the uid/gid of the postgres account
    FROM postgis/postgis:13-3.0
    
    # Change the postgres user from uid 999 to 1000 (and gid too)
    RUN groupmod -g 1000 postgres && \
    usermod -u 1000 postgres
    
  3. Then I built the new image using:

    podman build -t mbari/mbari-tracking .
    
  4. Then to create the container I ran:

    podman create --name mbaritracking -p 5432:5432 -e POSTGRES_PASSWORD=xxxxxxx -v /pgdata:/var/lib/postgresql/data -v /data:/data mbari/mbari-tracking
    
  5. Now to create the service, I ran:

    podman generate systemd --name mbaritracking > ~/.config/systemd/user/container-mbaritracking.service
    
  6. I then edited the container-mbaritracking.service file to look like:

    # container-mbaritracking.service
    # autogenerated by Podman 2.1.1
    # Mon Dec  7 12:48:06 PST 2020
    # Updated by kgomes 2020-12-07: Updated description and changed Restart to always
    
    [Unit]
    Description=MBARITracking PostGIS Server
    Documentation=man:podman-generate-systemd(1)
    Wants=network.target
    After=network-online.target
    
    [Service]
    Environment=PODMAN_SYSTEMD_UNIT=%n
    Restart=always
    ExecStart=/usr/bin/podman start mbaritracking
    ExecStop=/usr/bin/podman stop -t 10 mbaritracking
    ExecStopPost=/usr/bin/podman stop -t 10 mbaritracking
    PIDFile=/run/user/11290/containers/overlay-containers/4d0a65cb9d3279485679232da3211e350625855328a05117734b10d296ee3d3a/userdata/conmon.pid
    KillMode=none
    Type=forking
    
    [Install]
    WantedBy=multi-user.target default.target
    
  7. I then enabled the service using:

    systemctl --user daemon-reload
    systemctl --user enable container-mbaritracking.service
    
  8. You can then start and stop the service using:

    systemctl --user start container-mbaritracking.service
    systemctl --user stop container-mbaritracking.service
    
  9. And check the status of the service using:

    systemctl --user status container-mbaritracking.service
    
  10. Before moving the current production DB, it needs to be created. The best thing is to connect to the container using 'podman exec -it mbaritracking /bin/bash', switch to the postgres account using 'su postgres', and then run psql. Run the following statements to create the mbaritracking database:

    CREATE USER odssadm WITH PASSWORD 'xxxxxxx';
    CREATE DATABASE mbaritracking with OWNER=odssadm TEMPLATE=template_postgis;
    \c mbaritracking
    GRANT ALL ON ALL TABLES IN SCHEMA PUBLIC TO odssadm;
    
  11. I then ssh'd into the old machine (normandy) and edited the crontab to stop the consumers from restarting.

  12. Next, I logged into the RabbitMQ admin page and manually created 6 queues that were connected to the vhost 'trackingvhost'. The queue names were:
  13. normandy8_persist_ais (to be bound to ais/ais)
  14. normandy8_persist_auvs (to be bound to auvs/auvs)
  15. normandy8_persist_drifters (to be bound to drifters/drifters)
  16. normandy8_persist_gliders (to be bound to gliders/gliders)
  17. normandy8_persist_moorings (to be bound to moorings/moorings)
  18. normandy8_persist_ships (to be bound to ships/ships)
  19. I then got a process listing for all the python consumers on normandy. This gave me the process IDs that I would need to kill.
  20. For each one of the queues, I would go to the queue in the web admin page and fill out the information (exchange and routing key) to bind it to the various exchanges. Then, as soon as I created the binding, I would kill the matching consumer on the old machine. I did this for each queue so that the messages that started accruing were the same for both the old and new machines.
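
The six bindings all follow one pattern: queue normandy8_persist_<platform>, bound to exchange <platform> with routing key <platform>. This loop just prints the triples for reference rather than calling the RabbitMQ API, since the originals were created by hand in the web admin page:

```shell
# Print the queue/exchange/routing-key triples created in the RabbitMQ admin UI
for p in ais auvs drifters gliders moorings ships; do
  echo "queue=normandy8_persist_${p} exchange=${p} routing_key=${p}"
done
```
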
  21. On normandy, I ran 'su postgres' to switch to the postgres account.
  22. I thought it would be a good idea to clean up the DB as much as possible before migration, so I ran 'vacuumdb mbaritracking' (from the postgres account).
  23. From the postgres account, I could not write directly to /data, so I first ran the following to back up the database to the /pgdata directory (note I had to use the full path to pg_dump, as the default on the path was for version 8 of PostgreSQL):

    /usr/pgsql-9.1/bin/pg_dump -Fc -b -f /pgdata/tmp/mbaritracking.backup mbaritracking
    
  24. After the backup finished, I exited the postgres account and moved the file to the /data directory using (note I had to add r,x to the permissions on /pgdata to allow this, since I did not have sudo cp/mv for that file, sigh):

    sudo mv /pgdata/tmp/mbaritracking.backup /data/tmp/pgbackup/mbaritracking.backup
    
  25. Now, I moved over to normandy8 and connected to the mbaritracking container using 'podman exec -it mbaritracking /bin/bash'. This puts you in the container as root.

  26. Due to all the uid stuff, the postgres account does not have access to /data. So, from the root account, I copied the backup file to the /var/lib/postgresql/data/tmp directory (which is on the /pgdata mount) so that the postgres account would have access to it.
  27. I switched over to the postgres account using 'su postgres'.
  28. Then, I navigated to the location of the postgis_restore.pl script using 'cd /usr/share/postgresql/13/contrib/postgis-3.0/'. I restored the database to the new server using:

    perl postgis_restore.pl /var/lib/postgresql/data/tmp/mbaritracking.backup | psql mbaritracking 2> ~/errors.txt
    
  29. And of course, this failed. Because why in the hell would we want any of this upgrade to go smoothly? That would just be silly. I got the error:

    Converting /var/lib/postgresql/data/tmp/mbaritracking.backup to ASCII on stdout...
    Reading list of functions to ignore...
    Writing manifest of things to read from dump file...
    postgis_restore.pl: Cannot open manifest file '/var/lib/postgresql/data/tmp/mbaritracking.backup.lst'
    pg_restore: warning: restoring tables WITH OIDS is not supported anymore
    
  30. Yay! More failures and time wasted, this is just awesome!

  31. I started doing research and it looks like a complete nightmare. Something about PostGIS using OIDs heavily, and there doesn't seem to be any upgrade path. Before restarting the consumers on normandy, I made a last-ditch attempt to dump just the data from the database, in case I can somehow create the new database schema using some other mechanism and then import the data only. I doubt it, but before getting things out of sync, I thought I would just grab it, so I ran:
    ./pg_dump -a -f /pgdata/tmp/mbaritracking-data.sql mbaritracking
    ./pg_dump -a -b -f /pgdata/tmp/mbaritracking-data-blob.sql mbaritracking

  1. I then restarted the consumers by editing the cron job on normandy so it would start them again, which drained the RabbitMQ queues and got stuff working again.
  2. Could there be some way to incrementally use PostGIS docker images to walk the old DB up, version by version? Worth a shot. Currently, it is PostgreSQL 9.1 with PostGIS 1.5. So, the idea is: copy the mbaritracking.backup file to my local machine, then use Docker to run postgis_restore.pl and pg_dump repeatedly to move up versions stepwise. So, on my local machine, I copied the mbaritracking.backup file to my desktop in a folder named 'mbaritracking'.
  3. I started the 9.5-2.5 combination of PostGIS container using:

    docker run -it -v /Users/kgomes/Desktop/mbaritracking:/data -e POSTGRES_PASSWORD=xxxxxxx postgis/postgis:9.5-2.5
    
  4. This started the container with PostgreSQL 9.5 with PostGIS version 2.5 installed and mounted the directory with the backup file into the container on /data.

  5. Now I am going to try to do the restore using the postgis_restore.pl script which I think will probably fail, but we will see. I connected to the container using:

    docker exec -it 565195d090ce /bin/bash
    
  6. That connected me to the container as root. I then switched over to the postgres account using 'su postgres'.

  7. I first had to create the database and user, so I ran psql and then the following statements:

    CREATE USER odssadm WITH PASSWORD 'xxxxxxx';
    CREATE DATABASE mbaritracking with OWNER=odssadm TEMPLATE=template_postgis;
    \c mbaritracking
    GRANT ALL ON ALL TABLES IN SCHEMA PUBLIC TO odssadm;
    
  8. I then exited psql using \q, followed by 'cd /usr/share/postgresql/9.5/contrib/postgis-2.5', which is where the postgis restore script is located. Now, the moment of truth. All hope hinges on this next step (I think). I crossed my fingers and ran:

    perl postgis_restore.pl /data/mbaritracking.backup | psql mbaritracking 2> ~/errors.txt
    
  9. It did not barf right away and is chugging away ... a ray of hope! (Note: this was taking a long time on my laptop, so I moved to my beefier Mac Mini.) It finished!!! It took a REALLY long time, and there were actually a few grant errors when it tried to grant permissions on tables to the accounts 'stoqsadm' and 'everyone', which do not exist. I am not going to worry about those as they can be added later if need be, but here are the errors:

    ERROR:  role "stoqsadm" does not exist
    STATEMENT:  GRANT ALL ON TABLE geography_columns TO stoqsadm;
    ERROR:  role "everyone" does not exist
    STATEMENT:  GRANT SELECT ON TABLE geography_columns TO everyone;
    ERROR:  role "stoqsadm" does not exist
    STATEMENT:  GRANT ALL ON TABLE geometry_columns TO stoqsadm;
    ERROR:  role "everyone" does not exist
    STATEMENT:  GRANT SELECT ON TABLE geometry_columns TO everyone;
    ERROR:  role "everyone" does not exist
    STATEMENT:  GRANT SELECT ON TABLE platform TO everyone;
    ERROR:  role "everyone" does not exist
    STATEMENT:  GRANT SELECT ON TABLE platform_type TO everyone;
    ERROR:  role "everyone" does not exist
    STATEMENT:  GRANT SELECT ON TABLE "position" TO everyone;
    ERROR:  role "stoqsadm" does not exist
    STATEMENT:  GRANT ALL ON TABLE spatial_ref_sys TO stoqsadm;
    ERROR:  role "everyone" does not exist
    STATEMENT:  GRANT SELECT ON TABLE spatial_ref_sys TO everyone;
    
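  If those grants are ever needed, the missing roles could be created and the failed grants re-applied; a hedged sketch, with the role and table names taken from the errors above:

    CREATE ROLE stoqsadm;
    CREATE ROLE everyone;
    GRANT ALL ON TABLE geography_columns, geometry_columns, spatial_ref_sys TO stoqsadm;
    GRANT SELECT ON TABLE geography_columns, geometry_columns, platform, platform_type, "position", spatial_ref_sys TO everyone;
    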
  10. Since this took a REALLY long time, I am going to see if I can skip versions. I will try to pg_dump the 9.5-2.5 version and just see if I can go right to 13-3 and take incremental steps only when I need to. So, to dump the DB from 9.5-2.5, I ran:

    pg_dump -Fc -b -f /data/mbaritracking-9.5-2.5.backup mbaritracking
    
  11. Once the dump completed, I stopped the docker container that was running 9.5-2.5 and then started one with 13-3.0 using:

    docker run -it -v /Users/kgomes/Desktop/mbaritracking:/data -e POSTGRES_PASSWORD=xxxxxxx postgis/postgis:13-3.0
    
  12. This started the container with PostgreSQL 13 and PostGIS 3.0 installed and mounted the directory with the backup file into the container on /data.

  13. Now I am going to try to do the restore using the postgis_restore.pl script which I think will probably fail, but we will see. I connected to the container using:

    docker exec -it c72dd790ecad /bin/bash
    
  14. That connected me to the container as root. I then switched over to the postgres account using 'su postgres'.

  15. I first had to create the database and user, so I ran psql and executed the following steps:

    CREATE USER odssadm WITH PASSWORD 'xxxxxxx';
    CREATE DATABASE mbaritracking WITH OWNER=odssadm TEMPLATE=template_postgis;
    \c mbaritracking
    GRANT ALL ON ALL TABLES IN SCHEMA PUBLIC TO odssadm;
    
  16. I then exited psql using \q, followed by cd /usr/share/postgresql/13/contrib/postgis-3.0, which is where the postgis restore script is located. Another moment of truth; I ran:

    perl postgis_restore.pl /data/mbaritracking-9.5-2.5.backup | psql mbaritracking 2> ~/errors.txt
    
  17. Aaaaaaand ... it's doing something! Fingers crossed!!!

  18. The process had been running for a while on my Mac Mini and it looked like it was progressing, so I decided to give it a shot on normandy8. I ssh'd into normandy8 as odssadm and then ran:

    podman exec -it mbaritracking /bin/bash
    
  19. Back on my Mac Mini, I took the mbaritracking-9.5-2.5.backup file and copied it to the ODSS share under the tmp/pgbackup directory. That makes it available on the /data mount in the mbaritracking container on normandy8.

  20. Now, I would like to copy it to /pgdata so that there is a more robust and efficient connection to the file from the container (no network). To do this, in the container shell as the user root, I ran:

    cd /var/lib/postgresql/data/tmp
    cp /data/tmp/pgbackup/mbaritracking-9.5-2.5.backup .
    chown postgres:postgres mbaritracking-9.5-2.5.backup
    
  21. I switched over to the postgres account using su postgres and then ran psql followed by \l just to make sure the database was still there. Then I exited psql using \q so I was back at the container shell prompt as user postgres. Just like before, I ran:

    cd /usr/share/postgresql/13/contrib/postgis-3.0
    perl postgis_restore.pl /var/lib/postgresql/data/tmp/mbaritracking-9.5-2.5.backup | psql mbaritracking 2> ~/errors.txt
    
  22. And it started chugging away! While I was waiting for the restore to complete, the one on my Mac Mini finished and I poked around a bit and it looked good! Whew!

  23. The restore on normandy8 then finished; I ran a few queries, rebooted the machine, and it all worked! So, as a sanity check: ERDDAP, THREDDS, Mapserver, and the PostGIS MBARITracking database are all in place. WHEW!
  24. Now, it is time to create the python consumers that will read messages off RabbitMQ and stuff them into the tracking DB. There are quite a few environment variables for this container, so it was easier to create them in a file and then point to that file when I create the container. In the /home/odssadm/odss-components/mbaritracking directory, I created a file named .env-tracking and put the following in that file:

    # These are the environment variable settings for the tracking database AMQP clients
    DJANGO_SETTINGS_MODULE=settings
    TRACKING_ADMIN_NAME="Kevin Gomes"
    TRACKING_ADMIN_EMAIL=kgomes@mbari.org
    TRACKING_HOME=/opt/MBARItracking
    TRACKING_DATABASE_ENGINE=django.contrib.gis.db.backends.postgis
    TRACKING_DATABASE_NAME=mbaritracking
    TRACKING_DATABASE_USER=odssadm
    TRACKING_DATABASE_PASSWORD=xxxxxxx
    TRACKING_DATABASE_HOST=134.89.2.33
    TRACKING_DATABASE_PORT=5432
    TRACKING_SECRET_KEY="dumb-key"
    AMQP_QUEUE_TYPE=ships,ais,drifters,auvs,moorings,gliders
    AMQP_HOSTNAME=messaging.shore.mbari.org
    AMQP_PORT=5672
    AMQP_USERNAME=tracking
    AMQP_PASSWORD=xxxxxxxxx
    AMQP_VHOST=trackingvhost
    MBARI_TRACKING_HOME_HOST=/data/mbaritracking
    MBARI_TRACKING_HOST_NAME=normandy8
    PYTHONPATH=/opt/MBARItracking/amqp
    ALLOWED_HOSTS=*
    
  25. I also manually created the /data/mbaritracking/client/logs directory that will be used to write the logs from the consumers.

  26. That should be all I need to start the consumers, so I then ran:

    podman create --name tracking-clients --env-file /home/odssadm/odss-components/mbaritracking/.env-tracking -p 8081:80 -v /data/mbaritracking/client:/data mbari/tracking:1.0.4
    
  27. This just creates the container, but does not run it yet. In order to get the container to run as a service, I had to create the systemd file. Podman can generate the systemd file for you by running:

    podman generate systemd --name tracking-clients > ~/.config/systemd/user/container-tracking-clients.service
    
  28. I then had to make a couple of small changes, but the final systemd file was:

    # container-tracking-clients.service
    # autogenerated by Podman 2.1.1
    # Mon Dec 14 15:04:51 PST 2020
    # Updated by kgomes 2020-12-14: Updated the description and set Restart to always
    
    [Unit]
    Description=ODSS MBARI Tracking DB AMQP Consumers
    Documentation=man:podman-generate-systemd(1)
    Wants=network.target
    After=network-online.target
    
    [Service]
    Environment=PODMAN_SYSTEMD_UNIT=%n
    Restart=always
    ExecStart=/usr/bin/podman start tracking-clients
    ExecStop=/usr/bin/podman stop -t 10 tracking-clients
    ExecStopPost=/usr/bin/podman stop -t 10 tracking-clients
    PIDFile=/run/user/11290/containers/overlay-containers/ecb4f462b0a77921bb4a6d49e597b042521a0c6bfd810010131571fec210b4d3/userdata/conmon.pid
    KillMode=none
    Type=forking
    
    [Install]
    WantedBy=multi-user.target default.target
    
  29. I then enabled the service using:

    systemctl --user daemon-reload
    systemctl --user enable container-tracking-clients.service
    
  30. You can then start and stop the service using:

    systemctl --user start container-tracking-clients.service
    systemctl --user stop container-tracking-clients.service
    
  31. And check the status of the service using:

    systemctl --user status container-tracking-clients.service
    
  32. Damn, that actually worked! OK, so I have normandy8 running in parallel with normandy and both tracking DBs are being updated with identical information so that is GREAT!

  33. For a little bit of clean up, I connected to the mbaritracking container and removed the /var/lib/postgresql/data/tmp directory, which had the backup from normandy that was step-upgraded. There is limited space on that SSD disk, so I wanted to clean that out.

MongoDB

  1. The ODSS itself uses a catalog that is served by MongoDB. So we need to migrate the data from the old MongoDB to the new. I am going to try to use mongodump and mongorestore to see if that will bridge the version gap.
  2. Differing from the test machine, the directory /data/mongo already exists and is being used by the current production MongoDB on normandy. To improve performance, I wanted to move this to a VM SSD like I did with PostgreSQL, so I put in a ticket to have a 20GB /mongodb volume created alongside /pgdata, owned by 301000:301000. Pat hooked me up.
  3. Like with PostGIS, I wanted to try to run the mongodb user inside the container as uid 1000 so it could read from the /data directory if it ever needed to. To do this, I created a Dockerfile in odss-components/mongodb and entered:

    # This creates a Docker image from the MongoDB image and then changes the uid/gid of the mongodb account
    FROM mongo:4.4
    
    # Change the mongodb user from uid 999 to 1000 (and gid too)
    RUN groupmod -g 1000 mongodb && \
    usermod -u 1000 mongodb
    
  4. Now, to create the image, I ran:

    podman build -t mbari/odss-mongodb .
    
  5. Now, to create the container, I ran:

    podman create --security-opt label=disable --name mongodb -p 27017:27017 -e MONGO_INITDB_ROOT_USERNAME=mongoadm -e MONGO_INITDB_ROOT_PASSWORD=xxxxx -v /mongodb:/data/db mbari/odss-mongodb
    
  6. Then I have to create the systemd file using:

    podman generate systemd --name mongodb > ~/.config/systemd/user/container-mongodb.service
    
  7. I then edited that file to change it to:

    # container-mongodb.service
    # autogenerated by Podman 2.1.1
    # Tue Dec  8 16:28:26 PST 2020
    # Updated by kgomes 2020-12-08: Updated description and changed Restart to always
    
    [Unit]
    Description=ODSS MongoDB Service
    Documentation=man:podman-generate-systemd(1)
    Wants=network.target
    After=network-online.target
    
    [Service]
    Environment=PODMAN_SYSTEMD_UNIT=%n
    Restart=always
    ExecStart=/usr/bin/podman start mongodb
    ExecStop=/usr/bin/podman stop -t 10 mongodb
    ExecStopPost=/usr/bin/podman stop -t 10 mongodb
    PIDFile=/run/user/11290/containers/overlay-containers/644bf5e657ef6bd37abcf15f71ec121ed5952f6a6c945741c5efbe8bf63c5e8f/userdata/conmon.pid
    KillMode=none
    Type=forking
    
    [Install]
    WantedBy=multi-user.target default.target
    
  8. Then, I needed to enable the service using:

    systemctl --user daemon-reload
    systemctl --user enable container-mongodb.service
    
  9. You can then start and stop the service using:

    systemctl --user start container-mongodb.service
    systemctl --user stop container-mongodb.service
    
  10. And check the status of the service using:

    systemctl --user status container-mongodb.service
    
  11. After starting the service, I ran podman ps to confirm the container was running, then jumped into a bash shell:

    podman ps
    podman exec -it mongodb /bin/bash
    
  12. In another window, I logged into the old normandy as odssadm. I then ran:

    mkdir /data/tmp/mongobackup
    cd /data/tmp/mongobackup
    mongodump -u odssadm -p xxxxxxxxxxx --db odss --out /data/tmp/mongobackup/odss.backup
    
  13. Then, over on normandy8, I needed to copy that backup into the /mongodb directory so the container could see it. Inside the container I created the /data/db/tmp directory. Then from a normandy8 host command line prompt I ran:

    cp -r /data/tmp/mongobackup/odss.backup /mongodb/tmp
    
  14. Then, inside the mongodb container, I ran:

    cd /data/db/tmp/odss.backup
    mongorestore --db odss --username mongoadm --authenticationDatabase=admin odss
    
  15. This created the odss database in the local mongodb instance. So I am really close, but I need to create a user that can have full rights to the DB and can then be used by the ODSS application. Traditionally, I’ve used the username odssadm. So, to create that user and give it full permissions on the odss database, I entered the mongo shell as user mongoadm and then ran:

    mongo -u mongoadm -p xxxxx
    use odss
    db.createUser({user:"odssadm",pwd:"xxxxxx",roles:[{role:"dbOwner",db:"odss"}]})
    quit()
    
  16. I did have to put in a ticket to add port 27017 to the firewall so I could get to it from my Studio 3T application; after that, I could connect. I think that is all I need for MongoDB.

  17. Just to clean up, inside the container, I removed the /data/db/tmp directory.
  18. I rebooted just to make sure everything came back online.

Support Scripts

Danelle had written some supporting scripts to help with managing the ERDDAP and THREDDS data sets so that we could replicate them on the boat more easily. I migrated them to a Docker deployment and added the configuration to the Git repo in the resources/deployment/odss-utils directory. To get this all set up:

  1. ssh'd into the normandy8 machine as odssadm
  2. ran cd /data to get to the /data directory.
  3. ran mkdir utils to create the /data/utils directory
  4. ran cd utils to get into the directory
  5. ran mkdir logs to create a logs directory
  6. ran vi mapdatasets.json to create a new JSON configuration file for the utility scripts. I copied and pasted an example that I had been working with and edited it to fit the deployment on the normandy machine.
  7. ran cd to get back to the home directory and ran

    podman create --security-opt label=disable --name odss-utils -v /data/erddap/data:/data/erddap/data -v /data/mapserver:/data/mapserver -v /data/utils/logs:/var/log -v /data/utils/mapdatasets.json:/mapdatasets.json mbari/odss-utils:1.0.9
    
  8. I then ran podman generate systemd --name odss-utils > ~/.config/systemd/user/container-odss-utils.service

  9. I then edited the service file that was generated so that it looked like the following:

    # container-odss-utils.service
    # autogenerated by Podman 3.2.3
    # Mon Oct 11 13:24:48 PDT 2021
    
    [Unit]
    Description=ODSS Utilities Container
    Documentation=man:podman-generate-systemd(1)
    Wants=network.target
    After=network-online.target
    RequiresMountsFor=/run/user/11290/containers
    
    [Service]
    Environment=PODMAN_SYSTEMD_UNIT=%n
    Restart=always
    TimeoutStopSec=70
    ExecStart=/usr/bin/podman start odss-utils
    ExecStop=/usr/bin/podman stop -t 10 odss-utils
    ExecStopPost=/usr/bin/podman stop -t 10 odss-utils
    PIDFile=/run/user/11290/containers/overlay-containers/8d2273098d8df6993adc8b03d2cd7de6254be281f413b3638899eb27270e1351/userdata/conmon.pid
    Type=forking
    
    [Install]
    WantedBy=multi-user.target default.target
    
  10. Then ran systemctl --user daemon-reload

  11. Followed by systemctl --user enable container-odss-utils.service
  12. You can then start and stop the service using:

    systemctl --user start container-odss-utils.service
    systemctl --user stop container-odss-utils.service
    
  13. And check the status of the service using:

    systemctl --user status container-odss-utils.service

ODSS

Warning

There are two sections here because the first time I deployed it, I used a Docker container, but the updates were very painful, so in March of 2023 I switched over to a Git clone deployment; see that section below for the current deployment method.
Docker Container Method
  1. OK, now the final piece of the puzzle is to get the ODSS container running. I need a couple of locations on normandy8, one to house the repository files and one for the server log files. The repository files already exist on normandy8 in the /data mount. I then created a /data/server/logs directory to hold the server logs. It turns out I did not have enough disk space to do this, so I copied a couple of directories over while I waited on the trouble ticket to increase disk space.
  2. Once that was done, I could create the container by running:

    podman create --name odss-server -p 3000:3000 -e ODSS_MONGODB_HOST="134.89.2.33" -e ODSS_MONGODB_USERNAME=odssadm -e ODSS_MONGODB_PASSWORD=xxxxxx -e ODSS_TRACKING_DATABASE_HOST="134.89.2.33" -e ODSS_TRACKING_DATABASE_USERNAME=odssadm -e ODSS_TRACKING_DATABASE_PASSWORD=xxxxxxxxxx -e ODSS_LOG_FILE_LOCATION=/data/logs/server -e ODSS_REPO_ROOT_DIR=/data -v /data:/data mbari/odss:1.1.4
    
  3. This just creates the container, but does not run it yet. In order to get the container to run as a service, I had to create the systemd file. Podman helps you create the systemd file by running:

    podman generate systemd --name odss-server > ~/.config/systemd/user/container-odss-server.service
    
  4. I then had to make a couple of small changes, but the final systemd file was:

    # container-odss-server.service
    # autogenerated by Podman 3.2.3
    # Tue May 17 12:02:58 PDT 2022
    
    [Unit]
    Description=ODSS Server
    Documentation=man:podman-generate-systemd(1)
    Wants=network.target
    After=network-online.target
    RequiresMountsFor=/run/user/11290/containers
    
    [Service]
    Environment=PODMAN_SYSTEMD_UNIT=%n
    Restart=always
    TimeoutStopSec=70
    ExecStart=/usr/bin/podman start odss-server
    ExecStop=/usr/bin/podman stop -t 10 odss-server
    ExecStopPost=/usr/bin/podman stop -t 10 odss-server
    PIDFile=/run/user/11290/containers/overlay-containers/bd703b097299df94fa752695c389757d881f1a2fedbf1dee75c59aa623d36742/userdata/conmon.pid
    Type=forking
    
    [Install]
    WantedBy=multi-user.target default.target
    
  5. I then enabled the service using:

    systemctl --user daemon-reload
    systemctl --user enable container-odss-server.service
    
  6. You can then start and stop the service using:

    systemctl --user start container-odss-server.service
    systemctl --user stop container-odss-server.service
    
  7. And check the status of the service using:

    systemctl --user status container-odss-server.service
    
  8. So, there was a very strange error when I started the ODSS. Everything seemed to work except for the tracks were not rendering. I looked at the logs and was seeing errors in the RepoRouter to the effect:

    Cannot find SRID (4326) in spatial_ref_sys
    
  9. After a quick Google search, it looks like the spatial_ref_sys table in the mbaritracking DB did not make the transition through all my steps. Fortunately, it was an easy fix as there is a script located in /usr/share/postgresql/13/contrib/postgis-3.0 named spatial_ref_sys.sql that has the statements to re-build that table. I connected to the container using podman exec -it mbaritracking /bin/bash, then ran su postgres, followed by psql -d mbaritracking -f /usr/share/postgresql/13/contrib/postgis-3.0/spatial_ref_sys.sql and everything worked. Makes me a bit nervous, but tracks are coming in and showing on the ODSS.
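
  As a quick sanity check that the table is back, something like this inside the container (as the postgres user, in psql) should do:

    -- spatial_ref_sys should have thousands of rows, and SRID 4326 (WGS 84) must exist
    SELECT COUNT(*) FROM spatial_ref_sys;
    SELECT srid, auth_name FROM spatial_ref_sys WHERE srid = 4326;
    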

  10. To direct users from the base HTTP server, I put an index.html page at /var/www/html that had this so users would be redirected to the ODSS properly (I had to take ownership of the html directory and then put the ownership back after):
    <html>
    <head>
    <title>Redirecting to the ODSS</title>
    <meta HTTP-EQUIV="REFRESH" content="0; url=https://normandy8.mbari.org/odss/">
    </head>
    <body>
    <h2>Redirecting you to the ODSS ...</h2>
    </body>
    </html>
    
Git Clone Method

In March of 2023, I was trying to deploy a fix to the ESRI map and the normal workflow (develop/test, build an ODSS image, push it to the repo, then pull it on the production server) broke. Because of this, and to make the deployment of the ODSS exactly like the test machine, I shut off the podman container, checked out the code directly, and ran it from there as a user service. Here is how I set that up.

  1. I ssh'd into normandy8 and then ran su odssadm
  2. I then checked out the odss codebase directly into the home directory of /home/odssadm.
  3. I cd'd into the odss/webapp/server directory and copied the config.js.template to config.js and edited to look like:

        /**
        * This is the configuration file that tries to read in configuration from the environment
        * and if it does not find properties, will use the properties defined in this file directly
        */
        module.exports = {
    
                // The location of the log files
                logFileLocation: process.env.ODSS_LOG_FILE_LOCATION || '/data/logs/server',
    
                // The directory which is the root of all data repositories
                repoRootDir: process.env.ODSS_REPO_ROOT_DIR || '/data',
    
                // The base URL that points to repoRootDir
                repoRootDirUrl: process.env.ODSS_REPO_ROOT_DIR_URL || '/odss/data',
    
                // The options for the app.js startup script
                appOptions: {
                        trackingUpdateInterval: Number(process.env.ODSS_TRACKING_UPDATE_INTERVAL) || 60000,
                        loggerLevel: process.env.ODSS_APP_LOGGER_LEVEL || 'info'
                },
    
                // The configuration options for the datastore
                dataStoreOptions: {
                        mongodbHost: process.env.ODSS_MONGODB_HOST || 'localhost',
                        mongodbPort: Number(process.env.ODSS_MONGODB_PORT) || 27017,
                        mongodbName: process.env.ODSS_MONGODB_DATABASE_NAME || 'odss',
                        mongodbUsername: process.env.ODSS_MONGODB_USERNAME || 'odssadm',
                        mongodbPassword: process.env.ODSS_MONGODB_PASSWORD || 'xxxxxx',
                        trackingDBProtocol: process.env.ODSS_TRACKING_DATABASE_PROTOCOL || 'postgres',
                        trackingDBHostname: process.env.ODSS_TRACKING_DATABASE_HOST || 'localhost',
                        trackingDBPort: Number(process.env.ODSS_TRACKING_DATABASE_PORT) || 5432,
                        trackingDBDatabaseName: process.env.ODSS_TRACKING_DATABASE_NAME || 'mbaritracking',
                        trackingDBUsername: process.env.ODSS_TRACKING_DATABASE_USERNAME || 'odssadm',
                        trackingDBPassword: process.env.ODSS_TRACKING_DATABASE_PASSWORD || 'xxxxxx'
                },
    
                // The configuration options for the application server
                appServerOptions: {
                        hostBaseUrl: process.env.ODSS_APP_SERVER_HOST_BASE_URL || '/',
                        port: Number(process.env.ODSS_APP_SERVER_PORT) || 3000,
                        expressCookieParserSecret: process.env.ODSS_APP_SERVER_EXPRESS_COOKIE_PARSER_SECRET || 'mysecret',
                        expressSessionSecret: process.env.ODSS_APP_SERVER_EXPRESS_SESSION_SECRET || 'mysessionsecret',
                        esriApiKey: process.env.ODSS_APP_SERVER_ESRI_API_KEY || "xxxxxxxxxxxxxxxxxxxxxxxx",
                }
        }
    
  4. I stopped the current podman container by running:

    systemctl --user stop container-odss-server.service
    systemctl --user disable container-odss-server.service
    
  5. I then removed the ~/.config/systemd/user/container-odss-server.service file

  6. I refreshed the systemctl daemon using systemctl --user daemon-reload
  7. I then created a new file ~/.config/systemd/user/odss-server.service and input the following:

    [Unit]
    Description=The ODSS web application server
    Documentation=https://odss.mbari.org
    After=network.target
    
    [Service]
    Type=simple
    WorkingDirectory=/home/odssadm/odss/webapp/server
    ExecStart=/usr/bin/node /home/odssadm/odss/webapp/server/app.js
    Restart=always
    
    [Install]
    WantedBy=multi-user.target
    
  8. I refreshed the systemctl daemon using systemctl --user daemon-reload

  9. I then enabled the new odss server and started it by running:
    systemctl --user enable odss-server.service
    systemctl --user start odss-server.service
    

ODSS Planning

  1. Now, it was time to install Carlos' planning tool.
  2. I first had IS install Git, NodeJS (12) and Quasar CLI.
  3. I cd'd to the /home/odssadm directory and ran:

    git clone https://kgomes@bitbucket.org/mbari/odss-planning.git
    
  4. That created an odss-planning directory. I cd'd into that directory and ran:

    npm install @quasar/cli
    npm install
    /home/odssadm/odss-planning/node_modules/@quasar/cli/bin/quasar build
    cd dist/spa
    tar zcf ../odss-planning.tgz *
    
  5. That created a tar gzipped file of a static website

  6. I changed ownership on the /var/www/html directory to odssadm (sudo chown odssadm /var/www/html)
  7. I then created a odss-planning directory in /var/www/html
  8. Then I copied the odss-planning.tgz file into that directory and unpacked it using tar xf odss-planning.tgz
  9. Lastly, I deleted the odss-planning.tgz file and changed the /var/www/html ownership back to root
  10. I can see it in the odss tab on normandy8!

TimeZero file creation

WHEW

Going Into Production

Soft Cut Over

We are attempting a 'soft' cutover to the new ODSS at 3PM today (12/9).

Joe is doing the following:

  1. Change CNAME in DNS for odss.mbari.org and point it to normandy8.mbari.org.
  2. Change odss.conf on normandy8 and have it reflect ServerName odss.mbari.org and restart apache.
  3. Change /etc/httpd/conf.d/ssl.conf on normandy to reflect normandy.mbari.org:443 instead of odss.mbari.org:443 and restart apache

I will change:

  1. I will edit the /data/erddap/content/erddap/setup.xml file and change the baseUrl from https://normandy8.mbari.org to https://odss.mbari.org just to see if that is OK and then restart the ERDDAP server.
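
  The setup.xml edit is just the baseUrl value; roughly like this (the exact surrounding content of ERDDAP's setup.xml is not shown here):

    <baseUrl>https://odss.mbari.org</baseUrl>
    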

So, this did not go well. When we first did the cut over, everything looked good, but quickly the tracking data API queries came to a grinding halt. I did not catch this before because I did not have Carlos' API hitting the new server. His queries end up triggering the query:

    SELECT datetime, ST_AsText(ST_Transform(geom,4326)) FROM position WHERE platform_id in (SELECT id from platform where name ='Paragon') ORDER BY datetime DESC, id LIMIT 20;

Which took literally forever. The query still ran OK on normandy, but on normandy8 something was different. We rolled back the cutover and I started to dig in. From an indexing and data perspective, the DBs look exactly the same, so I don't know why things are so different, but the bottom line is that it comes down to queries and indexes. I added the following indexes to the DB on normandy8:

    CREATE INDEX position_platform_id_datetime ON position (platform_id, datetime);
    CREATE INDEX position_platform_id_datetime_desc ON position (platform_id, datetime DESC);
    CREATE INDEX position_datetime_platform_id ON position (datetime, platform_id);
    CREATE INDEX position_datetime_desc_platform_id ON position (datetime DESC, platform_id);

And then, in the DataStore.js file, I changed line 247 from the query clause ORDER BY datetime DESC, id LIMIT to ORDER BY platform_id, datetime DESC LIMIT which seemed to keep the regular track queries on the map quick (actually quicker) and then the API queries that Carlos was making worked again. So, in order to roll those changes out, I had to check in a new version of the ODSS, build a new Docker image, push that to DockerHub and then build a new container on normandy8 with that new image.
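
To see whether the planner actually picks up one of the new indexes for a given query, EXPLAIN ANALYZE on the exact statement is the quickest check; a sketch using the slow query from above with the new ORDER BY clause:

    EXPLAIN ANALYZE
    SELECT datetime, ST_AsText(ST_Transform(geom,4326)) FROM position
    WHERE platform_id in (SELECT id from platform where name ='Paragon')
    ORDER BY platform_id, datetime DESC LIMIT 20;
    -- An Index Scan on position_platform_id_datetime_desc in the plan output
    -- (rather than a Sort over a Seq Scan) means the index is being used.
    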

Note that the first two indexes fixed problems with queries that were using ORDER BY platform_id, datetime DESC, but that did not work when different platforms had the same name. So I changed the query to ORDER BY datetime DESC, platform_id and it went back to the same problem as before. So I tried to create the last two indexes above, but no luck. Still looking into it. Ultimately, I could not figure out what the hell PostgreSQL was doing. There were times when queries looked identical except for platform id (or id list) and things would change, but some queries would take up to 30 minutes to run. Ridiculous. I took what was basically one join query, which embedded the query for all ids by platform name inside the query for locations, and broke that into two steps in the DataStore.js code. Some things worked better, but others then broke. At that point, I just decided to clean up the database to get rid of the multiple platforms with the same name. It turned out there were only about 8 platforms that fell into that category, with some names having up to 4 different platform entries. The basic steps I took were:

  1. Find all platforms with multiple entries with the same name:

    SELECT name, COUNT(*) FROM platform GROUP BY name HAVING COUNT(*) > 1 ORDER BY COUNT(*) DESC;
    
  2. For each of these platforms, query for the positions to see which platform had the most recent data.

  3. Updated the positions of the older platforms to point to the latest platform.
  4. Deleted older platforms.
  5. By the way, a great query to use to monitor what things are happening in the DB is to connect to the mbaritracking DB and run:
    select pid, query from pg_stat_activity where datname = 'mbaritracking'
    
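
  Steps 2 through 4 can be sketched in SQL, one duplicated name at a time; the name 'Paragon' and the ids below are hypothetical placeholders:

    -- 2. See which platform id for a duplicated name has the most recent data
    SELECT p.id, MAX(pos.datetime) AS latest
    FROM platform p LEFT JOIN position pos ON pos.platform_id = p.id
    WHERE p.name = 'Paragon' GROUP BY p.id ORDER BY latest DESC NULLS LAST;
    
    -- 3. Re-point the positions of the older platforms to the one being kept
    UPDATE position SET platform_id = 123 WHERE platform_id IN (456, 789);
    
    -- 4. Delete the now-empty older platform rows
    DELETE FROM platform WHERE id IN (456, 789);
    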

During this thrash, the general steps taken to deploy the ODSS during updates were:

  1. Changed the README.md in the ODSS repo on my local machine describing the change to DataStore.js
  2. Checked in the change to README.md and DataStore.js locally
  3. Created a new 1.0.1 (then 1.0.2 and 1.0.3 to try and fix bug in queries) TAG locally.
  4. Pushed the Master changes and the tag up to the remote origin
  5. Built a new Docker image using docker build -t mbari/odss:1.0.1 .
  6. Pushed the new image to DockerHub using docker push mbari/odss:1.0.1
  7. Then back on Normandy8, I ran:

    systemctl --user stop container-odss-server
    systemctl --user disable container-odss-server
    systemctl --user daemon-reload
    rm ~/.config/systemd/user/container-odss-server.service
    podman rm odss-server
    podman images (to get the IMAGE ID)
    podman rmi 7008d2f5f8d8
    podman create --name odss-server -p 3000:3000 -e ODSS_MONGODB_HOST="134.89.2.33" -e ODSS_MONGODB_USERNAME=odssadm -e ODSS_MONGODB_PASSWORD=xxxxxx -e ODSS_TRACKING_DATABASE_HOST="134.89.2.33" -e ODSS_TRACKING_DATABASE_USERNAME=odssadm -e ODSS_TRACKING_DATABASE_PASSWORD=xxxxxxxxxx -e ODSS_LOG_FILE_LOCATION=/data/logs/server -e ODSS_REPO_ROOT_DIR=/data -v /data:/data mbari/odss:1.0.1
    podman generate systemd --name odss-server > ~/.config/systemd/user/container-odss-server.service
    
  8. I edited the container-odss-server.service file to contain:

    # container-odss-server.service
    # autogenerated by Podman 2.1.1
    # Thu Dec 10 08:07:02 PST 2020
    # Updated by kgomes 2020-12-10: Updated description and change Restart to always
    
    [Unit]
    Description=ODSS Server
    Documentation=man:podman-generate-systemd(1)
    Wants=network.target
    After=network-online.target
    
    [Service]
    Environment=PODMAN_SYSTEMD_UNIT=%n
    Restart=always
    ExecStart=/usr/bin/podman start odss-server
    ExecStop=/usr/bin/podman stop -t 10 odss-server
    ExecStopPost=/usr/bin/podman stop -t 10 odss-server
    PIDFile=/run/user/11290/containers/overlay-containers/b60130221acc721f7c217273cba8df9cab6e8ec2e55b63d8fc7a25c651b95e6b/userdata/conmon.pid
    KillMode=none
    Type=forking
    
    [Install]
    WantedBy=multi-user.target default.target
    
  9. I then ran:

    systemctl --user daemon-reload
    systemctl --user enable container-odss-server.service
    systemctl --user start container-odss-server.service
    
  10. Everything looks great so we are going to give the soft cut-over another go.

Joe is doing the following:

  1. Change CNAME in DNS for odss.mbari.org and point it to normandy8.mbari.org.
  2. Change /etc/httpd/conf.d/odss.conf on normandy8 and have it reflect ServerName odss.mbari.org.
  3. Change /etc/httpd/conf.d/ssl.conf on normandy8 and have it reflect ServerName odss.mbari.org. and restart apache
  4. Change /etc/httpd/conf.d/ssl.conf on normandy to reflect normandy.mbari.org:443 instead of odss.mbari.org:443 and restart apache

I will change:

  1. I will edit the /data/erddap/content/erddap/setup.xml file and change the baseUrl from https://normandy8.mbari.org to https://odss.mbari.org just to see if that is OK and then restart the ERDDAP server.
  2. I also changed /var/www/html/index.html to forward to odss.mbari.org instead of normandy8.

Final steps to cut over

  1. There are some rsync jobs that run via cron on both ships (servers zuma on Carson and malibu on Flyer) that copy files back and forth between the ships and Normandy that needed to be changed to work with normandy8. Pat set up all the ssh stuff so rsync could work password-less and I updated all the synchronization jobs and the crontab entries to point to normandy8 instead of normandy. I tested with a couple of files and everything worked while both ships were connected to copper. I submitted a ticket to make sure the Palo Alto on the ship lets these connections happen over satellite and Adrianna changed them, but we did not test off of copper yet.
  2. I then ssh'd into normandy and edited the crontab to stop the check for AMQP consumers. For documentation purposes, here is the full crontab, because there are still some cron jobs running that I may have to resurrect on normandy8:

    # Set bash as default shell script
    SHELL=/bin/bash
    
    # This is the script that runs to make sure the tracking database consumers are running
    #*/10 * * * * /home/odssadm/dev/MBARItracking/amqp/runPersistMonitor.sh >> /data/logs/tracking/persistWatchdog.log 2>&1
    
    # This entry runs the script to extract saildrone positions and send them to ODSS/tracking db
    #* * * * * /home/odssadm/dev/saildrone-extraction/run-sd-extract.sh >> /data/logs/tracking/saildrone.log 2>&1
    
    # This is the cron entry to make sure the ODSS start on reboot
    @reboot /usr/local/nodejs/bin/forever start /opt/odss-node/server/app.js
    
    # restart the thredds server nightly to ensure correct forecast cataloging
    00 00 * * * /home/odssadm/dev/restart-thredds.sh > /tmp/thredds.log 2>&1
    
    # clean up tilecache
    @daily  /var/www/tilecache/tilecache_clean.py -s500 /tmp/tilecache >> /data/logs/tilecache/clean.log 2>&1
    */30 * * * * find /tmp/tilecache -name '*.png' -size 334c -exec /bin/rm {} \; &> /data/logs/tilecache/334.log 2>&1
    
    # clean up data older than 84 hours
    #@daily find /data/erddap/data/rssoffice -name '*.nc' -mmin +5040 -type f -exec rm {} \;&  2>&1
    
    # clean up ferret journal files created from script that subsets ROM data. Can't disable all journalling in Ferret.
    0 */5 * * * find /home/odssadm/dev/mbari-dss/src/mapcfg/scripts -name 'ferret.jnl*' -exec /bin/rm {} \; &> /data/logs/ferret/ferret.log 2>&1
    
    # download and subset data for catalog in ERDDAP and use in ODSS
    0 */6 * * * /home/odssadm/dev/mbari-dss/src/mapcfg/scripts/download-remotesensing.sh > /data/logs/download/download-remotesensing.log 2>&1
    
    # download roms data at every 30 minute past the 0,4,8,16 and 20th hour
    # ROMS now longer supported
    #30 */4 * * * /home/odssadm/dev/mbari-dss/src/mapcfg/scripts/download-roms.sh > /data/logs/download/download-roms.log 2>&1
    
    # update mapserver mapfiles with the latest available timecodes for each layer every hour
    0 * * * * /home/odssadm/dev/mbari-dss/src/mapcfg/scripts/updatemaps.sh > /data/logs/map/updatemaps.log 2>&1
    
    # update front detections every 5 minutes
    */5 * * * * /home/odssadm/dev/mbari-dss/src/mapcfg/scripts/updatefrontdet.sh > /data/logs/map/updatefrontdet.log 2>&1
    
    # update AIS data with the latest available data in the tracking database
    #*/10 * * * * /home/odssadm/dev/mbari-dss/src/mapcfg/scripts/updateais.sh > /data/logs/map/updateais.log 2>&1
    #@hourly  /home/odssadm/dev/mbari-dss/src/mapcfg/scripts/updateais_hourly.sh > /data/logs/map/updateais_hourly.log 2>&1
    
    # Run the script to extract the wg gotoWatch commands and publish to ODSS
    #*/5 * * * * cd /home/odssadm/dev/wg-extract-goto-watch && ./extract-publish.sh >> /home/odssadm/dev/wg-extract-goto-watch/extract-publish.log 2>&1
    
    ## (2020-11-25, Carlos) move odss2dash service to tethysdash2
    ##------- (2018-08-15, Carlos) enable odss2dash service:
    ##------- @reboot cd /home/odssadm/dev/odss2dash && source ./setenv.sh && java -jar odss2dash-0.0.6.jar run-server -d >> odss2dash.out 2>&1
    
    # (2019-06-01, KGomes) This is a script to write reports on tracking latest data for troubleshooting lags on the Western Flyer
    # This one creates shore based reports for comparison to those on the boats
    #* * * * * /usr/local/nodejs/bin/node /home/odssadm/dev/tracking-monitor/index.js
    
  3. Here are some notes related to the entries in the crontab:

    1. The runPersistMonitor entry was commented out to stop consumers from re-starting and this has been migrated and is working on normandy8
    2. The saildrone extraction code was migrated to normandy8 in the /home/odssadm/dev/saildrone-extraction directory, but I did not put an entry in the crontab on normandy8 because it's not needed right now (note I never tested it on normandy8 because I can't; I think the API keys are expired)
    3. The 'forever' script is meant to start the ODSS on boot, but since we've moved to containers that is no longer valid so not needed on normandy8
    4. I did not migrate the restart-thredds.sh script as I'm hoping I don't need it. If I do, here it is:

    #!/bin/bash
    echo "Stopping thredds"
    sudo /sbin/service tomcat-thredds stop
    sleep 60
    echo "Restarting thredds"
    sudo /sbin/service tomcat-thredds start

    5. I did not install tilecache on normandy8, so I am not worried about the entries to keep the tilecache clean; I might install it someday, but I'm not worrying about it for now.
    6. The download-remotesensing.sh script is something that needs to be migrated, or maybe I should point to the real maps on the source server (maybe I do that on the boats?).
    7. The download-roms.sh script hasn't updated files since 2018, so I don't think it's necessary.
    8. It looks like updatemaps.sh is a script that needs to be migrated.
    9. I heard from Danelle that updatefrontdet.sh is no longer used.
    10. The updateais scripts were already commented out, so I did not migrate them.
    11. Carlos had already moved the odss2dash scripts.
    12. The tracking monitor script had been commented out for a long time, so I did not migrate it.
  4. I then ran a ps command and grepped for consumers so I could get the PIDs for those processes and then killed each one.
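The find-and-kill step looks roughly like the sketch below. It uses a placeholder `sleep` process so it is self-contained; the actual grep pattern for the tracking consumers is an assumption:

```shell
# Stand-in for a consumer process so the sketch is self-contained
sleep 300 &
expected=$!

# Find the PID the same way: ps piped through grep (the bracket trick keeps
# the grep process itself out of the results)
found=$(ps aux | grep '[s]leep 300' | awk '{print $2}' | head -n 1)

# Kill it and confirm it is gone
kill "$found"
wait "$found" 2>/dev/null || true
kill -0 "$expected" 2>/dev/null || echo "process $expected is gone"
```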

  5. I logged into the RabbitMQ web management console and deleted all the queues associated with normandy (they were building up messages)
  6. I started to look into the ERDDAP WMS layers that were updated by a couple of the scripts and found some issues: some were broken, others out of date. I wanted to see if there was a way to use the source ERDDAP servers directly so we do not have to copy down these subsetted files. When I looked at the production machine, the 3-day MODIS Chlorophyll layer seemed to be working, so I used that as an example. The legend was up to date, and the map showed colors, so it is probably also up to date. Looking into the stack, I found:

    1. MongoDB:

      1. The base URL is /cgi-bin/mapserv?
      2. The legend URL is /cgi-bin/mapserv?MAP=/data/mapserver/mapfiles/odss.map&SERVICE=WMS&VERSION=1.1.1&REQUEST=getlegendgraphic&FORMAT=image%2Fpng&LAYER=erdMWchla3day:chlorophyll&RULE=erdMWchla3day:chlorophyll
      3. Here are the params and options for the WMS call from the ODSS app:
        "params" : {
            "map" : "/data/mapserver/mapfiles/odss.map",
            "service" : "WMS",
            "version" : "1.1.1",
            "request" : "GetMap",
            "srs" : "EPSG:4326",
            "exceptions" : "application/vnd.ogc.se_iniimage",
            "layers" : "erdMWchla3day:chlorophyll",
            "elevation" : "0.0",
            "transparent" : "true",
            "bgcolor" : "0x808080",
            "format" : "image/png"
        },
        "options" : {
            "timeformat" : "UTC:yyyy-mm-dd'T'00:00:00'Z'",
            "hoursbetween" : NumberInt(24),
            "minhoursback" : NumberInt(72)
        }
        
  7. This means that when the ODSS application reads in the layer information, it constructs a WMS layer call (via Leaflet) using these params and options.

  8. The key to this call is the odss.map file entry and the layer name which is erdMWchla3day:chlorophyll. This ends up making a network call that looks like: https://odss.mbari.org/cgi-bin/mapserv?&service=WMS&request=GetMap&layers=erdMWchla3day%3Achlorophyll&styles=&format=image%2Fpng&transparent=true&version=1.1.1&map=%2Fdata%2Fmapserver%2Fmapfiles%2Fodss.map&exceptions=application%2Fvnd.ogc.se_iniimage&elevation=0.0&bgcolor=0x808080&width=256&height=256&srs=EPSG%3A4326&bbox=-121.99218750000001,36.31512514748051,-121.64062500000001,36.59788913307022
  9. And a call to get the legend using: https://odss.mbari.org/cgi-bin/mapserv?MAP=/data/mapserver/mapfiles/odss.map&SERVICE=WMS&VERSION=1.1.1&REQUEST=getlegendgraphic&FORMAT=image%2Fpng&LAYER=erdMWchla3day:chlorophyll&RULE=erdMWchla3day:chlorophyll
  10. This all seems to work which means it's going through mapserver correctly. If I look at the mapserver file odss.map, it includes the specific file by using INCLUDE "coastwatch/chl3day.layer.map".
  11. That file looks like:

    LAYER
      NAME 'erdMWchla3day:chlorophyll'
      TYPE RASTER
      STATUS ON
      CLASS
        NAME   "erdMWchla3day:chlorophyll"
        KEYIMAGE   "legend-icons/erdMWchla3day0.0chlorophyll.png"
      END
    
      #CONNECTION 'http:/coastwatch.pfeg.noaa.gov/erddap/wms/erdMWchla3day/request?TIME=%TIME%&ELEVATION=%ELEVATION%'
      CONNECTION 'http://normandy.mbari.org/erddap/wms/erdMWchla3day/request?TIME=%TIME%&ELEVATION=%ELEVATION%'
      CONNECTIONTYPE WMS
    
      PROJECTION
        'init=epsg:4326'
      END
    
      METADATA
        'wms_title'           'erdMWchla3day:chlorophyll'
        'wms_name'            'erdMWchla3day:chlorophyll'
        'wms_onlineresource'  'http://localhost/cgi-bin/mapserv?map=/data/mapserver/mapfiles/odss.map'
        'wms_server_version'  '1.1.0'
        'wms_srs'             'EPSG:4326'
        'wms_format'          'image/png'
        'wms_transparent'     'true'
      END
    
      VALIDATION
        'default_time'        '2021-01-10T:12:00:00Z'
        'default_elevation'   '0.0'
        'TIME'          '.'
        'ELEVATION'     '.'
      END
    END
    
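Assembling the params stored in MongoDB onto the mapserv endpoint reproduces the GetMap URL shown above; Leaflet appends the per-tile width, height, and bbox itself at request time. A sketch:

```shell
# Build the GetMap URL from the params stored in the MongoDB layer document;
# Leaflet adds width/height/bbox per tile at request time.
base="https://odss.mbari.org/cgi-bin/mapserv"
query="map=/data/mapserver/mapfiles/odss.map"
query="$query&service=WMS&version=1.1.1&request=GetMap"
query="$query&layers=erdMWchla3day:chlorophyll&srs=EPSG:4326"
query="$query&format=image/png&transparent=true&bgcolor=0x808080&elevation=0.0"

echo "$base?$query"
```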

Create/Change CNAMEs odss->normandy8, normandy->normandy8

  1. Do I need to change the ServerName in the /etc/httpd/conf.d/odss.conf?
  2. Do I need to edit the /data/erddap/content/erddap/setup.xml file and change the baseUrl from https://normandy8.mbari.org to either https://normandy.mbari.org or https://odss.mbari.org?

Post

  1. Delete normandy8 queues on RabbitMQ
  2. Remove the old log files from the ODSS server which are in /data/logs/server/old.
  3. Clean up the files in the /data/erddap directory that were copied to bigParentDirectory.
    rm -rf /data/erddap/a
    rm -rf /data/erddap/cache
    rm -rf /data/erddap/copy
    rm /data/erddap/currentDatasets.xml
    rm -rf /data/erddap/dataset
    rm -rf /data/erddap/flag
    rm -rf /data/erddap/logs
    rm -rf /data/erddap/lucene
    rm /data/erddap/mapdatasets.xml
    rm -f /data/erddap/subscriptionsV1*.txt
    

Upgrade the ODSS

Originally, I had the ODSS running as a podman image (see the instructions above on the Docker Container Method of deploying the server), but later changed it to a direct Git clone method so updates and changes were much easier. Since you can deploy it either way, there are instructions below on how to upgrade in both scenarios: first with a container image, then with a direct Git clone.

Upgrade with Docker Container Method

When new releases of the ODSS docker image are published to DockerHub, these are the steps to take to upgrade the image. In general terms, you need to stop the current services (and thus the container), disable the current container-odss-server.service, and remove the container and its service file. Then you create the new container version and service file and re-enable them. Here is an example of an upgrade to v1.1.4:

  • SSH into the server using ssh username@normandy8.mbari.org
  • Then switch over to the odssadm account using

    [username@normandy8 ~]$ su odssadm
    
  • Stop the ODSS container using systemctl

    systemctl --user stop container-odss-server.service
    
  • Next, disable the systemctl startup using

    [odssadm@normandy8 ~]$ systemctl --user disable container-odss-server
    Removed /home/odssadm/.config/systemd/user/default.target.wants/container-odss-server.service.
    Removed /home/odssadm/.config/systemd/user/multi-user.target.wants/container-odss-server.service.
    
  • Now remove the container using podman

    [odssadm@normandy8 ~]$ podman rm odss-server
    35aa2e6266b76808a91e2bfae88337160c5a72702ae31c731f04a83c16d64f90
    
  • Remove the service definition file as well by running

    rm ~/.config/systemd/user/container-odss-server.service
    
  • This removes the ODSS server completely. Now it's time to install the newer version, which is essentially like installing it from scratch. Here is the process:

    podman create --name odss-server -p 3000:3000 -e ODSS_MONGODB_HOST="134.89.2.33" -e ODSS_MONGODB_USERNAME=odssadm -e ODSS_MONGODB_PASSWORD=xxxxxx -e ODSS_TRACKING_DATABASE_HOST="134.89.2.33" -e ODSS_TRACKING_DATABASE_USERNAME=odssadm -e ODSS_TRACKING_DATABASE_PASSWORD=xxxxxxxxxx -e ODSS_LOG_FILE_LOCATION=/data/logs/server -e ODSS_REPO_ROOT_DIR=/data -v /data:/data mbari/odss:1.1.4
    
  • Now create the service file using

    podman generate systemd --name odss-server > ~/.config/systemd/user/container-odss-server.service
    
  • Edit the newly created file (container-odss-server.service) and change the description to something more useful and the restart to always.

  • I then enabled the service using:

    systemctl --user daemon-reload
    systemctl --user enable container-odss-server.service
    
  • You can then start and stop the service using:

    systemctl --user start container-odss-server.service
    
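The description/restart edit to the generated service file can also be scripted. A sketch, demonstrated on a scratch unit file so it is self-contained; on normandy8 the real file is ~/.config/systemd/user/container-odss-server.service, and the new description text here is just an example:

```shell
# Scratch stand-in for the unit file that 'podman generate systemd' produces
unit=$(mktemp)
printf '%s\n' '[Unit]' \
  'Description=Podman container-odss-server.service' \
  '[Service]' \
  'Restart=on-failure' > "$unit"

# Give the unit a useful description and make it restart unconditionally
sed -i 's|^Description=.*|Description=ODSS application server (podman container)|' "$unit"
sed -i 's|^Restart=.*|Restart=always|' "$unit"

grep -E '^(Description|Restart)=' "$unit"
```

Remember to run `systemctl --user daemon-reload` after editing the real unit file.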

Upgrade with the Git Clone Method

Updates to the code base are much easier with the Git clone method. Essentially, you ssh to the normandy8.mbari.org server as yourself, then switch to the odssadm account using:

    su odssadm

Once you are in the odssadm account, you can cd to the correct location and pull any updates from Git by issuing a pull command.

    cd odss
    git pull origin

Then you can restart the server using:

    systemctl --user stop odss-server.service
    systemctl --user start odss-server.service

This assumes you don't have any major changes that need new npm packages installed. If you do, you may need to run npm install in the ~/odss/webapp/server directory.
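When in doubt about whether npm install is needed, you can check whether the pull touched package.json by diffing the old HEAD against the new one. A sketch, demonstrated on a scratch repository so it is self-contained (on normandy8 the real checkout is ~/odss and the file to watch is webapp/server/package.json):

```shell
# Build a scratch repo with two commits so the diff check has something to see
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=odss -c user.email=odss@example.org commit -q --allow-empty -m "base"

# Record HEAD before "pulling" (here, before making the second commit)
before=$(git rev-parse HEAD)
echo '{"name":"odss-server"}' > package.json
git add package.json
git -c user.name=odss -c user.email=odss@example.org commit -q -m "dependency change"

# After a real 'git pull', compare the recorded HEAD against the new one
if ! git diff --quiet "$before" HEAD -- package.json; then
  echo "package.json changed: run npm install before restarting"
fi
```

On normandy8, record `before=$(git rev-parse HEAD)` just before `git pull origin`, then run the same diff check against `webapp/server/package.json`.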