ODSS Installation on Rachel Carson (Zuma)

This document describes all the steps taken to migrate from the old zuma server to the new zuma-u, which is an Ubuntu machine. The main goal is to move from CentOS 6 to Ubuntu.

Current Zuma

The first thing I wanted to do was capture the configuration and customizations of the current Zuma server.

Current crontab:

    MAILTO=''
    # This entry makes sure the ODSS server starts after a reboot
    @reboot /usr/local/nodejs/bin/forever start /opt/odss/webapp/server/app.js

    # Restart as necessary consumer.py processes that are deemed to have stopped, log each hour, check each 10 minutes
    # Takes a while to run on odss-staging.  Do only once per hour.
    */10 * * * * /home/odssadm/dev/MBARItracking/amqp/runPersistMonitor.sh >> /data/logs/tracking/persistWatchdog.log 2>&1

    # This entry calls a node script which parses the rsync log file and generates a web site for analysis
    # TODO kgomes - move this from mbari-dss to odss
    #*/10 * * * * /home/odssadm/dev/mbari-dss/synchronization/logAnalysis/analyzeRsync.sh >> /data/logs/rsync/analysis/logAnalysis.log 2>&1

    # Sync canon data files with shore
    */10 * * * * /home/odssadm/dev/odss/synchronization/bin/zuma/zuma-sync.sh 2>&1 | tee -a  /data/logs/rsync/zuma-sync.log

    # Sync ctd files into ODSS data directory (routine).
    # TODO kgomes move this from mbari-dss to odss
    */10 * * * * /home/odssadm/dev/mbari-dss/bin/ctdsync >> /data/logs/ctdsync/ctdSync.log 2>&1

    # Sync ctd files for SIMZ.
    # TODO kgomes move this from mbari-dss to odss
    #0,30 * * * * /home/odssadm/dev/mbari-dss/bin/ctdsync.simz

    # Clean tilecache
    #@daily  /var/www/tilecache/tilecache_clean.py -s500 /tmp/tilecache >> /data/logs/tilecache/clean.log 2>&1
    #*/10 * * * * find /tmp/tilecache -name '*.png' -size 334c -exec /bin/rm {} \; &> /data/logs/tilecache/334.log 2>&1

    # Run the script to pull positions from tracking DB and write to files in /RC-mnavData
    */2 * * * * /home/odssadm/dev/odss/synchronization/timezero/timezero-positions.sh >> /data/logs/timezero/timezero.log
  1. The first line makes sure the ODSS application (NodeJS) starts after a reboot. On the new system, I might need to ask Peter how he wants this done. I checked out the ODSS code into the /opt/odss directory, created a config.js file, and ran it right from there. The config.js file looks like:

    /**
     * This is the configuration file that holds all the parameters which can be adjusted
     * when running the Node server for the ODSS
     */
    module.exports = {
    
        // The location of the log files
        logFileLocation: '/data/logs/server',
    
        // The directory which is the root of all data repositories
        repoRootDir: '/data',
    
        // The base URL that points to repoRootDir
        repoRootDirUrl: '/odss/data',
    
        // The options for the app.js startup script
        appOptions: {
            trackingUpdateInterval: 60000,
            loggerLevel: 'info'
        },
    
        // The configuration options for the datastore
        dataStoreOptions: {
            loggerLevel: 'info',
            mongodbHost: 'localhost',
            mongodbPort: '27017',
            mongodbName: 'odss',
            mongodbUsername: 'odssadm',
            mongodbPassword: 'xxxxxx',
            trackingDBProtocol: 'postgres',
            trackingDBHostname: 'localhost',
            trackingDBPort: 5432,
            trackingDBDatabaseName: 'mbaritracking',
            trackingDBUsername: 'odssadm',
            trackingDBPassword: 'xxxxxx',
            layerOptions: {
                loggerLevel: 'info'
            },
            platformOptions: {
                loggerLevel: 'info'
            },
            sampleOptions: {
                loggerLevel: 'info'
            },
            userOptions: {
                loggerLevel: 'info'
            },
            prefsOptions: {
                loggerLevel: 'info'
            },
            viewOptions: {
                loggerLevel: 'info'
            }
        },
    
        // The configuration options for the application server
        appServerOptions: {
            loggerLevel: 'info',
            hostBaseUrl: 'http://zuma.rc.mbari.org/odss',
            port: 3000,
            expressLoggerLevel: 'info',
            expressCookieParserSecret: 'mysecret',
            expressSessionSecret: 'xxxxxx',
            authOptions: {
                loggerLevel: 'info'
            },
            userRouterOptions: {
                loggerLevel: 'info'
            },
            sessionRouterOptions: {
                loggerLevel: 'info'
            },
            prefsRouterOptions: {
                loggerLevel: 'info'
            },
            repoRouterOptions: {
                loggerLevel: 'info'
            },
            viewRouterOptions: {
                loggerLevel: 'info'
            },
            platformRouterOptions: {
                loggerLevel: 'info'
            },
            layerRouterOptions: {
                loggerLevel: 'info'
            },
            sampleRouterOptions: {
                loggerLevel: 'info'
            },
            elevationRouterOptions: {
                loggerLevel: 'info'
            },
            trackRouterOptions: {
                loggerLevel: 'info'
            }
        }
    }
    
  2. The second line restarts any consumer.py processes that are deemed to have stopped. This will happen inside the Docker container now, so this entry should go away.

  3. The third line runs a script to synchronize data products between zuma and shore; the passwordless ssh it relies on is sketched after the script. Here is the script:

    #!/bin/bash
    # This script synchronizes data products between zuma and normandy
    #
    # This script requires that the IP address to the remote host be resolvable;
    # hence should be executed from the ship (not from shore)
    #
    # To avoid password input on each invocation of rsync, local machine's
    # public rsa key should be included in remote machine's .ssh/authorized_keys
    # file (see https://blogs.oracle.com/jkini/entry/how_to_scp_scp_and)
    echo start: `date`
    
    # First check whether script is already running - if so, then exit
    
    if ps axco command | grep -q rsync
    then
        echo "rsync already running"
        exit 1
    fi
    
    #OPTS='--timeout 30 -P -arzpu --copy-dirlinks --partial --partial-dir=partial-dir --stats --checksum'
    OPTS='--timeout 30 -P -arzpu --copy-dirlinks --partial --partial-dir=partial-dir --stats'
    
    # Pull erddap configuration file from shore
    #rsync $OPTS odssadm@normandy8.mbari.org:/data/erddap/content/erddap/datasets.xml /data/erddap/content/erddap/
    
    # Pull mapserver files
    #rsync $OPTS odssadm@normandy8.mbari.org:/data/mapserver/mapfiles/ /data/mapserver/mapfiles
    
    # Pull tilecache configuration file
    #rsync $OPTS odssadm@normandy8.mbari.org:/var/www/tilecache/tilecache.cfg /var/www/tilecache
    
    # Pull products from shore
    # NOTE!!!!! This should only be uncommented when connected via copper as it
    # will pull ALL ERDDAP data over the link and that is HUGE
    #rsync $OPTS odssadm@normandy8.mbari.org:/data/erddap/data/ /data/erddap/data
    
    # The lines below this are sync tasks specific to the deployment specified at the command prompt
    #rsync $OPTS odssadm@normandy8.mbari.org:/data/canon/2015_Sep/ /data/canon/2015_Sep
    
    # Now push the rsync stats data to shore
    #rsync $OPTS /data/logs/rsync/analysis odssadm@normandy8.mbari.org:/data/logs/rsync/analysis/zuma
    
    # And the CTD data
    rsync $OPTS /data/other/routine/Platforms/Ships/RachelCarson/pctd/ odssadm@normandy8.mbari.org:/data/other/routine/Platforms/Ships/RachelCarson/pctd
    rsync $OPTS /data/other/routine/Platforms/Ships/RachelCarson/uctd/ odssadm@normandy8.mbari.org:/data/other/routine/Platforms/Ships/RachelCarson/uctd
    
    echo -e "end: $(date)\n\n"
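
The script's comments mention key-based ssh. As a reminder for the new machine, a minimal sketch of that setup using the standard OpenSSH tools (the account and host names are taken from the rsync commands above):

    # Generate a key pair for the account if one does not already exist
    ssh-keygen -t rsa

    # Install the public key in the remote machine's .ssh/authorized_keys
    ssh-copy-id odssadm@normandy8.mbari.org

    # Confirm that ssh (and therefore rsync) no longer prompts for a password
    ssh odssadm@normandy8.mbari.org echo ok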
    
  4. The fourth line runs a script to sync CTD files into the ODSS data directory:

    #!/bin/bash
    # Copy files from CTD PCs to ODSS data repository
    start="$(date)"
    echo "Running CTD Sync at $start"
    
    # Mount PC files; requires ROOT permission
    sudo mount -t cifs //rc-uctd/data /mnt/uctd -o credentials=/home/odssadm/dev/mbari-dss/bin/smbcred,dir_mode=0777,file_mode=0666
    sudo mount -t cifs //rc-pctd/data /mnt/pctd -o credentials=/home/odssadm/dev/mbari-dss/bin/smbcred,dir_mode=0777,file_mode=0666
    
    # Make sure that wildcards in <from_directory> match the files you want for this cruise
    # pctd files - sync only the *.hdr, *.asc, and *.btl files
    cp --preserve=timestamps /mnt/pctd/*.hdr /data/other/routine/Platforms/Ships/RachelCarson/pctd/.
    cp --preserve=timestamps /mnt/pctd/*.asc /data/other/routine/Platforms/Ships/RachelCarson/pctd/.
    cp --preserve=timestamps /mnt/pctd/*.btl /data/other/routine/Platforms/Ships/RachelCarson/pctd/.
    sudo chown -R odssadm /data/other/routine/Platforms/Ships/RachelCarson/pctd
    
    # Make sure that wildcards in <from_directory> match the files you want for this cruise
    # uctd files - do not need the .hex files
    cp --preserve=timestamps /mnt/uctd/*.asc /data/other/routine/Platforms/Ships/RachelCarson/uctd/.
    cp --preserve=timestamps /mnt/uctd/*.hdr /data/other/routine/Platforms/Ships/RachelCarson/uctd/.
    sudo chown -R odssadm /data/other/routine/Platforms/Ships/RachelCarson/uctd
    
    # Unmount CTD PC files
    sudo umount /mnt/pctd
    sudo umount /mnt/uctd
    
    # Now convert all files to lower case
    cd /data/other/routine/Platforms/Ships/RachelCarson/pctd
    for f in * ; do mv -v "$f" "$(echo "$f" | tr '[:upper:]' '[:lower:]')"; done
    cd /data/other/routine/Platforms/Ships/RachelCarson/uctd
    for f in * ; do mv -v "$f" "$(echo "$f" | tr '[:upper:]' '[:lower:]')"; done
    
    end="$(date)"
    echo "Finished at $end"
    
  5. The last line runs a script to pull positions from the tracking database and write them to files in /RC-mnavData. The .sh file looks like:

    #!/bin/bash
    
    # Change to the WORKING_DIR to point to the directory where the script is running (replace WORKING_DIR with your location)
    cd /home/odssadm/dev/odss/synchronization/timezero
    
    # Export the environment variables and run the script (add your environment variables for the database connection
    # properties and change ./reports to point to the absolute path of where you want the files written.  The 720 is the
    # number of minutes of data to query for and you can adjust that if you want to
    PYTHONUNBUFFERED=1 TRACKING_DATABASE_HOST=localhost TRACKING_DATABASE_NAME=mbaritracking TRACKING_DATABASE_USERNAME=odssadm TRACKING_DATABASE_PASSWORD=odsspw python timezero-positions.py /RC-mnavData 720
    
  6. And the Python script is (a worked example of its coordinate conversion follows the script):

    # This script grabs the most recent positions of non-AIS platforms in the tracking DB
    # and creates a file that can be read by Timezero software that is running on our
    # ships
    
    # import the os library
    import os
    
    # Import the system library
    import sys
    
    # Import the PostgreSQL library
    import psycopg2
    
    # Import utility to handle command line arguments
    from sys import argv
    
    # Import some date utilities
    from datetime import datetime, timedelta
    
    # I need the math library for lat/lon conversions
    import math
    
    # Grab the script name, the directory to write to, and how many minutes of data to grab
    script, directory_name, minutes_of_data = argv
    
    # Create a current date
    query_date = datetime.now() - timedelta(minutes=int(minutes_of_data))
    
    # Grab the environment variables for the Tracking DB connection
    database_host = os.environ.get('TRACKING_DATABASE_HOST')
    if database_host is None:
        sys.exit('Database host was not defined as the environment variable TRACKING_DATABASE_HOST, please fix')
    database_name = os.environ.get('TRACKING_DATABASE_NAME')
    if database_name is None:
        sys.exit('Database name was not defined as the environment variable TRACKING_DATABASE_NAME, please fix')
    database_username = os.environ.get('TRACKING_DATABASE_USERNAME')
    if database_username is None:
        sys.exit('Database username was not defined as the environment variable TRACKING_DATABASE_USERNAME, please fix')
    database_password = os.environ.get('TRACKING_DATABASE_PASSWORD')
    if database_password is None:
        sys.exit('Database password was not defined as the environment variable TRACKING_DATABASE_PASSWORD, please fix')
    
    # Now create the connection URL
    database_url = "host='" + database_host + "' dbname='" + database_name + "' user='" + database_username + \
                "' password='" + database_password + "'"
    
    # Grab a connection
    try:
        conn = psycopg2.connect(database_url)
    except psycopg2.Error as e:
            print "Unable to connect!"
            print e.pgerror
            print e.diag.message_detail
            sys.exit(1)
    
    # Grab a cursor
    cur = conn.cursor()
    
    # Create a dictionary to hold ID to name mappings
    id_to_name = {}
    
    # And create an array to hold if a file for a platform has been created
    platforms_with_files = []
    
    # Now try reading all the platforms that are non-AIS so we can create a dictionary of platform ID to name
    try:
        cur.execute(
            """select id, name from platform where platformtype_id <> (select id from platform_type where name = 'ais') order by id""")
    except:
        sys.exit("Problem running query to find non-AIS platforms")
    
    # Fill up the dictionary
    rows = cur.fetchall()
    for row in rows:
        id_to_name[row[0]] = row[1]
    
    # Now we want to query for positions
    query = """select platform_id, datetime AT TIME ZONE 'GMT', st_x(geom), st_y(geom) from position where platform_id in
    (select id from platform where platformtype_id <> (select id from platform_type where name = 'ais')) and
    datetime > '""" + query_date.strftime("%m/%d/%Y %I:%M:%S %p") + """' group by platform_id, datetime, st_x(geom),
    st_y(geom) order by platform_id, datetime"""
    print query
    try:
        cur.execute(query)
    except:
        sys.exit("Problem running query to find non-AIS platforms")
    
    # Now loop over the positions and build the file
    rows = cur.fetchall()
    for row in rows:
        platform_id = str(row[0])
        timestamp = row[1]
        longitude = row[2]
        latitude = row[3]
        platform_name = id_to_name[row[0]]
    
        # Check to see if we have written data from this platform already
        if platform_name not in platforms_with_files and os.path.isfile(directory_name + os.sep + platform_name):
            # Since we have not written data for this platform yet in this run, delete any stale file so we start fresh
            os.remove(directory_name + os.sep + platform_name)
    
        # Make sure the data points are in a real range
        if longitude >= -180 and longitude <= 180 and latitude >= -90 and latitude <= 90:
    
            # The Timezero software does a very strange conversion using the decimals from the lat and lon:
            # it essentially treats the digits after the decimal point as decimal minutes.  So I have to convert
            # from decimal degrees to minutes and then substitute that as the four digits after the decimal
            # (see the worked example after this script).
            # Let's do the latitude first, split into before and after decimal parts
            (lat_dec_degrees, lat_degrees) = math.modf(latitude)
    
            # Now convert the decimal degrees to minutes and divide by 100 to move the decimal
            lat_minutes = lat_dec_degrees * 60 / 100
    
            # Now add back to the degrees
            lat_converted = lat_degrees + lat_minutes
    
            # Same for longitude
            (lon_dec_degrees, lon_degrees) = math.modf(longitude)
    
            # Now convert the decimal degrees to minutes and divide by 100 to move the decimal
            lon_minutes = lon_dec_degrees * 60 / 100
    
            # Now add back to the degrees
            lon_converted = lon_degrees + lon_minutes
    
            # Do a little formatting
            if lon_converted < 0:
                longitude_string = "%.4f" % abs(lon_converted) + "W"
            else:
                longitude_string = "%.4f" % lon_converted + "E"
            if lat_converted < 0:
                latitude_string = "%.4f" % abs(lat_converted) + "S"
            else:
                latitude_string = "%.4f" % lat_converted + "N"
    
            # Open the platform file for writing
            posreport = open(directory_name + os.sep + platform_name, 'a')
    
            # Check to see if this is the first time writing to the file
            if platform_name not in platforms_with_files:
                # Write the header
                posreport.write("POSREPORT\n")
                # Add the platform to the list of files we have written to already for the next round
                platforms_with_files.append(platform_name)
    
            # Now write the position line for this platform
            posreport.write("0;" + platform_name + ";" + latitude_string + ";" + longitude_string + ";" +
                            timestamp.strftime('%m/%d/%y %H:%M:%S') + ";" + platform_name + "\n")
    
            # Now close the POSREPORT file
            posreport.close()
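
To make the coordinate conversion above concrete, here is a quick sanity check with a made-up latitude of 36.8025 decimal degrees; the same logic applies to longitude:

    import math

    # Made-up example: 36.8025 decimal degrees
    latitude = 36.8025

    # Split into fractional and whole degrees: (0.8025, 36.0)
    (dec_degrees, degrees) = math.modf(latitude)

    # 0.8025 degrees = 48.15 minutes; divide by 100 to shift it behind the decimal point
    minutes = dec_degrees * 60 / 100

    # Prints 36.4815, which Timezero reads as 36 degrees 48.15 minutes
    print("%.4f" % (degrees + minutes))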
    
  7. The startup script for ERDDAP is located in /etc/rc.d/init.d/tomcat-erddap and looks like the following (a systemd equivalent for the new Ubuntu machine is sketched after the script):

    #!/bin/bash
    #
    # tomcat
    #
    # chkconfig: 35 98 01
    # description:  Start/stop the Tomcat servlet engine gracefully on boot/shutdown.  \
    #                      The numbers say start tomcat in run levels 3 or 5 \
    #                       and use priority 98 for start, and priority 01 for stop
    # Source function library.
    . /etc/init.d/functions
    
    RETVAL=$?
    CATALINA_HOME="/opt/apache-tomcat-erddap"
    JAVA_HOME="/usr/java/default"
    
    case "$1" in
    start)
            if [ -f $CATALINA_HOME/bin/startup.sh ];
            then
                echo $"Starting Tomcat"
                /bin/su tomcat $CATALINA_HOME/bin/startup.sh
            fi
            ;;
    stop)
            if [ -f $CATALINA_HOME/bin/shutdown.sh ];
            then
                echo $"Stopping Tomcat"
                /bin/su tomcat $CATALINA_HOME/bin/shutdown.sh
            fi
            ;;
    *)
            echo $"Usage: $0 {start|stop}"
            exit 1
            ;;
    esac
    
    exit $RETVAL
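
These chkconfig-style init scripts will not carry over to Ubuntu, which uses systemd. A rough starting point for an equivalent unit (the unit name, User, and the JAVA_HOME location on Ubuntu are assumptions to confirm with IS; the other paths are copied from the script above):

    [Unit]
    Description=Tomcat servlet engine for ERDDAP
    After=network.target

    [Service]
    Type=forking
    User=tomcat
    # Assumption: JAVA_HOME on Ubuntu will differ from the CentOS /usr/java/default path
    Environment=JAVA_HOME=/usr/lib/jvm/default-java
    Environment=CATALINA_HOME=/opt/apache-tomcat-erddap
    ExecStart=/opt/apache-tomcat-erddap/bin/startup.sh
    ExecStop=/opt/apache-tomcat-erddap/bin/shutdown.sh

    [Install]
    WantedBy=multi-user.target

The THREDDS script below is identical except for the /opt/apache-tomcat-thredds paths, so it would get a matching unit.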
    
  8. When the ERDDAP process is running, it looks like:

    tomcat    3082     1  0 Jul22 ?        01:58:55 /usr/java/jdk1.7.0_13/bin/java -Djava.util.logging.config.file=/opt/apache-tomcat-erddap/conf/logging.properties -d64 -Djava.awt.headless=true -Xmx2048M -Xms2048M -DerddapContentDirectory=/data/erddap/content/erddap/ -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.endorsed.dirs=/opt/apache-tomcat-erddap/endorsed -classpath /opt/apache-tomcat-erddap/bin/bootstrap.jar:/opt/apache-tomcat-erddap/bin/tomcat-juli.jar -Dcatalina.base=/opt/apache-tomcat-erddap -Dcatalina.home=/opt/apache-tomcat-erddap -Djava.io.tmpdir=/opt/apache-tomcat-erddap/temp org.apache.catalina.startup.Bootstrap start
    
  9. The startup script for THREDDS is located in /etc/rc.d/init.d/tomcat-thredds and looks like the following:

    #!/bin/bash
    #
    # tomcat
    #
    # chkconfig: 35 98 01
    # description:  Start/stop the Tomcat servlet engine gracefully on boot/shutdown.  \
    #                      The numbers say start tomcat in run levels 3 or 5 \
    #                       and use priority 98 for start, and priority 01 for stop
    # Source function library.
    . /etc/init.d/functions
    
    RETVAL=$?
    CATALINA_HOME="/opt/apache-tomcat-thredds"
    JAVA_HOME="/usr/java/default"
    
    case "$1" in
    start)
            if [ -f $CATALINA_HOME/bin/startup.sh ];
            then
                echo $"Starting Tomcat"
                /bin/su tomcat $CATALINA_HOME/bin/startup.sh
            fi
            ;;
    stop)
            if [ -f $CATALINA_HOME/bin/shutdown.sh ];
            then
                echo $"Stopping Tomcat"
                /bin/su tomcat $CATALINA_HOME/bin/shutdown.sh
            fi
            ;;
    *)
            echo $"Usage: $0 {start|stop}"
            exit 1
            ;;
    esac
    
    exit $RETVAL
    
  10. When the THREDDS process is running, it looks like:

    tomcat    3098     1  0 Jul22 ?        02:29:37 /usr/java/jdk1.7.0_13/bin/java -Djava.util.logging.config.file=/opt/apache-tomcat-thredds/conf/logging.properties -d64 -Djava.awt.headless=true -Xmx1500M -Xms1500M -Dtds.content.root.path=/data -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.endorsed.dirs=/opt/apache-tomcat-thredds/endorsed -classpath /opt/apache-tomcat-thredds/bin/bootstrap.jar:/opt/apache-tomcat-thredds/bin/tomcat-juli.jar -Dcatalina.base=/opt/apache-tomcat-thredds -Dcatalina.home=/opt/apache-tomcat-thredds -Djava.io.tmpdir=/opt/apache-tomcat-thredds/temp org.apache.catalina.startup.Bootstrap start
    

Current Setup Diagram

This diagram captures the state of data flow on the Western Flyer before its retirement in October of 2022.

flowchart TB
    subgraph /opt
        direction LR
        erddap-server[ERDDAP Server]
        thredds-server[THREDDS Server]
    end
    subgraph /data
        direction LR
        biospace
        canon
        erddap
        goc
        logs
        mapserver
        mongo
        other
        simz
        thredds
        tilecache-2.11
        tmp
    end
    erddap-server --> erddap
    thredds-server --> thredds

Notes:

  1. I don't think we even need to worry about the biospace directory.

New Setup Diagram

Things I need from IS

  1. Install my ssh public key in /etc/ssh/keys/kgomes/authorized_keys
  2. Ask IS if they want us to use a shared account (currently using odssadm) which should have permissions to use Docker. It should also have permissions to read and write from the /data directory.
  3. I should ask Peter how he wants me to run the ODSS server (NodeJS) as a service. I think I can do it with systemd, but I need to know how he wants it done; a starting-point unit file is sketched after this list.
  4. I need to be able to use Docker. Peter said that since I have sudo, I could run sudo usermod -aG docker kgomes to get permissions to control Docker, but we should do this for the shared account instead.
  5. I need to get a file system mounted on /RC-mnavData and it should map to //corona.rc.mbari.org/RC-mnavData. The fstab entry on the current machine is (the format of the credentials file it references is sketched after it):

    //corona.rc.mbari.org/RC-mnavData /RC-mnavData cifs credentials=/etc/cifs,dir_mode=0777,file_mode=0666 0 0
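
For reference, the file that the credentials= option points at is a plain key/value text file (the values here are placeholders, not the real ones):

    username=odssadm
    password=xxxxxx
    domain=mbari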
    
  6. I need to get a web server installed with SSL certificates; Joe normally used to do this. It will serve as the proxy for the other services. On the odss-testu server, the config is called odss.conf and lives in the /etc/apache2/conf-enabled directory (the Apache modules it requires are noted after the config). It looks like this:

    <VirtualHost *:80>
    ServerName odss-testu.shore.mbari.org
    RewriteEngine On
    RewriteCond %{HTTPS} !on
    RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
    </VirtualHost>
    
    <Location /erddap>
    ProxyPass        http://127.0.0.1:8080/erddap
    ProxyPassReverse http://127.0.0.1:8080/erddap
    </Location>
    
    <Location /thredds>
    ProxyPreserveHost On
    ProxyPass        http://127.0.0.1:8081/thredds
    ProxyPassReverse http://127.0.0.1:8081/thredds
    </Location>
    
    <Location /odss>
    ProxyPass        http://127.0.0.1:3000/odss
    ProxyPassReverse http://127.0.0.1:3000/odss
    </Location>
    
    <Location /trackingdb>
    ProxyPass        http://127.0.0.1:8081/trackingdb
    ProxyPassReverse http://127.0.0.1:8081/trackingdb
    </Location>
    
    <Location /cgi-bin/mapserv>
    ProxyPass        http://127.0.0.1:8082
    ProxyPassReverse http://127.0.0.1:8082
    </Location>
    
  7. Create a /opt/odss directory and give the account that will be responsible for running the server full access to that directory.

  8. Create a /data directory and give the account that will be responsible for running the server full access to that directory.
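
For item 3, a possible starting point to bring to Peter: a minimal systemd unit that would replace the @reboot/forever crontab entry (the unit name, account, and node binary path are assumptions; the app path comes from the current crontab):

    [Unit]
    Description=ODSS web application (NodeJS)
    After=network.target

    [Service]
    User=odssadm
    WorkingDirectory=/opt/odss/webapp/server
    ExecStart=/usr/local/nodejs/bin/node /opt/odss/webapp/server/app.js
    Restart=always

    [Install]
    WantedBy=multi-user.target

If it were saved as /etc/systemd/system/odss.service, it would be enabled with sudo systemctl enable --now odss.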

IS Configuration

I submitted a help ticket to have IS create an Ubuntu VM with Docker containers. The original request was:

    VM_Name: zuma-u
    VM_Purpose: Will be the ODSS replacement on the Carson
    Network_Location: Rachel Carson
    VM_Expiration: 5 years
    VM_Support: IS_Supported
    Alt_Administrator: 
    VM_OS: Ubuntu 20.04 LTS with Docker Containers
    RAM: 4
    Addl_RAM_Reason: 
    CPU: 2
    Addl_CPU_Reason: 
    VM_disk: Hey Peter, can you see how much disk space is on the current zuma and give this machine the same amount. Also, not sure if it's possible, but I think we have high speed disk mounted on the production ODSS (normandy8) for the /pgdata directory so the postgres database has fast disk access. That might be helpful with this machine too. Thanks!

What IS configured:

  1. IP address is 134.89.22.77
  2. A high-speed disk volume was created and mounted at /data
  3. A 'kgomes' user was created and added to the 'sudo' group for now
  4. Docker was not installed with the template (the Carson templates did not have it), but Peter added it later. I did not have permissions to use it, so I added a note to the help ticket.

Setup Instructions

  1. IS did all their stuff first.
  2. Login as the 'odssadm' account (shared account)
  3. cd /opt/odss
  4. Clone the odss repository using git clone git@bitbucket.org:mbari/odss.git.
  5. The general process is to use the docker-compose.yml file in the 'resources/deployment' directory to manage the supporting services and then run the ODSS directly on the host to make it easier to update, etc. The deployment diagram for zuma is shown below, followed by a sketch of the corresponding compose service.
flowchart TB
    subgraph /opt/odss/resources/deployment
        subgraph docker-compose
            subgraph ERDDAP Service
                erddap-container-data-mount["/erddapData"]
                erddap-container-datasets-xml["/usr/local/tomcat/content/erddap/datasets.xml"]
            end
        end
    end
    subgraph /data
        subgraph /erddap
            erddap-data-directory["/erddapData"]
            erddap-datasets-xml["datasets.xml"]
        end
    end
    erddap-data-directory -- -v --> erddap-container-data-mount
    erddap-datasets-xml -- -v --> erddap-container-datasets-xml
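
A minimal sketch of what the ERDDAP service in that docker-compose.yml amounts to, based on the mounts in the diagram above (the image name/tag and published port are assumptions; the real file is in resources/deployment):

    services:
      erddap:
        # Assumed image and tag; check the actual compose file
        image: axiom/docker-erddap:2.18
        ports:
          - "8080:8080"
        env_file: .env
        volumes:
          - ${ODSS_BASEDIR_HOST}/erddap/erddapData:/erddapData
          - ${ODSS_BASEDIR_HOST}/erddap/datasets.xml:/usr/local/tomcat/content/erddap/datasets.xml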
  6. Environment variables

    1. I made a copy of the .env.template file, called it .env, and then edited it to be the following (a quick way to verify the values follows the listing):
      ODSS_BASEDIR_HOST=/data
      ERDDAP_bigParentDirectory=/erddapData/
      ERDDAP_baseUrl=https://zuma.rc.mbari.org
      ERDDAP_emailEverythingTo=kgomes@mbari.org
      ERDDAP_emailDailyReportsTo=kgomes@mbari.org
      ERDDAP_emailFromAddress=mbari.seinfo@gmail.com
      ERDDAP_emailUserName=mbari.seinfo
      ERDDAP_emailPassword=*****
      ERDDAP_emailSmtpHost=smtp.gmail.com
      ERDDAP_emailSmtpPort=587
      ERDDAP_adminInstitution="Monterey Bay Aquarium Research Institute"
      ERDDAP_adminInstitutionUrl=https://www.mbari.org
      ERDDAP_adminIndividualName="Kevin Gomes"
      ERDDAP_adminPosition="Information Engineering Group Lead"
      ERDDAP_adminPhone=831-775-1700
      ERDDAP_adminAddress="7700 Sandholdt Road"
      ERDDAP_adminCity="Moss Landing"
      ERDDAP_adminStateOrProvince=CA
      ERDDAP_adminPostalCode=95039
      ERDDAP_adminCountry=USA
      ERDDAP_adminEmail=kgomes@mbari.org
      TDS_HOST=https://zuma.rc.mbari.org
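
With the .env in place, the variable substitution and the services can be checked from the deployment directory (the service names are whatever the compose file defines):

    cd /opt/odss/resources/deployment

    # Print the fully-resolved configuration to confirm the .env values were picked up
    docker-compose config

    # Bring the supporting services up in the background
    docker-compose up -d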
      
  7. ERDDAP

    1. For ERDDAP, I decided to just do a clean install since we don't use it for much on board. I do want to back up the files from zuma, but a clean install is just easier, and it still gives us a place to put things if we ever want to serve data from the boat itself.
    2. I created a /data/erddap directory, followed by a /data/erddap/erddapData directory
    3. I copied the resources/deployment/erddap/v2.18/datasets.xml file to the /data/erddap directory
  8. THREDDS
    1. I created a /data/thredds directory, followed by a /data/thredds/tomcat/logs directory, a /data/thredds/logs directory, a /data/thredds/content directory and a /data/thredds/data directory.
    2. I just let the THREDDS instance create all the files that go into those directories, with no zuma-specific customization, since we likely won't be hosting anything from the THREDDS server on the boat. NOTE: If I need to use the THREDDS server on the boat, I will have to create configuration and catalog XML files somewhere on the host and then mount them into the container with the docker-compose.yml file.
  9. Mapserver
    1. Like with ERDDAP and THREDDS, I left the default mapserver running and, while it works, there are no map files defined on zuma. If we need to add that capability, we will need to define the map configuration and data files somewhere on the host and mount them into the mapserver container. See the Docker Hub instructions for how to do that. NOTE: Also, I never verified whether we needed to add the Google projection to the EPSG file. In previous versions I had to add a line to the /usr/share/proj/epsg file for the Google projection, but there is no such file in the latest version of mapserver (8.0).