
SE IE Related Servers

Here is a list of the various servers on MBARI premises that are linked to Information Engineering. These are the servers where we have software deployed and/or that we use for development.

Servers

alfresco6.mbari.org

  • Person responsible: Kevin Gomes
  • OS: RHEL 6
  • Status: Shutdown
  • This is the server where the Alfresco content management system was running
  • It was decided that we were no longer going to use Alfresco and all of the content was going to be migrated back to the ProjectLibrary share on Atlas (or other locations).
  • The data was being migrated back, with the end goal of shutting the server down
  • This server was shutdown when we stopped supporting CentOS 6 in November of 2020

auth.mbari.org

  • Person responsible: Kevin Gomes
  • OS: CentOS 7.8
  • Status: Shutdown
  • The only thing running on auth was an instance of Atlassian Crowd (version 2.8.3) as of this writing
  • The goal for this server was to provide an OpenID wrapper around our LDAP system. It was also going to be used to allow for MBARI logins to be used on our Confluence server. After deploying this server, it was discovered that we could not upgrade the Confluence version we had without a whole bunch of pain so it was not used for the Confluence upgrade. It is quite possible that nobody is using the authentication server at all.
  • I had deployed a test NodeJS server there to try out some authentication code I had been working on but it was not used for any application that I know of
  • I am going to try to shut it down.
  • On 6/8, I transferred the license from auth to oceana. I had to re-adjust the LDAP connection to cut down on the number of people because the license only supports up to 500. After that, I had Pat shut down auth. YAY!

bob.shore.mbari.org

  • Person responsible: Kevin Gomes
  • OS: CentOS 7.8
  • This is running a very old version of JBoss (3.2.8 with Java 1.4.2) and is running the ingest for messages for the SSDS. It’s basically a very small application that receives JMS messages, does a small amount of transformation on them, stores them in the SSDS_Data database on dione.mbari.org and then forwards some of them to a JMS service on new-ssds.mbari.org.
  • The deployment files are in /opt/jboss-3.2.8.SP1/server/default/deploy and consist of some .xml files for configuring JMS topics and SQL database connections. In the deploy.last directory there are two .jar files that contain the Java code that runs these services. The services write files to both the /data/ssds/rawpackets directory and the /data/ssds/ruminate directory. The rawpackets directory is on the local disk and contains serialized versions of the data packets that come into the SSDS, and the ruminate directory is an Atlas NFS-mounted directory that points to the ssdsdata/ssds/ruminate share. Ruminate uses those files to look for XML that has already been submitted.
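  • For quick orientation on this box, a minimal shell sketch using only the paths mentioned above (read-only commands, nothing here changes state):

    # List the JBoss deployment descriptors and the service jars
    ls /opt/jboss-3.2.8.SP1/server/default/deploy
    ls /opt/jboss-3.2.8.SP1/server/default/deploy/deploy.last

    # Confirm the local and NFS-mounted data directories the services write to
    ls -ld /data/ssds/rawpackets /data/ssds/ruminate
    df -h /data/ssds/ruminate   # should resolve to the Atlas ssdsdata/ssds/ruminate share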

canonfishvm.shore.mbari.org

  • I believe this machine was being used for downloading and subsetting maps for ODSS. Is this still being used?
  • OS: CentOS 6.10
  • Status: Shutdown
  • The only thing that looked like it was running outside of the normal Linux stuff was Condor.
  • There was nothing really running on this so we let IS shut it down when we stopped supporting CentOS 6 in November of 2020.

Cetacean.shore.mbari.org

  • Person responsible: Danelle Cline
  • OS: CentOS 7
  • This is a production machine in the DMZ.
  • Responsible for syncing data from MARS to shore, audio transcoding for rebroadcast to shoutcast.shore for streaming, uploads from AWS opendata program and weekly/monthly queries on OpenData usage.
  • Deployment is through ansible playbooks. See https://github.com/mbari-org/deploy-soundscape for deployment details.
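  • A hedged sketch of what a deployment run looks like (the playbook and inventory file names are assumptions; check the deploy-soundscape README for the real ones):

    # Clone the deployment repo and run the playbook (file names are assumptions)
    git clone https://github.com/mbari-org/deploy-soundscape.git
    cd deploy-soundscape
    ansible-playbook -i inventory.yml site.yml --ask-become-pass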

pam-processing

coredata.shore.mbari.org

  • Person responsible: Rich Schramm/Kevin Gomes
  • OS: CentOS release 6.10
  • Status: SHUTDOWN
  • This is where the OASIS data downloads happen
  • Ship to shore data migration happens here
  • MiniROV data processing happens here
  • There are two mounts via /etc/fstab:
    • Atlas OASIS is mounted on /oasis
    • Atlas OASIS_Coredata is mounted on /oasis_coredata

coredata8.shore.mbari.org

  • Person responsible: Rich Schramm/Kevin Gomes/Karen Salamy/Danelle Cline

Deepsea-ai.shore.mbari.org

  • Person responsible: Danelle Cline
  • Located in the DMZ
  • This server is used to host a machine learning stack for hosting video track data. It has mounts to Titan and Atlas shares.
  • See http://deepsea-ai.shore.mbari.org/ for more details.

Data.mbari.org

  • Person responsible: Kevin Gomes
  • OS: CentOS 7.8
  • Located in the DMZ
  • This server is just a static data product server. It has a single mount which is the Atlas share Data_Repository and is read-only mounted on /data. There is a web server configuration in /etc/httpd/conf.d/data-repo.conf that contains the aliases that point to the three main product directories: products, platforms and activities. The URLs for those are:
  • There is no index page at https://data.mbari.org, but I should probably fix that to show the other directories. Some day ...
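  • A quick sanity-check sketch for this box, using only paths from the notes above:

    # Verify the Data_Repository share is mounted read-only on /data
    findmnt /data
    grep -i data /etc/fstab

    # See which aliases the web server exposes (products, platforms and activities)
    grep -i alias /etc/httpd/conf.d/data-repo.conf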

DiGiTS-Dev-Box-Fish.shore.mbari.org

  • Person responsible: Duane Edgington, Danelle Cline. IS also has an admin account for assisting with OS upgrades on this machine.
  • Ubuntu 18.04; 4 GPUs
  • This server is in D.Cline's office. This is mostly a development machine not a production machine. It hosts some images used for training and testing deep learning models at http://digits-dev-box-fish.shore.mbari.org:8080

doris.shore.mbari.org

  • Person responsible: Danelle Cline/Duane Edgington
  • Located in the DMZ
  • This server is used for developing/running AI pipelines for many projects, both on premise and in AWS, for bulk processing of video. It does not suffer from the timeout issues that happen on macOS machines.
  • Requires titan:/data/M3 be mounted as /M3
  • Requires titan:/data/M3_ML be mounted as /M3_ML to access models
  • Requires titan:/data/DeepSea-AI be mounted as /DeepSea-AI to access models
  • Other projects are mounted as needed, e.g. /data/UAV to /UAV and /data/CFElab to /CFElab
  • To use this machine for AWS bulk processing of video, ssh login with your MBARI username/password and go to /data; otherwise use your MBARI /u/username volume for your development.
  • Deployment code is at https://bitbucket.org/mbari/deploy-ai

Docker-rc.rc.mbari.org

  • Docker master: Brian Schlining
  • OS: CentOS 7
  • Runs the M3 microservice stack for RC shipboard operations.
  • Framegrabs are archived on this server and synced to atlas:/framegrabs by IS
  • Docker-compose code is at https://bitbucket.org/mbari/m3-microservices. Service is started using prod_rc.sh up -d
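  • A minimal sketch of bringing the stack up with the script named above (run from wherever m3-microservices is checked out on the box):

    # Start the RC shipboard M3 stack (docker-compose wrapper script)
    git clone https://bitbucket.org/mbari/m3-microservices.git
    cd m3-microservices
    ./prod_rc.sh up -d    # the WF box uses prod_wf.sh instead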

Docker-wf.wf.mbari.org

  • Docker master: Brian Schlining
  • OS: CentOS 7
  • Runs the M3 microservice stack for WF shipboard operations.
  • Framegrabs are archived on this server and synced to atlas:/framegrabs by IS
  • Docker-compose code is at https://bitbucket.org/mbari/m3-microservices. Service is started using prod_wf.sh up -d

docs.mbari.org

  • Person responsible: Kevin Gomes/Carlos Rueda
  • OS: CentOS 7.8
  • Located in the DMZ to make documentation publicly available (except for internal subdirectory)
  • Similar to data.mbari.org, this machine is mainly there to serve static documentation, things like mkdocs. The main goal is to provide documentation for MBARI engineering products.
  • The Atlas://ProjectLibrary share is mounted as read-only (need to double check this) on this machine (/etc/fstab) and is mounted at /var/www/html/internal/projects.
  • The httpd configuration (set up by Joe Gomez) only allows traffic from 134.89.x.x to access the internal project library stuff which means you need to be VPN’d or onsite to get to it. He set that up in the /etc/httpd/conf/httpd.conf file
  • There is a webhook deployed here that builds and deploys MkDocs sites automatically when they are checked in.
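  • As a rough sketch of what the webhook does on a push (the repo checkout and site directory paths are hypothetical; only the MkDocs command itself is standard):

    # Rebuild an MkDocs site and publish it into the web root (paths are hypothetical)
    git -C /opt/docs/some-project pull
    mkdocs build -f /opt/docs/some-project/mkdocs.yml --site-dir /var/www/html/some-project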

elvis.shore.mbari.org

  • Person responsible: Mike McCann/Kevin Gomes/Rich Schramm
  • OS: CentOS 7.8
  • There is a Tomcat 7 server running which looks like it has OPeNDAP and THREDDS instances deployed on it.
  • There are a BUNCH of httpd proxy passes set up to serve various things:
  • There is a mysql server running, not sure why. Answer: It was used for the Live Access Server (/var/www/lasxmlOASIS/) which is no longer running.
  • Under the ssdsadmin account (‘sudo -u ssdsadmin -i’) there are a bunch of processes that run under cron that process data from the OASIS moorings. This is a VERY large set of scripts and processes.
  • There is a process (NDBC) that pushes some mooring data to the NDBC data repositories
  • There is a process that is called from the coredata (~/dev/DPforSSDS/oasis/bin/oasisToSSDS) that processes downloaded OASIS data into SSDS (XML and raw data packets)
  • Looks like there are some lobo viz processes kicked off under the ssdsadmin account.
  • The following are mounted via /etc/fstab:
    atlasnfs:/ifs/mbari/ssdsdata                        /ssdsdata                                              nfs defaults
    atlasnfs:/ifs/mbari/oasis                           /oasis                                                 nfs defaults
    atlasnfs:/ifs/mbari/oasis_cfg-cron                  /oasis_cfg-cron                                        nfs defaults
    atlasnfs:/ifs/mbari/oasis_coredata                  /oasis_coredata                                        nfs defaults
    atlasnfs:/ifs/mbari/3Dreplay                        /var/www/search_html2/ARCHIVE/3Dreplay                 nfs defaults
    atlasnfs:/ifs/mbariarchive/digitalimages            /var/www/search_html2/ARCHIVE/digitalimages            nfs defaults
    atlasnfs:/ifs/mbariarchive/framegrabs               /var/www/search_html2/ARCHIVE/framegrabs               nfs defaults
    atlasnfs:/ifs/mbari/ShipData/logger                 /var/www/search_html2/ARCHIVE/logger                   nfs defaults
    atlasnfs:/ifs/mbari/RovNavEdit                      /var/www/search_html2/ARCHIVE/nav                      nfs defaults
    atlasnfs:/ifs/mbari/ShipData/rovctd                 /var/www/search_html2/ARCHIVE/rovctd                   nfs defaults
    atlasnfs:/ifs/mbariarchive/VARS_Stn-M_Image_Archive /var/www/search_html2/ARCHIVE/VARS_Stn-M_Image_Archive nfs defaults
    
    • The CNAME search points to elvis and is used for serving ROV video frame grabs

Move to Ubuntu VM (elvis2) in 2025

  • See IS help tickets #40847, #40866, #41491... for details on server configuration
  • The Hyrax server installed directly on the legacy elvis's OS has become unstable, crashing several times while reloading stoqs_all_dorado
  • Up-to-date Hyrax and THREDDS Data Servers are now available as docker images
  • The hyrax, thredds, and apache2 services are now all configured in the elvis-services repo
  • In order to ease the setup of reverse proxies and the isolation of traffic for public serving, the various services have been split among multiple VMs
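  • A hedged sketch of running the dockerized Hyrax and THREDDS servers (image names and port mappings are assumptions; the real configuration lives in the elvis-services repo):

    # Hyrax OPeNDAP server (image name and port mapping are assumptions)
    docker run -d --name hyrax -p 8080:8080 opendap/hyrax
    # THREDDS Data Server (image name and port mapping are assumptions)
    docker run -d --name thredds -p 8180:8080 unidata/thredds-docker
    # In practice the data volumes, catalogs and reverse proxy settings come
    # from the elvis-services repo, not from ad-hoc commands like these.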

erddap.mbari.org

  • Person responsible: Kevin Gomes
  • OS: CentOS 7.8
  • Located in the DMZ
  • There is a docker process running that has an ERDDAP server running.
  • This is meant to be a long-standing instance of ERDDAP to serve MBARI data
  • Right now, it’s more of a development machine that we used for the Improving Impact project
  • I don’t think anything is mounted to this machine (nothing in /etc/fstab). I think everything is just local.
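  • To see what the ERDDAP container is doing, a minimal docker sketch (the container name and port are assumptions):

    docker ps                                        # find the ERDDAP container
    docker logs --tail 100 -f erddap                 # container name is an assumption
    curl -I http://localhost:8080/erddap/index.html  # standard ERDDAP path; port is an assumption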

esp-portal.mbari.org

Expanse.shore.mbari.org

  • Master of the server: Brian Schlining
  • OS: CentOS 8
  • Replaces seaspray.shore.mbari.org
  • Runs the internal deep-sea guide which is proxied at http://m3.shore.mbari.org/dsg. The build script for the internal DSG is at https://bitbucket.org/mbari/m3-deployspace/src/master/deepseaguide/build.sh
  • Uses podman instead of docker. DSG is run under the local brian account.
  • True fact: It’s named after one of the best space sci-fi series ever.
  • Runs the following crontabs (using podman) as brian:
    • m3-merge-rov/housekeeping.sh: runs the weekly merge between VARS annotations and EXPD
    • midwater-transects/update-database-app: does some housekeeping on the midwater transect database.
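  • The brian crontab looks roughly like the sketch below (only the two script names come from the notes above; the schedules and paths are assumptions):

    # crontab -l as brian (illustrative only; times and paths are assumptions)
    0 2 * * 0  /home/brian/m3-merge-rov/housekeeping.sh            # weekly VARS/EXPD merge
    0 3 * * *  /home/brian/midwater-transects/update-database-app  # midwater transect DB housekeeping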

Geoserver.mbari.org

  • Person responsible: Kevin Gomes
  • OS: CentOS 7.8
  • Located in the DMZ
  • No mounts via fstab
  • This server is meant to be a long term GeoServer instance where MBARI can share geospatial data.
  • More of a development machine for Improving impact project currently.
  • There is a tomcat 9 instance deployed (Java 1.8) in /opt/tomcat9 and has a geoserver.war file deployed. It’s running as the user ‘tomcat’ which has no login so you have to either chown or chmod stuff to work with it. It’s a pain.
  • Proxy pass set up in /etc/httpd/conf.d/geoserver.conf and redirects /geoserver to :8080/geoserver so it will hit the tomcat port.
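  • A quick way to confirm the proxy pass and the Tomcat instance are both answering (/geoserver/web/ is GeoServer’s standard UI path; everything else comes from the notes above):

    curl -I http://localhost/geoserver/web/        # through the httpd proxy
    curl -I http://localhost:8080/geoserver/web/   # directly against Tomcat
    grep -i proxypass /etc/httpd/conf.d/geoserver.conf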

Gizo.shore.mbari.org

  • Person responsible: Danelle Cline, Carlos Rueda
  • OS: CentOS 7
  • Located in the DMZ
  • This server is used for decimating audio data and producing various soundscape products including LTSA (Long-Term Spectral Averages), PSD (Power Spectral Density), and HMB (Hybrid Millidecade Spectra).
  • Requires mounts to the PAM_Analysis share and

Hyrax.shore.mbari.org

Ione.mbari.org

  • The server lord: Brian Schlining
  • OS: CentOS 7
  • In the DMZ
  • Runs external vars services via docker. These include the external deep-sea guide, vars-kb-server, charybdis, vampire-squid, and annosaurus. These services are proxied via dione.mbari.org using the dsg.mbari.org CNAME.
  • Services build/deploy scripts are in https://bitbucket.org/mbari/m3-deployspace except for charybdis which is at https://bitbucket.org/mbari/charybdis

Jupyterhub.mbari.org

  • Person responsible: Kevin Gomes
  • OS: CentOS 7.8
  • Located in the DMZ
  • No mounts via /etc/fstab
  • The goal of this server is to host a long-term JupyterHub instance that is available for MBARI to share notebooks with the outside world
  • Located at (https://jupyterhub.mbari.org/hub/login)
  • Tried to originally set up using our AD so we could use our MBARI logins, but this was really messy so we gave up and were just using it for the Improving Impact project for development and testing.
  • I think I also tried to use Auth0 for it and that got messy, but that might be another alternative since Joe has connected our AD to Auth0.
  • Need to upgrade JupyterHub and make another run at the AD stuff, maybe through OpenID?
  • Currently the logins are linked to the local accounts on the machine so you use your local login to get in and right now the kgomes and meteor accounts are listed as admins.

Kahuna.shore.mbari.org

  • The Big Kahunas: Karen Salamy/Kevin Gomes
  • OS: CentOS 7.8
  • One mount which is Atlas://Subversion mounted on /svn
  • This is our internal Subversion server (more info needed)

kelp.mbari.org

  • Person responsible: ??? (Carlos sent emails to possible people involved with this VM to determine what should be done re CentOS6 EOL: Mike Godin, Brett H, and others). (Both kelp.mbari.org and aosn.mbari.org used to resolve to the same site (IP)).
  • I believe these were both shut down when CentOS 6 was abandoned and nobody was there to defend them (poor servers).

Kraken.shore.mbari.org

  • Person responsible: Mike McCann
  • STOQS server
  • Is this still a big fire-breathing server (hardware) or now a VM?

M3datavis.shore.mbari.org

M3.shore.mbari.org

Malibu.wf.mbari.org

  • Person responsible: Kevin Gomes
  • OS: CentOS 6.10
  • VM Located on the Western Flyer.
  • There is a mount from the machine hurricane on the Flyer from /WF-mnavData to /WF-mnavData and this is used so that a process can write files that contain tracking data from the tracking database to the Timezero machine. The files on this mount show up on the bridge machine and then those files are used to plot asset tracks in TimeZero on the bridge. There is a cronjob that runs every 5 minutes to do this processing (~/dev/timezero-positions/timezero-positions.sh)
  • This server runs the ODSS on the Flyer.
  • It has a cron job that makes sure the AMQP consumers are running to pull down the tracking data from RabbitMQ on messaging.shore.mbari.org and writing to the PostgreSQL database on malibu. The process to check is (~/dev/MBARItracking/amqp/runPersistMonitor.sh)
  • There are three Tomcat servers running on different ports. One for a very small ODSS service (I think this is only the servlet that returns the file listing that shows up in the “data” pane of the ODSS), one for an ERDDAP server and one for a THREDDS server.
  • We tend to ssh into this machine directly as odssadm.
  • There is a cron job set up to start the ODSS NodeJS server (/opt/odss-node/server/app.js) on reboot.
  • There is a cron job that runs every 15 minutes to sync data to/from shore (~/dev/odss/synchronization/bin/malibu/malibu-sync.sh). This script needs to be curated VERY carefully to make sure we do not kill the satellite comms pipe back to shore.
  • There is a cron job that runs every 5 minutes that copies the UCTD and PCTD files from the CTD to malibu (~/dev/odss/synchronization/bin/ctdsync, all happens within the WF network)
  • There is a cron job that runs every 2 minutes that pulls select AIS data and pushes that into the tracking database (~/dev/ais-extractor/ais-extractor.sh). This process actually connects to the mbaritracking database on normandy, extracts AIS data of vessels of interest (specified as a MMSI on the command line) and then inserts that data into the MBARI tracking database on malibu. This allows specific AIS tracked vessels to be visible on the ODSS without having to pull all AIS data through the satellite link from RabbitMQ (this caused problems in the past).
  • MongoDB is running (data store for the ODSS). It uses port 27017 and I have a desktop app (Studio 3T) that uses that port to allow for editing of the data store. This is how I create and edit views and layers in the ODSS.
  • There are other cron jobs that are commented out for reporting, syncing etc. Look in the odssadm crontable for more info.
  • There is an HTTPD server running with the following proxy passes:
    • /erddap -> :8280/erddap
    • /odss/services -> :8080/odss/services (this is the data pane listing)
    • /catalogs -> :8080/catalogs (I think this one isn’t used, but not sure)
    • /socket.io -> :3000/socket.io (this was supposed to be for pushing events to the ODSS, but I don’t think it’s used, I may not be able to remove it though until I remove the code that connects it to the browser)
    • /odss -> :3000/odss
    • Note that the /observations proxy is commented out and a local empty file was created so that the ship based ODSS does not try to read the observations from the services.mbari.org machine every minute over the satellite link.
    • /thredds -> :8180/thredds
    • WSGIScriptAlias /canon /home/stoqsadm/dev/stoqshg/stoqs.wsgi
    • alias /tilecache/ "/var/www/tilecache/"
    • WSGI alias /trackingdb -> /home/odssadm/dev/MBARItracking/amqp/tracking.wsgi
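  • Pulled together from the bullets above, the odssadm crontab on malibu looks roughly like this sketch (script paths and intervals are from the notes; the exact entries are illustrative):

    # crontab -l as odssadm (illustrative sketch)
    @reboot      node /opt/odss-node/server/app.js                     # start the ODSS NodeJS server
    */5  * * * * ~/dev/timezero-positions/timezero-positions.sh        # write tracks for TimeZero
    */15 * * * * ~/dev/odss/synchronization/bin/malibu/malibu-sync.sh  # ship/shore sync - edit with care
    */5  * * * * ~/dev/odss/synchronization/bin/ctdsync                # copy UCTD/PCTD files from the CTD
    */2  * * * * ~/dev/ais-extractor/ais-extractor.sh                  # pull select AIS data into the tracking DB
    # plus a job (interval not noted above) that runs
    # ~/dev/MBARItracking/amqp/runPersistMonitor.sh to keep the AMQP consumers alive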

mantis.shore.mbari.org

  • Person responsible: Danelle Cline/Duane Edgington
  • Hosts tools for image annotation, model development and advanced analytics for the following projects:
  • 901103 - Biodiversity & Biooptics
  • 901502 - Passive Acoustic Sensing
  • 901902 - Aerial Drones for Sci&MO
  • 902111 - Carbon Flux Ecology
  • 902004 - Planktivore
  • Runs NGINX via docker as docker_user.
  • Deployment code is through ansible playbooks. See https://github.com/mbari-org/deploy-ai
  • Requires mounts to all project data, which is used in read-only mode. The mounts are as follows (see the sketch after this list):
    • /mnt/UAV:/UAV:ro
    • /mnt/CoMPAS/:/CoMPAS:ro
    • /mnt/M3/:/M3:ro
    • /mnt/CFElab/:/CFElab:ro
    • /mnt/ProjectLibrary/:/ProjectLibrary:ro
    • /mnt/DeepSea-AI/:/DeepSea-AI:ro
  • Tator web client
  • FastAPI tator service api for bulk/common operations
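  • The mount list above is already in docker bind-mount syntax; a minimal sketch of how a container would use it (the image and container names, and anything beyond the read-only mounts, are hypothetical):

    docker run -d --name annotation-tools \
      -v /mnt/UAV:/UAV:ro \
      -v /mnt/CoMPAS:/CoMPAS:ro \
      -v /mnt/M3:/M3:ro \
      -v /mnt/CFElab:/CFElab:ro \
      -v /mnt/ProjectLibrary:/ProjectLibrary:ro \
      -v /mnt/DeepSea-AI:/DeepSea-AI:ro \
      some-analytics-image:latest    # image name is hypothetical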

Manta.wf.mbari.org

Mbon-erddap.shore.mbari.org

  • I helped set this up for Francisco, but haven’t touched it since, so it’s not ours.

mbscentos7.shore.mbari.org

  • Person responsible: Carlos Rueda/Dave Caress
  • OS: CentOS 7-9.2009.1.el7.centos.x86_64
  • Test machine for the dockerized MB-System

Menard.shore.mbari.org

Messaging.shore.mbari.org

  • Person responsible: Kevin Gomes/Rich Schramm
  • OS: CentOS 7.8
  • No mounts through /etc/fstab
  • This is the RabbitMQ server that is mostly used by SSDS (MARS data processing that is running on pismo) and the ODSS/Tracking DB (messages are processed on pismo and then sent to messaging, then consumed by processes on pismo, normandy, odss-test, zuma.rc.mbari.org and malibu.wf.mbari.org)
  • The server is located in /var/lib/rabbitmq and runs as the rabbitmq user, which has no login, which makes it a pain to see what is going on and/or configure it.
  • Log files are in /var/log/rabbitmq
  • Management interface is here: http://messaging.shore.mbari.org:55672/mgmt/#/
  • This thing is a beast and just keeps running. I never interact with it and the only time it is restarted is for OS patches.
  • Startup and shutdown scripts are in /usr/local/bin and started by /etc/systemd/system/rabbitmq.service
  • Not sure where the config file is, but probably in /var/lib/rabbitmq which is not visible without chowning and I’m not going to worry about it for now.
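  • Useful commands for poking at RabbitMQ on this box (the unit file and log directory are from the notes above; rabbitmqctl needs root or the rabbitmq user):

    systemctl status rabbitmq.service                     # the unit that wraps the start/stop scripts
    sudo rabbitmqctl status                               # basic broker health
    sudo rabbitmqctl list_queues name messages consumers  # queue overview
    ls /var/log/rabbitmq                                  # log files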

Moonjelly.shore.mbari.org

  • Person responsible: Karen Salamy/Kevin Gomes
  • OS: CentOS Linux release 7.8.2003 (Core)
  • This is our CVS server

mxm.shore.mbari.org

New-ssds.mbari.org

  • Person responsible: Kevin Gomes
  • OS: CentOS 7.8
  • Located in the DMZ
  • There are three read-only mounts from Atlas
    • ssdsdata is mounted on /ssdsdata
    • AUVCTD is mounted on /auvctd
    • AUVBI is mounted on /auvbi
  • This is the public facing SSDS server (http://new-ssds.mbari.org/ssds/) … holy crap, we still have the original image that John Graybeal created there ... yikes.
  • There is a JBoss (version 4.0.3-SP1) server running from /opt/jboss.
  • The SSDS files are deployed into /opt/jboss-4.0.3SP1/server/default/deploy
  • There is an XML file that subscribes to a remote JMS topic on bob and then consumes messages from bob. There is an ssds-ruminate.jar file that creates a message-driven bean which consumes those messages off the ruminate topic on bob, does some processing, then forwards them on to a ruminate-republish topic that is created by the ssds-ruminate-republish-service.xml file. This is the JMS topic which is connected to by the updatebot process on pismo.
  • There are some .war files that can probably be removed (access.war, axis.war, cimt.war, foce.war, mars.war and muce.war) as you cannot even get to them unless you go through the :8080 connection and most say they are no longer available.
  • Two XML files configure the database connections:
    • ssds-data-ds.xml: connection to dione and the SSDS_Data database
    • ssds-metadata-ds.xml: connection to dione and the SSDS_Metadata database
  • I need to look into the access.war and the axis.war files. I think they can probably be removed. I think the axis stuff was for the OMF project
  • There are two proxy passes created in the /etc/httpd/conf.d/ssds.conf file
    • /ssds -> :8080/ssds
    • /servlet -> :8080/servlet
  • Not sure if the servlet does anything, but I think that is what is used by the OASIS2SSDS process that is running on elvis that is called by the process on coredata (whew!)
  • For more details, look at the SSDS codebase and the build process to see what gets deployed where

Nexus.shore.mbari.org

  • Defender of the nexus: Brian Schlining
  • OS: CentOS 7
  • Runs two annosaurus servers via docker, one for the video lab, another for Danelle’s machine learning pipeline. The video lab’s annosaurus server is proxied via m3.shore.mbari.org. The build/deploy script for them is at https://bitbucket.org/mbari/m3-deployspace/src/master/annosaurus/ . You will need this script to restart/redeploy

Normandy.mbari.org

  • Person responsible: Kevin Gomes/Mike McCann/Carlos Rueda/Danelle Cline
  • OS: CentOS 6.10
  • Located in the DMZ
  • Normally we login as odssadm
  • There are two Atlas mounts in /etc/fstab
    • ODSS is mounted on /ODSS (then /ODSS/data is mounted on /data)
    • CANON_Images is mounted on /CANON_Images (not sure why and odssadmin can’t even read it). This was in the readme so not sure if we are even using it anymore: CANON_Images is for image and video data related to the CANON program. CANON data belongs on the ftp site: server: odss.mbari.org, user: canon, passwd (send a request to dss-dev@mbari.org or tm@mbari.org)
  • I suspect it was a way for folks to FTP files to the ODSS, but we have that set up directly as FTP on normandy now
  • It has a cron job that makes sure the AMQP consumers are running to pull down the tracking data from RabbitMQ on messaging.shore.mbari.org and writing to the PostgreSQL database on malibu. The process to check is (~/dev/MBARItracking/amqp/runPersistMonitor.sh)
  • There is a cron job which is currently commented out that runs the process to grab the saildrone positions and push them to the ODSS every minute. (/home/odssadm/dev/saildrone-extraction/run-sd-extract.sh)
  • The ODSS NodeJS server runs from /opt/odss-node/server/app.js
  • Sometimes the repository monitor is run manually using node (it is RepoWatcher.js and takes configuration params at the command line).
  • Looks like there is a cron job that runs nightly to restart the THREDDS server.
  • There is a cron job that cleans out the tilecache each day
  • There is a cron job that cleans up Ferret journal files
  • There is a cron job that copies down remote sensing ERDDAP data to our local ERDDAP server (this is to both improve performance, but also to allow for replication out to the ships). We should probably re-think this and maybe just register those data sets and access them from their sources directly.
  • There is a cron job that is used to update mapserver mapfiles with the latest available timecodes for each layer every hour (D.Cline)
  • There is a cron job (currently commented out) to run the script to extract the wg gotoWatch commands and publish to ODSS
  • There is a cron job (Carlos) that enables the odss2dash service, which pushes asset positions from the tracking DB to TethysDash. NOTE: the odss2dash service was moved to a different machine. More details at https://docs.mbari.org/tethysdash/odss2dash/.
  • There is a postgres server running which is for the tracking database
  • There is a rabbitmq server running here. Do we need this?
  • MongoDB is running (data store for the ODSS). It uses port 27017 and I have a desktop app (Studio 3T) that uses that port to allow for editing of the data store. This is how I create and edit views and layers in the ODSS. The configuration file is in /etc/mongod.conf (tells mongo to put data files in /data/mongo and log files in /var/log/mongo/mongod.log)
  • There are three Tomcat servers running on different ports. One for a very small ODSS service (I think this is only the servlet that returns the file listing that shows up in the “data” pane of the ODSS), one for an ERDDAP server and one for a THREDDS server.
  • There is an HTTPD server running with the following proxy passes:
    • /erddap -> :8280/erddap
    • /odss/services -> :8080/odss/services (this is the data pane listing)
    • /catalogs -> :8080/catalogs (I think this one isn’t used, but not sure)
    • /socket.io -> :3000/socket.io (this was supposed to be for pushing events to the ODSS, but I don’t think it’s used, I may not be able to remove it though until I remove the code that connects it to the browser)
    • /odss -> :3000/odss
    • /observations -> http://services.mbari.org/observations (not really used, but is still a part of the ODSS, so need to keep). UPDATE: When I migrated services.mbari.org, I actually just removed this from the ODSS.
    • /thredds -> :8180/thredds
    • WSGI alias /trackingdb -> /home/odssadm/dev/MBARItracking/amqp/tracking.wsgi
    • /tracking-post -> :3001 (this was for a small node JS service I wrote that allows for pushing locations directly into the tracking DB. It is not active currently, but lives in /opt/odss-node/tracking-post)

Oceana2.mbari.org

  • Person responsible: Kevin Gomes
  • OS: CentOS 7.8
  • The oceana.mbari.org and confluence.mbari.org CNAMEs point to this machine.
  • No mounts through /etc/fstab
  • This is where Crowd and Confluence are running
  • Located in the DMZ, but I am pretty sure we have it all closed off from the outside world.
  • The goal with this server is to move the Confluence content to something else (Google docs, Markdown, etc.) and then shutdown the server.
  • The Crowd instance running on this machine is connected to our LDAP server and is used to handle the logins from Confluence. Because I have an instance of Crowd running as production on auth.mbari.org, I have to keep renewing licenses for Crowd as trial licenses, but since the upgrade for Confluence is not happening I could probably move that license back to oceana to prevent me from having to renew it each month to keep the logins in Confluence working.
  • The data for confluence and crowd are stored in /data (which is local and where the license is renewed)
  • Two java processes running two tomcat instances which are both in /opt
  • Crowd page is here: http://oceana.mbari.org:8095/crowd/console/login.action
  • Note that the Crowd page is not over https, but it’s not used when folks log in. They log in on the Confluence page (which is secure) and Confluence connects to Crowd via localhost, not over the network. The Crowd page is only available internally.
  • Confluence page is here: https://oceana.mbari.org/confluence/dashboard.action
  • A proxy pass is configured in /etc/httpd/conf.d/proxypass.conf that points /confluence to :8080/confluence

Odss-test.shore.mbari.org

  • Person responsible: Kevin Gomes/Carlos Rueda
  • OS: CentOS 6.10
  • No mounts through /etc/fstab
  • Log in using odssadm
  • The only cron jobs on this machine (as odssadm) are the ones that start the odss server at reboot (which currently does not work) and one that makes sure the AMQP consumers are running to pull down the tracking data from RabbitMQ on messaging.shore.mbari.org and writing to the PostgreSQL database on malibu. The process to check is (~/dev/MBARItracking/amqp/runPersistMonitor.sh)
  • MongoDB is running (data store for the ODSS). It uses port 27017 and I have a desktop app (Studio 3T) that uses that port to allow for editing of the data store. This is how I create and edit views and layers in the ODSS. The configuration file is in /etc/mongod.conf (tells mongo to put data files in /data/mongo and log files in /var/log/mongo/mongod.log)
  • There are three Tomcat servers running on different ports. One for a very small ODSS service (I think this is only the servlet that returns the file listing that shows up in the “data” pane of the ODSS), one for an ERDDAP server and one for a THREDDS server.
  • There are several proxy passes set up:
    • WSGIScriptAlias /canon /home/odssadm/dev/stoqsgit/stoqs.wsgi
    • WSGIScriptAlias /canontest /home/odssadm/dev/stoqsgittest/stoqs.wsgi
    • WSGIScriptAlias /trackingdb /home/odssadm/dev/MBARItracking/amqp/tracking.wsgi
    • These are configured in /etc/httpd/custom (which is not normal)
      • /erddap -> :8180/erddap
      • /services -> :8080/odss/services (this is the data pane listing)
      • /catalogs -> :8080/odss/catalogs (probably not used anymore)
      • /socket.io -> :3000/socket.io (this was supposed to be for pushing events to the ODSS, but I don’t think it’s used, I may not be able to remove it though until I remove the code that connects it to the browser)
      • /odss -> :3000/odss
      • /observations -> http://services.mbari.org/observations (not really used, but is still a part of the ODSS, so need to keep)
      • /thredds -> :8280/thredds
      • alias /tilecache/ "/data/tilecache-2.11/"

okeanids.mbari.org

Pam.shore.mbari.org

  • Person responsible: Danelle Cline
  • OS: CentOS 7.8
  • Status: Not used
  • Responsible for decimating raw data in 24-hour increments from 256 kHz to 2 kHz sound files. Decimation and LTSA files have been moved to gizo.mbari.org via Matlab scripts that John Ryan runs by hand.

pismo.shore.mbari.org

  • Person responsible: Mike McCann/Kevin Gomes/Rich Schramm/Carlos Rueda
  • OS: CentOS 7.8
  • There is one mount in /etc/fstab (The ssdsdata share is mounted on /ssdsdata)
  • I don’t think this is just an SE IE server (Luke Coletti has a home directory), but I think we use it heavily for various tasks. Some things that I know of that run there:
    • This is where all the ingest for the tracking DB runs. It processes all the emails and updates files, sends messages to RabbitMQ, etc. This all happens in the /home/* directories
    • This is where the SSDS UpdateBot runs (see the systemd sketch at the end of this entry). It’s started via a systemd script (/etc/systemd/system/updatebot.service) and is a Java program that connects to the SSDS instance running on new-ssds.mbari.org and watches for any JMS messages that come in on the ruminate topics. It then kicks off a process to check if the message has any files that need conversion to NetCDF. This is used almost exclusively in the Dorado data processing pipeline, but I think the OASIS mooring XML might be augmented here. The Java program also launches a process that crawls all SSDS deployments and looks for any files that need to be converted to NetCDF. I think this is currently set to run twice a day. It will start when the process first starts, so if a Dorado process did not complete, you can restart it to crawl everything. The log for this process is in /opt/ssds/logs.
    • Rich has some scripts for processing MARS data into SSDS that run under the ssdsadmin account. If you login and then switch over to the ssdsadmin account using sudo -u ssdsadmin -i, you can see the processes that get kicked off if you run a crontab -l
    • Rich is running the rovctd plotting scripts here under the ssdsadmin account (crontab)
    • Carlos has some scripts running:
      • gpsfix2auvtrack: Service that emails "auvtrack" with GPS fixes received from TethysDash.
      • UPDATE 2020-07-10: gpsfix2auvtrack now obsolete and completely disabled. (The TethysDash backend now provides the functionality.)
    • There is a Tomcat server running, but as far as I can tell, it’s not doing anything, can we shut this down? There is nothing in /usr/share/tomcat/webapps
    • Docker is running there
    • MariaDB (MySQL) is running there, do we need this?
  • There is an httpd service running and some files under /var/www/html/external and /internal. Not sure what those are from. There are some jelly watch files and I think Jellywatch is hosted elsewhere so I would imagine we could clean that out.
    • This looks like where the Pacific Forum calendar is served from (verified by looking at the calendar, which is running on mww2.shore.mbari.org, a CNAME for pismo). It looks like there might be two of them: one in /external/events and one in /internal/events
    • Looks like there might be some BOG stuff too in /var/www/html/external/science/upper-ocean-systems/biological-oceanography/web/extern/bog
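  • The UpdateBot systemd unit mentioned above can be checked and restarted like this (unit name and log location are from the notes above):

    systemctl status updatebot.service         # SSDS UpdateBot on pismo
    sudo systemctl restart updatebot.service   # e.g. to re-crawl after a failed Dorado run
    ls /opt/ssds/logs                          # UpdateBot logs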

Portal.shore.mbari.org

  • Portal keeper: Brian Schlining
  • OS: CentOS 7
  • Runs a number of docker containers in support of VARS and other services:
    • A docker registry at portal.shore.mbari.org:5000
    • The auv-rawdata-server (deployed using build.sh in https://bitbucket.org/mbari/auv-rawdata-server)
    • Vars-user-server - M3 user service. This and the other M3 services are deployed using scripts in https://bitbucket.org/mbari/m3-deployspace. All M3 services are proxied via m3.shore.mbari.org
    • Vampire-squid - M3 video asset manager
    • Annosaurus - M3 annotation manager
    • Vars-kb-server - M3 knowledgebase service

Quasar.shore.mbari.org

Seaspray.shore.mbari.org

  • The precious: Brian Schlining
  • OS: Old - being retired
  • Still runs the cookie break sign up app

Services.mbari.org -> CNAME to services8.mbari.org

Services8.mbari.org

  • Person responsible: Kevin Gomes/Brian Schlining
  • OS: CentOS 8.1
  • Located in the DMZ
  • No mounts defined in /etc/fstab
  • This used to be where the observations service ran
  • There are a few proxy passes in /etc/httpd/conf/httpd.conf:
    <VirtualHost *:80>
        DocumentRoot /var/www/html
        #Redirect permanent / https://services.mbari.org/
        <Location /vars/references>
            ProxyPass http://ione.mbari.org:8300
            ProxyPassReverse http://ione.mbari.org:8300
        </Location>
        Redirect "/espweb" "https://esp-portal.mbari.org/web/"
        Redirect "/data" "https://esp-portal.mbari.org/web/data/"
    </VirtualHost>
    

Singularity.shore.mbari.org

Ssds-ingest.shore.mbari.org

  • Person responsible: Kevin Gomes
  • OS: CentOS 7.8
  • I am pretty sure this is where I was working to deploy the updated ingest mechanism for the SSDS. I was working on a branch to deploy it in the updated JBoss server which was Wildfly.
  • Wildfly 10 was installed in /opt/wildfly-10.0.0.Final (Java 1.8) and that is where the updated SSDS ingest components are deployed. The files installed (in /opt/wildfly/standalone/deployments) are:
    • data.war
    • ingest.ear
    • jtds-1.3.1.jar
    • ssds-data-ds.xml: this connects up to the SSDS_Data database on dione and creates a data source named SSDS_Data.
    • ssds-ingest-jms.xml
    • ssds-ingest-protobuf-jms.xml
    • ssds-ruminate-jms.xml
    • ssds-transmogrify-jms.xml
  • So, I think this is a development in process and there are probably not any clients that are sending data to the services on this machine. They probably work in the new context, but I probably never got far enough to move over any production feeds. I think this was just responsible for ingesting (sort of like a bob.shore.mbari.org replacement).
  • The artifacts are built in the ~/dev/shore-side-data-system directory and the build process deploys the artifacts to wildfly. The builds in this location indicate that only one client jar file was built that publishes messages to the transmogrify topic and is likely the replacement jar file that would be used by SIAM clients. This new version has several web pages that get deployed with the ingest services. They are:
  • These are basically a way for you to send in a packet using a web form to the different points in the pipeline, which is handy if you want to manually inject metadata. A web application (form) gets deployed at each base URL (/transmogrify, /ingest, /data) and submits data to a servlet at a sub-URL (/transmogrify/transmogrify, /ingest/ingest, /data/data). I initially thought /data might be the same as /ingest for convenience, but looking at the code, the intent was for /data to be a form for extracting/querying that was never finished: there is a process to build the data access servlet, but the associated web app is just a copy of the Ingest web app. The servlet is available at /data/raw-data, so a web form still needs to be built at /data that submits parameters to the /data/raw-data servlet. Good to know. So the ingest.ear file that is deployed contains the ingest.war and transmogrify.war files (the two web form/servlet combinations) and the data.war file is the work in progress for the data query/download.
  • There is a proxy pass configured in /etc/httpd/conf.d/ssds.conf, but that points /ssds to the base wildfly server at the port :8080. The base /ssds does not really work as the base Wildfly page needs the :8080 and looks broken, but the sub links seem to work (transmogrify, ingest, data).
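  • A hedged way to poke at the deployed endpoints (the URLs are from the notes above; whether they return 200s or redirects depends on the state of the deployment, and any form parameters would have to come from the SSDS codebase):

    curl -I http://ssds-ingest.shore.mbari.org:8080/transmogrify/
    curl -I http://ssds-ingest.shore.mbari.org:8080/ingest/
    curl -I http://ssds-ingest.shore.mbari.org:8080/data/
    curl -I http://ssds-ingest.shore.mbari.org:8080/data/raw-data   # the data access servlet (work in progress)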

Ssds-prod.mbari.org

  • Person responsible: Kevin Gomes
  • OS: CentOS 7.8
  • Located in the DMZ
  • This is similar to the ssds-ingest.shore.mbari.org machine, but only the data.war and ssds-data-ds.xml files are deployed. This means that only the data query/extraction page/servlet combination are deployed in the DMZ.
  • I think my plan was to deploy components, as I built them, into this DMZ machine to be used for queries and APIs. That is why the split exists between the ssds-ingest and ssds-prod machines. I bet the ssds-test machine has everything.

Ssds-test.shore.mbari.org

  • Person responsible: Kevin Gomes
  • OS: CentOS 7.8
  • This is exactly the same configuration as ssds-ingest.shore.mbari.org
  • It has all the same components that are deployed on ssds-ingest, but I think the goal is to have all the metadata stuff deployed here too, and the metadata services and applications would be deployed to ssds-prod.

Stoqs2.mbari.org

Stoqs.mbari.org

Tethyscode2.shore.mbari.org

Tethyscode.shore.mbari.org

tethysdash2.shore.mbari.org

  • Person responsible: Carlos Rueda
  • OS: CentOS 7.8.2003
  • Production machine for the TethysDash system. For more details see the "okeanids" entry on this page.

Tethysdash.shore.mbari.org

  • Replaced by tethysdash2 -- see above.

Tethysdata.shore.mbari.org

tethystest.shore.mbari.org

  • Person responsible: Carlos Rueda
  • OS: CentOS 7.8.2003
  • Testing/staging machine for TethysDash.

Tethysviz.shore.mbari.org

tsauv.shore.mbari.org

Wg-portal.mbari.org

  • Person responsible: Kevin Gomes
  • OS: CentOS 8.1
  • No mounts through /etc/fstab
  • This is going to be where I deploy the WaveGlider data portal. I am working on some Python scripts to use the SOAP services in WGMS to extract data for the WaveGliders and store it. It will also publish locations to the ODSS from this portal. Nothing is installed yet, however, not even httpd. Podman is there though.

Zuma.rc.mbari.org

  • Person responsible: Kevin Gomes
  • OS: CentOS 6.10
  • VM Located on the Rachel Carson
  • There is a mount from the machine corona on the Carson from /RC-mnavData to /RC-mnavData and this is used so that a process can write files that contain tracking data from the tracking database to the Timezero machine. The files on this mount show up on the bridge machine and then those files are used to plot asset tracks in TimeZero on the bridge. There is a cronjob that runs every 5 minutes to do this processing (~/dev/timezero-positions/timezero-positions.sh)
  • This server runs the ODSS on the Carson.
  • It has a cron job that makes sure the AMQP consumers are running to pull down the tracking data from RabbitMQ on messaging.shore.mbari.org and writing to the PostgreSQL database on malibu. The process to check is (~/dev/MBARItracking/amqp/runPersistMonitor.sh)
  • There are three Tomcat servers running on different ports. One for a very small ODSS service (I think this is only the servlet that returns the file listing that shows up in the data pane of the ODSS), one for an ERDDAP server and one for a THREDDS server.
  • We tend to ssh into this machine directly as odssadm.
  • There is a cron job set up to start the ODSS NodeJS server (/opt/odss-node/server/app.js) on reboot.
  • There is a cron job that runs every 10 minutes to sync data to/from shore (~/dev/odss/synchronization/bin/zuma/zuma-sync.sh). This script needs to be curated VERY carefully to make sure we do not kill the comms pipe back to shore although most of the time it’s on microwave which is more forgiving.
  • There is a cron job that runs every 10 minutes that copies the UCTD and PCTD files from the CTD to zuma (~/dev/odss/synchronization/bin/ctdsync, all happens within the RC network)
  • MongoDB is running (data store for the ODSS). It uses port 27017 and I have a desktop app (Studio 3T) that uses that port to allow for editing of the data store. This is how I create and edit views and layers in the ODSS.
  • There are other cron jobs that are commented out for reporting, syncing etc. Look in the odssadm cron table for more info.
  • There is an HTTPD server running with the following proxy passes:
    • /erddap -> :8280/erddap
    • /odss/services -> :8080/odss/services (this is the data pane listing)
    • /catalogs -> :8080/catalogs (I think this one isn’t used, but not sure)
    • /socket.io -> :3000/socket.io (this was supposed to be for pushing events to the ODSS, but I don’t think it’s used, I may not be able to remove it though until I remove the code that connects it to the browser)
    • /odss -> :3000/odss
    • Note that the /observations proxy is commented out and a local empty file was created so that the ship based ODSS does not try to read the observations from the services.mbari.org machine every minute over the satellite link.
    • /thredds -> :8180/thredds
    • WSGIScriptAlias /canon /home/stoqsadm/dev/stoqshg/stoqs.wsgi
    • alias /tilecache/ "/var/www/tilecache/"
    • WSGI alias /trackingdb -> /home/odssadm/dev/MBARItracking/amqp/tracking.wsgi