
Rachel Carson Simulator

This page documents the Navproc and Logging system on the Rachel Carson Simulator

Overview

First, let's look at a high-level diagram of the various components. There are five main components in the simulator:

  1. An Ubuntu 24 virtual machine named sim-rcsn.shore.mbari.org that runs simulation software which generates mock signals and pushes them out serial ports to a Moxa serial device server. The simulators simply replay old log files from the Carson, writing each record out a serial port as its timestamp comes up.
  2. A Moxa NPort 16-port RS-232 serial device server that is connected to the VM mentioned above via the Moxa Linux kernel driver.
  3. A Digi Connect TS16 that takes in physical connections from the outputs of the Moxa NPort.
  4. An Ubuntu 22 NUC that runs the current version of the navproc/logr software and has the TS16 ports mounted virtually as /dev/tty devices.
  5. An Ubuntu 24 virtual machine that runs the data distribution side of things (Redis, InfluxDB, ZeroMQ, etc.).

```mermaid
---
title: Logical Simulator Components
---
flowchart LR
    subgraph RCSimulator[Rachel Carson Simulator]
        direction LR
        sim-rcsn["Simulator VM
        sim-rcsn.shore.mbari.org"] --> simulator-rcsn-moxa["Simulator Moxa
        134.89.10.247
        simulator-moxa-rcsn.shore.mbari.org"] --> simulator-rcsn-ts["Digi TS16
        134.89.10.100
        simulator-ts-rcsn"] --> navproc-rcsn-sim-nuc["Navproc NUC
        navproc-rcsn-sim-nuc.shore.mbari.org"] --> coredata-rcsn-sim["Simulator Coredata
        coredata-rcsn-sim.shore.mbari.org"]
    end
```

Below is a picture of the Moxa and Digi terminal servers, which are currently located in Kevin Gomes' office.

*(photo: physical connections between the Moxa and Digi terminal servers)*

The diagram below shows the current signal connections and their associated navproc processes

```mermaid
---
title: Signal Connections
---
flowchart LR
    subgraph signals[Signal Connections]
        direction LR
        subgraph Simulator
            ttyr00
            ttyr01
            gyro --> ttyr02
            gps --> ttyr03
            ctd --> ttyr04
            lodestar --> ttyr05
            gtdpro --> ttyr06
            uhsmsg --> ttyr07
            ttyr08
            csp --> ttyr09
            nav4d --> ttyr0a
            ttyr0b
            ttyr0c
            ttyr0d
            ttyr0e
            ttyr0f
        end
        subgraph Moxa
            moxa-port1
            moxa-port2
            moxa-port3
            moxa-port4
            moxa-port5
            moxa-port6
            moxa-port7
            moxa-port8
            moxa-port9
            moxa-port10
            moxa-port11
            moxa-port12
            moxa-port13
            moxa-port14
            moxa-port15
            moxa-port16
        end
        subgraph TS16
            ts16-ttya00
            ts16-ttya01
            ts16-ttya02
            ts16-ttya03
            ts16-ttya04
            ts16-ttya05
            ts16-ttya06
            ts16-ttya07
            ts16-ttya08
            ts16-ttya09
            ts16-ttya10
            ts16-ttya11
            ts16-ttya12
            ts16-ttya13
            ts16-ttya14
            ts16-ttya15
        end
        subgraph navproc-rscn-sim
            nav4d_gps_serin
            nav4d_gps
            nav4d_output
            nmea_gps_serin
            nmea_gps
            seabird_ctd_serin
            seabird_ctd
            ship_gyro_serin
            ship_gyro
            ventana_csp_serin
            ventana_csp
            vorne_display
            gtdpro_serin
            lodestar_serin
            uhsmsg_serin
            nav4d-gps-serin-lcm[NAV4D_GPS_SERIN]
            nav4d-gps-msg-lcm[NAV4D_GPS_MSG]
            nmea-gps-serin-lcm[NMEA_GPS_SERIN]
            nmea-gps-msg-lcm[NMEA_GPS_MSG]
            seabird-ctd-serin-lcm[SEABIRD_CTD_SERIN]
            seabird-ctd-msg-lcm[SEABIRD_CTD_MSG]
            ship-gyro-serin-lcm[SHIP_GYRO_SERIN]
            ship-gyro-msg-lcm[SHIP_GYRO_MSG]
            ventana-csp-serin-lcm[VENTANA_CSP_SERIN]
            ventana-csp-msg-lcm[VENTANA_CSP_MSG]
            nav4d-output-msg-lcm[NAV4D_OUTPUT_MSG]
            gtdpro-serin-lcm[GTDPRO_SERIN]
            lodestar-serin-lcm[LODESTAR_SERIN]
            uhsmsg-serin-lcm[UHS_SERIN]
        end
        vorne-display-device[Vorne Display]
    end
    ttyr00 --> moxa-port1
    ttyr01 --> moxa-port2
    ttyr02 --> moxa-port3
    ttyr03 --> moxa-port4
    ttyr04 --> moxa-port5
    ttyr05 --> moxa-port6
    ttyr06 --> moxa-port7
    ttyr07 --> moxa-port8
    ttyr08 --> moxa-port9
    ttyr09 --> moxa-port10
    ttyr0a --> moxa-port11
    ttyr0b --> moxa-port12
    ttyr0c --> moxa-port13
    ttyr0d --> moxa-port14
    ttyr0e --> moxa-port15
    ttyr0f --> moxa-port16
    moxa-port2 --> ts16-ttya01
    moxa-port3 --> ts16-ttya02
    moxa-port4 --> ts16-ttya03
    moxa-port5 --> ts16-ttya04
    moxa-port6 --> ts16-ttya05
    moxa-port7 --> ts16-ttya06
    moxa-port8 --> ts16-ttya07
    moxa-port9 --> ts16-ttya08
    moxa-port10 --> ts16-ttya09
    moxa-port11 --> ts16-ttya10
    moxa-port12 --> ts16-ttya11
    moxa-port13 --> ts16-ttya12
    moxa-port14 --> ts16-ttya13
    moxa-port15 --> ts16-ttya14
    moxa-port16 --> ts16-ttya15
    ts16-ttya10 --> nav4d_gps_serin --> nav4d-gps-serin-lcm
    nav4d-gps-serin-lcm --> nav4d_gps --> nav4d-gps-msg-lcm
    ventana-csp-msg-lcm --> nav4d_output --> nav4d-output-msg-lcm
    ts16-ttya03 --> nmea_gps_serin --> nmea-gps-serin-lcm
    nmea-gps-serin-lcm --> nmea_gps --> nmea-gps-msg-lcm
    ts16-ttya04 --> seabird_ctd_serin --> seabird-ctd-serin-lcm
    seabird-ctd-serin-lcm --> seabird_ctd --> seabird-ctd-msg-lcm
    ts16-ttya02 --> ship_gyro_serin --> ship-gyro-serin-lcm
    ship-gyro-serin-lcm --> ship_gyro --> ship-gyro-msg-lcm
    ts16-ttya09 --> ventana_csp_serin --> ventana-csp-serin-lcm
    ventana-csp-serin-lcm --> ventana_csp --> ventana-csp-msg-lcm
    nav4d-gps-msg-lcm --> vorne_display --> ts16-ttya00 --> vorne-display-device
    ventana-csp-msg-lcm --> vorne_display
    ts16-ttya06 --> gtdpro_serin --> gtdpro-serin-lcm
    ts16-ttya05 --> lodestar_serin --> lodestar-serin-lcm
    ts16-ttya07 --> uhsmsg_serin --> uhsmsg-serin-lcm
```

Installation and Setup

sim-moxa-rcsn (Moxa NPort 5600)

  1. Before setting up the simulator and the navproc for the simulator, I re-configured the Moxa for the new setup. The manual for the Moxa is located here.
  2. Since this was an older Moxa, I did a factory reset by holding the reset button for 5 seconds until the LED stopped blinking, which loads the factory defaults.
  3. I downloaded the NPort Administrator Suite to a separate Windows 11 machine and then downloaded the latest (v3.11) firmware ROM from the Moxa support site.
  4. I installed the NPort Administrator Suite on the Windows machine and ran it.
  5. I clicked on Search and it found the list of all Moxa NPorts that it could see.
  6. I double-clicked on the Moxa that I had just reset to bring up the settings.
  7. Note that it already had the most recent version of the firmware, so I did not update it.
  8. I changed the name of the Moxa to sim-moxa-rcsn.
  9. I changed the timezone to Pacific, set the time and date, and set the NTP server to time-sh1.shore.mbari.org.
  10. I configured the network by setting the netmask to 255.255.254.0 and the gateway to 134.89.10.247, selecting DHCP, and setting DNS Server 1 to 134.89.10.10 and DNS Server 2 to 134.89.12.87.
  11. I applied the changes, which restarted the Moxa.
  12. You can find the NPort web page here.

Warning

I have noticed that sometimes, even though the Moxa is working, the web page will not respond. I have to power off and on the Moxa for the web page to become responsive again.

  1. Using the web interface, I changed the password to match the same ops password we use for other navproc machines (note the username is still 'admin').
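    To confirm the Moxa is reachable after these changes, a quick check from another machine on the network can look something like this (the IP is the one configured above):

        ping -c 3 134.89.10.247
        curl -I http://134.89.10.247/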

sim-ts-rcsn (Digi TS 16)

Next in line is a Digi TS 16 terminal server. The serial cables from the Moxa are routed into the terminal server, whose ports are then virtually mounted on the computer running the navproc and logr code.

The web page for the Digi TS 16 terminal server can be found here; ask Kevin Gomes for login credentials if you need them.

Some support documents:

  1. Quick Start Guide
  2. User Guide

Installation Steps:

  1. Before starting, I wanted to do a factory reset to clear any settings. I unplugged the power plug, held the reset button, and plugged the power back in. I held the reset button for about 30 seconds until the LED on the back started blinking in a 1-5-1 pattern, then released it.
  2. I downloaded the Digi Discovery Tool to my Windows machine and ran it.
  3. I could see the Digi had been assigned an IP address of 134.89.11.137 (identified by its MAC address), so I double-clicked on it to open the settings and it opened the web page interface. I logged in using the default user of root with password dbps.
  4. On the Network settings panel, I changed the IP address from DHCP to static (IS provided) and set it to 134.89.10.100 and then clicked on Apply.
  5. It restarted the Digi and redirected my browser to here and I logged in again.
  6. Under the Network->Advanced Network Settings, I changed the Host Name to sim-ts-rcsn and the DNS servers to 134.89.10.10 and 134.89.12.87 and then clicked on Apply.
  7. I then went to Reboot and rebooted the terminal server.
  8. I downloaded the latest firmware from the Digi site (820014374_U.bin) and then installed it from the web page and then rebooted it.
  9. Under Users->root, I changed the password to match the ops password we use.
  10. Under System, I set the Device Identity Settings for Description and Contact.
  11. Under System, I set the Date and Time to use a UTC offset of -08 hours and set the first time source to time-sh1.shore.mbari.org.
  12. I rebooted again, but the time and date did not update, so I set them manually.
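    To sanity-check the terminal server on its new static address, something like the following can be used; note that TCP 771 is assumed here as the default Digi RealPort service port:

        ping -c 3 134.89.10.100
        nc -zv 134.89.10.100 771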

sim-rcsn

  1. Submitted a help ticket to create a VM for the signal simulator. This should just be a vanilla Ubuntu 24 with Docker installed. The hostname of the machine will be sim-rcsn.shore.mbari.org. Here is the ticket text:

     - Request_Submitted_By: kgomes@mbari.org
     - VM_Name: sim-rcsn
     - VM_Purpose: Hey Peter, this is the next VM for the simulation setup. It should be configured exactly like the sim-rcsn-old VM. It will be the signal generator for the new version of navproc we will be deploying on the Carson
     - VM_Expiration: 5 years
     - VM_Support: IS_Supported
     - VM_Support_Alt_Adm:
     - VM_OS: Ubuntu 24.04 LTS
     - CPU: 2
     - CPU_Addl_Reason:
     - RAM: 4
     - RAM_Addl_Reason:
     - GPU_Extra:
     - GPU: NO
     - Disk_Extra:
     - Network: Internal (SHORE)
     - Resource_Priority: Low
     - Resource_Priority_Reason:
     - Conf_Firewall: None that I can think of.
     - Conf_Desktop: YES
     - Conf_Desktop_Extra:
     - Conf_Logins: local
     - Conf_Docker_Extra:
     - Conf_Docker: NO
     - Conf_WebServer: none
     - Conf_sudo: sudo permissions should be on the kgomes and ops accounts.
     - Conf_vCenter_Access: No need
     - VM_Comments: This should be set up exactly like the VM for sim-rcsn-old except I don't need a mccann account. Just 'kgomes' and 'ops' accounts. Thanks!

  1. Peter finished the VM installation and gave the `ops` account the default password.
    
  2. I first needed to ssh into the VM so the system would prompt for a new password, by running ssh ops@sim-rcsn.shore.mbari.org

  3. I changed the password to the normal ops password.
  4. I then brought up the Windows.app on my Mac and created a connection to this VM. It opened right to the initial setup of the Ubuntu installation.
  5. I mostly accepted the defaults for the options in the setup.
  6. Note the default UI through the remote desktop is different from the normal one.
  7. I ran the software updater to get everything up to date.
  8. I opened up the Settings application and under the Power settings I set Screen Blank to Never.
  9. I then opened a terminal and ran sudo apt-get install build-essential cmake python3-pip python3-venv -y
  10. Installing the Moxa virtual serial ports was a little different. The default kernel 6 drivers from the Moxa website did not work, so I reached out to support and they sent me the attached driver file, which I downloaded to the Downloads folder.

            # Unpack the Moxa NPort Real TTY driver that support provided
            tar -xvf npreal2_v6.0.1_build_24051001.tgz
            cd moxa
            # Build and install the npreal2 kernel driver
            sudo ./mxinst
            # Map 16 virtual serial ports to the Moxa at 134.89.10.247
            cd /usr/lib/npreal2/driver
            sudo ./mxaddsvr 134.89.10.247 16
    
  11. This creates the virtual serial ports at /dev/ttyr00->0f.
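      To confirm the driver is loaded and the device nodes exist (for example after a reboot), a quick check along these lines helps; npreal2 is the module name implied by the driver tarball above:

            lsmod | grep npreal2
            ls -l /dev/ttyr*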

  12. The code for the Python simulator is located in BitBucket here. It was checked out to the /opt directory by changing to /opt and running sudo git clone https://kgomes@bitbucket.org/mbari/corenav-simulators.git (I used my dev-checkout app password).
  13. I then ran sudo chown -R ops:ops corenav-simulators to change ownership over to the ops account.
  14. I then cd'd into corenav-simulators and ran mkdir logs to create a directory where the log files will go.
  15. Next, I needed to grab some log files from the Carson so I could replay them. I cd'd into the /opt/corenav-simulators/data directory and ran:

    mkdir log-files
    cd log-files
    mkdir carson
    cd carson
    scp ops@rcnavproc1.rc.mbari.org:/home/ops/corelogging/rc/data/transfercomplete/2024339* .
    scp ops@rcnavproc1.rc.mbari.org:/home/ops/corelogging/rc/data/transfercomplete/2024340* .
    scp ops@rcnavproc1.rc.mbari.org:/home/ops/corelogging/rc/data/transfercomplete/2024341* .
    gunzip *.gz
    
  16. Then in /opt/corenav-simulators, I edited the simulator_config.json file to look like the entries below.

    {
            "name": "Rachel Carson Data Simulators",
            "version": "0.1",
            "description": "Simulator for data that should be coming from the Rachel Carson",
            "author": "Kevin Gomes",
            "logging-level": "DEBUG",
            "logging-format": "%(asctime)s: %(message)s",
            "logging-datefmt": "%H:%M:%S",
            "try-to-send-over-serial": true,
            "simulators": [
                    {
                            "name": "logr_simulator",
                            "config": {
                                            "type": "logr-file-reader",
                                            "log-dir": "./data/log-files/carson/",
                                            "file-mapping": {
                                            "2024340csprawfulllogr.dat": {
                                                    "port": "/dev/ttyr09",
                                                    "baudrate": 9600,
                                                    "parity": "N",
                                                    "stopbits": 1,
                                                    "bytesize": 8
                                            },
                                            "2024340gtdprologr.dat": {
                                                    "port": "/dev/ttyr06",
                                                    "baudrate": 9600,
                                                    "parity": "N",
                                                    "stopbits": 1,
                                                    "bytesize": 8
                                            },
                                            "2024340lodestarlogr.dat": {
                                                    "port": "/dev/ttyr05",
                                                    "baudrate": 9600,
                                                    "parity": "N",
                                                    "stopbits": 1,
                                                    "bytesize": 8
                                            },
                                            "2024340nav4dlogr.dat": {
                                                    "port": "/dev/ttyr0a",
                                                    "baudrate": 9600,
                                                    "parity": "N",
                                                    "stopbits": 1,
                                                    "bytesize": 8
                                            },
                                            "2024340nmeafulllogr.dat": {
                                                    "port": "/dev/ttyr03",
                                                    "baudrate": 9600,
                                                    "parity": "N",
                                                    "stopbits": 1,
                                                    "bytesize": 8
                                            },
                                            "2024340seabirdctdfulllogr.dat": {
                                                    "port": "/dev/ttyr04",
                                                    "baudrate": 9600,
                                                    "parity": "N",
                                                    "stopbits": 1,
                                                    "bytesize": 8
                                            },
                                            "2024340shipgyrofulllogr.dat": {
                                                    "port": "/dev/ttyr02",
                                                    "baudrate": 4800,
                                                    "parity": "N",
                                                    "stopbits": 1,
                                                    "bytesize": 8
                                            },
                                            "2024340uhsmsgfulllogr.dat": {
                                                    "port": "/dev/ttyr07",
                                                    "baudrate": 9600,
                                                    "parity": "N",
                                                    "stopbits": 1,
                                                    "bytesize": 8
                                            }
                                    }
                            }
                    }
            ]
    }
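
    Before wiring the simulator into a service, a quick syntax check on this file catches JSON typos; this is just standard Python tooling, nothing specific to the simulator:

        cd /opt/corenav-simulators
        python3 -m json.tool simulator_config.json > /dev/null && echo "config OK"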
    
  17. To test this, I ran ./simulator.sh from the /opt/corenav-simulators directory. Once I verified data was being generated properly, I killed the python process.
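    A couple of generic checks are handy while testing; this assumes the replay processes write into the logs directory created earlier:

        # confirm the python replay processes are running
        pgrep -af simulator
        # watch whatever the simulator writes into its logs directory
        tail -f /opt/corenav-simulators/logs/*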

  18. Now to get this to run as a service, I created a service startup file /etc/systemd/system/corenav-simulators.service which looks like:

    [Unit]
    Description=Python scripts to simulate data for corenav
    After=network.target
    
    [Service]
    Type=forking
    ExecStartPre=/bin/sleep 30
    ExecStart=/opt/corenav-simulators/simulator.sh
    Restart=always
    
    [Install]
    WantedBy=default.target
    
  19. The service can then be enabled by running the following:

    sudo systemctl daemon-reload
    sudo systemctl enable corenav-simulators.service
    sudo systemctl start corenav-simulators.service
    
  20. I then rebooted the machine to make sure the simulators started properly
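    After the reboot, the service state and its output can be checked with the usual systemd tools:

        systemctl status corenav-simulators.service
        journalctl -u corenav-simulators.service -f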

Warning

Once, after doing a standard upgrade of the Ubuntu installation, the serial ports were no longer recognized. I had to uninstall the driver by running sudo ./mxuninst from the ~/Downloads/moxa/ directory and then run sudo ./mxinst again. After that, run cd /usr/lib/npreal2/driver and then sudo ./mxaddsvr 134.89.10.247 16 to re-add the virtual ports.

coredata-rcsn-sim

This is the main server for the Navproc API (and other applications). It is a server in the VM cluster that acts as an API server for the data coming from the navproc simulator for the Carson. The API consists of a ZMQ proxy, a NATS server, a TimescaleDB server, and a Grafana server. In addition, there are several custom applications that serve different functions (described below). Here is a basic diagram of the API services and data flow.

```mermaid
---
title: Navproc API
---
flowchart LR
    subgraph Rachel Carson Navproc API Server Simulator
        direction LR
        subgraph navproc
            navproc-process-1
            navproc-process-2
            logr-process-1
            logr-process-2
            lcm-bridge
            subgraph logger files
                navproc-process-1-log-file[Logger File 1]
                navproc-process-2-log-file[Logger File 2]
            end
        end
        subgraph navproc-api
            port5555(5555)
            zmq-proxy[ZMQ Proxy
            Python]
            port5556(5556)
            port4222in(4222)
            subgraph nats-cluster
                nats-server-1
                nats-server-2
                nats-server-3
            end
            port4222out(4222)
            port8222html(8222-http)
            port8080ws(8080-web-sockets)
            nats-udp-proxy[NATS UDP Proxy
            Python]
            port54003(54003)
            port54005(54005)
            port54007(54007)
            port54009(54009)
            port54010(54010
            DiskUsage)
            port54017(54017)
            port54019(54019)
            grafana
            nats-to-tsdb[NATS to TimescaleDB
            Python]
            port5432in(5432)
            timescaledb[TimescaleDB]
            position-to-odss[Position To ODSS
            Python]
            pull-logr-files[Pull Logr File Process]
            subgraph "/data/logr/YYYY"
                navproc-process-1-log-file-gzip[Logger File 1 GZip]
                navproc-process-2-log-file-gzip[Logger File 2 GZip]
            end
        end
    end
    subgraph Shore
        messaging[AMQP Server
        messaging.shore.mbari.org]
        subgraph coredata8
            navprocessing-directory["~/shipandrov/navprocessing/rcsn-sim/todo/YYYYDDDshipnavlogr.dat.gz"]
            mbari-directory["/mbari/ShipData/logger/YYYY/rcsn-sim/YYYYDDDDxxxxxlogr.dat.gz"]
        end
        subgraph atlas
            ship-data-share["ShipData://logger/YYYY/rcsn-sim/YYYYDDDxxxxxlogr.dat.gz"]
        end
    end
    navproc-process-1 --> logr-process-1
    navproc-process-1 --> lcm-bridge
    logr-process-1 --> lcm-bridge
    logr-process-1 --> navproc-process-1-log-file
    navproc-process-2 --> logr-process-2
    navproc-process-2 --> lcm-bridge
    logr-process-2 --> lcm-bridge
    logr-process-2 --> navproc-process-2-log-file
    lcm-bridge --> port5555 --> zmq-proxy --> port5556
    lcm-bridge --> port4222in --> nats-cluster --> port4222out
    nats-cluster --> port8222html
    nats-cluster --> port8080ws
    port4222out --> nats-udp-proxy
    timescaledb --> grafana
    nats-udp-proxy --> port54003
    nats-udp-proxy --> port54005
    nats-udp-proxy --> port54007
    nats-udp-proxy --> port54009
    nats-udp-proxy --> port54010
    nats-udp-proxy --> port54017
    nats-udp-proxy --> port54019
    port4222out --> position-to-odss --> messaging
    port4222out --> nats-to-tsdb --> port5432in --> timescaledb
    navproc-process-1-log-file --> pull-logr-files --> navproc-process-1-log-file-gzip --> mbari-directory --> ship-data-share
    navproc-process-2-log-file --> pull-logr-files --> navproc-process-2-log-file-gzip --> mbari-directory --> ship-data-share
    navproc-process-1-log-file-gzip --> navprocessing-directory
```
  1. Next, it was time to set up the API server so the navproc system can send data over the LCM bridge.
  2. I submitted a ticket to IS to create a new VM; here is the original request:

      - Request_Submitted_By: kgomes@mbari.org
      - VM_Name: coredata-rcsn-sim
      - VM_Purpose: This will be the API server for the Rachel Carson simulator
      - VM_Expiration: 5 years
      - VM_Support: IS_Supported
      - VM_Support_Alt_Adm: 
      - VM_OS: Ubuntu 24.04 LTS
      - CPU: 2
      - CPU_Addl_Reason: 
      - RAM: 4
      - RAM_Addl_Reason: 
      - GPU_Extra: 
      - GPU: NO
      - Disk_Extra: 
      - Network: Internal (SHORE)
      - Resource_Priority: Low
      - Resource_Priority_Reason: 
      - Conf_Firewall: The ports allowed should be 54000->54050; 80/443; 6379; 5556; 5555; 8086; 8094; 1883; 9001; 5432
      - Conf_Desktop: YES
      - Conf_Desktop_Extra: 
      - Conf_Logins: local
      - Conf_Docker: YES
      - Conf_Docker_Extra: 
      - Conf_WebServer: apache
      - Conf_sudo: There should be two accounts, 'kgomes' and 'ops' and they both should have sudo privs.
      - Conf_vCenter_Access: None
      - VM_Comments: Can this be named 'coredata-rcsn-sim'? This will be running a zeroMQ proxy, a Redis server, a Grafana server and an InfluxDB server and they will need proxy passes set up so they can be served over SSL through the web server. I'm open to using Nginx BTW if you can show me how to set up proxy passes. I'm not familiar with Nginx, but willing to learn. Also, can you setup HTTPS on this too? Thanks!
    
  3. I first needed to ssh into the VM so the system would prompt for a new password, by running ssh ops@coredata-rcsn-sim.shore.mbari.org

  4. I changed the password to the normal ops password.
  5. I then brought up the Windows.app on my Mac and created a connection to this VM. It opened right to the initial setup of the Ubuntu installation
  6. I accepted the defaults for the options in the setup.
  7. Note the default UI through the remote desktop is different from the normal one.
  8. I ran the software updater to get everything up to date.
  9. I opened up the Settings application and under the Power settings I set Screen Blank to Never.
  10. I wanted to be able to pull changes from the Git repo without having to use my password, so I set up ssh keys.

    1. The first step is to generate a key locally by running: ssh-keygen -t rsa -b 4096 -C "kgomes@mbari.org"
    2. I stored the key in /home/ops/.ssh/id_bitbucket and left the passphrase empty
    3. I then opened a browser and logged into BitBucket.
    4. Next to my picture, I clicked on the settings gear and then clicked on 'Personal BitBucket Settings'
    5. Then I clicked on 'SSH Keys'
    6. I clicked on the 'Add Key' button
    7. I named it 'For Navproc related Checkouts and Pulls'
    8. Back in the terminal window, I ran cat /home/ops/.ssh/id_bitbucket.pub and then copied and pasted that text into the 'SSH Public Key' part of the BitBucket web page.
    9. I selected 'No expiry' and then clicked on 'Add key'
    10. To tell ssh to use this key (since it's not the default id_rsa name), I edited the /home/ops/.ssh/config file and added this section

          Host bitbucket.org
            AddKeysToAgent yes
            IdentityFile ~/.ssh/id_bitbucket
      
    11. Now I can check out and pull things without a password.
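      A quick way to confirm the key is actually being used for BitBucket is the standard ssh test, which should report a successful authentication without prompting for a password:

          ssh -T git@bitbucket.org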

    12. I also needed to create an ssh keypair to allow some software on this API server to scp files from the navproc machine and push them to the coredata shore server.
    13. Like above, I generated another key locally by running: ssh-keygen -t rsa -b 4096 -C "kgomes@mbari.org"
    14. I stored the key in /home/ops/.ssh/id_navproc and left the passphrase empty
    15. I sent the public part of the key to Peter Walker who set up the coredata side.
    16. To tell ssh to use this key, I edited the /home/ops/.ssh/config file and added this section

          Host coredata8.shore.mbari.org navproc-rcsn-sim-nuc.shore.mbari.org
            AddKeysToAgent yes
            IdentityFile ~/.ssh/id_navproc
      
    17. I then needed to set that key up to work on the navproc machine by running ssh-copy-id ops@navproc-rcsn-sim-nuc.shore.mbari.org
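      To verify passwordless access with that key works end to end, a one-off test like this can be run from the API server:

          ssh -i ~/.ssh/id_navproc ops@navproc-rcsn-sim-nuc.shore.mbari.org hostname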

  11. Next, it was time to install the API stack. One small thing with this stack is that the data written by Docker into the TimescaleDB volume has to be owned by the user with UID 1000, so I had to pre-emptively create /data/timescaledb and change the owner to UID 1000.

    cd /
    sudo mkdir data
    sudo chown ops data
    cd data
    mkdir timescaledb
    mkdir timescaledb/data
    sudo chown -R 1000 timescaledb/
    cd /opt
    sudo mkdir corenav
    sudo chown ops corenav
    cd corenav
    git clone git@bitbucket.org:mbari/navproc-api.git
    cd navproc-api/
    cp .env.template .env
    
  12. One of the docker containers needs the ssh key to connect to the navproc and coredata machines, so I copied the key so the build would have it by running cp ~/.ssh/id_navproc /opt/corenav/navproc-api/pull-logr-files/ssh-key

  13. I then edited the .env file to set up all the usernames and passwords for the various connected services
  14. I then started the API stack by running docker compose up -d
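    A quick sanity check that all the containers came up (generic Docker Compose commands, run from /opt/corenav/navproc-api):

        docker compose ps
        docker compose logs --tail=50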
  15. Now, in order for things to be more straightforward, I wanted to set up proxy passes in Nginx so that we can map clean https URLs to the ports of the underlying services. The host is already running Nginx and we just need to set up the proxy passes. Peter had already configured an example proxy pass and it was located at /etc/nginx/sites-available/mbari-revproxy and looked like this:

    server {
        listen 80;
        server_name  coredata-rcsn-sim.shore.mbari.org;
        return 301   https://$host$request_uri;
    }
    
    server {
        listen 443 ssl;
        server_name                 coredata-rcsn-sim.shore.mbari.org;
        ssl_certificate             /etc/nginx/ssl/wcbundle.crt;
        ssl_certificate_key         /etc/nginx/ssl/wcprivate.key;
        ssl_prefer_server_ciphers   on;
    
        location / {
            proxy_pass  http://localhost:8080;
    
            proxy_set_header   Host $host;
            proxy_set_header   X-Real-IP $remote_addr;
            proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header   X-Forwarded-Proto $scheme;
        }
    }
    
  16. First, in order to configure Grafana to run through the reverse proxy, I added the following property to the .env file and restarted the Grafana container:

    GF_SERVER_ROOT_URL="https://coredata-rcsn-sim.shore.mbari.org/grafana/"
    

    which tells Grafana that it will be running proxied on a sub path.

  17. Then I added the following section to the mbari-revproxy file:

    location /grafana/ {
        proxy_pass  http://localhost:3000/;
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
    
        # Required for WebSocket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
    

    Note

    The trailing / on both the location and the proxy_pass is VERY important; it won't work without them.

  18. Next, I wanted to set up the reverse proxy for the NATS web page, so I added the following configuration:

    location /nats/ {
        rewrite ^/nats(/.*)$ $1 break;
        proxy_pass http://127.0.0.1:8222;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    
        proxy_http_version 1.1;
    }
    
  19. Lastly, I wanted to make all the log files available to download over https. I added the following location clause:

    location /logs/ {
        alias /data/logr/;
        autoindex on;
        autoindex_exact_size on;     # Optional: on shows exact sizes in bytes (off shows human-readable sizes)
        autoindex_localtime on;       # Optional: show local time
    }
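    Once the location blocks are in place, the configuration can be validated and each proxy spot-checked. The exact paths below are just examples: /varz is NATS's built-in monitoring endpoint and /login is Grafana's login page, while -k simply skips certificate verification for a quick test:

        sudo nginx -t && sudo systemctl reload nginx
        curl -kI https://coredata-rcsn-sim.shore.mbari.org/grafana/login
        curl -ks https://coredata-rcsn-sim.shore.mbari.org/nats/varz | head
        curl -kI https://coredata-rcsn-sim.shore.mbari.org/logs/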
    

Warning

I first tried to set up the Navproc computer on a VM. It seemed to work at first, but as time went on, it started wiping out all the serial ports on each reboot, throwing all kinds of hypervisor messages, and popping up configuration menus whenever I made even a simple change. It got too unstable and too out of step with what we were installing on the boat, so I went back to a physical NUC computer.

Next, it was time to set up the Navproc instance for the onshore simulator for the Carson.

  1. I set up the computer as was detailed in the Computer Setup Doc and named it navproc-rcsn-sim-nuc

After Patching

Warning

This is a very important note. While I reference individual post-patch steps in places above, I wanted one section that collects everything that has to be done after a patch is applied to the VM, because patching breaks a lot: it undoes the changes we made to get LCM to work and it blows away the virtual serial ports. Follow the instructions below to get everything working again after a patch.

Corenav Simulators

    # On the simulator VM: reinstall the Moxa driver and re-add the virtual ports
    ssh ops@sim-rcsn.shore.mbari.org
    cd Downloads/moxa/
    sudo ./mxuninst
    sudo ./mxinst
    cd /usr/lib/npreal2/driver
    sudo ./mxaddsvr 134.89.10.247 16
    sudo systemctl start corenav-simulators.service
    # On the navproc machine: re-enable multicast on loopback for LCM
    ssh ops@navproc-rcsn-sim.shore.mbari.org
    sudo ip link set lo multicast on
    sudo ip route add 224.0.0.0/4 dev lo
    sudo ifconfig lo multicast && sudo route add -net 224.0.0.0 netmask 240.0.0.0 dev lo
    # Rebuild and reinstall the Digi RealPort (dgrp) driver and re-add the TS16 ports
    cd ~/Downloads/dgrp-1.9/
    sudo ./configure
    sudo make all
    sudo make install
    sudo make postinstall
    sudo dgrp_cfg_node init -v -e none a 134.89.10.100 16
    cd /etc/udev/rules.d/
    sudo vi 10-dgrp.rules

Edit the 10-dgrp.rules file to look like this

    KERNEL=="tty_dgrp*", PROGRAM="/usr/bin/dgrp_udev %k", SYMLINK+="%c", TAG="systemd", GROUP="dialout", MODE="0666”, OPTIONS="last_rule"
    #KERNEL=="cu_dgrp*", PROGRAM="/usr/bin/dgrp_udev %k", SYMLINK+="%c", TAG="systemd"
    #KERNEL=="pr_dgrp*", PROGRAM="/usr/bin/dgrp_udev %k", SYMLINK+="%c", TAG="systemd"