Rachel Carson Simulator
This page documents the Navproc and Logging system on the Rachel Carson Simulator.
Overview
First, let's look at a high-level diagram of the various components. There are five main components in the simulator:
- An Ubuntu 24 virtual machine named `sim-rcsn.shore.mbari.org` that runs simulation software generating mock signals and pushing them out serial ports on a Moxa serial server. The simulators are basically just reading old log files from the Carson and pushing the records out the serial ports as time ticks by.
- A Moxa NPort 16-port RS-232 serial device server that is connected to the VM mentioned above via Linux kernel drivers.
- A Digi Connect TS 16 that takes in physical connections from the outputs of the Moxa NPort.
- An Ubuntu 22 NUC that is running the current version of the navproc/logr software, with the TS 16 ports mounted virtually as /dev/tty devices.
- An Ubuntu 24 virtual machine that is running the data distribution side of things (Redis, InfluxDB, ZeroMQ, etc.).
```mermaid
flowchart LR
    sim-rcsn["Simulator VM
    sim-rcsn.shore.mbari.org"] --> simulator-rcsn-moxa["Simulator Moxa
    134.89.10.247
    simulator-moxa-rcsn.shore.mbari.org"] --> simulator-rcsn-ts["Digi TS16
    134.89.10.100
    simulator-ts-rcsn"] --> navproc-rcsn-sim-nuc["Navproc NUC
    navproc-rcsn-sim-nuc.shore.mbari.org"] --> coredata-rcsn-sim["Simulator Coredata
    coredata-rcsn-sim.shore.mbari.org"]
```
Below is a picture of the Moxa and Digi terminal servers, which are currently located in Kevin Gomes' office.

The diagram below shows the current signal connections and their associated navproc processes.
Installation and Setup
sim-moxa-rcsn (Moxa NPort 5600)
- Before setting up the simulator and its navproc, I re-configured the Moxa for the new setup. The manual for the Moxa is located here.
- Since this was an older Moxa, I did a factory reset by holding the reset button for 5 seconds until the LED stopped blinking, which loads the factory defaults.
- I downloaded the NPort Administrator Suite to a separate Windows 11 machine and then downloaded the latest (v3.11) firmware ROM from the Moxa support site.
- I installed the NPort Administrator Suite on the Windows machine and ran it.
- I clicked on `Search` and it found the list of all the Moxa NPorts it could see.
- I double-clicked on the Moxa that I had just reset to bring up its settings.
- Note that it already had the most recent version of the firmware, so I did not update it.
- I changed the name of the Moxa to `sim-moxa-rcsn`.
- I changed the timezone to Pacific, set the time and date, and set the NTP server to `time-sh1.shore.mbari.org`.
- I configured the network by setting the netmask to 255.255.254.0 and the gateway to 134.89.10.247, selected DHCP, and set DNS Server 1 to 134.89.10.10 and DNS Server 2 to 134.89.12.87.
- I applied the changes, which restarted the Moxa.
- The NPort web page can be found here.
Warning
I have noticed that sometimes, even though the Moxa is working, the web page will not respond. I have to power the Moxa off and on for the web page to become responsive again. A small reachability check is sketched at the end of this section.
- Using the web interface, I changed the password to match the same `ops` password we use for other navproc machines (note the username is still 'admin').
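Related to the warning above: a quick way to tell whether the whole Moxa is down or just its web page is wedged is to check whether the web port still accepts TCP connections. Below is a minimal sketch using only the Python standard library; the hostname and IP come from the diagram at the top of this page, so adjust them if your setup differs.

```python
#!/usr/bin/env python3
"""Quick reachability check for the Moxa NPort web interface.

A convenience sketch only; the hostname below is taken from the overview
diagram on this page and may need adjusting.
"""
import socket
import sys

MOXA_HOST = "simulator-moxa-rcsn.shore.mbari.org"  # or 134.89.10.247
WEB_PORT = 80  # the NPort web interface
TIMEOUT_SECS = 5.0


def port_is_open(host: str, port: int, timeout: float = TIMEOUT_SECS) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    if port_is_open(MOXA_HOST, WEB_PORT):
        print(f"{MOXA_HOST}:{WEB_PORT} is accepting connections")
        sys.exit(0)
    print(f"{MOXA_HOST}:{WEB_PORT} did not respond; the Moxa may need a power cycle")
    sys.exit(1)
```

If the port accepts connections but the page still will not load in a browser, that matches the behavior described in the warning above and a power cycle has been the fix.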
sim-ts-rcsn (Digi TS 16)
Next in line is a Digi TS 16 terminal server. The serial cables from the Moxa are routed to the terminal server, whose ports are then virtually mounted on the computer running the navproc and logr code.
The web page for the Digi TS 16 terminal server can be found here; ask Kevin Gomes for login credentials if you need them.
Some support documents:
Installation Steps:
- Before starting, I wanted to do a factory reset to clear any old settings. I unplugged the power, held the reset button, and plugged the power back in. I kept holding the reset button for about 30 seconds until the LED on the back started blinking in a 1-5-1 pattern, then released it.
- I downloaded the Digi Discovery Tool to my Windows machine and ran it.
- I could see the Digi had been assigned an IP address of 134.89.11.137 (identified by its MAC address), so I double-clicked on it to open the settings and it brought up the web page interface. I logged in using the default user of `root` with password `dbps`.
- On the Network settings panel, I changed the IP address from DHCP to static (IS provided), set it to `134.89.10.100`, and then clicked on Apply.
- That restarted the Digi and redirected my browser to here, and I logged in again.
- Under Network->Advanced Network Settings, I changed the `Host Name` to `sim-ts-rcsn`, set the DNS servers to `134.89.10.10` and `134.89.12.87`, and then clicked on Apply.
- I then went to `Reboot` and rebooted the terminal server.
- I downloaded the latest firmware from the Digi site (820014374_U.bin), installed it from the web page, and then rebooted it.
- Under Users->root, I changed the password to match the `ops` password we use.
- Under System, I set the Device Identity Settings for Description and Contact.
- Under System, I set the Date and Time to use a UTC offset of -08 hours and set the first time source to `time-sh1.shore.mbari.org`.
- I rebooted again, but the time and date did not update, so I set them manually.
sim-rcsn
- Submitted a help ticket to create a VM for the signal simulator. This should just be a vanilla Ubuntu 24 install. The hostname of the machine will be `sim-rcsn.shore.mbari.org`. Here is the ticket text:
    - Request_Submitted_By: kgomes@mbari.org
    - VM_Name: sim-rcsn
    - VM_Purpose: Hey Peter, this is the next VM for the simulation setup. It should be configured exactly like the sim-rcsn-old VM. It will be the signal generator for the new version of navproc we will be deploying on the Carson
    - VM_Expiration: 5 years
    - VM_Support: IS_Supported
    - VM_Support_Alt_Adm:
    - VM_OS: Ubuntu 24.04 LTS
    - CPU: 2
    - CPU_Addl_Reason:
    - RAM: 4
    - RAM_Addl_Reason:
    - GPU_Extra:
    - GPU: NO
    - Disk_Extra:
    - Network: Internal (SHORE)
    - Resource_Priority: Low
    - Resource_Priority_Reason:
    - Conf_Firewall: None that I can think of.
    - Conf_Desktop: YES
    - Conf_Desktop_Extra:
    - Conf_Logins: local
    - Conf_Docker_Extra:
    - Conf_Docker: NO
    - Conf_WebServer: none
    - Conf_sudo: sudo permissions should be on the kgomes and ops accounts.
    - Conf_vCenter_Access: No need
    - VM_Comments: This should be set up exactly like the VM for sim-rcsn-old except I don't need a mccann account. Just 'kgomes' and 'ops' accounts. Thanks!
- Peter finished the VM installation and gave the `ops` account the default password.
- I needed to ssh into the VM first (`ssh ops@sim-rcsn.shore.mbari.org`) so that the system would prompt for a new password.
- I changed the password to the normal ops password.
- I then brought up the Windows App on my Mac and created a connection to this VM. It opened right into the initial setup of the Ubuntu installation.
- I mostly accepted the defaults for the options in the setup.
- Note the default UI through the remote desktop is different from the normal one.
- I ran the software updater to get everything up to date.
- I opened up the `Settings` application and, under the Power settings, set Screen Blank to Never.
- I then opened a terminal and ran `sudo apt-get install build-essential cmake python3-pip python3-venv -y`.
- Installing the Moxa virtual serial ports was a little different. The default kernel 6 drivers from the Moxa website did not work, so I reached out to support, who sent me the attached driver file, which I downloaded to the `Downloads` folder and installed:

    ```bash
    tar -xvf npreal2_v6.0.1_build_24051001.tgz
    cd moxa
    sudo ./mxinst
    cd /usr/lib/npreal2/driver
    sudo ./mxaddsvr 134.89.10.247 16
    ```

- This creates the virtual serial ports at /dev/ttyr00 through /dev/ttyr0f.
- The code for the Python simulator is located in BitBucket here. I checked it out to the `/opt` directory by changing to `/opt` and running `sudo git clone https://kgomes@bitbucket.org/mbari/corenav-simulators.git` (I used my `dev-checkout` app password).
- I then ran `sudo chown -R ops:ops corenav-simulators` to change ownership over to the ops account.
- I then ran `cd corenav-simulators` and `mkdir logs` to create a directory where the log files will go.
- Next, I needed to grab some log files from the Carson so I could replay them. I cd'd into the `/opt/corenav-simulators/data` directory and ran:

    ```bash
    mkdir log-files
    cd log-files
    mkdir carson
    cd carson
    scp ops@rcnavproc1.rc.mbari.org:/home/ops/corelogging/rc/data/transfercomplete/2024339* .
    scp ops@rcnavproc1.rc.mbari.org:/home/ops/corelogging/rc/data/transfercomplete/2024340* .
    scp ops@rcnavproc1.rc.mbari.org:/home/ops/corelogging/rc/data/transfercomplete/2024341* .
    gunzip *.gz
    ```
- Then, in `/opt/corenav-simulators`, I edited the simulator_config.json file to look like the entries below (a simplified sketch of how these file readers replay data over the serial ports is shown just after this list).

    ```json
    {
      "name": "Rachel Carson Data Simulators",
      "version": "0.1",
      "description": "Simulator for data that should be coming from the Rachel Carson",
      "author": "Kevin Gomes",
      "logging-level": "DEBUG",
      "logging-format": "%(asctime)s: %(message)s",
      "logging-datefmt": "%H:%M:%S",
      "try-to-send-over-serial": true,
      "simulators": [
        {
          "name": "logr_simulator",
          "config": {
            "type": "logr-file-reader",
            "log-dir": "./data/log-files/carson/",
            "file-mapping": {
              "2024340csprawfulllogr.dat": {
                "port": "/dev/ttyr09",
                "baudrate": 9600,
                "parity": "N",
                "stopbits": 1,
                "bytesize": 8
              },
              "2024340gtdprologr.dat": {
                "port": "/dev/ttyr06",
                "baudrate": 9600,
                "parity": "N",
                "stopbits": 1,
                "bytesize": 8
              },
              "2024340lodestarlogr.dat": {
                "port": "/dev/ttyr05",
                "baudrate": 9600,
                "parity": "N",
                "stopbits": 1,
                "bytesize": 8
              },
              "2024340nav4dlogr.dat": {
                "port": "/dev/ttyr0a",
                "baudrate": 9600,
                "parity": "N",
                "stopbits": 1,
                "bytesize": 8
              },
              "2024340nmeafulllogr.dat": {
                "port": "/dev/ttyr03",
                "baudrate": 9600,
                "parity": "N",
                "stopbits": 1,
                "bytesize": 8
              },
              "2024340seabirdctdfulllogr.dat": {
                "port": "/dev/ttyr04",
                "baudrate": 9600,
                "parity": "N",
                "stopbits": 1,
                "bytesize": 8
              },
              "2024340shipgyrofulllogr.dat": {
                "port": "/dev/ttyr02",
                "baudrate": 4800,
                "parity": "N",
                "stopbits": 1,
                "bytesize": 8
              },
              "2024340uhsmsgfulllogr.dat": {
                "port": "/dev/ttyr07",
                "baudrate": 9600,
                "parity": "N",
                "stopbits": 1,
                "bytesize": 8
              }
            }
          }
        }
      ]
    }
    ```
- Just to test this, I ran `./simulator.sh` in the `/opt/corenav-simulators` directory. Once I verified data was being generated properly, I killed the Python process.
- Now, to get this to run as a service, I created a service startup file, `/etc/systemd/system/corenav-simulators.service`, which looks like:

    ```ini
    [Unit]
    Description=Python scripts to simulate data for corenav
    After=network.target

    [Service]
    Type=forking
    ExecStartPre=/bin/sleep 30
    ExecStart=/opt/corenav-simulators/simulator.sh
    Restart=always

    [Install]
    WantedBy=default.target
    ```
- The service can then be enabled and started by running the following:

    ```bash
    sudo systemctl daemon-reload
    sudo systemctl enable corenav-simulators.service
    sudo systemctl start corenav-simulators.service
    ```

- I then rebooted the machine to make sure the simulators started properly.
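For reference, the sketch below is a rough illustration of what the logr-file-reader simulators configured above are doing: open each mapped serial port with the configured settings and write the records from the corresponding logr file out that port. This is not the actual corenav-simulators code (that lives in the BitBucket repo checked out above); it assumes the pyserial package is installed and, as a simplification, paces the output at a fixed interval instead of using the timestamps in the files.

```python
#!/usr/bin/env python3
"""Simplified illustration of a logr-file-reader style simulator.

NOT the actual corenav-simulators code; just a sketch that replays one logr
file out one Moxa virtual serial port at a fixed rate. Assumes pyserial is
installed (pip install pyserial).
"""
import time

import serial  # pyserial

# One entry from the file-mapping in simulator_config.json above
LOG_FILE = "./data/log-files/carson/2024340nmeafulllogr.dat"
PORT = "/dev/ttyr03"
BAUDRATE = 9600
SEND_INTERVAL_SECS = 1.0  # simplification: fixed pacing, not file timestamps


def replay(log_file: str, port: str, baudrate: int) -> None:
    """Write each record of the logr file out the serial port, one per interval."""
    with serial.Serial(port=port, baudrate=baudrate, bytesize=8,
                       parity=serial.PARITY_NONE, stopbits=1) as ser, \
         open(log_file, "rb") as f:
        for line in f:
            ser.write(line)
            time.sleep(SEND_INTERVAL_SECS)


if __name__ == "__main__":
    replay(LOG_FILE, PORT, BAUDRATE)
```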
Warning
Once, after doing a standard upgrade of the Ubuntu installation, the serial ports stopped being recognized. I had to uninstall the driver by running `sudo ./mxuninst` from the `~/Downloads/moxa/` directory, run `sudo ./mxinst` again, then `cd /usr/lib/npreal2/driver` and run `sudo ./mxaddsvr 134.89.10.247 16` to re-add the virtual ports. A quick check that the virtual ports came back is sketched below.
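After installing (or re-installing) the Moxa driver, it is worth confirming that all 16 virtual ports actually came back before starting the simulators. A minimal check, assuming the ports are named /dev/ttyr00 through /dev/ttyr0f as created by mxaddsvr above:

```python
#!/usr/bin/env python3
"""Check that the Moxa virtual serial ports exist after a driver (re)install.

Assumes the 16 ports are /dev/ttyr00 through /dev/ttyr0f, as created by the
mxaddsvr command above.
"""
import os

EXPECTED_PORTS = [f"/dev/ttyr{i:02x}" for i in range(16)]  # ttyr00 .. ttyr0f

missing = [port for port in EXPECTED_PORTS if not os.path.exists(port)]
if missing:
    print("Missing virtual serial ports:")
    for port in missing:
        print(f"  {port}")
else:
    print(f"All {len(EXPECTED_PORTS)} Moxa virtual ports are present")
```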
coredata-rcsn-sim
This is the main server for the Navproc API (and other applications). It is a VM in the cluster that acts as the API server for the data coming from the navproc simulator for the Carson. The API consists of a ZMQ proxy, a NATS server, a TimescaleDB server, and a Grafana server. In addition, there are several custom applications that serve different functions (described below). Here is a basic diagram of the API services and data flow.
In the diagram, the navproc and logr processes feed the LCM bridge, which publishes to the ZMQ proxy (ports 5555 in / 5556 out) and to a three-node NATS cluster (client port 4222, HTTP monitoring on 8222, WebSockets on 8080). Downstream of NATS, the NATS UDP Proxy (Python) re-broadcasts data on UDP ports 54003, 54005, 54007, 54009, 54010 (DiskUsage), 54017, and 54019; the Position To ODSS process (Python) forwards positions to the shore AMQP server (messaging.shore.mbari.org); and the NATS to TimescaleDB process (Python) writes to TimescaleDB (port 5432), which Grafana reads. The Pull Logr File process grabs the navproc/logr log files, gzips them under /data/logr/YYYY, and pushes them to coredata8 (/mbari/ShipData/logger/YYYY/rcsn-sim/) and on to the atlas ShipData share; the gzipped ship nav logr file also goes into ~/shipandrov/navprocessing/rcsn-sim/todo/ on coredata8.
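To sanity-check that data is actually flowing through the NATS side of this stack, something like the sketch below can be run from any machine that can reach port 4222. This is only a sketch: it assumes the nats-py client is installed (`pip install nats-py`), that the cluster does not require credentials, and, since the subject naming is not documented here, it subscribes to the `>` wildcard to print whatever is being published.

```python
#!/usr/bin/env python3
"""Sketch: print messages from the NATS cluster on coredata-rcsn-sim.

Assumptions: nats-py is installed, no credentials are required, and the '>'
wildcard is an acceptable way to watch traffic. Adjust the server URL and
subject once the real subject layout is known.
"""
import asyncio

import nats  # nats-py


async def main() -> None:
    nc = await nats.connect("nats://coredata-rcsn-sim.shore.mbari.org:4222")

    async def on_message(msg) -> None:
        # msg.subject is the NATS subject, msg.data is the raw payload bytes
        print(f"[{msg.subject}] {msg.data.decode(errors='replace')}")

    # '>' matches every subject; narrow this once you know the subject layout
    await nc.subscribe(">", cb=on_message)

    await asyncio.sleep(10)  # listen for ten seconds, then shut down cleanly
    await nc.drain()


if __name__ == "__main__":
    asyncio.run(main())
```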
- Next, it was time to set up the API server so the navproc system can send data over the LCM bridge.
- I submitted a ticket to IS to create a new VM; here is the original request:
    - Request_Submitted_By: kgomes@mbari.org
    - VM_Name: coredata-rcsn-sim
    - VM_Purpose: This will be the API server for the Rachel Carson simulator
    - VM_Expiration: 5 years
    - VM_Support: IS_Supported
    - VM_Support_Alt_Adm:
    - VM_OS: Ubuntu 24.04 LTS
    - CPU: 2
    - CPU_Addl_Reason:
    - RAM: 4
    - RAM_Addl_Reason:
    - GPU_Extra:
    - GPU: NO
    - Disk_Extra:
    - Network: Internal (SHORE)
    - Resource_Priority: Low
    - Resource_Priority_Reason:
    - Conf_Firewall: The ports allowed should be 54000->54050; 80/443; 6379; 5556; 5555; 8086; 8094; 1883; 9001; 5432
    - Conf_Desktop: YES
    - Conf_Desktop_Extra:
    - Conf_Logins: local
    - Conf_Docker: YES
    - Conf_Docker_Extra:
    - Conf_WebServer: apache
    - Conf_sudo: There should be two accounts, 'kgomes' and 'ops' and they both should have sudo privs.
    - Conf_vCenter_Access: None
    - VM_Comments: Can this be named 'coredata-rcsn-sim'? This will be running a zeroMQ proxy, a Redis server, a Grafana server and an InfluxDB server and they will need proxy passes set up so they can be served over SSL through the web server. I'm open to using Nginx BTW if you can show me how to set up proxy passes. I'm not familiar with Nginx, but willing to learn. Also, can you setup HTTPS on this too? Thanks!
- I needed to ssh into the VM first (`ssh ops@coredata-rcsn-sim.shore.mbari.org`) so that the system would prompt for a new password.
- I changed the password to the normal ops password.
- I then brought up the Windows App on my Mac and created a connection to this VM. It opened right into the initial setup of the Ubuntu installation.
- I accepted the defaults for the options in the setup.
- Note the default UI through the remote desktop is different from the normal one.
- I ran the software updater to get everything up to date.
- I opened up the `Settings` application and, under the Power settings, set Screen Blank to Never.
- I want to be able to pull changes from the Git repo without having to use my password, so I am going to use ssh keys.
    - The first step is to generate a key locally by running `ssh-keygen -t rsa -b 4096 -C "kgomes@mbari.org"`.
    - I stored the key in `/home/ops/.ssh/id_bitbucket` and left the passphrase empty.
    - I then opened a browser and logged into BitBucket.
    - Next to my picture, I clicked on the settings gear and then clicked on 'Personal BitBucket Settings'.
    - Then I clicked on 'SSH Keys'.
    - I clicked on the 'Add Key' button.
    - I named it 'For Navproc related Checkouts and Pulls'.
    - Back in the terminal window, I ran `cat /home/ops/.ssh/id_bitbucket.pub` and copied and pasted that text into the 'SSH Public Key' field of the BitBucket web page.
    - I selected 'No expiry' and then clicked on 'Add key'.
- To tell ssh to use this key (since it does not have the default id_rsa name), I edited the `/home/ops/.ssh/config` file and added this section:

    ```
    Host bitbucket.org
        AddKeysToAgent yes
        IdentityFile ~/.ssh/id_bitbucket
    ```

- Now I can check out and pull things without a password.
- I also needed to create an ssh keypair to allow some software on this API server to scp files from the navproc machine and push them to the coredata shore server.
    - Like above, I generated another key locally by running `ssh-keygen -t rsa -b 4096 -C "kgomes@mbari.org"`.
    - I stored the key in `/home/ops/.ssh/id_navproc` and left the passphrase empty.
    - I sent the public part of the key to Peter Walker, who set up the coredata side.
- To tell ssh to use this key, I edited the `/home/ops/.ssh/config` file and added this section:

    ```
    Host coredata8.shore.mbari.org navproc-rcsn-sim-nuc.shore.mbari.org
        AddKeysToAgent yes
        IdentityFile ~/.ssh/id_navproc
    ```

- I then needed to set that key up to work on the navproc machine by running `ssh-copy-id ops@navproc-rcsn-sim-nuc.shore.mbari.org`.
- Next, it was time to install the API stack. One wrinkle with this stack is that the data written by Docker into TimescaleDB has to be owned by a user with UID 1000, so I pre-emptively created /data/timescaledb and changed its owner to UID 1000:

    ```bash
    cd /
    sudo mkdir data
    sudo chown ops data
    cd data
    mkdir timescaledb
    mkdir timescaledb/data
    sudo chown -R 1000 timescaledb/
    cd /opt
    sudo mkdir corenav
    sudo chown ops corenav
    cd corenav
    git clone git@bitbucket.org:mbari/navproc-api.git
    cd navproc-api/
    cp .env.template .env
    ```
- One of the Docker containers needs the ssh key to connect to the navproc and coredata machines, so I copied the key where the build would find it by running `cp ~/.ssh/id_navproc /opt/corenav/navproc-api/pull-logr-files/ssh-key`.
- I then edited the .env file to set up all the usernames and passwords for the various connected services.
- I then started the API stack by running `docker compose up -d`.
- To make things more straightforward, I wanted to set up proxy passes in Nginx so we can map clean https URLs to the ports of the underlying services. The host is already running Nginx, so we just need to add the proxy passes. Peter had already configured an example proxy pass in `/etc/nginx/sites-available/mbari-revproxy`, which looked like this:

    ```nginx
    server {
        listen 80;
        server_name coredata-rcsn-sim.shore.mbari.org;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        server_name coredata-rcsn-sim.shore.mbari.org;

        ssl_certificate /etc/nginx/ssl/wcbundle.crt;
        ssl_certificate_key /etc/nginx/ssl/wcprivate.key;
        ssl_prefer_server_ciphers on;

        location / {
            proxy_pass http://localhost:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
    ```
- First, in order to configure Grafana to run through the reverse proxy, I added the following property to the .env file and restarted the grafana container: `GF_SERVER_ROOT_URL="https://coredata-rcsn-sim.shore.mbari.org/grafana/"`, which tells the Grafana app it will be running proxied on a sub path.
- Then I added the following section to the mbari-revproxy file:

    ```nginx
    location /grafana/ {
        proxy_pass http://localhost:3000/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Required for WebSocket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
    ```

    Note

    The trailing `/` on both the location and the proxy_pass is VERY important; it won't work without it.
- Next, I wanted to set up the reverse proxy for the NATS web page, so I added the following configuration:

    ```nginx
    location /nats/ {
        rewrite ^/nats(/.*)$ $1 break;
        proxy_pass http://127.0.0.1:8222;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
    }
    ```
- Lastly, I wanted to make all the log files available to download over https, so I added the following location clause (a quick check of all the proxied endpoints is sketched just after this list):

    ```nginx
    location /logs/ {
        alias /data/logr/;
        autoindex on;
        autoindex_exact_size on;  # Optional: show exact file sizes in bytes
        autoindex_localtime on;   # Optional: show local time
    }
    ```
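With the proxy passes in place, a quick way to confirm they all respond is to hit each proxied path over HTTPS. The sketch below assumes the requests package is installed (curl from the command line works just as well) and only checks the paths configured above.

```python
#!/usr/bin/env python3
"""Sketch: verify the Nginx reverse-proxied endpoints respond over HTTPS.

Assumes the requests package is installed; the paths are the ones configured
in mbari-revproxy above.
"""
import requests

BASE_URL = "https://coredata-rcsn-sim.shore.mbari.org"
PATHS = ["/grafana/", "/nats/", "/logs/"]

for path in PATHS:
    url = BASE_URL + path
    try:
        resp = requests.get(url, timeout=10)
        print(f"{url} -> HTTP {resp.status_code}")
    except requests.RequestException as exc:
        print(f"{url} -> FAILED ({exc})")
```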
navproc-rcsn-sim-nuc
Warning
I first tried to set up the Navproc computer on a VM. It seemed to work at first, but as time went on it started wiping out all the serial ports on each reboot, throwing all kinds of hypervisor messages, and popping up configuration menus when I made even simple changes. It got too unstable and too out-of-step with what we were installing on the boat, so I went back to a physical NUC computer.
Next, it was time to set up the Navproc instance for the onshore simulator for the Carson.
- I set up the computer as detailed in the Computer Setup Doc and named it `navproc-rcsn-sim-nuc`.
After Patching
Warning
This is a very important note. While I reference the individual post-patch steps in places above, I wanted one section that collects everything that has to be done after a patch is applied, because patching breaks a lot of stuff: it undoes the changes we made to get LCM to work and it blows away the virtual serial ports. Follow the instructions below to get everything working again after a patch.
Corenav Simulators
```bash
ssh ops@sim-rcsn.shore.mbari.org
cd Downloads/moxa/
sudo ./mxuninst
sudo ./mxinst
sudo ./mxaddsvr 134.89.10.247 16
sudo systemctl start corenav-simulators.service
```
Navproc
```bash
ssh ops@navproc-rcsn-sim-nuc.shore.mbari.org
sudo ip link set lo multicast on
sudo ip route add 224.0.0.0/4 dev lo
sudo ifconfig lo multicast & sudo route add -net 224.0.0.0 netmask 240.0.0.0 dev lo
cd ~/Downloads/dgrp-1.9/
sudo ./configure
sudo make all
sudo make install
sudo make postinstall
sudo dgrp_cfg_node init -v -e none a 134.89.10.100 16
cd /etc/udev/rules.d/
sudo vi 10-dgrp.rules
```
Edit the 10-dgrp.rules file to look like this:

```
KERNEL=="tty_dgrp*", PROGRAM="/usr/bin/dgrp_udev %k", SYMLINK+="%c", TAG="systemd", GROUP="dialout", MODE="0666", OPTIONS="last_rule"
#KERNEL=="cu_dgrp*", PROGRAM="/usr/bin/dgrp_udev %k", SYMLINK+="%c", TAG="systemd"
#KERNEL=="pr_dgrp*", PROGRAM="/usr/bin/dgrp_udev %k", SYMLINK+="%c", TAG="systemd"
```
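After the driver rebuild above, it is worth confirming that data is actually flowing again on one of the Digi RealPort devices before calling it done. The sketch below assumes pyserial is installed and that the dgrp driver created the devices as /dev/ttya00 through /dev/ttya15 (the 'a' comes from the node ID used in the dgrp_cfg_node command above); the specific port and baud rate chosen here are assumptions, so adjust them for the signal you want to spot-check.

```python
#!/usr/bin/env python3
"""Sketch: read a few lines from one Digi RealPort device after patching.

Assumes pyserial is installed and that dgrp created /dev/ttya00 .. /dev/ttya15
(node ID 'a'); the port and baud rate below are assumptions to adjust.
"""
import serial  # pyserial

PORT = "/dev/ttya02"  # assumption: pick the port carrying the signal to check
BAUDRATE = 4800       # e.g., the ship gyro feed runs at 4800 baud
LINES_TO_READ = 5

with serial.Serial(port=PORT, baudrate=BAUDRATE, timeout=5) as ser:
    for _ in range(LINES_TO_READ):
        line = ser.readline()
        if not line:
            print("Timed out waiting for data; the serial chain may still be down")
            break
        print(line.decode(errors="replace").rstrip())
```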