
David Packard Simulator

This document details how the simulation environment for the David Packard was built. First, let's look at a high-level diagram of the various components. There are six main components in the simulator:

  1. An Ubuntu 24.04 virtual machine named sim-dpkd.shore.mbari.org running simulation software that pushes generated signals to its local serial ports.
  2. A Moxa NPort 16-port RS-232 serial device server that is connected to the VM mentioned above via Linux kernel drivers.
  3. A Digi Connect TS16 that takes in physical connections from the outputs of the Moxa NPort.
  4. A Digi Connect TS32 that takes in physical connections from the outputs of the Moxa NPort.
  5. An Intel NUC computer running the navproc/logr software, with the TS16 and TS32 ports mounted virtually as /dev/tty devices.
  6. An Ubuntu 24.04 VM named coredata-dpkd-sim.shore.mbari.org that serves as the navproc-api server.
    ---
    title: Logical Simulator Components
    ---
    flowchart LR
        subgraph Simulator
            direction LR
            sim-dpkd[sim-dpkd<br>Ubuntu 24.04]
            moxa[Moxa NPort<br>sim-moxa-dpkd<br>134.89.11.159]
            digi16[Digi TS 16]
            navproc-dpkd-sim.shore.mbari.org[navproc-dpkd-sim.shore.mbari.org<br>Ubuntu 22.04]
            coredata-dpkd-sim.shore.mbari.org[coredata-dpkd-sim.shore.mbari.org<br>Ubuntu 24.04]
            digi32[Digi TS 32]
        end
        sim-dpkd --> moxa --> digi16 --> navproc-dpkd-sim.shore.mbari.org
        moxa --> digi32 --> navproc-dpkd-sim.shore.mbari.org
        navproc-dpkd-sim.shore.mbari.org --> coredata-dpkd-sim.shore.mbari.org

The physical simulator hardware setup looks like the following:

(Image: physical-connections)

Here is a diagram of the signal connections on the Packard Simulator:

    ---
    title: David Packard Simulator Connections
    ---
    flowchart LR
        subgraph signals[Signal Connections]
            direction LR
            subgraph Simulator
                ttyr00; ttyr01
                gyro --> ttyr02
                gps --> ttyr03
                ctd --> ttyr04
                lodestar --> ttyr05
                gtdpro --> ttyr06
                uhsmsg --> ttyr07
                ttyr08
                csp --> ttyr09
                nav4d --> ttyr0a
                ttyr0b; ttyr0c; ttyr0d; ttyr0e; ttyr0f
            end
            subgraph Moxa
                moxa-port1; moxa-port2; moxa-port3; moxa-port4; moxa-port5; moxa-port6; moxa-port7; moxa-port8
                moxa-port9; moxa-port10; moxa-port11; moxa-port12; moxa-port13; moxa-port14; moxa-port15; moxa-port16
            end
            subgraph TS16
                ts16-ttya00; ts16-ttya01; ts16-ttya02; ts16-ttya03; ts16-ttya04; ts16-ttya05; ts16-ttya06; ts16-ttya07
                ts16-ttya08; ts16-ttya09; ts16-ttya10; ts16-ttya11; ts16-ttya12; ts16-ttya13; ts16-ttya14; ts16-ttya15
            end
            subgraph TS32
                direction TB
                ts32-ttya00; ts32-ttya01; ts32-ttya02; ts32-ttya03; ts32-ttya04; ts32-ttya05; ts32-ttya06; ts32-ttya07
                ts32-ttya08; ts32-ttya09; ts32-ttya10; ts32-ttya11; ts32-ttya12; ts32-ttya13; ts32-ttya14; ts32-ttya15
                ts32-ttya16; ts32-ttya17; ts32-ttya18; ts32-ttya19; ts32-ttya20; ts32-ttya21; ts32-ttya22; ts32-ttya23
                ts32-ttya24; ts32-ttya25; ts32-ttya26; ts32-ttya27; ts32-ttya28; ts32-ttya29; ts32-ttya30; ts32-ttya31
            end
            vorne
        end
        ttyr00 --> moxa-port1; ttyr01 --> moxa-port2; ttyr02 --> moxa-port3; ttyr03 --> moxa-port4
        ttyr04 --> moxa-port5; ttyr05 --> moxa-port6; ttyr06 --> moxa-port7; ttyr07 --> moxa-port8
        ttyr08 --> moxa-port9; ttyr09 --> moxa-port10; ttyr0a --> moxa-port11; ttyr0b --> moxa-port12
        ttyr0c --> moxa-port13; ttyr0d --> moxa-port14; ttyr0e --> moxa-port15; ttyr0f --> moxa-port16
        moxa-port2 --> ts16-ttya01; moxa-port3 --> ts16-ttya02; moxa-port4 --> ts16-ttya03; moxa-port5 --> ts16-ttya04
        moxa-port6 --> ts16-ttya05; moxa-port7 --> ts16-ttya06; moxa-port8 --> ts16-ttya07; moxa-port9 --> ts16-ttya08
        moxa-port10 --> ts32-ttya09; moxa-port11 --> ts32-ttya10; moxa-port12 --> ts32-ttya11; moxa-port13 --> ts32-ttya12
        moxa-port14 --> ts32-ttya13; moxa-port15 --> ts32-ttya14; moxa-port16 --> ts32-ttya15
        ts16-ttya00 --> vorne

Moxa NPort 16

I wanted to make sure the Moxa was configured properly before setting up the signal simulator. To provide multiple serial ports on the sim-dpkd machine, we use a Moxa NPort 5610, which presents its ports as virtual serial ports on the VM. To configure the NPort for this simulator:

  1. I downloaded the NPort Administrator Suite to a separate Windows 11 machine and then downloaded the latest (v3.12) firmware ROM from the Moxa support site.
  2. I installed the NPort Administrator Suite and ran it. It automatically found the Moxa which had previously been configured with IP 192.168.127.254. It had v3.9 firmware on it.
  3. I used the NPort Administrator Suite to update the firmware.
  4. I fired up the NPort configuration utility again and it found the Moxa.
  5. I connected to it with the default username and password, enabled DHCP, and set the netmask to 255.255.254.0, the gateway to 134.89.10.1, DNS Server 1 to 134.89.10.10, and DNS Server 2 to 134.89.12.87. I renamed it to sim-moxa-dpkd so it would be easily recognizable and then saved. It automatically restarted with an IP address of 134.89.11.159.
  6. When I connected over the web interface, it asked me for a new password, so I set the password to match that of the 'ops' account on the navproc machines.
  7. The NPort web page can be found here.
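
To sanity-check the new network settings before moving on, a quick reachability test of the NPort's web interface can be run from any machine on the network. This is just a sketch; it assumes the web interface is listening on the default HTTP port 80 at 134.89.11.159.

    import socket

    # Quick reachability check of the sim-moxa-dpkd web interface.
    # Assumes the NPort is at 134.89.11.159 and serving HTTP on port 80.
    try:
        with socket.create_connection(("134.89.11.159", 80), timeout=5):
            print("sim-moxa-dpkd web interface is reachable")
    except OSError as err:
        print(f"Could not reach sim-moxa-dpkd: {err}")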

sim-dpkd.shore.mbari.org

  1. Submitted a help ticket to create a VM for the signal simulator. This should just be a vanilla Ubuntu 24 install with Docker installed. The hostname of the machine will be sim-dpkd.shore.mbari.org. Here is the ticket text:

    - Request_Submitted_By: <kgomes@mbari.org>
    - VM_Name: sim-dpkd
    - VM_Purpose: This VM will be used to run a Python Docker container that simulates signals for the current Navproc installation on the Rachel Carson. Peter, let's make this one Ubuntu 24 if that's OK with you. I am putting one year on this VM because it should only be in operation until we get the new infrastructure online.
    - VM_Expiration: 1 year
    - VM_Support: IS_Supported
    - VM_Support_Alt_Adm:
    - VM_OS: > Refer to Comments
    - CPU: 2
    - CPU_Addl_Reason:
    - RAM: 4
    - RAM_Addl_Reason:
    - GPU_Extra:
    - GPU: NO
    - Disk_Extra:
    - Network: Internal (SHORE)
    - Resource_Priority: Low
    - Resource_Priority_Reason:
    - Conf_Firewall:
    - Conf_Desktop: YES
    - Conf_Desktop_Extra: Could you also enable Remote Desktop on this machine? I will do most work via ssh, but it will be helpful to have Desktop access.
    - Conf_Logins: local
    - Conf_Docker: YES
    - Conf_Docker_Extra:
    - Conf_WebServer: none
    - Conf_sudo: sudo for kgomes local account. Can you also create a local account named 'ops' that has sudo too? This is the way Mike and I use Navproc now and we would like to do it this way so we can run the navproc/logging stuff as ops and then we can both login directly as ops and manage it.
    - Conf_vCenter_Access:
    - VM_Comments: Ubuntu 24 please.
    
    1. Peter finished the VM installation and gave the ops account the default password.
    2. I needed to ssh into the VM first so that the system would prompt for a new password, by running ssh ops@sim-dpkd.shore.mbari.org
    3. I changed the password to the normal ops password.
    4. I then brought up the Windows.app on my Mac and created a connection to this VM. It opened right to the initial setup of the Ubuntu installation.
    5. I mostly accepted the defaults for the options in the setup.
    6. Note the default UI through the remote desktop is different from the normal one.
    7. I ran the software updater to get everything up to date.
    8. I opened up the Settings application and under the Power settings I set Screen Blank to Never.
    9. I then opened a terminal and ran sudo apt-get install build-essential cmake python3-pip python3-venv -y
    10. Next, I downloaded the 'Real TTY Linux Kernel 6.x Driver' from the Moxa website into the Downloads folder. Then I ran
          tar -xvf moxa-real-tty-linux-kernel-6.x-driver-v6.1.tar
          cd moxa
          sudo ./mxinst
          cd /usr/lib/npreal2/driver
          sudo ./mxaddsvr 134.89.11.159 16
      
  2. This creates the virtual serial ports /dev/ttyr00 through /dev/ttyr0f (a quick check that they can be opened is sketched after the warning below).

  3. The code for the Python simulator is located in BitBucket here; it was checked out by changing to the /opt directory and running sudo git clone https://kgomes@bitbucket.org/mbari/corenav-simulators.git (I used my dev-checkout app password).
  4. I then ran sudo chown -R ops:ops corenav-simulators to change ownership over to the ops account.
  5. I then cd'd into corenav-simulators and ran mkdir logs to create a directory where log files will go.
  6. Next, I needed to grab some log files from the Carson so I could replay them. I cd'd into the /opt/corenav-simulators/data directory and ran:
    mkdir log-files
    cd log-files
    mkdir carson
    cd carson
    scp ops@rcnavproc1.rc.mbari.org:/home/ops/corelogging/rc/data/2024339* .
    scp ops@rcnavproc1.rc.mbari.org:/home/ops/corelogging/rc/data/2024340* .
    scp ops@rcnavproc1.rc.mbari.org:/home/ops/corelogging/rc/data/2024341* .
    sudo gunzip *.gz
    

Warning

When first setting this simulator up, we did not have any data from the Packard. Eventually, these data files should be replaced with files generated by the actual Packard instruments so the simulated signals are correct.
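
Before editing the simulator configuration, it can help to confirm that the Moxa virtual ports created by mxaddsvr are present and can actually be opened. Here is a minimal sketch, assuming pyserial is installed (pip install pyserial):

    import glob
    import serial  # pyserial

    # Try to open each Moxa virtual port created by mxaddsvr (/dev/ttyr00 through /dev/ttyr0f).
    for dev in sorted(glob.glob("/dev/ttyr*")):
        try:
            with serial.Serial(dev, 9600, timeout=1):
                print(f"{dev}: opened OK")
        except serial.SerialException as err:
            print(f"{dev}: FAILED ({err})")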

  1. Then, in /opt/corenav-simulators, I edited the simulator_config.json file to look like the entries below (a sketch of what each file-mapping entry amounts to appears after this list).

    {
        "name": "David Packard Data Simulators",
        "version": "0.1",
        "description": "Simulator for data that should be coming from the David Packard",
        "author": "Kevin Gomes",
        "logging-level": "DEBUG",
        "logging-format": "%(asctime)s: %(message)s",
        "logging-datefmt": "%H:%M:%S",
        "try-to-send-over-serial": true,
        "simulators": [
            {
                "name": "logr_simulator",
                "config": {
                    "type": "logr-file-reader",
                    "log-dir": "./data/log-files/carson/",
                    "file-mapping": {
                        "2024340csprawfulllogr.dat": {
                            "port": "/dev/ttyr09",
                            "baudrate": 9600,
                            "parity": "N",
                            "stopbits": 1,
                            "bytesize": 8
                        },
                        "2024340gtdprologr.dat": {
                            "port": "/dev/ttyr06",
                            "baudrate": 9600,
                            "parity": "N",
                            "stopbits": 1,
                            "bytesize": 8
                        },
                        "2024340lodestarlogr.dat": {
                            "port": "/dev/ttyr05",
                            "baudrate": 9600,
                            "parity": "N",
                            "stopbits": 1,
                            "bytesize": 8
                        },
                        "2024340nav4dlogr.dat": {
                            "port": "/dev/ttyr0a",
                            "baudrate": 9600,
                            "parity": "N",
                            "stopbits": 1,
                            "bytesize": 8
                        },
                        "2024340nmeafulllogr.dat": {
                            "port": "/dev/ttyr03",
                            "baudrate": 9600,
                            "parity": "N",
                            "stopbits": 1,
                            "bytesize": 8
                        },
                        "2024340seabirdctdfulllogr.dat": {
                            "port": "/dev/ttyr04",
                            "baudrate": 9600,
                            "parity": "N",
                            "stopbits": 1,
                            "bytesize": 8
                        },
                        "2024340shipgyrofulllogr.dat": {
                            "port": "/dev/ttyr02",
                            "baudrate": 4800,
                            "parity": "N",
                            "stopbits": 1,
                            "bytesize": 8
                        },
                        "2024340uhsmsgfulllogr.dat": {
                            "port": "/dev/ttyr07",
                            "baudrate": 9600,
                            "parity": "N",
                            "stopbits": 1,
                            "bytesize": 8
                        }
                    }
                }
            }
        ]
    }
    
  2. In order to test this, I ran ./simulator.sh from the /opt/corenav-simulators directory. Once I verified data was being generated properly, I killed the Python process.

  3. Now to get this to run as a service, I created a service startup file /etc/systemd/system/corenav-simulators.service which looks like:

    [Unit]
    Description=Python scripts to simulate data for corenav
    After=network.target
    
    [Service]
    Type=forking
    ExecStartPre=/bin/sleep 30
    ExecStart=/opt/corenav-simulators/simulator.sh
    Restart=always
    
    [Install]
    WantedBy=default.target
    
  4. The service can then be enabled by running the following:

    sudo systemctl daemon-reload
    sudo systemctl enable corenav-simulators.service
    sudo systemctl start corenav-simulators.service
    
  5. I then rebooted the machine to make sure the simulators started properly.
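
For reference, each file-mapping entry above boils down to opening the named /dev/ttyr port with the configured serial settings and replaying the records from the matching logr file. Here is a rough sketch of that idea, assuming pyserial is installed; the fixed one-second pacing is a simplification, and the real simulator's timing may differ.

    import time
    import serial  # pyserial

    # One hypothetical file-mapping entry, mirroring simulator_config.json.
    PORT = "/dev/ttyr02"
    BAUD = 4800
    LOG_FILE = "./data/log-files/carson/2024340shipgyrofulllogr.dat"

    ser = serial.Serial(PORT, BAUD, bytesize=serial.EIGHTBITS,
                        parity=serial.PARITY_NONE, stopbits=serial.STOPBITS_ONE)
    with open(LOG_FILE, "rb") as logfile:
        for record in logfile:
            ser.write(record)   # push one logged record out the virtual port
            time.sleep(1.0)     # simplistic pacing; the real simulator's timing may differ
    ser.close()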

Warning

Once, after doing a standard upgrade of the Ubuntu installation, the serial ports were no longer recognized. I had to uninstall the driver by running sudo ./mxuinst from the ~/Downloads/moxa/ directory and then run sudo ./mxinst again. After that, run cd /usr/lib/npreal2/driver and then sudo ./mxaddsvr 134.89.11.159 16 to add the virtual ports back.

sim-ts16-dpkd (Digi TS16)

Next in line is a Digi TS 16 terminal server. The serial cables from the Moxa are routed to the terminal server, whose ports are then virtually mounted on the computer that is running the navproc and logr code.

The web page for the Digi TS 16 terminal server can be found here; ask Kevin Gomes for login credentials if you need them.

Some support documents:

  1. Quick Start Guide
  2. User Guide

Installation Steps:

  1. Before starting, I took a picture of the label so I could get the MAC address off it before installing it in the rack. The MAC address is 00409DD38856
  2. I wanted to do a factory reset to clear any settings. Before plugging in the power, I held the reset button and then plugged in the power plug. I held the reset button for about 30 seconds and the LED on the back started blinking in a 1-5-1 pattern so I released the reset button.
  3. I downloaded the Digi Discovery Tool to my windows machine and ran it.
  4. I could see the Digi was assigned an IP address of 134.89.11.118, so I double-clicked on it to open up the settings, which brought up the web page interface. I logged in using the default user of root with the password that was printed on the label with the MAC address.
  5. Under the Network->Advanced Network Settings, I changed the Host Name to sim-ts16-dpkd and the DNS servers to 134.89.10.10 and 134.89.12.87 and then clicked on Apply.
  6. I then went to Reboot and rebooted the terminal server.
  7. It had the most recent firmware so I did not do a firmware update.
  8. Under Users->root, I changed the password to match the ops password we use.
  9. Under System, I set the Device Identity Settings for Description and Contact.
  10. Under System, I set the Date and Time to use a UTC offset of -08 hours and set the first time source to time-sh1.shore.mbari.org.
  11. I rebooted again, but the time and date did not update, so I set them manually.

sim-ts32-dpkd (Digi TS32)

  1. Before starting, I took a picture of the label on the outside so I would have the MAC address.
  2. I then installed the TS in the rack, connected both ethernet ports to the LAN and then powered it on.
  3. I went to the Digi website and downloaded the "Digi Navigator" for Windows 11.
  4. I installed the application with the default settings, added a desktop shortcut and ran the utility.
  5. I could see from the list in the application that there was something called "EZ32-001007", but it had two IP addresses underneath it, presumably because I had two cables plugged in. The IPs were quite different though: one was 134.89.10.214 and the other was 134.89.11.32. I wanted to make sure I was not picking up one of the other 32-port terminal servers, so I unplugged the top cable and the 134.89.10.214 address disappeared; I plugged it back in and it came back. I unplugged the other cable and the other IP disappeared, so I plugged it back in. At that point I knew I had found the right one. I then unplugged the second cable.
  6. I selected the 134.89.11.32 IP and then clicked on Configure Device for RealPort.
  7. I entered the user admin and the password printed on the label. A message box popped up saying it was OK.
  8. I then clicked on the HTTPS button, which opened a web page, and I logged in using admin and the password on the label.
  9. First, I went to the 'admin' -> 'change password' menu and changed the password. NOTE: I couldn't use the standard ops password as it wasn't secure enough, so I changed it a little and added it to my BitWarden application under the SE-IE shared password.
  10. Next, I went to the System -> Firmware Update menu and selected 'Download from server'. There was an update, so I clicked 'Update firmware'. It updated automatically, rebooted, and then I logged in again.

Intel NUC navproc-dpkd-sim.shore.mbari.org

The next component in the chain is an Intel NUC machine that runs the Navproc, Logr, and LCM bridge software and is named navproc-dpkd-sim.shore.mbari.org. I used the standard Navproc Computer Setup to get all the software installed; please consult that documentation, which covers installing all the necessary software and tools. Once that was complete, but before the processes were started, I needed to set up the connection to the Digi TS units and get the signals routed to the correct serial ports.

One difference in this setup is that we are using both a TS16 and a TS32, so after setting up the TS16, I took the following steps.

Signal Connections

Signal Details

| Simulated Device | navproc-dev-ubunutu-22 source | Protocol/COM Port | Baud Rate | Data Bits | Stop Bits | Parity | Moxa IP Address | Moxa Port | Notes |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
|  |  | /dev/ttyr00 |  |  |  |  | 134.89.10.247 | 1 |  |
| GPS | nmeaGPS_sim.py | /dev/ttyr01 | 9600 | 8 | 1 | None | 134.89.10.247 | 2 |  |
| CTD | file_reader_sim.py | /dev/ttyr02 | 9600 | 8 | 1 | None | 134.89.10.247 | 3 |  |
| Gyro | file_reader_sim.py | /dev/ttyr03 | 9600 | 8 | 1 | None | 134.89.10.247 | 4 |  |
|  |  | /dev/ttyr04 |  |  |  |  | 134.89.10.247 | 5 |  |
|  |  | /dev/ttyr05 |  |  |  |  | 134.89.10.247 | 6 |  |
|  |  | /dev/ttyr06 |  |  |  |  | 134.89.10.247 | 7 |  |
|  |  | /dev/ttyr07 |  |  |  |  | 134.89.10.247 | 8 |  |
| GPS | nmeaGPS_sim.py | /dev/ttyr08 | 9600 | 8 | 1 | None | 134.89.10.247 | 9 |  |
| CTD | file_reader_sim.py | /dev/ttyr09 | 9600 | 8 | 1 | None | 134.89.10.247 | 10 |  |
| Gyro | file_reader_sim.py | /dev/ttyr0a | 9600 | 8 | 1 | None | 134.89.10.247 | 11 |  |
|  |  | /dev/ttyr0b |  |  |  |  | 134.89.10.247 | 12 |  |
|  |  | /dev/ttyr0c |  |  |  |  | 134.89.10.247 | 13 |  |
|  |  | /dev/ttyr0d |  |  |  |  | 134.89.10.247 | 14 |  |
|  |  | /dev/ttyr0e |  |  |  |  | 134.89.10.247 | 15 |  |
|  |  | /dev/ttyr0f |  |  |  |  | 134.89.10.247 | 16 |  |
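
Once the RealPort devices are mounted on the navproc machine, a quick way to spot-check that a simulated signal is actually arriving is to read a few records from one of the /dev/ttya ports. Here is a minimal sketch, assuming pyserial is installed; the port name and baud rate are just examples and should be matched to the table above.

    import serial  # pyserial

    # Read a handful of records from one RealPort-mounted Digi port (example device).
    with serial.Serial("/dev/ttya02", 9600, timeout=2) as ser:
        for _ in range(10):
            record = ser.readline()
            print(record.decode("ascii", errors="replace").rstrip())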

API Server

The next step was to configure a server in the VM cluster that will act as an API server for the data coming from navproc. This API consists of a ZMQ proxy, a Redis server, a UDP proxy that replicates the old bcserver, a Telegraf socket that feeds an InfluxDB database and a Grafana server. Here is a basic diagram of the API services and data flow.

    ---
    title: Navproc API
    ---
    flowchart LR
        subgraph David Packard API Server
            direction LR
            subgraph navproc
                navproc-process-1; navproc-process-2; logr-process-1; logr-process-2; lcm-bridge
            end
            subgraph navproc-api
                port5555(5555); zmq-proxy; port5556(5556)
                port6379in(6379); redis-server[Redis Server]; port6379out(6379)
                redis-udp-proxy
                port54003(54003); port54005(54005); port54007(54007); port54009(54009); port54017(54017); port54019(54019)
                port8086in(8086); influxdb[InfluxDB]; port8086out(8086)
                port8094(8094); telegraf[Telegraf]
                grafana
            end
        end
        navproc-process-1 --> logr-process-1
        navproc-process-1 --> lcm-bridge
        logr-process-1 --> lcm-bridge
        navproc-process-2 --> logr-process-2
        navproc-process-2 --> lcm-bridge
        logr-process-2 --> lcm-bridge
        lcm-bridge --> port5555 --> zmq-proxy --> port5556
        lcm-bridge --> port6379in --> redis-server --> port6379out
        lcm-bridge --> port8094 --> telegraf --> port8086in --> influxdb --> port8086out
        redis-server --> redis-udp-proxy
        redis-server --> grafana
        influxdb --> grafana
        redis-udp-proxy --> port54003
        redis-udp-proxy --> port54005
        redis-udp-proxy --> port54007
        redis-udp-proxy --> port54009
        redis-udp-proxy --> port54017
        redis-udp-proxy --> port54019

  1. I submitted a ticket to have IS build an Ubuntu 20.04 VM, and Peter built the VM dp-sim-api.shore.mbari.org and gave Mike and me sudo privileges on it.
  2. I ssh'd into the dp-sim-api server as myself.
  3. I checked to make sure openssh-client was installed by running apt list openssh-client (it was installed)
  4. I started an ssh-agent by running eval $(ssh-agent), which returned the PID of the agent.
  5. I changed into the /etc/ssh/keys/kgomes directory by running cd /etc/ssh/keys/kgomes so my generated key would end up there.
  6. I then created a new ssh key by running ssh-keygen -t ed25519 -b 4096 -C "{kgomes@mbari.org}" -f ops_as_kgomes (I used the login password for ops as the key password). This created the ops_as_kgomes private key and the ops_as_kgomes.pub public key.
  7. I added the ssh key to the agent by running ssh-add /etc/ssh/keys/kgomes/ops_as_kgomes (had to enter the password)
  8. I then created a /etc/ssh/keys/kgomes/config file and added the following:

    Host bitbucket.org
            AddKeysToAgent yes
            IdentityFile /etc/ssh/keys/kgomes/ops_as_kgomes
    
  9. This finishes the setup of my ssh key on the api machine. Now I needed to add the ssh key to BitBucket. I logged into BitBucket, went to workspace settings, and clicked on SSH Keys. Then I clicked on Add Key, gave it a name of “From dp-sim-api.shore.mbari.org as kgomes”, and then, back in the terminal in the /etc/ssh/keys/kgomes directory, ran cat ops_as_kgomes.pub, selected, copied, and pasted the entire string into the BitBucket Key window. Now I should have access to the repositories I need.

  10. Now, to install the API software, I went to the /opt directory by running cd /opt
  11. I created the corenav directory using sudo mkdir corenav
  12. Changed it to be owned by the ops account using sudo chown ops corenav.
  13. I then switched to the ops account by running sudo -u ops -i.
  14. Then I went into the corenav directory by running cd corenav.
  15. I then checked out the API repo by running git clone git@bitbucket.org:mbari/navproc-api.git (I used an app password from my kgomes account)
  16. I then cd'd into the navproc-api directory.
  17. Before starting everything, I needed to copy the .env.template file to a file named .env and then edit it to set the passwords and such for the API.
  18. Now everything should be ready to go, so I started up the API by running docker-compose up -d, which runs everything in the background. You can double-check this by running docker ps; you should see 5 containers running.
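
To confirm data is actually flowing through the API once the containers are up, one option is to subscribe to the ZMQ proxy's downstream port (5556 in the diagram above) and print whatever arrives. Here is a minimal sketch using pyzmq; the hostname, port, and message framing are assumptions based on the diagram, not verified details of the navproc-api implementation.

    import zmq

    # Subscribe to everything published by the ZMQ proxy and print raw messages.
    context = zmq.Context()
    subscriber = context.socket(zmq.SUB)
    subscriber.connect("tcp://dp-sim-api.shore.mbari.org:5556")
    subscriber.setsockopt_string(zmq.SUBSCRIBE, "")

    for _ in range(10):
        message = subscriber.recv()
        print(message)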