
Rachel Carson Simulator - Previous Version

This page documents the Navproc and logging system on the Rachel Carson Simulator as it ran under the previously deployed version of navproc and logging.

Warning

This document is for historical purposes only and shows how things used to be set up. These components were shut down once the new version of navproc was installed in March/April of 2025.

Overview

First, let's look at a high-level diagram of the various components. There are four main components in the simulator:

  1. An Ubuntu 24 virtual machine named sim-rcsn-old.shore.mbari.org that runs simulation software which generates mock signals and pushes them out serial ports to a Moxa serial server. The simulators are essentially reading old log files from the Carson and pushing the records out the serial ports as time ticks by.
  2. A Moxa NPort 16-port RS-232 serial device server that is connected to the VM above via Linux kernel drivers.
  3. A Digi Connect TS 16 that takes in physical connections from the outputs of the Moxa NPort.
  4. An Ubuntu 20 virtual machine that runs the current version of the navproc/logr software and has the TS 16 ports mounted virtually as /dev/tty devices.
```mermaid
---
title: Logical Simulator Components
---
flowchart LR
    subgraph RCSimulator[Rachel Carson Simulator]
        direction LR
        simulator-rcsn-old["Old Simulator VM<br>sim-rcsn-old.shore.mbari.org"]
        simulator-rcsn-old-moxa["Old Simulator Moxa<br>134.89.10.246<br>sim-moxa-rcsn-old.shore.mbari.org"]
        simulator-rcsn-old-ts["Digi TS16<br>134.89.10.99<br>sim-ts-rcsn-old"]
        navproc-rcsn-sim-old["Old Navproc VM<br>navproc-rcsn-sim-old.shore.mbari.org"]
        simulator-rcsn-old --> simulator-rcsn-old-moxa --> simulator-rcsn-old-ts --> navproc-rcsn-sim-old
    end
```
```mermaid
---
title: Signal Connections
---
flowchart LR
    subgraph signals[Signal Connections]
        direction LR
        subgraph Simulator
            ttyr00
            ttyr01
            gyro --> ttyr02
            gps --> ttyr03
            ctd --> ttyr04
            lodestar --> ttyr05
            gtdpro --> ttyr06
            uhsmsg --> ttyr07
            ttyr08
            csp --> ttyr09
            nav4d --> ttyr0a
            ttyr0b
            ttyr0c
            ttyr0d
            ttyr0e
            ttyr0f
        end
        subgraph Moxa
            moxa-port1
            moxa-port2
            moxa-port3
            moxa-port4
            moxa-port5
            moxa-port6
            moxa-port7
            moxa-port8
            moxa-port9
            moxa-port10
            moxa-port11
            moxa-port12
            moxa-port13
            moxa-port14
            moxa-port15
            moxa-port16
        end
        subgraph TS16
            ts16-ttya00
            ts16-ttya01
            ts16-ttya02
            ts16-ttya03
            ts16-ttya04
            ts16-ttya05
            ts16-ttya06
            ts16-ttya07
            ts16-ttya08
            ts16-ttya09
            ts16-ttya10
            ts16-ttya11
            ts16-ttya12
            ts16-ttya13
            ts16-ttya14
            ts16-ttya15
        end
        vorne
    end
    ttyr00 --> moxa-port1
    ttyr01 --> moxa-port2
    ttyr02 --> moxa-port3
    ttyr03 --> moxa-port4
    ttyr04 --> moxa-port5
    ttyr05 --> moxa-port6
    ttyr06 --> moxa-port7
    ttyr07 --> moxa-port8
    ttyr08 --> moxa-port9
    ttyr09 --> moxa-port10
    ttyr0a --> moxa-port11
    ttyr0b --> moxa-port12
    ttyr0c --> moxa-port13
    ttyr0d --> moxa-port14
    ttyr0e --> moxa-port15
    ttyr0f --> moxa-port16
    moxa-port2 --> ts16-ttya01
    moxa-port3 --> ts16-ttya02
    moxa-port4 --> ts16-ttya03
    moxa-port5 --> ts16-ttya04
    moxa-port6 --> ts16-ttya05
    moxa-port7 --> ts16-ttya06
    moxa-port8 --> ts16-ttya07
    moxa-port9 --> ts16-ttya08
    moxa-port10 --> ts16-ttya09
    moxa-port11 --> ts16-ttya10
    moxa-port12 --> ts16-ttya11
    moxa-port13 --> ts16-ttya12
    moxa-port14 --> ts16-ttya13
    moxa-port15 --> ts16-ttya14
    moxa-port16 --> ts16-ttya15
    ts16-ttya00 --> vorne
```

Installation and Setup

sim-moxa-rcsn-old (Moxa NPort 5600)

  1. Before setting up the simulator and its navproc, I re-configured the Moxa for the new setup. The manual for the Moxa is located here.
  2. Since this was an older Moxa, I did a factory reset by holding the reset button for 5 seconds until the LED stopped blinking, which loads the factory defaults.
  3. I downloaded the NPort Administrator Suite to a separate Windows 11 machine and then downloaded the latest (v3.11) firmware ROM from the Moxa support site.
  4. I installed the NPort Administrator Suite on the Windows machine and ran it.
  5. I clicked on Search and it listed all the Moxa NPorts it could see.
  6. I double-clicked on the Moxa that I had just reset to bring up the settings.
  7. I then used the NPort admin tool to upgrade the firmware to the latest that I downloaded.
  8. I changed the name of the Moxa to sim-moxa-rcsn-old
  9. I changed the timezone to Pacific, set the time and date and set the NTP server to time-sh1.shore.mbari.org
  10. I configured the network by setting the netmask to 255.255.254.0 and the gateway to 134.89.10.1, selecting DHCP, and setting DNS Server 1 to 134.89.10.10 and DNS Server 2 to 134.89.12.87.
  11. I applied the changes which restarted the Moxa.

Note

After I had set things up, there was some networking strangeness in my office. Once that was all worked out, Adriana changed the IP address of the NPort to 134.89.10.246.

  12. You can find the NPort web page here.

Warning

I have noticed that sometimes, even though the Moxa is working, the web page will not respond. I have to power the Moxa off and on for the web page to become responsive again.

  13. Using the web interface, I changed the password to match the ops password we use for other navproc machines (note that the username is still 'admin').

sim-ts-rcsn-old (Digi TS 16)

The next in line is a Digi TS 16 terminal server. The serial cables from the Moxa are routed to the terminal server, which is then virtually mounted on the computer that runs the navproc and logr code.

The web page for the Digi TS 16 terminal server can be found here; ask Kevin Gomes for login credentials if you need them.

Some support documents:

  1. Quick Start Guide
  2. User Guide

Installation Steps:

  1. Before starting, I wanted to do a factory reset to clear any settings. I unplugged the power plug, held the reset button, and plugged the power back in. I kept holding the reset button for about 30 seconds until the LED on the back started blinking in a 1-5-1 pattern, then released it.
  2. I downloaded the Digi Discovery Tool to my Windows machine and ran it.
  3. I could see the Digi was assigned an IP address of 134.89.10.255 (identified by its MAC address), so I double-clicked on it to open the settings, which brought up the web page interface. I logged in using the default user root with password dbps.
  4. On the Network settings panel, I changed the IP address from DHCP to static (IS provided) and set it to 134.89.10.99 and then clicked on Apply.
  5. It restarted the Digi and redirected my browser to here and I logged in again.
  6. Under Network->Advanced Network Settings, I changed the Host Name to simulator-ts-rcsn-old and the DNS servers to 134.89.10.10 and 134.89.12.87, then clicked on Apply.
  7. I then went to Reboot and rebooted the terminal server.
  8. I downloaded the latest firmware from the Digi site (820014374_U.bin) and then installed it from the web page and then rebooted it.
  9. Under Users->root, I changed the password to match the ops password we use.
  10. Under System, I set the Device Identity Settings for Description and Contact.
  11. Under System, I set the Date and Time to use a UTC offset of -08 hours and set the first time source to time-sh1.shore.mbari.org.
  12. I rebooted again, but the time and date did not update, so I set them manually.

sim-rcsn-old (Simulator VM)

  1. Submitted a help ticket to create a VM for the signal simulator. This should just be a vanilla Ubuntu 24 with Docker installed. The hostname of the machine will be sim-rcsn-old.shore.mbari.org. Here is the ticket text:

    - Request_Submitted_By: <kgomes@mbari.org>
    - VM_Name: sim-rcsn-old
    - VM_Purpose: This VM will be used to run a Python Docker container that simulates signals for the current Navproc installation on the Rachel Carson. Peter, let's make this one Ubuntu 24 if that's OK with you. I am putting one year on this VM because it should only be in operation until we get the new infrastructure online.
    - VM_Expiration: 1 year
    - VM_Support: IS_Supported
    - VM_Support_Alt_Adm:
    - VM_OS: > Refer to Comments
    - CPU: 2
    - CPU_Addl_Reason:
    - RAM: 4
    - RAM_Addl_Reason:
    - GPU_Extra:
    - GPU: NO
    - Disk_Extra:
    - Network: Internal (SHORE)
    - Resource_Priority: Low
    - Resource_Priority_Reason:
    - Conf_Firewall:
    - Conf_Desktop: YES
    - Conf_Desktop_Extra: Could you also enable Remote Desktop on this machine? I will do most work via ssh, but it will be helpful to have Desktop access.
    - Conf_Logins: local
    - Conf_Docker: YES
    - Conf_Docker_Extra:
    - Conf_WebServer: none
    - Conf_sudo: sudo for kgomes local account. Can you also create a local account named 'ops' that has sudo too? This is the way Mike and I use Navproc now and we would like to do it this way so we can run the navproc/logging stuff as ops and then we can both login directly as ops and manage it.
    - Conf_vCenter_Access:
    - VM_Comments: Ubuntu 24 please.
    
    1. Peter finished the VM installation and gave the ops account the default password.
    2. I needed to ssh into the VM first (ssh ops@sim-rcsn-old.shore.mbari.org) so that the system would prompt for a new password.
    3. I changed the password to the normal ops password.
    4. I then brought up the Windows.app on my Mac and created a connection to this VM. It opened right to the initial setup of the Ubuntu installation.
    5. I accepted mostly the defaults for the setup options.
    6. Note that the default UI through the remote desktop is different from the normal one.
    7. I ran the software updater to get everything up to date.
    8. I opened the Settings application and, under the Power settings, set Screen Blank to Never.
    9. I then opened a terminal and ran sudo apt-get install build-essential cmake python3-pip python3-venv -y
    10. Installing the Moxa virtual serial ports was a little different. The default kernel 6 drivers from their website did not work, so I reached out to support and they sent me the attached driver file, which I downloaded to the Downloads folder and installed by running:
          tar -xvf npreal2_v6.0.1_build_24051001.tgz
          cd moxa
          sudo ./mxinst
          cd /usr/lib/npreal2/driver
          sudo ./mxaddsvr 134.89.10.246 16
      
  2. This creates the virtual serial ports at /dev/ttyr00 through /dev/ttyr0f (a quick existence check is shown below).
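
    A quick way to confirm the driver created the device nodes (my addition, not part of the original steps):

      ls -l /dev/ttyr*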

  3. The code for the Python simulator is located in Bitbucket here; it was checked out by changing to the /opt directory and running sudo git clone https://kgomes@bitbucket.org/mbari/corenav-simulators.git (I used my dev-checkout app password).
  4. I then ran sudo chown -R ops:ops corenav-simulators to change ownership over to the ops account.
  5. I then cd'd into corenav-simulators and ran mkdir logs to create a directory where log files will go.
  6. Next, I needed to grab some log files from the Carson so I could replay them. I cd'd into the /opt/corenav-simulators/data directory and ran:

    mkdir log-files
    cd log-files
    mkdir carson
    cd carson
    scp ops@rcnavproc1.rc.mbari.org:/home/ops/corelogging/rc/data/2024339* .
    scp ops@rcnavproc1.rc.mbari.org:/home/ops/corelogging/rc/data/2024340* .
    scp ops@rcnavproc1.rc.mbari.org:/home/ops/corelogging/rc/data/2024341* .
    sudo gunzip *.gz
    
  7. Then, in /opt/corenav-simulators, I edited the simulator_config.json file to look like the listing below (a quick validity check follows it).

    {
        "name": "Rachel Carson Data Simulators",
        "version": "0.1",
        "description": "Simulator for data that should be coming from the Rachel Carson",
        "author": "Kevin Gomes",
        "logging-level": "DEBUG",
        "logging-format": "%(asctime)s: %(message)s",
        "logging-datefmt": "%H:%M:%S",
        "try-to-send-over-serial": true,
        "simulators": [
            {
                "name": "logr_simulator",
                "config": {
                    "type": "logr-file-reader",
                    "log-dir": "./data/log-files/carson/",
                    "file-mapping": {
                        "2024340csprawfulllogr.dat": {
                            "port": "/dev/ttyr09",
                            "baudrate": 9600,
                            "parity": "N",
                            "stopbits": 1,
                            "bytesize": 8
                        },
                        "2024340gtdprologr.dat": {
                            "port": "/dev/ttyr06",
                            "baudrate": 9600,
                            "parity": "N",
                            "stopbits": 1,
                            "bytesize": 8
                        },
                        "2024340lodestarlogr.dat": {
                            "port": "/dev/ttyr05",
                            "baudrate": 9600,
                            "parity": "N",
                            "stopbits": 1,
                            "bytesize": 8
                        },
                        "2024340nav4dlogr.dat": {
                            "port": "/dev/ttyr0a",
                            "baudrate": 9600,
                            "parity": "N",
                            "stopbits": 1,
                            "bytesize": 8
                        },
                        "2024340nmeafulllogr.dat": {
                            "port": "/dev/ttyr03",
                            "baudrate": 9600,
                            "parity": "N",
                            "stopbits": 1,
                            "bytesize": 8
                        },
                        "2024340seabirdctdfulllogr.dat": {
                            "port": "/dev/ttyr04",
                            "baudrate": 9600,
                            "parity": "N",
                            "stopbits": 1,
                            "bytesize": 8
                        },
                        "2024340shipgyrofulllogr.dat": {
                            "port": "/dev/ttyr02",
                            "baudrate": 4800,
                            "parity": "N",
                            "stopbits": 1,
                            "bytesize": 8
                        },
                        "2024340uhsmsgfulllogr.dat": {
                            "port": "/dev/ttyr07",
                            "baudrate": 9600,
                            "parity": "N",
                            "stopbits": 1,
                            "bytesize": 8
                        }
                    }
                }
            }
        ]
    }
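
    Before starting the simulator, the edited file can be sanity-checked for valid JSON. This check is my suggestion rather than part of the original procedure, and it assumes jq is installed (sudo apt-get install jq):

      # pretty-prints the config on success; reports a parse error otherwise
      jq . /opt/corenav-simulators/simulator_config.json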
    
  8. In order to just test this, I ran ./simulator.sh from the /opt/corenav-simulators directory. Once I verified data was being generated properly, I killed the python process.

  9. Now to get this to run as a service, I created a service startup file /etc/systemd/system/corenav-simulators.service which looks like:

    [Unit]
    Description=Python scripts to simulate data for corenav
    After=network.target
    
    [Service]
    Type=forking
    ExecStartPre=/bin/sleep 30
    ExecStart=/opt/corenav-simulators/simulator.sh
    Restart=always
    
    [Install]
    WantedBy=default.target
    
  10. The service can then be enabled by running the following:

    sudo systemctl daemon-reload
    sudo systemctl enable corenav-simulators.service
    sudo systemctl start corenav-simulators.service
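
    To confirm the service came up after enabling it (standard systemd checks, not from the original notes):

      sudo systemctl status corenav-simulators.service
      # and, if something looks wrong, follow the service output:
      journalctl -u corenav-simulators.service -f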
    
  11. I then rebooted the machine to make sure the simulators started properly.

Warning

Once, after doing a standard upgrade of the Ubuntu installation, the serial ports failed to be recognized. I had to uninstall the driver by running sudo ./mxuinst from the ~/Downloads/moxa/ directory and then run sudo ./mxinst again. After that, run cd /usr/lib/npreal2/driver and then sudo ./mxaddsvr 134.89.10.246 16 to re-add the virtual ports.

navproc-rcsn-sim-old (Navproc/Logging VM)

  1. Submitted a help ticket for a second VM to run the navproc/logging software, the same as what is currently running on the Carson. This will be Ubuntu 20. Here is the ticket text:

    - Request_Submitted_By: kgomes@mbari.org
    - VM_Name: *see comments*
    - VM_Purpose: This will be running the Navproc/Logging software that is currently running on the Carson. It is the 'old' version of the navproc. We want a simulator running the old stack handy as we migrate to the new.
    - VM_Expiration: 1 year
    - VM_Support: IS_Supported
    - VM_Support_Alt_Adm:
    - VM_OS: Ubuntu 20.04 LTS
    - CPU: 2
    - CPU_Addl_Reason:
    - RAM: 4
    - RAM_Addl_Reason:
    - GPU_Extra:
    - GPU: NO
    - Disk_Extra:
    - Network: Internal (SHORE)
    - Resource_Priority: Low
    - Resource_Priority_Reason:
    - Conf_Firewall:
    - Conf_Desktop: YES
    - Conf_Desktop_Extra: Need to have Remote Desktop available on this one. Thanks!
    - Conf_Logins: local
    - Conf_Docker_Extra:
    - Conf_Docker: NO
    - Conf_WebServer: none
    - Conf_sudo: sudo for kgomes local account. Can you also create a local account named 'ops' that has sudo too? This is the way Mike and I use Navproc now and we would like to do it this way so we can run the navproc/logging stuff as ops and then we can both login directly as ops and manage it.
    - Conf_vCenter_Access:
    - VM_Comments: Can we name the VM 'navproc-rcsn-sim-old'? I know it's long, but it's not a long term VM and it's really helpful if it's explicit
    
  2. Peter finished the VM installation and gave the ops account the default password.

  3. I needed to ssh into the VM first (ssh ops@navproc-rcsn-sim-old.shore.mbari.org) so that the system would prompt for a new password.
  4. I changed the password to the normal ops password.
  5. I then brought up the Windows.app on my Mac and created a connection to this VM. It opened right to the initial setup of the Ubuntu installation.
  6. I accepted mostly the defaults for the setup options.
  7. Note that the default UI through the remote desktop is different from the normal one.
  8. I ran the software updater to get everything up to date.
  9. I opened a terminal window and in the upper menu bar I clicked on Terminal and selected Preferences. Under the Unnamed profile, I chose Command and then checked the box that says Run command as a login shell.
  10. I opened the Settings application.
    1. NOTE: Normally, on the NUCs, we set the ops account as the default and have it log in automatically, but I could not do this on the VM.
    2. Under the Power Settings, for the Blank Screen setting, I set to Never
  11. I opened the Software and Updates application; on the Updates tab I normally set updates to be as infrequent as possible, but I could not do this on the VM.
  12. It is easier to do these steps over ssh, so I ssh'd to the machine as ops from my Mac.
  13. Then I ran sudo apt-get install -y openssh-server libssl-dev meld libglib2.0-0 libglib2.0-dev python2 python2-dev libzmq3-dev git libmodbus-dev build-essential cmake wish libncurses5-dev libncursesw5-dev ncftp net-tools
  14. I then edited the /etc/group file and added the ops user to the dialout group (an equivalent one-liner follows the excerpt):

    proxy:x:13:
    kmem:x:15:
    dialout:x:20:ops
    fax:x:21:
    voice:x:22:
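
    Equivalently (a standard alternative to editing /etc/group by hand, not what was done here):

      # add ops to the supplementary dialout group; takes effect at next login
      sudo usermod -aG dialout ops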
    
  15. Then I edited the ~/.profile file and added the following:

    # User specific environment and startup programs
    
    # add navproc bin path for either rc or wf
    PATH="$PATH:/home/ops/CoreNav/navproc-process/bin/<ship_path>"
    
    # The LCM URL is only set in the navproc-process *.ini
    # file and used by all processes and loggers at startup
    export LCM_DEFAULT_URL=NOT_SET
    
    alias gh='history | grep'
    alias ge='env | grep'
    alias gp='ps ax | grep'
    
    alias goback='cd $OLDPWD'
    
  16. Next, I configured time synchronization (see the systemd-timesyncd.service man page).

    1. I first ran systemctl status systemd-timesyncd to see the status; it reported inactive (dead) because the unit systemd-timesyncd.service is masked. I wonder if that means time sync is handled automatically by the VM? Normally, I would configure it using the steps below, but I skipped them for now on the VM:

      systemctl stop systemd-timesyncd
      systemctl start systemd-timesyncd
      
    2. Added the following lines to /etc/systemd/timesyncd.conf:

      [Time]
      
      # https://mww.mbari.org/is/howto/network/timesynch
      
      # Shore
      NTP=time-sh1.shore.mbari.org time-sh2.shore.mbari.org time-sh3.shore.mbari.org
      
      # Rachel Carson
      #NTP=time-rc1.rc.mbari.org time-rc2.rc.mbari.org
      
      # Western Flyer
      #NTP=time-wf1.wf.mbari.org time-wf2.wf.mbari.org time-wf3.wf.mbari.org
      
      FallbackNTP=ntp.ubuntu.com
      
    3. Note: comment out the time servers that don't apply to the deployment platform. For example, comment out Rachel Carson and Western Flyer if your system will be used on shore. A way to check the sync status is shown below.
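
      On a machine where timesyncd is actually active (it was masked on this VM), the sync state can be checked with standard timedatectl commands (my addition):

        timedatectl status
        timedatectl timesync-status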

    4. Normally, I would configure screen sharing, but it's already enabled on the VM.
    5. Next, it was time to install AdoptOpenJDK 8. Note: lcm-spy does not work well with openjdk-8-jdk; it hangs and has issues refreshing the graphics. AdoptOpenJDK's adoptopenjdk-8-hotspot (now Adoptium Temurin) seems to resolve these issues. To install it, I ran:
      wget -O - https://packages.adoptium.net/artifactory/api/gpg/key/public | sudo apt-key add -
      echo "deb https://packages.adoptium.net/artifactory/deb $(awk -F= '/^VERSION_CODENAME/{print$2}' /etc/os-release) main" | sudo tee /etc/apt/sources.list.d/adoptium.list
      sudo apt update
      sudo apt install temurin-8-jdk
      
  17. To verify the install, I ran java -version and javac -version

  18. Now it was time to install LCM

    1. Before doing that, I needed to make sure Python 2 was available. I installed it in the steps above, but to make it active, I needed to run:

      sudo update-alternatives --install /usr/bin/python python /usr/bin/python2 1
      sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 2
      sudo update-alternatives --config python
      
    2. Select 1 when presented with the options to choose Python 2 (a non-interactive equivalent is shown below).
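
      Equivalently (standard update-alternatives usage, not part of the original steps), the selection can be made non-interactively:

        sudo update-alternatives --set python /usr/bin/python2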

    3. I cd'd into my home directory using cd.
    4. Then I ran mkdir Libs and cd Libs.
    5. Then I ran git clone --branch v1.5.0 https://github.com/lcm-proj/lcm.git
    6. I cd'd into the newly cloned directory and ran mkdir build, cd build, cmake .., make, sudo make install, and sudo ldconfig.
    7. For things to work, we need to inform Python 2.7 about the local LCM installation (which ends up under Python 3 by default). This can be done by running the command below (a quick import check follows it):
      sudo cp -r /usr/local/lib/python3.8/site-packages/lcm /usr/lib/python2.7/dist-packages/
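
      A quick check that Python 2 can now see the module (my addition):

        # should print the path of the copied lcm package
        python -c "import lcm; print(lcm.__file__)"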
      
  19. Before moving on, it was time to check out the CoreNav repositories:

    cd
    mkdir CoreNav
    cd CoreNav
    git clone https://kgomes@bitbucket.org/mbari/navproc-common.git
    git clone https://kgomes@bitbucket.org/mbari/navproc-logger.git
    git clone https://kgomes@bitbucket.org/mbari/navproc-misc.git
    git clone https://kgomes@bitbucket.org/mbari/navproc-process.git
    
  20. Next, it was time to install libbot2

    cd
    cd CoreNav
    cd navproc-misc
    cd packages
    sudo apt-get install ./python-gtk2_2.24.0-6_amd64.deb
    gunzip ./libbot2.tgz
    tar -xvf ./libbot2.tar
    cd libbot2
    
  21. Edit the tobuild.txt file and comment out these lines:

    #bot2-vis
    #bot2-lcmgl
    #bot2-param
    #bot2-frames
    
  22. Build and install to /usr/local using

    sudo make BUILD_PREFIX=/usr/local
    
  23. Additional notes:

    1. make uninstall should uninstall the /usr/local version.
    2. Starting with lcm-1.4.0, the support jars for lcm-spy are not packaged inside lcm.jar; they are located in /usr/local/share/java. The /usr/local/bin/bot-spy script had the following line added for compatibility:

      CLASSPATH=$CLASSPATH:/usr/local/share/java/jchart2d-3.2.2.jar:/usr/local/share/java/xmlgraphics-commons-1.3.1.jar:/usr/local/share/java/jide-oss-2.9.7.jar
      
    3. The original bot-spy script was copied to /usr/local/bin/bot-spy.org

  24. For procman to work correctly, I needed to install a more recent version and change the config files to match the new format it expects. I did all this by running:

    cd ~/Libs
    git clone https://github.com/ashuang/procman.git
    cd procman
    mkdir build
    cd build
    cmake ..
    make
    sudo make install
    
  25. Running sudo make install spit out the following list of installed files:

    -- Install configuration: ""
    -- Installing: /usr/local/share/java/procman_lcmtypes.jar
    -- Installing: /usr/local/lib/python2.7/dist-packages/procman_lcm/__init__.py
    -- Installing: /usr/local/lib/python2.7/dist-packages/procman_lcm/cmd_desired_t.py
    -- Installing: /usr/local/lib/python2.7/dist-packages/procman_lcm/cmd_status_t.py
    -- Installing: /usr/local/lib/python2.7/dist-packages/procman_lcm/cmd_t.py
    -- Installing: /usr/local/lib/python2.7/dist-packages/procman_lcm/deputy_info_t.py
    -- Installing: /usr/local/lib/python2.7/dist-packages/procman_lcm/discovery_t.py
    -- Installing: /usr/local/lib/python2.7/dist-packages/procman_lcm/orders_t.py
    -- Installing: /usr/local/lib/python2.7/dist-packages/procman_lcm/output_t.py
    -- Installing: /usr/local/share/lcmtypes/procman_lcm_cmd_t.lcm
    -- Installing: /usr/local/share/lcmtypes/procman_lcm_cmd_status_t.lcm
    -- Installing: /usr/local/share/lcmtypes/procman_lcm_discovery_t.lcm
    -- Installing: /usr/local/share/lcmtypes/procman_lcm_deputy_info_t.lcm
    -- Installing: /usr/local/share/lcmtypes/procman_lcm_orders_t.lcm
    -- Installing: /usr/local/share/lcmtypes/procman_lcm_output_t.lcm
    -- Installing: /usr/local/share/lcmtypes/procman_lcm_cmd_desired_t.lcm
    -- Installing: /usr/local/share/icons/hicolor/scalable/apps/procman_icon.svg
    -- Installing: /usr/local/lib/libprocman.so
    -- Installing: /usr/local/include/procman/procman.hpp
    -- Installing: /usr/local/include/procman/procinfo.hpp
    -- Installing: /usr/local/lib/pkgconfig/procman.pc
    -- Installing: /usr/local/bin/procman-deputy
    -- Set runtime path of "/usr/local/bin/procman-deputy" to "/usr/local/lib"
    -- Installing: /usr/local/lib/python2.7/dist-packages/procman/build_prefix.py
    -- Up-to-date: /usr/local/lib/python2.7/dist-packages/procman
    -- Installing: /usr/local/lib/python2.7/dist-packages/procman/sheriff_script.py
    -- Installing: /usr/local/lib/python2.7/dist-packages/procman/sheriff_config.py
    -- Installing: /usr/local/lib/python2.7/dist-packages/procman/sheriff_gtk
    -- Installing: /usr/local/lib/python2.7/dist-packages/procman/sheriff_gtk/command_model.py
    -- Installing: /usr/local/lib/python2.7/dist-packages/procman/sheriff_gtk/deputies_treeview.py
    -- Installing: /usr/local/lib/python2.7/dist-packages/procman/sheriff_gtk/command_treeview.py
    -- Installing: /usr/local/lib/python2.7/dist-packages/procman/sheriff_gtk/__init__.py
    -- Installing: /usr/local/lib/python2.7/dist-packages/procman/sheriff_gtk/command_console.py
    -- Installing: /usr/local/lib/python2.7/dist-packages/procman/sheriff_gtk/sheriff_dialogs.py
    -- Installing: /usr/local/lib/python2.7/dist-packages/procman/sheriff_gtk/sheriff_gtk.py
    -- Installing: /usr/local/lib/python2.7/dist-packages/procman/__init__.py
    -- Installing: /usr/local/lib/python2.7/dist-packages/procman/sheriff_cli.py
    -- Installing: /usr/local/lib/python2.7/dist-packages/procman/sheriff.py
    -- Installing: /usr/local/share/procman/procman-sheriff.glade
    -- Installing: /usr/local/bin/procman-sheriff
    
  26. While the LCM types jar file was installed in /usr/local/share/java, it isn't exactly the name spy will be looking for, so I ran sudo cp /usr/local/share/java/procman_lcmtypes.jar /usr/local/share/java/lcmtypes_procman.jar

  27. Next, it was time to install the serial port drivers for the TS 16 and create the virtual serial ports that navproc will interact with.
  28. I used the remote desktop, opened Firefox, and browsed to http://www.digi.com/support/cpts8
  29. I scrolled down to "Drivers & Patches" and clicked on "RealPort Driver"
  30. I then selected "Linux" from the operating system list and clicked on the tgz version 1.9-42, which downloaded a file named 40002086_AC.tgz
  31. In a terminal, I ran:

    cd ~/Downloads
    tar -xzf 40002086_AC.tgz
    cd dgrp-1.9
    sudo ./configure
    sudo make all
    sudo make install
    sudo make postinstall
    
  32. Before we use the ports, we add a patch to make the port numbers 1-16 instead of 0-15. Edit the dgrp_udev file by running sudo vi /usr/bin/dgrp_udev and add the line:

    TMPPORT=`expr $TMPPORT + 1`
    

Note

Note that we are using single backticks (backquotes) here

Warning

The spaces before and after the + are vital!!!

  33. There is only one TS 16 that we are using, so we add those 16 ports to the navproc VM by running: sudo dgrp_cfg_node init -v -e none a 134.89.10.99 16
  34. That added 16 ports at /dev/tty_dgrp_a_XX and created symlinks at /dev/ttya00 -> /dev/ttya15.
  35. I then rebooted the machine by running sudo shutdown -r now just to make sure the ports stayed connected.
  36. Since the simulator is running on the Moxa and already feeding simulated data to the TS 16, I could check that the ports are working by running sudo dinc /dev/ttya02, and I saw DATA!!!!
  37. Lastly, we want to change the permissions so the ops account does not need sudo to access the ports. We do this by running sudo vi /etc/udev/rules.d/10-dgrp.rules, adding the GROUP, MODE and OPTIONS tags to the tty_dgrp line, and commenting out the other two, which should end up looking like this:

    KERNEL=="tty_dgrp*", PROGRAM="/usr/bin/dgrp_udev %k", SYMLINK+="%c", TAG="systemd", GROUP="dialout", MODE="0666", OPTIONS="last_rule"
    #KERNEL=="cu_dgrp*", PROGRAM="/usr/bin/dgrp_udev %k", SYMLINK+="%c", TAG="systemd"
    #KERNEL=="pr_dgrp*", PROGRAM="/usr/bin/dgrp_udev %k", SYMLINK+="%c", TAG="systemd"
    
  38. I rebooted the machine again and then from a terminal ran dinc /dev/ttya03 and got data (no sudo needed this time).

  39. Next, it was time to build the navproc software and get it running. In a terminal, I ran:

    cd ~/CoreNav/navproc-process
    make
    cd bin/sim
    vi core_nav_sim.ini
    
  40. To properly use the spy GUI with the LCM types, I had to build and install the Java versions of the LCM types by running the commands below (a packaging check follows them):

    cd ~/CoreNav/navproc-common/lcm_types
    lcm-gen -j --jpath ./java --jmkdir true *.lcm
    cd java
    javac -cp ~/Libs/lcm/build/lcm-java/lcm.jar *.java
    jar -cvf lcmtypes_corenav.jar corenav
    sudo cp lcmtypes_corenav.jar /usr/local/share/java
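
    To verify the jar was packaged as expected, standard jar usage (my addition) can list its contents:

      jar tf /usr/local/share/java/lcmtypes_corenav.jar | head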
    
  41. In order to use the new procman deputy and sheriff, I had to change all the scripts in navproc-process/bin/sim, as well as the bot-proc.cfg file, to match the new format.

  42. I edited the core_nav_sim.ini file to match the port and signal mapping shown at the top of this document.

Warning

With the navproc machine being a VM, it seemed to mess with LCM. When I tried to run navproc_start, it would error out with several messages, including "LCM self test failed". With some googling, I found that running the following gets rid of that error: sudo ifconfig lo multicast && sudo route add -net 224.0.0.0 netmask 240.0.0.0 dev lo
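
The ifconfig/route settings above do not persist across reboots. A sketch (not something done in this setup) of re-applying them at boot using the iproute2 equivalents:

    # iproute2 equivalents of the commands in the warning above;
    # run these from a startup script or a oneshot systemd unit
    ip link set lo multicast on
    ip route add 224.0.0.0/4 dev lo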

  43. Now I can start/stop navproc and it looks like it's working!!!
  44. Now it was time to set up the logr processes:

    cd ~/CoreNav/navproc-logger/src/datalogger
    make clean
    make
    sudo make install
    
  45. After this, the logr executable is installed in the /usr/local/bin directory.

  46. The actual loggers and bcservers are configured and run from other projects called corelogging and bcserver. To get all this up and running:

    cd
    git clone https://kgomes@bitbucket.org/mbari/bcserver.git
    git clone https://kgomes@bitbucket.org/mbari/corelogging.git
    cd bcserver
    make clean
    make
    sudo make install
    cd
    cd corelogging/scripts
    
  47. I then edited my_cron_with_logging and commented out the lines at the bottom that FTP files to shore and send ship locations to the ODSS and telepresence.

  48. I then loaded the crontab by running crontab < my_cron_with_logging and waited a couple of minutes for everything to start up (a quick check follows).
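
    To confirm the crontab was loaded (standard cron usage, my addition):

      crontab -l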
  49. I then looked at the log files and ran Dataprobe against them to make sure everything worked.

Note

While troubleshooting, I turned off the firewall on Ubuntu by running sudo ufw disable, as I thought it might have been blocking the bcserver ports. I think the problem I was having was not related to the firewall per se, but I left it off anyway. I should enable it and configure openings for just the ports I need.