Rachel Carson Simulator - Previous Version
This page documents the Navproc and logging system on the Rachel Carson simulator running the previously deployed versions of the navproc and logging software.
Warning
This document is for historical purposes only and shows how things used to be set up. These components were shut down once the new version of navproc was installed in March/April of 2025.
Overview
First, let's look at a high level diagram of the various components. There are basically four main components in the simulator. They are:
- An Ubuntu 24 virtual machine named `sim-rcsn-old.shore.mbari.org` that is running simulation software that generates mock signals and pushes them out of serial ports to a Moxa serial server. These simulators basically just read old log files from the Carson and push them out serial ports as the time ticks by.
- A Moxa NPort 16-port RS-232 serial device server that is connected to the VM mentioned above via Linux kernel drivers.
- A Digi Connect TS16 that takes in physical connections from the outputs of the Moxa NPort.
- An Ubuntu 20 virtual machine that is running the current version of the navproc/logr software and has the TS16 ports mounted virtually as /dev/tty devices.
```mermaid
graph LR
  subgraph Simulator
    simulator-rcsn-old["Old Simulator VM<br/>sim-rcsn-old.shore.mbari.org"] --> simulator-rcsn-old-moxa["Old Simulator Moxa<br/>134.89.10.246<br/>sim-moxa-rcsn-old.shore.mbari.org"]
    simulator-rcsn-old-moxa --> simulator-rcsn-old-ts["Digi TS16<br/>134.89.10.99<br/>sim-ts-rcsn-old"]
    simulator-rcsn-old-ts --> navproc-rcsn-sim-old["Old Navproc VM<br/>navproc-rcsn-sim-old.shore.mbari.org"]
  end
```
Installation and Setup
simulator-moxa-rcsn-old (Moxa NPort 5600)
- Before setting up the simulator and the navproc for the simulator, I re-configured the Moxa for the new setup. The manual for the Moxa is located here.
- Since this was an older Moxa, I did a factory reset by holding the reset button for 5 seconds until the LED stopped blinking, at which point the factory defaults were loaded.
- I downloaded the NPort Administrator Suite to a separate Windows 11 machine and then downloaded the latest (v3.11) firmware ROM from the Moxa support site.
- I installed the NPort Administrator Suite on the Windows machine and ran it.
- I clicked on `Search` and it found the list of all Moxa NPorts that it could see.
- I double-clicked on the Moxa that I had just reset to bring up the settings.
- I then used the NPort admin tool to upgrade the firmware to the latest that I downloaded.
- I changed the name of the Moxa to `sim-moxa-rcsn-old`.
- I changed the timezone to Pacific, set the time and date, and set the NTP server to `time-sh1.shore.mbari.org`.
- I configured the network by setting the netmask to 255.255.254.0 and the gateway to 134.89.10.1, selected DHCP, and set DNS Server 1 to 134.89.10.10 and DNS Server 2 to 134.89.12.87.
- I applied the changes which restarted the Moxa.
Note
After I had set things up, there was some strangeness with the networking in my office. After it was all worked out, Adriana changed the IP address of the NPort to 134.89.10.246.
- You can find the NPort web page located here
Warning
I have noticed that sometimes, even though the Moxa is working, the web page will not respond. I have to power off and on the Moxa for the web page to become responsive again.
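That failure mode can be caught early with a small probe run from cron. This is a hedged sketch (not part of the original setup): the URL assumes the NPort is still at 134.89.10.246, and the exit-code interpretation is split into its own function so it can be exercised without the device on the network.

```shell
#!/bin/sh
# Interpret curl's exit code; curl exit 0 means the page answered,
# 28 means the request timed out (the observed failure mode).
probe_result() {
  case "$1" in
    0)  echo "web page OK" ;;
    28) echo "web page unresponsive - power cycle the Moxa" ;;
    *)  echo "probe failed (curl exit $1)" ;;
  esac
}

check_moxa_web() {
  # --max-time bounds the whole request so a hung web server cannot stall us
  curl -s -o /dev/null --max-time 5 "http://134.89.10.246/"
  probe_result $?
}
```

Calling `check_moxa_web` from a cron job and mailing non-"OK" results would flag the condition without waiting for someone to notice the dead page.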
- Using the web interface, I changed the password to match the same `ops` password we use for other navproc machines (note the username is still `admin`).
sim-ts-rcsn-old (Digi TS 16)
The next in line is a Digi TS 16 terminal server. The serial cables from the Moxa are routed to the terminal server which is then virtually mounted on the computer that is running the navproc and logr code.
The webpage for the Digi TS 16 terminal server can be found here; ask Kevin Gomes for login credentials if you need them.
Some support documents:
Installation Steps:
- Before starting, I wanted to do a factory reset to clear any settings. I unplugged the power plug, held the reset button and plugged back in the power plug. I held the reset button for about 30 seconds and the LED on the back started blinking in a 1-5-1 pattern so I released the reset button.
- I downloaded the Digi Discovery Tool to my Windows machine and ran it.
- I could see the Digi was assigned an IP address of 134.89.10.255 (identified by the MAC address), so I double-clicked on it to open the settings and it opened the web page interface. I logged in using the default user of `root` with password `dbps`.
- On the Network settings panel, I changed the IP address from DHCP to static (IS provided) and set it to `134.89.10.99`, then clicked on Apply.
- It restarted the Digi and redirected my browser to here, and I logged in again.
- Under Network->Advanced Network Settings, I changed the `Host Name` to `simulator-ts-rcsn-old` and the DNS servers to `134.89.10.10` and `134.89.12.87`, then clicked on Apply.
- I then went to `Reboot` and rebooted the terminal server.
- I downloaded the latest firmware from the Digi site (820014374_U.bin), installed it from the web page, and then rebooted again.
- Under Users->root, I changed the password to match the `ops` password we use.
- Under System, I set the Device Identity Settings for Description and Contact.
- Under System, I set the Date and Time to use a UTC offset of -08 hours and set the first time source to `time-sh1.shore.mbari.org`.
- I rebooted again, but the time and date did not update, so I set them manually.
sim-rcsn-old-shore
- Submitted a help ticket to create a VM for the signal simulator. This should just be a vanilla Ubuntu 24 with Docker installed. The hostname of the machine will be `sim-rcsn-old.shore.mbari.org`. Here is the ticket text:
    - Request_Submitted_By: <kgomes@mbari.org>
    - VM_Name: sim-rcsn-old
    - VM_Purpose: This VM will be used to run a Python Docker container that simulates signals for the current Navproc installation on the Rachel Carson. Peter, let's make this one Ubuntu 24 if that's OK with you. I am putting one year on this VM because it should only be in operation until we get the new infrastructure online.
    - VM_Expiration: 1 year
    - VM_Support: IS_Supported
    - VM_Support_Alt_Adm:
    - VM_OS: > Refer to Comments
    - CPU: 2
    - CPU_Addl_Reason:
    - RAM: 4
    - RAM_Addl_Reason:
    - GPU_Extra:
    - GPU: NO
    - Disk_Extra:
    - Network: Internal (SHORE)
    - Resource_Priority: Low
    - Resource_Priority_Reason:
    - Conf_Firewall:
    - Conf_Desktop: YES
    - Conf_Desktop_Extra: Could you also enable Remote Desktop on this machine? I will do most work via ssh, but it will be helpful to have Desktop access.
    - Conf_Logins: local
    - Conf_Docker: YES
    - Conf_Docker_Extra:
    - Conf_WebServer: none
    - Conf_sudo: sudo for kgomes local account. Can you also create a local account named 'ops' that has sudo too? This is the way Mike and I use Navproc now and we would like to do it this way so we can run the navproc/logging stuff as ops and then we can both login directly as ops and manage it.
    - Conf_vCenter_Access:
    - VM_Comments: Ubuntu 24 please.
- Peter finished the VM installation and gave the `ops` account the default password.
- I needed to ssh into the VM first to make sure that the system prompted for a new password, by running `ssh ops@sim-rcsn-old.shore.mbari.org`.
- I changed the password to the normal ops password.
- I then brought up the Windows App on my Mac and created a connection to this VM. It opened right to the initial setup of the Ubuntu installation.
- I accepted mostly the defaults for the options in the setup.
- Note the default UI through the remote desktop is different from the normal one.
- I ran the software updater to get everything up to date.
- I opened up the `Settings` application and under the Power settings I set Screen Blank to Never.
- I then opened a terminal and ran `sudo apt-get install build-essential cmake python3-pip python3-venv -y`.
- Installing the Moxa virtual serial ports was a little different. The default kernel 6 drivers from their website did not work, so I reached out to support and they sent me the attached driver file, which I downloaded to the `Downloads` folder. I then ran:

```
tar -xvf npreal2_v6.0.1_build_24051001.tgz
cd moxa
sudo ./mxinst
cd /usr/lib/npreal2/driver
sudo ./mxaddsvr 134.89.10.246 16
```
- This creates the virtual serial ports at `/dev/ttyr00` through `/dev/ttyr0f`.
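A quick way to confirm all 16 ports came up after a driver install (or a reboot) is to check for each expected device node. This is a hedged sketch; the directory is a parameter so it can be pointed at `/dev` on the real VM:

```shell
#!/bin/sh
# Report any of the 16 expected Moxa virtual ports (ttyr00..ttyr0f)
# that are missing from the given directory.
check_moxa_ports() {
  dev_dir="$1"
  missing=0
  for hex in 00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f; do
    if [ ! -e "$dev_dir/ttyr$hex" ]; then
      echo "missing: ttyr$hex"
      missing=$((missing + 1))
    fi
  done
  echo "missing count: $missing"
}
```

On the simulator VM you would run `check_moxa_ports /dev`; a non-zero missing count means it is time for the driver reinstall procedure described in the warning below this section.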
- The code for the Python simulator is located in BitBucket here and was checked out to the `/opt` directory by changing to `/opt` and running `sudo git clone https://kgomes@bitbucket.org/mbari/corenav-simulators.git` (I used my `dev-checkout` app password).
- I then ran `sudo chown -R ops:ops corenav-simulators` to change ownership over to the ops account.
- I then ran `cd corenav-simulators` and `mkdir logs` to create a directory where log files will go.
- Next, I needed to grab some log files from the Carson so I could replay them. I cd'd into the `/opt/corenav-simulators/data` directory and ran:

```
mkdir log-files
cd log-files
mkdir carson
cd carson
scp ops@rcnavproc1.rc.mbari.org:/home/ops/corelogging/rc/data/2024339* .
scp ops@rcnavproc1.rc.mbari.org:/home/ops/corelogging/rc/data/2024340* .
scp ops@rcnavproc1.rc.mbari.org:/home/ops/corelogging/rc/data/2024341* .
sudo gunzip *.gz
```
- Then, in `/opt/corenav-simulators`, I edited the simulator_config.json file to look like the entries below.

```json
{
  "name": "Rachel Carson Data Simulators",
  "version": "0.1",
  "description": "Simulator for data that should be coming from the Rachel Carson",
  "author": "Kevin Gomes",
  "logging-level": "DEBUG",
  "logging-format": "%(asctime)s: %(message)s",
  "logging-datefmt": "%H:%M:%S",
  "try-to-send-over-serial": true,
  "simulators": [
    {
      "name": "logr_simulator",
      "config": {
        "type": "logr-file-reader",
        "log-dir": "./data/log-files/carson/",
        "file-mapping": {
          "2024340csprawfulllogr.dat": { "port": "/dev/ttyr09", "baudrate": 9600, "parity": "N", "stopbits": 1, "bytesize": 8 },
          "2024340gtdprologr.dat": { "port": "/dev/ttyr06", "baudrate": 9600, "parity": "N", "stopbits": 1, "bytesize": 8 },
          "2024340lodestarlogr.dat": { "port": "/dev/ttyr05", "baudrate": 9600, "parity": "N", "stopbits": 1, "bytesize": 8 },
          "2024340nav4dlogr.dat": { "port": "/dev/ttyr0a", "baudrate": 9600, "parity": "N", "stopbits": 1, "bytesize": 8 },
          "2024340nmeafulllogr.dat": { "port": "/dev/ttyr03", "baudrate": 9600, "parity": "N", "stopbits": 1, "bytesize": 8 },
          "2024340seabirdctdfulllogr.dat": { "port": "/dev/ttyr04", "baudrate": 9600, "parity": "N", "stopbits": 1, "bytesize": 8 },
          "2024340shipgyrofulllogr.dat": { "port": "/dev/ttyr02", "baudrate": 4800, "parity": "N", "stopbits": 1, "bytesize": 8 },
          "2024340uhsmsgfulllogr.dat": { "port": "/dev/ttyr07", "baudrate": 9600, "parity": "N", "stopbits": 1, "bytesize": 8 }
        }
      }
    }
  ]
}
```
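A hedged sketch of a sanity check worth running after editing simulator_config.json: it validates the JSON syntax and flags any serial port assigned to more than one log file (an easy mistake when copying entries). It assumes `python3` is on the path.

```shell
#!/bin/sh
# Validate a simulator config file: syntax first, then duplicate ports.
check_sim_config() {
  cfg="$1"
  # python3 -m json.tool exits non-zero on malformed JSON
  if ! python3 -m json.tool "$cfg" > /dev/null 2>&1; then
    echo "invalid JSON"
    return 1
  fi
  # pull out every /dev/ttyrXX reference and look for repeats
  dups=$(grep -o '"/dev/ttyr[0-9a-f]*"' "$cfg" | sort | uniq -d)
  if [ -n "$dups" ]; then
    echo "duplicate ports:"
    echo "$dups"
    return 1
  fi
  echo "config OK"
}
```

Running `check_sim_config /opt/corenav-simulators/simulator_config.json` before restarting the service catches the two most common editing mistakes.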
- In order to test this, I ran `./simulator.sh` in the `/opt/corenav-simulators` directory. Once I verified data was being generated properly, I killed the python process.
- Now, to get this to run as a service, I created a service startup file `/etc/systemd/system/corenav-simulators.service` which looks like:

```ini
[Unit]
Description=Python scripts to simulate data for corenav
After=network.target

[Service]
Type=forking
ExecStartPre=/bin/sleep 30
ExecStart=/opt/corenav-simulators/simulator.sh
Restart=always

[Install]
WantedBy=default.target
```
- The service can then be enabled by running the following:

```
sudo systemctl daemon-reload
sudo systemctl enable corenav-simulators.service
sudo systemctl start corenav-simulators.service
```

- I then rebooted the machine to make sure the simulators started properly.
Warning
Once, after doing a standard upgrade on the Ubuntu installation, the serial ports failed to be recognized. I had to uninstall the driver by running `sudo ./mxuinst` from the `~/Downloads/moxa/` directory and then running `sudo ./mxinst` again. After that, run `cd /usr/lib/npreal2/driver` and then `sudo ./mxaddsvr 134.89.10.246 16` to add the virtual ports back.
navproc-rcsn-sim-old
- Submitted a ticket for a second VM to run the navproc/logging software, the same as what is currently running on the Carson. This will be Ubuntu 20. Here is the ticket text:
    - Request_Submitted_By: kgomes@mbari.org
    - VM_Name: *see comments*
    - VM_Purpose: This will be running the Navproc/Logging software that is currently running on the Carson. It is the 'old' version of the navproc. We want a simulator running the old stack on hand as we migrate to the new.
    - VM_Expiration: 1 year
    - VM_Support: IS_Supported
    - VM_Support_Alt_Adm:
    - VM_OS: Ubuntu 20.04 LTS
    - CPU: 2
    - CPU_Addl_Reason:
    - RAM: 4
    - RAM_Addl_Reason:
    - GPU_Extra:
    - GPU: NO
    - Disk_Extra:
    - Network: Internal (SHORE)
    - Resource_Priority: Low
    - Resource_Priority_Reason:
    - Conf_Firewall:
    - Conf_Desktop: YES
    - Conf_Desktop_Extra: Need to have Remote Desktop available on this one. Thanks!
    - Conf_Logins: local
    - Conf_Docker_Extra:
    - Conf_Docker: NO
    - Conf_WebServer: none
    - Conf_sudo: sudo for kgomes local account. Can you also create a local account named 'ops' that has sudo too? This is the way Mike and I use Navproc now and we would like to do it this way so we can run the navproc/logging stuff as ops and then we can both login directly as ops and manage it.
    - Conf_vCenter_Access:
    - VM_Comments: Can we name the VM 'navproc-rcsn-sim-old'? I know it's long, but it's not a long term VM and it's really helpful if it's explicit
- Peter finished the VM installation and gave the `ops` account the default password.
- I needed to ssh into the VM first to make sure that the system prompted for a new password, by running `ssh ops@navproc-rcsn-sim-old.shore.mbari.org`.
- I changed the password to the normal ops password.
- I then brought up the Windows App on my Mac and created a connection to this VM. It opened right to the initial setup of the Ubuntu installation.
- I accepted mostly the defaults for the options in the setup.
- Note the default UI through the remote desktop is different from the normal one.
- I ran the software updater to get everything up to date.
- I opened a terminal window; in the upper menu bar I clicked on `Terminal` and selected `Preferences`. Under the Unnamed profile, I chose Command and then checked the box that says `Run command as a login shell`.
- I opened the `Settings` application.
    - NOTE: Normally, on the NUCs, we set the ops account to default and have it log in automatically, but I could not do this on the VM.
    - Under the `Power` settings, I set `Blank Screen` to `Never`.
- I opened the `Software and Updates` application; on the `Updates` tab, I normally set the updates to be as infrequent as possible, but I could not do this on the VM.
- It is easier to do these steps over ssh, so I ssh'd to the machine as ops from my Mac.
- Then I ran:

```
sudo apt-get install -y openssh-server libssl-dev meld libglib2.0-0 libglib2.0-dev python2 python2-dev libzmq3-dev git libmodbus-dev build-essential cmake wish libncurses5-dev libncursesw5-dev ncftp net-tools
```
- I then edited the `/etc/group` file and added the ops user to the dialout group:

```
proxy:x:13:
kmem:x:15:
dialout:x:20:ops
fax:x:21:
voice:x:22:
```
- Then I edited the `~/.profile` file and added the following:

```
# User specific environment and startup programs

# add navproc bin path for either rc or wf
PATH="$PATH:/home/ops/CoreNav/navproc-process/bin/<ship_path>"

# The LCM URL is only set in the navproc-process *.ini
# file and used by all processes and loggers at startup
export LCM_DEFAULT_URL=NOT_SET

alias gh='history | grep'
alias ge='env | grep'
alias gp='ps ax | grep'
alias goback='cd $OLDPWD'
```

- Next, I configured time synchronization. See the systemd-timesyncd.service man page.
- I first ran `systemctl status systemd-timesyncd` to see the status. It said it was inactive (dead) and gave the reason `Unit systemd-timesyncd.service is masked`. I wonder if that means the timesync is handled automatically by the VM? Normally, I would configure it using the steps below, but I skipped this for now on the VM:

```
systemctl stop systemd-timesyncd
systemctl start systemd-timesyncd
```
- Added the following lines to /etc/systemd/timesyncd.conf:

```
[Time]
# https://www.mbari.org/is/howto/network/timesynch
# Shore
NTP=time-sh1.shore.mbari.org time-sh2.shore.mbari.org time-sh3.shore.mbari.org
# Rachel Carson
#NTP=time-rc1.rc.mbari.org time-rc2.rc.mbari.org
# Western Flyer
#NTP=time-wf1.wf.mbari.org time-wf2.wf.mbari.org time-wf3.wf.mbari.org
FallbackNTP=ntp.ubuntu.com
```
- Note: comment out the time servers that don't apply to the deployment platform. For example, comment out Rachel Carson and Western Flyer if your system will be used on shore.
- Normally, I would configure screen sharing, but it's already enabled on the VM.
- Next, it was time to install AdoptOpenJDK 8. Note: lcm-spy does not work well with openjdk-8-jdk; it hangs and has issues refreshing the graphics. AdoptOpenJDK's adoptopenjdk-8-hotspot (now Adoptium) seems to resolve these issues. To install it, I ran:

```
wget -O - https://packages.adoptium.net/artifactory/api/gpg/key/public | sudo apt-key add -
echo "deb https://packages.adoptium.net/artifactory/deb $(awk -F= '/^VERSION_CODENAME/{print$2}' /etc/os-release) main" | sudo tee /etc/apt/sources.list.d/adoptium.list
sudo apt update
sudo apt install temurin-8-jdk
```
- To verify the install, I ran `java -version` and `javac -version`.
- Now it was time to install LCM.
- Before doing that, I needed to make sure Python 2 was available. I installed it in the steps above, but in order to make it the active `python`, I needed to run:

```
sudo update-alternatives --install /usr/bin/python python /usr/bin/python2 1
sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 2
sudo update-alternatives --config python
```

- I selected `1` when presented with options, to choose Python 2.
- I cd'd into my home directory using `cd`.
- Then I ran `mkdir Libs`, `cd Libs`.
- Then I ran `git clone --branch v1.5.0 https://github.com/lcm-proj/lcm.git`.
- I cd'd into the newly cloned directory and ran `mkdir build`, `cd build`, `cmake ..`, `make`, `sudo make install`, `sudo ldconfig`.
- In order for things to work, we need to inform Python 2.7 about the local LCM installation (which ends up under Python 3 by default). This can be done by running `sudo cp -r /usr/local/lib/python3.8/site-packages/lcm /usr/lib/python2.7/dist-packages/`.
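Since the LCM build depends on Python 2 being the active `python`, a quick check of the alternatives setting is worthwhile before running `cmake`. This is a hedged sketch; the helper just reports the major version of whatever interpreter path it is handed:

```shell
#!/bin/sh
# Print the major version of the given Python interpreter.
# On the VM you would call: python_major python
# and expect "2" after the update-alternatives selection above.
python_major() {
  "$1" -c 'import sys; print(sys.version_info[0])'
}
```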
- Before moving on, it was time to check out the CoreNav repositories:

```
cd
mkdir CoreNav
cd CoreNav
git clone https://kgomes@bitbucket.org/mbari/navproc-common.git
git clone https://kgomes@bitbucket.org/mbari/navproc-logger.git
git clone https://kgomes@bitbucket.org/mbari/navproc-misc.git
git clone https://kgomes@bitbucket.org/mbari/navproc-process.git
```
- Next, it was time to install libbot2:

```
cd
cd CoreNav
cd navproc-misc
cd packages
sudo apt-get install ./python-gtk2_2.24.0-6_amd64.deb
gunzip ./libbot2.tgz
tar -xvf ./libbot2.tar
cd libbot2
```
- Edit the `tobuild.txt` file and comment out these:

```
#bot2-vis
#bot2-lcmgl
#bot2-param
#bot2-frames
```
- Build and install to /usr/local using `sudo make BUILD_PREFIX=/usr/local`.
- Additional notes:
    - `make uninstall` should uninstall the /usr/local version.
    - Starting with lcm-1.4.0, the support jars for lcm-spy are not packaged inside lcm.jar; they are located in /usr/local/share/java. The /usr/local/bin/bot-spy script had the following line added for compatibility:

```
CLASSPATH=$CLASSPATH:/usr/local/share/java/jchart2d-3.2.2.jar:\
/usr/local/share/java/xmlgraphics-commons-1.3.1.jar:\
/usr/local/share/java/jide-oss-2.9.7.jar
```

    - The original bot-spy script was copied to /usr/local/bin/bot-spy.org.
- In order for procman to work correctly, I needed to install a more recent version and change the config files to match the new format it was expecting. I did all this by running:

```
cd ~/Libs
git clone https://github.com/ashuang/procman.git
cd procman
mkdir build
cd build
cmake ..
make
sudo make install
```
- This spit out the following list of installed files:

```
-- Install configuration: ""
-- Installing: /usr/local/share/java/procman_lcmtypes.jar
-- Installing: /usr/local/lib/python2.7/dist-packages/procman_lcm/__init__.py
-- Installing: /usr/local/lib/python2.7/dist-packages/procman_lcm/cmd_desired_t.py
-- Installing: /usr/local/lib/python2.7/dist-packages/procman_lcm/cmd_status_t.py
-- Installing: /usr/local/lib/python2.7/dist-packages/procman_lcm/cmd_t.py
-- Installing: /usr/local/lib/python2.7/dist-packages/procman_lcm/deputy_info_t.py
-- Installing: /usr/local/lib/python2.7/dist-packages/procman_lcm/discovery_t.py
-- Installing: /usr/local/lib/python2.7/dist-packages/procman_lcm/orders_t.py
-- Installing: /usr/local/lib/python2.7/dist-packages/procman_lcm/output_t.py
-- Installing: /usr/local/share/lcmtypes/procman_lcm_cmd_t.lcm
-- Installing: /usr/local/share/lcmtypes/procman_lcm_cmd_status_t.lcm
-- Installing: /usr/local/share/lcmtypes/procman_lcm_discovery_t.lcm
-- Installing: /usr/local/share/lcmtypes/procman_lcm_deputy_info_t.lcm
-- Installing: /usr/local/share/lcmtypes/procman_lcm_orders_t.lcm
-- Installing: /usr/local/share/lcmtypes/procman_lcm_output_t.lcm
-- Installing: /usr/local/share/lcmtypes/procman_lcm_cmd_desired_t.lcm
-- Installing: /usr/local/share/icons/hicolor/scalable/apps/procman_icon.svg
-- Installing: /usr/local/lib/libprocman.so
-- Installing: /usr/local/include/procman/procman.hpp
-- Installing: /usr/local/include/procman/procinfo.hpp
-- Installing: /usr/local/lib/pkgconfig/procman.pc
-- Installing: /usr/local/bin/procman-deputy
-- Set runtime path of "/usr/local/bin/procman-deputy" to "/usr/local/lib"
-- Installing: /usr/local/lib/python2.7/dist-packages/procman/build_prefix.py
-- Up-to-date: /usr/local/lib/python2.7/dist-packages/procman
-- Installing: /usr/local/lib/python2.7/dist-packages/procman/sheriff_script.py
-- Installing: /usr/local/lib/python2.7/dist-packages/procman/sheriff_config.py
-- Installing: /usr/local/lib/python2.7/dist-packages/procman/sheriff_gtk
-- Installing: /usr/local/lib/python2.7/dist-packages/procman/sheriff_gtk/command_model.py
-- Installing: /usr/local/lib/python2.7/dist-packages/procman/sheriff_gtk/deputies_treeview.py
-- Installing: /usr/local/lib/python2.7/dist-packages/procman/sheriff_gtk/command_treeview.py
-- Installing: /usr/local/lib/python2.7/dist-packages/procman/sheriff_gtk/__init__.py
-- Installing: /usr/local/lib/python2.7/dist-packages/procman/sheriff_gtk/command_console.py
-- Installing: /usr/local/lib/python2.7/dist-packages/procman/sheriff_gtk/sheriff_dialogs.py
-- Installing: /usr/local/lib/python2.7/dist-packages/procman/sheriff_gtk/sheriff_gtk.py
-- Installing: /usr/local/lib/python2.7/dist-packages/procman/__init__.py
-- Installing: /usr/local/lib/python2.7/dist-packages/procman/sheriff_cli.py
-- Installing: /usr/local/lib/python2.7/dist-packages/procman/sheriff.py
-- Installing: /usr/local/share/procman/procman-sheriff.glade
-- Installing: /usr/local/bin/procman-sheriff
```
- While the LCM types jar file was installed in /usr/local/share, it isn't exactly the format spy will be looking for, so I ran `sudo cp /usr/local/share/java/procman_lcmtypes.jar /usr/local/share/java/lcmtypes_procman.jar`.
- Next, it was time to install the serial port drivers for the TS 16 and create the virtual serial ports that navproc will interact with.
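The copy above works because spy expects type jars to follow the `lcmtypes_*.jar` naming convention. A hedged sketch of a helper to see which jars in a directory already match that convention (on the VM the directory would be /usr/local/share/java):

```shell
#!/bin/sh
# List jars in a directory that match the lcmtypes_*.jar naming
# convention that lcm-spy looks for, and print a total.
list_lcmtypes_jars() {
  dir="$1"
  found=0
  for jar in "$dir"/lcmtypes_*.jar; do
    [ -e "$jar" ] || continue   # glob matched nothing
    echo "found: $(basename "$jar")"
    found=$((found + 1))
  done
  echo "total: $found"
}
```

If a types jar you expect (e.g. the procman or corenav one) is not in the list, spy will not decode those messages.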
- Using the remote desktop, I opened Firefox and browsed to `http://www.digi.com/support/cpts8`.
- I scrolled down to "Drivers & Patches" and clicked on "RealPort Driver".
- I then selected "Linux" from the operating system list and clicked on the tgz version 1.9-42, which downloaded a file named `40002086_AC.tgz`.
- In a terminal, I ran:

```
cd ~/Downloads
tar -xzf 40002086_AC.tgz
cd dgrp-1.9
sudo ./configure
sudo make all
sudo make install
sudo make postinstall
```
- Before we use the ports, we add a patch to make the port numbers 1-16 instead of 0-15. Edit the dgrp_udev file by running `sudo vi /usr/bin/dgrp_udev` and add the line `` TMPPORT=`expr $TMPPORT + 1` ``
Note
Note that we are using single back ticks here
Warning
The spaces before and after the + are vital!!!
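The reason the spaces matter is that `expr` treats each of its arguments as a separate token, so `$TMPPORT`, `+`, and `1` must be three arguments; `expr $TMPPORT+1` would be a single string and no addition happens. A hedged sketch of the renumbering the patch performs, isolated so it can be tested:

```shell
#!/bin/sh
# Shift the driver's 0-based port index to the 1-based numbering
# the navproc configs expect (what the dgrp_udev patch does).
renumber_port() {
  TMPPORT="$1"
  TMPPORT=`expr $TMPPORT + 1`   # the patched line, single backticks as noted
  echo "$TMPPORT"
}
```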
- There is only one TS 16 that we are using, so we add those 16 ports to the navproc VM by running `sudo dgrp_cfg_node init -v -e none a 134.89.10.99 16`.
- That added 16 ports at `/dev/tty_dgrp_a_XX` and then created symlinks at `/dev/ttya00` -> `/dev/ttya15`.
- I then rebooted the machine by running `sudo shutdown -r now` just to make sure the ports stayed connected.
- Since the simulator is running on the Moxa and is already feeding simulated data to the TS16, I could check that the ports are working by running `sudo dinc /dev/ttya02`, and I see DATA!!!!
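For scripted checks, a non-interactive alternative to dinc can be handy: read up to a few bytes from the port with a time limit and report whether anything arrived. This is a hedged sketch (not part of the original setup); on the VM the argument would be one of the /dev/ttyaXX ports.

```shell
#!/bin/sh
# Report whether a port (or any readable file/device) produced data
# within 3 seconds. Reads at most 64 bytes so it never blocks long.
port_has_data() {
  port="$1"
  bytes=$(timeout 3 head -c 64 "$port" | wc -c)
  if [ "$bytes" -gt 0 ]; then
    echo "data flowing"
  else
    echo "no data"
  fi
}
```

Looping `port_has_data` over all 16 ports would confirm the whole Moxa-to-TS16 chain in one pass instead of spot-checking with dinc.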
- Lastly, we want to change the permissions so the ops account does not need sudo to access the ports. We do this by running `sudo vi /etc/udev/rules.d/10-dgrp.rules`, adding the GROUP, MODE and OPTIONS tags to the tty_dgrp line, and commenting out the other two, which should end up looking like this:

```
KERNEL=="tty_dgrp*", PROGRAM="/usr/bin/dgrp_udev %k", SYMLINK+="%c", TAG="systemd", GROUP="dialout", MODE="0666", OPTIONS="last_rule"
#KERNEL=="cu_dgrp*", PROGRAM="/usr/bin/dgrp_udev %k", SYMLINK+="%c", TAG="systemd"
#KERNEL=="pr_dgrp*", PROGRAM="/usr/bin/dgrp_udev %k", SYMLINK+="%c", TAG="systemd"
```
- I rebooted the machine again and then from a terminal ran `dinc /dev/ttya03` and got data (did not need sudo this time).
- Next, it was time to build the navproc software and get it running. In a terminal, I ran:

```
cd ~/CoreNav/navproc-process
make
cd bin/sim
vi core_nav_sim.ini
```
- To properly use the spy GUI with the LCM types, I had to build and install the Java versions of the LCM types by running:

```
cd ~/CoreNav/navproc-common/lcm_types
lcm-gen -j --jpath ./java --jmkdir true *.lcm
cd java
javac -cp ~/Libs/lcm/build/lcm-java/lcm.jar *.java
jar -cvf lcmtypes_corenav.jar corenav
sudo cp lcmtypes_corenav.jar /usr/local/share/java
```
- In order to use the new procman deputy and sheriff, I had to change all the scripts in `navproc-process/bin/sim` as well as the bot-proc.cfg file to match the new format.
- I edited the core_nav_sim.ini file to match the port and signal mapping shown at the top of this document.
Warning
With the navproc machine being a VM, it seemed to mess with LCM. When I tried to run navproc_start, it would error out with a bunch of stuff including "LCM self test failed". With some googling, I found that running the following two commands gets rid of that error: `sudo ifconfig lo multicast` and `sudo route add -net 224.0.0.0 netmask 240.0.0.0 dev lo`.
- Now I can start/stop navproc and it looks like it's working!!!
- Now it was time to set up the logr processes:

```
cd ~/CoreNav/navproc-logger/src/datalogger
make clean
make
sudo make install
```
- After this, the logr executable will be installed in the `/usr/local/bin` directory.
- The actual loggers and bcservers are configured and run by other projects called `corelogging` and `bcserver`. To get all this up and running:

```
cd
git clone https://kgomes@bitbucket.org/mbari/bcserver.git
git clone https://kgomes@bitbucket.org/mbari/corelogging.git
cd bcserver
make clean
make
sudo make install
cd
cd corelogging/scripts
```
- I then edited the my_cron_with_logging file and commented out the lines at the bottom that FTP files to shore and send ship locations to the ODSS and telepresence.
- I then loaded the crontab by running `crontab < my_cron_with_logging` and waited a couple of minutes for everything to start up.
- I then looked at the log files and ran Dataprobe against them to make sure everything worked.
Note
While troubleshooting, I turned off the firewall on Ubuntu by running `sudo ufw disable`, as I thought it might have been blocking the bcserver ports. I think the problem I was having was not related to the firewall per se, but I left it off anyway. I should enable it and configure openings for just the ports I need.
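A hedged sketch of what that re-enabling could look like. The actual bcserver/LCM port numbers are not recorded in this document, so the arguments below are placeholders the operator must fill in; the helper just prints the ufw commands so the plan can be reviewed before running anything with sudo.

```shell
#!/bin/sh
# Emit the ufw commands for a deny-by-default firewall with a small
# allow list. Ports are passed as arguments, e.g. "22/tcp" -- the real
# bcserver ports would need to be determined first.
build_ufw_rules() {
  echo "ufw default deny incoming"
  for p in "$@"; do
    echo "ufw allow $p"
  done
  echo "ufw enable"
}
```

Once the port list is confirmed, the output could be reviewed and then piped to `sudo sh` to apply it.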