Development
This is a guide for developers working on the navproc and logging systems at MBARI. There are two sections maintained here to capture the differences in the development stack for the two main ships (the Rachel Carson and, soon, the David Packard).
Rachel Carson
For the software running on the Rachel Carson, there are four types of processes that run to support data management. They are:
- navproc processes, which are LCM publishers or serial writers that are started and managed by procman.
- logr processes, which consist of LCM subscribers that read in messages from the navproc processes, create log files, and then republish the messages.
- bcserver processes which consume the messages being published by the logr processes and stage the latest data items so that clients can grab the latest data over UDP without crossing into the ROV subnet.
- dm-to-amqp process which takes data from the bcserver and pushes summary messages to the AMQP server on shore.
Code
Class Diagram
Navproc
The navproc-related code is stored in BitBucket and consists of two main repositories: navproc-common and navproc-process. The navproc processes are managed with procman from the libbot2 library. When libbot2 is built (as specified in the computer_setup.md file), an executable named 'bot-procman-deputy' is installed in /usr/local/bin. This deputy process is started and (by a mechanism not fully documented here) reads in the bot-proc.cfg file located under the bin/(platform) directory of the navproc-process repository. This configuration file dictates which processes are to be started, what 'host' they are associated with, and whether they should be automatically respawned if they die.
To develop these processes, you need to build up a computer using the computer setup instructions and then check out the navproc-common and navproc-process repos. In the navproc-process repo, you can then run 'make', which generates the executables that will be started by bot-procman-deputy. Note that as part of the make, the LCM types defined in navproc-common are processed into _t.hpp files located in a corenav directory local to each publisher defined in the src directory. As an example, consider the NMEA GPS LCM type: when 'make' runs in the navproc-process directory, a 'corenav' subdirectory gets created under the nmeaGPS_pub directory in navproc-process/src, all the LCM types are processed, and (among others) a nmeaGPS_t.hpp file is created. For each LCM type, there is also an equivalent *_pub.cpp file that is used to publish LCM messages for that specific type, along with a *_frame.cpp file for the type. The make process also compiles the *_pub.cpp file into an executable, which is what is run by the procman processes. As a general rule, these processes follow the pattern below (a rough code sketch follows the list):
- The *_pub process starts up and creates an lcm_interface object by passing in the lcm url from the configuration file. There is then an initialize step that constructs a native LCM object using that URL.
- It also creates a type-specific lcm publisher (nmeaGPS, for example), a generic serial string publisher, and a navproc publisher.
- It then adds those publishers to the lcm_interface, which assigns the LCM object created from the lcm url to each publisher so that the LCM object is shared across all publishers.
- A serial port object is created using the parameters from the configuration file and then a loop is started that looks for data on that port.
- If no data is received, a stale message gets published after a timeout.
- If data is received, the raw string message gets published.
- Once the string message gets published, the process uses the appropriate *_frame.cpp methods to try to parse the message into the LCM type defined for that type. If the message gets parsed OK, it is then published using the type-specific publisher.
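The sketch below illustrates this pattern, but expressed directly against the LCM C++ API rather than through the lcm_interface/publisher wrapper classes used in the real code. It is a simplified, hypothetical example: the helper functions are stand-ins (stdin in place of a serial port, parsing omitted), the channel names are illustrative, and the corenav namespace for the generated type is assumed from the directory name. Only the lcmUrl and the nmeaGPS_t.hpp header come from the examples in this document.

// Sketch of the *_pub pattern, expressed directly against the LCM C++ API.
// The real publishers route these steps through the lcm_interface / publisher
// classes described above; the helpers below are simplified stand-ins.
#include <lcm/lcm-cpp.hpp>
#include <iostream>
#include <string>
#include "corenav/nmeaGPS_t.hpp"   // generated into the local corenav dir by 'make'

// Stand-in for the serial_port object: here we just read lines from stdin.
static bool read_serial_line(std::string& line) {
    return static_cast<bool>(std::getline(std::cin, line));
}

// Stand-in for the *_frame.cpp parsing logic (actual NMEA parsing omitted).
static bool parse_nmea_gps(const std::string& /*line*/, corenav::nmeaGPS_t& /*msg*/) {
    return false;
}

int main() {
    // In the real process the URL comes from lcmUrl in the core_nav_*.ini file.
    lcm::LCM lcm("udpm://239.255.76.67:4224?ttl=0");
    if (!lcm.good()) return 1;

    std::string line;
    while (true) {
        if (!read_serial_line(line)) {
            // Serial timeout with no data: the real process publishes a
            // "stale" status message here before trying again.
            break;
        }

        // 1. Publish the raw serial string (channel names are illustrative).
        lcm.publish("NMEA_GPS_RAW", line.data(), line.size());

        // 2. Parse it into the typed LCM message and publish it with the
        //    type-specific publisher.
        corenav::nmeaGPS_t msg;
        if (parse_nmea_gps(line, msg)) {
            lcm.publish("NMEA_GPS", &msg);
        }
    }
    return 0;
}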
Logr
The logr processes are built and managed by code from three different repositories. The logr process code itself is located in the navproc-logger repository (which depends on the navproc-common repository), but the scripts, config files, and crontab entries are managed in the corelogging repository. The processes are started by cron, which passes two parameters to each logger: the location of the .ini file and the task name. The task name has to match a section header in the .ini file so that the configuration for that logr can be read in correctly. The configuration is read in, and if the location of the navproc .ini file is found, that is read in as well. There is also a .cfg file associated with the task name, and that is read in too.
That configuration is then handed to the load_config method of the logr_item_list object. In that load method, each line of the .cfg file is parsed into a logrItemNode struct and a linked list of these items is built; this is basically just a way to convert the information from the .cfg file into a linked list. Once this linked list is created, the process marches through the list and creates a subscriberNode struct for each list item, storing it in a list of subscriberNodes held in the SubscriberNodeList code. This again just stores information in a struct that will be used later to actually create the subscriber. Once the struct list is created, the code iterates over that list and creates subscribers for each channel. After the subscribers are created (and registered with the lcm_interface), the code calls start() on the lcm interface. At this point, the subscribers start to receive messages and the type-specific handlers process them. For example, the lcm_nmeaGPS_sub class decodes the LCM message and populates the type-specific data.
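As a very rough illustration of that cfg-to-list conversion step, the sketch below parses LOG lines of a logr .cfg file (the format shown later in this section) into a small struct and collects them in a list. The struct and field names here are guesses for illustration and do not match the real logrItemNode definition.

// Hypothetical sketch of turning LOG lines from a logr .cfg file into a list
// of item structs; names are illustrative, not the real logrItemNode.
#include <fstream>
#include <list>
#include <sstream>
#include <string>

struct logr_item {
    std::string channel;     // e.g. NMEA_GPS
    std::string subscriber;  // e.g. lcm_nmeaGPS_sub
    std::string item;        // e.g. LAT
    std::string type;        // e.g. Double
    std::string label;       // e.g. SHIP.GPS.LAT
    std::string format;      // e.g. %3.6lf
};

std::list<logr_item> load_config(const std::string& cfg_path) {
    std::list<logr_item> items;
    std::ifstream cfg(cfg_path);
    std::string line;
    while (std::getline(cfg, line)) {
        std::istringstream fields(line);
        std::string keyword;
        fields >> keyword;
        if (keyword != "LOG") continue;   // skip UPDATE, comments, blank lines
        logr_item it;
        fields >> it.channel >> it.subscriber >> it.item
               >> it.type >> it.label >> it.format;
        items.push_back(it);              // remaining columns ignored in this sketch
    }
    return items;
}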
There is a class called item_status, defined in lcm_subscriber, that is used to track the status of the LCM message stream. Each type-specific subscriber class overrides the get_item methods and sets values on the incoming status item. From what I can tell, the subscriber reads data from the decoded LCM message and then sets the items on the status object, but the type-specific class does not otherwise make use of the status_item.
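A minimal sketch of that override pattern is shown below; the base class, method signatures, and field names are assumptions for illustration and will differ from the real lcm_subscriber / item_status code.

// Hypothetical sketch of a type-specific subscriber filling in an item_status
// from the decoded LCM message; all class and field names are illustrative.
#include <string>

// Stand-in for the decoded LCM type (the real one is the generated nmeaGPS_t).
struct gps_fix { double lat = 0.0, lon = 0.0; long long utime = 0; };

struct item_status {
    double value = 0.0;       // latest value for the requested item
    long long utime = 0;      // timestamp of the message it came from
    bool stale = true;        // true until fresh data has been seen
};

class lcm_subscriber {
public:
    virtual ~lcm_subscriber() = default;
    // Base hook: fill in the status for a named item (e.g. "LAT", "LON").
    virtual bool get_item(const std::string& name, item_status& status) = 0;
};

class lcm_nmeaGPS_sub : public lcm_subscriber {
public:
    // Called by the LCM handler after a message on the GPS channel is decoded.
    void on_message(const gps_fix& msg) { last_ = msg; have_data_ = true; }

    bool get_item(const std::string& name, item_status& status) override {
        if (!have_data_) return false;
        if (name == "LAT")      status.value = last_.lat;
        else if (name == "LON") status.value = last_.lon;
        else return false;
        status.utime = last_.utime;
        status.stale = false;
        return true;
    }

private:
    gps_fix last_{};
    bool have_data_ = false;
};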
BCServer
The bcserver code is kept in the bcserver repository.
DM to AMQP
The dm-to-amqp code and management scripts are maintained in the corelogging repository and are located under the common/dm-to-amqp directory.
Process Startup and Monitoring
When the computer is booted, at some point the bot-procman-deputy kicks off the processes defined in the /home/ops/CoreNav/navproc-process/bin/sim/bot_proc.cfg file. This file has entries like:
cmd "nmeaGPS_pub"
{
exec = "$APP_PATH/nmeaGPS_pub/nmeaGPS_pub $CORENAV_INI";
host = "navproc";
auto_respawn = "false";
}
which defines a single process to be started automatically. APP_PATH points to the /home/ops/CoreNav/navproc-process/src directory and CORENAV_INI is the /home/ops/CoreNav/navproc-process/bin/sim/core_nav_sim.ini file. This file contains all the configuration for the processes. It defines the URL where the LCM messages will be published over UDP (lcmUrl=udpm://239.255.76.67:4224?ttl=0) and has sections for each of the processes. For example, the section for the NMEA GPS process above looks like:
##############################################################################
[nmeaGPS_pub]
##############################################################################
serialPort = /dev/ttya01
serialTimeout = 5
This basically tells the process which serial port to read from and what timeout to use. There are more complex configurations and options for serial outputs, etc. If you run ps -ef on the navproc machine, you will see a list of processes that looks like this:
ops 2836 1965 0 Feb22 pts/0 00:06:01 bot-procman-deputy -l navproc.2023-Feb-22_08.47.36.log -n navproc
ops 2842 2836 0 Feb22 pts/1 00:00:00 /home/ops/CoreNav/navproc-process/src/zmq_proxy/proxy -f 5555 -b 5556
ops 2845 2836 0 Feb22 pts/2 00:04:09 /home/ops/CoreNav/navproc-process/src/octans_pub/octans_pub /home/ops/CoreNav/navproc-process/bin/sim/core_nav_sim.ini
ops 2846 2836 0 Feb22 pts/3 00:04:44 /home/ops/CoreNav/navproc-process/src/digiquartz_pub/digiquartz_pub /home/ops/CoreNav/navproc-process/bin/sim/core_nav_sim.ini
ops 2847 2836 0 Feb22 pts/4 00:03:23 /home/ops/CoreNav/navproc-process/src/mbaridepth_pub/mbaridepth_pub /home/ops/CoreNav/navproc-process/bin/sim/core_nav_sim.ini
ops 2848 2836 0 Feb22 pts/5 00:02:11 /home/ops/CoreNav/navproc-process/src/seabirdctd_pub/seabirdctd_pub /home/ops/CoreNav/navproc-process/bin/sim/core_nav_sim.ini
ops 2849 2836 0 Feb22 pts/6 00:02:10 /home/ops/CoreNav/navproc-process/src/nmeaGPS_pub/nmeaGPS_pub /home/ops/CoreNav/navproc-process/bin/sim/core_nav_sim.ini
ops 2850 2836 0 Feb22 pts/7 00:01:18 /home/ops/CoreNav/navproc-process/src/ventanaCSP_pub/ventanaCSP_pub /home/ops/CoreNav/navproc-process/bin/sim/core_nav_sim.ini
ops 2851 2836 0 Feb22 pts/8 00:01:32 /home/ops/CoreNav/navproc-process/src/shipGyro_pub/shipGyro_pub /home/ops/CoreNav/navproc-process/bin/sim/core_nav_sim.ini
ops 2858 2836 0 Feb22 pts/9 00:01:32 /home/ops/CoreNav/navproc-process/src/winfrog_pub/winfrog_pub /home/ops/CoreNav/navproc-process/bin/sim/core_nav_sim.ini
ops 2861 2836 0 Feb22 pts/10 00:04:01 /home/ops/CoreNav/navproc-process/src/winfrog_ser_out/winfrog_ser_out /home/ops/CoreNav/navproc-process/bin/sim/core_nav_sim.ini
ops 2862 2836 0 Feb22 pts/11 00:01:33 /home/ops/CoreNav/navproc-process/src/dvecsIn_pub/dvecsIn_pub /home/ops/CoreNav/navproc-process/bin/sim/core_nav_sim.ini
ops 2863 2836 0 Feb22 pts/12 00:08:28 /home/ops/CoreNav/navproc-process/src/dvecsOut_pub/dvecsOut_pub /home/ops/CoreNav/navproc-process/bin/sim/core_nav_sim.ini
ops 2864 2836 0 Feb22 pts/13 00:03:27 /home/ops/CoreNav/navproc-process/src/lapbox_hd_pub/lapbox_hd_pub /home/ops/CoreNav/navproc-process/bin/sim/core_nav_sim.ini
ops 2865 2836 0 Feb22 pts/14 00:04:57 /home/ops/CoreNav/navproc-process/src/vorne_ser_out/vorne_ser_out /home/ops/CoreNav/navproc-process/bin/sim/core_nav_sim.ini
These are the various navproc processes that are started by the procman deputy. To get a graphical view of (and management controls for) these processes, you can run the procman sheriff from the console/VNC by double-clicking the navprocGUI shortcut on the desktop. If you expand the 'groups', you will see a list of these processes and their status, and you can start/stop them from the GUI (see figure below).

Once the processes are started, they start reading from the various ports defined in the core_nav_xxx.ini file and then start spitting out UDP datagrams at the lcmUrl.
If you want to look at the various LCM messages, you can run the lcm-spy tool by double-clicking the navprocSpy icon on the desktop; this opens lcm-spy already pointed at the LCM URL. You can then click on items to inspect the messages coming across the various channels (see figure below).

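Besides lcm-spy, a small program can be used to confirm that messages are flowing. The sketch below subscribes to one channel with the LCM C++ API and prints the size of each message as it arrives; the URL is the sim lcmUrl shown earlier in this section, and the channel name is only illustrative.

// Minimal untyped LCM listener: prints the size of every message seen on a
// channel. Useful as a quick check that the navproc publishers are running.
#include <lcm/lcm-cpp.hpp>
#include <cstdio>
#include <string>

class RawDumper {
public:
    void onMessage(const lcm::ReceiveBuffer* rbuf, const std::string& channel) {
        std::printf("%s: %u bytes\n", channel.c_str(), rbuf->data_size);
    }
};

int main() {
    lcm::LCM lcm("udpm://239.255.76.67:4224?ttl=0");
    if (!lcm.good()) return 1;

    RawDumper dumper;
    lcm.subscribe("NMEA_GPS", &RawDumper::onMessage, &dumper);

    // Block and dispatch messages as they arrive.
    while (lcm.handle() == 0) {}
    return 0;
}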
The next set of processes to start are the logr processes, which are started by cron. If you look at the crontab by running crontab -l, you will see a number of entries, but specifically there are lines that look like:
* * * * * /home/ops/corelogging/sim/scripts/startLogr shipnavlogr > /home/ops/corelogging/sim/logs/out_shipnav_logr 2>&1
This entry runs a script called startLogr, which is located in the corelogging source code repository. The script looks up some configuration information and then starts an instance of the logr executable, passing in the configuration file and the name of the logger, which links it to the correct section of that configuration file. For example, the above cron entry passes in the shipnavlogr name, which has a corresponding section in the core_logging.ini file that looks like:
##############################################################################
[shipnavlogr]
##############################################################################
#name of coredata root directory
coredataRoot = /home/ops/corelogging/sim
#coredata console output logs dirctory
coredataLogsDir = logs
#coredata cfg files directory
coredataCfgsDir = cfgs
#coredata data files directory
coredataDataDir = data
# name of logr items config file (in the cfgs dirctory)
coredataLogItemsCfg = shipnavlogr.cfg
#platform may be used in forming filenames or tagging datafiles etc
#typically wfly or rcsn
coredataPlatform = sim
#Broadcast and client UDP port assignments
coredataBroadcastPort = 54002
clientConnectPort = 54003
This configuration tells the logr executable a few things, like where to read the configs from and where to write data and log files. There are two key configuration properties to pay attention to. The coredataLogItemsCfg property tells the logr executable which file to read for the configuration of which items to read from the LCM messages and which items to write to the data files (and in what format). The second is the coredataBroadcastPort, which is the UDP port that messages from the logr executable will be broadcast to. The clientConnectPort is used by the BCServer process, which is described later.
If you look at the logger configuration file (shipnavlogr.cfg in this example) you will see the properties used to set up the loggers. Here is an example for the shipnavlogr (comments and blank lines removed)
UPDATE 1
LOG NMEA_GPS lcm_nmeaGPS_sub ZDA_USECS Int SHIP.GPS.TIME %d No No Slave 10 No
LOG VENTANA_CSP lcm_ventanaCSP_sub PRESSURE Double ROV.PRESSURE %4.2lf No No Slave 30 No
LOG SEABIRD_CTD lcm_seabirdctd_sub PRESSURE Double ROV.CTD.PRESSURE %5.1lf No No Slave 30 No
LOG NMEA_GPS lcm_nmeaGPS_sub LAT Double SHIP.GPS.LAT %3.6lf No No Slave 30 No
LOG NMEA_GPS lcm_nmeaGPS_sub LON Double SHIP.GPS.LON %4.6lf No No Slave 30 No
LOG NMEA_GPS lcm_nmeaGPS_sub DIFFERENTIAL BOOL SHIP.GPS.DIFFERENTIAL %d No No Slave 30 No
LOG NMEA_GPS lcm_nmeaGPS_sub QUALITY Int SHIP.GPS.QUALITY %d No No Slave 30 No
LOG NMEA_GPS lcm_nmeaGPS_sub HDOP DOUBLE SHIP.GPS.HDOP %3.1lf No No Slave 30 No
LOG SHIP_GYRO lcm_shipGyro_sub HEADING DOUBLE SHIP.GYRO.DEGREES %3.1lf Yes No Slave 30 No
LOG NMEA_GPS lcm_nmeaGPS_sub NO_ITEM Double SHIP.WIND.TRUE_SPEED %lf Yes No Slave 30 No
LOG NMEA_GPS lcm_nmeaGPS_sub NO_ITEM Double SHIP.WIND.TRUE_DIRECTION %lf No No Slave 30 No
LOG WINFROG lcm_winfrog_sub LAT Double ROV.POSITION.LAT %lf Yes No Slave 30 Yes
LOG WINFROG lcm_winfrog_sub LON Double ROV.POSITION.LON %lf No No Slave 30 Yes
LOG VENTANA_CSP lcm_ventanaCSP_sub DEPTH Double ROV.MBARI.DEPTH %lf No No Slave 30 No
LOG VENTANA_CSP lcm_ventanaCSP_sub HEADING Double ROV.HEADING %lf No No Slave 30 No
LOG VENTANA_CSP lcm_ventanaCSP_sub PITCH Double ROV.PITCH %lf No No Slave 30 No
LOG VENTANA_CSP lcm_ventanaCSP_sub ROLL Double ROV.ROLL %lf No No Slave 30 No
LOG VENTANA_CSP lcm_ventanaCSP_sub ALTITUDE Double ROV.ALTITUDE %lf No No Slave 30 No
The UPDATE property has a couple of different meanings.
Below is a diagram that captures the high-level picture of these processes and the data flows.