
Navproc, Datamanager, Logging on the Western Flyer

The code currently running on the Western Flyer to support navproc, datamanager and core logging is different from what is running on the Rachel Carson. The Flyer runs the older code, which is stored in several CVS repositories. Since the rollout of the upgraded navproc and logging software took longer than expected, the decision was made to keep the Flyer on the old logging code because the ship only had about a year until retirement. There are two machines that run navproc, datamanager and logging on the Flyer: the production machine is wfnavproc1.wf.mbari.org and the spare backup is wfnavproc2.wf.mbari.org. Both are always up and running, but the only active processes are on wfnavproc1.

Navproc

The navproc code lives in the CVS tiburon2 repo and is checked out to /usr/tiburon2 on both machines.
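
For reference, checking that module out again would look roughly like the following. This is a sketch only: the CVSROOT shown is a placeholder, since the actual location of the MBARI CVS repository is not recorded here.

    # Placeholder CVSROOT -- substitute the real CVS server and repository path
    export CVSROOT=:ext:username@cvs-server:/path/to/cvsroot
    cd /usr
    cvs checkout tiburon2   # creates /usr/tiburon2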

Datamanager

The code for datamanager is deployed in the /usr/local/dm directory and appears to be a direct copy of code that is not tracked in any repository.

Logging

The logging code is split across two locations. The main body of the corelogging code is stored in the CVS coredata repo and is checked out to the /coredata directory. It provides the bcservers and logrs that collect the incoming UDP messages and write them into log files under /coredata/data.
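
A quick way to confirm that side of the logging is alive is to check for the processes and for fresh log files. The process name patterns below are inferred from the description above, so adjust them if the actual names differ.

    # Look for the bcserver and logr processes
    ps -ef | grep -E 'bcserver|logr' | grep -v grep
    # The newest files in the data directory should have recent timestamps
    ls -lt /coredata/data | head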

The second corelogging location is the code that is actually running on the Carson; it is checked out to /home/coredata/corelogging from the corelogging Git repository on Bitbucket (the clone URL is shown in step 2 below). While the logging code from this repo is not running on the Flyer, it contains one Python script that pushes data to shore that is needed by various applications (monitoring, etc.). So the only code running from the Git repository is the Python script /home/coredata/corelogging/common/dm-to-amqp/dm-to-amqp.py. To get that script running, the following steps were taken (on both wfnavproc1 and wfnavproc2):

  1. ssh'd in as 'coredata'.
  2. cloned the corelogging repository using git clone https://kgomes@bitbucket.org/mbari/corelogging.git, which created a corelogging directory in /home/coredata.
  3. Python 2.7 was already installed, but I needed to create a virtual environment for this code, so I did the following:

    # Install pip and the build tools the Python packages need
    sudo apt-get install python-pip
    sudo apt-get install python-dev
    sudo apt-get install build-essential
    # Install virtualenv and upgrade pip itself
    sudo pip install virtualenv virtualenvwrapper
    sudo pip install --upgrade pip
    # Create and activate a virtual environment in the coredata home directory
    cd ~
    virtualenv venv-amqp
    source venv-amqp/bin/activate
    # Install the packages the dm-to-amqp script depends on
    pip install enum
    pip install pika
    pip install amqplib
    
  4. Copied the config.json.wf file to config.json and edited config.json to add the username and password for the AMQP server that messages are sent to.

  5. In crontab, I added the following entry (a shorter equivalent using a cron step value is noted just below it):
    0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58  *  *  *  *   /home/coredata/corelogging/common/dm-to-amqp/wfRunDMtoAMQP  > /coredata/logs/.out_runDMtoAMQP 2>&1
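
The long minute list above fires the script every two minutes. On the Vixie/ISC cron that ships with Debian-based systems, the same schedule can be written more compactly with a step value:

    */2  *  *  *  *   /home/coredata/corelogging/common/dm-to-amqp/wfRunDMtoAMQP  > /coredata/logs/.out_runDMtoAMQP 2>&1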
    

Cutover

Currently, navproc and datamanager are running only on wfnavproc1, not on wfnavproc2. There are cron jobs under the coredata account on wfnavproc2 that try to start the logrs, but they fail because navproc and datamanager are not running. Once navproc and datamanager are running, the loggers should start automatically within a couple of minutes. If wfnavproc1 fails, the spare system would need to be brought up by doing the following:

  1. Make sure wfnavproc1 is shut down, by whatever means necessary (if it is still running)
  2. ssh into wfnavproc2 as user tiburon
  3. run ds to start up datamanager
  4. wait for datamanager to start up (it can take a while)
  5. run ns to start up navproc
  6. On the copilot machine, in order to get Dataprobe working, you will have to edit the JSON file in the Dataprobe directory in the docricketts home directory and change the machine name from wfnavproc1 to wfnavproc2 (a one-line sketch of this edit follows the list).
  7. Also note that the ArcMapNav plugin will not work unless you manually enter the wfnavproc2.wf.mbari.org address in its configuration.
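
For step 6, the hostname swap in the Dataprobe configuration can be done with a one-line edit, assuming GNU sed is available on the copilot machine. The filename below is a placeholder, since the exact name of the JSON file is not recorded here; the -i.bak option keeps a backup of the original file.

    # Placeholder filename -- use the actual JSON file in the Dataprobe directory
    sed -i.bak 's/wfnavproc1/wfnavproc2/g' ~docricketts/Dataprobe/config.json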

NOTE: If this cutover is done, the nightly transfers of data to shore will need to be managed manually to prevent good data from being clobbered. The nightly transfers on wfnavproc2 are disabled by default, so when the ship returns to shore someone needs to assess the state of the log files on both wfnavproc1 and wfnavproc2 and determine which files need to be synced to the shore FTP site.
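
One low-risk way to compare the two machines before pushing anything is to diff directory listings of the log data. This is only a sketch and assumes SSH access as coredata to both hosts.

    # Capture a listing of the log directory from each machine
    ssh coredata@wfnavproc1.wf.mbari.org 'ls -l /coredata/data' > /tmp/wfnavproc1.listing
    ssh coredata@wfnavproc2.wf.mbari.org 'ls -l /coredata/data' > /tmp/wfnavproc2.listing
    # Differences show which files exist (or differ in size) on only one machine
    diff /tmp/wfnavproc1.listing /tmp/wfnavproc2.listing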