

WesternFlyer/Tiburon Data Logging - Users Guide

Author: Rich Schramm Revision: 3.0 Date: 21-Jan-12

Revision history

  • 1.0 01-Jan-98 Original
  • 2.0 11-Dec-98 Updated video logger configuration -rs
  • 2.2 12-Jan-99 pl_navproc has been renamed to navproc. Also, temporary (I hope) problem, must telnet into Pt.Lobos subnet (eg 'lobos') to get to navproc -rs
  • 2.3 12-Jan-99 modified as needed from Lobos to Western Flyer
  • 2.4 26-Jun-00 added reference to drillsled logger. Omit Andy Pearce from contact list.
  • 2.5 23-Feb-01 added reference to dataprobe logger. Modified videologr output list to reflect timecode_valid etc.
  • 2.6 12-Jul-02 removed references to 'telnet navproc'. Frequent connections to navprocessor's netports caused more problems. Dataprobe was written and later modified to eliminate the need for this facility.
  • 3.0 21-Jan-12 Major update. For DocRicketts and Linux servers.

This guide is intended to enable the ROV pilots and Western Flyer support staff to start, stop and monitor the core data logging on the Western Flyer.

Background

Core data acquisition and logging is done by programs running on the Linux workstations named 'itchy' and 'scratchy' (Figure 1). Scratchy is where the three critical loggers run, one for each of the major sub-systems: navigation, video and rovctd. Other loggers can easily be set up for special logging needs given adequate advance notice. The three core loggers are actually three identical copies of the same executable program (wflogger.c), given different names and configured to monitor different sets of datamanager items.

All data inputs to the loggers are through the MBARI datamanager facilities. Datamanager is a distributed publish/subscribe system in which all items are available across all nodes of the system as if they were created locally on that node. Datamanager items are provided via the navprocessor (named ‘itchy’) which reads the serial data streams from the ROV, GPS, Gyro, CTD etc. and parses them into individual named items with a timestamp from the host computer clock (synced to NTP).

Loggers are datamanager clients that read a configuration file at start-up which tells them which named items to subscribe to and at what rate. Loggers log constantly, with the data going to files which are broken-up by year-day number based on GMT time.
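The year-day naming can be previewed from the shell. This is a sketch assuming the file layout shown in the monitoring steps later in this guide; `date -u +%Y%j` produces the YYYYDDD prefix seen in names like `1997167shipnavlogr.dat`:

```shell
# Build today's expected data file name from the GMT year-day.
# The /coredata/data path is taken from the monitoring steps later in
# this guide; treat the directory layout there as the authority.
logger=shipnavlogr
yearday=$(date -u +%Y%j)                 # e.g. 1997167 for 1997 day 167
datafile="/coredata/data/${yearday}${logger}.dat"
echo "$datafile"
```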

The logger programs are started automatically shortly after each workstation is booted by the Unix cron facility. Their existence is checked by scripts run once every minute; if a script executed by cron cannot detect its process, it attempts to restart it. THIS FEATURE DOES NOT ACTUALLY VERIFY THAT THE TASK IS REALLY LOGGING DATA. That verification requires a user to log into the system as described below and monitor the actual output, or, more typically, to verify that the streams are updating using a PC program called Dataprobe.
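The restart check can be pictured with a small sketch. The actual MBARI cron scripts are not reproduced in this guide, so the function name and structure here are illustrative only:

```shell
# Illustrative sketch of the once-a-minute cron check; the real
# script contents and paths are not shown in this guide.
ensure_running() {
    name=$1; shift                      # process name to look for
    if pgrep -x "$name" >/dev/null 2>&1; then
        echo "$name ok"
    else
        echo "$name missing - restarting"
        "$@" &                          # relaunch in the background
    fi
}

# cron would invoke something along the lines of:
# ensure_running shipnavlogr /coredata/bin/shipnavlogr
```

Note that, exactly as the capitalized warning above says, a check like this only proves a process with the right name exists, not that it is writing data.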

Warning

On WesternFlyer the datamanager services and navproc must be started manually by the pilots after each system reboot!!!!

An additional logger has been set up called samplelogr. The default config has this application logging position once per hour just to have something to do. This was set up to allow users to easily create a special purpose logger if needed. All that is needed is to modify the config file as instructed later in this document, and restart the application 'samplelogr'. The files will automatically appear in the data directory and be transferred to shoreside archives.

Additional loggers have been set-up in the past for the Tiburon's drillsled (dsledlogr) and others. These loggers are used to support the unique requirements of specific science projects and may or may not be active on the system at any given time.

Each logger program can optionally be configured to provide its data records to a standard network port via a UDP datagram server.

WFTB-Core-DM-System

Figure 1. Western Flyer / Doc Ricketts core data management system.

Datamanager

The MBARI datamanager is the data distribution mechanism for all of the data logged by the system. It is basically a network-based publish/subscribe system, meaning there are tasks which can register with datamanager to be providers of named items, and tasks which register as subscribers to items (or both). Subscribers can either register to be notified (or 'triggered') when an item updates, or they can simply poll the datamanager periodically to see what the 'current' value of an item is. 'Current' is used loosely, as what is actually provided is whatever the value was the last time it was written to the datamanager, which could be days or weeks old. A robust consumer therefore needs to examine the status and time associated with the item to fully determine its age and validity.
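The age check a robust consumer should make can be sketched as follows. The timestamp representation here (Unix epoch seconds) is an assumption for illustration; the real datamanager time format may differ:

```shell
# Sketch of the age/validity check a robust consumer should make.
# Item timestamps are ASSUMED here to be Unix epoch seconds; the real
# datamanager time representation may differ.
is_fresh() {
    item_time=$1   # timestamp the item was last written
    max_age=$2     # oldest acceptable age, in seconds
    now=$(date -u +%s)
    [ $(( now - item_time )) -le "$max_age" ]
}

# Example: a value written 5 seconds ago passes a 10-second limit
if is_fresh "$(( $(date -u +%s) - 5 ))" 10; then
    echo "item is fresh"
else
    echo "item is stale"
fi
```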

The logging tasks are primarily consumers of a list of datamanager items; however, they also provide some items back to datamanager which can be used to monitor the health of the logger. The logger, through its configuration file, can be used to implement a triggered or a polling logger (or a combination).

In mid-2001 the logger program was modified to optionally output data records over a standard UDP datagram socket to better support the dataprobe program. This, along with a companion server for each logger, lightens the load placed on the telnet ports of the navprocessor.

The Navprocessor

The core of the scientific data distribution systems for both the Pt. Lobos/Ventana and Western Flyer/DocRicketts is a program named navprocd, or 'the navprocessor'. On the Western Flyer it runs on the host named 'itchy' and on the Rachel Carson on the host named 'navproc'.

Its primary functions are to:

  1. host a datamanager node
  2. parse and timestamp data streams from a variety of sources into datamanager items
  3. aggregate, format and output datamanager items via serial ports as needed
  4. aggregate, format and output datamanager items via telnet ports as needed

The navprocessor concept was developed to help solve the top-side data management needs for the Western Flyer navigation system. The approach worked well and has since been deployed on the Pt. Lobos.

The key objectives of the design were:

  1. standardize the data distribution methods for all data using the same datamanager technology developed originally for ROV Tiburon.
  2. 'abstract' the data from its various sources to provide flexibility. For example, the Western Flyer will undoubtedly see several different GPS positioning systems over time however the concept of a data element like SHIP.LATITUDE.DEGREES is not likely to change. The navprocessor concept allows us to isolate even substantial changes in input streams at the entry point to the data management system and minimize the impact.
  3. separate functions to simplify software and improve reliability - navprocessors parse and distribute data, loggers only need to worry about logging. Display applications should only be concerned with visualization - not acquisition and logging, etc.

Starting Logging

Pt. Lobos/Ventana data logging is usually started automatically after reboot via a cron job owned by coredata.

Warning

On WesternFlyer the datamanager services and navproc must be started manually by the pilots after each system reboot!!!!

Logging typically runs all of the time even while the ship is in port. However if logging was stopped for maintenance purposes, the cron job will need to be restarted. It is always best to let cron handle the actual startup of the loggers as the start scripts also set the appropriate broadcast ports and companion UDP servers for each logger. To do so:

  1. Log in as coredata (pilots know the pswd, same as for user tiburon)
  2. Enter the command startlogging at the Unix prompt.
  3. Logging applications will restart shortly after the top of the next minute.
  4. Monitor the output to satisfy yourself that logging has resumed (See Monitoring output below...)
  5. You can now safely log out (the cron job and logging applications will continue to run).

Stopping Logging

The presence of the data logging applications is tested for once-per-minute via a cron job owned by coredata. If missing, they are restarted automatically. Logging typically runs all of the time, however for maintenance purposes it may be necessary to stop all or some logging for a period of time.

To stop ALL logging AND the cron job that automatically restarts it

  1. Log in as coredata
  2. Enter the command stoplogging at the unix prompt.
  3. Don't forget to restart when you're through!!!

To momentarily stop an individual logger - but NOT the cron job (ex. to force a re-read of the configuration file for an application)

  1. Log in as coredata
  2. Enter the command: shutdown_shipnav_logr
    1. or: shutdown_video_logr
    2. or: shutdown_rovctd_logr
    3. or: shutdown_sample_logr

At the top of the next minute, cron will restart the affected application. Note that you will need to have edited the appropriate config file beforehand, as the automatic restart may occur within seconds of the shutdown command (see Changing Configurations below).

Note the same commands could be used to shut down the special loggers such as dataprobelogr. However, you should not need to do this and are strongly discouraged from interfering with them!

Monitoring Data Logging

There are several ways to monitor that data is actually being logged. Each is described below.

  1. Watch the actual data file updates using the Unix 'tail' command.
  2. Establish a telnet connection to the dataport on the navprocessor
  3. Examine the log file for each logger for errors and status messages.
  4. You can use the PC program Dataprobe to continuously monitor the datastreams.

Using the Unix 'tail' command

You can watch the actual data file updates using the Unix 'tail' command. This is the method which gives the highest level of certainty that data is being recorded.

Warning

While running a 'tail', 'more', etc. on a data file, the ability of the cron script to detect a dead logger is defeated, since the application name is part of the data file name. This shouldn't be a problem, since the point of running a 'tail' is to monitor that the data is being logged. However, it is not advisable to leave a tail running if you're not going to be actively looking at its output.

  1. Login to workstation scratchy as user coredata
  2. Change to the data directory: cd /coredata/data
  3. Obtain a directory listing of the data files to locate the current day's filename. For example, if you're interested in the navigation data, use: ls shipnavlogr. Then locate the file for the current yearday.
  4. Execute a tail command such as (assuming 'today' is 1997 day 167):
    tail -f 1997167shipnavlogr.dat
    

Data will begin to scroll by at whatever update rate was set for the data stream you are monitoring.

Note

The file will roll over to the next day at midnight GMT, so a tail will need to be killed and restarted on the next day's file.

Variable names associated with each column can be found in the headers of the datafiles themselves, by reading the configuration files, or by looking at the Outputs section of this guide.

To see the column headers of the file, execute a command similar to:

    more 1997167shipnavlogr.dat

Note

If the config file has been changed during the day, grep the file for the # character to pick up the most recent header block, which will be further down in the file.
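For example, with a made-up miniature day file (real files are named by yearday and are assumed here to use '#' to mark header lines, per the note above):

```shell
# Made-up miniature example of a day file whose config changed mid-day:
# two '#' header blocks, each followed by data records.
cat > demo.dat <<'EOF'
# LOGHOST.SYSTEM.UTC SHIP.GPS.LAT
866505601 36.80
# LOGHOST.SYSTEM.UTC SHIP.GPS.LAT SHIP.GPS.LON
866509201 36.80 -121.90
EOF

# Pull out every header line; the last one printed is the most recent.
grep '#' demo.dat
```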

Note

Some of the non-core loggers may be configured to require the vehicle to be in the water. Check the cfg file for the logger in question if you suspect a problem.

Monitoring data via Dataprobe

You are encouraged to run the PC program Dataprobe continuously during your dive. Consult the pilots regarding how to start Dataprobe.

Run dataprobe on Copilot or the science PC ("lava" as of Jul'02). See the dataprobe users guide for details.

Examining the maintenance log files

Maintenance log files for each logger are in /coredata/logs. These are not 'the data'; they hold error messages and status generated by the logging applications.

Changing Configurations

Caution

Changing the core system config files can cause chaos to down-stream data processing and even result in the complete loss of valuable mission data. Please consult the appropriate people before making any changes. These would be Rich Schramm, Mike McCann or the pilots to start with.

It is HIGHLY preferable that you let us set up additional logging tasks if you have special needs. It is quite easy for us to do and the system is very flexible. We just need a moderate amount of advance notice to be able to match schedules.

Optionally, a fourth logger (called samplelogr) has been provided for pilots/scientists to customize for short-term needs. The default config for this application logs position once per hour. You could even copy one of the core config files over samplelogr.cfg if you want to just change something simple like upping the sample rate for a series of dives, since it is quite acceptable for them to run in parallel!

But if you must - please only modify the core logger config files as a last resort. Here's how to do it for any of the loggers...

Modify the cfg file in /coredata/cfgs as needed (but do not change any filenames!). Then stop and restart the logging application as described. The files will appear in the data directory.
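As an illustration only, a samplelogr-style entry might pair an item name with a sample interval. The actual cfg syntax is not documented in this guide, so always start from an existing file in /coredata/cfgs rather than typing one from scratch:

```
# HYPOTHETICAL samplelogr.cfg fragment - the real syntax may differ;
# copy an existing file in /coredata/cfgs as your template.
# item name              interval (seconds)
ROV.POSITION.LAT         3600
ROV.POSITION.LON         3600
```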

Alert

Note that configurations are only read in when the program starts!!

To change any configuration:

  1. modify the config file. (Configuration files for each executing logger are in /coredata/cfgs)
  2. momentarily stop the appropriate logger (see Stopping logging above).
  3. the new cfg will take effect when the application auto-restarts on the minute. (assuming the cron job is running as per above).

Trouble Shooting

Almost all problems so far have resulted from other processes owned by coredata having the same name as the logging application, or arguments containing that name (such as a tail running on a data file, as described above).

The best thing to try is to kill all of these manually:

  1. as coredata or as root, kill the cron job if it's running:

    crontab -r
    
  2. find the pid's of each executing task (sample output follows cmd):

    ps -ef | grep logr
    
    coredata 814 1 4 16:30:03 ? 0:00 /coredata/bin/videologr
    coredata 816 1 4 16:30:03 ? 0:00 /coredata/bin/shipnavlogr
    coredata 806 1 4 16:30:02 ? 0:00 /coredata/bin/rovctdlogr
    coredata 845 1 4 16:30:02 ? 0:00 /coredata/bin/samplelogr
    
  3. kill each pid (example uses pid's from above):

    kill 814 
    kill 816 
    kill 806 
    kill 845
    
  4. Make sure no other tasks are running whose names or argument lists contain the strings samplelogr, videologr, shipnavlogr, rovctdlogr, advlogr, etc. This can be done via:

    ps -ef | grep videologr
    ps -ef | grep shipnavlogr
    ps -ef | grep rovctdlogr
    ps -ef | grep samplelogr
    
  5. Kill or close any that are returned (except the 'grep' that may be shown for each)

  6. From /coredata/bin, restart the cron job via the command (or issue the startlogging command):

    crontab my_cron_with_logging
    
  7. The applications should all start themselves shortly after the next even minute. Look in the logs directory for the application to get clues as to what might be troubling it further.
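Where the procps pgrep/pkill tools are installed (an assumption; the procedure above needs only ps, grep and kill), steps 2 through 5 can be condensed:

```shell
# pgrep -f matches the full command line, so this also catches a stray
# 'tail' running on a logger's data file. (pgrep/pkill availability is
# assumed; the steps above use only ps, grep and kill.)
for name in videologr shipnavlogr rovctdlogr samplelogr; do
    pgrep -f "$name" || echo "$name: no matching processes"
done
# pkill -f <name> kills the matches instead of listing them.
```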

Outputs

The following are the sub-systems for which data are logged:

  • Ship/Rov Navigation data
  • Rov CTD data
  • Video timecode and camera state data

Ship and ROV Navigation Data

Warning

It is strongly recommended that you periodically check to make certain all is well if the data is important to you!!

Ship and ROV navigation data are logged continuously with one record collected every second. The acquired data can be viewed either directly, by monitoring the data files, or via Dataprobe.

Caution

Nav data is often displayed graphically on displays in the control room and on the bridge. Those displays are entirely separate from the data logging function. Just because data is updating on those devices does not mean data is being logged. You must follow the directions here for monitoring data logging.

Note

Please consult the current logger configuration files to verify that no changes have been made since this document was written

The data items logged are listed in column order below:

    LOGHOST.SYSTEM.UTC
    SHIP.GPS.TIME 
    ROV.CTD.INWATER
    SHIP.GPS.LAT
    SHIP.GPS.LON              
    SHIP.GPS.DIFFERENTIAL
    SHIP.GPS.QUALITY 
    SHIP.GPS.HDOP  
    SHIP.GYRO.DEGREES  
    SHIP.WIND.TRUE_SPEED  
    SHIP.WIND.TRUE_DIRECTION
    ROV.POSITION.LAT 
    ROV.POSITION.LON 
    ROV.DIGIQ.SCALED_PRESSURE  
    ROV.DVECS.HEADING 
    ROV.DVECS.PITCH
    ROV.DVECS.ROLL
    ROV.DVECS.ALTITUDE
    ROV.CTD.PRESSURE
    ROV.MBARI.DEPTH 
    ROV.SURVEY.POD_UP
    ROV.SURVEY.DIGIQ_UP
    ROV.SURVEY.OCTANS_UP
    ROV.SURVEY.DVL_UP
    ROV.SURVEY.ALTIMETER_UP
    LOGHOST.SYSTEM.CLOCKTIME.YEAR
    LOGHOST.SYSTEM.CLOCKTIME.YEARDAY
    LOGHOST.SYSTEM.CLOCKTIME.TIMESTRING
    LOGHOST.SYSTEM.CLOCKTIME.TIMEZONE

ROV CTD Data

Warning

It is strongly recommended that you periodically check to make certain all is well if the data is important to you!!

ROV CTD data are logged continuously with one record collected every second. The acquired data can be viewed either directly, by monitoring the data files, or via Dataprobe.

Caution

ROVCTD data is often displayed graphically on the Labview CTDGUI. That display is entirely separate from the data logging function. Just because data is updating on that device does not mean data is being logged. You must follow the directions here for monitoring data logging.

Note

Please consult the current logger configuration files to verify that no changes have been made since this document was written.

The data items logged are listed in column order below:

    LOGHOST.SYSTEM.UTC
    ROV.CTD.INWATER
    ROV.CTD.TEMPERATURE
    ROV.CTD.CONDUCTIVITY
    ROV.CTD.PRESSURE
    ROV.CTD.ANALOG1
    ROV.CTD.ANALOG2
    ROV.CTD.ANALOG3
    ROV.CTD.ANALOG4
    ROV.OPTODE.DPHASE
    ROV.POSITION.LAT
    ROV.POSITION.LON
    SHIP.GPS.LAT
    ROV.SURVEY.CTD_PWR
    ROV.SURVEY.OPTODE_PWR 
    ROV.SURVEY.POD_PWR
    LOGHOST.SYSTEM.CLOCKTIME.YEAR
    LOGHOST.SYSTEM.CLOCKTIME.YEARDAY
    LOGHOST.SYSTEM.CLOCKTIME.TIMESTRING
    LOGHOST.SYSTEM.CLOCKTIME.TIMEZONE

Video timecode and camera state data

Warning

It is strongly recommended that you periodically check to make certain all is well if the data is important to you!!

Video timecode and camera state data are logged continuously with one record collected every second. The acquired data can be viewed either directly by monitoring the data files or by dataprobe.

Note

Please consult the logger configuration files to verify that no changes have been made since this document was written.

The data items logged are listed in column order below:

    LOGHOST.SYSTEM.UTC
    SHIP.GPS.TIME
    ROV.CTD.INWATER
    SHIP.VIDEO_TIMECODE.STRING
    SHIP.VIDEO_TIMECODE.VALID
    SHIP.VIDEO_TIMECODE.DECKSTATUS
    SHIP.HDTV_TIMECODE.STRING
    SHIP.HDTV_TIMECODE.VALID
    SHIP.HDTV_TIMECODE.DECKSTATUS
    HD_LAPBOX.ZOOM
    HD_LAPBOX.FOCUS
    HD_LAPBOX.IRIS
    LOGHOST.SYSTEM.CLOCKTIME.YEAR
    LOGHOST.SYSTEM.CLOCKTIME.YEARDAY
    LOGHOST.SYSTEM.CLOCKTIME.TIMESTRING
    LOGHOST.SYSTEM.CLOCKTIME.TIMEZONE

WesternFlyer/DocRicketts Dataprobe - Users Guide

Author: Rich Schramm Revision: 2.0 Date: 24-Jan-2012

Revision history

  1. 1.0 23-Feb-01 original.
  2. 1.1 18-Jul-02 Removed telnet to navproc. Added quicklook, and references to udp servers.-rs
  3. 1.2 19-Feb-03 Changed references to 'batray' to 'shelf', reflecting move of datalogging functions off of batray.
  4. 2.0 24-Jan-2012 Major revisions for DocRicketts and Linux servers (including references to ‘dial-up lines’!!)

Introduction

This guide is intended to enable the ROV pilots and Western Flyer support staff to start, stop, and troubleshoot the software application named Dataprobe. Dataprobe is a PC application for monitoring the collection of core MBARI scientific data streams.

Successful core data monitoring entails having fundamental knowledge of how the data acquisition sub-systems interact with each other, what some of the historical instrument and sub-system failure modes are, and what clues the system can give us to infer that it is working properly. Many of the checks that the system developers and expert support staff routinely use to monitor and debug the system have been coded into an application named Dataprobe.

This program automatically performs over seventy specific tests on the core data-streams every two seconds and presents the results through a very simple and highly visual computer display. Dataprobe also provides on-screen troubleshooting help associated with each test. The program can be run from any PC on the vessel or remotely over the network when the ship is in range.

Note

I cannot stress strongly enough that dataprobe is just one piece of software that looks at the outputs of a complex system. We can only use it to infer the overall health of a system that dataprobe itself requires to be running at some minimal level. Therefore it is not a guarantee that all is well; it is just one tool in the bag of tricks.

Alert

DO NOT RELY SOLELY ON DATAPROBE TO MONITOR YOUR DATA.

Alert

FOLLOW PROCEDURES IN THE WesternFlyer/Tiburon Data Logging - Users Guide TO CHECK YOUR DATA FREQUENTLY - AT PERIODS THAT REPRESENT ACCEPTABLE DATA LOSS TO YOU!

Background

Data logging is performed by programs running on the Linux server named 'scratchy'. All data inputs to the loggers are through the MBARI datamanager facilities. Datamanager 'nodes' are running on the topside computers 'itchy', which runs the navprocessor, and 'scratchy', which handles data logging (Figure 2).

There are data logging programs running on scratchy for each of the major sub-systems: navigation, video and rovctd. A unique feature of the logging system is that in addition to writing strings of data to files, each logger also writes its string to a broadcast network port on scratchy. A companion program to each logger listens for this broadcast and also for requests from network client programs (such as dataprobe). It is these messages on the network which Dataprobe relies on to monitor the health of the system.

This can provide a somewhat-warm-fuzzy feeling that logging is occurring, but it's not an absolute warranty. You must also follow the procedures in the WesternFlyer/Tiburon Data Logging - Users Guide to check your data frequently.

Dataprobe is a PC-based program which can be installed by Support Engineering staff on any networked PC. Time must be allotted well in advance to schedule installation and allow adequate testing.

Dataprobe relies on a special logger named dataprobelogr that has been set up on scratchy to provide the information it needs to be able to monitor the core loggers. Dataprobelogr data is provided via the same mechanism described above.

WFDR-Core-DM-System

Figure 2. Western Flyer / Doc Ricketts core data management system.

Datamanager

The MBARI datamanager is the data distribution mechanism for all of the data logged by the system. It is basically a network-based publish/subscribe system, meaning there are tasks which can register with datamanager to be providers of named items, and tasks which register as subscribers to items (or both). Subscribers can either register to be notified (or 'triggered') when an item updates, or they can simply poll the datamanager periodically to see what the 'current' value of an item is. 'Current' is used loosely, as what is actually provided is whatever the value was the last time it was written to the datamanager, which could be days or weeks old. A robust consumer therefore needs to examine the status and time associated with the item to fully determine its age and validity.

Datamanager is a 'distributed' system. There are datamanager programs, or 'nodes', running on each computer that has programs that read from or write to datamanager items. Part of each datamanager node's job is to keep track of what items are available at all of the other nodes. So a program running at node 3 can read an item from its local datamanager that is actually being provided at node 1, without having to know which node it came from.

The navprocessor

The core of the scientific data distribution systems for both the Pt. Lobos/Ventana and Western Flyer/Tiburon is a task called 'navprocd'. On the Western Flyer it is run on the Linux server named 'itchy' and on the Pt. Lobos it is run on 'navproc'. Its primary functions are to:

  1. host a datamanager node
  2. parse data streams from a variety of sources into timestamped datamanager items
  3. aggregate, format and output datamanager items via serial ports as needed
  4. aggregate, format and output datamanager items via telnet ports as needed

The navprocessor concept was developed to help solve the top-side data management needs for the Western Flyer navigation system. The approach worked well and has since been deployed on the Pt. Lobos.

The key objectives of the design were:

  1. standardize the data distribution methods for all data using the same datamanager technology developed for Tiburon.
  2. 'abstract' the data from its various sources to provide flexibility. For example, the Western Flyer will undoubtedly see several different GPS positioning systems over time; however, the concept of a data element like SHIP.LATITUDE.DEGREES is not likely to change. The navprocessor concept allows us to isolate even substantial changes in input streams at the entry point to the data management system and minimize the impact.
  3. separate functions to simplify software and improve reliability - navprocessors parse and distribute data, loggers only need to worry about logging. A display application should only be concerned with visualization - not data acquisition and logging, etc.

Starting Dataprobe

Note

Dataprobe is currently installed on ‘copilot’ and the science PC 'lava' in the ROV Control Room

Due to how easy it is for users to change the PC 'environments', steps 1 and/or 2 might not apply.

Dataprobe can be started in one of these usual PC ways…

  1. There should be a shortcut installed on the Desktop (look for an icon named Dataprobe); double-click it.
  2. Start – Programs –Dataprobe – Dataprobe.exe
  3. The program is installed as C:\Program Files\Dataprobe\dataprobe.exe – cd there and double-click it.

You should see the main panel that looks like this:

DP-panel-1

Each button on this panel is related to a major data-logging sub-system. Clicking on a button with the mouse will pop up a detailed panel for that sub-system. The color of each button on this panel summarizes the success (green) or failure (red or yellow) of all controls on the corresponding detail panel. An example detail panel is shown below. By right-clicking in the grey margins you can enable/disable the display of test result icons in the Windows system tray.

DP-panel-2

The checkbox next to a control can be used to 'acknowledge' any test failure. If, for example, there is no differential GPS signal available, the differential mode will fail and the button will go red at the detail control as well as on the main panel. If you are working in a region where DGPS is not available (i.e., it's not really an error condition), it is useful to 'acknowledge' this by un-checking the test. The button will then appear yellow (caution). And, assuming all other test buttons on the sub-panel are green, the main panel will also show yellow. If any other test on this panel goes red, the summary button will then turn from yellow to red, indicating some other problem has occurred.

Dataprobelogr

Dataprobe relies on a special logger named dataprobelogr that has been set up on scratchy to provide the information Dataprobe needs to be able to monitor the core loggers. Dataprobelogr is started automatically shortly after the server is booted by the Unix cron facility. Its existence is checked for by scripts which are run once every few minutes. If a script executed by cron cannot detect its process, it automatically attempts to restart it.

Warning

Both the datamanager and navproc code must be running on 'itchy' and 'scratchy'. Refer to the WesternFlyer/DocRicketts Data Logging - Users Guide.

Starting dataprobelogr

It is always best to let cron handle the actual startup of the loggers. However it can be started and stopped manually. To do so:

  1. Log in as coredata
  2. Enter the command start_dataprobe_logr at the Unix prompt.
  3. Monitor the output to satisfy yourself that logging has resumed (See Monitoring output below...)
  4. You can now safely log out (the cron job and applications will continue to run).

Stopping dataprobelogr

The presence of the dataprobelogr application is tested for once every few minutes via a cron job owned by user coredata. If missing, it is restarted automatically. It typically runs all of the time, however for maintenance purposes it may be necessary to stop it manually for a period of time.

Note

The same commands could be used to shut down the special loggers such as dataprobelogr. However, you should not need to do this and are strongly discouraged from interfering with them!