
The MARS Data Management Requirements

Broadly outlines the requirements that led to the development of engineering data management system for MARS

Approx. Date: 14-Feb-2011

OBJECTIVE: Load MARS Node engineering data into SSDS for archive and plotting.

The physical systems which produce MARS Node data can be thought of as comprising four major components: the shore-side high-voltage power controller (PSC), the deployed Medium Voltage Converter (MVC), the deployed Low Voltage Converter (LVC), and an Onewire HA7 temperature sensor network (which was added within the LVC housing to monitor internal temperatures). The sensors and loads of the MVC and LVC are controlled and accessed via the Node Power Controller (NPC), which is at the heart of the wet-side system. The NPC, Onewire and LVC systems are physically contained in the LVC housing; the MVC is a separate physical housing. As of this writing, the MVC's sensors are read by the a/d converters of the NPC (a 2nd-generation MVC will have its own a/d and controller and be read directly). The LVC can be further logically subdivided into Internal Loads (Cisco routers, muxes, etc.), the eight External 'Science' Loads, and miscellaneous items such as ground fault detection and the common voltage buses.

The ‘PMACS Console’ provides the visible operator user-interface to the ‘PMACS Server’. All communications with the NPC, PSC and HA7 systems are through the shore-side ‘PMACS Server’.

Fig. 1: A 'logical' view of the MARS system from a data management perspective

Besides being the communications gateway to the deployed hardware, the PMACS Server polls the NPC subsystems for all of the LVC and MVC engineering data at fixed intervals and logs it to various daily text files. Raw NPC text files are archived to directories accessible via an Apache web server. A PSC data-logging service will need to be created. The PMACS Server also runs an incomplete prototype logger service for the Onewire sensor network, which will need to be reworked.

Peering Deeper into PMACS Server

'PMACS Server' is both the name given to a hardware platform and to the collection of programs, written in various languages, running there. A great deal of research has gone into understanding how all of the pieces of the system interact. For our data management purposes, the relevant functionality 'PMACS Server' provides reduces to:

• Provides the gateway for communications with the PSC and NPC.
• Communication down the cable is via the XML-RPC protocol to servers running at those two targets.
• Acts as a bridge between XML-RPC and clients (primarily the PMACS Console) via a SOAP service.
• Nothing really prevents clients from communicating via XML-RPC directly, though that is probably unwise for the NPC due to loading. In fact, XML-RPC was used directly in my initial data logger for the PSC.
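The direct XML-RPC path mentioned above can be sketched in a few lines of Python using only the stdlib `xmlrpc.client` module. The host, port and "/RPC2" endpoint path here are hypothetical placeholders, as is the commented-out method name; the real ones come from the PMACS configuration and the target's server code.

```python
# Minimal sketch of talking XML-RPC directly to a shore-side target
# (PSC or NPC). Endpoint details below are hypothetical placeholders.
import xmlrpc.client

def make_client(host: str, port: int) -> xmlrpc.client.ServerProxy:
    """Build an XML-RPC proxy; no connection is made until a call."""
    return xmlrpc.client.ServerProxy(f"http://{host}:{port}/RPC2")

# psc = make_client("psc-host", 8080)
# status = psc.get_status()  # hypothetical method; one round trip per call
```

Each attribute access on the proxy becomes a remote call, which is why unthrottled direct clients could load down the NPC.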

• Periodically polls the NPC for all a/d channels and i/o-bit states and logs as raw counts to daily archive of 'csv' files

• Provides a SOAP service to access the Onewire sensor network and the PSC.

• Hosts an Apache web server. Important elements for our understanding:
  • Exposes the archive of the NPC daily csv files.
  • Contains an important XML file that describes the mapping of all of the fields of the NPC csv files (including calibration coefficients).
  • Contains two important WSDL files which enable clients to use the SOAP services to access the NPC, PSC and the Onewire server.
  • Hosts the MARS wiki.
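As a sketch of how those web-exposed pieces fit together, the following combines a mapping XML with a raw-counts CSV row to produce engineering values. The XML schema, field names, channel numbers and coefficients here are all illustrative assumptions; the real schema is whatever the file on the PMACS server actually defines.

```python
# Sketch: apply an XML field mapping (with calibration coefficients)
# to one raw-counts CSV record. Schema and values are hypothetical.
import csv, io
import xml.etree.ElementTree as ET

MAPPING_XML = """<mapping>
  <field name="v48_bus" channel="0" slope="0.0125" offset="0.0"/>
  <field name="v375_bus" channel="1" slope="0.1" offset="-2.0"/>
</mapping>"""

RAW_CSV = "2011-02-14T00:00:01,4000,3770\n"  # timestamp, ch0, ch1 counts

def load_mapping(xml_text):
    """Field name -> (a/d channel index, slope, offset)."""
    root = ET.fromstring(xml_text)
    return {f.get("name"): (int(f.get("channel")),
                            float(f.get("slope")),
                            float(f.get("offset")))
            for f in root.findall("field")}

def scale_row(row, mapping):
    """Convert one CSV row of raw counts to engineering values."""
    ts, counts = row[0], [int(c) for c in row[1:]]
    return ts, {name: counts[ch] * slope + offset
                for name, (ch, slope, offset) in mapping.items()}

mapping = load_mapping(MAPPING_XML)
row = next(csv.reader(io.StringIO(RAW_CSV)))
ts, values = scale_row(row, mapping)
```

The same pattern (load the mapping once, scale every row) is what an SSDS-side ingest program would do against the archived daily files.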

• The NPC-related SOAP services provide:
  • Mapping of named fields to a/d channels and I/O port bits, and the bridge between the SOAP and XML-RPC protocols.
  • Calibrated engineering values from the NPC a/d data to the PMACS Console (really to any SOAP client).
  • Switching of I/O ports at the request of the PMACS Console to perform the ground fault testing.
  • Switching on/off of both internal loads and external loads, etc.

NPC Logging

The NPC is a PC104 that resides in the LVC. It implements an XML-RPC interface that the PMACS Server wraps with a SOAP server. It controls power through four digital I/O boards plus A/D converter boards. The PMACS Server polls the NPC for a full data record that gives the raw counts of each A/D channel and the bit state of each I/O port. The record contains information about/from all Internal Loads (Cisco routers, etc.), External Loads (ports 1-8, 48 V and 375 V buses), ground fault, MV sensors, etc.

Important points:
• Accessed via a SOAP server or XML-RPC.
• Service described by a .wsdl file on the web server.
• The logger is well established and stable.
• Daily files are kept in a web-accessible archive location.
• Each record is raw counts; each value needs to be scaled using calibrations.
• MARS Operators want External Loads 1-8, Groundfaults, and Internal Loads broken out as 'logical devices'.
• It's not necessary to push NPC raw data into SSDS.

PSC Logging

The PSC is a PC104 that resides in the beach house with the 100 kV DC supply. It implements an XML-RPC interface that the PMACS Server wraps with SOAP. The initial console application/logger I wrote in early 2009 needs to be turned into a true logger.

Important points:
• Accessed via a SOAP server or XML-RPC.
• Service described by a .wsdl file on the web server.
• My prototype logger needs the following improvements:
  • An error log and general hardening.
  • Needs to write files to a commonly accessible archive location (exposed through the firewall).
  • Convert from XML-RPC to the SOAP protocol (very similar to the needs of Onewire, described later).
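A minimal sketch of the "true logger" plumbing, using only Python's stdlib `logging` package: daily-rotated data files in an archive directory plus a separate error log. The paths and logger names are placeholders, not the real archive layout.

```python
# Sketch: daily-rotating PSC data files plus a separate error log.
# Paths and names are hypothetical placeholders.
import logging
from logging.handlers import TimedRotatingFileHandler

def build_loggers(archive_dir, err_path):
    """Return (data_logger, error_logger) writing into archive_dir."""
    data = logging.getLogger("psc.data")
    data.setLevel(logging.INFO)
    h = TimedRotatingFileHandler(f"{archive_dir}/psc.csv",
                                 when="midnight", utc=True)
    h.setFormatter(logging.Formatter("%(message)s"))  # raw csv lines only
    data.addHandler(h)

    err = logging.getLogger("psc.errors")
    err.setLevel(logging.WARNING)
    eh = logging.FileHandler(err_path)
    eh.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    err.addHandler(eh)
    return data, err
```

With rotation handled by the logging layer, the poll loop itself only needs to format one csv line per sample and call `data.info(line)`.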

Groundfault

No one could explain how the ground fault detection works. Archeology has now yielded:
1. The NPC and PMACS Server log the state of all 4 a/d boards once per second.
2. The PMACS Console (vb.net) requests the PMACS Server (via SOAP) to make a gf() test of a given "name" (i.e. 'v400', 'v400r', 'v48', 'v48r' or 'test').
3. The PMACS Server maps the test name to a digital i/o board, port and bit, and sends a set command to the NPC via XML-RPC (which actually sets the i/o port bit to 0) that controls which FET gets switched in.
4. Groundfault test measurements are returned in a single a/d channel of the 1 Hz NPC raw data stream.
5. The PMACS Server waits ~6 seconds, parses the next NPC record, waits 4 more seconds, clears the i/o bit, and then returns the value scaled to microamps.
6. The PMACS Console updates its GUI and requests the next test in the sequence. (Note that test='test' never seems to run; I understand that to be a manual process performed by the operator if needed, after switching off the console's sequence of tests.)
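The recovered sequence can be sketched as follows. The NPC interface (`set_bit`/`latest_record`), the name-to-bit table, the ground-fault a/d channel number and the microamp scale factor are all hypothetical stand-ins for the real PMACS-Server internals.

```python
# Sketch of the recovered ground-fault test sequence. All interface
# names, channel numbers and scale factors are hypothetical.
import time

# name -> (board, port, bit); illustrative values only
GF_TESTS = {"v400": (2, 0, 3), "v400r": (2, 0, 4),
            "v48": (2, 0, 5), "v48r": (2, 0, 6)}

def run_gf_test(npc, name, gf_channel=31, scale_uA=0.5,
                settle_s=6.0, hold_s=4.0):
    """Run one gf() test and return the measurement in microamps."""
    board, port, bit = GF_TESTS[name]
    npc.set_bit(board, port, bit, 0)   # bit low switches in the test FET
    time.sleep(settle_s)               # wait for the 1 Hz stream to settle
    counts = npc.latest_record()[gf_channel]
    time.sleep(hold_s)
    npc.set_bit(board, port, bit, 1)   # restore the bit (test off)
    return counts * scale_uA           # scale raw counts to microamps
```

The ~10 s of sleeps per test explains why the console runs the tests as a slow sequence rather than concurrently.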

Note: there appears to be an error on the console, in either the test 'A, B, C, D' labeling or in the 375v, 375com, 48v, 48com text and boxes.

MARS Operators want all four groundfaults to be broken out of the NPC record to a single timestamped record and treated as a single ‘device’ so that values can be plotted against a common time axis.

INTERNAL LOADS

Internal loads consist of the LVC internal devices, such as the Cisco routers, the HA7 controller, fiber muxes, etc., that can all be switched on/off via the NPC. They also include the 'bus monitors', i.e. the common bus voltage measurements.

MARS Operators want all internal loads to be broken out of the NPC record to a single timestamped record and treated as a single ‘device’ so that values can be plotted against a common time axis.

Data need to be scaled to engineering units, with individual calibrations applied.

EXTERNAL LOADS

External loads are the 8 ‘science ports’. They are basically switchable on/off 48v and 375v power supplies with associated current sensors and circuit breakers.

MARS Operators want each external load to be broken out of the NPC record to timestamped records and treated as a standalone ‘device’ so that values can be plotted against a common time axis.

Data need to be scaled to engineering units, with individual calibrations applied.
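The "break out as logical devices" requirement (for external loads, internal loads and groundfaults alike) amounts to regrouping fields of the full NPC record into per-device timestamped records. A sketch, with a purely illustrative field-to-device grouping (the real grouping comes from the NPC mapping file):

```python
# Sketch: split one full NPC record into per-device timestamped
# records for plotting. The grouping below is illustrative only.
DEVICE_FIELDS = {
    "ext_load_1": ["port1_48v_amps", "port1_375v_amps", "port1_breaker"],
    "groundfault": ["gf_v400", "gf_v400r", "gf_v48", "gf_v48r"],
}

def break_out(timestamp, record, device_fields=DEVICE_FIELDS):
    """Return one timestamped {field: value} record per logical device."""
    return {dev: {"timestamp": timestamp,
                  **{f: record[f] for f in fields if f in record}}
            for dev, fields in device_fields.items()}
```

Because every per-device record carries the same source timestamp, values from different devices can still be plotted against a common time axis.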

Onewire Network

The Onewire (or HA7) network is an array of temperature sensors and a data acquisition box which was added to the LVC housing. The Onewire sensors replace sensors placed strategically on internal devices such as routers, external load controllers, etc.; the original sensors were considered 'too noisy' to be of use.

Important points:
• Accessed via a SOAP server (written in Ruby).
• Service described by a .wsdl file on the web server.
• Prototype logger needs some improvements:
  • Currently writes a single massive file (needs log rotation).
  • Starts on boot; needs a provision to restart if crashed.
  • No error log.
  • Needs to write files to a commonly accessible archive location (exposed through the firewall).
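The "restart if crashed" provision could be as simple as a small supervisor loop that relaunches the logger when it exits abnormally (a cron job or the OS's service manager would serve equally well). The command line is a placeholder; a sketch in Python:

```python
# Sketch: relaunch a logger process when it exits nonzero.
# The command line is a placeholder for the real Onewire logger.
import subprocess, time

def supervise(cmd, max_restarts=5, backoff_s=10.0, run=subprocess.call):
    """Re-run cmd on abnormal exit; give up after max_restarts tries."""
    rc = 1
    for _ in range(max_restarts):
        rc = run(cmd)
        if rc == 0:            # clean exit: stop supervising
            return rc
        time.sleep(backoff_s)  # brief backoff before relaunching
    return rc

# supervise(["ruby", "onewire_logger.rb"])  # hypothetical invocation
```

Keeping the supervisor outside the Ruby logger means the logger code itself stays untouched, in line with the least-disruptive constraint below.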

Additional Concerns

MARS software is complex and its documentation is 'sparse'. The software consists of many layers and was written in multiple languages. It is an operational system that needs to be kept running 24/7; therefore a least-disruptive path needs to be followed to minimize risk to ongoing science.

Least disruptive to me means:
• Resist modifying the existing application code as much as possible.
• Avoid adding additional languages and packages.
• Avoid updating existing packages, libraries, servers, compilers, etc. (SOAP, Apache, Python, Ruby, VB.Net).
• Avoid increasing the number and complexity of processing tasks as much as possible.

Approach

The raw NPC data and the information required to interpret it are already exposed on the PMACS web site (though this will need to be secured behind the MARS firewall in the near future). We need to develop robust loggers that write PSC and Onewire data to a similar web-accessible location, and programs to parse, convert and ingest that data into SSDS. The raw logging will happen on the PMACS Server; everything else can happen on the SSDS side. This will keep a separation between the legacy MARS and SSDS environments.