
M1 Mooring Turn

This is the procedure the Observatory Support Group (OSG) follows when turning an OASIS mooring. A turn means that the currently deployed mooring is recovered and a new one, with a different set of instruments, is deployed at the same nominal location.

To reduce the risk to SSDS data-processing continuity on the day of the turn, all the instrument XML and metadata processing can be tested against the dockside test deployment that is already part of standard OSG operating procedure. The test deployment is added to SSDS, clearly labeled as a test deployment, and the netCDF files and plots are generated, giving a complete end-to-end test of the data-processing path before the actual turn. When the actual deployment (the turn) happens, we just need to change the name of the top-level deployment, e.g. from "Test M1 - June 2021" to "M1 - June 2021".

Procedure for setting up a test mooring deployment

This procedure coincides with the standard operating procedure for assembling instruments on a mooring that is about to be deployed. As soon as the mooring configuration spreadsheet is distributed, the OASIS can is closed, and data begins flowing via the getM1 scripts on coredata, the steps below can be followed to set up test deployments and the corresponding netCDF files and plots using the DStoNetCDF.pl, combineTS.pl, and combineAll.pl scripts in the DPforSSDS/cimt project. This test helps validate the instrument metadata XML and the ssds.cfg file, and produces a complete end-to-end test of the full SSDS data path.

  1. Copy the SSDS instrument metadata config files from the previous deployment to the new deployment to create a starting place. You can do this logged in as ssdsadmin on elvis, but since these files are on an Atlas share, mounting the share on your local machine makes it easier to use your editor of choice. These instructions show the process on a Mac, but it can be done in Windows too.
  2. First, I mount smb://atlas.shore.mbari.org/ssdsdata on my Mac using Finder.

    (screenshot: mount-ssdsdata)

  3. Then, I copy the current deployment directory (mooring/m1/2020 in this example), to the new deployment year (2021).

    (screenshot: duplicate-year)

  4. You now have the previous year's deployment configuration copied to a directory for the current year (hereafter referred to as <YYYY> or yyyy). The new directory will have subdirectories named data, cfg, and xml (and possibly logs, which is likely empty).

    Note

    To save time, you can instead go back two deployments and copy those files, since instruments are often reused from the previously recovered mooring.
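As a concrete sketch of steps 2-4 (the mount point here is an assumption; on a Mac the Finder typically mounts the share under /Volumes), the copy can be done from a terminal. The example below runs against a scratch directory so it is safe to dry-run:

```shell
# Sketch of steps 2-4: copy the previous deployment's directory to the new year.
# MOUNT is wherever the ssdsdata Atlas share is mounted (an assumption; adjust
# to your real mount point, e.g. /Volumes/ssdsdata on a Mac).
MOUNT=${MOUNT:-/tmp/ssdsdata-demo}

# Stand in for the existing 2020 deployment directory for this dry run.
mkdir -p "$MOUNT/mooring/m1/2020/cfg" \
         "$MOUNT/mooring/m1/2020/xml" \
         "$MOUNT/mooring/m1/2020/data"

# The actual copy: previous year -> new year.
cp -R "$MOUNT/mooring/m1/2020" "$MOUNT/mooring/m1/2021"

# The new directory should now contain cfg, data, and xml.
ls "$MOUNT/mooring/m1/2021"
```

Point MOUNT at the real share once the commands look right.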

  5. OSG keeps a spreadsheet with all the instruments and their SSDS IDs on an Atlas share named smb://atlas.shore.mbari.org/DMO, in the OSG/LocalMoorings/M1-YYYY directory.

    (screenshot: osg-ss-location)

  6. Open the spreadsheet to get the list of all the instruments and their associated SSDS IDs.

    (screenshot: osg-ss)

    Warning

    MAKE SURE you double-check that the IDs specified in the Excel spreadsheet are not already deployed on the mooring. Using an ID that is already deployed will wreak havoc on the current deployment's data streams.

  7. Now, open the ssdsdata/mooring/m1/YYYY/cfg/ssds.cfg file in your favorite editor

    Note

    Do not use an editor that could inject hidden characters into the file (like Microsoft Word); use a plain-text editor. Visual Studio Code is used in this example.

    Note

    You can consult the OASIS2SSDS Documentation for details on the format of the ssds.cfg file.

  8. Update the comment in the first line of the file to reflect the author and year of the deployment configuration.

  9. The second line in the file (named 'platform') contains a number that rotates between two fixed values from deployment to deployment. It represents the actual toroid float the buoy is deployed on and differentiates the two rotating deployments; it is either 1414 or 1305. Since this ssds.cfg file was copied from the currently deployed mooring, just use the other number. For example, the ssds.cfg file I copied over had a platform ID of 1414, so I changed it to 1305. Note that each line in the file needs three changes: the SSDS ID of the instrument, the YYYY part of the path that represents the year, and the XML file name at the end, which must match the SSDS ID. For example:

    platform = 1305, http://dods.mbari.org/data/ssdsdata/mooring/m1/2021/xml/1305.xml
    

    Note

    If you copied the files from two deployments ago, this number will likely be correct as this number is rotated with each mooring turn.
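Since every URL in the file embeds the deployment year, the YYYY changes can be done with one substitution (the SSDS IDs still have to be edited by hand against the spreadsheet). A sketch, demonstrated on a scratch copy of the file; point CFG at the real ssds.cfg once verified:

```shell
# Sketch: bump the deployment year in every path of a copied ssds.cfg.
# CFG is a scratch file here; substitute the real path when ready.
CFG=${CFG:-/tmp/ssds-demo.cfg}
cat > "$CFG" <<'EOF'
platform = 1414, http://dods.mbari.org/data/ssdsdata/mooring/m1/2020/xml/1414.xml
instrument = Metsys,1397,http://dods.mbari.org/data/ssdsdata/mooring/m1/2020/xml/1397.xml
EOF

# Replace the year component of the m1 path everywhere in the file.
# -i.bak keeps a backup and works with both GNU and BSD sed.
sed -i.bak 's|/mooring/m1/2020/|/mooring/m1/2021/|g' "$CFG"

# Every line should now reference the new year.
grep 2021 "$CFG"
```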

  10. The next line contains the default_metadata entry; the only thing that needs to change here is the YYYY in the path, to match the year of the new deployment.

  11. For the rest of the lines in the file, change the SSDS IDs to match those specified in the OSG spreadsheet. Special notes for individual entries are below. Also, do NOT forget to change the year entry in each line.

    1. Note that occasionally, you might find a line for an instrument type that is duplicated, like this:

      instrument = Metsys,1397,http://dods.mbari.org/data/ssdsdata/mooring/m1/2020/xml/1397.xml,2020/08/25 11:00:00,2021/04/01 22:57:00
      instrument = Metsys,1453,http://dods.mbari.org/data/ssdsdata/mooring/m1/2020/xml/1453.xml
      

      This happens when an instrument is replaced during the mooring deployment. In this example, the original Metsys (1397) ran from 8/25/2020 until 4/1/2021, when it was replaced by a Metsys with ID 1453. When setting up a new deployment, just replace both lines with a single line carrying the new instrument ID, like the following:

      instrument = Metsys,1397,http://dods.mbari.org/data/ssdsdata/mooring/m1/2021/xml/1397.xml
      
    2. Another note: the ISUS group didn't really care about tracking the instrument's history, so it just reused the same SSDS ID repeatedly; that entry should always use SSDS ID 1807.

      Warning

      Since the ISUS is the same number as is currently deployed, you need to comment out the entry for 1807 by placing a # sign at the front of the line. When the mooring is actually turned, you will uncomment that line.

    3. For the instrument named 'MicroCAT - elevator' in the OSG spreadsheet, that matches 'Microcat' in the ssds.cfg file.

    4. The 'Scattering; HydroScat-2' instrument in the OSG spreadsheet is the 'HOBI_HS2' in the ssds.cfg file.
    5. As with the ISUS, the Durafet pH group didn't really care about tracking the instrument's history, so it just reused the same SSDS ID repeatedly; that entry should always use SSDS ID 1809.
    6. The OSG spreadsheet has SSDS IDs for the HyperOCR instruments. These don't actually show up in the ssds.cfg file, but will be updated in the XML for the HyperOCR instrument. You can ignore those until we get to the section where we update the XML files.
    7. For the Microcats that are listed with depth (e.g. 'MicroCAT - 40 m'), the following mapping is used:
      1. MicroCAT - 40m = IM02
      2. MicroCAT - 60m pressure = IM03
      3. MicroCAT - 80m = IM04
      4. MicroCAT - 100m pressure = IM05
      5. MicroCAT - 150m = IM06
      6. MicroCAT - 200m pressure = IM07
      7. MicroCAT - 250m = IM08
      8. MicroCAT - 300m pressure = IM09
    8. The entry for 'Scattering/fluor, HydroScat-2' on the OSG spreadsheet matches 'IM_HS2' in the ssds.cfg file.
    9. The entry for 'Fluometer/nephalometer, FLNTUSB' on the OSG spreadsheet matches 'ECO_FLNTU' in the ssds.cfg
    10. The entry for 'MicroCAT - 10m cage' on the OSG spreadsheet matches 'Microcat2' in the ssds.cfg file
    11. The entry for 'MicroCAT - 20m cage' on the OSG spreadsheet matches 'Microcat3' in the ssds.cfg file
    12. The entry for 'MicroCat Spare' on the OSG spreadsheet matches 'IM10' in the ssds.cfg file
    13. The entry for 'SBE 37IM w O2- 225m' on the OSG spreadsheet matches 'IM11' in the ssds.cfg file
    14. The entry for 'SBE 37IM w O2- 50m' on the OSG spreadsheet matches 'IM12' in the ssds.cfg file
    15. The entry for 'GPS_CLOCK' is a 'virtual' instrument and always uses the same SSDS ID of 1742

      Warning

      Since the GPS_CLOCK is the same number as is currently deployed, you need to comment out the entry for 1742 by placing a # sign at the front of the line. When the mooring is actually turned, you will uncomment that line.

    16. The entry for 'Fluorometer, WETStar' on the OSG spreadsheet matches 'Fluor' in the ssds.cfg file. Note there is NO XML for this instrument as it does not report data.

    17. You can leave the outputFileURLPrefix and inputFileURLPrefix entries the same in the ssds.cfg file
    18. For the three process lines, just change the year everywhere in each line to match the year of this deployment.
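The two warnings above (ISUS 1807 and GPS_CLOCK 1742) follow the same pattern, so commenting those lines out can be sketched as one edit. This is a sketch on a scratch file; the real entries may have slightly different spacing, so eyeball the result:

```shell
# Sketch: comment out the always-reused IDs (ISUS 1807, GPS_CLOCK 1742) in the
# test deployment's ssds.cfg so they don't collide with the live mooring.
# CFG is a scratch file standing in for the real ssds.cfg.
CFG=${CFG:-/tmp/ssds-demo2.cfg}
cat > "$CFG" <<'EOF'
instrument = ISUS,1807,http://dods.mbari.org/data/ssdsdata/mooring/m1/2021/xml/1807.xml
instrument = GPS_CLOCK,1742,http://dods.mbari.org/data/ssdsdata/mooring/m1/2021/xml/1742.xml
EOF

# Prefix the matching lines with '#'; remove the '#' again at turn time.
sed -i.bak -E 's/^(instrument = (ISUS,1807|GPS_CLOCK,1742),)/#\1/' "$CFG"

# Both entries should now be commented.
grep '^#' "$CFG"
```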
  12. Once the ssds.cfg file is configured to match the IDs in the OSG spreadsheet, it's time to get all the XML set up correctly. With some small exceptions that will be noted, the general process is to go line by line through the ssds.cfg file and make sure there is a matching XML file in the xml directory and that it has been updated with up-to-date information. I usually walk back through previous deployments' xml directories and look for the most recent XML file with the correct SSDS ID, then copy it to the xml directory for my new deployment. You can remove any XML files in the new deployment that don't have entries in the ssds.cfg file (that is likely most of them). Also make sure you copy over the defaultInstrument.xml, fluorProcessRun.xml, oasisCanProcessRun.xml, processRun.xml and SampleRecordDescription.xml files to the xml directory for your new deployment.

    Note

    Technically, all the XML files are stored in a CVS repository and should be under source control. We have wandered away from that, but it would be easy to commit them and keep them updated.

  13. Once all the XML files are in the xml directory, I go through them one by one and verify they are correct and that the comments are correct. Often there is a comment stating which deployment the XML applies to; make sure that is updated to match the deployment you are preparing for. In the XML, make sure RecordVariable names are not set to standard coordinate axis names (longitude, latitude, depth, time); these are reserved for the OceanSITES data sets, which derive from the instrument netCDF files produced with this metadata. Instead choose specific names, e.g. 'MetsysTime' for the Metsys time field. If you made changes to the XML, you probably want to check the new XML file back into source control. Using the OASIS-specific schema will help with editing.

  14. Edit the HyperOCR XML file and make sure the SSDS IDs of the child OCR devices in the XML match those that are in the OSG spreadsheet.

    Warning

    Make sure you assign the proper name of the deployment in the 1414.xml or 1305.xml files (depending on which one is used in the deployment). This is REALLY important. For example, for this test deployment, I edited the 1305.xml file and changed the 'name' attribute of the 'Deployment' tag to Test M1 - June 2021 and adjusted the start date to the date the test will start.

  15. Next, make sure the 'nominalDepth' attributes on the 'Deployment' tag in the Microcat's XML matches what is in the OSG spreadsheet for all Microcats.

  16. While not part of the SSDS configuration, you need to make sure that the scripting on the machine coredata.shore.mbari.org points to this configuration file. If you ssh to coredata and switch over to the coredata account using sudo -u coredata -i, you can look at the script that downloads the M1 data. It's located in /u/coredata/moorings/downloads/bin and should be named getM1_YYYYMM for the year and month of this new deployment. That file points to a configuration directory located at /mbari/oasis_coredata/deployments/m1/YYYYMM/cfg, which should contain an m1.cfg file. In that m1.cfg file, there is a line that should read ssds /ssdsdata/mooring/m1/YYYY/cfg/ssds.cfg, which should point to the ssds.cfg file you just edited. Verify that path is correct.
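The verification in the step above can be sketched as a grep (demonstrated on a scratch file; on coredata you would point M1CFG at the real m1.cfg):

```shell
# Sketch: confirm m1.cfg names the ssds.cfg you just edited.
# M1CFG is a scratch stand-in for /mbari/oasis_coredata/deployments/m1/YYYYMM/cfg/m1.cfg
M1CFG=${M1CFG:-/tmp/m1-demo.cfg}
echo 'ssds /ssdsdata/mooring/m1/2021/cfg/ssds.cfg' > "$M1CFG"

# Expect an 'ssds' line pointing at the new year's cfg directory.
grep '^ssds ' "$M1CFG"
```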
  17. Once the ssds.cfg and XML files are ready to go, you need to turn on SSDS publishing in the getM1 download script for the new test deployment. Log in to the machine coredata.shore.mbari.org, switch over to the coredata account using sudo -u coredata -i, cd to the /u/coredata/moorings/downloads/bin directory, and edit the getM1_YYYYMM script. Towards the bottom of the script, change SEND_SSDS to TRUE so it looks like this:

    set SEND_SSDS = $TRUE
    ####set SEND_SSDS = $FALSE
    
  18. This enables the OASIS2SSDS processing that is installed on elvis.shore.mbari.org. You can see it invoked at the bottom of the getM1_YYYYMM script; that software reads the data and XML and publishes it to SSDS.

  19. After letting this process run for a bit, you can check the logs that are located on the ssdsdata Atlas share in the mooring/logs directory. The files that have log entries that might be of interest are extract.log, extractRawData.log, process.log, processData.log and runs.log.
  20. TODO kgomes - Document the Explorer view and how to use Aqua Data Studio to edit metadata in the DB and stitch together the full parent-child tree. The automatic attachment of child deployments that is supposed to work in SSDS does not, and often it will generate 'new' deployments of the parent device even though one already exists. You basically need to use the combination of the Explorer and a tool like Aqua Data Studio to edit the metadata in the database and construct the proper parent-child tree structure for the test deployment.

    Warning

    This next step does not work and I am still debugging why createDuplicateDeepDeployment is broken. More to come; for now, the next step can be skipped. While it is broken, it is easiest to just find the most recent alternate virtual instrument (from the previously recovered mooring deployment) and move it to point to the test deployment.

  21. There is a 'virtual' instrument that tracks download statistics from the mooring. To allow setting this up in a test environment, the device IDs for the 'dlinfo' instrument are rotated in the same manner as the device IDs for the mooring toroid: if the toroid ID for the new deployment is 1414, the 'dlinfo' instrument ID is 1698, and if the toroid ID is 1305, the 'dlinfo' ID is 1768. Before making changes here, ssh into coredata, look at the getM1_YYYYMM script, and make sure the correct section of deviceID and parentID is uncommented (and the other commented out). Once that is verified, you can create the new 'dlinfo' deployment. The easiest way is to use SSDS to copy a previous 'deployment' of the download stats 'device' and then attach that new copy to this new deployment; the most direct way is the createDuplicateDeepDeployment service call. Go to the SSDS Explorer and select 'Mooring Deployments' from the drop-down. Go to the currently deployed M1, expand the attached devices, click on the 'dlinfo' deployment, and write down the ID from the details panel on the right side (in this example, it's 51295). Then construct the following call and paste it in your browser (making sure to insert the ID you just wrote down and the startDate you want to use for the new deployment):

    http://new-ssds.mbari.org:8080/servlet/MetadataAccessServlet?method=createDuplicateDeepDeployment&objectToInvokeOn=DataProducerAccess&p1Type=moos.ssds.metadata.DataProducer&p1Value=DataProducer|id=51295&p2Type=Date&p2Value=2021-06-14T16:00:00Z&p3Type=boolean&p3Value=false&p4Type=Date&p4Value=2021-06-14T16:00:00Z&p5Type=String&p5Value=getM1-download&p6Type=String&p6Value=&delimiter=|
    

    Here are the keys to the parameters of this call:

    Key:
    ----
    http://localhost:8080/servlet/MetadataAccessServlet
    ?responseType=text
    &delimiter=|
    &objectToInvokeOn=DataProducerAccess
    &method=createDuplicateDeepDeployment
    &p1Type=DataProducer
    &p1Value=DataProducer|id=XXXX (XXXX is the ID of the deployment to copy)
    &p2Type=Date
    &p2Value=XXXXXX (XXXXXX is the start date of the new copy in XML format YYYY-MM-DDTHH:MM:SSZ)
    &p3Type=boolean
    &p3Value=(true|false) (this is to indicate if you want the original deployment to be closed)
    &p4Type=Date
    &p4Value=XXXXXX (XXXXXX is the end date for the original deployment (if p3Value is true))
    &p5Type=String
    &p5Value=XXXXXX (XXXXXX is the DataProducer name of the new DataProducer)
    &p6Type=String
    &p6Value=XXXXXX (XXXXXX is the base URL to use for the new DataContainers that will be created).
    &delimiter=|
    

    After successfully running that query in your browser, go back to the SSDS Explorer and search for deployments by Device ID using the ID from the currently deployed dlinfo. The call you made created a new duplicate deployment of the same device but made it parentless, so you need to search for all the deployments of that device and find the one with the start date you entered in the call. Write down the ID of the new dlinfo deployment. In ADS, in the DataProducer table, edit the new 'dlinfo' deployment record you just got the ID for and adjust its parentID_FK to the new mooring and its DeviceID_FK to match the one for this deployment. While there, edit the start time and name as appropriate.
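The long service call is easier to get right when assembled from variables keyed to the parameter list above. A sketch, using the example values from this document (substitute your own ID, dates, and name):

```shell
# Sketch: build the createDuplicateDeepDeployment URL piece by piece.
# Values below are from the example in this document; replace with yours.
SERVER=http://new-ssds.mbari.org:8080
DEPLOYMENT_ID=51295            # ID of the dlinfo deployment to copy (p1)
START=2021-06-14T16:00:00Z     # start date for the new copy (p2/p4)
NAME=getM1-download            # DataProducer name for the new copy (p5)

URL="$SERVER/servlet/MetadataAccessServlet?method=createDuplicateDeepDeployment"
URL="$URL&objectToInvokeOn=DataProducerAccess"
URL="$URL&p1Type=moos.ssds.metadata.DataProducer&p1Value=DataProducer|id=$DEPLOYMENT_ID"
URL="$URL&p2Type=Date&p2Value=$START"
URL="$URL&p3Type=boolean&p3Value=false"   # false = leave original deployment open
URL="$URL&p4Type=Date&p4Value=$START"
URL="$URL&p5Type=String&p5Value=$NAME"
URL="$URL&p6Type=String&p6Value=&delimiter=|"

echo "$URL"   # paste into a browser, or fetch with: curl "$URL"
```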

  22. Edit the getM1_YYYYMM script on coredata to use the proper device and parent ID (this will be the device ID of the toroid of the mooring deployment) and make sure that the /oasis/bin/ssdsSubmit.pl $deviceId $parentId "$starttime_es,$endtime_es,$filesize,$rtnsts" line in the script is configured to run.

  23. Once it looks like everything has been processed properly into the SSDS and you can see the test deployment, let it run for a bit so more data can be sent to the SSDS. After a while, you can manually generate the processed data products to check everything.

    1. ssh into elvis as you
    2. sudo -u ssdsadmin -i
    3. cd /u/ssdsadmin/dev/DPforSSDS/cimt
    4. vi DStoNetCDF.pl and change lookup for M1Test to point to 'Test M1 - MONTH YEAR'
    5. Exit out of the editor and, at the command line, run the following (put the proper year and month in the DDIR line for your deployment):
      /bin/csh
      setenv LD_LIBRARY_PATH /usr/local/lib
      setenv DISPLAY elvis:4
      setenv SSDS_SERVER new-ssds.mbari.org:8080
      setenv PATH $PATH\:/usr/local/bin\:.
      setenv ODIR /mbari/ssdsdata/deployments
      set DDIR=202106
      source /usr/local/ferret_paths.csh
      DStoNetCDF.pl -mooring M1Test -ssdsServer $SSDS_SERVER -ssdsDataServer $SSDS_SERVER -outputDir $ODIR -verbose -current
      
  24. After the processing runs, look at http://dods.mbari.org/data/ssdsdata/deployments/m1test/current_qcPlots.html to see if everything looks OK. If everything looks good, you could then edit the /u/ssdsadmin/dev/DPforSSDS/cimt/processCurrentMooringDataStreams.sh file, change the DDIR under the Test M1 section to the proper year and month, and uncomment those lines to let the netCDF product generation run hourly. This is not strictly necessary; it can also just be run manually at certain times to check files.

    #
    # Test M1
    #
    set DDIR=202106
    DStoNetCDF.pl -mooring M1Test -ssdsServer $SSDS_SERVER -ssdsDataServer $SSDS_SERVER -outputDir $ODIR -verbose -current
    

Deployment Day

Note

The inductive CTD string is turned off just before deployment so the last couple of days there will likely not be any inductive CTD data.

The boat normally leaves very early in the AM and steams to a location near the currently deployed M1. The mooring is lifted off the deck and deployed over the side of the boat with a crane. Then the boat slowly steams away until all the cable has been deployed. Last, the anchor is pushed over the side by lifting one side of the plate it sits on, and it slides off the back.

Once the buoy enters the water, the software-side steps can begin.

  1. In the getM1_YYYYMM from the old mooring, turn off the SEND_SSDS flag (set to false). This disables the publishing of data to the SSDS for the old mooring so it can be 'closed'.
  2. Once that is done, data effectively stops flowing from the old mooring. Take the following steps to 'close' down the old mooring deployment:

    1. ssh into elvis and switch over to the ssdsadmin account (sudo -u ssdsadmin -i)
    2. cd dev/DPforSSDS/cimt and vi the processCurrentMooringDataStreams.sh and comment out the old M1 processing and the Test M1 processing (if enabled). Save and exit.
    3. Now, open the SSDS Explorer in your browser
    4. Search for 'M1 -' to get all M1 deployments (due to a bug, you have to do this search twice if this is the first time to the page)
    5. Find the old deployment and click on the name. You will see a list of details in the right pane which will contain the ID which you should write down (in this example it's 50827).

      (screenshot: get-ssds-id)

    6. Expand the deployment by clicking on the arrow to the left of the name, then click on the arrow next to 'attached devices' to expand that list and then click on the name 'Radiometer-Hyperspectral' and write down that ID (in this example it's 50949).

      (screenshot: get-ssds-child-id)

    7. Go to Aqua Data Studio (ADS), connect to the database on Dione, and search for all deployments linked to the top-level deployment. In this case, the M1 deployment ID is 50827 and the Radiometer is 50949, so the SQL is select * from ssdsdba.DataProducer where id = 50827 or ParentID_FK = 50827 or ParentID_FK = 50949.

    8. Edit all those rows and add an end date to all rows that don't have one to 'close' the deployment. Verify by refreshing Explorer and drilling down again.
    9. In the SSDS Explorer, search for 'M1 -' again to get the IDs for the test deployment (in this example, 51499 and 51500)
    10. In ADS, update the name of the parent deployment to get rid of 'Test', update the startDate to match the end date of the deployment you just closed, and add the nominal latitude and longitude values based on where the mooring should be located.

    Warning

    Adding the nominal latitude and longitude is critical to the downstream processing of all the data. You cannot skip this step

    1. In ADS, run select * from ssdsdba.DataProducerGroup to find the IDs for the DataProducerGroups M1 Deployments and Mooring Deployments (107 and 111 respectively in this example).
    2. Insert the new M1 deployment into these two groups. In this example, the statements were INSERT INTO ssdsdba.DataProducerAssocDataProducerGroup (DataProducerID_FK, DataProducerGroupID_FK) VALUES (51499, 107) and INSERT INTO ssdsdba.DataProducerAssocDataProducerGroup (DataProducerID_FK, DataProducerGroupID_FK) VALUES (51499, 111)
    3. You now have to uncomment the ISUS (1807) and ClockSync (1742) lines in the ssds.cfg file for the new deployment. This should eventually create new child deployments under the new M1 deployment in the SSDS.
    4. On elvis, edit the ~/dev/DPforSSDS/cimt/DStoNetCDF.pl script and update the M1 entry in the ssdsMooringDeplNames dictionary so that it points to the newly named deployment (for example: M1 - June 2021)
    5. On elvis, vi processCurrentMooringDataStreams.sh, update the DDIR for the M1 entry to the new YYYYMM, and uncomment those lines so they will start running again.
    6. Again on elvis, cd /mbari/ssdsdata/deployments and vi previous.html to manually create a link to the 'closed' deployment. Basically, copy/paste the second-to-last line in the M1 section and edit it so it matches the new YYYY, MM, and dates. Also, update the 'current' line to point to the new YYYYMM. In this example, it looked like this when done:

      201907 2019-07-29 to 2020-08-25
      202008 2020-08-25 to 2021-06-14
      202106 2021-06-14 to present
  3. Again, on elvis, cd ~/dev/DPforSSDS/cimt

  4. vi DEPLOYMENTS, copy the four lines from the previously closed deployment (in this case the one from 2019), and add four new lines for the deployment you just closed:

    DStoNetCDF.pl -mooring M1 -deployment "M1 - July 2019" -ssdsServer new-ssds.mbari.org -ssdsDataServer new-ssds.mbari.org -outputDir /mbari/ssdsdata/deployments -procClosed -verbose
    combineM.pl -mooring M1 -ssdsServer new-ssds.mbari.org -deployments 201907 -ssdsDataServer new-ssds.mbari.org -inputDir /mbari/ssdsdata/deployments
    combineTS.pl -mooring M1 -ssdsServer new-ssds.mbari.org -deployments 201907 -ssdsDataServer new-ssds.mbari.org -inputDir /mbari/ssdsdata/deployments
    combineAll.pl -mooring M1 -ssdsServer new-ssds.mbari.org -deployments 201907 -ssdsDataServer new-ssds.mbari.org -inputDir /mbari/ssdsdata/deployments
    
    DStoNetCDF.pl -mooring M1 -deployment "M1 - August 2020" -ssdsServer new-ssds.mbari.org -ssdsDataServer new-ssds.mbari.org -outputDir /mbari/ssdsdata/deployments -procClosed -verbose
    combineM.pl -mooring M1 -ssdsServer new-ssds.mbari.org -deployments 202008 -ssdsDataServer new-ssds.mbari.org -inputDir /mbari/ssdsdata/deployments
    combineTS.pl -mooring M1 -ssdsServer new-ssds.mbari.org -deployments 202008 -ssdsDataServer new-ssds.mbari.org -inputDir /mbari/ssdsdata/deployments
    combineAll.pl -mooring M1 -ssdsServer new-ssds.mbari.org -deployments 202008 -ssdsDataServer new-ssds.mbari.org -inputDir /mbari/ssdsdata/deployments
    
  5. Now manually run each of the new lines you just created to build the complete, closed deployment. In this case I ran (note I did not set DDIR as I did above):

    /bin/csh
    setenv LD_LIBRARY_PATH /usr/local/lib
    setenv DISPLAY elvis:4
    setenv SSDS_SERVER new-ssds.mbari.org:8080
    setenv PATH $PATH\:/usr/local/bin\:.
    setenv ODIR /mbari/ssdsdata/deployments
    source /usr/local/ferret_paths.csh
    DStoNetCDF.pl -mooring M1 -deployment "M1 - August 2020" -ssdsServer new-ssds.mbari.org -ssdsDataServer new-ssds.mbari.org -outputDir /mbari/ssdsdata/deployments -procClosed -verbose
    combineM.pl -mooring M1 -ssdsServer new-ssds.mbari.org -deployments 202008 -ssdsDataServer new-ssds.mbari.org -inputDir /mbari/ssdsdata/deployments
    combineTS.pl -mooring M1 -ssdsServer new-ssds.mbari.org -deployments 202008 -ssdsDataServer new-ssds.mbari.org -inputDir /mbari/ssdsdata/deployments
    combineAll.pl -mooring M1 -ssdsServer new-ssds.mbari.org -deployments 202008 -ssdsDataServer new-ssds.mbari.org -inputDir /mbari/ssdsdata/deployments
    
  6. This creates a complete web page with all the processed data (for this example, it can be found at http://dods.mbari.org/data/ssdsdata/deployments/m1/m1_202008_qcPlots.html)

  7. I usually then email Fred Bahr to let him know the SSDS ID of the new top level toroid (1414 or 1305 depending on the turn)
  8. Next, you want to edit the scripts that are used to monitor the data streams coming into SSDS.
    1. ssh into pismo as 'kgomes'
    2. cd scripts
    3. vi rawPacketIdsToMonitor.txt and change the IDs of the various instruments to match those that are deployed
  9. Also, there is a SPOT tracker that is used to push M1 locations to the ODSS. To update that:
    1. ssh using ssh driftertrack@pismo.shore.mbari.org
    2. cd dev/MBARItracking/scripts
    3. vi drifter.py
    4. Look for the block of code that links the number of the stella drifter to the M1 mooring (Jared usually lets me know that number) and set the correct stella drifter to proxy M1. It should look like this:
      if platformName == 'stella108':
          platformName = 'm1'
          platformType = 'mooring'
      

Handy Tips

  1. If you need to delete a deployment (parent, child, etc.) there is a URL you can call to clean things up. It does a 'deep delete' and prunes all the necessary downstream products and child deployments.
    http://new-ssds.mbari.org:8080/servlet/MetadataAccessServlet?responseType=text&delimiter=\|&objectToInvokeOn=DataProducerAccess&method=deepDelete&p1Type=DataProducer&p1Value=DataProducer|id=<deploymentID>
    

TODOs

  1. Document CVS repo for XML files
  2. Document how to register device in SSDS to get new ID and then generate a new XML document.
  3. Document how to 're-send' XML files to SSDS as packets by 'touching' the XML document so the OASIS2SSDS processing thinks it is a new file. Something has to change (even if it's just a comment) in the XML so the SSDS thinks it's a new metadata packet.
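Until that is documented properly, the trick can be sketched as follows: since the content itself must differ (a bare touch of the timestamp is not enough), append a comment to the XML. An XML comment after the root element keeps the file well-formed. Demonstrated here on a scratch file; point XML at the real file in the deployment's xml directory:

```shell
# Sketch of the 're-send' trick: change the XML content trivially so
# OASIS2SSDS treats it as a new metadata packet.
# XML is a scratch stand-in for a real file under .../YYYY/xml/.
XML=${XML:-/tmp/resend-demo.xml}
echo '<Metadata></Metadata>' > "$XML"

# Append a dated comment; comments after the root element are valid XML.
printf '<!-- resent %s -->\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" >> "$XML"

tail -1 "$XML"
```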