
ESP Portal Operation and Troubleshooting

  1. ESP Portal Web Application
  2. CouchDB Fauxton Web Interface

Restarting Services

Sometimes, you will want to simply restart the services that are running on the portal machine. In order to do that:

  1. ssh into the esp-portal.mbari.org machine
  2. There are 5 services running as containers and they are:
    1. container-esp-couchdb.service: This is the CouchDB server
    2. container-esp-ia-server.service: This is the image analysis service
    3. container-esp-postgresql.service: This is the PostgreSQL database
    4. container-esp-web-log-parser.service: This is the log parser service
    5. container-esp-web-portal.service: This is the web portal itself, which is also where the FTP sync and log parsing run.
  3. Usually, you want to restart the web portal, but the same method applies to all. For this example, in order to stop the web portal, you would run:

    systemctl --user stop container-esp-web-portal.service

  4. Then to start it, run:

    systemctl --user start container-esp-web-portal.service
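
If you just want to bounce a service, systemctl's 'restart' verb does the stop and start in one step, and journalctl can show you the container logs. A minimal sketch (assuming the units run in the same user session you ssh'd into):

    # Restart the web portal in one step (stop + start)
    systemctl --user restart container-esp-web-portal.service

    # Confirm it came back up
    systemctl --user status container-esp-web-portal.service

    # Tail the last 50 log lines if something looks off
    journalctl --user -u container-esp-web-portal.service -n 50 --no-pager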

Create a New Deployment

While it has been on my list to create a web form (with authentication) to allow users to create new deployments, I never got that done, so you have to create them using the CouchDB interface.

  1. First, go to the CouchDB Fauxton Web Interface
  2. Login (ask SE IE group for password)
  3. Click on the 'esp' database.
  4. In the left side menu, click on the arrow next to 'deployments', then on the arrow next to 'Views', then click on 'byNameAndESPName'. This will bring up the list of all deployments, and in the 'key' column you can see a combination of the deployment name and the ESP name. Usually, you just want to create a new deployment by copying an old one.
  5. Scroll through the list and look for a recent deployment that has the ESP you are creating a new deployment for and click on that row. You will see the full JSON object for that deployment.
  6. Skip over the '_id' and '_rev' rows, and select the lines that include:

    1. resource
    2. name
    3. startDate (maybe endDate if the deployment is closed)
    4. description
    5. notifySlack
    6. slackChannel
    7. esp (and all the properties inside that object)
  7. Copy the selection

  8. Click on Cancel
  9. Click on 'Create Document'
  10. After the '_id' row, add a comma, hit return and paste in the text you copied
  11. Edit the rows to match your deployment information. Note that if there is an endDate in the pasted text, it should be removed or the portal will not parse the log files. Also, for the initial parsing, I usually set 'notifySlack' to false and then after the log file has been parsed, I change it to true. This prevents all the pre-deployment stuff from being published to Slack (unless you want that).
  12. After changes have been made, click on 'Create Document'
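
If you would rather skip Fauxton, the same document can be created through CouchDB's standard HTTP API. This is just a sketch, assuming CouchDB listens on the default port 5984 on the portal host and that you saved the edited JSON (without the '_id' and '_rev' fields) to a file named deployment.json; the credentials shown are placeholders:

    # POST a new deployment document to the 'esp' database;
    # CouchDB generates the _id and _rev for you
    curl -u admin:PASSWORD -X POST http://esp-portal.mbari.org:5984/esp \
         -H 'Content-Type: application/json' \
         -d @deployment.json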

Delete/Reparse a Mission

If you want to remove a mission from the ESP portal, there are currently several steps you need to take because I don't have a delete function implemented in the portal (I know, I know). If you recall, the data for a mission lives in three places: the file system, where the files are copied down from the FTP server for parsing; the CouchDB database, where the parsed mission is stored; and a PostgreSQL database, where the ancillary data is stored. So, in order to clean things out:

  1. I usually first go to the CouchDB web interface, edit the deployment document, and add an end date to the deployment I am going to remove. This will keep the parser from trying to parse any data from the log file (essentially disabling deployment parsing). While you are editing the document, copy the _id of the document as you will need it later. I usually copy and paste it into a text editor to have it handy. For example, it should be something like 9c1f7faef87ab9239922f8666d001201
  2. Next, you need to clean out the ancillary data from the esp_ancillary database. I use Aqua Data Studio to connect to the database and then run the following query, substituting in the deployment ID you copied in the previous step (if you'd rather run these from a terminal, see the psql sketch at the end of this list).

         delete from ancillary_data where ancillary_source_id_fk in (select id from ancillary_sources where deployment_id_fk = '9c1f7faef87ab9239922f8666d001201')
    

  3. This removes the parsed data from the ancillary_data table. Then run the following with your deployment ID substituted:

         delete from ancillary_sources where deployment_id_fk = '9c1f7faef87ab9239922f8666d001201'
    

  4. This removes the linkages between the deployment and the parsed data in the ancillary_data table. That effectively cleans out the ancillary data from the deployment.

  5. If you want to delete the deployment completely, simply delete the document from CouchDB and remove the downloaded (local to the server) files, but if you want to re-parse the log file, do the following:
  6. Do NOT remove the downloaded raw files from the local storage on the server; you don't need the sync to re-download them, as the parsing happens separately.
  7. In the deployment document in CouchDB, remove the 'endDate' field and clean out everything except the base deployment information, which should look like the following:
      {
          "_id": "9c1f7faef87ab9239922f8666d001201",
          "_rev": "6-24759eeaf5ddb1f4246a7fa01c13a40e",
          "resource": "Deployment",
          "name": "2021 Niagara",
          "startDate": "2021-07-01T00:00:00PST",
          "description": "Deployment of ESP Niagara in 2021",
          "notifySlack": false,
          "slackChannel": "#esp-test",
          "esp": {
              "ftpHost": "espshore.mbari.org",
              "ftpPort": 21,
              "ftpUsername": "anonymous",
              "ftpPassword": "kgomes@mbari.org",
              "ftpWorkingDir": "/ESPniagara",
              "dataDirectory": "/var/log/esp",
              "filesToParse": [
              "/var/log/esp/real.log"
              ],
              "name": "Niagara"
          }
      }
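
For reference, the two cleanup queries in steps 2 and 3 can also be run from a shell with psql instead of Aqua Data Studio. A sketch, assuming the database is named esp_ancillary as above; the host, user, and deployment ID are placeholders to substitute:

    # Step 2: remove the parsed ancillary data for the deployment
    psql -h esp-portal.mbari.org -U espuser -d esp_ancillary -c "delete from ancillary_data where ancillary_source_id_fk in (select id from ancillary_sources where deployment_id_fk = '9c1f7faef87ab9239922f8666d001201')"

    # Step 3: remove the deployment-to-data linkages
    psql -h esp-portal.mbari.org -U espuser -d esp_ancillary -c "delete from ancillary_sources where deployment_id_fk = '9c1f7faef87ab9239922f8666d001201'"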
    

Helpful Queries

Here are some queries against the back end of the ESP portal that I've used in the past and found helpful.

  • The ancillary data is in a single table in the PostgreSQL database and is tagged with an ‘ancillary_source_id_fk’, which tells you which ancillary data variable and which deployment the data is connected to. In order to flatten the table into something that looks like a CSV file, you could run the following. First find the CouchDB ID of the deployment in question, then query for the ancillary source IDs using:
    select * from ancillary_sources where deployment_id_fk = 'aef147fda7224940d9c077419f00836f' order by id
  • Then, copy and paste this query and change the ancillary_source_id_fk values to match those from the deployment of interest, making sure the name in the quotes is lined up with the correct ID.
    select timestamp_utc,
        max(case when ancillary_source_id_fk = 308 then value end) as "Temp (Deg C)",
        max(case when ancillary_source_id_fk = 309 then value end) as "Volt",
        max(case when ancillary_source_id_fk = 310 then value end) as "Avg Curr",
        max(case when ancillary_source_id_fk = 311 then value end) as "Power",
        max(case when ancillary_source_id_fk = 312 then value end) as "% Humidity",
        max(case when ancillary_source_id_fk = 313 then value end) as "Inst Curr",
        max(case when ancillary_source_id_fk = 314 then value end) as "Press",
        max(case when ancillary_source_id_fk = 315 then value end) as "Flow"
    from ancillary_data
    where ancillary_source_id_fk in (select id from ancillary_sources where deployment_id_fk = 'aef147fda7224940d9c077419f00836f')
    group by timestamp_utc
    order by timestamp_utc asc;
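
If you want that flattened result as an actual CSV file instead of a result grid, a reasonably recent psql (version 12 or newer) can emit CSV directly. A sketch, assuming the pivot query above is saved to a file named flatten_deployment.sql and using placeholder host and user values:

    # Run the pivot query and capture CSV (with a header row) on stdout
    psql --csv -h esp-portal.mbari.org -U espuser -d esp_ancillary -f flatten_deployment.sql > deployment.csv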