{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ " ![alt text](https://www.mbari.org/wp-content/uploads/2014/11/logo-mbari-3b.png \"MBARI\")\n", "\n", "
Copyright (c) 2022, MBARI
\n", "\n", " * Distributed under the terms of the GPL License\n", " * Maintainer: dcline@mbari.org\n", " * Authors: Danelle Cline dcline@mbari.org, John Ryan ryjo@mbari.org\n", "\n", "## Kernel Selection\n", "\n", "If running in SageMaker, the Python 3 (Data Science) kernel is sufficient.\n", "The Python 3 (TensorFlow 2.3 Python 3.7 GPU Optimized) will run the inference code faster for a higher cost. For more advanced users,\n", "[SageMaker Batch Transform](https://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html)\n", "is recommended to process data in bulk." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Applying Machine Learning to classify blue whale A calls\n", "\n", "---\n", "Essential to detection and classification of marine mammal vocalizations are the distinct acoustic attributes of those vocalizations. Machine learning (ML) is an effective way to recognize acoustic attributes and reliably classify such vocalizations.\n", "\n", "In this brief tutorial, we will:\n", "* tap into an extensive (6+ years and growing) archive of sound recordings from a deep-sea location [along the eastern margin of the North Pacific Ocean](https://www.mbari.org/at-sea/cabled-observatory/),\n", "* illustrate the beautiful songs produced by baleen whales, and\n", "* demonstrate the application of ML to classify one of the three types of calls produced by blue whales in their songs.\n", "\n", "If you use this data set or tutorial, please [cite our project](https://ieeexplore.ieee.org/document/7761363)[1]." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Install required dependencies\n", "\n", "First, let's install the required software dependencies. If you are working on local computer, you can skip this next cell. Change your kernel to *pacific-sound-notebooks*, which you installed according to the instructions in the [README](https://github.com/mbari-org/pacific-sound-notebooks/) - this has all the dependencies that are needed. \n", "\n", "Otherwise, if you are using this notebook in a cloud jupyter notebook, select a Python3 compatible kernel, remove the comment # before each line and run the code cell. This only needs to be done once for the duration of this notebook." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# !apt-get update -y && apt-get install -y libsndfile1\n", "# !pip install tensorflow==2.4.1 --quiet\n", "# !pip install boto3 --quiet\n", "# !pip install oceansoundscape==1.1.2 --quiet" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Import all packages" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import boto3\n", "from botocore import UNSIGNED\n", "from botocore.client import Config\n", "import cv2\n", "from oceansoundscape.spectrogram.signal import psd_1sec\n", "from oceansoundscape.raven import BLEDParser\n", "from oceansoundscape.spectrogram import conf, denoise, utils\n", "import os\n", "from pathlib import Path\n", "import soundfile as sf\n", "import json\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "import tensorflow as tf\n", "from matplotlib.patches import Rectangle" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Why use 2 kHz data?\n", "\n", "Because we are studying the low-frequency calls of blue whales, we don't need the original recordings sampled at 256 kHz. 
Instead, we will use the 2 kHz decimated data in [WAV](https://en.wikipedia.org/wiki/WAV) format; these are stored in an S3 bucket named pacific-sound-2khz. For more information on the storage and organization of the 2 kHz data, please see the [2 kHz example](https://docs.mbari.org/pacific-sound/notebooks/PacificSound_2kHz/).\n", "\n", "## List the contents of a monthly directory\n", "\n", "Between August 2015 and July 2021 (a 6-year period), the highest levels of blue whale song activity off central California were detected during November 2017. Let's start by listing the daily 2 kHz files for that month." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "s3 = boto3.client('s3',\n", "    aws_access_key_id='',\n", "    aws_secret_access_key='',\n", "    config=Config(signature_version=UNSIGNED))\n", "\n", "year = \"2017\"\n", "month = \"11\"\n", "bucket = 'pacific-sound-2khz'\n", "\n", "for obj in s3.list_objects_v2(Bucket=bucket, Prefix=f'{year}/{month}')['Contents']:\n", "    print(obj['Key'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## A view of baleen whale song\n", "\n", "Let's produce a spectrogram with sufficient resolution in time and frequency to visually identify the blue whale song. We'll limit the exercise to a single hour from a day with calls of variable received intensity (signal strength)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Download a single 2 kHz file" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "year = \"2017\"\n", "month = \"11\"\n", "wav_filename = 'MARS-20171101T000000Z-2kHz.wav'\n", "bucket = 'pacific-sound-2khz'\n", "key = f'{year}/{month}/{wav_filename}'\n", "\n", "s3 = boto3.resource('s3',\n", "    aws_access_key_id='',\n", "    aws_secret_access_key='',\n", "    config=Config(signature_version=UNSIGNED))\n", "\n", "# only download if needed\n", "if not Path(wav_filename).exists():\n", "\n", "    # Alternatively, it can be downloaded directly in SageMaker with\n", "    # !aws s3 cp s3://{bucket}/{key} . \n", "\n", "    print('Downloading')\n", "    s3.Bucket(bucket).download_file(key, wav_filename)\n", "    print('Done')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Subset to the 5th hour of the day" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sample_rate = int(2e3)\n", "start_hour = 5\n", "start_frame = int(sample_rate * start_hour * 3600)\n", "duration_frames = int(sample_rate * 3600)\n", "\n", "pacsound_file = sf.SoundFile(wav_filename)\n", "pacsound_file.seek(start_frame)\n", "x = pacsound_file.read(duration_frames, dtype='float32')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Plot the full 2 kHz spectrogram\n", "\n", "A lot of biophony (sounds of ocean life) is represented in this spectrogram. Humpback whale songs are dominant above ~ 100 Hz, while blue and fin whale songs are dominant below ~ 100 Hz. The energy of blue whale A calls is largely between ~70 and 90 Hz (between the white dashed lines). \n", "\n", "For more details about creating a calibrated spectrogram, see the [2 kHz tutorial](https://docs.mbari.org/pacific-sound/notebooks/PacificSound_2kHz/)."
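] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Optional aside: as a rough sketch of the idea behind the band-limited energy detections used later in this notebook, we can bandpass this hour of audio to the approximate A-call band (~70-90 Hz) and plot the band-limited energy per second. This sketch assumes scipy is available in the kernel (it is not installed by the dependency cell above) and is not the Raven detector that produced the precomputed detections." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch only: band-limited energy in the blue whale A-call band (assumes scipy is available)\n", "from scipy import signal as sps\n", "\n", "# bandpass the hour of audio to roughly the A-call band (~70-90 Hz)\n", "sos = sps.butter(4, [70, 90], btype='bandpass', fs=sample_rate, output='sos')\n", "x_band = sps.sosfiltfilt(sos, x)\n", "\n", "# band-limited energy in non-overlapping 1-second windows; peaks hint at candidate calls\n", "window = sample_rate\n", "energy = np.array([np.sum(x_band[i:i + window] ** 2) for i in range(0, len(x_band) - window, window)])\n", "\n", "plt.figure(figsize=(10, 3))\n", "plt.plot(energy)\n", "plt.xlabel('Second of hour 5')\n", "plt.ylabel('Energy (70-90 Hz band)')\n", "plt.title('Band-limited energy in the A-call band')"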
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sg, f = psd_1sec(x, sample_rate, 177.9) # create calibrated psd\n", "plt.figure(dpi=300)\n", "plt.axhline(73,linestyle='--', color='white')\n", "plt.axhline(91,linestyle='--', color='white')\n", "plt.imshow(sg,extent=[0, 3600, min(f), max(f)],aspect='auto',origin='lower',vmin=30,vmax=100)\n", "plt.yscale('log')\n", "plt.ylim(10,1000)\n", "plt.colorbar()\n", "plt.xlabel('November 01, 2017 Hour 5')\n", "plt.ylabel('Frequency (Hz)')\n", "plt.title('Calibrated spectrum levels')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Band Limited Energy Detections\n", "\n", "In this example, we will focus on classification of calls. Here, detections for potential Blue whale A calls\n", "have been precomputed using Band-Limited-Energy-Detectors BLEDs in the [Raven](https://ravensoundsoftware.com) software. There are many way to do detection, including spectrogram correlation and neural network approaches such as single-shot detectors to name a few. The best detector depends on the acoustic application and its cost/performance tradeoff. \n", "\n", "The approach taken here is to use the BLED detectors to identify not only strong and clear calls, but also many uncertain signals, so we can avoid missing calls." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "An output from the BLED detectors might look something like this following simple table.\n", "\n", "Note that this only represents a subset of the detections in the 5th hour for the purposes of this simple example." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "bled_file = 'MARS-20171101T000000Z-2kHz.wav.Table01.txt'" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%writefile {bled_file}\n", "Selection\tView\tChannel\tLow Freq (Hz)\tHigh Freq (Hz)\tBegin Time (s)\tEnd Time (s)\n", "453\tSpectrogram 1\t1\t70.0\t90.0\t19076.535750\t19088.027750\n", "454\tSpectrogram 1\t1\t70.0\t90.0\t19115.990750\t19132.331750\n", "455\tSpectrogram 1\t1\t70.0\t90.0\t19145.565750\t19161.217750\n", "456\tSpectrogram 1\t1\t70.0\t90.0\t19182.693750\t19200.880750\n", "457\tSpectrogram 1\t1\t70.0\t90.0\t19221.550750\t19235.590750\n", "458\tSpectrogram 1\t1\t70.0\t90.0\t19244.183750\t19270.027750\n", "459\tSpectrogram 1\t1\t70.0\t90.0\t19310.171750\t19328.345750\n", "460\tSpectrogram 1\t1\t70.0\t90.0\t19342.502750\t19360.689750\n", "461\tSpectrogram 1\t1\t70.0\t90.0\t19373.676750\t19389.107750\n", "462\tSpectrogram 1\t1\t70.0\t90.0\t19398.727750\t19410.193750\n", "463\tSpectrogram 1\t1\t70.0\t90.0\t19421.425750\t19436.804750\n", "464\tSpectrogram 1\t1\t70.0\t90.0\t19445.189750\t19461.803750\n", "465\tSpectrogram 1\t1\t70.0\t90.0\t19494.706750\t19507.173750\n", "466\tSpectrogram 1\t1\t70.0\t90.0\t19527.622750\t19538.555750\n", "467\tSpectrogram 1\t1\t70.0\t90.0\t19548.071750\t19561.370750\n", "468\tSpectrogram 1\t1\t70.0\t90.0\t19568.377750\t19585.186750\n", "469\tSpectrogram 1\t1\t70.0\t90.0\t19625.057750\t19645.753750\n", "470\tSpectrogram 1\t1\t70.0\t90.0\t19653.293750\t19676.381750\n", "471\tSpectrogram 1\t1\t70.0\t90.0\t19680.944750\t19704.734750\n", "472\tSpectrogram 1\t1\t70.0\t90.0\t19708.543750\t19723.051750\n", "473\tSpectrogram 1\t1\t70.0\t90.0\t19760.907750\t19778.990750" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Focus on blue whale A calls \n", "\n", "Zooming in on a more narrow frequency and time, we can see the 
pulse-train nature of the A calls, an important distinction for classification. In this brief segment, we can see A calls of variable received intensity.\n", "\n", "Here, we parse the detections and display the identifying Raven Selection number for reference.\n", "\n", "Notes:\n", " * The first A call in this time period, which was quite strong, does not have a start marker because its start precedes the beginning of this audio segment. \n", " * The full detail of the A calls is still not visible at the resolution presented above. The full detail will become clear when we apply the model and examine individual classified calls." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# An optimum configuration for the call spectrogram generation is defined in the oceansoundscape package based on extensive\n", "# hyperparameter sweeps. Let's grab that configuration here\n", "blue_a_conf = conf.CONF_DICT['blueA']\n", "\n", "# BLEDParser needs a path object\n", "bled_path = Path(bled_file)\n", "\n", "# parse the detections\n", "parser = BLEDParser(bled_path, blue_a_conf, max_samples=len(x), sampling_rate=sample_rate)\n", "\n", "# plot the spectrogram\n", "plt.figure(dpi=300)\n", "plt.imshow(sg,aspect='auto',origin='lower',vmin=30,vmax=100)\n", "\n", "# add boxes for each normalized detection - detections are normalized to a fixed size (frequency/time) for classification\n", "start_secs = start_hour * 3600\n", "high_freq = blue_a_conf['high_freq']\n", "low_freq = blue_a_conf['low_freq']\n", "duration_secs = blue_a_conf['duration_secs']\n", "freq_window = int(high_freq - low_freq)\n", "for row, item in sorted(parser.data.iterrows()):\n", "    detection_start = item.call_start/sample_rate - start_secs # detections are stored in sample numbers so here we convert back to time\n", "    plt.gca().add_patch(Rectangle((detection_start,low_freq),duration_secs,freq_window,\n", "        edgecolor='black',\n", "        facecolor='none',\n", "        ls='--',\n", "        lw=1))\n", "    plt.text(detection_start,high_freq - 8,item.Selection, size='x-small', rotation=45)\n", "\n", "plt.yscale('linear')\n", "plt.ylim(10,high_freq + 10)\n", "plt.xlim(1000, 1800)\n", "plt.xlabel('Second of 01 November 2017')\n", "plt.ylabel('Frequency (Hz)')\n", "plt.title('Detected blue whale A calls')\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Classification of blue whale A calls\n", "\n", "We can turn these detected audio signals into images, one for each detected signal, and classify them using a model trained only on A calls." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### Create optimized spectrograms\n", "\n", "Because the model we are demonstrating is image-based, the first step in classifying the calls is to create an optimized spectrogram image for each candidate detection."
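] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before generating them, it can help to peek at the blueA configuration loaded earlier from the oceansoundscape package; the exact keys and values depend on the installed package version, so treat this as an exploratory check rather than documentation of the package." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Exploratory check: list the spectrogram parameters used for blue whale A calls\n", "# (key names such as num_fft and duration_secs reflect oceansoundscape 1.1.2)\n", "for key, value in blue_a_conf.items():\n", "    print(f'{key}: {value}')"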
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# read the wav file\n", "xc, sample_rate = sf.read(wav_filename)\n", " \n", "# parse the detections, but this time let's use the entire wav file to allow for padding during denoising\n", "parser_full = BLEDParser(bled_path, blue_a_conf, len(xc), sample_rate)\n", "\n", "# grab the spectrogram parameters\n", "call_width = int(blue_a_conf['duration_secs'] * sample_rate)\n", "num_fft = blue_a_conf['num_fft']\n", "hop_length = round(num_fft * (1 - conf.OVERLAP))\n", "\n", "# generate optimized spectrograms\n", "print(f'Denoising {bled_file} num detections {parser_full.num_detections}')\n", "\n", "# denoise them and store the results in a temporary hdf file cache\n", "cache = denoise.PCENSmooth(Path(f'{bled_path}.hdf'), parser_full, blue_a_conf, xc, sample_rate)\n", "\n", "for row, item in sorted(parser_full.data.iterrows()):\n", " try:\n", " start = int(call_width / hop_length)\n", " end = int(2 * call_width / hop_length)\n", "\n", " data = cache.get_data(item.Selection)\n", "\n", " # subset the call, leaving off the padding for PCEN\n", " subset_data = data[:, start:end]\n", "\n", " # save to a denoised color image\n", " utils.ImageUtils.colorizeDenoise(subset_data, Path(item.image_filename))\n", " except Exception as ex:\n", " print(ex)\n", " continue\n", "\n", "print('Done')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Display the spectrogram images" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "images = list( sorted((Path.cwd()).glob('*.jpg')))\n", "\n", "plt.figure(figsize=(14,14))\n", "plt.rcParams['axes.titlelocation'] = 'center'\n", "for i, image_path in enumerate(images):\n", " # the selection number of part of the filename name,\n", " # e.g. selection 3 20150925T072604.53528828.53552892.sel.03.ch01.spectrogram.jpg\n", " selection = image_path.name.split('.')[4]\n", " image_path = str(image_path)\n", " plt.subplot(5, 5, i + 1)\n", " img = plt.imread(image_path)\n", " plt.title(selection)\n", " plt.imshow(img)\n", " plt.axis('off')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Download the model\n", "\n", "MBARI has a trained model that can be used to classify blue A calls. You can find that model in the pacific-sound-models bucket.\n", "More details on its performance can be found in the [pacific sound model page](http://docs.mbari.org/pacific-sound/models/).\n", "\n", "First, download that model then uncompress it. The model should be version 1 and exist in a subdirectory simply called \"1\"." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "bucket = 'pacific-sound-models'\n", "model_filename = 'bluewhale-a-resnet50-2022-02-05-01-58-47-285.tar.gz'\n", "key = model_filename\n", "\n", "print(f'Downloading...') \n", "s3.Bucket(bucket).download_file(key, model_filename)\n", "\n", "# Alternatively, it can be downloaded directly in SageMaker with\n", "# !aws s3 cp s3://{bucket}/{key} . 
\n", "\n", "print(f'Uncompressing')\n", "!tar -xf {model_filename}\n", "print(f'Done')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "os.listdir(\"1\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Load the model weights" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model = tf.keras.models.load_model(\"1\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Show the model architecture and grab the normalization parameters\n", "\n", "Here, we can see the last layer only this model only outputs a 2 wide vector - this is for either blue whale A false or blue whale A true calls. Class labels are defined in the config.json with the model as are the normalization parameters." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model.summary()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "config = json.load(open('1/config.json'))\n", "image_mean = np.asarray(config[\"image_mean\"])\n", "image_std = np.asarray(config[\"image_std\"])\n", "print(f\"Labels {config['classes']}\")\n", "print(f\"Training image mean: {image_mean}\")\n", "print(f\"Training image std: {image_std}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Run the model on a single image\n", "\n", "The model outputs an array with the scores between 0-1 for each class. Let's run it on a single image of a strong A call just to verify it is working. The output should be a 2-dimensional numpy array representing each class score. These scores should sum to 1 and the output should be definitely a positive A call \"bat\" i.e. blue A true in the second index." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "batch_size = 1\n", "image_path = '20171101T051942.38365387.38401761.sel.456.ch01.spectrogram.jpg'\n", "image_bgr = cv2.imread(image_path)\n", "image = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)\n", "# normalize with the same parameters used in training\n", "image_float = np.asarray(image).astype('float32')\n", "image_float = image_float / 255.\n", "image_float = (image_float - image_mean) / ( image_std + 1.e-9)\n", "image_stack = np.concatenate([image_float[np.newaxis, :, :]] * batch_size)\n", "tensor_out = model(image_stack)\n", "# convert the tensor to a numpy array and since this is just a batch size of 1, pull out the single\n", "# prediction\n", "p = tensor_out.numpy()[0]\n", "print(p)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Create a helper function to display the class names instead of just a score" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def prediction_to_labelscore(prediction: np.array(float)):\n", " # predictor outputs an index but we want to see the string, so let's map back\n", " # to the order the class labels are stored in the config\n", " label_map = {0: 'baf', 1: 'bat'}\n", "\n", " # get the index of the top score and its score\n", " top_score_index = np.argmax(prediction)\n", " top_score = prediction[top_score_index]\n", " label = label_map[top_score_index]\n", " return label, top_score\n", "\n", "# let's test it with the output above on the single image\n", "plt.imshow(image, aspect='auto')\n", "plt.axis('off')\n", "label, top_score = prediction_to_labelscore(p)\n", "print(f'predicted label {label} score {top_score}')\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Classify all images\n", "\n", "Now that we are confident the model runs okay on a single image, let's run it on all the spectrogram images and display them with their associated scores and capture the predictions in a dictionary to use later." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.figure(figsize=(14,14))\n", "plt.rcParams['axes.titlelocation'] = 'center'\n", "batch_size = 1\n", "predictions = {}\n", "\n", "for i, image_path in enumerate(images):\n", " print(f'Running model on {image_path.name}')\n", " image_bgr = cv2.imread(image_path.as_posix())\n", " image = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)\n", " image_float = np.asarray(image).astype('float32')\n", " image_float = image_float / 255.\n", " image_float = (image_float - image_mean) / ( image_std + 1.e-9)\n", " image = np.concatenate([image_float[np.newaxis, :, :]] * batch_size)\n", " tensor_out = model(image)\n", " label, top_score = prediction_to_labelscore(tensor_out.numpy()[0])\n", " predictions[image_path.name] = {'class': label, 'score': top_score}\n", " plt.subplot(5, 5, i + 1)\n", " img = plt.imread(image_path)\n", " plt.imshow(img)\n", " plt.title(f'{selection} {label} {top_score:0.2f}')\n", " plt.axis('off')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Display confident classifications\n", "\n", "Now that the classifications have been run on all the potential detections, let's see all true classifications in the 1-second spectrogram with scores greater then 0.7." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# plot the spectrogram\n", "plt.figure(dpi=300)\n", "plt.imshow(sg,aspect='auto',origin='lower',vmin=30,vmax=100)\n", "\n", "# decorate with boxes for each normalized detection - detections are normalized to a fixed size (frequency/time) for classification\n", "start_secs = 5 * 3600\n", "high_freq = blue_a_conf['high_freq']\n", "low_freq = blue_a_conf['low_freq']\n", "duration_secs = blue_a_conf['duration_secs']\n", "freq_window = int(high_freq - low_freq)\n", "for row, item in sorted(parser.data.iterrows()):\n", " detection_start = item.call_start/sample_rate - start_secs # detections are stored in sample numbers so here we convert back to time\n", " p = predictions[item.image_filename]\n", " if p['score'] > 0.7 and p['class'] == 'bat':\n", " plt.gca().add_patch(Rectangle((detection_start,low_freq),duration_secs,freq_window,\n", " edgecolor='black',\n", " facecolor='none',\n", " ls='--',\n", " lw=1))\n", " plt.text(detection_start,high_freq - 8, f'{item.Selection} bat', size='xx-small', rotation=45)\n", "\n", "plt.yscale('linear')\n", "plt.ylim(10,high_freq + 10)\n", "plt.xlim(1000, 1800)\n", "plt.xlabel('Second of 01 November 2017')\n", "plt.ylabel('Frequency (Hz)')\n", "plt.title('Classified blue whale A calls > 0.7 confidence')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## References\n", "\n", "- [1] https://doi.org/10.1109/OCEANS.2016.7761363" ] } ], "metadata": { "kernelspec": { "display_name": "pacific-sound-notebooks", "language": "python", "name": "pacific-sound-notebooks" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3" } }, "nbformat": 4, "nbformat_minor": 1 }