Developer Documentation
RockPi Troubleshooting
Connect via Serial Debugger
It is very helpful to be able to connect to the RockPi via its onboard serial port, which gives you a console on the Pi (handy, for example, for finding its IP address so you can ssh in later). First, power off the RockPi, then open the black case by removing the screw on the bottom. Once open, follow the instructions in the RockPi serial console docs to connect the USB-serial cable to the RockPi; the docs cover connecting through the serial port on different operating systems. One handy note for Linux/Mac machines: capture a listing of the /dev directory before plugging in the USB cable so you have an easier time figuring out which port is the new USB-serial port. On my Mac, I used picocom to connect by running:
picocom -b 1500000 -d 8 /dev/tty.usbserial-0001
and then powering on the RockPi. This is a VERY handy way to get to a prompt.
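The /dev-listing trick above can be sketched like this (file paths are just examples):

```shell
# Snapshot /dev before plugging in the USB-serial cable...
ls /dev > /tmp/dev-before.txt
# ...plug the cable in, then snapshot again:
ls /dev > /tmp/dev-after.txt
# The new entry (e.g. tty.usbserial-XXXX on macOS, ttyUSB0 on Linux) shows up here:
diff /tmp/dev-before.txt /tmp/dev-after.txt || true
```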
Note
One thing to note with the console is that the terminal size defaults to something very narrow, but you can expand it by running stty rows 50 cols 250 (tweak the numbers to fit your needs)
Connect via SSH Over Wired Network
Once the RockPi is connected to your local network through a router, you can ssh into it via its IP address. The trick is finding what the IP address is. If you hook up a serial console as described above, you can run ifconfig -a to see the list of network interfaces and their assigned IP addresses. You can then ssh as root to that IP address.
Developing
In order to develop the application locally, you will need to run the different support services locally. Docker compose is used to do this. To get a development environment running, do the following:
Warning
TODO KGOMES: Document how to generate .web-secret
- Install Docker on your local machine
- Check out the readinet repository
- Open a terminal and change into the directory where the repo is checked out.
- Copy the environment sample file by running cp web/.env.sample web/.env
- Start everything by running docker-compose up. This will spin up:
- An MQTT server
- An instance of the web server on a Python-based docker image that acts as the cloud server. The role of server is defined in the environment variables for the docker container.
- An instance of the web server on a Python-based docker image that acts as one device interface (what would be running on a RockPi). The role of device is defined in the environment variables for the docker container.
- Another instance of the web server on a Python-based docker image that acts as a second device interface (what would be running on a RockPi). The role of device is defined in the environment variables for the docker container.
Note
If the default ports of 5000, 5001, 5002, 1883, or 9001 are already in use on your host machine, you will need to change the port numbers in the docker-compose.yml file to get around those conflicts. I believe the READinet application containers expect MQTT to be on ports 1883 and 9001, so changing those may not be possible.
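A hedged sketch of the one-line change per service in docker-compose.yml (the service key here is an assumption based on the container names used later in this doc; check the actual file). This remaps the server from host port 5000 to 5050 while leaving the container-side port and the MQTT ports alone:

```yaml
services:
  readinet_server:
    ports:
      - "5050:5000"   # host:container -- only the host side (left) changes
```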
- Note that docker-compose will start spitting out a bunch of errors because the containers have not been bootstrapped yet. To fix this, leave the docker-compose stack running, open another terminal window, and run the following:
docker exec -it readinet_server bash
cd readinet
./bootstrap.sh
exit
docker exec -it readinet_device1 bash
cd readinet
./bootstrap.sh
exit
docker exec -it readinet_device2 bash
cd readinet
./bootstrap.sh
exit
- After doing the above, go back to the docker-compose terminal window and kill the docker-compose stack with Ctrl-C. Wait for it to gracefully shut down and then restart it using docker-compose up.
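The three bootstrap sequences above are identical, so they can be driven from one loop. A dry-run sketch that just prints the commands rather than executing them (container names taken from the steps above):

```shell
# Print the bootstrap command for each container; remove the echo quoting to run for real
for c in readinet_server readinet_device1 readinet_device2; do
  echo "docker exec -it $c bash -c 'cd readinet && ./bootstrap.sh'"
done
```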
This should now have successfully deployed a READinet server and two devices. They should be visible at the following locations:
- Server: http://localhost:5000
- Device 1: http://localhost:5001
- Device 2: http://localhost:5002
Warning
One thing that was confusing that I wanted to document: the main codebase lives at the root of the project in the web directory. The reason I mention that is that there is a link to this directory under the devops/ansible/roles/readinet-web/files directory, so if you are looking at the project in something like Visual Studio Code, it looks like there are two codebases. There is just one.
Note
One thing to note: code changes are not hot-reloaded. In other words, if you edit the code, the change does not show up in the application until you restart the container. Fortunately, that's not difficult: you can stop/start containers individually using docker stop readinet_device2, for example, and then start it with docker start readinet_device2. Or, even easier, just run docker restart readinet_device2 and refresh the page to see the changes.
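A tiny helper for that edit/restart loop, if you find yourself doing it a lot (sketch only; assumes the container names used in this doc):

```shell
# Restart one container and show its status afterwards
restart_and_check() {
  docker restart "$1" && docker ps --filter "name=$1" --format '{{.Names}} {{.Status}}'
}
# usage: restart_and_check readinet_device2
```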
Note
Another note: after you run this, the data for the SQLite databases is stored in the web/instance directory.
Below is a diagram of what the development environment looks like:

[Diagram not reproduced here; label from the original: "Database"]
Questions for Nick
- Sampler Interface
- If a new command is desired (serial), where do those changes get made (sampler, rockpi, and cloud)?
- Can we try to remove a device from headscale? We tried once and it failed.
- Web Application
- What does the web/readinet/bootstrap.sh script do?
- Purely for the local dev environment.
- flask db upgrade is idempotent. Will create DB, set up tables, apply migrations it needs to.
- flask seed puts the default records in the database, default accounts, passwords, etc. (look for seed.py) WARNING: Do not run this on production as it deletes everything!!!!!
- Can you walk me through the simulator under web/device_simulator? It's largely out of sync now with the micro-python code I think and I want to be familiar with it/document it in case we try to resurrect it.
- Elicia created the simulator code in her project (under scripts).
- Nick copied that to his codebase and then wrote the simulate.sh wrapper.
- How would I start a simulator that the web application will talk to?
- Can we walk through an example of how to update something in the database using SQLAlchemy?
- Can I start a simulator in local development that is using docker-compose?
- Yes, look at the root README; line 109 shows how to do that.
- How/where is the Nginx proxy configured to interact with gunicorn? Looks like it's in /etc/nginx/sites-enabled/le?
- Notes from Nick:
- Mixins are from SQLAlchemy. If you want to add columns, etc. SQLAlchemy has lots of examples, SQLAlchemy will handle the migration scripts, etc. This is standard stuff
- Syncing is different. At a very high level, database tables get serialized to JSON and transferred over MQTT.
- MQTT topics are device specific: there are three topics for each device: sync, log, and snapshot. Sync is bidirectional; the log topic is one way (server listens, device writes). Snapshots are for conflict resolution: if changes were made on both sides while offline, the snapshot communicates the differences so they can be resolved.
- Sync stuff happens in background.py. Notice the check on line 113 (if not s.is_from_self) that prevents a runaway condition.
- If model changes, the sync does not need to be updated. The sync will just use what is in the payload to sync.
- If one side has something in the JSON and the other doesn't, it will ignore it.
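The per-device topic layout described in the notes above can be sketched as a dry run (the exact topic string format is an assumption; only the three topic kinds per device come from the notes):

```shell
# Print an assumed topic name for each device/topic-kind pair
for d in device1 device2; do
  for t in sync log snapshot; do
    echo "readinet/$d/$t"
  done
done
```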
- DevOps
- Terraform
- Where do you configure the Route53 to assign the IP to the prod/dev.readinet.org?
- dns.tf file.
- You need to manually configure Route53 in the account with the purchased domain name, edit dns.tf file and it will find it.
- Steps if I need to start from scratch with a new AWS account?
- Talked through this with Nick.
- Do I need to manually update the VPC ID in the Terraform file readinet-server/main to target the correct VPC? I guess this makes sense.
- Yes, VPC ID would need to change. Could query if default is always used or have Terraform build a new VPC.
- Can you walk me through the terraform stuff under the github directory?
- Should be called 'builder', sets up things that AWS needs for building.
- What does the DynamoDB table keep track of on AWS? Is it really just a lock so two clients can't be changing the infrastructure at the same time?
- Yes, just a lock to prevent double apply.
- Ansible
- It appears that the SSL certificates get refreshed when ansible is run. Is this the proper way to refresh the SSL certs?
- Should be a cronjob in headscale certbot renewal (line 61-65 in devops/ansible/roles/headscale/tasks/main.yml)
- Is there a specific task in ansible that will do just that or can we run something inside the container?
- Should I change the let's encrypt email address in ansible hosts file?
- I see the DB upgrade ansible task, how does that actually work? What would a schema change process look like? (devops/ansible/roles/readinet-web/tasks/main.yml line 78)
- Development
- When spinning up the dev environment, I cannot log into the server. Do we need to set up the .env on the server docker container before this can happen? I get this error in Docker: The session is unavailable because no secret key was set. Set the secret_key on the application to something unique and secret. Where does that .env file live?
- Can we walk through how you might add an attribute to a class that results in a DB column addition?
- What about adding an entirely new model?
- General
- Can we walk through how you would use the backups to do a restore?
- Devices are considered 'throw away'
- Backups are just happening on the server.
- These are backups of Docker volumes and SQLite databases (see the backup-readinet-web.sh file for details).
- If a restore is attempted, you should probably shut down the existing docker containers first.
- Move the tarball that you want from S3 to the host, then shut down the containers and run the restore.
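The restore steps above can be sketched as a dry run (bucket and tarball names are placeholders, and the exact restore procedure lives in backup-readinet-web.sh, so treat this as an outline, not a recipe):

```shell
# Dry-run outline: echo each step instead of executing it
run() { echo "+ $*"; }

run aws s3 cp "s3://<backup-bucket>/<backup-tarball>" .  # pull the tarball from S3
run docker-compose down                                  # stop the containers first
run tar -xzf "<backup-tarball>"                          # unpack volumes/SQLite data (layout per backup-readinet-web.sh)
run docker-compose up -d                                 # bring the stack back up
```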
- OS Build
- Can I delete the build-vagrant.sh and Vagrantfile from the os directory? It just adds confusion
- Yes, can remove the Vagrantfile and build-vagrant.sh
- Currently failing, Nick working on it.
- Update: he fixed it. The big thing is the 'concurrent mode' that allows for multiple wifi networks, so the device can serve its local web interface while also connecting to the upstream internet.
- Is there any way to speed up the SD card image build? It takes FOREVER!
- Not really, but 24 builds a little quicker.
- Syncing
- Can we walk through the sync code so I can document/understand how this works?
- Let's say a device has been operating for a while and is staying in sync. If you make changes to the web application codebase and want to deploy them, you need to image a new SD card. When you put that into the RockPi, you need to go through a new initialization (headscale/tailscale), and I assume it will look like a new device with no way to sync it to the previous state. That might be OK for now, since we are assuming deployment of a new SD image means a full instrument reset, but it would be good to be able to link it to the same device in the cloud application if possible.
- The best way to think about it is a factory refresh whenever code changes. There could be ways to update in place with a different structure; flask db upgrade should handle that, but it gets complicated.
- Yes, it will look like a brand new device with SD card image update.
To Do
- Move from one server to another (from test to dev or prod)
- Restore from backup
- Update the device_simulator so that it matches the commands. The sampler serial protocol was changed from esp.* to fido.* and the simulator should match that.
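When that rename happens, a mechanical prefix swap may cover most of it. A sketch on a throwaway file (the real change should be applied across web/device_simulator and the diff reviewed, since some esp.* strings may need more than a prefix swap; note sed -i here is GNU syntax, on macOS use sed -i ''):

```shell
# Demo the prefix rename on a temp file with two hypothetical command names
printf 'esp.status\nesp.sample_start\n' > /tmp/sim_cmds.txt
sed -i 's/esp\./fido./g' /tmp/sim_cmds.txt
cat /tmp/sim_cmds.txt
# The same sed, run over the simulator sources:
#   grep -rl 'esp\.' web/device_simulator | xargs -r sed -i 's/esp\./fido./g'
```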