CubeSat nano-satellite embedded software fun
It’s been a while since my last post…
The reason for the absence is that I’m nearing completion of my Computer Science Master’s degree, and
have been buried by work between my full-time job and a project I’m working on for my degree. This project is in collaboration with a number of other students from a variety of disciplines, primarily Aerospace, and is a CubeSat nanosatellite (10 cm × 10 cm × 30 cm, ~2.5 kg). There’s no public website yet, but one is planned.
The project is called CSSWE, Colorado Student Space Weather Experiment, and its science payload is an energetic particle telescope called REPTile.
The reason for this post is to comment on some of the embedded software architecture we’re using, and the difficulties in debugging and testing it, for the sake of sharing lessons learned.
To give a tiny bit of architectural background: there are several subsystems on this CubeSat. I’m involved with the C&DH (command and data handling) subsystem on board the satellite, and the ground network (GN) subsystem on the ground. The GN is comprised of a computer, software, radio, etc. that sit on the ground and are used to communicate with the satellite once it is in orbit around Earth. I’ve been involved in this sort of software before, so it’s not the most exciting area for me.
The C&DH subsystem, on the other hand, is. I did some embedded development and automated hardware testing at Sun a few years back and really dug it. It was very enjoyable working with the robotic tape library firmware team, and seeing how certain code changes affected all aspects of the hardware in a tape library. Some of my favorites were the performance tests to determine how small changes to the optical barcode reading code affected overall slot-to-drive mount times. It was always a hoot to work hard all week with the team, and then launch my automated tests for a long run over the weekend. Seeing multiple tape library robots moving tapes around was oddly amusing, like observing a little dance. Anyhow, enough reminiscing, now back to the CubeSat…
The C&DH subsystem uses off-the-shelf parts from Pumpkin Inc, specifically their CubeSat Kit. We’re using Pumpkin’s flight motherboard and A3 pluggable processor module (MSP430F2618). We’re also using the Salvo RTOS from Pumpkin, which is designed for exactly these kinds of tiny applications. There are two interfaces to other subsystems, SPI and I2C, and this is where the challenges come in. To develop code that must communicate with other devices, it usually helps to have access to them, which in this case means the other subsystems and sensors. Interface definitions and mockups can only get you so far when it comes to this sort of embedded development. So many things come into play, like timing and latency, which aren’t easy to simulate. The biggest problem has been that everything else is being developed in parallel, so we don’t have access to the other hardware, or at least not to any that’s in a particularly useful or usable state.
This is what I wanted to focus on for this post: when faced with a shortage of information for development and testing, and when interfaces are still being defined and in flux, you still have to find a way to move things forward.
Our solution to that problem was to implement our C code such that the interfaces were flexible and easy to change, to match how fluid the project still is. The fundamental things, like which bus each device sits on (SPI or I2C), haven’t changed, but the I2C addresses haven’t been coordinated or definitively defined for the 20+ temperature/voltage/etc. sensors, the RTC, and the science instrument, for example. Then, to address the fact that we don’t have a science instrument to get data from, or real temperature/voltage/etc. sensors yet, another team member suggested the following useful piece of hardware: a USB-to-SPI/I2C converter.
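To make the "flexible and easy to change" part concrete, here’s a minimal sketch of the pattern, with hypothetical sensor names and placeholder addresses (none of these values are our real assignments): all the bus addresses live in one configuration table, so when an address finally gets pinned down by the interface documents, updating it is a one-line change instead of a hunt through the driver code.

```c
#include <stdint.h>
#include <stddef.h>

/* Logical sensor IDs -- application code refers to these, never to
 * raw bus addresses. (Names here are invented for the example.) */
typedef enum {
    SENSOR_TEMP_BATT,
    SENSOR_TEMP_RADIO,
    SENSOR_VOLT_BUS,
    SENSOR_RTC,
    SENSOR_COUNT
} sensor_id_t;

typedef struct {
    sensor_id_t id;
    uint8_t     i2c_addr;   /* 7-bit I2C address (placeholder values) */
} sensor_cfg_t;

/* The single table that changes when addresses are renegotiated. */
static const sensor_cfg_t sensor_table[SENSOR_COUNT] = {
    { SENSOR_TEMP_BATT,  0x48 },
    { SENSOR_TEMP_RADIO, 0x49 },
    { SENSOR_VOLT_BUS,   0x40 },
    { SENSOR_RTC,        0x68 },
};

/* Look up the bus address for a logical sensor; returns 0 if unknown. */
uint8_t sensor_i2c_addr(sensor_id_t id)
{
    for (size_t i = 0; i < SENSOR_COUNT; i++) {
        if (sensor_table[i].id == id)
            return sensor_table[i].i2c_addr;
    }
    return 0;
}
```

The drivers then call `sensor_i2c_addr()` at the point of each transaction, so nothing outside this file cares what the addresses actually are.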
Our flexible firmware combined with this hardware and some software on a desktop PC allows us to emulate every
single device we need to communicate with. The converter is physically connected to the same bus, and special software duplicates the interface and behavior of the devices on the other side with very little code.
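As a rough illustration of what that PC-side emulation software can look like (the register layout and names here are invented for the example, not our actual code): each emulated device is little more than a register map plus handlers for master writes and reads. The real version wires these handlers to the USB converter’s transport; the device behavior itself needs very little code.

```c
#include <stdint.h>
#include <string.h>

/* Emulated I2C temperature sensor: a register map plus a transaction
 * handler. The transport (the USB-to-I2C converter's API) is abstracted
 * away here so the behavior can be exercised on any desktop machine. */
#define EMU_REG_COUNT 8
#define REG_TEMP_MSB  0x00
#define REG_TEMP_LSB  0x01

typedef struct {
    uint8_t regs[EMU_REG_COUNT];
    uint8_t pointer;            /* last register address selected */
} emu_sensor_t;

void emu_init(emu_sensor_t *s, int16_t raw_temp)
{
    memset(s, 0, sizeof(*s));
    s->regs[REG_TEMP_MSB] = (uint8_t)((uint16_t)raw_temp >> 8);
    s->regs[REG_TEMP_LSB] = (uint8_t)(raw_temp & 0xFF);
}

/* Master write: first byte selects a register, remaining bytes are data. */
void emu_i2c_write(emu_sensor_t *s, const uint8_t *buf, size_t len)
{
    if (len == 0)
        return;
    s->pointer = buf[0] % EMU_REG_COUNT;
    for (size_t i = 1; i < len; i++)
        s->regs[(s->pointer + i - 1) % EMU_REG_COUNT] = buf[i];
}

/* Master read: returns bytes starting at the current register pointer. */
void emu_i2c_read(emu_sensor_t *s, uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        buf[i] = s->regs[(s->pointer + i) % EMU_REG_COUNT];
}
```

The flight code’s I2C transactions then hit this stand-in exactly as they would a real part: write a register pointer, read back the data, no special cases in the firmware.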
This has improved code quality and sped up development by leaps and bounds. The biggest lesson learned, one I’ve unfortunately had to learn for the Nth time, is that it’s imperative not only to mock up interfaces and test them syntactically, but also to emulate the actual production environment as closely as possible, for more comprehensive testing during development. This is truly essential for embedded software projects where interaction with hardware is required.
I can’t stress enough how important it is to find ways to do this as early in a project as possible…
Hopefully I won’t have to re-learn this yet again.