After the unsuccessful launch of Hermes II, we have a great opportunity to finish everything that was left unfinished at the Hermes II launch, and really put into effect everything this subteam has been working on for the last 3 years. There is not much time until IAP 2020, so we really need to focus our effort on the deliverables and make sure we prioritize them correctly, as some of them will probably have to be pushed beyond the launch of Hermes III. You can find more about the problems/highlights of the Hermes II flight here.


Subteam objectives on the Hermes III flight

These are ordered by decreasing importance:

...

1. Telemetry from flight

...

  1. Very likely COTS still in command

We want to recover telemetry data from the flight even in the event of a catastrophic failure like the one on Hermes II. For that, we need a reliable radio connection throughout the whole flight.

We still do not know how the sensors behave during flight, so it would be extremely valuable to get detailed sensor data. We need to make sure our data logging is reliable and our data storage is durable, specifically able to survive a ballistic impact.

2. Parachute deployment

Clearly, we want the AV bay to fire the parachutes. We would also like to put Pyxida in command, but this is yet to be decided.

3. Camera video footage

We want to recover video footage from the rocket. Therefore, we need cameras that we can turn on over a radio connection and whose video we can recover in catastrophic scenarios.

How to achieve the objectives

Cameras

At least in my opinion (Luka), this will be one of the top priorities. The cameras we've used so far cannot be reliably turned on and have a whole list of other problems. Luckily, if I may say so, most of the FireFly legacy cameras were destroyed in flight, so we have to buy new ones. Let's take this opportunity to invest in some more serious cameras and do this properly.

I suggest we allocate a whole subsystem to the cameras: the cameras themselves, the power for them, and the BBC that is in control of all of them. Our requirements for the cameras subsystem:

  • The power for the cameras is separate from any other subsystem, but there is only one power source (with potential redundancy) for the whole cameras subsystem. We can check the power level, which will later be done through a separate power monitoring system.

  • We can turn each of the cameras on and off from Pyxida and check their status (a rough sketch of such an interface follows after the next list).
  • We can turn the video on or off on each of the cameras, and check the status of the video recording.

For this, we will need:

  • New cameras. Research needs to be done to find a solution that is not too pricey but still allows for external power, external control of video recording, and turning the cameras on/off.
  • A new revision of the BBC. The new BBC will have many more responsibilities than revision 1.
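
To make the requirements above more concrete, here is a minimal sketch of what a per-camera control interface might look like. Every name in it (CameraChannel, CameraState, the methods) is hypothetical and only illustrates the on/off and status-check requirements; it is not existing Pyxida or BBC code, and the actual transport between Pyxida and the BBC is left out entirely.

    // Hypothetical sketch of a per-camera control interface, e.g. living on the BBC.
    // All names are placeholders; the Pyxida-to-BBC transport is not modeled here.
    #include <cstdint>
    #include <iostream>

    enum class CameraState : uint8_t { Off, On, Recording, Fault };

    class CameraChannel {
    public:
        explicit CameraChannel(uint8_t id) : id_(id) {}

        // Power control: each camera can be switched on and off individually.
        void powerOn()  { state_ = CameraState::On; }
        void powerOff() { state_ = CameraState::Off; }

        // Recording control; the state doubles as the status we report back.
        void startRecording() { if (state_ == CameraState::On) state_ = CameraState::Recording; }
        void stopRecording()  { if (state_ == CameraState::Recording) state_ = CameraState::On; }

        CameraState state() const { return state_; }
        uint8_t id() const { return id_; }

    private:
        uint8_t id_;
        CameraState state_ = CameraState::Off;
    };

    int main() {
        CameraChannel nose(0);
        nose.powerOn();
        nose.startRecording();
        std::cout << "camera " << static_cast<int>(nose.id())
                  << " state " << static_cast<int>(nose.state()) << "\n";
        return 0;
    }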

Pyxida firmware work

By far the top priority will be fixing the issue we had in the days before launch, when we were detecting liftoff immediately after arming the rocket, without any acceleration.
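
For reference, the behavior we presumably want is something along these lines: liftoff is declared only after sustained acceleration above a threshold, so arming alone can never trigger it. This is only an illustrative sketch; the class name, threshold, and sample count are made up and are not the actual Pyxida logic or flight-tuned values.

    // Illustrative liftoff guard: require acceleration above a threshold for several
    // consecutive samples before declaring liftoff. Values are placeholders.
    #include <cassert>
    #include <cstdint>

    class LiftoffDetector {
    public:
        bool update(float accel_g) {
            if (accel_g > kThresholdG) {
                if (++consecutive_ >= kRequiredSamples) liftoff_ = true;
            } else {
                consecutive_ = 0;  // reset on any sample below the threshold
            }
            return liftoff_;
        }

    private:
        static constexpr float kThresholdG = 3.0f;        // placeholder threshold
        static constexpr uint32_t kRequiredSamples = 10;  // placeholder sample count
        uint32_t consecutive_ = 0;
        bool liftoff_ = false;
    };

    int main() {
        LiftoffDetector detector;
        assert(!detector.update(0.0f));  // armed on the pad: no liftoff
        bool detected = false;
        for (int i = 0; i < 10; ++i) detected = detector.update(5.0f);  // boost phase
        assert(detected);
        return 0;
    }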

/include and /src directories

Header files need to be put into the include directory for better organization of the codebase. I will do this after all branches have been merged and conflicts resolved before we start working on the code again in the fall.
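
For illustration, the layout would look roughly like this (the header/source names below are just placeholders, not the actual Pyxida source files; main.cpp and playground.cpp are the existing ones mentioned elsewhere on this page):

    include/
        Estimator.h
        Radio.h
        Logger.h
    src/
        Estimator.cpp
        Radio.cpp
        Logger.cpp
        main.cpp
        playground.cpp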

Testing

Clearly, the issue we had happened due to a lack of integration testing. Had that been done, we would have spotted the error earlier in the coding process (and not a day before launch).

Unit testing

Great work has been done so far, with around 120 unit tests written. However, there are still some components of our code that require testing; unit testing is most effective when all parts of the code are covered.

The issue that occurred was clearly a problem with the integration of components of the code, as all unit tests were still passing. A possible improvement we could make is to put down more detailed specifications of how each component should be used with respect to the others, and to make sure other components don't make conflicting assumptions about it.
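
As a small illustration of what such a specification check could look like (the IMU wrapper, its method, and the unit convention below are hypothetical, not our actual components), a test can pin down an assumption explicitly so that two components can't silently disagree about it:

    // Hypothetical contract test: pins down the assumption that the IMU wrapper
    // reports vertical acceleration in g, so a consumer can't silently assume m/s^2.
    #include <cassert>

    struct MockImu {
        // Stand-in for the real sensor wrapper; at rest it should report about 1 g.
        float verticalAccelG() const { return 1.0f; }
    };

    int main() {
        MockImu imu;
        // If the wrapper is ever changed to return m/s^2, this fails loudly.
        assert(imu.verticalAccelG() > 0.9f && imu.verticalAccelG() < 1.1f);
        return 0;
    }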

Integration testing (HOOTL)

In my opinion, Hardware Out Of The Loop (HOOTL) testing has been completely neglected this year, which was not a good thing. I do think this was simply because of a lack of time, as we have mostly been working towards restructuring the code. Nevertheless, this should be a major focus in the fall.

With HOOTL, we want to catch the integration errors we cannot catch with unit testing, which happen when unit tests assume the functionality of a component incorrectly. There is a whole range of possible HOOTL requirements, from a simple executable to a full-blown GUI app. I think we should settle somewhere in the middle, especially in the beginning, and we can add on features later.

The possible requirements, in order of feasibility and importance:

  • We are able to compile all of the Pyxida firmware into an executable that can be run on a non-embedded computer. To do this, we mock the sensor dependencies and can step through the code with the GDB debugger. We set up a quick launch expectation check in the main.cpp file. Note: I have done a simple version of that in playground.cpp.
  • We are able to preset sensor values for different times for the duration of the whole flight, and we log all interactions of the code with sensor objects and other hardware features. We set default responses for them as well. We write the log out to a file.
  • We can feed HOOTL RasAero sim data or a flight log, and the mock sensors then feed the code the values from that file. The latter will be very important once we obtain data from flight tests.
  • We are able to connect the mocked-out HOOTL radio module to MockPyxida so that we can simultaneously observe the code on the ground station.
  • We have a GUI that allows us to step through each update loop, manually set sensor values, and observe the radio output, logs, digital pins, communications to the BBC, etc.

The last one definitely does not seem feasible in the next 6 months, especially given the length of this wiki page, but it is surely a good thing to work on/keep in mind.

This is just a rough outline of the different levels of complexity HOOTL could be built to. Certainly, many requirements will change before we actually start working on it. A very important thing to keep in mind is to keep it flexible, such that a moderate update in the firmware code does not completely screw up the whole interface.

Additionally, even if we decide to build a higher-end testing utility, it would probably make sense to keep a few low-end integration tests as part of the testing executable, together with the unit tests, which we run every time the firmware code changes, just to keep track of what got broken. This would also play nicely if we decide to push through with Continuous Integration (CI), where these tests are run on every push/pull request.
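
As a deliberately tiny sketch of what one of those low-end integration tests could look like under the mocked-sensor approach described above (every class, value, and the liftoff logic here is a placeholder standing in for the real firmware update loop, not actual Pyxida code):

    // Sketch of a low-end HOOTL-style test: a mocked accelerometer replays preset
    // values for a short "flight" and the test checks that liftoff gets detected.
    #include <cassert>
    #include <cstddef>
    #include <utility>
    #include <vector>

    struct SensorSample { float time_s; float accel_g; };

    class MockAccelerometer {
    public:
        explicit MockAccelerometer(std::vector<SensorSample> samples)
            : samples_(std::move(samples)) {}
        // Returns the preset value for the current step, or a 1 g default afterwards.
        float read(std::size_t step) const {
            return step < samples_.size() ? samples_[step].accel_g : 1.0f;
        }
    private:
        std::vector<SensorSample> samples_;
    };

    int main() {
        // Preset a short flight profile: quiet pad, then sustained boost acceleration.
        MockAccelerometer accel({{0.0f, 1.0f}, {0.1f, 1.0f}, {0.2f, 6.0f},
                                 {0.3f, 6.5f}, {0.4f, 7.0f}});
        bool liftoff = false;
        int above_threshold = 0;
        for (std::size_t step = 0; step < 10; ++step) {
            // Stand-in for the real firmware update loop fed by the mocked sensor.
            above_threshold = accel.read(step) > 3.0f ? above_threshold + 1 : 0;
            if (above_threshold >= 3) liftoff = true;
        }
        assert(liftoff);  // liftoff must be detected during the boost samples
        return 0;
    }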

 

Pyxida hardware improvements

 

 

...