FPGA-based Camera Module
By Richard
The goal of this project was to develop a small, low-power camera system suitable for integration as a CubeSat payload. Three prototypes were developed, resulting in a flexible dual-camera architecture suited to many kinds of on-orbit imaging needs. The scope of the project was a proof-of-concept prototype to demonstrate the imaging pipeline, not a final space-grade module.
High-Level Requirements
This project was intended to satisfy the requirements for the Canadian Satellite Design Challenge, with the goal of flying it as a payload on SFUSat’s entry to the competition. The competition requires that the satellite provide imaging capability that can be scheduled from the ground.
As an extension to these requirements, the team decided to develop a dual-band system targeting the visible and NIR bands. With the capability to image in both of these bands, we can create interesting data products such as NDVI (Normalized Difference Vegetation Index), which compares near-infrared and red reflectance. Products like these can be used to evaluate the health of crops, and doing the processing on-orbit reduces the downlink requirements of the satellite. To my knowledge, there is currently no COTS module with all of these capabilities.
- 40x40 km swath at 400 km orbit
- Low power
- Visible and NIR imaging capability
- On-orbit image processing
- Easy to assemble
Requirements Flowdown
1. Swath Width
The swath width of the camera system is defined by its lens selection and the specifics of the image sensor. With a low budget and a small team, sensor selection was limited to parts stocked by electronic component distributors such as Digi-Key and Mouser. This excludes manufacturers such as Sony, who do not make their devices easy to purchase in small quantities. Availability of sensor development boards was also desired, as it would let us prototype with the sensor without the expense and effort of building a PCB for it. From these constraints, we derived some further requirements:
- Image sensor available from common distributor
- Image sensor or sensor family available on inexpensive development board
- Resolution between 2 MP and 5 MP
Lens selection was handled by other members of the team, but generally followed these requirements:
- Vacuum-rated lens preferred
- Minimal moving parts - fixed aperture preferred
- Focus at infinity
- CS mount or M12 mount
- Robust lens design rated for high vibration environments
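Together, the sensor and lens choices determine the swath through simple geometry: the ground sample distance (GSD) is altitude × pixel pitch / focal length, and the swath is the GSD multiplied by the pixel count across the sensor. A back-of-envelope sketch, with every number an illustrative assumption rather than a final design value:

```python
# Back-of-envelope swath geometry. All values are illustrative assumptions
# (a 5 MP-class sensor with 1.4 um pixels and a ~36 mm lens), not final
# design parameters.

altitude_m = 400e3        # orbital altitude from the requirements
pixel_pitch_m = 1.4e-6    # assumed sensor pixel pitch
pixels_across = 2592      # assumed sensor width in pixels
focal_length_m = 36e-3    # candidate lens focal length

# Ground sample distance: the ground footprint of a single pixel
gsd_m = altitude_m * pixel_pitch_m / focal_length_m

# Swath: the ground footprint of the full sensor width
swath_km = gsd_m * pixels_across / 1e3

print(f"GSD:   {gsd_m:.1f} m/px")   # ~15.6 m/px
print(f"Swath: {swath_km:.1f} km")  # ~40 km
```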
2. Low Power
With CubeSats, low power is always preferred. While it is possible for the host satellite bus to include deployable panels, it is overall much simpler to specify low-power subsystems if they are available. There are many components to a good low-power payload strategy, but the primary way we decided to tackle this was to ensure that the camera can be completely powered-down by the satellite’s onboard computer when not in use. From this, several requirements can be derived:
The module shall:
- allow the host satellite to remove power from the camera module
- provide status reporting for whether it is safe to power-down the camera
- include non-volatile image storage
- provide the ability to retrieve images from non-volatile memory
- operate over a single 3.3 V supply
- consume no more than 1 W when in use (imaging)
3. Visible and NIR Imaging
To accomplish the dual-band imaging, we employ two sensors, one with the IR filter removed. Public Lab demonstrates that with two sensors and appropriate filtering, acceptable NDVI images can be obtained from COTS image sensors from OmniVision. We employ similar filtering.
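For reference, NDVI is computed per pixel from the two bands. A minimal sketch in Python, assuming the two sensors' frames have already been co-registered onto the same pixel grid (the array names are illustrative):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index, per pixel, in [-1, 1]."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    denom = nir + red
    denom[denom == 0] = 1.0   # avoid division by zero in fully dark pixels
    return (nir - red) / denom
```

Healthy vegetation reflects strongly in NIR and absorbs red, pushing NDVI toward +1, which is what makes the index useful for crop health monitoring.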
4. On-Orbit Image Processing
As previously mentioned, the ability to process images on-orbit is extremely useful for reducing downlink requirements. Three classes of devices were considered: FPGAs, applications-class processors (think Cortex-A), and microcontrollers.
Microcontrollers are interesting from a low-power perspective. Some STM32 devices have built-in JPEG encoding and parallel camera interfaces. Unfortunately, the available devices were limited to a maximum resolution of about 4 MP. Processing NDVI images with a microcontroller is also a decent challenge from a memory perspective, and no microcontroller I surveyed could easily support two image sensors with its dedicated camera interface hardware. Given more time, I would still really like to try making an STM32-based NDVI camera. I do think it's possible with some creativity (and maybe some FPGA help).
Applications processors are certainly capable of the required processing, and are available with camera interfaces and Linux support for image sensors. However, they are relatively power hungry. Additionally, radiation tolerance becomes a concern with devices built on modern semiconductor nodes. While the duty cycle of this payload is likely low, it would still be prudent to implement error detection and correction in software, which is not a small undertaking. I have also personally witnessed multiple microprocessors die during radiation testing at dose levels below what the mission timeline would require.
Because of the drawbacks of the other two device classes, we selected an FPGA as the processing platform for this project. Some benefits of this are:
- Ability to implement completely parallel image acquisition in two bands
- Ability to hardware accelerate NDVI processing
- With Xilinx parts, built-in support for ECC memory and TMR for radiation tolerance of the soft microcontroller (more on this later)
- Complete flexibility on interfacing and system architecture (processors, sensors, memories, etc.)
Of course, developing a system around an FPGA is a significant undertaking. However, the ability to optimize the system for any particular desired feature (parallelism, power, fancy processing) is a large benefit. It’s also way more fun!
5. Ease of Assembly
This is a low-budget capstone project with some fairly expensive components, so we could not realistically budget anything for professional assembly services. Even when we were offered a sponsorship opportunity for professional assembly, the compressed timeline made it a significant risk: a two-week to month-long turnaround is a large chunk of this project's schedule.
Therefore, we had to keep everything within the realm of what I know well - hand assembly of large-ish SMD components. The general constraints this presents are:
- 0402 components or larger
- No large BGAs (this is a problem for modern FPGAs)
- Relatively coarse spacing between components
Over the past few years, I’ve developed a solid soldering process at SFUSat, one that has been rated “I’d fly it on a CubeSat” by a Canadian Space Agency PCB expert upon inspecting one of our boards :). Given that we’ve decided to use an FPGA, how are we ever going to assemble this thing when our only options are BGA parts?
FPGA Selection
In order to incorporate an FPGA into this prototype without having to solder the device ourselves, we need to use a System on Module (SoM). The plan from the start of the project was to mount the SoM “backpack-style” to the PCB, on the opposite side from the image sensors.
Having to choose a SoM limits the FPGA selection quite significantly. As always, we have cost and availability as key decision factors too. However, the limiting factor on our SoM selection was pin availability. The pin map of our final revision looked like this, but multiplied by 2:
50% of the pin map for AROS v2
Our pin requirements ruled out a lot of SoM/development board possibilities, such as some of the inexpensive ones from Arrow. Trenz Electronic has an extensive range of 4x5 cm SoMs that are also available on Digi-Key. We settled on the TE0725, which is really more of a development board, due to its lower cost and small-enough footprint.
The on-board FPGA has 35k logic elements, of which the final design (including the MicroBlaze processor) used less than 12%. The board also has 87 IO pins, and we used nearly all of them. The TE0725 itself fit our requirements well, but we did encounter some problems while using it.
TE0725 Drawbacks
- PCB pads for all through-hole connectors have very small annular rings. They are extremely difficult to solder reliably, especially GND pads that sink heat into the ground plane.
- Poor connector mate between TE0790 JTAG adaptor and the board resulted in flaky JTAG connections.
- The onboard HyperRAM uses 1.8 V logic and is not configurable, so it locks several IO banks to 1.8 V.
- Poorly organized documentation. Most details are in the schematic as opposed to the reference manual. It would be great to see a pin map in the reference manual!
TE0725 Advantages
- Inexpensive, available, and offers several FPGA sizes and grades.
- Large number of user IOs.
- Configuration and the general design flow were smooth when JTAG connected.
System
The system is built from several IP components and several distinct pieces of hardware.
System block diagram. (Errata: one frame buffer block should be labelled “visible” instead of “NIR”, and each image sensor should have an I2C interface to the FPGA.)
MicroBlaze Microcontroller
A MicroBlaze soft microcontroller is instantiated inside the FPGA fabric. This device was selected because it is royalty free and features extensive support for the design flow within the Xilinx tools. Other options such as RISC-V and ARM cores were also evaluated, but MicroBlaze offered the best tool support.
Firmware
Firmware is written in C++ and consists of a simple state machine. The camera sits idle until commands are received over the control UART. Each command generally executes a sequence of triggers and waits on various register bits, which control the custom hardware.
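The actual firmware runs on the MicroBlaze in C++; the sketch below is a Python model of the same control flow, with hypothetical command names and handlers:

```python
# Python model of the firmware's command loop. The real firmware is C++ on
# the MicroBlaze; the command names and handlers are hypothetical stand-ins.

def capture_image() -> str:
    # In firmware: fire trigger bits, then poll status bits (next section)
    return "OK captured"

def export_image() -> str:
    return "OK exporting"

HANDLERS = {"CAPTURE": capture_image, "EXPORT": export_image}

def handle_line(line: str) -> str:
    """Dispatch one CR-LF-terminated command from the control UART."""
    handler = HANDLERS.get(line.strip())
    return handler() if handler else "ERR unknown command"

print(handle_line("CAPTURE\r\n"))   # -> OK captured
```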
AXI Register Block
Register block generated with airhdl
The purpose of the register block is to allow firmware to monitor and control custom hardware using a memory-mapped interface. Essentially, we are just implementing our own microcontroller peripherals. Various register bits can be toggled in firmware, and those bits can directly trigger some custom hardware. Similarly, outputs of custom hardware blocks can be piped back into the register block so we can read them from firmware. When it’s all said and done, the firmware looks just like a low-level driver for any peripheral, like an SPI module, except all of the bits are exactly what we need to monitor and control the imaging process.
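As a simplified illustration of that pattern, here is a trigger-and-wait in Python form; the register offsets, field masks, and the dict standing in for the AXI bus are all hypothetical:

```python
import time

# Model of the firmware's trigger-and-wait pattern against the register
# block. Offsets and masks are hypothetical; `regs` stands in for the AXI
# bus (and nothing in this model ever sets FRAME_DONE; on hardware, the
# capture pipeline does).

regs = {0x00: 0, 0x04: 0}     # CONTROL at offset 0x00, STATUS at 0x04
CTRL_CAPTURE = 1 << 0         # firmware sets this bit to start a capture
STAT_FRAME_DONE = 1 << 0      # hardware sets this when the frame lands

def capture(timeout_s: float = 1.0) -> None:
    regs[0x00] |= CTRL_CAPTURE                   # fire the trigger
    deadline = time.monotonic() + timeout_s
    while not (regs[0x04] & STAT_FRAME_DONE):    # wait on the status bit
        if time.monotonic() > deadline:
            raise TimeoutError("frame never completed")
    regs[0x00] &= ~CTRL_CAPTURE                  # clear for the next capture
```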
The design uses a custom AXI register block generated using airhdl. Airhdl is a free web-based tool that lets you design a register block and export VHDL (or other representations) to use in a design or simulation. Airhdl is fantastic. I really enjoyed using it, and it enabled us to link hardware and firmware very quickly.
Control
The camera is controlled over a UART. Commands are strings terminated with \r\n, and the firmware picks up each command and executes the appropriate function.
Export Handling
To export images, we provide two interfaces. The first is a UART at 115200 baud. This interface was chosen for the simplicity of connecting it to a PC to grab images. Using any USB-serial converter, such as the FT232, together with pyserial, we can easily write scripts to grab image data and process it. It's very slow (a full-resolution raw image takes several minutes to transfer), but it was simple to get working. I used this interface during development.
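A minimal grab script in that spirit might look like the following; the port name, command string, and image geometry are assumptions:

```python
import serial  # pyserial

# Development-time grab script sketch. The port name, "EXPORT" command, and
# image geometry are illustrative assumptions.

WIDTH, HEIGHT = 2592, 1944   # assumed sensor resolution
BYTES_PER_PIXEL = 2          # RGB565 packs one pixel into two bytes

with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=10) as ser:
    ser.write(b"EXPORT\r\n")                          # request an export
    raw = ser.read(WIDTH * HEIGHT * BYTES_PER_PIXEL)  # read the whole frame

with open("frame.raw", "wb") as f:
    f.write(raw)
print(f"received {len(raw)} bytes")
```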
The export UART and the control UART share a TX line. Firmware controls a hardware multiplexer on this line, which swaps its drive between the control UART and the export block. The control UART is connected to the MicroBlaze and is used for commands and responses. The export UART can only transmit, and offers hardware-accelerated export of images from frame buffer RAM.
The other interface is an 8-bit parallel-to-USB chip, the FT245. After writing drivers to interact with it, we could have grabbed images from the camera at USB 2.0 speeds.
Frame Buffer
HyperRAM was used as the frame buffer. HyperRAM is DRAM with internally managed refresh, so it presents much like an SRAM and is therefore fairly easy to use. It was chosen for its large-enough size and straightforward interface.
The image is placed into the frame buffer as it is transferred out of the image sensor. From there, it sits until the export block draws the image out, or we initiate a transfer to nonvolatile storage.
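Assuming a simple row-major RGB565 layout (two bytes per pixel), locating a pixel in the buffer is just arithmetic; the base address and width below are illustrative:

```python
FRAME_BASE = 0x00000000   # hypothetical frame buffer base in HyperRAM
WIDTH = 2592              # assumed pixels per row

def pixel_address(row: int, col: int) -> int:
    """Byte address of pixel (row, col) in a row-major RGB565 buffer."""
    return FRAME_BASE + (row * WIDTH + col) * 2

print(hex(pixel_address(0, 0)), hex(pixel_address(1, 0)))  # 0x0 0x1440
```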
Software
A member of our team was responsible for the ground segment software development. They created a pretty slick app in Python, with Tkinter as the GUI solution. It offered several buttons and menus for configurability, and sent commands to a backend script using ZeroMQ.
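A sketch of how such a GUI-to-backend link can work with ZeroMQ follows; the endpoint and message strings are assumptions, and the actual app may use a different socket pattern:

```python
import zmq  # pyzmq

# GUI side of a simple REQ/REP link to the backend script. The endpoint and
# message strings are assumptions, not the actual app's protocol.

ctx = zmq.Context()
sock = ctx.socket(zmq.REQ)
sock.connect("tcp://127.0.0.1:5555")  # backend binds the matching REP socket

sock.send_string("CAPTURE")           # a button press becomes a command
print(sock.recv_string())             # backend replies when the image is in
```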
Ground Control Software
The backend script communicated with the board using pyserial, and was adapted from a script used to process images during development. The backend would read in the image data and transform it from raw RGB565 into a JPEG. Once this was done, it would signal to the GUI that the image could be updated in the UI.
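A sketch of that conversion step, assuming little-endian 16-bit pixels and an illustrative resolution:

```python
import numpy as np
from PIL import Image

WIDTH, HEIGHT = 2592, 1944   # assumed resolution

def rgb565_to_jpeg(raw: bytes, path: str) -> None:
    """Convert one raw RGB565 frame (assumed little-endian) to a JPEG."""
    px = np.frombuffer(raw, dtype="<u2").reshape(HEIGHT, WIDTH)
    # Unpack the 5/6/5 bit fields and scale each channel up to 8 bits
    r = (((px >> 11) & 0x1F) << 3).astype(np.uint8)
    g = (((px >> 5) & 0x3F) << 2).astype(np.uint8)
    b = ((px & 0x1F) << 3).astype(np.uint8)
    Image.fromarray(np.dstack([r, g, b])).save(path, "JPEG")

# rgb565_to_jpeg(open("frame.raw", "rb").read(), "frame.jpg")
```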
Conclusion
Three prototypes were built over a period of eight months, and all were capable of taking small images by the end of the project timeline. While none of the modules would be ready for on-orbit use, we developed a proof-of-concept electronics and firmware platform that could serve as a starting point for developing almost any kind of custom camera.