

Since joining UHDT in January 2023, I’ve been working on the Image Processing subsystem for the SUAS competition. This ongoing project develops systems that identify targets and determine their GPS coordinates for autonomous payload delivery, adapting each year to the competition’s evolving requirements.
Table of Contents
- Introduction
- Contributions
- 2025 Imaging System Design
- 2024 Imaging System Design
- 2023 Imaging System Design
Introduction
The University of Hawaii Drone Technologies (UHDT) is a Vertically Integrated Project (VIP) that has been researching and developing drones intended to compete at the annual Student Unmanned Aerial Systems (SUAS) Competition hosted by Robonation, and previously the Association for Unmanned Vehicle Systems International (AUVSI). The completed drone must be able to autonomously navigate a course and complete a series of tasks that include but are not limited to identifying and locating designated targets and dropping payloads corresponding to each target. The competition environment aims to simulate search-and-rescue and package delivery scenarios.
The VIP was established in the Spring of 2014 and is a multi-disciplinary team involving computer, electrical, and mechanical engineers. The team is divided into four subsystems that focus on a key component of the competition: Air Delivery (AD), Hardware, Image Processing (IP), and Software. I joined the project as a member of the Image Processing subsystem in the Spring of 2023 and am the current subsystem lead.
Contributions
During my time as a member of the IP subsystem, I gained experience with Python programming, the Koa High-Performance Computing cluster, and workflows for creating and training machine learning models. I helped develop more comprehensive dataset generation scripts that automate labeling and integrate more closely with tools such as Roboflow, as well as workflows that accelerate model training for the object detection, shape classification, and alphanumeric classification algorithms used in the drone’s imaging system. I also worked extensively on transitioning the color classification algorithm to K-means clustering in the HSV color space, and experimented with replacing the bandpass filter with K-Nearest Neighbors as the classification method.
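To illustrate the idea behind the nearest-neighbor approach, here is a minimal sketch (not the team’s actual code) of classifying pixels in HSV space against a set of reference hues. The reference hue values, thresholds, and function names are all illustrative assumptions; a real implementation would tune them against competition imagery.

```python
# Hypothetical sketch of HSV nearest-neighbor color classification.
# Assumes pixels are (h, s, v) tuples with h in [0, 360), s and v in [0, 1].
from collections import Counter

# Illustrative reference hues; the actual color set and values are assumptions.
REFERENCE_HUES = {
    "red": 0, "orange": 30, "yellow": 60, "green": 120,
    "blue": 240, "purple": 280,
}

def hue_distance(h1, h2):
    """Circular distance between two hues, in degrees."""
    d = abs(h1 - h2) % 360
    return min(d, 360 - d)

def classify_pixel(hsv):
    """1-nearest-neighbor classification against the reference hues."""
    h, s, v = hsv
    if v < 0.2:          # too dark to carry hue information
        return "black"
    if s < 0.15:         # desaturated: white or gray depending on brightness
        return "white" if v > 0.8 else "gray"
    return min(REFERENCE_HUES, key=lambda name: hue_distance(h, REFERENCE_HUES[name]))

def classify_region(pixels):
    """Majority vote over a region's pixels (e.g. one K-means cluster)."""
    votes = Counter(classify_pixel(p) for p in pixels)
    return votes.most_common(1)[0][0]
```

Working in HSV makes hue a single circular dimension, so nearest-neighbor distance is simpler and more robust to lighting changes than comparing raw RGB values.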
As the lead for the IP Subsystem, I am continuing work on the image processing algorithms, developing systems for documenting code and processes, and providing guidance to subsystem members. I am also responsible for communicating with other subsystem leads and integrating the imaging system with the other systems that make up the drone.
2025 Imaging System Design
UHDT will not be competing in the SUAS 2025 competition in order to focus on resolving critical issues in the system design discovered during the 2024 competition, and will instead hold an internal competition at the end of the semester based on the rules provided for the 2025 competition. The new competition rules present significant changes to the object detection and air drop tasks. In previous years, the drone needed to detect, classify, and localize targets drawn from a fixed set of possible values for their alphanumeric character, shape, and the colors of each, and a corresponding payload had to be matched and dropped onto each target. The new guidelines instead require the drone to identify 15 possible objects (a comprehensive list can be found here), four of which will be selected as targets during competition. The majority of the work done over the Fall 2024 semester involved modifying the dataset generation workflows to support the new object classes.
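One small but central piece of any automated labeling workflow is computing the annotation for a synthetically placed target. The sketch below is a hypothetical illustration (not the team’s actual pipeline) of producing a YOLO-format label line when a target crop is composited onto a background at a known position; the function name and arguments are assumptions.

```python
# Hypothetical sketch: emit a YOLO-format annotation line for a target
# pasted onto a background image at a known location. YOLO labels are
# "class_id cx cy w h" with all coordinates normalized to [0, 1].

def yolo_label(class_id, paste_x, paste_y, target_w, target_h, img_w, img_h):
    """Build the label line for one pasted target."""
    cx = (paste_x + target_w / 2) / img_w   # normalized box center x
    cy = (paste_y + target_h / 2) / img_h   # normalized box center y
    w = target_w / img_w                    # normalized box width
    h = target_h / img_h                    # normalized box height
    return f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"
```

Because the paste location is chosen by the script, the bounding box is known exactly, which is what makes fully automated labeling possible for synthetic datasets.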
2024 Imaging System Design
Under the 2024 competition rules, targets were characterized by their alphanumeric character, shape, and the colors of the alphanumeric character and shape. A water bottle representing a payload had to be dropped within 15 feet of its corresponding target. YOLOv8 machine learning models were used to detect targets in a photo and classify their shapes. The 4K images taken by the drone’s camera were pre-processed with hyper-slicing, which divides the unprocessed image into smaller sections to improve detection of smaller targets. The color recognition algorithm quantized target images with K-means clustering and used a bandpass filter to determine the colors of the alphanumeric character and the shape, in addition to extracting a target’s alphanumeric character. The extracted character was then passed to a Tesseract OCR model trained on a custom artificial dataset. Finally, an optimized payload matching algorithm combined the results of the preceding algorithms to match payloads with their corresponding targets and generate a waypoint file for the drone to follow.
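The hyper-slicing step can be sketched as computing overlapping tile boundaries over the full frame, so each tile fits the detector’s input size and small targets occupy more of each crop. This is a minimal illustration under assumed parameters (640-pixel tiles, 20% overlap), not the team’s actual implementation.

```python
# Hypothetical sketch of hyper-slicing: cover a large frame with
# fixed-size, overlapping tiles for the object detector.

def axis_starts(length, tile, step):
    """Tile start offsets along one axis, clamped so the last tile fits."""
    if length <= tile:
        return [0]
    starts = list(range(0, length - tile, step))
    starts.append(length - tile)  # final tile flush with the image edge
    return starts

def tile_boundaries(img_w, img_h, tile=640, overlap=0.2):
    """Return (x0, y0, x1, y1) boxes covering the image with overlap."""
    step = max(1, int(tile * (1 - overlap)))
    return [(x, y, min(x + tile, img_w), min(y + tile, img_h))
            for y in axis_starts(img_h, tile, step)
            for x in axis_starts(img_w, tile, step)]
```

The overlap ensures a target straddling a tile boundary appears whole in at least one tile; detections from all tiles are then mapped back into full-frame coordinates and de-duplicated.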
2023 Imaging System Design
The rules for the SUAS 2023 Competition can be found here. The overall design of the drone’s imaging system remained largely the same between 2023 and 2024, with minor changes in the underlying technology; the most noticeable change was the separation of the object detection and shape classification algorithms. In 2023, the object detection and shape classification algorithms used a YOLOv5 model. The transition to K-means clustering for color classification and Tesseract for alphanumeric classification began during this period.