(This is a student paper)
With the rise in availability and capability of Unmanned Aerial Systems (UAS), there is an opportunity to further expand their use for scientific research, monitoring of critical infrastructure, and emergency management operations. One limiting factor for small UAS (sUAS) is their relatively short flight time (generally under 30 minutes for battery-powered systems), after which human intervention is required to swap out the batteries for recharging. This shortfall prevents these systems from operating fully autonomously for long enough to accommodate most operational scenarios.
One solution is to land the UAS on a charging platform (stationary or mobile), recharge its power source, and then continue operations. Global Positioning System (GPS) fixes are not precise enough for such a landing, and in some situations are not available at all (e.g., inside a building or a mine). Even with advanced augmentation such as Real-Time Kinematic (RTK) positioning, repeated landings at the same commanded location can vary by more than 20 cm. A more precise position determination and command system is therefore necessary for the terminal landing phase of a flight.
The system described in this paper uses a Pixhawk-based quadcopter modified to incorporate charging pads on its landing legs. A landing platform mounted on an Unmanned Ground Vehicle (UGV) has been constructed with four conical receptacles. At the bottom of these receptacles are charging contacts that mate with the pads on the legs of the UAS. The UGV carries supplemental batteries capable of supplying multiple UAS recharge cycles. An infrared (IR) camera mounted in the center of the landing platform tracks a pair of IR LEDs mounted on the UAS.
The camera's images are processed on a Raspberry Pi 3 Model B+ using OpenCV libraries to determine the position and orientation of the UAS within the camera's field of view. The system then issues DroneKit API commands to correct the UAS's angular orientation, move it directly over the landing/charging platform, and descend, keeping the UAS centered through continuous camera tracking of the IR LEDs.
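The pose-from-LEDs step can be sketched as follows. This is a minimal NumPy illustration of the idea (the paper's system uses the full OpenCV pipeline, whose exact calls and parameters are not given here): threshold the IR frame, split the bright pixels into two blobs, and take the blob midpoint as the UAS position and the angle of the line joining the blobs as its heading.

```python
import numpy as np

def led_pose_2d(frame, threshold=200):
    """Estimate UAS position and heading from a grayscale IR frame.

    Locates the two bright LED blobs by intensity thresholding, then
    returns (center, heading): the blob midpoint in pixel coordinates
    and the angle (radians) of the line joining the two blobs relative
    to the image x-axis. Returns None if no LEDs are visible.
    """
    bright = frame >= threshold
    ys, xs = np.nonzero(bright)
    if xs.size == 0:
        return None  # no LEDs in view
    # Split the bright pixels into two clusters along the axis of
    # greatest spread (a lightweight stand-in for OpenCV blob/contour
    # detection on a two-LED target).
    axis = xs if xs.ptp() >= ys.ptp() else ys
    in_first = axis <= axis.mean()
    c1 = np.array([xs[in_first].mean(), ys[in_first].mean()])
    c2 = np.array([xs[~in_first].mean(), ys[~in_first].mean()])
    center = (c1 + c2) / 2.0
    heading = np.arctan2(c2[1] - c1[1], c2[0] - c1[0])
    return center, heading
```

In the real system the pixel center would be compared against the image center (where the camera sits in the platform) to produce the lateral error driving the landing commands.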
To reduce the complexity of the problem from 3D to 2D, an onboard laser altimeter measures the height of the UAS, so the camera tracking system only needs to determine the X-Y position of the UAS within the camera's field of view. Camera frames are processed in a loop that guides the UAS onto the landing platform, comparing the angular, lateral, and height information of the UAS against that of the UGV platform. This loop repeats for each image frame until the UAS has successfully landed.
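One iteration of such a per-frame loop might look like the sketch below. The gains, tolerances, and descent rate are illustrative assumptions, not values from the paper; the actual system sends the resulting corrections as DroneKit commands.

```python
import math

# Illustrative constants (assumed for this sketch, not from the paper).
K_XY = 0.005        # m/s of lateral velocity per pixel of error
K_YAW = 0.8         # rad/s of yaw rate per radian of heading error
CENTER_TOL_PX = 5   # lateral error considered "centered enough" to descend
DESCEND_RATE = 0.2  # m/s downward while centered
TOUCHDOWN_M = 0.05  # altimeter reading treated as landed

def guidance_step(err_x_px, err_y_px, err_yaw_rad, altitude_m):
    """One iteration of the per-frame landing loop.

    Inputs: pixel offset of the UAS from the platform center (from the
    IR camera), heading error (radians), and laser-altimeter height (m).
    Returns (vx, vy, vz, yaw_rate), with vz > 0 meaning descend.
    The UAS is commanded to descend only while laterally centered.
    """
    vx = -K_XY * err_x_px
    vy = -K_XY * err_y_px
    yaw_rate = -K_YAW * err_yaw_rad
    centered = math.hypot(err_x_px, err_y_px) <= CENTER_TOL_PX
    vz = DESCEND_RATE if centered and altitude_m > TOUCHDOWN_M else 0.0
    return vx, vy, vz, yaw_rate
```

Running this once per camera frame yields proportional corrections that shrink the lateral and heading errors while stepping the altitude down toward the receptacles.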
This autonomous landing capability may be paired with a UGV to support missions requiring mobile, long-duration units (e.g., pipeline inspections, mine rescue operations), or with a static platform at a desired location to support recurring mission sets (e.g., cross-sectional measurements of a glacier) or on-command asynchronous missions (e.g., search and rescue).
This paper details the design and operation of the aforementioned autonomous landing system and how this project was accomplished by a UNIVERSITY graduate student to satisfy program requirements for their MSEE and simultaneously support real-world mission requirements for UNIVERSITY’s Federal Aviation Administration (FAA) UAS Test Site.