We are currently offering Master's, Bachelor's, and R&D engineering projects within wireless communications, networking, and data analytics.
On-board Monocular and Stereocular Depth Estimation for UAV Navigation
You will learn the architectures of convolutional neural networks (CNNs) and their training procedures. You will also gain hands-on experience deploying a neural-network-based algorithm within the Robot Operating System (ROS) on an on-board processor.
Collaboration: This project is part of the Drones4Energy innovation project, which targets proving autonomous inspections using UAVs.
Prerequisites: Mathematical modelling, Python programming, embedded software development.
Contact: Liping Shi (liping@eng.au.dk), Rune Hylsberg Jacobsen (rhj@eng.au.dk)
(Image source: A. Geiger, P. Lenz, C. Stiller, R. Urtasun, Depth Prediction Evaluation)
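For the stereo branch of this project, the core geometric relation is that metric depth is inversely proportional to pixel disparity. A minimal sketch of this conversion (the focal length and baseline values below are illustrative, roughly KITTI-like, not specifications of our platform):

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Convert a stereo disparity map (pixels) to metric depth.

    depth = f * B / d, where f is the focal length in pixels,
    B the stereo baseline in metres, d the per-pixel disparity.
    Zero disparity (no match) maps to infinite depth.
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        return np.where(disparity_px > 0,
                        focal_px * baseline_m / disparity_px,
                        np.inf)

# Illustrative KITTI-like camera (f ~ 721 px, B ~ 0.54 m):
depth = disparity_to_depth([[72.1, 7.21]], focal_px=721.0, baseline_m=0.54)
# 72.1 px disparity -> ~5.4 m; 7.21 px -> ~54 m
```

A learned monocular network regresses the same depth (or disparity) map from a single image; this relation is still what ties its training targets to stereo ground truth.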
Flying Ad Hoc Networks (FANETs)
The project addresses the problem of forming wireless mesh networks in a swarm of unmanned aerial vehicles (UAVs). Mesh networks are challenged by the mobility of the UAVs and by interference, and must implement self-organizing mechanisms to cope with these dynamics. Important functions of mesh networks are neighbor discovery, path selection, and maintenance of routing information. The study will include experimental work based on the IEEE 802.11s standard, which amends WiFi to support mesh networking. You will also gain hands-on experience of wireless communication on our UAV platform.
Collaboration: This project is part of the Drones4Energy innovation project, which targets proving autonomous inspections using UAVs.
Prerequisites: Wireless networks, embedded software development.
Contact: Rune Hylsberg Jacobsen (rhj@eng.au.dk)
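For context on path selection: the default 802.11s path-selection protocol (HWMP) ranks links by the airtime link metric. A minimal sketch of that metric (the channel-access overhead value below is illustrative; the exact constants are PHY-dependent, so check the standard for your radio):

```python
def airtime_link_metric(rate_mbps, frame_error_rate,
                        overhead_us=75.0, test_frame_bits=8192):
    """Airtime cost c_a = (O + B_t / r) / (1 - e_f), per IEEE 802.11s HWMP.

    O   : channel access overhead in microseconds (PHY dependent;
          75 us is an illustrative value, not a normative one)
    B_t : test frame length in bits (8192 in the standard)
    r   : link data rate in Mbit/s
    e_f : measured frame error rate for the test frame
    """
    transmit_time_us = test_frame_bits / rate_mbps  # bits / (Mbit/s) = us
    return (overhead_us + transmit_time_us) / (1.0 - frame_error_rate)

# A fast, clean link costs less airtime than a slow, lossy one:
fast = airtime_link_metric(rate_mbps=54.0, frame_error_rate=0.01)
slow = airtime_link_metric(rate_mbps=6.0, frame_error_rate=0.20)
```

In a mobile swarm, both `rate_mbps` and `frame_error_rate` fluctuate as UAVs move, which is exactly why path maintenance is a central function of the mesh.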
Multi-UAV System for Autonomous Exploration
Collaboration: This project is part of the Drones4Energy innovation project, which targets proving autonomous inspections using UAVs.
Prerequisites: C++ or Python programming, Linux.
Contacts: Liping Shi (liping@eng.au.dk) and Rune Hylsberg Jacobsen (rhj@eng.au.dk)
(Picture reference: Master thesis project, Simon Anh Cao Nguyen)
Inspection Path Planning and 3D Reconstruction
The project addresses the problem of 3D structural inspection path planning. The task is to find the best UAV viewpoints and a connecting path for a complete inspection of a 3D structure, given a 3D point cloud model or a triangular mesh representation of the structure as a precondition. A 3D image reconstruction pipeline needs to be developed. The path planning should consider the limitations of the UAV's dynamics and of its on-board camera, e.g., its field of view (FOV).
Collaboration: The project is part of the Drones4Energy innovation project. The project may collaborate with another project group that researches 3D point cloud mapping (LiDAR) and free space recognition.
Prerequisites: Linux, programming, experimental skills
Contacts: Liping Shi (liping@eng.au.dk) and Rune H. Jacobsen (rhj@eng.au.dk)
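Once candidate viewpoints are chosen, connecting them into a short inspection path is essentially a travelling-salesman problem. A greedy nearest-neighbour tour is a common baseline before a proper TSP solver; a minimal sketch (the viewpoint coordinates are made up for illustration):

```python
import math

def nearest_neighbour_tour(viewpoints, start=0):
    """Order inspection viewpoints with a greedy nearest-neighbour
    heuristic: repeatedly fly to the closest unvisited viewpoint.
    A simple baseline; real planners would also respect UAV dynamics."""
    unvisited = set(range(len(viewpoints))) - {start}
    tour = [start]
    while unvisited:
        last = viewpoints[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, viewpoints[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Four hypothetical viewpoints (x, y, z in metres) around a structure:
vps = [(0, 0, 5), (9, 0, 5), (9, 10, 5), (0, 10, 5)]
order = nearest_neighbour_tour(vps)  # walks around the perimeter: [0, 1, 2, 3]
```

The same skeleton extends naturally to costs that penalize sharp heading changes or viewpoints outside the camera's FOV constraints.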
Positioning in Urban Areas for Mobile Robots
The target GNSS-RTK solution is a Network Real Time Kinematic (NRTK) positioning system, receiving RTK correction services from TAPAS (www.tapasweb.dk) over the Internet. The TAPAS platform consists of a network of 11 reference stations deployed in Aarhus city, which improve data from global navigation satellite systems (GNSS) such as GPS and Galileo. The positioning quality (e.g., accuracy and timeliness) may be affected by the communication quality between the on-board dual-band GNSS receiver and the TAPAS platform. Students working on this project will investigate and characterize the interference sources and the fading and shadowing components, plan the in-field measurements, and evaluate the research assumptions. The project will furthermore investigate adaptive algorithms that let the robot decide when and how to switch from GNSS-based positioning to non-GNSS positioning (e.g., as in lunar rovers, robot state estimation using visual-inertial odometry and wheel odometry) while the robot is moving in environments where the GNSS signal is not reliable. To provide explainability of the empirical study, models for the latest GNSS-based positioning solutions will be involved. During the project, students will learn GNSS-RTK technology and the characteristics of GNSS signal transmission, and gain experience with a mobile robot platform and development using the Robot Operating System (ROS).
Collaboration: This project collaborates with Capra Robotics ApS (www.capra.ooo)
Contacts: Rune Hylsberg Jacobsen (rhj@eng.au.dk) and Liping Shi (liping@eng.au.dk)
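The adaptive switching idea can be sketched as a simple hysteresis rule on reported horizontal accuracy. This is only an illustrative sketch: the thresholds and the accuracy input are assumptions, and a real system would use the receiver's fix type and covariance (e.g., from a ROS NavSatFix message) rather than a single scalar:

```python
class PositioningSelector:
    """Hysteresis switch between GNSS-RTK and odometry-based positioning.

    Illustrative sketch with assumed thresholds; using two different
    thresholds avoids rapid toggling when accuracy hovers near a limit.
    """
    def __init__(self, bad_accuracy_m=0.5, good_accuracy_m=0.1):
        self.bad = bad_accuracy_m    # above this: GNSS considered unreliable
        self.good = good_accuracy_m  # below this: RTK fix considered recovered
        self.source = "gnss"

    def update(self, horizontal_accuracy_m):
        if self.source == "gnss" and horizontal_accuracy_m > self.bad:
            self.source = "odometry"  # e.g. entering an urban canyon
        elif self.source == "odometry" and horizontal_accuracy_m < self.good:
            self.source = "gnss"      # RTK quality restored
        return self.source

sel = PositioningSelector()
sel.update(0.02)  # good RTK fix -> "gnss"
sel.update(1.20)  # degraded    -> "odometry"
sel.update(0.30)  # in-between  -> stays "odometry" (hysteresis)
sel.update(0.05)  # recovered   -> "gnss"
```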
Beam Steering and Beamforming Algorithms for Nanosatellite Communications
The goal of this project is to develop beam steering and beamforming algorithms for nanosatellites. A depiction of beam steering is shown in the figure: by controlling the phases of the signals feeding the antennas, the direction of the whole radiation pattern can be controlled electronically. This is achieved by providing an input steering angle to the controller C in the diagram. Beamforming is similar, but the shape of the radiation pattern can be designed as well. The work will focus on beam steering and beamforming algorithms that steer from an input reference angle, obtained from the satellite attitude determination and control system (ADCS), to a desired target angle. Reference signals to digital phase shifters will be considered. The algorithms will be developed in a high-level language for design and analysis, and the implementation will be carried out on a low-end platform such as an Arduino or a Raspberry Pi. The student will have the opportunity to work hands-on with our NAN lab facilities and nanosatellite equipment, and to collaborate on current research in satellite projects.
Prerequisites: Programming in a high-level (e.g., Python) and/or low-level (C/C++) language, Linux, embedded programming.
Contacts: Rune Hylsberg Jacobsen (rhj@eng.au.dk)
(Image sources: Wikipedia (top), Adaptive beam steering (bottom))
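For a uniform linear array, the phase shifts that steer the main lobe follow directly from the array geometry. A minimal sketch, assuming ideal isotropic elements and a given element spacing in wavelengths (the 4-element, half-wavelength example is illustrative, not our flight hardware):

```python
import numpy as np

def steering_phases(n_elements, spacing_wavelengths, steer_deg):
    """Per-element phase shifts (radians) steering the main lobe of a
    uniform linear array to `steer_deg` off broadside.

    phi_n = -2*pi * n * (d / lambda) * sin(theta)

    These continuous phases would then be quantized to the resolution
    of the digital phase shifters on the target platform.
    """
    theta = np.deg2rad(steer_deg)
    n = np.arange(n_elements)
    return -2.0 * np.pi * n * spacing_wavelengths * np.sin(theta)

# 4-element half-wavelength-spaced array steered 30 degrees off broadside;
# adjacent elements then differ by -pi*sin(30 deg) = -pi/2 radians:
phases = steering_phases(n_elements=4, spacing_wavelengths=0.5, steer_deg=30.0)
```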
Satellite Image Inpainting using Deep Neural Networks
The amount of publicly available satellite imagery has grown tremendously as a result of the open data policies of ESA and NASA. However, clouds obstruct the view in satellite imagery, especially in Northern European countries. To alleviate this issue, recently developed deep learning methods for image inpainting can be employed. In this project, the aim is to investigate how generative deep neural networks can be used to inpaint the missing parts of satellite imagery after cloud removal. It is important, however, that the inpainted parts are trustworthy representations of the ground. To achieve this, the input image must be combined with historic data, and possibly satellite radar data, and the performance of the deep learning methods must be evaluated thoroughly. The satellite imagery used should be from the Sentinel-2 satellite, and the student(s) will have GPU processing power available for training the deep learning models.
Collaboration: This project is part of the Future Cropping Partnership. The results will be presented as part of the outcome of the project, and if the results are promising, our aim is to collaborate on publishing a scientific paper based on the project.
Prerequisites: Machine learning/deep learning, Python programming
Contacts: Jacob Høxbroe Jeppesen (jhj@eng.au.dk) and Rune Hylsberg Jacobsen (rhj@eng.au.dk)
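A typical reconstruction objective for inpainting evaluates error only over the cloud-masked pixels, since the network is free to copy the unmasked input. A minimal NumPy sketch of such a masked loss (illustrative only; a real training setup would add adversarial or perceptual terms and operate on batches of multi-band Sentinel-2 tiles):

```python
import numpy as np

def masked_l1_loss(pred, target, cloud_mask):
    """Mean absolute error over cloud-masked pixels only.

    pred, target : arrays of the same shape (reconstructed / reference image)
    cloud_mask   : boolean array, True where clouds were removed and the
                   network had to hallucinate ground content
    """
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    mask = np.asarray(cloud_mask, dtype=bool)
    return np.abs(pred[mask] - target[mask]).mean()

# Only the reconstructed (masked) pixel contributes; the unmasked
# error at (0, 1) is ignored entirely:
target = np.array([[1.0, 2.0], [3.0, 4.0]])
pred   = np.array([[1.0, 9.0], [3.5, 4.0]])
mask   = np.array([[False, False], [True, False]])
loss = masked_l1_loss(pred, target, mask)  # |3.5 - 3.0| = 0.5
```

Evaluating against a cloud-free reference acquisition of the same area gives exactly the "trustworthiness" check described above.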