The engineering researchers of the future are shaped here. Our PhD students have high academic ambitions and deliver high-quality results to both the private and the public sector. Our main focus is applied research, and we collaborate closely with the electrical and computer engineering industry because we understand its core challenges and help develop the solutions.
On this page you can meet some of our PhD students and read about their projects.
The annual bill for corrosion amounts to around 3 per cent of the gross world product (GWP), and international attention on the monitoring of infrastructure projects is therefore growing. The Department of Electrical and Computer Engineering at Aarhus University is developing a smart patch that can cut a large share of the rust bill.
In August 2018, the 1,182-metre-long and 45-metre-high Ponte Morandi bridge in Northern Italy collapsed. 43 people died, 600 lost their homes, and Italy's proud engineering reputation suffered a resounding blow. And although the precise cause of the collapse is not yet known, investigators have found evidence that corrosion and a lack of structural maintenance lay behind the tragic event.
“The aim is to develop a kind of sensor patch that is attached to the reinforcement and cast into the concrete structure. The sensor and the interface electronics are powered by energy-harvesting technologies to ensure continuous monitoring of the condition of the steel,” says Jaamac Hassan Hire, Industrial PhD student on the project.
The significance of improving the sustainability of buildings has been increasing rapidly in recent years. While it is important to analyse the technical aspects related to the impacts of buildings and neighbourhoods on our environment, it is also crucial to consider social criteria in the context of a sustainable built environment.
In this project, we investigate the importance of social criteria in the built environment.
In addition, we develop and implement a decision support software solution in order to provide an assessment methodology. This methodology can verify the satisfaction of social aspects that directly affect the well-being of the buildings' occupants, such as indoor air quality and visual and thermal comfort.
This research will be carried out on actual buildings that are part of the newly renovated AU living lab, in integration with the EU-funded PROBONO project, which aims to provide strategic planning tools from technical, environmental, economic and social perspectives to create recommendations and standardisation actions for the European construction industry.
Project title: Next Generation, People-Centred Analysis Tools for Holistic, Sustainable Renovation of Green Buildings and Neighbourhoods (GBN)
PhD student: Yazan Nidal Hasan Zayed
Project period: 15.11.2022 - 14.11.2025
Main supervisor: Carl Schultz
Co-supervisor: Aliakbar Kamari
Automatic sleep scoring makes use of algorithms that infer sleep states from sleep recordings. In recent years, these algorithms have shown great performance, even matching human performance, by employing various deep learning approaches. Most of these models are trained on recordings of healthy patients or a few specific patient groups, and they cannot be expected to generalise to other patient groups, as sleep differs with both age and health. In addition, sleep EEG recordings are expensive to produce and label for training.
In this project, we will seek to develop and test “transfer learning” or “semi-supervised learning” approaches to this problem, adapting high-performing models trained on large data sets to perform well on smaller data sets recorded for different patient groups or with a different device.
In particular, the developed methods will be used not only for regular, clinical sleep recordings, but also for recordings made using the ear-EEG device developed at the Center for Ear EEG.
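The transfer-learning idea above can be sketched as follows. This is a minimal, synthetic illustration, not the project's actual models: the feature extractor, the data, and the binary "sleep stage" label are all made up. A model component trained on a large corpus is kept frozen, and only a small classifier head is retrained on the new, smaller data set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a feature extractor pretrained on a large
# sleep corpus: a fixed nonlinear projection of short EEG epochs
# (128 samples per epoch, purely for illustration).
W_pretrained = rng.normal(size=(128, 16)) * 0.05

def extract_features(epochs):
    """Map raw epochs (n, 128) to 16-d features; kept frozen."""
    return np.tanh(epochs @ W_pretrained)

# Small labelled data set from a new patient group or device (synthetic).
X_small = rng.normal(size=(200, 128))
y_small = (X_small[:, 0] > 0).astype(float)   # toy binary "sleep stage"

# Transfer learning: keep the extractor frozen and train only a new
# logistic-regression head on the small data set.
F = extract_features(X_small)
w, b, lr = np.zeros(F.shape[1]), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))    # sigmoid
    w -= lr * F.T @ (p - y_small) / len(y_small)
    b -= lr * np.mean(p - y_small)

pred = (1.0 / (1.0 + np.exp(-(F @ w + b))) > 0.5).astype(float)
accuracy = float(np.mean(pred == y_small))
```

Because only the small head is trained, far fewer labelled epochs are needed than when training a full deep model from scratch, which is the core motivation for transfer learning here.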
In diseases causing blindness by photoreceptor degeneration, such as retinitis pigmentosa and age-related macular degeneration, the neuronal circuit in the eye remains intact and functional. Two approaches to restore visual function are photovoltaic (PV) retinal implants and optogenetics (OG).
The PV implants are micro-scale solar cell units which, when illuminated, produce a small current. The neuroretina consists of the neuron layers in the retina that propagate the signal from the photoreceptors to the visual cortex in the brain. When implanted in close proximity to the neuroretina, the currents of the PV implant activate the neurons locally, thereby mimicking the photoreceptor signals, which can then be perceived as an image in the brain.
In OG, viruses are used to deliver DNA encoding light-sensitive proteins to the neuroretina, often targeting the retinal ganglion cells. The neurons can then be stimulated directly by illumination, as the light-sensitive proteins induce their activation.
Both PV implants and OG have been tested separately, but in this novel project we aim to combine them to restore visual function further than either approach can on its own. To achieve this aim, micro solar cells and viruses will be designed, produced and tested individually in ex vivo experiments with pig eyes. Subsequently, animal studies will be performed to assess the combination treatment in vivo in a disease model.
Furthermore, the project aims to develop a controlled drug delivery method for the optogenetics component that ensures safety, quantity, and area of delivery.
Project title: Hybrid photovoltaic and optogenetics stimulation of the neuroretina for restoring visual function in blind patients
PhD student name: Asbjørn Cortnum Jørgensen
Main supervisor: Farshad Moradi
Co-supervisor(s): Rasmus Schmidt Davidsen
Project period: 01.08.2023 – 31.07.2026
To design, implement and evaluate secure data sharing and controlled, privacy-preserving processing of IoT data using the cloud-to-thing IoTalentum infrastructure, trusted execution environments (e.g., Intel SGX), local computing resources, and judicious allocation of data. To investigate compression techniques that allow for processing directly on compressed data as a way to reduce data traffic and memory usage across local, MEC, and cloud devices, in order to address the explosion in generated data worldwide.
The number of hearing-impaired individuals is growing rapidly on a global scale due to the growth of the elderly population. Hearing impairment can make it difficult to communicate with the world around us, sometimes to the point where certain social situations are avoided, which can lead to isolation, cognitive decline and even depression. It is therefore no great surprise that how well a person can understand speech (speech intelligibility) has been a high priority when developing and fitting hearing aids. The gold standard for determining speech intelligibility is a behavioural test in which the hearing-impaired individual is presented with speech elements and repeats back what is heard. This is not only a highly time-consuming process but also a very subjective procedure, which has been shown not to always correspond to the user's experience in daily life.
Can we measure speech intelligibility using electroencephalography?
This project aims to investigate whether it is possible to measure how well speech is understood using electroencephalography (EEG). If so, the immediate benefit will be a new objective measure for evaluating, for example, whether a hearing aid improvement enhances speech intelligibility in a lab setting. The natural next step will be to investigate whether speech intelligibility can be measured using ear-EEG, a device that records electrophysiological signals from electrodes placed inside the ear. The benefit of ear-EEG is that EEG can be measured in the user's natural environment, in an unobtrusive and mobile manner. This will hopefully result in better speech intelligibility for end users, thereby closing the communication gap caused by their hearing impairment.
This project is carried out in close collaboration with the Danish companies Vestas, Velux, Millpart, Hydrospecma, and Landia, and with Ringkøbing-Skjern Municipality.
Each company focuses on different products and industries. The project intends to contribute to the digital technologies of the involved companies, improving product design and production using virtualisation and simulation methods and considering strategies to enhance the optimisation and efficiency of their processes.
This will be achieved through the combination and integration of already existing tools for digital twin representation and software for process and product management. The PhD project is associated with the Digital Transformation Lab (DTL) in Ringkøbing-Skjern Municipality, targeting a digital transformation of the five companies in the area. The DTL will be used for the experimental parts of the project and will be key in ensuring the best exchange of research-based knowledge from the university with practice-based knowledge at the local companies.
Project title: Shortening time to market for product design using simulation and traceability in a digital twin context
PhD student: Santiago Gil Arboleda
Project period: 01.06.2021 – 31.05.2024
Main supervisor: Peter Gorm Larsen
Co-supervisor: Alexandros Iosifidis
Auditory Attention Decoding (AAD) is envisioned to become an important technology for the next generations of hearing aids. With an AAD component, a hearing device will be able to detect which sound sources the user is attending, and thereby the audio signal processing in the hearing device can be adapted accordingly.
In this project, we propose a new approach in which AAD is based on cognitive processing of audio event stimuli. This is expected to better separate attended and non-attended sound sources. The approach extracts event-related potentials and analyses them using state-of-the-art deep learning models. We will validate the algorithm on ear-EEG signals, a minimalistic recording modality that will allow the technology to be implemented in future hearing aid devices.
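The event-related-potential (ERP) extraction step mentioned above can be sketched in a few lines. This is a synthetic, single-channel illustration (made-up sampling rate, event times, and response shape), not real ear-EEG data: epochs are cut out time-locked to the event onsets and averaged, so the event-locked response survives while background EEG averages out.

```python
import numpy as np

rng = np.random.default_rng(1)

fs = 250                      # assumed sampling rate (Hz)
n_samples = fs * 60           # one minute of synthetic single-channel EEG
eeg = rng.normal(scale=5.0, size=n_samples)   # background activity (arb. units)

# Synthetic event-related response: a small deflection ~100 ms after
# each audio event (purely illustrative).
erp_template = np.zeros(fs // 2)              # 500 ms epoch
erp_template[25:50] = 2.0                     # 100-200 ms positive bump

events = np.arange(fs, n_samples - fs, fs)    # one event per second
for onset in events:
    eeg[onset:onset + len(erp_template)] += erp_template

# ERP extraction: average epochs time-locked to the event onsets.
epochs = np.stack([eeg[o:o + len(erp_template)] for o in events])
erp = epochs.mean(axis=0)

# Averaging attenuates background EEG roughly by 1/sqrt(n_events),
# so the event-locked bump stands out against the pre-event baseline.
peak_window = erp[25:50].mean()
baseline = erp[:25].mean()
```

In the project itself the averaged (or single-trial) epochs would then be fed to deep learning models rather than inspected by eye.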
The ReMaRo ETN project aims to develop Artificial Intelligence methods for submarine robotics with quantified reliability, correctness specifications, models, tests, and analysis & verification methods.
Within this project, I am involved in the development of deep learning algorithms for vision-based navigation in safety-critical underwater applications. One of the underwater contexts considered within the project is pipeline inspection. That is, I will develop visual (camera-based) localization algorithms for pipeline inspection, supported by other sensors such as sonars. Moreover, I will work on the reliability assessment of these methods, together with other PhD students from the ReMaRo consortium, to identify whether the localization estimates are reliable depending on the quality of the measurements taken.
My PhD project is part of MADE FAST, in close collaboration with Vestas. Over the last 15 years, wind turbines have become bigger and bigger, and they continue to do so. This means that the value chain is becoming increasingly challenged and transportation costs are rapidly increasing. One option for coping with this challenge is to move manufacturing closer to the installation site. To this end, Vestas would like to create a movable factory that can be assembled/disassembled on site and transported in containers.
The vision of this project is to build a digital twin that enables a movable factory solution which can be moved around the world and configured so that the assembly processes can be conducted in a safe way. The digital twin will be able to demonstrate the assembly and disassembly of the movable factory for selected cases. In addition, it will be able to compare the actual assembly process sequence against the digital model and, in case of larger discrepancies, warn the appropriate operators.
The academic goal of the project is to provide new applied methods and results demonstrated in industry where we can document significant improvements with digital twins.
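The discrepancy check described above can be illustrated with a toy sketch. Step names, durations, and the tolerance are hypothetical, not Vestas data: the observed assembly sequence and step durations are compared against the digital model's plan, and larger deviations produce warnings for the operators.

```python
# Hypothetical planned vs. observed assembly steps: (step name, duration in s).
planned = [("mount_base", 30.0), ("attach_arm", 45.0), ("wire_up", 20.0)]
observed = [("mount_base", 32.0), ("wire_up", 21.0), ("attach_arm", 80.0)]

TOLERANCE = 0.25   # warn if a step takes >25% longer than planned (assumed)

def check_assembly(planned, observed):
    """Compare the observed process against the digital model's plan."""
    warnings = []
    planned_order = [step for step, _ in planned]
    observed_order = [step for step, _ in observed]
    if planned_order != observed_order:
        warnings.append(
            f"sequence deviates: expected {planned_order}, got {observed_order}")
    durations = dict(planned)
    for step, took in observed:
        expected = durations.get(step)
        if expected is not None and took > expected * (1 + TOLERANCE):
            warnings.append(f"step '{step}' took {took}s, planned {expected}s")
    return warnings

alerts = check_assembly(planned, observed)
```

Here both the out-of-order sequence and the overlong `attach_arm` step would be flagged; a real digital twin would of course use richer models than fixed thresholds.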
The Internet of Things (IoT) refers to a paradigm in which internet connectivity is ubiquitous among all kinds of devices everywhere. Associated with this is a massive increase in data collection and use, which has the potential to deliver huge long-term value to people and society. For example, reducing downtime and maintenance costs for production systems, improving autonomous vehicle safety and reducing environmental impact through efficiency improvements.
However, current technology is not prepared to deal with so many devices transmitting and using data simultaneously. To realise the benefits of IoT, an end-to-end framework that considers data compression and data analysis holistically is critical.
The goal of this project is to investigate the synergies and trade-offs between data compression and analysis. Specifically, this will involve developing algorithms for doing analytics directly on compressed data and optimising compression for both storage and analytics concurrently.
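As a toy illustration of analytics directly on compressed data (not the project's actual algorithms): run-length encoding compresses repetitive sensor readings, and aggregates such as sum and mean can be computed from the runs without ever decompressing.

```python
def rle_encode(values):
    """Compress a sequence into (value, run_length) pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [(v, n) for v, n in runs]

def rle_sum(runs):
    # Each run contributes value * run_length; no decompression needed.
    return sum(v * n for v, n in runs)

def rle_mean(runs):
    total = sum(n for _, n in runs)
    return rle_sum(runs) / total

# Hypothetical temperature-sensor readings with long constant stretches.
readings = [21, 21, 21, 22, 22, 21, 21, 21, 21, 23]
runs = rle_encode(readings)
```

Working on the runs touches 4 items instead of 10 here; on real IoT streams with long constant stretches the saving in traffic and memory is what makes this approach attractive.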
I am researching optical nonlinearities to design a robust and scalable laser device with narrow linewidth for emission at various wavelengths. Nonlinear optics allows, among other interesting effects, for wavelength conversion. This can be used to realise lasers at wavelengths otherwise hard to come by, and it has been used to achieve narrow linewidths. Implementing nonlinear optics in photonic integrated circuits (PICs) can enhance the nonlinear effects while making the device robust and small. This could, for example, enable handheld instruments that measure and identify gases in exhaled breath for medical analysis.
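The wavelength conversion mentioned above rests on photon-energy conservation. As a brief illustration (standard nonlinear-optics relations, not specific to this project), in sum-frequency generation two pump photons combine into one photon at a shorter wavelength:

```latex
\omega_3 = \omega_1 + \omega_2
\quad\Longleftrightarrow\quad
\frac{1}{\lambda_3} = \frac{1}{\lambda_1} + \frac{1}{\lambda_2},
\qquad
\lambda_{\mathrm{SHG}} = \frac{\lambda_{\mathrm{pump}}}{2}
```

Second-harmonic generation (SHG) is the degenerate case $\omega_1 = \omega_2$, producing light at half the pump wavelength; this is one way to reach wavelengths that are otherwise hard to generate directly.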
Money laundering is a serious financial crime with devastating consequences; it enables drug dealing, corruption, and terrorism.
To combat money laundering, banks are required to monitor and report suspicious customer behavior to authorities. In practice, all banks address this task by use of electronic anti-money laundering (AML) systems. Traditional systems raise alarms based on pre-defined and fixed rules, effectively “if this, then that” statements. These systems exhibit poor performance, with up to 98% of all alarms being false positives.
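The fixed, rule-based "if this, then that" alarms described above can be sketched as follows. The rule names and thresholds are hypothetical, not any bank's actual rules:

```python
# Toy rule-based AML system: each rule is a fixed predicate over a
# transaction, and a transaction triggers an alarm per matching rule.
RULES = [
    ("large_cash_deposit",
     lambda t: t["type"] == "cash" and t["amount"] > 10_000),
    ("rapid_outgoing",
     lambda t: t["type"] == "transfer" and t["amount"] > 50_000),
]

def raise_alarms(transaction):
    """Return the names of all rules the transaction triggers."""
    return [name for name, rule in RULES if rule(transaction)]

tx = {"type": "cash", "amount": 12_000}
alarms = raise_alarms(tx)
```

Because such static thresholds ignore customer context and behaviour over time, they flag many legitimate transactions, which is how false-positive rates as high as those cited above arise.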
The academic literature on statistical and machine learning methods for anti-money laundering (AML) is relatively limited. This is undoubtedly connected to the lack of publicly available data sets.
The Machine Learning for Anti-Money Laundering project aims to develop advanced machine learning models to raise and qualify AML alarms, freeing up investigative resources for actual money laundering cases. The project is funded by Spar Nord Bank, which also makes all its data available for the project. The project focuses on the application of recurrent neural networks and attempts to address three fundamental challenges of machine learning for anti-money laundering: extreme class imbalance, concept drift, and alert-feedback interaction.
Integrating manufacturing robots in industry is currently a complex task, and system integrators spend many hours selecting the appropriate parts and integrating them. The aim of this project is to decrease the integration time of manufacturing robots and lower the barrier between untrained end-users and robotic technology. This will be done using digital twins, enabling simulation of combinations of different robotics parts. An example could be combining the parts needed for a pick-and-place application: a base, a robotic arm, a gripper and sensors. This can then be simulated and displayed to the interested client.
The main outcome will be the ability to easily set up and simulate different robotic manufacturing applications depending on the client's needs. Clients can easily access the simulations through a web platform, allowing them to better understand the potential robotic solutions they can adopt. This will create a state-of-the-art hardware and software one-stop shop for automation, achieving the next big leap in democratising robotic technologies.
People suffering from hearing loss can benefit from the use of hearing aids. To work properly, it is crucial that a hearing aid is fitted in close accordance with the hearing abilities of the individual user. In some cases, hearing can deteriorate relatively quickly, especially with increasing age. It is therefore important to re-fit the hearing aid recurrently.
Traditionally, hearing aid fitting is carried out in the clinic, where different behavioral tests are used to determine hearing thresholds. Alternatively, hearing loss can be characterized based on electrophysiological measures. This is typically based on the auditory steady-state responses (ASSR) recorded from a few electroencephalography (EEG) channels placed on the scalp.
Ear-EEG is a novel EEG recording method in which EEG signals are recorded from electrodes located on an earpiece placed in the ear. Ear-EEG can potentially enable the integration of EEG recording into hearing aids and ASSR-based hearing threshold estimation in daily life.
Traditionally, ASSR-based hearing threshold estimation has been performed using amplitude-modulated continuous signals. Hearing tests based on monotonous stimuli of this kind make the user tired and unmotivated and can be inconvenient, especially when the test must be carried out over a long period of time.
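The core of ASSR detection can be sketched with synthetic data. This is an illustrative toy (made-up sampling rate, response amplitude, and noise), not a clinical method: an amplitude-modulated stimulus elicits a response locked to the modulation frequency, which shows up as a spectral peak at that frequency in the EEG.

```python
import numpy as np

rng = np.random.default_rng(2)

fs = 1000                     # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)  # 10 s of synthetic EEG
f_mod = 40.0                  # modulation frequency of the stimulus (Hz)

# Synthetic recording: background EEG plus a weak steady-state
# response locked to the 40 Hz modulation (illustrative only).
eeg = rng.normal(scale=1.0, size=t.size) + 0.3 * np.sin(2 * np.pi * f_mod * t)

# ASSR detection: look for a spectral peak at the modulation frequency.
spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

bin_40 = np.argmin(np.abs(freqs - f_mod))
# Compare the 40 Hz bin against the mean of neighbouring noise bins.
neighbours = np.r_[spectrum[bin_40 - 10:bin_40 - 2],
                   spectrum[bin_40 + 3:bin_40 + 11]]
snr = float(spectrum[bin_40] / neighbours.mean())
```

When the stimulus level drops below the hearing threshold, the response (and hence the peak) disappears, which is what makes the ASSR usable for threshold estimation.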
Natural sounds such as speech are much more pleasant to listen to; speech-based hearing tests are therefore more appealing to take part in and easier to implement in daily life. A speech-based hearing test could, for instance, be performed while the user listens to an audiobook.
The aim of this PhD project is to investigate the possibilities of using natural sounds, and in particular speech signals, to estimate hearing thresholds based on ear-EEG.
Physicians are typically concerned with sounds originating from the body, which are traditionally listened to through an acoustic stethoscope. However, interpretation of sounds through such instruments is subject to the training of the individual physician. To overcome this challenge and further advance the method, an electronic version of the stethoscope has been invented, allowing for more objective interpretation of such sounds.
Ear-EEG is a technique specifically designed to monitor brain activity during everyday activities. Through an ear-fitted device, physiological signals reflecting the subject’s brain activity are measured.
This project aims to extend the ear-EEG platform with a body-coupled microphone, i.e. an electronic stethoscope in the ear, enabling recording in real-life environments with a discreet and unobtrusive wearable device. Furthermore, we seek to explore methods for joint processing of ear-EEG signals and sounds picked up by the body-coupled microphone. This would allow for a broader understanding of the current state of the body and is highly applicable both in research and in medical devices for health monitoring.
This project is carried out at the newly founded Center for Ear-EEG.
Our society has become increasingly reliant on technical innovation to improve our quality of life. We expect our power grid to reliably supply electricity to our homes and production facilities, and we expect our cars to take us safely to work every morning.
These are examples of Cyber-Physical Systems (CPSs), that is, systems characterised by a strong coupling between a physical process and the software which controls it.
These types of systems are often safety critical as well, for example the failure of the power grid may cause disruption of other critical infrastructure, whereas the failure of a car may cause it to crash.
To keep up with the rising complexity of CPSs, computer-aided design (CAD) software is commonly used to simulate the individual parts of a system. The next step in this evolution is collaborative simulation (co-sim), where all components of a system are simulated at once. Rather than verifying only the individual components, this methodology makes it possible to verify that the whole system works as intended.
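The co-simulation idea can be sketched with two toy models. Everything here is hypothetical (a first-order heating "plant" and a proportional controller with made-up constants): the point is only the master loop, in which separately simulated units exchange values at fixed communication steps so the coupled system can be checked as a whole.

```python
DT = 0.1  # communication step size in seconds (assumed)

class Plant:
    """Toy physical model: first-order temperature dynamics."""
    def __init__(self):
        self.temp = 20.0
    def step(self, heater_power):
        # Simple Euler update: heating minus loss toward 20 degree ambient.
        self.temp += DT * (heater_power - 0.1 * (self.temp - 20.0))
        return self.temp

class Controller:
    """Toy software model: proportional control toward a setpoint."""
    def __init__(self, setpoint):
        self.setpoint = setpoint
    def step(self, measured_temp):
        return max(0.0, 0.5 * (self.setpoint - measured_temp))

plant, ctrl = Plant(), Controller(setpoint=25.0)
power = 0.0
for _ in range(1000):            # master loop: exchange values each step
    temp = plant.step(power)     # physics unit advances one step
    power = ctrl.step(temp)      # software unit reacts to the measurement
```

In real co-simulation each unit is typically a black-box simulator (for example an FMI functional mock-up unit) rather than a Python class, and a master algorithm orchestrates the data exchange, but the loop structure is the same.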
The central challenge in adopting a simulation-based approach to developing systems is the difficulty of creating the models. Today, this is very labour-intensive and requires highly specialised expertise and software. The goal of this project is to develop machine learning-based techniques for modelling CPSs that provide accurate models without requiring knowledge of the internal workings of the system.