This project aims to develop novel techniques for distributed AI learning and inference in resource-constrained environments and across the edge-cloud computing continuum. Given the memory, processing, and energy limitations of IoT devices and embedded systems, we will design methods to intelligently partition AI tasks across multiple devices. The goal is to improve the inference and learning speed, scalability, and energy efficiency of AIoT systems while maintaining accuracy.
We will optimize resource usage while meeting accuracy and latency requirements, enabling AI models to operate efficiently even in dynamic computation and communication environments. The developed methods will be applied to real-life industrial use cases. The outcomes of this project will make advanced AI more feasible for real-world applications, especially on low-power devices in distributed environments.