CIFRE PhD Thesis: Multi-sensor Fusion on Embedded Devices
FR61 NXP Semiconductors France SAS
The position in detail
Environment
This PhD is a collaboration between Inria’s ACENTAURI research team and NXP Semiconductors’ Vision Technology Engineering Center (VTEC).
ACENTAURI focuses on intelligent, autonomous, and mobile robotics, with expertise spanning perception, decision‑making, and multi‑robot collaboration. The team develops hybrid AI approaches that combine model‑based and data‑driven methods, validated on real robotic platforms such as autonomous cars, AGVs, and drones. Their work targets smart territories, smart cities, and smart factories, emphasizing robust multi‑sensor cooperation and strong industrial transfer.
NXP Semiconductors designs the processors that power next‑generation embedded intelligent systems, ensuring they are safe, secure, fast, and reliable. Future autonomous vehicles, robots, drones, and mobile devices will rely on NXP neural processing units (such as the eIQ Neutron NPU) to achieve high‑performance inference. The VTEC team in Sophia Antipolis develops the software ecosystem that enables efficient vision pipelines on NXP hardware. To push technological limits, NXP designs optimized AI architectures tailored to customer needs and NXP processors.
This PhD sits at the intersection of advanced robotics, multi‑sensor perception, and efficient AI architectures, contributing jointly to scientific research and industrial innovation.
Motivation and Objectives
The goal of this PhD is to design and evaluate robust multi-sensor perception systems capable of understanding their environment using heterogeneous sensors such as cameras, radars, LiDARs, and UWB devices.
Target tasks include object detection, mapping, and activity monitoring.
The long-term objective is to build an adaptive fusion framework that dynamically selects and weights each sensor modality depending on environmental conditions (e.g., indoors/outdoors, partial occlusions, degraded visibility due to fog or rain).
The PhD will begin with an evaluation of current Radar + Vision systems developed at NXP, benchmarking them against LiDAR-based perception systems using state-of-the-art fusion techniques.
The project will then extend the framework to additional sensor types. A key research axis is the integration of uncertainty estimation throughout the fusion pipeline, and the assessment of how uncertainty propagates and affects downstream tasks such as object detection.
Finally, the candidate will collaborate closely with NXP teams to deploy and validate the developed framework on real-world use cases in robotics or smart-home contexts.
Throughout the project, the candidate will pay close attention to embedded constraints such as latency, energy consumption, memory footprint, and quantization, thus paving the way for industrialization and integration into NXP’s customer solutions.
The candidate is expected to publish in top-tier conferences and journals in robotics and computer vision, such as ICRA, IROS, CVPR, or ICCV, with at least one major publication targeted per year.
Skills
The ideal candidate holds a strong academic background in Computer Science, Robotics, Artificial Intelligence, or a related field, with solid foundations in mathematics and algorithmic thinking.
Required skills:
- Proficiency in software development, particularly with languages and tools such as Python, C/C++, Git, and Linux,
- Experience with at least one deep learning framework (e.g., PyTorch, TensorFlow),
- Understanding of sensor technologies and perception algorithms,
- Strong written and spoken English is essential, as the research will be conducted in a collaborative academic–industrial environment.
Nice-to-have:
- Familiarity with embedded systems and hardware-aware programming,
- Experience with multi-sensor perception or uncertainty estimation.
Scientific curiosity, autonomy, and the ability to work in a multidisciplinary team are key qualities.