Aerial imagery and its interpretation

What is aerial imagery and how can it be interpreted with AI?

Airborne systems equipped with sensing devices can acquire digital observations of our planet at low cost, over wide and sometimes hard-to-access areas (e.g. mountainous environments). Technological advances have led to lighter, smaller systems providing finer observations. In this project, the aerial imagery is acquired by UAV and consists of color images, multispectral images (which also cover invisible parts of the spectrum, such as infrared wavelengths), and 3D point clouds.
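
To make these modalities concrete, here is a minimal sketch of how such UAV products can be loaded in Python, assuming the rasterio and laspy libraries are available; the file names and band indices are hypothetical placeholders, not the project's actual data layout.

```python
# A minimal sketch of loading the data modalities described above.
# File names, band order, and band indices are hypothetical.
import numpy as np
import rasterio   # (multi)spectral GeoTIFF rasters
import laspy      # LiDAR point clouds in LAS/LAZ format

# Multispectral orthomosaic: bands might include red, green, blue, red-edge, NIR.
with rasterio.open("ortho_multispectral.tif") as src:
    bands = src.read()                  # shape: (n_bands, height, width)
    print(src.count, src.res, src.crs)  # band count, ground resolution, CRS

# NDVI from hypothetical band positions (red = band 3, NIR = band 5).
red, nir = bands[2].astype(float), bands[4].astype(float)
ndvi = (nir - red) / (nir + red + 1e-8)

# 3D point cloud from LiDAR or photogrammetry.
las = laspy.read("survey.laz")
points = np.vstack([las.x, las.y, las.z]).T   # shape: (n_points, 3)
print(points.shape, points[:, 2].min(), points[:, 2].max())
```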

The increasing availability of massive amounts of remotely sensed data also comes at a cost: the need for automated methods to analyze such data and to extract meaningful knowledge from it. This need has recently been met largely by Artificial Intelligence (AI) techniques, which have proven successful in a wide range of fields. When applied to digital images, such techniques can help, for instance, to recognize a scene and the objects it contains. Still, their application to ecological data remains limited, possibly due to the numerous scientific challenges it raises. In this project, AI techniques will help to automatically assess the spatial distribution of plants.
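
As a simple illustration of such scene recognition (not the project's vegetation mapping pipeline), the sketch below applies a pretrained image classifier from torchvision to a single photograph; the input file name is a hypothetical placeholder.

```python
# Minimal sketch: recognizing the content of an image with a pretrained CNN.
# Illustrative only; it predicts generic ImageNet classes, not plant species.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

preprocess = weights.transforms()                # matching input preprocessing
image = Image.open("scene.jpg").convert("RGB")   # hypothetical input image
batch = preprocess(image).unsqueeze(0)           # shape: (1, 3, 224, 224)

with torch.no_grad():
    logits = model(batch)
top5 = logits.softmax(dim=1).topk(5)
print(top5.indices, top5.values)                 # most likely classes
```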

Objectives of this research axis:

The objective is to combine airborne data and in-situ species characterization in a deep learning framework to enable species mapping. We assume that feeding such multiple sources into a multi-task deep neural network should make it possible to derive high-resolution information beyond the standard land cover maps produced by semantic segmentation networks (which usually fail to extract precise object contours). This will enable the analysis of emerging patterns of multiple plant interactions at the community scale, and illustrate the potential of AI in ecology.
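
As a rough illustration of the multi-task idea (not the project's actual architecture), the sketch below shares a convolutional encoder between a per-pixel species segmentation head and a plant-trait regression head; all layer sizes, class counts, and band counts are made up.

```python
# A minimal multi-task sketch: shared encoder, two per-pixel output heads.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_channels=5, n_species=8, n_traits=3):
        super().__init__()
        # Shared encoder over the multispectral input.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Task 1: per-pixel species classification (semantic segmentation).
        self.seg_head = nn.Conv2d(64, n_species, 1)
        # Task 2: per-pixel biophysical trait regression.
        self.trait_head = nn.Conv2d(64, n_traits, 1)

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.trait_head(feats)

net = MultiTaskNet()
x = torch.randn(1, 5, 128, 128)        # fake 5-band multispectral tile
seg_logits, traits = net(x)
print(seg_logits.shape, traits.shape)  # (1, 8, 128, 128) and (1, 3, 128, 128)
```

In training, the two heads would typically be optimized jointly, e.g. with a weighted sum of a cross-entropy segmentation loss and a regression loss on the traits, so that the shared encoder benefits from both supervision signals.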

To reach this objective, we will carry out the following tasks:

- acquiring the multispectral data;
- designing a deep network able to perform semantic segmentation at ultra-high spatial resolution;
- learning to unmix or disentangle multispectral images in order to distinguish species within mixtures (see the unmixing sketch after this list);
- coupling optical and LiDAR information in a multi-modal, multi-task deep architecture;
- coupling deep learning with physical models to estimate biophysical parameters (i.e. plant traits).
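
To illustrate the unmixing task, here is a minimal sketch of linear spectral unmixing under the standard linear mixing model, solved with non-negative least squares; the endmember spectra are random placeholders rather than measured species signatures.

```python
# Linear spectral unmixing sketch: a pixel is modeled as a non-negative
# combination of known species endmember spectra.
import numpy as np
from scipy.optimize import nnls

n_bands, n_species = 5, 3
endmembers = np.random.rand(n_bands, n_species)   # columns: species spectra

# A fake pixel that is a 60/30/10 mixture of the three species.
true_abundance = np.array([0.6, 0.3, 0.1])
pixel = endmembers @ true_abundance

# Non-negative least squares recovers abundances under the linear model.
abundance, residual = nnls(endmembers, pixel)
abundance /= abundance.sum()                      # enforce sum-to-one
print(abundance)                                  # close to [0.6, 0.3, 0.1]
```

In practice, the project aims to learn such a disentanglement with a deep network rather than assume fixed, known endmembers; the sketch only conveys the underlying mixing model.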

Main researchers:

Sébastien Lefèvre, IRISA lab / OBELIX team, Full Professor at the University of South Brittany (UBS).

Thomas Corpetti, LETG lab / OBELIX team, Senior Scientist at CNRS.

Thomas Dewez, Scientist at BRGM.

Benjamin Pradel and Marie Deboisvilliers, Engineers at L'Avion Jaune.

Florent Guiotte, IRISA lab, Postdoc, developing novel AI methods for ultra-high resolution imagery, in collaboration with the company L'Avion Jaune.

Hoàng-Ân Lê, IRISA lab, Postdoc, developing novel AI methods for processing 3D point clouds.

Achievements:

LiDAR, photogrammetric, and multispectral airborne data acquisitions were performed in 2020. The corresponding data will enable monitoring of vegetation growth and the development of deep neural networks for automatic vegetation identification.

A deep neural network that learns digital elevation models (DEMs) from 3D point clouds has been implemented (Hoàng-Ân Lê's postdoc).
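
As a hedged sketch of the general idea (assumed here, not the implemented network), one can rasterize the point cloud into per-cell height statistics and let a small fully-convolutional network regress a ground elevation per cell; the grid size, statistics, and layer sizes below are illustrative.

```python
# Sketch: point cloud -> per-cell height statistics -> CNN -> DEM grid.
import numpy as np
import torch
import torch.nn as nn

def rasterize(points, cell=1.0, size=64):
    """Bin (x, y, z) points into per-cell min and max height channels."""
    zmin = np.full((size, size), np.inf, dtype=np.float32)
    zmax = np.full((size, size), -np.inf, dtype=np.float32)
    ix = np.clip((points[:, 0] / cell).astype(int), 0, size - 1)
    iy = np.clip((points[:, 1] / cell).astype(int), 0, size - 1)
    np.minimum.at(zmin, (iy, ix), points[:, 2])
    np.maximum.at(zmax, (iy, ix), points[:, 2])
    empty = np.isinf(zmin)
    zmin[empty], zmax[empty] = 0.0, 0.0    # fill cells with no points
    return np.stack([zmin, zmax])          # shape: (2, size, size)

# Tiny fully-convolutional regressor: height statistics -> ground elevation.
net = nn.Sequential(
    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 1),                   # one elevation value per cell
)

# Fake point cloud over a 64 m x 64 m tile with heights up to 5 m.
points = (np.random.rand(10000, 3) * [64.0, 64.0, 5.0]).astype(np.float32)
x = torch.from_numpy(rasterize(points)).unsqueeze(0)   # (1, 2, 64, 64)
dem = net(x)                                           # predicted DEM grid
print(dem.shape)   # torch.Size([1, 1, 64, 64])
```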
