Synergizing Proximal Remote Sensing Modalities for Enhanced Prediction of Key Agronomic Crop Traits
  • Erin Farmer,
  • Peter Michael,
  • Ruyu Yan,
  • Nick Lepak,
  • Cinta Romay,
  • Edward S. Buckler,
  • Noah Snavely,
  • Ying Sun,
  • Kelly Robbins,
  • Joseph L. Gage,
  • Abe Davis,
  • Michael A. Gore
Corresponding author: Erin Farmer (eef52@cornell.edu)

Abstract

Recent progress in proximal remote sensing has elevated both the spatial and temporal resolution of data acquisition, expanding the accessibility of these technologies for digital agriculture applications. These advanced sensors enable the collection of extensive and novel datasets that are instrumental in accurately characterizing phenotypes and parameterizing crop growth models. Despite the distinctive structural, spatial, and spectral information embedded in these data streams, they have predominantly been utilized in isolation. This research therefore aims to integrate these disparate data sources to improve estimation of agronomically important crop traits, such as yield. Deep learning methods, such as autoencoders, will be used to extract latent phenotypes that characterize manually measured traits. We focus on multispectral images (MSIs) collected by unoccupied aerial vehicles and lidar scans collected by unoccupied ground vehicles. MSIs capture canopy-level spectral information in the red, green, blue, red edge, and near-infrared bands. Lidar scans are converted to point clouds to reconstruct the three-dimensional sub-canopy architecture of maize plants. Data were collected on maize hybrids from 2018 to 2022 in Aurora, NY, as part of the Genomes to Fields project. Autoencoder training on MSIs shows that latent phenotypes are effective image representations, containing relevant and sufficient information to generate image reconstructions. The latent codes are also predictive of the image date and of normalized difference vegetation index (NDVI) values. Latent phenotypes were likewise extracted from the lidar point clouds, and the prediction accuracies of models using these measurements separately and jointly will be compared.
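The latent-phenotype idea can be illustrated with a linear autoencoder, whose squared-error optimum has a closed form via the singular value decomposition (equivalent to PCA). This is a deliberate simplification of the deep autoencoders described in the abstract, and all data below are synthetic, not from the Genomes to Fields project:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "images": 100 samples of 16-dimensional pixel vectors that
# actually lie near a 3-dimensional subspace, plus a little noise.
latent_true = rng.normal(size=(100, 3))
mixing = rng.normal(size=(3, 16))
X = latent_true @ mixing + 0.01 * rng.normal(size=(100, 16))

# Linear autoencoder: encode with the top-k right singular vectors of the
# centered data; decode with their transpose.
k = 3
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
encode = lambda x: (x - mean) @ Vt[:k].T   # latent "phenotypes"
decode = lambda z: z @ Vt[:k] + mean       # reconstruction

Z = encode(X)                              # 100 x 3 latent codes
recon_err = np.mean((X - decode(Z)) ** 2)
print(Z.shape, recon_err)                  # small error: 3 latents suffice here
```

In the same spirit, the extracted latent codes `Z` could be used as predictors of manually measured traits; the deep, nonlinear autoencoders in the study play the role of `encode`/`decode` here.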
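Since the latent codes are reported to predict NDVI, it may help to recall how NDVI is computed from two of the multispectral bands the abstract lists. A minimal sketch, with illustrative array values rather than real sensor data:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red).

    `eps` guards against division by zero over dark, non-reflective pixels.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 reflectance rasters (illustrative values only).
nir_band = np.array([[0.8, 0.6], [0.5, 0.2]])
red_band = np.array([[0.1, 0.2], [0.3, 0.2]])
print(ndvi(nir_band, red_band))  # values near +1 indicate dense green canopy
```

Healthy vegetation reflects strongly in the near-infrared band and absorbs red light, so NDVI rises toward 1 over dense canopy and falls toward 0 over bare soil.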
31 Oct 2023: Submitted to NAPPN 2024
31 Oct 2023: Published in NAPPN 2024