Raw Feature Extraction
Once an effective mask has been created, it can be used to segment all the stacked data (layers from different bands or sensors). As a side note, before applying the mask to the data layers, it is preferable to resample all layers to the finest pixel size so they match each other. What remains after segmentation is a set of pixels from one or more bands, depending on the available sensors: thermal data have only a single band, for example, while hyperspectral data can have hundreds. Most of the published literature uses the average of the pixels in each band as a representative value for the crop in that band. However, some researchers have reported higher correlations when using quartiles (25th and 75th percentiles) instead of simple averaging, particularly with thermal data [1]. Moreover, reducing a whole canopy to a single number discards much valuable information, especially when machine learning and artificial intelligence techniques are used. Consequently, data augmentation at the sub-canopy level [2] or even the pixel level [3] can help capture as much information as possible.
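As a minimal sketch of the masking and quartile steps above, the snippet below applies a boolean canopy mask to a stack of co-registered bands and compares the per-band mean with the 25th/75th percentiles. The array shapes, band count, and random values are illustrative assumptions, not data from the cited studies:

```python
import numpy as np

# Hypothetical example: a stack of 3 bands (e.g., red, NIR, thermal), each
# already resampled to the same 100x100 grid, plus a boolean canopy mask.
rng = np.random.default_rng(0)
stack = rng.random((3, 100, 100))   # band-first layout: (bands, rows, cols)
mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 40:60] = True           # pretend this block is the segmented canopy

# Apply the mask to every band at once; result shape is (bands, n_canopy_pixels).
canopy_pixels = stack[:, mask]

# Simple per-band mean, alongside the 25th/75th percentiles that some
# studies found to correlate better than the mean for thermal data [1].
band_means = canopy_pixels.mean(axis=1)
q25, q75 = np.percentile(canopy_pixels, [25, 75], axis=1)
print(band_means.shape, q25.shape)  # one value per band
```

Keeping the full `canopy_pixels` array rather than a single summary value is also the natural starting point for the pixel-level augmentation mentioned above.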
The features extracted from the segmented images are referred to as raw features; they include, but are not limited to, the minimum, maximum, average, median, standard deviation, and other histogram features. These values describe the variation of a specific band within the region of interest. Most of the time, the raw features exhibit a strong correlation with the response variable. Sometimes, however, the raw features alone cannot explain all the variability in the response, raising the need for the engineered features explained in the feature engineering part. Usually, all the steps above are performed on an open-source platform such as Python or R, depending on the intended project's requirements.
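The raw features listed above can be computed with a short helper such as the one below. The function name, bin count, and synthetic input are assumptions for illustration; only the statistics themselves follow the text:

```python
import numpy as np

def raw_features(pixels, n_bins=10):
    """Summarize one band's segmented pixels into raw statistical features.

    `pixels` is a 1-D array of the values (e.g., reflectance or temperature)
    inside the region of interest; feature names follow the text above.
    """
    hist, _ = np.histogram(pixels, bins=n_bins)
    return {
        "min": float(pixels.min()),
        "max": float(pixels.max()),
        "mean": float(pixels.mean()),
        "median": float(np.median(pixels)),
        "std": float(pixels.std()),
        # Normalized histogram counts serve as additional distribution features.
        **{f"hist_{i}": c / pixels.size for i, c in enumerate(hist)},
    }

# Illustrative use on 500 synthetic canopy-pixel values.
rng = np.random.default_rng(1)
feats = raw_features(rng.random(500))
print(sorted(feats))
```

Computing this dictionary once per band of the stacked data yields a feature table ready for the feature engineering step discussed next.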
References
[1] T. Zhao, "Remote Sensing of Water Stress in Almond Trees Using Unmanned Aerial Vehicles," p. 131, 2018.
[2] T. Zhao, B. Stark, Y. Q. Chen, A. L. Ray, and D. Doll, “Challenges in Water Stress Quantification Using Small Unmanned Aerial System (sUAS): Lessons from a Growing Season of Almond,” Journal of Intelligent and Robotic Systems: Theory and Applications, vol. 88, no. 2–4, pp. 721–735, Dec. 2017, doi: 10.1007/s10846-017-0513-x.
[3] A. Moghimi, A. Pourreza, G. Zuniga-Ramirez, L. E. Williams, and M. W. Fidelibus, “A Novel Machine Learning Approach to Estimate Grapevine Leaf Nitrogen Concentration Using Aerial Multispectral Imagery,” Remote Sensing, vol. 12, no. 21, p. 3515, 2020.