Image mosaicing (also called compositing or stitching) is a technique used since the early days of photography to join two or more images that share overlapping regions [1]. Nowadays, image mosaicing is an indispensable part of remote sensing applications because of the extensive coverage such applications require. For example, a single satellite image sometimes does not cover the entire area of interest, or clouds obscure parts of it; images from several dates can then be combined with mosaicing techniques to reconstruct a mosaic of the area of interest. In UAS applications, mosaicing is even more critical than in satellite imaging: UASs cover an area with tens or hundreds of relatively wide-FOV images that are combined into a mosaic of the whole area. The wider FOV means that images of the same place look slightly different depending on the view angle, which makes the mosaicing challenging.

Image mosaicing remains an active research topic, especially in remote sensing applications. The two primary steps of image mosaicing are 1) image alignment and 2) blending [2]. In the alignment step, the common region of the overlapping images is used to register the images on top of each other using either intensity-based methods, e.g., Normalized Cross-Correlation (NCC), which depend on pixel intensities, or feature-based methods, e.g., the Scale-Invariant Feature Transform (SIFT), which rely on distinct or salient features such as edges and points [3]. In intensity-based alignment, a missing or weak calibration process produces images that poorly match each other, resulting in distorted or misaligned mosaics. After the alignment process, the overlapping regions should be blended with minimal artifacts. Discontinuities are often noticeable in the overlapping region, resulting from misalignment errors or photometric differences between images; blending algorithms therefore play an essential role in mitigating such discontinuities. Blending methods can be classified into three groups: transition smoothing (a weighted average of the constituent images), optimal seam finding (detection of the least noticeable boundary), and hybrid blending [2]. However, as discussed throughout this paper, remote sensing applications mostly rely on reflectance data, which are represented as intensity values in images. As a result, blending techniques must be applied with the utmost caution to avoid erroneous modification of the original reflectance data. Nevertheless, most of the software used in remote sensing (Pix4D and Agisoft, for example) uses images taken with large front and side overlaps (at least 85% frontal overlap and at least 70% side overlap for dense vegetation such as orchards [4]) and then, based on mathematical techniques and weighted averaging, creates mosaic files that modify the original data.
As a result, for analyses that require precise reflectance values in different bands, employing software that alters the original reflectance values in the blending process is not recommended.
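To make the intensity-based alignment step concrete, the following is a minimal sketch of translation-only registration with Normalized Cross-Correlation: a template patch from one image is exhaustively searched for in a reference image, and the offset with the highest NCC score is taken as the alignment. The function names and the brute-force search are illustrative assumptions, not the implementation of any software cited above; practical tools use far more efficient (FFT-based) correlation and richer transform models.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches.
    Returns a score in [-1, 1]; 1 means a perfect (linear) intensity match."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_offset(ref, template):
    """Exhaustively slide `template` over `ref` and return the (row, col)
    of the top-left position that maximizes the NCC score."""
    th, tw = template.shape
    best_score, best_rc = -2.0, (0, 0)
    for r in range(ref.shape[0] - th + 1):
        for c in range(ref.shape[1] - tw + 1):
            score = ncc(ref[r:r + th, c:c + tw], template)
            if score > best_score:
                best_score, best_rc = score, (r, c)
    return best_rc
```

Because NCC normalizes out mean intensity and contrast within each patch, it tolerates moderate photometric differences between the overlapping images; as noted above, however, a weak radiometric calibration still degrades the match quality.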
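The transition-smoothing (weighted-average) blending described above can be sketched as a simple linear feather across the overlap of two single-band images. This is an illustrative toy, assuming float reflectance images joined side by side; the function name and layout are assumptions, not the algorithm of any specific software. It also shows exactly why such blending is problematic for reflectance analysis: pixels inside the overlap take values that belong to neither source image.

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Blend two equal-size single-band images that share `overlap` columns.
    The weight of `left` ramps linearly from 1 to 0 across the overlap,
    so seam pixels are weighted averages of the two originals."""
    h, w = left.shape
    out = np.zeros((h, 2 * w - overlap))
    out[:, :w - overlap] = left[:, :w - overlap]   # left-only region
    out[:, w:] = right[:, overlap:]                # right-only region
    alpha = np.linspace(1.0, 0.0, overlap)         # per-column weight of `left`
    out[:, w - overlap:w] = alpha * left[:, w - overlap:] + (1 - alpha) * right[:, :overlap]
    return out
```

For example, blending a uniform 0.2-reflectance image with a uniform 0.8-reflectance image over a three-column overlap yields a middle seam column of 0.5, a value present in neither original band, which is precisely the modification of reflectance data the text cautions against.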


[1] C. Bielski and P. Soille, “Adaptive Mosaicing: Principle and Application to the Mosaicing of Large Image Data Sets,” in Adaptive and Natural Computing Algorithms, Berlin, Heidelberg, 2007, pp. 500–507. doi: 10.1007/978-3-540-71629-7_56.

[2] A. Pandey and U. C. Pati, “Image mosaicing: A deeper insight,” Image and Vision Computing, vol. 89, pp. 236–257, Sep. 2019, doi: 10.1016/j.imavis.2019.07.002.

[3] S. Ait-Aoudia, R. Mahiou, H. Djebli, and E. Guerrout, “Satellite and Aerial Image Mosaicing - A Comparative Insight,” in 2012 16th International Conference on Information Visualisation, Jul. 2012, pp. 652–657. doi: 10.1109/IV.2012.113.

[4] Pix4D, “Step 1. Before Starting a Project > 1. Designing the Image Acquisition Plan > a. Selecting the Image Acquisition Plan Type,” Support, 2019. (accessed Dec. 07, 2020).