Supervised Object- and Pixel-based Classification of Las Vegas Landsat Imagery

Introduction

Land cover classification is crucial for urban areas like Las Vegas, NV, contributing to urban planning, environmental monitoring, and resource management. In this article preview, we explore the use of supervised object-based and pixel-based classification methods for accurate land cover mapping in Las Vegas.

By incorporating both object-based and pixel-based approaches, we aim to improve land cover classification accuracy in the study area. Object-based classification considers the spectral information of remote sensing images as well as the spatial and contextual information of image objects, which can lead to more accurate classification results. Pixel-based classification, on the other hand, focuses solely on the spectral characteristics of individual pixels. This study examines the key differences between object- and pixel-based classification methods, highlighting the benefits and limitations of each approach.

Purpose

This study classified land cover in Las Vegas, NV using Landsat satellite imagery. Through the application of supervised classification techniques, different land cover types were identified in the area. Both object-based and pixel-based supervised classifications were performed, and the accuracy of each classification result was assessed by comparing it with ground truth data. The resulting classified land cover maps have significant implications for urban planning and environmental monitoring in Las Vegas, NV.

Methods

The final parameter of image segmentation is the minimum segment size (the full segmentation parameter space is discussed in the walk-through below). The minimum segment size parameter “sets the threshold for minimum segment size” in pixels (Pitcher, 2022a, 7:59). “Segments smaller than this size are merged with their best fitting neighbor segment” (ArcGIS Pro, 2022a, para. 10). Thus, the minimum segment size parameter significantly impacts both the smoothing and the level of differentiation between features in an object-based supervised classification. It is imperative to note that this measurement is in pixels. Therefore, the best way to assess an appropriate value for the classification of the source image is either to note the pixel size of your image and use the measure tool to determine how many pixels make up the smallest feature you wish to classify correctly and distinctly from surrounding features, or to zoom in to that feature and count the number of pixels it contains. For my image, I zoomed in to the smallest patch of irrigated lawn in the urban region of Las Vegas, measured it, and counted the pixels comprising it to cross-reference with the measurement. The smallest unit of irrigated lawn that I wished to classify was 2 pixels long, or 60 meters (the Landsat 8 image pixel size is 30 x 30 meters). This indicated that my minimum segment size parameter could not be higher than 2, or the small irrigated lawn features would be merged with a neighboring segment. I chose not to use 1 pixel as the minimum segment size in order to prevent the speckling of single-pixel segments, and I determined that the merging of these 1-pixel features was acceptable given my desired level of classification detail and the accuracy of the resulting classification in comparison with the source imagery.
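
As a rough illustration of this sizing logic (not part of the original workflow), the arithmetic below converts the measured ground length of the smallest feature of interest into the pixel count that bounds the minimum segment size parameter, assuming the 30-meter Landsat 8 pixel size noted above.

PIXEL_SIZE_M = 30          # Landsat 8 multispectral pixel size, in meters
smallest_feature_m = 60    # measured length of the smallest irrigated lawn patch

# Number of pixels the smallest feature spans along its measured dimension.
pixels_spanned = smallest_feature_m / PIXEL_SIZE_M   # -> 2.0

# The minimum segment size (in pixels) should not exceed this count, or the
# feature will be merged with its best-fitting neighbor; a value of 1 was
# avoided here to suppress single-pixel speckling.
min_segment_size = int(pixels_spanned)               # -> 2
print(pixels_spanned, min_segment_size)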

Development of Classification Schema. The classification schema was selected using visual inspection of the true color Landsat 8 image, Google Earth, and the 5 (near-infrared), 2 (blue), and 3 (green) band combination of Landsat 8. Bands 5, 2, and 3 were selected because this combination accentuates vegetation, minerals, and brightness, catering to a classification schema focused on differentiating soil, vegetation, water, and developed land cover (Okin, 2022a). I examined aspects of the region’s vegetation and soil coverage that would potentially benefit from classification for analysis.

I determined that the soil/bare ground coverage of the area could be split into three primary categories: (i) Barren/Soil (Sparse Vegetation), defined as not evidencing a prominent vegetation response in the true color or 5, 2, 3 band combination image nor a particular dominant mineral composition; (ii) Iron-based Soil, defined as evidencing a strong iron hue in the true color image and a red-violet hue in the 5, 2, 3 band combination image; and (iii) Salt or Silicate-based Soil, defined as evidencing a strong, bright tan/white hue in both the true color image and the 5, 2, 3 band combination image. Barren/Soil (Sparse Vegetation) appears greyish in the true color and 5, 2, 3 band combination images due to the soil’s relatively low reflectance and high absorption in the blue, green, red, and NIR bands (Hively et al., 2021; WorldOfTech, 2021). Iron-based Soil appears an iron-red-brownish color in the true color image due to the mineral’s higher reflectance in the red wavelength in comparison with its relatively low reflectance in the blue and green wavelengths (Sahwan et al., 2021). The Iron-based Soil appears as a red-violet hue in the 5, 2, 3 band combination because the mineral reflects more strongly in the NIR wavelength than in the blue or green wavelengths. The Salt/Silicate Soil reflects so highly in both band combinations due to the small grain size and minerals in the soil, which increase reflectance, as well as the dryness of the soil, which also increases spectral reflectance (Jensen, 2016; Okin, 2022b).

I used similar evaluation methods to generate the vegetation coverage types most relevant to the region. The prominent and meaningful vegetation types for a desert ecosystem that has experienced urbanization were split into two categories: (i) Irrigated Lawn, defined as evidencing a strong spectral response with a bright red hue in the 5, 2, 3 band combination image and a bright green hue in the true color image; and (ii) Chaparral Grassland/Desert Shrub (Dense Vegetation), defined as evidencing a strong spectral response with a deep, dark red hue in the 5, 2, 3 band combination image and a light to deep green hue in the true color image. These characteristics of the classes are due to chlorophyll’s strong absorption in the red and blue wavelengths and reflectance in the green and infrared wavelengths (Jensen, 2016; Okin, 2022b).

The Water class was relevant due to Lake Mead’s presence in the image and the potential analyses that could be conducted with a water classification. The water features were identified via visual inspection of both the true color and the 5, 2, 3 band combination images. Water features appeared particularly well defined in the 5, 2, 3 band combination because water strongly absorbs the longer red and NIR wavelengths and reflects the blue and green wavelengths (NASA, n.d.; Jensen, 2016; Okin, 2022b).

The Urban/Man-made/Developed class was created due to the feature type’s prominent coverage of the study region, as well as its susceptibility to being incorrectly classified as Salt or Silicate-based Soil. Urban features appear as varying shades of grey and white in the true color image and as varying shades of white, green, and grey in the 5, 2, 3 band combination image, due to the low surface reflectance of many building materials in the blue, green, red, and NIR wavelengths, with the exception of roofs, which reflect comparatively strongly across the visible wavelengths (Mondejar & Tongco, 2019; Jensen, 2016; Okin, 2022b; Zhao & Zhu, 2022).

Data. The study relied upon Landsat 8 imagery obtained from the USGS GloVis platform and NAIP imagery obtained from ArcGIS Services via ArcGIS Pro. Landsat 8 imagery was used for the classification of the Las Vegas, NV study area, while NAIP imagery was used for accuracy assessment of the classification outputs.

Classification. ArcGIS Pro geoprocessing tools were used for the supervised object-based and pixel-based classifications. The seven land cover classes specified for this study were water, urban/man-made/developed, irrigated lawn, chaparral grassland/desert shrub (dense vegetation), barren/soil (sparse vegetation), iron-based soil, and salt or silicate-based soil.

Walk-through of Parameter Space Segmentation for Object-based Classification. Segmentation is utilized by object-based classification and “groups adjoining pixels together based on similar spectral and geometric characteristics” according to three user-defined parameters: spectral detail, spatial detail, and minimum segment size (Pitcher, 2022a, 6:29). Segmentation “takes into account both color and shape characteristics when grouping pixels into objects” and allows the user to “vary the amount of detail that characterizes a feature of interest” (ArcGIS Pro, 2022b, para. 5; ArcGIS Pro, 2022c, para. 7). It is important to remember that a “segmented raster dataset is different from a pixel image, in that each segment (sometimes referred to as a super pixel) is represented by one set of values” (ArcGIS Pro, 2022b, para. 9). This is achieved using the “Mean Shift approach” which

uses a moving window that calculates an average pixel value to determine which pixels should be included in each segment. As the window moves over the image, it iteratively recomputes the value to make sure that each segment is suitable. The result is a grouping of image pixels into a segment characterized by an average color (ArcGIS Pro, 2022b, para. 4).

The spectral detail parameter “Set[s] the level of importance given to the spectral differences of features in your imagery,” and the user can select a value between “1.0” and “20.0” (ArcGIS Pro, 2022a, para. 5-6). The higher the value the user selects, the greater the level of detail and the more strongly even minimally spectrally different features are separated into distinct segments. For example, when I set the spectral detail value to 8 and examined the urban region of the Landsat 8 image, I noticed that over the airport the runway and the bare soil around it had been smoothed into one segment, which “will affect the size and homogeneity of a segment” (ArcGIS Pro, 2022b, para. 11). Because one of my goals was to differentiate soil and urban land cover in the image, this smoothing and combining of the two features would cause confusion in the Image Classification Wizard, as both spectral values would be assigned as training data for either the soil or the urban land cover class. If I chose to classify this smoothed, conjoined segment as urban, the training data might produce misclassification of both land cover types in the final object-based classification output (in comparison with the source image), since the two features have been generalized into a single, homogeneous segment. One potential issue with a high spectral detail parameter value is that a highly segmented image can make training data selection more cumbersome when trying to capture the full spectral range of a class represented in the image. Therefore, it is important for the user to examine the source image for the spectral range of their desired classes and the proximity of spectral values between two different classes. After examining these characteristics of the Landsat 8 image and generating a few segmented images using various spectral detail values, I selected a spectral detail value of 20. Values lower than 20 evidenced issues like the example mentioned above, in which the airport feature’s spectral value became averaged with the soil spectral value beside it. This occurred because a spectral detail parameter value below 20 indicated, for this specific image, that it was not of high enough importance to separate the similar spectral values exhibited by the airport and soil regions; thus, they were grouped and assigned an average value (ArcGIS Pro, 2022a; ArcGIS Pro, 2022b).

The next parameter to be defined is the spatial detail parameter, which defines how important the “proximity between features in your imagery” is (ArcGIS Pro, 2022a, para. 7; Pitcher, 2022a). The potential range of input values for spatial detail is between 1 and 20. The higher values of this range are most appropriate for imagery where the features the user desires to classify distinctly are “small and clustered together,” while lower values contribute to the smoothing effect and generalization of features in close proximity to one another, creating “spatially smoother outputs” (ArcGIS Pro, 2022a, para. 8-9; Pitcher, 2022a). For example, when I used a lower spatial detail value of 10, the soil class distinctions I wished to make were not achievable, as the spatially close soil types were often grouped into a single segment with an average value. This would have impeded not only the ability to classify the various soil compositions separately but also would have resulted in lower accuracy of class assignment when compared with the NAIP imagery during accuracy assessment. Smoother outputs also contribute to “longer processing times” (ArcGIS Pro, 2022a, para. 9). Selecting too high a spatial detail parameter value can result in too much distinction between features and can make the training data selection process harder, as the user seeks to sample the full spectral range of a class represented in the image. As a result, it is important for the user to evaluate the features present in the image, with their desired classes in mind, to establish whether those features are small and spatially clustered together or whether they are larger, occur in more distinct zones of the source image, and would benefit from a higher level of smoothing in the segmented image. Similarly, the user should consider the impact the spatial detail value can have on accuracy in comparison to the source image. With the functionality of the spatial detail parameter in mind, after evaluating my image and desired classes and experimenting with differing values, I selected a spatial detail parameter value of 15. In my case, this value struck a balance between defining features in close proximity to one another and reducing smoothing, while avoiding the over-segmentation of features in the source image. This allowed me to distinguish between soil composition, urban, water, and irrigated vegetation features despite their proximity to one another.
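
For readers who prefer to script the segmentation rather than run it through the Image Classification Wizard, the sketch below shows how the three parameter values selected above (spectral detail 20, spatial detail 15, minimum segment size 2) could be passed to the Segment Mean Shift tool with arcpy. The input and output paths are hypothetical, the call follows the documented positional order of the tool’s parameters, and this is an assumption-laden sketch rather than the exact procedure used in this study.

# Hedged sketch: the study ran segmentation interactively in the Image
# Classification Wizard; this shows the equivalent geoprocessing call.
import arcpy
from arcpy.sa import SegmentMeanShift

arcpy.CheckOutExtension("Spatial")

landsat8 = r"C:\data\las_vegas_landsat8.tif"        # hypothetical input path

# Positional arguments: spectral detail, spatial detail, minimum segment size.
segmented = SegmentMeanShift(landsat8, 20, 15, 2)

segmented.save(r"C:\data\las_vegas_segmented.tif")  # hypothetical output path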

Summarized Classification Schema

Class | Subclass? If yes, of what? | Value
Water | No | 1
Urban/Man-made/Developed | No | 2
Irrigated Lawn | No | 3
Chaparral Grassland/Desert Shrub (Dense Vegetation) | No | 4
Barren/Soil (Sparse Vegetation) | No | 5
Iron-based Soil | Yes, subclass of Barren/Soil | 6
Salt or Silicate-based Soil | Yes, subclass of Barren/Soil | 7

Training Samples Generated

Class | # Training Samples
Water | 53
Urban/Man-made/Developed | 31
Irrigated Lawn | 65
Chaparral Grassland/Desert Shrub (Dense Vegetation) | 48
Barren/Soil (Sparse Vegetation) | 59
Iron-based Soil | 62
Salt or Silicate-based Soil | 63

Classification Training Data Over Segmented Landsat 8 Image of Las Vegas

Results

Object-based Supervised Classification is “performed on localized neighborhoods of pixels” using segmentation and training samples to classify the resulting segmented image (ArcGIS Pro, 2022c, para. 7). ArcGIS Pro notes that Object-based Supervised Classification generates “cleaner classification results” because segmentation creates objects that “more closely resemble real-world features” (ArcGIS Pro, 2022c, para. 7). Conversely, Pixel-based Supervised Classification is “performed on a per-pixel basis, where the spectral characteristics of the individual pixel determines the class to which it is assigned” without taking into account neighboring values (ArcGIS Pro, 2022c, para. 6). ArcGIS Pro notes that Pixel-based Supervised Classification is more susceptible to producing less homogeneous results.

Let’s evaluate my results from both the Object-based and Pixel-based Supervised Classifications to determine which was more accurate for this assignment. The accuracy of the two Supervised Classification results was analyzed using Accuracy Assessment Points. The Accuracy Assessment Points tool generates “randomly sampled points” throughout a classified image based on user input parameters, allowing the user to then manually evaluate and compare the classification results against a reference image of their choosing (ArcGIS Pro, 2022d, para. 1). For the assessment of a classified image, the user defines the Target Field as Classified and then selects the desired minimum number of randomly generated points. I selected a value of 50, and the Accuracy Assessment Points tool assigned 91 points for the Object-based classification and 88 points for the Pixel-based classification. The next parameter to determine is the Sampling Strategy; the options are stratified random, equalized stratified random, and random. I selected stratified random, as this strategy assigns a number of points to each class based on the amount of the classified image that the class covers. After the tool ran and generated the points, I used the resulting table to visually assess the ground truth of each point’s location using NAIP imagery.
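
To make the stratified random strategy concrete, the short sketch below allocates a minimum of 50 points across the seven classes in proportion to hypothetical area shares. It mimics the allocation idea only and is not the ArcGIS Pro implementation, which also places each point at a random location within its class.

# Illustrative allocation of accuracy assessment points under a stratified
# random strategy: classes covering more of the classified image receive more
# points. The area shares below are hypothetical.
class_share = {
    "Water": 0.10,
    "Urban/Man-made/Developed": 0.12,
    "Irrigated Lawn": 0.08,
    "Chaparral Grassland/Desert Shrub": 0.10,
    "Barren/Soil": 0.45,
    "Iron-based Soil": 0.08,
    "Salt or Silicate-based Soil": 0.07,
}
min_points = 50  # user-specified minimum number of points

allocation = {cls: max(1, round(share * min_points))
              for cls, share in class_share.items()}
print(allocation)                # Barren/Soil receives the most points
print(sum(allocation.values()))  # total may differ slightly from 50 after rounding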

After completing the evaluation of all Accuracy Assessment Points for both the Object-based and Pixel-based Supervised Classifications, a confusion matrix was generated from the table used for accuracy assessment (ArcGIS Pro, 2022e). The Compute Confusion Matrix tool “calculates the user's accuracy and producer's accuracy [0-1; 1 as 100% accuracy] for each class as well as an overall kappa index of agreement” (ArcGIS Pro, 2022e, para. 3). The confusion matrix tells the user “what classes are confused with other classes” (Okin, 2022c). U_Accuracy (user’s accuracy) represents, for each class, the fraction of points classified as that class that truly belong to it; its complement is the error of commission. P_Accuracy (producer’s accuracy) represents the fraction of ground truth (reference) points of a class that were correctly classified; its complement is the error of omission (ArcGIS Pro, 2022e; Okin, 2022d). The diagonal values, where the Accuracy Assessment ground truth columns intersect the classification result rows, contain the number of correctly classified points per class (ArcGIS Pro, 2022e). The Kappa index indicates “how different the confusion matrix would be from a random matrix… as [an indicator of] the accuracy of a classification”; however, Professor Okin notes that this measurement is not the most accurate indicator (Okin, 2022d, 4:07). The overall accuracy is shown at the intersection of U_Accuracy and P_Accuracy and is calculated as 1 minus the overall error rate.
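
The arithmetic described above can be summarized compactly. The sketch below computes user’s accuracy, producer’s accuracy, overall accuracy, and the kappa index from a small hypothetical three-class confusion matrix (rows are classified results, columns are ground truth); it illustrates the formulas only and does not use this study’s actual matrices.

import numpy as np

# Hypothetical confusion matrix: rows = classified results, columns = ground truth.
cm = np.array([
    [10, 0, 1],
    [ 1, 7, 2],
    [ 0, 1, 8],
])

row_totals = cm.sum(axis=1)      # points per classified class
col_totals = cm.sum(axis=0)      # points per ground truth (reference) class
diagonal = np.diag(cm)           # correctly classified points per class
n = cm.sum()

user_accuracy = diagonal / row_totals       # 1 - commission error rate
producer_accuracy = diagonal / col_totals   # 1 - omission error rate
overall_accuracy = diagonal.sum() / n       # 1 - overall error rate

# Kappa: observed agreement corrected for the agreement expected by chance.
expected = (row_totals * col_totals).sum() / n**2
kappa = (overall_accuracy - expected) / (1 - expected)

print(user_accuracy, producer_accuracy, overall_accuracy, kappa)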

Object-based and Pixel-based Classification Confusion Matrices

Object-based Supervised Classification Accuracy Results. The Object-based Supervised Classification Confusion Matrix presents the results of comparing the 91 Accuracy Assessment Points to the classifications they were assigned.

Water Class (1)

The User Accuracy value is 1, indicating that none of the Accuracy Assessment Points falling within areas classified as Water were found to actually belong to another class. The User Accuracy value was calculated by first calculating the sum of misclassified pixels for the classified image in the Water class (0) as the errors of commission. The commission error rate (0) was then calculated by dividing the errors of commission by the total number of pixels for the classified image Water class (10). The User Accuracy (1) was determined by subtracting the commission error rate (0) from 1. The User Accuracy value indicates that this class was perfectly classified.

Class 1 (Water) was assigned a total of 11 Accuracy Assessment Points, and 1 of them was misclassified in the classified image, where it was assigned to the Urban/Man-made/Developed class. The Producer Accuracy value is 0.909091, calculated from the error of omission value of 1 divided by the total number of reference points for the class (11) to generate the omission error rate (0.0909090909). The omission error rate is then subtracted from a value of 1 to produce the Producer’s Accuracy value (0.909091), indicating that this class was well classified with reference to the NAIP imagery findings.
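
As a quick check of the Water class values just reported, the same formulas reproduce the published figures:

# Water class (object-based): 10 points classified as Water, none of which were
# commission errors; 11 ground truth Water points, 1 of which was omitted.
commission_errors, classified_total = 0, 10
omission_errors, reference_total = 1, 11

user_accuracy = 1 - commission_errors / classified_total    # -> 1.0
producer_accuracy = 1 - omission_errors / reference_total   # -> 0.90909...
print(round(user_accuracy, 6), round(producer_accuracy, 6)) # 1.0 0.909091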

Urban/Man-made/Developed Class (2)

The User Accuracy value is 0.636364, due to the identification of 4 Accuracy Assessment Points that should have been classified as different classes in the classified image. The User Accuracy value was calculated by first calculating the sum of misclassified pixels for the classified image in Class 2 (4) as the errors of commission. The commission error rate (0.36363636363) was then calculated by dividing the errors of commission by the total number of pixels in the classified image identified in Class 2 (11). The User Accuracy (0.636364) was determined by subtracting the commission error rate (0.36363636363) from 1. The User Accuracy value indicates that this class was sufficiently classified.

The Urban/Man-made/Developed Class (2) was assigned a total of 8 Accuracy Assessment Points, and 1 of them was misclassified as Irrigated Lawn (3) in the classified image. The Producer Accuracy value is 0.875, calculated from the error of omission value of 1 divided by the total number of reference points for the class (8) to generate the omission error rate (0.125). The omission error rate is then subtracted from a value of 1 to produce the Producer’s Accuracy value (0.875), indicating that this class was fairly well classified with reference to the NAIP imagery findings.

Irrigated Lawn (3)

The User Accuracy value is 0.5, due to the identification of 5 Accuracy Assessment Points that should have been classified as different classes in the classified image. The User Accuracy value was calculated by first calculating the sum of misclassified pixels for the classified image in Class 3 (5) as the errors of commission. The commission error rate (0.5) was then calculated by dividing the errors of commission by the total number of pixels in the classified image identified in Class 3 (10). The User Accuracy (0.5) was determined by subtracting the commission error rate (0.5) from 1. The User Accuracy value indicates that this class was not well classified.

The Irrigated Lawn Class (3) was assigned a total of 7 Accuracy Assessment Points, and 2 of them were misclassified as Urban/Man-made/Developed (2) in the classified image. The Producer Accuracy value is 0.714286, calculated from the error of omission value of 2 divided by the total number of reference points for the class (7) to generate the omission error rate (0.28571428571). The omission error rate is then subtracted from a value of 1 to produce the Producer’s Accuracy value (0.714286), indicating that this class was moderately well classified with reference to the NAIP imagery findings.

Chaparral Grassland/Desert Shrub (4)

The User Accuracy value is 0.5, due to the identification of 5 Accuracy Assessment Points that should have been classified as different classes in the classified image. The User Accuracy value was calculated by first calculating the sum of misclassified pixels for the classified image in Class 4 (5) as the errors of commission. The commission error rate (0.5) was then calculated by dividing the errors of commission by the total number of pixels in the classified image identified in Class 4 (10). The User Accuracy (0.5) was determined by subtracting the commission error rate (0.5) from 1. The User Accuracy value indicates that this class was not well classified.

The Chaparral Grassland/Desert Shrub Class (4) was assigned a total of 8 Accuracy Assessment Points, and 3 of them were misclassified as other classes in the classified image. The Producer Accuracy value is 0.625, calculated from the error of omission value of 3 divided by the total number of reference points for the class (8) to generate the omission error rate (0.375). The omission error rate is then subtracted from a value of 1 to produce the Producer’s Accuracy value (0.625), indicating that this class was not well classified with reference to the NAIP imagery findings.

Barren/Soil (5)

The User Accuracy value is 0.933333, due to the identification of 2 Accuracy Assessment Points that should have been classified as a different class in the classified image. The User Accuracy value was calculated by first calculating the sum of misclassified pixels for the classified image in Class 5 (2) as the errors of commission. The commission error rate (0.06666666666) was then calculated by dividing the errors of commission by the total number of pixels in the classified image identified in Class 5 (30). The User Accuracy (0.933333) was determined by subtracting the commission error rate (0.06666666666) from 1. The User Accuracy value indicates that this class was very well classified.

The Barren/Soil Class (5) was assigned a total of 41 Accuracy Assessment Points, and 13 of them were misclassified as other classes in the classified image. The Producer Accuracy value is 0.682927, calculated from the error of omission value of 13 divided by the total number of reference points for the class (41) to generate the omission error rate (0.31707317073). The omission error rate is then subtracted from a value of 1 to produce the Producer’s Accuracy value (0.682927), indicating that this class was not well classified with reference to the NAIP imagery findings.

Iron-based Soil (6)

The User Accuracy value is 0.7, due to the identification of 3 Accuracy Assessment Points that should have been classified as a different class in the classified image. The User Accuracy value was calculated by first calculating the sum of misclassified pixels for the classified image in Class 6 (3) as the errors of commission. The commission error rate (0.3) was then calculated by dividing the errors of commission by the total number of pixels in the classified image identified in Class 6 (10). The User Accuracy (0.7) was determined by subtracting the commission error rate (0.3) from 1. The User Accuracy value indicates that this class was sufficiently classified.

The Iron-based Soil Class (6) was assigned a total of 8 Accuracy Assessment Points, and 1 of them was misclassified as another class in the classified image. The Producer Accuracy value is 0.875, calculated from the error of omission value of 1 divided by the total number of reference points for the class (8) to generate the omission error rate (0.125). The omission error rate is then subtracted from a value of 1 to produce the Producer’s Accuracy value (0.875), indicating that this class was well classified with reference to the NAIP imagery findings.

Salt or Silicate-based Soil (7)

The User Accuracy value is 0.8, due to the identification of 2 Accuracy Assessment Points that should have been classified as different classes in the classified image. The User Accuracy value was calculated by first calculating the sum of misclassified pixels for the classified image in Class 7 (2) as the errors of commission. The commission error rate (0.2) was then calculated by dividing the errors of commission by the total number of pixels in the classified image identified in Class 7 (10). The User Accuracy (0.8) was determined by subtracting the commission error rate (0.2) from 1. The User Accuracy value indicates that this class was pretty well classified.

The Salt or Silicate-based Soil Class (7) was assigned a total of 8 Accuracy Assessment Points, and none of them were misclassified in the classified image. The Producer Accuracy value is 1, calculated from the error of omission value of 0 divided by the total number of reference points for the class (8) to generate the omission error rate (0). The omission error rate is then subtracted from a value of 1 to produce the Producer’s Accuracy value (1), indicating that this class was perfectly classified with reference to the NAIP imagery findings.

The Overall Accuracy of the matrix for all classes is 0.769231. The Overall Accuracy was calculated using the overall errors (21) divided by the sum of all entries (91) to generate the overall error rate (0.23076923076). The overall error rate was then subtracted from 1 to generate an Overall Accuracy value of 0.769231. Overall, the confusion matrix for the Object-based Supervised Classification shows acceptable accuracy. The Kappa index value is 0.707932, indicating that this confusion matrix would be significantly different from a random confusion matrix and the overall accuracy is high (ArcGIS Pro, 2022e; Okin, 2022d).
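
The overall figures can be verified the same way; the chance-agreement level implied by the reported kappa can also be recovered by rearranging the kappa formula (a derived check, not a value reported by the tool):

# Object-based classification: 21 errors across 91 accuracy assessment points.
overall_errors, total_points = 21, 91
overall_accuracy = 1 - overall_errors / total_points
print(round(overall_accuracy, 6))            # 0.769231

# Rearranging kappa = (OA - Pe) / (1 - Pe) gives the implied chance agreement Pe.
kappa = 0.707932
expected_agreement = (overall_accuracy - kappa) / (1 - kappa)
print(round(expected_agreement, 6))          # roughly 0.21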

Pixel-based Supervised Classification Accuracy Results. The Pixel-based Supervised Classification Confusion Matrix presents the results of comparing the 88 Accuracy Assessment Points to the classifications they were assigned.

Water Class (1)

The User Accuracy value is 1, indicating that none of the Accuracy Assessment Points falling within areas classified as Water were found to actually belong to another class. The User Accuracy value was calculated by first calculating the sum of misclassified pixels for the classified image in the Water class (0) as the errors of commission. The commission error rate (0) was then calculated by dividing the errors of commission by the total number of pixels for the classified image Water class (10). The User Accuracy (1) was determined by subtracting the commission error rate (0) from 1. The User Accuracy value indicates that this class was perfectly classified.

Class 1 (Water) was assigned a total of 11 Accuracy Assessment Points, and 1 of them was misclassified as another class in the classified image. The Producer Accuracy value is 0.909091, calculated from the error of omission value of 1 divided by the total number of reference points for the class (11) to generate the omission error rate (0.0909090909). The omission error rate is then subtracted from a value of 1 to produce the Producer’s Accuracy value (0.909091), indicating that this class was well classified with reference to the NAIP imagery findings.

Urban/Man-made/Developed Class (2)

The User Accuracy value is 0.818182, due to the identification of 2 Accuracy Assessment Points that should have been classified as different classes in the classified image. The User Accuracy value was calculated by first calculating the sum of misclassified pixels for the classified image in Class 2 (2) as the errors of commission. The commission error rate (0.18181818181) was then calculated by dividing the errors of commission by the total number of pixels in the classified image identified in Class 2 (11). The User Accuracy (0.818182) was determined by subtracting the commission error rate (0.18181818181) from 1. The User Accuracy value indicates that this class was well classified.

The Urban/Man-made/Developed Class (2) was assigned a total of 10 Accuracy Assessment Points, and 1 of them was misclassified as Irrigated Lawn (3) in the classified image. The Producer Accuracy value is 0.9, calculated from the error of omission value of 1 divided by the total number of reference points for the class (10) to generate the omission error rate (0.1). The omission error rate is then subtracted from a value of 1 to produce the Producer’s Accuracy value (0.9), indicating that this class was well classified with reference to the NAIP imagery findings.

Irrigated Lawn (3)

The User Accuracy value is 0.4, due to the identification of 6 Accuracy Assessment Points that should have been classified as different classes in the classified image. The User Accuracy value was calculated by first calculating the sum of misclassified pixels for the classified image in Class 3 (6) as the errors of commission. The commission error rate (0.6) was then calculated by dividing the errors of commission by the total number of pixels in the classified image identified in Class 3 (10). The User Accuracy (0.4) was determined by subtracting the commission error rate (0.6) from 1. The User Accuracy value indicates that this class was not well classified.

The Irrigated Lawn Class (3) was assigned a total of 7 Accuracy Assessment Points, and 3 of them were misclassified as another class in the classified image. The Producer Accuracy value is 0.571429, calculated from the error of omission value of 3 divided by the total number of reference points for the class (7) to generate the omission error rate (0.42857142857). The omission error rate is then subtracted from a value of 1 to produce the Producer’s Accuracy value (0.571429), indicating that this class was not well classified with reference to the NAIP imagery findings.

Chaparral Grassland/Desert Shrub (4)

The User Accuracy value is 0.6, due to the identification of 4 Accuracy Assessment Points that should have been classified as different classes in the classified image. The User Accuracy value was calculated by first calculating the sum of misclassified pixels for the classified image in Class 4 (4) as the errors of commission. The commission error rate (0.4) was then calculated by dividing the errors of commission by the total number of pixels in the classified image identified in Class 4 (10). The User Accuracy (0.6) was determined by subtracting the commission error rate (0.4) from 1. The User Accuracy value indicates that this class was sufficiently well classified.

The Chaparral Grassland/Desert Shrub Class (4) was assigned a total of 7 Accuracy Assessment Points, and 1 of them was misclassified as another class in the classified image. The Producer Accuracy value is 0.857143, calculated from the error of omission value of 1 divided by the total number of reference points for the class (7) to generate the omission error rate (0.14285714285). The omission error rate is then subtracted from a value of 1 to produce the Producer’s Accuracy value (0.857143), indicating that this class was well classified with reference to the NAIP imagery findings.

Barren/Soil (5)

The User Accuracy value is 0.962963, due to the identification of 1 Accuracy Assessment Point that should have been classified as a different class in the classified image. The User Accuracy value was calculated by first calculating the sum of misclassified pixels for the classified image in Class 5 (1) as the errors of commission. The commission error rate (0.03703703703) was then calculated by dividing the errors of commission by the total number of pixels in the classified image identified in Class 5 (27). The User Accuracy (0.962963) was determined by subtracting the commission error rate (0.03703703703) from 1. The User Accuracy value indicates that this class was very well classified.

The Barren/Soil Class (5) was assigned a total of 40 Accuracy Assessment Points, and 14 of them were misclassified as other classes in the classified image. The Producer Accuracy value is 0.65, calculated from the error of omission value of 14 divided by the total number of reference points for the class (40) to generate the omission error rate (0.35). The omission error rate is then subtracted from a value of 1 to produce the Producer’s Accuracy value (0.65), indicating that this class was sufficiently classified with reference to the NAIP imagery findings.

Iron-based Soil (6)

The User Accuracy value is 0.5, due to the identification of 5 Accuracy Assessment Points that should have been classified as different classes in the classified image. The User Accuracy value was calculated by first calculating the sum of misclassified pixels for the classified image in Class 6 (5) as the errors of commission. The commission error rate (0.5) was then calculated by dividing the errors of commission by the total number of pixels in the classified image identified in Class 6 (10). The User Accuracy (0.5) was determined by subtracting the commission error rate (0.5) from 1. The User Accuracy value indicates that this class was not well classified.

The Iron-based Soil Class (6) was assigned a total of 7 Accuracy Assessment Points, and 2 of them were misclassified as another class in the classified image. The Producer Accuracy value is 0.714286, calculated from the error of omission value of 2 divided by the total number of reference points for the class (7) to generate the omission error rate (0.28571428571). The omission error rate is then subtracted from a value of 1 to produce the Producer’s Accuracy value (0.714286), indicating that this class was moderately well classified with reference to the NAIP imagery findings.

Salt or Silicate-based Soil (7)

The User Accuracy value is 0.6, due to the identification of 4 Accuracy Assessment Points that should have been classified as different classes in the classified image. The User Accuracy value was calculated by first calculating the sum of misclassified pixels for the classified image in Class 7 (4) as the errors of commission. The commission error rate (0.4) was then calculated by dividing the errors of commission by the total number of pixels in the classified image identified in Class 7 (10). The User Accuracy (0.6) was determined by subtracting the commission error rate (0.4) from 1. The User Accuracy value indicates that this class was sufficiently classified.

The Salt or Silicate-based Soil Class (7) was assigned a total of 6 Accuracy Assessment Points, and none of them were misclassified in the classified image. The Producer Accuracy value is 1, calculated from the error of omission value of 0 divided by the total number of reference points for the class (6) to generate the omission error rate (0). The omission error rate is then subtracted from a value of 1 to produce the Producer’s Accuracy value (1), indicating that this class was perfectly classified with reference to the NAIP imagery findings.

The Overall Accuracy of the matrix for all classes is 0.75. The Overall Accuracy was calculated using the overall errors (22) divided by the sum of all entries (88) to generate the overall error rate (0.25). The overall error rate was then subtracted from 1 to generate an Overall Accuracy value of 0.75. Overall, the confusion matrix for the Pixel-based Supervised Classification shows acceptable accuracy. The Kappa index value is 0.686427, indicating that this confusion matrix would be significantly different from a random confusion matrix and the overall accuracy is high (ArcGIS Pro, 2022e; Okin, 2022d).

Comparing Object- and Pixel-based Supervised Classification Accuracy Results. A comparison of the two confusion matrix results shows that the Overall Accuracy values were very similar, with the Object-based Supervised Classification surpassing the Pixel-based Supervised Classification by only 0.019231. Similarly, the Kappa Index value of the Object-based Supervised Classification surpassed that of the Pixel-based Supervised Classification by only 0.021505. The per-class User and Producer Accuracy values reported above are summarized below to compare which classification outperformed the other:

Class | Object-based UA | Pixel-based UA | Object-based PA | Pixel-based PA
Water | 1 | 1 | 0.909091 | 0.909091
Urban/Man-made/Developed | 0.636364 | 0.818182 | 0.875 | 0.9
Irrigated Lawn | 0.5 | 0.4 | 0.714286 | 0.571429
Chaparral Grassland/Desert Shrub (Dense Vegetation) | 0.5 | 0.6 | 0.625 | 0.857143
Barren/Soil (Sparse Vegetation) | 0.933333 | 0.962963 | 0.682927 | 0.65
Iron-based Soil | 0.7 | 0.5 | 0.875 | 0.714286
Salt or Silicate-based Soil | 0.8 | 0.6 | 1 | 1

Overall, when treating ties as null, the Object-based classification outperformed the Pixel-based classification in a greater number of these per-class accuracy comparisons. Upon visual inspection and during reclassification, the Pixel-based classification appeared more accurate to me, but perhaps that impression is simply incorrect. On the other hand, the measurement of accuracy, and the accuracy of the accuracy results themselves, could be influenced by many facets of the process. During lectures, Dr. Okin discussed the effects that positional accuracy, poor registration, sampling design, the choice of accuracy metrics, and inconsistencies in the data used for accuracy assessment can have on accuracy assessments and, therefore, on confusion matrices (Okin, 2022c). Additionally, the Accuracy Assessment Points minimum value input was 50, which is relatively low compared with the hundreds of assessment points ideally generated (Pitcher, 2022b).

References

ArcGIS Pro. (2022a). Segmentation—ArcGIS Pro | Documentation. ESRI. https://pro.arcgis.com/en/pro-app/latest/help/analysis/image-analyst/segmentation.htm

ArcGIS Pro. (2022b). Understanding segmentation and classification—ArcGIS Pro | Documentation. ESRI. https://pro.arcgis.com/en/pro-app/latest/tool-reference/spatial-analyst/understanding-segmentation-and-classification.htm

ArcGIS Pro. (2022c). Image Classification Wizard—ArcGIS Pro | Documentation. ESRI. https://pro.arcgis.com/en/pro-app/latest/help/analysis/image-analyst/the-image-classification-wizard.htm

Hively, W. D., Lamb, B. T., Daughtry, C. S., Serbin, G., Dennison, P., Kokaly, R. F., Wu, Z., & Masek, J. G. (2021). Evaluation of SWIR crop residue bands for the Landsat Next mission.

Jensen, J. R. (2016). Introductory Digital Image Processing: A Remote Sensing Perspective. Pearson Education.

Landsat Missions. (2023, June 7). Landsat Missions Timeline | U.S. Geological Survey. USGS. https://www.usgs.gov/media/images/landsat-missions-timeline

Mondejar, J. P., & Tongco, A. F. (2019). Near Infrared Band of landsat 8 as water index: A case study around Cordova and Lapu-Lapu City, Cebu, Philippines. Sustainable Environment Research, 29(1). https://doi.org/10.1186/s42834-019-0016-5

NASA. (n.d.). Ocean Color. NASA. Retrieved October 7, 2022, from https://science.nasa.gov/earth-science/oceanography/livingocean/oceancolor#:~:text=The%20red%2C%20yellow%2C%20and%20green,water%20molecules%20in%20the%20ocean

Okin, G. (2022a). Red-Green-Blue Color Composites. MAGIST 411. Retrieved November 18, 2022, from https://bruinlearn.ucla.edu/courses/141360/pages/411-dot-2-2-red-green-blue-color-composites-12-23?module_item_id=5376616

Okin, G. (2022b). Why do surfaces look different? Atmospheric and Geologic Materials. MAGIST 411. Retrieved November 18, 2022, from https://bruinlearn.ucla.edu/courses/141360/pages/411-dot-1-8-why-do-surfaces-lookdifferent-atmospheric-and-geologic-materials-8-45?module_item_id=5376598

Okin, G. (2022c). Accuracy Assessment. MAGIST 411. Retrieved November 19, 2022, from https://bruinlearn.ucla.edu/courses/141360/pages/411-dot-4-8-accuracy-assessment-17-21?module_item_id=5376665

Okin, G. (2022d). Accuracy Assessment – Confusion Matrices. MAGIST 411. Retrieved November 19, 2022, from https://bruinlearn.ucla.edu/courses/141360/pages/411-dot-4-9-accuracy-assessment-confusion-matrices-8-42?module_item_id=5376666

Pitcher, L. (2022a). ArcGIS Pro Image Classification Wizard Overview. MAGIST 411. BruinLearn. Retrieved November 18, 2022, from https://bruinlearn.ucla.edu/courses/141360/pages/td04-dot-01-arcgis-pro-image-classification-wizard-overview-14-23?module_item_id=5376671

Pitcher, L. (2022b). Accuracy Assessment. MAGIST 411. BruinLearn. Retrieved November 18, 2022, from https://bruinlearn.ucla.edu/courses/141360/pages/td04-dot-03-accuracy-assessment-10-35?module_item_id=5376673

WorldOfTech. (2021, December 22). Landsat 8 bands and band combinations: Learn gis. WorldOfTech. Retrieved October 9, 2022, from https://www.worldofitech.com/landsat-8-bands-combinations/#Color_Infrared_5_4_3

Zhao, Y., & Zhu, Z. (2022). ASI: An artificial surface index for Landsat 8 imagery. International Journal of Applied Earth Observation and Geoinformation, 107, 102703. https://doi.org/10.1016/j.jag.2022.102703
