Semi-Automatic Classification Plugin version 9 officially released

I'm glad to announce the release of the Semi-Automatic Classification Plugin version 9 (codename "Foundation").


This new version is compatible with QGIS 4 (based on the Qt 6 framework).

Until QGIS 4 is officially released, you can try the new Semi-Automatic Classification Plugin by installing the QGIS prerelease (QGIS 3.99 master).


The following is the changelog:
  • new version for QGIS 4
  • built on the new Remotior Sensus version 0.6
  • new simplified interface designed for new users
  • added automatic download of Remotior Sensus if the library is not available or outdated
  • in the Working toolbar, added a button to open a Copernicus Browser link at the QGIS map coordinates
  • in the Working toolbar, added buttons to show or hide custom layers or groups by name, also through the keyboard shortcuts Z, X, and C
  • in Download products, added an option to create a band set
  • various bug fixes
The simplified interface is designed to ease the classification process for new users, from the definition of input images to the execution of the classification algorithm.


The "Complete interface" can always be restored from the settings in the SCP menu by deactivating the option "Simplified interface" and restarting QGIS.

A major addition is the integration of deep learning foundation models and pretrained models into the Semi-Automatic Classification Plugin.

Pretrained models are machine learning or deep learning models that have already been trained to provide a useful tool for the classification process. Pretrained models can be trained for specific classes or on broad data, as in the case of foundation models. Foundation models are usually the result of self-supervised learning, which does not require labeled training data.

Usually, pretrained models are trained for specific classes and can be used directly to classify input data, producing a classification output (for example, a pretrained model can classify a Landsat image using the classes Water, Soil, Built-up, and Vegetation). Foundation models can be used in different ways, for example adapted to a specific use case by training with additional data.


In addition, models can be used to produce embeddings: vectors that can serve as input to train a supervised classification model such as Random Forest. For example, a Sentinel-2 image with 10 bands can be processed with a foundation model to produce embeddings of n dimensions (for instance, 128, thus enriching the information at the pixel level), and a supervised classification algorithm can then be trained on these embeddings.
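To make the embedding workflow concrete, here is a minimal sketch in Python. It is not the plugin's actual code: the random projection merely stands in for a real foundation-model encoder, and the labels are random placeholders for user-drawn training samples. It only illustrates the shape of the process: per-pixel spectral vectors are turned into 128-dimensional embeddings, a Random Forest is trained on a labeled subset, and every pixel is then classified.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
bands, height, width = 10, 32, 32
image = rng.random((bands, height, width))   # fake 10-band Sentinel-2 patch

# Stand-in for the foundation-model encoder: a fixed projection from
# 10 spectral bands to a 128-dimensional embedding per pixel.
projection = rng.random((bands, 128))
pixels = image.reshape(bands, -1).T          # (height*width, bands)
embeddings = pixels @ projection             # (height*width, 128)

# Placeholder labels for a subset of pixels (in practice, from ROIs).
train_idx = rng.choice(height * width, size=200, replace=False)
train_labels = rng.integers(0, 4, size=200)  # 4 hypothetical classes

# Train a supervised classifier on the embeddings, then classify all pixels.
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(embeddings[train_idx], train_labels)
classified = clf.predict(embeddings).reshape(height, width)
print(classified.shape)  # (32, 32)
```

The key point of the design is that the classifier never sees the raw bands: it operates on the richer embedding space, which is what allows a simple model like Random Forest to benefit from the foundation model's learned representation.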


The Semi-Automatic Classification Plugin automatically downloads the model weights and performs the classification. Please note that each model has specific characteristics and requires specific preprocessing of the input image. Currently, the following models are available:

  • Swin-v2-Base model for Sentinel-2 single image. Source: Sentinel2_SwinB_SI_MS, pretrained by the Allen Institute for Artificial Intelligence (SatlasPretrain: https://satlas-pretrain.allen.ai)
  • Swin-v2-Tiny model for Sentinel-2 single image. Source: Sentinel2_SwinT_SI_MS, pretrained by the Allen Institute for Artificial Intelligence (SatlasPretrain: https://satlas-pretrain.allen.ai)
  • Swin-v2-Base model for Landsat 8 or Landsat 9 single image. Source: Landsat_SwinB_SI, pretrained by the Allen Institute for Artificial Intelligence (SatlasPretrain: https://satlas-pretrain.allen.ai)
  • Swin-v2-Base segmentation for Sentinel-2 single image (4 bands). Output classes: background, water, developed, tree, shrub, grass, crop, bare, snow, wetland, mangroves, moss. Source: Satlas_MS_tci-b08_epoch150, pretrained by DPR Team as part of the DPR Zoo Segmentation Hub framework (https://github.com/DPR25/dpr-zoo-segmentation-hub) based on SatlasPretrain models (https://satlas-pretrain.allen.ai)
  • Swin-v2-Base segmentation for Sentinel-2 single image (3 bands). Output classes: background, water, developed, tree, shrub, grass, crop, bare, snow, wetland, mangroves, moss. Source: Satlas_RGB1_epoch70, pretrained by DPR Team as part of the DPR Zoo Segmentation Hub framework (https://github.com/DPR25/dpr-zoo-segmentation-hub) based on SatlasPretrain models (https://satlas-pretrain.allen.ai)


Many thanks to the developers who made the above models available.

Hopefully, other models will be integrated into the Semi-Automatic Classification Plugin in the future.

For any comments or questions, join the Facebook group or the GitHub discussions about the Semi-Automatic Classification Plugin.