Halcon 20.11 Deep Learning Semantic Segmentation Model

2024-08-28 04:44

This article walks through training a deep learning semantic segmentation model with Halcon 20.11; hopefully it serves as a useful reference for developers tackling the same kind of problem.

1. Preface: A deep learning object detection model already covers a large share of inspection needs, but it cannot deliver pixel-accurate segmentation. For that you need to train a deep semantic segmentation model. The labeling tool is again MVTec Deep Learning Tool 24.05, and the workflow is the same as before: label all images in the tool, export the labeled dataset (an .hdict file), and then train the model from that dataset in HALCON code.
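Before running the preprocessing code below, it can be worth opening the exported .hdict once and checking that the class names and sample count look right. This is only a minimal sketch, assuming the export follows the standard HALCON DLDataset layout with the keys 'class_names', 'class_ids' and 'samples'; if your export differs, list the keys with get_dict_param first.

* Quick sanity check of the exported dataset (sketch; adjust the path to your own export).
read_dict ('E:/HalconDL_Project/瓶盖划伤语义分割/标注/Bottle.hdict', [], [], DLDatasetExport)
* List the class names and IDs stored by the labeling tool.
get_dict_tuple (DLDatasetExport, 'class_names', ExportClassNames)
get_dict_tuple (DLDatasetExport, 'class_ids', ExportClassIDs)
* Count the labeled samples.
get_dict_tuple (DLDatasetExport, 'samples', ExportSamples)
NumLabeledSamples := |ExportSamples|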
2. On to the main part: the source code for training the deep semantic segmentation model.
Preprocessing and preparation stage for model training ********************
dev_update_off ()
*
* *** Set Input/Output paths. ***
TotalPath := 'E:/HalconDL_Project/瓶盖划伤语义分割/标注/'
* Directory with image data.
ImageDir := TotalPath + 'Images'
* Directory with ground truth segmentation images.
SegmentationDir := TotalPath + 'Bottle_labels'
* All example data is written to this folder.
ExampleDataDir := TotalPath + 'segment_Bottle_defects_data'
* Dataset directory basename for any outputs written by preprocess_dl_dataset.
* This name will be extended by the image dimensions the dataset will have after preprocessing.
DataDirectoryBaseName := ExampleDataDir + '/dldataset_Bottle_'
* Store the preprocess params separately in order to use them e.g. during inference.
PreprocessParamFileBaseName := '/dl_preprocess_param.hdict'
* Path of the labeled dataset exported by the Deep Learning Tool.
DirectPath_Bottle := TotalPath + 'Bottle.hdict'
*
* *** Set parameters ***
* Class names.
ClassNames := ['Background','Dirty','HuaShang']
* Class IDs.
ClassIDs := [0,1,2]
* Percentages for splitting the dataset.
TrainingPercent := 70
ValidationPercent := 15
* Image dimensions the images are rescaled to during preprocessing.
ImageWidth := 400
ImageHeight := 400
ImageNumChannels := 3
* Gray value range for gray value normalization of the images.
ImageRangeMin := -127
ImageRangeMax := 128
* Further parameters for image preprocessing.
NormalizationType := 'none'
DomainHandling := 'full_domain'
IgnoreClassIDs := []
SetBackgroundID := []
ClassIDsBackground := []
* In order to get a reproducible split we set a random seed.
* This means that re-running the script results in the same split of DLDataset.
SeedRand := 42
*
* *** Read the labeled data and split it into train/validation and test ***
* Set the random seed.
set_system ('seed_rand', SeedRand)
* Read the dataset.
read_dict (DirectPath_Bottle, [], [], DLDataset)
* read_dl_dataset_segmentation (ImageDir, SegmentationDir, ClassNames, ClassIDs, [], [], [], DLDataset)
* Generate the split.
split_dl_dataset (DLDataset, TrainingPercent, ValidationPercent, [])
*
* *** Preprocess the dataset ***
* Create the output directory if it does not exist yet.
file_exists (ExampleDataDir, FileExists)
if (not FileExists)
    make_dir (ExampleDataDir)
endif
* Create the preprocess param.
create_dl_preprocess_param ('segmentation', ImageWidth, ImageHeight, ImageNumChannels, ImageRangeMin, ImageRangeMax, NormalizationType, DomainHandling, IgnoreClassIDs, SetBackgroundID, ClassIDsBackground, [], DLPreprocessParam)
* Dataset directory for any outputs written by preprocess_dl_dataset.
DataDirectory := DataDirectoryBaseName + ImageWidth + 'x' + ImageHeight
* Preprocess the dataset. This might take a few minutes.
create_dict (GenParam)
set_dict_tuple (GenParam, 'overwrite_files', true)
preprocess_dl_dataset (DLDataset, DataDirectory, DLPreprocessParam, GenParam, DLDatasetFilename)
* Store the preprocess params separately in order to use them e.g. during inference.
PreprocessParamFile := DataDirectory + PreprocessParamFileBaseName
write_dict (DLPreprocessParam, PreprocessParamFile, [], [])
*
* *** Preview the preprocessed dataset ***
* Before moving on to training, it is recommended to check the preprocessed dataset.
* Display the DLSamples for 10 randomly selected train images.
get_dict_tuple (DLDataset, 'samples', DatasetSamples)
find_dl_samples (DatasetSamples, 'split', 'train', 'match', SampleIndices)
tuple_shuffle (SampleIndices, ShuffledIndices)
read_dl_samples (DLDataset, ShuffledIndices[0:9], DLSampleBatchDisplay)
*
create_dict (WindowHandleDict)
for Index := 0 to |DLSampleBatchDisplay| - 1 by 1
    * Loop over the samples in DLSampleBatchDisplay.
    dev_display_dl_data (DLSampleBatchDisplay[Index], [], DLDataset, ['image','segmentation_image_ground_truth'], [], WindowHandleDict)
    get_dict_tuple (WindowHandleDict, 'segmentation_image_ground_truth', WindowHandleImage)
    dev_set_window (WindowHandleImage[1])
    Text := 'Press Run (F5) to continue'
    dev_disp_text (Text, 'window', 400, 40, 'black', [], [])
    stop ()
endfor
*
* Close the windows that have been used for visualization.
dev_close_window_dict (WindowHandleDict)
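As an extra sanity check after preprocessing, the number of samples per split can be counted before training is started. A short sketch using the same operators as above; the split names 'train', 'validation' and 'test' are the ones written by split_dl_dataset:

* Count samples per split in the preprocessed dataset (sketch).
get_dict_tuple (DLDataset, 'samples', DatasetSamples)
find_dl_samples (DatasetSamples, 'split', 'train', 'match', TrainIndices)
find_dl_samples (DatasetSamples, 'split', 'validation', 'match', ValidationIndices)
find_dl_samples (DatasetSamples, 'split', 'test', 'match', TestIndices)
* With 70/15 percent above, the remaining ~15 percent form the test split.
NumTrain := |TrainIndices|
NumValidation := |ValidationIndices|
NumTest := |TestIndices|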

Model training stage *****************
dev_update_off ()
*
* Training can be performed on a GPU or CPU.
* See the respective system requirements in the Installation Guide.
* If possible, a GPU is used in this example.
* In case you explicitly wish to run this example on the CPU,
* choose the CPU device instead.
query_available_dl_devices (['runtime','runtime'], ['gpu','cpu'], DLDeviceHandles)
if (|DLDeviceHandles| == 0)
    throw ('No supported device found to continue this example.')
endif
* Due to the filter used in query_available_dl_devices, the first device is a GPU, if available.
DLDevice := DLDeviceHandles[0]
get_dl_device_param (DLDevice, 'type', DLDeviceType)
if (DLDeviceType == 'cpu')
    * The number of used threads may have an impact
    * on the training duration.
    NumThreadsTraining := 4
    set_system ('thread_num', NumThreadsTraining)
endif
*
* *** Set Input/Output paths. ***
* All example data is written to this folder.
TotalPath := 'E:/HalconDL_Project/瓶盖划伤语义分割/标注/'
ExampleDataDir := TotalPath + 'segment_Bottle_defects_data'
* File path of the preprocessed DLDataset.
* Note: Adapt DataDirectory after preprocessing with another image size.
DataDirectory := ExampleDataDir + '/dldataset_Bottle_400x400'
DLDatasetFileName := DataDirectory + '/dl_dataset.hdict'
* Output path for the final trained model.
FinalModelBaseName := ExampleDataDir + '/final_dl_model_segmentation'
* Output path of the best evaluated model.
BestModelBaseName := ExampleDataDir + '/best_dl_model_segmentation'
*
* *** Set basic parameters ***
* The following parameters need to be adapted frequently.
* Model parameters.
* The segmentation model to be retrained.
ModelFileName := 'pretrained_dl_segmentation_enhanced.hdl'
* Batch size.
* If set to 'maximum', the batch size is set by set_dl_model_param_max_gpu_batch_size
* if the runtime 'gpu' is given.
BatchSize := 'maximum'
* Initial learning rate.
InitialLearningRate := 0.0001
* Momentum should be high if the batch size is small.
Momentum := 0.99
* Parameters used by train_dl_model.
* Number of epochs to train the model.
NumEpochs := 10
* Evaluation interval (in epochs) to calculate evaluation measures on the validation split.
EvaluationIntervalEpochs := 1
* Change the learning rate in the following epochs, e.g. [15, 30].
* Set it to [] if the learning rate should not be changed.
ChangeLearningRateEpochs := []
* Change the learning rate to the following values, e.g. InitialLearningRate * [0.1, 0.01].
* The tuple has to be of the same length as ChangeLearningRateEpochs.
ChangeLearningRateValues := []
*
* *** Set advanced parameters. ***
* The following parameters might need to be changed in rare cases.
* Model parameter.
* Use [] for the default weight prior.
WeightPrior := []
* Parameters of train_dl_model.
* Control whether training progress is displayed (true/false).
EnableDisplay := true
* Set a random seed for training.
RandomSeed := 42
set_system ('seed_rand', RandomSeed)
* In order to obtain nearly deterministic training results on the same GPU
* (system, driver, cuda-version) you could specify 'cudnn_deterministic' as
* 'true'. Note that this could slow down training a bit.
* set_system ('cudnn_deterministic', 'true')
* Set generic parameters of create_dl_train_param.
* Please see the documentation of create_dl_train_param for an overview of all available parameters.
GenParamName := []
GenParamValue := []
* Change strategies.
* It is possible to change model parameters during training.
* Here, we change the learning rate if specified above.
if (|ChangeLearningRateEpochs| > 0)
    create_dict (ChangeStrategy)
    * Specify the model parameter to be changed.
    set_dict_tuple (ChangeStrategy, 'model_param', 'learning_rate')
    * Start the parameter value at 'initial_value'.
    set_dict_tuple (ChangeStrategy, 'initial_value', InitialLearningRate)
    * Change the parameter value at each 'epochs' step.
    set_dict_tuple (ChangeStrategy, 'epochs', ChangeLearningRateEpochs)
    * Change the parameter value to the corresponding value in 'values'.
    set_dict_tuple (ChangeStrategy, 'values', ChangeLearningRateValues)
    * Collect all change strategies as input.
    GenParamName := [GenParamName,'change']
    GenParamValue := [GenParamValue,ChangeStrategy]
endif
* Serialization strategies.
* There are several options for saving intermediate models to disk (see create_dl_train_param).
* Here, the best and final model are saved to the paths set above.
create_dict (SerializationStrategy)
set_dict_tuple (SerializationStrategy, 'type', 'best')
set_dict_tuple (SerializationStrategy, 'basename', BestModelBaseName)
GenParamName := [GenParamName,'serialize']
GenParamValue := [GenParamValue,SerializationStrategy]
create_dict (SerializationStrategy)
set_dict_tuple (SerializationStrategy, 'type', 'final')
set_dict_tuple (SerializationStrategy, 'basename', FinalModelBaseName)
GenParamName := [GenParamName,'serialize']
GenParamValue := [GenParamValue,SerializationStrategy]
* Display parameters.
* In this example, the evaluation measure for the training split is not displayed during
* training (default). If you want to do so, select a certain percentage of the training
* samples used to evaluate the model during training. A lower percentage helps to speed
* up the evaluation.
SelectedPercentageTrainSamples := 0
* Set the x-axis argument of the training plots.
XAxisLabel := 'epochs'
create_dict (DisplayParam)
set_dict_tuple (DisplayParam, 'selected_percentage_train_samples', SelectedPercentageTrainSamples)
set_dict_tuple (DisplayParam, 'x_axis_label', XAxisLabel)
GenParamName := [GenParamName,'display']
GenParamValue := [GenParamValue,DisplayParam]
*
* *** Read model and dataset. ***
* Check if all necessary files exist.
check_data_availability (ExampleDataDir, DLDatasetFileName)
* Read the preprocessed DLDataset file.
read_dict (DLDatasetFileName, [], [], DLDataset)
* Read in the pretrained model to be retrained.
read_dl_model (ModelFileName, DLModelHandle)
*
set_dl_model_param (DLModelHandle, 'device', DLDevice)
*
* *** Set model parameters. ***
* Set model parameters according to the preprocessing parameters.
get_dict_tuple (DLDataset, 'preprocess_param', DLPreprocessParam)
get_dict_tuple (DLDataset, 'class_ids', ClassIDs)
set_dl_model_param_based_on_preprocessing (DLModelHandle, DLPreprocessParam, ClassIDs)
* Set the model hyper-parameters as specified above.
set_dl_model_param (DLModelHandle, 'learning_rate', InitialLearningRate)
set_dl_model_param (DLModelHandle, 'momentum', Momentum)
if (BatchSize == 'maximum' and DLDeviceType == 'gpu')
    set_dl_model_param_max_gpu_batch_size (DLModelHandle, 100)
else
    if (BatchSize == 'maximum' and DLDeviceType == 'cpu')
        * Please set a suitable batch size in case of 'cpu'
        * training before continuing.
        stop ()
    endif
    set_dl_model_param (DLModelHandle, 'batch_size', 1)
endif
if (|WeightPrior| > 0)
    set_dl_model_param (DLModelHandle, 'weight_prior', WeightPrior)
endif
set_dl_model_param (DLModelHandle, 'runtime_init', 'immediately')
*
* *** Train the model. ***
* Create the generic train parameter dictionary.
create_dl_train_param (DLModelHandle, NumEpochs, EvaluationIntervalEpochs, EnableDisplay, RandomSeed, GenParamName, GenParamValue, TrainParam)
* Start the training by calling the training operator
* train_dl_model_batch () within the following procedure.
train_dl_model (DLDataset, DLModelHandle, TrainParam, 0.0, TrainResults, TrainInfos, EvaluationInfos)
* Stop after the training has finished, before closing the windows.
dev_disp_text ('Press Run (F5) to continue', 'window', 'bottom', 'right', 'black', [], [])
stop ()
* Close the training windows.
dev_close_window ()
dev_close_window ()
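After training, the serialization strategies configured above should have written best_dl_model_segmentation.hdl and final_dl_model_segmentation.hdl into ExampleDataDir. The following optional sketch reads the best model back and checks a few parameters; the parameter names are standard get_dl_model_param keys, and the '.hdl' suffix is assumed to be appended by the serialization strategy (it matches the file name used in the evaluation stage below):

* Optional check of the serialized best model (sketch).
read_dl_model (BestModelBaseName + '.hdl', DLModelCheck)
get_dl_model_param (DLModelCheck, 'image_width', CheckWidth)
get_dl_model_param (DLModelCheck, 'image_height', CheckHeight)
get_dl_model_param (DLModelCheck, 'class_ids', CheckClassIDs)
* CheckWidth x CheckHeight should be 400 x 400 and CheckClassIDs should be [0,1,2].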

Model evaluation stage ************************
dev_update_off ()
*
* By default, this example uses a model pretrained by MVTec. To use the model
* which was trained in part 2 of this example series, set the following
* variable to false.
UsePretrainedModel := true
* Evaluation can be performed on a GPU or CPU.
* See the respective system requirements in the Installation Guide.
* If possible, a GPU is used in this example.
* In case you explicitly wish to run this example on the CPU,
* choose the CPU device instead.
query_available_dl_devices (['runtime','runtime'], ['gpu','cpu'], DLDeviceHandles)
if (|DLDeviceHandles| == 0)
    throw ('No supported device found to continue this example.')
endif
* Due to the filter used in query_available_dl_devices, the first device is a GPU, if available.
DLDevice := DLDeviceHandles[0]
*
* *** Set paths and parameters for the evaluation ***
* Paths.
TotalPath := 'E:/HalconDL_Project/瓶盖划伤语义分割/标注/'
* Project directory for any outputs written by HALCON.
ExampleDataDir := TotalPath + 'segment_Bottle_defects_data'
* File path of the preprocessed DLDataset.
* Note: Adapt DataDirectory after preprocessing with another image size.
DataDirectory := ExampleDataDir + '/dldataset_Bottle_400x400'
DLDatasetFileName := DataDirectory + '/dl_dataset.hdict'
* Path of the retrained segmentation model.
RetrainedModelFileName := ExampleDataDir + '/best_dl_model_segmentation.hdl'
* Evaluation parameters.
* Evaluation measures.
SegmentationMeasures := ['mean_iou','pixel_accuracy','class_pixel_accuracy','pixel_confusion_matrix']
* Batch size used during evaluation.
BatchSize := 1
* Display some segmentation results after determining the best model.
NumDisplay := 3
*
* *** Evaluation of the model ***
* Check if all necessary files exist.
check_data_availability_COPY_1 (ExampleDataDir, DLDatasetFileName, RetrainedModelFileName, UsePretrainedModel)
* Read the retrained model.
read_dl_model (RetrainedModelFileName, DLModelHandle)
*
set_dl_model_param (DLModelHandle, 'batch_size', BatchSize)
*
set_dl_model_param (DLModelHandle, 'device', DLDevice)
*
* Read the preprocessed DLDataset file.
read_dict (DLDatasetFileName, [], [], DLDataset)
* Set parameters for the evaluation.
create_dict (GenParamEval)
set_dict_tuple (GenParamEval, 'measures', SegmentationMeasures)
set_dict_tuple (GenParamEval, 'show_progress', 'true')
* Evaluate the retrained model.
evaluate_dl_model (DLDataset, DLModelHandle, 'split', 'test', GenParamEval, EvaluationResult, EvalParams)
*
* *** Display the results ***
* Display the measures.
create_dict (WindowHandleDict)
create_dict (GenParamEvalDisplay)
set_dict_tuple (GenParamEvalDisplay, 'display_mode', ['measures','absolute_confusion_matrix'])
dev_display_segmentation_evaluation (EvaluationResult, EvalParams, GenParamEvalDisplay, WindowHandleDict)
*
dev_disp_text ('Press Run (F5) to continue', 'window', 'bottom', 'right', 'black', 'box', 'true')
stop ()
*
* Close the window handles.
dev_close_window_dict (WindowHandleDict)
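dev_display_segmentation_evaluation only visualizes the measures. If you also want the numbers in variables, for example to log the mean IoU per training run, they can be read from the EvaluationResult dictionary. A sketch, assuming the measures are stored under the names requested in SegmentationMeasures; check the actual keys with get_dict_param if your HALCON version stores them differently:

* Read individual measures from the evaluation result (sketch).
get_dict_param (EvaluationResult, 'keys', [], EvalKeys)
* The requested measures are assumed to be stored under their names, e.g. 'mean_iou'.
get_dict_tuple (EvaluationResult, 'mean_iou', MeanIOU)
get_dict_tuple (EvaluationResult, 'pixel_accuracy', PixelAccuracy)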

* *** Visual inspection of images ***
* Evaluate the performance of the model qualitatively
* by visual inspection of images.
* Select test images randomly.
get_dict_tuple (DLDataset, 'samples', DatasetSamples)
find_dl_samples (DatasetSamples, 'split', 'test', 'match', SampleIndices)
tuple_shuffle (SampleIndices, ShuffledIndices)
* Read the selected samples.
read_dl_samples (DLDataset, ShuffledIndices[0:NumDisplay - 1], DLSampleBatch)
* Set parameters for the visualization of sample images.
create_dict (WindowHandleDict)
create_dict (GenParamDisplay)
set_dict_tuple (GenParamDisplay, 'segmentation_exclude_class_ids', 0)
set_dict_tuple (GenParamDisplay, 'segmentation_transparency', '80')
* Set the batch size of the model to 1.
set_dl_model_param (DLModelHandle, 'batch_size', 1)
* Apply the retrained model and visualize the results.
for SampleIndex := 0 to NumDisplay - 1 by 1
    * Apply the model.
    apply_dl_model (DLModelHandle, DLSampleBatch[SampleIndex], [], DLResult)
    * Display the result.
    dev_display_dl_data (DLSampleBatch[SampleIndex], DLResult, DLDataset, ['segmentation_image_ground_truth','segmentation_image_result'], GenParamDisplay, WindowHandleDict)
    dev_display_continue_message (WindowHandleDict)
    stop ()
endfor
* Close the windows.
dev_close_window_dict (WindowHandleDict)
* Optimize the memory consumption.
set_dl_model_param (DLModelHandle, 'optimize_for_inference', 'true')
write_dl_model (DLModelHandle, RetrainedModelFileName)
* Close the windows.
dev_close_window_dict (WindowHandleDict)

Model testing (inference) *****************************
dev_update_off ()
*
* By default, this example uses a model pretrained by MVTec. To use the model
* which was trained in part 2 of this example series, set the following
* variable to false.
UsePretrainedModel := true
* Inference can be done on a GPU or CPU.
* See the respective system requirements in the Installation Guide.
* If possible, a GPU is used in this example.
* In case you explicitly wish to run this example on the CPU,
* choose the CPU device instead.
query_available_dl_devices (['runtime','runtime'], ['gpu','cpu'], DLDeviceHandles)
if (|DLDeviceHandles| == 0)
    throw ('No supported device found to continue this example.')
endif
* Due to the filter used in query_available_dl_devices, the first device is a GPU, if available.
DLDevice := DLDeviceHandles[0]
*
* *** Set paths and parameters for inference ***
* We will demonstrate the inference on the example images.
* In a real application, newly incoming images (not used for training or evaluation)
* would be used here.
* In this example, we read the images from file.
* Directory with the images to be segmented.
TotalPath := 'E:/HalconDL_Project/瓶盖划伤语义分割/标注/'
* Example data folder containing the outputs of the previous stages.
ExampleDataDir := TotalPath + 'segment_Bottle_defects_data'
* File name of the dict containing the parameters used for preprocessing.
* Note: Adapt DataDirectory after preprocessing with another image size.
DataDirectory := ExampleDataDir + '/dldataset_Bottle_400x400'
PreprocessParamFileName := DataDirectory + '/dl_preprocess_param.hdict'
* Path of the retrained segmentation model.
RetrainedModelFileName := ExampleDataDir + '/best_dl_model_segmentation.hdl'
* Provide the class names and IDs.
* Class names.
ClassNames := ['BackGround','Dirty','HuaShang']
* Respective class IDs.
ClassIDs := [0,1,2]
* Batch size used during inference.
BatchSizeInference := 1
*
* *** Inference ***
* Check if all necessary files exist.
check_data_availability_COPY_2 (ExampleDataDir, PreprocessParamFileName, RetrainedModelFileName, UsePretrainedModel)
* Read in the retrained model.
read_dl_model (RetrainedModelFileName, DLModelHandle)
* Set the batch size.
set_dl_model_param (DLModelHandle, 'batch_size', BatchSizeInference)
* Initialize the model for inference.
set_dl_model_param (DLModelHandle, 'device', DLDevice)
* Get the parameters used for preprocessing.
read_dict (PreprocessParamFileName, [], [], DLPreprocessParam)
*
* Set parameters for the visualization of results.
create_dict (WindowHandleDict)
create_dict (DatasetInfo)
set_dict_tuple (DatasetInfo, 'class_ids', ClassIDs)
set_dict_tuple (DatasetInfo, 'class_names', ClassNames)
create_dict (GenParamDisplay)
set_dict_tuple (GenParamDisplay, 'segmentation_exclude_class_ids', 0)
set_dict_tuple (GenParamDisplay, 'segmentation_transparency', '80')
set_dict_tuple (GenParamDisplay, 'font_size', 16)
*
* Image Acquisition 01: Code generated by Image Acquisition 01
list_files ('E:/HalconDL_Project/瓶盖划伤语义分割/标注/Images', ['files','follow_links'], ImageFiles)
tuple_regexp_select (ImageFiles, ['\\.(tif|tiff|gif|bmp|jpg|jpeg|jp2|png|pcx|pgm|ppm|pbm|xwd|ima|hobj)$','ignore_case'], ImageFiles)
for Index := 0 to |ImageFiles| - 1 by 1
    read_image (ImageBatch, ImageFiles[Index])
    *
    * Generate the DLSampleBatch.
    gen_dl_samples_from_images (ImageBatch, DLSampleBatch)
    * Preprocess the DLSampleBatch.
    preprocess_dl_samples (DLSampleBatch, DLPreprocessParam)
    * Apply the DL model to the DLSampleBatch.
    apply_dl_model (DLModelHandle, DLSampleBatch, ['segmentation_image','segmentation_confidence'], DLResultBatch)
    * Postprocessing and visualization.
    * Loop over each sample in the batch.
    for SampleIndex := 0 to BatchSizeInference - 1 by 1
        * Get the image.
        get_dict_object (Image, DLSampleBatch[SampleIndex], 'image')
        * Get the result image.
        get_dict_object (SegmentationImage, DLResultBatch[SampleIndex], 'segmentation_image')
        * Postprocessing: Get the segmented regions for each class.
        threshold (SegmentationImage, ClassRegions, ClassIDs, ClassIDs)
        * Display the results.
        dev_display_dl_data (DLSampleBatch[SampleIndex], DLResultBatch[SampleIndex], DatasetInfo, 'segmentation_image_result', GenParamDisplay, WindowHandleDict)
        get_dict_tuple (WindowHandleDict, 'segmentation_image_result', WindowHandles)
        dev_set_window (WindowHandles[0])
        * Separate any components of the class regions
        * and display the result regions as well as their area.
        * Get the area of the class regions.
        region_features (ClassRegions, 'area', Areas)
        * Here, we do not display the first class, since it is the background class
        * and we only want to display the defect regions.
        for ClassIndex := 1 to |Areas| - 1 by 1
            if (Areas[ClassIndex] > 0)
                select_obj (ClassRegions, ClassRegion, ClassIndex + 1)
                * Get the connected components of the segmented class region.
                connection (ClassRegion, ConnectedRegions)
                area_center (ConnectedRegions, Area, Row, Column)
                for ConnectIndex := 0 to |Area| - 1 by 1
                    select_obj (ConnectedRegions, CurrentRegion, ConnectIndex + 1)
                    dev_disp_text (ClassNames[ClassIndex] + '\narea: ' + Area[ConnectIndex] + ' px', 'image', Row[ConnectIndex] - 10, Column[ConnectIndex] + 10, 'black', [], [])
                endfor
            endif
        endfor
        * Display whether the part is OK or not.
        dev_display_ok_nok (Areas, WindowHandles[0])
        stop ()
    endfor
endfor
* Close the windows.
dev_close_window_dict (WindowHandleDict)
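dev_display_ok_nok only displays the decision. In a real application you would usually derive an OK/NG result from the defect areas yourself, inside the per-sample loop right after region_features. A minimal sketch; the 50-pixel threshold is only a placeholder and has to be tuned for the bottle caps:

* Simple OK/NG decision from the class areas (sketch; place inside the per-sample loop).
* Areas[0] belongs to the background class, so only the defect classes are checked.
MinDefectArea := 50
DefectAreas := Areas[1:|Areas| - 1]
if (max(DefectAreas) > MinDefectArea)
    Decision := 'NG'
else
    Decision := 'OK'
endif
dev_disp_text ('Result: ' + Decision, 'window', 'top', 'left', 'black', [], [])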

That wraps up this article on the Halcon 20.11 deep learning semantic segmentation model; hopefully it is of some help.



http://www.chinasem.cn/article/1113764
