Comparing Iris Recognition Test Results Between C++ and Python

2023-11-05 22:20

This article compares the test results of iris recognition implemented in C++ and in Python. I hope it offers a useful reference to developers working on similar problems.

Contents

I. Overview

II. Test Procedure

1. Python-based iris recognition

2. C++-based iris recognition

III. Test Results


I. Overview

This article tests iris recognition implemented in C++ and in Python and compares the results. The C++ test project is the iris recognition project based on C++ and OpenCV2; the Python test code is https://github.com/thuyngch/Iris-Recognition.

II. Test Procedure

To test both implementations on the same data, the Python version is taken as the reference: the test images it selects are saved to a list file, and the C++ version is then run on exactly those images. The evaluation metric also follows the one used in the Python version:

fscore = 2*precision*recall / (precision+recall)
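For reference, here is a minimal sketch of how these metrics can be derived from a binary decision matrix (pairs judged to be the same person) and a binary ground-truth matrix; the function and variable names are illustrative only and do not appear in either code base.

#include <iostream>

// Minimal sketch: derive accuracy, precision, recall and fscore from an n x n
// binary decision matrix (pairs judged "same person") and an n x n binary
// ground-truth matrix, both stored row-major.
void compute_metrics(const bool* decision, const bool* truth, int n)
{
    int correct = 0, predicted_pos = 0, actual_pos = 0, true_pos = 0;
    for (int k = 0; k < n * n; ++k) {
        if (decision[k] == truth[k]) correct++;       // decision agrees with ground truth
        if (decision[k])             predicted_pos++; // pairs declared to match
        if (truth[k])                actual_pos++;    // pairs that truly match
        if (decision[k] && truth[k]) true_pos++;      // correctly declared matches
    }
    float accuracy  = float(correct)  / float(n * n);
    float precision = float(true_pos) / float(predicted_pos);
    float recall    = float(true_pos) / float(actual_pos);
    float fscore    = 2 * precision * recall / (precision + recall);
    std::cout << "accuracy: " << accuracy << "  precision: " << precision
              << "  recall: " << recall << "  fscore: " << fscore << std::endl;
}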

1. Python-based iris recognition

The test is based on the code at https://github.com/thuyngch/Iris-Recognition; the usage instructions in the original repository are already quite detailed. The environment used here is Windows 10 with Python 3.7.3. A few points to note:

(1) The iris recognition evaluation script is eval_casia1.py; run it from the python directory with python3 eval_casia1.py. Note the path where the CASIA-IrisV1 test dataset is stored in the code:

CASIA1_DIR = "../CASIA1"

(2) Running python3 eval_casia1.py directly raises an error. Following the error message, a main guard was added (on Windows this guard is required because the script uses multiprocessing, which re-imports the main module in its worker processes):

if __name__ == '__main__':

so the code becomes:

if __name__ == '__main__':
    #------------------------------------------------------------------------------
    #   Main execution
    #------------------------------------------------------------------------------
    # Get identities of MMU2 dataset
    identities = glob(os.path.join(CASIA1_DIR, "**"))
    identities = sorted([os.path.basename(identity) for identity in identities])
    n_identities = len(identities)
    print("Number of identities:", n_identities)

    # Construct a dictionary of files
    files_dict = {}
    image_files = []
    for identity in identities:
        files = glob(os.path.join(CASIA1_DIR, identity, "*.*"))
        shuffle(files)
        files_dict[identity] = files[:N_IMAGES]
        image_files += files[:N_IMAGES]

    n_image_files = len(image_files)
    print("Number of image files:", n_image_files)

    # save test images to file 'pro_imgs.txt'
    txt_open = open('./pro_imgs.txt', 'w')
    for i in image_files:
        line = i + '\n'
        txt_open.write(line)
    txt_open.close()

    # Ground truth
    ground_truth = np.zeros([n_image_files, n_image_files], dtype=int)
    for i in range(ground_truth.shape[0]):
        for j in range(ground_truth.shape[1]):
            if i//N_IMAGES == j//N_IMAGES:
                ground_truth[i, j] = 1

    # Evaluate parameters
    pools = Pool(processes=cpu_count())
    best_results = []
    for eye_threshold in tqdm(eyelashes_thresholds, total=len(eyelashes_thresholds)):
        # Extract features
        args = zip(image_files, repeat(eye_threshold), repeat(False))
        features = list(pools.map(pool_func_extract_feature, args))

        # Calculate the distances
        args = []
        for i in range(n_image_files):
            for j in range(n_image_files):
                if i>=j:
                    continue
                arg = (features[i][0], features[i][1], features[j][0], features[j][1])
                args.append(arg)
        distances = pools.map(pool_func_calHammingDist, args)

        # Construct a distance matrix
        k = 0
        dist_mat = np.zeros([n_image_files, n_image_files])
        for i in range(n_image_files):
            for j in range(n_image_files):
                if i<j:
                    dist_mat[i, j] = distances[k]
                    k += 1
                elif i>j:
                    dist_mat[i, j] = dist_mat[j, i]

        # Metrics
        accuracies, precisions, recalls, fscores = [], [], [], []
        for threshold in thresholds:
            decision_map = (dist_mat<=threshold).astype(int)
            accuracy = (decision_map==ground_truth).sum() / ground_truth.size
            precision = (ground_truth*decision_map).sum() / decision_map.sum()
            recall = (ground_truth*decision_map).sum() / ground_truth.sum()
            fscore = 2*precision*recall / (precision+recall)
            accuracies.append(accuracy)
            precisions.append(precision)
            recalls.append(recall)
            fscores.append(fscore)

        # Save the best result
        best_fscore = max(fscores)
        best_threshold = thresholds[fscores.index(best_fscore)]
        best_accuracy = accuracies[fscores.index(best_fscore)]
        best_precision = precisions[fscores.index(best_fscore)]
        best_recall = recalls[fscores.index(best_fscore)]
        best_results.append((eye_threshold, best_threshold, best_fscore, best_accuracy, best_precision, best_recall))

    # Show the final best result
    eye_thresholds = [item[0] for item in best_results]
    thresholds = [item[1] for item in best_results]
    fscores = [item[2] for item in best_results]
    accuracies = [item[3] for item in best_results]
    precisions = [item[4] for item in best_results]
    recalls = [item[5] for item in best_results]
    print("Maximum fscore: ", max(fscores))
    print("Best accuracy: ", accuracies[fscores.index(max(fscores))])
    print("Best precision: ", precisions[fscores.index(max(fscores))])
    print("Best recall: ", recalls[fscores.index(max(fscores))])
    print("Best eye_threshold: ", eye_thresholds[fscores.index(max(fscores))])
    print("Best threshold: ", thresholds[fscores.index(max(fscores))])

The CASIA-IrisV1 dataset contains 108 subjects with 7 iris images each; the code randomly selects 4 of the 7 images per subject:

N_IMAGES = 4

so the total number of test images is:

108*4=432

The randomly selected images are saved to a list file:

# save test images to file 'pro_imgs.txt'
txt_open = open('./pro_imgs.txt', 'w')
for i in image_files:
    line = i + '\n'
    txt_open.write(line)
txt_open.close()

The code has two parameters: eyelashes_thresholds and thresholds. eyelashes_thresholds is the eyelash threshold, ranging over [10, 250] with 25 values:

eyelashes_thresholds = np.linspace(start=10, stop=250, num=25)

[ 10. 20. 30. 40. 50. 60. 70. 80. 90. 100. 110. 120. 130. 140. 150. 160. 170. 180. 190. 200. 210. 220. 230. 240. 250.]

thresholds is the decision threshold for whether two irises belong to the same person: if the distance between two irises is below the threshold they are judged to be the same person, otherwise they are considered different people. It ranges over [0, 1] with 100 values, giving a step of:

(1-0)/(100-1)= 0.01010101

thresholds = np.linspace(start=0.0, stop=1.0, num=100)

[0. 0.01010101 0.02020202 0.03030303 0.04040404 0.05050505 0.06060606 0.07070707 0.08080808 0.09090909 0.1010101 0.11111111 0.12121212 0.13131313 0.14141414 0.15151515 0.16161616 0.17171717 0.18181818 0.19191919 0.2020202 0.21212121 0.22222222 0.23232323 0.24242424 0.25252525 0.26262626 0.27272727 0.28282828 0.29292929 0.3030303 0.31313131 0.32323232 0.33333333 0.34343434 0.35353535 0.36363636 0.37373737 0.38383838 0.39393939 0.4040404 0.41414141 0.42424242 0.43434343 0.44444444 0.45454545 0.46464646 0.47474747 0.48484848 0.49494949 0.50505051 0.51515152 0.52525253 0.53535354 0.54545455 0.55555556 0.56565657 0.57575758 0.58585859 0.5959596 0.60606061 0.61616162 0.62626263 0.63636364 0.64646465 0.65656566 0.66666667 0.67676768 0.68686869 0.6969697 0.70707071 0.71717172 0.72727273 0.73737374 0.74747475 0.75757576 0.76767677 0.77777778 0.78787879 0.7979798 0.80808081 0.81818182 0.82828283 0.83838384 0.84848485 0.85858586 0.86868687 0.87878788 0.88888889 0.8989899 0.90909091 0.91919192 0.92929293 0.93939394 0.94949495 0.95959596 0.96969697 0.97979798 0.98989899 1. ]

(3) Pay attention to the working directory when running the script.

2. C++-based iris recognition

See the iris recognition project based on C++ and OpenCV2 for the full code. In the original code, each test image is only compared with the previous one; to stay consistent with the Python code, the following modifications are needed.

(1) Copy the CASIA1 test images into the data folder.

(2) Modify the process.ini file. Set the image list file to the one saved by the Python code:

Load List of images = pro_imgs.txt

Set the image directory:

Load original images = CASIA1/

Comment out all of the save paths:

#Save segmented images = Output/SegmentedImages/

#Save contours parameters = Output/CircleParameters/

#Save masks of iris = Output/Masks/

#Save normalized images = Output/NormalizedImages/

#Save normalized masks = Output/NormalizedMasks/

#Save iris codes = Output/IrisCodes/

#Save matching scores = Output/score.txt

(3) Add my_run(). Declare void my_run(); in OsiManager.h, as sketched below.
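The original figure is not reproduced here; this is only a minimal sketch of what the added declaration looks like. The other members shown are abbreviated placeholders, not the actual contents of OsiManager.h:

// OsiManager.h (excerpt, abbreviated)
class OsiManager
{
public:
    // ... existing members of the class ...
    void run();     // original processing entry point
    void my_run();  // added: all-pairs evaluation described below
    // ...
};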

Then add the following implementation to OsiManager.cpp:

void OsiManager::my_run()
{
    cout << endl;
    cout << "================" << endl;
    cout << "Start processing" << endl;
    cout << "================" << endl;
    cout << endl;

    // If matching is requested, create a file
    ofstream result_matching;
    if (mProcessMatching && mOutputFileMatchingScores != "") {
        try {
            result_matching.open(mOutputFileMatchingScores.c_str(), ios::out);
        }
        catch (exception & e) {
            cout << e.what() << endl;
            throw runtime_error("Cannot create the file for matching scores : " + mOutputFileMatchingScores);
        }
    }

    int num_test_imgs = mListOfImages.size();
    int N_IMAGES = 4;
    float threshold = 0.393939393939394;
    int decision_map_num = 0;
    int ground_truth_num = 0;

    // ground_truth, dist_mat and decision_map initialization
    bool (*ground_truth)[432] = new bool[num_test_imgs][432];
    float (*dist_mat)[432] = new float[num_test_imgs][432];
    bool (*decision_map)[432] = new bool[num_test_imgs][432];
    for (int i = 0; i < num_test_imgs; ++i) {
        for (int j = 0; j < num_test_imgs; ++j) {
            if (i / N_IMAGES == j / N_IMAGES) {
                ground_truth[i][j] = 1;
                ground_truth_num++;
            }
            else
                ground_truth[i][j] = 0;
            dist_mat[i][j] = 0;
            decision_map[i][j] = 0;
        }
    }
    // end ground_truth and dist_mat initialization

    vector<OsiEye> eyr_res;
    cout << "Extract Features start!" << endl;
    for (int i = 0; i < num_test_imgs; i++) {
        OsiEye eye;
        processOneEye(mListOfImages[i], eye);
        eyr_res.push_back(eye);
        cout << i + 1 << " pic is over!!!" << endl;
    }
    cout << "Extract Features end!!!" << endl;

    for (int i = 0; i < num_test_imgs; i++) {
        for (int j = 0; j < num_test_imgs; ++j) {
            cout << "start: " << i << " " << j << " ";
            if (i < j) {
                OsiEye eyr_i = eyr_res[i];
                OsiEye eyr_j = eyr_res[j];
                float score = (eyr_res[i]).match((eyr_res[j]), mpApplicationPoints);
                dist_mat[i][j] = score;
            }
            else if (i > j) {
                dist_mat[i][j] = dist_mat[j][i];
            }
            if (dist_mat[i][j] < threshold) {
                decision_map[i][j] = 1;
                decision_map_num++;
            }
            cout << dist_mat[i][j] << " " << decision_map[i][j] << endl;
        }
    }

    // result
    int accuracy_num = 0;
    int precision_num = 0;
    for (int i = 0; i < num_test_imgs; i++) {
        for (int j = 0; j < num_test_imgs; ++j) {
            if (decision_map[i][j] == ground_truth[i][j]) {
                accuracy_num++;
            }
            if (decision_map[i][j] && ground_truth[i][j]) {
                precision_num++;
            }
        }
    }
    float accuracy = float(accuracy_num) / float(num_test_imgs*num_test_imgs);
    float precision_res = float(precision_num) / float(decision_map_num);
    float recall = float(precision_num) / float(ground_truth_num);
    float fscore = 2 * precision_res*recall / (precision_res + recall);
    cout << "Result: " << endl
         << "accuracy: " << accuracy << "\t"
         << "precision_res: " << precision_res << "\t"
         << "recall: " << recall << "\t"
         << "fscore: " << fscore << endl;
    // end result

    // Sweep the threshold from 0 to 1 with a step of about 0.01 to find the best
    // fscore and the corresponding threshold
    float fscore_best = 0.0;
    float threshold_best = 0.0;
    float recall_best = 0.0;
    float precision_best = 0.0;
    float accuracy_best = 0.0;
    for (threshold = 0.0; threshold < 1.0; threshold += 0.01010101) {
        int accuracy_num = 0;
        int precision_num = 0;
        decision_map_num = 0;
        for (int i = 0; i < num_test_imgs; ++i) {
            for (int j = 0; j < num_test_imgs; ++j) {
                if (dist_mat[i][j] < threshold) {
                    decision_map[i][j] = 1;
                    decision_map_num++;
                }
                if (decision_map[i][j] == ground_truth[i][j]) {
                    accuracy_num++;
                }
                if (decision_map[i][j] && ground_truth[i][j]) {
                    precision_num++;
                }
            }
        }
        float accuracy = float(accuracy_num) / float(num_test_imgs*num_test_imgs);
        float precision_res = float(precision_num) / float(decision_map_num);
        float recall = float(precision_num) / float(ground_truth_num);
        float fscore = 2 * precision_res*recall / (precision_res + recall);
        cout << "Result:   " << "accuracy: " << accuracy << "\t"
             << "precision_res: " << precision_res << "\t"
             << "recall: " << recall << "\t"
             << "fscore: " << fscore << endl;
        if (fscore > fscore_best) {
            fscore_best = fscore;
            threshold_best = threshold;
            recall_best = recall;
            precision_best = precision_res;
            accuracy_best = accuracy;
        }
        for (int i = 0; i < num_test_imgs; ++i) {
            for (int j = 0; j < num_test_imgs; ++j) {
                decision_map[i][j] = 0;
            }
        }
    }
    cout << "Best fscore is: " << fscore_best << ", Best threshold is: " << threshold_best
         << ", Best recall is: " << recall_best << ", Best precision is: " << precision_best
         << ", Best accuracy is: " << accuracy_best << endl;

    // save dist_mat
    ofstream result_dist;
    string result_dist_path = "../data/dist_mat.txt";
    try {
        result_dist.open(result_dist_path.c_str(), ios::out);
    }
    catch (exception & e) {
        cout << e.what() << endl;
        throw runtime_error("Cannot create the file for result_dist_path scores : " + result_dist_path);
    }
    if (result_dist) {
        try {
            for (int i = 0; i < num_test_imgs; i++) {
                int j = 0;
                for (j = 0; j < num_test_imgs - 1; ++j) {
                    result_dist << dist_mat[i][j] << " ";
                }
                result_dist << dist_mat[i][j] << "\n";
            }
        }
        catch (exception & e) {
            cout << e.what() << endl;
            throw runtime_error("Error while saving result of matching in " + mOutputFileMatchingScores);
        }
    }
    // end save dist_mat

    cout << endl;
    cout << "==============" << endl;
    cout << "End processing" << endl;
    cout << "==============" << endl;
    cout << endl;

} // end of function

Note: only the thresholds parameter from the Python code is swept here to select the best result; all other parameters keep the defaults of the original C++ code.

(4) If the distance matrix has been saved by the code above (dist_mat.txt), the best threshold can be searched for offline with the following code:

void test_best_thr(string test_result_file)
{
    // Open the file
    ifstream file(test_result_file.c_str(), ifstream::in);
    if (!file.good())
        throw runtime_error("Cannot read configuration file " + test_result_file);

    int decision_map_num = 0;
    int ground_truth_num = 0;
    int N_IMAGES = 4;
    int num_test_imgs = 432; // total number of test images: N_IMAGES * 108 (108 = number of subjects, i.e. folders, in CASIA1)

    // ground_truth, dist_mat and decision_map initialization
    bool (*ground_truth)[432] = new bool[num_test_imgs][432]; // ground-truth matrix
    float (*dist_mat)[432] = new float[num_test_imgs][432];   // distance matrix read back from the result file
    bool (*decision_map)[432] = new bool[num_test_imgs][432]; // 0/1 decisions derived from dist_mat and the threshold
    for (int i = 0; i < num_test_imgs; ++i) {
        for (int j = 0; j < num_test_imgs; ++j) {
            if (i / N_IMAGES == j / N_IMAGES) {
                ground_truth[i][j] = 1;
                ground_truth_num++;
            }
            else
                ground_truth[i][j] = 0;
            dist_mat[i][j] = 0;
            decision_map[i][j] = 0;
        }
    }

    // Read the result file and fill dist_mat back in, following the format it was saved in
    int x = 0;
    bool flag = true;
    while (file.good() && !file.eof() && flag) {
        // Get the new line
        string line;
        getline(file, line);
        // Filter out comments
        if (!line.empty()) {
            int y = 0;
            size_t start = 0, index = line.find_first_of(" ", 0);
            while (index != line.npos) {
                if (start != index) {
                    std::string s = line.substr(start, index - start);
                    float res = atof(s.c_str());
                    dist_mat[x][y] = res;
                    y++;
                    start = index + 1;
                    index = line.find_first_of(" ", start);
                }
                else
                    break;
            }
            if (!line.substr(start).empty()) {
                std::string s = line.substr(start);
                float res = atof(s.c_str());
                dist_mat[x][y] = res;
                y++;
            }
            if (y != num_test_imgs) {
                cout << y << " != " << num_test_imgs << endl;
                flag = false;
                break;
            }
        }
        x++;
    }
    if (x != num_test_imgs) {
        cout << x << " != " << num_test_imgs << endl;
        return;
    }

    // Sweep the threshold from 0 to 1 with a step of about 0.01 to find the best
    // fscore and the corresponding threshold
    float fscore_best = 0.0;
    float threshold_best = 0.0;
    float recall_best = 0.0;
    float precision_best = 0.0;
    float accuracy_best = 0.0;
    for (float threshold = 0; threshold < 1.0; threshold += 0.01010101) {
        int accuracy_num = 0;
        int precision_num = 0;
        decision_map_num = 0;
        for (int i = 0; i < num_test_imgs; ++i) {
            for (int j = 0; j < num_test_imgs; ++j) {
                if (dist_mat[i][j] < threshold) {
                    decision_map[i][j] = 1;
                    decision_map_num++;
                }
                if (decision_map[i][j] == ground_truth[i][j]) {
                    accuracy_num++;
                }
                if (decision_map[i][j] && ground_truth[i][j]) {
                    precision_num++;
                }
            }
        }
        float accuracy = float(accuracy_num) / float(num_test_imgs*num_test_imgs);
        float precision_res = float(precision_num) / float(decision_map_num);
        float recall = float(precision_num) / float(ground_truth_num);
        float fscore = 2 * precision_res*recall / (precision_res + recall);
        cout << "Result:   " << "accuracy: " << accuracy << "\t"
             << "precision_res: " << precision_res << "\t"
             << "recall: " << recall << "\t"
             << "fscore: " << fscore << endl;
        if (fscore > fscore_best) {
            fscore_best = fscore;
            threshold_best = threshold;
            recall_best = recall;
            precision_best = precision_res;
            accuracy_best = accuracy;
        }
        for (int i = 0; i < num_test_imgs; ++i) {
            for (int j = 0; j < num_test_imgs; ++j) {
                decision_map[i][j] = 0;
            }
        }
    }
    cout << "Best fscore is: " << fscore_best << ", Best threshold is: " << threshold_best
         << ", Best recall is: " << recall_best << ", Best precision is: " << precision_best
         << ", Best accuracy is: " << accuracy_best << endl;
}
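As a usage sketch (assuming the distance matrix was saved by my_run() to the path shown above), the threshold search can then be re-run offline without recomputing any features:

// Illustrative call: re-evaluate the thresholds from the saved distance matrix.
test_best_thr("../data/dist_mat.txt");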

(5) Call it from main

Change osi.run() to osi.my_run();
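A minimal sketch of the change in main.cpp; the construction and configuration of the osi object stay exactly as in the original project:

// main.cpp (excerpt, illustrative)
// ... existing setup of the OsiManager instance `osi` is unchanged ...
// osi.run();    // original entry point
osi.my_run();    // modified: run the all-pairs evaluation instead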

III. Test Results

On CASIA-IrisV1, the C++ version performs better than the Python version. The specific reasons will be explained in a follow-up after a closer comparison of the two code bases.

That concludes this comparison of iris recognition test results between C++ and Python; I hope it is helpful to other developers.


