[iOS] Face Detection After Taking a Photo

2024-08-20 23:32
Tags: detection, iOS, photo capture, face

This article walks through detecting faces in a photo right after it is captured on iOS; hopefully it offers a useful reference for developers tackling the same problem.



Demo:http://download.csdn.net/detail/u012881779/9677467



#import "FaceStreamViewController.h"
#import <AVFoundation/AVFoundation.h>

@interface FaceStreamViewController () <AVCaptureMetadataOutputObjectsDelegate, UIAlertViewDelegate>

// AVCaptureSession coordinates the data flow between the input and output devices
@property (nonatomic, strong) AVCaptureSession           *session;
// AVCaptureDeviceInput is the input stream
@property (nonatomic, strong) AVCaptureDeviceInput       *videoInput;
// Still-image output stream; this camera only takes photos, so this object is all we need
@property (nonatomic, strong) AVCaptureStillImageOutput  *stillImageOutput;
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *previewLayer;
// Shutter button
@property (nonatomic, strong) UIButton                   *shutterButton;
@property (nonatomic, strong) UITapGestureRecognizer     *tapGesture;
@property (weak, nonatomic) IBOutlet UIButton            *tapPaizhaoBut;

@end

@implementation FaceStreamViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    [self session];
    [self swapFrontAndBackCameras];
}

// Shutter button tapped
- (IBAction)tapShutterCameraAction:(id)sender {
    [self shutterCamera];
}

- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];
    [self.session startRunning];
}

- (void)dealloc {
    [self releaseAction];
}

- (void)releaseAction {
    self.session = nil;
    self.videoInput = nil;
    self.stillImageOutput = nil;
    self.previewLayer = nil;
    self.shutterButton = nil;
    self.tapGesture = nil;
}

- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    self.tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(onViewClicked:)];
    [self.view addGestureRecognizer:self.tapGesture];
}

- (void)onViewClicked:(id)sender {
    [self swapFrontAndBackCameras];
}

// Switching between front and back cameras
- (AVCaptureDevice *)cameraWithPosition:(AVCaptureDevicePosition)position {
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in devices) {
        if (device.position == position) {
            return device;
        }
    }
    return nil;
}

// Switch to the front camera
- (void)swapFrontAndBackCameras {
    // Assume the session is already running
    NSArray *inputs = self.session.inputs;
    for (AVCaptureDeviceInput *input in inputs) {
        AVCaptureDevice *device = input.device;
        if ([device hasMediaType:AVMediaTypeVideo]) {
            AVCaptureDevicePosition position = device.position;
            AVCaptureDevice *newCamera = nil;
            AVCaptureDeviceInput *newInput = nil;
            if (position == AVCaptureDevicePositionFront) {
                newCamera = [self cameraWithPosition:AVCaptureDevicePositionBack];
            } else {
                newCamera = [self cameraWithPosition:AVCaptureDevicePositionFront];
            }
            newInput = [AVCaptureDeviceInput deviceInputWithDevice:newCamera error:nil];
            // beginConfiguration ensures that pending changes are not applied immediately
            [self.session beginConfiguration];
            [self.session removeInput:input];
            [self.session addInput:newInput];
            // Changes take effect once the outermost commitConfiguration is invoked
            [self.session commitConfiguration];
            break;
        }
    }
}

- (AVCaptureSession *)session {
    if (!_session) {
        // 1. Get the input device (the camera)
        AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        // 2. Create the input object from the device
        self.videoInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
        if (self.videoInput == nil) {
            return nil;
        }
        // 3. Create the output object
        self.stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
        // Output settings: AVVideoCodecJPEG means photos are produced in JPEG format
        NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
        [self.stillImageOutput setOutputSettings:outputSettings];
        // 4. Create the session (the bridge between input and output)
        AVCaptureSession *session = [[AVCaptureSession alloc] init];
        // High-quality capture; AVCaptureSessionPresetHigh is the default, so this line is optional
        [session setSessionPreset:AVCaptureSessionPresetHigh];
        // 5. Add the input and output to the session (checking that the session can accept them)
        if ([session canAddInput:self.videoInput]) {
            [session addInput:self.videoInput];
        }
        if ([session canAddOutput:self.stillImageOutput]) {
            [session addOutput:self.stillImageOutput];
        }
        // 6. Create the preview layer
        self.previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
        [self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
        self.previewLayer.frame = [UIScreen mainScreen].bounds;
        [self.view.layer insertSublayer:self.previewLayer atIndex:0];
        _session = session;
    }
    return _session;
}

- (void)shutterCamera {
    AVCaptureConnection *videoConnection = [self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
    if (!videoConnection) {
        NSLog(@"take photo failed!");
        return;
    }
    [self.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
        if (imageDataSampleBuffer == NULL) {
            return;
        }
        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
        UIImage *imagevv = [UIImage imageWithData:imageData];
        NSLog(@"\nGot the photo");
        imagevv = [self fixOrientation:imagevv];
        // Returns a cropped image if the photo contains a face, otherwise nil
        imagevv = [self judgeInPictureContainImage:imagevv];
        if (imagevv) {
            imageData = UIImageJPEGRepresentation(imagevv, 0.5);
            NSString *sandboxPath = NSHomeDirectory();
            NSString *documentPath = [sandboxPath stringByAppendingPathComponent:@"Documents"];
            NSString *filePath = [documentPath stringByAppendingPathComponent:@"headerImgData.jpg"];
            NSFileManager *fileManager = [NSFileManager defaultManager];
            // Write the data to a file; the attributes dictionary lets you specify file attributes
            [fileManager createFileAtPath:filePath contents:imageData attributes:nil];
        }
    }];
}

- (UIImage *)fixOrientation:(UIImage *)aImage {
    // No-op if the orientation is already correct
    if (aImage.imageOrientation == UIImageOrientationUp) {
        return aImage;
    }
    // We need to calculate the proper transformation to make the image upright.
    // We do it in 2 steps: rotate if Left/Right/Down, and then flip if Mirrored.
    CGAffineTransform transform = CGAffineTransformIdentity;
    switch (aImage.imageOrientation) {
        case UIImageOrientationDown:
        case UIImageOrientationDownMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.width, aImage.size.height);
            transform = CGAffineTransformRotate(transform, M_PI);
            break;
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.width, 0);
            transform = CGAffineTransformRotate(transform, M_PI_2);
            break;
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
            transform = CGAffineTransformTranslate(transform, 0, aImage.size.height);
            transform = CGAffineTransformRotate(transform, -M_PI_2);
            break;
        default:
            break;
    }
    switch (aImage.imageOrientation) {
        case UIImageOrientationUpMirrored:
        case UIImageOrientationDownMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.width, 0);
            transform = CGAffineTransformScale(transform, -1, 1);
            break;
        case UIImageOrientationLeftMirrored:
        case UIImageOrientationRightMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.height, 0);
            transform = CGAffineTransformScale(transform, -1, 1);
            break;
        default:
            break;
    }
    // Now we draw the underlying CGImage into a new context, applying the transform
    // calculated above.
    CGContextRef ctx = CGBitmapContextCreate(NULL, aImage.size.width, aImage.size.height,
                                             CGImageGetBitsPerComponent(aImage.CGImage), 0,
                                             CGImageGetColorSpace(aImage.CGImage),
                                             CGImageGetBitmapInfo(aImage.CGImage));
    CGContextConcatCTM(ctx, transform);
    switch (aImage.imageOrientation) {
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
            // Grr...
            CGContextDrawImage(ctx, CGRectMake(0, 0, aImage.size.height, aImage.size.width), aImage.CGImage);
            break;
        default:
            CGContextDrawImage(ctx, CGRectMake(0, 0, aImage.size.width, aImage.size.height), aImage.CGImage);
            break;
    }
    // And now we just create a new UIImage from the drawing context
    CGImageRef cgimg = CGBitmapContextCreateImage(ctx);
    UIImage *img = [UIImage imageWithCGImage:cgimg];
    CGContextRelease(ctx);
    CGImageRelease(cgimg);
    return img;
}

// Does the picture contain a face? Returns a centered crop if so, otherwise nil
- (UIImage *)judgeInPictureContainImage:(UIImage *)thePicture {
    UIImage *newImg;
    UIImage *aImage = thePicture;
    CIImage *image = [CIImage imageWithCGImage:aImage.CGImage];
    NSDictionary *opts = [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:opts];
    // Get the face features
    NSArray *features = [detector featuresInImage:image];
    for (CIFaceFeature *f in features) {
        CGRect aRect = f.bounds;
        NSLog(@"%f, %f, %f, %f", aRect.origin.x, aRect.origin.y, aRect.size.width, aRect.size.height);
        // Crop a region of the chosen aspect ratio, centered vertically in the photo
        CGRect newRect = CGRectMake(0, 0, aImage.size.width, aImage.size.height);
        float blFloat = 320 / 320.0; // crop aspect ratio (1:1 here)
        newRect.size.width = aImage.size.width;
        float heiFloat = aImage.size.width / blFloat;
        newRect.size.height = heiFloat;
        float zFloat = (aImage.size.height - newRect.size.height) / 2.0;
        newRect.origin.y = zFloat;
        newImg = [self imageFromImage:aImage inRect:newRect];
        // Positions of the eyes and mouth
        if (f.hasLeftEyePosition)  NSLog(@"Left eye %g %g\n", f.leftEyePosition.x, f.leftEyePosition.y);
        if (f.hasRightEyePosition) NSLog(@"Right eye %g %g\n", f.rightEyePosition.x, f.rightEyePosition.y);
        if (f.hasMouthPosition)    NSLog(@"Mouth %g %g\n", f.mouthPosition.x, f.mouthPosition.y);
    }
    // No face was found in the original photo
    if (newImg == nil) {
        return nil;
    }
    // Make sure the cropped picture still contains the face
    if (![self judgeChangePictureHaveFace:newImg]) {
        newImg = nil;
    }
    return newImg;
}

- (BOOL)judgeChangePictureHaveFace:(UIImage *)thePicture {
    BOOL result = NO;
    UIImage *aImage = thePicture;
    CIImage *image = [CIImage imageWithCGImage:aImage.CGImage];
    NSDictionary *opts = [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:opts];
    // Get the face features
    NSArray *features = [detector featuresInImage:image];
    if (features.count > 0) {
        result = YES;
    }
    return result;
}

// Crop the image to a rect
- (UIImage *)imageFromImage:(UIImage *)image inRect:(CGRect)rect {
    CGImageRef sourceImageRef = [image CGImage];
    CGImageRef newImageRef = CGImageCreateWithImageInRect(sourceImageRef, rect);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);
    return newImage;
}

@end

Sample screenshot: (image omitted)



That concludes this article on face detection after taking a photo on iOS; hopefully it is of some help to fellow developers!



http://www.chinasem.cn/article/1091487
