Jetson Nano [15] Installing Dlib and a Demo

2024-03-05 18:48
Tags: installation, demo, Jetson Nano, Dlib

This post walks through installing Dlib on the Jetson Nano and running a small face recognition demo; hopefully it is a useful reference for anyone setting up the same stack.

Contents

        • Dlib introduction
        • Installation on the Nano
          • Basic environment setup
          • Download and extract the archive
          • Patch the source
          • Install
          • Install the face_recognition library
        • Demo
          • Code

  • Dlib on GitHub

  • Jetson Nano installation guide

Dlib introduction
  • For an introduction to Dlib, see this reposted write-up.
  • When I find time I will study it properly and organize the key points here (placeholder reserved).
Installation on the Nano
  • The installation follows the Nano installation guide linked above; this post is essentially an excerpt, translation, and explanation of that guide. I did go through every step myself.
  • Basic environment setup
    • This assumes the OS image and the basic environment are already installed.
sudo apt-get update
sudo apt-get install python3-pip cmake libopenblas-dev liblapack-dev libjpeg-dev
  • installSwapfile: since I had already set up a swap file for earlier object-detection experiments, I skipped this step (a minimal manual alternative is sketched below).
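If you skipped the installSwapfile script, here is a minimal manual sketch of the same idea. It is my own addition, not from the guide; the 4 GB size and the /mnt/4GB.swap path are arbitrary example choices.

# Create and enable a 4 GB swap file (size and path are example choices)
sudo fallocate -l 4G /mnt/4GB.swap
sudo chmod 600 /mnt/4GB.swap
sudo mkswap /mnt/4GB.swap
sudo swapon /mnt/4GB.swap
# Optionally make it persistent across reboots
echo '/mnt/4GB.swap none swap sw 0 0' | sudo tee -a /etc/fstab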

  • Download and extract the archive
wget http://dlib.net/files/dlib-19.17.tar.bz2 
tar jxvf dlib-19.17.tar.bz2 
cd dlib-19.17
  • The official guide downloads this with wget; I downloaded it directly in a browser instead (it is a static file, so the link works as-is). Going through a proxy was a bit faster, though the archive is small anyway.
  • Patch the source
    • This is the key step. The README on GitHub mentions it as well: there is a cuDNN issue on the Nano, and commenting out the statement on line 854 lets the build succeed. Having hit build problems like this many times, I appreciate that it is called out explicitly, since it saves a lot of time. (A non-interactive way to make the same edit is sketched after this snippet.)
# gedit is not required; any editor works. I used vim. Just open the file:
gedit dlib/cuda/cudnn_dlibapi.cpp
# and comment out this line (around line 854):
forward_algo = forward_best_algo;
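If you prefer not to open an editor, the same edit can be made from the shell. This is just a convenience sketch and assumes the offending statement really is on line 854 in this dlib release; verify the line number in an editor first if unsure.

# Prefix line 854 of cudnn_dlibapi.cpp with // (check the line number first)
sed -i '854s|^|//|' dlib/cuda/cudnn_dlibapi.cpp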
  • Install
    • Go back to the top-level directory (the one containing setup.py) and start the install:
    • sudo python3 setup.py install
    • The guide has an amusing note: "This will take around 30–60 minutes to finish and your Jetson Nano might get hot, but just let it run." In my case it took about 10 minutes, the board did not get particularly hot, and the install went through without a hitch. A quick sanity check is shown below.
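The guide itself has no verification step; a quick check I would suggest (my own addition) is to confirm that the Python module imports and that CUDA support was compiled in:

# Should print the dlib version and True if CUDA support was built in
python3 -c "import dlib; print(dlib.__version__, dlib.DLIB_USE_CUDA)"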
  • Install the face_recognition library
    • The guide says to also install the face_recognition library; once that is done everything is ready to use.
    • sudo pip3 install face_recognition (I did not test whether sudo is strictly necessary; I simply kept it). A quick import check is shown below.
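Not in the guide, but a one-liner to confirm the library imports cleanly before moving on to the demo:

# A successful import means dlib and face_recognition are wired up correctly
python3 -c "import face_recognition; print('face_recognition OK')"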
Demo
  • First, a screenshot of the result. It was taken with a Raspberry Pi camera module; the lighting made the picture quite dark, but the face is still recognized.
    [Screenshot: recognition result captured with the Raspberry Pi camera]

  • Code
    • The source code is available here: link
    • The official guide fetches it with wget -O doorcam.py tiny.cc/doorcam, but some mysterious force seems to block that link.
    • The source follows, with even the comments left untouched, so you can use it as-is. How to run it is noted after the listing.
import face_recognition
import cv2
from datetime import datetime, timedelta
import numpy as np
import platform
import pickle

# Our list of known face encodings and a matching list of metadata about each face.
known_face_encodings = []
known_face_metadata = []


def save_known_faces():
    with open("known_faces.dat", "wb") as face_data_file:
        face_data = [known_face_encodings, known_face_metadata]
        pickle.dump(face_data, face_data_file)
        print("Known faces backed up to disk.")


def load_known_faces():
    global known_face_encodings, known_face_metadata

    try:
        with open("known_faces.dat", "rb") as face_data_file:
            known_face_encodings, known_face_metadata = pickle.load(face_data_file)
            print("Known faces loaded from disk.")
    except FileNotFoundError as e:
        print("No previous face data found - starting with a blank known face list.")
        pass


def running_on_jetson_nano():
    # To make the same code work on a laptop or on a Jetson Nano, we'll detect when we are running on the Nano
    # so that we can access the camera correctly in that case.
    # On a normal Intel laptop, platform.machine() will be "x86_64" instead of "aarch64"
    return platform.machine() == "aarch64"


def get_jetson_gstreamer_source(capture_width=1280, capture_height=720, display_width=1280, display_height=720, framerate=60, flip_method=0):
    """
    Return an OpenCV-compatible video source description that uses gstreamer to capture video from the camera on a Jetson Nano
    """
    return (
            f'nvarguscamerasrc ! video/x-raw(memory:NVMM), ' +
            f'width=(int){capture_width}, height=(int){capture_height}, ' +
            f'format=(string)NV12, framerate=(fraction){framerate}/1 ! ' +
            f'nvvidconv flip-method={flip_method} ! ' +
            f'video/x-raw, width=(int){display_width}, height=(int){display_height}, format=(string)BGRx ! ' +
            'videoconvert ! video/x-raw, format=(string)BGR ! appsink'
            )


def register_new_face(face_encoding, face_image):
    """
    Add a new person to our list of known faces
    """
    # Add the face encoding to the list of known faces
    known_face_encodings.append(face_encoding)

    # Add a matching dictionary entry to our metadata list.
    # We can use this to keep track of how many times a person has visited, when we last saw them, etc.
    known_face_metadata.append({
        "first_seen": datetime.now(),
        "first_seen_this_interaction": datetime.now(),
        "last_seen": datetime.now(),
        "seen_count": 1,
        "seen_frames": 1,
        "face_image": face_image,
    })


def lookup_known_face(face_encoding):
    """
    See if this is a face we already have in our face list
    """
    metadata = None

    # If our known face list is empty, just return nothing since we can't possibly have seen this face.
    if len(known_face_encodings) == 0:
        return metadata

    # Calculate the face distance between the unknown face and every face on in our known face list
    # This will return a floating point number between 0.0 and 1.0 for each known face. The smaller the number,
    # the more similar that face was to the unknown face.
    face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)

    # Get the known face that had the lowest distance (i.e. most similar) from the unknown face.
    best_match_index = np.argmin(face_distances)

    # If the face with the lowest distance had a distance under 0.6, we consider it a face match.
    # 0.6 comes from how the face recognition model was trained. It was trained to make sure pictures
    # of the same person always were less than 0.6 away from each other.
    # Here, we are loosening the threshold a little bit to 0.65 because it is unlikely that two very similar
    # people will come up to the door at the same time.
    if face_distances[best_match_index] < 0.65:
        # If we have a match, look up the metadata we've saved for it (like the first time we saw it, etc)
        metadata = known_face_metadata[best_match_index]

        # Update the metadata for the face so we can keep track of how recently we have seen this face.
        metadata["last_seen"] = datetime.now()
        metadata["seen_frames"] += 1

        # We'll also keep a total "seen count" that tracks how many times this person has come to the door.
        # But we can say that if we have seen this person within the last 5 minutes, it is still the same
        # visit, not a new visit. But if they go away for awhile and come back, that is a new visit.
        if datetime.now() - metadata["first_seen_this_interaction"] > timedelta(minutes=5):
            metadata["first_seen_this_interaction"] = datetime.now()
            metadata["seen_count"] += 1

    return metadata


def main_loop():
    # Get access to the webcam. The method is different depending on if this is running on a laptop or a Jetson Nano.
    if running_on_jetson_nano():
        # Accessing the camera with OpenCV on a Jetson Nano requires gstreamer with a custom gstreamer source string
        video_capture = cv2.VideoCapture(get_jetson_gstreamer_source(), cv2.CAP_GSTREAMER)
    else:
        # Accessing the camera with OpenCV on a laptop just requires passing in the number of the webcam (usually 0)
        # Note: You can pass in a filename instead if you want to process a video file instead of a live camera stream
        video_capture = cv2.VideoCapture(0)

    # Track how long since we last saved a copy of our known faces to disk as a backup.
    number_of_faces_since_save = 0

    while True:
        # Grab a single frame of video
        ret, frame = video_capture.read()

        # Resize frame of video to 1/4 size for faster face recognition processing
        small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)

        # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
        rgb_small_frame = small_frame[:, :, ::-1]

        # Find all the face locations and face encodings in the current frame of video
        face_locations = face_recognition.face_locations(rgb_small_frame)
        face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)

        # Loop through each detected face and see if it is one we have seen before
        # If so, we'll give it a label that we'll draw on top of the video.
        face_labels = []
        for face_location, face_encoding in zip(face_locations, face_encodings):
            # See if this face is in our list of known faces.
            metadata = lookup_known_face(face_encoding)

            # If we found the face, label the face with some useful information.
            if metadata is not None:
                time_at_door = datetime.now() - metadata['first_seen_this_interaction']
                face_label = f"At door {int(time_at_door.total_seconds())}s"

            # If this is a brand new face, add it to our list of known faces
            else:
                face_label = "New visitor!"

                # Grab the image of the the face from the current frame of video
                top, right, bottom, left = face_location
                face_image = small_frame[top:bottom, left:right]
                face_image = cv2.resize(face_image, (150, 150))

                # Add the new face to our known face data
                register_new_face(face_encoding, face_image)

            face_labels.append(face_label)

        # Draw a box around each face and label each face
        for (top, right, bottom, left), face_label in zip(face_locations, face_labels):
            # Scale back up face locations since the frame we detected in was scaled to 1/4 size
            top *= 4
            right *= 4
            bottom *= 4
            left *= 4

            # Draw a box around the face
            cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)

            # Draw a label with a name below the face
            cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
            cv2.putText(frame, face_label, (left + 6, bottom - 6), cv2.FONT_HERSHEY_DUPLEX, 0.8, (255, 255, 255), 1)

        # Display recent visitor images
        number_of_recent_visitors = 0
        for metadata in known_face_metadata:
            # If we have seen this person in the last minute, draw their image
            if datetime.now() - metadata["last_seen"] < timedelta(seconds=10) and metadata["seen_frames"] > 5:
                # Draw the known face image
                x_position = number_of_recent_visitors * 150
                frame[30:180, x_position:x_position + 150] = metadata["face_image"]
                number_of_recent_visitors += 1

                # Label the image with how many times they have visited
                visits = metadata['seen_count']
                visit_label = f"{visits} visits"
                if visits == 1:
                    visit_label = "First visit"
                cv2.putText(frame, visit_label, (x_position + 10, 170), cv2.FONT_HERSHEY_DUPLEX, 0.6, (255, 255, 255), 1)

        if number_of_recent_visitors > 0:
            cv2.putText(frame, "Visitors at Door", (5, 18), cv2.FONT_HERSHEY_DUPLEX, 0.8, (255, 255, 255), 1)

        # Display the final frame of video with boxes drawn around each detected face
        cv2.imshow('Video', frame)

        # Hit 'q' on the keyboard to quit!
        if cv2.waitKey(1) & 0xFF == ord('q'):
            save_known_faces()
            break

        # We need to save our known faces back to disk every so often in case something crashes.
        if len(face_locations) > 0 and number_of_faces_since_save > 100:
            save_known_faces()
            number_of_faces_since_save = 0
        else:
            number_of_faces_since_save += 1

    # Release handle to the webcam
    video_capture.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    load_known_faces()
    main_loop()
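To run it, save the script under the filename the guide uses (doorcam.py) and start it with python3. Note that cv2.imshow needs a desktop session or X forwarding to show the preview window; press q in the window to quit and save the known-face data.

# Run the demo (requires a display for the OpenCV preview window)
python3 doorcam.py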

That wraps up installing Dlib on the Jetson Nano and running the face recognition demo; hopefully it is useful to anyone setting up the same thing.



