Ensenso Hand-Eye Calibration: A C++ Implementation

2024-01-27 13:48
Tags: implementation, calibration, cpp, hand-eye, ensenso

This article walks through a C++ implementation of hand-eye calibration for an Ensenso camera. I hope it serves as a useful reference for developers working on the same problem.

The underlying principle is described in the Ensenso SDK documentation; what follows is a simplified code implementation.

You need to replace the camera serial number with your own; mine is 194224.
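
The serial number appears as a hardcoded string in several places in the code below. If you are not sure which serial to use, here is a minimal sketch for listing all connected cameras (printCameraSerials is just a helper name chosen for this example, and it assumes the NxLib has already been initialized with nxLibInitialize):

#include <iostream>
#include "nxLib.h"

// Print the serial numbers of all connected cameras; each node under
// /Cameras/BySerialNo is named after the camera's serial number.
void printCameraSerials()
{
    NxLibItem cameras = NxLibItem()[itmCameras][itmBySerialNo];
    for (int i = 0; i < cameras.count(); i++)
    {
        std::cout << cameras[i].name() << std::endl;
    }
}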

Usage:

Mount a HALCON calibration plate on the robot arm, run the program, move the arm to a new position, and enter the arm's current pose; repeating this more than 5 times is sufficient.
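
The program asks for the pose as a position in meters plus a quaternion (rw, rx, ry, rz). If your robot's teach pendant reports the orientation as roll/pitch/yaw instead, you can convert it first. A small sketch using the tf helpers (fromXyzRpy is just a name chosen for this example):

#include <geometry_msgs/Pose.h>
#include <tf/transform_datatypes.h>

// Build the pose the program asks for from x/y/z in meters and roll/pitch/yaw in radians.
geometry_msgs::Pose fromXyzRpy(double x, double y, double z, double roll, double pitch, double yaw)
{
    geometry_msgs::Pose p;
    p.position.x = x;
    p.position.y = y;
    p.position.z = z;
    tf::quaternionTFToMsg(tf::createQuaternionFromRPY(roll, pitch, yaw), p.orientation);
    return p;
}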

If the calibration succeeds, the camera's Link parameter has been updated, and from then on the camera's coordinate system is unified with the robot base frame.
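
One way to check this after the program has finished is to read back the camera's Link node, which now holds the transformation written by StoreCalibration. A minimal sketch, assuming the camera with serial 194224 is open and printCameraLink is just a helper name for this example:

#include <iostream>
#include <string>
#include "nxLib.h"

// Print the camera's Link node (translation in millimeters, rotation as angle/axis)
// as written by StoreCalibration after a successful hand-eye calibration.
void printCameraLink(const std::string& serial)
{
    NxLibItem camera = NxLibItem()[itmCameras][itmBySerialNo][serial];
    std::cout << camera[itmLink].asJson(true) << std::endl;
}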

At the moment I move the robot manually, but you could also predefine a fixed set of poses and run the whole procedure automatically.
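
If you go that route, the input loop in the code below could iterate over a hardcoded list of poses instead of reading them from std::cin. A sketch is given here; makeCalibrationPoses and the numeric values are placeholders only, and actually commanding the robot to move to each pose depends on your robot's own API, which is not shown:

#include <vector>
#include <geometry_msgs/Pose.h>

// Placeholder list of robot poses (position in meters, orientation as a quaternion).
std::vector<geometry_msgs::Pose> makeCalibrationPoses()
{
    std::vector<geometry_msgs::Pose> poses;
    geometry_msgs::Pose p;
    p.position.x = 0.50; p.position.y = 0.10; p.position.z = 0.40;
    p.orientation.w = 1.0; p.orientation.x = 0.0; p.orientation.y = 0.0; p.orientation.z = 0.0;
    poses.push_back(p);
    // ...add at least five more poses that vary both position and orientation.
    return poses;
}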

If you run into problems, you can reach me on QQ: 1441405602.

#include <stdio.h>
#include <unistd.h>  // for sleep()
#include <cmath>
#include <iostream>
#include <string>
#include <vector>

#include "nxLib.h"

#include <ros/ros.h>
#include <geometry_msgs/Pose.h>
#include <geometry_msgs/PoseStamped.h>
#include <geometry_msgs/TransformStamped.h>
#include <tf/transform_datatypes.h>
#include <tf/transform_listener.h>
#include <tf/transform_broadcaster.h>
#include <tf2/LinearMath/Transform.h>
#include <tf2_ros/buffer.h>
#include <tf2_ros/transform_broadcaster.h>
#include <tf2_geometry_msgs/tf2_geometry_msgs.h>  // Needed for conversion from geometry_msgs to tf2::Transform

using namespace ros;
using namespace std;

bool poseIsValid(const tf::Pose& pose)
{
    auto origin = pose.getOrigin();
    if (std::isnan(origin.x()) || std::isnan(origin.y()) || std::isnan(origin.z()))
    {
        return false;
    }
    auto rotation = pose.getRotation();
    if (std::isnan(rotation.getAngle()))
    {
        return false;
    }
    auto rotationAxis = rotation.getAxis();
    if (std::isnan(rotationAxis.x()) || std::isnan(rotationAxis.y()) || std::isnan(rotationAxis.z()))
    {
        return false;
    }
    return true;
}

void writePoseToNxLib(tf::Pose const& pose, NxLibItem const& node)
{
    // Initialize the node to be empty. This is necessary, because there is a bug in some versions of the NxLib that
    // overwrites the whole transformation node with an identity transformation as soon as a new node in /Links gets
    // created.
    node.setNull();

    if (poseIsValid(pose))
    {
        auto origin = pose.getOrigin();
        node[itmTranslation][0] = origin.x() * 1000;  // ROS transformations are in meters,
        node[itmTranslation][1] = origin.y() * 1000;  // the NxLib expects millimeters.
        node[itmTranslation][2] = origin.z() * 1000;

        auto rotation = pose.getRotation();
        node[itmRotation][itmAngle] = rotation.getAngle();
        auto rotationAxis = rotation.getAxis();
        node[itmRotation][itmAxis][0] = rotationAxis.x();
        node[itmRotation][itmAxis][1] = rotationAxis.y();
        node[itmRotation][itmAxis][2] = rotationAxis.z();
    }
    else
    {
        // Use an identity transformation as a reasonable default value.
        node[itmTranslation][0] = 0;
        node[itmTranslation][1] = 0;
        node[itmTranslation][2] = 0;
        node[itmRotation][itmAngle] = 0;
        node[itmRotation][itmAxis][0] = 1;
        node[itmRotation][itmAxis][1] = 0;
        node[itmRotation][itmAxis][2] = 0;
    }
}

int main(void)
{
    try
    {
        // Initialize the NxLib and enumerate the cameras.
        nxLibInitialize(true);

        // Reference to the first camera in the node BySerialNo.
        NxLibItem root;
        NxLibItem camera = root[itmCameras][itmBySerialNo][0];

        // Open the Ensenso.
        NxLibCommand open(cmdOpen);
        open.parameters()[itmCameras] = camera[itmSerialNumber].asString();
        open.execute();

        // Move your robot into a suitable starting position here. Make sure that the pattern can be seen from the
        // selected position.
        vector<tf::Transform> robotPoses;
        geometry_msgs::Pose robot_pose;

        // Set the pattern's grid spacing so that we don't need to decode the data from the pattern later. You will
        // need to adapt this line to the size of the calibration pattern that you are using.
        NxLibItem()[itmParameters][itmPattern][itmGridSpacing] = 20;

        // Discard any pattern observations that might already be in the pattern buffer.
        NxLibCommand(cmdDiscardPatterns).execute();

        // Turn off the camera's projector and turn on the front light so that we can observe the calibration pattern.
        NxLibItem()[itmCameras]["194224"][itmParameters][itmCapture][itmProjector] = false;
        NxLibItem()[itmCameras]["194224"][itmParameters][itmCapture][itmFrontLight] = true;

        // We will observe the pattern 10 times. You can adapt this number depending on how accurate you need the
        // calibration to be.
        for (int i = 0; i < 10; i++)
        {
            // Move your robot to a new position from which the pattern can be seen. It might be a good idea to
            // choose these positions randomly. Then enter the robot's current pose.
            cout << "Please enter x:";
            cin >> robot_pose.position.x;
            cout << "Please enter y:";
            cin >> robot_pose.position.y;
            cout << "Please enter z:";
            cin >> robot_pose.position.z;
            cout << "Please enter rw:";
            cin >> robot_pose.orientation.w;
            cout << "Please enter rx:";
            cin >> robot_pose.orientation.x;
            cout << "Please enter ry:";
            cin >> robot_pose.orientation.y;
            cout << "Please enter rz:";
            cin >> robot_pose.orientation.z;

            tf::Pose tfPose;
            tf::poseMsgToTF(robot_pose, tfPose);
            robotPoses.push_back(tfPose);

            // Make sure that the robot is not moving anymore. You might want to wait for a few seconds to avoid
            // any oscillations.
            sleep(2);

            // Observe the calibration pattern and store the observation in the pattern buffer.
            NxLibCommand capture(cmdCapture);
            capture.parameters()[itmCameras] = "194224";
            capture.execute();

            bool foundPattern = false;
            try
            {
                NxLibCommand collectPattern(cmdCollectPattern);
                collectPattern.parameters()[itmCameras] = "194224";
                collectPattern.execute();
                foundPattern = true;
            }
            catch (NxLibException& e)
            {
                printf("An NxLib API error with code %d (%s) occurred while accessing item %s.\n", e.getErrorCode(),
                       e.getErrorText().c_str(), e.getItemPath().c_str());
            }

            if (foundPattern)
            {
                // We actually found a pattern; the robot pose from which it was observed was already stored above.
                cout << i << " success" << endl;
            }
            else
            {
                // The calibration pattern could not be found in the camera image. When your robot poses are
                // selected randomly, you might want to choose a different one.
            }
        }

        // You could insert a recalibration here, as you already captured stereo patterns anyway. See the Ensenso SDK
        // documentation for a code snippet that does a recalibration.

        // We collected enough patterns and can start the calibration.
        NxLibCommand calibrateHandEye(cmdCalibrateHandEye);
        calibrateHandEye.parameters()[itmSetup] = valFixed;  // Or valMoving when you have a moving setup.

        // Put the stored robot poses into the command's Transformations parameter.
        for (size_t i = 0; i < robotPoses.size(); i++)
        {
            writePoseToNxLib(robotPoses[i], calibrateHandEye.parameters()[itmTransformations][i]);
        }

        // Start the calibration. Note that this might take a few minutes if you did a lot of pattern observations.
        calibrateHandEye.execute();

        // Store the new calibration to the camera's EEPROM.
        NxLibCommand storeCalibration(cmdStoreCalibration);
        storeCalibration.parameters()[itmCameras] = "194224";
        storeCalibration.parameters()[itmLink] = true;
        storeCalibration.execute();

        // Close the camera.
        NxLibCommand close(cmdClose);
        close.execute();
    }
    catch (NxLibException& e)
    {
        // Display NxLib API exceptions, if any.
        printf("An NxLib API error with code %d (%s) occurred while accessing item %s.\n", e.getErrorCode(),
               e.getErrorText().c_str(), e.getItemPath().c_str());
        if (e.getErrorCode() == NxLibExecutionFailed)
            printf("/Execute:\n%s\n", NxLibItem(itmExecute).asJson(true).c_str());
    }

    nxLibFinalize();
    return 0;
}
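
If you want to look at the computed hand-eye transformation before trusting the stored calibration, you can dump the result node of the CalibrateHandEye command right after calibrateHandEye.execute(). Which named entries it contains (for example the estimated link or the residual) depends on your SDK version, so this sketch simply prints the whole node:

// Print everything cmdCalibrateHandEye returned; inspect it for the estimated
// transformation and the calibration residual.
std::cout << calibrateHandEye.result().asJson(true) << std::endl;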

That concludes this article on a C++ implementation of Ensenso hand-eye calibration; I hope it is helpful to other developers!



http://www.chinasem.cn/article/650423
