====== ArUco marker detection ======
//Aruco// markers are squares with bit patterns that can easily be detected, and whose orientation can always be determined thanks to their non-symmetric pattern. These markers can be generated [[https://chev.me/arucogen/ | here]].
==== Detection with OpenCV ====
The repository at the following [[https://git.cs.lth.se/robotlab/realsense_aruco_detection | link]] implements detection of //aruco// markers in OpenCV with a RealSense camera; the authors are Pontus Rosqvist, Josefin Gustafsson, Marcus Nagy and Martin Lyrå. The script takes the following command line arguments:
^ Argument Name ^ Description ^
| -d | Specify whether the depth image should be used. True or False. |
| -p | Specify where the camera parameters are saved. If left empty, the camera parameters are loaded from the camera. |
| -pause | Specify where the //aruco// marker ids and their sizes are saved. |
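When the -p argument is left empty, the parameters come from the camera itself. As a minimal sketch (not the code of the linked repository), the color stream intrinsics can be read with //pyrealsense2// and arranged into the camera matrix and distortion coefficients that OpenCV expects:
<code python>
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()  # start with default settings
# Read the intrinsics of the color stream.
intr = (profile.get_stream(rs.stream.color)
               .as_video_stream_profile()
               .get_intrinsics())
# Arrange them as the 3x3 camera matrix and distortion vector OpenCV uses.
camera_matrix = np.array([[intr.fx, 0.0, intr.ppx],
                          [0.0, intr.fy, intr.ppy],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.array(intr.coeffs)
pipeline.stop()
</code>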
The //aruco// marker ids and their sizes should be specified in a text file of the following form:
<code>
id: size, length
10: 7, 10
23: 7, 20
34: 5, 12.2
</code>
The first line is ignored, so it should only contain the header "id: size, length", which documents how the data is parsed. Every line after that is parsed as follows: the number to the left of the colon is an //aruco// marker id. The first number to the right of the colon is the width of the marker pattern in bits (i.e. which dictionary it comes from); the possible sizes are 4, 5, 6 and 7. The second number to the right of the colon is the side length of the marker in millimeters.
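As an illustration, a file in this format could be parsed as below; the function name ''load_marker_spec'' is our own and not taken from the linked repository:
<code python>
def load_marker_spec(path):
    """Parse a marker file where each line after the header is 'id: size, length'."""
    markers = {}
    with open(path) as f:
        next(f)  # the first line only holds the header and is ignored
        for line in f:
            if not line.strip():
                continue
            marker_id, rest = line.split(":")
            size, length = rest.split(",")
            # Map id -> (pattern size in bits, side length in millimeters).
            markers[int(marker_id)] = (int(size), float(length))
    return markers
</code>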
The code expects a RealSense camera to be connected to the computer. If the pipeline does not receive a frame from the camera within 5 seconds, the following error is thrown:
<code>
RuntimeError: Frame didn't arrive within 5000
</code>
In that case one should check with realsense-viewer that the camera works and sends frames to the computer.
It is very important to check whether the camera is recognized as a USB2 or a USB3 device, since USB2 supports a smaller set of resolutions than USB3. If an unavailable resolution is requested (for example if the camera is detected as a USB2 device and the requested resolution is too big), the pipeline throws the following error:
<code>
RuntimeError: Couldn't resolve requests
</code>
In that case one should check with realsense-viewer which resolutions the camera delivers and verify that the requested resolution is among them.
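For reference, this is roughly how such a pipeline is set up with //pyrealsense2// and where the two errors above can occur; the stream settings below (640x480 BGR color at 30 fps) are an assumption, not necessarily what the linked script requests:
<code python>
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# Request an explicit stream; an unavailable resolution makes start() fail
# with "Couldn't resolve requests".
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)
try:
    # Raises "Frame didn't arrive within 5000" if no frame shows up in time.
    frames = pipeline.wait_for_frames(timeout_ms=5000)
    color_frame = frames.get_color_frame()
finally:
    pipeline.stop()
</code>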
First the //aruco// markers of some size are detected in the current frame with [[https://docs.opencv.org/3.4/d9/d6a/group__aruco.html#ga061ee5b694d30fa2258dd4f13dc98129 | cv2.aruco.detectMarkers]]. For this function to work, the Python package [[https://pypi.org/project/opencv-contrib-python/ | opencv-contrib-python]] needs to be installed. This gives the pixel locations of the marker corners in the image; since the physical size of each marker is known, the 3D pose of each marker can then be recovered with [[https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html#ga549c2075fac14829ff4a58bc931c033d | solvePnP]]. This function returns the translation of the //aruco// marker and the angle-axis representation of its orientation.
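As an illustration (not the exact code of the linked repository), one frame could be processed as follows, assuming a 4x4 dictionary, 20 mm markers and known camera parameters; it uses the classic ''cv2.aruco'' API of opencv-contrib-python versions before 4.7:
<code python>
import cv2
import numpy as np

# Assumed camera parameters -- the script reads these from the camera
# or from a file (see the -p argument above).
camera_matrix = np.array([[615.0, 0.0, 320.0],
                          [0.0, 615.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

marker_length = 20.0  # physical side length in millimeters
dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)
parameters = cv2.aruco.DetectorParameters_create()

frame = cv2.imread("frame.png")  # stand-in for a RealSense color frame
corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary,
                                          parameters=parameters)
if ids is not None:
    # The marker corners in the marker's own frame (z = 0), in the order
    # detectMarkers reports them: top-left, top-right, bottom-right, bottom-left.
    h = marker_length / 2.0
    obj_points = np.array([[-h, h, 0], [h, h, 0],
                           [h, -h, 0], [-h, -h, 0]], dtype=np.float32)
    for marker_corners, marker_id in zip(corners, ids.flatten()):
        ok, rvec, tvec = cv2.solvePnP(obj_points,
                                      marker_corners.reshape(4, 2),
                                      camera_matrix, dist_coeffs)
        if ok:
            print(marker_id, tvec.ravel(), rvec.ravel())
</code>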
To make use of the translation and angle-axis of the //aruco// markers, one can use //OpenCV// [[https://docs.opencv.org/3.4/d9/d0c/group__calib3d.html#ga61585db663d9da06b68e70cfbf6a1eac|cv2.Rodrigues]] on the angle-axis to get the [[https://mathworld.wolfram.com/RotationMatrix.html|rotation matrix]], and subsequently use SciPy [[https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.transform.Rotation.html|scipy.spatial.transform.Rotation]] on the rotation matrix to get a [[https://mathworld.wolfram.com/Quaternion.html|quaternion]]. With this it is easy to construct a //rospy// [[http://docs.ros.org/en/melodic/api/geometry_msgs/html/msg/Pose.html|geometry_msgs/Pose.msg]].
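A minimal sketch of this conversion, assuming ''rvec'' and ''tvec'' come from solvePnP as above (the helper name is our own, and importing geometry_msgs requires a ROS environment):
<code python>
import cv2
from scipy.spatial.transform import Rotation
from geometry_msgs.msg import Pose

def pose_from_rvec_tvec(rvec, tvec):
    """Turn solvePnP's angle-axis and translation into a geometry_msgs/Pose."""
    rot_matrix, _ = cv2.Rodrigues(rvec)  # angle-axis -> 3x3 rotation matrix
    # as_quat() returns (x, y, z, w), matching the Pose quaternion convention.
    qx, qy, qz, qw = Rotation.from_matrix(rot_matrix).as_quat()
    pose = Pose()
    pose.position.x, pose.position.y, pose.position.z = tvec.ravel()
    pose.orientation.x = qx
    pose.orientation.y = qy
    pose.orientation.z = qz
    pose.orientation.w = qw
    return pose
</code>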
The script only draws the determined pose of each detected //aruco// marker in each frame and displays it, but it can easily be modified to instead return the pose of each //aruco// marker together with its id.
==== Detection with SkiROS ====
To detect //aruco// markers with a skill in [[https://github.com/RVMI/skiros2/wiki|SkiROS]], one can use the wiki to get a better understanding of how to do it, or use the example skill "camera_skill", which uses the //aruco// marker detection implemented in the section above for picking up an object.
There are a few steps to create a skill in **SkiROS**:
  - [[https://github.com/RVMI/skiros2/wiki/Tutorial-3:-Create-a-skill|Create a skill]]. Check out the section right above part 2 as well as all of part 2. A minimal sketch of such a primitive is shown after this list.
    - Create a //SkillDescription//.
      - Add the parameters the skill needs with ''addParam''.
    - Create a //PrimitiveBase//.
      - Handle the different situations: ''onPreempt'', ''onInit'', ''onStart'' and ''execute''.
    - Create a //SkillBase// [OPTIONAL]. Required if you want to create a skill that executes other skills in order; [[https://github.com/RVMI/skiros2/wiki/Tutorial-3:-Create-a-skill#part-3-create-a-simple-skill|see here]].
  - Add the skill to the simulation, e.g. in the [[https://git.cs.lth.se/robotlab/heron/heron_launch/-/blob/camera_skill/launch/simulation.launch|launch file]] for //Heron//.
  - Add the skill code to one of the skill repositories:
    - [[https://git.cs.lth.se/robotlab/rvmi/skills_sandbox|General skills]].
    - [[https://git.cs.lth.se/robotlab/rvmi/vision_skills|Vision skills]].
  - Add possible new ontologies to the [[https://git.cs.lth.se/robotlab/heron/heron_launch/-/blob/master/owl/scenes/heron.turtle|ontology]] for //Heron//.
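As an illustration only, a marker detection primitive could look roughly like the sketch below; the class names and the "Object" parameter are made up, and the imports follow the SkiROS tutorial linked above:
<code python>
from skiros2_skill.core.skill import SkillDescription
from skiros2_common.core.primitive import PrimitiveBase
from skiros2_common.core.params import ParamTypes
from skiros2_common.core.world_element import Element

class DetectArucoMarker(SkillDescription):
    def createDescription(self):
        # Parameters the skill needs; the name and type are illustrative.
        self.addParam("Object", Element("skiros:Product"), ParamTypes.Required)

class detect_aruco_marker(PrimitiveBase):
    def createDescription(self):
        self.setDescription(DetectArucoMarker(), self.__class__.__name__)

    def onInit(self):
        return True

    def onPreempt(self):
        return self.fail("Preempted.", -1)

    def onStart(self):
        return True

    def execute(self):
        obj = self.params["Object"].value
        # ...run the OpenCV detection from the previous section here and
        # write the resulting pose back to the world model...
        return self.success("Detected marker for {}".format(obj.label))
</code>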