====== Pose estimation ======

There are a couple of interesting repositories:

  * https://git.cs.lth.se/robotlab/object-recognition-and-pose-estimation
  * https://github.com/DLR-RM/AugmentedAutoencoder
  * https://github.com/shbe-aau/multi-pose-estimation
  * https://github.com/HampusAstrom/yolov3 (reduced, simpler repo to get things working on heron)
  * https://github.com/robberthofmanfm/yolo/releases (used as a file server for the weights, to avoid git LFS)

The first two repositories served as inspiration for the third, whose authors are [[hampus.astrom@cs.lth.se|Hampus Astrom]] and [[shbe@create.aau.dk|Stefan Bengtson]]. It uses an autoencoder from pytorch3d, which itself is built on top of pytorch.

For training, the most important file is [[https://github.com/shbe-aau/multi-pose-estimation/blob/master/multi-pose/experiments/experiment_template.cfg|experiment_template.cfg]].

==== Troubleshooting ====

Since the project runs in a Docker container and uses an Nvidia GPU, you'll want to pass the ''--gpus all'' flag to ''docker run''. \\
However, ''docker build'' accepts no such flag, so the build will fail. \\
For me, the fix was to make the Nvidia container runtime the default Docker runtime:

  - ''sudo apt-get install nvidia-container-runtime'' (follow this guide: https://nvidia.github.io/nvidia-container-runtime/)
  - Edit ''/etc/docker/daemon.json'' and make sure it looks something like this: <code json>
{
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia"
}
</code>
  - ''sudo systemctl restart docker''

I learned this from: https://stackoverflow.com/questions/59691207/docker-build-with-nvidia-runtime
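
As a quick sanity check after restarting the daemon, something like the following should confirm that the default-runtime change took effect. Note that the CUDA image tag and the ''multi-pose'' image name below are only placeholders for illustration, not images provided by the repositories above.

<code bash>
# Check that Docker now reports nvidia as its default runtime
docker info | grep -i 'default runtime'

# Check that containers can reach the GPU
# (use whichever nvidia/cuda base image tag matches your driver/CUDA version)
docker run --rm nvidia/cuda:11.4.3-base-ubuntu20.04 nvidia-smi

# With the default runtime set to nvidia, GPU-dependent steps can also run during the build
docker build -t multi-pose .
docker run --gpus all -it multi-pose
</code>

If ''nvidia-smi'' lists the GPU from inside the container, ''docker build'' should no longer fail on the GPU-dependent steps.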