2D Pose Estimation
We experimented extensively with OpenPose [1] in both real-world and simulation settings. For the real-world data, we used the Shibuya crossing live video, a traffic intersection that streams a live feed 24/7. For the simulation setting, we set up the JTA Dataset Mods [2] to hook into the GTA 5 game and collect a dataset.
Experiments and Results on Shibuya Videos
When we naively apply OpenPose to the test video, we get the result shown below:
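For context, the sketch below shows roughly how OpenPose is invoked on a frame. It assumes OpenPose was built with its Python bindings (pyopenpose) and that `model_folder` points at the downloaded model weights; the file paths are placeholders, not our exact setup.

```python
import cv2
import pyopenpose as op  # requires OpenPose built with the Python API

# Configure and start the OpenPose wrapper.
params = {"model_folder": "models/"}  # assumed path to the OpenPose models
wrapper = op.WrapperPython()
wrapper.configure(params)
wrapper.start()

# Grab one frame of the test video (hypothetical file name).
cap = cv2.VideoCapture("shibuya.mp4")
ok, frame = cap.read()

# Run pose estimation on the frame.
datum = op.Datum()
datum.cvInputData = frame
wrapper.emplaceAndPop(op.VectorDatum([datum]))

# poseKeypoints has shape (num_people, 25, 3): (x, y, confidence) per joint.
print(datum.poseKeypoints)
```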
But since people cluster in the bottom-left and top-right portions of the frame while waiting, and in the middle while crossing the intersection, we can crop the video to these regions and run OpenPose on each crop. The results are as follows:
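A sketch of this region-wise cropping, with hypothetical crop rectangles (the real coordinates depend on the camera view) and `run_openpose` standing in for the wrapper call sketched earlier:

```python
import cv2

def run_openpose(image):
    """Placeholder for the OpenPose invocation sketched above."""
    ...

frame = cv2.imread("frame.png")  # one frame of the Shibuya stream
h, w = frame.shape[:2]

# Hypothetical regions of interest as (y0, y1, x0, x1) fractions of the frame:
# waiting areas in the bottom-left and top-right, the crossing in the middle.
regions = {
    "bottom_left": (0.6, 1.0, 0.0, 0.4),
    "top_right": (0.0, 0.4, 0.6, 1.0),
    "middle": (0.3, 0.7, 0.2, 0.8),
}

for name, (y0, y1, x0, x1) in regions.items():
    crop = frame[int(y0 * h):int(y1 * h), int(x0 * w):int(x1 * w)]
    keypoints = run_openpose(crop)  # keypoints are in crop coordinates
```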
We can observe that OpenPose performs well only for a specific video resolution and person scale. We make the following observations:
As a base requirement, people must occupy a relative scale of 1/10 x 1/16 of the frame (height x width) and an absolute scale of 59 x 23 pixels. We found that anything smaller than this has low detection fidelity.
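These thresholds reduce to a simple filter on person bounding boxes. The sketch below assumes box sizes in pixels and treats the relative and absolute criteria as jointly required, matching the wording above:

```python
# Minimum person size for reliable OpenPose detections, from our experiments:
# 1/10 x 1/16 of the frame (height x width) and 59 x 23 pixels.
MIN_REL_H, MIN_REL_W = 1 / 10, 1 / 16
MIN_ABS_H, MIN_ABS_W = 59, 23

def is_detectable(box_h, box_w, frame_h, frame_w):
    """Return True if a person box is big enough to expect a reliable pose."""
    meets_relative = box_h >= MIN_REL_H * frame_h and box_w >= MIN_REL_W * frame_w
    meets_absolute = box_h >= MIN_ABS_H and box_w >= MIN_ABS_W
    return meets_relative and meets_absolute
```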
We also ran experiments at various zoom levels:
We also experimented with combining cropping and zooming:
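A minimal sketch of the crop-and-zoom step, assuming a 2x bicubic upscale (the factor we actually used varied per region), with the resulting keypoints mapped back into full-frame coordinates:

```python
import cv2

def crop_and_zoom(frame, y0, y1, x0, x1, scale=2.0):
    """Crop a region and upscale it so people reach OpenPose's working scale."""
    crop = frame[y0:y1, x0:x1]
    return cv2.resize(crop, None, fx=scale, fy=scale,
                      interpolation=cv2.INTER_CUBIC)

def to_frame_coords(keypoints, y0, x0, scale=2.0):
    """Map (x, y, conf) keypoints from the zoomed crop back to the full frame."""
    kps = keypoints.copy()
    kps[..., 0] = kps[..., 0] / scale + x0
    kps[..., 1] = kps[..., 1] / scale + y0
    return kps
```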
We can see an improvement in results from cropping and zooming the video. But is this a feasible method? Can we always do this during deployment in our test domain?
Disadvantages:
(1) Low resolution makes the model unable to distinguish people from other objects.
(2) Low resolution makes it hard for the model to separate people from the background.
(3) Existing models are not trained on "bird's-eye view" camera angles.
One solution is to build our own dataset, which has the following advantages:
(1) Control the number of cameras and their angles.
(2) Define the scene: weather, time of day, etc.
(3) Obtain ground-truth joint keypoints from the physics engine (a loading sketch follows this list).
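As an illustration of point (3), here is a sketch of loading the exported ground truth. It assumes the 10-column row format documented in the JTA-Dataset repository (frame, person id, joint type, 2D and 3D coordinates, occlusion flags); our actual export may differ.

```python
import json
import numpy as np

# Load one sequence's ground-truth joints exported by the JTA mod.
# Assumed row format: frame, person_id, joint_type,
#                     x2D, y2D, x3D, y3D, z3D, occluded, self_occluded.
with open("seq_0.json") as f:  # hypothetical annotation file
    ann = np.array(json.load(f))

frame_idx = 1
rows = ann[ann[:, 0] == frame_idx]
for person_id in np.unique(rows[:, 1]):
    joints_2d = rows[rows[:, 1] == person_id][:, 3:5]  # (num_joints, 2) pixels
```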
We then try the same crop-and-zoom approach on the synthetic data, with the following results:
We can observe that OpenPose mispredicts some joints even after crop-and-resize. Hence, we cannot use OpenPose off-the-shelf, and we need to train our pose estimation models on the dataset we generate.
3D Pose Estimation
1. PARE
2. ROMP
PARE and ROMP fail when the camera angle is elevated.
Failure cases:
We can see that even with cropping and super-resolution [3], the results are not great. This is also a dataset issue: the datasets these models were trained on are as follows:
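On the super-resolution step: [3] describes SRGAN, but as a lightweight stand-in the sketch below uses OpenCV's dnn_superres module with a pretrained EDSR model (requires opencv-contrib-python; the weights file is downloaded separately, and its path here is an assumption):

```python
import cv2

# Super-resolve a low-resolution person crop before running pose estimation.
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")  # assumed path to the pretrained EDSR weights
sr.setModel("edsr", 4)      # 4x upscaling

crop = cv2.imread("person_crop.png")  # hypothetical low-resolution crop
upscaled = sr.upsample(crop)
```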
Hence, both ROMP and PARE do not perform well in the test domain. We then test the BEV model on the Tepper dataset.
3. BEV
Hence, as in the 2D pose estimation case, we still have to train the BEV model on the GTA5 dataset we create.
References
[1] Cao, Zhe, et al. "OpenPose: Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields." CVPR 2017.
[2] Fabbri, Matteo, et al. "Learning to Detect and Track Visible and Occluded Body Joints in a Virtual World." ECCV 2018.
[3] Ledig, Christian, et al. "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network." CVPR 2017.
[4] Wang, Allan, et al. "Towards Rich, Portable, and Large-Scale Pedestrian Data Collection." arXiv preprint arXiv:2203.01974, 2022.