Spring 2022

2D Pose Estimation

We experimented extensively with OpenPose [1] in both real-world and simulation settings. For the real-world data, we used the Shibuya Crossing live video – a busy pedestrian intersection with a video feed that streams 24/7. For the simulation setting, we set up the JTA Dataset Mods [2] to hook into the GTA 5 game and collect our dataset.

Experiments and Results on Shibuya videos

When we naively apply OpenPose to the test video, we get the following result:

Directly applying OpenPose on the test video

But since we know that people appear mainly in the bottom-left and top-right portions of the frame while waiting, and in the middle while crossing the intersection, we can crop the video to these regions and run OpenPose on each crop (a sketch follows the captions below). The results are as follows:

One solution: Crop and resize
Bottom left crop
Top right crop
Center crop
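To make the crop-and-run idea concrete, here is a minimal sketch of how such region crops can be fed to OpenPose through its Python bindings (pyopenpose). The ROI fractions and file names are illustrative, not the exact values we used, and the `emplaceAndPop` signature shown matches OpenPose 1.7+ (older releases accept a plain list):

```python
import cv2
from openpose import pyopenpose as op  # OpenPose Python bindings

# Illustrative regions of interest, as (x, y, w, h) fractions of the frame.
ROIS = {
    "bottom_left": (0.00, 0.55, 0.45, 0.45),
    "top_right":   (0.55, 0.00, 0.45, 0.45),
    "center":      (0.25, 0.30, 0.50, 0.45),
}

def crop_roi(frame, roi):
    """Cut a fractional (x, y, w, h) region out of a BGR frame."""
    H, W = frame.shape[:2]
    x, y, w, h = roi
    return frame[int(y * H):int((y + h) * H), int(x * W):int((x + w) * W)]

# Standard pyopenpose setup; model_folder must point to the OpenPose models.
wrapper = op.WrapperPython()
wrapper.configure({"model_folder": "openpose/models/"})
wrapper.start()

def run_openpose(img):
    """Return the (num_people, 25, 3) BODY_25 keypoint array, or None."""
    datum = op.Datum()
    datum.cvInputData = img
    wrapper.emplaceAndPop(op.VectorDatum([datum]))  # OpenPose 1.7+ signature
    return datum.poseKeypoints

frame = cv2.imread("shibuya_frame.png")  # hypothetical extracted frame
for name, roi in ROIS.items():
    keypoints = run_openpose(crop_roi(frame, roi))
    print(name, 0 if keypoints is None else len(keypoints), "people detected")
```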

We can observe that OpenPose performs well only for a specific video resolution and scale of persons in the video. We make the following observations:

The relative scale of humans in the video frame

As a base requirement, we need a relative person scale of roughly 1/10 x 1/16 of the frame (height x width) and an absolute scale of roughly 59 x 23 pixels. We found that anything smaller than this has low detection fidelity.
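These thresholds translate directly into a simple filter for deciding whether a person is large enough in the frame to be detectable; a minimal sketch, using the empirical values above:

```python
# Empirical minimum scales from our OpenPose experiments.
MIN_REL_H, MIN_REL_W = 1 / 10, 1 / 16  # person size relative to frame size
MIN_ABS_H, MIN_ABS_W = 59, 23          # person size in pixels

def is_detectable(person_h, person_w, frame_h, frame_w):
    """Heuristic: is a person large enough in the frame for reliable detection?"""
    rel_ok = person_h / frame_h >= MIN_REL_H and person_w / frame_w >= MIN_REL_W
    abs_ok = person_h >= MIN_ABS_H and person_w >= MIN_ABS_W
    return rel_ok and abs_ok
```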

We also ran experiments at various zoom levels (a digital-zoom sketch follows the captions below):

Directly applying OpenPose on the test video (no zoom)
2x Zoom
3x Zoom
4x Zoom
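Since the source is a fixed video feed, the zoom here is digital: crop a smaller window and resize it back to the original resolution. A minimal sketch (the centering logic is an assumption; any region can be zoomed):

```python
import cv2

def digital_zoom(frame, factor, center=None):
    """Emulate an optical zoom: crop a 1/factor window and upscale it back."""
    H, W = frame.shape[:2]
    cx, cy = center if center is not None else (W // 2, H // 2)
    w, h = int(W / factor), int(H / factor)
    x0 = min(max(cx - w // 2, 0), W - w)  # keep the window inside the frame
    y0 = min(max(cy - h // 2, 0), H - h)
    crop = frame[y0:y0 + h, x0:x0 + w]
    return cv2.resize(crop, (W, H), interpolation=cv2.INTER_CUBIC)
```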

We also experimented with combining cropping and zooming (see the combined sketch after the captions below):

Directly applying OpenPose on the test video
Video cropped to bottom left, 4x zoom
Video cropped to bottom right, 4x zoom
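Using the helpers sketched above, the combined experiment is just a composition of the two steps:

```python
# Crop to a waiting area, zoom 4x, then run OpenPose on the result.
patch = crop_roi(frame, ROIS["bottom_left"])
keypoints = run_openpose(digital_zoom(patch, factor=4))
```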

We can see an improvement in results from cropping and zooming the video. But is this a feasible method? Can we always do this during deployment in our test domain?

Disadvantages:

(1) Low resolution makes the model unable to tell objects from people.

(2) Low resolution makes it hard for models to tell people apart from the background.

(3) Existing models are not trained on “bird’s eye view” camera angles.

One solution is to build our own dataset, which gives us the following advantages:

(1) Control over the number and angles of the cameras

(2) Control over the scene definition: weather, time of day, etc.

(3) Ground-truth joint keypoints obtained from the game’s physics engine (a loading sketch follows this list)
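For advantage (3), here is a minimal sketch of reading ground-truth 2D keypoints from a JTA-style annotation file. The ten-column row layout below follows the JTA-Dataset documentation, but the exact order is worth verifying against the files you generate:

```python
import json
import numpy as np

# JTA-style annotations: one row per joint per frame, with columns
# [frame, person_id, joint_type, x2d, y2d, x3d, y3d, z3d, occluded, self_occluded]
def load_gt_poses_2d(path, frame_id):
    rows = np.array(json.load(open(path)))
    rows = rows[rows[:, 0] == frame_id]  # keep only the requested frame
    poses = {}
    for r in rows:
        poses.setdefault(int(r[1]), {})[int(r[2])] = tuple(r[3:5])
    return poses  # {person_id: {joint_type: (x, y)}}
```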

So we try the same approach on the synthetic data. The results are as follows:

Test video after zoom and crop to region-of-interest
Results with OpenPose
Left: Ground truth; Right: OpenPose Prediction

We can observe that OpenPose mispredicts some joints even after crop-and-resize. Hence, we cannot use OpenPose off the shelf; we need to train our pose estimation models on the dataset we generate.

3D Pose Estimation

1. PARE

2. ROMP

PARE and ROMP fail when camera angles are elevated
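For reference, ROMP can be run through its published inference interface; a minimal sketch, assuming the `simple_romp` pip package (the file name is illustrative):

```python
import cv2
import romp  # pip install simple_romp

settings = romp.main.default_settings  # an argparse Namespace of defaults
romp_model = romp.ROMP(settings)
outputs = romp_model(cv2.imread("shibuya_frame.png"))  # SMPL params, joints, etc.
```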

Failure cases:

3D pose estimation failure cases
People at the top aren’t detected due to the scale issue mentioned above
We apply the crop-and-resize approach to the 3D pose estimation models as well
Cropping the video and applying ROMP: results on the top-right crop
Results after applying super-resolution with SRGAN [3] to the cropped video
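The super-resolution step can be sketched as below. This is a generic outline, not an official SRGAN API: `srgan_generator.pth` is a hypothetical checkpoint from any SRGAN implementation, and the [-1, 1] input/output range is an assumption that depends on the checkpoint.

```python
import cv2
import torch

# Hypothetical pretrained SRGAN generator; SRGAN has no single official
# distribution, so load whatever checkpoint your implementation provides.
generator = torch.load("srgan_generator.pth").eval()

def super_resolve(img_bgr):
    """Upscale a low-resolution crop with SRGAN before running the pose model."""
    x = torch.from_numpy(cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB))
    x = x.permute(2, 0, 1).float().unsqueeze(0) / 127.5 - 1.0  # assume [-1, 1]
    with torch.no_grad():
        y = generator(x).squeeze(0).clamp(-1, 1)
    y = ((y + 1.0) * 127.5).byte().permute(1, 2, 0).numpy()
    return cv2.cvtColor(y, cv2.COLOR_RGB2BGR)
```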

We can see that even with cropping and super-resolution, the results aren’t great. This is also a dataset issue: the datasets these models were trained on are as follows:

Challenges with 3D pose estimation

Hence, both ROMP and PARE perform poorly in the test domain. We then test the BEV model on the Tepper dataset [4].

3. BEV

We see an improvement at this camera angle when using BEV
BEV predicts accurate human meshes in both the front and bird’s-eye views
However, it still fails when we test it on cropped and zoomed images of our GTA 5 dataset (BEV can be run as sketched below)
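BEV ships alongside ROMP in the same `simple_romp` package and, to our understanding, exposes the same call pattern; a minimal sketch (file name illustrative):

```python
import cv2
import bev  # installed together with simple_romp

settings = bev.main.default_settings
bev_model = bev.BEV(settings)
outputs = bev_model(cv2.imread("gta5_crop_zoom.png"))  # same output format as ROMP
```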

Hence, as in the 2D pose estimation case, we still have to train the BEV model on the GTA 5 dataset we create.

References

[1] Cao, Zhe, et al. “OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields.” CVPR 2017.

[2] Fabbri, Matteo, et al. “Learning to detect and track visible and occluded body joints in a virtual world.” ECCV 2018.

[3] Ledig, Christian, et al. “Photo-realistic single image super-resolution using a generative adversarial network.” CVPR 2017.

[4] Wang, Allan, et al. “Towards Rich, Portable, and Large-Scale Pedestrian Data Collection.” arXiv preprint arXiv:2203.01974, 2022.