Pressure ulcers are extremely painful yet largely preventable. BodyMAP shows great promise in predicting 3D pose and applied pressure from depth and pressure images, but the model fails to generalize well to real-world data. Our work improves and expands upon BodyMAP to strengthen its generalizability and accuracy. Accurate 3D body models and pressure estimates will help nurses reposition patients effectively, preventing pressure ulcers.
Background
Pressure ulcers affect over 2.5 million people in the US, costing over $26 billion each year. Pressure ulcers, or bed sores, occur when sustained pressure is applied to the same parts of a patient's body. They are common among terminal patients, nursing home residents, and those with long-term illness; in general, anyone who has difficulty readjusting their own position. Pressure ulcers are easily preventable: the most common form of prevention is repositioning the patient every two hours. Repositioning only works if pressure is actually shifted to different parts of the body. We aim to aid healthcare professionals by predicting a patient's 3D pose and the pressure on their body from pressure mat and depth camera input. With this knowledge of pose and pressure, the patient can be moved so that pressure is reliably alleviated and redistributed.
Knowing the 3D pose of a patient in a hospital bed has applications beyond pressure ulcer prevention. Sleep-related movement disorders, sleep quality assessment, epilepsy monitoring, and robotic assistance are some of the many areas where knowing a patient's 3D pose can be beneficial.
Motivation
Our work builds on Abhishek Tandon's model, BodyMAP: Jointly Predicting Body Mesh and 3D Applied Pressure Map for People in Bed. Tandon created a model that jointly predicts a patient's 3D pose and pressure from depth camera and pressure mat data. Tandon's model is state-of-the-art, but it does not generalize well to real-world data. As shown below, even on poses similar to those in the test split of the dataset BodyMAP was trained on, the model performs poorly on data we captured ourselves.

Our work focuses on improving upon and adding to BodyMAP in two ways: Model Advancement and Deployment.
Model Advancement
The objective of this project is to develop a training and inference pipeline that manages the domain shift between synthetic and real-world data more effectively. This modeling advancement also uses diffusion models to address the complex in-bed covering scenarios commonly found in real-world applications. The following figure illustrates how the model tends to produce inaccurate and unreliable in-bed pose predictions under various cover scenarios, as obstructions like blankets or tables distort depth images, leading to blurry boundaries.
Approach
Goal: Implement a conditional diffusion model that removes occlusions from depth images and improves in-bed pose predictions, thereby reducing the data distribution shift.
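To make the approach concrete, here is a minimal training sketch, assuming PyTorch, a standard DDPM formulation, a linear noise schedule, and paired covered/uncovered depth images from the synthetic dataset. The denoiser `model` and the channel-wise conditioning are illustrative assumptions, not BodyMAP's actual implementation.

```python
import torch
import torch.nn.functional as F

# Illustrative DDPM-style training step for cover removal (assumptions:
# PyTorch, a denoising network `model`, and a linear noise schedule).
T = 1000
betas = torch.linspace(1e-4, 0.02, T)               # linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def training_step(model, covered_depth, uncovered_depth):
    """Learn to predict the noise added to the clean (uncovered) depth
    image, conditioned on the covered depth image."""
    b = uncovered_depth.shape[0]
    t = torch.randint(0, T, (b,))                   # random timestep per sample
    noise = torch.randn_like(uncovered_depth)
    a_bar = alphas_cumprod[t].view(b, 1, 1, 1)
    noisy = a_bar.sqrt() * uncovered_depth + (1.0 - a_bar).sqrt() * noise

    # Condition by concatenating the covered image channel-wise.
    pred_noise = model(torch.cat([noisy, covered_depth], dim=1), t)
    return F.mse_loss(pred_noise, noise)
```

Conditioning on the covered image at every denoising step is what lets the model hallucinate the body surface hidden under the blanket while staying consistent with the visible regions.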

Results
Visualization Results
The diffusion model makes it possible to handle unseen occlusion scenarios, such as tables, which were not part of the training dataset.

Based on the visualizations, running in-bed pose prediction on the reconstructed images yields results that closely resemble those from uncovered scenarios, producing more accurate predictions.
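For reference, a matching inference sketch, reusing `T`, `betas`, and `alphas_cumprod` from the training sketch above: reconstruction starts from pure noise and runs the reverse diffusion process, conditioning every step on the covered depth image. The update rule is standard DDPM sampling, not necessarily our exact procedure.

```python
@torch.no_grad()
def reconstruct(model, covered_depth):
    """Reverse diffusion: iteratively denoise from pure noise, conditioning
    each step on the covered depth image (standard DDPM sampling)."""
    x = torch.randn_like(covered_depth)
    for t in reversed(range(T)):
        tt = torch.full((x.shape[0],), t, dtype=torch.long)
        eps = model(torch.cat([x, covered_depth], dim=1), tt)
        alpha = 1.0 - betas[t]
        a_bar = alphas_cumprod[t]
        x = (x - (1.0 - alpha) / (1.0 - a_bar).sqrt() * eps) / alpha.sqrt()
        if t > 0:                                   # add noise except at the last step
            x = x + betas[t].sqrt() * torch.randn_like(x)
    return x
```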

Quantitative Results
The following table reports quantitative results for pose error, 3D shape error, and pressure distribution error. All values in the table represent error, so lower is better. The results obtained from the "cover2 clean" approach, which uses the reconstructed images, closely match those from the original uncovered images and significantly outperform predictions made directly on covered images.


Deployment
Deployment focuses on transforming real-world data to fit the training data distribution, so that the model generalizes and yields more accurate results.
BodyMAP has shown poor performance on real-world data; it fails to generalize beyond its training and test sets. To make the model deployable in the real world, we need to account for shifts and differences in the data distribution. In a hospital or nursing home setting, beds move, pressure mats slide, cameras get bumped, and the environment is uncontrolled. To improve BodyMAP's performance, we transform the input depth and pressure map images before sending them into the model, counteracting the effects of the uncontrolled environment.

First, we capture depth and RGB images with an Intel RealSense camera and pressure data with a BodiTrak pressure mat. Next, we detect ArUco tags on the corners of the bed, allowing us to crop the depth image to just the bed. Once the depth image is cropped, we align the depth and pressure mat images so the patient lies in the same place in both. Finally, we transform the depth and RGB images to fit the training data distribution. We find this transformation by optimizing transformation parameters such as shift, scale, and rotation. This optimization process is shown below; the loss for the neural network (NN) producing the transformation parameters is computed between a ground-truth SMPL body model recovered from the RGB image and the one produced by BodyMAP.
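A sketch of the tag-detection and cropping step, assuming OpenCV 4.7+ with the aruco module, four markers on the bed corners, and a depth image already registered to the RGB frame. The tag dictionary, corner IDs, and output resolution are placeholders, not our exact configuration.

```python
import cv2
import numpy as np

ARUCO_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
CORNER_IDS = [0, 1, 2, 3]  # assumed IDs: top-left, top-right, bottom-right, bottom-left

def crop_bed(rgb, depth, out_size=(512, 1024)):
    """Detect the bed-corner ArUco tags in the RGB frame and warp the
    registered depth image so it contains only the bed."""
    detector = cv2.aruco.ArucoDetector(ARUCO_DICT, cv2.aruco.DetectorParameters())
    corners, ids, _ = detector.detectMarkers(cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY))
    if ids is None or len(ids) < 4:
        raise RuntimeError("could not find all four bed-corner tags")

    # Map each tag ID to the center of its detected quad.
    centers = {int(i): c.reshape(4, 2).mean(axis=0)
               for i, c in zip(ids.flatten(), corners)}
    src = np.float32([centers[i] for i in CORNER_IDS])

    # Warp the quadrilateral spanned by the tag centers to a rectangle.
    w, h = out_size
    dst = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(depth, H, (w, h))
```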

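We use a neural network to produce the transformation parameters; the sketch below, a PyTorch assumption with `bodymap` and `gt_vertices` as stand-ins for the real model and the SMPL vertices recovered from RGB, instead optimizes shift, scale, and rotation directly by gradient descent, since the same loss would drive a parameter-predicting network.

```python
import torch
import torch.nn.functional as F

def optimize_transform(depth, gt_vertices, bodymap, steps=200, lr=1e-2):
    """Fit shift/scale/rotation so BodyMAP's predicted SMPL vertices on the
    transformed depth image match ground-truth vertices from the RGB image.
    depth: float tensor of shape (1, 1, H, W); bodymap: depth -> vertices."""
    theta = torch.zeros(1, requires_grad=True)   # rotation (radians)
    shift = torch.zeros(2, requires_grad=True)   # x/y translation
    scale = torch.ones(1, requires_grad=True)    # isotropic scale
    opt = torch.optim.Adam([theta, shift, scale], lr=lr)

    for _ in range(steps):
        cos, sin = torch.cos(theta), torch.sin(theta)
        # 2x3 affine matrix combining rotation, scale, and translation.
        mat = torch.stack([
            torch.cat([scale * cos, -scale * sin, shift[:1]]),
            torch.cat([scale * sin,  scale * cos, shift[1:]]),
        ]).unsqueeze(0)
        grid = F.affine_grid(mat, depth.shape, align_corners=False)
        warped = F.grid_sample(depth, grid, align_corners=False)

        loss = F.mse_loss(bodymap(warped), gt_vertices)  # vertex-to-vertex error
        opt.zero_grad()
        loss.backward()
        opt.step()
    return theta, shift, scale
```

Making the transform differentiable via `affine_grid`/`grid_sample` is what allows the SMPL vertex loss to backpropagate into the transformation parameters.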
We have already captured RGB, pressure, and depth data, since we will need plenty of data to train this network. Next semester we plan to capture even more data to train the model.