
#308 – CARS / Lecture or Poster

Feasibility Of 3D Structure Sensing For Accurate Assessment Of Chronic Wound Dimensions

Clinical Applications / Other Specialties / Verification, validation & evaluation (assessment)

Kyle L. Wu1, Katie Maselli2, Anna Howell2, Daniel Gerber2, Emmanuel Wilson1, Patrick Cheng1, Peter C.W. Kim1, Ozgur Guler1

  1. Sheikh Zayed Institute for Surgical Innovation, Children’s National Health System, Washington, United States
  2. MedStar Georgetown University Hospital, Washington, United States

Keywords: 3D Structure Sensing, Chronic Wounds, Diabetic Wound, Computer Vision, Wound Assessment

Purpose: Chronic wounds affect approximately 6.5 million patients and cost $25 billion per year in the United States alone [1]. Despite this enormous clinical burden, current wound management technologies are underdeveloped, and most clinicians rely on inaccurate visual assessment. Ruler-based assessments can overestimate wound area by up to 44% [2]. Tracing wounds using planimetry gives a better estimate of size but is time consuming and still highly variable between operators. A simple point-of-care solution that enables comprehensive 3D wound assessment on a mobile device would significantly improve care and outcomes. We hypothesize that sufficient accuracy and precision for wound assessment can be achieved using a commercially available 3D structure sensor. In this work, we evaluate the feasibility of the current prototype and the implemented algorithms on phantom measurements with well-known geometry.

Methods: Building on the Occipital technology, which utilizes infrared-based structure sensing, we developed computer vision algorithms that allow real-time acquisition of the full set of wound measurements with minimal effort. The device is an iPad fitted with a 3D structure sensor, similar to a portable Microsoft Kinect™ [3].

The metric measurement of the wound proceeds in four steps. First, the user semi-automatically defines the wound border. Second, the outline of the wound border is used to define the region of interest on the depth-map provided by the structure sensor. Third, the region of interest on the depth-map is transformed into 3D space. Fourth, the metric measurements are performed on the resulting 3D surface.
Wound border segmentation
Segmentation is done using the interactive graph cuts algorithm of Boykov and Jolly [4]. The user specifies seed regions for wound and non-wound areas using simple finger swipes on the touchscreen. The segmentation result is displayed in real time, and the user has the flexibility to fine-tune the segmentation if needed.
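As a rough illustration of this step, the sketch below seeds OpenCV's grabCut, a related graph-cut-based routine, from user strokes. It is a stand-in rather than the authors' Boykov-Jolly implementation; the function name, stroke radius, and iteration count are assumptions.

    // Minimal sketch of seed-based graph-cut segmentation using OpenCV's
    // grabCut. Illustrative stand-in, not the authors' exact implementation.
    #include <opencv2/opencv.hpp>
    #include <vector>

    cv::Mat segmentWound(const cv::Mat& colorImage,
                         const std::vector<cv::Point>& woundStrokes,
                         const std::vector<cv::Point>& skinStrokes)
    {
        // Start with every pixel marked "probably background" (non-wound).
        cv::Mat mask(colorImage.size(), CV_8UC1, cv::Scalar(cv::GC_PR_BGD));

        // Finger swipes become hard seeds for wound / non-wound regions.
        for (const cv::Point& p : woundStrokes)
            cv::circle(mask, p, 5, cv::Scalar(cv::GC_FGD), -1);
        for (const cv::Point& p : skinStrokes)
            cv::circle(mask, p, 5, cv::Scalar(cv::GC_BGD), -1);

        // Run the graph-cut optimization seeded from the mask.
        cv::Mat bgdModel, fgdModel;
        cv::grabCut(colorImage, mask, cv::Rect(), bgdModel, fgdModel,
                    5, cv::GC_INIT_WITH_MASK);

        // Keep definite and probable wound pixels as the binary result.
        return (mask == cv::GC_FGD) | (mask == cv::GC_PR_FGD);
    }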
Mapping wound outline onto depth-map
The structure sensor is rigidly attached to the iPad with a fixed mounting bracket. The structure sensor's camera and the built-in iPad camera are therefore related by a rigid 6-DOF transform, which can be calibrated using the calibration application provided by the vendor. This ensures that the depth-map and color image are aligned, so the wound outline defined on the color image in the segmentation step can be used directly to define the wound border on the depth-map.
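Because the two images are registered, transferring the outline reduces to a masking operation. A minimal sketch, assuming a CV_8UC1 segmentation mask and a CV_32FC1 depth-map (the helper name is hypothetical):

    // Sketch: apply the wound mask from the color image to the registered
    // depth-map, zeroing everything outside the region of interest.
    #include <opencv2/opencv.hpp>

    cv::Mat extractWoundDepth(const cv::Mat& depthMap, const cv::Mat& woundMask)
    {
        // Resample the mask if the depth stream has a different resolution.
        cv::Mat mask = woundMask;
        if (mask.size() != depthMap.size())
            cv::resize(woundMask, mask, depthMap.size(), 0, 0, cv::INTER_NEAREST);

        // Zero out all depth values outside the wound region of interest.
        cv::Mat woundDepth = cv::Mat::zeros(depthMap.size(), depthMap.type());
        depthMap.copyTo(woundDepth, mask);
        return woundDepth;
    }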
Generating 3D surface from depth-map
The depth-map, a 2D image storing a depth value at each pixel position, can be transformed into a 3D surface using the intrinsic camera parameters of the structure sensor.
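A minimal sketch of this back-projection under the standard pinhole camera model, where fx and fy are the focal lengths and (cx, cy) the principal point from the sensor's intrinsic calibration (the helper name is illustrative):

    // Sketch of pinhole back-projection from the depth-map to a 3D point cloud.
    #include <opencv2/opencv.hpp>
    #include <vector>

    std::vector<cv::Point3f> depthToPoints(const cv::Mat& depthMap,
                                           float fx, float fy,
                                           float cx, float cy)
    {
        std::vector<cv::Point3f> points;
        for (int v = 0; v < depthMap.rows; ++v) {
            for (int u = 0; u < depthMap.cols; ++u) {
                float z = depthMap.at<float>(v, u);   // depth in meters
                if (z <= 0.0f) continue;              // skip invalid pixels
                // x = (u - cx) * z / fx, y = (v - cy) * z / fy, per the
                // standard pinhole camera model.
                points.emplace_back((u - cx) * z / fx, (v - cy) * z / fy, z);
            }
        }
        return points;
    }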
Measuring wound
The 3D surface of the wound from the previous step is used to perform metric measurements such as length, width, depth, perimeter, area, and volume. Length and width are the maximum extents of a rectangle encapsulating the wound boundary. The perimeter is computed by summing the line segments delineating the wound boundary. For area, volume, and depth, we first create a reference surface using paraboloid fitting to close the 3D wound model. This reference surface follows the anatomical shape of the surrounding body curvature, representing what the normal skin surface would be without the wound. The wound area is calculated as the surface area of the reference surface enclosed within the wound boundary. The volume is the space encapsulated between the reference surface and the wound surface; depth is the maximum distance between these two surfaces.
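To make the reference-surface step concrete, the sketch below fits a paraboloid z = ax² + by² + cxy + dx + ey + f to 3D points sampled on the healthy skin around the wound rim via linear least squares. This is one plausible formulation; the authors' exact fitting procedure may differ, and the names are illustrative.

    // Sketch of the reference-surface fit: linear least squares for
    // z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f over rim points.
    #include <opencv2/opencv.hpp>
    #include <vector>

    cv::Vec6f fitParaboloid(const std::vector<cv::Point3f>& rimPoints)
    {
        const int n = static_cast<int>(rimPoints.size());
        cv::Mat A(n, 6, CV_32F), b(n, 1, CV_32F);
        for (int i = 0; i < n; ++i) {
            const cv::Point3f& p = rimPoints[i];
            float* row = A.ptr<float>(i);
            row[0] = p.x * p.x;  row[1] = p.y * p.y;  row[2] = p.x * p.y;
            row[3] = p.x;        row[4] = p.y;        row[5] = 1.0f;
            b.at<float>(i, 0) = p.z;
        }
        cv::Mat c;
        cv::solve(A, b, c, cv::DECOMP_SVD);   // least-squares solution
        return cv::Vec6f(c.at<float>(0), c.at<float>(1), c.at<float>(2),
                         c.at<float>(3), c.at<float>(4), c.at<float>(5));
    }

    // Reference-surface height at (x, y); the reported wound depth is the
    // maximum distance between this surface and the measured wound surface.
    float referenceZ(const cv::Vec6f& c, float x, float y)
    {
        return c[0]*x*x + c[1]*y*y + c[2]*x*y + c[3]*x + c[4]*y + c[5];
    }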
The implementation uses the OpenCV library, a C/C++ based computer vision library that can be easily integrated into the iOS development platform (Objective-C based).

To obtain baseline error measures in the context of wound measurement, we designed two 3D models that mimic basic wound geometry using SolidWorks (Waltham, MA, USA). Figure 1 (A) shows one of the 3D models; the second model is the same geometry scaled by 1.5×. We then printed the 3D models using the Objet Connex 500 rapid prototyping system (Stratasys Inc., Eden Prairie, MN), which has a layer print resolution of 16 microns.

Experimental setup:

Figure 1 shows the setup of the experiment with the 3D model and the prototype (Figure 1 (B)). The two 3D printed models with known geometry were measured from varying positions (see Figure 1 inset). At each position, the 3D model was oriented parallel to the prototype (0°) and at +/-10° deviation from the parallel orientation. Measurements were repeated ten times at each position.

Figure 1: Experimental setup with 3D model (A) and prototype camera (B). Inset, top right, shows the varying positions.

Results: The mean, standard deviation, RMS, and percent error for all measurements at each distance and angle are shown in Table 1. The percent error was less than 3.5% for all measurements, with the largest errors in depth (2.87%) and volume (3.46%).

Table 1: Combined mean, standard deviation, RMS, and percent error for each measurement at all three distances and angles.
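For reference, the aggregate statistics in Table 1 can be computed as in the sketch below, assuming repeated measurements against a ground-truth value taken from the CAD model. The percent-error definition used here (|mean - truth| / truth) is an assumption; the authors' exact aggregation is not specified.

    // Sketch of the aggregate error statistics per measurement and position.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    void reportErrors(const std::vector<double>& measured, double truth)
    {
        const double n = static_cast<double>(measured.size());
        double sum = 0.0, sumSqErr = 0.0;
        for (double m : measured) {
            sum += m;
            sumSqErr += (m - truth) * (m - truth);
        }
        const double mean = sum / n;
        double var = 0.0;
        for (double m : measured) var += (m - mean) * (m - mean);
        const double sd  = std::sqrt(var / (n - 1.0));   // sample std. dev.
        const double rms = std::sqrt(sumSqErr / n);      // RMS error
        const double pct = 100.0 * std::fabs(mean - truth) / truth;
        std::printf("mean=%.3f  sd=%.3f  rms=%.3f  error=%.2f%%\n",
                    mean, sd, rms, pct);
    }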

Conclusions: This experiment demonstrates the feasibility of using the Structure Sensor to capture the topographical details needed to generate accurate 3D wound dimensions.
We will further refine and validate the algorithms using this experimental setup. The target accuracy currently stands at < 3.5% relative error versus the 3D phantom ground truth (up to 5% error is considered acceptable in the wound measurement field according to a panel of experts).

Future: We will evaluate device usability and reliability through pilot clinician testing with realistic wound models.

References:

  1. Dargaville T et al. Sensors and Imaging for Wound Healing: A Review. Biosensors and Bioelectronics. 2013;41:30-42.
  2. Gethin G. The Importance of Continuous Wound Measuring. Wounds. 2006;2:60-68.
  3. http://www.xbox.com/en-US/xbox-one/accessories/kinect-for-xbox-one, last accessed 1/9/2015.
  4. Boykov Y, Jolly MP. Interactive Graph Cuts for Optimal Boundary & Region Segmentation of Objects in N-D Images. International Conference on Computer Vision (ICCV), 2001;I:105-112.
