UBC Faculty Research and Publications

Ultrasound guided spine needle insertion. Chen, Elvis C. S.; Mousavi, Parvin; Gill, Sean; Fichtinger, Gabor; Abolmaesumi, Purang 2010

Full Text
Ultrasound Guided Spine Needle Insertion

Elvis C. S. Chen (a), Parvin Mousavi (a), Sean Gill (a), Gabor Fichtinger (a,b,c), and Purang Abolmaesumi (a,b,c,d)

(a) School of Computing, Queen's University, Canada
(b) Department of Surgery, Queen's University, Canada
(c) Department of Electrical and Computer Engineering, Queen's University, Canada
(d) Department of Electrical and Computer Engineering, The University of British Columbia, Canada

Send correspondence to E.C.S. Chen: chene@cs.queensu.ca

Medical Imaging 2010: Visualization, Image-Guided Procedures, and Modeling, edited by Kenneth H. Wong, Michael I. Miga, Proc. of SPIE Vol. 7625, 762538 · © 2010 SPIE · CCC code: 1605-7422/10/$18 · doi: 10.1117/12.843716

ABSTRACT

An ultrasound (US) guided, CT augmented, spine needle insertion navigational system is introduced. The system consists of an electromagnetic (EM) sensor, a US machine, and a preoperative CT volume of the patient anatomy. A three-dimensional (3D) US volume is reconstructed intraoperatively from a set of two-dimensional (2D) freehand US slices and is coregistered with the preoperative CT. This allows the preoperative CT volume to be used in the intraoperative clinical coordinate frame. The spatial relationships between the patient anatomy, the surgical tools, and the US transducer are tracked using the EM sensor and are displayed with respect to the CT volume. The pose of the US transducer is used to interpolate the CT volume, providing the physician with a 2D "x-ray vision" to guide the needle insertion. Many of the system's software components are GPU-accelerated, allowing real-time performance of the guidance system in a clinical setting.

Keywords: Ultrasound, image guided procedures, multi-modal registration, multi-modality display, spine, GPU.

1. INTRODUCTION

Several spinal interventions, such as facet joint injection,1 vertebroplasty, and kyphoplasty,2 involve the insertion of a surgical needle into the spinal column. These interventions are good examples of challenging surgical techniques: the small, narrow channel between vertebrae, the oblique entry angle, and the close proximity of critical tissues such as nerves make precise treatment delivery difficult.

Contemporary clinical practice often employs either ionizing imaging modalities, such as fluoroscopy3 or CT,4 or a freehand technique3 to guide intraoperative needle insertion. Ionizing imaging modalities increase the health risks for both the patient and the medical practitioner, while freehand techniques often lead to faulty needle placements. Fluoroscopic or x-ray images, being projected 2D imaging modalities, also require mental transformation and interpretation of the image by the medical practitioner.

Many studies have demonstrated the potential efficacy of CT-guided surgical procedures.5,6 CT provides excellent anatomic detail of bony structures and of the needle track. This specificity is particularly useful for difficult surgical sites such as the facet joints, increasing the precision of these procedures and helping to confirm needle placement. The theoretical disadvantages of CT-guided procedures are the increased radiation dose to both the patient and the medical practitioner, and the reliance on costly and immobile imaging devices. Towards reducing ionizing radiation, low-dose imaging protocols7 and navigational systems8-10 have been employed to complement conventional CT-guided procedures. Navigation and tracking systems visualize the actual position and orientation of the surgical instrument with respect to the CT volume, thus reducing the need for verification scans or CT fluoroscopy. For a navigational system to work, however, the CT volume must be aligned with the intraoperative location of the patient anatomy. One solution is to construct an intraoperative 3D freehand US volume that can be co-registered to the preoperative CT via a volume-to-volume registration algorithm.

Recently, US-guided procedures have been demonstrated as a viable alternative to CT- or fluoro-guided procedures.11,12 Ultrasound provides real-time imaging, does not produce ionizing radiation, is widely available, and is relatively inexpensive. However, on their own, US images are hard to interpret and offer little spatial information. Small surgical instruments such as spine needles are also difficult to discern within US images and must be imaged in-plane with the US transducer.

In this work, we combine the specificity and accuracy of CT with the ease of use, speed, and safe operation of ultrasound. We present a US-guided navigational system consisting of an EM sensor, a US machine, and a preoperative CT volume of the spine anatomy. A 3D ultrasound volume is reconstructed intraoperatively and co-registered with the CT volume. The pose of the tracked US transducer is used to interpolate the CT volume, producing a 2D CT slice that corresponds to the live US image. Combined with a surface rendering and a Digitally Reconstructed Radiograph (DRR) of the spine, this system provides an intuitive user interface for spinal interventions.

2. METHODS

A preoperative CT of the lumbar spine is obtained. Medical instruments, including the surgical needle and the US transducer, are tracked using an EM sensor (Northern Digital, Waterloo, ONT, Canada). The US transducer is calibrated13 against the magnetic dynamic reference body (DRB).
This allows the construction of a 3D US volume from a series of freehand 2D US images.14 The reconstructed US volume is coregistered with the preoperative CT via a GPU-accelerated, biomechanically constrained, intensity-based registration algorithm that registers each vertebra independently.15 The GPU-accelerated implementation of the registration algorithm enables real-time performance, which is necessary in a clinical setting. The pose of the tracked US transducer is used to define the 2D texture coordinate system to "slice" through the CT volume, allowing the CT image corresponding to the live US image to be used as a visual aid. A 5 DOF magnetic surgical needle (18 gauge, 200 mm, Northern Digital, Waterloo, ONT, Canada) was used in the study.

2.1 CT to US volume registration

The core of the proposed navigational system is the GPU-accelerated, CT volume to US volume registration algorithm. It is an extension of the work by Wein et al.,16,17 in which the preoperative CT is transformed into a simulated US volume prior to the calculation of a similarity metric. We extended the work of Wein et al., which is a rigid registration, to a groupwise registration, allowing each vertebra to be registered independently and simultaneously. We further constrained the registration parameters of adjacent vertebrae with a biomechanical model to improve the registration result. Similar to previous work,18 our algorithm was implemented on the GPU (CUDA, NVIDIA Inc.), which provided the performance needed for clinical deployment.

Figure 1. Work-flow of the group-wise CT to US volume registration. All components, except for the optimizer, are implemented on the GPU (CUDA, NVIDIA Inc.).

The registration workflow is depicted in Figure 1. The preoperative CT was manually separated into sub-volumes, each containing a single vertebra (L1 to L5). The registration is an iterative optimization process, where the parameters to be optimized are the transformations (T1 to T5) that bring the CT segments into the US volume coordinate frame. Each transformation is represented by 3 rotations plus 3 translations; thus, for n vertebrae the parameter space has a dimensionality of 6 × n. During each optimization step, the transformation was applied to each CT segment. The CT segments were then fused into a single volume: where voxels overlapped, the maximum intensity was selected for the reconstructed volume, thus preserving the bone structure. Any gap in the final volume was filled with a default value that approximated the intensity of soft tissue in CT.

The US simulation was computed from the reconstructed CT volume. The detailed formulation was previously published:15-17,19 a US reflection volume and a mapped CT volume were derived from the reconstructed CT volume and combined in a weighted manner into a simulated US volume. The simulated US reflections model the US beam passing through the tissue as a ray, under the assumption that the CT intensities can be related to the acoustic impedance values used to calculate US transmission and reflection.
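The segment-fusion step described above (maximum intensity on overlap, soft-tissue default in gaps) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's CUDA implementation: the array layout, the use of NaN to mark voxels outside a segment, and the soft-tissue default value are all assumptions.

```python
import numpy as np

SOFT_TISSUE_HU = 40.0  # assumed default intensity approximating soft tissue in CT

def fuse_segments(segments):
    """Fuse transformed vertebra sub-volumes into one CT volume.

    segments: list of float arrays of identical shape, with np.nan
    marking voxels outside a given vertebra's sub-volume.
    Overlapping voxels keep the maximum intensity (preserving bone);
    voxels covered by no segment receive a soft-tissue default value.
    """
    stack = np.stack(segments)                         # (n_segments, ...volume shape)
    empty = np.all(np.isnan(stack), axis=0)            # voxels covered by no segment
    stack = np.where(np.isnan(stack), -np.inf, stack)  # gaps never win the max
    fused = stack.max(axis=0)
    fused[empty] = SOFT_TISSUE_HU
    return fused
```

In the paper this fusion runs every optimization iteration, which is why a GPU implementation matters; the NumPy version conveys only the per-voxel logic.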
For each column in the CT volume, the transmission and reflection of the beam are calculated at each voxel based on the following equations:

\Delta r(x, y, d) = \frac{\left(d^{T} \nabla\mu(x, y)\right)\left|\nabla\mu(x, y)\right|}{\left(2\mu(x, y)\right)^{2}} \quad (1)

\Delta t(x, y) = 1 - \left(\frac{\left|\nabla\mu(x, y)\right|}{2\mu(x, y)}\right)^{2} \quad (2)

r(x, y) = I(x, y - 1)\,\Delta r(x, y, d) \quad (3)

I(x, y) = \begin{cases} I(x, y - 1)\,\Delta t(x, y), & \left|\nabla\mu(x, y)\right| < \tau \\ 0, & \left|\nabla\mu(x, y)\right| \geq \tau \end{cases} \quad (4)

where d is the direction of the US beam, μ is the intensity of the CT image, Δr is the reflection coefficient, r is the simulated reflection intensity, Δt is the transmission coefficient, τ is the threshold for full reflection, and I is the intensity of the simulated US beam. Any gradient value greater than a preset threshold (450 Hounsfield units in our simulations) causes full reflection of the US beam intensity at that point, setting the incoming US beam intensity for all subsequent points on the scan line to zero. A log-compression is applied to the simulated reflection image to amplify small reflections:

r(x, y) = \frac{\log\left(1 + a\,r(x, y)\right)}{\log(1 + a)} \quad (5)

The CT intensities are mapped to values closer to those corresponding to the tissues in the US data using an approximation of the curve presented in previous work:16,17

p(x, y) = 1.36\,\mu(x, y) - 1429 \quad (6)

The final step in the US simulation is the weighting of the simulated US reflection, the mapped CT, and a bias term. A least-squares optimization is used to calculate the weights such that the values in the simulation best match the corresponding intensities in the real US volume:

f(x, y) = \begin{cases} \alpha\,p(x, y) + \beta\,r(x, y) + \gamma, & I(x, y) > 0 \\ 0, & I(x, y) = 0 \end{cases} \quad (7)

where f is the simulated US image and α, β, γ are the weights for their respective images. The Linear Correlation of Linear Combination (LC2) metric16,17 was used to compare the similarity between the actual US volume and the simulated US volume:

LC^{2} = 1 - \frac{\sum \left(U(x, y) - f(x, y)\right)^{2}}{N \times \mathrm{Var}(U)} \quad (8)

where N is the number of overlapping voxels between the US and CT volumes, and U is the actual US image intensity.
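Equations (1)-(5) amount to a per-scan-line ray march down each CT column. The sketch below illustrates that march on one column; the function names, the unit initial beam intensity, the small epsilon guards, and the compression factor `a` are assumptions for illustration (the paper's simulation runs on the GPU over whole volumes).

```python
import numpy as np

def simulate_column(mu_col, grad_mag, grad_along, tau=450.0):
    """March one simulated US scan line down a CT column (Eqs. 1-4).

    mu_col:     CT intensities along the beam
    grad_mag:   |grad mu| at each voxel
    grad_along: d^T grad mu (gradient component along beam direction d)
    tau:        full-reflection threshold (Hounsfield units)
    Returns the simulated reflection intensities r along the line.
    """
    n = len(mu_col)
    I = 1.0                # incoming beam intensity (assumed unit at skin)
    r = np.zeros(n)
    eps = 1e-12            # guard against division by zero
    for y in range(n):
        dr = grad_along[y] * grad_mag[y] / ((2.0 * mu_col[y]) ** 2 + eps)  # Eq. (1)
        dt = 1.0 - (grad_mag[y] / (2.0 * mu_col[y] + eps)) ** 2            # Eq. (2)
        r[y] = I * dr                                                      # Eq. (3)
        # Eq. (4): a strong interface fully reflects; the rest of the line is shadowed
        I = 0.0 if grad_mag[y] >= tau else I * dt
    return r

def log_compress(r, a=100.0):
    """Amplify small reflections (Eq. 5); `a` is an assumed compression factor."""
    return np.log(1.0 + a * r) / np.log(1.0 + a)
```

Note how a single above-threshold gradient (e.g., a soft-tissue/bone interface) zeroes the beam, reproducing the acoustic shadow behind bone that makes spine US so hard to read.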
2.2 Biomechanical model for the spine

In order to achieve a better convergence rate for the registration algorithm, a biomechanical model of the vertebrae was incorporated15,20 to constrain the movement between adjacent vertebrae. It models the relation between the displacement of the intervertebral structures and the reaction forces and moments:

K = \begin{pmatrix} 100 & 0 & 50 & 0 & -1640 & 0 \\ 0 & 110 & 0 & 150 & 0 & 580 \\ 50 & 0 & 780 & 0 & -760 & 0 \\ 0 & 150 & 0 & 148 \times 10^{5} & 0 & -8040 \\ -1640 & 0 & -760 & 0 & 152 \times 10^{5} & 0 \\ 0 & 580 & 0 & -8040 & 0 & 153 \times 10^{5} \end{pmatrix} \left[\mathrm{N\,mm\,rad^{-1}}\right] \quad (9)

where K is the stiffness matrix representing the intervertebral structures. The energy stored within the intervertebral structure is modeled as a general spring:

U = \frac{1}{2}\, x^{T} K x \quad (10)

where x is a vector representing the change in translation and rotation of the intervertebral link; it is calculated from the relative transform between two consecutive vertebrae. The energy is calculated across all vertebrae and normalized by the energy of a maximum misalignment (±15 mm translation along each axis and ±15° rotation about each axis):

E = \frac{\sum_{i \in V} U_{L_i, L_{i+1}}}{|V| \times U_{max}} \quad (11)

where E is the normalized energy of the system, V is the set of vertebrae to be registered, U_{L_i,L_{i+1}} represents the energy of the model calculated for adjacent vertebrae, and U_{max} is the energy of the maximum misalignment. This normalized energy acts as a penalty on the similarity metric, giving rise to:

BCLC^{2} = LC^{2} - \sigma E \quad (12)

where BCLC2 is what we call the Biomechanically Constrained Linear Correlation of Linear Combination,15 and σ is a user-defined weight that blends the biomechanical penalty with the normal LC2 intensity-based measure.

2.3 Apparatus

All components of the registration algorithm, with the exception of the numerical optimizer, are implemented with the CUDA API (NVIDIA Inc.). The optimizer, Covariance Matrix Adaptation - Evolution Strategy,21 is implemented in C++.
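The objective the optimizer evaluates combines the LC2 similarity with the biomechanical penalty of Section 2.2 (Eqs. 9-12). A minimal sketch of that penalty is shown below; the function names and the default σ are assumptions, and K carries the mixed translational (mm) and rotational (rad) units of Eq. (9).

```python
import numpy as np

# Stiffness matrix K of Eq. (9), coupling translational and rotational
# displacements of an intervertebral link.
K = np.array([
    [  100.0,   0.0,   50.0,     0.0, -1640.0,     0.0],
    [    0.0, 110.0,    0.0,   150.0,     0.0,   580.0],
    [   50.0,   0.0,  780.0,     0.0,  -760.0,     0.0],
    [    0.0, 150.0,    0.0,  148e5,      0.0, -8040.0],
    [-1640.0,   0.0, -760.0,     0.0,  152e5,      0.0],
    [    0.0, 580.0,    0.0, -8040.0,     0.0,  153e5 ],
])

def link_energy(x):
    """Spring energy of one intervertebral link (Eq. 10); x = [tx ty tz rx ry rz]."""
    return 0.5 * x @ K @ x

def normalized_energy(links, u_max):
    """Mean link energy normalized by the maximum-misalignment energy (Eq. 11)."""
    return sum(link_energy(x) for x in links) / (len(links) * u_max)

def bclc2(lc2, links, u_max, sigma=0.5):
    """Biomechanically constrained similarity (Eq. 12); sigma is a user-chosen weight."""
    return lc2 - sigma * normalized_energy(links, u_max)
```

With `u_max` computed from the ±15 mm / ±15° misalignment of Eq. (11), a perfectly aligned spine incurs zero penalty and BCLC2 reduces to plain LC2; implausible relative vertebra poses are penalized smoothly.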
The system runs on a quad-core Intel CPU (2.4 GHz) with 4 GB of RAM and an NVIDIA 8800GTS video card. A SonixRP ultrasound machine (Ultrasonix, Richmond, BC, Canada) was used in the study. The SonixRP is accompanied by software tools that allow the raw US images to be acquired digitally without the use of an analog frame grabber. An SP10-60 US transducer (Ultrasonix, Richmond, BC, Canada) operating at 6.6 MHz with a depth of 5.5 cm was used. An Aurora 6 DOF EM sensor (FlexCord, Northern Digital, Waterloo, ONT, Canada) was rigidly attached to the US transducer. A Chiba Aurora Needle (Northern Digital, Waterloo, ONT, Canada) served as the spine needle. The Chiba needle is a 2-part needle composed of an 18 gauge outer cannula and a 200 mm stylet (Figure 2a). A 5 DOF sensor is integrated into the tip of the stylet, which allows the needle tip to be tracked as it is inserted into the tissue. No tool calibration is needed for the Chiba Aurora Needle. All EM sensors were tracked at a rate of 15 Hz.

To demonstrate the functionality of the navigational system, a spine phantom mimicking the actual anatomy was built. The phantom is comprised of two parts: a spine model based on segmented patient CT data, and a surrounding gel that mimics the appearance of soft tissue in US images. Vertebrae L1 to L5 were manually segmented from patient CT data using ITK-Snap. The segmented CT volume was used to reconstruct a surface model of the spine, which was then printed using a 3D shape printer (Cimetrix Solutions, Oshawa, ONT, Canada).

Figure 2. Apparatus: (a) the Aurora 5 DOF needle; the 2-part needle has an 18 gauge cannula and a 200 mm stylet with an EM sensor embedded in the tip, (b) a printed spine (L1 to L5) that preserves the natural curvature, and (c) the spine phantom, composed of the printed spine model immersed in an agar-gelatine gel.
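Because both the needle sensor and the DRB report their poses in the tracker frame, the tracked needle tip can be expressed in the CT frame by composing the tracker reports with the CT-US registration result. The frame names and conventions below are assumptions for illustration; the paper does not spell out this chain explicitly.

```python
import numpy as np

def homogeneous(p):
    """Promote a 3-vector to homogeneous coordinates."""
    return np.append(p, 1.0)

def needle_tip_in_ct(p_tip, T_tracker_from_tip, T_tracker_from_drb, T_ct_from_drb):
    """Express the EM-tracked needle tip in the CT frame.

    Composes the (hypothetical) 4x4 homogeneous transforms:
        p_ct = T_ct_from_drb * inv(T_tracker_from_drb) * T_tracker_from_tip * p_tip
    i.e., sensor -> tracker -> DRB -> CT, so the result is invariant to
    motion of the EM field generator relative to the patient.
    """
    chain = T_ct_from_drb @ np.linalg.inv(T_tracker_from_drb) @ T_tracker_from_tip
    return (chain @ homogeneous(p_tip))[:3]
```

Routing every measurement through the DRB is the standard dynamic-referencing trick: only relative poses matter, so the phantom (or patient) may move during the procedure.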
The housing is embedded with external radiopaque markers that are visible in CT; these are used for the landmark-based registration that serves as the gold standard for the CT-US volume registration. This approach preserves the natural curvature of the spine between the patient data and the printed model (Figure 2b). The model was placed in a box embedded with external radiopaque landmarks and was filled with soft-tissue-mimicking gel (Figure 2c). The gel is based on an agar-gelatine recipe:22 1.17% agar (A9799, Sigma-Aldrich, St. Louis, MO, USA), 3.60% gelatin (G9382, Sigma-Aldrich), 1% Germall Plus (International Specialty Products, Wayne, New Jersey, USA) as a preservative, 3% cellulose (S5504, Sigma-Aldrich) for speckle, and 3.2% glycerol (G6279, Sigma-Aldrich) to adjust the speed of sound to approximately 1540 m/s. Note that the recipe percentages are by mass, not volume. The gel is designed to simulate the appearance of soft tissue in US, including speckle and refraction.

A high-resolution CT volume (0.46 × 0.46 × 0.625 mm³) and a US volume (6.6 MHz with a depth of 5.5 cm) of the phantom were acquired. The external landmarks embedded on the box were manually identified in both the CT and the reconstructed US volume, and serve as the basis for establishing the gold standard for the volume-to-volume registration algorithm.

3. RESULTS AND DISCUSSION

The validation of the CT to US registration was previously reported.15,19 The technique is able to register initial misalignments of up to 20 mm with a success rate of 82%, and those of up to 10 mm with a success rate of 98.6%. In this work, the preoperative CT has a dimension of 512 × 512 × 256 voxels and the reconstructed US volume has a dimension of 291 × 223 × 284 voxels.
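The landmark-based gold standard amounts to a least-squares rigid fit between the marker positions identified in CT and in the reconstructed US volume. The paper does not name its solver; the SVD-based closed form of Arun et al. is the standard choice and is sketched here, together with the usual fiducial registration error as a quality check. Function names are illustrative.

```python
import numpy as np

def landmark_register(src, dst):
    """Least-squares rigid transform mapping landmark set src onto dst
    (Arun/Kabsch SVD method). src, dst: (n, 3) corresponding points.
    Returns rotation R (3x3) and translation t (3,) with dst ~ src @ R.T + t.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)              # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution when the point set is degenerate.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

def fiducial_registration_error(src, dst, R, t):
    """RMS residual over the landmarks after applying (R, t)."""
    res = dst - (src @ R.T + t)
    return np.sqrt((res ** 2).sum(axis=1).mean())
```

A low fiducial registration error on the box markers justifies treating the landmark transform as ground truth against which the intensity-based volume-to-volume registration is scored.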
We implemented the registration algorithm on both the CPU (C++) and the GPU (CUDA); the timing results are as follows: for a single vertebra (rigid registration) a speed-up of 30× is obtained (18 s vs. 534 s), while for a three-vertebra (group-wise) registration a speed-up of 70× is obtained (756 s vs. 55,000 s). We are currently extending our implementation to include all five lumbar vertebrae.

Figure 3 depicts the experimental setup of the navigational system. All relevant information about the surgical scene is presented in a single display. Since the ultrasound images are acquired digitally without loss of quality, the display on the US machine is considered secondary and is not needed.

Figure 3. Apparatus of the spine-needle navigation system. The live US images are digitally acquired and displayed on the computer monitor.

Figure 4 depicts the user interface of the navigation system. A combination of DRR rendering and surface-based rendering is used. DRR rendering, shown at the top-left of Figure 4, provides a visualization familiar to the medical practitioner. DRR does not, however, depict geometrical features, due to its projective nature. Surface rendering, shown from two orthogonal angles at the bottom of the user interface, complements the DRR by providing both depth and spatial perception of the bony anatomy. Notice that the facet joints are clearly visible in the surface rendering.

The poses of the US transducer and surgical needle are sampled at 15 Hz. The pose of the US transducer is used to calculate the intersection between the viewing plane of the live US image and the bounding box containing the CT volume. This defines a textured plane with texture coordinates that "slices" through the CT volume, resulting in a 2D CT image that corresponds to the live US image. This CT-augmented user interface diminishes the need to interpret a noisy US image.
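Slicing the CT volume with the tracked image plane is a plane-resampling operation. The paper does this via GPU texture coordinates; the CPU sketch below uses nearest-neighbor lookup with plain NumPy to show the geometry only, and its pose convention (columns 0 and 1 of a 4x4 matrix spanning the image axes, column 3 as the plane origin, all in voxel units) is an assumption.

```python
import numpy as np

def slice_ct(ct, pose, shape=(256, 256), spacing=1.0):
    """Resample the 2D CT plane matching the live US image.

    ct:      3D CT volume, indexed (z, y, x) in voxel units
    pose:    4x4 transform of the US image plane in CT voxel coordinates;
             columns 0/1 span the image axes (x, y, z), column 3 is the origin
    shape:   output image size in pixels
    spacing: pixel spacing in voxel units
    """
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]].astype(float) * spacing
    origin, ax_u, ax_v = pose[:3, 3], pose[:3, 0], pose[:3, 1]
    pts = (origin[:, None]
           + ax_u[:, None] * cols.ravel()
           + ax_v[:, None] * rows.ravel())            # (3, N) points in x, y, z
    idx = np.rint(pts).astype(int)[::-1]              # nearest voxel, reordered z, y, x
    inside = np.all((idx >= 0) & (idx < np.array(ct.shape)[:, None]), axis=0)
    img = np.zeros(idx.shape[1])                      # voxels outside the CT stay zero
    img[inside] = ct[idx[0, inside], idx[1, inside], idx[2, inside]]
    return img.reshape(shape)
```

A real implementation would interpolate trilinearly (as GPU texture fetches do for free), which is precisely why the authors map the operation onto texture hardware.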
It serves as a visual verification of the CT to US registration, and provides a visual cue for needle guidance. Figure 4 depicts the 2D slice of the CT volume, the 2D slice of the reconstructed US volume, and the live US image obtained in real-time from the US machine. Note that they all show the same anatomical feature.

4. CONCLUSION AND FUTURE WORK

A US-guided, CT-augmented navigation system for spinal intervention is presented. The novelty of this system is the incorporation of a group-wise, biomechanically constrained, GPU-accelerated CT to US volume registration, allowing the deployment of the preoperative CT volume, and thus providing both surface and DRR renditions of the anatomy that are otherwise unavailable with conventional surgical procedures. The system can be used for many spinal interventions, such as facet joint injection, pedicle screw insertion, and kyphoplasty surgery.

Future work includes: the extension of the CT to US registration to all five lumbar vertebrae, improving the efficiency of the CUDA implementation for further speed-up, the visualization of the surgical instrument in the 2D CT and US images, validation of the system accuracy as a combination of the registration algorithm and EM tracker performance, and providing uncertainty visualization23 of the surgical plan. We are also investigating the possibility of eliminating the positional tracker completely through the deployment of US speckle tracking. Swine animal and clinical studies to validate the navigational system are also planned.

REFERENCES

[1] Boswell, M. V., Colson, J. D., Sehgal, N., Dunbar, E. E., and Epter, R., "A Systematic Review of Therapeutic Facet Joint Interventions in Chronic Spinal Pain," Pain Physician 10, 229-253 (Jan. 2007).

Figure 4. User interface of the proposed US-guided, CT-augmented, navigation system.
Both DRR and surface rendering of the bony anatomy are presented in orthogonal views. On the right: slices of the CT (top) and the reconstructed US volume (middle) corresponding to the live US image (bottom) are used as verification for the registration algorithm and enhance the needle guidance. The poses of the US transducer, the surgical needle, and the 2D images are updated at 15 Hz.

[2] Burton, A. W. and Mendel, E., "Vertebroplasty and Kyphoplasty," Pain Physician 6, 335-343 (July 2003).

[3] Carrino, J. A., Morrison, W. B., Parker, L., Schweitzer, M. E., Levin, D. C., and Sunshine, J. H., "Spinal Injection Procedures: Volume, Provider Distribution, and Reimbursement in the U.S. Medicare Population from 1993 to 1999," Radiology 225, 723-729 (Dec. 2002).

[4] Gangi, A., Dietemann, J.-L., Mortazavi, R., Pfleger, D., Kauff, C., and Roy, C., "CT-guided interventional procedures for pain management in the lumbosacral spine," Radiographics 18, 621-633 (May 1998).

[5] Silbergleit, R., Mehta, B. A., Sanders, W. P., and Talati, S. J., "Imaging-guided Injection Techniques with Fluoroscopy and CT for Spinal Pain Management," Radiographics 21, 927-939 (July 2001).

[6] Aguirre, D. A., Bermudez, S., and Diaz, O. M., "Spinal CT-guided interventional procedures for management of chronic back pain," Journal of Vascular and Interventional Radiology 16, 689-697 (May 2005).

[7] Slomczykowski, M., Roberto, M., Schneeberger, P., Ozdoba, C., and Vock, P., "Radiation Dose for Pedicle Screw Insertion: Fluoroscopic Method Versus Computer-Assisted Surgery," Spine 24, 975-982 (May 1999).

[8] Jacobi, V., Thalhammer, A., and Kirchner, J., "Value of a laser guidance system for CT interventions: a phantom study," European Radiology 9, 137-140 (Jan. 1999).

[9] Bruners, P., Penzkofer, T., Nagel, M., Elfring, R., Gronloh, N., Schmitz-Rode, T., Günther, R., and Mahnken, A., "Electromagnetic tracking for CT-guided spine interventions: phantom, ex-vivo and in-vivo results," European Radiology 19, 990-994 (Apr. 2009).

[10] Proschek, D., Kafchitsas, K., Rauschmann, M., Kurth, A., Vogl, T., and Geiger, F., "Reduction of radiation dose during facet joint injection using the new image guidance system SabreSource™: a prospective study in 60 patients," European Spine Journal 18, 546-553 (Apr. 2009).

[11] Greher, M., Kirchmair, L., Enna, B., Kovacs, P., Gustorff, B., Kapral, S., and Moriggl, B., "Ultrasound-guided Lumbar Facet Nerve Block: Accuracy of a New Technique Confirmed by Computed Tomography," Anesthesiology 101, 1195-1200 (Nov. 2004).

[12] Galiano, K., Obwegeser, A. A., Walch, C., Schatzer, R., Ploner, F., and Gruber, H., "Ultrasound-Guided Versus Computed Tomography-Controlled Facet Joint Injections in the Lumbar Spine: A Prospective Randomized Clinical Trial," Regional Anesthesia and Pain Medicine 32, 317-322 (July 2007).

[13] Prager, R. W., Rohling, R. N., Gee, A. H., and Berman, L., "Rapid calibration for 3-D freehand ultrasound," Ultrasound in Medicine & Biology 24(6), 855-869 (1998).

[14] Gobbi, D. G. and Peters, T. M., "Interactive Intra-operative 3D Ultrasound Reconstruction and Visualization," in [Medical Image Computing and Computer-Assisted Intervention - MICCAI 2002], Lecture Notes in Computer Science 2489, 156-163 (2002).

[15] Gill, S., Mousavi, P., Fichtinger, G., Chen, E. C. S., Boisvert, J., Pichora, D., and Abolmaesumi, P., "Biomechanically constrained groupwise US to CT registration of the lumbar spine," in [Medical Image Computing and Computer-Assisted Intervention - MICCAI 2009], Yang, G.-Z., Hawkes, D., Rueckert, D., Noble, A., and Taylor, C., eds., Lecture Notes in Computer Science 5761, 803-810, Springer (2009).

[16] Wein, W., Brunke, S., Khamene, A., Callstrom, M. R., and Navab, N., "Automatic CT-ultrasound registration for diagnostic imaging and image-guided intervention," Medical Image Analysis 12(5), 577-585 (2008). Special issue on the 10th international conference on medical imaging and computer assisted intervention - MICCAI 2007.

[17] Wein, W., Khamene, A., Clevert, D.-A., Kutter, O., and Navab, N., "Simulation and Fully Automatic Multimodal Registration of Medical Ultrasound," in [Medical Image Computing and Computer-Assisted Intervention - MICCAI 2007], Ayache, N., Ourselin, S., and Maeder, A., eds., Lecture Notes in Computer Science 4791, 136-143, Springer Berlin / Heidelberg (2007).

[18] Reichl, T., Passenger, J., Acosta, O., and Salvado, O., "Ultrasound goes GPU: real-time simulation using CUDA," in [Proceedings of SPIE], Miga, M. I. and Wong, K. H., eds., 7261, 726116 (Mar. 2009).

[19] Gill, S., Mousavi, P., Fichtinger, G., Pichora, D., and Abolmaesumi, P., "Group-wise registration of ultrasound to CT images of human vertebrae," in [Proceedings of SPIE], Miga, M. I. and Wong, K. H., eds., 7261(1), 726110 (2009).

[20] Desroches, G., Aubin, C.-E., Sucato, D. J., and Rivard, C.-H., "Simulation of an anterior spine instrumentation in adolescent idiopathic scoliosis using a flexible multi-body model," Medical and Biological Engineering and Computing 45, 759-768 (Aug. 2007).

[21] Hansen, N. and Ostermeier, A., "Adapting arbitrary normal mutation distributions in evolution strategies: the covariance matrix adaptation," in [Proceedings of the 1996 IEEE International Conference on Evolutionary Computation], 312-317 (1996).

[22] Madsen, E. L., Hobson, M. A., Shi, H., Varghese, T., and Frank, G. R., "Tissue-mimicking agar/gelatin materials for use in heterogeneous elastography phantoms," Physics in Medicine and Biology 50, 5597-5618 (Dec. 2005).

[23] Simpson, A. L., Ma, B., Chen, E. C. S., Ellis, R. E., and Stewart, A. J., "Using Registration Uncertainty Visualization in a User Study of a Simple Surgical Task," in [Medical Image Computing and Computer-Assisted Intervention - MICCAI 2006], Larsen, R., Nielsen, M., and Sporring, J., eds., Lecture Notes in Computer Science 4191, 397-404, Springer Berlin / Heidelberg (2006).

