Karimi et al. BMC Medical Imaging (2016) 16:11, DOI 10.1186/s12880-016-0112-5

TECHNICAL ADVANCE — Open Access

A sinogram denoising algorithm for low-dose computed tomography

Davood Karimi1*, Pierre Deman2, Rabab Ward1 and Nancy Ford2

Abstract

Background: From the viewpoint of the patients' health, reducing the radiation dose in computed tomography (CT) is highly desirable. However, projection measurements acquired under low-dose conditions will contain much noise. Therefore, reconstruction of high-quality images from low-dose scans requires effective denoising of the projection measurements.

Methods: We propose a denoising algorithm that is based on maximizing the data likelihood and sparsity in the gradient domain. For Poisson noise, this formulation automatically leads to a locally adaptive denoising scheme. Because the resulting optimization problem is hard to solve and may also lead to artifacts, we suggest an explicitly local denoising method by adapting an existing algorithm for normally-distributed noise. We apply the proposed method on sets of simulated and real cone-beam projections and compare its performance with two other algorithms.

Results: The proposed algorithm effectively suppresses the noise in simulated and real CT projections. Denoising of the projections with the proposed algorithm leads to a substantial improvement of the reconstructed image in terms of noise level, spatial resolution, and visual quality.

Conclusion: The proposed algorithm can suppress very strong quantum noise in CT projections. Therefore, it can be used as an effective tool in low-dose CT.

Keywords: Computed tomography, Sinogram denoising, Poisson noise, Total variation, Low-dose CT

Background

Computed tomography (CT) is one of the most widely used imaging modalities in medicine, and its usage has been growing consistently over the past decades [1].
Although CT is an indispensable tool in modern medicine, one of its drawbacks is that the radiation used in this type of imaging can be harmful to the patient's health. Reducing the radiation dose requires reducing the number of projection measurements or reducing the radiation intensity, which in turn results in noisier measurements. Reconstructing a high-quality CT image from such measurements is a great challenge. The traditional image reconstruction methods in CT are based on filtered backprojection (FBP). Although FBP-based methods are fast and relatively easy to implement, they perform very poorly when the number of projections is reduced or when the measurements are very noisy. With increased awareness of the harmful effects of excessive radiation, recent years have witnessed a new interest in statistical reconstruction methods in CT. These methods promise to reconstruct a high-quality image from undersampled or noisy projections. Statistical image reconstruction methods in CT are based, as their name implies, on including the statistics of the detected photon counts in the reconstruction process. They can be classified into three categories [2, 3]:

(A) Methods that work on the raw data (sinogram). These are mostly smoothing techniques that aim at reducing the Poisson noise in the projection data. The processed sinogram data are then used to reconstruct the image using an iterative or filtered backprojection (FBP) method.

(B) Methods that work on the reconstructed image. These are also denoising techniques, but operate in the image domain.

(C) Methods that use statistical modeling of the imaging process to devise iterative algorithms for image reconstruction.

*Correspondence: karimi@ece.ubc.ca. 2366 Main Mall, Vancouver, BC V6T 1Z4, Canada. Full list of author information is available at the end of the article.

© 2016 Karimi et al.
Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).

Whereas the methods in category (C) usually lead to better image quality, the first two types of methods are usually easier to implement, faster, and independent of the image reconstruction method. Among the first two categories, category (A) is usually much more effective. This is because, while we have a good understanding of the properties of the noise in the raw (i.e., projection) data, the statistics of the noise in the reconstructed image are not as well understood. Hence, methods that filter the reconstructed image are post-processing algorithms that cannot use the photon statistics, and their performance is in general inferior to filtering in the raw data domain.

Previous studies in sinogram denoising are numerous, and they present a diverse set of possible approaches to the problem. Because the variance of the Poisson noise changes with the signal amplitude, adaptive filters are a natural choice, and they have been researched extensively [4, 5]. Bayesian approaches, usually maximizing a weighted sum of the data likelihood and a regularity term, have also been proposed in several studies [6, 7]. These algorithms can be very elaborate because the resulting optimization problem is usually hard to solve.
At the other end of the spectrum of algorithmic complexity, there exist very simple methods involving shift-invariant low-pass filters that reduce the high-frequency variations in the sinogram [8, 9]. As one might expect, these methods are much less effective because they do not consider the signal-dependent nature of the noise. Another class of algorithms that have been used for sinogram denoising includes multi-resolution methods involving short-time Fourier and wavelet transforms [10, 11]. These methods are based on a standard signal processing approach of separating the noise from the signal in the transform domain.

A number of studies have focused on denoising the projection data in the Radon space (i.e., after the logarithm transformation). Even though the photon counts follow a Poisson distribution, the noise in the Radon space follows an approximately Gaussian distribution [12, 13]. Therefore, some studies have proposed algorithms for noise suppression in the Radon space based on the assumption that the noise is normally distributed [14, 15]. The Gaussianity assumption significantly simplifies the problem and widens the range of tools that can be employed in algorithm development. However, as the photon counts decrease, the distribution of the noise in the Radon space can depart significantly from a Gaussian distribution [13]. Therefore, these methods become less efficient in the low-dose setting, which is exactly where noise suppression is most needed.

In the past decade, image denoising has witnessed a growing interest in patch-based methods. The two main classes of patch-based denoising methods are dictionary-based methods and nonlocal-means methods. Dictionary-based denoising methods are based on the assumption that small patches of natural images have a sparse representation in a (usually overcomplete) dictionary that can be learned from training data [16, 17].
As opposed to more traditional denoising methods that use an off-the-shelf basis (e.g., a wavelet basis), these dictionary-based methods adapt the dictionary to the specific class of images at hand. If the dictionary is well-trained, genuine image features will have a sparse representation in the dictionary, whereas noise will not. Therefore, sparse representation of image patches will lead to denoising. Dictionary-based denoising methods have been shown to be highly successful in denoising of general natural images [16, 17] as well as medical images [18, 19]. The nonlocal-means methods, on the other hand, exploit the self-similarities in natural images. To estimate the denoised value of each pixel, they consider a small patch centered on that pixel and search the image for patches that are similar to it. The true (i.e., denoised) value of the pixel is estimated by some collaborative filtering of the similar patches found. This collaborative filtering was in the form of a weighted averaging in the original nonlocal-means algorithm [20, 21], but has much more elaborate forms in more recent algorithms [22]. Although the origins of these denoising methods go back approximately 10 years, they have been extended and improved in many ways, and they represent some of the best available image denoising methods. One of the limitations of patch-based methods, especially for the large 3D images that are common in medical applications, is their high computational demand.

In this study, we suggest smoothing the noisy sinograms by minimizing a cost function that consists of a data likelihood term and a regularization term that promotes gradient sparsity. The resulting optimality condition suggests an adaptive filter that carries out a stronger denoising where the signal intensity is higher, which is the expected outcome under a Poisson distribution.
Instead of solving the resulting optimization problem directly, we suggest modifying an existing algorithm such that it approximates the exact solution locally. Therefore, our approach computes the denoised value of each sinogram pixel by solving a local optimization problem in a small neighborhood around that pixel. We evaluate the performance of the suggested approach by applying it to noisy simulated projections and to low-dose projections acquired from a micro-CT scanner.

Methods

Given a noisy image, $v$, the goal is to find an image $u$ that ideally preserves all important features in the image while reducing the unwanted noise. Since this is an ill-posed inverse problem, additional information needs to be introduced to regularize the problem. The denoised image can then be found by minimizing a cost function of the form:

$$E_\lambda(u) = \frac{1}{2}\|u - v\|_2^2 + \lambda R(u) \quad (1)$$

The first term in $E_\lambda(u)$ reflects the requirement that the denoised image should be close to the noisy image, and the second term is the regularization. In the context of the denoising problem formulated as (1), a proper regularization is necessary because without a regularizing term the noisy image itself is the only solution of the problem; i.e., if $\lambda = 0$ then $u = v$ is the only minimizer of $E_\lambda(u)$. A famous example of variational denoising methods is the Rudin-Osher-Fatemi (ROF) model, which has the following form [23]:

$$E_\lambda(u) = \frac{1}{2}\|u - v\|_2^2 + \lambda \int_\Omega |\nabla u| \quad (2)$$

where $\Omega$ denotes the image domain and $\lambda$ is the regularization parameter. The second term, where $\nabla u$ is the gradient of $u$, is the regularization term. The choice of the 2-norm in the first term stems from the assumption of additive Gaussian noise. For the case of the Poisson distribution considered in this study, this term has to be modified.

Let $u(i,j)$ and $v(i,j)$ denote pixels of discretized versions of $u$ and $v$.
For an arbitrary pixel location, from the probability mass function of a Poisson-distributed variable:

$$P(v(i,j); u(i,j)) = \frac{e^{-u(i,j)}\, u(i,j)^{v(i,j)}}{v(i,j)!} \quad (3)$$

Assuming the pixel values are independent, for the entire image we will have:

$$P(v|u) = \prod_{i,j} \frac{e^{-u(i,j)}\, u(i,j)^{v(i,j)}}{v(i,j)!} \quad (4)$$

We ignore the denominator, which is independent of $u$. Since we want to find a functional to minimize, we consider the negative logarithm of the numerator:

$$-\log P(v|u) \propto \sum_{i,j} u(i,j) - v(i,j)\log u(i,j) \quad (5)$$

With this modified fidelity term, the new energy functional to be minimized has the following form [24]:

$$E_\lambda(u) = \int_\Omega (u - v\log u) + \lambda \int_\Omega |\nabla u| \quad (6)$$

Now, compare the optimality conditions for the two models (obtained from the Euler-Lagrange equation):

$$(u - v) + \lambda p = 0 \quad \text{for the ROF model in (2)}$$
$$(u - v) + (\lambda u)\,p = 0 \quad \text{for the new model in (6)} \quad (7)$$

where $p$ is a sub-gradient of $\int |\nabla u|$. The only difference between the two equations is the dependence of the regularization parameter on $u$ in the new model. The new model suggests that a stronger smoothing must be applied where the signal has larger values. This agrees with what we expect, since under a Poisson distribution the noise level is proportional to the signal intensity.

Minimization of the new functional in (6) is not a straightforward problem. One approach is to first replace $|\nabla u|$ with $\sqrt{|\nabla u|^2 + \epsilon}$ for a small $\epsilon > 0$ and then apply a gradient descent iteration [24]. Another approach, suggested in [25], is to use a Taylor expansion of the data fidelity term and to minimize this approximate model. In this paper, we follow an algorithm that was developed for the original ROF model [26]. However, we denoise each sinogram pixel separately by minimizing $E_\lambda(u)$ in a small neighborhood around that pixel, and with a regularization parameter inspired by the new model.

Total variation denoising has proved to be very effective when applied to piecewise-constant images.
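As a concrete check of the argument above, the discrete Poisson data-fidelity term in (5) can be evaluated directly. The sketch below is an illustrative NumPy helper; the function name `poisson_nll` and the clipping guard are our choices, not part of the paper:

```python
import numpy as np

def poisson_nll(u, v):
    """Poisson data-fidelity term sum(u - v*log(u)) from eq. (5),
    up to the u-independent term log(v!), for a denoised estimate u
    and noisy counts v. (Illustrative helper, not from the paper.)"""
    u = np.clip(u, 1e-12, None)  # guard against log(0)
    return float(np.sum(u - v * np.log(u)))
```

For a fixed noisy image `v`, this term is minimized at `u = v`, consistent with the maximum-likelihood interpretation of the fidelity term.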
On more complex images, however, it can lead to undesired effects. Most importantly, on images with piecewise-linear features, i.e., regions with a smooth change in intensity, ROF's original model leads to staircase artifacts. This is because, by design, ROF's model favors piecewise-constant solutions. We believe that this can be a major problem when applying TV denoising to sinogram images, because even if the imaged object is piecewise-constant, its projections can be very far from piecewise-constant. This is easy to visualize: a feature with uniform intensity in the imaged object will have a piecewise-constant projection in the sinogram only if different rays from the x-ray source to the detector plane cut the same length through that feature, which is a very unlikely scenario. Hence, the projections are very likely to contain features of higher orders of regularity (i.e., piecewise-linear, piecewise-quadratic, etc.) that would suffer from a staircase effect when treated with the ROF model. A heavily researched approach to reducing the staircase effect is to replace the $\ell_1$ norm of the gradient with the $\ell_1$ norm of higher-order differential operators [27, 28]. A less sophisticated approach, but one that has a trivial implementation, is to perform the total variation minimization locally. This approach has also been shown to alleviate the staircase effect [29]. Moreover, with a local minimization strategy, if the size of the neighborhood considered in the minimization is small enough, one can safely assume that the signal intensity and noise level are approximately constant. Therefore, a solution based on ROF's original model will be a good approximation to the solution of the new model based on Poisson noise. This way, we can utilize efficient existing algorithms for the ROF model while avoiding the staircase artifacts.

Since our approach is based on Chambolle's famous algorithm [26], we briefly describe this algorithm here.
If we denote by $X$ and $Y$ the space of the image $u$ and of its gradient, $\nabla u$, respectively, then an alternative definition of the total variation of $u$ is:

$$\sum_{i,j} |\nabla u|_{i,j} = \sup\{\langle p, \nabla u\rangle_Y : p \in Y,\ |p_{i,j}| \le 1\} \quad (8)$$

Chambolle introduced the discrete divergence operator as the dual of the gradient operator, i.e., $\langle p, \nabla u\rangle_Y = \langle -\mathrm{div}\, p, u\rangle_X$. In the discrete image domain:

$$(\mathrm{div}\, p)_{i,j} = \left(p^1_{i,j} - p^1_{i-1,j}\right) + \left(p^2_{i,j} - p^2_{i,j-1}\right) \quad (9)$$

Because of the duality of the gradient and divergence operators, the total variation can also be written as:

$$\sum_{i,j} |\nabla u|_{i,j} = \sup_{z \in K} \langle z, u\rangle_X, \quad K = \{\mathrm{div}\, p : p \in Y,\ |p_{i,j}| \le 1\} \quad (10)$$

The minimizer of the energy functional in (2) is then obtained by projecting $v$ onto the set $\lambda K$:

$$u = v - \pi_{\lambda K}(v) \quad (11)$$

which is equivalent to minimizing the Euclidean distance between $v$ and $\lambda\,\mathrm{div}\, p$, and this can be achieved via the following iteration for computing $p$:

$$p^0 = 0; \quad p^{n+1}_{i,j} = \frac{p^n_{i,j} + \tau \left(\nabla(\mathrm{div}\, p^n - v/\lambda)\right)_{i,j}}{1 + \tau \left|\left(\nabla(\mathrm{div}\, p^n - v/\lambda)\right)_{i,j}\right|} \quad (12)$$

where $\tau > 0$ is the step size. For a small enough step size, $\tau \le 1/8$, the algorithm is guaranteed to converge [26].

Instead of a global solution, we minimize the energy functional (6) in a small neighborhood of each pixel. To this end, let us denote by $\omega$ the set of indices that define the desired neighborhood around the current pixel. For example, for a square neighborhood of size $(2m+1) \times (2m+1)$ pixels, which we used in this study, $\omega = \{(i,j) : i,j = -m, \dots, m\}$. We also consider a normalized Gaussian weighting function on this neighborhood:

$$W(i,j) = \exp\left(-\frac{i^2 + j^2}{h^2}\right) \quad (13)$$

The local problem then becomes that of minimizing the following functional:

$$E_{\lambda,W}(u') = \frac{1}{2}\|u' - v_\omega\|_W^2 + \lambda' \int_\omega |\nabla u'| \quad (14)$$

where $\|\cdot\|_W^2$ denotes the weighted norm with weights $W$ and $v_\omega$ denotes the noisy image restricted to the window $\omega$ around the current pixel. Note that here $u'$ is a function defined only on the neighborhood $\omega$. The solution of this local optimization problem is similar to Chambolle's algorithm described above [29].
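For readers who want to experiment, Chambolle's global fixed-point iteration (12) can be sketched in NumPy as follows. The helper names, boundary handling, and iteration count are our choices (a sketch, not a reference implementation); the step size is kept at τ = 1/8 to satisfy the convergence condition:

```python
import numpy as np

def grad(u):
    # Forward differences with zero (Neumann) boundary at the far edges.
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # Discrete divergence, the negative adjoint of grad (eq. 9 style).
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def chambolle_rof(v, lam, tau=0.125, n_iter=100):
    """ROF denoising u = v - lam*div(p) via the fixed-point iteration (12)."""
    px = np.zeros_like(v); py = np.zeros_like(v)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - v / lam)
        denom = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / denom
        py = (py + tau * gy) / denom
    return v - lam * div(px, py)
```

On a noisy, nearly flat image, the returned image has a visibly lower standard deviation than the input, as expected from TV smoothing.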
The only difference is in the update formula for $p$:

$$p^0 = 0; \quad p^{n+1}_{i,j} = \frac{p^n_{i,j} + \tau \left(\nabla(D^{-1}\mathrm{div}\, p^n - v/\lambda')\right)_{i,j}}{1 + \tau \left|\left(\nabla(D^{-1}\mathrm{div}\, p^n - v/\lambda')\right)_{i,j}\right|} \quad (15)$$

where $D$ is a diagonal matrix whose diagonal elements are the values of $W$.

The regularization parameter, $\lambda'$, must be chosen according to (7). The simplest approach is to set $\lambda' = \lambda v(i,j)$, where $\lambda$ is a global regularization parameter and $v(i,j)$ is the value of the current pixel in the noisy image. Since $v(i,j)$ is noisy, a better choice is to use a weighted local average as the estimate of the intensity of the true image at the current pixel (note that the maximum-likelihood estimate of the mean of a Poisson process from a set of observations is the arithmetic mean of the observations). Therefore, we suggest the following choice for the local regularization parameter:

$$\lambda' = \lambda\, \frac{\sum_{-a \le i',j' \le a} W'(i',j')\, v(i-i', j-j')}{\sum_{-a \le i',j' \le a} W'(i',j')}, \quad \text{where } W'(i,j) = \exp\left(-\frac{i^2 + j^2}{h'^2}\right) \quad (16)$$

There are several parameters in the proposed algorithm. The global regularization parameter $\lambda$ controls the strength of the denoising. It should be set based on the desired level of smoothing. The parameter $m$ sets the size of the neighborhood considered around each pixel, which in this study was chosen to be a square window of size $(2m+1) \times (2m+1)$. Numerical experiments in [29] have shown that in total variation denoising, the influence map of a pixel is usually limited to a radius of approximately 10 pixels for typical values of the regularization parameter. Therefore, a good value for $m$ would be around 10, which is the value we used for all experiments reported in this paper. The width of the Gaussian weighting function $W$ is adjusted through $h$. We used $h = 2m$, which we found empirically to work well. Similarly, $a$ and $h'$ in (16) determine the size of the window and the weights used for determining the local regularization parameter. These have to be set based on the noise level in the image; larger values should be chosen when the noise is stronger.
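The locally adaptive regularization parameter in (16) is simply a Gaussian-weighted local mean of the noisy sinogram, scaled by the global λ. A minimal sketch of this map (the function name and the edge padding are our assumptions):

```python
import numpy as np

def local_lambda(v, lam, a=4):
    """Locally adaptive regularization map per eq. (16):
    lambda'(i,j) = lam * Gaussian-weighted local mean of v,
    with Gaussian width h' = 2a. Edges are handled by replication."""
    h2 = (2.0 * a) ** 2
    ii, jj = np.meshgrid(np.arange(-a, a + 1), np.arange(-a, a + 1), indexing="ij")
    W = np.exp(-(ii ** 2 + jj ** 2) / h2)
    W /= W.sum()                       # normalize the weights
    vp = np.pad(v, a, mode="edge")     # replicate-pad the borders
    out = np.zeros(v.shape, dtype=float)
    for di in range(2 * a + 1):        # explicit (small) convolution
        for dj in range(2 * a + 1):
            out += W[di, dj] * vp[di:di + v.shape[0], dj:dj + v.shape[1]]
    return lam * out
```

On a constant image the weighted local mean equals the pixel value, so the map reduces to λ times that value, matching the simple choice λ′ = λ v(i, j) in that case.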
In this study, we used $a = 4$ and $h' = 2a$.

A simple implementation of the proposed algorithm can be computationally intensive, because it involves solving a minimization problem, though a very small one, for every individual pixel in the sinogram. This would be a major drawback, because a big advantage of sinogram denoising methods, compared to iterative image reconstruction methods, is the shorter computational time. To reduce the computational time, after minimizing the local cost function (14) around the current pixel, we replace the values of all pixels in the window of size $(2a+1) \times (2a+1)$ around the current pixel, instead of just the center pixel, and then shift the window by $(2a+1)$. Our extensive numerical experiments with simulated and real projections showed that with this approach the results are almost identical to the case where only one pixel is denoised at a time. This is the approach that we followed in all experiments reported in this paper.

Results and discussion

We evaluated the proposed denoising algorithm on simulated projections and on two sets of real low-dose projections, of a physical phantom and of a rat, obtained using a micro-CT scanner. We compared the performance of our proposed algorithm with two other methods:

1. A bilateral filtering algorithm proposed in [14]. This algorithm works by minimizing a cost function of the following form:

$$E(u) = \sum_k \sum_{k' \in \mathcal{N}_k} P_1(k,k')\, P_2(k,k')$$

where $k$ denotes the index of a pixel, $\mathcal{N}_k$ is a neighborhood around this pixel, and $P_1$ and $P_2$ are two cost functions in terms of the spatial distance and the difference in pixel values, respectively, both having Gaussian forms:

$$P_1(k,k') = \exp\left(-\frac{(k-k')^2}{2d^2}\right), \quad P_2(k,k') = 1 - \exp\left(-\frac{(u_k - u_{k'})^2}{2\sigma^2}\right)$$

The algorithm parameters include $d$ and $\sigma$, which control the range of weighting for $P_1$ and $P_2$.
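As a point of reference, the bilateral filter weights each neighbor by the product of a spatial Gaussian (from P1) and an intensity Gaussian; minimizing a cost with the penalty P2 = 1 − exp(·) corresponds to weighting neighbors by exp(·). The 1-D sketch below is illustrative only; the function names and defaults are our assumptions, not the implementation of [14]:

```python
import numpy as np

def bilateral_weights(u, k, d=5 / 6, sigma=1.0, w=5):
    """Normalized bilateral weights around index k of a 1-D signal u:
    spatial Gaussian (width d) times intensity Gaussian (width sigma)."""
    half = w // 2
    idx = np.arange(max(k - half, 0), min(k + half + 1, len(u)))
    spatial = np.exp(-((idx - k) ** 2) / (2 * d ** 2))          # closeness in space
    rng = np.exp(-((u[idx] - u[k]) ** 2) / (2 * sigma ** 2))    # closeness in value
    wts = spatial * rng
    return idx, wts / wts.sum()

def bilateral_filter_1d(u, d=5 / 6, sigma=1.0, w=5):
    """Apply the bilateral weights at every sample of a 1-D signal."""
    out = np.empty(len(u), dtype=float)
    for k in range(len(u)):
        idx, wts = bilateral_weights(u, k, d, sigma, w)
        out[k] = np.dot(wts, u[idx])
    return out
```

Because the weights are normalized, a constant signal passes through unchanged, while isolated noise spikes are pulled toward their spatially and radiometrically similar neighbors.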
The authors in [14] suggest a fixed filter length of $w = 5$ and a fixed $d = w/6 = 5/6$ (which they found empirically to be a good choice), and try a range of values for $\sigma$ between 0.7 and 2.8. In this study, we applied the bilateral filtering for several values of $\sigma$ in this range and applied the proposed algorithm for several values of the regularization parameter, $\lambda$.

2. A nonlocal principal component analysis (NL-PCA) algorithm proposed in [30]. In this method, patches of the image are first clustered using the K-means algorithm. For all patches in a cluster, a Poisson PCA (also known as exponential PCA) is performed to denoise them. The PCA problem is solved via alternating Newton's iterations. The denoised patches are returned to their original locations and averaged (to account for the patch overlaps) in order to form the denoised image. Patch-based methods are computationally very intensive. Therefore, with this algorithm we used parameter settings that resulted in a reasonable computational time.

Simulated data

We simulated 360 noisy cone-beam projections, from 0° to 359° with 1° increments, from a 3D Shepp-Logan phantom according to the following model [12, 31]:

$$N_d^i = N_0^i\, e^{-\int_i \mu\, ds} \quad (17)$$

where $N_d^i$ and $N_0^i$ denote, respectively, the number of detected and incident photons for the ray that extends from the source to detector $i$, and $\int_i \mu\, ds$ is the line integral of the attenuation coefficient along that ray. We assumed $N_0^i$ to be constant for all $i$, i.e., no bowtie filtration, and $N_d^i$ to be a Poisson-distributed random variable with the expected value given by (17). We used the two values $N_0^i = 500$ and $2000$ to simulate two sets of projections, which we will call high-noise and low-noise, respectively. The phantom size was 512 × 512 × 512 voxels and the projections were each 700 × 700 pixels in size.

Figure 1 shows one-dimensional profiles of the noisy and denoised projections.
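The noisy-projection model (17) is straightforward to reproduce: scale the incident photon count by the Beer-Lambert attenuation factor and draw Poisson samples. A sketch under the same assumptions as above (constant N0, no bowtie filtration; the function name is ours):

```python
import numpy as np

def simulate_projection(line_integrals, N0=500, rng=None):
    """Draw detected counts N_d ~ Poisson(N0 * exp(-integral of mu ds)),
    per eq. (17). `line_integrals` holds the line integral of the
    attenuation coefficient for each detector pixel."""
    rng = np.random.default_rng(rng)
    expected = N0 * np.exp(-np.asarray(line_integrals, dtype=float))
    return rng.poisson(expected)
```

With zero attenuation everywhere, the detected counts fluctuate around N0, and the relative noise grows as N0 shrinks, which is why the N0 = 500 projections are the high-noise set.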
The plots in this figure show that the proposed TV-based denoising removes the noise significantly and appears to be superior to bilateral filtering and NL-PCA. For a quantitative comparison, because we have access to the true noise-free projections, we use the following criteria:

(a) Root mean square of the error (RMSE), where the error is defined as the difference between the denoised and the true (i.e., noise-free) projections.

(b) Mutual information (MI), which treats the projections as stochastic processes [32, 33]:

$$MI(u^*, \hat{u}) = \sum_{i=1}^{h} \sum_{j=1}^{h} p(u^*_i, \hat{u}_j)\log\left(\frac{p(u^*_i, \hat{u}_j)}{p(u^*_i)\,p(\hat{u}_j)}\right)$$

Here, $u^*$ and $\hat{u}$ represent the true and denoised projections, respectively. We used histograms of $u^*$ and $\hat{u}$ for estimating $p(u^*_i)$ and $p(\hat{u}_j)$, and their joint histogram for estimating $p(u^*_i, \hat{u}_j)$; $h$ is the number of bins in the histograms. We normalized the computed $MI(u^*, \hat{u})$ by dividing it by $MI(u^*, u^*)$.

Figure 2 shows the plots of RMSE and MI for the proposed algorithm, bilateral filtering, and NL-PCA. For the proposed algorithm, we have plotted these values for 10 logarithmically-spaced values of $\lambda$ in the range [0.01, 1], which we found to give the best denoising results. For bilateral filtering, following [14], we have plotted these values for 10 linearly-spaced values of $\sigma$ in the range [0.5, 3.2]. From these plots it is clear that the proposed algorithm achieves significantly better denoising results.

Fig. 1 Two typical one-dimensional profiles of the noisy and denoised projections simulated from the Shepp-Logan phantom. The thin blue curve in each plot shows the corresponding noise-free projection. The left column is for the high-noise case and the right column is for the low-noise case.
a: the noisy sinogram; b: denoised using bilateral filtering; c: denoised with NL-PCA; d: denoised with the proposed TV-based algorithm.

The proposed algorithm thus clearly outperforms bilateral filtering and NL-PCA in these plots. The best results with the proposed algorithm are achieved with $\lambda$ values around 0.1, and the denoising is too strong for $\lambda > 1$. For bilateral filtering, we found that the best denoising results were usually obtained for values of $\sigma$ close to 3.0, and the performance did not improve, or slightly deteriorated, when $\sigma$ was increased beyond 3.2. The solid squares on these plots show the optimum value of the corresponding parameter (i.e., lowest RMSE or highest MI). The phantom profiles shown in Fig. 1 for the proposed algorithm and bilateral filtering were obtained with the parameter values that resulted in the lowest RMSE.

Fig. 2 Comparison between different denoising algorithms in terms of RMSE and MI for the high-noise projections (top row) and low-noise projections (bottom row) simulated from the Shepp-Logan phantom. Values for the bilateral filtering algorithm are plotted as a function of σ (the bottom horizontal axis), whereas the values for the proposed algorithm are plotted as a function of the regularization parameter λ (the top horizontal axis). The solid squares indicate the points of optimum.

Real cone-beam projections of a physical phantom

Cone-beam projections were acquired from a physical phantom using a Gamma Medica eXplore CT 120 micro-CT scanner. The scanner had a flat-panel detector. The distances from the source to the detector and to the axis of rotation were 449.29 mm and 397.04 mm, respectively. We generated two scans of the same phantom:

1. Low-noise scan, consisting of 720 projections between 0° and 360° at 0.5° intervals. The tube voltage, tube current, and exposure time were 70 kV, 40 mA, and 25 ms, respectively.

2. High-noise scan, consisting of 240 projections between 0° and 360° at 1.5° intervals.
The tube voltage, tube current, and exposure time (per projection) were 50 kV, 32 mA, and 16 ms, respectively.

Since we do not have access to the true (i.e., noise-free) projections, we compare the performance of the denoising algorithms in terms of the quality of the reconstructed images. We used the low-noise scan to reconstruct a high-quality image of the phantom using the Feldkamp-Davis-Kress (FDK) algorithm [34]. We will refer to this as "the reference image". To evaluate the denoising algorithms, we applied them to the high-noise projections, reconstructed the image of the phantom from the denoised projections using the FDK algorithm, and compared the reconstructed image with the reference image. Similar to the experiment with the simulated projections, we performed the denoising for 10 linearly-spaced values of $\sigma$ in the range [0.5, 3.2] for bilateral filtering. Similarly, we ran the proposed algorithm with 10 logarithmically-spaced values of $\lambda$ in the range [0.001, 0.1]. Each projection was 875 × 568 pixels in size, and the size of the reconstructed image of the phantom was 880 × 880 × 650 voxels, with isotropic voxels of 0.1 × 0.1 × 0.1 mm³.

In order to assess the overall quality of the reconstructed images, we use the following two criteria:

(a) Root mean square of the error (RMSE), where the error is defined as the difference between the image reconstructed from denoised projections and the reference image.

(b) Structural similarity index (SSIM) between the reconstructed image and the reference image. SSIM is used as a measure of the overall closeness of two images and is defined as [35]:

$$SSIM(x, \hat{x}) = \frac{(2\mu_x \mu_{\hat{x}} + C_1)(2\sigma_{x\hat{x}} + C_2)}{\left(\mu_x^2 + \mu_{\hat{x}}^2 + C_1\right)\left(\sigma_x^2 + \sigma_{\hat{x}}^2 + C_2\right)}$$

where $\mu_x$ and $\sigma_x$ represent the mean and standard deviation of the image $x$, $\sigma_{x\hat{x}}$ is its covariance with the image $\hat{x}$, and $C_1$ and $C_2$ are constants.

The plots of RMSE and SSIM are shown in Fig. 3.
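A single-window (global) variant of the SSIM formula above can be written compactly. Note that full SSIM implementations compute the statistics over sliding local windows and average the resulting map; the sketch below (with arbitrarily chosen constants C1 and C2) only illustrates the formula itself:

```python
import numpy as np

def global_ssim(x, x_hat, C1=1e-4, C2=9e-4):
    """SSIM computed over the whole image as one window (a simplified,
    global variant of the standard locally-windowed SSIM [35])."""
    mx, mxh = x.mean(), x_hat.mean()
    vx, vxh = x.var(), x_hat.var()
    cov = np.mean((x - mx) * (x_hat - mxh))        # covariance sigma_{x,x_hat}
    return float(((2 * mx * mxh + C1) * (2 * cov + C2)) /
                 ((mx ** 2 + mxh ** 2 + C1) * (vx + vxh + C2)))
```

For identical images the numerator and denominator coincide and the index equals 1; any mismatch in mean, variance, or structure pulls it below 1.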
Compared with both bilateral filtering and NL-PCA, the image reconstructed from the projections denoised using the proposed algorithm has a significantly lower RMSE and a higher SSIM. The best results in terms of SSIM are obtained with $\lambda = 0.0129$ for the proposed algorithm and with $\sigma = 2.6$ for the bilateral filtering algorithm.

The plots in Fig. 3 reflect the overall closeness between the images reconstructed from the denoised projections and the reference image. The imaged phantom had different modules that allowed for a more detailed evaluation of the quality of the reconstructed images [36]. A set of fine coils inside the phantom allows for visual inspection of the spatial resolution in the reconstructed image. Figure 4 shows two of these coils in the images reconstructed from noisy and denoised projections. These coils have thicknesses of 500 μm and 200 μm, corresponding to spatial resolutions of 1 and 2.5 line pairs per mm, respectively. The image shown for the proposed algorithm corresponds to $\lambda = 0.0129$ and the image shown for bilateral filtering corresponds to $\sigma = 2.6$. As we mentioned above, these parameter values led to the highest SSIM. The images show a marked improvement in image quality via sinogram denoising. It also appears that the proposed algorithm leads to a smoother image without affecting the spatial resolution. In Fig. 5 we show a profile through the center of the 500-μm coil for the images reconstructed from noisy and denoised projections, and also the difference between them and the reference image, for a closer comparison.
It is clear from these profiles that the image reconstructed from the projections denoised using the proposed algorithm is closer to the reference image.

In order to compare the denoising algorithms in terms of the trade-off between noise and spatial resolution, we followed an approach similar to that suggested in [14]. Specifically, we computed the following two numbers as measures of spatial resolution and noise level in the reconstructed image of the phantom:

Measure of spatial resolution. The imaged phantom included a slanted edge consisting of a plastic-air boundary, specially designed for accurate estimation of the modulation transfer function (MTF). We used the method proposed in [37] for estimation of the MTF over the range of spatial frequencies between 0 and 5 mm⁻¹. As also suggested in [14], we use the spatial frequency at which the normalized MTF reaches a value of 0.10 as a representative number for spatial resolution.

Measure of noise level. A uniform polycarbonate disk is included in the phantom for the purpose of assessing the noise level in the reconstructed image. We selected five cubes, each 10 × 10 × 10 voxels, at different locations within this disk and computed the standard deviation of the voxel values in each cube. We use the average standard deviation of the voxel values in these cubes as a measure of noise level.

We computed the above two values for the bilateral filtering algorithm with 10 linearly-spaced values of $\sigma$ in the range [0.5, 3.2] and for the proposed TV-based algorithm with 10 logarithmically-spaced values of $\lambda$ in the range [0.001, 0.1]. In Fig. 6, we show plots of these two values for the three denoising algorithms. Note that a high spatial resolution and a low noise level are desirable. Therefore, all three denoising algorithms have improved the overall quality of the reconstructed image.
Fig. 3 Performance comparison between different sinogram denoising algorithms in terms of RMSE and SSIM on the scan of the physical phantom. Values for the bilateral filtering algorithm are plotted as a function of σ (the bottom horizontal axis), whereas the values for the proposed algorithm are plotted as a function of the regularization parameter λ (the top horizontal axis). The solid squares indicate the points of optimum.

Fig. 4 The 200-μm (top row) and 500-μm (bottom row) coils in the images reconstructed from noisy and denoised projections of the physical phantom; a: the reference image, b: without denoising, c: bilateral filtering, d: NL-PCA, and e: the proposed algorithm.

This improvement holds for the whole range of parameter values used (except for $\lambda = 0.1$ with the proposed algorithm). Moreover, the proposed algorithm has achieved better results than bilateral filtering and NL-PCA. Specifically, for $\lambda \in [0.0077, 0.0359]$ the proposed algorithm has achieved both a higher spatial resolution and a lower noise level than bilateral filtering (for any parameter value) and NL-PCA. In Fig. 6, we have also shown plots of the MTF obtained with the three denoising algorithms. All three sinogram denoising algorithms have led to an improvement in the spatial resolution of the reconstructed image. The proposed algorithm has resulted in a higher MTF than bilateral filtering and NL-PCA at all spatial frequencies.

Real cone-beam projections of a rat

We obtained a fresh rat carcass from our institutional animal facility and used the same micro-CT scanner described above to obtain post mortem images of the rat thorax. Since the animal was obtained post mortem, no ethical approvals were required for the micro-CT scans. Because the internal organs of the rat moved constantly, we were unable to create two identical scans with different noise levels as we did for the phantom. Therefore, we scanned the rat only once.
The scan consisted of 720 projections between 0° and 360° at 0.5° intervals, with the tube voltage, tube current, and exposure time equal to 70 kV, 32 mA, and 16 ms, respectively.

Similar to our approach in the experiment with the physical phantom, since we do not have access to the true projections, we evaluate the performance of the denoising algorithms in terms of the quality of the reconstructed images. To create a high-quality reference image from the full set of 720 projections, we first reconstructed an initial image using the FDK algorithm. Then, we used five iterations of the MFISTA algorithm [38] to improve the quality of the FDK-reconstructed image. The resulting image had a very high quality and we used it as the reference image for evaluating the performance of the denoising algorithms. We applied the denoising algorithms on a subset of 240 projections of the same scan (projections at 1.5° intervals) and reconstructed the image of the rat using the FDK algorithm.

Similar to the physical phantom experiment, we use RMSE and SSIM as measures of the overall closeness of the reconstructed images to the reference image. Figure 7 shows these criteria for the three sinogram denoising algorithms. From this figure, denoising of the projections with the proposed algorithm has led to superior results in terms of RMSE and SSIM compared to bilateral filtering and NL-PCA.

For a visual comparison, Fig. 8 shows a typical 2D slice of the reconstructed images of the rat. For the proposed algorithm and bilateral filtering, the images shown in this figure are obtained using the parameter values that resulted in the highest SSIM, i.e., λ = 0.0129 and σ = 2.6 (see Fig. 7). The window of the linear attenuation coefficient, μ, used to display these slices was [0, 0.55]. To allow a better visual comparison, we have selected two regions of interest (ROI) within this slice and have shown them in zoomed-in views with narrower μ-windows. The ROI shown on the top left of each slice contains fat surrounded by soft tissue; this ROI is shown with a magnification factor of 1.5 and a μ-window of [0.15, 0.20]. The ROI shown on the top right of each slice contains bone surrounded by soft tissue; this ROI is shown with a magnification factor of 2.0 and a μ-window of [0.18, 0.50]. These images show a strong positive effect of sinogram denoising on the visual quality of the reconstructed image. Moreover, denoising with the proposed algorithm seems to have resulted in a higher-quality image, especially in the soft-tissue ROI.

Fig. 5 The left column shows a profile through the 500-μm coil in the images of the physical phantom reconstructed from noisy and denoised projections: a without denoising, b bilateral filtering, c NL-PCA, and d the proposed algorithm. In each of the plots in the left column, we have included the profile of the reference image (the blue curves). For a better comparison of the denoising algorithms, the right column shows the difference between the profiles in the left column and the profile of the reference image.

Fig. 6 Left: plots of the normalized MTF obtained by different sinogram denoising algorithms and those of the reference image and the image reconstructed without sinogram denoising. Right: plots of noise level versus spatial resolution for the different denoising algorithms. The dashed horizontal and vertical lines show the corresponding values for the image reconstructed without sinogram denoising.

In order to compare the denoising algorithms in terms of the trade-off between noise suppression and spatial resolution, we again followed an approach similar to that proposed in [14]. Specifically, we selected the ROI shown in Fig. 9 and computed the following measures of noise level and spatial resolution:

Measure of spatial resolution. We compute the maximum absolute value of the gradient (i.e., slope) along the line L marked in the ROI shown in Fig. 9 as a measure of spatial resolution. A sharper edge indicates a higher spatial resolution and gives a larger gradient.

Measure of noise level. We consider a cube of size 50 × 50 × 50 voxels, the cross-section of which is shown in the displayed ROI. From the reference image, we identified this cube as being highly uniform. Therefore, we computed the standard deviation of the voxel values in this cube as a measure of noise level.

The results are plotted in Fig. 9. This plot is very similar to the plot shown for the physical phantom experiment in Fig. 6. The main observations are that all three sinogram denoising algorithms have improved the quality of the reconstructed image in terms of spatial resolution and noise level, and that the proposed algorithm can outperform the bilateral filtering algorithm and NL-PCA with the right selection of the regularization parameter. Specifically, with λ ∈ [0.0129, 0.0359], the proposed algorithm has resulted in lower noise and better spatial resolution than bilateral filtering (with any choice of σ) and NL-PCA.

Fig. 9 Left: the ROI used to compute the noise level and spatial resolution in the reconstructed images of the rat; the noise level was computed as the standard deviation of voxel values in the cube C and the spatial resolution was computed as the maximum gradient along the line L. Right: plots of noise level versus spatial resolution for the three denoising algorithms. The dashed horizontal and vertical lines show the corresponding values for the image reconstructed without sinogram denoising.

Fig. 7 Performance comparison between different sinogram denoising algorithms on the rat scan. Values for the bilateral filtering algorithm are plotted as a function of σ (the bottom horizontal axis), whereas the values for the proposed algorithm are plotted as a function of the regularization parameter λ (the top horizontal axis). The solid squares indicate the points of optimum.

Fig. 8 A slice of the image of the rat reconstructed from noisy and denoised projections: a the reference image, b without denoising, c bilateral filtering, d NL-PCA, and e the proposed algorithm. The locations of the selected ROIs have been marked on the reference image (a). This slice is through the thorax just below the carina (where the trachea divides into the left and right bronchi).

Computation time

In order to compare the computational time of the proposed algorithm with that of bilateral filtering and NL-PCA, we considered the denoising of 240 projections of the rat. As mentioned above, each projection was 875 × 568 pixels. The proposed TV-based algorithm, implemented in Matlab version R2012b and executed on a Windows 7 PC with 16 GB of memory and a 3.4 GHz Intel Core i7 CPU, needed approximately 6 minutes to denoise all 240 projections. In comparison, bilateral filtering and NL-PCA needed 8.5 minutes and 42 minutes, respectively, for the same denoising task. In general, patch-based denoising methods such as NL-PCA require long computational times. Nevertheless, sinogram denoising methods are in general much less computationally intensive than iterative reconstruction methods.
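Because each projection can be denoised independently of the others, the per-projection workload parallelizes trivially. The sketch below illustrates this structure with a 3 × 3 mean filter standing in for the actual denoiser — the function names and the filter itself are placeholders, not the proposed TV-based method; a thread pool is used for brevity, but a process pool or GPU kernel would exploit exactly the same independence:

```python
import numpy as np
from multiprocessing.pool import ThreadPool

def denoise_projection(proj):
    """Placeholder denoiser: a 3x3 mean filter with edge padding.
    Any per-projection method (e.g. a TV-based one) slots in here."""
    p = np.pad(proj, 1, mode="edge")
    h, w = proj.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def denoise_all(projections, workers=4):
    """Denoise every projection concurrently; the projections never
    interact, so the task is embarrassingly parallel."""
    with ThreadPool(workers) as pool:
        return pool.map(denoise_projection, projections)
```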
A single forward or back projection using fast algorithms such as the separable footprints algorithm [39] or the distance-driven algorithm [40] takes approximately 2 hours on the same computer; each iteration of an iterative reconstruction algorithm needs one forward projection and one back projection. Of course, it is becoming very common to implement iterative reconstruction methods on GPUs, but the same can also be done for sinogram denoising algorithms. Sinogram denoising is by nature highly parallelizable because each projection can be denoised independently of the others.

Conclusions

Sinogram denoising can be a very effective approach to improving the quality of low-dose CT images. In this paper, we presented a fast and efficient method for denoising low-dose CT projections. The performance of the proposed method on simulated and real CT projections shows that it can lead to a substantial reduction in the noise level without degrading the spatial resolution. Our results show that the proposed algorithm is superior to bilateral filtering and NL-PCA in terms of denoising performance and computational speed. The proposed method is based on the assumption that the true image has a sparse gradient. Although algorithms based on this assumption have proved successful in image processing, CT projections are unlikely to fit this model very well. This is because projections of piecewise-constant objects are not in general piecewise constant; rather, they are piecewise smooth. It is well known that TV-denoising of piecewise-smooth images can result in staircase artifacts in the denoised image. Our approach of local denoising reduces the risk of staircase artifacts. Another approach to reducing these artifacts is to consider higher-order differentials in the model. Such a formulation may lead to results comparable to those reported in this paper, but the optimization algorithms involved will be significantly more complicated.
We consider this approach as future work.

Abbreviations
CT: computed tomography; PCA: principal component analysis; FBP: filtered back-projection; ROF: Rudin-Osher-Fatemi; TV: total variation; NL-PCA: nonlocal principal component analysis; RMSE: root mean square error; MI: mutual information; FDK: Feldkamp-Davis-Kress; SSIM: structural similarity index; MTF: modulation transfer function; ROI: region of interest.

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
DK and PD conceived of the study. PD and NF performed the study design and data collection. Algorithm development and data analysis were carried out by DK and RW. DK prepared and submitted the manuscript. All authors cooperated in the interpretation of the results. All authors have read and approved the final manuscript.

Acknowledgements
Micro-CT imaging was performed in the Centre for High-Throughput Phenogenomics at the University of British Columbia, a facility supported by the Canada Foundation for Innovation, the British Columbia Knowledge Development Foundation, and the UBC Faculty of Dentistry. Nancy L. Ford would like to acknowledge the funding from her NSERC Discovery Grant.

Author details
1 2366 Main Mall, Vancouver, BC V6T 1Z4, Canada. 2 2151 Wesbrook Mall, Vancouver, BC V6T 1Z3, Canada.

Received: 15 July 2015 Accepted: 13 January 2016

References
1. Berrington de González A, Mahesh M, Kim K, et al. Projected cancer risks from computed tomographic scans performed in the United States in 2007. Arch Intern Med. 2009;169(22):2071.
2. Beister M, Kolditz D, Kalender WA. Iterative reconstruction methods in X-ray CT. Physica Med. 2012;28(2):94.
3. Fessler JA. Statistical image reconstruction methods for transmission tomography. Handb Med Imaging. 2000;2:1.
4. Hsieh J. Adaptive streak artifact reduction in computed tomography resulting from excessive x-ray photon noise. Med Phys. 1998;25(11):2139.
5. Kachelriess M, Watzke O, Kalender W. Generalized multi-dimensional adaptive filtering for conventional and spiral single-slice, multi-slice, and cone-beam CT. Med Phys. 2001;28(4):475.
6. Forthmann P, Köhler T, Begemann PGC, Defrise M. Penalized maximum-likelihood sinogram restoration for dual focal spot computed tomography. Phys Med Biol. 2007;52(15):4513.
7. La Rivière PJ. Penalized-likelihood sinogram smoothing for low-dose CT. Med Phys. 2005;32(6):1676.
8. Kak AC, Slaney M. Principles of Computerized Tomographic Imaging. Philadelphia, PA: Society for Industrial and Applied Mathematics; 2001.
9. Natterer F. The Mathematics of Computerized Tomography. Philadelphia, PA: Society for Industrial and Applied Mathematics; 2001.
10. Jiao C, Wang D, Lu H, Zhang Z, Liang JZ. Multiscale noise reduction on low-dose CT sinogram by stationary wavelet transform. In: Nuclear Science Symposium Conference Record, 2008. NSS '08. IEEE; 2008. p. 5339–44.
11. Sahiner B, Yagle A. Reconstruction from projections under time-frequency constraints. Med Imaging IEEE Trans. 1995;14(2):193.
12. Macovski A. Medical Imaging Systems. Upper Saddle River, NJ: Prentice Hall; 1983.
13. Wang J, Lu H, Liang Z, Eremina D, Zhang G, Wang S, et al. An experimental study on the noise properties of x-ray CT sinogram data in Radon space. Phys Med Biol. 2008;53(12):3327.
14. Manduca A, Yu L, Trzasko JD, Khaylova N, Kofler JM, McCollough CM, et al. Projection space denoising with bilateral filtering and CT noise modeling for dose reduction in CT. Med Phys. 2009;36(11):4911.
15. Wang J, Li T, Lu H, Liang Z. Penalized weighted least-squares approach to sinogram noise reduction and image reconstruction for low-dose X-ray computed tomography. Med Imaging IEEE Trans. 2006;25(10):1272.
16. Elad M, Aharon M. Image denoising via sparse and redundant representations over learned dictionaries. Image Process IEEE Trans. 2006;15(12):3736.
17. Mairal J, Elad M, Sapiro G. Sparse representation for color image restoration. IEEE Trans Image Process. 2008;17(1):53.
18. Chen Y, Yin X, Shi L, Shu H, Luo L, Coatrieux JL, et al. Improving abdomen tumor low-dose CT images using a fast dictionary learning based processing. Phys Med Biol. 2013;58(16):5803.
19. Li S, Fang L, Yin H. An efficient dictionary learning algorithm and its application to 3-D medical image denoising. Biomed Eng IEEE Trans. 2012;59(2):417.
20. Buades A, Coll B, Morel JM. A non-local algorithm for image denoising. In: Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, vol 2. IEEE; 2005. p. 60–5.
21. Buades A, Coll B, Morel JM. A review of image denoising algorithms, with a new one. Multiscale Model Simul. 2005;4(2):490.
22. Maggioni M, Katkovnik V, Egiazarian K, Foi A. A nonlocal transform-domain filter for volumetric data denoising and reconstruction. IEEE Trans Image Process. 2013;22(1):1057.
23. Rudin LI, Osher S, Fatemi E. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena. 1992;60(1-4):259.
24. Le T, Chartrand R, Asaki T. A variational approach to reconstructing images corrupted by Poisson noise. J Math Imaging Vis. 2007;27(3):257.
25. Sawatzky A, Brune C, Müller J, Burger M. Total variation processing of images with Poisson statistics. In: Jiang X, Petkov N, editors. Computer Analysis of Images and Patterns, Lecture Notes in Computer Science, vol 5702. Berlin, Heidelberg: Springer; 2009. p. 533–40.
26. Chambolle A. An algorithm for total variation minimization and applications. J Math Imaging Vis. 2004;20(1-2):89.
27. Lefkimmiatis S, Bourquard A, Unser M. Hessian-based norm regularization for image restoration with biomedical applications. Image Process IEEE Trans. 2012;21(3):983.
28. Lysaker M, Tai X. Iterative image restoration combining total variation minimization and a second-order functional. Int J Comput Vis. 2006;66(1):5.
29. Louchet C, Moisan L. Total variation as a local filter. SIAM J Imaging Sci. 2011;4(2):651.
30. Salmon J, Harmany Z, Deledalle CA, Willett R. Poisson noise reduction with non-local PCA. J Math Imaging Vis. 2014;48(2):279. doi:10.1007/s10851-013-0435-6.
31. Kundel HL, Van Metter RL, Beutel J. Handbook of Medical Imaging, Volume 1. Physics and Psychophysics. Bellingham, WA: SPIE Publications; 2000.
32. Pluim J, Maintz J, Viergever M. Mutual-information-based registration of medical images: a survey. Med Imaging IEEE Trans. 2003;22(8):986.
33. Bian J, Siewerdsen JH, Han X, Sidky EY, Prince JL, Pelizzari CA, et al. Evaluation of sparse-view reconstruction from flat-panel-detector cone-beam CT. Phys Med Biol. 2010;55(22):6575.
34. Feldkamp LA, Davis LC, Kress JW. Practical cone-beam algorithm. J Opt Soc Am A. 1984;1(6):612.
35. Wang Z, Bovik A, Sheikh H, Simoncelli E. Image quality assessment: from error visibility to structural similarity. Image Process IEEE Trans. 2004;13(4):600.
36. Du LY, Umoh J, Nikolov HN, Pollmann SI, Lee TY, Holdsworth DW. A quality assurance phantom for the performance evaluation of volumetric micro-CT systems. Phys Med Biol. 2007;52(23):7087.
37. Buhr E, Günther-Kohfahl S, Neitzel U. Simple method for modulation transfer function determination of digital imaging detectors from edge images. In: Medical Imaging 2003. Bellingham, WA, USA: International Society for Optics and Photonics; 2003. p. 877–84.
38. Beck A, Teboulle M. Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. Image Process IEEE Trans. 2009;18(11):2419.
39. Long Y, Fessler J, Balter J. 3D forward and back-projection for x-ray CT using separable footprints. Med Imaging IEEE Trans. 2010;29(11):1839.
40. De Man B, Basu S. Distance-driven projection and backprojection in three dimensions. Phys Med Biol. 2004;49(11):2463.