Applications of Inverse Problems in Fluids and Imaging. James Gregson, 2015.

Applications of Inverse Problems in Fluids and Imaging

by

James Gregson

B.Eng., Dalhousie University, 2005
M.A.Sc., The University of British Columbia, 2009

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate and Postdoctoral Studies (Computer Science)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

July 2015

© James Gregson 2015

Abstract

Three applications of inverse problems relating to fluid imaging and image deblurring are presented. The first two, tomographic reconstruction of dye concentration fields from multi-view video and deblurring of photographs, are addressed by a stochastic optimization scheme that allows a wide variety of priors to be incorporated into the reconstruction process within a straightforward framework. The third, estimation of fluid velocities from volumetric dye concentration fields, highlights a previously unexplored connection between fluid simulation and proximal algorithms from convex optimization. This connection allows several classical imaging inverse problems to be investigated in the context of fluids, including optical flow, denoising and deconvolution. The connection also allows inverse problems to be incorporated into fluid simulation for the purposes of physically-based regularization of optical flow and for stylistic modifications of fluid captures. Through both methods and all three applications, the importance of incorporating domain-specific priors into inverse problems for fluids and imaging is highlighted.

Preface

The content of this thesis is based upon three publications:

• [68] Gregson, J., Krimerman, M., Hullin, M. B., Heidrich, W., Stochastic Tomography and its Applications in 3D Imaging of Mixing Fluids, ACM Transactions on Graphics (Proceedings SIGGRAPH 2012), 31(4), 2012

• [66] Gregson, J., Heide, F., Hullin, M. B., Rouf, M., Heidrich, W., Stochastic Deconvolution, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013

• [67] Gregson, J., Ihrke, I., Thuerey, N., Heidrich, W., From Capture to Simulation - Connecting Forward and Inverse Problems in Fluids, ACM Transactions on Graphics (Proceedings SIGGRAPH 2014), 33(4), 2014

Chapter 3 is based upon [68], with text and notation heavily adapted for the thesis. I developed the original Stochastic Tomography algorithm, wrote the initial draft of the paper and helped with revisions of the final text. I also extended and re-implemented the calibration procedure as well as the firmware for the camera and strobe controllers. Mike Krimerman captured most of the datasets (with my assistance) and processed the videos into frames. Matthias Hullin helped with the paper revisions, provided suggestions during the course of the research, generated most of the renderings in the paper and created the submission video. Wolfgang Heidrich provided guidance during the course of the research, did the majority of editing of the final submission and jointly conceived of the idea of image-based regularization with me.

Chapter 4 is based upon [66], also heavily adapted for this thesis. I implemented the stochastic deconvolution algorithm based loosely on a proof-of-concept MATLAB implementation by Felix Heide.
Felix conceived of the idea of applying the stochastic random walk framework to deconvolution problems and generated results from several of the competing methods. I conceived of and implemented the gamma and content-dependent priors and implemented and tested the saturation and boundary framework conceived of by Wolfgang Heidrich, as well as the spatially varying deblurring framework (I don't recall whose idea this was originally), wrote an initial draft of the paper and helped in subsequent edits for resubmission. Matthias Hullin generated the per-pixel PSF input data and helped with editing the final paper, while Mushfiqur Rouf performed a parameter study and generated the final submission video.

Chapter 5 is based upon [67], again heavily adapted for the thesis. I realized the connection between proximal operators and pressure projection and developed and implemented the fluid tracking algorithm, proposed and implemented the fluid analogues of imaging denoising and deconvolution problems and introduced forward simulation into the reconstruction loop. Ivo Ihrke provided suggestions on inverse problems and performed much of the synthetic quantitative analysis as well as contributed to editing the paper. Nils Thuerey provided helpful guidance on fluid simulation, ran many of the final results and contributed to the writing and editing of the final paper. Wolfgang Heidrich provided guidance on inverse problems and did much of the paper writing and editing.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Acknowledgements
Dedication
1 Introduction
2 Background and Related Work
  2.1 Classes of Inverse Problem
    2.1.1 Tomographic Reconstruction
    2.1.2 Image Deblurring
    2.1.3 Velocity Estimation
  2.2 Prior Classes
    2.2.1 L2 Priors
    2.2.2 L1 Priors
    2.2.3 Non-Convex Priors
    2.2.4 Other Prior Classes
  2.3 Solution Methods
    2.3.1 Smooth Methods
    2.3.2 Non-Smooth Methods
  2.4 Discussion
3 Tomography of Mixing Fluids
  3.1 Introduction
  3.2 Visible Light Computed Tomography
  3.3 Stochastic Tomography Algorithm
    3.3.1 Random Walk Algorithm
    3.3.2 Regularization
    3.3.3 Discussion
  3.4 Experimental Setup and Implementation
  3.5 Reconstruction Results
    3.5.1 2D Synthetic Results
    3.5.2 3D Capture of Mixing Fluids
  3.6 Conclusions
4 Image Deblurring
  4.1 Introduction
  4.2 Stochastic Deconvolution
    4.2.1 Mutation Strategy
    4.2.2 Regularization
    4.2.3 Boundary Conditions & Saturation
    4.2.4 Non-Uniform Blur Kernels
  4.3 Implementation
  4.4 Results
  4.5 Conclusions
5 Fluid Velocity Estimation
  5.1 Introduction
  5.2 Overview of Incompressible Fluid Simulation
  5.3 Pressure Projection as a Proximal Operator
  5.4 Divergence-free Optical Flow using ADMM
    5.4.1 Solution with ADMM
    5.4.2 Extension to Multiscale
  5.5 Fluid Tracking
  5.6 Fluid Tracking Results
    5.6.1 Tracking Validation
  5.7 Stylistic Modifications
  5.8 Implementation
  5.9 Limitations
  5.10 Conclusions
6 Conclusions and Future Work
Bibliography

List of Tables

2.1 Common proximal operators
5.1 Optimization parameters for ground-truth comparisons

List of Figures

2.1 Computed tomography forward model
2.2 Discretization stencil of tomography problems
2.3 Visible light tomography setup
2.4 The deconvolution imaging model
3.1 Visible light tomography capture setup
3.2 Illustration of tomography forward model
3.3 Depiction of sample evaluation
3.4 Depiction of a sampling chain
3.5 Evaluation of regularizers
3.6 Illustration of two-plane and radiometric calibration
3.7 Synthetic comparison of Stochastic Tomography and SART
3.8 Reconstruction of 'tube' dataset
3.9 Reconstruction progress over time
3.10 Comparison of photo and reconstruction for 'smoke' dataset
3.11 Frames of 'smoke' dataset
3.12 Frames of 'bloom' dataset
3.13 Comparison of regularization approaches
4.1 Algorithm progress over time
4.2 Depiction of boundary and saturation handling
4.3 Comparison of 'train' image with Raskar et al.
4.4 Comparison of 'white car' image with Raskar et al.
4.5 Comparison of 'fountain' image with Fergus et al.
4.6 Comparison of 'roma' image with Xu & Jia
4.7 Boundary and saturation results
4.8 Convergence history and sample histogram
4.9 Deconvolution of defocus blur
4.10 Deconvolution of colour images
4.11 Deblurring with spatially varying kernel
4.12 Illustration of a data-dependent prior
5.1 Visual comparison of velocity reconstruction
5.2 Ground-truth comparison for a 3D simulation
5.3 Fluid tracking for a flow with a solid obstacle
5.4 Passive advection of densities
5.5 Simulations initialized from states estimated by fluid tracking
5.6 Comparison of resolution enhancement methods (smoke)
5.7 Comparison of resolution enhancement methods (bloom)
5.8 Restarted capture in a modified domain
5.9 Mapping of flow from one domain to another
5.10 Fluid guiding results

Acknowledgements

I'd like to thank my supervisor, Wolfgang Heidrich, for his many suggestions on the technical and practical aspects of these projects as well as for introducing me to inverse problems in the first place.

I would also like to thank Felix Heide and Lei Xiao for many helpful discussions on inverse problems and optimization.

Dedication

To Christine, for her patience and support.

Chapter 1
Introduction

This thesis concerns applications of inverse problems to problems in fluids and imaging, specifically as they relate to computer graphics. In the past, inverse problems were largely confined to industrial or medical applications; over time, however, more and more applications have arisen in new areas. Applications of inverse problems are now widespread and range from reconstructing medical images to removing blur from astronomical images to modelling mineral and oil deposits or mapping the velocities of the earth's crust.

Recently, inverse problems have started to be applied to solving problems in computer graphics.
A (small) subset of examples from the past ten years includes: tomographic scanning of transparent objects [149], gases [8], flames [84] and flowers [86]; using tomography [157] or matrix factorization [104] to optimize for content on multi-layered displays; using deconvolution and matrix factorization for super-resolution projection [74] and for mesh denoising [73, 126]; using optical flow to track texture in facial performance capture [25] and background distortions in refractive gas scanning [8]; as well as specifically formulated inverse problems for transient imaging [75].

These works highlight a long-term trend in computer graphics of drawing on methods from other areas and combining and extending them in new ways. Other examples of this are the work of Baraff in bringing rigid body simulation to graphics [10, 12, 11] or Foster & Metaxas [59] in adapting computational fluid dynamics to graphics.

This thesis attempts to continue this trend of applying inverse problems to graphics, focusing on fluids and imaging. Three separate but related applications are presented: the first is the reconstruction of mixing fluids from multi-view video, the second is the problem of removing blur from photographs and the third is estimating fluid velocities. These three problems represent computer graphics adaptations of three classical imaging inverse problems: computed tomography, image deconvolution and optical flow, respectively. The first two applications share a novel optimization approach that allows prior knowledge to be introduced into inverse problems. The third shows a connection between problems in fluids and imaging that allows forward simulation and inverse problems to be intermixed. This lets state-of-the-art methods from convex optimization be applied to problems in fluid dynamics.

Through these three applications and two algorithms, this thesis shows that simple methods can be applied to solve difficult inverse problems in imaging, and that there is a previously unexplored connection between fluids and classical imaging inverse problems. The examples in this thesis also highlight the importance of regularization, i.e. incorporating domain-specific knowledge into the reconstruction process, to solving inverse problems in graphics. This point in particular arises in all three of the applications considered here.

Beyond reinforcing the previous points, this thesis also makes the following specific contributions:

• A stochastic reconstruction algorithm is developed that allows nearly arbitrary priors to be introduced into linear inverse problems. The stochastic method is applied to fluid capture and image deblurring, where it demonstrates state-of-the-art results by making use of several new priors that are well adapted to problems in graphics.

• An equivalence between key operations used in fluid simulation and convex optimization is shown. This equivalence allows fluid velocities to be estimated with vastly improved accuracy compared to standard techniques, allows fluid capture and simulation to be intermixed and shows that many inverse problems in imaging have unexplored interpretations as fluid problems.

• It is demonstrated that volumetric, full-state fluid imaging using off-the-shelf hardware can be practical.
This allows not only the concentration fields of mixing fluids but also plausible velocity fields to be estimated for complex flow scenarios.

In the remainder of this thesis, Chapter 2 introduces some background information on what inverse problems are, as well as the three inverse problems considered by this thesis. It then discusses some of the issues that arise when solving inverse problems, including introducing regularizers or priors and the change in structure introduced by those priors. Chapter 2 concludes with a discussion of the algorithms that can be applied to solve regularized inverse problems, along with the benefits and limitations of these methods.

Chapter 3 next introduces the problem of reconstructing volumetric concentration fields of mixing fluids from multi-view video using computed tomography. One of the key challenges in this topic is regularization, since the reconstruction problem is severely under-constrained. This is addressed by a stochastic reconstruction algorithm that allows nearly arbitrary priors to be incorporated into the reconstruction problem easily. By introducing these priors, high-quality reconstructions of volumetric density fields can be obtained from data captured using off-the-shelf cameras.

The ability to capture and reconstruct fluid phenomena has vast potential not only within graphics, but throughout many scientific fields. Applications are as varied as industrial mixing, microfluidics, weather prediction and aerodynamics, in part because fluid dynamics are notoriously difficult to simulate accurately even for relatively simple flows such as steam rising from a kettle. Approximations made in deriving the physical models, the specific discretizations used to express those physical models, as well as the mesh structure, resolution and time-advance algorithms chosen all affect the results of even the simplest of simulations. Adding to the litany of challenges are an extreme sensitivity to initial conditions, unknown or under-specified physical models (particularly in the case of turbulent or chaotic flows) and the fact that many flows of interest are inherently unstable, with fine-scale perturbations producing large-scale, long-lasting flow features.

Fluid capture has the potential to sidestep many of these issues by directly imaging the flow of interest. This avoids many of the issues related to flow modeling since all relevant physics is necessarily present in the flow. Imaging has a long history in experimental fluids, ranging from simple photographs to Schlieren imaging to flash X-ray and optical tomography, but is accompanied by its own set of challenges: namely capturing enough data, at sufficient resolution, with an imaging mode that is capable of measuring or inferring useful information about the flow. Chapter 3 builds upon previous work in fluid capture by introducing a practical method for fluid reconstruction that allows priors to be introduced into the reconstruction problem so that better reconstructions can be inferred from relatively few views of a flow. This directly helps to address the cost and complexity of fluid capture hardware, one of the main limiting factors in fluid imaging.

Chapter 4 takes the stochastic reconstruction algorithm and applies it to a new application, image deblurring, to show that the stochastic approach can be adapted to other areas. Numerous priors (including several new priors) are implemented in the stochastic framework and are shown to produce state-of-the-art results when regularizing deblurring problems.
The extension to deblurring addresses several issues that are commonly encountered in real-world deconvolution and deblurring, specifically spatially varying blurs, a lack of reliable measurements near image boundaries and at saturated pixels, and difficulties in incorporating prior expectations on the solution in an efficient and straightforward manner.

Chapter 5 then returns to the subject of fluid scanning, building on the results from Chapter 3 by estimating dense volumetric velocity fields from tomographically reconstructed concentration fields. This produces the first, to my knowledge, fully passive volumetric captures of unstable 3D fluid flows, including not only concentration distributions but plausible velocity field estimates as well. The algorithm employed to do this draws on a previously unexplored relationship between the pressure projection used in incompressible fluid simulation and proximal operators used in convex optimization. This relationship is exploited to show that several imaging inverse problems have analogues in fluid dynamics, with the resulting algorithms blurring the lines between forward simulation and inverse problems in fluids.

The method and results in Chapter 5 represent a significant advance in passive fluid capture. Results are presented for simulations initialized from the fluid states (concentration and velocity fields) computed from passively imaged fluids. These simulations reproduce many salient features of the captured flows, implying that the states inferred from both the tomographic captures of Chapter 3 and the velocities estimated in Chapter 5 have some physical validity, in spite of being obtained from highly incomplete data. I attribute this in large part to the inclusion of sparsity priors in the tomography problem and to the coupling of simulation and velocity estimation, and feel that this connection can be exploited further by applying convex optimization techniques to a variety of fluids problems.

The thesis concludes with some closing remarks in Chapter 6, including some speculation on avenues for future work and improvements.

Chapter 2
Background and Related Work

Inverse problems are the process of inferring the state of a system from a set of measurements obtained from the system together with a mathematical forward model of the system. Inverse problems arise in a number of different areas ranging from photography to oil exploration to medical imaging and are consequently important topics of research.

In the applications considered by this thesis, the system state, p, is the weight vector for a discretization of a scalar or vector field, p(\vec{x}), defined over a domain, Ω. The measurements are a set of discrete values, q, obtained, for example, as the pixel values in a digital photograph. The measurements themselves may or may not represent physical fields. The forward model, F, is a vector-valued function that maps the system state to the measurements as in Equation 2.1, effectively acting as a simulation of the physical measurement process.

q = F(p)    (2.1)

The inverse problem associated with Equation 2.1 attempts to determine the inverse of the forward model, F, and apply it to the known measurements, q, in order to obtain an estimate of the unknown solution state, p, as in Equation 2.2.

p \approx F^{-1}(q)    (2.2)

Generally the process of inverting the forward model can be performed only approximately. There are several reasons why this can occur, including:

• The number of independent measurements is often smaller than the number of unknowns in the system.
This makes the system of equations under-determined, so that the inverse of F does not exist.

• The forward model may be known only approximately, e.g. due to calibration error, causing the system of equations to be inconsistent.

• The forward model is not deterministic, e.g. the sensors capturing the measurements may introduce noise.

All three of these issues may affect measurements taken from a given physical system and may interact in undesirable ways. For instance, when the system is under-determined there may be many system states that exactly reproduce the measurements but very few that represent a realistic physical configuration. Further introducing noise to the measurements can then result in solutions that exactly satisfy the forward model in Equation 2.1 but have severe, non-physical artefacts.

In order to obtain reasonable estimates of the unknown state, it is common to require only that the system state reproduce the measurements to reasonable accuracy while remaining plausible. Plausible, in this case, means that the solution has certain properties that are expected based on the system being measured. These may include expectations such as smoothness or some form of simplicity. A common way to modify Equation 2.2 in order to include these properties is to restate the problem as finding the minimizer of the weighted sum of a data term, which penalizes error in reproducing the observed measurements, and a prior or regularizer function, Γ(p), which penalizes deviation from expected solution properties. This alteration is shown in Equation 2.3 for the very common least-squares data term that is used for the applications in this thesis:

p^* = \arg\min_p \underbrace{\tfrac{1}{2} \| q - F(p) \|_2^2}_{\text{data term}} + \underbrace{\gamma \, \Gamma(p)}_{\text{prior}}    (2.3)

Much of the difficulty in solving inverse problems concerns choosing a prior function, Γ(p), that restricts the solutions to a set of physically plausible configurations while keeping the optimization problem from Equation 2.3 tractable. The choice is often application specific, with some priors outperforming others in certain areas. Unfortunately, many of the most effective choices of prior alter the mathematical structure of Equation 2.3 to make it more difficult to solve. As a result, new priors frequently require custom solvers to be derived in order to efficiently minimize Equation 2.3, although in recent years effective solvers for some common classes of priors have become available.
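To make the structure of Equation 2.3 concrete, the following sketch minimizes an objective of this form with plain gradient descent and the simple squared-norm prior Γ(p) = ‖p‖₂². It is purely illustrative rather than an algorithm used in this thesis: the toy linear forward model, the helper names, the fixed step size and the iteration count are all assumptions made for the example.

    import numpy as np

    def solve_regularized(F, FT_jac, q, p0, gamma=1e-2, step=1e-3, iters=2000):
        """Minimize 0.5*||q - F(p)||^2 + gamma*||p||^2 by gradient descent.

        F      : callable, forward model p -> q
        FT_jac : callable, (p, r) -> J_F(p)^T r, adjoint Jacobian product of the forward model
        q      : measurement vector
        p0     : initial estimate of the unknown state
        """
        p = p0.copy()
        for _ in range(iters):
            r = F(p) - q                                   # residual of the data term
            p -= step * (FT_jac(p, r) + 2.0 * gamma * p)   # gradient of data term plus prior
        return p

    # Tiny linear example: F(p) = A p with a wide (under-determined) random matrix.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 50))
    q = A @ rng.standard_normal(50)
    p_est = solve_regularized(lambda p: A @ p, lambda p, r: A.T @ r, q, np.zeros(50))
    print("data misfit:", np.linalg.norm(A @ p_est - q))   # small: the measurements are reproduced

Solvers that scale to the problem sizes and the non-smooth priors encountered later in this chapter are the subject of Section 2.3.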
In the remainder of this chapter, Section 2.1 introduces the three applications considered in this thesis: reconstruction of mixing fluids, deblurring of images and fluid velocity estimation. These are all expressed as inverse problems in the form of Equation 2.3. Section 2.2 then describes some prior classes that are commonly used when solving inverse problems and the difficulties that are encountered when optimizing for them, and relates these to previous work in each of the applications. Section 2.3 concludes with a selection of commonly used methods for solving inverse problems, including specialized methods that have been adapted for specific prior classes.

2.1 Classes of Inverse Problem

The following sections introduce the three inverse problems considered in this thesis. All three can be framed in the form of Equation 2.3 but have domain-specific features that must be considered. Discussion of the priors and optimization approaches that are used for each problem type is deferred to Section 2.2, which discusses the priors, and Section 2.3, where solution methods are introduced.

2.1.1 Tomographic Reconstruction

The first of the three applications considered is the reconstruction of mixing fluids from multi-view video. For restricted classes of fluids this can be framed as a computed tomography (CT) problem. Tomography is the process of inferring the content of a scene from lower-dimensional projections of that scene and is used extensively in medical imaging, geophysical imaging and, more recently, in computer graphics. Different applications can have different forward models; common examples include x-ray attenuation in medical CT [140, 93], detection of the decay of radioisotopes in medical positron emission tomography [141, 9], per-path electrical conductivity effects [20, 41, 79], per-path discrepancies in seismic measurements [119, 62, 27] and Doppler tomography [40, 155]. Seismic and Doppler tomography are particularly interesting as they have imaging modes that can recover structure and/or velocity data, although they have restrictive application domains.

One of the primary ways in which these applications differ is in how a signal is propagated through the imaged medium. This is related to the underlying physics being exploited for imaging purposes.

For idealized x-ray tomography, x-rays travel in straight paths from an x-ray source to any one of a set of x-ray detectors oriented at multiple angles (see Figure 2.1), losing energy in proportion to the spatially varying attenuation coefficient of the imaged scene. The same principle is used in an optical setting in Chapter 3 to image fluids where, under fairly reasonable assumptions, ray paths are also straight lines and attenuating (or emissive) image formation also applies.

Figure 2.1: Computed tomography forward model. (a) acquisition; (b) measurement formation. Left: X-rays travel along straight ray paths from a source (not pictured) to sets of detectors oriented at multiple angles that measure projections of the subject. Right: Each measurement is a line integral over the unknown solution.

Tomographic Image Formation Model

The image formation model for x-ray tomography consists of an x-ray source which emits x-rays at unit intensity towards the object to be imaged, behind which is placed a set of x-ray detectors. Each of the detectors records one component, q_i, of the measurement vector, q. Assuming an ideal attenuating image formation model with no scattering, the intensity measured at each detector can be modelled by the Beer-Lambert equation:

q_i = e^{-\int_{\Omega_i} p(\vec{x}) \, d\Omega_i}    (2.4)

where Ω_i is the straight ray path between the source and the i-th detector and p(\vec{x}) is the spatially varying attenuation coefficient over the domain, see Figure 2.1. A linear image formation model for the measurement q_i can be recovered by taking the log of both sides of Equation 2.4 to get Equation 2.5, which relates the measurement q_i to the attenuation coefficient.

\int_{\Omega_i} p(\vec{x}) \, d\Omega_i = -\log q_i    (2.5)

Discretizing the attenuation coefficient, p(\vec{x}), on a voxel or pixel basis allows the log of each measurement q_i to be expressed as a linear combination of unknown values p_j as in Equation 2.6:

\sum_j f_{i,j} \, p_j = -\log q_i    (2.6)

Many such measurements can then be stacked into a linear system to form a linear forward model q' = F p by defining q' = -\log q.
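As an illustration of where the coefficients f_{i,j} in Equation 2.6 come from, the sketch below assembles the rows of F for a small 2D grid by sampling each straight ray at equally spaced points and accumulating the step length into whichever voxel each sample lands in. The nearest-neighbour lookup, the uniform quadrature and the axis-aligned parallel rays are simplifications assumed only for this example; a practical implementation would use exact ray/voxel intersection lengths or the calibrated camera geometry of Chapter 3.

    import numpy as np
    from scipy.sparse import lil_matrix

    def ray_row(grid_shape, start, end, n_samples=200):
        """Approximate one row f_i of Equation 2.6: the weight of each voxel is the
        summed step length of the ray samples that fall inside it."""
        ny, nx = grid_shape
        row = np.zeros(ny * nx)
        t = np.linspace(0.0, 1.0, n_samples)[:, None]
        pts = start[None, :] + t * (end - start)[None, :]
        ds = np.linalg.norm(end - start) / n_samples          # quadrature step length
        ix = np.clip(pts[:, 0].astype(int), 0, nx - 1)
        iy = np.clip(pts[:, 1].astype(int), 0, ny - 1)
        np.add.at(row, iy * nx + ix, ds)
        return row

    # Assemble q' = F p for a 32x32 domain imaged by one set of parallel horizontal rays.
    shape = (32, 32)
    rays = [(np.array([0.0, y + 0.5]), np.array([32.0, y + 0.5])) for y in range(32)]
    F = lil_matrix((len(rays), shape[0] * shape[1]))
    for i, (s, e) in enumerate(rays):
        F[i, :] = ray_row(shape, s, e)
    F = F.tocsr()

    p = np.ones(shape[0] * shape[1])       # a uniform attenuation field
    q_prime = F @ p                        # q' = -log(q), one entry per detector (all ~32 here)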
Seeking a solution that minimizes the squared error in reproducing the measurements, subject to a prior (discussed in Section 2.2), then allows the tomographic reconstruction problem to be posed as the minimization problem shown in Equation 2.7.

p^* = \arg\min_p \tfrac{1}{2} \| q' - F p \|_2^2 + \gamma \, \Gamma(p)    (2.7)

Assuming that the prior function, Γ, is convex, finding the p^* that minimizes Equation 2.7 gives an approximation to the spatially varying attenuation coefficient field p(\vec{x}) that best balances reproducing the observed measurement vector q with the prior expectations on the solution properties imposed by the regularization function Γ. Implicit in this formulation are assumptions that the image is formed by pure attenuation with no scattering, that the ray paths Ω_i for each measurement q_i are known accurately and that the choice of discretization of p(\vec{x}) is appropriate. These are all potential sources of error that can contribute to non-physical artefacts. Such artefacts are ideally suppressed by the prior, although it is often the case that the prior also involves assumptions that may not be satisfied perfectly for a given scene.

It is possible to slightly modify Equation 2.7 to handle purely emissive domains where light is emitted isotropically based on a spatially varying emissivity coefficient. In the purely emissive case, instead of following the Beer-Lambert law from Equation 2.4, each measurement q_i is formed by the ray integral of p(\vec{x}) (now representing a spatially varying emissivity) along the ray Ω_i, as in Equation 2.8.

q_i = \int_{\Omega_i} p(\vec{x}) \, d\Omega_i    (2.8)

Equation 2.8 is linear, so discretization and stacking of the equations for multiple measurements can be performed to obtain an identical result to Equation 2.7, except with q' replaced by q.

Properties of Tomography Problems

Neglecting the prior, the minimization in Equation 2.7 appears straightforward since it is a simple linear least-squares problem. In spite of this, there are a few challenging aspects of tomography problems.

Figure 2.2: Discretization stencil of tomography problems. (a) PDE discretization; (b) tomography. Tomography results in much denser systems than discretizations of partial differential equations. Left: Each equation in a PDE typically has a fixed small number of non-zero entries, e.g. the Laplace stencil shown. Right: In contrast, tomography has O(N^{1/2}) non-zeros per row in 2D and O(N^{1/3}) non-zeros per row in 3D.

The first is the scale of the problem. Each measurement in q is obtained by the ray integral of the unknown solution, p(\vec{x}), over the ray Ω_i corresponding to the measurement. Although the data matrix is sparse (since each unknown p_j does not interact with every measurement q_i), if a 2D domain Ω is discretized with N voxels, each row F_i of the data matrix will have O(N^{1/2}) entries, as shown in Figure 2.2. Even very coarse discretizations with only 100 or so mesh divisions per axis lead to each row of the data matrix having several hundred non-zero components. The data matrix for such a discretization can easily reach tens of gigabytes, depending on the number of measurements.

This contrasts strongly with discretizations of partial differential equations (PDEs) that, on uniform grids, generally have a stencil whose size is independent of the mesh resolution. The significantly higher storage required for the systems produced by tomography problems means that tomography problems must generally be solved with matrix-free methods. This restriction limits preconditioning strategies and solver choices. A number of matrix-free methods are discussed in Section 2.3.
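In a matrix-free implementation, the products Fp (forward projection) and Fᵀq (back-projection) are computed by tracing rays on the fly instead of storing their coefficients. The sketch below wraps such a pair of routines in SciPy's LinearOperator and hands it to a damped least-squares solver; the toy parallel-ray geometry, the sampling scheme and all solver settings are assumptions made purely for illustration.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, lsqr

    ny = nx = 64
    n_samples = 128
    rays = [(np.array([0.0, y + 0.5]), np.array([float(nx), y + 0.5])) for y in range(ny)]

    def samples(ray):
        """Voxel indices and quadrature weight for equally spaced samples along one ray."""
        s, e = ray
        pts = s[None, :] + np.linspace(0.0, 1.0, n_samples)[:, None] * (e - s)[None, :]
        ix = np.clip(pts[:, 0].astype(int), 0, nx - 1)
        iy = np.clip(pts[:, 1].astype(int), 0, ny - 1)
        return iy * nx + ix, np.linalg.norm(e - s) / n_samples

    def matvec(p):                       # forward projection: traces every ray, never stores F
        q = np.empty(len(rays))
        for i, ray in enumerate(rays):
            idx, ds = samples(ray)
            q[i] = ds * p[idx].sum()
        return q

    def rmatvec(q):                      # back-projection: the exact adjoint of matvec
        p = np.zeros(ny * nx)
        for i, ray in enumerate(rays):
            idx, ds = samples(ray)
            np.add.at(p, idx, ds * q[i])
        return p

    F = LinearOperator((len(rays), ny * nx), matvec=matvec, rmatvec=rmatvec)
    q_meas = matvec(np.ones(ny * nx))                    # synthetic measurements
    p_est = lsqr(F, q_meas, damp=0.1, iter_lim=50)[0]    # damped least squares using only Fp and F^T q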
However, for some applications such as medical CT, the data collection can be performed in a structured manner that allows highly efficient solvers to be used. In particular, medical CT usually acquires (or rearranges and resamples) data as one-dimensional orthogonal projections of individual two-dimensional slices. When data is captured in this manner, a single large-scale 3D problem is reduced to a collection of much smaller 2D sub-problems. Fast solvers based on the Fourier slice theorem can then be used (see Section 2.3) to solve each of these sub-problems independently.

Order-of-magnitude estimates of the rank of the data matrix can be computed based on the chosen discretization, the number of images (angles) recorded and the resolution of each image. A 2D domain discretized as a 256x256 image, measured with 1D projections that each contain 256 measurements, would consequently need 256 angles to have a unique minimizer of Equation 2.7. Since scan time, machine cost and radiation dose to the patient (or all three) increase with the number of angles, it would seem reasonable to instead increase the resolution of the imagers. This only works to a point, since the resulting measurements become increasingly linearly dependent, so the rank of the data matrix F may not increase proportionally to the resolution of the sensors. This can be understood intuitively by considering that a single, arbitrarily high-resolution projection will not contain any information about the distribution of the solution along each ray, even though it may fully encode the frequency content perpendicular to the projection axis. Consequently, tomography must balance the number of projections with the resolution of the projections to maximize image quality.

There are strong time, economic and safety pressures in tomography that have spurred research into performing reconstructions from reduced numbers of angles [51, 22, 60, 32, 31], with restricted regions of interest [165, 162, 163, 49] or with reduced radiation dosage [116, 154, 160, 105]. These have arisen in the context of non-destructive testing, where physical constraints prevent all angles from being acquired, and in medical settings, where concern over the radiation doses associated with medical imaging has increased. This research has recently been largely focused on the role of the prior Γ in obtaining good reconstructions from limited data and is discussed in Section 2.2.

Visible Light Computed Tomography

The image formation model for either attenuating or emissive CT also applies to light in the visible spectrum. This allows data to be captured with consumer cameras after geometric calibration of the ray paths, Ω_i, and radiometric calibration of the camera response curves. Unfortunately it is not possible to collect data with orthographic projections in this case without expensive collimation optics or very long baselines. Since the measurement rays define a frustum (Figure 2.3), it is also not possible to resample captured data into approximately orthographic projections. Consequently, visible light tomography generally cannot use solvers based on the Fourier slice theorem. Furthermore, consumer cameras tend to be relatively expensive and capture at high resolution. This often leads to the wrong balance between the number of projections and the projection resolution needed to maximize image quality.
Often there are fewer than two dozen cameras, each at megapixel or higher resolution.

Figure 2.3: Visible light tomography setup. Capturing visible light tomography datasets with perspective cameras prevents resampling into orthographic projections due to convergence of the ray paths in two axes (left) and interactions of multiple cameras (right). This is unlike the fan-beam methods for slice-based 2D tomography, e.g. as described in [93].

In spite of this, visible light tomography has been applied to a number of graphically relevant cases, including scanning of transparent objects [149], flames [84] and refractive transparent gas flows [8], where it offers the possibility of reconstructing dynamic phenomena with rapidly changing topology. Such phenomena are challenging to interpret from individual projections alone.

Recent research into priors and associated solvers has started to make high-quality reconstructions possible from visible light tomography setups. These often have 20 or fewer cameras and can be under-determined by factors of ten or more.

2.1.2 Image Deblurring

The second application considered in this thesis is image deblurring and deconvolution. These are closely related inverse problems that occur frequently in imaging. Deconvolution models the formation of a blurry image in an optical system as the convolution of a sharp intrinsic image by a spatially invariant blur kernel, see Figure 2.4. Deblurring is similar, but allows the blur kernel to vary in space due to optical variations/aberrations and scene depth.

Figure 2.4: The deconvolution imaging model expresses the formation of a blurry measured image as the convolution of a blur kernel, representing the capture optics, with a sharp intrinsic image.

In the non-blind deconvolution problem, the goal is to recover an estimate of the sharp intrinsic image given the blurry image and the blur kernel. The blind deconvolution problem seeks an estimate of both the sharp image and the blur kernel given only the blurry image. The method presented in Chapter 4 directly applies only to the non-blind problem. However, methods for blind deconvolution often alternate between kernel estimation and non-blind deconvolution, so the method has applications in both problems.

Like tomography, deblurring occurs in a number of different contexts, including microscopy [111, 142, 139, 55], astronomy [138, 87, 45, 130], photography [102, 30, 101, 143] or anywhere that a signal is measured but is corrupted by a known (or calibrated/assumed) filter.

Image Formation Model

Denoting the known, spatially invariant blur kernel as f, the measured blurry image by q and the sharp intrinsic image to be estimated as p, the forward model for deconvolution is simply convolution of the intrinsic image by the filter, as shown in Equation 2.9.

q = f \otimes p    (2.9)

Using this definition, it is possible to write the objective function for the deconvolution problem as Equation 2.10 by introducing a least-squares penalty on the forward model and adding a prior function Γ.

p^* = \arg\min_p \tfrac{1}{2} \| q - f \otimes p \|_2^2 + \gamma \, \Gamma(p)    (2.10)

Since convolution is a linear operation, it is possible to rewrite Equation 2.10 by expressing the blur kernel, f, as a Toeplitz matrix, F. This is shown in Equation 2.11.

p^* = \arg\min_p \tfrac{1}{2} \| q - F p \|_2^2 + \gamma \, \Gamma(p)    (2.11)

Equation 2.11 is equivalent to Equation 2.10 except that the images q and p are now interpreted as vectors. This is the same form as the general form in Equation 2.3, except that the data term is linear, as in the tomography case. Equation 2.11 is also the form used for deblurring problems, except that F is no longer a Toeplitz matrix since its coefficients and zero-pattern vary with the blur kernel.
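A minimal sketch of evaluating the forward model of Equation 2.9 and the data term of Equation 2.10 is shown below. The circular (FFT-based) convolution, the random test image and the box-shaped kernel are assumptions made for brevity; in particular, the periodic boundary handling is not what Chapter 4 uses, which treats image boundaries and saturation explicitly.

    import numpy as np
    from numpy.fft import fft2, ifft2

    def blur(p, f):
        """Forward model of Equation 2.9: circular convolution of image p with kernel f."""
        H, W = p.shape
        kh, kw = f.shape
        f_pad = np.zeros((H, W))
        f_pad[:kh, :kw] = f
        f_pad = np.roll(f_pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))   # centre the kernel
        return np.real(ifft2(fft2(p) * fft2(f_pad)))

    def data_term(p, q, f):
        """0.5 * || q - f (x) p ||_2^2, the least-squares penalty of Equation 2.10."""
        r = q - blur(p, f)
        return 0.5 * np.sum(r * r)

    # Synthetic example: a 7x7 box blur applied to a random 'sharp' image.
    rng = np.random.default_rng(1)
    p_sharp = rng.random((64, 64))
    kernel = np.ones((7, 7)) / 49.0
    q_blurry = blur(p_sharp, kernel)
    print(data_term(p_sharp, q_blurry, kernel))   # essentially zero: p_sharp reproduces q_blurry exactly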
Properties of Deconvolution/Deblurring Problems

The sparsity of the blur operation in deconvolution depends on the physical size of the optical point-spread function relative to the pixel pitch of the sensor. Images captured at high resolution with large blurs can consequently lead to systems of comparable density to tomography, e.g. a 20x20 pixel blur kernel leads to 400 non-zero entries per row of F. Consequently, like tomography, deconvolution tends to use matrix-free solvers, except for very small problem instances.

The quality of the intrinsic image estimate, p, that can be obtained in deconvolution depends largely on the blur kernel, f. Better results are generally obtained when the blur kernel preserves some high-frequency content, compared to kernels that behave like low-pass filters. Consequently, vastly better reconstructions can usually be obtained from motion blur kernels, which tend to average over curved paths, than from equivalently sized defocus blur kernels, which average over areas. This can be attributed to the motion blur kernels preserving more of the intrinsic image frequency content.

This is similar to the case of tomography, where it is desirable to balance the number and resolution of the projection images obtained in order to preserve high-frequency content. Crude estimates of the rank of the system matrix in deconvolution problems are more difficult to obtain since the number of equations and unknowns is typically equal: deconvolution problems are rank-deficient due to the blur kernel attenuating certain frequencies to zero magnitude.

As in tomography, it is desirable to obtain high-quality reconstructions even when cost or capture conditions are adverse. This has led to extensive research into effective priors for reconstructing natural images that have been corrupted by noise and blurs. These are reviewed in Section 2.2. Freely available image databases (e.g. [156, 134, 57, 109]) and the general simplicity of working with images have allowed this research to proceed very quickly.

2.1.3 Velocity Estimation

Computing the correspondences between pairs of images is a common task in computer vision. Applications of correspondence estimation include photometric stereo reconstruction [61], navigation [167, 77] and feature tracking/matching [108, 95, 15], among others.

Optical flow is a common approach to computing image correspondences that has seen considerable research over the past 30 years (e.g. [80, 13, 28, 29, 145]). Optical flow assumes that there is a warping that can be applied to one image in order to reproduce the second image to a good approximation. This warping is related to the velocity of points in the scene if the two images are taken from the same location but at different times. This thesis refers to this problem as velocity estimation since Chapter 5 focuses on estimating the velocities of captured fluid concentration fields.

Forward Model

Velocity estimation has been approached in several different ways. A very general statement of the problem under the constraint of constant brightness [80] is shown in Equation 2.12, in which I_A and I_B are images of the scene at different times t_A and t_B, \vec{u} = [u(\vec{x}), v(\vec{x})]^T is the velocity field
to be estimated and \vec{x} is a spatial coordinate:

I_B(\vec{x}) - I_A\left(\vec{x} - \int_{t_A}^{t_B} \vec{u} \, dt\right) = 0    (2.12)

Equation 2.12 should be satisfied for all spatial coordinates \vec{x} but is non-linear due to the warping in the second term. When only the final pixel offsets that register one image with the other are required, it is common to approximate the path integral as \Delta t \, \vec{u}, which simplifies Equation 2.12 substantially:

I_B(\vec{x}) - I_A(\vec{x} - \Delta t \, \vec{u}) = 0    (2.13)

When velocities are small with respect to the pixel pitch, Equation 2.13 can be simplified further by linearizing I_A with respect to \vec{u}. This results in a partial differential equation (PDE) form of optical flow, shown in Equation 2.14,

I_B(\vec{x}) - I_A(\vec{x}) + \Delta t \, \nabla_{\vec{x}} I_A(\vec{x})^T \vec{u} = 0    (2.14)

where \nabla_{\vec{x}} denotes the gradient of a field with respect to the spatial coordinate \vec{x}. Discretizing the images I_A, I_B and the velocity field \vec{u} on a pixel basis then allows Equation 2.14 to be written for each pixel centre \vec{x}_i and stacked as a linear system of equations:

I_B - I_A + \Delta t \, [\nabla_{\vec{x}} I_A] \, \vec{u} = 0    (2.15)

Here the images have been redefined as their pixelized equivalents, [\nabla_{\vec{x}} I_A] is the sparse matrix defined by stacking the image spatial gradients as rows and \vec{u} is the vector of pixel velocities. Defining the data matrix F = -\Delta t \, [\nabla_{\vec{x}} I_A] and q = I_B - I_A allows finding the discretized velocity field \vec{u}^* that best satisfies Equation 2.15 for all pixels to be put into the familiar regularized least-squares form from Sections 2.1.1 and 2.1.2, shown in Equation 2.16.

\vec{u}^* = \arg\min_{\vec{u}} \tfrac{1}{2} \| q - F \vec{u} \|_2^2 + \gamma \, \Gamma(\vec{u})    (2.16)

The one distinction of the minimization in Equation 2.16 is that the solved variable \vec{u}^* is a discretization of a vector field rather than a scalar field. This influences the choice and definition of the prior function Γ, as will be seen in Chapter 5, but is otherwise a trivial change. The rank of optical flow problems is also affected: in 2D, each pixel has two degrees of freedom (velocity components) but only one equation. This increases dependence on the prior to obtain good solutions.
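The sketch below assembles F and q from Equations 2.15 and 2.16 for a synthetic image pair and solves the system with a small damping term standing in for the prior. The central-difference gradients, the sub-pixel test motion and the plain ‖u‖₂ penalty are assumptions for illustration; note that only pixels with image gradient along the motion recover the velocity, a first glimpse of the under-determination discussed below.

    import numpy as np
    from scipy.sparse import diags, hstack
    from scipy.sparse.linalg import lsqr

    def flow_system(I_A, I_B, dt=1.0):
        """Assemble q = I_B - I_A and F = -dt*[grad I_A] from Equations 2.15-2.16.
        Each pixel contributes one row with two entries, one per velocity component."""
        gy, gx = np.gradient(I_A)                       # central-difference spatial gradients
        F = hstack([diags(-dt * gx.ravel()), diags(-dt * gy.ravel())]).tocsr()
        q = (I_B - I_A).ravel()
        return F, q

    # Synthetic pair: a pattern that varies only in x, translated by 0.2 pixels.
    yy, xx = np.mgrid[0:64, 0:64].astype(float)
    I_A = np.sin(0.2 * xx)
    I_B = np.sin(0.2 * (xx - 0.2))                      # true motion: u = 0.2, v = 0

    F, q = flow_system(I_A, I_B)
    uv = lsqr(F, q, damp=0.01)[0]                       # damping adds a tiny ||u||_2 penalty as a stand-in prior
    u = uv[:64 * 64]

    gx = np.gradient(I_A)[1]
    print(u[np.abs(gx.ravel()) > 0.15].mean())          # ~0.2 where the image has gradient along the motion;
                                                        # elsewhere u is pulled toward zero by the damping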
Properties of Velocity Estimation Problems

When viewed in the context of machine vision, Equation 2.12 only holds when the brightness of corresponding pixels in I_A and I_B is constant between times t_A and t_B. This condition is referred to as the brightness constancy assumption and can often be violated by changes in lighting, dynamic adjustment of camera settings and scene occlusions. Numerous methods have been developed to work with scenes in which brightness constancy does not hold, as well as to improve the robustness of variational optical flow methods; a thorough evaluation is provided by [145].

On the other hand, when viewed from a fluids context, Equation 2.12 holds exactly for the physical case of pure advection of a scalar field. In this case the linearization in Equation 2.14 is a discrete-time version of the passive advection PDE (Equation 2.17) from the Navier-Stokes equations, which models the transport of a passively advected scalar I over time:

\frac{\partial I(\vec{x})}{\partial t} = -\nabla_{\vec{x}} I(\vec{x}) \cdot \vec{u}    (2.17)

Consequently, although the reduction to PDE form is due to simplifying assumptions in the context of machine vision, it actually represents a discretization of the governing equations when considered from a fluids perspective.

The PDE in Equation 2.17 represents a forward model (simulation) for the evolution of passively advected quantities. However, the linearization introduced in Equation 2.16 only holds for motions smaller than the pixel pitch, since it is based on a Taylor expansion around each pixel. This limitation is analogous to the Courant-Friedrichs-Lewy condition [48] for explicit timestepping in forward simulation. This condition requires that the difference scheme used to discretize the simulation contains the characteristic lines of the PDE. For advection with a fixed velocity field, the characteristic lines are the velocity field itself, so this sets a lower bound on the mesh spacing for a given timestep or an upper bound on the timestep for a given mesh spacing.

In forward simulation this does not pose a problem, since the timestep used in evolving the advected quantity can be chosen arbitrarily to meet this requirement. However, this restriction is quite limiting for velocity estimation, since frames are typically provided at a fixed timestep but have motions significantly larger than the pixel pitch. A common solution is to use a scale space that estimates motions from coarse to fine scales on down-sampled versions of the input images. Working on down-sampled images allows the motions computed at a given scale to be kept smaller than the pixel pitch. Motions estimated at coarse scales are then used to warp the images at the next finest scale. A common example of such a method is [113], which allows large motions to be recovered while representing each individual scale as a linear inverse problem. A sketch of this coarse-to-fine strategy is given at the end of this section.

Finally, the data term used in PDE-based optical flow is only able to estimate motion when the dot product of the image intensity gradient with the velocity field is non-zero. This means that there must be image gradient components in the direction of the velocity vector, otherwise the corresponding row contributes no information to the velocity estimation problem.

This behaviour means that velocity estimation problems are usually highly under-determined, since the input images, I_A and I_B, often have only isolated regions of texture at material/occlusion boundaries. As a result, the data term only constrains the flow vectors in the vicinity of image edges and strong texture while providing no information in other areas. This problem is commonly addressed through the prior, Γ, introduced in Equation 2.16, which serves to interpolate and extrapolate velocities from textured regions to untextured ones.

Consequently, velocity estimation problems are under-determined in a content-dependent fashion. This is one of the features that distinguishes them from other inverse problems. In both tomography and deconvolution the matrix, F, is not influenced by the values of the measurements. Deblurring with spatially varying blur kernels, however, is another example of a content-dependent problem, since the blur kernel can depend on the scene depth.

Another way that velocity estimation differs from tomography or deconvolution problems is that the data matrix, F, is typically very sparse. Since the forward model is simply a discretization of a PDE (Equation 2.17), the number of non-zeros per row is constant and independent of resolution. This means that it is often not necessary to use matrix-free solvers and that sophisticated linear solvers can be applied.
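As a closing illustration of the coarse-to-fine strategy described above, the sketch below estimates a single global translation at each level of an image pyramid, warping by the motion found so far before refining it at the next finer scale. Estimating one translation for the whole image (rather than a full velocity field per level, as in [113] and Chapter 5), together with the particular pyramid depth and test pattern, are simplifying assumptions made only for this example.

    import numpy as np
    from scipy.ndimage import zoom, map_coordinates

    def global_shift(I_A, I_B):
        """Single-scale estimator from Equation 2.15: one (u, v) for the whole image."""
        gy, gx = np.gradient(I_A)
        A = np.stack([-gx.ravel(), -gy.ravel()], axis=1)
        q = (I_B - I_A).ravel()
        return np.linalg.lstsq(A, q, rcond=None)[0]

    def warp(I, u, v):
        """Resample I at (x - u, y - v): the image I would produce after moving by (u, v)."""
        yy, xx = np.mgrid[0:I.shape[0], 0:I.shape[1]].astype(float)
        return map_coordinates(I, [yy - v, xx - u], order=1, mode='nearest')

    def coarse_to_fine(I_A, I_B, levels=4):
        """Recover a motion larger than a pixel by refining it from coarse to fine scales."""
        u = v = 0.0
        for lvl in reversed(range(levels)):            # coarsest scale first
            s = 0.5 ** lvl
            A, B = zoom(I_A, s, order=1), zoom(I_B, s, order=1)
            du, dv = global_shift(warp(A, u * s, v * s), B)   # residual motion at this scale
            u, v = u + du / s, v + dv / s              # accumulate in full-resolution pixels
        return u, v

    yy, xx = np.mgrid[0:128, 0:128].astype(float)
    img_A = np.sin(0.1 * xx) * np.sin(0.13 * yy)
    img_B = warp(img_A, 4.0, 0.0)                      # a 4 pixel motion, far beyond the linearization
    print(coarse_to_fine(img_A, img_B))                # approximately (4.0, 0.0)

Because the motion at the coarsest level is only a fraction of a pixel, the linearization of Equation 2.14 remains valid at every level even though the full-resolution displacement spans several pixels.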
2.2 Prior Classes

The three inverse problems introduced in Sections 2.1.1-2.1.3 all have forward models that can be represented (at least within a scale) as the product of a data matrix, F, with the unknowns, p. Since they are all generally under-determined and may have calibration errors or noisy measurements, solutions are only required to reproduce the measurements in a least-squares sense (via the data term), subject to assumptions on the solution properties. These properties are enforced via the prior or regularizer function, Γ(p). This leads to the minimization problem shown in Equation 2.18.

p^* = \arg\min_p \underbrace{\tfrac{1}{2} \| q - F p \|_2^2}_{\text{data term}} + \underbrace{\gamma \, \Gamma(p)}_{\text{prior}}    (2.18)

Section 2.1 defined the data term for the three problem classes but did not discuss the prior function Γ. This section does the reverse, concentrating on different types of priors that can be applied under the assumption that the data term is defined as in Equation 2.18.

Typically the definition of the prior involves penalizing variation of the solution from classes of solutions with specific properties. These properties can include smoothness, having values within a physically realizable range (e.g. non-negative emissivity), possessing certain statistics or simply being a member of a set of valid configurations. Each of these properties must be described mathematically in order to define the prior function, Γ, and it is common for priors to change the mathematical structure of the problem. Depending on the prior, the resulting objective function can be non-smooth, non-convex or even discontinuous. The altered structure makes the resulting minimization in Equation 2.18 more difficult to solve, and this is generally the main challenge encountered in solving inverse problems. Often customized solvers need to be developed for specific prior classes; some examples are discussed in Section 2.3.

One particularly useful family of prior functions involves applying a cost function or loss function, C, to the product of a linear operator, G, with the estimated solution p. The cost function takes a vector argument (the product Gp) and produces a scalar indicating how undesirable or implausible the current solution configuration is. The resulting regularizer is defined in Equation 2.19.

\Gamma(p) = C(G p)    (2.19)

The family of regularizers defined by Equation 2.19 includes some of the most widely used and most successful priors for inverse problems. Different choices of the cost function C and the prior matrix G imply different assumptions on the solution properties. This section reviews several common examples from this class of priors.

When the unknown solution is a vector field (e.g. in optical flow when solving for the velocity field \vec{u}), priors are frequently applied independently to each vector field component. In other cases, the prior couples components of the vector field together, allowing information from one component to be incorporated into the others.

2.2.1 L2 Priors

When the cost function in Equation 2.19 is chosen to be the 2-norm, the prior acts as a Tikhonov regularizer for the data term in Equation 2.18. The resulting prior (Equation 2.20) penalizes the squared Euclidean length of the product Gp. This disproportionately penalizes solutions that produce large components of Gp.

\Gamma(p) = \| G p \|_2^2    (2.20)

The advantage of choosing priors in this form is that the optimal solution, p^*, to Equation 2.18 can be written in closed form by setting the gradient of Equation 2.18 to zero and simplifying to obtain Equation 2.21.

p^* = \left( F^T F + 2 \gamma \, G^T G \right)^{-1} F^T q    (2.21)

With minor assumptions on the data term and the prior matrix G (specifically, strict convexity of Equation 2.18), p^* is the unique minimizer of Equation 2.18 and can be computed by solving a system of linear equations. This makes L2 priors some of the easiest to incorporate into an inverse problem, since the resulting objective functions are smooth and convex quadratic forms that can be solved using standard methods. Different choices of the prior matrix, G, imply different assumptions about the solution, which are discussed below.
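The sketch below evaluates the closed-form solution of Equation 2.21 directly for a small dense problem. Explicitly forming FᵀF is only reasonable at this toy scale; for the matrix sizes discussed in Section 2.1.1 the same minimizer would be computed iteratively and matrix-free. The problem dimensions, the random data and the value of γ are placeholder assumptions.

    import numpy as np

    def tikhonov_solve(F, q, G, gamma):
        """Closed-form minimizer of Equation 2.18 with the L2 prior ||G p||_2^2,
        i.e. Equation 2.21: p* = (F^T F + 2*gamma*G^T G)^{-1} F^T q."""
        A = F.T @ F + 2.0 * gamma * (G.T @ G)
        return np.linalg.solve(A, F.T @ q)

    # Under-determined toy problem regularized with G = I (the minimum-norm prior below).
    rng = np.random.default_rng(2)
    F = rng.standard_normal((30, 80))              # more unknowns than measurements
    q = rng.standard_normal(30)
    p_star = tikhonov_solve(F, q, np.eye(80), gamma=1e-3)
    print(np.linalg.norm(F @ p_star - q))          # small data misfit; the regularizer selects one solution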
Minimum-norm solutions

Choosing G = I, where I is the identity matrix, penalizes the squared Euclidean length of the solution vector, resulting in Equation 2.22.

\Gamma(p) = \| p \|_2^2    (2.22)

Equation 2.22 is a classic form of Tikhonov regularization that is commonly used for under-determined linear systems. In looking for a minimum 2-norm solution, this prior favours solutions with many low-amplitude components over those having a few large components. This is known as ridge regression when applied to model fitting [72].

A disadvantage of this particular prior for imaging problems is that, unless the solution is expressed in a basis with spatial structure (e.g. the Fourier or wavelet bases), pixel values are treated independently, so low-magnitude but noisy solutions may still result. However, one useful example is in fluid velocity estimation, where (assuming constant-density flows) minimum 2-norm solutions correspond to solutions with minimum kinetic energy.

Smooth solutions

Choosing G = \nabla_{\vec{x}}, where \nabla_{\vec{x}} is a matrix that computes per-pixel spatial gradients, penalizes solutions with high gradients and is referred to as an L2 smoothness prior. The prior function is shown in Equation 2.23.

\Gamma(p) = \| \nabla_{\vec{x}} p \|_2^2 = p^T \nabla_{\vec{x}}^T \nabla_{\vec{x}} p = p^T \nabla_{\vec{x}}^2 p    (2.23)

A minimum of Equation 2.23 can be found by setting its gradient to zero, which produces the Laplace equation, \nabla_{\vec{x}}^2 p = 0. Optimal solutions with respect to the prior consequently satisfy the Laplace equation, which implies smoothness. It is worth noting that this is at best a heuristic: many smooth solutions may not satisfy the Laplace equation, and there is little reason to suspect that the solution to an arbitrary inverse problem should. Again, an exception of sorts occurs for estimating the velocity of fluids, where viscous effects are modelled by the Laplace operator applied to each velocity component.

When solutions are expected to be even smoother than what results from Equation 2.23, the prior function can be defined to penalize the 2-norm of the second derivatives of the solution. This results in an L2 curvature prior, shown in Equation 2.24.

\Gamma(p) = \| \nabla_{\vec{x}}^2 p \|_2^2    (2.24)

The smoothness priors in Equations 2.23 and 2.24 are effective for suppressing high-frequency oscillations in the solutions and are particularly useful when the unknown solution is expected to be smooth. Variants of smoothness priors have been used in microscopy [127], tomography [81, 82] and deconvolution [99, 42, 90], and in optical flow, where using the prior shown in Equation 2.23 in a PDE optical flow formulation results in the ubiquitous Horn-Schunck method [80].

The disadvantage of using L2 priors when the solution is not expected to be smooth (or only piecewise smooth) is that, although they can effectively filter high-frequency artefacts, they strongly penalize areas where the solution is non-smooth. This often manifests as low-frequency ringing or blurring artefacts near (what should be) discontinuities. This limitation has led to interest in so-called sparse priors, which are able to handle piecewise smooth solutions more naturally.
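Both the L2 smoothness prior above and the sparse-gradient and total variation priors introduced next are built around a discrete gradient operator. The sketch below constructs one possible \nabla_{\vec{x}} as a sparse matrix using forward differences with a zeroed final row per axis; the stencil, the boundary treatment and the test image are assumptions chosen only to keep the example short.

    import numpy as np
    import scipy.sparse as sp

    def gradient_operator(ny, nx):
        """One possible discrete gradient: forward differences stacked as [Dx; Dy].
        With this G, ||G p||_2^2 is the smoothness prior of Equation 2.23, ||G p||_1 is the
        sparse-derivative prior of Equation 2.27, and grouping (Dx p, Dy p) per pixel
        gives the total variation of Equation 2.29."""
        def diff_1d(n):
            d = sp.diags([-np.ones(n), np.ones(n - 1)], [0, 1], format='lil')
            d[-1, :] = 0                              # zero-gradient row at the boundary
            return d.tocsr()
        Dx = sp.kron(sp.eye(ny), diff_1d(nx))         # differences along x (within each row)
        Dy = sp.kron(diff_1d(ny), sp.eye(nx))         # differences along y (between rows)
        return sp.vstack([Dx, Dy]).tocsr()

    G = gradient_operator(64, 64)
    ramp = np.outer(np.linspace(0.0, 1.0, 64), np.ones(64)).ravel()   # smooth vertical ramp image
    print(np.sum((G @ ramp) ** 2))                    # ~1.0: a smooth image incurs a small L2 penalty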
This limitation has led tointerest in so-called sparse priors which are able to handle piecewise smoothsolutions more naturally.2.2.2 L1 PriorsA limitation of the L2 smoothness priors discussed in the previous sectionis that they tend to over-smooth sharp features in reconstructed images.There is strong interest in priors that regularize inverse problems by im-posing smoothness or simplicity assumptions but which also preserve dis-continuities. The reason for this is that discontinuites play a major role inthe structure and appearance of medical images, photos and motions, wherethey distinguish tissue/material types and object boundaries.Such priors are often referred to as sparse which indicates that the so-lution, when represented in a suitable basis, consists of predominately zerocomponents with only a small subset of non-zero components. Incorporat-ing sparsity as a constraint is challenging since it requires minimizing thenumber of non-zero entries (the 0-norm) in the solution vector. This resultsin a NP-hard combinatorial optimization problem that can only be solvedby exhaustive search in the general case [35].However it has been empirically known for several decades that usingthe 1-norm as a cost function results in sparser solutions than the 2-norm,but results in a continuous and convex optimization unlike the 0-norm. Re-sults from compressed sensing have subsequently shown that exact solutionsto under-determined inverse problems can be found when the solutions areknown a priori to be sparse in some basis and measurement is performedsuitably [34], i.e. solutions that solve the 0-norm problem exactly withoutresorting to exhaustive search in specific cases. These results have beenextended to include certain inverse problems regularized by total variationpriors (see below) as well [33, 118]. Unfortunately, the results rely on restric-tive classes of sampling matrices that do not apply to the applications inthis thesis. Regardless, the use of L1 priors for improving reconstruction ofdiscontinuities has been well established. Minimizers of identical objectivefunctions can be interpreted as MAP solution estimates under the assump-tion that solution components follow a Laplace distribution, see e.g. [121].The definition of sparsity is often relaxed somewhat to include sparselydistributed image gradients or curvatures, i.e. that images are piecewiseconstant or linear, respectively. An example of this is ROF or total-variationdenoising [131], which minimizes the sum of gradient magnitudes, rather222.2. Prior Classesthan the sum of squared gradient magnitudes (the L2 smoothness prior), inorder to faithfully reproduce sharp edges.The first of the sparsity inducing priors considered are L1 priors of theform in Equation 2.25. These choose the loss function as the 1-norm of theproduct Gp. By using the 1-norm, these priors decrease the penalty incurredby large solution components compared with the 2-norm and consequentlyfavours solutions where the product Gp is sparse, meaning that Gp has arelatively small number of non-zero components.Γ(p) = ‖Gp‖1 (2.25)The advantage of using L1 priors is that they encourage sparser solutionsthan the L2 priors from Section 2.2.1 but unlike the zero norm loss functionthe prior functions themselves are continuous and convex. When combinedwith an appropriate convex data term (such as the least-squares data termfrom Equation 2.18), this results in a convex objective function with a singleglobal minimum. 
Such functions are preferred over non-convex functionssince it removes the possibility of the optimization algorithm becoming stuckin a local minimum of the function.However the disadvantage of using L1 priors is that the the resultingproblems are significantly more difficult to solve than problems regularizedwith the smooth and quadratic L2 priors. Only recently have efficient solversbeen developed for general classes of L1 priors, some are discussed in Sec-tion 2.3.As in the case for the L2 priors in Section 2.2.1, the choice of G affectswhat the corresponding assumptions are on the solution. Some commonchoices for the prior matrix G are described below.Minimum 1-Norm SolutionsChoosing G = I results in the prior shown in Equation 2.26. The com-bined problem of the minimization in Equation 2.18 with the prior definedby Equation 2.26 is commonly referred to as basis-pursuit denoising whenused in an inverse problem context [39] or, in a slightly modified but equiv-alent form, as a LASSO (least absolute shrinkage and selection operator)problem [72] when used for sparse regression.Γ(p) = ‖p‖1 (2.26)When solving inverse problems, defining the prior as in Equation 2.26 ismost common when the solution is represented in an orthogonal basis where232.2. Prior Classessparsity corresponds to some idea of simplicity. Some examples include theFourier or wavelet bases [50]. Even when the solution is not represented ina sparse basis, it is possible to modify the data term by defining the datamatrix as the product F = F′B, where B transforms from the sparse basisto the frame in which the data term is most naturally defined and F′ is theoriginal data matrix [16].Sparse Derivatives & CurvaturesAnalogous priors to the L2 smoothness and curvature priors in Equa-tions 2.23 & 2.24 can also be defined for the 1-norm as shown in Equa-tion 2.27 and 2.28. As before, these penalize solution gradients and secondderivatives, except that the penalty on regions of high-gradient or curvatureis reduced relative to the L2 priors. Since solutions with sparse gradients areobtained, these priors favour piecewise constant or smooth solutions withmostly zero gradients or curvatures, respectively.Γ(p) = ‖∇~xp‖1 (2.27)Γ(p) = ‖∇2~xp‖1 (2.28)Total Variation PriorA closely related prior to the sparse derivatives prior is the total varia-tion (TV), defined in Equation 2.29 for the discrete setting by denoting thespatial solution gradient vector at pixel i as ∇~xip.Γ(p) =∑i‖∇~xip‖2 (2.29)The total variation prior is an isotropic variant of the sparse derivativesprior that has the advantage of reducing blocky artefacts in reconstructionssince it is rotationally invariant. This change means that it no longer fitsthe form Γ(p) = ‖Gp‖1, however the resulting prior is convex and can besolved for efficiently, e.g. as in [37].The total-variation prior has seen widespread use in inverse problems(e.g. [153, 38, 120, 28, 166, 37]) for its ability to reconstruct solutions thatare piecewise constant. This important class of images can include im-ages consisting of discrete tissues types in medical imaging or the motionof individual objects in optical flow. It is often used in deconvolution anddeblurring as well, where images tend to be piecewise smooth.242.2. Prior ClassesHowever, in some cases the distribution of gradients produced by thetotal-variation prior from Equation 2.29 or from the class defined by Equa-tion 2.25 is not as sparse as what is observed in real-world data. 
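For concreteness, the anisotropic sparse-gradient prior of Equation 2.27 and the isotropic total variation of Equation 2.29 differ only in how the per-pixel gradient components are combined; a minimal 2D sketch (forward differences with replicated boundaries, illustrative only) is shown below.

    import numpy as np

    def grad2d(p):
        """Per-pixel forward-difference gradients of a 2D array."""
        gx = np.diff(p, axis=1, append=p[:, -1:])
        gy = np.diff(p, axis=0, append=p[-1:, :])
        return gx, gy

    def sparse_gradient_l1(p):      # Equation 2.27: ||grad p||_1
        gx, gy = grad2d(p)
        return np.abs(gx).sum() + np.abs(gy).sum()

    def total_variation(p):         # Equation 2.29: sum over pixels of ||grad p||_2
        gx, gy = grad2d(p)
        return np.sqrt(gx**2 + gy**2).sum()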
The statis-tics of natural images shows this to be true for deblurring/deconvolutionand has motivated the development of priors that encourage even sparserdistributions of gradients and curvatures.2.2.3 Non-Convex PriorsSparser distributions than can be obtained from an L1 loss function can beobtained by looking for sparser penalties than the 1-norm. Such penaltiesare often referred to as fractional norms although they don’t satisfy thetriangle inequality.A common example of a sparser cost function is C(v) =∑j |vj |α. De-noting the j’th row of G by Gj and choosing α ∈ [0, 1), this leads to theprior in Equation 2.30.Γ(p) =∑j|Gj · p|α (2.30)Choosing α > 0 results in a continuous, but non-smooth and non-convexproblem while choosing α = 0 results in an NP-hard combinatorial problemthat is rarely used in practice, although it is sometimes approximated viareweighting schemes [35].The advantages of the non-convex priors is that they can theoreticallyprovide sparser gradient distributions than L1 priors [101]. A major disad-vantage is that the problems are no longer convex, so solution algorithmscan become stuck in local minima of the objective function, or, in the caseof the L0 loss function, require exhaustive enumeration to solve in the gen-eral case. This also makes them more sensitive to starting conditions andparameter selection than other methods.In spite of the difficulties, non-convex priors have seen widespread use indenoising, where the statistical distribution of gradients in so-called naturalimages are sparser than those produced by L1 priors. Sparser priors haveconsequently been used in deblurring and deconvolution, where they areintended to effect a compromise between improving the mathematical modelof the problem while remaining solvable in practice.Non-convex sparse gradients priors are widely used in image deconvo-lution, e.g. [107, 101, 91, 106], since the exponent α can be tuned to thedistribution of gradients observed in natural images. In some cases, suchas [101], these can be motivated as MAP estimates of an imaging model252.3. Solution Methodsthat introduces Gaussian-distributed errors with prior assumption that im-age gradients follow a hyper-Laplacian distribution.2.2.4 Other Prior ClassesThe priors presented so far in this section have been fairly simple in thesense that they impose common assumptions, such as solutions being low-magnitude, (piecewise) smooth or being sparse. However it is often possibleto incorporate additional information about the system into the reconstruc-tion process in the form of additional priors and, by doing so, obtain im-proved results.Some examples of this include introducing priors that couple image chan-nels together to compensate for chromatic blur kernels [76] or that regularizeimages in a gamma-corrected space since images are often captured in a lin-ear space but only viewed once gamma corrected [66].Other priors make higher level assumptions about content than simplyassuming statistical distributions of the solution. This can include assumingthat images have only a finite set of colors either globally [66] or locally [91].More common recently is to attempt to learn models from images and thenpenalize deviation from these models [43, 88, 96]. Still other priors can begeneralized to restricting the solution to a set of admissible solutions withzero penalty and inadmissible solutions with infinite penalty. 
This penaltyfunction is discontinous but specialized methods for solving it are discussedin Section 2.3 and then applied in Chapter 5 to the problem of fluid velocityestimation.Introducing more detailed assumptions can frequently improve the qual-ity of reconstructions, however the priors that result from these assumptionsmay be any combination of non-convex, non-smooth and discontinuous. Thisoften means that incorporating a new prior involves developing an entirelynew solver designed specifically to optimize for the resulting objective func-tion.The stochastic method used in Chapters 3 and 4 is intended to simplifythis process, allowing priors to be tested in a common framework beforeinvesting the effort of developing customized solvers.2.3 Solution MethodsSection 2.2 introduced several common prior classes that are used to reg-ularize inverse problems. However, it did not discuss how the resultingoptimization problems are solved, except to state that it is often required262.3. Solution Methodsthat the solvers be matrix-free due to the scale of typical imaging problems.This is extremely important for tomography and deconvolution/deblurringwhere, if the system were stored explicitly as a sparse matrix, each of the(typically several million rows) can have several hundred non-zeros. Thescale of such systems precludes many traditional linear-algebra approachessuch as direct solvers or factorization-based preconditioners on consumercomputers.Solution methods can generally be grouped into algorithms for smoothobjective functions which are discussed in Section 2.3.1 and those for non-smooth objective functions, introduced in Section 2.3.2.2.3.1 Smooth MethodsStandard methods from optimization can be applied when the objectivefunction for a regularized inverse problem is smooth, provided that theycan operate in a matrix-free fashion. When the objective function is non-smooth more sophisticated techniques must be used. However, in manycases, methods designed for non-smooth objectives involve solving one ormore smooth sub-problems, each of which can use the methods from thissection.In some cases, structure of the problem allows specialized methods to bedeveloped. These are similar to the spectral methods used to solve PDEswith periodic boundary conditions, in that they are extremely efficient butonly apply in certain circumstances. Two such approaches are discussed inSection 2.3.1, one for tomography (which unfortunately does not apply wellto graphics problems) and one for deconvolution that is further discussed inChapter 5.In the discussion that follows, the regularized inverse problem from Equa-tion 2.3 will be represented by a function E(p), defined in Equation 2.31.E(p) =12‖q− Fp‖22 + γΓ(p) (2.31)It is assumed, for the remainder of this section, that the prior is one of theL2 priors discussed in Section 2.2.1 or is at least smooth and convex.Gradient Descent and Related MethodsOne of the simplest methods for minimizing a smooth function is gradientdescent. Given an objective function, E(p), taking a vector argument andreturning a scalar and denoting its gradient by ∇E(p), gradient descent272.3. 
consists of the steps shown in Algorithm 1, where k is an iteration index and τ is a step size.

Algorithm 1 Gradient Descent
1: procedure GradientDescent( E, p^{(0)} )
2:     k ← 0
3:     while \|\nabla E(p^{(k)})\|_2^2 > \epsilon do
4:         p^{(k+1)} ← p^{(k)} − τ^{(k)} \nabla E(p^{(k)})
5:         k ← k + 1
6:     end while
7:     return p^{(k)}
8: end procedure

The step size τ can be set by a per-iteration line search that minimizes the 1D auxiliary function E(p^{(k−1)} − τ\nabla E(p^{(k−1)})), set statically based on a Lipschitz constant of the objective's gradient, or scheduled as in accelerated gradient methods, e.g. [16].

When the objective function in Equation 2.18 consists only of the least-squares data term E(p) = \tfrac{1}{2}\|q − Fp\|_2^2 (i.e. Γ(p) = 0 or γ = 0), the gradient of the objective function is given by Equation 2.32:

\nabla E(p) = F^T\left(Fp − q\right)   (2.32)

Gradient descent is often not used for standard optimization problems, such as those involving PDEs, because it converges slowly. In spite of this it is often applied, in various forms, to solving linear inverse problems since i) the scale of the problems is such that more advanced methods are impractical, ii) uncertainty and systematic errors in the forward model mean that solving beyond a modest level of accuracy is unnecessary, and iii) due to (ii), relatively few iterations of gradient descent are needed to reach this level. Furthermore, the artefacts produced by gradient descent (e.g. streaks in tomography or ringing in deconvolution) often have sufficiently different structure from the underlying scene that the results can still be interpreted even when solved without explicit regularization.

There are many variations on gradient descent that are commonly used in inverse problems. Most involve performing gradient descent on subsets of the measurements, with the differences lying primarily in how those subsets are chosen. Stochastic gradient descent [21] performs gradient descent on subsets of measurements (or single measurements) chosen uniformly at random. Each subset consequently provides an unbiased estimate of the true gradient, so although individual steps may increase the objective the overall method converges. Related methods improve convergence rates by incorporating a memory of previous gradient estimates [135] or by controlling the error of gradient estimates as the optimization proceeds [136].

Related to stochastic gradient descent is the randomized Kaczmarz method [144], which operates on the equations of single measurements chosen with probability proportional to \|F_i\|_2^2. This method is also known as the algebraic reconstruction technique (ART) in tomography. Performing weighted gradient descent on subsets of measurements defined by each projection angle is known as the simultaneous algebraic reconstruction technique (SART) [5]. An excellent summary of such methods in the context of tomography can be found in [93].

Smooth priors can easily be incorporated into gradient descent solvers, however L2 priors based upon spatial derivatives or curvatures (such as those in Section 2.2.1) couple the data term to a PDE. This may make the combined problem harder to solve effectively with gradient descent. However, when L2 priors are included, the combined problem in Equation 2.3 is usually convex.
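A matrix-free implementation of Algorithm 1 for the least-squares data term needs only routines that apply F and F^T; the sketch below is illustrative (function and variable names are placeholders, and a small dense matrix stands in for the forward model).

    import numpy as np

    def gradient_descent(apply_F, apply_Ft, q, p0, tau, iters=500):
        """Matrix-free gradient descent (Algorithm 1) for E(p) = 0.5*||q - F p||^2.
        `apply_F` and `apply_Ft` implement products with F and F^T (e.g. ray
        tracing and back-projection in tomography), so F is never stored."""
        p = p0.copy()
        for _ in range(iters):
            grad = apply_Ft(apply_F(p) - q)   # Equation 2.32
            p -= tau * grad
        return p

    # toy usage with an explicit matrix standing in for the forward model
    A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    q = np.array([1.0, 2.0, 3.0])
    p = gradient_descent(lambda x: A @ x, lambda r: A.T @ r,
                         q, np.zeros(2), tau=1e-2)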
Convexity of the combined problem allows quasi-Newton or conjugate gradient methods to be applied.

(Quasi) Newton and Conjugate Gradient Methods

Limited-memory quasi-Newton and conjugate gradient methods differ from gradient descent by directly solving for a minimum of the objective function. If the objective function is strictly convex (which is assumed via the introduction of a suitable prior), this point is unique. When the prior is one of the L2 priors introduced in Section 2.2.1, this corresponds to solving Equation 2.33.

\nabla E(p) = \left(F^T F + 2\gamma\, G^T G\right)p − F^T q = 0   (2.33)

Limited-memory quasi-Newton methods solve this system by applying Newton's method to an approximation of the Hessian matrix \nabla^2 E(p) (or inverse Hessian matrix) constructed in a matrix-free fashion at run-time from a history of point-sampled gradient evaluations [159]. Conjugate gradient methods use matrix-vector products with the parenthesized matrix in Equation 2.33 to solve in a Krylov subspace [133]. By representing these matrices implicitly, e.g. by ray-tracing or convolution, the required multiplications can be performed without constructing or storing F or G.

This allows both methods to be applied in a matrix-free fashion with minimal storage while achieving much faster convergence than standard gradient descent. Consequently, L2-regularized inverse problems can be solved directly. These methods can also be used as the main computational building blocks of splitting and proximal methods, which are discussed in Section 2.3.2.

Specialized Fourier Methods

The structure of some inverse problems allows them to be solved extremely efficiently in Fourier space. Deconvolution and tomography are two examples, exploiting the Fourier convolution theorem and the Fourier slice theorem, respectively. These theorems allow Equation 2.31 to be expressed in Fourier space and solved by pointwise operations; the solution is then obtained by an inverse Fourier transform. Similar ideas underlie the very popular filtered backprojection (FBP) algorithm [5] for tomography and Fourier implementations of the classical Richardson-Lucy algorithm [129], an expectation-maximization algorithm for deconvolution.

Several conditions must be met in order to apply these approaches, similar to the conditions required to apply spectral methods to the solution of partial differential equations [36]. First, the solution must be periodic. This is generally not a problem in tomography, where there is usually an ambient domain with constant or known properties surrounding the subject, but the restriction can introduce artefacts near boundaries in deconvolution if not handled correctly. A second restriction, in tomography, is that the input data must be obtained as, or resampled to, uniformly spaced orthographic projections. The perspective projections used in optical tomography generally do not meet this restriction.

With suitable boundary handling, deconvolution can exploit Fourier solvers provided the point spread function is spatially constant and any priors that are used can be expressed as convolutions. In this case, the minimum of Equation 2.31 can be found with Equation 2.34, where \mathcal{F}(\cdot) and \mathcal{F}^{-1}(\cdot) represent the forward and inverse Fourier transform, {}^* denotes the complex conjugate and all multiplication and division is performed pointwise.

p = \mathcal{F}^{-1}\!\left(\frac{\mathcal{F}(F)^*\,\mathcal{F}(q)}{\mathcal{F}(F)^*\mathcal{F}(F) + \gamma\,\mathcal{F}(G)^*\mathcal{F}(G)}\right)   (2.34)

With known data and prior matrices, this can be implemented by transforming the measurements to the Fourier domain, performing a few pointwise operations and transforming the result back, giving an asymptotic complexity of O(n log n) in the number of pixels. However, Equation 2.34 applies only when F and G represent convolutions and when an L2 loss function is used for both the data term and the prior.

2.3.2 Non-Smooth Methods

Section 2.3.1 reviewed several optimization approaches for smooth objective functions. These approaches work well for smooth functions, but poorly on non-smooth functions, since they are based upon Taylor expansion of the objective: where gradients are not uniquely defined, the Taylor expansion is a poor approximation of local function behaviour. This means that methods designed for smooth functions usually converge slowly, if at all.

Non-differentiable objective functions, including those built on the L1 norm, consequently pose a challenge. In spite of this, the theoretical properties of loss functions like the L1 norm, with its robustness to outliers and noise, have inspired considerable research into non-smooth optimization. A selection of this work is reviewed in the remaining sections of this chapter, starting with some early methods and concluding with more modern ones.

Programming and Interior Point Methods

One of the most direct ways of solving L1 problems is to express them as quadratic programs. Suppose the objective function E(p) is given by Equation 2.35.

E(p) = \tfrac{1}{2}\|q − Fp\|_2^2 + \gamma\|p\|_1   (2.35)

This can be expressed as a quadratic program by splitting each unknown into positive and negative components p_i = p_i^+ − p_i^− with p_i^\pm \ge 0, leading to the quadratic program in Equation 2.36:

\arg\min_{p^+,\,p^-} \; \tfrac{1}{2}\left\| q − \begin{bmatrix} F & -F \end{bmatrix} \begin{bmatrix} p^+ \\ p^- \end{bmatrix} \right\|_2^2 + \gamma\,\mathbf{1}^T p^+ + \gamma\,\mathbf{1}^T p^-
\quad \text{subject to} \quad p^+ \ge 0, \; p^- \ge 0   (2.36)

Solutions of Equation 2.36 are clearly equivalent to solutions of the original problem, since if p_i^+ > 0 then p_i^− = 0 and vice versa; otherwise the objective function could be trivially decreased by subtracting \min(p_i^+, p_i^−) from both.

Other, much more general convex programming approaches exist for solving a variety of non-smooth problems that are beyond the scope of this review, including second-order cone programs and semi-definite programs [151]. Both the quadratic programming formulation described above and these more general formulations can be solved in polynomial time using interior-point methods, for which software is widely available, e.g. [63, 110, 147].

These approaches are very powerful but have several disadvantages. Some of these are that they increase the number of unknowns or convert an unconstrained problem into a constrained one. Another issue is that the formulations tend to be specific to one class of problem or prior, making it necessary to reformulate the problem when changing the prior. The last, most severe, issue is that exploiting structure in the problems can be very difficult, which makes scaling to very large problems challenging. Imaging problems, which can have tens of millions of unknowns, often fall into this class.
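To make the splitting in Equation 2.36 concrete, the sketch below solves a small instance of Equation 2.35. Rather than an interior-point solver, it hands the resulting smooth, bound-constrained problem to an off-the-shelf quasi-Newton method (L-BFGS-B), which keeps the example self-contained; names and parameter choices are illustrative only.

    import numpy as np
    from scipy.optimize import minimize

    def l1_via_splitting(F, q, gamma):
        """min_p 0.5*||q - F p||^2 + gamma*||p||_1 via p = p_plus - p_minus
        with p_plus, p_minus >= 0 (Equation 2.36)."""
        n = F.shape[1]
        A = np.hstack([F, -F])                      # [F  -F]
        fun = lambda x: 0.5 * np.sum((q - A @ x) ** 2) + gamma * x.sum()
        jac = lambda x: -A.T @ (q - A @ x) + gamma
        res = minimize(fun, np.zeros(2 * n), jac=jac, method="L-BFGS-B",
                       bounds=[(0.0, None)] * (2 * n))
        return res.x[:n] - res.x[n:]                # recover p = p+ - p-

    rng = np.random.default_rng(1)
    F = rng.standard_normal((20, 50))               # under-determined example
    p_true = np.zeros(50)
    p_true[:3] = [1.0, -2.0, 0.5]                   # sparse ground truth
    p_hat = l1_via_splitting(F, F @ p_true, gamma=0.1)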
The sheer scale of these problems, and the difficulty of solving them, helped motivate the development of splitting and proximal methods, which are discussed in the following section.

Splitting and Proximal Methods

One of the key difficulties in solving inverse problems with sparse priors is that the priors alter the structure of the problem from a relatively straightforward smooth and convex one to a much more challenging non-smooth (and potentially non-convex) one. The previous section showed that the non-smooth terms can be absorbed into a set of constraints, which can be handled using quadratic, cone or semi-definite programming methods; however, the resulting problems do not scale well to the number of unknowns common in imaging.

The splitting and proximal methods in this section avoid this by splitting the difficult non-smooth problem into multiple coupled subproblems that are solved in an alternating fashion. Fast solvers can then exploit structure in each subproblem, since there is no requirement that every subproblem be solved with the same method.

The splitting approach can be illustrated for the minimization of a sum of two convex functions:

p = \arg\min_{p} \; f(p) + g(Gp)   (2.37)

This can be split into a constrained problem in two sets of unknown variables:

p, z = \arg\min_{p,\,z} \; f(p) + g(z) \quad \text{subject to} \quad Gp = z   (2.38)

This splitting allows the alternating direction method of multipliers (ADMM) to be applied. ADMM [23] is an augmented Lagrangian method that solves the optimization in Equation 2.38 iteratively as two coupled subproblems and a (scaled) Lagrange multiplier update.

p^{(k+1)} = \arg\min_{p} \; f(p) + \tfrac{\rho}{2}\|Gp − z^{(k)} + y^{(k)}\|_2^2   (2.39)
z^{(k+1)} = \arg\min_{z} \; g(z) + \tfrac{\rho}{2}\|Gp^{(k+1)} − z + y^{(k)}\|_2^2   (2.40)
y^{(k+1)} = y^{(k)} + Gp^{(k+1)} − z^{(k+1)}   (2.41)

When f and g are convex, Equations 2.39–2.41 provably converge to a minimizer of the combined problem \arg\min_p f(p) + g(Gp).

In the (common) case of the L1 prior Γ(p) = \|Gp\|_1 with a least-squares data term, the functions can be defined as f(p) = \tfrac{1}{2}\|q − Fp\|_2^2 and g(z) = \gamma\|z\|_1, leading to the update formulas in Equations 2.42–2.44.

p^{(k+1)} = \arg\min_{p} \; \tfrac{1}{2}\|q − Fp\|_2^2 + \tfrac{\rho}{2}\|Gp − z^{(k)} + y^{(k)}\|_2^2   (2.42)
z^{(k+1)} = \arg\min_{z} \; \gamma\|z\|_1 + \tfrac{\rho}{2}\|Gp^{(k+1)} − z + y^{(k)}\|_2^2   (2.43)
y^{(k+1)} = y^{(k)} + Gp^{(k+1)} − z^{(k+1)}   (2.44)

The advantage of introducing the splitting can be seen by examining Equations 2.42 & 2.43. The first benefit is that the two functions f and g are now minimized separately, thanks to the splitting variables z, and so can be addressed by separate and specialized solvers. The second, larger benefit is that the non-smooth function g in Equation 2.43 decouples into a set of 1D L1 problems, one for each entry of z. Each of these can be solved in parallel and has the form shown in Equation 2.45, with v_i = \left(Gp^{(k+1)} + y^{(k)}\right)_i.

z_i^{(k+1)} = \arg\min_{z_i} \; |z_i| + \tfrac{\rho}{2\gamma}\left(z_i − v_i\right)^2   (2.45)

Equation 2.45 can be solved exactly in constant time by applying the soft-threshold or shrinkage operator [50] shown in Equation 2.46.

z_i^{(k+1)} = \begin{cases} v_i − \tfrac{\gamma}{\rho} & v_i > \tfrac{\gamma}{\rho} \\ 0 & |v_i| \le \tfrac{\gamma}{\rho} \\ v_i + \tfrac{\gamma}{\rho} & v_i < −\tfrac{\gamma}{\rho} \end{cases}   (2.46)

This example highlights the benefit of introducing the splitting in Equation 2.38. The resulting algorithm simply alternates between solving an unconstrained smooth minimization in the first sub-step (Equation 2.42), evaluating Equation 2.46 for each component of the splitting variable z, and performing some straightforward matrix-vector operations to update the Lagrange multiplier vector, y.
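The updates in Equations 2.42–2.44 translate almost line-for-line into code. The dense-matrix sketch below is purely illustrative (imaging-scale problems would solve the p-update with the matrix-free methods of Section 2.3.1); all names are placeholders.

    import numpy as np

    def soft_threshold(v, kappa):
        """Component-wise shrinkage operator of Equation 2.46."""
        return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

    def admm_l1(F, q, G, gamma, rho=1.0, iters=200):
        """ADMM for min_p 0.5*||q - F p||^2 + gamma*||G p||_1,
        following Equations 2.42-2.44 (scaled multiplier y)."""
        n, m = F.shape[1], G.shape[0]
        p, z, y = np.zeros(n), np.zeros(m), np.zeros(m)
        A = F.T @ F + rho * G.T @ G        # p-update system matrix is fixed
        for _ in range(iters):
            b = F.T @ q + rho * G.T @ (z - y)
            p = np.linalg.solve(A, b)                      # Equation 2.42
            z = soft_threshold(G @ p + y, gamma / rho)     # Equations 2.43/2.46
            y = y + G @ p - z                              # Equation 2.44
        return p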
The combined algorithm provably converges to the optimal value of the original problem, and the splitting eliminates the issues caused by non-smooth minimization.

The ideas behind splitting methods can be generalized by defining the proximal operator of a function and developing minimization algorithms for compound objectives either partially, or wholly, in terms of the proximal operators of each component.

The proximal operator of a function f is defined in Equation 2.47 and is conceptually similar to a single step of a trust-region minimization of f. It returns the minimizer of the sum of the function and a penalty/regularization term that acts to keep the result close to the input argument v.

\operatorname{prox}_{\lambda f}(v) = \arg\min_{p} \; f(p) + \tfrac{1}{2\lambda}\|p − v\|_2^2   (2.47)

Although very similar to the splitting approach described earlier, the advantage of basing minimization algorithms on proximal operators is that convergence theory can be derived from the generic properties of proximal operators rather than from the specific functions being solved. Consequently, proximal algorithms generalize well; it is possible to change a prior simply by evaluating a different proximal operator. Similarly, deriving the proximal operator for a new objective function allows existing minimization algorithms to be applied directly.

The proximal framework is very powerful, and proximal operators have been derived for a wide range of common functions, including quadratic forms, L1 norms, non-negativity/bound constraints and many more. Some examples are shown in Table 2.1.

Once the proximal operators for the component functions of the objective have been defined, there are a number of algorithms that can be applied, including ADMM [122] and the primal-dual method of [37]. These methods provably converge for convex functions and have been applied to numerous inverse problems and machine learning problems with great success. Even when the functions are non-convex, it may still be possible to solve challenging problems via proximal methods, e.g. low-rank matrix factorization [122].

Table 2.1: Common proximal operators. Many more examples can be found in [122, 44].
• Arbitrary convex f: \operatorname{prox}_{\lambda f}(v) = \arg\min_p f(p) + \tfrac{1}{2\lambda}\|p − v\|_2^2 (the definition).
• Quadratic form f(p) = \tfrac{1}{2}p^T A p + p^T c: \operatorname{prox}_{\lambda f}(v) = (I + \lambda A)^{-1}(v − \lambda c).
• Linear least squares f(p) = \tfrac{1}{2}\|Fp − q\|_2^2: \operatorname{prox}_{\lambda f}(v) = (I + \lambda F^T F)^{-1}(v + \lambda F^T q).
• L1 norm f(p) = \|p\|_1: component-wise shrinkage, \operatorname{prox}_{\lambda f}(v)_i = v_i − \lambda for v_i > \lambda, 0 for |v_i| \le \lambda, and v_i + \lambda for v_i < −\lambda.
• Set membership (f(p) = 0 for p \in C, \infty otherwise): \operatorname{prox}_{\lambda f}(v) = \Pi_C(v), the Euclidean projection onto C.
• Bound constraints (f(p) = 0 for p_i \in [a, b], \infty otherwise): \operatorname{prox}_{\lambda f}(v)_i = \max(a, \min(b, v_i)), applied component-wise.
• Convex conjugate g^*: \operatorname{prox}_{\lambda g^*}(v) = v − \lambda \operatorname{prox}_{g/\lambda}(v/\lambda), via the Moreau decomposition.

In spite of their benefits, proximal methods do have some drawbacks. The first is that proximal operators must be derived for the component functions of the objective (which should themselves ideally be convex); this can be challenging for many functions.

A second drawback is that proximal methods can have very high memory requirements when several priors are needed. Consider the anisotropic TV prior Γ(p) = \|\nabla_{\vec{x}} p\|_1: the number of splitting variables and Lagrange multipliers needed is the product of the number of unknowns in p and the number of spatial dimensions. For 1D problems this corresponds to a memory footprint of 3X that needed for the solution alone; in 2D this increases to 5X and in 3D it becomes 7X.
This can become quite onerousin practice: a 3D problem involving an L1 penalty on spatial gradients andcurvatures in 3D with non-negativity constraints on the variables that is dis-cretized on a 5123 grid requires approximately 15 Gb of RAM when storedin double precision and solved via a proximal method. That such a simplechoice of priors can result in such a large increase in memory usage is oneof the prime drawbacks of splitting methods.352.4. Discussion2.4 DiscussionThis section has reviewed the three inverse problems considered in this the-sis, computed tomography, image deconvolution/deblurring and optical flow.These three inverse problems can all be expressed the common form of a con-vex prior/regularizer added to a least-squares data term, where the purposeof the prior is encourage solutions with specific properties such as smooth-ness or simplicity.The choice of the prior is often application dependent, however a verycommon class consists of applying a loss function to the product of a linearoperator with the solution. The selection of the operator and cost functiondetermine the corresponding assumptions on the solution, typically eitherthat it has low magnitude, is smooth, or that it is sparse or has sparsegradients and curvatures. These priors are often combined and added toapplication specific priors that enforce more specific assumptions on the so-lution. Some of these were highlighted in Section 2.2.4, others are discussedin Chapters 3-5.Adding a prior to the reconstruction can allow vastly improved esti-mates of the unknown solution to be recovered but this improvement comesat the price of a significantly harder optimization problem to solve. Thisdifficulty has led to the development of non-smooth optimization methodsthat can efficiently solve for some classes of priors by splitting the objectivefunction into coupled subproblems that can be addressed in an alternatingfashion. Examples include the Alternating Direction Method of Multipli-ers [23], proximal methods [122] or the primal-dual method of Chambolle &Pock [37]. These produce state of the art results for many prior classes, par-ticularly for simple L1 priors such as the total variation prior. However, theresulting methods can incur significant memory penalties on compound pri-ors and require significant up-front effort to determine the correct splittingand to develop solvers for the specific sub-problems.In some cases, the problems that must be solved map well to existingapproaches. This is the case in Chapter 5, where an equivalence betweena key operation in fluid simulation with the proximal operator is exploitedto apply sparse reconstruction methods to problems in fluid dynamics. Inother cases, although the problem maps well, the memory overhead for thechosen priors is impractical; this is particularly true for Chapter 5 wherecompound priors are applied to volumetric data over hundreds of frames.Finally, in other cases, non-convex priors are used that do not meet theconvexity requirements of splitting methods or which only have proximaloperators for specific parameter choices. An example of this is the non-362.4. Discussionconvex hyper-Laplacian prior [101], where analytic solutions are only knownfor specific parameter choices and the solutions must otherwise be approx-imated numerically. Rather than investing considerable effort up front indeveloping new optimization schemes for untested priors it is preferable tohave a method in which new priors can be quickly evaluated. 
This saves theeffort of customizing solvers for those priors that yield improvements.This has partially motivated the development of a stochastic reconstruc-tion algorithm in Chapters 3 and 4 that solves regularized inverse problemsand allows nearly arbitrary priors to be introduced. The benefit of this ap-proach is that priors can be quickly evaluated without needing to developcustomized solvers for each prior variation that is tested, all while workingin a straightforward framework.37Chapter 3Tomography of MixingFluids3.1 IntroductionThe first application considered in this thesis is of the passive capture of mix-ing fluids. Interest in fluid capture has grown over the last decade and thiscan be partly attributed to an increase in the capture of dynamic phenomenain general. Recent years have seen computer graphics researchers capturedeforming bodies such as garments [26, 125], bodies & motions [53, 71] andfaces [52, 25, 18, 17] as well as of liquid surfaces [83], flames and gases [8, 84].In work following the content presented in this chapter, Gu et al. havealso proposed an acquisition process [69] based on compressed sensing us-ing structured light at 15 frames-per-second (FPS). For slow-moving flows,this approach may well yield improved results over the method in this chap-ter, however many of the flows considered in this chapter change consider-ably between frames even when imaged at 60 FPS. Recently Alterman etal. have also presented city-scale tomographic reconstructions of turbulencestrength [4] based on random variations of recorded images, which presentsan interesting approach to acquire local statistical properties of flows usefulin meteorology and turbine placement.Fluid capture has to some extent lagged behind other areas. This ispartly due to the variety of fluid phenomena, the speed at which they oc-cur and the technical challenges of reflection, refraction and scattering thatoccur in many flows relevant to graphics and engineering. Due to limits onreconstructing light paths in scenes with refractive or reflective events [103],it is generally necessary to restrict attention to a single flow of interest andflow conditions to cases that limit reflective or refractive light paths withinthe capture volume.This chapter focuses on the capture one such flow, that of an emissiveor attenuating tracer fluid mixing with a clear ambient fluid of equal refrac-tive index. Flows of this type do not have interior refractive or reflective383.1. Introductionevents and, for sufficiently low concentrations of tracer fluid, have little orno scattering. In spite of this they are highly relevant since they can modeleveryday phenomena using fluid similitude [158] which matches key dimen-sionless numbers (such as the Reynold’s number). This allows effects likesmoke stacks, rising smoke from a cigarette and the mixing that occurs whenone fluid is poured into another, e.g. milk into coffee, to be captured usingdifferent fluids at different scales.In this chapter, visible light computed tomography based on arrays ofoff-the-shelf consumer cameras (similar to [8]) is used to capture multi-viewvideo of these flows. The captured data serves as input measurements for acomputed tomography problem. To solve the tomography problem, a novelreconstruction algorithm named Stochastic Tomography (ST) is presentedthat is inspired by the Metropolis-Hastings algorithm. 
Stochastic Tomog-raphy addresses several issues common in tomographic reconstruction fromarrays of consumer cameras, including:• The method is both matrix-free and (by default) grid-free since it rep-resents the solution as a set of point samples generated by a stochasticrandom walk. This prevents having to impose a fixed discretizationbefore starting the reconstruction and results in very low memory us-age.• By using random walks, the algorithm automatically adapts to non-uniformly distributed fluids. This allows the algorithm to focus effortin regions containing the tracer fluid and to largely ignore empty re-gions without requiring data-structures such as octrees [85].• By representing the solution as point samples, the algorithm can re-construct and re-render arbitrary views of captured data on-the-fly ina streaming fashion, without ever storing volume data.• The algorithm easily incorporates nearly arbitrary regularizers andpriors, including non-smooth priors, into the reconstruction processwhen using an auxiliary grid. Although convergence of the method isonly guaranteed for smooth priors, in practice the method produceshigh-quality reconstructions without requiring the significant mem-ory overhead of splitting methods or incurring a significant runtimepenalty.The following sections introduce the visible light computed tomographyproblem as solved by Stochastic Tomography as well as some of the chal-lenges393.2. Visible Light Computed Tomography3.2 Visible Light Computed TomographyThe problem of reconstructing an attenuating or emissive scene from pro-jected data was introduced in Chapter 2 but is briefly summarised here fora purely emissive scene observed by a number of cameras. Such a setup isshown in Figure 3.1 which shows an array of consumer camcorders, each ob-serving the capture volume from a different angle. Introducing an emissive(glowing) or fluorescing dye to a clear ambient fluid results in complex fluidflows that are observed from multiple angles by the cameras.Figure 3.1: Visible light tomography capture setup featuring an array ofconsumer camcorders observing a reconstruction volume from different an-gles. Dye can be introduced from the top of the domain by pouring or byinjection via the syringe to the base of the domain. A rotation stage un-der the reconstruction volume and calibration target (not pictured) allowper-pixel ray mappings to be computed for each cameraCareful geometric calibration allows every pixel, i, in each camera to beassociated with a unique ray, Ωi, through the reconstruction volume [149],see Figure 3.2(a). Similarly, careful radiometric calibration lets the pixelvalues from one camera be related to those from another camera. Oncecalibrated, each pixel in each camera can be treated as a measurement, qi,that is a function of the spatially varying tracer fluid emissivity, p(~x).403.2. Visible Light Computed Tomography(a) acquisition (b) measurement formationFigure 3.2: Illustration of tomography forward problem. Left: Geometriccalibration allows each pixel from each camera to be associated with a uniqueray through the reconstruction volume. 
Right: Each pixel value is then ameasurement of the ray integral of the unknown emissivity field along thecorresponding rayAssuming a perfectly transparent ambient fluid and purely emissivetracer fluid, these measurements are formed according to the physical modelshown in Equation 3.1 and Figure 3.2(b).qi =∫Ωip(~x)d~x (3.1)Computed tomography algorithms typically attempt to solve this problemby discretizing the unknown tracer concentration field with basis functions(usually voxels or pixels). This results in a linear equation for each mea-surement i in terms of the discretization weights (components of) p. Eachof these equations has the form shown in Equation 3.2.qi = Fi · p (3.2)Stacking many such equations results in a linear system q = Fp thatmust be satisfied in order for the discretized field to reproduce the observedmeasurements. In practice, for visible light computed tomography, linear de-pendence of the measurements and an insufficient number of cameras resultsin a severely under-determined system. To remedy this and compensate forcalibration errors and noise in the capture process, a prior function, Γ(p),can be introduced to favour more plausible solution. An estimate, p∗, of the413.3. Stochastic Tomography Algorithmunknown state is then recovered via an optimization such as the one shownin Equation 3.3.p∗ = arg minp12‖q− Fp‖22 + γΓ(p) (3.3)A challenge with visible light computed tomography is that, by capturingdata with cameras having perspective projections, it is generally not possibleto acquire data in slices as is common in tomography for medical imaging.This makes it necessary to solve for the entire domain at once rather than asa sequence of 2D problems. A further difficulty is that the number of viewsis usually small (10-20 cameras) so that the problems are severely underdetermined and require strong priors be introduced. It is also common forthe tracer fluid to occupy a relatively small volume fraction of the domainso that most degrees of freedom do not actually contribute to the recon-struction. This last issue can been addressed somewhat by using a visualhull constraint [8] or spatially adaptive data structures [85], however thesecan still include many redundant degrees of freedom or involve complexdata-structures, respectively.The following section introduces the Stochastic Tomography algorithmwhich is well suited to this type of reconstruction problem and which largelyeliminates these issues. The algorithm is based upon stochastically gener-ated point samples that are adaptively placed within the domain. Thisallows the algorithm to adapt sampling effort based on the spatial variationof the tracer fluid automatically and without complex data structures. Aswill be explained, it also allows tomographic projections to be re-renderedon the fly without storing volume data. A final benefit is the ability tooptionally incorporate priors to improve reconstruction quality when thesystem is under-determined.3.3 Stochastic Tomography AlgorithmRather than immediately discretize Equation 3.1 like existing computed to-mography methods, Stochastic Tomography begins by defining measurementresiduals, r, as in Equation 3.4.r(p(~x)) = q− F (p(~x)) (3.4)Temporarily ignoring the prior function, these residuals are squared andsummed over all measurements to obtain an objective function E(p(~x)) for423.3. Stochastic Tomography Algorithmthe reconstruction problem. 
This objective function, shown in Equation 3.5, is equivalent to the data term in Equation 3.3.

E(p(\vec{x})) = \tfrac{1}{2}\|r(p(\vec{x}))\|_2^2   (3.5)

Stochastic Tomography iteratively makes randomized updates to the solution to minimize this function. It expresses the solution recursively at iteration k as p(\vec{x})^{(k)} = p(\vec{x})^{(k−1)} + \epsilon^{(k)}\Delta p^{(k)}(\vec{x}), where \epsilon^{(k)} is a weighting factor and \Delta p^{(k)}(\vec{x}) is the randomized update field.

Using this definition, the (negated) change in objective function due to the update \epsilon^{(k)}\Delta p^{(k)}(\vec{x}) is given by Equation 3.6.

\Delta E\!\left(\epsilon^{(k)}\Delta p^{(k)}(\vec{x})\right) = E\!\left(p(\vec{x})^{(k−1)}\right) − E\!\left(p(\vec{x})^{(k−1)} + \epsilon^{(k)}\Delta p^{(k)}(\vec{x})\right)   (3.6)

If \Delta E\!\left(\epsilon^{(k)}\Delta p^{(k)}(\vec{x})\right) is positive, the objective function decreases and the update improves the fit of the current solution estimate to the data. In this case the update is helpful and should be accepted, i.e. included in the solution; once accepted, the update is never modified. Otherwise, the update has worsened the fit to the data and should be rejected or discarded by setting \epsilon^{(k)} = 0, which zeroes its effect on the reconstruction.

The advantage of this approach is that when the forward model is a linear operator, the algorithm can maintain the set of residuals up to the k'th iteration instead of the full sequence of updates. These residuals are also defined recursively, in Equation 3.7.

r^{(k)} = q − F\!\left(p(\vec{x})^{(k)}\right)
       = q − F\!\left(p(\vec{x})^{(k−1)} + \epsilon^{(k)}\Delta p^{(k)}(\vec{x})\right)
       = q − F\!\left(p(\vec{x})^{(k−1)}\right) − F\!\left(\epsilon^{(k)}\Delta p^{(k)}(\vec{x})\right)
       = r^{(k−1)} − F\!\left(\epsilon^{(k)}\Delta p^{(k)}(\vec{x})\right)   (3.7)

The recursive definition is beneficial when the chosen update has small spatial support, since updates to the solution in a small spatial region only affect a relatively small number of measurements. This means that the second term of Equation 3.7 can be evaluated cheaply, while r^{(k−1)} (which is cached and stored) encodes the effects of all previous updates, since updates are not modified once accepted. Consequently, \Delta E\!\left(\epsilon^{(k)}\Delta p^{(k)}(\vec{x})\right) can be evaluated cheaply for each update.

Based on this, a minimization algorithm for Equation 3.5 can be developed that generates a sequence of updates, each of which reduces Equation 3.5.

3.3.1 Random Walk Algorithm

Stochastic Tomography exploits the ability to cheaply evaluate the effect of compact local updates to the solution by generating randomized chains of updates (hereafter referred to as samples or mutations) via a stochastic sampling algorithm inspired by the Metropolis-Hastings algorithm [114]. A key strength of the Metropolis-Hastings algorithm is that it locally explores around highly profitable samples. When used for physically based rendering, this leads to an ability to locate and refine around strong caustics or light paths through complex sets of occluders [152].

Stochastic Tomography applies this idea in a tomography context where the marker fluid may be distributed non-uniformly within the domain. This occurs when scanning mixing fluids, where the volume of marker dye may represent only a minuscule fraction of the total volume yet cover a large spatial extent within the domain. The algorithm uses a Metropolis-Hastings-like heuristic to generate proposed solution updates based on the success of previous updates, resulting in an algorithm that is conceptually similar to coordinate descent.
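To make the cost argument concrete, the sketch below evaluates Equation 3.6 for a compact point-sample update using the cached residuals of Equation 3.7. Only the few measurements whose rays are touched by the sample's splat need to be visited; all variable names here are illustrative.

    import numpy as np

    def delta_E(residuals, touched, weights, eps):
        """Change in the data term (Equation 3.6) when eps units of emissivity
        are deposited at a point whose splat hits the measurements indexed by
        `touched` with kernel weights `weights`; untouched residuals cancel in
        the difference and never need to be revisited."""
        r_old = residuals[touched]
        r_new = r_old - eps * weights       # Equation 3.7, restricted to touched rows
        return 0.5 * (np.dot(r_old, r_old) - np.dot(r_new, r_new))

    def record(residuals, touched, weights, eps):
        """Commit an accepted sample by updating the cached residuals in place."""
        residuals[touched] -= eps * weights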
It is important to stress, however, that the chain of updates is not a Markov chain, and so the method cannot be considered a Markov chain Monte-Carlo method.

To apply this idea, the Stochastic Tomography algorithm represents the solution as a set of samples s^{(k)} that are generated as randomized chains. Each sample is of the form s^{(k)} = \{\pm\epsilon, \Delta p^{(k)}(\vec{x})\}, where \epsilon is a constant, and is generated by mutating (perturbing) the most recent sample, s^{(k−1)}. The current sample, s^{(k)}, is then probabilistically accepted or rejected based upon the improvement to the objective function value, \Delta E(s^{(k)}). Pseudo-code for the Stochastic Tomography algorithm is shown in Algorithm 2.

Algorithm 2 Stochastic Tomography algorithm
1:  procedure StochasticTomography( N, E, p(\vec{x})^{(0)}, r^{(0)} )
2:      // initialization
3:      k ← 0
4:      s^{(0)} ← uniform-random-sample()
5:      // sampling loop
6:      for k = 1 : N do
7:          // generate a new sample
8:          s^{(k)} ← mutate(s^{(k−1)})
9:          // probabilistically accept s^{(k)} based
10:         // on objective function improvement
11:         a ← max(0, min(1, \Delta E(s^{(k)}) / \Delta E(s^{(k−1)})))
12:         if \Delta E(s^{(k−1)}) < 0 or uniform-random() ≤ a then
13:             // sample accepted, walk
14:             // continues from s^{(k)}
15:             if \Delta E(s^{(k)}) > 0 then
16:                 // sample improves solution, update
17:                 // residual and solution p(\vec{x})^{(k)}
18:                 record(s^{(k)})
19:             end if
20:         else
21:             // sample was rejected, keep exploring
22:             // near previous sample
23:             s^{(k)} ← s^{(k−1)}
24:         end if
25:     end for
26:     return p(\vec{x})^{(k)}, r^{(k)}
27: end procedure

The Stochastic Tomography algorithm requires four functions to be defined that fully specify the problem. These are listed below:

• \Delta E(s^{(k)}) evaluates the change in objective function due to the sample s^{(k)}, which (for the data term) can be evaluated as \Delta E(s^{(k)}) = \tfrac{1}{2}\left(\|r^{(k−1)}\|_2^2 − \|r^{(k−1)} − F(\epsilon^{(k)}\Delta p^{(k)}(\vec{x}))\|_2^2\right). Modifications needed for regularization are discussed below.
• uniform-random-sample() generates a sample uniformly at random within the domain and provides a starting point for the sampling chain.
• mutate(s^{(k−1)}) takes the previous sample s^{(k−1)} as input and perturbs it to propose the current sample s^{(k)}.
• record(s^{(k)}) takes the current sample and updates the residuals r^{(k)} via Equation 3.7, as well as any volume images needed for output or regularization (discussed in Section 3.3.2).

Multiple runs of the Stochastic Tomography algorithm, each of length N, are generated and continue until a user-specified sample budget is exhausted. This helps to ensure ergodicity and is discussed further in the context of the mutation strategy (the mutate(s^{(k−1)}) function).

Spatial Discretization

At this point, no spatial discretization has occurred and the only restriction is that the forward model is a linear operator. This restriction allows the contributions of the previous k−1 updates to be collected into the residual vector r^{(k−1)} and cached.

Stochastic Tomography defines the solution updates as a set of discrete point samples described by Dirac functions, s^{(k)} = \{\pm\epsilon, \delta(\vec{x} − \vec{x}^{(k)})\}, each of which deposits \epsilon units of emissivity at its location \vec{x}^{(k)} via the Dirac (unit impulse) function, \delta. The selection of \epsilon is a user parameter that is related to the number of samples per chain, N, the total number of sampling chains, M, the total observed emissivity, \epsilon_{tot}, and the sample acceptance rate, \alpha. The sample acceptance rate is typically 5-10% and the parameters are related by Equation 3.8.
The method is relatively insensitive to the sampling chain length, N; values from 1000–20000 have been used successfully, so an arbitrary choice of N allows the user to select either \epsilon or the number of chains M, with the remaining parameter determined by Equation 3.8.

\epsilon\,\alpha M N = \epsilon_{tot}   (3.8)

Since the vast majority of samples do not fall exactly on the rays corresponding to a discrete measurement, kernel density estimation [123] (or splatting) is used to distribute the emissivity of a sample to adjacent measurements within the image of residuals for each camera. This is depicted in Figure 3.3. Bilinear weights are used to distribute the emissivity, however it is possible to use other kernels for this purpose, for example piecewise constant functions within pixels (as used in Chapter 4 in the context of image deblurring) or radial basis functions. The only restrictions are that the forward model should remain linear and that the chosen kernels should have relatively small spatial support, so that changes to the objective function \Delta E(s^{(k)}) can be evaluated cheaply.

Figure 3.3: Depiction of sample evaluation. A tentative sample s^{(k)} is placed at the dotted point. Due to the compact support of the data term, only the black dotted residuals are affected by s^{(k)}, allowing \Delta E(s^{(k)}) to be evaluated cheaply. If the sum of squared errors decreases as a result of s^{(k)}, the sample is probabilistically accepted via Russian Roulette. Otherwise it is discarded.

The majority of the computation required for evaluating \Delta E(s^{(k)}) lies in computing the projection of \vec{x}^{(k)} onto the residual image stored for each camera. Consequently, both signs of \pm\epsilon are considered when evaluating \Delta E(s^{(k)}) and the one that produces the largest decrease in the objective is kept. Another optimization is to reject any sample falling outside the visual hull of the input data, similar to [8], since such samples cannot contribute to the reconstruction and would only require subsequent effort to correct later in the reconstruction.

Mutation Strategy

The mutation strategy generates a proposed sample s^{(k)} = \{\pm\epsilon, \delta(\vec{x} − \vec{x}^{(k)})\} based on the previous sample s^{(k−1)} = \{\pm\epsilon, \delta(\vec{x} − \vec{x}^{(k−1)})\} and is intended to favour samples that are more likely to improve the residual. Since residuals tend to be spatially correlated, a reasonable heuristic is local exploration in the neighbourhood of the last successful sample, s^{(k−1)}.

A simple strategy to accomplish this is to use Gauss-distributed offsets from the previous sample, i.e. \vec{x}^{(k)} = \vec{x}^{(k−1)} + [\mathcal{N}(0,\sigma), \mathcal{N}(0,\sigma), \mathcal{N}(0,\sigma)]^T, where \mathcal{N}(0,\sigma) samples from a Gaussian distribution with mean zero and standard deviation \sigma. The value of \sigma is a user-specified parameter, usually set to 5–10% of the reconstruction volume diagonal.

The Gaussian mutation strategy is symmetric and ergodic, meaning that it selects every point in the domain with finite probability. This is an important property: it ensures that, in the limit of infinite samples, no region of the domain is left unsampled, whereas non-ergodic sampling strategies may never sample certain areas.

The sampling process is depicted in Figure 3.4, where black points connected by a line represent the sequence of accepted samples. The dotted end point of the chain always serves as the central point of the mutation strategy. White points, which increase rather than decrease the objective function, are generated as the algorithm proceeds and discarded as they are encountered.
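A minimal sketch of the proposal and acceptance steps described above (illustrative names only; edge cases such as a zero denominator and the visual-hull test are omitted):

    import numpy as np

    rng = np.random.default_rng()

    def mutate(x_prev, sigma):
        """Propose the next sample position as an isotropic Gauss-distributed
        offset from the most recently accepted position; every point in the
        domain remains reachable with finite probability (ergodicity)."""
        return x_prev + rng.normal(0.0, sigma, size=3)

    def accept(dE_new, dE_prev):
        """Russian-roulette test of Algorithm 2: the ratio of objective
        improvements, clamped to [0, 1], is used as the probability of
        continuing the walk from the proposed sample."""
        a = max(0.0, min(1.0, dE_new / dE_prev))
        return rng.random() <= a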
Stochastic Tomography AlgorithmFigure 3.4: Depiction of a sampling chain. The sequence of accepted sam-ples (connected black points) form a chain that terminates with the mostrecently accepted sample (dotted centre). This sample forms the centre ofthe distribution for proposing the next samples. White points representsamples that are discarded during the sampling process. Most samples aregenerated close to the most recently accepted sample, but by virtue of anergodic mutation scheme, large offsets occur occasionally.Once generated, samples that improve the objective function are ac-cepted or rejected based on Russian Roulette using the threshold probabilityin Equation 3.9.a← max(0,min(1,∆E(s(k))/∆E(s(k−1))))(3.9)This process helps to ensure that the random-walks locally explore produc-tive areas of the domain while occasionally jumping to new regions sinceevery point is sampled with finite probability.3.3.2 RegularizationRegularization is often required when reconstructing fluids from multi-viewvideo since measurements are often acquired from as few as 10-20 cam-eras. The resulting tomography systems are consequently severely under-determined which leads to streak artefacts or noise in the reconstructions(see Figure 3.7). These artefacts may be acceptable when the goal is to493.3. Stochastic Tomography Algorithmrender novel viewpoints of data, where averaging introduced by renderingtends to smooth noise and mask streaks somewhat, but are undesirable wheninspecting slices of the domain.Nearly arbitrary regularization terms can be incorporated into the Stochas-tic Tomography algorithm provided they have relatively compact spatialsupport. Regularizers are evaluated via an auxiliary grid that approximateseither the current solution p(~x)(k) or a projection of the current solution.Samples are applied to the auxiliary grid via splatting (kernel density es-timation) using bilinear weights to maintain the current approximation tothe solution or its projection as samples are recorded. When using regular-ization, a slightly modified version of the objective function in Equation 3.5is used that incorporates a prior function Γ(p(~x)). The modified objectivefunction is shown in Equation 3.10.E(p(~x)) =12‖r(p(~x))‖22 + γΓ(p(~x)) (3.10)Evaluating the change in objective function ∆E(s(k)) due to a sample s(k)is performed identically to the unregularized case, except that changes tothe regularization function ∆Γ(s(k)) need to be evaluated as well. The reg-ularizer is evaluated with its stencil centred on the sample location, ~x(k)(or its projection), and at positions whose stencil includes the sample lo-cation both before and after placing the sample. These points correspondto a horizontally and vertically mirrored version of the stencil, as shown inFigure 3.5. The difference between the (altered) portions of the objectivefunction computed before and after placing the sample is then weighted byγ and added to the change in the data fitting term.Volume Based PriorsWhen reconstructing 3D volumes, priors are incorporated by applyingkernel density estimation to an auxiliary volumetric grid on the fly during therandom walk. The resulting intermediate volume representation can then beused to estimate how a new sample will effect the regularizer energy. 
Three volumetric priors have been implemented: an L2 smoothness penalty, an L1 sum-of-absolute-differences (SAD) prior, and the L1 total-variation (TV) prior.

The L2 prior is evaluated on a standard 5- or 7-point Laplacian stencil (for 2D and 3D respectively) and the result is squared. The TV prior returns the magnitude of the solution gradient at the sample location, computed using first-order finite differences. The SAD regularizer sums absolute differences between the centre point and its 26 neighbouring stencil locations in a 3x3x3 window. This last case, even after removing duplicate evaluations, would require 15 times the memory of simply storing the solution if solved using a splitting method such as ADMM, due to the 7 sets of splitting variables and Lagrange multipliers needed in addition to the solution.

Figure 3.5: Evaluation of regularizers. The change in regularizer function is evaluated at the sample position ~x(k) or its projection (dotted centre), which involves the stencil indicated by the black points. The regularizer is also evaluated at all points whose stencil includes ~x(k) (white points), corresponding to a reflection of the stencil about ~x(k). Note that ~x(k) need not be aligned to the underlying grid.

Image Based Priors

Storing and updating a volumetric image can be avoided entirely if the goal is only to render novel viewpoints of the data. In this case, output images can be accumulated via kernel density estimation and the regularizer energy computed on these images. 2D variants of any of the regularizers discussed above can then be used, summing the result over all output images. The effect of using image-based priors is to bias the placement of samples in such a manner as to improve the output image quality, e.g. so that the images have sparse gradients. This is distinct from simply filtering rendered images, since sample placement is determined by balancing the competing objectives of having high-quality output images as well as a good fit to the measurements. Filtering as a post-process, in contrast, yields results that are no longer consistent with the measurements.

The advantage of using image-based priors is that only the memory needed to store the input projections, a few samples and the output images is required. This is often only a few bytes more than the size of the input and output data.

Experiments were performed to see whether volumetric regularization could be achieved by using image-based regularization of randomly generated projections. Unfortunately, even for a large number of projections, this approach did not improve the quality of volume images.

3.3.3 Discussion

This section discusses the relationship of the proposed sampling scheme to the Metropolis-Hastings algorithm and outlines the convergence behaviour of the method.

Relationship to Metropolis-Hastings Sampling

Although the random walk algorithm in Algorithm 2 is very similar to the Metropolis-Hastings sampling algorithm, there are several key differences that are important to point out. The primary difference is that Stochastic Tomography does not produce a Markov chain. This is evident since the sampling strategy is driven by ∆E(), which changes as samples are introduced and reduce the objective function. Consequently each sample, s(k), depends on the full sampling history rather than only the previous sample, s(k−1).
A second important distinction is that Metropolis-Hastings typically integrates a known probability distribution that can be evaluated to within a constant (e.g. pathwise light transport). In contrast, Stochastic Tomography optimizes for an unknown function: the volume estimate p(~x) being inferred.

Convergence

Convergence of the method on smooth and convex functions can be argued in a similar manner to coordinate descent by quantizing the continuous sample positions s(k) to an arbitrarily fine voxel grid and taking the limit of infinite sample budget and vanishing ε. Under these conditions i) every voxel is sampled arbitrarily many times due to ergodicity, ii) the objective function is strictly non-increasing, and iii) a perturbation of at least one voxel value is a descent direction unless |∇E(p(~x))| ≤ √n εL [100] (where n is the number of voxels and L is the Lipschitz constant of E(p(~x))), which implies convergence to a level determined by the magnitude of ε.

Alternative convergence arguments hold when the function being optimized is non-smooth and separable, but do not hold when the objective is non-separable [150]. Most of the L1 priors discussed in Section 3.3.2 and in Chapter 2 are non-separable. In spite of this, the method generally does well on non-smooth and non-separable objectives in practice, as demonstrated in Section 3.5.

It is possible, however, to obtain a separable problem for many priors by introducing splitting variables in a manner similar to the ADMM example from Chapter 2. This results in a smooth subproblem (that may be solved with Stochastic Tomography) and a set of 1D, separable and non-differentiable minimizations of the splitting variables. The disadvantage of this approach is that the priors must be amenable to splitting, and the splitting variables and Lagrange multipliers can require considerable extra memory. This largely offsets the benefits of using Stochastic Tomography in the first place.

3.4 Experimental Setup and Implementation

The experimental setup used to capture input data for subsequent reconstruction by the Stochastic Tomography algorithm consists of a cylindrical glass beaker (similar to the one used by Trifonov et al. [149]) that is mounted on a rotation stage. The beaker is observed by an array of between 5 and 15 Sony consumer camcorders. The reconstruction volume is contained fully within the beaker, and ray-pixel correspondences are computed using the two-plane approach of Trifonov et al. (see Figure 3.6), as introduced by [117]. It is important to point out that, due to refraction of rays at the air-beaker and beaker-water interfaces, this data cannot be resampled to form orthographic projections. Consequently, methods based on the Fourier slice theorem cannot be used to reconstruct the volume data.

Figure 3.6: Illustration of two-plane and radiometric calibration. Although per-pixel light paths have refraction events at the beaker surface, they are straight once inside the reconstruction volume. Per-pixel correspondences from two planar targets in the reconstruction volume then fully parameterize the ray for each pixel. During calibration, a light source rotates with the domain to consistently illuminate the front plane for each camera.
Correspondence points then allow camera responses to be normalized to a master camera.

During the experiments, tap water was used as the clear ambient fluid, and typically either tap water or isopropyl alcohol mixed with fluorescein-sodium powder (a fluorescent dye) was used as a marker fluid. An exception is one capture where the undissolved dye powder was introduced directly at the water surface of the reconstruction volume.

Illumination

White LED strobe panels were used to illuminate the scene. These allow the cameras to be optically synchronized in order to compensate for rolling shutter artefacts, as in [24]. For slow-moving effects, constant illumination from ultra-violet LEDs was used, which offers higher contrast and reduced stray light but only frame-level synchronization. In both cases, the image formation model consists of dye absorbing incident illumination and isotropically re-emitting it. By neglecting re-absorption of the re-emitted light along the path to the camera, the pure emission image formation model of Equation 3.1 is obtained.

This is a simplification of the true physics, since both volumetric shadowing and distance from the light sources affect the incident illumination at any point within the reconstruction volume. Consequently a unique global mapping between intensity in the reconstructions and dye concentration is not possible.

Radiometric Calibration

To compare measurements between camera views, a straightforward radiometric calibration is performed. This procedure begins by lighting the calibration target planes with a 'calibration headlight' that is rigidly attached to the calibration target (see Figure 3.6). The headlight ensures that lighting conditions are consistent when the target is rotated to face each camera during geometric calibration. An inverse gamma curve is then applied to the captured images and, after designating one camera as the master, an exposure scaling factor is computed. This scaling factor minimizes the luminance discrepancy between a set of corresponding points in the master image and each view. Each frame from each camera is then scaled by these factors. Finally, regions outside the calibrated area are zeroed and a background frame is subtracted to obtain the initial measurements vector, q, for each captured frame.

Implementation

The algorithm was implemented in C++, using the CImg library internally for image manipulations. The VTK visualization library [137] was used for loading and storing 2D and 3D images and point sets, while the Eigen C++ linear algebra library was used for fitting interpolating polynomials to the ray pierce points during geometric calibration. The CalTag library [7] was used to automatically generate correspondence points between self-identifying fiducial tags and the two plane positions used during calibration. Sample datasets, as well as a document describing their format and the calibration procedures, are available [64].
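As a small illustration of the exposure-scaling step in the radiometric calibration described above, the following C++ sketch computes one simple choice of scale, the least-squares factor between luminance samples at corresponding calibration points in a master view and a second view. The least-squares form and the function name are assumptions for illustration, not the calibration code used in the thesis.

    #include <vector>
    #include <cassert>

    // Least-squares exposure scale s minimizing sum_i (master_i - s * view_i)^2
    // over luminance samples taken at corresponding calibration points.
    double exposure_scale(const std::vector<double> &master, const std::vector<double> &view) {
        assert(master.size() == view.size());
        double num = 0.0, den = 0.0;
        for (std::size_t i = 0; i < view.size(); ++i) {
            num += master[i] * view[i];
            den += view[i] * view[i];
        }
        return (den > 0.0) ? num / den : 1.0;   // fall back to unit scale for degenerate input
    }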
3.5 Reconstruction Results

The following sections present results computed using the Stochastic Tomography algorithm for a standard tomography test case as well as for data captured from the experimental setup described in Section 3.4.

3.5.1 2D Synthetic Results

A comparison between the Simultaneous Algebraic Reconstruction Technique (SART) [5] and Stochastic Tomography was performed to evaluate the Stochastic Tomography algorithm. SART was chosen since it applies to general camera models, produces reasonably high-quality reconstructions and forms the basis of some more advanced methods such as [165].

Figure 3.7 shows the results of reconstructing the Shepp-Logan phantom (a standard tomography test case) from 16 orthographic projections on a 256² output grid. This is a challenging task for computed tomography algorithms since the limited number of views causes the data term to be severely under-determined (by a factor of 16 in this case, assuming linearly independent measurements). The parameters of Stochastic Tomography were adjusted to have runtime less than or equal to that of the SART results in the interest of a fair comparison, and the visual hull constraint of [8] was incorporated into the SART algorithm, significantly improving the SART results. The basic, unregularized Stochastic Tomography result shown in Figure 3.7(c) shows artefacts that are structurally similar to, but more pronounced than, the SART result in Figure 3.7(b); SART selects a smoother solution from the null-space of the linear system, which is not surprising given the averaging involved in backprojecting measurement residuals. Even so, the RMS errors with respect to the ground truth are slightly decreased in the Stochastic Tomography results.

Adding regularizers to the Stochastic Tomography algorithm improves the comparison. The L2 prior removes most of the streak artefacts but blurs hard edges (Figure 3.7(d)), as expected. Incorporating the total-variation (TV) prior allows the reconstruction to better preserve these edges while simultaneously improving the fit to the data (Figure 3.7(e)), while the sum-of-absolute-differences (SAD) regularizer (Figure 3.7(f)) performs the best, both in appearance and in error with respect to ground truth.

RMS errors computed with respect to ground truth indicate that, as expected, Stochastic Tomography with the SAD regularizer performs the best, with an RMS error of 0.080 compared to RMS errors of 0.090 and 0.104 for the unregularized Stochastic Tomography and SART results respectively.

Figure 3.7: Synthetic comparison of Stochastic Tomography and SART. Reconstruction of the high-contrast Shepp-Logan phantom (a), a standard medical imaging test case, from 16 orthographic views using SART (b, RMS 0.104) and Stochastic Tomography (c-f) on a 256x256 output grid, with ST results accumulated via bilinear kernel density estimation: (c) unregularized ST, RMS 0.090; (d) ST-L2, RMS 0.085; (e) ST-TV, RMS 0.085; (f) ST-SAD, RMS 0.080. All results are shown on the same scale, with runtimes between 47 and 55 seconds. RMS errors with respect to ground truth are also provided.

The discrepancy between unregularized SART and Stochastic Tomography is somewhat surprising, although it may be related to errors introduced between the polygonal visual hull and the elliptical ground truth. Stochastic Tomography appears to be less sensitive to this issue and does not produce the same artefacts.

Although it is possible to implement regularized versions of the SART algorithm, e.g. as done by Yu & Wang [165], it is worth noting that changing regularizers for these methods requires re-deriving and re-implementing much of the solver. The splitting/proximal methods of Chapter 2 can also be applied. However, applying these to the SAD regularizer in 3D requires 15 times the memory needed for the solution alone.
This quickly becomes prohibitive, requiring over 16 GB of RAM at a resolution of 512³ in double precision.

From these results it can be concluded that Stochastic Tomography is a viable approach for performing regularized tomographic reconstruction. This is further demonstrated by the 3D results presented in the next section.

3.5.2 3D Capture of Mixing Fluids

Several captures were undertaken using the experimental setup of Section 3.4, demonstrating a range of fluid interactions. These include a buoyant jet undergoing laminar-to-turbulent transition, turbulent mixing during pouring, and a thin-featured quasi-stable flow produced by a propagating vortex ring. All reconstructions were performed on a single core of an Intel i7 desktop computer. Sample budgets and timings are reported in the figure captions.

Figure 3.8 shows several timesteps of fluorescein-sodium dropping approximately 1 cm through a thin tube into still water. As the dyed fluid enters the still water, a quasi-stable vortex ring is formed that propagates downwards while slowing and developing a complex petal structure due to a Kelvin-Helmholtz instability induced by shear at the material interface. Stochastic Tomography is able to capture fine-scale, transient features of this flow. The reconstruction of this dataset used 100 million sample mutations with 65000 sampling chains from 16 cameras, regularized with grid-based SAD.

In Figure 3.9 the reconstruction is shown at varying levels of completion to illustrate the placement of positive samples (blue) and negative samples (red). The method begins by placing a large number of positive samples, effectively 'roughing in' the scene. This coarse approximation is then refined by placing a mixture of positive and negative samples to reproduce fine-scale details. Note that this behaviour is not pre-specified; it occurs automatically from using the algorithm in Algorithm 2.

Figure 3.8: Reconstruction of the 'tube' dataset. Left: example captured frame from one of the cameras. Right: reconstructions of an unsteady two-phase flow at different points in time. Note how the Stochastic Tomography algorithm manages to capture the fine-scaled features in this challenging dataset. Reconstruction parameters: SAD prior, 100M sample mutations with 65000 sample chains, 6 min/frame.

Figure 3.10 shows a comparison between one of the input images and a grid-free re-rendering from a similar viewpoint of an isopropyl alcohol and fluorescein-sodium dye mixture rising under buoyancy in still tap water. The dye was injected into the bottom of the reconstruction volume through a syringe. Stochastic Tomography captures much of the fine-scaled detail of the laminar-to-turbulent transition both spatially and over time. Such turbulent details and dynamics are challenging to simulate realistically without careful adjustment and tuning of simulations, but can be captured easily and subsequently integrated into simulations (see Chapter 5). A series of renderings of this data from artificial viewpoints is shown in Figure 3.11.

Figure 3.11 also highlights a shortcoming of the capture setup: optical reflections of the emissive volume can occur on the back surface of the capture volume (glass beaker). These reflections can be spuriously reconstructed by the tomography, where they introduce isolated and erroneous patches of dye, as shown in Figure 3.11(e).

Figure 3.9: Reconstruction progress over time, shown at (a) 10%, (b) 30%, (c) 50%, (d) 70% and (e) 90% of the sample budget.
The previous 10,000 accepted samples are shown as points: blue points represent positive samples and red points denote negative samples. The algorithm initially places a large number of positive samples to roughly reconstruct the scene, then refines this coarse estimate with a mix of positive and negative samples.

Figure 3.10: Comparison of photo and reconstruction for the 'smoke' dataset: 95% isopropyl alcohol rising under buoyancy after injection into still tap water. Camera view (left) and grid-free Stochastic Tomography reconstruction from 15 views (right), re-rendered from a similar viewpoint. Many fine-scaled features of the transition from laminar to turbulent flow are captured by the rendering; however, minor camera misalignments limit the reconstruction resolution. Reconstruction parameters: no prior, 200M sample mutations with 10000 sample chains, 14 min/frame.

Figure 3.12 shows novel viewpoints of a diluted fluorescein-sodium solution being poured into still tap water. Small, high-energy eddies form almost instantly and then diffuse and grow over time. Each frame was reconstructed using the Stochastic Tomography algorithm with 400 million sample mutations per frame and the grid-based SAD regularizer on a 200³ grid.

An extreme example of a reconstruction from just 5 cameras is shown in Figure 3.13. In this capture, fluorescein-sodium dye powder was introduced at the water surface and allowed to mix and dissolve, sinking under gravity. This dataset demonstrates the difference between regularized and unregularized Stochastic Tomography, using a very coarse reconstruction to highlight the differences. The left image of Figure 3.13 shows the result using the image-based SAD regularizer. The right column shows the unregularized reconstruction (top), the grid-based SAD (middle) and the image-based SAD (bottom). The regularized results show a clear improvement in image quality over the unregularized case, with minimal visual difference between the grid-based and image-based priors. This allows high-quality regularized renderings of volume data to be performed on the fly, without storing volume data of any kind.

Figure 3.11: Frames of the 'smoke' dataset. Buoyancy-driven flow of fluorescein-sodium and alcohol solution in water. Reconstruction parameters: no prior, 200M sample mutations with 65000 sampling chains, 14 min/frame.

Figure 3.12: Frames of the 'bloom' dataset. Time-resolved reconstruction of fluorescein-sodium solution being poured into still tap water. Complex vortices and eddies develop, diffuse and grow as the flow progresses. Reconstruction parameters: SAD prior, 400M sample mutations, 10000 sampling chains, 31 min/frame.

Figure 3.13: Comparison of regularization approaches for a deliberately under-sampled reconstruction of dye powder added to the surface of the capture volume from only 5 cameras, using image-based regularization (left). The right column shows a comparison of regularization strategies, from top to bottom: unregularized result, regularized with the volumetric SAD prior, and finally the image-based SAD prior.
The image-based regularization produces rendered results comparable to grid-based regularization but does not require volume data.

3.6 Conclusions

This chapter introduced Stochastic Tomography, a new random-walk algorithm for tomographic reconstruction of 3D volumes. The key advantage of this approach lies in avoiding a fixed spatial discretization, which allows the volume to be represented in a grid-free fashion without a fixed resolution tradeoff. Further advantages are the (optional) ability to introduce nearly arbitrary priors into the reconstruction process with little to no runtime overhead, and the use of image-based priors for on-the-fly rendering. This chapter also introduced a new image-based prior for tomographic reconstruction when only rendered images are desired as output. Using the image-based prior allows volumes to be re-rendered on the fly, directly from projection data, without storing volume data.

Although convergence in the general case of non-smooth and non-separable priors cannot be guaranteed, the method performs quite well in practice. Highly detailed reconstructions can be produced using relatively few cameras while avoiding several drawbacks of common methods from sparse optimization. On the algorithmic side, Stochastic Tomography is intuitively similar to a fixed step-length coordinate-descent method, in that convergence to the neighbourhood of a minimum can be guaranteed for smooth functions by the ability to place a sample anywhere in the domain. This interpretation is strengthened in Chapter 4, where a variation of the algorithm is applied to the image deblurring problem.

This chapter also applied Stochastic Tomography to the detailed 3D imaging of complex fluid mixing behaviours. As mentioned in Section 3.4, the resulting reconstructions are qualitative rather than quantitative, since incident illumination, scattering and shadowing in the volume are not accounted for in the image formation model. Additional sources of error are reflections in the capture domain that pollute the reconstructions. However, as shown in Chapter 5, even qualitative reconstructions are sufficient to yield reasonable estimates of fluid velocities. The combined process of tomographically reconstructing concentration fields and estimating velocity fields results in a fully-passive, volumetric fluid imaging approach that produces estimates of the full fluid state. These estimates can subsequently be used in simulation and applied to real-world problems in fluids.

Chapter 4

Image Deblurring

4.1 Introduction

Chapter 3 introduced the Stochastic Tomography algorithm, a method for solving regularized linear inverse problems based on stochastic random walks. In this chapter, the stochastic random walk algorithm is adapted to solving image deconvolution and deblurring problems. The resulting algorithm, referred to as Stochastic Deconvolution, amounts to a variant of coordinate descent where search directions are chosen by a random walk algorithm that takes advantage of spatial coherence.

By solving the image deblurring problem in this fashion, Stochastic Deconvolution addresses several inherent issues with deconvolution algorithms:

• Ease of Implementation. Both the basic algorithm and the regularized variants are very straightforward to implement, relying on only two simple operations: splatting of the point-spread function (PSF) and point evaluation of the data and regularization terms.

• Regularization Research.
Due to the simplicity of implementing new priors, Stochastic Deconvolution enables research into new regularization terms and priors through rapid iteration. This chapter demonstrates that the method works well for a wide variety of regularization approaches, including smooth, non-smooth & convex, non-smooth & non-convex, discontinuous and even data-dependent priors.

• Boundary Conditions. Blurry images necessarily include information from outside the image frame due to the spatial support of the PSF. Deblurring algorithms must take this information into account to avoid introducing artefacts around image boundaries. Stochastic Deconvolution handles this in a natural framework that can also be applied to saturated regions.

• Non-uniform PSFs. Stochastic Deconvolution naturally handles deblurring problems with spatially varying kernels, such as rotations about the optical axis.

4.2 Stochastic Deconvolution

The derivation of the Stochastic Deconvolution algorithm closely follows that of the Stochastic Tomography algorithm in Chapter 3. Denoting the blur kernel by f, the measured image by q and the estimate of the intrinsic image by p, the image deconvolution problem can be expressed as the minimization in Equation 4.1:

arg min_p (1/2)‖q − f⊗p‖_2^2 + γΓ(p)    (4.1)

To solve Equation 4.1 using the same framework as Stochastic Tomography, the Stochastic Deconvolution algorithm maintains a residual image, r, defined by Equation 4.2, in addition to the solution image, p.

r = q − f⊗p    (4.2)

This residual image can be used to evaluate the data term of Equation 4.1, which allows the minimization to be expressed in terms of the residuals, as shown in Equation 4.3.

E(p) = (1/2)‖r‖_2^2 + γΓ(p)    (4.3)

Like Stochastic Tomography, the Stochastic Deconvolution algorithm makes a sequence of compact updates to the solution that are never changed once applied. As before, the solution is defined recursively by Equation 4.4.

p(k) = p(k−1) + ε(k)∆p(k)    (4.4)

Here ε(k) is a scalar and ∆p(k) is a solution update. Equation 4.4 allows the residual image at the k'th update to be expressed in the form shown in Equation 4.5 by exploiting the fact that convolution is a linear operator.

r(k) = q − f⊗p(k)
     = q − f⊗(p(k−1) + ε(k)∆p(k))
     = q − f⊗p(k−1) − f⊗ε(k)∆p(k)
     = r(k−1) − f⊗ε(k)∆p(k)    (4.5)

When an individual component of p corresponding to pixel ~x(k) is modified by adding or subtracting ε(k) units, the resulting solution update can be expressed as ±ε(k)δ(~x(k)), where δ(~x(k)) is the 2D discrete Dirac function for the pixel ~x(k). The resulting change to the residual image is ∓ε(k)f⊗δ(~x(k)), meaning that only pixels that fall within the PSF centred at ~x(k) are affected by the update. For motion blur or small defocus kernels, this means that relatively few residuals are modified by the solution update.

This small support of individual updates allows the same approach as was used in the Stochastic Tomography algorithm of Chapter 3 to be applied to deconvolution problems, since the effect of local updates to the solution can be computed cheaply. Furthermore, since every pixel in a local region is usually blurred by the same or a similar blur kernel, there is likely to be strong spatial correlation between residuals. This means that the local exploration property of the random walk algorithm is likely to find profitable locations to modify.
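The following C++ sketch illustrates how the data-term portion of such an update can be evaluated by visiting only the residual pixels under the PSF footprint. The image type and function names are hypothetical and simplified: residuals outside the image are simply skipped here, whereas the actual method uses the padding scheme of Section 4.2.3.

    #include <vector>

    // Minimal single-channel image used only for this illustration.
    struct Image {
        int w, h;
        std::vector<double> px;
        Image(int w_, int h_) : w(w_), h(h_), px(std::size_t(w_) * h_, 0.0) {}
        double &at(int x, int y) { return px[std::size_t(y) * w + x]; }
    };

    // Change in the data term (Equation 4.6 with the prior omitted) for depositing +eps at (x, y),
    // evaluated only over residual pixels covered by the PSF centred at (x, y).
    double delta_data_term(Image &residual, const Image &psf, int x, int y, double eps) {
        double dE = 0.0;
        for (int j = 0; j < psf.h; ++j) {
            for (int i = 0; i < psf.w; ++i) {
                int u = x + i - psf.w / 2, v = y + j - psf.h / 2;
                if (u < 0 || u >= residual.w || v < 0 || v >= residual.h) continue;
                double r_old = residual.at(u, v);
                double r_new = r_old - eps * psf.px[std::size_t(j) * psf.w + i]; // Equation 4.5, restricted to the footprint
                dE += 0.5 * r_old * r_old - 0.5 * r_new * r_new;                 // positive when the residual shrinks
            }
        }
        return dE;
    }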
This behaviour is illustrated in Figure 4.1, where the sampling chains focus effort around edges where large residuals are strongly correlated.

Figure 4.1: Algorithm progress over time, showing the stochastic random walks that form the basis of Stochastic Deconvolution. Green points represent energy added to the reconstruction, while blue points show energy subtracted. The algorithm automatically focuses sampling effort in regions where fast improvements to the objective function are obtained.

The random walk algorithm generates samples s(k) = {~x(k), ε(k)} for which the change in the data-term portion of Equation 4.3 can be evaluated cheaply by combining Equations 4.3-4.5 to get Equation 4.6. Implementations of Equation 4.6 can take advantage of the fact that most components of the residuals, r (those outside of the PSF centred at ~x(k)), do not change as a result of s(k). Discussion of the regularizer is deferred until Section 4.2.2.

∆E(s(k)) = (1/2)‖r(k−1)‖_2^2 − (1/2)‖r(k)‖_2^2 + γΓ(p(k−1)) − γΓ(p(k))
         = (1/2)‖r(k−1)‖_2^2 − (1/2)‖r(k−1) − f⊗ε(k)δ(~x(k))‖_2^2 + γΓ(p(k−1)) − γΓ(p(k))    (4.6)

Although very similar, the Stochastic Deconvolution algorithm incorporates a few modifications to the original Stochastic Tomography algorithm (Algorithm 2). The modified algorithm is shown in Algorithms 3 & 4.

Algorithm 3 Stochastic Deconvolution
1: procedure StochasticDeconvolutionSampling(M, N, ε*, ε_min, τ)
2:   m ← 0
3:   ε ← ε*
4:   while m < M do
5:     a ← StochasticDeconvolutionSamplingChain(ε, N)
6:     if a < a_min and τε ≥ ε_min then
7:       ε ← τε
8:     end if
9:     m ← m + 1
10:  end while
11: end procedure

A key difference between the original and modified algorithms is that Stochastic Deconvolution uses an adaptively sized ε rather than the fixed size used in the original Stochastic Tomography algorithm. This improves the acceptance rate of samples. A second difference is that the new algorithm records any sample that improves the objective function, but only changes the base position for drawing the next mutation when the original acceptance conditions are met. The new algorithm also resets the chain to a random point within the domain with a probability of 1%, which helps to ensure ergodicity of the sampling chains and more uniform exploration of the domain when relatively few samples are used.

Algorithm 4 Stochastic Deconvolution Sampling Chain
1: procedure StochasticDeconvolutionSamplingChain(ε, N)
2:   // generate an initial sample
3:   s(0) ← {uniform-random-sample(), ε}
4:   k ← 1, n ← 0
5:   while k < N do
6:     // mutate the current sample or reset with 1% probability
7:     if uniform-random() ≤ 0.99 then
8:       s(k) ← mutate(s(k−1))
9:     else
10:      s(k) ← {uniform-random-sample(), ε}
11:    end if
12:    // record the sample if ∆E > 0
13:    if ∆E(s(k)) > 0 then
14:      record(s(k))
15:      n ← n + 1
16:    end if
17:    // probabilistically accept s(k)
18:    a ← min(1.0, ∆E(s(k))/∆E(s(k−1)))
19:    if ∆E(s(k−1)) > 0 and uniform-random() > a then
20:      // sample was rejected, continue from previous
21:      s(k) ← s(k−1)
22:    end if
23:    // update iteration counter
24:    k ← k + 1
25:  end while
26:  // return the acceptance rate
27:  return n/N
28: end procedure

These three changes improve the sample acceptance rate from 5-10% to 30-40% on the examples in this chapter. Recording any sample that improves the objective function is based on the observation that some samples yield improvements but were discarded by the original algorithm.

The new algorithm begins with a relatively high value of ε = ε* and then reduces ε as the sample acceptance rate falls below a threshold, a_min = 7.5%.
This allows the algorithm to make rapid initial progress but make smaller adjustments later, as samples become harder to place. Setting a floor, ε_min, on the step size ε ensures that the convergence arguments for ergodic mutation strategies in the limit of sample count still hold, since a series of unlucky sampling chains could otherwise decrease ε too rapidly.

4.2.1 Mutation Strategy

The mutation strategy used by Stochastic Deconvolution mirrors that used by Stochastic Tomography by drawing the next sample from a Gaussian distribution centred on the most recently accepted sample, i.e. ~x(k) = ~x(k−1) + [N(0, σ), N(0, σ)]^T, where N(µ, σ) is a 1D Gaussian centred at µ with standard deviation σ. The standard deviation is chosen to be 50-100% of the PSF width, with the specific choice not critical.

When generating a mutated sample it is important to ensure that the sampler does not truncate the Gaussian: every point in the domain must be reachable with finite probability from every other point. This condition ensures that the mutation strategy is ergodic and does not miss portions of the domain.

4.2.2 Regularization

Regularization is implemented in a similar manner to Stochastic Tomography: by evaluating the change in the prior Γ as a result of sample s(k) (Equation 4.6). Sparsity of the prior stencil can be exploited by noting that many priors have compact support involving only a few neighbouring pixels. In this case, computing the change in value is inexpensive, since the regularizer only needs to be evaluated for pixels whose stencils include the modified pixel.

Several priors were implemented in this work to show the flexibility of the method with respect to different regularization styles. These are discussed in the remainder of this section.

Total Variation

As discussed in Chapter 2, total variation (TV) priors enforce the assumption that the intrinsic image is piecewise constant, or equivalently, that the image gradients are sparse. TV priors are among the most popular priors for linear inverse problems since they result in the sparsest gradient distributions possible within a convex optimization problem. Three variants of the TV prior were implemented in this chapter. Each uses one of the following energies, in which ∇~x p(~xj) denotes the (discrete) gradient computed at pixel ~xj.

Γ_TV(p) = Σ_{~xj ∈ Ω} ‖∇~x p(~xj)‖_2    (4.7)

Γ_ATV(p) = Σ_{~xj ∈ Ω} ‖∇~x p(~xj)‖_1    (4.8)

Γ_MTV(p) = Σ_{~xj ∈ Ω} sqrt( Σ_{i=1}^{3} ‖∇~x p_i(~xj)‖_2^2 )    (4.9)

Equation 4.7 corresponds to the standard TV prior. Equation 4.8 is an anisotropic variant that has the advantage of fitting the L1 prior definition Γ(p) = ‖Gp‖_1 from Chapter 2 and so can be solved with standard L1 solvers. Equation 4.9 is an adaptation to colour images from [164], where the sum over i corresponds to summation over the image channels. This colour image prior introduces an L1 penalty on the length of the per-pixel vector whose components are the gradient magnitudes of the individual channels. This has the effect of coupling the image channels together, as opposed to simply applying TV independently to each channel.

Sparse 1st and 2nd Order Derivatives

Also implemented was a version of the sparse prior introduced by Levin et al. [107], which includes priors on the image first and second derivatives using a fractional 0.8 'norm' to enforce a heavy-tailed distribution of image gradients and curvatures. Details can be found in [107].
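As an illustration of the local evaluation described in Section 4.2.2, the following C++ sketch computes the change in the anisotropic TV prior of Equation 4.8 when a single pixel is modified; only the stencils that reference the modified pixel need to be re-evaluated. The image type, forward-difference discretization and boundary treatment are illustrative assumptions rather than the thesis implementation.

    #include <vector>
    #include <cmath>

    // Minimal single-channel image used only for this illustration.
    struct Img {
        int w, h;
        std::vector<double> px;
        Img(int w_, int h_) : w(w_), h(h_), px(std::size_t(w_) * h_, 0.0) {}
        double at(int x, int y) const { return px[std::size_t(y) * w + x]; }
        double &at(int x, int y) { return px[std::size_t(y) * w + x]; }
    };

    // Anisotropic TV (Equation 4.8) at pixel (x, y) using forward differences,
    // with differences across the image border simply dropped for brevity.
    double atv_at(const Img &p, int x, int y) {
        double dx = (x + 1 < p.w) ? p.at(x + 1, y) - p.at(x, y) : 0.0;
        double dy = (y + 1 < p.h) ? p.at(x, y + 1) - p.at(x, y) : 0.0;
        return std::fabs(dx) + std::fabs(dy);
    }

    // Change in the ATV prior when eps is added at (x, y): only the stencils that reference
    // (x, y) -- the pixel itself and its left/upper neighbours -- are affected.
    double delta_atv(Img &p, int x, int y, double eps) {
        const int cx[3] = {x, x - 1, x}, cy[3] = {y, y, y - 1};
        double before = 0.0, after = 0.0;
        for (int s = 0; s < 3; ++s)
            if (cx[s] >= 0 && cy[s] >= 0) before += atv_at(p, cx[s], cy[s]);
        p.at(x, y) += eps;                  // tentative update
        for (int s = 0; s < 3; ++s)
            if (cx[s] >= 0 && cy[s] >= 0) after += atv_at(p, cx[s], cy[s]);
        p.at(x, y) -= eps;                  // undo until the sample is accepted
        return before - after;              // > 0 when the prior energy decreases
    }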
Gamma-corrected Sum of Absolute Differences

A new prior introduced in this chapter is a gamma-corrected prior, which originates from the observation that the forward model for deconvolution must be expressed in linear light, but the majority of images are viewed after applying gamma correction (typically raising linear pixel values to an exponent of ≈ 1/2.2). Gamma correction amplifies intensity changes in dark regions of the image and attenuates those in bright regions. With traditional priors (such as the TV family), this makes artefacts in dark regions more noticeable than in light regions. The result is lower image quality in dark regions. Increasing the prior weight improves dark regions but over-regularizes lighter portions of the image.

This reduction in quality can be mitigated by performing regularization on gamma-corrected pixel values in order to equalize the effect of the regularizer at all brightness levels. To achieve this, a gamma curve is applied to the pixel values before evaluating a sum-of-absolute-differences prior (as was found to be effective in Chapter 3) over a 3x3 window, W, centred at ~xj, as shown in Equation 4.10.

Γ(p) = Σ_{~xj ∈ Ω} Σ_{i ∈ W(~xj)} | p(~xi)^(1/2.2) − p(~xj)^(1/2.2) |    (4.10)

This is a non-smooth and non-convex prior for which it is non-trivial to develop a custom solver, and which introduces a significant memory penalty if the splitting approaches of Chapter 2 are applied. However, it is easily handled by the Stochastic Deconvolution framework.

Discontinuous and Data-dependent Regularizers

The Stochastic Deconvolution approach also generalizes to data-dependent and discontinuous regularizers. An example of this is shown in Section 4.4.

4.2.3 Boundary Conditions & Saturation

Capturing an out-of-focus image necessarily results in pixels near the image boundary receiving contributions from outside the nominal image boundary due to the effect of the point-spread function. Reconstructing boundary pixels is consequently problematic, since they depend on content from outside the image, where there are no measurements.

Stochastic Deconvolution handles such cases by padding the image by the PSF width and building a mask that indicates which pixels in the padded image correspond to captured image pixels versus padding pixels. When performing the sampling process, samples are permitted to be placed anywhere within the image interior or padding regions, but contributions to ∆E(s(k)) from the data term are only included when the mask indicates they belong to interior pixels. Consequently, in the padding region, ∆E(s(k)) is simply the change in the prior energy.

This process serves to disable the data term in regions without valid captured measurements, but includes pixels in the padding regions in the reconstruction process, as illustrated in Figure 4.2. Consequently, the algorithm is able to adjust the content of padding regions to improve the reconstruction in the interior pixels, synthesizing plausible content that improves the reconstruction quality.

Figure 4.2: Depiction of boundary and saturation handling. Boundaries are handled by padding the image by the PSF width and masking padding pixels (shaded border) from the objective function data term. This allows content to be optimized in the padding regions to improve image quality in interior regions. The same principle is applied to saturated pixels (interior shaded region) to prevent ringing.
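A sketch of how such a mask can gate the data term during sample evaluation is given below. As with the earlier fragments, the data layout and names are illustrative assumptions, not the thesis implementation.

    #include <vector>

    // Padded residual image plus a mask marking which pixels carry valid measurements
    // (interior, unsaturated pixels). Illustration only.
    struct MaskedResidual {
        int w, h;
        std::vector<double> r;             // residuals over the padded image
        std::vector<unsigned char> valid;  // 1 = measured pixel, 0 = padding or saturated
    };

    // Data-term change for depositing eps at (x, y); masked pixels contribute nothing,
    // so samples in the padding (or under saturation) are scored by the prior alone.
    double delta_data_masked(MaskedResidual &img, const std::vector<double> &psf,
                             int pw, int ph, int x, int y, double eps) {
        double dE = 0.0;
        for (int j = 0; j < ph; ++j)
            for (int i = 0; i < pw; ++i) {
                int u = x + i - pw / 2, v = y + j - ph / 2;
                if (u < 0 || u >= img.w || v < 0 || v >= img.h) continue;
                if (!img.valid[std::size_t(v) * img.w + u]) continue;  // masked: no data-term contribution
                double r_old = img.r[std::size_t(v) * img.w + u];
                double r_new = r_old - eps * psf[std::size_t(j) * pw + i];
                dE += 0.5 * (r_old * r_old - r_new * r_new);
            }
        return dE;
    }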
The same process can be applied to saturated pixels, which have inherently untrustworthy measurements. Simply excluding saturated pixels from the mask causes the algorithm to plausibly inpaint saturated regions with content that improves the reconstruction in unsaturated regions.

Boundary conditions and saturation are consequently handled quite naturally by the algorithm. This is unlike methods expressed in Fourier space, which cannot perform this type of data masking due to the requirement that all operators be expressible as convolutions.

4.2.4 Non-Uniform Blur Kernels

Stochastic Deconvolution also handles non-uniform blur kernels in a natural manner. Since the method is not based upon convolution operations in Fourier space, it is straightforward to evaluate a different blur kernel for each pixel. In contrast, Fourier methods must approximate spatially varying PSFs by breaking the image into tiles, where each tile is assumed to have a constant blur kernel. Spatially varying blur kernels can be acquired using either the method of Hirsch et al. [78] or the inertial measurement units that are built into many cellphones [89].

4.3 Implementation

The Stochastic Deconvolution algorithm was implemented in C++ using the CImg image processing library [1] for image input and output as well as for some internal manipulations of images. A simplified and commented sample implementation is available from the project website [65] that demonstrates how to perform residual and image updates as well as how to evaluate the total variation prior.

4.4 Results

This section compares results computed using different regularization strategies and objective functions, and compares them to existing methods. Runtimes vary based on the size and sparsity of the PSF as well as the image size, but are typically only a few minutes. As an example, a 0.7 megapixel monochrome image with a 21x21 pixel PSF took 126 seconds using an unoptimized implementation.

Comparison to Existing Methods

Figures 4.3 and 4.4 show comparisons of Stochastic Deconvolution with the Coded Exposure Photography method of Raskar et al. [128]. Stochastic Deconvolution produces results with less noise and fewer chromatic artefacts by introducing priors into the deconvolution problem, although this is expected since the original method is unregularized. Enlarged views of the train image help to highlight the behaviour of the different priors, including the total variation (TV) prior (Figure 4.3(c)), the sparse image derivatives prior from Levin et al. [107] (Figure 4.3(d)), as well as the gamma prior from Section 4.2.2 (Figure 4.3(e)).

Figure 4.3: Comparison of the 'train' image with Raskar et al. vs. Stochastic Deconvolution: (a) SD, gamma prior, γ = 0.001; (b) Raskar et al.; (c) SD, TV prior, γ = 0.00025; (d) SD, Levin prior, γ = 0.00025; (e) SD, gamma prior, γ = 0.001. Incorporation of a regularizer significantly reduces the noise in the reconstructed image while preserving image detail. Stochastic Deconvolution uses 100×N_pixels samples and σ = 32.

All three priors reduce noise and chromatic artefacts; however, the two non-convex priors shown in Figures 4.3(d) and 4.3(e) produce the smoothest results. The gamma prior accomplishes its intended purpose of reducing artefacts in dark regions.
This highlights the flexibility of the Stochastic Deconvolution framework, where it is trivial to experiment with new priors, unlike previous methods that must be customized for each prior.

Figure 4.4 shows a comparison between Raskar et al. and Stochastic Deconvolution for the white car image. This comparison also uses the gamma prior, resulting in reduced noise in dark regions of the image such as the wheels and windows while also improving legibility of the text on the cab, demonstrating that the gamma prior is effective for improving image quality and details.

Figure 4.4: Comparison of the 'white car' image with Raskar et al. (top) and Stochastic Deconvolution with the gamma prior, γ = 0.0015 (bottom). The addition of a prior helps to suppress noise and chromatic artefacts present in the original results while improving legibility of the text. Stochastic Deconvolution uses 100×N_pixels samples and σ = 32.

Figure 4.5 shows a comparison between Fergus et al. [58] and Stochastic Deconvolution. Stochastic Deconvolution produces sharper results with less ringing and is also able to reconstruct the image all the way to the boundary.

Figure 4.6 shows a comparison with the relatively recent method of Xu and Jia [161], using Stochastic Deconvolution with the prior of Levin et al. The results are very comparable even on this challenging dataset: both methods show minor artefacts throughout the image, but the results are very similar in terms of overall quality.

Figure 4.7 shows the effects of the boundary condition handling for inpainting plausible content in boundary regions, including additional windows and staircase details. This content has been synthesized by the algorithm using information about the padding region that has been encoded into interior image pixels, as well as by the prior.

Empirical Convergence

As a variant of coordinate descent, Stochastic Deconvolution has no guarantee of convergence for general non-smooth objective functions. However, empirical tests were performed using the anisotropic TV prior, and the attained objective function values were compared to the provably convergent method of Chambolle & Pock [37]. For the blurred image in Figure 4.8, the objective value computed by Stochastic Deconvolution after 300×N_pixels mutations was 26.42, while the objective value for the primal-dual method was 26.69. We attribute the minor discrepancy to differences in boundary handling and termination criteria between the two methods. The objective function history is also shown in Figure 4.8, showing a fast initial convergence rate that gradually flattens. However, visual convergence is very quick in practice: by iteration 50 the grey values are being adjusted by only 0.08% of the maximum range and are visually indistinguishable from the results at 300 iterations without careful, pixel-level examination.

Figure 4.5: Comparison of the 'fountain' image with Fergus et al. (top). Stochastic Deconvolution with the gamma prior, γ = 0.003 (bottom), shows substantially reduced ringing as well as much-improved handling of image boundaries due to the use of the Stochastic Deconvolution boundary condition. Stochastic Deconvolution uses 100×N_pixels samples and σ = 32.

Figure 4.6: Comparison of the 'roma' image with Xu and Jia (using kernels estimated by Xu and Jia): (a) input, (b) Xu and Jia, (c) the method of Levin et al., (d) SD with the Levin prior, γ = 0.0005. Stochastic Deconvolution uses 100×N_pixels samples and σ = 32.
Figure 4.7: Boundary and saturation results: (a) boundary inpainting, (b) details, (c) the method of Levin et al., (d) SD with the Levin prior, γ = 0.0001. Top row: inpainted details from the Stochastic Deconvolution boundary condition; windows are added to a building on the boundary (red outline) and staircase details outside the image are introduced (yellow outline). Zoom in to the top-left image for additional features. Bottom row: the method of Levin et al. produces ringing for highly saturated pixels, but masking these regions from the reconstruction allows Stochastic Deconvolution to produce much smaller artefacts.

Figure 4.8: Convergence history and sample histogram of Stochastic Deconvolution down to ε < 4×10⁻⁹ for the anisotropic TV prior with weight γ = 10⁻³: (a) blurred input, (b) computed solution, (c) convergence history, (d) sample histogram. Each Stochastic Deconvolution iteration has approximately equal computational cost to one gradient descent step using image-space convolutions, but is able to focus sampling effort near details, as illustrated by the sampling histogram.

Defocus Blur and Lens Aberrations

Stochastic Deconvolution can also be applied to remove defocus blur and lens aberrations in images taken with standard SLR cameras. Results comparing Stochastic Deconvolution using the Levin prior to the method of Levin et al. are shown in Figure 4.9. The results are very similar due to the use of the same prior.

Figure 4.9: Deconvolution of defocus blur. Comparison of the method of Levin et al. with Stochastic Deconvolution (Levin prior, γ = 10⁻³) for defocus blur captured using a standard SLR, 100×N_pixels samples.

Figure 4.10 shows a colour image blurred by a synthetic, wavelength-dependent PSF. Deblurring using the MTV prior results in a sharper image with reduced chromatic artefacts. Optimization for such priors has been the focus of several papers [164, 19], but they are easily and quickly implemented in the Stochastic Deconvolution framework.

Figure 4.10: Deconvolution of colour images. Comparison of per-channel TV (top) with the multichannel MTV prior (bottom) for a blur kernel with chromatic aberrations. Image sharpness is improved and colour artefacts are reduced around the tree branches.

Spatially Varying PSFs

Figure 4.11 shows a synthetic example of deblurring a strong, spatially varying motion blur with rotation about the optical axis. Existing methods typically must break the input image into tiles to approximate such blurs, but they can be expressed directly in Stochastic Deconvolution.

Figure 4.11: Deblurring with a spatially varying kernel. Left: an image blurred by a simulated spatially varying PSF consisting of strong motion blur and rotation about the optical axis (centre), and the corresponding result after deblurring with Stochastic Deconvolution (right). Stochastic Deconvolution naturally adapts to spatially varying PSFs without resorting to patch-based approximations.

Data Dependent Regularizers

Figure 4.12 shows an example of a simple, data-dependent regularizer. An image known to consist of only five distinct but unknown colours is blurred and corrupted with noise.
Deblurring using the standard TV regularizer gives a best PSNR of 31.91 dB among all prior weights tested; however, by periodically clustering the image colours and adding the L1 distance to the nearest cluster, this can be improved by 0.7 dB while simultaneously decreasing the weight on the TV prior by an order of magnitude.

Figure 4.12: Illustration of a data-dependent prior, comparing TV deblurring (b, PSNR 32.0 dB) against a simple data-dependent regularizer that uses a TV prior plus the minimum distance to one of five RGB clusters computed by K-means (c, PSNR 32.7 dB), for the blurred input (a). The data-dependent method improves the reconstruction by 0.7 dB over all parameter values.

Although this is something of a contrived example, many applications can exploit domain-specific information, for example, known image grey values for tissues in magnetic resonance imaging (MRI) at specific machine settings. Exploiting this knowledge can reduce reliance on heuristics such as gradient sparsity and yield improved reconstructions. However, such discontinuous priors based on discrete choices can be difficult to implement in conventional solvers based on gradients.
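The following C++ fragment sketches the per-pixel term of such a data-dependent prior: the L1 distance from a pixel's colour to the nearest of K cluster centres, which are assumed to be supplied by an external K-means step run periodically on the current solution. It is an illustration of the idea rather than the code used to produce Figure 4.12.

    #include <array>
    #include <vector>
    #include <cmath>
    #include <limits>

    // Per-pixel energy of a simple data-dependent prior: L1 distance from an RGB value
    // to the nearest of K cluster centres. Illustration only.
    double cluster_distance(const std::array<double, 3> &rgb,
                            const std::vector<std::array<double, 3>> &centres) {
        double best = std::numeric_limits<double>::max();
        for (const auto &c : centres) {
            double d = std::fabs(rgb[0] - c[0]) + std::fabs(rgb[1] - c[1]) + std::fabs(rgb[2] - c[2]);
            if (d < best) best = d;
        }
        return best;   // added, with a weight, to the TV energy at this pixel
    }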
4.5 Conclusions

This chapter has presented Stochastic Deconvolution, a general-purpose method for deconvolution and deblurring based on stochastic random walks. Stochastic Deconvolution is straightforward to implement, easily incorporates state-of-the-art priors and produces high quality results.

The performance of the method is comparable to methods such as that of Levin et al. [107], but not to methods that operate in the Fourier domain or other highly optimized solvers. However, the decrease in performance is balanced by the ability to trivially incorporate nearly arbitrary priors, to use spatially varying blur kernels and to correctly handle image boundaries and saturation.

Stochastic Deconvolution consequently serves as an excellent test-bed for new priors. Implementing custom regularizers in other methods often means deriving an entirely new solver, but many priors can be implemented and first results obtained in a matter of minutes using Stochastic Deconvolution. This makes Stochastic Deconvolution an excellent tool for experimenting with new regularization strategies.

This has been demonstrated by the results in this chapter, in which two new priors were developed to address practical problems in deblurring. The first, the gamma-corrected prior, addresses the issue of dark portions of images being over-regularized by performing regularization on gamma-corrected pixel values. The second is the data-dependent prior, which serves as an example of how detailed assumptions about scene content can be incorporated into deconvolution problems. This chapter and Chapter 3 consequently serve to demonstrate that the stochastic random walk algorithm is an effective tool for solving regularized inverse problems that easily adapts to a wide variety of priors.

The next chapter returns to the problem of fluid scanning introduced in Chapter 3 by investigating the problem of estimating fluid velocities from scanned concentration fields. As in Chapter 3 and this chapter, regularization is key to obtaining good velocity estimates. However, Chapter 5 differs by introducing forward simulation as a regularizer for this challenging problem and by exploiting an equivalence between pressure correction in fluid simulation and proximal operators in convex optimization. This equivalence allows many imaging inverse problems, including deconvolution, to be applied in a fluids context.

Chapter 5

Fluid Velocity Estimation

5.1 Introduction

This chapter returns to the problem of passive fluid imaging. Chapter 3 presented a hardware setup and reconstruction algorithm that allows volumetric dye concentration fields to be tomographically reconstructed from multi-view video of mixing fluids. This is a powerful technique that enables volumetric fluid imaging in a passive manner; however, uses of the data are limited since velocity information is not captured. Having only part of the fluid state means that captures can be inspected, re-rendered from alternate viewpoints and replayed, but not modified.

Unfortunately, obtaining the full fluid state by direct scanning is extremely challenging except in a few, quite restrictive, cases such as [40, 62]. At metre scales, velocity cannot be directly imaged but must be inferred from the motion of some visible quantity. This has been exploited in, for example, particle image velocimetry (PIV) [3], which carefully seeds the flow with particles that can be individually or collectively tracked in illuminated sections. This can provide very accurate velocity measurements, but only within the illuminated section. Volumetric extensions of PIV, e.g. [56], allow volumetric imaging based on tomographically reconstructing the seeding particles; however, the out-of-plane dimension is generally quite small due to the need to resolve individual particles. This limits domain thickness, seeding density, reconstruction resolution, or all three.

Instead, this chapter investigates inferring the velocity fields of fluids from passive observation of the motion of reconstructed dye concentration fields, i.e. the fields reconstructed in Chapter 3. This is related to the velocity estimation problem introduced in Section 2.1.3; however, adapting optical flow methods to fluids is challenging due to the physical constraints that must be imposed: fluids are expected to satisfy the Navier-Stokes equations and additionally, for the incompressible flows considered here, to have velocity fields that are divergence free.

The key finding of this chapter is that the pressure solve employed by many fluid simulators can be interpreted as a proximal operator for membership in the set of divergence-free flows. This allows the problem of estimating divergence-free velocities to be expressed very efficiently in a proximal framework. The Navier-Stokes equations are also introduced as an implicit prior on the reconstruction via a predictor-corrector approach that looks for low-energy corrections to velocity fields predicted by forward simulation. The velocity fields estimated by this approach allow existing simulation techniques such as up-resing (super-resolution) to be applied to captured flows, among other applications.

The resulting algorithm closely resembles the operator splitting approach to time-stepping that is commonly used in fluid solvers to advance convective, diffusive and body forces separately. This viewpoint interprets the low-energy velocity corrections computed by velocity estimation as body forces integrated over a simulation timestep. By adopting this perspective, it is possible to look for other forcing terms that achieve a desired effect. One example is demonstrated in the context of guiding fluids, where a body force field is estimated that matches the low spatial frequencies of a simulation velocity field to an input guidance field.
This problem can be expressed inthe proximal framework and demonstrates meaningful integration of inverse-problems into simulation as well as of captured data into simulation.Based on these results, the specific contributions of this chapter are:• establishing the equivalence between the pressure-solves used in manyfluid simulations and proximal operators used for constrained and reg-ularized inverse problems.• a practical, physically constrained, multi-scale tracking method forconcentration fields that reconstructs plausible and temporally coher-ent velocity fields• re-simulation of the captured fluids for adding details or performingdomain modifications and• initial results for solving other mixed forward and inverse problems influids using the same proximal operator framework.It is likely that many more such applications can be developed basedupon the same methodology of applying powerful methods from convex op-timization to challenging fluids problems.895.2. Overview of Incompressible Fluid Simulation5.2 Overview of Incompressible Fluid SimulationIncompressible fluid flow can be modelled by the incompressible Navier-Stokes equations (Equations 5.1 & 5.2). These equations describe the time-evolution of the fluid velocity ~u = [u(~x), v(~x), w(~x)] as a function of itself,pressure P , mass density ρ, viscous stresses induced by the viscous stresstensor τ as well as external body forces ~f , all subject to the condition thatthe flow be mass-conserving (divergence-free).∇ · ~u = 0 (5.1)~ut + ~u · ∇~u+∇P/ρ = (∇ · τ)/ρ+ ~f (5.2)In a general flow, the transport of an observable scalar field φ based on avelocity field ~u is given by the scalar transport equation (Equation 5.3).φt = −∇ · (~uφ)− α∇2φ+ s (5.3)Here φ refers to the observable concentrations of a marker fluid such as dyeor smoke with approximately equal mass density to the ambient fluid, α isthe diffusivity and s is a source (production) term. In the common case ofnegligible diffusivity over short time-scales (α ≈ 0), no production (s = 0)and incompressibility (∇ · ~u = 0), this equation reduces to Equation 5.4,which models pure advection of the quantity φ by the velocity field ~u.φt + ~u · ∇φ = 0 (5.4)Equation 5.4 is simply a continuous-time version of the partial differentialequation form of optical flow under the common brightness constancy con-straint (see Section 2.1.3). This suggests that this form of optical flow canserve as a forward model for the transport portion of fluid velocity esti-mation. It is also interesting to note that although brightness constancyis an assumption that is violated in many optical flow applications due toshadowing, changing illumination or non-Lambertian reflectance functions,it holds under much milder assumptions in fluid dynamics for a wide rangeof flow cases.Equations 5.2 & 5.4, when combined with the incompressibility con-straint of Equation 5.1, describe the time-evolution of a non-diffusive incom-pressible flow transporting a marker fluid of equal mass density. A challengein fluid simulation is how to advance these equations in time, since althoughthere are time-evolution equations for velocity ~u and scalar concentration φ,there is no time-evolution equation for pressure P .905.3. Pressure Projection as a Proximal OperatorThe pressure field is interesting in incompressible fluid dynamics sinceits main purpose is to enforce incompressibility. 
This can be seen via theHelmholtz decomposition, which expresses an arbitrary vector field as thesum of a divergence-free component (expressed as the curl of a vector po-tential ~a) and a curl-free component (expressed as the gradient of a scalarpotential P ), i.e.. ~u = −∇P +∇× ~a. The vector identities ∇× (∇P ) = 0and ∇ · (∇ × ~a) = 0 show that these components are orthogonal. Thismeans that the pressure gradient term in Equation 5.2 can only influencedivergent flow components introduced by the other terms. This has lead tothe widespread use of pressure projection methods for fluid simulation.Pressure projection advances a simulation forward one timestep (∆t)using all terms from Equations 5.2 except the pressure gradient term toproduce an intermediate velocity field ~u∗. The resulting velocity fields willgenerally not satisfy the incompressibility constraints so a correction stepcomputes the pressure field that exactly counteracts the introduced diver-gence. The gradient of this pressure field is then used to correct the inter-mediate velocity field to satisfy Equation 5.1 and affects only the divergentcomponent of velocity. These steps are typically implemented as shown inEquation 5.5 & 5.6:P = (∇2)−1(∇ · ~u∗) (5.5)~u = ~u∗ −∆t∇P/ρ (5.6)5.3 Pressure Projection as a Proximal OperatorApplying the proximal methods of Section 2.3.2 to inverse problems in fluidsrequires a proximal operator for the constraint that the computed velocityfields be divergence free. This proximal operator can be expressed in termsof a hard constraint g(~u) which is a discontinuous, extended-value penaltyas defined in Equation 5.7.g(~u) ={0, ∇ · ~u = 0∞, otherwise(5.7)The proximal operator for membership in the set of divergence-free flows isthen defined by Equation 5.8.proxλg(~u∗) = arg min~ug(~u) +12λ‖~u− ~u∗‖22= project(~u∗) (5.8)915.4. Divergence-free Optical Flow using ADMMThe unique minimum of Equation 5.8 is the closest incompressible velocityto the input argument ~u∗, as measured by change in kinetic energy. This isequivalent to the pressure projection scheme described in Section 5.2 sincethe correction applied exactly cancels the divergent portions of the flowwithout affecting the divergence-free portions. Since the divergence-freeand curl-free components of the Helmholtz decomposition are orthogonalthe update has minimal 2-norm and consequently minimizes Equation 5.8.The result is that pressure-projection implements the proximal operator formembership in the set of divergence-free flows.This is the key observation of this chapter that enables all of the sub-sequent applications. It is important for several reasons. The first is thatit allows any convex function of flow velocities to be constrained by thedivergence-free condition and solved using the proximal methods of Chap-ter 2, without resulting in a monolithic constrained problem. The second isthat pressure projection requires solving an N ×N Poisson problem (whereN is the number of voxels) rather than the original problem subject to anN × 3N constraint matrix. This is beneficial since the problem size is re-duced and because the Poisson problem can be solved very quickly using,for example, multigrid or spectral methods.5.4 Divergence-free Optical Flow using ADMMWith the equivalence between pressure projection and the proximal operatorfor divergence-free flow established, it is possible to lay the groundwork forthe fluid tracking algorithm. The fluid tracking algorithm is conceptuallysplit into two components. 
The first component estimates divergence-freevelocity fields based on a PDE formulation of optical flow. Divergence-free or divergence-regularized flows have been estimated in the past usinga variety of methods, including div-curl regularization [70, 47, 46, 132] andwavelets [54, 92] but these approaches result in monolithic problems thattightly couple the constraints to the objective function. Instead, this chapterexploits the equivalence of pressure projection with proximal operators toapply proximal methods to the constrained optical flow problem.The second component is a standard forward simulation which providesthe starting conditions for the optical flow problems. This incorporates theprior knowledge that the observed flow is governed by the Navier-Stokesequations so optical flow estimates should only correct minor discrepan-cies introduced by grid-effects and modelling assumptions. This is similarto [132] which applied the same concept to 2D flows but used a different925.4. Divergence-free Optical Flow using ADMMformulation that resulted in a large-scale finite element problem that wouldbe challenging to extend to 3D.A key portion of the fluid tracking algorithm is consequently the solu-tion to divergence-free optical flow problems. These are used to correct thevelocities predicted by forward simulation. Each optical flow problem isformulated as a minimization for the velocity update ∆~u, shown in Equa-tion 5.9, which is the sum of an optical-flow photo-consistency term, EPC ,an L2 smoothness penalty ESM , an L2 kinetic energy penalty, EKE and ahard constraint that the computed flows be divergence free EDIV .arg min∆~uEPC(∆~u) + αESM (∆~u) + βEKE(∆~u)︸ ︷︷ ︸f(∆~u)+EDIV (∆~u)︸ ︷︷ ︸g(∆~u)(5.9)The photo-consistency, smoothness and kinetic energy terms are collectivelyreferred to as the objective function, f , while the EDIV is the constraintfunction g, mirroring their definitions from Section 2.3.2 where such objec-tive functions were minimized using the ADMM algorithm. Each of theseterms is defined below.Photo-consistency term, EPCThe photo-consistency term is defined by the standard PDE discretiza-tion of optical flow, extended to 3D. Equation 5.10 shows the definition incontinuous space and time.EPC(∆~u) =12∫Ω(∂φ∂t+ ∆~u · ∇φ)2dΩ (5.10)The velocity update ∆~u is discretized in space on a regular voxel grid andthe image gradient ∇~xφ is discretized with first-order finite differences andstored component-wise in the row of the data matrix F corresponding toeach voxel. The time-derivative of the image is also discretized with firstorder differences using the discrete frames given as input to the fluid trackingproblem. Applying both of these substitutions gives the discretized photo-consistency term in Equation 5.11.EPC(∆~u) =12‖φ2 − φ1∆t+ F∆~u‖22=12∆~uTFTF∆~u + ∆~uTFTφ2 − φ1∆t(5.11)935.4. Divergence-free Optical Flow using ADMMSmoothness term, ESMThe smoothness term ESM is a standard L2 penalty on gradient magni-tude for each velocity update component ∆uj , shown for continuous spacein Equation 5.12.ESM (∆~u) =123∑j=1∫Ω‖∇∆~uj‖22dΩ (5.12)The smoothness term is necessary to extrapolate velocities into regionswhere spatial gradients of the input image vanish. In these regions, thephoto-consistency term EPC contributes no information to the reconstruc-tion problem since the corresponding rows of F are all zero. 
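For concreteness, the assembly of the normal-equation pieces of this term can be sketched as follows. This is an illustrative sketch rather than the thesis implementation: it assumes a regular grid with unit voxel pitch, uses NumPy's central differences for the concentration gradient, and the function name is hypothetical.

```python
import numpy as np
import scipy.sparse as sp

def photo_consistency_system(phi1, phi2, dt):
    """Assemble the pieces F^T F and F^T (phi2 - phi1)/dt of the linearized
    photo-consistency term (Equation 5.11) that enter A and b in Equation 5.19.
    The gradient of the current concentration field is stored row-wise in the
    N x 3N data matrix F, one row per voxel."""
    gx, gy, gz = np.gradient(phi1)                      # spatial gradient of the concentration field
    F = sp.hstack([sp.diags(gx.ravel()),
                   sp.diags(gy.ravel()),
                   sp.diags(gz.ravel())]).tocsr()       # each row holds (d phi/dx, d phi/dy, d phi/dz)
    dphi_dt = (phi2 - phi1).ravel() / dt                # temporal derivative from the two frames
    return F.T @ F, F.T @ dphi_dt
```

Rows of F corresponding to voxels with vanishing concentration gradient are identically zero, which is exactly the under-determinedness that the remaining terms of the objective must address.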
The smoothnessterm allows information from regions with non-zero gradients to be used inthese areas.Once discretized, the smoothness term consists of the sum of three dis-crete Laplacian matrices Lj which each discretize the continuous Laplacianoperator ∇2∆uj = ‖∇∆uj‖22 for the subscripted velocity component, result-ing in the discretized smoothness term in Equation 5.13.ESM (∆~u) =123∑j=1∆~uLj∆~u (5.13)Kinetic energy term, EKEThe final term in the objective function F is the kinetic energy termEKE . The purpose of this term is to favour minimally energetic velocityupdates ∆~u. Its definition in continuous space is shown in Equation 5.14.EKE(δ~u) =12∫Ωδ~u2dΩ (5.14)Equation 5.14 can be trivially discretized to form Equation 5.15.EKE(∆~u) =12∆~uT∆~u (5.15)By favouring minimally energetic update the kinetic energy term accom-plishes two objectives. First, it penalizes deviations from the states pre-dicted by forward simulation since the velocity update is computed with re-spect to these states. This implicitly introduces the Navier-Stokes equations945.4. Divergence-free Optical Flow using ADMMas a prior on the velocity estimation procedure, even though Equation 5.2does not appear directly in the objective function f . Consequently, the flowestimation does not try to undo dynamics introduced by forward simulation.The second effect of the kinetic energy term is to counteract the smooth-ness term. Although the smoothness term is necessary to extrapolate ve-locities into regions with no texture, it tends to do so too strongly, oftensetting large volumes of fluid into motion in order to reduce streamline cur-vature by small amounts. Penalizing kinetic energy allows this effect to becontrolled and can have the counter-intuitive effect of producing flows thatappear more energetic near textured regions, but which have lower overallkinetic energy due to having less total volume of fluid in motion.5.4.1 Solution with ADMMThe equivalence of pressure-projection and the proximal operator for theset of divergence-free velocity fields allows the ADMM algorithm to be usedto solve inverse problems in velocity subject to the constraint that the re-constructed velocity fields are divergence-free. The fluid-tracking algorithmexploits this by using divergence-free optical flow based on ADMM to solvefor a low-energy update, ∆~u.This problem is expressed as a minimization in the form of Equation 5.16arg min∆~uf(∆~u) + g(∆~u) (5.16)where the functions f and g are convex functions, with f defining the dataterm and g defining the divergence-free constraint. The data term f isdefined as the sum of the photo-consistency, smoothness and kinetic energyterms, while the constraint g is defined as the indicator function for the setof divergence free flows (Equation 5.7)To solve Equation 5.16 using ADMM, a splitting variable, zˆ, is intro-duced to yield the constrained problem in Equation 5.17:arg min∆~u,zˆf(∆~u) + g(zˆ)such that: ∆~u = zˆ (5.17)Applying the ADMM algorithm to the minimization in Equation 5.17 yieldsthe proximal form of ADMM shown in Algorithm 5. The proximal form ofADMM is convenient to work with since the functions f and g are accessedonly through their proximal operators.955.4. Divergence-free Optical Flow using ADMMWith the function f defined as in Equation 5.9 (and its individual termsdefined by Equations 5.11, 5.13 & 5.15), the proximal operator of f is foundas the proximal operator for a quadratic form from Table 2.1. 
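For a generic quadratic f(u) = ½ uᵀAu + bᵀu this proximal operator reduces to a single symmetric positive definite solve, which can be sketched as below. The sketch is illustrative only: it uses an unpreconditioned conjugate gradient solve (the thesis implementation adds an incomplete-Cholesky preconditioner), and the helper name is hypothetical.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def prox_quadratic(A, b, v, lam, x0=None):
    """Proximal operator of f(u) = 0.5 u^T A u + b^T u:
       prox_{lam f}(v) = argmin_u f(u) + 1/(2 lam) ||u - v||^2
                       = (I + lam A)^{-1} (v - lam b).
    A is assumed sparse and symmetric positive (semi)definite."""
    n = v.size
    M = sp.identity(n) + lam * A          # I + lam*A is symmetric positive definite
    x, info = cg(M, v - lam * b, x0=x0, maxiter=200)
    return x
```

Passing the previous solution as the initial guess x0 corresponds to the warm-starting of the Conjugate Gradient solver discussed later in this section.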
At this point,in anticipation of a reconstructing large-scale motions using a multi-scaleapproach, the velocity update ∆~u is split into two components, the first(∆~uin) representing an estimate obtained from a coarse scale that is treatedas a fixed constant and the second (δ~u) serves as the active degrees offreedom, giving Equation 5.18.∆~u = ∆~uin + δ~u (5.18)The definition of the proximal operator of f is then given by Equation 5.19,where the term related to the data matrix F and the constant coarse scaleestimate ∆~uin (crossed out in Equation 5.19) is omitted since this termis handled by the multi-scale flow estimation procedure. This is necessarysince the photo-consistency EPC term in Equation 5.11 is only valid formotions smaller than the voxel pitch. The multi-scale framework operatesin a coarse-to-fine manner and warps the concentration fields at each levelby velocities estimated at the next coarsest level. This roughly aligns theconcentration fields such that the relative motions computed at each levelare smaller than the voxel pitch, however the warping accounts for the effectof the neglected term.proxλf (~v) = (I + λA)−1 (~v − λb) (5.19)A = FTF + α3∑j=1Lj + βIb = FTφ2 − φ1∆t−XXXXXFTF∆~uin − α3∑j=1Lj∆~uin − β∆~uinEquation 5.19 is the solution to a symmetric positive definite system of linearequations, allowing it to be solved using the Conjugate Gradient method to-gether with a drop-tolerance incomplete Cholesky (ILDLTT) preconditioner.The preconditioner is pre-computed once and cached whenever the proxλfis evaluated. The Conjugate Gradient solver can also be warm-started atevery evaluation of proxλf .The proximal operator of the divergence constraint g can also be definedto take the splitting of the velocity update ∆~u into a fixed coarse estimate∆~uin and active solution variables δ~u.proxλg(~v) = Project(∆~uin + ~v)−∆~uin (5.20)965.4. Divergence-free Optical Flow using ADMMThe proximal operator for g simply re-uses the pressure-projection routinefor an existing fluid solver, offsetting the parameter by the initial update∆~uin to account for the scale-splitting. The staggered discretization ofAndo et al. [6] was modified for voxel grids with trilinear basis functionsand used to implement the pressure solve. This pressure solver locates thepressure degrees of freedom on the cell vertices and the velocities at thecell center. This has the advantage of placing the velocities for the fluidsimulator at the same location as the optical flow code, unlike the standardMAC discretization [59] which locates velocities on the cell faces. 
Howeverthe disadvantage is that the memory required and density of the systems tosolve rises considerably when using this staggering on 3D voxel grids.With the proximal operators defined, the ADMM algorithm is shown inAlgorithm 5, where the proximal operators proxλf and proxλg are definedby Equations 5.19 & 5.20 respectively.Algorithm 5 ADMM (proximal form)1: procedure ADMM(proxλf ,proxλg)2: // Initialize velocity, splitting variable and multipliers3: δ~u(0), z(0),y(0) ← 04: while k < kmax do5: // Update change in velocity δ~u6: δ~u(k+1) ← proxλf (zˆ(k) − y(k))7:8: // Update splitting variable zˆ9: zˆ(k+1) ← proxλg(δ~u(k+1) + y(k))10:11: // Update Lagrange multipliers λ12: y(k+1) ← y(k) + δ~u(k+1) − zˆ(k+1)13:14: k ← k + 115: end while16: return δ~u(kmax)17: end procedure5.4.2 Extension to multiscaleThe ADMM algorithm in Algorithm 5 and the proximal operators in Equa-tion 5.19 & 5.20 can be used to recover divergence-free velocity vectors but isonly valid for motions comparable to the voxel pitch due to the linearization975.4. Divergence-free Optical Flow using ADMMof the forward model. Since there is typically little control of the magnitudeof motions when capturing data, this condition is likely to be violated.This limitation can be partially addressed by using a scale-space, whichestimates coarse motions on downsampled voxel grids such that the lineariza-tion holds and then warps by these estimated motions when estimating thenext coarsest scale.Algorithm 6 Divergence-free Optical Flow using ADMM1: procedure OpticalFlowADMM(φ1,φ2,∆t,l)2: if l < lmax then3: // Estimate correction at coarse scale and upsample4: φC1 ← SmoothAndCubicDownsample(φ1, η, σ)5: φC2 ← SmoothAndCubicDownsample(φ2, η, σ)6: ∆~uC ← OpticalFlowADMM(φC1 ,φC2 ,∆t, l + 1)7: ∆~uin ← 1η ∗ CubicUpsample(∆~uC , 1η )8: else9: // Coarsest scale, initialize correction to zero10: ∆~uin ← 011: end if12: // Align images using estimated correction13: φ∗2 ← Advect(φ2,∆~uin,−∆t)14:15: // Update correction with ADMM algorithm16: // Define proxλf & proxλg as in Equations 5.19 & 5.2017: ∆~u←∆~uin + ADMM(proxλF ,proxλG)18:19: // Return computed update20: return ∆~u21: end procedureThe multi-scale extension relies on several auxiliary routines.• SmoothAndCubicDownsample(φ, η, σ) - Cubicly downsamples φ byeta after smoothing by a Gaussian filter with standard deviation σ.• CubicUpsample(∆~u, 1η ) - Cubically upsamples φ by1η .• Advect(φ, ~u,∆t) - Advects φ forward by ∆t units of time using veloc-ity field ~u using BFECC [97] with 3rd order spatial interpolation andmin-/max-limiting.985.5. Fluid TrackingConceptually the method works by warping the second image φ2 back-wards in time by the coarsely estimated velocity field at the next coarserlevel. This aligns the φ1 and φ∗2 to (ideally) within the voxel pitch such thatthe Taylor expansion using in formulating the PDE form of optical flowholds. This warping is the reason for the neglected term in Equation 5.19:the neglected (linear) term is handled by the (non-linear) warping by thecoarse velocity field.The multi-scale extension closely follows the multi-scale Horne-Schunckframework of Meinhardt & Llopis [113], however several operations are per-formed differently in the adaptation to fluids, including solving using thepreconditioned Conjugate-Gradient method, introduction of the divergence-free constraint/ADMM and using higher-accuracy and limited fluid advec-tion rather than straightforward warping. 
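To summarize the two building blocks in executable form, the following sketch pairs a pressure projection with the scaled form of ADMM from Algorithm 5. It is a simplified stand-in for the thesis implementation: the projection assumes a periodic domain and solves the pressure Poisson equation spectrally, whereas the thesis uses the variational, solid-boundary-aware solver of [6, 14]; all function names are illustrative.

```python
import numpy as np

def project_divergence_free(u, dx=1.0):
    """Pressure projection on a periodic grid via the FFT.  Removes the
    curl-free (divergent) component of u with minimal change in kinetic
    energy, i.e. it plays the role of prox_g for the divergence-free set.
    u has shape (3, nx, ny, nz)."""
    freqs = [2.0 * np.pi * np.fft.fftfreq(n, d=dx) for n in u.shape[1:]]
    kx, ky, kz = np.meshgrid(*freqs, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                                    # avoid dividing the mean mode by zero
    uh = [np.fft.fftn(c) for c in u]
    div_h = 1j * (kx * uh[0] + ky * uh[1] + kz * uh[2])  # Fourier-space divergence
    p_h = div_h / (-k2)                                  # Poisson solve for the pressure
    grad_p = [np.real(np.fft.ifftn(1j * kk * p_h)) for kk in (kx, ky, kz)]
    return np.stack([u[i] - grad_p[i] for i in range(3)])

def admm(prox_f, prox_g, shape, iters=5):
    """Scaled (proximal) form of ADMM, mirroring Algorithm 5.  Reshaping
    between the flat representation used by the data-term prox and the
    (3, nx, ny, nz) field expected by the projection is left out here."""
    du = np.zeros(shape)
    z = np.zeros(shape)
    y = np.zeros(shape)
    for _ in range(iters):
        du = prox_f(z - y)        # quadratic data / smoothness / kinetic-energy step
        z = prox_g(du + y)        # enforce the divergence-free constraint
        y = y + du - z            # dual (multiplier) update
    return du
```

Only the control flow matters here: any projection routine that removes the divergent component with minimal change in kinetic energy can be dropped in as prox_g.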
The method is also extended byincorporating it into a higher-level fluid tracking framework intended to in-corporate fluid dynamics and temporal coherence, described in the followingsection.5.5 Fluid TrackingSection 5.4 describes a multi-scale optical flow algorithm capable of recon-structing divergence-free estimates of fluid motions passively from concentra-tion fields. This algorithm is made possible by the equivalence of pressure-projection and the proximal operator for the set of divergence-free flows aswell as the equivalence between PDE optical flow under the assumption ofbrightness-constancy and the advection equation. The non-linear large-scaleoptical flow problem is solved as a sequence of convex linear minimizationsat progressively finer scales and is similar existing divergence-free velocityestimation algorithms, e.g. [70, 47, 46].However these methods (including Algorithm 6) do not yield satisfactoryresults when applied to tomographically reconstructed concentration fieldsdirectly. Such datasets typically have sparsely distributed texture that causethe optical flow problem to be severely under-determined due to large regionswith zero concentration gradients. The resulting velocity reconstructionshave severe popping and are artificially smooth due to needing high priorweights to suppress noise. These effects are illustrated in the comparisonsin Section 5.6, where both the classical Horn-Schunck and a divergence-penalized variant of Horn-Schunck are applied to the fluid tracking problem.The underlying difficulty is a lack of dynamics information in the recon-struction problem: neither the divergence-free proximal operator nor the995.5. Fluid Trackingoptical flow data term takes the momentum portion (Equation 5.2) of theNavier-Stokes equations into account. Most methods estimate each veloc-ity field connecting a pair of adjacent frames independently. Consequentlythere is no reason to expect Equation 5.2 to be satisfied over time, sincethe (unique) minimizer of the regularized optical flow problem is computedwithout regard to temporal coherence or dynamics.Although temporal coherence can be introduced into the reconstructionprocess by jointly solving for the velocity fields at all frames simultaneously,the non-linearity of Equation 5.2 makes it extremely challenging to incorpo-rate as a prior. This is in turn made more difficult by the sheer scale of theproblem.Instead, the approach presented here frames fluid tracking as the processof perturbing a running simulation to match the input capture. This is sim-ilar in concept to [132] although their formulation differs significantly. Theresulting algorithm (Algorithm 7) can be expressed as a forward simulationusing the Navier-Stokes equations to which a data-dependent source term isapplied via operator splitting.1005.6. Fluid Tracking ResultsAlgorithm 7 Fluid Tracking Reconstruction Loop1: procedure FluidTracking(φ1, . . . ,φkmax ,~u0)2: k ← 13: while k < kmax do4: // predict next velocity & project to be div. free5: ~˜u∗k ← Advect(~uk−1, ~uk−1,∆t)6: ~˜uk ← Project(~˜u∗k)7:8: // warp φk+1 backwards based on ~˜ut9: φ˜k ← Advect(φk+1, ~˜uk,−∆t)10:11: // compute correction ∆~uk to align12: // φk and φ˜k such that ∇ ·∆~uk = 013: ∆~uk ← OpticalFlowADMM(φk, φ˜k,∆t, 0)14:15: // apply correction as source term16: ~uk ← ~˜uk + ∆t∆~uk∆t17:18: // update loop indices and current time19: t← t+ ∆t20: k ← k + 121: end while22: end procedureThe resulting algorithm is consequently a blend between optical flowestimation and a fluid simulation. 
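The control flow of Algorithm 7 can be condensed into a few lines; the sketch below is illustrative, with the advect, project and optical_flow_admm helpers assumed (for example, along the lines of the earlier sketches) rather than taken from the thesis code.

```python
def fluid_tracking(phis, u0, dt, advect, project, optical_flow_admm):
    """Predictor-corrector tracking loop in the spirit of Algorithm 7.
    `phis` is the sequence of reconstructed concentration fields."""
    us = [u0]
    for k in range(1, len(phis)):
        u_pred = project(advect(us[-1], us[-1], dt))       # forward-simulate one step (predictor)
        phi_warp = advect(phis[k], u_pred, -dt)            # warp the next frame back by the prediction
        du = optical_flow_admm(phis[k - 1], phi_warp, dt)  # low-energy, divergence-free correction
        us.append(u_pred + du)                             # apply correction as an integrated body force
    return us
```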
It uses the Navier-Stokes equations implicitly as a prior by looking for low-energy velocity corrections between states predicted by simulation and the observed concentration fields. The improvement that this offers in result quality is discussed in the next section for both real and simulated data.

5.6 Fluid Tracking Results

This section evaluates the performance of fluid tracking and shows several applications that make use of data captured experimentally using the optical tomography setup of Chapter 3. These datasets consist of tomographically reconstructed dye concentration fields in mixing fluids and are challenging for a number of reasons. First, they were acquired using relatively few camera views, which limits the level of texture in the interior of the flows. Second, as outlined in Chapter 3, the reconstructions have self-shadowing and scattering artifacts due to a linearized approximation to the physical forward model, which means that brightness constancy holds only approximately and at short time scales. Finally, each frame was reconstructed independently, which causes temporal noise and flicker. Nonetheless, the data represents the current state of the art in volumetric fluid capture.

5.6.1 Tracking Validation

Validation of the fluid tracking algorithm was carried out in two ways. The first was validation on simulated data for which known, ground-truth velocity fields are available. This represents an idealized case for the tracking since the input data is relatively free of artefacts introduced by tomographic reconstruction. The second was resimulation, in which the captured tomographic data and the velocities estimated by fluid tracking were used as initial conditions for a forward simulation. Both are discussed in the following sections.

Synthetic Results

This section evaluates the fluid-tracking algorithm on synthetically generated flows, which have the benefit of providing known, ground-truth velocity fields. The fluid tracking algorithm is compared to two representative methods for estimating the motion between two frames. The first is a multi-scale extension of the classical Horn-Schunck (H&S) technique designed to recover large motions; this choice is motivated by its continued competitiveness when implemented in a modern way [145]. The second is an extension of the multi-scale Horn-Schunck method (H&S div. penalty) that incorporates a penalty on the flow divergence. It is included not so much to represent a specific method as to illustrate the overall behaviour of the class of divergence-free optical flow algorithms when applied naively to fluids; it mimics the divergence-free behaviour of methods such as [46]. The proposed fluid-tracking method is referred to as 'ADMM full' and a version omitting the kinetic energy term as 'ADMM no kin.'.

The validation experiment consisted of a simulated ground-truth sequence of a 3D incompressible flow (resolution 50x100x50) that transports a buoyant fluid through a simple domain. The sequence has 118 frames and a central slice through the simulated dye concentrations is shown in Figure 5.1.

Figure 5.1: Visual comparison of velocity reconstruction between Horn-Schunck (H&S), divergence-penalized H&S and the proposed method (ADMM full), alongside the ground truth and the dye density. From top to bottom: magnitude of the vector difference with respect to the ground truth, angular error, and HSV color-encoded direction plots of the flow velocities. The figure shows a central slice through the domain.

Two different error metrics adapted from the optical flow community were used to evaluate the results. The velocity norm error measures the norm of the vector difference between the ground-truth velocities and the estimated velocities. The angular error measures the directional accuracy of the motion; it is weighted by the velocity norm to help distinguish large-scale trends from finer features such as slight shifting of eddies or the low-energy turbulence that was used as initial conditions. This weighting was applied because such effects result in very high angular errors even though the overall flow behaviour is consistent.

Figure 5.1 shows a visual comparison of the angular error, the velocity norm error and a color-coded visualization of the velocity vectors. All plots show the same volume slice (slice 25) and share a common scale. The proposed method, which tightly couples simulation and velocity estimation, significantly outperforms the other two methods: the errors are significantly lower and fluid tracking is the only method that successfully captures the overall structure of the velocity field. This is even more apparent in videos of the reconstructions.

The temporal behaviour of the different algorithms is shown quantitatively in Figure 5.2. The Horn-Schunck techniques are susceptible to incoherent estimates, whereas the proposed fluid-tracking method shows a smooth error evolution dominated by cumulative error (left). The large angular errors at the beginning of the sequence correspond to the initial conditions of isotropic turbulence (right): at the start of the sequence there is no dye in the domain, so the algorithm has no way of estimating these quantities. By frame 60 the dye has reached the majority of the domain and the optical flow reaches its highest angular accuracy. Although the Horn-Schunck estimates are nearly random, the fluid tracking algorithm produces reasonable results.

                 H&S     H&S div.   ADMM full   ADMM no kin.
α                0.2     0.2        1e-4        1e-4
β                0.0     0.0        1.0         0.0
γ                0.0     100.0      h.c.        h.c.
ADMM iter.       n.a.    n.a.       5           5
ADMM λ           n.a.    n.a.       1.0         1.0

Table 5.1: Optimization parameters for the ground-truth comparisons. 'n.a.' stands for 'not applicable', 'h.c.' for 'hard constraint'. The H&S solvers compute a solution to the linear system corresponding to Eq. 5.9 with a divergence penalty using CG, whereas ADMM uses the proposed algorithms. The multi-scale parameters are common to all methods: 3 levels, σ = 0.5, η = 0.65.

Figure 5.2: Ground-truth comparison for a 3D simulation across methods. Velocity norm error (left) and norm-weighted angular error (right) are shown over the frames of the sequence. The Horn-Schunck based two-frame methods suffer from temporal artefacts and their direction estimates are unsatisfactory. In contrast, the fluid-tracking method shows very good temporal coherence and significantly improved direction estimates.

To test the effect of the kinetic energy prior, Figure 5.2 also includes a version with this prior disabled ("ADMM no kinetic"). Its numerical performance is worse than the full method for most of the sequence; however, once the flow reaches the full domain, the performance of the full method drops slightly due to over-regularization.
Adaptively setting this parameter, either globally or locally, could potentially improve the results.

Comparisons were also performed for flows with solid obstacles and varying levels of data corruption. Input data was generated on a 100x200x100 voxel grid and consisted of a spherical dye inflow with randomized initial velocities and a spherical obstacle that triggers turbulent mixing, as shown in Figure 5.3. The solid boundary conditions were enforced by the variational approach of Batty et al. [14] in the pressure solve and as a penalty in the velocity estimation subproblem.

Figure 5.3: Fluid tracking for a flow with a solid obstacle. Top row: dye enters from a spherical inflow and impinges on a solid sphere. A slice through the domain shows dye concentration (contour color), ground-truth velocity vectors (black) and reconstructed velocity vectors (white). Bottom row: velocity norm and angular errors over time for the native-resolution input ("original") and for half-resolution reconstructions with varying levels of blur (σblur = [0.0, 0.5, 1.0] voxels) applied to the input density fields. α = 1e-4, β = 1.0.

The performance on this case was evaluated at full resolution and on a half-resolution grid with varying levels of blur added to the input concentration fields. This was done to approximate the loss of data quality that can arise when scanning using methods such as [68], which often produce relatively low-resolution reconstructions with strong filtering. The results show that the fluid-tracking method is capable of producing reasonable velocity predictions even in the presence of obstacles and for moderate blurs. Strong spatial blur causes the reconstructions to diverge more quickly over time in both metrics.

Resimulated Captures

Combining tomographically reconstructed dye concentration fields with velocity estimates obtained from the fluid tracking algorithm results in a passively obtained estimate of the full, volumetric fluid state. This allows simulations to be started directly from captured data for, to our knowledge, the first time.

An interesting question is how close this estimation and resimulation process can get to reality. Several sources of error already limit the capture accuracy: relatively few views mean that the detailed interior texture of flows may not be reconstructed accurately by tomography, and this is exacerbated by systematic errors introduced by linearizing the tomographic forward model. The fluid tracking algorithm, as a combined fluid simulation and optical flow algorithm, also introduces errors due to grid effects and the priors needed to obtain a strictly convex problem; the previous section shows that the estimates are fairly accurate, but the drift and smoothing introduced by the simulation and priors increase over time.

Figure 5.4: Passive advection of densities in a reconstructed time-varying velocity field (columns: capture, H&S, H&S div. penalty, ADMM).

One way of evaluating this is to passively advect dye concentration through the computed velocity fields and compare the results to the captured data.
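A small sketch of this re-advection check is given below; it reuses the assumed advect helper from the tracking-loop sketch, and the RMS comparison is only a numerical stand-in for the visual comparison actually used.

```python
import numpy as np

def readvection_check(phis, us, dt, advect):
    """Passively advect the first captured concentration field through the
    estimated velocity sequence and report the RMS difference against each
    captured frame."""
    phi = phis[0].copy()
    errors = []
    for k in range(1, len(phis)):
        phi = advect(phi, us[k - 1], dt)                     # transport by the estimated velocities only
        errors.append(np.sqrt(np.mean((phi - phis[k]) ** 2)))
    return errors
```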
Figure 5.4 shows this experiment for the capture concentrations, theHorn-Schunck method and the divergence-penalized Horn-Schunck method.The results show that the fluid tracking algorithm better captures the fineand medium scales of the flow, as well as the relevant dynamics, even overseveral seconds of turbulent mixing. The effect is even more apparent onvideo, where the Horn-Schunck methods show severe popping and an overallsludgy behaviour.Another way to evaluate performance is to take a frame of the cap-ture and its corresponding estimated velocity field as initial conditions fora forward simulation and compare this to the captured data, as shown inFigure 5.5. This is a much more challenging test than simply re-advectingconcentration fields passively, since it requires not only that the estimatedinitial conditions be highly accurate but also that the simulations are appro-priate. Furthermore, it is well known that turbulent flows exhibit consid-erable energy transfer between spatial scales in the velocity field, includingscales that are significantly smaller than can be resolved by either tomo-graphic reconstruction or fluid tracking. Flows in which fine-scale motionsdamp quickly are consequently expected to perform better in this evalua-tion than flows where fine-scaled features drive the dynamics, such as influid instabilities.1085.6. Fluid Tracking Results(a) bloom (b) smokeFigure 5.5: Simulations initialized from states estimated by fluid track-ing. Resimulation of the bloom (left) and smoke (right) datasets at 10frame intervals started from initial conditions obtained by tomography andfluid tracking. Despite fine-scaled turbulent mixing, the simulations pre-serve many features of the bloom dataset for over a second. In contrast,the smoke resimulation, with buoyancy induced instabilities results in morerapid divergence from the capture.This expectation is confirmed in the two test cases considered in Fig-ure 5.5. In the ”bloom” capture (Figure 5.5(a) (which involves dye be-ing poured into still water), fine-scaled turbulent eddies diffuse and decayinto larger-scaled features. These high-frequency details damp quickly af-ter which the flow is dominated by large-scale features. Consequently, theoverall behaviour of the simulated flows matches that of the capture well forover a second since the main features are well resolved by the grid.The opposite effect can be seen in the ”smoke” capture (Figure 5.5(b)),in which dye mixed with alcohol rises under the effects of buoyancy. Herethe buoyancy results in a fluid instability with relatively high-spatial fre-quency forcing terms driving the flow. In this case, the captured data isunder-resolved and systematic errors in the concentration fields result in thisforcing term being misrepresented in the simulation, which is itself under-resolved for the very fine-scaled features that develop. Consequently thesimulation diverges more quickly from the capture.These two examples demonstrate that using estimated velocities as initial1095.7. Stylistic Modificationsconditions for a forward simulation does not necessarily reproduce the origi-nal flow. Variations of the type of flow, accuracy of the simulator and minormodelling errors all contribute to divergence of the simulated flow from thecapture. This is a common problem not just with captured data, simplymodifying the resolution of a simulation can often give very different resultsand in many cases it is not possible or practical to reach a grid-convergedresult. 
This has been addressed in graphics simulations by synthesizing de-tails (e.g. [98]). The same techniques can be applied to captured flows usingvelocities estimated by fluid tracking.5.7 Stylistic ModificationsHaving a method to estimate plausible velocity fields for tomographicallycaptured concentration fields means that estimates of the full fluid state areknown. This allows captured data to be used in the standard visual effectspipeline. Previously a lack of velocity information limited applications tosimple playback and re-rendering of captured scenes. The following sectionsdescribe a number of ways that captured data can be applied to visualeffects, and in the process, demonstrate some of the potential applicationsof combining fluid simulation and inverse problems.Resolution EnhancementIncreasing the apparent resolution of captures can be achieved in two ways:the first is to simply re-advect concentration fields through the estimatedvelocities in order to remove regularization and temporal artefacts from cap-tures, while the second is to apply simulation ’up-resing’ (super-resolution)techniques. The first approach is demonstrated in Figure 5.4 in the contextof evaluating the tracking performance.However this can be extended by re-rendering the resulting, re-advecteddensity fields in a novel fashion. Figure 5.6 shows this for the ’smoke’ capturein the column labelled ”Passive Advection”. The results vastly increase theapparent resolution of the captures, resulting in sharper material interfacesand more details of the mixing process. Note that an exact match for thedensities is not achieved since the in-flow conditions cannot be matched. Fig-ure 5.6 also highlights that captures of one phenomena can be re-interpretedin a different context: here capture of a dye in water is re-rendered as smokein air. Figure 5.7 also shows the wavelet-turbulence method [98] appliedto the estimated velocity fields (as calculated by Mantaflow [124]). This1105.7. Stylistic ModificationsTomography Passive Advection Super-resolvedFrame 16Frame 40Frame 65Frame 97Figure 5.6: Comparison of resolution enhancement methods (smoke). Inputcaptures (left column), passively re-advected concentration fields (centrecolumn) and super-resolved flows using synthetic turbulence (right column)for the smoke dataset. Fluid tracking parameters: α = 0.005, β = 2.51115.7. Stylistic ModificationsFrame 130Frame  145Frame 160Frame 175Tomography Passive Advection Super-resolvedFigure 5.7: Comparison of resolution enhancement methods (bloom). Inputcaptures (left column), passively re-advected concentration fields (centrecolumn) and super-resolved flows using synthetic turbulence (right column)for the bloom dataset. Fluid tracking parameters: α = 0.005, β = 2.51125.7. Stylistic Modificationsmethod allows details to be synthesized without changing the global be-haviour of the captures. The same approaches are applied to the smokecapture in Figure 5.6.Domain AlterationThe domain of a capture can also be altered artistically once the full fluidstate has been estimated by running the capture up to a desired time-frame,after which a simulation is restarted at that frame to dynamically interactwith the altered boundary conditions. This way the computed velocity fieldcan be used to transfer captures into modified domains. Figure 5.8 showsthe smoke capture being transferred into a domain with a synthetic obstacleadded: the switchover to simulation allows the smoke to interact realisticallywith the new obstacle.1135.7. 
Stylistic ModificationsFigure 5.8: Restarted capture in a modified domain. The ability to estimatefluid velocities allows simulations to be restarted in modified domains, hereshowing the ’smoke’ dataset restarted in a domain with an added sphericalobstacle.When the boundary conditions of the domain change severely, the map-ping from source to target domain may introduce popping, since pressure-projection in the first simulation step will seek the closest (in a kinetic energysense) incompressible field that satisfies the new, modified boundary con-ditions. This can weight empty regions of the domain more strongly thanvisible regions with non-zero dye concentration.This effect can be partially avoided by formulating the transfer processas an inverse problem that can be solved using ADMM (Algorithm 5) bysimply re-defining the function f . The modified f expresses the desire tomap velocities to the new domain such that minimal changes are introducedin dyed regions. This is shown in Equation 5.21, where W is a diagonalweighting matrix indicating the relative cost of altering the the velocity at1145.7. Stylistic Modificationsa given voxelf(∆~u) =12‖W∆~u‖22 + αESM (∆~u) (5.21)Solving Equation 5.21 with Algorithm 5 and adding ∆~u to the initial con-ditions for velocity yields a velocity field satisfying the new boundary con-ditions that is as close as possible in an importance-weighted kinetic-energymetric to the original velocity field. This is the fluids analog to a weightedversion of a denoising problem, which is a classical inverse problem in imag-ing.Figure 5.9 shows this approach applied to a flow in which a syntheticobstacle is introduced and a weighting function combining dye concentra-tion and proximity to the synthetic obstacle is used to apply the ADMMalgorithm to remapping the flow. Using this approach results in reducederrors for visible regions of the flow.Figure 5.9: Mapping of flow from one domain to another. Directly restart-ing the flow results in significant errors in velocity for the remapped flows.Weighting by an importance function (inset) based on concentration anddistance to boundaries and solving via ADMM singificantly reduces theseerrors in visible portions of the flowThe example shown in Figure 5.9 is an extreme example of domainremapping that demonstrates how inverse problems can be put to use solv-ing practical fluids problems. Note that this was not needed for Figure 5.8:modifications to the domain must be very severe to see a visible improvementover simply restarting as a simulation directly.Guided SimulationThe link between inverse problems and fluids can be strengthened furtherby examining the problem of guided simulation, which seeks to apply per-1155.7. Stylistic Modificationsturbations to a simulation in order to achieve some target artistic effect.This problem has been addressed before using adjoint methods [112, 148]however these approaches involve non-linear formulations that must differ-entiate the entire fluid solver in order to determine partial derivatives ofthe controlled variables with respect to the control parameters. Anothervariant involves control particles within a smoothed particle hydrodynamicsframework [146], which also results in a non-linear formulation.However, by adopting an inverse problems interpretation with perturba-tions applied via operator splitting in a manner similar to the fluid trackingalgorithm (Algorithm 7) it is possible to obtain a fully linear constrainedproblem. 
In this approach, there is a target velocity field, ~uREF , that pro-vides a set of desired coarse-scale velocities and the goal is to perturb a run-ning simulation with an update, ∆~u, that causes the simulation to matchthe large-scale behaviour of the target, while keeping some characteristicsof the original simulation.The resulting simulation loop is only a slight modification of the fluidtracking algorithm (Algorithm 7) and is shown in Algorithm 8. First a newstate is predicted by forward simulation. Next an update is computed to thevelocity field to improve the match with respect to the target field ~uREF .Finally this update is applied as source term and the next simulation stepis taken.1165.7. Stylistic ModificationsAlgorithm 8 Guided Simulation Loop1: procedure GuidedSimulation(~uREF 1 , . . . , ~uREFkmax ,~u0,φ0)2: k ← 13: while k < kmax do4: // Use forward simulation to compute predicted5: // next velocity and concentration field6: φk ← Advect(~uk−1,φk−1,∆t)7: ~˜u∗k ← Advect(~uk−1, ~uk−1,∆t)8: ~˜uk ← Project(~˜u∗k)9:10: // compute correction ∆~uk to match coarse scales11: // of simulation to target velocity using12: // Equation 5.23 for proxλf13: ∆~uk ← ADMM(proxλf ,proxλg)14:15: // apply correction as source term16: ~uk ← ~˜uk + ∆t∆~uk∆t17:18: // update loop indices and current time19: t← t+ ∆t20: k ← k + 121: end while22: end procedureTaking inspiration from imaging, this problem can be formulated as adeconvolution problem by defining a circulant matrix B that implements alow-pass filter and defining the objective function for the problem F as inEquation 5.22.f(∆~u) =12‖B ((~uREF − ~u)−∆~u) ‖22 + αESM (∆~u) + βEKE(∆~u) (5.22)Since both the matrix B and the stencils implied by the smoothness priorESM and kinetic energy prior EKE are circulant, the proximal operator forF can be solved in Fourier space extremely efficiently via Equation 5.23.proxλf (v) = F−1(λF(B)∗F(B)F(~uREF − ~u) + F(v)λF(B)2 + αF(L) + β + 1)(5.23)This example serves to highlight the benefits of splitting the constraintfunction from the objective function. Solving Equation 5.22 subject to the1175.8. Implementationdivergence-free constraint is challenging due to the sheer scale of the objec-tive function (B may have several thousand non-zeros per row) and con-straint. However, by splitting the problem the objective function F can besolved in O(n log n) time and the constraint can be solved in O(n) time periteration of ADMM. Generally very few iterations of ADMM are requiredto get adequate solutions.original simulation guided simulation target behaviorFigure 5.10: Fluid guiding results: curl-noise is added to a resimulationof the smoke dataset to change its visual appearance. Without guiding theresulting simulation quickly diverges from the capture (left). By introducingthe fluid guiding to constraint low-frequency components of the velocity fieldto the original capture, the simulation tracks the original capture better,but preserves the stylistic modifications. Blur radius: 2 voxels, λ = 1000,α = 0.001, β = 0.0This is illustrated in Figure 5.10, in which a resimulation of the smokecapture is stylistically modified using curl-noise as a source term. Theperturbations cause the large-scale behaviour to quickly diverge from theoriginal capture. 
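The guiding correction that counteracts this drift is obtained by evaluating the proximal operator of Equation 5.23 inside Algorithm 8. A minimal sketch of that Fourier-space evaluation is given below; it assumes periodic boundaries, unit voxel pitch and a blur kernel stored on the same grid as a single velocity component, treats the three velocity channels independently, and uses illustrative names rather than those of the thesis code.

```python
import numpy as np

def guiding_prox_f(v, u_ref_minus_u, blur_kernel, lam, alpha, beta):
    """Fourier-space evaluation of the guiding proximal operator, following
    Equation 5.23 as printed.  `blur_kernel` represents the circulant
    low-pass matrix B; `v` and `u_ref_minus_u` have shape (3, nx, ny, nz)."""
    B = np.fft.fftn(blur_kernel)                          # symbol of the circulant blur matrix
    shape = blur_kernel.shape
    k = np.meshgrid(*[2.0 * np.pi * np.fft.fftfreq(n) for n in shape], indexing="ij")
    L = 2.0 * sum(1.0 - np.cos(ki) for ki in k)           # symbol of the 7-point negative Laplacian
    out = np.empty_like(v)
    for c in range(3):                                    # each velocity component separately
        num = lam * np.conj(B) * B * np.fft.fftn(u_ref_minus_u[c]) + np.fft.fftn(v[c])
        den = lam * np.abs(B) ** 2 + alpha * L + beta + 1.0
        out[c] = np.real(np.fft.ifftn(num / den))
    return out
```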
By introducing the ADMM-based fluid guiding approachthe simulation preserves the low-frequencies of the capture and the high-frequencies introduced by the curl-noise source term.5.8 ImplementationThe fluid-tracking, domain alteration and fluid guiding algorithms describedin the previous sections were implemented using C++ using the VTK visual-ization library [137] for volumetric image input and output and the CImg [1]1185.9. Limitationsimage processing library as an internal data structure for manipulating vol-umetric data and performing forward/inverse fast Fourier transforms onvolumetric data. The GMM++ [2] library was used to solve the discretizedlinear systems needed for optical flow and pressure projection. The Pois-son systems needed for pressure projection were preconditioned using theIncomplete Cholesky preconditioner as implemented in GMM++.Advection was implemented using BFECC [97]. The pressure projec-tion was implemented using the variational approach of [6], adapted to vox-els using single-point integration of the discretized gradients as well as thesolid-fluid coupling scheme of [14]. This allows interior boundaries that arenot aligned to the grid axes as well as colocates the velocity vectors andconcentration fields which is beneficial since it avoids interpolating fieldsfrom staggered to centred discretizations between the pressure projectionand data term proximal operators. A disadvantage is that the pressure-projection linear system becomes much more dense, having approximately27 non-zeros per row rather than 7, which imposes a severe memory penaltyand limits the scale of the reconstructions. Future work could examinealternative discretizations that require less memory than the current imple-mentation.5.9 LimitationsThere are several limitations to the fluid tracking approach presented in thischapter. The most significant is that the method depends on the presenceand motion of texture in the input data to infer plausible velocity fields.Performance consequently degrades when the input data has little texturewhich can occur when the spatial distribution of marker fluid is nearly static.This limitation is inherited from the partial-differential equation formulationof optical flow.A second limitation is in determining appropriate boundary conditionsto apply at inflows and outflows when evaluating the pressure projectionand velocity estimation proximal operators. For the pressure projection inparticular, it is necessary to specify these conditions such that a divergence-free velocity field exists, however there is a difficulty in determining suitable,consistent, conditions at fluid inflows, since the velocities at these locations isnot known from the experimental conditions. To avoid this issue, the imple-mentation used enforces that exterior boundaries are treated as stationarysolids so that a divergence free flow exists. This leads to an incompatibilitywith the velocity estimation term near these boundaries and prevents, in the1195.10. Conclusionscurrent implementation, general inflows and outflows from being estimatedcorrectly. Future work could address this limitation by adding an additionalconstraint requiring that the sum of inflows and outflows equal zero and freeinflow/outflow conditions on boundaries where fluid enters and exits.A less severe limitation is that velocity fields for each frame must bereconstructed in sequence due to posing the velocity estimation problem asperturbing a running simulation. 
This prevents reconstructing the velocityfield for each frame in parallel, although the improvement in result qualitydue to this choice seems to more than compensate for the limitation.Finally the weights for the kinetic energy and smoothness priors must betuned for a given flow and capture setup to achieve a good balance betweenreconstruction detail and noise levels. This is an inherent problem in inverseproblems. One principled method of addressing this could be to performcaptures of benchmark flows with known (or measured behaviour) and usecross-validation to select regularization parameters for given flow cases.5.10 ConclusionsThis chapter showed that the pressure solve commonly used in incompress-ible flow solvers can be interpreted as a proximal operator for the set ofdivergence-free flows. This equivalence allows a wide variety of practicalproblems in fluid dynamics to be addressed using advanced methods fromconvex optimization. The resulting algorithms can exploit structure in boththe objective and constraints, yielding modular and efficient algorithms.There are many applications for this approach. This chapter introduceda constrained form of optical flow that is appropriate for incompressibleflows, as well as a combined optical-flow & simulation framework for im-proving temporal coherence of the resulting reconstructions. The resultingfluid tracking algorithm couples the Navier-Stokes equations into the recon-struction process, resulting in vastly improved velocity field estimates.Several classical inverse problems are cast as problems in fluid dynamicsusing the equivalence of pressure projection and the proximal operator fordivergence free flows: optical flow, suitably modified, is recast as fluid veloc-ity estimation. Denoising is cast as the problem of transferring a flow fromone domain to another while preserving salient features of the flow. Finallydeconvolution is cast as the problem of guiding a running simulation, whichfurther demonstrates the relationship between inverse problems and fluidsproblems. The final example, shown in Figure 5.10, involves three separateinverse problems: tomography to reconstruct concentration fields, optical1205.10. Conclusionsflow/fluid tracking to estimated velocities and deconvolution to use thosevelocities to guide a running simulation.There are likely many further applications. Using the proximal methodsframework it is possible to solve a wide variety of objective functions in-cluding some non-smooth and discontinuous functions. The approach couldpotentially be used to implement non-smooth boundary conditions such asthe wall-separation condition of Batty et al. [14] (preliminary results indi-cate that the concept works) and the ADMM algorithm as applied to in-compressible fluids is similar to Iterated Orthogonal Projection [115], whichsplits volume and boundary condition portions of the pressure solve.Finally, ADMM has been used to solve problems in machine learningsuch as dictionary learning [94], which could enable the development ofdata-driven models for fluids in a natural and efficient framework.121Chapter 6Conclusions and FutureWorkThis thesis addressed several applications of inverse problems in fluids andimaging. Chapter 3 focused on tomographic reconstruction of dye concentra-tion from optical tomography and proposed a stochastic method for perform-ing these reconstructions. The resulting method is well suited to graphicsapplications, having low memory and the ability to re-render captured dataon the fly. 
It also allows priors to be incorporated into the reconstructionin a straightforward manner, allowing image and volume-based priors to beused.Chapter 4 then adapted this method to image deblurring. The flexibilityof the stochastic method also allows it to address several issues in imagedeblurring, specifically spatially varying point-spread functions, saturationand under-regularization of dark image regions. The stochastic method wasshown to produce state of the art results for a wide variety of priors inthe literature and two additional priors were proposed to address under-regularization of dark regions and to demonstrate content dependent priors.Chapter 5 then returned to the problem of scanning fluids and devel-oped a fluid tracking algorithm to estimate temporally coherent velocityfields from tomographically reconstructed concentration fields. This showeda connection between fluid simulation and convex optimization that allowedseveral imaging problems to be recast as fluids problems and solved withstandard methods. Chapter 5 also showed that challenging priors can beintegrated into the reconstruction process via the equivalence between prox-imal operators and pressure projection, and through implicit coupling withthe Navier-Stokes equations.The three applications and two methods from Chapters 3-5 serve todemonstrate the importance of regularization when applying inverse prob-lems to graphics. Capture conditions are often adverse in graphics applica-tions: there will rarely be as many cameras as needed, photos will often betaken with blur and fluid flows will rarely have well-resolved, dense texture.However, by carefully tailoring priors to the conditions, useful reconstruc-122Chapter 6. Conclusions and Future Worktions can still often be obtained.This thesis also shows that new approaches to well established prob-lems can prove beneficial. The stochastic algorithm of Chapters 3 and 4 isa surprisingly simple algorithm for solving inverse problems, but producesgood results without many of the drawbacks of other methods. The con-verse is also true: sometimes established methods prove highly effective inunexpected areas, such as in the connection of proximal methods to fluidsimulation.Rapidly increasing computational resources as well as recent theoreticaladvances have combined to make solving inverse problems a viable approachto problems in graphics, imaging and elsewhere. The limiting factor, inmy opinion, is the availability of effective priors for each problem type.Priors help to dramatically reduce the set of possible solutions to a problem(sometimes to a single, unique, solution) which can dramatically improvethe quality of reconstructions. However they can also significantly increasethe difficulty in solving the resulting optimization problems.For inverse problems involving incompressible fluids, in particular, I feelthat the equivalence of pressure-projection with the proximal operator forthe indicator function of divergence-free flows has great potential. Introduc-ing the incompressibility constraint has traditionally required solvers basedeither on complex (sometimes non-linear) discretizations or monolithic con-strained formulations that scale poorly. The equivalence of these operationsmeans that asymptotically optimal methods from convex optimization canbe directly applied to problems in fluids, resulting in iterative algorithmsthat can exploit structure in subproblems to obtain efficient, practical meth-ods. 
Work currently in submission investigates applying these methods tonon-smooth problems involving the pull-away condition proposed in [14] aswell as extending the guiding approach of Chapter 5. I suspect that furtherimprovements can be obtained in fluid reconstructions by exploiting more so-phisticated machine-learning approaches such as Gaussian Mixture Modelsor Dictionary Learning to build databases of fluid behavior. Early, unpub-lished, experiments indicate that truncated Singular Value Decompositionsof volumetric patches of simulated flows results in basis functions capturinguniform, rotational and shear flows. This suggests that salient features offluid flows may be teased out by general purpose convex optimization ap-proaches. I believe this structure can be exploited in compression of flowresults, regularization of inverse problems involving fluid flows as well as po-tentially being incorporated into optimization based simulations. The chal-lenge however, is to ensure that reconstructions obtained using data-drivenor heuristic priors are actually accurate representations of the underlying123Chapter 6. Conclusions and Future Workphenomena: too heavy a reliance on priors will only reproduce what hasalready been seen.124Bibliography[1] Cimg image processing library. http://http://www.cimg.eu/. Ac-cessed: 2015-07-03.[2] Gmm++ linear algebra library. http://download.gna.org/getfem/html/homepage/gmm/.Accessed: 2015-07-03.[3] Ronald J Adrian and Jerry Westerweel. Particle image velocimetry.Number 30. Cambridge University Press, 2011.[4] Marina Alterman, Yoav Y Schechner, Minh Vo, and Srinivasa GNarasimhan. Passive tomography of turbulence strength. In Com-puter Vision–ECCV 2014, pages 47–60. Springer, 2014.[5] AH Andersen and Avinash C Kak. Simultaneous algebraic reconstruc-tion technique (sart): a superior implementation of the art algorithm.Ultrasonic imaging, 6(1):81–94, 1984.[6] Ryoichi Ando, Nils Thu¨rey, and Chris Wojtan. Highly adaptive liquidsimulations on tetrahedral meshes. ACM Transactions on Graphics(TOG), 32(4):103, 2013.[7] Bradley Atcheson, Felix Heide, and Wolfgang Heidrich. Caltag: Highprecision fiducial markers for camera calibration. In VMV, volume 10,pages 41–48. Citeseer, 2010.[8] Bradley Atcheson, Ivo Ihrke, Wolfgang Heidrich, Art Tevs, DerekBradley, Marcus Magnor, and Hans-Peter Seidel. Time-resolved 3dcapture of non-stationary gas flows. ACM Transactions on Graphics(Proc. SIGGRAPH Asia), 27(5):132, 2008.[9] Dale L Bailey, David W Townsend, Peter E Valk, and Michael NMaisey. Positron emission tomography. Springer, 2005.[10] David Baraff. Non-penetrating rigid body simulation. 1993.125Bibliography[11] David Baraff. Fast contact force computation for nonpenetrating rigidbodies. In Proceedings of the 21st annual conference on Computergraphics and interactive techniques, pages 23–34. ACM, 1994.[12] David Baraff. An introduction to physically based modeling: rigidbody simulation iunconstrained rigid body dynamics. 1997.[13] John L Barron, David J Fleet, and Steven S Beauchemin. Performanceof optical flow techniques. International journal of computer vision,12(1):43–77, 1994.[14] Christopher Batty, Florence Bertails, and Robert Bridson. A fast vari-ational framework for accurate solid-fluid coupling. ACM Transactionson Graphics (TOG), 26(3):100, 2007.[15] Herbert Bay, Tinne Tuytelaars, and Luc Van Gool. Surf: Speededup robust features. In Computer Vision–ECCV 2006, pages 404–417.Springer, 2006.[16] Amir Beck and Marc Teboulle. 
Bibliography

[1] CImg image processing library. http://www.cimg.eu/. Accessed: 2015-07-03.
[2] Gmm++ linear algebra library. http://download.gna.org/getfem/html/homepage/gmm/. Accessed: 2015-07-03.
[3] Ronald J Adrian and Jerry Westerweel. Particle image velocimetry. Number 30. Cambridge University Press, 2011.
[4] Marina Alterman, Yoav Y Schechner, Minh Vo, and Srinivasa G Narasimhan. Passive tomography of turbulence strength. In Computer Vision–ECCV 2014, pages 47–60. Springer, 2014.
[5] AH Andersen and Avinash C Kak. Simultaneous algebraic reconstruction technique (SART): a superior implementation of the ART algorithm. Ultrasonic Imaging, 6(1):81–94, 1984.
[6] Ryoichi Ando, Nils Thürey, and Chris Wojtan. Highly adaptive liquid simulations on tetrahedral meshes. ACM Transactions on Graphics (TOG), 32(4):103, 2013.
[7] Bradley Atcheson, Felix Heide, and Wolfgang Heidrich. CALTag: High precision fiducial markers for camera calibration. In VMV, volume 10, pages 41–48. Citeseer, 2010.
[8] Bradley Atcheson, Ivo Ihrke, Wolfgang Heidrich, Art Tevs, Derek Bradley, Marcus Magnor, and Hans-Peter Seidel. Time-resolved 3D capture of non-stationary gas flows. ACM Transactions on Graphics (Proc. SIGGRAPH Asia), 27(5):132, 2008.
[9] Dale L Bailey, David W Townsend, Peter E Valk, and Michael N Maisey. Positron emission tomography. Springer, 2005.
[10] David Baraff. Non-penetrating rigid body simulation. 1993.
[11] David Baraff. Fast contact force computation for nonpenetrating rigid bodies. In Proceedings of the 21st annual conference on Computer graphics and interactive techniques, pages 23–34. ACM, 1994.
[12] David Baraff. An introduction to physically based modeling: rigid body simulation I - unconstrained rigid body dynamics. 1997.
[13] John L Barron, David J Fleet, and Steven S Beauchemin. Performance of optical flow techniques. International journal of computer vision, 12(1):43–77, 1994.
[14] Christopher Batty, Florence Bertails, and Robert Bridson. A fast variational framework for accurate solid-fluid coupling. ACM Transactions on Graphics (TOG), 26(3):100, 2007.
[15] Herbert Bay, Tinne Tuytelaars, and Luc Van Gool. SURF: Speeded up robust features. In Computer Vision–ECCV 2006, pages 404–417. Springer, 2006.
[16] Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[17] Thabo Beeler, Fabian Hahn, Derek Bradley, Bernd Bickel, Paul Beardsley, Craig Gotsman, Robert W Sumner, and Markus Gross. High-quality passive facial performance capture using anchor frames. In ACM Transactions on Graphics (TOG), volume 30, page 75. ACM, 2011.
[18] Bernd Bickel, Mario Botsch, Roland Angst, Wojciech Matusik, Miguel Otaduy, Hanspeter Pfister, and Markus Gross. Multi-scale capture of facial geometry and motion. In ACM Transactions on Graphics (TOG), volume 26, page 33. ACM, 2007.
[19] Peter Blomgren and Tony F Chan. Color TV: total variation methods for restoration of vector-valued images. Image Processing, IEEE Transactions on, 7(3):304–309, 1998.
[20] Marc Bodenstein, Matthias David, and Klaus Markstaller. Principles of electrical impedance tomography and its clinical application. Critical care medicine, 37(2):713–724, 2009.
[21] Léon Bottou. Stochastic gradient descent tricks. In Neural Networks: Tricks of the Trade, pages 421–436. Springer, 2012.
[22] Jeffrey E Boyd and James J Little. Complementary data fusion for limited-angle tomography. In Computer Vision and Pattern Recognition, 1994. Proceedings CVPR'94., 1994 IEEE Computer Society Conference on, pages 288–294. IEEE, 1994.
[23] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
[24] Derek Bradley, Bradley Atcheson, Ivo Ihrke, and Wolfgang Heidrich. Synchronization and rolling shutter compensation for consumer video camera arrays. In Computer Vision and Pattern Recognition Workshops, 2009. CVPR Workshops 2009. IEEE Computer Society Conference on, pages 1–8. IEEE, 2009.
[25] Derek Bradley, Wolfgang Heidrich, Tiberiu Popa, and Alla Sheffer. High resolution passive facial performance capture. ACM Transactions on Graphics (TOG), 29(4):41, 2010.
[26] Derek Bradley, Tiberiu Popa, Alla Sheffer, Wolfgang Heidrich, and Tamy Boubekeur. Markerless garment capture. In ACM Transactions on Graphics (TOG), volume 27, page 99. ACM, 2008.
[27] ND Bregman, RC Bailey, and CH Chapman. Crosshole seismic tomography. Geophysics, 54(2):200–215, 1989.
[28] Thomas Brox, Andrés Bruhn, Nils Papenberg, and Joachim Weickert. High accuracy optical flow estimation based on a theory for warping. In Computer Vision–ECCV 2004, pages 25–36. Springer, 2004.
[29] Thomas Brox and Jitendra Malik. Large displacement optical flow: descriptor matching in variational motion estimation. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 33(3):500–513, 2011.
[30] Patrizio Campisi and Karen Egiazarian. Blind image deconvolution: theory and applications. CRC Press, 2007.
[31] Emmanuel Candès, Laurent Demanet, David Donoho, and Lexing Ying. Fast discrete curvelet transforms. Multiscale Modeling & Simulation, 5(3):861–899, 2006.
[32] Emmanuel J Candès and David L Donoho. Curvelets and reconstruction of images from noisy Radon data. In International Symposium on Optical Science and Technology, pages 108–117. International Society for Optics and Photonics, 2000.
[33] Emmanuel J Candès, Justin Romberg, and Terence Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. Information Theory, IEEE Transactions on, 52(2):489–509, 2006.
[34] Emmanuel J Candès and Michael B Wakin. An introduction to compressive sampling. Signal Processing Magazine, IEEE, 25(2):21–30, 2008.
[35] Emmanuel J Candès, Michael B Wakin, and Stephen P Boyd. Enhancing sparsity by reweighted ℓ1 minimization. Journal of Fourier analysis and applications, 14(5-6):877–905, 2008.
[36] Claudio Canuto, M Hussaini, A Quarteroni, and TAJ Zang. Spectral Methods in Fluid Dynamics (Scientific Computation). Springer-Verlag, New York-Heidelberg-Berlin, 1987.
[37] Antonin Chambolle and Thomas Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision, 40(1):120–145, 2011.
[38] Tony F Chan, Gene H Golub, and Pep Mulet. A nonlinear primal-dual method for total variation-based image restoration. SIAM Journal on Scientific Computing, 20(6):1964–1977, 1999.
[39] Scott Shaobing Chen, David L Donoho, and Michael A Saunders. Atomic decomposition by basis pursuit. SIAM journal on scientific computing, 20(1):33–61, 1998.
[40] Zhongping Chen, Yonghua Zhao, Shyam M Srinivas, J Stuart Nelson, Neal Prakash, and Ron D Frostig. Optical Doppler tomography. Selected Topics in Quantum Electronics, IEEE Journal of, 5(4):1134–1142, 1999.
[41] Margaret Cheney, David Isaacson, and Jonathan C Newell. Electrical impedance tomography. SIAM review, 41(1):85–101, 1999.
[42] Sunghyun Cho and Seungyong Lee. Fast motion deblurring. In ACM Transactions on Graphics (TOG), volume 28, page 145. ACM, 2009.
[43] Taeg Sang Cho, Neel Joshi, C Lawrence Zitnick, Sing Bing Kang, Richard Szeliski, and William T Freeman. A content-aware image prior. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 169–176. IEEE, 2010.
[44] Patrick L Combettes and Jean-Christophe Pesquet. Proximal splitting methods in signal processing. In Fixed-point algorithms for inverse problems in science and engineering, pages 185–212. Springer, 2011.
[45] TJ Cornwell. Radio-interferometric imaging of very large objects. Astronomy and Astrophysics, 202:316–321, 1988.
[46] Thomas Corpetti, Dominique Heitz, G Arroyo, Etienne Memin, and Alina Santa-Cruz. Fluid experimental flow estimation based on an optical-flow scheme. Experiments in fluids, 40(1):80–97, 2006.
[47] Thomas Corpetti, Étienne Mémin, and Patrick Pérez. Dense estimation of fluid flows. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 24(3):365–380, 2002.
[48] Richard Courant, Kurt Friedrichs, and Hans Lewy. On the partial difference equations of mathematical physics. IBM journal of Research and Development, 11(2):215–234, 1967.
[49] M Courdurier, F Noo, M Defrise, and H Kudo. Solving the interior problem of computed tomography using a priori knowledge. Inverse problems, 24(6):065001, 2008.
[50] Ingrid Daubechies, Michel Defrise, and Christine De Mol. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Communications on pure and applied mathematics, 57(11):1413–1457, 2004.
[51] Mark E Davison. The ill-conditioned nature of the limited angle tomography problem. SIAM Journal on Applied Mathematics, 43(2):428–448, 1983.
[52] Edilson De Aguiar, Carsten Stoll, Christian Theobalt, Naveed Ahmed, Hans-Peter Seidel, and Sebastian Thrun. Performance capture from sparse multi-view video. ACM Transactions on Graphics (TOG), 27(3):98, 2008.
[53] Edilson De Aguiar, Christian Theobalt, Carsten Stoll, and H-P Seidel. Marker-less deformable mesh tracking for human shape and motion capture. In Computer Vision and Pattern Recognition, 2007. CVPR'07. IEEE Conference on, pages 1–8. IEEE, 2007.
[54] Pierre Dérian, Patrick Héas, Cédric Herzet, Etienne Memin, et al. Wavelets and optical flow motion estimation. Numerical Mathematics: Theory, Methods and Applications, 6:116–137, 2013.
[55] Lisa Dixon, Fook Chiong Cheong, and David G Grier. Holographic deconvolution microscopy for high-resolution particle tracking. Optics express, 19(17):16410–16417, 2011.
[56] Gerrit E Elsinga, Fulvio Scarano, Bernhard Wieneke, and BW Van Oudheusden. Tomographic particle image velocimetry. Experiments in Fluids, 41(6):933–947, 2006.
[57] Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The PASCAL visual object classes (VOC) challenge. International journal of computer vision, 88(2):303–338, 2010.
[58] Rob Fergus, Barun Singh, Aaron Hertzmann, Sam T Roweis, and William T Freeman. Removing camera shake from a single photograph. In ACM Transactions on Graphics (TOG), volume 25, pages 787–794. ACM, 2006.
[59] Nick Foster and Dimitri Metaxas. Realistic animation of liquids. Graphical models and image processing, 58(5):471–483, 1996.
[60] Jürgen Frikel. Sparse regularization in limited angle tomography. Applied and Computational Harmonic Analysis, 34(1):117–141, 2013.
[61] Andreas Geiger, Martin Roser, and Raquel Urtasun. Efficient large-scale stereo matching. In Computer Vision–ACCV 2010, pages 25–38. Springer, 2011.
[62] Stephen P Grand, Rob D van der Hilst, and Sri Widiyantoro. Global seismic tomography: A snapshot of convection in the earth. GSA today, 7(4):1–7, 1997.
[63] Michael Grant and Stephen Boyd. CVX: Matlab software for disciplined convex programming, 2008.
[64] James Gregson. Stochastic tomography project page. http://www.cs.ubc.ca/labs/imager/tr/2012/StochasticTomography/, 2012.
[65] James Gregson. Stochastic deconvolution project page. http://www.cs.ubc.ca/labs/imager/tr/2013/StochasticDeconvolution/, 2013.
[66] James Gregson, Felix Heide, Matthias B Hullin, Mushfiqur Rouf, and Wolfgang Heidrich. Stochastic deconvolution. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pages 1043–1050. IEEE, 2013.
[67] James Gregson, Ivo Ihrke, Nils Thuerey, and Wolfgang Heidrich. From capture to simulation: Connecting forward and inverse problems in fluids. ACM Trans. Graph., 33(4):139:1–139:11, July 2014.
[68] James Gregson, Michael Krimerman, Matthias B Hullin, and Wolfgang Heidrich. Stochastic tomography and its applications in 3D imaging of mixing fluids. ACM Trans. Graph., 31(4):52, 2012.
[69] Jinwei Gu, Shree K Nayar, Eitan Grinspun, Peter N Belhumeur, and Ravi Ramamoorthi. Compressive structured light for recovering inhomogeneous participating media. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 35(3):1–1, 2013.
[70] Sandeep N Gupta and Jerry L Prince. On div-curl regularization for motion estimation in 3-D volumetric imaging. In Image Processing, 1996. Proceedings., International Conference on, volume 1, pages 929–932. IEEE, 1996.
[71] Nils Hasler, Bodo Rosenhahn, T Thormahlen, Michael Wand, Jürgen Gall, and H-P Seidel. Markerless motion capture with unsynchronized moving cameras. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 224–231. IEEE, 2009.
[72] Trevor Hastie, Robert Tibshirani, Jerome Friedman, T Hastie, J Friedman, and R Tibshirani. The elements of statistical learning, volume 2. Springer, 2009.
[73] Lei He and Scott Schaefer. Mesh denoising via L0 minimization. ACM Transactions on Graphics (TOG), 32(4):64, 2013.
[74] Felix Heide, James Gregson, Gordon Wetzstein, Ramesh Raskar, and Wolfgang Heidrich. A compressive superresolution display. In Computational Optical Sensing and Imaging, pages CM4C–3. Optical Society of America, 2014.
[75] Felix Heide, Matthias B Hullin, James Gregson, and Wolfgang Heidrich. Low-budget transient imaging using photonic mixer devices. ACM Transactions on Graphics (TOG), 32(4):45, 2013.
[76] Felix Heide, Mushfiqur Rouf, Matthias B Hullin, Bjorn Labitzke, Wolfgang Heidrich, and Andreas Kolb. High-quality computational imaging through simple lenses. ACM Transactions on Graphics (TOG), 32(5):149, 2013.
[77] Bruno Hérissé, Tarek Hamel, Robert Mahony, and F-X Russotto. Landing a VTOL unmanned aerial vehicle on a moving platform using optical flow. Robotics, IEEE Transactions on, 28(1):77–89, 2012.
[78] Michael Hirsch, Christian J Schuler, Stefan Harmeling, and Bernhard Scholkopf. Fast removal of non-uniform camera shake. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 463–470. IEEE, 2011.
[79] David S Holder. Electrical impedance tomography: methods, history and applications. CRC Press, 2010.
[80] Berthold K Horn and Brian G Schunck. Determining optical flow. In 1981 Technical Symposium East, pages 319–331. International Society for Optics and Photonics, 1981.
[81] P Hua, JG Webster, and WJ Tompkins. A regularised electrical impedance tomography reconstruction algorithm. Clinical Physics and Physiological Measurement, 9(4A):137, 1988.
[82] Ping Hua, Eung Je Woo, John G Webster, and Willis J Tompkins. Iterative reconstruction methods using regularization and optimal current patterns in electrical impedance tomography. Medical Imaging, IEEE Transactions on, 10(4):621–628, 1991.
[83] Ivo Ihrke, B Goldluecke, and Marcus Magnor. Reconstructing the geometry of flowing water. In Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on, volume 2, pages 1055–1060. IEEE, 2005.
[84] Ivo Ihrke and Marcus Magnor. Image-based tomographic reconstruction of flames. In Proceedings of the 2004 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA '04, pages 365–373, Aire-la-Ville, Switzerland, 2004. Eurographics Association.
[85] Ivo Ihrke and Marcus Magnor. Adaptive grid optical tomography. Graphical Models, 68(5):484–495, 2006.
[86] Takashi Ijiri, Shin Yoshizawa, Hideo Yokota, and Takeo Igarashi. Flower modeling via X-ray computed tomography. ACM Trans. Graph., 33(4):48:1–48:10, July 2014.
[87] Stuart M Jefferies and Julian C Christou. Restoration of astronomical images by iterative blind deconvolution. The Astrophysical Journal, 415:862, 1993.
[88] Manjunath V Joshi, Subhasis Chaudhuri, and Rajkiran Panuganti. A learning-based method for image super-resolution from zoomed observations. IEEE Transactions on Systems, Man, and Cybernetics Part B: Cybernetics, 35(3):527–537, 2005.
[89] Neel Joshi, Sing Bing Kang, C Lawrence Zitnick, and Richard Szeliski. Image deblurring using inertial measurement sensors. ACM Transactions on Graphics (TOG), 29(4):30, 2010.
[90] Neel Joshi, Richard Szeliski, and David Kriegman. PSF estimation using sharp edge prediction. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pages 1–8. IEEE, 2008.
[91] Neel Joshi, C Lawrence Zitnick, Richard Szeliski, and David Kriegman. Image deblurring and denoising using color priors. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 1550–1557. IEEE, 2009.
[92] S Kadri-Harouna, Pierre Dérian, Patrick Héas, and Etienne Memin. Divergence-free wavelets and high order regularization. International journal of computer vision, 103(1):80–99, 2013.
[93] Avinash C Kak and Malcolm Slaney. Principles of computerized tomographic imaging. Society for Industrial and Applied Mathematics, 2001.
[94] Shiva P Kasiviswanathan, Huahua Wang, Arindam Banerjee, and Prem Melville. Online l1-dictionary learning with application to novel document detection. In Advances in Neural Information Processing Systems, pages 2258–2266, 2012.
[95] Yan Ke and Rahul Sukthankar. PCA-SIFT: A more distinctive representation for local image descriptors. In Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society Conference on, volume 2, pages II–506. IEEE, 2004.
[96] Tal Kenig, Zvi Kam, and Arie Feuer. Blind image deconvolution using machine learning for three-dimensional microscopy. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 32(12):2191–2204, 2010.
[97] ByungMoon Kim, Yingjie Liu, Ignacio Llamas, and Jarek Rossignac. FlowFixer: Using BFECC for fluid simulation. In Proceedings of the First Eurographics conference on Natural Phenomena, pages 51–56. Eurographics Association, 2005.
[98] Theodore Kim, Nils Thürey, Doug James, and Markus Gross. Wavelet turbulence for fluid simulation. In ACM Transactions on Graphics (TOG), volume 27, page 50. ACM, 2008.
[99] Rolf Köhler, Michael Hirsch, Bernhard Schölkopf, and Stefan Harmeling. Improving alpha matting and motion blurred foreground estimation. In ICIP, pages 3446–3450. Citeseer, 2013.
[100] Tamara G Kolda, Robert Michael Lewis, and Virginia Torczon. Optimization by direct search: New perspectives on some classical and modern methods. SIAM review, 45(3):385–482, 2003.
[101] Dilip Krishnan and Rob Fergus. Fast image deconvolution using hyper-Laplacian priors. In Advances in Neural Information Processing Systems, pages 1033–1041, 2009.
[102] Deepa Kundur and Dimitrios Hatzinakos. Blind image deconvolution. Signal Processing Magazine, IEEE, 13(3):43–64, 1996.
[103] Kiriakos N Kutulakos and Eron Steger. A theory of refractive and specular 3D shape by light-path triangulation. International Journal of Computer Vision, 76(1):13–29, 2008.
[104] Douglas Lanman, Matthew Hirsch, Yunhee Kim, and Ramesh Raskar. Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization. In ACM Transactions on Graphics (TOG), volume 29, page 163. ACM, 2010.
[105] Edwin Lee, Benjamin P Fahimian, Cristina V Iancu, Christian Suloway, Gavin E Murphy, Elizabeth R Wright, Daniel Castaño-Díez, Grant J Jensen, and Jianwei Miao. Radiation dose reduction and image enhancement in biological imaging through equally-sloped tomography. Journal of structural biology, 164(2):221–227, 2008.
[106] Anat Levin, Rob Fergus, Frédo Durand, and William T Freeman. Deconvolution using natural image priors. ACM Transactions on Graphics (TOG), 26(3):0–2, 2007.
[107] Anat Levin, Rob Fergus, Frédo Durand, and William T Freeman. Image and depth from a conventional camera with a coded aperture. In ACM Transactions on Graphics (TOG), volume 26, page 70. ACM, 2007.
[108] David G Lowe. Object recognition from local scale-invariant features. In Computer vision, 1999. The proceedings of the seventh IEEE international conference on, volume 2, pages 1150–1157. IEEE, 1999.
[109] David Martin and Charless Fowlkes. The Berkeley segmentation database and benchmark. Computer Science Department, Berkeley University. http://www.eecs.berkeley.edu/Research/Projects/CS/vision/bsds, 2001.
[110] Jacob Mattingley and Stephen Boyd. CVXGEN: a code generator for embedded convex optimization. Optimization and Engineering, 13(1):1–27, 2012.
[111] James G McNally, Tatiana Karpova, John Cooper, and José Angel Conchello. Three-dimensional imaging by deconvolution microscopy. Methods, 19(3):373–385, 1999.
[112] Antoine McNamara, Adrien Treuille, Zoran Popović, and Jos Stam. Fluid control using the adjoint method. In ACM Transactions on Graphics (TOG), volume 23, pages 449–456. ACM, 2004.
[113] Enric Meinhardt-Llopis, Javier Sanchez Perez, and Daniel Kondermann. Horn-Schunck optical flow with a multi-scale strategy. Image Processing On Line, 2013:151–172, 2013.
[114] Nicholas Metropolis, Arianna W Rosenbluth, Marshall N Rosenbluth, Augusta H Teller, and Edward Teller. Equation of state calculations by fast computing machines. The journal of chemical physics, 21(6):1087–1092, 1953.
[115] Jeroen Molemaker, Jonathan M Cohen, Sanjit Patel, and Jonyong Noh. Low viscosity flow simulations for animation. In Proceedings of the 2008 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pages 9–18. Eurographics Association, 2008.
[116] David P Naidich, Christopher H Marshall, Christopher Gribbin, Ronald S Arams, and Dorothy I McCauley. Low-dose CT of the lungs: preliminary observations. Radiology, 175(3):729–731, 1990.
[117] Srinivasa G Narasimhan, Shree K Nayar, Bo Sun, and Sanjeev J Koppal. Structured light in scattering media. In Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on, volume 1, pages 420–427. IEEE, 2005.
[118] Deanna Needell and Rachel Ward. Stable image reconstruction using total variation minimization. SIAM Journal on Imaging Sciences, 6(2):1035–1058, 2013.
[119] Guust Nolet. Seismic wave propagation and seismic tomography. In Seismic tomography, pages 1–23. Springer, 1987.
[120] Stanley Osher, Martin Burger, Donald Goldfarb, Jinjun Xu, and Wotao Yin. An iterative regularization method for total variation-based image restoration. Multiscale Modeling & Simulation, 4(2):460–489, 2005.
[121] Ashkan Panahi and Mats Viberg. Maximum a posteriori based regularization parameter selection. In Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on, pages 2452–2455. IEEE, 2011.
[122] Neal Parikh and Stephen Boyd. Proximal algorithms. Foundations and Trends in Optimization, 1(3):123–231, 2013.
[123] E. Parzen. On estimation of a probability density function and mode. Ann. Math. Stat., 33:1065–1076, 1962.
[124] Tobias Pfaff and Nils Thuerey. MantaFlow, 2013. http://mantaflow.ethz.ch.
[125] Tiberiu Popa, Quan Zhou, Derek Bradley, Vladislav Kraevoy, Hongbo Fu, Alla Sheffer, and Wolfgang Heidrich. Wrinkling captured garments using space-time data-driven deformation. In Computer Graphics Forum, volume 28, pages 427–435. Wiley Online Library, 2009.
[126] Reinhold Preiner, Oliver Mattausch, Murat Arikan, Renato Pajarola, and Michael Wimmer. Continuous projection for fast l1 reconstruction. ACM Transactions on Graphics (Proc. of ACM SIGGRAPH 2014), 33(4):47:1–47:13, August 2014.
[127] Chrysanthe Preza, Michael I Miller, and Jose-Angel Conchello. Image reconstruction for 3D light microscopy with a regularized linear method incorporating a smoothness prior. In IS&T/SPIE's Symposium on Electronic Imaging: Science and Technology, pages 129–139. International Society for Optics and Photonics, 1993.
[128] Ramesh Raskar, Amit Agrawal, and Jack Tumblin. Coded exposure photography: motion deblurring using fluttered shutter. ACM Transactions on Graphics (TOG), 25(3):795–804, 2006.
[129] William Hadley Richardson. Bayesian-based iterative method of image restoration. JOSA, 62(1):55–59, 1972.
[130] Michael C Roggemann and Byron M Welsh. Signal-to-noise ratio for astronomical imaging by deconvolution from wave-front sensing. Applied optics, 33(23):5400–5414, 1994.
[131] Leonid I Rudin, Stanley Osher, and Emad Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1):259–268, 1992.
[132] Paul Ruhnau, Annette Stahl, and Christoph Schnörr. Variational estimation of experimental fluid flows with physics-based spatio-temporal regularization. Measurement Science and Technology, 18(3):755, 2007.
[133] Yousef Saad. Iterative methods for sparse linear systems. SIAM, 2003.
[134] Gerald Schaefer and Michal Stich. UCID: an uncompressed color image database. In Electronic Imaging 2004, pages 472–480. International Society for Optics and Photonics, 2003.
[135] Mark Schmidt, Nicolas Le Roux, and Francis Bach. Minimizing finite sums with the stochastic average gradient. arXiv preprint arXiv:1309.2388, 2013.
[136] Mark W. Schmidt, Nicolas Le Roux, and Francis Bach. Convergence rates of inexact proximal-gradient methods for convex optimization. CoRR, abs/1109.2415, 2011.
[137] Will J Schroeder, Bill Lorensen, and Ken Martin. The visualization toolkit. Kitware, 2004.
[138] Timothy J Schulz. Multiframe blind deconvolution of astronomical images. JOSA A, 10(5):1064–1073, 1993.
[139] Joshua W Shaevitz and Daniel A Fletcher. Enhanced three-dimensional deconvolution microscopy using a measured depth-varying point-spread function. JOSA A, 24(9):2622–2627, 2007.
[140] Lawrence A Shepp and JB Kruskal. Computerized tomography: the new medical X-ray technology. American Mathematical Monthly, pages 420–439, 1978.
[141] Lawrence A Shepp and Yehuda Vardi. Maximum likelihood reconstruction for emission tomography. Medical Imaging, IEEE Transactions on, 1(2):113–122, 1982.
[142] Jean-Baptiste Sibarita. Deconvolution microscopy. In Microscopy Techniques, pages 201–243. Springer, 2005.
[143] Jean-Luc Starck, Mai K Nguyen, and Fionn Murtagh. Wavelets and curvelets for image deconvolution: a combined approach. Signal Processing, 83(10):2279–2283, 2003.
[144] Thomas Strohmer and Roman Vershynin. A randomized Kaczmarz algorithm with exponential convergence. Journal of Fourier Analysis and Applications, 15(2):262–278, 2009.
[145] Deqing Sun, Stefan Roth, and Michael J Black. A quantitative analysis of current practices in optical flow estimation and the principles behind them. International Journal of Computer Vision, 106(2):115–137, 2014.
[146] Nils Thürey, Richard Keiser, Mark Pauly, and Ulrich Rüde. Detail-preserving fluid control. Graphical Models, 71(6):221–228, 2009.
[147] Kim-Chuan Toh, Michael J Todd, and Reha H Tütüncü. SDPT3 - a Matlab software package for semidefinite programming, version 1.3. Optimization methods and software, 11(1-4):545–581, 1999.
[148] Adrien Treuille, Antoine McNamara, Zoran Popović, and Jos Stam. Keyframe control of smoke simulations. ACM Trans. Graph., 22(3):716–723, 2003.
[149] Borislav Trifonov, Derek Bradley, and Wolfgang Heidrich. Tomographic reconstruction of transparent objects. In Proceedings of the 17th Eurographics Conference on Rendering Techniques, EGSR'06, pages 51–60, Aire-la-Ville, Switzerland, 2006. Eurographics Association.
[150] Paul Tseng. Convergence of a block coordinate descent method for nondifferentiable minimization. Journal of optimization theory and applications, 109(3):475–494, 2001.
[151] Lieven Vandenberghe and Stephen Boyd. Semidefinite programming. SIAM review, 38(1):49–95, 1996.
[152] Eric Veach and Leonidas J Guibas. Metropolis light transport. In Proceedings of the 24th annual conference on Computer graphics and interactive techniques, pages 65–76. ACM Press/Addison-Wesley Publishing Co., 1997.
[153] Curtis R Vogel and Mary E Oman. Iterative methods for total variation denoising. SIAM Journal on Scientific Computing, 17(1):227–238, 1996.
[154] Jing Wang, Tianfang Li, Hongbing Lu, and Zhengrong Liang. Penalized weighted least-squares approach to sinogram noise reduction and image reconstruction for low-dose X-ray computed tomography. Medical Imaging, IEEE Transactions on, 25(10):1272–1283, 2006.
[155] XJ Wang, TE Milner, and JS Nelson. Characterization of fluid flow velocity by optical Doppler tomography. Optics letters, 20(11):1337–1339, 1995.
[156] Allan G Weber. The USC-SIPI image database version 5. USC-SIPI Report, 315:1–24, 1997.
[157] Gordon Wetzstein, Douglas Lanman, Wolfgang Heidrich, and Ramesh Raskar. Layered 3D: Tomographic image synthesis for attenuation-based light field and high dynamic range displays. ACM Trans. Graph., 30(4):95:1–95:12, July 2011.
[158] F.M. White. Fluid Mechanics, 4th Ed. WCB/McGraw-Hill, 1999.
[159] SJ Wright and J Nocedal. Numerical optimization, volume 2. Springer New York, 1999.
[160] Jessie Q Xia, Joseph Y Lo, Kai Yang, Carey E Floyd Jr, and John M Boone. Dedicated breast computed tomography: volume image denoising via a partial-diffusion equation based technique. Medical physics, 35(5):1950–1958, 2008.
[161] Li Xu and Jiaya Jia. Two-phase kernel estimation for robust motion deblurring. In Computer Vision–ECCV 2010, pages 157–170. Springer, 2010.
[162] Qiong Xu, Xuanqin Mou, Ge Wang, Jered Sieren, Eric A Hoffman, and Hengyong Yu. Statistical interior tomography. Medical Imaging, IEEE Transactions on, 30(5):1116–1128, 2011.
[163] Jiansheng Yang, Hengyong Yu, Ming Jiang, and Ge Wang. High-order total variation minimization for interior tomography. Inverse Problems, 26(3):035013, 2010.
[164] Junfeng Yang, Wotao Yin, Yin Zhang, and Yilun Wang. A fast algorithm for edge-preserving variational multichannel image restoration. SIAM Journal on Imaging Sciences, 2(2):569–592, 2009.
[165] Hengyong Yu and Ge Wang. Compressed sensing based interior tomography. Physics in medicine and biology, 54(9):2791, 2009.
[166] Christopher Zach, Thomas Pock, and Horst Bischof. A duality based approach for realtime TV-L1 optical flow. In Pattern Recognition, pages 214–223. Springer, 2007.
[167] Simon Zingg, Davide Scaramuzza, Stephan Weiss, and Roland Siegwart. MAV navigation through indoor corridors using optical flow. In Robotics and Automation (ICRA), 2010 IEEE International Conference on, pages 3361–3368. IEEE, 2010.
