UBC Theses and Dissertations
Modeling and analysis of diffusive molecular communication systems. Noel, Adam. 2015.

Modeling and Analysis of Diffusive Molecular Communication Systems

by

Adam Noel

B.Eng., Memorial University of Newfoundland, 2009
M.A.Sc., The University of British Columbia, 2011

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate and Postdoctoral Studies (Electrical and Computer Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

September 2015

© Adam Noel, 2015

Abstract

Diffusive molecular communication (MC) is a promising strategy for the transfer of information in synthetic networks at the nanoscale. If the devices in such networks could communicate, then their cumulative capacity would expand, potentially enabling applications such as cooperative diagnostics in medicine, bottom-up fabrication in manufacturing, and sensitive environmental monitoring. Diffusion-based MC relies on the random motion of information molecules due to collisions with other molecules.

This dissertation presents a novel system model for three-dimensional diffusive MC in which molecules can also be carried by steady uniform flow or participate in chemical reactions. The expected channel impulse response due to a point source of molecules is derived and its statistics are studied. The mutual information between consecutive observations at the receiver is also derived. A simulation framework that accommodates the details of the system model is introduced.

A joint estimation problem is formulated for the underlying system model parameters. The Cramer-Rao lower bound on the variance of the estimation error is derived. Maximum likelihood estimation is considered and shown to outperform the Cramer-Rao lower bound when the estimator is biased. Peak-based estimators are proposed for the low-complexity estimation of any single channel parameter.

Optimal and suboptimal receiver design is considered for detecting the transmission of ON/OFF keying impulses. Optimal joint detection provides a bound on detector performance.
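The diffusive channel summarized above lends itself to particle-based Monte Carlo simulation. The following is a minimal illustrative sketch, not the dissertation's own simulation framework (described in Chapter 2): molecules released impulsively by a point transmitter undergo Brownian motion, drift with a steady uniform flow, and degrade at a first-order rate, while a passive spherical receiver counts the molecules inside its volume at each time step. All parameter values below are hypothetical, chosen only to make the sketch run.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical channel parameters (not taken from the dissertation)
D = 1e-9                         # diffusion coefficient of A molecules (m^2/s)
k = 0.0                          # first-order degradation rate (1/s); 0 = no degradation
v = np.array([1e-3, 0.0, 0.0])   # steady uniform flow vector (m/s)
d, r_rx = 500e-9, 45e-9          # TX-RX distance and receiver radius (m)
n_tx, dt, steps = 5000, 1e-6, 200

pos = np.zeros((n_tx, 3))            # impulsive release at the TX (origin)
alive = np.ones(n_tx, dtype=bool)    # molecules not yet degraded
rx_center = np.array([d, 0.0, 0.0])

counts = []                          # receiver observations over time
for _ in range(steps):
    # Brownian displacement (std dev sqrt(2*D*dt) per dimension) plus advection
    pos[alive] += rng.normal(0.0, np.sqrt(2 * D * dt), (int(alive.sum()), 3)) + v * dt
    # First-order degradation: each molecule survives the step w.p. exp(-k*dt)
    alive &= rng.random(n_tx) < np.exp(-k * dt)
    # Passive observation: count surviving molecules inside the spherical RX
    inside = np.linalg.norm(pos - rx_center, axis=1) < r_rx
    counts.append(int((inside & alive).sum()))
```

Averaging `counts` over many independent realizations approximates the expected channel impulse response; setting a nonzero `k` or varying `v` illustrates how degradation and flow reshape it.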
The weighted sum detector is proposed as a suboptimal alternative that is more physically realizable. The performance of a weighted sum detector can become comparable to that of the optimal detector when the environment has a mechanism to reduce intersymbol interference.

A model for noise sources that continuously release molecules is studied. The time-varying and asymptotic impact of such sources is derived. The model for asymptotic noise is used to approximate the impact of multiuser interference and also the impact of older bits of intersymbol interference.

Preface

I hereby declare that I am the first author of this thesis. Chapters 2-5 are based on work performed under the supervision of Professor Robert Schober and Professor Karen C. Cheung.

For all chapters and corresponding papers, I conducted the literature surveys on relevant topics, formulated the system models, performed the analyses, implemented the simulations, and wrote the manuscripts. My supervisors helped to guide the direction of the research, validated the analyses, and provided feedback to improve the manuscripts.

Please note that, to improve the organization of this thesis, some publications are related to both Chapter 2 and Chapter 4.

Publications related to Chapter 2:

- A. Noel, K. C. Cheung, and R. Schober, "Optimal Receiver Design for Diffusive Molecular Communication with Flow and Additive Noise," IEEE Transactions on NanoBioscience, vol. 13, no. 3, pp. 350-362, Sep. 2014.
- A. Noel, K. C. Cheung, and R. Schober, "Diffusive Molecular Communication with Disruptive Flows," in Proc. IEEE ICC, pp. 3600-3606, Jun. 2014.
- A. Noel, K. C. Cheung, and R. Schober, "Improving Receiver Performance of Diffusive Molecular Communication with Enzymes," IEEE Transactions on NanoBioscience, vol. 13, no. 1, pp. 31-43, Mar. 2014.
- A. Noel, K. C. Cheung, and R. Schober, "Using Dimensional Analysis to Assess Scalability and Accuracy in Molecular Communication," in Proc. IEEE ICC MoNaCom, pp. 818-823, Jun. 2013.
- A. Noel, K. C. Cheung, and R. Schober, "Improving Diffusion-Based Molecular Communication with Unanchored Enzymes," Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol. 134, pp. 184-198, 2014. In Proc. ICST BIONETICS, Dec. 2012.

Publications related to Chapter 3:

- A. Noel, K. C. Cheung, and R. Schober, "Joint Channel Parameter Estimation via Diffusive Molecular Communication," to appear in IEEE Transactions on Molecular, Biological, and Multi-Scale Communications, 2015.
- A. Noel, K. C. Cheung, and R. Schober, "Bounds on Distance Estimation via Diffusive Molecular Communication," in Proc. IEEE GLOBECOM, pp. 2813-2819, Dec. 2014.

Publications related to Chapter 4:

- A. Noel, K. C. Cheung, and R. Schober, "Optimal Receiver Design for Diffusive Molecular Communication with Flow and Additive Noise," IEEE Transactions on NanoBioscience, vol. 13, no. 3, pp. 350-362, Sep. 2014.
- A. Noel, K. C. Cheung, and R. Schober, "Diffusive Molecular Communication with Disruptive Flows," in Proc. IEEE ICC, pp. 3600-3606, Jun. 2014.
- A. Noel, K. C. Cheung, and R. Schober, "Improving Receiver Performance of Diffusive Molecular Communication with Enzymes," IEEE Transactions on NanoBioscience, vol. 13, no. 1, pp. 31-43, Mar. 2014.

Publication related to Chapter 5:

- A. Noel, K. C. Cheung, and R. Schober, "A Unifying Model for External Noise Sources and ISI in Diffusive Molecular Communication," IEEE Journal on Selected Areas in Communications, vol. 32, no. 12, pp. 2330-2343, Dec. 2014.

Other contributions not presented in this dissertation are listed in Appendix A.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
List of Abbreviations
List of Notation
List of Symbols
Acknowledgments
Dedication
1 Introduction
  1.1 Background
  1.2 Motivation
    1.2.1 Communications Analysis
    1.2.2 Open Problems
  1.3 Contributions
  1.4 Assumptions
  1.5 Organization of the Dissertation
2 Channel Model and Impulse Response
  2.1 Introduction
  2.2 System Model
    2.2.1 Dimensional Model
    2.2.2 Dimensionless Model
    2.2.3 Molecule Sources and the Receiver
  2.3 Channel Impulse Response
    2.3.1 Derivation of the Expected Point Concentration
    2.3.2 Derivation of the Channel Impulse Response
    2.3.3 Cumulative Receiver Signal
    2.3.4 Statistics of the Channel Impulse Response
  2.4 Independence of Receiver Observations
  2.5 Simulation Framework
    2.5.1 Choice of Framework
    2.5.2 Simulating Chemical Reactions
    2.5.3 Simulating the Molecule Sources and the Transmitter
    2.5.4 Channel Parameter Values
  2.6 Numerical and Simulation Results
    2.6.1 Expected Channel Impulse Response
    2.6.2 Uniform Concentration Assumption
    2.6.3 Sample Independence
  2.7 Conclusion
3 Joint Channel Parameter Estimation
  3.1 Introduction
  3.2 System Model and Estimation Preliminaries
    3.2.1 Physical Environment
    3.2.2 The Cramer-Rao Lower Bound
    3.2.3 Maximum Likelihood Estimation
  3.3 Joint Parameter Estimation Performance
    3.3.1 Main Result
    3.3.2 Examples of the CRLB
    3.3.3 On the Nonexistence of the CRLB
  3.4 Estimation Protocols
    3.4.1 ML Estimation
    3.4.2 Peak-Based Estimation
  3.5 Numerical and Simulation Results
    3.5.1 Optimal Estimation
    3.5.2 Peak-Based Estimation
  3.6 Conclusion
4 Optimal and Suboptimal Receiver Design
  4.1 Introduction
  4.2 System Model
  4.3 Optimal Sequence Detection
    4.3.1 Optimal Detector
    4.3.2 Optimal Joint Detection Using Viterbi Algorithm
  4.4 Weighted Sum Detection
    4.4.1 Detector Design and Performance
    4.4.2 Optimal Weights
  4.5 Numerical and Simulation Results
    4.5.1 Max Detection
    4.5.2 Detection Without Intersymbol Interference
    4.5.3 Detection With Varying M
    4.5.4 Detection With Varying Flow
  4.6 Conclusion
5 A Unifying Model for External Noise Sources and ISI
  5.1 Introduction
  5.2 System Model
  5.3 External Additive Noise
    5.3.1 General Noise Model
    5.3.2 Tractable Noise Analysis
  5.4 Multiuser Interference
    5.4.1 Complete Multiuser Model
    5.4.2 Asymptotic Interference
  5.5 Asymptotic Intersymbol Interference
    5.5.1 Decomposition of Received Signal
    5.5.2 Application: Weighted Sum Detection
  5.6 Numerical Results
    5.6.1 Continuous Noise Source
    5.6.2 Interference and Intersymbol Interference
  5.7 Conclusion
6 Conclusions and Topics for Future Research
  6.1 Conclusions
  6.2 Future Work
    6.2.1 Directions of MC Research
    6.2.2 Modeling and Analysis
Bibliography
Appendices
A List of Other Publications
B Proofs for Chapter 2
  B.1 Proof of Theorem 2.1
  B.2 Proof of Theorem 2.2
C Proofs for Chapter 5
  C.1 Proof of Theorem 5.1
  C.2 Proof of Theorem 5.2

List of Tables

1.1 Differences between diffusive MC and conventional wireless communication.
2.1 System parameters used throughout the dissertation.
2.2 Enzyme reaction-diffusion parameters for System 2.
3.1 Diagonal elements of the FIM for each desired parameter, as found by Theorem 3.1 in Section 3.3.1.
3.2 System 3 parameters used for numerical and simulation results.
4.1 System parameters used for RX detection.
4.2 Enzyme reaction-diffusion parameters for System 2.
5.1 Description of the terms in (5.1).
5.2 Summary of the equations for the impact of an external noise source and the conditions under which they can be used.
5.3 System parameters used for numerical and simulation results.
5.4 Conversion between dimensional and dimensionless variables.

List of Figures

2.1 The system model considered in this dissertation.
2.2 The system model with a re-defined coordinate frame for the 3rd molecule source.
2.3 Average channel impulse response for the base case of System 1 compared with variations.
2.4 The empirical and expected cumulative distribution function for System 1 and three variations.
2.5 The expected channel impulse response for the base case of System 2, along with a series of variations of a single parameter.
2.6 The empirical and expected cumulative distribution functions for System 2 and three variations.
2.7 The relative deviation in N̄*_RX|tx,0(t*) from the true value (2.34) at the RX when the uniform concentration assumption (2.37) is applied and there is no flow or molecule degradation.
2.8 The relative deviation in N̄*_RX|tx,0(t*) from the true value (2.34) at the receiver when the uniform concentration assumption (2.37) is applied.
2.9 The relative deviation in N̄*_RX|tx,0(t*) from the true value (2.34) at the receiver when the uniform concentration assumption (2.37) is applied.
2.10 The mutual information, in bits, of observations made by the RX in System 2, measured as a function of Δt_ob.
3.1 The system model considered in this chapter, where there is a single point source and only first-order molecule degradation.
3.2 The expected channel impulse response N̄_RX|tx(t) of the environment defined by Table 3.2 as a function of time t for varying distance d.
3.3 Normalized mean square error of ML distance estimation as a function of the number of observations M and as the knowledge of other parameters is removed.
3.4 Normalized mean square error of ML estimation of each channel parameter when that parameter is the only one that is unknown.
3.5 Normalized mean square error of ML estimation of each channel parameter when that parameter and the distance d are unknown.
3.6 Normalized mean square error of ML distance estimation when d is the only unknown parameter.
3.7 Normalized mean square error of peak-based distance estimation as a function of the actual distance d for varying window length κ.
3.8 Normalized mean square error of peak-based estimation as a function of the distance d when the window length is κ = 7.
4.1 Evaluating the error probability of System 2 as a function of the bit decision threshold ξ at the RX.
4.2 Average error probability of the base case of System 2 as a function of M when there is no ISI, N̄_RX|n(t) = 50, and T_int = 200 µs.
4.3 Average error probability of the base case of System 2 as a function of M when ISI is included, T_int = 200 µs, and N̄_RX|n(t) = 0 or 0.5.
4.4 Average error probability of the base case of System 2 as a function of M when ISI is included, T_int = 200 µs, and the distance d to the RX is varied.
4.5 Average error probability of System 2 as a function of M when ISI is included, T_int = 100 µs, and enzymes are included to mitigate the impact of ISI.
4.6 Average error probability of System 2 as a function of M when ISI is included, T_int = 100 µs, enzymes are included to mitigate the impact of ISI, and an additive noise source is present.
4.7 Average error probability of System 2 as a function of M when ISI is included, T_int = 100 µs, N̄_RX|n(t) = 1, and different degrees of flow are present.
4.8 Average error probability of System 1 as a function of v*_∥ for M = {2, 5, 10, 40} observations in each bit interval (v*_⊥ = 0).
4.9 Average error probability of System 1 as a function of v*_⊥ for M = {2, 5, 10, 40} observations in each bit interval (v*_∥ = 0).
4.10 Average error probability of System 1 as a function of v*_∥ = v*_⊥ for the numbers of observations in each bit interval M = {2, 5, 10, 40}.
5.1 The dimensionless number of noise molecules observed at the receiver as a function of time when v*_∥ = v*_⊥ = 0 and k* = 0, i.e., when there is no advection or molecule degradation.
5.2 The dimensionless number of noise molecules observed at the receiver as a function of time when v*_∥ = v*_⊥ = 0 and k* = 1.
5.3 The dimensionless number of noise molecules observed at the receiver as a function of time when d*_n = 0, v*_∥ = v*_⊥ = 0, and the molecule degradation rate is k* = {0, 1, 2, 5, 10, 20, 50}.
5.4 The dimensionless number of noise molecules observed at the receiver as a function of time when k* = 0, we vary v*_∥ or v*_⊥, and we consider d_n = 0 nm and d_n = 100 nm.
5.5 The dimensionless number of noise molecules observed at the receiver as a function of time when k* = 0, we vary v*_∥ or v*_⊥, and we consider d_n = 200 nm and d_n = 400 nm.
5.6 The dimensionless number of interfering molecules observed at the receiver as a function of time for an interfering transmitter placed at d_2 = 400 nm and d_2 = 1 µm.
5.7 Receiver error probability as a function of F, the number of bit intervals of ISI treated explicitly, for varying molecule degradation rate k*.

List of Abbreviations

AWGN  Additive White Gaussian Noise
CDF  Cumulative Distribution Function
CRLB  Cramer-Rao Lower Bound
FIM  Fisher Information Matrix
ISI  Intersymbol Interference
MC  Molecular Communication
ML  Maximum Likelihood
MSE  Mean Square Error
MUI  Multiuser Interference
PMF  Probability Mass Function
RF  Radio Frequency
RX  Receiver
TX  Transmitter
UCA  Uniform Concentration Assumption

List of Notation

| · |  Magnitude
(·)!  Factorial
⌊·⌋  Floor function
⌈·⌉  Ceiling function
[·]^T  Vector transpose
(β a)  Binomial coefficient
∂/∂x  Partial derivative with respect to x
∇²  Laplacian
E[·]  Statistical expectation of a random variable
erf(·)  Error function
ln  Natural logarithm (base e)
log  Logarithm
mse(·)  Mean square error
p(·)  Probability mass function
Pr(X = x)  Probability of variable X having value x
sgn(·)  Sign function
var(·)  Variance

List of Symbols

The list of symbols in this dissertation is as follows.

β  Generic variable
Γ(x)  Gamma function of x
Γ(x, λ)  Incomplete Gamma function of x
Δt  Global simulation time step
Δt_ob  Time between receiver observations within a bit interval
η  Fluid viscosity
θ  Polar angle in spherical coordinates
Θ  Vector of unknown parameters
Θ̂_0  Initial estimate of vector Θ
Θ_i  ith unknown parameter
Θ_i,ref  Reference value for unknown parameter Θ_i
κ  Moving filter length
λ  Mean of a Poisson random variable
Λ  Number of unknown parameters
ξ  Observation threshold
ξ*  Dimensionless decision threshold
τ  Integration variable for time
τ*  Dimensionless integration variable for time
φ  Azimuthal angle in spherical coordinates
Φ_f,i[b]  Current log likelihood for the ith path leading to state f in the bth bit interval
Ψ_f[b]  Cumulative log likelihood for the most likely path leading to state f in the bth bit interval
Ψ_f,i[b−1]  Cumulative log likelihood for the ith path leading to the state prior to the fth state in the bth bit interval
ω  Observation value in probability mass function or cumulative distribution function
Ω  Arbitrary molecule label
A  Information molecule label
a  Generic variable
A_P  Product molecule label
b  Index for bit (symbol) in transmitter sequence
c  Generic variable
C  Concentration of molecule type A at time t and location r⃗ (compact form)
C_0  Reference concentration of A molecules
C_Ω  Concentration of molecule type Ω at time t and location r⃗ (compact form)
C_Ω(r⃗, t)  Concentration of molecule type Ω at time t and location r⃗
C*_Ω  Dimensionless concentration of molecule type Ω
C_Θ̂  Covariance matrix of an estimator Θ̂ for Θ
C_Etot  Constant global sum concentration of bound and unbound enzyme molecules
d  Distance from the transmitter to the center of the receiver
D  Constant diffusion coefficient for molecule type A
d*  Dimensionless distance from the transmitter to the center of the receiver
D_Ω  Constant diffusion coefficient for molecule type Ω
d_ef  Effective distance from the transmitter to the center of the RX
d*_ef,x  Effective dimensionless distance from the transmitter to the center of the RX along the x*-dimension
d_u  Distance of the uth source from the receiver
d*_u  Dimensionless distance of the uth source from the receiver
E  Enzyme molecule label
EA  Intermediate molecule label
F  Number of bits (symbols) of explicit channel memory
f  Index for state in Viterbi algorithm
G_Θi,m  Component of Fisher information matrix term for unknown parameter Θ_i
g(·)  Generic function
h(t)  Arbitrary function of t
I(Θ)  Fisher information matrix of vector Θ
I(X; Y)  Mutual information between random variables X and Y
I_Pob(ξ, c)  Regularized incomplete beta function
J_i(a)  ith order Bessel function of the first kind
k  First-order degradation reaction rate
k*  Dimensionless first-order degradation rate
k_−1  Unbinding reaction rate of Michaelis-Menten kinetics
k_1  Binding reaction rate of Michaelis-Menten kinetics
k_2  Degradation reaction rate of Michaelis-Menten kinetics
k_B  Boltzmann constant, 1.38 × 10^−23 J/K
L  Reference distance
m  Index for observation in a bit interval
M  Number of receiver observations per bit interval
n  Index for random noise source
N_Aref  Reference number of molecules
N_E  Number of enzyme molecules in the propagation environment
N̄_Gen,n  Constant (average) random generation process of A molecules for the nth noise source
N_Gen,n(t)  Random generation process of A molecules for the nth noise source at time t
N*_Gen,n(t*_A)  Dimensionless random generation process of A molecules for the nth noise source at time t*_A
N̄_Gen,n(t)  Time-varying average random generation process of A molecules for the nth noise source at time t
N̄*_Gen,n(t*_A)  Dimensionless time-varying average random generation process of A molecules for the nth noise source at time t*_A
N_RX(t)  Number of molecules observed by the receiver at time t
N*_RX(t*_A)  Dimensionless number of molecules observed by the receiver at time t
N̄_RX(t)  Total number of A molecules expected at the receiver at time t due to all molecule sources
N̄*_RX(t*_A)  Total dimensionless number of A molecules expected at the receiver at time t*_A due to all molecule sources
N̄*_RX|n  Asymptotic (i.e., as t* → ∞) dimensionless number of molecules expected from the nth random noise source
N_RX|tx(t)  Number of molecules observed by the receiver at time t due to the intended transmitter
N*_RX|tx(t*_A)  Dimensionless number of molecules observed by the receiver at time t due to the intended transmitter
N̄_RX|tx(t)  Total number of A molecules expected at the receiver at time t due to the intended transmitter
N̄*_RX|tx(t*_A)  Total dimensionless number of A molecules expected at the receiver at time t*_A due to the intended transmitter
N_RX|tx(t; b)  Number of molecules observed by the receiver at time t that were released by the intended transmitter at the start of the bth bit interval
N̄_RX|tx,0(t)  Expected channel impulse response from the transmitter
N̄*_RX|tx,0(t*_A)  Dimensionless expected channel impulse response from the transmitter
N*_RX|tx,cur(t*)  Dimensionless number of molecules observed from those released by the transmitter in the current bit interval
N̄*_RX|tx,cur(t*)  Dimensionless number of molecules expected from those released by the transmitter in the current bit interval
N*_RX|tx,ISI(t*)  Dimensionless number of molecules observed from those released by the transmitter within F intervals before the current interval
N̄*_RX|tx,ISI(t*)  Dimensionless number of molecules expected from those released by the transmitter within F intervals before the current interval
N*_RX|tx,old(t*)  Dimensionless number of molecules observed from those released by the transmitter before the F intervals before the current interval
N̄*_RX|tx,old(t*)  Dimensionless number of molecules expected from those released by the transmitter before the F intervals before the current interval
N̄*_RX|tx,old  Asymptotic approximation of the dimensionless number of molecules expected from those released by the transmitter before the F intervals before the current interval
N_RX|u(t)  Number of molecules observed by the receiver at time t due to the uth source
N*_RX|u(t*_A)  Dimensionless number of molecules observed by the receiver at time t due to the uth source
N̄_RX|u(t)  Total number of A molecules expected at the receiver at time t due to the uth source
N̄*_RX|u(t*_A)  Total dimensionless number of A molecules expected at the receiver at time t*_A due to the uth source
N̄_RX|u,0(t)  Expected channel impulse response from the uth transmitter
N_TX  Number of A molecules released by the intended transmitter to send a binary 1
N_TX,u  Number of A molecules released by the uth transmitter to send a binary 1
N*_TX,u  Dimensionless number of A molecules released by the uth transmitter to send a binary 1
P_0  Probability that a given bit in the intended transmitter's sequence is 0
P_1  Probability that a given bit in the intended transmitter's sequence is 1
P_arr(t_m, t_m+1)  Probability that a molecule not inside the receiver at time t_m is inside the receiver at time t_m+1
P_e[b]  Expected probability of error of the bth bit
P̄_e  Average probability of error, averaged over all bits
P_leave(Δt_ob)  Probability that a molecule within the receiver is no longer within the receiver after time Δt_ob
P_ob(t)  Probability that a given molecule will be inside the receiver at time t
P_stay(Δt_ob)  Probability that a molecule within the receiver is still within the receiver after time Δt_ob
P_u,0  Probability that a given bit in the uth transmitter's sequence is 0
P_u,1  Probability that a given bit in the uth transmitter's sequence is 1
Q  Number of parameters to estimate
r*  Dimensionless distance from the origin
R_Ω  Radius of molecule type Ω
r_bind  Binding radius for A and E molecules
r*_dif  R*_RX − d*_n
r_rms  Root mean square separation of A and E molecules in one simulation time step
R_RX  Receiver radius
R*_RX  Dimensionless receiver radius
r*_sum  R*_RX + d*_n
r_TX  Distance from the transmitter to an arbitrary point
r*_TX  Dimensionless distance from the transmitter to an arbitrary point
r_TX,ef  Effective distance from the transmitter to an arbitrary point
r*_TX,ef  Effective dimensionless distance from the transmitter to an arbitrary point
r⃗  Vector from the origin (i.e., center of the receiver)
s_b,m  Number of molecules observed by the receiver in the mth observation in the bth bit interval
s′_m  Filtered number of molecules observed at the receiver
s_max  Peak number of molecules observed
s_b  Vector of observations by the receiver in the bth bit interval
t  Time
t[b, m]  Time of the receiver's mth observation in the bth bit interval
t*  Dimensionless time for molecule type A
t*_Ω  Dimensionless time for molecule type Ω
t*_A[b, m]  Dimensionless time of the receiver's mth observation in the bth bit interval
T_int  Bit (symbol) interval of the intended transmitter
T_int,u  Bit (symbol) interval of the uth transmitter
T*_int,u  Dimensionless bit (symbol) interval of the uth transmitter
T_K  Environment temperature in degrees kelvin
t_m  Time of the mth observation at the receiver
t_max  Time when the peak number of molecules is observed
t̄_max  Time when the most molecules are expected at the receiver
t_tx,0  Time when one impulse of molecules is released by the transmitter
t_tx,ef  Time elapsed since one impulse of molecules is released by the transmitter
t_u,0  Transmission start time for the uth transmitter
U  Number of molecule sources
u  Index for molecule source or transmitter
v  Steady uniform flow speed
v*  Dimensionless uniform flow speed (Peclet number)
V_E  Volume that restricts the diffusion of enzyme molecules
v_∥  Flow component in direction from transmitter to receiver
v*_∥  Dimensionless flow speed from transmitter towards receiver
v_⊥  Flow component perpendicular to line from transmitter to receiver
v_⊥,1  Component of flow along the y-direction
v*_⊥,1  Dimensionless flow speed along the y-dimension
v_⊥,2  Component of flow along the z-direction
v*_⊥,2  Dimensionless flow speed along the z-dimension
V_RX  Receiver volume
V*_RX  Dimensionless receiver volume
v⃗  Steady uniform flow vector
w_m  Weight of the mth observation in the weighted sum detector
W[b]  bth bit in binary (symbol) sequence of the intended transmitter
Ŵ[b]  Receiver decision of the bth bit
W  Binary (symbol) sequence of the intended transmitter
Ŵ_f,i[l]  lth received bit on the ith path leading to state f
W_u[b]  bth bit in binary (symbol) sequence of the uth transmitter
W_u  Binary (symbol) sequence of the uth transmitter
𝒲  Set of all possible transmitter sequences of length B
X  Generic random variable
x*  Dimensionless x-coordinate
Y  Generic random variable
y*  Dimensionless y-coordinate
z*  Dimensionless z-coordinate

Acknowledgments

I am sincerely grateful for the guidance and support of my supervisors, Professor Robert Schober and Professor Karen C. Cheung. Robert's knowledge and patience will never cease to impress me. I also thank him for his generous support when I moved my family to visit him at the Institute for Digital Communication (IDC), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany, in 2013. Karen provided an invaluable complementary perspective and much-needed reality check that guided my work.

I thank the members of my examining committee for their time, their feedback, and their interest in my work. Specifically, I thank Professors Christian Kastrup, Andre Marziali, Vikram Krishnamurthy, and Urbashi Mitra (USC).

I would also like to acknowledge my external funding sources.
My research was supported by a Doctoral Postgraduate Scholarship from NSERC from 2011-2014, and by a William Sumner Fellowship from 2011-2013.

I am very grateful for the lunchtime discussions with my colleagues in KAIS 4090 at UBC and also at the IDC in Erlangen. The reflections on language, culture, food, and graduate student life were always worth the break.

Last but not least, I would like to thank my family. I am indebted to my wife Joanne for her love and support for me, Natalie, and Derek. Finally, I thank my extended family, most of whom are in Newfoundland, for their long-distance support of my education on the mainland.

Dedication

To Natalie and Derek, who just might read this some day.

Chapter 1

Introduction

1.1 Background

Communication systems are all around us. Some of these systems enable one person to share information with another, either by natural means (such as when we speak with the person next to us) or by synthetic means (such as when we send an email or speak via phone). Many forms of synthetic communication are digital, where the message produced by the information source is converted into a sequence of binary digits; see [1, Ch. 1]. Natural communication is not restricted to the exchange of information between humans. Animals can communicate via chemical signaling (e.g., pheromones), visual signals (e.g., fireflies), sound (e.g., whales can communicate over hundreds of kilometers), tactile signaling (e.g., honeybee dancing), and electric signals (e.g., some species of fish); see [2, Ch. 52]. At a cellular level, we also find that individual cells need to communicate with each other in order to share information; see [3, Ch. 16]. This inter-cellular communication could be between single-cell organisms, such as in a community of bacteria, or amongst the cells of a multicellular organism.

There is interest in implementing synthetic communication at the scale of biological cells; see [4, 5].
A synthetic network at this scale could operate in a biological environment, in small industrial devices, or even in the air. This kind of network, where the communicating devices have functional components that are on the order of nanometers in size, has been defined as a nanonetwork in [4]. Interest in nanonetworks is driven by applications in a diverse range of fields, including biological engineering, healthcare, manufacturing, and environmental monitoring. The functionality of nanoscale devices would critically depend on the ability to communicate, since it is anticipated that any single device would be too small to have significant processing capacity. Thus, the fundamental challenge in the implementation of nanonetworks is designing appropriate mechanisms that enable communication between the devices.

Conventional synthetic strategies for ad hoc communication between mobile devices, e.g., radio frequency (RF) transmission, might be unsafe or infeasible in the environments where nanonetworks are to be deployed, such as in biological systems. One approach is to draw inspiration from how natural communication occurs in these environments and determine whether natural mechanisms can be adapted for use in synthetic networks. By using biological components, such as genetically modified cells, we might hope to design networks that are inherently biocompatible for implementation inside living organisms.

Molecular communication (MC), where molecules are used as information carriers between a transmitter (TX) and its intended receiver (RX), is ubiquitous in biological systems; see [6]. For example, endocrine signaling is the release of hormone molecules that propagate via the bloodstream until they reach the appropriate destination, paracrine signaling is the release of molecules into extracellular fluid where they are detected by local cells, and molecules are also released in the synapses between neurons to relay signals between them, as described in [3, Ch.
16]. MC is used by communities of bacteria in quorum sensing to determine whether the bacteria should coordinate to perform tasks that are too difficult for a single bacterium to accomplish; see [7]. On a larger scale, MC is the underlying mechanism for communication via pheromones, which may travel up to a kilometer or more; see [2, Ch. 52].

Despite their widespread use, MC networks with biological cells are typically designed for the transmission of limited quantities of information, e.g., a message that is a time-varying ON/OFF control signal for a biological process. In order to meaningfully adapt MC for synthetic networks and their corresponding applications, we seek an understanding of the fundamental limits of MC so that we might transmit arbitrarily large amounts of information; see [8] and the seminal paper introducing MC to the communications research community in [9]. To determine these limits, we must study the underlying physical phenomena that make MC possible.

The MC method that has attracted the most attention from the communications research community is free diffusion, cf. e.g. [10–43]. Early experimental work has already developed a functioning macroscale prototype of a system using diffusive communication with flow in [42, 44]. Diffusion can be modeled as a random walk where molecules collide with other molecules in the propagation environment. Its primary advantage is its simplicity, since molecules that are released by a transmitter can freely diffuse away without any external energy or infrastructure requirements. The lack of infrastructure between devices means that diffusion is appropriate for the formation of ad hoc networks between mobile devices.

Diffusion can be very fast over short distances, and is a common means of communication in nature; many cellular processes rely on diffusion for limited quantities of molecules to efficiently propagate both within and between cells, as described in [3, Ch.
16]. However, the average distance traveled by a diffusing molecule is proportional to the square root of the time that it takes to diffuse, so propagation times grow with the square of the distance to the receiver. There is also a general lack of control over where each molecule goes, so a large number of molecules is required to ensure that a sufficient number arrive at the receiver instead of diffusing away. Thus, the reliability of a communication link (i.e., between a transmitter and its corresponding receiver) in diffusive MC is affected by the randomness of diffusion in two ways. First, molecules might not arrive in the time period when they are expected (i.e., within the symbol interval). Second, molecules might arrive in a later time period, resulting in intersymbol interference (ISI). Depending on the specific system model, some molecules may never arrive at the receiver (e.g., as in 3-dimensional unbounded diffusion).

The rest of this chapter is organized as follows. In Section 1.2, we motivate this dissertation by justifying communications analysis of the diffusive MC channel and by identifying related open problems. In Section 1.3, we summarize the contributions made in this dissertation. We describe and motivate our main assumptions in Section 1.4. The organization of the rest of this dissertation is provided in Section 1.5.

1.2 Motivation

The randomness of diffusion makes it an imperfect process that is best described by an expected channel impulse response, i.e., the number of molecules expected at a receiver when molecules are released at some instant by a transmitter.
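The best-known special case is an impulsive point source in unbounded three-dimensional diffusion, for which the expected concentration at distance r and time t after a release of N molecules is C(r, t) = N (4πDt)^(−3/2) exp(−r²/(4Dt)). The sketch below evaluates this standard formula; all numerical values (molecule count, diffusion coefficient, distances) are illustrative assumptions rather than parameters from this dissertation, and the expected receiver count uses the common approximation that the concentration is uniform inside a small passive spherical receiver.

```python
import math

def point_source_concentration(N, D, r, t):
    """Expected molecule concentration (molecules/m^3) at distance r (m) and
    time t > 0 (s) after an impulsive release of N molecules into an
    unbounded 3-D fluid with diffusion coefficient D (m^2/s)."""
    return N * (4 * math.pi * D * t) ** -1.5 * math.exp(-r ** 2 / (4 * D * t))

# Illustrative (assumed) values: 1e4 molecules, D ~ 1e-9 m^2/s (a small
# molecule in water), receiver centered 1 um away with a 0.2 um radius.
N, D, r_0, R_rx = 1e4, 1e-9, 1e-6, 0.2e-6
V_rx = (4 / 3) * math.pi * R_rx ** 3

# If the concentration is roughly uniform over the small receiver volume,
# the expected number of molecules observed there is concentration * volume.
for t in (5e-5, r_0 ** 2 / (6 * D), 1e-3, 1e-2):
    n_expected = point_source_concentration(N, D, r_0, t) * V_rx
    print(f"t = {t:.2e} s -> expected count {n_expected:.3f}")
```

The middle time value, r₀²/(6D), is where this pure-diffusion expected count peaks, which is why the impulse response at a receiver first rises and then decays.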
The expected channel impulse response is a function of the parameters of the diffusive environment, including its geometry, the distance from the transmitter to the receiver, the diffusion coefficient of the molecules, and the time elapsed since the molecules were released. Other phenomena can also impact the status of the diffusing molecules and hence the channel impulse response. These phenomena include chemical reactions that have the molecules of interest as a product or reactant, other sources of those molecules that are not the intended transmitter, and whether there is any bulk fluid flow; see [6]. In the remainder of this section, we motivate communications analysis for diffusive MC and identify open problems in this field.

1.2.1 Communications Analysis

In Section 1.1, we established that we seek an understanding of the fundamental limits of MC so that we might transmit arbitrarily large amounts of information. The most fundamental metric for digital communications performance is the channel capacity, which is the largest information throughput rate that is achievable for a given communication channel; see [1, Ch. 1]. Generally, the capacity is independent of the modulation scheme used by the transmitter, but depends on the physical characteristics of the channel. Existing literature on diffusive MC has sought to numerically characterize the capacity in bits per second or bits per use, cf. e.g. [14, 17, 45–47]. An upper bound on the capacity of a diffusive MC system was derived in [20]. While the capacity can be a helpful guideline, it is also important to consider that individual transceivers would likely have limited computational abilities. Therefore, it is of interest to consider simple modulation schemes and then assess how simple detectors perform in comparison to optimally designed detectors.
Such analysis would provide valuable insight into the practical design of communication networks based on diffusion.

Detector performance is generally measured in terms of bit error probability, which is the probability that a bit in the transmitter sequence will be detected incorrectly by the receiver, e.g., see [1, Ch. 5]. Clearly, a lower bit error probability indicates a better detector, and we can assess the environmental conditions or the detector properties that lead to a lower probability of error. Two methods to determine the bit error probability for a given detector in a particular environment are by simulation and by analysis. Both simulation and analysis require a model based on the environment, which should account for all of the underlying physical phenomena that could affect the probability of error. Simulations do not necessarily require knowledge of the channel impulse response. Two exceptions are if the impulse response is part of the design of the detector (e.g., in an optimal detector), or if the simulations themselves are realizations of the channel impulse statistics instead of simulations of the underlying physical phenomena. If the bit error probability is being derived analytically, then knowledge of the channel impulse response and its statistics is required. We emphasize that knowledge of the channel impulse response is necessary for meaningful communications analysis.

1.2.2 Open Problems

The modeling and analysis of diffusive MC systems rely on an accurate underlying physical model and the corresponding channel impulse response. Since diffusive MC is a relatively new area in the field of communications research, there are still open problems in establishing a system model and determining the channel impulse response. Given a channel impulse response, we can then perform the corresponding communications analysis.
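As a toy illustration of the simulation route described above, the sketch below estimates the bit error probability of a single-sample threshold detector by drawing receiver observations from assumed channel statistics (Poisson-distributed counts whose mean depends on the transmitted bit) rather than by simulating the underlying physical phenomena. All numerical values are hypothetical, and ISI is ignored.

```python
import math
import random

def poisson_sample(rng, mean):
    """Draw one Poisson variate via Knuth's method (fine for small means)."""
    limit, product, count = math.exp(-mean), 1.0, 0
    while True:
        product *= rng.random()
        if product <= limit:
            return count
        count += 1

def estimate_ber(mean_on, mean_off, threshold, n_bits=50000, seed=1):
    """Monte Carlo estimate of bit error probability for ON/OFF keying with
    one observation per bit interval and a fixed decision threshold."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        count = poisson_sample(rng, mean_on if bit else mean_off)
        decision = 1 if count >= threshold else 0
        errors += decision != bit
    return errors / n_bits

# Hypothetical means: 10 molecules expected for bit 1, 2 for bit 0.
print(estimate_ber(mean_on=10.0, mean_off=2.0, threshold=6))
```

The same quantity could instead be derived analytically from the Poisson tail probabilities under each hypothesis, which is precisely the simulation-versus-analysis trade-off described above.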
A selection of open problems in the development of a diffusive MC system model and its analysis is as follows:

• Environmental Flow: A flowing propagation environment has generally not been considered in diffusive MC system models that have more than one dimension. However, flow is likely to be present in any environment where MC is deployed; see [4]. Flows play an important role when the distance that molecules must travel is greater than what can be practically achieved via diffusion alone. For example, the advection of blood in the body enables the transport of oxygen from the lungs to tissues and also facilitates the removal of waste and toxic molecules via the liver and kidneys; see [6]. Also, a household electric fan was used to assist diffusion in the macroscale testbench developed in [42]. Flows can also help mitigate ISI in a communications context by carrying old molecules away from the receiver. The modeling of flow in one-dimensional diffusive MC has assumed that the flow is only in the direction from the transmitter towards the receiver, cf. e.g. [13, 15, 17, 18, 23, 36, 37, 48]. In such a scenario, if the transmitter is encoding information in the exact time that a molecule is released (as in [17, 18]), then the flow must be towards the receiver and so communication is only possible in that direction. We call a flow in any other direction a disruptive flow, because it could make molecules less likely to reach the receiver. One recent paper that considered disruptive flows in two dimensions is [22].

• Mitigation of ISI: The early literature on diffusive MC primarily dealt with ISI via passive strategies where the transmitter must wait sufficiently long for previously-emitted information molecules to diffuse away before it can release more molecules, thereby limiting the maximum transmission rate.
For example, ISI was often ignored, as in [13, 17, 18, 26, 36, 49–51], or it was assumed that interfering molecules are released no earlier than the previous bit interval, as in [16, 25, 36, 45, 52–57]. One potential strategy to mitigate ISI without compromising the data transmission rate is to modify the channel itself (as we already discussed for flow). It may not be practical to introduce a bulk flow in an environment where one did not previously exist, but it might be practical to introduce mechanisms for the information molecules to degrade while they propagate. For example, enzymes are catalytic molecules with very high selectivity for their substrates; see [3, Ch. 16]. If an enzyme molecule is able to break down an information molecule, then enzymes could be used to reduce ISI. This application can already be found in nature, e.g., acetylcholinesterase is an enzyme in the neuromuscular junction that hydrolyzes acetylcholine as it diffuses to its destination, as described in [58, Ch. 12]. One significant advantage of using enzymes in the propagation environment is that no additional complexity is required at either the transmitter or the receiver. Enzyme-aided degradation is often described using Michaelis–Menten kinetics (see [58, 59]), but simpler chemical kinetics could also be used to approximate Michaelis–Menten kinetics in both simulation and analysis.

Papers that have considered information molecules reacting in the propagation environment of diffusive MC include [15, 26, 31, 60, 61]. In [15], the spontaneous destruction and duplication of information molecules are treated as noise sources but were not deliberately imposed to improve communication. In [60], the exponential decay of information molecules was considered via simulation as a method to reduce ISI.
However, information was measured as the total number of molecules to reach the receiver, so the achievable information rate actually decreased when information molecules were allowed to decay. The placement of enzymes along the boundaries of the propagation environment with the goal of reducing ISI was proposed in [61], but analytical results were not provided.

• External Noise Sources: The study of molecule sources in a diffusive MC environment has generally been limited to considering the intended transmitter. However, there could be other sources of the same type of molecule, which we refer to as external molecule sources. For example, there could be multiuser interference (MUI) caused by molecules that are emitted by the transmitters of other communication links; see [27, 34, 55, 62, 63]. The impact of MUI on capacity was evaluated numerically in [62]. MUI due to molecules released no earlier than the previous symbol interval was considered in [27, 55, 63]. Analysis of MUI from randomly-placed transmitters releasing Gaussian signals was performed in [34]. The impact of MUI from transmitters using a practical modulation scheme, in consideration of all molecules released over all symbol intervals, has not been established.

There could also be unrelated biochemical processes that generate the same type of molecule, which is especially likely if a naturally-occurring molecule is selected for biocompatibility. The spontaneous generation of information molecules throughout the environment was considered in [15]. There exists no analysis of the time-varying or asymptotic impact of an immobile random noise source in diffusive MC.

• Channel Statistics: It is often assumed that the underlying statistics of a diffusive MC channel impulse response in two or three dimensions follow a Gaussian distribution (see [14, 16, 19, 22, 24, 35, 41, 64]), perhaps in part due to the simplicity and prevalence of the Gaussian distribution in conventional communications analysis; see [1].
However, using the Gaussian distribution for the channel statistics is an approximation and is not always appropriate, especially if individual molecules are very unlikely to arrive at the receiver. The Poisson approximation is more accurate when arrivals are unlikely, since it then better approximates the underlying Binomial distribution; see [65, Ch. 5] and its use in diffusive MC in [11, 15, 20, 27, 38, 39, 43, 66–68]. There has been very limited work comparing the suitability of the Poisson and Gaussian approximations in diffusive MC; recent examples are [68, 69]. Such a comparison is very important because communications analysis is derived from the statistics that are chosen.

• Knowledge of Channel Parameters: When designing detectors or analyzing system performance, it is almost universally assumed that the channel parameters are perfectly known. This is not necessarily a realistic assumption, especially if knowledge of the parameters is needed at the local transmitter or receiver as part of the detector design. Two limitations are that a local receiver can at best make noisy observations of the channel impulse response, and that the underlying parameters might change over time. We claim that there is value in knowing the underlying channel parameters and not just the channel impulse response, particularly in applications where the receiver is supposed to monitor the underlying parameters. For example, the diffusion coefficient could be a proxy for blood composition and used to identify major changes in blood cell counts, as described in [70].
Existing work in diffusive MC has considered estimating the distance between two devices (as in [71–74]) and achieving synchronization (as in [64, 75]).

• Detection at Receiver: The most common receiver design in diffusive MC, where the transmitter releases one type of molecule, has been to count the number of molecules that are at the receiver at some instant or the number that arrive over the course of a symbol interval (as in [16, 23, 26, 28, 31, 35–38, 45, 51, 56, 57, 76, 77]). This number is then compared to a decision threshold to determine the received bit, and the process is repeated independently in every symbol interval. More advanced techniques could provide significant improvements in performance at the cost of additional computational complexity. For example, a detector could make some samples taken during a symbol interval more influential than other samples taken at different times. More generally, a receiver could perform joint detection, where the potential for ISI is included in the detection criteria.

Recent works that have studied optimal sequence detection include [19, 22–24, 38, 39, 78]. In [19], the Gaussian approximation is applied in a 2-dimensional environment without flow where the transmitter symbols are represented by different molecules. Flow was added to that model in [22]. In [23], a 1-dimensional flowing environment with an absorbing receiver is considered. In [24], the Gaussian approximation is applied in a 3-dimensional environment without flow and the receiver makes a single observation per bit interval. A similar model but with multiple samples per interval was considered in [78].
The Poisson approximation was considered in a 3-dimensional environment without flow where the receiver makes multiple observations in [38], and in an arbitrary environment (including 3-dimensional) that can have flow and an absorbing receiver in [39].

To better understand the scope of these open problems in the context of conventional communications research, we highlight some key differences between diffusive MC and conventional wireless communication, as summarized in Table 1.1.

Table 1.1: Differences between diffusive MC and conventional wireless communication.

Property                   Conventional Wireless           Diffusion
Media [8]                  Air                             Fluid
Signal Type [8]            Electrical/Optical              Chemical
Propagation Speed [8]      Speed of light (3 × 10^8 m/s)   On order of µm/s
Propagation Range [8]      m–km                            nm–µm (without flow)
Energy Consumed [8]        Electrical (high)               Chemical (low)
Sources of Directionality  Antenna, obstacles              Flow, boundary
Sources of Noise           Fading, thermal noise           Diffusive, chemical
Design Perspective         Transceiver                     Transceiver, channel

In conventional wireless, electromagnetic or optical signals are transmitted through the air over a range of up to many kilometers. The directionality of the signal is influenced by the antenna design and obstacles in the environment. Signals travel at the speed of light and can be impaired by phenomena that include fading and thermal noise. The imperfections in the channel are addressed by the design of the transceivers. In diffusive MC, the signals are chemical molecules that are released into a fluid and cannot easily travel over ranges beyond the µm scale unless the molecules are also carried by a flow. The transmitter can have little to no influence on the directionality of the signal since the direction of diffusion is random. There is also randomness introduced by the channel if the molecules are capable of participating in chemical
While the design of the transceivers may be constrained by limited compu-tational capacity, we also may have the option of modifying the channel to improvethe communications performance (e.g., by introducing a flow or reaction catalysts).1.3 ContributionsThis dissertation presents a novel model for three-dimensional diffusive MC wherediffusing molecules can also be carried by steady uniform flow or participate in chem-ical reactions that lead to their degradation. Our analysis includes the estimationof the channel parameters at the receiver, optimal and suboptimal detector design,and the development of an approximation for multiuser and intersymbol interferencebased on asymptotic noise.The specific contributions of this dissertation are summarized as follows:1. Channel Modeling: We introduce the most detailed diffusive MC model forwhich a closed-form time domain expression of the expected channel impulse12Chapter 1. Introductionresponse is available. Namely, we model the dynamics of unbounded diffusionin addition to steady uniform flow in any direction, mechanisms for moleculedegradation, and sources of information molecules in addition to the intendedtransmitter. Both flow and molecule degradation are realistic phenomena thatcould improve communications performance without adding complexity to thetransmitter or receiver, whereas additional sources of information moleculeswill negatively impact the communication link. We describe the model in di-mensional and dimensionless forms, where the dimensionless form enables usto scale the model to any arbitrary dimension for which the assumptions arestill valid. Given the model, we derive the expected channel impulse response,which describes the number of molecules expected at the receiver given that animpulse of molecules is released by the transmitter. We show that the under-lying channel statistics for one impulse follow a Binomial distribution. 
We use superposition to describe the number of molecules expected at the receiver due to all releases of molecules by all sources, and show that the cumulative signal observed at the receiver can be accurately approximated as a Poisson random variable. We also determine the mutual information between consecutive observations made at the receiver to measure the independence of those observations. Simulation results from our custom microscopic simulator show the accuracy of the expected channel impulse response, verify the channel statistics, and provide insight about the time needed between consecutive observations to assume that they are independent.

2. Channel Parameter Estimation: We introduce the joint parameter estimation problem where we attempt to estimate all of the channel parameters. Namely, we seek to estimate the distance between transceivers, the diffusion coefficient, the uniform flow vector, the molecule degradation rate, the number of molecules released by the transmitter, and the time when the molecules are released. We estimate the parameters using independent observations of one impulse from the transmitter. We find the Cramér-Rao lower bound (CRLB), which is a bound on the variance of estimation error of any locally unbiased estimator. We derive the maximum likelihood (ML) estimate of any single parameter. We also propose peak-based estimation as a low-complexity protocol for estimating any one channel parameter. Simulation results compare ML estimation and peak-based estimation with the CRLB. We show that when the ML estimate is biased, it can perform better than the CRLB, but this becomes unlikely as more observations are used.

3. Receiver Design: We derive the optimal sequence detector (in an ML sense) as a lower bound on the bit error probability for a receiver detecting ON/OFF keying impulses from the transmitter.
Our optimal detector relies on the Poisson approximation of the Binomial statistics and is derived for any number of receiver observations in a bit interval. We introduce weighted sum detectors as suboptimal but more physically realizable alternatives, where each observation in a bit interval is assigned a weight and the sum of the weighted observations is compared with a decision threshold. We derive the probability of error of the weighted sum detector, where we use the Poisson approximation if the weights are equal and the Gaussian approximation for non-equal weights. Simulation results show the average detector error probability and its sensitivity to the channel phenomena and to the number of samples taken in a single bit interval.

4. Noise Modeling: We use the expected channel impulse response to derive the number of molecules expected from a source that releases molecules continuously. Molecules from such a source contribute noise to the intended communication link. We derive the time-varying and asymptotic number of molecules expected from a continuous source, and then apply the asymptotic noise model to approximate the effects of multiuser and intersymbol interference. Simulations show that the expressions for the number of molecules from a continuous source are highly accurate, that asymptotic noise is an effective approximation for multiuser interference that is sufficiently far from the receiver, and that asymptotic noise simplifies the evaluation of the expected bit error probability of the weighted sum detector.

1.4 Assumptions

In this section, we list and justify our 12 primary system model assumptions. Where appropriate, we discuss how an assumption approximates a more realistic model, whether it has been commonly adopted in the existing literature, and whether existing literature has been able to relax the assumption. By presenting our primary assumptions in a single section, we aim to highlight the applicability of the contributions of this dissertation and to easily identify areas for future study. For clarity of presentation, we omit the definition of specific system variables until Chapter 2.

The 12 primary assumptions of this dissertation are as follows:

1. Unbounded Environment: We assume that the propagation environment is unbounded. Any practical molecular environment has boundaries that restrict motion. The unbounded assumption is often made in diffusive MC research (see [11, 12, 15, 16, 19, 20, 22, 24, 27, 29–31, 33–35, 38, 40, 41, 64]), because the expected channel impulse response of diffusion from a point source into an unbounded environment is a well-known analytical result; see [79, Ch. 3]. Fortunately, as long as the size of the environment is much greater than the distance between the molecule sources and the RX, this assumption holds. We test this assumption indirectly in our simulations that apply Michaelis–Menten kinetics, because we must restrict the space over which enzymes can diffuse to a finite volume that we treat as infinite in size.

2. Constant Diffusion Coefficient: We assume that the propagation environment can be described by a constant diffusion coefficient for each type of molecule (i.e., each molecular species). Thus, we assume that the environment has uniform constant temperature and viscosity, and that all solute molecules (including the information molecules) are locally dilute everywhere. Strictly speaking, the diffusion coefficient is a function of the local concentration of solute molecules; see [80]. To be locally dilute means that the number of solute molecules in any region is small enough for us to ignore potential collisions between those molecules, such that the diffusion coefficient does not vary with the local concentration.
One recent work that considered subdiffusion for MC is [81], where localized molecular crowding was shown to slow down the propagation of signals from the TX. Collisions between information molecules were also considered in [82].

3. One Communication Link: We consider a single communication link between one TX and its intended RX, though there can be other molecule sources in the same environment. The transmission of information is one way, i.e., unidirectional, such that the RX does not return any information to the TX. This point-to-point model is the simplest to consider and the foundation of more complex communication strategies. It has also been the focus of the majority of existing diffusive MC research, cf. e.g., [13, 15–28, 30–33, 35–38, 40, 41, 43, 64]. The analysis of this model can be extended to scenarios that include relaying, broadcasting, and multiple-access schemes, all of which are composed of multiple point-to-point links. Relaying places one or more transceivers between the initial TX and the final RX, such that a chain of point-to-point links is used to reach an RX that might otherwise be too far away to reliably send information; see [83–88] for the analysis of relaying in diffusive MC. In a broadcast scheme, a TX transmits information to multiple RXs. Relevant analysis from the perspective of diffusive MC includes [14, 60, 89]. In a multiple-access scheme, multiple TXs send information to a single RX, as discussed for diffusive MC in [14, 51, 66, 90].

4. Synchronized Transceivers: We assume that the RX is synchronized with the TX, i.e., that they have synchronized clocks for determining the start of symbol intervals. Achieving synchronization via diffusion between the TX and RX is a non-trivial problem that has been considered in [64, 75] and that we also consider for our system model in Chapter 3.
The authors of [72] proposed synchronizing via an external signal that is observed by both the TX and RX, although they did not consider how this might be implemented. It is generally assumed (as in [13, 15–20, 22–27, 33, 36–41]) that perfect synchronization can be achieved, although in practice that would depend on the computational complexity of the transceivers and whether an external signal is available. A lack of synchronization can only degrade communication performance, so results obtained with the assumption of synchronization are a bound on practical performance.

5. Impulsive Molecule Release: We assume that transmitters are point sources that can release impulses of information molecules instantaneously with the use of ON/OFF keying modulation. However, our simulations initialize information molecules over a sphere that is centered at the point source. Some other papers that have considered a volume source include [21, 30, 32, 35, 43]. Impulsive point sources are analytically much simpler than volume sources that release molecules over finite time. Realistically, molecules will take finite time to be released, but we assume that this time is small relative to the time it takes for molecules to reach the RX. Authors have considered non-impulsive release in [24, 29, 30, 35]. Our results with ON/OFF keying can be readily generalized to pulse amplitude modulation, where the number of molecules released corresponds to a particular information symbol, as described in [30, 41, 45, 78, 91, 92]. This assumption appears to contradict our assumption that local concentrations are always small enough to model the diffusion coefficient as constant, but we claim that the violation of the dilute assumption is sufficiently brief to be ignored.

6. Steady Uniform Flow: We assume that there is bulk fluid flow that is both steady and uniform but can be in any direction.
In a steady flow, the pressure, density, and velocity components at each point in the stream do not change with time; see [93]. If uniform, then these components are identical throughout the environment. Steady uniform flow is analytically the simplest type of flow. Existing MC literature has generally assumed that flow is both steady and uniform, but also only in the direction of transmission, i.e., in the same direction as a line pointing from the TX to the RX, cf. e.g., [13, 15, 17, 18, 23, 36, 37, 44, 47, 91, 94–96]. We define disruptive flows as any flow component that is not in the direction of transmission. These flows are disruptive in the sense that they reduce the peak number of molecules expected to be observed at the receiver. Another paper that has considered disruptive flow is [22]. MC models that use a timing channel (as in [17, 18]), where every molecule released by the TX must arrive at the RX, cannot accommodate disruptive flow.

Other flows of interest but outside the scope of this dissertation include laminar flow (where successive layers of fluid slide over one another without mixing) and turbulent flow (where fluid motion is even more chaotic than under diffusion alone). These flows can generally describe flow in cylindrical environments such as blood vessels. Laminar flow is predominant in small blood vessels, where viscous forces are relatively stronger than inertial forces over the distance scale of interest; see [58, Ch. 5] and application to MC in [76, 97–99]. Turbulent flow occurs when inertial forces dominate viscous forces, and has been considered for diffusive MC in [44, 100].

7. Michaelis–Menten Molecule Degradation: We consider the enzyme-aided degradation of information molecules in the fluid propagation environment via stochastic Michaelis–Menten kinetics. The Michaelis–Menten mechanism is generally accepted as the fundamental mechanism for simple enzymatic reactions; see [58, 59].
It has been commonly studied in combination with diffusion, particularly for modeling the hydrolyzing of acetylcholine by the enzyme acetylcholinesterase as it diffuses in the neuromuscular junction; see [101–103]. More complex enzyme mechanisms include allosteric interactions, where one enzyme typically has binding sites for multiple substrates, and competitive and non-competitive inhibition; see [59, Ch. 10]. Michaelis–Menten kinetics at the RX of a diffusive MC system were considered with the steady-state approximation (and deterministic kinetics) in [50, 104].

8. First-Order Molecule Degradation: We also consider the direct (i.e., first-order) stochastic degradation of information molecules in the propagation environment. This model accounts for the natural degradation of the molecules, and it can also be used to approximate higher-order reactions or reaction mechanisms with multiple steps. In Chapter 2, we show that first-order degradation can approximate the behavior of Michaelis–Menten kinetics. From an MC perspective, a first-order approximation was used in [67] to model chemical reactions at the RX. First-order degradation of information molecules in the propagation environment has also been considered in [26, 31].

9. Independent Molecule Behavior: We assume that the behavior of each molecule is independent of the behavior of all other molecules of the same species. From the context of diffusion, this assumption is the same as assuming a constant diffusion coefficient. However, with Michaelis–Menten kinetics, the behavior of information molecules is inherently coupled. If any one information molecule binds to an enzyme, then that enzyme is (temporarily) unavailable to other information molecules, i.e., the effective enzyme concentration decreases. The impact of temporary binding depends on how many enzyme molecules there are and the corresponding reaction rate constants.
We ignore this effect when we approximate the corresponding channel impulse response and when we model the cumulative signal at the RX, but it is included in the simulations of the Michaelis–Menten kinetics.

10. Passive Receiver: We assume that the RX is an ideal passive observer that does not interact with the information molecules but is able to perfectly count the number of those molecules that are within its observation space at a given time. We make this assumption in order to focus on the effects of the propagation environment and for tractability. Recent work in [21, 105] has incorporated the impact of a chemical reaction mechanism at the RX where the observable molecules are the output of a reaction involving the molecules emitted by the TX. The derivation of the channel impulse response from a point TX to an absorbing spherical RX was recently introduced to the MC literature in [33].

11. Spherical Receiver: When we need to impose a shape on the size of the RX, then we assume that it is spherical (or, equivalently by a factor of 2, hemispherical if it is mounted to an infinite plane that forms an elastic boundary of a semi-infinite environment). We focus on the sphere because this shape is naturally formed by membrane bi-layers; see [3, Ch. 11]. These bi-layers can have molecule receptors embedded in them as part of a functioning nanomachine, such as in a cell. Existing literature in more than one dimension generally assumes that the RX is circular or spherical (as in [11, 12, 14–16, 19–22, 24, 35, 38, 41, 43]) or it is approximated as a point (as in [34, 51, 55, 57, 77, 106]).

12. Poisson Statistics: We assume that we can accurately approximate a Binomial distribution with a Poisson distribution, and we also occasionally consider a Gaussian approximation. These distributions are used to describe the number of molecules observed by the RX at some time. The underlying distribution is Binomial, but the Poisson distribution is easier to manipulate.
The Poisson distribution is reproductive, so the sum of independent Poisson random variables is also a Poisson random variable whose mean is the sum of the means of the individual variables; see [65, Ch. 5]. The Gaussian approximation, which is also reproductive, has been more widely applied throughout the diffusive MC literature (e.g., see [14, 16, 19, 22, 24, 35, 41, 64]). In [68], the Poisson approximation was found to be more accurate than the Gaussian approximation as the RX is placed further from the TX. We make a more general claim: the Poisson approximation is more accurate than the Gaussian approximation when a smaller fraction of released molecules are expected at the RX. We consider systems where individual molecules are unlikely to be observed at the RX, so the Poisson approximation is more appropriate. The accuracy of this assumption will be assessed in Section 2.6.

1.5 Organization of the Dissertation

The structure of the dissertation follows directly from the list of contributions and is as follows:

• Chapter 2 – Channel Model and Impulse Response: In Chapter 2, we define the system model that is studied throughout the entire dissertation. We consider 3-dimensional diffusive MC with steady uniform flow in any direction and the ability for information molecules to participate in chemical reactions that lead to their degradation. We model degradation via either Michaelis–Menten kinetics or a first-order reaction. We derive the expected channel impulse response from the expected point concentration of information molecules, where we either integrate the point concentration over the sphere or we apply the uniform concentration assumption (UCA), where we assume that the concentration of molecules throughout the receiver is uniform and equal to that expected at the center of the receiver.
We describe the underlying statistics of the observations and motivate using the Poisson approximation of the Binomial distribution for the cumulative distribution function. Next, we derive the mutual information between consecutive observations made at the receiver to determine how frequently independent observations can be made. We also propose a microscopic simulation framework that can accommodate our reaction-diffusion system. Simulations verify the expected channel impulse response, assess the underlying channel statistics, verify the UCA, and measure the independence of observations as a function of the time between them.

• Chapter 3 – Joint Channel Parameter Estimation: In Chapter 3, we perform joint parameter estimation for the diffusive MC channel, where we estimate the distance between the transceivers, the diffusion coefficient, the flow vector, the molecule degradation rate, the number of molecules released by the transmitter, and the time when those molecules were released. We derive the Fisher Information Matrix (FIM) to give the Cramer-Rao lower bound (CRLB) on the variance of estimation error of any locally unbiased estimator as a function of independent observations of molecules released in an impulse by the transmitter. The joint estimation problem simplifies to the estimation of any subset of the channel parameters. We study maximum likelihood (ML) estimation and compare its performance with the CRLB. We show that ML estimation can be better than the CRLB when the FIM is singular or in the vicinity of singularities, such that ML estimation becomes biased.
For the implementation of low-complexity estimators we propose peak-based estimation, and show how the peak number of molecules observed at one time or the time of the peak observation can be used to estimate any single parameter given knowledge of the other parameters.

• Chapter 4 – Optimal and Suboptimal Receiver Design: In Chapter 4, we design optimal and suboptimal receivers when the transmitter encodes a binary sequence via impulsive ON/OFF keying. The optimal sequence detector is derived in an ML sense and gives a lower bound on the bit error probability for any detector at the receiver. We propose a modified Viterbi algorithm to implement the optimal detector that has a limited number of states but where each state accounts for the intersymbol interference due to all previous bits. We then introduce weighted sum detectors as suboptimal but more physically realizable detectors. We derive the expected bit error probability of the weighted sum detector, and show the optimality of matched filter weights in the absence of intersymbol interference. Simulations demonstrate the impact of molecule degradation, steady uniform flow, external noise sources, and the frequency of observations on detector performance.

• Chapter 5 – A Unifying Model for External Noise Sources and Intersymbol Interference: In Chapter 5, we extend the model for the cumulative signal at the receiver to account for molecule sources that release molecules continuously and not only as a series of impulses. Such sources contribute noise to the intended communication link. We derive time-varying and asymptotic expressions for the number of molecules expected at the receiver due to a source that releases molecules continuously. Closed-form solutions are available for a number of special cases in our system model.
Next, we use the model for asymptotic noise to approximate the impact of multiuser interference, where the sources of interference release impulses of molecules according to the encoding of binary sequences via ON/OFF keying. We also use the model for asymptotic noise to approximate the intersymbol interference (ISI) in the intended communication link. Simulations demonstrate the accuracy of the expressions for the number of molecules expected at the receiver, and show how approximating old ISI as asymptotic noise can simplify the evaluation of the expected bit error probability of the weighted sum detector.

• Chapter 6 – Conclusions and Topics for Future Research: In Chapter 6, we provide a summary of the main contributions and findings of this dissertation and outline areas for future research.

• Appendix A – List of Other Publications: Appendix A lists contributions that were made during the author's Ph.D. program but are outside the scope of this dissertation.

• Appendices B and C – Proofs: Appendices B and C contain proofs of some of the theorems used in this dissertation.

Chapter 2 – Channel Model and Impulse Response

2.1 Introduction

Meaningful communications analysis requires a detailed understanding of the propagation environment. Performance metrics such as the throughput and reliability cannot be described unless the channel between a transmitter and its receiver is characterized. For diffusive molecular communication, this means being able to model the behavior of molecules from the time they are released by the transmitter until they are removed from the environment. Diffusion is noisy; it is an imperfect process that can be best described by an expected channel impulse response, i.e., the number of molecules expected at a receiver when molecules are released at some instant by a transmitter.
The expected channel impulse response is a function of the parameters of the diffusive environment, including its geometry, the distance from the transmitter to the receiver, the diffusion coefficient of the molecules, and the time elapsed since the molecules were released. Other phenomena can also impact the status of the diffusing molecules and hence the channel impulse response. These phenomena include chemical reactions that have the molecules of interest as a product or reactant, other sources of those molecules that are not the intended transmitter, and whether there is any bulk fluid flow.

Even in the simplest scenario, where diffusion is the only phenomenon considered, the expected channel impulse response of diffusive MC can only be derived in closed form if there are simplifying assumptions and specific system geometries; see [79]. Most existing literature on diffusive MC has been limited to the diffusion-only scenario, or has considered absorption at the receiver when the physical environment is 1-dimensional.

In this chapter, we present our system model, which despite its assumptions is the most general diffusive MC model for which a closed-form time domain expression of the channel impulse response is available. The contributions of this chapter are as follows:

1. We present a system model for diffusive MC that includes multiple molecule sources, steady uniform flow in any direction, and chemical reaction mechanisms that result in the degradation of information molecules in the propagation environment. We derive the expected point concentration due to a point molecule source in this system, which we then use to derive the expected channel impulse response at the RX. The cumulative signal at the RX due to multiple molecule sources is presented, and the statistics of the RX signal are discussed.

2.
We derive the mutual information between consecutive observations made by the RX in order to measure the independence of those observations. The assumption of independent observations is in general not satisfied, particularly as the time between observations tends to zero. However, we require sample independence for tractability in determining the bounds on parameter estimation performance in Chapter 3 and in the design and assessment of detectors in Chapter 4. Thus, it is important to develop criteria to measure independence.

3. We introduce a custom particle-based simulation framework and present simulation results using that framework to verify the expected channel impulse response, the statistics of the RX signal, and the mutual information between consecutive observations made by the RX.

The remainder of this chapter is organized as follows. In Section 2.2, we describe the system model in both dimensional and dimensionless form, and summarize the behavior of the RX and of all molecule sources. In Section 2.3, we derive the expected channel impulse response, describe the cumulative signal observed by the RX, and discuss the statistics of that signal. The independence of consecutive observations of the RX signal is derived in Section 2.4. We present our simulation framework in Section 2.5. Numerical and simulation results to assess the system model are presented in Section 2.6. Section 2.7 concludes the chapter.

2.2 System Model

In this section, we establish the system model that will be considered throughout the remainder of this dissertation. First, we define the pertinent variables of the physical environment, including the differential equations that describe molecular behavior. Second, we define reference variables to convert the system model into dimensionless form, which facilitates some of the analysis and generalizes the model's scalability to arbitrary dimensions.
Finally, we describe the communication link of interest and the behavior of all molecule sources.

2.2.1 Dimensional Model

We consider a 3-dimensional fluid environment as shown in Fig. 2.1. The environment is unbounded, filled with unspecified solvent molecules, and has uniform temperature and viscosity.

Figure 2.1: The system model considered in this dissertation. The TX is a point source of $A$ molecules and the RX is a passive observer centered at the origin. Other point sources can also release $A$ molecules. The $A$ molecules are shown as small hollow circles and some are labeled. Molecule 1 is inside $V_{RX}$ and so can be observed by the RX. Molecule 2 was previously inside $V_{RX}$ and is now outside because the RX is non-absorbing. Once released by the TX, the behavior of each molecule is that of a biased random walk (biased by the steady flow $\vec{v}$). At any time, a molecule can also participate in a degradation reaction via (2.1) or (2.2).

The communication link of interest is between two fixed devices, which we label the transmitter (TX) and the receiver (RX) because we focus on one-way communication. The TX is a point source whereas the RX is a sphere of radius $R_{RX}$ and volume $V_{RX}$. The coordinate axes are defined by placing the center of the RX at the origin and the TX at Cartesian coordinates $\{-d, 0, 0\}$, i.e., the distance between the TX and the center of the RX is $d$. The RX is a passive observer that does not impede diffusion or initiate chemical reactions. By symmetry, the concentrations observed in this environment are equivalent (by a factor of 2) to those in the semi-infinite case where $z \geq 0$ is the fluid environment, the $xy$-plane is an elastic boundary, and the receiver is a hemisphere whose circle face lies on the boundary; see [79, Eq. (2.7)].

The bulk fluid has a steady uniform flow $\vec{v}$ with components $v_\parallel$ and $v_\perp$. $v_\parallel$ is
v‖ isthe component of v in the direction of a line pointing from the TX towards the RX,and v⊥ is the component of v perpendicular to v‖ (the precise direction of v⊥ in theunbounded case is irrelevant due to symmetry).We consider a single type of information molecule, which we label an A molecule.The RX is capable of observing A molecules if they are within VRX. A moleculesare capable of participating in chemical reactions that can lead to their irreversibledegradation. Once degraded, an A molecule can no longer be recognized by the RX.We consider two different degradation models. In the first model, A moleculeshave a negligible natural degradation rate into APmolecules, but an individual Amolecule can degrade much more quickly if it binds to an enzyme molecule. Enzymemolecules are labeled E molecules, and they are distributed uniformly throughout theenvironment. We assume that A and E molecules react via the MichaelisMentenreaction mechanism, described asE + Ak1−−⇀↽−k−1EAk2−→ E + AP, (2.1)where EA is the intermediate formed by the binding of an A molecule to an Emolecule, and k1, k−1, and k2 are the reaction rates for the reactions as shown in(2.1) with units molecule−1m3s−1, s−1, and s−1, respectively. The A molecule in thismechanism is also known as the substrate. We see that A molecules are irreversiblydegraded into APby the reaction defined by k2 while the enzymes are released intactso that they can participate in future reactions. In fact, the sum of enzyme andintermediate molecules remains constant. Throughout this dissertation, we refer tothe three reactions in (2.1) associated with k1, k−1, and k2 as the binding, unbinding,and degradation reactions, respectively. The binding reaction is second order because30Chapter 2. Channel Model and Impulse Responsethere are two reactants (i.e., E and A). 
The unbinding and degradation reactions are both first order and they share the same reactant (i.e., $EA$).

In the second degradation model, the $A$ molecules have a non-negligible natural degradation rate and there are no other molecular species represented. This reaction mechanism can be described as

$$A \overset{k}{\longrightarrow} \emptyset, \qquad (2.2)$$

where $k$ is the reaction rate constant in $\mathrm{s}^{-1}$. If $k = 0$, then this degradation is negligible. Eq. (2.2) is a first-order reaction, but it can be used to approximate higher-order reactions or reaction mechanisms with multiple steps, including (2.1).

We use a common notation to describe all molecular species of interest (i.e., $A$, $E$, and $EA$ molecules; the formation of $A_P$ is irreversible and it cannot be detected by the RX so it can be ignored). The concentration of species $\Omega$ at the point defined by vector $\vec{r}$ and at time $t$ in $\mathrm{molecule}\cdot\mathrm{m}^{-3}$ is $C_\Omega(\vec{r}, t)$. We will often write the concentration as $C_\Omega$ for compactness. We also define $C_{E_{tot}}$, which does not vary over time and space but is the constant global sum concentration of both $E$ and $EA$ molecules. We assume that all molecules are sufficiently dilute everywhere in the environment, such that each molecule diffuses independently of every other molecule and with constant diffusion coefficient $D_\Omega$. For compactness throughout this dissertation, we often drop the subscript $A$ from the concentration $C_A$ and the diffusion coefficient $D_A$ when it is the only molecular species of interest, i.e., in the absence of enzymes.

One common expression for a constant diffusion coefficient is the Einstein relation, given as [58, Eq. (4.16)]

$$D_\Omega = \frac{k_B T_K}{6\pi\eta R_\Omega}, \qquad (2.3)$$

where $k_B$ is the Boltzmann constant ($k_B = 1.38\times10^{-23}$ J/K), $T_K$ is the temperature in kelvin, $\eta$ is the viscosity of the propagation environment, and $R_\Omega$ is the molecule radius.
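As a quick numerical illustration of the magnitudes that (2.3) produces, the following sketch evaluates the Einstein relation for an assumed small molecule of radius 0.5 nm in water at body temperature; these parameter values are illustrative only and are not taken from this dissertation:

```python
import math

# Evaluate the Einstein relation (2.3): D = k_B * T_K / (6 * pi * eta * R).
# All parameter values below are illustrative assumptions.
k_B = 1.38e-23   # Boltzmann constant, J/K
T_K = 310.0      # temperature, K (body temperature)
eta = 1e-3       # viscosity of water, Pa*s
R = 0.5e-9       # assumed molecule radius, m

D = k_B * T_K / (6 * math.pi * eta * R)
print(f"D = {D:.3e} m^2/s")  # on the order of 1e-10 m^2/s
```

The result is on the order of $10^{-10}\,\mathrm{m}^2/\mathrm{s}$, consistent with the order-of-magnitude role that the relation plays in this dissertation.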
The Einstein relation was derived assuming infinite dilution (i.e., there is only one $\Omega$ molecule) and that the solvent molecules in the propagation environment have negligible size in comparison with the $\Omega$ molecule; see [80, Ch. 5]. In practice, the accuracy of the Einstein relation is limited, and we consider it in this dissertation to get a sense of appropriate values for $D_\Omega$ (i.e., within an order of magnitude). Typically, $D_\Omega$ is found via experiment.

The partial differential equation describing the motion of $\Omega$ molecules due to independent diffusion alone is known as Fick's Second Law and is written as [58, Ch. 4]

$$\frac{\partial C_\Omega}{\partial t} = D_\Omega \nabla^2 C_\Omega. \qquad (2.4)$$

The general reaction-diffusion equation for species $\Omega$ is [107, Eq. (8.12.1)]

$$\frac{\partial C_\Omega}{\partial t} = D_\Omega \nabla^2 C_\Omega + g_\Omega\left(C_\Omega, \vec{r}, t\right), \qquad (2.5)$$

where $g_\Omega(\cdot)$ is the reaction term. Applying the principles of chemical kinetics (see [59, Ch. 9]) to the reaction mechanism in (2.1), the corresponding system of partial differential equations in the absence of flow is

$$\frac{\partial C_A}{\partial t} = D_A \nabla^2 C_A - k_1 C_A C_E + k_{-1} C_{EA}, \qquad (2.6)$$
$$\frac{\partial C_E}{\partial t} = D_E \nabla^2 C_E - k_1 C_A C_E + k_{-1} C_{EA} + k_2 C_{EA}, \qquad (2.7)$$
$$\frac{\partial C_{EA}}{\partial t} = D_{EA} \nabla^2 C_{EA} + k_1 C_A C_E - k_{-1} C_{EA} - k_2 C_{EA}. \qquad (2.8)$$

Similarly, the partial differential equation describing the reaction-diffusion behavior of $A$ molecules when (2.2) is the only reaction can be written as

$$\frac{\partial C_A}{\partial t} = D_A \nabla^2 C_A - k C_A, \qquad (2.9)$$

and we can immediately see that (2.9) approximates (2.6) if we assume that $k = k_1 C_E$ and $k_{-1} C_{EA} \to 0$. We will re-visit this approximation in Section 2.3.

To incorporate steady uniform flow, we only need to change the diffusion coefficient terms, as described in [108, Ch. 4]. Specifically, for the $A$ molecules we replace $D_A \nabla^2 C_A$ with

$$D_A \nabla^2 C_A - v_\parallel \frac{\partial C_A}{\partial x} - v_{\perp,1} \frac{\partial C_A}{\partial y} - v_{\perp,2} \frac{\partial C_A}{\partial z}, \qquad (2.10)$$

where $v_\perp^2 = v_{\perp,1}^2 + v_{\perp,2}^2$, and analogous substitutions can be made for the $E$ and $EA$ molecules.
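The claim that (2.9) approximates (2.6) when $k = k_1 C_E$ and $k_{-1} C_{EA} \to 0$ can be checked numerically in a spatially homogeneous (well-mixed) setting, where the diffusion terms vanish and (2.6)-(2.8) reduce to ordinary differential equations. The sketch below uses illustrative rate constants (fast degradation, no unbinding) chosen to satisfy the stated conditions; they are assumptions for the sketch, not values used elsewhere:

```python
import math

# Well-mixed reduction of (2.6)-(2.8): with no spatial dependence the diffusion
# terms vanish, leaving ODEs for the Michaelis-Menten concentrations.
# Illustrative rates (assumed): fast degradation k2, no unbinding, so that
# (2.9) with k = k1 * CEtot should be a good approximation.
k1, k_m1, k2 = 2.0, 0.0, 100.0
CA, CE, CEA = 1.0, 0.5, 0.0        # initial concentrations; CEtot = CE + CEA = 0.5
k = k1 * (CE + CEA)                # first-order rate constant, k = k1 * CEtot

dt, t_end, t = 1e-4, 2.0, 0.0      # simple forward-Euler integration
while t < t_end:
    bind = k1 * CA * CE
    dCA = -bind + k_m1 * CEA
    dCE = -bind + (k_m1 + k2) * CEA
    dCEA = bind - (k_m1 + k2) * CEA
    CA, CE, CEA = CA + dt * dCA, CE + dt * dCE, CEA + dt * dCEA
    t += dt

CA_first = math.exp(-k * t_end)    # well-mixed solution of (2.9)
print(CA, CA_first)                # the two agree to within a few percent
```

With these values the two trajectories of $C_A$ agree to within a few percent; the agreement degrades as $k_2$ decreases or $k_{-1}$ grows, which is consistent with the limiting assumptions made later in Section 2.3.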
For convenience, and without loss of generality (due to symmetry), we can impose $v_{\perp,2} = 0$ so that $v_\perp = v_{\perp,1}$, i.e., so that the perpendicular flow is along the $y$-direction and we have one less term in (2.10).

2.2.2 Dimensionless Model

Dimensional analysis facilitates comparison between different dimensional parameter sets with the use of reference parameters and the creation of dimensionless constants; see [109]. Dimensional analysis enables us to extrapolate both theoretical and simulation results to arbitrary dimensions (as long as the underlying assumptions are still valid), and it can also provide clarity of exposition by reducing the number of parameters that appear in some equations. In this dissertation, all dimensionless variables have a $\star$ superscript and they are equal to the dimensional variables scaled by the appropriate reference variables.

We define reference distance $L$ in m and reference number of molecules $N_{A_{ref}}$. We define a reference concentration for each molecular species in $\mathrm{molecule}\cdot\mathrm{m}^{-3}$: $C_0 = N_{A_{ref}}/L^3$ for species $A$, $C_{E_{tot}}$ for $E$ (i.e., the constant global sum concentration of $E$ and $EA$ molecules), and $k_1 C_{E_{tot}} C_0/(k_{-1}+k_2)$ for $EA$ (which is the maximum expected $EA$ concentration for the Michaelis–Menten mechanism in a spatially homogeneous environment; see [59]). We then define the dimensionless concentrations as

$$C_A^\star = \frac{C_A}{C_0}, \quad C_E^\star = \frac{C_E}{C_{E_{tot}}}, \quad C_{EA}^\star = \frac{C_{EA}\left(k_{-1}+k_2\right)}{k_1 C_{E_{tot}} C_0}, \qquad (2.11)$$

for species $A$, $E$, and $EA$, respectively. Similarly, dimensionless times are defined as

$$t_A^\star = \frac{D_A t}{L^2}, \quad t_E^\star = \frac{D_E t}{L^2}, \quad t_{EA}^\star = \frac{D_{EA} t}{L^2}, \qquad (2.12)$$

and for compactness in this dissertation, we omit the $A$ subscript in $t_A^\star$ when $A$ is the only molecular species of interest. The dimensionless coordinates along the three axes are

$$x^\star = \frac{x}{L}, \quad y^\star = \frac{y}{L}, \quad z^\star = \frac{z}{L}, \qquad (2.13)$$

and the TX is placed at $\{-d^\star, 0, 0\}$, where $d^\star = d/L$. Given the dimensionless model, Fick's Second Law for species $\Omega$ can be written in dimensionless form as

$$\frac{\partial C_\Omega^\star}{\partial t_\Omega^\star} = \nabla^{\star 2} C_\Omega^\star, \qquad (2.14)$$

where

$$\frac{\partial C_\Omega^\star}{\partial t_\Omega^\star} = \frac{\partial C_\Omega}{\partial t}\frac{L^2}{D_\Omega C_0}, \quad \nabla^{\star 2} C_\Omega^\star = \frac{L^2}{C_0}\nabla^2 C_\Omega. \qquad (2.15)$$

The system of partial differential equations for enzyme reaction-diffusion kinetics in the absence of flow is written in dimensionless form as

$$\frac{\partial C_A^\star}{\partial t_A^\star} = \nabla^{\star 2} C_A^\star - \beta_{A,1} C_E^\star C_A^\star + \beta_{A,1}\beta_{A,2} C_{EA}^\star, \qquad (2.16)$$
$$\frac{\partial C_E^\star}{\partial t_E^\star} = \nabla^{\star 2} C_E^\star - \beta_E C_E^\star C_A^\star + \beta_E C_{EA}^\star, \qquad (2.17)$$
$$\frac{\partial C_{EA}^\star}{\partial t_{EA}^\star} = \nabla^{\star 2} C_{EA}^\star + \beta_{EA} C_E^\star C_A^\star - \beta_{EA} C_{EA}^\star, \qquad (2.18)$$

where

$$\beta_{A,1} = L^2 k_1 C_{E_{tot}}/D_A, \quad \beta_{A,2} = k_{-1}/\left(k_{-1}+k_2\right), \qquad (2.19)$$
$$\beta_E = L^2 k_1 C_0/D_E, \quad \beta_{EA} = L^2\left(k_{-1}+k_2\right)/D_{EA}, \qquad (2.20)$$

are dimensionless constants. When (2.2) is the only reaction, the partial differential equation describing the behavior of the $A$ molecules is

$$\frac{\partial C^\star}{\partial t^\star} = \nabla^{\star 2} C^\star - k^\star C^\star, \qquad (2.21)$$

where $k^\star = L^2 k/D$ is the dimensionless degradation rate constant, and here we have dropped the $A$ subscripts.

Steady uniform flow is represented dimensionlessly with the Peclet number, $v^\star$, written for the $A$ molecules as [6, Ch. 1]

$$v^\star = \frac{vL}{D_A}, \qquad (2.22)$$

where $v = |\vec{v}|$ is the speed of the fluid. As a dimensionless value, $v^\star$ measures the relative impact of advection versus diffusion on the molecular transport of $A$ molecules. If $v^\star = 1$, then the typical time for an $A$ molecule to diffuse the reference distance $L$, i.e., $L^2/D_A$, is equal to the typical time for a molecule to move the same distance by advection alone. A value of $v^\star$ much less or much greater than 1 signals the dominance of diffusion or advection, respectively. In this dissertation, we find it useful to define $v^\star$ along each dimension, which we write as

$$v_\parallel^\star = \frac{v_\parallel L}{D_A}, \quad v_{\perp,1}^\star = \frac{v_{\perp,1} L}{D_A}, \quad v_{\perp,2}^\star = \frac{v_{\perp,2} L}{D_A}. \qquad (2.23)$$

To incorporate the flow in the corresponding reaction-diffusion equations, we replace $\nabla^{\star 2} C_A^\star$ with

$$\nabla^{\star 2} C_A^\star - v_\parallel^\star \frac{\partial C_A^\star}{\partial x^\star} - v_\perp^\star \frac{\partial C_A^\star}{\partial y^\star}, \qquad (2.24)$$

where we have already imposed that $v_{\perp,2}^\star = 0$ and $v_{\perp,1}^\star = v_\perp^\star$.
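The conversions above can be collected into a short numerical sketch. The dimensional values below are assumptions chosen for illustration so that, e.g., the resulting Peclet number is $v^\star = 1$, the regime where advection and diffusion have comparable influence:

```python
# Convert an assumed dimensional parameter set to dimensionless form using
# (2.12) for time, (2.21) for the degradation rate, and (2.22) for the
# Peclet number. All dimensional values are illustrative assumptions.
L = 1e-6      # reference distance, m
D_A = 1e-9    # diffusion coefficient of A, m^2/s
d = 5e-6      # distance from TX to RX, m
t = 1e-2      # elapsed time, s
k = 10.0      # first-order degradation rate constant, 1/s
v = 1e-3      # flow speed, m/s

d_star = d / L              # dimensionless TX-RX distance
t_star = D_A * t / L**2     # dimensionless time, (2.12)
k_star = L**2 * k / D_A     # dimensionless degradation rate, (2.21)
v_star = v * L / D_A        # Peclet number, (2.22)
print(d_star, t_star, k_star, v_star)  # v_star = 1: advection comparable to diffusion
```

Note that the same dimensionless set can correspond to many dimensional parameter sets, which is precisely what makes the dimensionless model useful for extrapolating results.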
For (2.22)–(2.24), analogous substitutions can be made for the $E$ and $EA$ molecules.

2.2.3 Molecule Sources and the Receiver

We assume that there are $U$ point sources of $A$ molecules in the fluid environment. The sources are categorized into transmitters (including the TX at $\{-d, 0, 0\}$, which is the RX's intended transmitter) and random noise. Each transmitter is participating in a communication link, and for convenience we assume that the transmitters in all communication links adopt the same data modulation scheme (but with independent data and unique transmission parameters). Random noise sources represent other chemical processes that randomly generate $A$ molecules.

Without loss of generality, we define separate coordinate axes for every point source, such that the $u$th source is placed at $\{-d_u, 0, 0\}$, where $d_u \geq 0$. As an example, see Fig. 2.2.

Figure 2.2: The system model with a re-defined coordinate frame for the 3rd molecule source. Here, unlike in Fig. 2.1, the flow component $v_\parallel$ is negative.

This is valid because all $A$ molecules behave independently, so the impact of multiple molecule sources on the RX can be superimposed. Likewise, the advection variables must be re-defined for each source's corresponding coordinate frame, such that $v_\parallel > 0$ always represents the flow from the source towards the receiver and $v_\perp$ is the component of $\vec{v}$ that is perpendicular to $v_\parallel$.

The transmitters behave as follows. Transmitter $u$ has independent binary sequence $\mathbf{W}_u = \{W_u[1], W_u[2], \ldots\}$ to send to its intended receiver, where $W_u[b]$ is the $b$th information bit, $\Pr(W_u[b] = 1) = P_{u,1}$, and $\Pr(W_u[b] = 0) = P_{u,0}$. The only receiver that we are concerned with is the RX at the origin. The transmitters do not coordinate their transmissions so they all transmit simultaneously, such that the $u$th transmitter begins transmitting at time $t = t_{u,0}$. Transmitter $u$ has bit interval $T_{int,u}$
Transmitter u has bit interval Tint,useconds and it releases NTX,u A molecules at the start of the interval to send a binary1 and no molecules to send a binary 0. In dimensionless form, the bit interval is T ?int,uand the number of molecules released is N?TX,u, where the dimensional variables arescaled by DA/L2and 1/NAref, respectively. We refer to the intended TX as the 1st37Chapter 2. Channel Model and Impulse Responsetransmitter, i.e., u = 1, and we drop the 1 subscript or replace it with tx when itis convenient to do so (such as when there is only one transmitter).The random noise sources behave as follows. The nth noise source emits Amolecules according to the random process NGen,n (t) (we change the indexing ofthe source from u to n in order to emphasize that this source is random noiseand not a transmitter of information). NGen,n (t) is represented dimensionlessly asN?Gen,n (t?A) = L2NGen,n (t) / (DANAref). Generally, we do not impose a specific ran-dom process to generate the noise, but we will always assume that the process startsat time t = tn,0 and that molecules are released at a known time-varying average rateNGen,n (t) (or N?Gen,n (t?A) dimensionlessly).We have already noted that the RX is a passive observer that does not impedediffusion or initiate chemical reactions. As an observer, it can count the numberof A molecules that are within VRXat some instant. We generally assume thatthe RX is synchronized to the bit intervals of the TX (we consider how to achievelocal synchronization in Chapter 3). Within a given bit interval, the RX makes Mobservations at times tb [m], and we define a global time sampling function t [b,m] =(b−1)Tint+ tb [m], where b = {1, 2, . . . , B} is the bth bit and m = {1, 2, . . . ,M}. Theglobal sampling times can also be written in dimensionless form as t?A [b,m]. 
If the observations within an interval are taken at times separated by constant $\Delta t_{ob}$, then $t_b[m] = m\Delta t_{ob}$.

We have two representations to describe the values of the observations. In the first representation, we recognize that molecules could be observed at the RX at any time and write the number of observed molecules as a function of continuous time, $N_{RX}(t)$ (or $N_{RX}^\star(t_A^\star)$ in dimensionless form). This representation is convenient when deriving the channel impulse response, since the impulse response is also a continuous function. In the second representation, we consider that the RX makes observations at discrete times and label the $m$th observation in the $b$th interval as $s_{b,m}$. The $M$ observations in one interval can be combined into the vector $\mathbf{s}_b = [s_{b,1}, \ldots, s_{b,M}]^T$, where for convenience of presentation we may drop the subscript $b$. This representation is more convenient when performing parameter estimation or analyzing the communication performance of the RX.

2.3 Channel Impulse Response

In this section, we derive the expected time-varying point concentration due to an impulsive release of molecules by the TX. The expression for the expected concentration is exact for the simplified case of first-order degradation. For the Michaelis–Menten model in (2.1), the expression can be either an approximation or a lower bound. The point concentration is then integrated over the volume of the receiver to give the expected time-varying number of molecules observed at the RX due to an impulsive release at the TX, i.e., the expected channel impulse response. We perform the integration for cases where the parameter values enable tractability. For more general cases, we motivate the uniform concentration assumption (UCA), where we assume that the concentration of $A$ molecules throughout the RX is constant and equal to that expected at the center of the RX, i.e., at the origin.
Applying the UCA avoids integration, and the corresponding channel impulse response does not depend on the RX being spherical, i.e., we only need $V_{\mathrm{RX}}$.

Given the expected channel impulse response due to the TX, we apply superposition to describe the expected time-varying number of molecules at the RX due to the TX's data modulation and due to other sources of $A$ molecules. Finally, we discuss the statistics of the channel impulse response. We comment on the probability distribution function for a given molecule to be observed by the RX at some time and we derive the probability mass function for the RX to observe a given number of molecules at some time.

2.3.1 Derivation of the Expected Point Concentration

Generally, closed-form analytical solutions for partial differential equations are not always possible and depend on the boundary conditions that are imposed. However, in the absence of flow and chemical reactions (i.e., given only Fick's second law in (2.4)), the expected concentration at point $\{x, y, z\}$ due to the release of $N_{\mathrm{TX}}$ molecules from a point source into a three-dimensional environment is a well-known result, which we cite as [58, Eq. (4.28)]
$$C_A = \frac{N_{\mathrm{TX}}}{(4\pi D_A t)^{3/2}} \exp\left(-\frac{r_{\mathrm{TX}}^2}{4 D_A t}\right), \tag{2.25}$$
where $r_{\mathrm{TX}}$ is the distance from the TX to the point $\{x, y, z\}$ and $t$ is the time since the $N_{\mathrm{TX}}$ information molecules were released. To include the effect of flow, where $v_\parallel$ is the flow component along the $x$-axis and $v_\perp$ is the flow component along the $y$-axis, we replace $r_{\mathrm{TX}}^2$ with $r_{\mathrm{TX,ef}}^2$, i.e., the square of the effective distance between the TX and the point $\{x, y, z\}$, written as
$$r_{\mathrm{TX,ef}}^2 = (x + d - v_\parallel t)^2 + (y - v_\perp t)^2 + z^2. \tag{2.26}$$
By effective distance, we mean that we account for the steady uniform flow by effectively moving the TX with the flow. For example, if we are at the origin and $v_\parallel > 0$, then at time $t = d/v_\parallel$ the expected concentration would be equal to that at the TX in the absence of flow at the same time. It can be shown that (2.25) satisfies
(2.4) when the diffusion term in (2.4) is replaced with (2.10). In dimensionless form, we can write (2.25) as
$$C^\star_A = \frac{1}{(4\pi t^\star_A)^{3/2}} \exp\left(-\frac{(r^\star_{\mathrm{TX}})^2}{4 t^\star_A}\right), \tag{2.27}$$
where $r^\star_{\mathrm{TX}}$ is the dimensionless distance from the TX to the point $\{x^\star, y^\star, z^\star\}$. To include the effect of flow, we replace $(r^\star_{\mathrm{TX}})^2$ with $(r^\star_{\mathrm{TX,ef}})^2$, i.e., the square of the effective distance between the TX and the point $\{x^\star, y^\star, z^\star\}$, written as
$$(r^\star_{\mathrm{TX,ef}})^2 = (x^\star + d^\star - v^\star_\parallel t^\star_A)^2 + (y^\star - v^\star_\perp t^\star_A)^2 + (z^\star)^2. \tag{2.28}$$

Now we incorporate the chemical reactions. First, we consider the case of first-order degradation only, i.e., (2.2). It can be shown that the corresponding reaction-diffusion equation, (2.9), has solution
$$C_A = \frac{N_{\mathrm{TX}}}{(4\pi D_A t)^{3/2}} \exp\left(-kt - \frac{r_{\mathrm{TX,ef}}^2}{4 D_A t}\right), \tag{2.29}$$
which is simply the no-reaction case in (2.25) scaled by the expected first-order degradation factor $\exp(-kt)$; see [59, Eq. (9.7)]. Next, we consider the set of reaction-diffusion equations for Michaelis-Menten kinetics in (2.6)-(2.8). This system of equations is highly coupled due to the reaction terms and has no known closed-form analytical solution under our boundary conditions. To obtain a closed-form solution, we make the following simplifying assumptions:

1. We assume that the degradation reaction is relatively very fast, i.e., $k_2 \to \infty$.
2. We assume that the unbinding reaction is relatively very slow, i.e., $k_{-1} \to 0$.

From the first assumption, we can claim that $C_E$ remains close to the total concentration of free and bound enzyme, i.e., $C_{E\mathrm{tot}}$, over all time and space, since there will never be a significant quantity of bound enzyme. Thus, $C_{EA}$ remains small over all time and space. Before applying explicit bounds on $C_E$ and $C_{EA}$, it is sufficient for a solution to assume that they are both steady and uniform (i.e., they are constant), and it can then be shown that (2.6) with the correction for flow has the solution
$$C_A \approx \frac{N_{\mathrm{TX}}}{(4\pi D_A t)^{3/2}} \exp\left(-k_1 C_E t - \frac{r_{\mathrm{TX,ef}}^2}{4 D_A t}\right) + k_{-1} C_{EA} t, \tag{2.30}$$
and we ignore (2.7) and (2.8).
Next, we apply the upper bound on $C_E$ (i.e., $C_{E\mathrm{tot}}$) and use the second assumption to apply a lower bound on $k_{-1} C_{EA}$ (i.e., 0) to write the lower bound on the expected point concentration as
$$C_A \geq \frac{N_{\mathrm{TX}}}{(4\pi D_A t)^{3/2}} \exp\left(-k_1 C_{E\mathrm{tot}} t - \frac{r_{\mathrm{TX,ef}}^2}{4 D_A t}\right), \tag{2.31}$$
which is intuitively a lower bound because not all enzymes are always available for binding (i.e., an enzyme cannot bind to an $A$ molecule if it is already bound to another $A$ molecule). The tightness of this lower bound depends directly on the accuracy of our two assumptions about the reaction rates. The actual expected concentration will be between (2.31) and the diffusion-only case in (2.25), but will be closer to (2.31) if the two assumptions are more accurate (i.e., if the assumptions are not accurate, then the degradation of the point concentration by adding enzymes is less than expected). In general, this lower bound loses accuracy as $EA$ is initially created ($C_E < C_{E\mathrm{tot}}$), but it eventually improves with time for non-zero reaction rates as all $A$ molecules are degraded and none remain to bind with the enzymes (i.e., $C_A, C_{EA} \to 0$ and $C_E \to C_{E\mathrm{tot}}$ as $t \to \infty$). We can also write (2.31) in dimensionless form as
$$C^\star_A \geq \frac{1}{(4\pi t^\star_A)^{3/2}} \exp\left(-\frac{L^2 k_1 C_{E\mathrm{tot}}}{D_A} t^\star_A - \frac{(r^\star_{\mathrm{TX,ef}})^2}{4 t^\star_A}\right). \tag{2.32}$$

An alternate solution for (2.6) can be derived without our two assumptions about the reaction rate constants $k_{-1}$ and $k_2$, where it is only assumed that $C_{EA}$ is constant. This is a common step in the analysis of Michaelis-Menten kinetics; see [59, Ch. 10] and its use when considering enzymes at the receiver in [50]. The resulting expression is similar to (2.31), where the binding rate $k_1$ is replaced with $k_1 k_2/(k_{-1} + k_2)$. This form is an approximation and not a lower bound, but the approximation is much more accurate than the lower bound if $k_{-1} \to 0$ is not satisfied. We will see an example of this in Section 2.6.

We now comment on the similarity of (2.29) and (2.31).
These two expressions for concentration are equivalent if $k = k_1 C_{E\mathrm{tot}}$, which is consistent with our observation about the similarity of the underlying reaction-diffusion equations in Section 2.2.1. For convenience, we will generally refer to (2.29) throughout the remainder of this dissertation, since we can accommodate Michaelis-Menten kinetics via this substitution for $k$. In fact, (2.29) also reduces to the no-degradation and no-flow case in (2.25) if $k = 0$ and $v_\parallel = v_\perp = 0$. Thus, we will generally refer to (2.29) as the expected point concentration due to an impulsive release of $A$ molecules by the TX at time $t = 0$. We also write (2.29) in dimensionless form as
$$C^\star_A = \frac{1}{(4\pi t^\star_A)^{3/2}} \exp\left(-k^\star t^\star_A - \frac{(r^\star_{\mathrm{TX,ef}})^2}{4 t^\star_A}\right). \tag{2.33}$$

2.3.2 Derivation of the Channel Impulse Response

We now consider the integration of the point concentration over the volume of the RX to derive the expected channel impulse response $\bar{N}_{\mathrm{RX}|tx,0}(t)$. For clarity of presentation, we derive the dimensionless channel impulse response $\bar{N}^\star_{\mathrm{RX}|tx,0}(t^\star_A)$, which is found by integrating $C^\star_A$ over the dimensionless RX volume $V^\star_{\mathrm{RX}}$, i.e.,
$$\bar{N}^\star_{\mathrm{RX}|tx,0}(t^\star_A) = \int_0^{R^\star_{\mathrm{RX}}} \int_0^{2\pi} \int_0^{\pi} C^\star_A\, (r^\star)^2 \sin\theta\, d\theta\, d\phi\, dr^\star, \tag{2.34}$$
where $r^\star$ is the magnitude of the distance from the origin (i.e., the center of the RX) to the arbitrary point $\{x^\star, y^\star, z^\star\}$ within $V^\star_{\mathrm{RX}}$. The integration in (2.34) can only be solved in closed form when there is no flow component perpendicular to the line between the TX and the center of the RX, i.e., when $v_\perp = 0$. Otherwise, (2.34) must be solved numerically; we reduce (2.34) to an integral over $\theta$ and $\phi$ in Appendix B.1. We present the closed-form result in the following theorem:

Theorem 2.1 (Impulse Response with Flow $v^\star = v^\star_\parallel$). The expected number of $A$ molecules observed at the RX, when one dimensionless molecule is released from the TX at time $t^\star_A = 0$ and there is a steady uniform flow $v^\star$
$= v^\star_\parallel$, is given by
$$\bar{N}^\star_{\mathrm{RX}|tx,0}(t^\star_A) = \exp(-k^\star t^\star_A) \left\{ \frac{1}{2} \left[ \mathrm{erf}\left(\frac{R^\star_{\mathrm{RX}} - d^\star_{\mathrm{ef},x}}{2\sqrt{t^\star_A}}\right) + \mathrm{erf}\left(\frac{R^\star_{\mathrm{RX}} + d^\star_{\mathrm{ef},x}}{2\sqrt{t^\star_A}}\right) \right] + \frac{1}{d^\star_{\mathrm{ef},x}} \sqrt{\frac{t^\star_A}{\pi}} \left[ \exp\left(-\frac{(d^\star_{\mathrm{ef},x} + R^\star_{\mathrm{RX}})^2}{4 t^\star_A}\right) - \exp\left(-\frac{(d^\star_{\mathrm{ef},x} - R^\star_{\mathrm{RX}})^2}{4 t^\star_A}\right) \right] \right\}, \tag{2.35}$$
where $d^\star_{\mathrm{ef},x} = d^\star - v^\star_\parallel t^\star_A$ is the effective distance from the TX to the center of the RX along the $x^\star$-dimension, and the error function is [110, Eq. (3.1.1)]
$$\mathrm{erf}(a) = \frac{2}{\sqrt{\pi}} \int_0^a \exp(-c^2)\, dc. \tag{2.36}$$

Proof. Please refer to Appendix B.1.

The uniform concentration assumption (UCA) approximates the expected point concentration of $A$ molecules throughout the RX as constant and equal to the concentration expected at the center of the RX. The UCA simplifies (2.34) to
$$\bar{N}^\star_{\mathrm{RX}|tx,0}(t^\star_A) = \frac{V^\star_{\mathrm{RX}}}{(4\pi t^\star_A)^{3/2}} \exp\left(-k^\star t^\star_A - \frac{(d^\star_{\mathrm{ef},x})^2}{4 t^\star_A} - \frac{t^\star_A (v^\star_\perp)^2}{4}\right), \tag{2.37}$$
which is valid for any parameter values and any RX shape with volume $V^\star_{\mathrm{RX}}$. We expect (2.37) to become more accurate as the TX is placed further from the RX. In Section 2.6, we will measure the relative deviation of (2.37) from (2.34), where (2.34) is given by (2.35) if $v^\star_\perp = 0$ and solved numerically otherwise. For reference, the dimensional form of (2.37) is
$$\bar{N}_{\mathrm{RX}|tx,0}(t) = \frac{N_{\mathrm{TX}} V_{\mathrm{RX}}}{(4\pi D_A t)^{3/2}} \exp\left(-kt - \frac{d_{\mathrm{ef}}^2}{4 D_A t}\right), \tag{2.38}$$
where here $d_{\mathrm{ef}}^2 = (d - v_\parallel t)^2 + (v_\perp t)^2$ is the square of the effective distance from the TX to the center of the RX accounting for both flow components (which we emphasize because $d^\star_{\mathrm{ef},x}$ is not a function of $v^\star_\perp$). We assess the accuracy of the UCA in Section 2.6, but we will generally assume throughout this dissertation that the UCA is valid so that we can directly apply (2.38) and (2.37).

2.3.3 Cumulative Receiver Signal

In general, the TX releases multiple impulses of molecules at different times, based on the underlying data sequence $\mathbf{W}$ and the bit interval $T_{\mathrm{int}}$.
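Under the UCA, the dimensional impulse response (2.38) is straightforward to evaluate numerically. The following is a minimal sketch (the function name and all parameter values are illustrative, not from Table 2.1); for $k = 0$ and no flow, (2.38) peaks at $t = d^2/(6 D_A)$.

```python
import math

def impulse_response_uca(t, N_tx, V_rx, D_A, d, k=0.0, v_par=0.0, v_perp=0.0):
    """Expected number of A molecules at the RX at time t under the UCA, (2.38).

    d_ef^2 = (d - v_par*t)^2 + (v_perp*t)^2 accounts for both flow components.
    """
    if t <= 0.0:
        return 0.0
    d_ef_sq = (d - v_par * t) ** 2 + (v_perp * t) ** 2
    return (N_tx * V_rx / (4.0 * math.pi * D_A * t) ** 1.5
            * math.exp(-k * t - d_ef_sq / (4.0 * D_A * t)))

# Illustrative values: 10^4 molecules, spherical RX of radius 50 nm,
# d = 500 nm, D_A = 1e-9 m^2/s (a typical small-molecule coefficient).
R_rx = 50e-9
V_rx = 4.0 / 3.0 * math.pi * R_rx ** 3
# Without degradation or flow, the response peaks at t = d^2 / (6 * D_A).
t_peak = (500e-9) ** 2 / (6 * 1e-9)
```

Adding a degradation rate $k > 0$ scales the response down by $\exp(-kt)$, as expected from (2.38).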
We assume that the behavior of individual $A$ molecules is independent, so the total number of $A$ molecules expected at the RX at time $t$ due to the TX, $\bar{N}_{\mathrm{RX}|tx}(t)$, is just the sum of the number of molecules expected due to each impulsive release, i.e.,
$$\bar{N}_{\mathrm{RX}|tx}(t) = \sum_{b=1}^{\lfloor t/T_{\mathrm{int}} \rfloor + 1} W[b]\, \bar{N}_{\mathrm{RX}|tx,0}(t - (b-1) T_{\mathrm{int}}), \tag{2.39}$$
and this can be written in dimensionless form as
$$\bar{N}^\star_{\mathrm{RX}|tx}(t^\star_A) = \sum_{b=1}^{\lfloor t^\star_A/T^\star_{\mathrm{int}} \rfloor + 1} W[b]\, \bar{N}^\star_{\mathrm{RX}|tx,0}(t^\star_A - (b-1) T^\star_{\mathrm{int}}). \tag{2.40}$$

The RX can also expect to observe any $A$ molecule that was released at any prior time by any of the $U$ molecule sources, so long as the molecule has not yet been degraded. Again, by independence of the behavior of individual molecules, the total number of $A$ molecules expected at the RX at time $t$ due to all molecule sources, $\bar{N}_{\mathrm{RX}}(t)$, is the sum of the number of molecules expected due to each source, i.e.,
$$\bar{N}_{\mathrm{RX}}(t) = \bar{N}_{\mathrm{RX}|tx}(t) + \sum_{u=2}^{U} \bar{N}_{\mathrm{RX}|u}(t), \tag{2.41}$$
where $\bar{N}_{\mathrm{RX}|u}(t)$ is the number of molecules expected due to the $u$th source. We can write (2.41) in dimensionless form as
$$\bar{N}^\star_{\mathrm{RX}}(t^\star_A) = \bar{N}^\star_{\mathrm{RX}|tx}(t^\star_A) + \sum_{u=2}^{U} \bar{N}^\star_{\mathrm{RX}|u}(t^\star_A). \tag{2.42}$$

If the $u$th source is also a transmitter, then we can write $\bar{N}_{\mathrm{RX}|u}(t)$ analogously to (2.39), i.e.,
$$\bar{N}_{\mathrm{RX}|u}(t) = \sum_{b=1}^{\lfloor (t - t_{u,0})/T_{\mathrm{int},u} \rfloor + 1} W_u[b]\, \bar{N}_{\mathrm{RX}|u,0}(t - t_{u,0} - (b-1) T_{\mathrm{int},u}), \tag{2.43}$$
where we recall that $t_{u,0}$ is the timing offset of the $u$th source, $T_{\mathrm{int},u}$ is its bit interval, and $W_u[b]$ is its $b$th bit. The expected impulse response, $\bar{N}_{\mathrm{RX}|u,0}(t)$, is analogous to $\bar{N}_{\mathrm{RX}|tx,0}(t)$ and can be found by solving (2.34) with the appropriate substitutions to account for the $u$th source instead of the TX. If the $u$th source is a random noise source, then we replace $\bar{N}_{\mathrm{RX}|u}(t)$ with the time-varying molecule emission rate $\bar{N}_{\mathrm{Gen},u}(t)$ (which we will consider in greater detail in Chapter 5).

Our discussion of the RX signal has thus far considered the number of molecules expected, $\bar{N}_{\mathrm{RX}}(t)$, and not the number that is actually observed, i.e., $N_{\mathrm{RX}}(t)$.
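The superposition in (2.39) is a direct sum over the transmitted bits. A minimal sketch follows, reusing the UCA form (2.38) for the single-impulse response with no flow; the function names and all parameter values are illustrative.

```python
import math

def impulse_response(t, N_tx, V_rx, D_A, d, k=0.0):
    """Single-impulse expected response per (2.38), UCA, no flow."""
    if t <= 0.0:
        return 0.0
    return (N_tx * V_rx / (4.0 * math.pi * D_A * t) ** 1.5
            * math.exp(-k * t - d * d / (4.0 * D_A * t)))

def expected_signal(t, W, T_int, **params):
    """Expected molecules at the RX at time t for bit sequence W, per (2.39).

    Only impulses released at or before t contribute; b is 0-indexed here,
    so the release time of bit b is b * T_int.
    """
    n_intervals = min(len(W), int(t // T_int) + 1)
    return sum(W[b] * impulse_response(t - b * T_int, **params)
               for b in range(n_intervals))

# Illustrative parameters (not from Table 2.1).
p = dict(N_tx=1e4, V_rx=4.0 / 3.0 * math.pi * (50e-9) ** 3,
         D_A=1e-9, d=500e-9, k=0.0)
T_int = 2e-4  # seconds
```

For example, partway through the third interval the sequence [1, 1, 1] yields a larger expected count than [1, 0, 1], since the extra earlier impulse contributes inter-symbol interference.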
Clearly, the number of observed molecules will be the sum of the molecules observed from each molecule source, i.e.,
$$N_{\mathrm{RX}}(t) = N_{\mathrm{RX}|tx}(t) + \sum_{u=2}^{U} N_{\mathrm{RX}|u}(t), \tag{2.44}$$
and (2.44) is actually more general than (2.41) because it does not depend on the independence of molecule behavior. We can write (2.44) in dimensionless form as
$$N^\star_{\mathrm{RX}}(t^\star_A) = N^\star_{\mathrm{RX}|tx}(t^\star_A) + \sum_{u=2}^{U} N^\star_{\mathrm{RX}|u}(t^\star_A). \tag{2.45}$$

We cannot write the number of molecules observed from the TX, $N_{\mathrm{RX}|tx}(t)$, analogously to (2.39), since (2.39) relies on the number of molecules expected from each impulse being the same (relative to when the impulse was sent). The actual number of molecules observed at time $t$ from an impulse emitted by the TX at the start of the $b$th interval will be a random number whose mean is $\bar{N}_{\mathrm{RX}|tx,0}(t - (b-1) T_{\mathrm{int}})$. We write the actual observed signal as
$$N_{\mathrm{RX}|tx}(t) = \sum_{b=1}^{\lfloor t/T_{\mathrm{int}} \rfloor + 1} N_{\mathrm{RX}|tx}(t; b), \tag{2.46}$$
where $N_{\mathrm{RX}|tx}(t; b)$ is the number of molecules observed at time $t$ that were released at the start of the $b$th bit interval. Of course, the RX will not observe any molecules from the $b$th interval if $W[b] = 0$, i.e., $N_{\mathrm{RX}|tx}(t; b) = 0\ \forall\, t$. Expressions analogous to (2.46) can also be written for $N_{\mathrm{RX}|u}(t)$, $N^\star_{\mathrm{RX}|tx}(t^\star)$, and $N^\star_{\mathrm{RX}|u}(t^\star)$.

2.3.4 Statistics of the Channel Impulse Response

We now consider the statistics of the channel impulse response, i.e., of the observations at the RX due to molecules being released by a source at time $t = 0$. We arbitrarily assume that the source is the TX, and our discussion will apply to other sources that are also transmitters. The statistics of random noise sources will be considered in Chapter 5.

Consider (2.38), which is the number of molecules expected at the RX when we
We write thisexplicitly asPob(t) =VRX(4piDt)3/2exp(−kt− d2ef4Dt), (2.47)and we note that, instead of the UCA, we could consider a dimensional solution tothe integration in (2.34); what matters is that we know Pob(t). Pob(t) is the suc-cess probability of a given molecule being observed. More generally, we release NTXmolecules, and by assuming independent behavior for each, the number of observedmolecules NRX|tx (t) is a Binomial random variable with NTX trials and success proba-bility Pob(t). Thus, we can write the probability mass function (PMF) for NRX|tx (t),or the probability that ω molecules will be observed, as [65, Eq. (5.1.2)]Pr(NRX|tx (t) = ω) =(NTXω)Pob(t)ω (1− Pob(t))NTX−ω , (2.48)and the cumulative distribution function (CDF), or the probability that no more thanξ molecules will be observed, as [65, Ch. 5]Pr(NRX|tx (t) ≤ ξ) =ξ∑ω=0(NTXω)Pob(t)ω (1− Pob(t))NTX−ω (2.49)= 1−NTX∑ω=ξ+1(NTXω)Pob(t)ω (1− Pob(t))NTX−ω , (2.50)where(βa)is the binomial coefficient, i.e.,(βa)=β!a!(β − a)! , (2.51)49Chapter 2. Channel Model and Impulse Responseand (·)! is the factorial operator. Given Pob(t), (2.48) and (2.49) are exact, but theyare difficult to evaluate for large values of NTX. One alternative, as noted in [16],is to use the regularized incomplete beta function IPob(·, ·) based on the individualprobability Pob(t), where [111, Eq. (8.392)]IPob(ξ, c) =Pob∫0tξ−1(1− t)c−1dt1∫0tξ−1(1− t)c−1dt. (2.52)However, the derivative of the incomplete beta function cannot be written inclosed form so the incomplete beta function does not easily lend itself to optimization.We consider two common approximations of the Binomial distribution, as describedin [65, Ch. 5]. First, for large NTXand small Pob(t), such that their product is afinite positive number, the Binomial distribution approaches the Poisson distributionwith mean λ = NTXPob(t) = NRX|tx,0 (t). 
The PMF is then
$$\Pr(N_{\mathrm{RX}|tx}(t) = \omega)\big|_{\mathrm{Poiss}} = \frac{\bar{N}_{\mathrm{RX}|tx,0}(t)^\omega \exp\left(-\bar{N}_{\mathrm{RX}|tx,0}(t)\right)}{\omega!}, \tag{2.53}$$
and the CDF is
$$\Pr(N_{\mathrm{RX}|tx}(t) \leq \xi)\big|_{\mathrm{Poiss}} = \exp\left(-\bar{N}_{\mathrm{RX}|tx,0}(t)\right) \sum_{\omega=0}^{\xi} \frac{\bar{N}_{\mathrm{RX}|tx,0}(t)^\omega}{\omega!}. \tag{2.54}$$
Again, we might have difficulty in evaluating the PMF or CDF numerically if $N_{\mathrm{TX}}$ is large. An alternative method is to write the CDF of a Poisson random variable $X$ with mean $\lambda$ in terms of Gamma functions, as [112, Eq. (1)]
$$\Pr(X < x) = \frac{\Gamma(\lceil x \rceil, \lambda)}{\Gamma(\lceil x \rceil)}, \tag{2.55}$$
for $x > 0$, where the Gamma and incomplete Gamma functions are defined by [111, Eq. (8.310.1), Eq. (8.350.2)]
$$\Gamma(x) = \int_0^\infty \exp(-a)\, a^{x-1}\, da, \qquad \Gamma(x, \lambda) = \int_\lambda^\infty \exp(-a)\, a^{x-1}\, da, \tag{2.56}$$
respectively. The second approximation of the binomial distribution uses the Gaussian approximation of the Poisson distribution, which increases in accuracy as the mean of the Poisson distribution increases. Including a continuity correction (appropriate when approximating a discrete random variable with a continuous random variable; see [65, Ch. 6]), the CDF has the form
$$\Pr(X < x) = \frac{1}{2}\left[1 + \mathrm{erf}\left(\frac{x - 0.5 - \lambda}{\sqrt{2\lambda}}\right)\right]. \tag{2.57}$$

Thus far, we have discussed the statistics of the observations at the RX due to molecules being released by the TX at time $t = 0$. This analysis readily extends to the statistics of observations at the RX due to all molecule emissions made by all molecule sources, i.e., $N_{\mathrm{RX}}(t)$, if we apply the Poisson or Gaussian approximations. For a release by the TX at time $t = 0$, the statistics of the Poisson and Gaussian approximations use the number of molecules expected, i.e., $\bar{N}_{\mathrm{RX}|tx,0}(t)$, as the mean of each distribution. Both the Poisson and Gaussian distributions have the reproductive property, i.e., the sum of independent Poisson (Gaussian) random variables is also a Poisson (Gaussian) random variable whose mean is the sum of the means of the individual variables; see [65, Ch. 5].
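The accuracy of the Poisson approximation at a representative operating point can be checked directly. The sketch below compares the exact binomial PMF (2.48) with the Poisson PMF (2.53) for the illustrative values $N_{\mathrm{TX}} = 10^4$ and $P_{\mathrm{ob}} = 10^{-3}$ (so $\lambda = 10$); by Le Cam's theorem the total absolute difference is bounded by $2 N_{\mathrm{TX}} P_{\mathrm{ob}}^2$.

```python
import math

def binomial_pmf(omega, n, p):
    """Exact binomial PMF per (2.48): omega successes in n trials."""
    return math.comb(n, omega) * p ** omega * (1.0 - p) ** (n - omega)

def poisson_pmf(omega, lam):
    """Poisson approximation per (2.53) with mean lam = n * p."""
    return lam ** omega * math.exp(-lam) / math.factorial(omega)

n, p = 10_000, 1e-3
lam = n * p  # 10 expected molecules
# Total absolute PMF difference over the bulk of the support; Le Cam's
# theorem bounds the full sum by 2 * n * p^2 = 0.02 for these values.
total_diff = sum(abs(binomial_pmf(w, n, p) - poisson_pmf(w, lam))
                 for w in range(61))
```

For small $P_{\mathrm{ob}}$, as argued in the text, the two distributions are nearly indistinguishable, which is why the Poisson model is adopted by default.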
We established in (2.41) that the total number of molecules expected at the RX, $\bar{N}_{\mathrm{RX}}(t)$, is the sum of the number of molecules expected from all emissions of all molecule sources. Thus, $N_{\mathrm{RX}}(t)$ can also be modeled as a Poisson or Gaussian random variable with time-varying mean $\bar{N}_{\mathrm{RX}}(t)$. This claim is consistent for random noise sources if we assume that the corresponding number of molecules observed at the RX also follows a Poisson or Gaussian distribution (which we will consider in Chapter 5).

Throughout this dissertation, unless otherwise noted, we apply the Poisson approximation. The system parameters that we consider tend to result in a small fraction of the molecules released by the TX being observed at any one time by the RX. Thus, $P_{\mathrm{ob}}(t)$ tends to be small, i.e., $< 0.01$, and we show in Section 2.6 that the Poisson approximation is more accurate than the Gaussian approximation for the system parameters that we are interested in. However, there are scenarios where we must apply the Gaussian approximation, e.g., in evaluating the bit error probability of the weighted sum detector with unequal weights in Chapter 4.

2.4 Independence of Receiver Observations

In this section, we study the assumption that all observations made at the RX are independent of each other. This assumption is distinct from the assumption that all molecules behave independently. Intuitively, the assumption of independent samples will not be true as the time between samples tends to zero; if the time between samples is infinitesimally small, then there is insufficient time either for the molecules observed in one sample to leave the RX or for any new molecules to arrive. We will measure independence using mutual information, which is a measure of how the knowledge of one variable influences the prediction of a second variable; see [113]. Specifically, for two discrete random variables $X$ and $Y$ taking values $x$ and $y$, respectively, the mutual information $I(X;Y)$ is [113, Eq.
(2.28)]
$$I(X;Y) = \sum_x \sum_y \Pr(X = x, Y = y) \log \frac{\Pr(X = x, Y = y)}{\Pr(X = x) \Pr(Y = y)}, \tag{2.58}$$
where the summations are over all possible values of $x$ and $y$. If the variables $X$ and $Y$ are independent, then $I(X;Y) = 0$, i.e., knowing $X$ does not provide any greater certainty in predicting the value of $Y$, and vice versa. If $X$ and $Y$ are measures of the same variable over time, and the time between measures is infinitesimally small, then knowing $X$ lets us predict $Y$ perfectly and mutual information is maximized.

We focus on the independence of two consecutive observations at the RX, since these are the most likely to be dependent. For compactness, we write these observations at times $t_b[m]$ and $t_b[m+1]$ as being at times $t_m$ and $t_{m+1}$, respectively, such that $t_{m+1} > t_m$. We apply the UCA to the expected channel impulse response and we consider molecules that were released by the TX at time $t = 0$, i.e., $\bar{N}_{\mathrm{RX}|tx,0}(t)$. Of course, if no molecules are emitted and there are no noise sources, then the mutual information between samples of molecules from the transmitter would be zero, because every sample would be 0. For tractability, we assume in this section that there is no flow, though we expect that the presence of flow can only decrease the mutual information of the two observations. Also, we assume that the RX is a sphere.

Let the observations at times $t_m$ and $t_{m+1}$ be $s_m$ and $s_{m+1}$, respectively. From probability theory, the joint probability distribution of $N_{\mathrm{RX}}(t_m)$ and $N_{\mathrm{RX}}(t_{m+1})$ can be evaluated from the conditional probability distribution, i.e.,
$$\Pr(N_{\mathrm{RX}}(t_m) = s_m,\, N_{\mathrm{RX}}(t_{m+1}) = s_{m+1}) = \Pr(N_{\mathrm{RX}}(t_{m+1}) = s_{m+1} \mid N_{\mathrm{RX}}(t_m) = s_m)\, \Pr(N_{\mathrm{RX}}(t_m) = s_m). \tag{2.59}$$
The mutual information is then
$$I(N_{\mathrm{RX}}(t_m); N_{\mathrm{RX}}(t_{m+1})) = \sum_{s_m} \sum_{s_{m+1}} \Pr(N_{\mathrm{RX}}(t_m) = s_m,\, N_{\mathrm{RX}}(t_{m+1}) = s_{m+1}) \log \frac{\Pr(N_{\mathrm{RX}}(t_m) = s_m,\, N_{\mathrm{RX}}(t_{m+1}) = s_{m+1})}{\Pr(N_{\mathrm{RX}}(t_m) = s_m)\, \Pr(N_{\mathrm{RX}}(t_{m+1}) = s_{m+1})}, \tag{2.60}$$
where the range of values for the observations $s_m$ and $s_{m+1}$ in the summations should account for all observations that have a non-negligible probability of occurring. The marginal probabilities (i.e., of a single variable) in (2.60) can be evaluated, for example, using (2.53). Eq. (2.53) can also be used to determine appropriate ranges for $s_m$ and $s_{m+1}$. The remaining step to evaluate (2.60) is to derive the conditional probability in (2.59). We can derive this probability if we have an expression for $P_{\mathrm{stay}}(\Delta t_{\mathrm{ob}})$, the probability that an information molecule that is observed at the RX at $t_m$ is again at the RX a time $\Delta t_{\mathrm{ob}} = t_{m+1} - t_m$ later. We assume that an information molecule observed at $t_m$ is randomly located within the RX sphere, following the uniform concentration assumption. Then, from [79, Eq. (3.8)], the point concentration $C_A(r, \Delta t_{\mathrm{ob}})$ due to this single molecule at a distance $r$ from the center of the RX, in the absence of molecule degradation, is
$$C_A(r, \Delta t_{\mathrm{ob}}) = \frac{3}{8\pi R_{\mathrm{RX}}^3} \left[\mathrm{erf}\left(\frac{R_{\mathrm{RX}} - r}{2\sqrt{D_A \Delta t_{\mathrm{ob}}}}\right) + \mathrm{erf}\left(\frac{R_{\mathrm{RX}} + r}{2\sqrt{D_A \Delta t_{\mathrm{ob}}}}\right)\right] + \frac{3}{4\pi R_{\mathrm{RX}}^3 r} \sqrt{\frac{D_A \Delta t_{\mathrm{ob}}}{\pi}} \left[\exp\left(-\frac{(R_{\mathrm{RX}} + r)^2}{4 D_A \Delta t_{\mathrm{ob}}}\right) - \exp\left(-\frac{(R_{\mathrm{RX}} - r)^2}{4 D_A \Delta t_{\mathrm{ob}}}\right)\right], \tag{2.61}$$
where we can easily account for molecule degradation by multiplying (2.61) by a factor of $\exp(-k \Delta t_{\mathrm{ob}})$. What follows in the remainder of this section is valid with or without this factor included.

To derive $P_{\mathrm{stay}}(\Delta t_{\mathrm{ob}})$, we must integrate $C_A(r, \Delta t_{\mathrm{ob}})$ over the entire RX volume, i.e.,
$$P_{\mathrm{stay}}(\Delta t_{\mathrm{ob}}) = \int_0^{R_{\mathrm{RX}}} \int_0^{2\pi} \int_0^{\pi} C_A(r, \Delta t_{\mathrm{ob}})\, r^2 \sin\theta\, d\theta\, d\phi\, dr = 4\pi \int_0^{R_{\mathrm{RX}}} C_A(r, \Delta t_{\mathrm{ob}})\, r^2\, dr. \tag{2.62}$$
We now present the following theorem:

Theorem 2.2 ($P_{\mathrm{stay}}(\Delta t_{\mathrm{ob}})$ for one molecule).
The probability of an information molecule being inside the RX at time $\Delta t_{\mathrm{ob}}$ after it was last observed inside the RX is given by
$$P_{\mathrm{stay}}(\Delta t_{\mathrm{ob}}) = \mathrm{erf}\left(\frac{R_{\mathrm{RX}}}{\sqrt{D_A \Delta t_{\mathrm{ob}}}}\right) + \frac{1}{R_{\mathrm{RX}}} \sqrt{\frac{D_A \Delta t_{\mathrm{ob}}}{\pi}} \left[\left(1 - \frac{2 D_A \Delta t_{\mathrm{ob}}}{R_{\mathrm{RX}}^2}\right) \exp\left(-\frac{R_{\mathrm{RX}}^2}{D_A \Delta t_{\mathrm{ob}}}\right) + \frac{2 D_A \Delta t_{\mathrm{ob}}}{R_{\mathrm{RX}}^2} - 3\right], \tag{2.63}$$
where (2.63) is multiplied by $\exp(-k \Delta t_{\mathrm{ob}})$ if the $A$ molecules can degrade.

Proof. Please refer to Appendix B.2.

The probability that a molecule observed at $t_m$ is outside the observer at $t_{m+1}$ is $P_{\mathrm{leave}}(\Delta t_{\mathrm{ob}}) = \exp(-k \Delta t_{\mathrm{ob}}) - P_{\mathrm{stay}}(\Delta t_{\mathrm{ob}})$.

To derive an expression for (2.59), we also require an expression for $P_{\mathrm{arr}}(t_m, t_{m+1})$, the unconditional probability that an information molecule that is outside the RX at $t_m$ is inside the RX at $t_{m+1}$. Intuitively, this is equal to the unconditional probability of the molecule being inside the RX at $t_{m+1}$, minus the probability of the molecule being inside the RX at $t_m$ and still being inside the RX at $t_{m+1}$, i.e.,
$$P_{\mathrm{arr}}(t_m, t_{m+1}) = P_{\mathrm{ob}}(t_{m+1}) - P_{\mathrm{ob}}(t_m)\, P_{\mathrm{stay}}(\Delta t_{\mathrm{ob}}). \tag{2.64}$$

The conditional probability in (2.59) must consider every possible number of arrivals and departures of molecules from the RX in order to result in a net change of $s_{m+1} - s_m$ information molecules. In other words,
$$\Pr(N_{\mathrm{RX}}(t_{m+1}) = s_{m+1} \mid N_{\mathrm{RX}}(t_m) = s_m) = \sum_{i=\max(0,\, s_m - s_{m+1})}^{s_m} \Pr(i \text{ leave}) \Pr(s_{m+1} + i - s_m \text{ arrive}), \tag{2.65}$$
where $\Pr(i \text{ leave})$ is conditioned on $N_{\mathrm{RX}}(t_m)$ as
$$\Pr(i \text{ leave} \mid N_{\mathrm{RX}}(t_m) = s_m) = \binom{s_m}{i} P_{\mathrm{leave}}(\Delta t_{\mathrm{ob}})^i \left(1 - P_{\mathrm{leave}}(\Delta t_{\mathrm{ob}})\right)^{s_m - i}, \tag{2.66}$$
and $\Pr(i \text{ arrive})$ is also conditioned on $N_{\mathrm{RX}}(t_m)$ as
$$\Pr(i \text{ arrive} \mid N_{\mathrm{RX}}(t_m) = s_m) = \binom{N_{\mathrm{TX}} - s_m}{i} P_{\mathrm{arr}}(t_m, t_{m+1})^i \left[1 - P_{\mathrm{arr}}(t_m, t_{m+1})\right]^{N_{\mathrm{TX}} - s_m - i}. \tag{2.67}$$
Eq. (2.67) can be simplified by applying the Poisson approximation to the binomial distribution and then assuming that $N_{\mathrm{TX}} - s_m \approx N_{\mathrm{TX}}$, i.e., any individual molecule is unlikely to be observed at the receiver at a given time, such that (2.67) is no longer conditioned on $s_m$ and becomes
$$\Pr(i \text{ arrive}) = \frac{[N_{\mathrm{TX}} P_{\mathrm{arr}}(t_m, t_{m+1})]^i \exp\left(-N_{\mathrm{TX}} P_{\mathrm{arr}}(t_m, t_{m+1})\right)}{i!}. \tag{2.68}$$

Substituting all components back into (2.59) enables us to write the joint probability distribution of two observations as
$$\Pr(N_{\mathrm{RX}}(t_m) = s_m,\, N_{\mathrm{RX}}(t_{m+1}) = s_{m+1}) = [N_{\mathrm{TX}} P_{\mathrm{ob}}(t_m)]^{s_m} \exp\left(-N_{\mathrm{TX}}[P_{\mathrm{ob}}(t_m) + P_{\mathrm{arr}}(t_m, t_{m+1})]\right) \times \sum_{i=\max(0,\, s_m - s_{m+1})}^{s_m} \frac{P_{\mathrm{leave}}(\Delta t_{\mathrm{ob}})^i \left(1 - P_{\mathrm{leave}}(\Delta t_{\mathrm{ob}})\right)^{s_m - i}}{i!\,(s_m - i)!} \times \frac{[N_{\mathrm{TX}} P_{\mathrm{arr}}(t_m, t_{m+1})]^{s_{m+1} + i - s_m}}{(s_{m+1} + i - s_m)!}. \tag{2.69}$$

Using (2.69) and (2.53), we can evaluate (2.60) numerically for any pair of observation times $t_m$ and $t_{m+1}$, and we can also compare with simulations that generate the joint and marginal probability distributions. We will see in Section 2.6 that as $\Delta t_{\mathrm{ob}}$ increases, $I(N_{\mathrm{RX}}(t_m); N_{\mathrm{RX}}(t_{m+1}))$ decreases, for any value of $t_m$.

2.5 Simulation Framework

In this section, we describe the custom simulation framework that we implemented for all reaction-diffusion simulations presented in this dissertation. First, we motivate our choice of a microscopic simulation model, where every molecule is tracked. We describe how we simulate molecular diffusion and steady uniform flow, then discuss the simulation of the chemical reaction mechanisms. Next, we present how we implement the behavior of the molecule sources and the RX. Finally, we discuss the selection of meaningful parameter values and present the values that we consider throughout the remainder of the dissertation.

Our simulations are in effect solving the system of reaction-diffusion equations that corresponds to the environment being considered. Instead of directly solving stochastic versions of the partial differential equations, we simulate the underlying physical phenomena that are described by the equations. This approach has a few benefits. First, we can be more confident in our verification of the average signal observed at the receiver. Second, we make the simulations more realistic by relaxing some of the assumptions that we made to facilitate the analysis, e.g., the simulations do not assume that the transmitters are point sources and they do not use the UCA at the receiver.
Third, our simulation framework can be readily extended to scenarios that cannot be as conveniently described by a system of coupled reaction-diffusion equations, such as when there are localized chemical reactions at the receiver.

2.5.1 Choice of Framework

As we discussed in [69], simulation methods for molecular behavior range in scale from molecular dynamics models (such as that used in LAMMPS [114]), which account for all interactions between all individual molecules (including solvent molecules in a fluid), to continuum models (such as that used in COMSOL Multiphysics [115]), where no individual molecules are described. Two common intermediate models that tend to be suitable for the study of reaction-diffusion environments on the scale of biological cells are microscopic and mesoscopic models. Both of these models treat the solvent in a fluid as a continuum and focus on the behavior of solute molecules.

Our simulation framework uses a microscopic (or particle-based) model, where the precise locations of all individual molecules are known. One well-known such simulator is the Smoldyn simulator in [116]. The primary alternative, mesoscopic (or subvolume-based) methods, divide the environment into subvolumes, and each molecule is known only to be within a given subvolume. Microscopic methods tend to be more accurate but less computationally efficient than mesoscopic methods. Furthermore, mesoscopic methods must meet the well-stirred requirement, where every subvolume
First, all free molecules areindependently displaced along each dimension by generating normal random variableswith variance 2DΩ∆t, where DΩ is the diffusion coefficient of arbitrary species Ω;see [58, Eq. (4.6)]. To account for steady uniform flow, each molecule is also displacedby v‖∆t in x and by v⊥∆t in y. Next, potential reactions are evaluated to see whetherthey would have occurred during ∆t. For bimolecular reactions, a binding radius rbindis defined as how close the centers of two reactant molecules need to be at the endof ∆t in order to assume that the two molecules collided and bound during ∆t. Forunimolecular reactions, a random number is generated using the corresponding rateconstant to declare whether the reaction occurred during ∆t.We noted in Section 2.2 that the physical environment is unbounded. However, weneed to initialize a uniform enzyme concentration CEtot. This would effectively requirean infinite number of E molecules, but for feasibility we restrict the movement of Emolecules to the large volume VE that is centered at the TX. We force the enzymesto stay within VE by reflecting them off of the boundary of VE if diffusion carriesthem outside. In doing so, we simulate a uniform enzyme concentration using afinite number NE of molecules. We do not restrict the diffusion of A molecules. Ifan intermediate EA molecule reaches the boundary of VE, then we probabilisticallyforce the unbinding or degradation reaction based on the relative values of k−1 andk2. As long as VE is sufficiently large, such that its boundary is far away from theRX, then the impact of these forced reactions on the observations made at the RX59Chapter 2. Channel Model and Impulse Responseis negligible. 
In our simulations, we ensure that the side length of VE is at leastthree times greater than d, i.e., the distance from the transmitter to the center of thereceiver, and this is sufficient to ignore the behavior at the boundary of VE given theother parameter values that we consider.2.5.2 Simulating Chemical ReactionsFirst, we consider the first-order degradation reaction (2.2). For each A moleculeat the end of every time step, we generate an independent uniform random valuea between 0 and 1, and the molecule is degraded and removed if a > exp (−k∆t);see [119, Eq. (13)].The microscopic implementation of MichaelisMenten kinetics in (2.1) is muchmore involved. All three elementary reactions in (2.1) have an enzyme E molecule asa reactant and an intermediate EA molecule as a product (or vice versa). Thus, wemust jointly consider the two unimolecular reactions with EA as the reactant, andwe must take care when modeling the binding and unbinding reactions so that thebinding reaction does not occur when not intended.The probability of the unbinding reaction (k−1) occurring is a function of boththe unbinding and degradation rate constants, written as [119, Eq. (14)]Pr(Reaction k−1) =k−1k−1 + k2[1− exp (−∆t (k−1 + k2))] , (2.70)and the degradation reaction (k2) has an analogous expression by swapping k−1 andk2. A single uniform random number between 0 and 1 can then be used to determinewhether a given EA molecule reacts, and, if so, which reaction occurs.The bimolecular binding reaction (i.e., the binding of an enzyme E moleculeand an information A molecule at rate k1 to form an intermediate EA molecule) is60Chapter 2. Channel Model and Impulse Responsereversible, so we must be careful in our choice of binding radius rbind, time step ∆t,and what we assume when EA reverts back to E and A molecules. 
If the $E$ and $A$ molecules are not physically separated when the unbinding reaction ($k_{-1}$) occurs, and $r_{\mathrm{bind}}$ is much larger than the expected separation of the $E$ and $A$ molecules by diffusion in the following time step, then the binding reaction will very likely occur in the next time step regardless of the actual value of $r_{\mathrm{bind}}$. Therefore, we must consider the root mean square of the separation of $E$ and $A$ molecules in a single time step, given as [119, Eq. (23)]
$$r_{\mathrm{rms}} = \sqrt{2 (D_A + D_E) \Delta t}, \tag{2.71}$$
where $D_A$ and $D_E$ are the constant diffusion coefficients of the $A$ and $E$ molecules, respectively. In general, an unbinding radius that is larger than $r_{\mathrm{bind}}$ is defined to separate the two molecules as soon as the reversible unbinding reaction occurs. The objective in doing so is to prevent the automatic re-binding of the same two molecules in the next time step and to more accurately model the reaction kinetics; see [119]. However, if $r_{\mathrm{rms}} \gg r_{\mathrm{bind}}$, i.e., if the expected separation of the two molecules in one time step is much larger than the binding radius, then an unbinding radius is unnecessary and it is sufficient to keep the $A$ and $E$ molecules at the same coordinates when the unbinding reaction occurs. If $\Delta t$ is sufficiently large, then we can define $r_{\mathrm{bind}}$ as [119, Eq. (27)]
$$r_{\mathrm{bind}} = \left(\frac{3 k_1 \Delta t}{4\pi}\right)^{\frac{1}{3}}, \tag{2.72}$$
and this expression is only valid if $r_{\mathrm{rms}} \gg r_{\mathrm{bind}}$. Thus, if we are careful with our selection of $k_1$ and $\Delta t$, such that $r_{\mathrm{rms}} \gg r_{\mathrm{bind}}$ with $r_{\mathrm{bind}}$ given by (2.72), then we can legitimately use (2.72) to define $r_{\mathrm{bind}}$. If $r_{\mathrm{rms}} \gg r_{\mathrm{bind}}$ is not satisfied, then $r_{\mathrm{bind}}$ must be found using numerical methods; see [119]. In our simulations, we ensure that the use of (2.72) is justified.
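The validity check $r_{\mathrm{rms}} \gg r_{\mathrm{bind}}$ is easy to automate. The sketch below uses illustrative values near the diffusion-limited regime discussed in Section 2.5.4 ($k_1 \sim 10^{-19}\,\mathrm{molecule}^{-1}\,\mathrm{m}^3\,\mathrm{s}^{-1}$, a microsecond time step); the factor-of-10 criterion for "much greater than" is our own choice.

```python
import math

def r_rms(D_A, D_E, dt):
    """Root-mean-square one-step separation of an A-E pair, (2.71)."""
    return math.sqrt(2.0 * (D_A + D_E) * dt)

def r_bind(k1, dt):
    """Binding radius for a bimolecular reaction, (2.72);
    only valid when r_rms >> r_bind."""
    return (3.0 * k1 * dt / (4.0 * math.pi)) ** (1.0 / 3.0)

# Illustrative values: k1 near the diffusion-limited maximum, dt = 1 us,
# D_A and D_E typical of a small molecule and a protein, respectively.
k1, dt = 1e-19, 1e-6           # molecule^-1 m^3 s^-1, s
D_A, D_E = 1e-9, 1e-10         # m^2 / s
rb, rr = r_bind(k1, dt), r_rms(D_A, D_E, dt)
valid = rr > 10.0 * rb         # a simple "much greater than" criterion
```

For these values, $r_{\mathrm{bind}} \approx 2.9$ nm while $r_{\mathrm{rms}} \approx 47$ nm, so (2.72) may legitimately be used.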
If the corresponding unbinding reaction occurs in a later time step, then the molecule is re-labeled as separate A and E molecules and we do not change their locations until they diffuse in the following time step.

2.5.3 Simulating the Molecule Sources and the Transmitter

If the uth molecule source is a transmitter (including the intended TX, i.e., u = 1), then we generate the binary data sequence Wu based on Pu,1. If the current simulation time is the start of an interval where Wu[b] = 1, then NTX,u A molecules are initialized and centered at the uth source with an initial separation of 2RA between adjacent molecules so that together they form a spherical shape.

If the uth molecule source is a random noise source, then molecules are initialized according to the continuous noise generation function NGen,u(t). In the event that molecules must be created within a microscopic time step (i.e., at some time that is not a multiple of ∆t), then we track the exact time of creation and we apply the actual time step size (which is smaller than ∆t) to those molecules in the next step of the simulation.

When the RX needs to make an observation NRX(t), all free (i.e., unbound) A molecules that are within VRX are counted. Typically, observations are made at regular increments ∆tob within every bit interval. Since the RX is passive, sequence detection can be performed offline with the received signal NRX(t) or equivalently with the observation vector sb for each bit interval, where b ∈ {1, 2, . . . , B}. The design and performance of receiver detection will be studied in detail in Chapter 4.

2.5.4 Channel Parameter Values

It is important to have a sense of realistic parameter values when selecting parameters for numerical analysis and simulation. A typical animal cell is 5–20 µm in diameter, while bacterial cells are generally smaller; see [3, Ch. 1]. Most enzymes are proteins and are on the order of less than 10 nm in diameter; see [3, Ch. 4].
From the Einstein relation in (2.3), smaller molecules diffuse faster, so we may favor small molecules as information molecules, although the diffusion coefficients for smaller molecules as given by (2.3) will be less accurate since (2.3) was derived for infinitely large solute molecules. Many common small organic molecules, such as glucose, amino acids, and nucleotides, are about 1 nm in diameter. In the limit, single covalent bonds between two atoms are about 0.15 nm long; see [3, Ch. 2].

Higher rate constants imply faster chemical reactions. Bimolecular rate constants can be no greater than the collision frequency between the two reactants, i.e., every collision results in a reaction. The largest possible value of k1 is on the order of 1.66 × 10^−19 molecule^−1 m^3 s^−1; see [59, Ch. 10], where the limiting rate is listed as on the order of 10^8 L/mol/s. k−1 and k2 usually vary between 1 and 10^5 s^−1, with values as high as 10^7 s^−1. In theory, we are not limited to existing enzyme-substrate pairs; molecular engineering techniques can be used to modify the enzyme reaction rate, specificity, or thermal stability, or modify the function of an enzyme in the presence of other molecules; see [3, Ch. 10].

The simulation results presented throughout this dissertation are all based on one of the three systems defined in Table 2.1. Systems 1 and 2 are comparable to each other and are about an order of magnitude smaller than System 3, both in terms of the physical dimensions and the number of information molecules released in one impulse by the TX. This difference is deliberate for computational reasons, because both Systems 1 and 2 are used to assess the detection of bit sequences in Chapter 4, whereas System 3 is only considered in Chapter 3, where the RX is estimating the channel parameters from a single impulse of molecules from the TX. Systems 1 and 2 are also used in this chapter to study the channel impulse response, and System 1 is considered for the noise analysis and bit detection in Chapter 5.

Table 2.1: System parameters used throughout the dissertation.

Parameter          | Symbol | Units | System 1 | System 2       | System 3
Chapter(s)         | -      | -     | 2,4,5    | 2,4            | 3
RX Radius          | RRX    | µm    | 0.05     | 0.045          | 0.5
Molecules Released | NTX    | -     | 10^4     | 5000           | 10^5
Distance to RX     | d      | µm    | 0.5      | 0.3/Vary       | Various
A Radius           | RA     | nm    | 0.5      | 0.5            | 0.5
A Diffusion Coeff. | DA     | m^2/s | 10^−9    | 4.365 × 10^−10 | 10^−9
Degradation Rate   | k      | s^−1  | Vary     | -              | 62.5
Flow Towards RX    | v‖     | mm/s  | Vary     | Vary           | 2
Perpendicular Flow | v⊥     | mm/s  | Vary     | Vary           | 1
Sim. Time Step     | ∆t     | µs    | 0.5      | 0.5            | 100

The diffusion coefficient for A in System 1 (and in System 3) is similar to the diffusion coefficient of many small molecules in water at room temperature (see [80, Ch. 5]), and is also comparable to that of small molecules in blood plasma (see [120]). We define the molecule radius RA = 0.5 nm for Systems 1 and 3 only to declare how far apart A molecules must be placed when they are released by the TX. This size is larger than the size of single atoms and on the order of the size of small organic molecules that might be suitable for signaling; see [3, Ch. 2]. If we choose reference distance L = d, then a flow speed of v⋆ = 1 translates to a steady flow of 2 mm/s (on the order of average capillary blood speed, from 0.1 to 10 mm/s; see [120]).

The main difference between Systems 1 and 2 is that System 1 considers first-order degradation of the information molecules, whereas System 2 has Michaelis-Menten (i.e., enzyme) kinetics. The additional parameters needed to describe the enzyme reaction-diffusion behavior of System 2 are defined in Table 2.2. The diffusion coefficient for A in System 2 was found by solving (2.3) with molecule radius RA = 0.5 nm, a uniform viscosity of η = 10^−3 kg·m^−1·s^−1, and at a temperature of 25 °C. The corresponding value of DA, 4.365 × 10^−10 m^2/s, is only different from that of the other systems by about a factor of 2.
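Two of the figures above are easy to cross-check: the diffusion-limited bimolecular rate constant is a unit conversion of the 10^8 L/mol/s limit, and the System 2 diffusion coefficient follows from the Einstein relation (2.3). A sketch (the constants and helper names are ours; the temperature and viscosity are the values stated above):

```python
import math

N_AVOGADRO = 6.02214076e23   # molecules per mole
K_BOLTZMANN = 1.380649e-23   # J/K

def per_molar_to_per_molecule(k_molar):
    """Convert a bimolecular rate constant from L mol^-1 s^-1
    to molecule^-1 m^3 s^-1 (1 m^3 = 1000 L)."""
    return k_molar / (N_AVOGADRO * 1000.0)

def einstein_diffusion(radius, eta=1e-3, temperature=298.15):
    """Diffusion coefficient of a sphere from the Einstein relation (2.3)."""
    return K_BOLTZMANN * temperature / (6.0 * math.pi * eta * radius)

k1_max = per_molar_to_per_molecule(1e8)   # about 1.66e-19 molecule^-1 m^3 s^-1
d_a = einstein_diffusion(0.5e-9)          # about 4.37e-10 m^2/s for R_A = 0.5 nm at 25 C
```

Both values agree with the figures quoted in the text to within rounding of the physical constants.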
The chosen chemical reaction rates k1, k−1, and k2 are all at the high end of the range of possible values, but this is a side effect of simulating such a small physical environment. The channel impulse response can scale to dimensionally homologous systems that have (for example) more molecules and lower reactivities. Finally, we note that the root mean square of the separation of E and A molecules in a single time step, rrms = 22.9 nm, is much larger than the binding radius rbind = 2.88 nm found using (2.72), so we claim that our use of (2.72) without defining an unbinding radius is valid.

Overall, the parameter values for System 3 are the most realistic if we consider having transceivers that are about the size of cells. The RX has a radius of 0.5 µm, which is about the size of a small bacterial cell; see [3, Ch. 1]. The molecule degradation rate k of 62.5 s^−1 is sufficient, in the absence of flow, for an RX 4 µm from the TX to expect one fewer molecule at the expected peak concentration time than if k = 0, i.e., NRX|tx,0(tmax) = 6.5 instead of 7.5 (the derivation of tmax will be performed in Chapter 3).

Table 2.2: Enzyme reaction-diffusion parameters for System 2.

Parameter                | Symbol | Units            | System 2
Enzyme Volume            | VE     | µm^3             | 1
Enzyme Molecules         | NE     | -                | 5 × 10^4
E Diffusion Coefficient  | DE     | m^2/s            | 8.731 × 10^−11
EA Diffusion Coefficient | DEA    | m^2/s            | 7.276 × 10^−11
Binding Rate             | k1     | m^3/(molecule·s) | 2 × 10^−19
Unbinding Rate           | k−1    | s^−1             | 10^4
Degradation Rate         | k2     | s^−1             | 10^6
E Radius                 | RE     | nm               | 2.5
EA Radius                | REA    | nm               | 3
RMS Radius               | rrms   | nm               | 22.9
Binding Radius           | rbind  | nm               | 2.88

2.6 Numerical and Simulation Results

In this section, we present simulation results to assess the system model that will be considered for the remainder of this dissertation. We verify the expected channel impulse response, study the channel statistics, assess the independence of consecutive observations by the RX, and verify the simulation framework itself.
We refer to the base case of System 1 as that described by Table 2.1 without any molecule degradation or steady uniform flow, i.e., k = 0 and v‖ = v⊥ = 0. The base case of System 2 is that described by Tables 2.1 and 2.2 in the absence of flow. Unless otherwise noted, curves for expected values are obtained by evaluating (2.38), which applies the UCA. Results in this section that are dimensionless use the reference distance L = d, and the reference number of A molecules is NAref = NTX.

2.6.1 Expected Channel Impulse Response

In Fig. 2.3, we observe the average channel impulse response for System 1. We compare the base case with three variations that have k = 10^4 s^−1, v‖ = 1 mm/s, and v⊥ = 4 mm/s, respectively. Simulations are the average of 5 × 10^4 realizations. All simulations agree very well with the corresponding curves for expected values from (2.38). The base case has a maximum average observation of about 3.1 molecules at time t = 42 µs. A flow of v‖ = 1 mm/s is towards the RX from the TX and results in a maximum average observation of almost 4 molecules at about the same time as the maximum average observation in the base case. The flow v⊥ = 4 mm/s is disruptive and results in fewer molecules being observed at the RX. The degradation rate k = 10^4 s^−1 results in an even lower peak, where the A molecules are being permanently removed instead of carried away.

Figure 2.3: Average channel impulse response for the base case of System 1 compared with variations that have k = 10^4 s^−1, v‖ = 1 mm/s, and v⊥ = 4 mm/s. The UCA is applied without any noticeable loss in accuracy.

In Fig. 2.4, we compare the underlying Binomial statistics of the channel impulse response with the Poisson and Gaussian approximations for System 1, as simulated for Fig. 2.3, where for each variation of system parameters we make an observation at the time when the most molecules are expected, i.e., at tmax (which is derived in Chapter 3). We see that individual samples at time tmax can range in value from 0 to 9. The empirical CDFs and the Poisson approximations all match that of the Binomial CDF. The Gaussian CDFs are a poor approximation here, due to the small likelihood for an individual molecule to be observed (10^4 molecules were released) and the small total number of molecules expected.

Figure 2.4: The empirical and expected cumulative distribution function of NRX|tx,0(tmax) for System 1 and three variations. The Poisson CDF completely overlaps the Binomial CDF in all cases.

In Fig. 2.5, we observe the expected channel impulse response for the base case of System 2 and with a number of variations. Simulations are the average of 10^4 realizations. For each variation, a single parameter is assigned a smaller value than in the base case. There are two sets of expected value curves generated using (2.38): one for k = k1 CEtot (i.e., the lower bound) and one for k = k1 k2 CEtot/(k−1 + k2) (i.e., the approximation).

Figure 2.5: The expected channel impulse response for the base case of System 2, along with a series of variations of a single parameter (no enzymes, NTX = 2.5 × 10^3, NE = 2.5 × 10^4, d = 250 nm, k1 = 10^−19 m^3/(molecule·s), and k2 = 5 × 10^4 s^−1). All markers were the average of 10^4 simulations. The variable labels indicate which parameter was modified from its default value in System 2.

The results in Fig. 2.5 are presented dimensionlessly to highlight that some variations do not change the observations in dimensionless form. Specifically, reducing the number of molecules released NTX does not change the dimensionless number of molecules expected, since the reference number of molecules is also adjusted to compensate. However, reducing NTX does make the lower bound on the expected channel impulse response slightly more accurate, because decreasing NTX indirectly decreases the number of intermediate EA molecules that are created, and the lower bound was derived under the assumption that CEA → 0. Reducing NE or k1 by one half are interchangeable variations since they equally reduce the rate of A molecules binding to form intermediate molecules. System 2 without any enzymes results in measurably more A molecules present and a much slower decrease over time from the maximum observation. Not surprisingly, the impulse response is most sensitive to a change in the distance d, which was reduced by only 50 nm to 250 nm but resulted in about twice as many molecules being observed at the peak observation.

The only variation of System 2 in Fig. 2.5 where using k as an approximation is distinguishable from using it as a lower bound is when k2 = 5 × 10^4 s^−1, because we no longer have k2 ≫ k−1 (recall that k−1 = 2 × 10^4 s^−1). The lower bound in this case is the same as that for the base case of System 2.

In Fig. 2.6, we compare the underlying Binomial statistics of the channel impulse response with the Poisson and Gaussian approximations for System 2, as simulated for Fig. 2.5, where for each variation of system parameters we make an observation at the time when the most molecules are expected, i.e., at tmax. For clarity, we only show curves for the CDFs that deviated the most from the base case.
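The relative accuracy of the Poisson and Gaussian approximations of a Binomial count in these regimes can be illustrated with a short standard-library sketch (here using System 1's peak regime of about 3.1 expected molecules out of NTX = 10^4 released; the helper names are ours):

```python
import math

def binom_cdf(x, n, p):
    """Binomial CDF by direct pmf summation (adequate for small x)."""
    total, pmf = 0.0, (1.0 - p) ** n              # pmf at k = 0
    for k in range(x + 1):
        total += pmf
        pmf *= (n - k) / (k + 1.0) * p / (1.0 - p)  # recurrence pmf(k) -> pmf(k+1)
    return total

def poisson_cdf(x, lam):
    """Poisson CDF by direct pmf summation."""
    total, pmf = 0.0, math.exp(-lam)
    for k in range(x + 1):
        total += pmf
        pmf *= lam / (k + 1.0)
    return total

def gauss_cdf(x, mean, var):
    """Gaussian CDF via the error function."""
    return 0.5 * (1.0 + math.erf((x - mean) / math.sqrt(2.0 * var)))

# Regime of Fig. 2.4: N_TX = 10^4 released, mean peak observation about 3.1
n, p = 10**4, 3.1e-4
lam = n * p
err_pois = max(abs(binom_cdf(k, n, p) - poisson_cdf(k, lam)) for k in range(10))
err_gauss = max(abs(binom_cdf(k, n, p) - gauss_cdf(k, lam, lam * (1.0 - p))) for k in range(10))
```

The Poisson CDF tracks the Binomial CDF to within a small fraction of a percent, while the Gaussian error is orders of magnitude larger, consistent with Figs. 2.4 and 2.6.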
For all Binomial CDFs shown, we use the lower bound for k in (2.38). We see that individual samples at time tmax can range in value from 0 to 15. The empirical CDFs do not match the Binomial CDF as well as they did for System 1, but the Binomial CDF is still very accurate overall, and we will continue to use the lower bound for k in System 2 for the remainder of this dissertation. Also, as with System 1, the Poisson approximations of the Binomial CDF are much more accurate than the Gaussian approximations.

Figure 2.6: The empirical and expected cumulative distribution functions of NRX|tx,0(tmax) for System 2 and three variations (no enzymes, NTX = 2.5 × 10^3, and d = 250 nm). The Poisson CDF completely overlaps the Binomial CDF in all cases.

2.6.2 Uniform Concentration Assumption

We now present a more comprehensive test of the uniform concentration assumption. Specifically, we measure the deviation of (2.37) from (2.34). We can do so without simulations because the simulations themselves do not apply the UCA. Eq. (2.34) can be evaluated using (2.35) when v⋆⊥ = 0 and numerically otherwise. We do not consider molecule degradation in our comparisons because the UCA is an assumption about diffusion and not chemical reactions. Since L = d, we are only interested in R⋆RX < 1. Smaller values of R⋆RX correspond to a smaller receiver or the RX placed further from the transmitter. In Fig. 2.7, we show how much N⋆RX|tx,0(t⋆) at a spherical RX of radius R⋆RX deviates from the true value over time in the absence of flow when we assume that the concentration throughout V⋆RX is uniform.
= 16and then it is overestimated,but the deviation tends to 0 as t? → ∞. This transition is intuitive; molecules tendto diffuse to the edge of V ?RXbefore they reach the center. However, the center ofV ?RXis closer to the TX than most of V ?RX, leading to the eventual overestimate ofN?RX|tx,0 (t?).The deviation in N?RX|tx,0 (t?) should be no more than a few percent. We are71Chapter 2. Channel Model and Impulse Response0.1 0.2 0.3 0.4 0.5−0.1−0.08−0.06−0.04−0.0200.020.040.06t∗RelativeDeviationofN∗ RX|tx,0(t∗)  R∗RX = 0.05R∗RX = 0.15R∗RX = 0.5Figure 2.7: The relative deviation in N?RX|tx,0 (t?) from the true value (2.34) at theRX when the uniform concentration assumption (2.37) is applied and there is no flowor molecule degradation. The deviation is evaluated as ((2.37)-(2.34))/(2.34). TheRX radius R?RXis increased in increments of 0.05, where a larger R?RXresults in alarger deviation.generally interested in values of t? > 0.1 (see Fig. 2.5), so the initial large deviationfor all values of R?RXis not a major concern. Small values of R?RX, i.e., R?RX≤ 0.15,maintain a deviation of less than 2 % for all t? > 0.1. Thus, we claim that the uniformconcentration assumption is sufficient when studying RX's whose radii are no morethan 15% of the distance from the center of the RX to the TX. All 3 system modelsin Table 2.1 satisfy this requirement.To assess the UCA for different flows in Figs. 2.8 and 2.9, we set R?RX= 0.1 sothat the deviation of the UCA in the no-flow case is less than 1 % for all t? > 0.1.The maximum number of molecules in the no-flow case is expected at t? = 16. Weseparately vary v?‖ and v?⊥ because any flow is equivalent to a combination of v?‖ andv?⊥.In Fig. 2.8, we assess the uniform concentration assumption over time while vary-ing v?‖ from −5 to 5 in increments of 1. All flows severely underestimate N?RX|tx,0 (t?)72Chapter 2. 
(i.e., the deviation is much less than 0) for t⋆ < 0.05. This underestimation is because molecules are expected to reach the edge of V⋆RX (and thus be observed) before they are expected at the center. When v⋆‖ is positive, N⋆RX|tx,0(t⋆) is overestimated earlier than in the no-flow case because the peak number of molecules is observed sooner and the center of V⋆RX is closer to the transmitter than most of V⋆RX. When v⋆‖ is negative, N⋆RX|tx,0(t⋆) is underestimated longer than in the no-flow case. Importantly, applying the uniform concentration assumption to all degrees of flow within the range −2 ≤ v⋆‖ ≤ 4 (for which advection is not particularly dominant, as we will see in Chapter 4 when studying RX design) introduces a deviation of less than 2% for all t⋆ > 0.1.

Figure 2.8: The relative deviation in N⋆RX|tx,0(t⋆) from the true value (2.34) at the receiver when the uniform concentration assumption (2.37) is applied. The flow v⋆‖ is varied from −5 to 5 in increments of 1.

In Fig. 2.9, we assess the uniform concentration assumption over time while varying v⋆⊥ from 0 to 5 (by symmetry, N⋆RX|tx,0(t⋆) due to v⋆⊥ < 0 is equal to that due to v⋆⊥ > 0). Similar to when v⋆‖ < 0, which is also a disruptive flow, a nonzero v⋆⊥ increases the time that N⋆RX|tx,0(t⋆) is underestimated. However, this does not significantly impact the general accuracy of the uniform concentration assumption, as the deviation for all flows shown is no more than 1.7% for all t⋆ > 0.1.

Figure 2.9: The relative deviation in N⋆RX|tx,0(t⋆) from the true value (2.34) at the receiver when the uniform concentration assumption (2.37) is applied. The flow v⋆⊥ is varied from 0 to 5 in increments of 1.

In summary, we observe that the UCA cannot be universally applied to all degrees of flow at any time with a high degree of accuracy. The deviation of the assumption generally increases with the magnitude of the flow, and the assumption is least accurate immediately after the release of molecules by the transmitter. The benefit of the assumption is analytical tractability and simplicity, and we will continue to apply the UCA throughout this dissertation. Two notable exceptions are where we simulate high flow speeds in Chapter 4 and where we derive the general impact of a noise source in Chapter 5.

2.6.3 Sample Independence

The derivation of optimal channel parameter estimators in Chapter 3, the design of the optimal sequence detector in Chapter 4, and the derivation of the average error probability of a weighted sum detector in Chapter 4 are based on the assumption that all observations made at the receiver are independent of each other. Thus, we consider the mutual information between RX observations in the base case of System 2 without enzymes as a function of the time between the observations, when the only emission of molecules by the transmitter is at time t = 0. In Fig. 2.10, we show the evaluation of the mutual information for the base case when the reference samples are taken at times tm = {10, 20, 50} µs. Similar simulation results are observed when flow is present or enzymes are added, and we omit these results for clarity. For the expected value curves, we evaluate (2.60) numerically as described in Section 2.4, whereas the simulation curves are found by constructing the joint and marginal PDFs using 5 × 10^5 independent simulations. Base 2 is used in the logarithm so that mutual information is measured in bits.

Figure 2.10: The mutual information of observations made by the RX in System 2, in bits, measured as a function of ∆tob = tm+1 − tm for reference samples at tm = {10, 20, 50} µs.

In Fig. 2.10, we see that the mutual information drops below 0.01 bits within 4 µs of all reference samples. The agreement between the expected value curves and those generated via simulations is quite good and only limited by the accuracy of Pleave(t) and Parr(t) (which critically rely on the uniform concentration assumption).

2.7 Conclusion

In this chapter, we introduced the system model and established the analytical preliminaries that will be considered for the remainder of the dissertation. We modeled a diffusive MC environment with steady uniform flow in any arbitrary direction, sources of information molecules in addition to the transmitter, and mechanisms to degrade the information molecules in the propagation environment. Transmitters release impulses of molecules with ON/OFF keying modulation, and there is a single receiver of interest that behaves as a passive observer. We described the model in both dimensional and dimensionless forms. We derived the expected channel impulse response and assessed the statistics of the observations made at the receiver. We studied the time-varying independence of receiver observations and also described a custom microscopic simulation framework. The simulation results verified the accuracy of the expected channel impulse response and its statistics, assessed the accuracy of the uniform concentration assumption, and measured the independence of consecutive receiver observations.
The remaining chapters of this dissertation will refer to this chapter for the model, the analysis of the channel impulse response, and the simulation parameters.

Chapter 3

Joint Channel Parameter Estimation

3.1 Introduction

In Chapter 2, we observed that the channel impulse response depends on the environmental parameters. Therefore, we claim that the response can be used as a local noisy observation to estimate the values of those parameters. This is especially true when the expected impulse response can be written in closed form (although, as we presented in Chapter 2, simplifying assumptions are generally needed to obtain a closed-form expression). By observing the arrival of molecules from a transmitter, an intelligent receiver might learn about the current local conditions of the propagation environment, which is essential for some prospective MC applications.

For example, consider a healthcare application where a network of microscale sensors is deployed to monitor a patient's bloodstream. The sensors might need to be mounted at regular intervals along the blood vessel walls, so they need to estimate the distance separating themselves before mounting. By monitoring the remaining individual channel parameters, changes could be detected and the cause of the change might be inferred. The blood flow velocity could be a proxy for blood pressure. The diffusion coefficient could be a proxy for blood composition and used to identify major changes in blood cell counts, as described in [70]. The chemical kinetics of the information molecules could be a proxy for blood pH; chemical reactivity varies with pH, as discussed in [59, Ch. 10]. In summary, knowledge of the individual channel
parameters can be more insightful than knowledge of the expected channel impulse response alone, with the caveat that estimating individual parameters is only feasible if an expression for the channel impulse response as a function of the parameters is available.

We note that there are also macroscale estimation methods that are used to measure channel parameters. For example, there are various experimental methods to measure diffusion coefficients, such as diaphragm cells and Taylor dispersion; see [80, Ch. 5]. However, these methods are appropriate for laboratory environments and might not be suitable for on-going measurements in confined settings where the deployment of an MC system might be less invasive.

In this chapter, we study local joint channel parameter estimation in a diffusive MC environment, where in the most general case we assume that we know the form of the expression for the expected channel impulse response but none of the individual parameter values. Specifically, we consider the general system model that we introduced in Chapter 2, where a fixed receiver in an unbounded 3-dimensional environment observes molecules released by a fixed impulsive source. The molecules experience steady uniform flow and can probabilistically degrade. We ignore the presence of other molecule sources. For tractability, the receiver is a passive observer that can perfectly count the number of information molecules within its volume at a given instant. Each count is an observation, and one or multiple observations are used to estimate the parameters. Given the assumptions identified in Section 1.4, estimator performance within our model can serve as a bound or benchmark for performance in more realistic environments.

Existing literature on parameter estimation via diffusive MC has been limited to one unknown parameter. The distance between devices has been estimated in [71–74],
whereas the time of transmitter release (i.e., synchronization) has been estimated in [64, 75]. Parameter estimation has only been considered in environments with diffusion alone and not with fluid flow or molecule degradation.

In our model, when the transmitter releases an impulse of molecules, the unknown parameters are the time that the molecules are released, the number of molecules released, the distance to the receiver, the diffusion coefficient, the fluid flow vector, and the molecule degradation rate. We are interested in determining the best possible performance of the joint estimation of all of these parameters, as a function of the observations made by the receiver. We aim to provide bounds on the performance of any estimation protocol.

We focus in this chapter on classical estimation methods, where we assume no prior knowledge about the probability distribution of the parameters being estimated. Bayesian approaches assume that the unknown parameter is sampled from a known distribution; see [121, Ch. 10]. We leave the study of such approaches for future work.

We do not claim that estimating all parameters simultaneously is a practical strategy. Rather, our analysis easily simplifies to the estimation of any subset of the channel parameters. Furthermore, we gain insight into how the knowledge of any one parameter decreases the error in estimating any of the other parameters. The primary contributions of this chapter are summarized as follows:

1. We derive the Fisher Information Matrix (FIM) of our joint parameter estimation problem to give the Cramer-Rao lower bound (CRLB) on the variance of estimation error of any locally unbiased estimator as a function of independent observations of a transmitted impulse. Bounds on the unbiased estimation of any subset of the channel parameters can be found by considering the corresponding elements of the FIM (e.g., if only estimating the distance, then only
1 of the 28 unique terms in the FIM is needed).

2. We study maximum likelihood (ML) estimation of our joint parameter estimation problem. Closed-form solutions exist for some single-parameter estimation problems with one observation. Otherwise, ML estimates can be determined numerically, via for example the Newton-Raphson method or an exhaustive search.

3. We consider the presence of singularities in the FIM, in which case any unbiased estimator will have infinite variance. Dealing with singularities is an open problem in the parameter estimation literature, cf. e.g. [122–125]. Singularities in the FIM, or being in the vicinity of a singularity, can have an impact when estimating one parameter or multiple parameters simultaneously.

4. We propose peak-based estimators for low-complexity estimation of a single parameter. Variants of peak-based distance estimators were originally presented in [72, 73]. We present a comprehensive discussion of how the peak molecule observation and/or the time of the peak number of observed molecules can be used to estimate any single parameter, given knowledge of the other parameters.

We note that we focus on parameter estimation when there is only one device releasing molecules, i.e., the transmitter, and they are observed by the receiver. We coin the term one-way protocols to refer to estimation protocols using this approach, and to distinguish them from two-way protocols (such as those proposed for distance estimation in [71, 72, 74]), which rely on feedback from the receiver back to the transmitter so that the transmitter makes the estimate. In general, two-way protocols can be no more accurate than one-way protocols, because two-way methods require the subsequent detection of two molecule impulses.

The rest of this chapter is organized as follows. In Section 3.2, we review the expected channel impulse response from Chapter 2 and review the CRLB and ML estimation.
We derive the FIM of the joint estimation problem, from which the CRLB can be found, in Section 3.3. In Section 3.4, we apply examples of ML estimation to the joint estimation problem and present the peak-based estimation protocols. We present numerical and simulation results in Section 3.5. Conclusions are drawn in Section 3.6.

3.2 System Model and Estimation Preliminaries

In this section, we describe the diffusive environment and the expected channel impulse response. We review the definition of the CRLB for vector parameter estimation. We also review ML estimation and the Newton-Raphson method for numerical evaluation of the ML estimate.

3.2.1 Physical Environment

We consider the system model described in Chapter 2, limited to the intended TX as the only source of A molecules. We also restrict our modeling of chemical reactions to first-order degradation as defined in (2.2), since we would not be able to estimate the individual Michaelis-Menten reaction rate constants (thus, we also drop the subscript A from all corresponding parameters). This simplified model is summarized in Fig. 3.1. The unbounded environment has the RX with volume VRX centered at the origin and the point TX at a distance of d from the origin. There is a steady uniform flow v with components v‖ and v⊥, where v‖ is the component of v in the direction of a line pointing from the TX towards the RX.

Given our system model, we can write the expected channel impulse response.
Once released by the TX, the behavior of each molecule is that of a biased random walk (biased by the steady flow $\vec{v}$) until it undergoes degradation via the chemical reaction described by first-order degradation rate constant $k$, e.g., molecule 3.

We apply the uniform concentration assumption and recall (2.38) as

$\overline{N}_{RX|tx}(t) = \frac{N_{TX} V_{RX}}{(4\pi D t)^{3/2}} \exp\left(-kt - \frac{d_{ef}^2}{4Dt}\right),$  (3.1)

where $d_{ef}^2 = (d - v_\parallel t)^2 + (v_\perp t)^2$ is the square of the effective distance from the TX to the center of the RX. We drop the 0 in the subscript of $\overline{N}_{RX|tx}(t)$ because there is only one release of molecules and we generally assume in this chapter that the RX does not know the exact time when the TX releases the impulse of molecules. Thus, we define $t_{tx,ef} = t - t_{tx,0}$ as the time elapsed since the molecules were released, i.e., $t_{tx,ef} > 0$, and we re-write (3.1) as

$\overline{N}_{RX|tx}(t) = \frac{N_{TX} V_{RX}}{(4\pi D t_{tx,ef})^{3/2}} \exp\left(-k t_{tx,ef} - \frac{d_{ef}^2}{4D t_{tx,ef}}\right),$  (3.2)

where $d_{ef}^2 = (d - v_\parallel t_{tx,ef})^2 + (v_\perp t_{tx,ef})^2$. The actual number of molecules observed by the RX is $N_{RX}(t)$, and the time-varying mean of $N_{RX}(t)$ is given by (3.2). The only variable in (3.2) that we always assume is known to the RX is its volume $V_{RX}$.

Table 3.1: Diagonal elements of the FIM for each desired parameter, as found by Theorem 3.1 in Section 3.3.1.

- Distance from TX to RX, $d$: $\sum_{m=1}^{M} \frac{\overline{N}_{RX|tx}(t_m)}{4D^2}\left(v_\parallel - \frac{d}{t_{tx,ef}}\right)^2$
- TX Release Time, $t_{tx,0}$: $\sum_{m=1}^{M} \overline{N}_{RX|tx}(t_m)\left(\frac{3}{2t_{tx,ef}} + k + \frac{v_\parallel^2 + v_\perp^2}{4D} - \frac{d^2}{4D t_{tx,ef}^2}\right)^2$
- Diffusion Coefficient, $D$: $\sum_{m=1}^{M} \frac{\overline{N}_{RX|tx}(t_m)}{4D^2}\left(3 - \frac{(d - v_\parallel t_{tx,ef})^2 + v_\perp^2 t_{tx,ef}^2}{2D t_{tx,ef}}\right)^2$
- Degradation Rate, $k$: $\sum_{m=1}^{M} t_{tx,ef}^2\, \overline{N}_{RX|tx}(t_m)$
- Flow Towards RX, $v_\parallel$: $\sum_{m=1}^{M} \frac{\overline{N}_{RX|tx}(t_m)}{4D^2}\left(d - v_\parallel t_{tx,ef}\right)^2$
- Perpendicular Flow, $v_\perp$: $\sum_{m=1}^{M} \frac{v_\perp^2 t_{tx,ef}^2}{4D^2}\, \overline{N}_{RX|tx}(t_m)$
- Molecules Released by TX, $N_{TX}$: $\sum_{m=1}^{M} \frac{\overline{N}_{RX|tx}(t_m)}{N_{TX}^2}$
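The expected impulse response in (3.2) is simple to evaluate numerically. A minimal sketch in Python (the function and variable names are ours, and the example values loosely follow Table 3.2 in Section 3.5, assuming a spherical RX):

```python
import math

def expected_impulse_response(t_ef, d, D, k, v_par, v_perp, N_TX, V_RX):
    """Evaluate Eq. (3.2): expected number of molecules observed at the RX.

    t_ef is the time elapsed since release, t - t_tx0 (must be > 0).
    """
    d_ef_sq = (d - v_par * t_ef) ** 2 + (v_perp * t_ef) ** 2
    return (N_TX * V_RX / (4 * math.pi * D * t_ef) ** 1.5
            * math.exp(-k * t_ef - d_ef_sq / (4 * D * t_ef)))

# Example values (ours): spherical RX of radius 0.5 um, d = 6 um,
# D = 1e-9 m^2/s, k = 62.5 1/s, v_par = 2 mm/s, v_perp = 1 mm/s,
# N_TX = 1e5 molecules, sampled at t_ef = 2 ms.
V_RX = 4.0 / 3.0 * math.pi * (0.5e-6) ** 3
N_mean = expected_impulse_response(2e-3, 6e-6, 1e-9, 62.5, 2e-3, 1e-3, 1e5, V_RX)
```

For these values the mean is on the order of $10^2$ molecules, consistent with the scale of the responses plotted later in Fig. 3.2.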
We summarize the remaining channel parameters in Table 3.1, and we assume that some subset of those parameters are unknown and must be estimated.

3.2.2 The Cramer-Rao Lower Bound

The Cramer-Rao lower bound is a bound on the error variance of any (locally) unbiased estimator; biased estimators, or estimators that are locally biased, can outperform the CRLB. Here, we review the definition of the CRLB for a vector parameter as described in [121, Ch. 3]. The definition easily simplifies in the case of a single unknown parameter.

Assume that we have a vector of $M$ observations $\mathbf{s} = [s_1, \ldots, s_M]^T$ and a vector of $\Lambda$ unknown parameters $\Theta = [\Theta_1, \ldots, \Theta_\Lambda]^T$, where $[\cdot]^T$ is vector transpose. Assume that we know the conditional probability mass function (PMF) of the observations, $p(\mathbf{s}|\Theta)$. Under standard regularity conditions (see [126, Ch. 1.7]), and by [121, Th. 3.2], the covariance matrix of any unbiased estimator for $\Theta$, $\mathbf{C}_{\hat{\Theta}}$, satisfies

$\mathbf{C}_{\hat{\Theta}} - \mathbf{I}^{-1}(\Theta) \geq 0,$  (3.3)

where $\geq 0$ means that the matrix is positive semi-definite. An estimator is unbiased if $E[\hat{\Theta}] = \Theta$. The elements of the Fisher information matrix $\mathbf{I}(\Theta)$ are given by

$[\mathbf{I}(\Theta)]_{\Theta_i,\Theta_j} = -E\left[\frac{\partial^2 \ln p(\mathbf{s}|\Theta)}{\partial \Theta_i\, \partial \Theta_j}\right],$  (3.4)

where $E[\cdot]$ is the expectation taken with respect to $p(\mathbf{s}|\Theta)$, and the derivatives are evaluated at the true value of $\Theta$. For a positive semi-definite matrix, the diagonal elements are non-negative. Thus, from (3.3) we have

$\left[\mathbf{C}_{\hat{\Theta}} - \mathbf{I}^{-1}(\Theta)\right]_{\Theta_i,\Theta_i} \geq 0,$  (3.5)

and

$\mathrm{var}(\hat{\Theta}_i) = \left[\mathbf{C}_{\hat{\Theta}}\right]_{\Theta_i,\Theta_i} \geq \left[\mathbf{I}^{-1}(\Theta)\right]_{\Theta_i,\Theta_i},$  (3.6)

where $\mathrm{var}(\hat{\Theta}_i)$ is defined as the variance of the estimation error of parameter $\Theta_i$, i.e.,

$\mathrm{var}(\hat{\Theta}_i) = E\left[\left(\hat{\Theta}_i - E[\hat{\Theta}_i]\right)^2\right].$  (3.7)

Thus, the CRLB on the error variance of the $i$th parameter, when all $\Lambda$ parameters are jointly estimated by an unbiased estimator, is given by the $i$th diagonal element of the inverse of the FIM.
The elements of the FIM are found using (3.4).

3.2.3 Maximum Likelihood Estimation

ML estimation is known as a "turn-the-crank" procedure because it can be procedurally implemented for many estimation problems where the observation PMF is known; see [121, Ch. 7] and examples of exceptions in [127, Ch. 6]. Conventionally, the observations are assumed to be independent and identically distributed. However, in our model, we will have observations that are not identically distributed, since each observation is made at a different sampling time. Nevertheless, it is generally accepted that, in most cases, ML estimation is asymptotically efficient in the sense of the CRLB as the number of observations grows large, i.e., as $M \to \infty$, even if the observations are drawn from different PMFs; see [127, Ch. 6]. We cannot make any general claims about the bias or the relative performance of ML estimation for a finite number of observations, though we will observe the efficiency of ML estimation in practice as more observations are made.

The ML estimate of vector parameter $\Theta$ is given as follows:

$\hat{\Theta}\big|_{ML} = \arg\max_{\Theta}\, p(\mathbf{s}|\Theta),$  (3.8)

i.e., the ML estimate of $\Theta$ is the vector that maximizes the observation PMF, given the observation vector $\mathbf{s}$. We will find that there are special cases, particularly if there is one observation and one unknown parameter, where we can write the ML estimate in closed form. In general, it can be found numerically. For example, we can consider the Newton-Raphson method to avoid performing an exhaustive search (the latter becomes computationally cumbersome when there are multiple unknown parameters). The Newton-Raphson method begins with an initial estimate $\hat{\Theta}_0$. The $(c+1)$th estimate is found iteratively as [121, Eq. (7.48)]

$\hat{\Theta}_{c+1} = \hat{\Theta}_c - \left[\frac{\partial^2 \ln p(\mathbf{s}|\Theta)}{\partial \Theta\, \partial \Theta^T}\right]^{-1} \frac{\partial \ln p(\mathbf{s}|\Theta)}{\partial \Theta}\, \bigg|_{\Theta = \hat{\Theta}_c},$  (3.9)

where

$\left[\frac{\partial^2 \ln p(\mathbf{s}|\Theta)}{\partial \Theta\, \partial \Theta^T}\right]_{i,j} = \frac{\partial^2 \ln p(\mathbf{s}|\Theta)}{\partial \Theta_i\, \partial \Theta_j} \quad \forall\, i, j \in \{1, \ldots, \Lambda\}.$
(3.10)

The convenience in implementing the Newton-Raphson method is that the expressions for the derivatives required in (3.9) and (3.10) can be found while deriving the elements of the FIM in (3.4). We will provide examples of this procedure in Section 3.4.1. We must also recognize the limitations of the Newton-Raphson method, as discussed in [121, Ch. 7]. The method is not guaranteed to converge, or it might converge to a local maximum. The method can quickly diverge if the current estimate results in an FIM that is close to singular. Generally, the ML estimate will be found if the initial estimate $\hat{\Theta}_0$ is close to the ML estimate and not in the vicinity of singularities (we discuss the meaning of being in the vicinity of a singularity in the FIM in further detail in Section 3.4.1). The issue of convergence could be addressed by implementing the expectation-maximization (EM) algorithm, if an appropriate decomposition of the problem could be found (see [121, Ch. 7]). Such an alternative numerical approach, which we leave for future work, could also provide more insight into practical estimator design.

3.3 Joint Parameter Estimation Performance

In this section, we first derive the FIM of the joint parameter estimation problem in diffusive MC with steady uniform flow and first-order molecule degradation. Then, we present simple examples of how to use the FIM to find the CRLB (following the methodology in Section 3.2.2) and comment on situations where the FIM is singular, i.e., where the CRLB does not exist.

3.3.1 Main Result

To derive the FIM, we first need the joint observation PMF $p(\mathbf{s}|\Theta)$ for our problem. The TX makes a single release of $N_{TX}$ molecules at time $t = t_{tx,0}$. Our observations are the discrete number of molecules found within $V_{RX}$ at the sampling times, i.e., $s_m = N_{RX}(t_m)$, where the $m$th observation is made at time $t_m$.
We assume that the time between successive observations is sufficient for each observation $s_m$ to be independent (we discussed the independence of observations in detail in Chapter 2). We will also assume that the individual observations, which are Binomially distributed, can be best approximated as Poisson random variables whose means are the expected values of the observations at the corresponding times. Thus, the joint PMF is

$p(\mathbf{s}|\Theta) = \prod_{m=1}^{M} \overline{N}_{RX|tx}(t_m)^{s_m} \exp\left(-\overline{N}_{RX|tx}(t_m)\right) \big/ s_m!,$  (3.11)

where $\overline{N}_{RX|tx}(t_m)$ is as given by (3.2). The logarithm of the joint PMF is

$\ln p(\mathbf{s}|\Theta) = \sum_{m=1}^{M} \left[s_m \ln \overline{N}_{RX|tx}(t_m) - \ln s_m! - \overline{N}_{RX|tx}(t_m)\right].$  (3.12)

We summarize the channel parameters that we wish to estimate in Table 3.1. From (3.12) and (3.2), the FIM can be found. We present the final result in the following theorem:

Theorem 3.1 (FIM of the Joint Estimation Problem). The elements of the Fisher information matrix for the joint parameter estimation problem are of the form

$[\mathbf{I}(\Theta)]_{\Theta_i,\Theta_j} = \sum_{m=1}^{M} G_{\Theta_i,m}\, G_{\Theta_j,m}\, \overline{N}_{RX|tx}(t_m),$  (3.13)

where $G_{\Theta_i,m}$ is a unique term for parameter $\Theta_i$ and we note that the ordering of the elements in $\mathbf{I}(\Theta)$ is arbitrary. The $G_{\Theta_i,m}$ terms for the channel parameters are as follows:

$G_{d,m} = \frac{1}{2D}\left(v_\parallel - \frac{d}{t_{tx,ef}}\right),$  (3.14)

$G_{t_{tx,0},m} = \frac{3}{2t_{tx,ef}} + k + \frac{v_\parallel^2 + v_\perp^2}{4D} - \frac{d^2}{4D t_{tx,ef}^2},$  (3.15)

$G_{D,m} = \frac{1}{2D}\left[\frac{1}{2D}\left(\frac{d^2}{t_{tx,ef}} - 2 d v_\parallel + t_{tx,ef}\left(v_\parallel^2 + v_\perp^2\right)\right) - 3\right],$  (3.16)

$G_{k,m} = -t_{tx,ef},$  (3.17)

$G_{v_\parallel,m} = \frac{1}{2D}\left(d - v_\parallel t_{tx,ef}\right),$  (3.18)

$G_{v_\perp,m} = -\frac{v_\perp t_{tx,ef}}{2D},$  (3.19)

$G_{N_{TX},m} = \frac{1}{N_{TX}},$  (3.20)

where here $t_{tx,ef} = t_m - t_{tx,0}$. The diagonal elements of the FIM are presented in Table 3.1, such that $[\mathbf{I}(\Theta)]_{\Theta_i}$ is the diagonal element associated with parameter $\Theta_i$. The 21 unique off-diagonal elements can be analogously found from (3.13).

Proof. The proof can be shown by applying the properties of logarithms and exponentials and the rules of differentiation¹ to (3.12) and (3.2), and by noting that (by definition) $E[s_m] = \overline{N}_{RX|tx}(t_m)$. It can be shown that the $G_{\Theta_i,m}$ terms come from the derivative of the logarithm of the joint PMF in (3.12) with respect to $\Theta_i$, i.e.,

$\frac{\partial \ln p(\mathbf{s}|\Theta)}{\partial \Theta_i} = \sum_{m=1}^{M} G_{\Theta_i,m} \left(s_m - \overline{N}_{RX|tx}(t_m)\right).$  (3.21)

¹An alternative (equivalent) derivation can be made directly from (3.11) and (3.2), as identified by an anonymous reviewer. We can recognize that the Fisher information of the mean of a single Poisson distribution is the inverse of that mean, and then apply the commutative property of Fisher information for independent Poisson distributions and the reparametrization rule for Fisher information; see [127, Ch. 2].

3.3.2 Examples of the CRLB

The size of the FIM for a specific problem depends on the number of unknown channel parameters, i.e., given that there are $\Lambda$ unknown parameters, the FIM will be a $\Lambda \times \Lambda$ matrix. The size of the FIM does not depend on the number of parameters that we want to estimate. If we want to estimate $Q$ parameters, then we should have $Q \leq \Lambda$. Here, we present two basic examples of using the FIM to find the CRLB. We consider estimating the distance $d$, then jointly estimating $d$ and the molecule release time $t_{tx,0}$, because the distance between any pair of devices in the same environment can be unique, and every device can have its own internal synchronization. Thus, $d$ and $t_{tx,0}$ are arguably the most critical parameters when establishing a communication link between a pair of devices in a diffusive MC environment.

The simplest scenario is the estimation of a single parameter when we assume that all other parameters are known. By (3.6), we see that we only need to invert the corresponding entry in Table 3.1, and we can write the lower bound on the variance of any unbiased distance estimator as

$\mathrm{var}(\hat{d}) \geq \frac{4D^2}{\sum_{m=1}^{M} \left(v_\parallel - \frac{d}{t_{tx,ef}}\right)^2 \overline{N}_{RX|tx}(t_m)}.$  (3.22)

Similarly, the FIM for any one unknown parameter has a single element and it can be easily inverted to find the CRLB. Equations for the CRLB give us insight into the factors that affect the accuracy of an estimate.
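For instance, the bound in (3.22) can be evaluated by accumulating the FIM terms of Theorem 3.1; a sketch in Python (helper names are ours), where the noiseless means are supplied by the caller:

```python
def crlb_distance(times_ef, N_mean, d, D, v_par):
    """Single-parameter CRLB for the distance d, Eq. (3.22).

    times_ef: effective sampling times t_tx,ef.
    N_mean:   the corresponding expected observations NRX|tx(t_m).
    """
    # Scalar FIM: sum of G_{d,m}^2 * mean over samples, from Eq. (3.13)/(3.14)
    fim = sum(((v_par - d / t) / (2 * D)) ** 2 * N
              for t, N in zip(times_ef, N_mean))
    return 1.0 / fim  # CRLB = inverse of the (scalar) FIM
```

Since every sample adds a non-negative term to the scalar FIM, keeping the old sampling times and adding new ones can never increase this single-parameter bound.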
For example, from (3.22) we see that a more accurate estimate might be possible if more samples are taken, i.e., by increasing $M$. The same observation can be made for the estimation of any single parameter via inspection of the diagonal elements of the FIM in Table 3.1. The impact of some parameters, such as $D$ on the estimation of $d$, or $t_m$ on the estimation of the degradation rate, is not immediately clear because the parameters appear both inside and outside the $\overline{N}_{RX|tx}(t_m)$ term in the corresponding FIM element. However, we can see from Table 3.1 that increasing the number of molecules $N_{TX}$ will also increase the bound on the variance of estimation of $N_{TX}$, because $\overline{N}_{RX|tx}(t_m)$ is only scaled by a factor of $N_{TX}$ in the corresponding element of the FIM.

For $\Lambda > 1$, we must perform a matrix inversion to obtain the CRLB. Consider $\Lambda = 2$ where $\Theta = [d, t_{tx,0}]^T$. The structure of the FIM is then

$\mathbf{I}(\Theta) = \begin{bmatrix} [\mathbf{I}(\Theta)]_d & [\mathbf{I}(\Theta)]_{d,t_{tx,0}} \\ [\mathbf{I}(\Theta)]_{d,t_{tx,0}} & [\mathbf{I}(\Theta)]_{t_{tx,0}} \end{bmatrix},$  (3.23)

where $[\mathbf{I}(\Theta)]_d$ and $[\mathbf{I}(\Theta)]_{t_{tx,0}}$ are from Table 3.1, and $[\mathbf{I}(\Theta)]_{d,t_{tx,0}}$ can be evaluated from (3.13) using (3.14) and (3.15). The inversion of (3.23) is straightforward. For brevity, we omit writing the inversion out in full, but we have two comments regarding its use. First, we did not need to specify which parameter(s) we are trying to estimate, i.e., the FIM in (3.23) applies to estimating $d$ or $t_{tx,0}$ or both, given that both are unknown. Second, it can be shown that the CRLB for either parameter cannot be smaller than if that parameter were the only unknown parameter. These two comments apply to any joint estimation problem (see [121, Ch. 3]); the FIM depends on the $\Lambda$ unknown parameters and not the parameters being actively estimated, and
the CRLB never decreases when more parameters become unknown. We show more examples of these observations when we present our numerical results in Section 3.5.

3.3.3 On the Nonexistence of the CRLB

Our analysis and discussion of the CRLB would be incomplete if we did not consider the occasions when the CRLB does not exist. By inspection of (3.13) when there is a single observation, i.e., $M = 1$, we can see that singularities arise when $G_{\Theta_i,m} = 0$, such that inversion of the FIM is not possible and so the CRLB cannot be found (we do not consider the case where $\overline{N}_{RX|tx}(t_m) \to 0$, because we would not expect any meaningful communication if no molecules are expected at the RX). It has been shown in [122, 123] that if the FIM is singular, then there is no unbiased estimator for $\Theta$ with finite variance. Furthermore, we must also consider the conditioning of the FIM. The $G_{\Theta_i,m}$ terms associated with different parameters can vary by many orders of magnitude, such that the FIM can be nearly singular.

3.4 Estimation Protocols

In this section, we describe the implementation of estimation protocols for the channel parameter estimation problem. First, we apply examples of ML estimation, as defined in Section 3.2.3. We consider cases where the ML estimate can be written in analytical closed form. We also consider examples of applying the Newton-Raphson method to find the ML estimate numerically, and comment on comparing ML estimates with the CRLB when the FIM is singular or nearly singular. Then, we propose peak-based estimation protocols as low-complexity methods for finding any one unknown channel parameter.

3.4.1 ML Estimation

3.4.1.1 Analytical ML Estimation

We can try to search for ML estimates analytically by taking the derivative of the logarithm of the joint PMF with respect to the parameter of interest, i.e., (3.21), and setting it equal to 0.
If $\Lambda > 1$, i.e., if there is more than one unknown parameter, then we will have to solve a system of equations (each in the form of (3.21)) to find the critical points that are candidates for the ML estimate. For tractability, we limit our discussion of analytical solutions to the special case of $\Lambda = 1$ and $M = 1$, and rely on numerical methods for the ML estimation of more than one parameter and/or observation. Furthermore, for ML estimation when $\Lambda = 1$ and $M = 1$, we use an approach that is more direct than taking the derivative of the logarithm of the joint PMF.

Consider the direct estimation of the expected channel impulse response at time $t_1$, $\overline{N}_{RX|tx}(t_1)$. It can be shown that the ML estimate of $\overline{N}_{RX|tx}(t_1)$ is just the observation at time $t_1$, i.e., $s_1$. Then, by the invariance property of ML estimation (see [127, Ch. 3]), the ML estimate of any single parameter in (3.2) can be found by setting $t = t_1$ in (3.2), substituting $\overline{N}_{RX|tx}(t_1)$ with $s_1$, and re-arranging to solve for the unknown parameter. Analytical solutions for estimating $t_{tx,0}$ and $D$ are not possible using this method because they are found both inside and outside the exponential in (3.2). We can still consider this method numerically for $t_{tx,0}$ and $D$ as an alternative to the numerical maximization of the likelihood function directly, except when $G_{\Theta_i,m} = 0$.

The single-sample analytical ML estimates are then as follows:

$\hat{d}\big|_{ML} = v_\parallel t_{tx,ef} \pm \sqrt{4D t_{tx,ef}\,\beta(s_1) - t_{tx,ef}^2 \left(v_\perp^2 + 4kD\right)},$  (3.24)

$\hat{k}\big|_{ML} = -\frac{d_{ef}^2}{4D t_{tx,ef}^2} + \frac{\beta(s_1)}{t_{tx,ef}},$  (3.25)

$\hat{v}_\parallel\big|_{ML} = \frac{d}{t_{tx,ef}} \pm \frac{1}{t_{tx,ef}} \sqrt{4D t_{tx,ef}\,\beta(s_1) - t_{tx,ef}^2 \left(v_\perp^2 + 4kD\right)},$  (3.26)

$\hat{v}_\perp\big|_{ML} = \pm \frac{1}{t_{tx,ef}} \sqrt{4D t_{tx,ef}\,\beta(s_1) - 4kD t_{tx,ef}^2 - \left(d - v_\parallel t_{tx,ef}\right)^2},$  (3.27)

$\hat{N}_{TX}\big|_{ML} = \frac{s_1 \left(4\pi D t_{tx,ef}\right)^{3/2}}{V_{RX}} \exp\left(k t_{tx,ef} + \frac{d_{ef}^2}{4D t_{tx,ef}}\right),$  (3.28)

where

$\beta(s_1) = \ln\left(\frac{N_{TX} V_{RX}}{s_1 \left(4\pi D t_{tx,ef}\right)^{3/2}}\right),$  (3.29)

we recall that $d_{ef}^2 = (d - v_\parallel t_{tx,ef})^2 + (v_\perp t_{tx,ef})^2$, and here $t_{tx,ef} = t_1 - t_{tx,0}$. Some additional comments on these ML estimates are necessary:

1. $\beta(s_1)$ is a decreasing function of the observation $s_1$.
For a sufficiently large value of $s_1$, an estimate of $k$ can be negative or an estimate of $d$, $v_\parallel$, or $v_\perp$ can have an imaginary component. A negative degradation rate $k$ is physically meaningful and corresponds to the spontaneous generation of molecules in the propagation environment. Estimates with imaginary components should be ignored.

2. The $\pm$ in (3.24), (3.26), and (3.27) mean that there could be multiple valid estimates due to the symmetry of (3.2) about the point $\{v_\parallel t_{tx,ef} - d, 0, 0\}$. Even if the resulting distance $d$ is negative, it still has physical meaning because it represents uncertainty in the position of the TX relative to the RX, e.g., at $\{-d, 0, 0\}$ or $\{d, 0, 0\}$ if $v_\parallel = 0$. We could choose between multiple valid estimates by tossing an unbiased coin.

3. If the observation is $s_1 = 0$, then $\beta(s_1) = \infty$ and all analytical ML estimates (except for that of $N_{TX}$) are infinite. We can avoid infinite estimates by setting $s_1 = s$ if $s_1 = 0$, where $0 < s < 1$.

As with the CRLB, we will find that the accuracy of ML estimation improves with the number of observations $M$. Therefore, in Section 3.5 we will not focus on assessing the above equations for single-sample analytical ML estimates.

3.4.1.2 Iterative Numerical ML Estimation

Here, we present examples of the structure of the Newton-Raphson method for numerically finding the ML parameter estimate. We consider the same examples that we examined in Section 3.3.2 because of their importance when establishing a communication link. First, we consider estimation of the distance $d$. Second, we consider the joint estimation of $d$ and $t_{tx,0}$.

By (3.9), the distance $d$ can be found iteratively as

$\hat{d}_{c+1} = \hat{d}_c - \frac{\partial \ln p(\mathbf{s}|\Theta)}{\partial d} \bigg/ \frac{\partial^2 \ln p(\mathbf{s}|\Theta)}{\partial d^2}\, \bigg|_{d = \hat{d}_c},$  (3.30)

where we have already presented the first derivative of the logarithm of the joint PMF with respect to $d$ in (3.21).
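The closed-form estimates of Section 3.4.1.1 translate directly into code. A sketch of (3.25), (3.28), and (3.29) in Python (function names are ours); when the single sample equals the noiseless mean in (3.2), each estimator recovers its true parameter exactly:

```python
import math

def beta_obs(s1, N_TX, V_RX, D, t_ef):
    # Eq. (3.29); requires s1 > 0 (substitute a small s if s1 = 0)
    return math.log(N_TX * V_RX / (s1 * (4 * math.pi * D * t_ef) ** 1.5))

def ml_degradation_rate(s1, d, D, v_par, v_perp, N_TX, V_RX, t_ef):
    # Eq. (3.25): single-sample ML estimate of k
    d_ef_sq = (d - v_par * t_ef) ** 2 + (v_perp * t_ef) ** 2
    return (-d_ef_sq / (4 * D * t_ef ** 2)
            + beta_obs(s1, N_TX, V_RX, D, t_ef) / t_ef)

def ml_molecules_released(s1, d, D, k, v_par, v_perp, V_RX, t_ef):
    # Eq. (3.28): single-sample ML estimate of N_TX
    d_ef_sq = (d - v_par * t_ef) ** 2 + (v_perp * t_ef) ** 2
    return (s1 * (4 * math.pi * D * t_ef) ** 1.5 / V_RX
            * math.exp(k * t_ef + d_ef_sq / (4 * D * t_ef)))
```

In practice $s_1$ is a noisy Poisson observation, so these estimates fluctuate around the true values rather than matching them exactly.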
The second derivative with respect to $d$ can be shown to be

$\frac{\partial^2 \ln p(\mathbf{s}|\Theta)}{\partial d^2} = -\sum_{m=1}^{M} \left(\frac{s_m - \overline{N}_{RX|tx}(t_m)}{2D t_{tx,ef}} + G_{d,m}^2\, \overline{N}_{RX|tx}(t_m)\right),$  (3.31)

such that $d$ is found iteratively as

$\hat{d}_{c+1} = \hat{d}_c + \frac{\sum_{m=1}^{M} G_{d,m} \left(s_m - \overline{N}_{RX|tx}(t_m)\right)}{\sum_{m=1}^{M} \left(\frac{s_m - \overline{N}_{RX|tx}(t_m)}{2D t_{tx,ef}} + G_{d,m}^2\, \overline{N}_{RX|tx}(t_m)\right)},$  (3.32)

where $G_{d,m}$ (as defined in (3.14)) and $\overline{N}_{RX|tx}(t_m)$ are evaluated for $d = \hat{d}_c$. Similar expressions can be written for the iterative estimation of the other channel parameters. We see that, for a single observation, i.e., $M = 1$, (3.32) will converge (such that $\hat{d}_{c+1} = \hat{d}_c$) when the estimate $\hat{d}_c$ is such that $s_1 = \overline{N}_{RX|tx}(t_1)$, unless we simultaneously have $G_{d,m} = 0$ (in which case the method will diverge).

The joint estimation of distance $d$ and synchronization (via $t_{tx,0}$), such that $\Theta = [d, t_{tx,0}]^T$, can be performed iteratively as

$\begin{bmatrix} \hat{d}_{c+1} \\ \hat{t}_{0_{c+1}} \end{bmatrix} = \begin{bmatrix} \hat{d}_c \\ \hat{t}_{0_c} \end{bmatrix} - \begin{bmatrix} \frac{\partial^2 \ln p(\mathbf{s}|\Theta)}{\partial d^2} & \frac{\partial^2 \ln p(\mathbf{s}|\Theta)}{\partial d\, \partial t_{tx,0}} \\ \frac{\partial^2 \ln p(\mathbf{s}|\Theta)}{\partial d\, \partial t_{tx,0}} & \frac{\partial^2 \ln p(\mathbf{s}|\Theta)}{\partial t_{tx,0}^2} \end{bmatrix}^{-1} \begin{bmatrix} \frac{\partial \ln p(\mathbf{s}|\Theta)}{\partial d} \\ \frac{\partial \ln p(\mathbf{s}|\Theta)}{\partial t_{tx,0}} \end{bmatrix},$  (3.33)

where we use $\hat{\Theta}_c = [\hat{d}_c, \hat{t}_{0_c}]^T$ when we evaluate the derivatives of the logarithm of the joint PMF. It can be shown that the second derivative of the logarithm of the joint PMF with respect to $t_{tx,0}$ is

$\frac{\partial^2 \ln p(\mathbf{s}|\Theta)}{\partial t_{tx,0}^2} = \sum_{m=1}^{M} \left[\frac{s_m - \overline{N}_{RX|tx}(t_m)}{2 t_{tx,ef}^2} \left(3 - \frac{d^2}{D t_{tx,ef}}\right) - G_{t_{tx,0},m}^2\, \overline{N}_{RX|tx}(t_m)\right],$  (3.34)

and the cross derivative is

$\frac{\partial^2 \ln p(\mathbf{s}|\Theta)}{\partial d\, \partial t_{tx,0}} = \sum_{m=1}^{M} \left(\frac{d \left(\overline{N}_{RX|tx}(t_m) - s_m\right)}{2D t_{tx,ef}^2} - G_{d,m} G_{t_{tx,0},m}\, \overline{N}_{RX|tx}(t_m)\right).$  (3.35)

The structure of the Newton-Raphson method can be similarly described for other estimation problems with more than one unknown parameter.

3.4.1.3 ML Estimation and the CRLB

We complete our discussion of ML estimation by commenting on the behavior of ML estimation when the FIM is singular and the CRLB does not exist. Consider the estimation of a single parameter $\Theta$ from a single observation so that from (3.13) the FIM is a single element with no summation. If $G_{\Theta,m} = 0$, then $I(\Theta) = 0$ and no unbiased estimator with finite error variance exists.
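Away from such singularities, the distance iteration (3.32) is straightforward to implement. A sketch in Python (names are ours), where the samples and the known parameters are supplied by the caller:

```python
import math

def nrx_mean(t_ef, d, D, k, v_par, v_perp, N_TX, V_RX):
    # Eq. (3.2)
    d_ef_sq = (d - v_par * t_ef) ** 2 + (v_perp * t_ef) ** 2
    return (N_TX * V_RX / (4 * math.pi * D * t_ef) ** 1.5
            * math.exp(-k * t_ef - d_ef_sq / (4 * D * t_ef)))

def newton_raphson_distance(d0, s, times_ef, D, k, v_par, v_perp, N_TX, V_RX,
                            iterations=30):
    """Refine a distance estimate via the update in Eq. (3.32)."""
    d = d0
    for _ in range(iterations):
        num = den = 0.0
        for t_ef, s_m in zip(times_ef, s):
            N = nrx_mean(t_ef, d, D, k, v_par, v_perp, N_TX, V_RX)
            G = (v_par - d / t_ef) / (2 * D)  # Eq. (3.14), at d = d_hat_c
            num += G * (s_m - N)
            den += (s_m - N) / (2 * D * t_ef) + G * G * N
        d += num / den
    return d
```

With noiseless samples and a reasonable initial estimate, the iteration converges to the true distance; as noted above, it is not guaranteed to converge in general, particularly near a singular FIM.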
We have observed that we cannot find an analytical ML estimate when estimating one parameter $\Theta$ from a single observation when $G_{\Theta,m} = 0$. However, an informative ML estimate still exists; performing a finite grid search and choosing the estimate that maximizes the observation's log likelihood will result in a finite mean square error. We will see an example of this in Section 3.5. The ML estimate is still informative because it is now biased (we previously noted in Section 3.2.3 that we can only claim that ML estimation is efficient in the sense of the CRLB as $M \to \infty$).

It is insufficient to limit this discussion to the case where $I(\Theta) = 0$. In fact, ML estimation is biased and better than the CRLB when $I(\Theta)$ is in the vicinity of 0, i.e., as $I(\Theta) \to 0$. Even in the case of estimating multiple parameters from a small number of observations, the FIM could be singular or nearly singular (this becomes less likely as more observations are made). Again, ML estimation in such a scenario can be biased and better than the CRLB. More seriously, poor conditioning can also cause convergence problems when implementing the Newton-Raphson method for ML estimation.

Existing literature (see [124, 125]) has sought to define the "neighborhood" of a singularity to determine where the CRLB is not an actual lower bound for ML estimation. However, this is a non-trivial task that has only been studied for some specific problems. A detailed study to determine the parameter values for which the CRLB is not a lower bound on ML estimation is outside the scope of this dissertation.

One might question whether knowledge of the CRLB is meaningful if it is not always a lower bound on ML estimation. We believe that it is relevant to have the CRLB because we are ultimately interested in practical parameter estimation schemes. A practical estimator will be more effective if it collects many observations over time.
FIMs that are singular (and therefore have no corresponding CRLB) or close to singular will be less common as more observations are made, as we will observe in Section 3.5. Furthermore, ML estimation becomes unbiased (such that the CRLB is valid) as more observations are made. Thus, we claim that the CRLB is a meaningful benchmark.

3.4.2 Peak-Based Estimation

The study of parameter estimation in this chapter has focused thus far on optimal performance, i.e., we have asked what is the best possible performance of an unbiased estimator and what is the performance of the maximum likelihood approach. We do not expect to implement a ML estimator as part of a nanoscale device, even if there is only one unknown parameter to estimate. Rather, our intent is to establish theoretical limits that we can use to compare with simpler, more practical estimators. We propose peak-based estimation for finding any one unknown channel parameter. Peak-based estimation has been proposed for distance estimation in [72, 73]. It has been shown to be a relatively simple and accurate method for measuring the distance. By simple, we mean that a peak-based estimator makes multiple observations but uses just one observation to calculate the estimate.

In our simplest variation, the RX measures the time $\hat{t}_{max}$ when the peak number of molecules is observed and uses the value of $\hat{t}_{max}$ to estimate the unknown parameter. For comparison, we consider more complex variations where the RX measures the peak number of observed molecules $s_{max}$, and also where the RX measures both $\hat{t}_{max}$ and $s_{max}$.
3.4.2.1 Finding the Peak Time

For peak-based estimation we need the time, after molecules are released by the TX, when the maximum number of molecules is expected, i.e., $t_{max}$ given that $t_{tx,0} = 0$. By taking the derivative of (3.1) with respect to $t$ and setting it equal to 0, it can be shown that the peak number of molecules at the RX, due to an instantaneous release of $A$ molecules by the TX at time $t = 0$, would be expected at time

$t_{max} = \left(-3 + \sqrt{9 + d^2 a / D}\right) \Big/ a,$  (3.36)

where

$a = \left(v_\parallel^2 + v_\perp^2\right)/D + 4k = |\vec{v}|^2/D + 4k.$  (3.37)

Interestingly, (3.36) shows that the direction of flow has no impact on the time when the peak number of molecules is expected; only the magnitude of the flow matters. Thus, (3.36) is also the time when the peak number of molecules would be expected at the TX due to an instantaneous release of molecules by the RX at time $t = 0$.

In the absence of flow and molecule degradation, i.e., if $a = 0$, then it can be shown that the peak number of molecules would be expected at the RX at time

$t_{max} = d^2/(6D).$  (3.38)

Peak-based estimation requires the RX to measure either the peak number of observed molecules $s_{max}$ or the time $\hat{t}_{max}$ when the peak number is observed. The simplest method for doing so is to keep track of the number of molecules observed over a sufficiently long period of time and then select (either the time or the value of) the peak observation. A more general method, originally proposed in [73], is for the RX to track the upper and lower envelopes of the observations. The peak observation $s_{max}$ is then the peak value of the mean of the two envelopes. We implement the envelope detector using what we call a moving maximum filter and a moving minimum filter. Given an odd filter length $\kappa$, the $m$th filtered observation $s'_m$ of the moving minimum filter is

$s'_m = \min_{c \in \left\{m - \frac{\kappa-1}{2}, \ldots, m + \frac{\kappa-1}{2}\right\}} s_c,$  (3.39)

and the moving maximum filter is defined analogously.
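The peak time computations (3.36)-(3.38) and the filter (3.39) can be sketched as follows (function names and the boundary handling of the filter are our own choices; the text does not specify how the filter behaves at the ends of the observation window):

```python
import math

def expected_peak_time(d, D, v_mag, k):
    """Expected peak time from Eqs. (3.36)-(3.38)."""
    a = v_mag ** 2 / D + 4 * k  # Eq. (3.37); depends only on flow magnitude
    if a == 0:
        return d ** 2 / (6 * D)  # Eq. (3.38): no flow and no degradation
    return (-3 + math.sqrt(9 + d ** 2 * a / D)) / a  # Eq. (3.36)

def moving_min(s, kappa):
    """Moving minimum filter of odd length kappa, Eq. (3.39).

    Window indices beyond the ends of s are simply ignored (our choice)."""
    h = (kappa - 1) // 2
    return [min(s[max(0, m - h):m + h + 1]) for m in range(len(s))]

def moving_max(s, kappa):
    """Moving maximum filter, defined analogously to Eq. (3.39)."""
    h = (kappa - 1) // 2
    return [max(s[max(0, m - h):m + h + 1]) for m in range(len(s))]
```

For example, with the values later listed in Table 3.2 and $d = 6$ µm, (3.36) gives a peak time near 2 ms, which is consistent with the single sampling time used in Section 3.5.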
We note that a filter length $\kappa = 1$ is analogous to the simplest method of determining $s_{max}$ or $\hat{t}_{max}$. We also note that the maximum observation $s_{max}$ will generally be greater than the expected observation at the time when the maximum observation is expected, even when using the envelope detector. This is discussed in greater detail in [73]. The estimators that follow in the remainder of this section can be implemented with any method of finding (the time or the value of) the peak observation.

3.4.2.2 Estimation from Peak Time

Our simplest variation of peak-based estimation is when the RX estimates a parameter using $\hat{t}_{max}$ alone (and not $s_{max}$). If $t_{tx,0}$ is the unknown parameter, then we assume that the RX is able to calculate $t_{max}$ from (3.36) or (3.38) as appropriate and measure the time when the peak number of molecules is observed. If the observed peak time is $\hat{t}_{max}$, then the RX can immediately estimate $t_{tx,0}$ as

$\hat{t}_{tx,0}\big|_{Peak} = \hat{t}_{max} - t_{max}.$  (3.40)

For clarity of exposition in the remainder of this section, we will assume that $t_{tx,0} = 0$ when it is known and that the RX has adjusted its timer accordingly. Other values of $t_{tx,0}$ can be accommodated by replacing the observed $\hat{t}_{max}$ with $\hat{t}_{max} - t_{tx,0}$.

Estimates for most of the remaining parameters can be derived by re-arranging (3.36) or (3.38) as appropriate (the number of molecules released, $N_{TX}$, does not appear in (3.36) or (3.38), so we cannot use this method to estimate $N_{TX}$).
Generally, if we have flow or molecule degradation, i.e., if $a \neq 0$, then the remaining peak-based estimators are

$\hat{d}\big|_{Peak} = \sqrt{D \hat{t}_{max}\left(4k \hat{t}_{max} + 6\right) + \hat{t}_{max}^2 |\vec{v}|^2},$  (3.41)

$\hat{D}\big|_{Peak} = \frac{d^2 - \hat{t}_{max}^2 |\vec{v}|^2}{4 \hat{t}_{max}^2 k + 6 \hat{t}_{max}},$  (3.42)

$\hat{k}\big|_{Peak} = \frac{d^2 - 6D \hat{t}_{max} - \hat{t}_{max}^2 |\vec{v}|^2}{4D \hat{t}_{max}^2},$  (3.43)

$\hat{|\vec{v}|}\big|_{Peak} = \sqrt{\frac{d^2 - 4Dk \hat{t}_{max}^2 - 6D \hat{t}_{max}}{\hat{t}_{max}^2}},$  (3.44)

and the estimators (3.41) and (3.42) for the distance and the diffusion coefficient, respectively, also apply in the absence of flow and molecule degradation, i.e., if $a = 0$. We note that a two-way version of the distance estimator (3.41) when $a = 0$ was originally proposed as the "round-trip time from peak concentration" protocol in [72]. Given $\hat{|\vec{v}|}$ by (3.44) and the knowledge of one flow component, we can estimate the unknown flow component using $|\vec{v}| = \sqrt{v_\parallel^2 + v_\perp^2}$.

3.4.2.3 Estimation from Peak Observation

Our remaining peak-based estimation protocols are adapted from single-sample ML estimation, given that we have the peak observation $s_{max}$. Any such parameter estimate will not be the ML estimate given all of the observations that were assessed to identify $s_{max}$, but will be the single-sample ML estimate for the largest observation. These protocols must be implemented numerically, except for special cases, and are considered as (potentially) more accurate alternatives to estimation from only the peak time $\hat{t}_{max}$.

If the RX has knowledge of both $s_{max}$ and $\hat{t}_{max}$, then both of these can be substituted into (3.12) and the ML estimate can be found numerically ($\hat{t}_{max}$ is substituted for $t_1$). We can alternatively apply one of the analytical closed-form ML estimates found in Section 3.4.1 if the corresponding $G_{\Theta_i,m} \neq 0$.

If the RX has knowledge of $s_{max}$ but not of $\hat{t}_{max}$, then the corresponding formula for $t_{max}$ (either (3.36) or (3.38)) can be substituted for $t_1$ in (3.12) and the ML estimate can be found numerically. This approach was applied in the implementation of the envelope detector proposed for distance estimation when $a = 0$ in [73].
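The peak-time estimators (3.41)-(3.44) from Section 3.4.2.2 are all re-arrangements of the same peak condition, so each recovers its parameter exactly when fed the noiseless peak time of (3.36). A sketch in Python (names are ours):

```python
import math

def peak_est_distance(t_pk, D, k, v_mag):
    # Eq. (3.41)
    return math.sqrt(D * t_pk * (4 * k * t_pk + 6) + t_pk ** 2 * v_mag ** 2)

def peak_est_diffusion(t_pk, d, k, v_mag):
    # Eq. (3.42)
    return (d ** 2 - t_pk ** 2 * v_mag ** 2) / (4 * t_pk ** 2 * k + 6 * t_pk)

def peak_est_degradation(t_pk, d, D, v_mag):
    # Eq. (3.43)
    return (d ** 2 - 6 * D * t_pk - t_pk ** 2 * v_mag ** 2) / (4 * D * t_pk ** 2)

def peak_est_flow_mag(t_pk, d, D, k):
    # Eq. (3.44)
    return math.sqrt((d ** 2 - 4 * D * k * t_pk ** 2 - 6 * D * t_pk) / t_pk ** 2)
```

In practice the measured peak time fluctuates around its expected value, so the estimates inherit that fluctuation; negative arguments under the square roots should be treated as invalid estimates, as with the analytical ML estimates.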
The only analytical ML estimate in Section 3.4.1 that remains in closed form for any $a$ without requiring a numerical evaluation is that of $N_{TX}$ in (3.28), because $t_{max}$ is not a function of the number of released molecules.

3.5 Numerical and Simulation Results

In this section, we present simulation results to assess the performance of the channel parameter estimation protocols discussed in this chapter with respect to the corresponding CRLBs. For clarity of exposition, since we have presented a number of parameter estimators in this chapter, and there are many possible combinations of joint parameter estimation problems, we focus on System 3 defined in Table 2.1 and summarized in Table 3.2. All simulation results that we present in this section were averaged over $10^4$ independent simulations.

Table 3.2: System 3 parameters used for numerical and simulation results. The Min and Max values are the bounds of ML estimation via grid search.

- RX Radius, $R_{RX}$: 0.5 µm
- Sim. Time Step, $\Delta t$: 0.1 ms
- # of Sim. Steps: 100
- Distance to RX, $d$ (µm): Various; Min 0.01, Max 20
- TX Release Time, $t_{tx,0}$ (ms): 0; Min $-10$, Max $< t_1$
- A Radius, $R_A$: 0.5 nm
- Diffusion Coefficient, $D$ (m²/s): $10^{-9}$; Min $10^{-10}$, Max $10^{-7}$
- Degradation Rate, $k$ (s⁻¹): 62.5; Min 0, Max 500
- Flow Towards RX, $v_\parallel$ (mm/s): 2; Min $-3$, Max 6
- Perpendicular Flow, $v_\perp$ (mm/s): 1; Min 0, Max 10
- Molecules Released, $N_{TX}$: $10^5$; Min $10^3$, Max $10^6$

The flow magnitudes of $v_\parallel = 2$ mm/s and $v_\perp = 1$ mm/s are strong relative to the diffusion but do not completely dominate; the Peclet number, which describes the relative dominance of convection versus diffusion and is found here as $d|\vec{v}|/D$ (see [58, Ch. 5]), is equal to 8.94 when $d = 4$ µm. Such strong flows are within the range of average capillary blood speed (from 0.1 to 10 mm/s; see [120]).
We do not claim to accurately model capillary flow, where the flow is more complex than the uniform flow that we consider in this work, but such an environment is also one where the flow is relatively stronger than diffusion (without dominating; see [6, Ch. 7]). The strong flows also enable us to observe singularities in the CRLB for distance estimation at sampling times of interest (i.e., near when the maximum number of molecules is expected). The number of $A$ molecules released by the TX at one time, $N_{TX} = 10^5$, is the number of molecules that would be inside a spherical container of radius 0.5 µm with a concentration of 0.32 mM, which is at least an order of magnitude lower than the concentration of common ions used for signaling in mammalian cells; see [3, Ch. 12].

Table 3.2 also lists the minimum and maximum parameter values that we use when performing a grid search of the maximum likelihood estimate of a given channel parameter. By symmetry, we only consider positive distance $d$ and positive perpendicular flow $v_\perp$. We do not consider TX release times greater than $t_1$, the time of the first observation, because molecules cannot be observed before they are released. We also only consider non-negative degradation rate $k$, even though negative $k$ has physical meaning (i.e., information molecules are spontaneously created). Our constraints on the ranges of parameter values for grid searches enable best-case ML estimation; relaxing any of the constraints can only make ML estimation less accurate.

The resulting expected channel impulse response as a function of time, given the parameters listed in Table 3.2, is presented in Fig. 3.2 for varying distance $d$ from 2 µm to 10 µm. We also show the average channel impulse response as generated by $10^4$ independent realizations of our simulator at each distance.
Over this range of distances, the time of the expected maximum increases from about t_max = 0.5 ms to almost t_max = 4 ms, and the number of molecules expected at that time decreases by almost two orders of magnitude. The average simulated responses are generally in agreement with the expected impulse responses, although the expected response tends to slightly underestimate the simulations before the peak time and overestimate the simulations after the peak time (due to the limitation of the assumption that the concentration expected throughout the RX is uniform). Assuming that the TX is a point source even though we simulate a spherical source is also a (negligible) source of inaccuracy.

Figure 3.2: The expected channel impulse response N̄_RX|tx(t) of the environment defined by Table 3.2 as a function of time t for varying distance d. The responses in this figure are found by evaluating (3.2) and compared with corresponding simulations.

In the remainder of this section, we present normalized (i.e., dimensionless) results of the CRLB and the performance of the parameter estimators. By normalizing our results, we are able to show the relative accuracy of estimating a given parameter. This is useful when a single parameter can vary over orders of magnitude, or when we want to show the estimation of different parameters on a single plot. We normalize the CRLB of parameter Θ_i as

    \frac{1}{\Theta_{i,ref}^2} \left[ I^{-1}(\Theta) \right]_{\Theta_i, \Theta_i},    (3.45)

such that a CRLB of 1 means that the lower bound on the variance of an unbiased estimator is equal to Θ_{i,ref}^2. Generally, we will set Θ_{i,ref} = Θ_i. The one exception is that of t_tx,0 because it has a value of 0 ms.
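The normalization of the CRLB in (3.45), and the matching normalization of an estimator's mean square error, can be sketched as follows (the function name and the example estimates are ours and purely illustrative):

```python
def normalized_mse(estimates, theta_true, theta_ref):
    """Normalized (dimensionless) mean square error of an estimator:
    E[(theta_hat - theta)^2] / theta_ref^2."""
    n = len(estimates)
    return sum((th - theta_true) ** 2 for th in estimates) / (n * theta_ref ** 2)

# Distance estimates (in m) around a true value d = 6 um, with theta_ref = d:
err = normalized_mse([5.8e-6, 6.1e-6, 6.3e-6, 5.9e-6], 6e-6, 6e-6)
```

A value much less than 1, as here, indicates that the estimation error is small relative to the reference value.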
We set t_0,ref = 0.1 ms so that the normalizing term is on the order of what an accurate estimate would be.

The performance of an estimator of parameter Θ_i is evaluated by measuring the estimator's mean square error, which we normalize as

    \mathrm{mse}(\hat{\Theta}_i)\big|_{Norm} = \mathrm{E}\left[ (\hat{\Theta}_i - \Theta_i)^2 \right] / \Theta_{i,ref}^2,    (3.46)

and we note that the non-normalized mean square error, i.e., without the scaling factor of Θ_{i,ref}^2, is equivalent to the variance in (3.7) if and only if the estimator is unbiased. Generally, we aim for the CRLB and the mean square error to be as small as possible, such that the normalized bound and error should be much less than 1 for the estimation to be meaningful.

Unless otherwise noted, the sampling scheme is as follows. When one observation is made, i.e., M = 1, then it is taken at t_1 = 2 ms (close to the time when the maximum number of molecules is expected at distance d = 6 µm; see Fig. 3.2). For other values of M, the observation times are equally spaced such that the last sample is taken at time t_M = 10 ms. If we had only added new sample times when increasing M (without changing the old values of t_m), then from (3.3) and (3.13) the CRLB could never increase. However, since we change the exact sample times for each value of M, we will see results where the CRLB can increase with (small values of) increasing M.

3.5.1 Optimal Estimation

We begin our discussion of estimator performance by focusing on optimal estimation, i.e., ML estimation and how it compares with the CRLB.
All ML performance results presented were obtained via a grid search using the limits specified in Table 3.2. By performing grid searches instead of using the analytical solutions available when M = 1, we do not need to address the exceptional cases described in Section 3.4.1. The performance of the estimation of a single parameter has also been verified via the Newton-Raphson method.

First, we consider estimating the distance when the true value is d = 6 µm and we vary the number of observations made and the number of known parameters. We measure the normalized CRLB (given by (3.45)) and the normalized mean square error (given by (3.46)) of ML estimation when only d is unknown, and then successively remove the knowledge of t_tx,0, v‖, and v⊥. The results are shown in Fig. 3.3. Removing the knowledge of D, k, or N_TX is not as detrimental to distance estimation, so corresponding results are not shown. We will see later in this section that removing the knowledge of d does not significantly degrade the estimation of D, k, or N_TX, either. We note that the ML estimate was solved for fewer values of M when there are three unknown parameters and for no values of M when there are four unknown parameters due to the increasing computational requirements of exhaustive searching. Applying the Newton-Raphson method for three and four unknown parameters was not feasible here due to poor matrix conditioning.

Figure 3.3: Normalized mean square error of ML distance estimation as a function of the number of observations M and as the knowledge of other parameters is removed. The corresponding CRLB for each estimate is also shown.

In Fig.
3.3, we see that there are no steady trends of ML estimation accuracy or its comparison with the CRLB for low values of M, i.e., for M < 5. This is for two reasons: the sampling times change significantly for each value of M, and some of these samples are close to singular points. For example, a sample taken at about t_m = 3 ms will have a corresponding G_d,m term with a value of 0, which is why the CRLB when only d is unknown and M = 3 (i.e., the first sample is at t_1 = 3.3 ms) is higher than when M = 2 (i.e., the first sample is at t_1 = 5 ms). We also cannot claim that losing knowledge of parameters will always degrade performance; when M = 1 or 5, the ML estimate of d when both d and t_tx,0 are unknown is more accurate than when only d is unknown. In these cases, the ML estimate trades accuracy in estimating t_tx,0 for accuracy in estimating d (later in this section, we will see that estimating t_tx,0 is very inaccurate when M ≤ 5). Nevertheless, we can make more general claims as more samples are taken, i.e., for M > 5. As more samples are taken, the CRLB improves and the ML estimate approaches the CRLB. In this regime, the CRLB and ML performance both degrade as more parameters become unknown. The potential mean square error in the estimation of d increases by orders of magnitude as we remove the knowledge of the values of t_tx,0, v‖, and v⊥.

In Fig. 3.4, we observe the performance of ML estimation of each individual channel parameter when only that parameter is unknown. We set d = 6 µm, and we measure the normalized error of each parameter as a function of the number of samples M. To ease inspection of the normalized error for small values of M, this figure is shown in log-log scale. The figure gives us a sense of the relative accuracy to which we can aim to estimate any single channel parameter, and helps us to verify the diagonal elements of the FIM that we list in Table 3.1. As in Fig.
3.3, the normalized error as a function of the number of samples begins to stabilize for M > 5. We observe that the ML estimation of any single parameter performs very close to the corresponding CRLB as more samples are taken. In a relative sense, we can most accurately estimate the distance d, followed (in order) by the flow towards the RX v‖, the perpendicular flow component v⊥, the number of molecules released N_TX, the diffusion coefficient D, the molecule degradation rate k, and finally the release time t_tx,0 (although the choice of t_0,ref was particularly arbitrary since we could not choose Θ_{i,ref} = Θ_i; the normalized error in the estimation of t_tx,0 is comparable to that of v⊥ if we choose t_0,ref = 1 ms instead of t_0,ref = 0.1 ms).

Figure 3.4: Normalized mean square error of ML estimation of each channel parameter when that parameter is the only one that is unknown. ML performance is given as a function of the number of observations M when the distance d = 6 µm. The corresponding CRLB for each estimate is also shown.

In Fig. 3.5, we observe the performance of ML estimation of each individual channel parameter when there are two unknown parameters: the distance d (whose actual value is still 6 µm) and the parameter of interest. We measure the normalized error of each parameter as a function of the number of samples M. We can compare Fig. 3.5 directly with Fig. 3.4 to see the importance of the knowledge of the distance when estimating the other channel parameters. There is negligible degradation in the estimation of D, k, and v⊥, slight degradation in the estimation of N_TX, and
significant degradation in the estimation of t_tx,0 and v‖. The negligible change in the estimation of v⊥ is most interesting because the opposite was not observed in Fig. 3.3, where removing the knowledge of v⊥ was shown to measurably degrade the estimation of d.

Figure 3.5: Normalized mean square error of ML estimation of each channel parameter when that parameter and the distance d are unknown. ML performance is given as a function of the number of observations M when the distance d = 6 µm. The corresponding CRLB for each estimate is also shown.

The results presented thus far do not give us a very clear sense of the performance of ML estimation in the neighborhood of a singularity in the FIM. To do so, we should consider ML estimation as a function of a varying channel parameter whose domain includes a point where the CRLB is infinite. In Fig. 3.6, we perform distance estimation as a function of the actual distance d for the number of observations M ∈ {1, 2, 10, 20, 100}. We adjust the sampling times for M = 2 so that they are taken at t_1 = 2 ms and t_2 = 3 ms. This adjustment ensures that, for every value of M, a sample is taken at time t_m = 2 ms, so that the corresponding G_d,m is 0 when d = 4 µm (recall that G_d,m is a function of t_m).

Figure 3.6: Normalized mean square error of ML distance estimation when d is the only unknown parameter. The corresponding CRLB for each estimate is also shown. For every value of M, there is a sample taken at time t_m = 2 ms.

The only singularity in the FIM in Fig.
3.6 is when M = 1 and d = 4 µm. Although ML estimation when M = 1 is generally not nearly as accurate as the CRLB, it is more accurate than the CRLB over the range 3.2 µm < d < 4.6 µm. This range is effectively the vicinity of the singularity when one sample is taken at time t_1 = 2 ms and when ML estimation must be biased. Interestingly, when M = 2, there is never an actual singularity in the FIM, but we are still in the vicinity of a singularity when 3.6 µm < d < 4.6 µm, where ML estimation is more accurate than the CRLB and so must be biased. We observe this behavior where the G_d,m term for the observation at t_1 = 2 ms is equal to zero, but not where the G_d,m term for the observation at t_2 = 3 ms is equal to zero, i.e., at around d = 6 µm. This is because the sample at time t_1 is more critical for the estimation of d than that at t_2. The relative importance of individual samples is reduced as more samples are taken, such that ML estimation is only slightly more accurate than the CRLB at d = 2 µm when M = 10, i.e., where the sample t_1 = 1 ms is most critical for the estimation of d and the corresponding G_d,m term is 0. Otherwise, we observe that ML estimation performs close to but not better than the CRLB for larger values of M, where ML estimation becomes increasingly unbiased, over the entire range of d that we consider.

3.5.2 Peak-Based Estimation

Finally, we consider the performance of the sub-optimal peak-based estimators that we proposed in Section 3.4.2. We are interested in how well the simplest peak-based protocol (which only measures t_max and can be implemented in closed form) performs in comparison to the peak-based protocols that generally require an ML search given the value of the peak observation s_max. The ML estimates given s_max are found via a grid search.
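Grid-search ML estimation of a single parameter, as used throughout this section, reduces to maximizing a Poisson log-likelihood over a finite set of candidate values. A minimal sketch (function names are ours; the toy mean model is illustrative only):

```python
import math

def poisson_loglike(samples, means):
    """Log-likelihood of independent Poisson observations with given means."""
    ll = 0.0
    for s, lam in zip(samples, means):
        if lam <= 0:
            return float('-inf')
        ll += s * math.log(lam) - lam - math.lgamma(s + 1)
    return ll

def ml_grid_search(samples, mean_fn, grid):
    """ML estimate of one parameter by exhaustive search over `grid`, where
    mean_fn(theta) returns the expected counts at the sample times."""
    return max(grid, key=lambda th: poisson_loglike(samples, mean_fn(th)))

# Toy check with a known answer: expected counts proportional to theta.
base = [2.0, 5.0, 3.0]
mean_fn = lambda th: [th * b for b in base]
samples = [20, 50, 30]  # consistent with theta = 10
theta_hat = ml_grid_search(samples, mean_fn, [i * 0.5 for i in range(1, 61)])
```

For this toy model the continuous ML solution is theta = sum(samples)/sum(base) = 10, which lies on the grid and is recovered exactly.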
We are also interested in the impact of the moving minimum (and maximum) filter window length κ on the performance of each estimator, and whether the relative performance of the different estimators varies when different parameters are being estimated.

In Fig. 3.7, we compare the performance of the peak-based distance estimators as a function of the actual distance d for varying window length κ. Each estimator variation is described in a dedicated subplot. For reference and for comparison between subplots, we show the CRLB when a single sample is taken at time t_1 = t_max and when M = 100. In all three subplots, no window length κ emerges as optimal for the entire range of d. This makes sense; the best window length for a given distance is proportional to the time required for the diffusion wave to rise and then fall. Therefore, shorter filter lengths are more appropriate at shorter distances and longer filter lengths are generally better at longer distances.

Figure 3.7: Normalized mean square error of peak-based distance estimation as a function of the actual distance d for varying window length κ. Each subplot is labeled with the knowledge available to the peak-based estimator. The CRLBs for t_1 = t_max and M = 100 are also shown and are the same in each subplot (although the CRLB when M = 100 in the bottom subplot is not visible on the scale shown).

More interestingly, the estimator that uses the knowledge of both t_max and s_max is much less accurate for measuring the distance than the estimators that use the knowledge of only t_max or s_max. The reason is that this estimator uses the values of t_max and s_max but not the knowledge that they correspond to the peak observation, i.e., neither (3.36) nor (3.38) is applied.
Thus, the estimator does not know that its observation was made at time t_max. The simpler protocols combine the knowledge of t_max or s_max with the knowledge that the observation was made at the peak time and use (3.36) or (3.38) as needed (i.e., depending on the value of a). The simplest protocol (using t_max) performs on the order of the single-sample CRLB for all window lengths over most distances, the protocol that uses only s_max often performs better than the single-sample CRLB, and the protocol using both t_max and s_max always performs much worse than the single-sample CRLB.

In Fig. 3.8, we compare the performance of peak-based estimation of the other channel parameters as a function of the distance d. For clarity, we only consider a single window length κ = 7. Each parameter is considered in a dedicated subplot. Where relevant, we show the CRLB when a single sample is taken at time t_1 = t_max (which is not applicable for t_tx,0 because G_ttx,0,m = 0 at that time) and when M = 100. The vertical scales here are not as important as the comparison between estimators and their performance relative to the CRLBs. Interestingly, the performance of the estimation of each parameter is not analogous to that of estimating the distance in Fig. 3.7, which should not be too surprising because the peak-based estimators are sub-optimal ad hoc methods. Instead, different peak-based estimators are more accurate at measuring different parameters. This is an important point when assessing the suitability of these estimation strategies. For example, the simplest protocol is the most accurate for estimating t_tx,0 at shorter distances, but it is generally the least accurate when estimating k or v⊥ at any distance. There is no clear best peak-based estimator for estimating D or N_TX, whereas the estimator that uses only s_max is significantly more accurate than the other variants when estimating v‖.
Overall, the simplest estimator does not perform as well as the single-sample CRLB (when it exists) when estimating any parameter besides the distance, but both of the ML-based variants can perform better than the single-sample CRLB for some parameters.

Figure 3.8: Normalized mean square error of peak-based estimation as a function of the distance d when the window length is κ = 7. Each subplot is labeled with the parameter being estimated. The CRLBs for t_1 = t_max and M = 100 are also shown. The release time t_tx,0 does not have a CRLB when t_1 = t_max because there is always a singularity at that time. The number of released molecules N_TX cannot be estimated from the knowledge of t_max alone.

3.6 Conclusion

In this chapter, we studied the local estimation of the diffusive MC channel parameters when a transmitter releases a single impulse of molecules. We derived the FIM of the joint estimation problem, which leads to the CRLB on the error variance of any locally unbiased estimator. The FIM reduces for the estimation of any subset of the channel parameters. We considered ML estimation and presented cases where ML estimates can be evaluated in closed form. Generally, ML estimation is no more accurate than the CRLB, unless we are in the neighborhood of singularities in the corresponding FIM, but the impact of a sample being at or near a singularity diminishes as more samples are used in estimation. We proposed variations of peak-based estimation for more practical estimation of individual channel parameters, which rely on observing either the value or the time of the maximum number of molecules observed at the receiver.

The analysis presented in this chapter provides a benchmark for the future design
of parameter estimation protocols. We are interested in the design of low-complexity estimators that use multiple samples (i.e., M > 1) for estimation in more realistic environments. Low-complexity protocols would be more feasible in practice, but bounds on the accuracy of estimation give us insight into how much is lost by implementing sub-optimal solutions. Other related and interesting problems include cooperative estimation, where multiple devices share information to generate a common estimate, and channel estimation, where the expression for the expected channel impulse response is unknown and must be measured. Channel estimation is a more general problem because it does not rely on the existence of a closed-form expression for the expected impulse response.

Chapter 4

Optimal and Suboptimal Receiver Design

4.1 Introduction

The performance of a communication system is generally measured at the receiver, where the information sent by the transmitter is detected. In Chapter 2, we established a physical model and the corresponding channel impulse response for a diffusive MC environment that experiences steady uniform flow, the chemical degradation of information molecules, and multiple sources of information molecules (in addition to the intended transmitter). In Chapter 3, we studied the limits for a receiver to estimate the channel parameters from the release of a single impulse of molecules by the transmitter. In this chapter, we study the design and performance of the receiver, where we assume that the channel parameters are known to the system designer (i.e., the receiver does not need explicit knowledge of all channel parameter values). Our primary goal in receiver design is to accommodate the randomness of molecule behavior. We cannot know a priori when individual molecules will arrive, and in general the bottleneck on performance is the unbounded delay in molecule propagation, which leads to intersymbol interference (ISI).
We consider optimal and suboptimal receiver detection schemes. We anticipate that individual transceivers would have limited computational abilities; therefore, it is of interest to consider simple modulation schemes and then assess how simple detectors perform in comparison to optimally designed detectors. Such analysis provides valuable insight into the practical design of diffusive MC networks.

The primary contributions of this chapter are as follows:

1. We derive the optimal sequence detector, in a maximum likelihood sense, to give a lower bound on the bit error probability for any detector at the receiver that makes multiple observations in every bit interval. In practice, we do not expect such a detector to be physically realizable, due to the memory and computational complexity required, even if the implementation is simplified with the Viterbi algorithm as described in [1]. We propose a modified Viterbi algorithm to limit the number of states while accommodating all ISI.

2. We introduce weighted sum detectors as suboptimal but more physically realizable. In fact, they are used in neurons, which sum inputs from synapses with weights based on the characteristics of each synapse and fire when the sum exceeds a certain threshold; see [58, Ch. 12]. We characterize weighted sum detectors by the number of samples taken by the receiver in each bit interval and the weights assigned to each observation. We derive the expected bit error rate of any weighted sum detector. We show that a particular selection of sample weights is equivalent to a matched filter, and compare this detector with the optimal single bit detector. We also consider equal sample weights as a simpler weight selection scheme that is analogous to energy detection in conventional communication.

The rest of this chapter is organized as follows. The details of the transmission environment are summarized in Section 4.2.
We derive the optimal sequence detector in a maximum likelihood sense in Section 4.3, assuming independent observations. In Section 4.4, we introduce weighted sum detectors, which are suboptimal for receiving a sequence of bits, but which may be more easily realizable for bio-inspired nanoscale and microscale communication networks. Numerical results showing detector performance are described in Section 4.5, and conclusions are drawn in Section 4.6.

4.2 System Model

We consider the general system model described in Chapter 2, but we have only one transmitter (TX) and all other molecule sources are aggregated into a single random noise source. The unbounded environment has the receiver (RX) with volume V_RX centered at the origin and the point TX at a distance of d from the origin. There is a steady uniform flow v with components v‖ and v⊥, where v‖ is the component of v in the direction of a line pointing from the TX towards the RX. The degradation model uses either Michaelis-Menten kinetics as described in (2.1) or direct first-order degradation as in (2.2). Both degradation models can be described by the degradation rate constant k, which is used as either an approximation or a lower bound in the case of Michaelis-Menten kinetics. We apply the uniform concentration assumption and recall the expected channel impulse response (2.38), where the TX releases N_TX A molecules at time t = 0, as

    \bar{N}_{RX|tx,0}(t) = \frac{N_{TX} V_{RX}}{(4\pi D_A t)^{3/2}} \exp\left( -kt - \frac{d_{ef}^2}{4 D_A t} \right),    (4.1)

where d_{ef}^2 = (d − v‖t)^2 + (v⊥t)^2 is the square of the effective distance from the TX to the center of the RX, and D_A is the constant diffusion coefficient.

The TX has binary sequence W = {W[1], W[2], ...} to send to the RX, where W[b] is the bth information bit, Pr(W[b] = 1) = P_1, and Pr(W[b] = 0) = P_0. It
has a bit interval of T_int seconds and it releases N_TX A molecules at the start of the interval to send a binary 1 and no molecules to send a binary 0. From (2.46), the number of molecules observed at the RX due to the TX is

    N_{RX|tx}(t) = \sum_{b=1}^{\lfloor t/T_{int} \rfloor + 1} N_{RX|tx}(t; b),    (4.2)

where N_{RX|tx}(t; b) is the number of molecules observed at time t that were released at the start of the bth bit interval. We recall that the number of molecules expected from the TX, given the sequence W, is given in (2.39) as

    \bar{N}_{RX|tx}(t) = \sum_{b=1}^{\lfloor t/T_{int} \rfloor + 1} W[b] \, \bar{N}_{RX|tx,0}(t - (b-1)T_{int}).    (4.3)

We aggregate the molecules from all other sources, so from (2.44) the RX will observe

    N_{RX}(t) = N_{RX|tx}(t) + N_{RX|n}(t),    (4.4)

where N_{RX|n}(t) is the number of molecules from all sources except the TX. In this chapter, we assume that N_{RX|n}(t) can be represented as a Poisson random variable with time-varying mean \bar{N}_{RX|n}(t). A more precise characterization of noise is made in Chapter 5. By applying the Poisson approximation to the signal from the TX, the cumulative signal N_{RX}(t) is also a Poisson random variable whose mean from (2.41) is

    \bar{N}_{RX}(t) = \bar{N}_{RX|tx}(t) + \bar{N}_{RX|n}(t).    (4.5)

4.3 Optimal Sequence Detection

In this section, we derive the optimal sequence detector to give a lower bound on the achievable bit error performance of any practical detector. We present a modified version of the Viterbi algorithm to reduce the computational complexity of optimal detection and facilitate its implementation in simulations.

4.3.1 Optimal Detector

The optimal joint interval detection problem can be defined as follows. Let us assume that the TX sequence W is B bits in length. Within each bit interval, the RX makes M observations. The value of the mth observation in the bth interval is labeled s_{b,m}.
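Combining (4.3) and (4.5), the Poisson mean associated with the mth observation of the bth bit interval is a superposition of shifted impulse responses for every interval (up to and including the bth) in which a 1 was sent, plus the noise mean. A sketch (function names are ours; the exponential impulse response is only a stand-in for (4.1)):

```python
import math

def rx_means(W, T_int, sample_times, impulse, noise_mean=0.0):
    """Poisson mean at each sample time (b-1)*T_int + t_b[m], per (4.3) and
    (4.5): superposition of shifted impulse responses for every interval
    with W[b] = 1, plus the external noise mean."""
    means = []
    for b in range(1, len(W) + 1):
        for tb in sample_times:
            t = (b - 1) * T_int + tb
            mean = noise_mean
            for j, bit in enumerate(W[:b]):        # emissions up to interval b
                if bit:
                    mean += impulse(t - j * T_int)  # release at start of interval j+1
            means.append(mean)
    return means

# Illustrative impulse response (exponential decay stands in for (4.1)):
imp = lambda t: 10.0 * math.exp(-t / 0.2) if t > 0 else 0.0
means = rx_means([1, 0, 1], 0.2, [0.1, 0.2], imp)
```

Each entry of `means` is the mean of the corresponding Poisson-distributed count s_{b,m}; note how the samples in the third interval include residual ISI from the 1 transmitted in the first interval.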
We assume that the sampling times within a single interval can be written as the function t_b[m], and we define a global time sampling function t[b,m] = (b−1)T_int + t_b[m], where b = {1, 2, ..., B}, m = {1, 2, ..., M}. Let us briefly consider two examples of t_b[m]. If a single observation is made when the maximum number of molecules is expected, i.e., t_max in (3.36), then t_b[m] has one value, t_b[1] = t_max. If there are observations taken at times separated by constant ∆t_ob, then t_b[m] = m∆t_ob.

The optimal RX decision rule is to select the sequence of bits Ŵ[b] that is most likely given the joint likelihood of all received samples, i.e.,

    \hat{W}[b] \big|_{b=\{1,2,\ldots,B\}} = \operatorname*{argmax}_{W[b],\, b=\{1,2,\ldots,B\}} \Pr(\mathbf{N}_{RX}),    (4.6)

where

    \Pr(\mathbf{N}_{RX}) = \Pr\left( N_{RX}(t(1,1)) = s_{1,1}, \, N_{RX}(t(1,2)) = s_{1,2}, \, \ldots, \, N_{RX}(t(B,M)) = s_{B,M} \,\middle|\, \mathbf{W} \right).    (4.7)

Pr(N_RX) is the joint probability mass function over all BM observations, given a specified TX sequence W. Its form is readily tractable only if we assume that all individual observations are independent of each other, i.e., if

    \Pr(\mathbf{N}_{RX}) = \prod_{b=1}^{B} \prod_{m=1}^{M} \Pr\left( N_{RX}(t[b,m]) = s_{b,m} \,\middle|\, \mathbf{W} \right).    (4.8)

If we apply our analysis in Section 2.4 and assume that RX observations are independent, then we can use (4.8) to determine the likelihood of a given W. However, this is still a problem with significant complexity, especially for large B, because we must determine the likelihood of 2^B possible Ws. The total complexity is only linear in the number of samples M, since a larger M only means that more terms are included in the product in (4.8).

4.3.2 Optimal Joint Detection Using Viterbi Algorithm

We consider the Viterbi algorithm in order to reduce the computational complexity of optimal joint detection and evaluate the corresponding probability of error in simulations as a benchmark comparison with simpler detection methods.
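Before detailing the trellis, the brute-force detector of (4.6)-(4.8) that the Viterbi algorithm replaces can be sketched directly: score all 2^B candidate sequences with the independent-observation Poisson likelihood and keep the best (function names are ours; the toy single-sample ISI channel is illustrative only):

```python
import math
from itertools import product

def loglike(samples, means):
    """Independent-observation Poisson log-likelihood, cf. (4.8)."""
    return sum(s * math.log(lam) - lam - math.lgamma(s + 1)
               for s, lam in zip(samples, means))

def ml_sequence(samples, means_given_W, B):
    """Exhaustive ML sequence detection per (4.6): score all 2^B candidate
    sequences W and return the most likely one. means_given_W(W) returns
    the expected counts for the candidate sequence."""
    return max(product((0, 1), repeat=B),
               key=lambda W: loglike(samples, means_given_W(W)))

# Toy channel, one sample per interval: mean 1 + 9*W[b] + 3*W[b-1] (ISI).
def means_given_W(W):
    return [1 + 9 * W[b] + 3 * (W[b - 1] if b > 0 else 0) for b in range(len(W))]

W_hat = ml_sequence([10, 4, 1], means_given_W, B=3)
```

The 2^B scan over candidate sequences is exactly what makes this detector impractical for large B.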
The memory and computational requirements of the Viterbi algorithm are likely still too high for effective implementation in a molecular communication system.

The general Viterbi algorithm is described in detail in [1, Ch. 5]. The algorithm builds a trellis diagram of states where the number of states depends on the channel memory, and each path through the trellis represents one candidate sequence. Our modified implementation artificially shortens the (explicit) channel memory and delays the decision of a given bit by the shortened memory (as performed in methods such as delayed decision-feedback sequence estimation in [128]), but we include the impact of all prior ISI on the current candidate states. If the memory is increased to the actual channel memory, then our method is equivalent to the regular Viterbi algorithm.

Theoretically, the channel memory of a diffusive environment is infinite; from (2.47), we see that P_ob(t) → 0 only as t → ∞. However, in practice, there will be some finite number of bit intervals after which the impact of a given transmission becomes negligible. While it is prudent to include the impact of ISI from all previous bit intervals, we limit the explicit channel memory to F bit intervals. Thus, there will be 2^F trellis states, where each state represents a candidate sequence for the previous F bit intervals. Each state f has two potential incoming paths, representing the two possible transitions from previous states (each transition corresponds to the possibility of the bit in the (F + 1)th prior interval being 0 or 1).

We define Ŵ_fi[l], l = {1, 2, ..., b}, as the lth received bit according to the ith path leading to state f.
The current log likelihood for the ith path leading to state f in the bth interval, which is the likelihood associated with only the observations in the most recent bit interval and the candidate sequence Ŵ_fi[l], is Φ_fi[b] and is evaluated by

    \Phi_{f_i}[b] = \sum_{m=1}^{M} \log \Pr\left( N_{RX}(t[b,m]) = s_{b,m} \right) = \sum_{m=1}^{M} \log\left( \frac{\bar{N}_{RX}(t[b,m])^{s_{b,m}} \exp\left( -\bar{N}_{RX}(t[b,m]) \right)}{s_{b,m}!} \right),    (4.9)

where we assume that the observations are independent and we apply the Poisson approximation to the probability of observing a given number of information molecules. We note that taking the logarithm has no influence on the optimality but facilitates numerical evaluation. The cumulative log likelihood for the fth state in the bth interval is the log likelihood of the most likely path (and the corresponding bit sequence Ŵ_f[l]) to reach the fth state, calculated for all prior bit intervals. We write this likelihood as Ψ_f[b] and it is found as

    \Psi_f[b] = \max\left( \Psi_{f_1}[b-1] + \Phi_{f_1}[b], \; \Psi_{f_2}[b-1] + \Phi_{f_2}[b] \right),    (4.10)

where Ψ_fi[b−1] is the cumulative log likelihood of the state prior to the fth state along the ith path. For the bth bit interval, Ψ_f[b] is the likelihood associated with the most likely TX bit sequence to lead to the fth state. Our modified Viterbi algorithm sequentially builds the trellis diagram by determining Ψ_f[b] for every state in every bit interval of the transmission, and keeping track of the candidate bit sequence Ŵ_f[l] that led to Ψ_f[b]. At the end of the algorithm, the RX makes its decision by selecting the bit sequence Ŵ_f[l] associated with the largest value of Ψ_f[B].

It can be shown that we have reduced the complexity of optimal joint detection by only needing to find the likelihood of B·2^{F+1} sets of M observations, rather than the likelihood of 2^B sets of BM observations. However, this is still a significant computational burden on the RX, so it is of interest to consider simpler detection methods.
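A compact sketch of the trellis recursion (4.9)-(4.10), with M = 1 sample per interval for brevity (function names are ours). Note that this sketch simply truncates ISI beyond the F most recent bits, whereas our modified algorithm folds all prior ISI into the candidate states; when F is at least as large as the true channel memory, the two coincide:

```python
import math

def poisson_ll(s, lam):
    """Poisson log-pmf, cf. the summand of (4.9)."""
    return s * math.log(lam) - lam - math.lgamma(s + 1)

def viterbi_detect(samples, mean_fn, B, F):
    """Sequence detection over a trellis of 2^F states (the F most recent
    bits), cf. (4.10). mean_fn(prefix) gives the expected count in the
    latest interval for a candidate bit prefix."""
    # state (tuple of up to F latest bits) -> (cumulative log-likelihood, prefix)
    paths = {(): (0.0, ())}
    for b in range(B):
        new_paths = {}
        for state, (ll, prefix) in paths.items():
            for bit in (0, 1):
                pfx = prefix + (bit,)
                new_ll = ll + poisson_ll(samples[b], mean_fn(pfx))
                new_state = pfx[-F:]
                # keep only the most likely path into each state, per (4.10)
                if new_state not in new_paths or new_ll > new_paths[new_state][0]:
                    new_paths[new_state] = (new_ll, pfx)
        paths = new_paths
    return max(paths.values())[1]

# Toy ISI channel: mean 1 + 9*W[b] + 3*W[b-1].
def mean_fn(pfx):
    prev = pfx[-2] if len(pfx) > 1 else 0
    return 1 + 9 * pfx[-1] + 3 * prev

W_hat = viterbi_detect([10, 4, 1], mean_fn, B=3, F=2)
```

With F = 2, which covers the toy channel's single bit of ISI, the recursion returns the same decision as an exhaustive scan of all 2^B sequences.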
Furthermore, the derivation of the expected bit error probability for a maximum likelihood detector is not easily tractable, so we are restricted to evaluating the bit error probability via simulation.

4.4 Weighted Sum Detection

In this section, we introduce the family of weighted sum detectors for diffusive molecular communication. These detectors do not have the same memory and computational requirements as maximum likelihood detectors, and we are able to derive the expected bit error probability for a given TX sequence.

4.4.1 Detector Design and Performance

We assume that there is only sufficient memory for the RX to store the M observations made within a single bit interval, and that it is incapable of evaluating likelihoods or considering the impact of prior decisions. Under these limitations, an intuitive RX design is to add together the individual observations, with a weight assigned to each observation, and then compare the sum with a pre-determined decision threshold. This is the weighted sum detector and it is implemented (over space and not over time) in neuron-neuron junctions; see [58, Ch. 12]. A detector that uses a single sample is the simplest case of a weighted sum detector, i.e., M = 1, and we refer to this as the max detector. Under specific conditions, which we will discuss in Section 4.4.2, a particular selection of weights makes the performance of the weighted sum detector equivalent to the optimal detector described in Section 4.3.

The decision rule of the weighted sum detector in the bth bit interval is

    \hat{W}[b] = \begin{cases} 1 & \text{if } \sum_{m=1}^{M} w_m N_{RX}(t[b,m]) \geq \xi, \\ 0 & \text{otherwise}, \end{cases}    (4.11)

where w_m is the weight of the mth observation and ξ is the binary decision threshold. For positive integer weights, we only need to consider positive integer decision thresholds.

The method of calculation of the expected error probability is dependent on the selection of the weights.
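The decision rule (4.11) itself is a one-line comparison; a minimal sketch (names and values are ours and purely illustrative):

```python
def weighted_sum_decide(samples, weights, xi):
    """Weighted sum detector, cf. (4.11): decide '1' if and only if the
    weighted sum of the M observations in a bit interval reaches the
    binary decision threshold xi."""
    return 1 if sum(w * s for w, s in zip(weights, samples)) >= xi else 0

# Equal weights (M = 3) with threshold xi = 10:
d1 = weighted_sum_decide([3, 5, 4], [1, 1, 1], 10)  # 12 >= 10, decide 1
d0 = weighted_sum_decide([2, 3, 1], [1, 1, 1], 10)  # 6 < 10, decide 0
```

Equal weights correspond to the simple energy-detector-style scheme mentioned above; non-equal weights are treated in what follows.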
In a special case, if the weights are all equal, in particular if w_m = 1 ∀ m, and we assume that the value of each individual observation is a Poisson random variable, then the weighted sum is also a Poisson random variable whose mean is the sum of the means of the individual observations. Thus, we can immediately write the CDF of the weighted sum in the bth bit interval as

\Pr\left(\sum_{m=1}^{M} N_{RX}(t[b,m]) < \xi\right) = \exp\left(-\sum_{m=1}^{M} \overline{N}_{RX}(t[b,m])\right) \sum_{i=0}^{\xi-1} \frac{\left(\sum_{m=1}^{M} \overline{N}_{RX}(t[b,m])\right)^i}{i!}. \quad (4.12)

We note that, especially if M is large, relevant values of ξ may be very high, even if the expected number of molecules counted in a single observation is low. Thus, we might have difficulty evaluating (4.12) numerically. Alternate methods to evaluate the CDF of a Poisson random variable were described in Chapter 2, including writing the Poisson random variable in terms of Gamma functions as in (2.55), and using the Gaussian approximation of the Poisson distribution as in (2.57).

Now we consider the more general case, where we have positive non-equal weights. Our analysis can then be summarized in the following theorem:

Theorem 4.1 (Distribution of a weighted sum). Given M Poisson random variables with means λ_m and non-negative weights w_m, the weighted sum X = \sum_{m=1}^{M} w_m X_m is in general not a Poisson random variable; however, the weighted sum of Gaussian approximations of the individual variables is a Gaussian random variable with mean \sum_{m=1}^{M} w_m \lambda_m and variance \sum_{m=1}^{M} w_m^2 \lambda_m.

Proof. The proof can be shown using moment generating functions; see [65, Ch. 4]. It can be shown that a sum of weighted independent Poisson random variables is itself a Poisson random variable only if the weights w_m are all equal to 1.
However, theGaussian approximation of each Xm gives a Gaussian random variable with meanand variance λm, and it can be shown that any weighted sum of Gaussian randomvariables is also a Gaussian random variable.Using Theorem 4.1, we can immediately write the CDF of random variable X =∑Mm=1wmXm asPr (X < x) =121 + erfx− 0.5−∑Mm=1wmλm√2∑Mm=1 w2mλm . (4.13)In summary, for evaluation of the expected error probability when the weights areequal to 1, we can use the CDF (4.12), its equivalent form (2.55), or its approximation(2.57), where λ =∑Mm=1NRX (t [b,m]). When the weights are not equal, we must use(4.13) where λm = NRX (t [b,m]).Given a particular sequenceW = {W [1] , . . . ,W [B]}, we can write the probabil-ity of error of the bth bit, Pe[b|W], asPe[b|W] =Pr(∑Mm=1wmNRX (t [b,m]) < ξ)if W [b] = 1,Pr(∑Mm=1wmNRX (t [b,m]) ≥ ξ)if W [b] = 0,(4.14)The true expected error probability for the bth bit is found by evaluating (4.14)for all possible TX sequences and scaling each by the likelihood of that sequenceoccurring in a weighted sum of all error probabilities, i.e.,Pe[b] =∑W∈WPg1,b(W)1 (1− P1)g0,b(W)Pe [b|W] , (4.15)126Chapter 4. Optimal and Suboptimal Receiver Designwhere W is the set of all 2B possible sequences and gi,b (W) is the number of occur-rences of bit i within the first b bits of the sequenceW. In practice, we can randomlygenerate a large number (e.g., some hundreds or thousands) of sequences based onP1, and we will find that it is sufficient to average the probability of error over thissubset of total possible sequences.4.4.2 Optimal WeightsWe now consider optimizing the sample weights. It would make sense to assigngreater weight to samples that are expected to measure more information molecules.We make this claim more specific in the context of the matched filter.Consider a signal h (t). The impulse response of the matched filter to signal h (t)is h (Tint− t), where Tintis the signal interval; see [1, Ch. 5]. 
The output of such a filter at time T_int is then

\int_0^{T_{int}} h(\tau)\, h(T_{int} - T_{int} + \tau)\, d\tau = \int_0^{T_{int}} h(\tau)^2\, d\tau, \quad (4.16)

which is the continuous-time equivalent of a weighted sum detector where the sample at time t is simply weighted by the expected value of the signal of interest. Thus, we design a matched filter detector by setting the sample weight w_m equal to the number of molecules expected from the TX, N_{RX|tx}(t[b,m]).

The matched filter is optimal in the sense that it maximizes the signal-to-noise ratio, and it also minimizes the bit error probability if the desired signal is corrupted by additive white Gaussian noise (AWGN). We generally cannot claim that the desired signal is corrupted by AWGN, as that would require a large expected number of molecules at all samples and an AWGN external noise source. However, if these conditions were satisfied, and T_int were chosen to be sufficiently long to ignore the impact of ISI, then the optimal weighted sum detector would have weights w_m = N_{TX} P_{ob}(t_b[m]), and it would be equivalent to the optimal sequence detector.

Our discussion of weighted sum detectors has not yet included the selection of the decision threshold ξ. Obviously, the performance of a particular detector relies on the chosen value of ξ. The derivation of the optimal ξ for a weighted sum detector is left for future work. In Section 4.5, when we discuss the bit error probability observed via simulation of a weighted sum detector, it is implied that the optimal ξ for the given environment was found via numerical search.

4.5 Numerical and Simulation Results

In this section, we present simulation results to assess the performance of the detectors described in this chapter. We consider Systems 1 and 2, defined in Table 2.1 and summarized in Table 4.1.
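Before turning to the results, the CDF evaluations (4.12) and (4.13) and the numerical threshold search of Section 4.4.2 can be sketched as follows. The function names and the example parameters are ours; the per-hypothesis means would come from the channel model.

```python
import math

def poisson_cdf(lam, xi):
    # Pr(X < xi) for X ~ Poisson(lam), as in (4.12); each term is
    # evaluated in the log domain so large lam and xi do not overflow;
    # requires lam > 0
    return sum(math.exp(i * math.log(lam) - lam - math.lgamma(i + 1))
               for i in range(xi))

def weighted_sum_cdf(x, weights, means):
    # Pr(X < x) for X = sum_m w_m X_m via the Gaussian approximation
    # with continuity correction, as in (4.13)
    mu = sum(w * lam for w, lam in zip(weights, means))
    var = sum(w * w * lam for w, lam in zip(weights, means))
    return 0.5 * (1 + math.erf((x - 0.5 - mu) / math.sqrt(2 * var)))

def best_threshold(weights, means1, means0, candidates, p1=0.5):
    # Brute-force numerical search for the threshold xi minimizing the
    # average error probability: a miss occurs when the sum falls below
    # xi under bit 1, a false alarm when it reaches xi under bit 0
    def pe(xi):
        miss = weighted_sum_cdf(xi, weights, means1)
        false_alarm = 1 - weighted_sum_cdf(xi, weights, means0)
        return p1 * miss + (1 - p1) * false_alarm
    return min(candidates, key=pe)
```

For instance, with equal weights, M = 2, hypothetical per-sample means of 10 under bit 1 and 1 under bit 0, and equiprobable bits, the search over integer thresholds returns ξ = 7.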
The primary difference between the two systems, which are comparable in size, is that System 1 does not have any molecule degradation (in this chapter), whereas System 2 has degradation described by Michaelis-Menten kinetics. We recall the Michaelis-Menten parameters for System 2, described in Table 2.2, in Table 4.2. For simplicity, we refer to the base case of System 2 as having no additive noise, no flow, and no active enzymes present.

Although the environments of both systems are relatively small (the sizes of the RX and the distances from the TX are both significantly less than the size of a bacterial cell), with a low number of A molecules and high chemical reactivity, the channel impulse response scales to dimensionally homologous systems that have (for example) more molecules and lower reactivities. Our choices of parameters ease the time required for simulations. When we consider the expected impulse response for Michaelis-Menten kinetics in (4.1), we use the lower bound and not the approximation, such that k = k_1 C_{Etot}.

Table 4.1: System parameters used for RX detection.

  Parameter                 Symbol   Units              System 1    System 2
  RX Radius                 R_RX     µm                 0.05        0.045
  Molecules Released        N_TX     -                  10^4        5000
  Distance to RX            d        µm                 0.5         0.3/Vary
  A Radius                  R_A      nm                 -           0.5
  A Diffusion Coefficient   D_A      m^2/s              10^-9       4.365×10^-10
  Degradation Rate          k        s^-1               0           -
  Flow Towards RX           v‖       mm/s               Vary        Vary
  Perpendicular Flow        v⊥       mm/s               Vary        Vary
  Sim. Time Step            Δt       µs                 0.5         0.5

Table 4.2: Enzyme reaction-diffusion parameters for System 2.

  Parameter                 Symbol   Units              System 2
  Enzyme Volume             V_E      µm^3               1
  Enzyme Molecules          N_E      -                  5×10^4
  E Diffusion Coefficient   D_E      m^2/s              8.731×10^-11
  EA Diffusion Coefficient  D_EA     m^2/s              7.276×10^-11
  Binding Rate              k_1      m^3/(molecule·s)   2×10^-19
  Unbinding Rate            k_-1     s^-1               10^4
  Degradation Rate          k_2      s^-1               10^6
  E Radius                  R_E      nm                 2.5
  EA Radius                 R_EA     nm                 3
  RMS Radius                r_rms    nm                 22.9
  Binding Radius            r_bind   nm                 2.88

The presentation of simulation results is as follows.
First, we consider the performance of the simplest weighted sum detector for System 2, i.e., making a single observation in each bit interval. We then compare detectors for System 2 in the absence of ISI by measuring the probability of error of the first bit in a transmitted sequence. Next, we study RX performance in System 2 as a function of the number of samples taken per bit interval, M. Finally, we measure RX performance in System 1 as a function of the flow parameters (we note that the labeling of the systems is chosen to be consistent with Chapter 2). The average bit error probability P̄_e of any weighted sum detector is found by averaging P_e[b] in (4.15) over a large subset of the set W of all possible bit sequences (we indicate the size of this subset for each simulation). Unless otherwise noted, the uniform concentration assumption (UCA) is applied when evaluating P̄_e. Also, throughout this section, we always assume that bits 0 and 1 are equally likely, i.e., P_1 = P_0 = 0.5.

4.5.1 Max Detection

The simplest detector for a passive RX is the max detector, where the RX observes the number of molecules when the most molecules are expected, i.e., at t_max within a given bit interval (as derived in Section 3.4.2). We assess the average error probability P̄_e of this detector as a function of the bit decision threshold ξ for the base case of System 2 in Fig. 4.1. The TX transmits a sequence of 100 consecutive bits with bit intervals of T_int = 100 µs and T_int = 200 µs. For T_int = 100 µs, we also consider the base case with enzymes. Each simulation is averaged over at least 1800 sequences, and the expected error probability is averaged over 10^3 random sequences using the Poisson approximation in (4.12).

We see in Fig. 4.1 that the optimal decision threshold for the base case of System 2 is ξ = 4 when T_int = 200 µs and ξ = 6 when T_int = 100 µs. With enzymes, the optimal threshold is ξ = 2. These differences make intuitive sense; there is less ISI from previous bits when the bit interval is longer or enzymes are present, so a lower decision threshold can be used.

Figure 4.1: Evaluating the error probability of System 2 as a function of the bit decision threshold ξ at the RX for bit intervals T_int = 100 µs and T_int = 200 µs without enzymes and T_int = 100 µs with enzymes. The transmission is a sequence of 100 randomly-generated bits.

The minimum average error probability is lower for T_int = 200 µs than for T_int = 100 µs (just under 0.1 compared with almost 0.2), and lower again when enzymes are added (about 0.08). However, if we apply the optimal decision threshold for T_int = 100 µs without enzymes to the other two cases, then those cases actually perform worse, because the threshold is too high relative to the average number of molecules observed, i.e., there are too many missed detections.

The primary observation in Fig. 4.1 is that, by adding enzymes, the data transmission rate can be increased while maintaining a similar bit error probability, or the bit error probability can be improved for the same data transmission rate. However, the overall bit error probabilities are quite high; we will observe lower error probabilities with the more complex detectors in the remainder of this section.

4.5.2 Detection Without Intersymbol Interference

For the remainder of this section, we compare the average bit error probabilities obtainable via the optimal sequence detector, the matched filter detector, and the equal weight detector. We note that the bit error probability observed for the optimal detector assuming independent samples is found via simulation only.
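Such simulated error probabilities can be estimated with a Monte-Carlo sketch of a weighted sum detector whose samples are Poisson-distributed. This is an illustrative stand-in, not the particle-based simulation framework of Chapter 2: the hypothesis-dependent sample means are assumed given, ISI is ignored, and the function names and Knuth-style Poisson sampler are ours.

```python
import math
import random

def poisson_sample(rng, lam):
    # Knuth's multiplication method; adequate for the small means here
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_pe(weights, means1, means0, xi, trials=100000, p1=0.5):
    # Monte-Carlo estimate of the average error probability of a
    # weighted sum detector with threshold xi: each sample is Poisson
    # with a mean that depends on the transmitted bit
    rng = random.Random(1)
    errors = 0
    for _ in range(trials):
        bit = 1 if rng.random() < p1 else 0
        means = means1 if bit else means0
        s = sum(w * poisson_sample(rng, lam)
                for w, lam in zip(weights, means))
        errors += (s >= xi) != (bit == 1)
    return errors / trials
```

For a single sample with mean 10 under bit 1 and mean 1 under bit 0 and threshold ξ = 5, the estimate converges to the analytic value 0.5 Pr(X_1 < 5) + 0.5 Pr(X_0 ≥ 5) ≈ 0.016.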
For the matched filter detector, the weights are based on the number of molecules from the TX expected at the RX, N_{RX|tx}(t), due to a TX emission in the current interval only (so that the weights are the same for every bit interval; the adaptive assignment of weights, which is also performed by neurons as described in [58, Ch. 12], has been considered in [129]). The expected error probabilities for the matched filter detector are found via (4.13). For the equal weight detector, the expected error probability is found using the Gamma functions in (2.55) for the Poisson approximation in (4.12). For simplicity, observations are equally spaced within the interval, such that the sampling times are t[b,m] = m T_int / M.

First, we consider the ISI-free case by having the TX send a single bit in the base case of System 2. We do this to assess whether the matched filter performs the same as the optimal detector in the absence of ISI. In order for the detection to be meaningful, we add an independent Poisson noise source with mean N_{RX|n}(t) = 50 and impose a bit interval of T_int = 200 µs for sampling (for reference, the maximum value of N_{RX|tx}(t) under these conditions is 5.20 molecules at 34.36 µs after transmission, so the number of molecules expected from the noise source is much greater than the number expected from the TX). The bit error probability found via simulation is averaged over 2×10^5 transmissions, and the expected error probability is found considering all bit sequences in W, since in this case there are only two (either 1 or 0). The results are presented in Fig. 4.2.

Figure 4.2: Average error probability of the base case of System 2 as a function of M when there is no ISI, N_{RX|n}(t) = 50, and T_int = 200 µs.
The performance of the matched filter detector is equivalent to that of the maximum likelihood detector.

In Fig. 4.2, we see that the bit error probability achievable with the matched filter is equal to that of the maximum likelihood detector for any value of M. Thus, we claim that the performance of the matched filter is equivalent to the optimal RX design if there is no ISI, even though the signal is corrupted by Poisson and not AWGN noise (the optimality of the matched filter is only guaranteed if the noise is AWGN). Furthermore, the bit error probability achievable with the equal weight detector is greater than those of the optimal detectors. For example, the equal weight detector achieves a bit error probability of 0.03 for M = 100 samples, whereas the optimal detectors achieve a bit error probability of 0.017. Finally, the expected error probabilities evaluated for the matched filter and equal weight detectors are very accurate when compared with the simulation results, except when M ≥ 100 (where the deviation is primarily due to the increasing sample dependence; when M = 100, the time between consecutive samples Δt_ob is only 2 µs). Interestingly, there appears to be no degradation in bit error performance as Δt_ob goes to 0 (i.e., as M increases), even for the maximum likelihood detector, which was designed assuming independent samples. However, for clarity, and in consideration of Fig. 4.2, we separate samples in subsequent simulations by at least 5 µs.

4.5.3 Detection With Varying M

In the remainder of this section, we consider TX sequences of 100 consecutive bits. The simulation results and the expected error probabilities are evaluated by taking the average over 1000 random bit sequences. For optimal joint detection, we limit the explicit channel memory to F = 2 bit intervals as a compromise between computational complexity and observable error probability.

In Fig. 4.3, we study the impact of adding noise with mean N_{RX|n}(t) = 0.5 to the base case when the bit interval is T_int = 200 µs. Due to ISI, the bit error probability achievable with the matched filter is not as low as that achievable with the optimal sequence detector. The disparity reaches orders of magnitude as M increases (about 2 orders of magnitude in the noiseless case when M = 20). For all detectors, the error probability rises as additive noise is introduced. However, all detectors are able to achieve a probability of error below 0.01 as more samples are taken, even with additive noise.

Figure 4.3: Average error probability of the base case of System 2 as a function of M when ISI is included, T_int = 200 µs, and N_{RX|n}(t) = 0 or 0.5.

In Fig. 4.4, we consider the impact of propagation distance on RX performance in the base case of System 2 when the bit interval is T_int = 200 µs, by varying d from 250 nm to 500 nm while keeping all other transmission parameters constant. Noise is not added. We see that all detectors are very sensitive to the propagation distance; the bit error probability varies over many orders of magnitude, even though the distances vary by at most a factor of 2. As the distance increases, fewer molecules reach the RX (i.e., there is less received signal energy) and it takes more time for those molecules to arrive (i.e., the channel impulse response is longer and there is relatively more ISI, although the optimal sequence detector is more robust to this effect). The performance degradation at longer distances can be mitigated by methods such as increasing the bit interval time, changing the number of molecules released by the TX, or adding enzymes to the propagation environment.

Figure 4.4: Average error probability of the base case of System 2 as a function of M when ISI is included, T_int = 200 µs, and the distance d to the RX is varied.

In Figs. 4.5 and 4.6, we consider the impact of the presence of enzymes in the propagation environment in comparison with the base case when the bit interval is T_int = 100 µs, i.e., shorter than that in Figs. 4.3 and 4.4. In both Figs. 4.5 and 4.6, the expected error probabilities when enzymes are present are not as accurate as in the base case. This is because we assumed that the concentration expected at the RX as given in (4.1) is exact, when in fact it is a lower bound (we used k_1 C_{Etot} for k). As the average error probability decreases, it becomes relatively more sensitive to the accuracy of this bound.

In Fig. 4.5, we consider no additive noise sources. The bit interval is so short that, without enzymes, the bit error probability observed by the weighted sum detectors reaches a floor of about 0.06. Orders of magnitude of improvement in bit error probability are observed when enzymes are added to degrade the A molecules and mitigate the impact of ISI; both weighted sum detectors give a bit error probability below 0.005 for M ≥ 10, and performance is within an order of magnitude of the maximum likelihood detector. Interestingly, the equal weight detector outperforms the matched filter detector for M < 10, emphasizing that the matched filter does not necessarily optimize the probability of error when the noise is not AWGN and ISI is present (though the Gaussian approximation of the noise improves as M increases). There is also an improvement in the performance of the maximum likelihood detector for the range of M shown, even though the enzymes are effectively destroying signal energy that could have been used to help with joint detection (the maximum likelihood detector performs better without enzymes when M > 15, but this is omitted from Fig. 4.5 to maintain clarity). The reason for this improvement is the chosen value of F = 2; the actual channel memory is longer without enzymes.

Figure 4.5: Average error probability of System 2 as a function of M when ISI is included, T_int = 100 µs, and enzymes are included to mitigate the impact of ISI.

In Fig. 4.6, we include an additive noise source. Since we do not model the actual location of the noise source, we cannot predict how much of this noise will be broken down by enzymes before it reaches the RX. Intuitively, the enzymes will be able to more easily degrade the molecules emitted by the noise source before they are observed by the RX if the noise source is placed further away. For a fair comparison, we consider a noise source that is placed at a moderate distance from the RX, such that the RX observes noise with mean N_{RX|n}(t) = 1 in the base case and with mean N_{RX|n}(t) = 0.5 when enzymes are present. The optimal detector can now clearly benefit from the presence of enzymes for all values of M, since the enzymes break down noise molecules in addition to those emitted by the TX. The improvement in bit error probability of the optimal detector is about 20 % when enzymes are present, for all values of M.
Of course, the error probabilities observed by all detectors, either with or without enzymes, are worse than those observed in the no-noise case in Fig. 4.5.

Figure 4.6: Average error probability of System 2 as a function of M when ISI is included, T_int = 100 µs, enzymes are included to mitigate the impact of ISI, and an additive noise source is present (N_{RX|n}(t) = 1 without enzymes and N_{RX|n}(t) = 0.5 when enzymes are present).

In Fig. 4.7, we consider the impact of flow in the absence of enzymes and when there is an additive noise source with mean N_{RX|n}(t) = 1 (we assume that the amount of additive noise observed is independent of the flow, but of course in practice that would depend on the nature of the noise source). We set the bit interval to T_int = 100 µs, the same as in Fig. 4.6, so we can make comparisons with the detectors in Fig. 4.6. We plot the average error probability for three different flows: v‖ = 3 mm/s (i.e., towards the RX from the TX), v‖ = −1 mm/s, and v⊥ = 3 mm/s (i.e., perpendicular to the line between TX and RX). The flow magnitudes are chosen so that they affect but do not dominate the channel impulse response (the corresponding Peclet numbers, which describe the dominance of convection versus diffusion and are found here as dv/D_A, are 2.06 for v⊥ and the positive v‖; see [58, Ch. 5]). Of course, for a given flow, the random diffusion of each molecule in every time step is added to the constant displacement due to flow.
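The Peclet numbers quoted above follow directly from the System 2 parameters in Table 4.1; a quick check (the helper function name is ours):

```python
def peclet(d, v, D):
    # Peclet number d*v/D: relative dominance of convection (flow)
    # versus diffusion over the distance d
    return d * v / D

# System 2: d = 0.3 um, v = 3 mm/s, D_A = 4.365e-10 m^2/s gives
# Pe ~ 2.06, matching the value quoted for Fig. 4.7
pe_system2 = peclet(0.3e-6, 3e-3, 4.365e-10)
```

Values of Pe well above 1 indicate advection-dominated transport; well below 1, diffusion-dominated transport.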
Figure 4.7: Average error probability of System 2 as a function of M when ISI is included, T_int = 100 µs, N_{RX|n}(t) = 1, and different degrees of flow are present.

When v‖ is positive, the observed error probabilities in Fig. 4.7 are much better than in the corresponding no-enzyme case in Fig. 4.6, and also better than when enzymes are present, as the flow both increases the strength of the observed signal and mitigates ISI. The expected performance of the weighted sum detectors is not as accurate for positive v‖ because the UCA at the receiver has even less validity with this flow. Perhaps more interestingly, we see that the observed error probabilities with the weighted sum detectors are better than in the no-enzyme case in Fig. 4.6 when v‖ is negative (although the magnitude of the negative flow that we consider is less than that of the positive flow, such that we are still able to observe A molecules at the receiver). Furthermore, all detectors perform better than in the no-enzyme case when the direction of flow is perpendicular to the direction of information transmission. These disruptive flows, which are not in the direction of information transmission, do not prevent the ability to communicate, and in these cases even improve transmission. This feature is not possible for the timing channel model considered in [17, 18], where all released molecules must eventually reach the RX.

4.5.4 Detection With Varying Flow

In the remaining figures, we take a closer look at the impact of steady uniform flow where T_int = 200 µs.
To easily make direct comparisons between the strength of the flow and that of diffusion, we describe the flows dimensionlessly with the Peclet number, where the reference distance is L = d and the reference number of A molecules is N_{Aref} = N_{TX}. For System 1, a flow of v⋆ = 1 translates to a flow of 2 mm/s. To maximize accuracy, we do not apply the UCA in the evaluation of (2.34) in the remainder of this section.

In Fig. 4.8, we consider the impact of varying v⋆‖, i.e., the flow is either in the direction of information transmission or directly opposite. The performance of all three detectors is quite similar for low M and any value of v⋆‖, but the optimal detector can become orders of magnitude better than the weighted sum detectors for large M, i.e., M > 10. Generally, all detectors improve over the no-flow case when v⋆‖ > 0. As v⋆‖ increases, advection dominates diffusion and the impact of ISI is mitigated. However, with sufficiently high v⋆‖ > 0, the molecules enter and leave the RX between consecutive observations. As expected, communication degrades even though the flow is actually non-disruptive. We see this trend for M = 2 when v⋆‖ > 2, where the flow time (d/v⋆‖, or 0.125 ms for v⋆‖ = 2) becomes on the order of the time between observations (0.1 ms). This trend continues for every value of M as v⋆‖ increases, but degradation would never occur as v⋆‖ → ∞ if the RX were perfectly synchronized to make an observation when the emitted molecules pass through. We also note that, in practice, a physical RX would have receptors to which the emitted molecules could bind and then be observed. However, communication under a very strong flow could still degrade if the binding rate of the receptors were not sufficiently high.

Figure 4.8: Average error probability of System 1 as a function of v⋆‖ for M = {2, 5, 10, 40} observations in each bit interval (v⋆⊥ = 0). The horizontal axis is separated into 3 regions in order to show logarithmic scales: −2 ≤ v⋆‖ ≤ −0.2 on a log scale, v⋆‖ = 0, and 0.2 ≤ v⋆‖ ≤ 100 on a log scale.

All detectors fail when v⋆‖ is negative and sufficiently large, since advection-dominant disruptive flow prevents all transmitted molecules from reaching the RX. However, this degradation is less severe for small v⋆‖ and, with sufficient sampling (i.e., M = 40), Fig. 4.8 shows that both weighted sum detectors perform better (albeit slightly) than in the no-flow case when −1 < v⋆‖ < −0.5. Interestingly, this disruptive flow's removal of A molecules in their intended bit interval is mitigated by its removal of ISI molecules. Bi-directional transmission is thus possible in an environment with a steady flow moving in a direction parallel to the line between two transceivers, as long as advection does not dominate diffusion, and communication in each direction using weighted sum detectors can be improved for some flows with a large number of samples.

In Fig. 4.9, we consider the impact of varying v⋆⊥, i.e., the flow is perpendicular to the direction of information transmission. As might be expected, the impact of this disruptive flow is measurably different from that of v⋆‖ < 0.
For all M > 2 shown, all detectors (including the maximum likelihood optimal sequence detector) have a range of v⋆⊥ over which they perform better than in the no-flow case, and the potential for improvement increases with M. As with v⋆‖ < 0, the impact of this disruptive flow's removal of A molecules in their intended bit interval is mitigated by the removal of ISI molecules, i.e., performance improves if the removal of ISI molecules is proportionately greater than the degradation of the useful signal. For example, the maximum improvement in the probability of error over the no-flow case is about 10 % at v⋆⊥ = 1 when M = 5, but the probability of error of the weighted sum detectors decreases by an order of magnitude when M = 40 and 1.5 ≤ v⋆⊥ ≤ 2. The improvement in the performance of the optimal sequence detector is small for all values of M considered, although we expected no more than a small gain because this detector already accounts for ISI. As with negative v⋆‖, all detectors eventually degrade as v⋆⊥ increases and advection begins to dominate diffusion. However, the detectors do not appear to be as sensitive to v⋆⊥ as they are to the corresponding negative values of v⋆‖ in Fig. 4.8. As long as a flow moving in a direction perpendicular to the line between two transceivers does not dominate diffusion, bi-directional communication is not only possible, but can also be improved over the no-flow case if the detectors take enough samples.

Figure 4.9: Average error probability of System 1 as a function of v⋆⊥ for M = {2, 5, 10, 40} observations in each bit interval (v⋆‖ = 0).

Finally, in Fig. 4.10 we consider the impact of varying both v⋆‖ and v⋆⊥ simultaneously, such that v⋆‖ = v⋆⊥. Detector performance is most similar to that shown in Fig. 4.8 because the detectors are more sensitive to v⋆‖. However, there are two notable differences from Fig. 4.8. First, the improvement in the weighted sum detectors is slightly more pronounced in Fig. 4.10 when M = 40 and both flows are small and negative; the expected bit error probability of the matched filter detector is a little more than 0.01 when v⋆‖ = v⋆⊥ = −0.5, but almost 0.015 when only v⋆‖ = −0.5. This is an example of disruptive flows in two dimensions contributing constructively. Second, the degradation of all detectors occurs sooner for positive v⋆‖ and v⋆⊥ than for positive v⋆‖ alone; the probability of error for the equal weight detector and M = 5 or M = 10 begins increasing as a function of flow when v⋆‖ = v⋆⊥ = 2.5, whereas it was still decreasing when only v⋆‖ = 2.5. This is an example of the benefits of a non-disruptive flow component being mitigated by a disruptive flow component.

Figure 4.10: Average error probability of System 1 as a function of v⋆‖ = v⋆⊥ for M = {2, 5, 10, 40} observations in each bit interval.

4.6 Conclusion

In this chapter, we studied both optimal and suboptimal detectors for the passive receiver in a diffusive MC environment. We derived the maximum likelihood sequence detector to provide a lower bound on the achievable bit error probability.
We then designed weighted sum detectors as a family of more practical detectors, where the optimal selection of weights under corruption by AWGN is the matched filter; it performs very close to the optimal detector in the absence of ISI, even if the additive noise is not Gaussian. Simpler weighted sum detectors, with either equal weights or fewer samples per bit interval, offer an easier implementation at the cost of a higher error probability. We showed that having enzymes present enables high throughput without relying on the complexity of the optimal detector. We also showed that communication using weighted sum detectors can perform better than in a no-flow environment when slow disruptive flows are present, and even optimal sequence detectors can perform slightly better than in a no-flow environment when there is a non-dominant disruptive flow perpendicular to the direction of information transmission. When flows become fast enough to dominate diffusion, disruptive flows prevent communication, whereas performance under a non-disruptive flow is only limited by the sampling times of the detector.

Chapter 5

A Unifying Model for External Noise Sources and Intersymbol Interference

5.1 Introduction

The deliberate release of molecules by the intended transmitter might not be the only local source of information molecules. We refer to other such sources as external molecule sources. External molecule sources can be expected in diffusive environments where molecular communication systems may be deployed. Examples include:

• Multiuser interference caused by molecules that are emitted by the transmitters of other communication links.
This interference can be mitigated by using different molecule types for every communication link, but this might not be practical if there is a very large number of links and the individual transceivers share a common design.

• Unintended leakage from vesicles (i.e., membrane-bound containers) where the information molecules are being stored by the transceivers. A rupture could result in a steady release of molecules or, if large enough, the sudden release of a large number of molecules; see [130].

• The output from an unrelated biochemical process. The biocompatibility of the nanonetwork may require the selection of a naturally-occurring information molecule. Thus, other processes that produce or release that type of molecule are effectively noise sources for communication. For example, calcium signaling is commonly used as a messenger molecule within cellular systems (see [3, Ch. 16]), so selecting calcium as the information carrier in a new molecular communication network deployed in a biological environment would mean that the natural occurrence of calcium is a source of noise.

• The unintended reception of other molecules that are sufficiently similar to the information molecules to be recognized by the receiver. For example, the receptors at the receiver might not be specific enough to only bind to the information molecules, or the other molecules might have a shape and size that is very similar to that of the information molecules; see [3, Ch. 4].

In this chapter, we propose a unifying model for external noise sources (including multiuser interference) and ISI in diffusive molecular communication. We consider an unbounded physical environment with steady uniform flow, based on the system model that we have studied in previous chapters (but where we did not develop any detailed noise analysis; we only assumed that the asymptotic impact of the noise sources was known).
The primary contributions of this chapter are as follows:

1. We derive the expected asymptotic (and, wherever possible, time-varying) impact of a continuously-emitting noise source, given the location of the source and its rate of emission. By impact, we refer to the corresponding expected number of molecules observed at the receiver, and by asymptotic we refer to the source being active for infinite time. Closed-form solutions are available for a number of special cases; otherwise, the impact can only be found via numerical integration.

2. We use asymptotic noise from a source far from the receiver to approximate the impact of interfering transmitters, thus providing a simple expression for the molecules observed at the receiver due to multiuser interference without requiring the interfering transmitters' data sequences. The accuracy of this approximation improves as the distance between the receiver and the interfering transmitters increases.

3. We approximate old ISI in the intended communication link as asymptotic interference from a continuously-emitting source. We decompose the received signal into molecules observed due to an emission in the current bit interval, molecules that were emitted in recent bit intervals, and molecules emitted in older intervals, where only the impact of the old emissions is approximated.

Knowing the expected impact of a noise source enables us to model its effect on successful transmissions between the intended transmitter and receiver. For example, in Chapter 4 we assumed that we had knowledge of the expected impact of noise sources in order to evaluate the effect of external noise on the bit error probability at the intended receiver for a selection of detectors.
The expected impact of noise sources can also be used to assess different methods to mitigate the effects of noise, e.g., via the degradation of noise molecules as we consider in this chapter.

Decomposing the signal received from the intended transmitter enables us to bridge all existing work on ISI by adjusting the number of recent bit intervals and deciding how we analytically model the old molecules. Literature on diffusive molecular communication has considered either a finite number of bit intervals to determine ISI (see [16, 25, 28, 36, 45, 52-57]) or accounted for all molecules released (see [19, 20, 22-24, 30, 31, 35, 37-39] and also Chapter 4). In this chapter, we introduce the number of recent bit intervals as a parameter that enables a trade-off between complexity and accuracy in analyzing receiver performance. Furthermore, modeling all older ISI as asymptotic noise will be shown to be a more accurate alternative to assuming that old ISI has no impact at all.

In this chapter, we also describe how an asymptotic model for old ISI simplifies the evaluation of the expected bit error probability of a weighted sum detector with equal weights, which we proposed in Chapter 4. Other possible applications of an asymptotic model for old ISI include a simplified implementation of the optimal sequence detector (which we also considered in Chapter 4), or simplifying the design of an adaptive weight detector (as in [129]) where the decision threshold is adjusted based on knowledge of the previously-detected symbols.

The rest of this chapter is organized as follows. The system model, including the physical environment and its representation in dimensionless form, is summarized from Chapter 2 in Section 5.2. In Section 5.3, we derive the time-varying and asymptotic impact of an external noise source on the receiver. In Section 5.4, we consider the special case of a noise source that is an interfering transmitter.
We adapt the noise analysis for asymptotic old ISI and use it to simplify detector performance evaluation in Section 5.5. Numerical and simulation results are described in Section 5.6, and conclusions are drawn in Section 5.7.

5.2 System Model

We consider the system model described in Chapter 2, where for clarity we limit our modeling of chemical reactions to first-order degradation as defined in (2.2). We also drop the subscript A from all corresponding parameters. There is one RX of interest, centered at the origin with volume V_RX. There are U point sources of A molecules, including the intended transmitter (TX) placed at {−d, 0, 0}. There is a
steady uniform flow v with components v‖ and v⊥, where v‖ is the component of v in the direction of a line pointing from the TX towards the RX.

For clarity of exposition in this chapter, we focus on the dimensionless form of the system model. Specifically, dimensional analysis reduces the number of parameters that appear in the equations. Unless otherwise noted, all variables that are described in this chapter are assumed to be dimensionless (as denoted by a ⋆ superscript), and they are equal to the dimensional variables scaled by the appropriate reference variables. We define reference distance L in m and reference number of molecules N_Aref, such that the reference concentration of A molecules is C0 = N_Aref/L³.

As described in Chapter 2, we assume that the uth source is placed at {−d⋆_u, 0, 0}, where d⋆_u ≥ 0. The advection variables must be defined relative to each source's corresponding coordinate frame, such that v⋆‖,u > 0 always represents flow from the source towards the receiver.

The dimensionless signal observed at the receiver, N⋆_RX(t⋆), is the cumulative impact of all molecule emitters in the environment, including interfering transmitters and other noise sources. As described in Chapter 2, we can apply superposition to the impacts of the individual sources, such that the cumulative impact of multiple noise sources is the sum of the impacts of the individual sources. If we assume that there are U − 1 sources of A molecules that are not the intended transmitter (without specifying what kinds of sources these are, i.e., other transmitters or simply leaking A molecules), then we recall from (2.45) that the complete observed signal can be written as

$$N^\star_{\text{RX}}(t^\star) = N^\star_{\text{RX}|tx}(t^\star) + \sum_{u=2}^{U} N^\star_{\text{RX}|u}(t^\star), \qquad (5.1)$$

where N⋆_RX|tx(t⋆) is the signal from the intended transmitter. In Table 5.1, we summarize where the different terms in (5.1) are analyzed in the remainder of this chapter and whether each type of source is treated as continuously-emitting.

Table 5.1: Description of the terms in (5.1).

Component of N⋆_RX(t⋆) | Source Type             | Section | Detailed Form | Emitting Continuously?
N⋆_RX|u(t⋆)            | Random Noise            | 5.3     | (5.4)         | Yes
N⋆_RX|u(t⋆)            | Interfering Transmitter | 5.4     | (5.19)        | Approximation in (5.20)
N⋆_RX|tx(t⋆)           | Intended Transmitter    | 5.5     | (5.21)        | Approximation of old ISI in (5.23)-(5.25)

In Sections 5.3 and 5.4, we model N⋆_RX|u(t⋆) as a random noise source and as an interfering transmitter, respectively. In Section 5.5, we decompose N⋆_RX|tx(t⋆) to approximate old ISI as asymptotic noise.

5.3 External Additive Noise

In this section, we derive the impact of external noise sources on the receiver, given that we have some knowledge about the location of the noise sources and their mode of emission. First, we consider a single point noise source placed at {−d⋆_n, 0, 0} where d⋆_n is non-negative (we change the subscript of the source from u to n in order to emphasize that this source is random noise and not a transmitter of information). The source emits molecules according to the random process N_Gen,n(t), represented dimensionlessly as N⋆_Gen,n(t⋆) = L²N_Gen,n(t)/(D N_Aref).
Assuming that the expected generation of molecules can be represented as a step function, i.e., N_Gen,n(t) = N_Gen,n, t ≥ 0, we then formulate the expected impact of the noise source at the receiver, N⋆_RX|n(t⋆).

In its most general form, we will not have a closed-form solution for the expected impact of the noise source. Next, we present either time-varying or asymptotic expressions for a number of relevant special cases, some of which are in closed form and others that facilitate numerical integration. While we are ultimately most interested in asymptotic solutions (particularly for extension to the analysis of interference), time-varying solutions are also of interest when they are available because they give us insight into how long a noise source must be active before we can model its impact as asymptotic. Time-varying solutions will also be useful when we consider old ISI in Section 5.5. As previously noted, we can use superposition to consider the cumulative impact of multiple noise sources, as given in (5.1), where the advection variables v⋆‖,n and v⋆⊥,n must be defined for each source depending on its location.

5.3.1 General Noise Model

First, we require the expected point concentration due to the noise source, i.e., the expected concentration of molecules observed at the point {x⋆, y⋆, z⋆} due to an emission of one molecule by the noise source at t⋆ = 0. This is analogous to the point concentration due to an intended transmitter at the same location, so we recall (2.33) and write

$$C^\star = \frac{1}{(4\pi t^\star)^{3/2}} \exp\left(-k^\star t^\star - \frac{r^{\star\,2}_{n,\text{ef}}}{4 t^\star}\right), \qquad (5.2)$$

where

$$r^{\star\,2}_{n,\text{ef}} = \left(x^\star + d^\star_n - v^\star_{\parallel,n} t^\star\right)^2 + \left(y^\star - v^\star_{\perp,n} t^\star\right)^2 + (z^\star)^2 \qquad (5.3)$$

is the square of the time-varying effective distance between the noise source and the point {x⋆, y⋆, z⋆}.

Unlike a transmitter (as we considered in Chapter 2), the noise source is emitting molecules as described by the general random process N⋆_Gen,n(t⋆).
We are already averaging over the randomness of the diffusion channel (i.e., we have the expected channel impulse response), so we only consider the time-varying mean of the noise source emission process, i.e., N⋆_Gen,n(t⋆). Thus, the expected impact of the noise source is found by multiplying (5.2) by N⋆_Gen,n(t⋆), integrating over V⋆_RX, and then integrating over all time up to t⋆, i.e.,

$$N^\star_{\text{RX}|n}(t^\star) = \int_{-\infty}^{t^\star} \int_0^{R^\star_{\text{RX}}} \int_0^{2\pi} \int_0^{\pi} r^{\star\,2}\, N^\star_{\text{Gen},n}(\tau^\star)\, C^\star \sin\theta \, d\theta \, d\phi \, dr^\star \, d\tau^\star, \qquad (5.4)$$

where r⋆ is the magnitude of the distance from the origin to the point {x⋆, y⋆, z⋆} within V⋆_RX. To solve (5.4), we must also convert r⋆²_n,ef from cartesian to spherical coordinates, which can be shown to be

$$r^{\star\,2}_{n,\text{ef}} = r^{\star\,2} + d^{\star\,2}_n - 2 t^\star d^\star_n v^\star_\parallel + 2 d^\star_n r^\star \cos\phi \sin\theta + t^{\star\,2}\left(v^{\star\,2}_\parallel + v^{\star\,2}_\perp\right) - 2 t^\star r^\star \left(v^\star_\parallel \cos\phi \sin\theta + v^\star_\perp \sin\phi \sin\theta\right), \qquad (5.5)$$

where φ = tan⁻¹(y⋆/x⋆) and θ = cos⁻¹(z⋆/r⋆). Generally, (5.4) does not have a known closed-form solution, even if we omit the integral over time. In the following subsection, we present a series of cases for which (5.4) can be more easily solved (numerically or in closed form), for either arbitrary t⋆ or as t⋆ → ∞. In the asymptotic case, we write N⋆_RX|n(t⋆)|_{t⋆→∞} = N⋆_RX|n for compactness. The asymptotic case will also be useful to approximate multiuser interference and old intersymbol interference in Sections 5.4 and 5.5, respectively.

5.3.2 Tractable Noise Analysis

For tractability, we will assume throughout the remainder of this section that the expected noise source emission in (5.4) can be described as a step function, i.e., N⋆_Gen,n(t⋆) = L²N_Gen,n/(D N_Aref), t⋆ ≥ 0. Furthermore, we choose the reference number of molecules to be N_Aref = L²N_Gen,n/D, so that N⋆_Gen,n(t⋆) = 1, t⋆ ≥ 0.
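This normalization can be made concrete with a short check (Python; the dimensional reference distance and diffusion coefficient below are hypothetical, with the generation rate borrowed from Table 5.3):

```python
# Hypothetical dimensional values: reference distance L (m), diffusion
# coefficient D (m^2/s), and a constant generation rate N_gen (molecules/s)
L = 1e-6
D = 1e-9
N_gen = 1.2e6  # cf. Table 5.3

# Choosing the reference number of molecules as N_Aref = L^2 * N_gen / D ...
N_Aref = L**2 * N_gen / D

# ... makes the dimensionless emission rate exactly 1
N_gen_star = L**2 * N_gen / (D * N_Aref)
```

With this choice, every impact expression in the remainder of the section can be written for a unit-rate source and rescaled afterwards.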
The case where the noise source also shuts off at some future time, for example when a ruptured vesicle is depleted, is a straightforward extension that we considered in [129]. We note that the emission of molecules by the noise source could then be deterministically uniform, such that the emission process N⋆_Gen,n(t⋆) is in fact equal to its expected value (e.g., via leakage from a vesicle that ruptured at t⋆ = 0), or it could be random with independent emission times (e.g., the stochastic output of a chemical reaction mechanism with a constant expected generation rate that was triggered to begin at t⋆ = 0). Strictly speaking, in the latter case the expected emission rate is 1. This will not affect any of the following analysis because we are deriving the expected impact N⋆_RX|n(t⋆). We emphasize that our analysis focuses on the expected impact and not the complete probability mass function (PMF) of the impact. A case-by-case analysis of the noise release statistics would be needed to determine the time-varying PMF of the impact at the receiver.

The solutions to (5.4) that we present in the remainder of this section follow one of two general strategies. Both strategies reduce (5.4) to a single integral, which can be solved numerically or reduced to closed form if additional assumptions are made. The first strategy is the uniform concentration assumption (UCA), where we assume that the expected concentration of A molecules due to the noise source is uniform and equal to that expected at the center of the receiver (i.e., at the origin). As we observed in Chapter 2, this assumption is accurate if the noise source is sufficiently far from the receiver, such that the expected concentration of A molecules will not vary significantly throughout the receiver. Here, applying the UCA means that we do not need to integrate over V⋆_RX and (5.4) becomes

$$N^\star_{\text{RX}|n}(t^\star) = V^\star_{\text{RX}} \int_0^{t^\star} C^\star\!\left(d^\star_{n,\text{ef}}, \tau^\star\right) d\tau^\star, \qquad (5.6)$$

where d⋆²_n,ef = (d⋆_n − v⋆‖τ⋆)² + (v⋆⊥τ⋆)² is the square of the effective distance from the noise source to the receiver, and the expected concentration at the receiver is

$$C^\star\!\left(d^\star_{n,\text{ef}}, t^\star\right) = \frac{1}{(4\pi t^\star)^{3/2}} \exp\left(-\frac{d^{\star\,2}_{n,\text{ef}}}{4 t^\star} - k^\star t^\star\right). \qquad (5.7)$$

The second strategy for solving (5.4) does not apply the UCA, so we include the integration over V⋆_RX. We considered that integration in Chapter 2, and for general advection we could only evaluate the integral over r⋆. A closed-form solution was possible if v⋆⊥ = 0, which we derived in Theorem 2.1. Including the integration over time, where N⋆_Gen,n(t⋆) = 1, t⋆ ≥ 0, (5.4) becomes

$$N^\star_{\text{RX}|n}(t^\star) = \int_0^{t^\star} \left\{ \frac{1}{2}\left[\operatorname{erf}\left(\frac{R^\star_{\text{RX}} - d^\star_{\text{ef},x}}{2\tau^{\star\,\frac{1}{2}}}\right) + \operatorname{erf}\left(\frac{R^\star_{\text{RX}} + d^\star_{\text{ef},x}}{2\tau^{\star\,\frac{1}{2}}}\right)\right] + \frac{1}{d^\star_{\text{ef},x}}\sqrt{\frac{\tau^\star}{\pi}}\left[\exp\left(-\frac{(d^\star_{\text{ef},x} + R^\star_{\text{RX}})^2}{4\tau^\star}\right) - \exp\left(-\frac{(d^\star_{\text{ef},x} - R^\star_{\text{RX}})^2}{4\tau^\star}\right)\right] \right\} \exp(-k^\star \tau^\star)\, d\tau^\star, \qquad (5.8)$$

where d⋆_ef,x = −(d⋆_n − v⋆‖τ⋆) is the effective distance along the x⋆-axis from the noise source to the center of the receiver, and the error function is defined in (2.36). Eq. (5.8) can be evaluated numerically but, unlike (5.6), is valid for any d⋆_n (although special consideration must be made if d⋆_n = 0, i.e., the worst-case location for the noise source, and we consider that case at the end of this subsection).

The two strategies that we have presented reduce (5.4) to a single integral (either (5.6) or (5.8)), thereby facilitating numerical evaluation. In the remainder of this subsection, we make additional assumptions that enable us to solve (5.4) in closed form.

5.3.2.1 Asymptotic Solutions

In the asymptotic case, i.e., as t⋆ → ∞, it is straightforward to show that (5.6) becomes

$$N^\star_{\text{RX}|n} = \frac{V^\star_{\text{RX}}}{4\pi d^\star_n}\exp\left(\frac{d^\star_n v^\star_\parallel}{2} - \frac{d^\star_n}{2}\sqrt{v^{\star\,2}_\parallel + v^{\star\,2}_\perp + 4k^\star}\right), \qquad (5.9)$$

where we apply [111, Eq. 3.472.5]

$$\int_0^\infty \frac{1}{a^{3/2}}\exp\left(-\beta a - \frac{c}{a}\right) da = \sqrt{\frac{\pi}{c}}\exp\left(-2\sqrt{\beta c}\right), \qquad (5.10)$$

and recall that d⋆_n is positive.

Remark 5.1. From (5.9) it can be shown that, if there is no flow in the y-direction and no molecule degradation (i.e., v⋆⊥ = 0 and k⋆
= 0), then any positive flow along the x-direction (i.e., v⋆‖ > 0) will not change the asymptotic impact of the noise source. We had expected that this flow would increase the asymptotic impact in comparison to the no-flow case, so this is a somewhat surprising result.

An asymptotic closed-form solution to (5.8) is possible if we impose v⋆‖ = 0, such that we are restricted to the no-flow case. If the noise source is also close to the receiver, then this is another worst-case scenario because there is no advection to carry the noise molecules away. The result is presented in the following theorem:

Theorem 5.1 (N⋆_RX|n in Absence of Flow). The expected asymptotic impact (i.e., as t⋆ → ∞) of a noise source in the absence of flow, whose expected output is N⋆_Gen,n(t⋆) = 1, t⋆ ≥ 0, is given by

$$N^\star_{\text{RX}|n} = \frac{1}{2k^\star}\left[1 - \frac{1}{d^\star_n}\exp\left(-|r^\star_{\text{dif}}|\, k^{\star\,\frac{1}{2}}\right)\left(k^{\star\,-\frac{1}{2}} + g(R^\star_{\text{RX}})\, R^\star_{\text{RX}}\right) + \frac{1}{d^\star_n}\exp\left(-r^\star_{\text{sum}}\, k^{\star\,\frac{1}{2}}\right)\left(k^{\star\,-\frac{1}{2}} + R^\star_{\text{RX}}\right) + g(R^\star_{\text{RX}})\right], \qquad (5.11)$$

where r⋆_dif = R⋆_RX − d⋆_n, r⋆_sum = R⋆_RX + d⋆_n, g(R⋆_RX) = sgn(R⋆_RX − d⋆_n), and sgn(·) is the sign function.

Proof. Please refer to Appendix C.1.

5.3.2.2 Absence of Flow and Molecule Degradation

Time-varying solutions to both (5.6) and (5.8) are only possible in the absence of flow and molecule degradation, i.e., if v⋆‖ = v⋆⊥ = 0 and k⋆ = 0. If we are using the UCA, then (5.6) can be combined with [79, Eq. (3.5b)] and we can write

$$N^\star_{\text{RX}|n}(t^\star) = \frac{V^\star_{\text{RX}}}{4\pi d^\star_n}\left(1 - \operatorname{erf}\left(\frac{d^\star_n}{2\sqrt{t^\star}}\right)\right). \qquad (5.12)$$

Remark 5.2. We see from (2.36) that erf(a) → 0 as a → 0, and that erf(a) ≈ 0.056 when a = 0.05. If the reference distance is chosen to be the distance of the noise source from the receiver, i.e., L = d_n, and if there is no advection or molecule degradation, then from (5.12) we must wait until t⋆ > 100 before the impact is expected to be at least 95% of the asymptotic impact.

The time-varying solution to (5.8) is presented in the following theorem:

Theorem 5.2 (N⋆_RX|n(t⋆) in Absence of Flow and Degradation).
The expected time-varying impact of a noise source, whose expected output is N⋆_Gen,n(t⋆) = 1, t⋆ ≥ 0, is given in the absence of flow and molecule degradation by

$$N^\star_{\text{RX}|n}(t^\star) = \operatorname{erf}\left(\frac{r^\star_{\text{dif}}}{2\sqrt{t^\star}}\right)\left[\frac{r^{\star\,2}_{\text{dif}}}{4} + \frac{t^\star}{2} + \frac{r^{\star\,3}_{\text{dif}}}{6 d^\star_n}\right] + \operatorname{erf}\left(\frac{r^\star_{\text{sum}}}{2\sqrt{t^\star}}\right)\left[\frac{r^{\star\,2}_{\text{sum}}}{4} + \frac{t^\star}{2} - \frac{r^{\star\,3}_{\text{sum}}}{6 d^\star_n}\right] + \sqrt{\frac{t^\star}{\pi}}\exp\left(-\frac{r^{\star\,2}_{\text{dif}}}{4 t^\star}\right)\left[\frac{r^\star_{\text{dif}}}{2} - \frac{2 t^\star}{3 d^\star_n} + \frac{r^{\star\,2}_{\text{dif}}}{3 d^\star_n}\right] + \sqrt{\frac{t^\star}{\pi}}\exp\left(-\frac{r^{\star\,2}_{\text{sum}}}{4 t^\star}\right)\left[\frac{r^\star_{\text{sum}}}{2} + \frac{2 t^\star}{3 d^\star_n} - \frac{r^{\star\,2}_{\text{sum}}}{3 d^\star_n}\right] - g(R^\star_{\text{RX}})\frac{r^{\star\,2}_{\text{dif}}}{4} - \frac{r^{\star\,2}_{\text{sum}}}{4} + \frac{r^{\star\,3}_{\text{sum}}}{6 d^\star_n} - \frac{|r^\star_{\text{dif}}|^3}{6 d^\star_n}. \qquad (5.13)$$

Proof. Please refer to Appendix C.2.

Although (5.13) is verbose, it can be evaluated for any non-zero values of d⋆_n and t⋆. We consider the case d⋆_n = 0 at the end of this subsection. Here, we note that the asymptotic impact of a noise source, i.e., as t⋆ → ∞, can be evaluated from (5.13) using the properties of limits and l'Hôpital's rule as

$$N^\star_{\text{RX}|n} = \frac{r^{\star\,3}_{\text{sum}}}{6 d^\star_n} - \frac{|r^\star_{\text{dif}}|^3}{6 d^\star_n} - \frac{r^{\star\,2}_{\text{sum}}}{4} - g(R^\star_{\text{RX}})\frac{r^{\star\,2}_{\text{dif}}}{4}. \qquad (5.14)$$

Remark 5.3. Eq. (5.14) simplifies to a single term if the noise source is outside of the receiver, i.e., if r⋆_dif = R⋆_RX − d⋆_n < 0. It can then be shown that N⋆_RX|n = R⋆³_RX/(3d⋆_n), which is equivalent to (5.9) with a spherical receiver in the absence of advection and molecule degradation, i.e., v⋆‖ = v⋆⊥ = k⋆ = 0, even though (5.9) was derived for a noise source that is far from the receiver. Thus, in the absence of advection and molecule degradation, the expected impact of a noise source anywhere outside the receiver increases with the inverse of the distance to the receiver.

5.3.2.3 Worst-Case Noise Source Location

Finally, we consider the special case where the noise source is located at the receiver, i.e., d⋆_n = 0. Clearly, the UCA should not apply in this case, so we only consider the evaluation of (5.8). Generally, we need to apply l'Hôpital's rule to account for d⋆_n = 0, and here we do so for three cases.
First, if evaluating (5.8) directly and v⋆‖ = 0, then l'Hôpital's rule must be used to re-write the second term inside the curly braces in (5.8) (i.e., the term with the two exponentials, including the scaling by √(τ⋆/π)/d⋆_ef,x) as

$$-\frac{R^\star_{\text{RX}}}{(\pi \tau^\star)^{\frac{1}{2}}}\exp\left(-\frac{R^{\star\,2}_{\text{RX}}}{4\tau^\star}\right). \qquad (5.15)$$

Second, if evaluating (5.11), which applies asymptotically in the absence of flow, then we can apply l'Hôpital's rule in the limit of d⋆_n → 0 and write (5.11) as

$$\lim_{d^\star_n \to 0} N^\star_{\text{RX}|n} = \frac{1}{k^\star} - \exp\left(-R^\star_{\text{RX}}\sqrt{k^\star}\right)\left(\frac{1}{k^\star} + \frac{R^\star_{\text{RX}}}{\sqrt{k^\star}}\right). \qquad (5.16)$$

Remark 5.4. From (5.11) and (5.16) we see that any increase in k⋆ will result in a decrease in the expected number of noise molecules observed, even if the noise source is located at the receiver (i.e., d⋆_n = 0).

Third, the time-varying impact of the worst-case noise source in the absence of flow and molecule degradation can be found using repeated applications of l'Hôpital's rule to (5.13) as

$$\lim_{d^\star_n \to 0} N^\star_{\text{RX}|n}(t^\star) = \operatorname{erf}\left(\frac{R^\star_{\text{RX}}}{2\sqrt{t^\star}}\right)\left[t^\star - \frac{R^{\star\,2}_{\text{RX}}}{2}\right] - R^\star_{\text{RX}}\sqrt{\frac{t^\star}{\pi}}\exp\left(-\frac{R^{\star\,2}_{\text{RX}}}{4 t^\star}\right) + \frac{R^{\star\,2}_{\text{RX}}}{2}. \qquad (5.17)$$

This subsection considered a number of solutions to (5.4), where the expected molecule emission is described as a step function. In Table 5.2, we summarize precisely which conditions and assumptions apply to each equation.

Table 5.2: Summary of the equations for the impact of an external noise source and the conditions under which they can be used. By d⋆_n = Far, we mean that the UCA is applied.

Eq.    | Closed Form? | Asymptotic? | d⋆_n | k⋆  | v⋆⊥ | v⋆‖
(5.6)  | No           | No          | Far  | Any | Any | Any
(5.8)  | No           | No          | Any  | Any | 0   | Any
(5.9)  | Yes          | Yes         | Far  | Any | Any | Any
(5.11) | Yes          | Yes         | ≠ 0  | Any | 0   | 0
(5.12) | Yes          | No          | Far  | 0   | 0   | 0
(5.13) | Yes          | No          | ≠ 0  | 0   | 0   | 0
(5.14) | Yes          | Yes         | ≠ 0  | 0   | 0   | 0
(5.16) | Yes          | Yes         | 0    | Any | 0   | 0
(5.17) | Yes          | No          | 0    | 0   | 0   | 0

We will see the accuracy of these equations in comparison with simulated noise sources in Section 5.6. In practice, these equations can enable us to more accurately assess the effect of noise sources on the bit error probability of the intended communication link (as we did in Chapter 4, where we only assumed that the expected impact of noise sources was known). In the remainder of this chapter, we focus on using the noise analysis to approximate some or all of the signal observed by transmitters that release impulses of molecules.

5.4 Multiuser Interference

In this section, we consider the impact of transmitters that are using the same modulation scheme as the transmitter that is linked to the receiver of interest but are sending independent information. Thus, the A molecules emitted by these unintended transmitters are effectively noise. We begin by presenting the complete model of the observations made at the receiver due to any number of transmitters (independent of whether the transmitters are linked to the receiver). This detailed model is the most comprehensive, so it enables the most accurate calculation of the bit error probability, but it requires knowledge of all transmitter sequences. Then, we apply our results in Section 5.3 to simplify the analysis of an interfering transmitter.

5.4.1 Complete Multiuser Model

We recall from (2.40) that the (dimensionless) number of molecules expected at the RX due to all emissions by the TX is

$$N^\star_{\text{RX}|tx}(t^\star) = \sum_{b=1}^{\lfloor t^\star/T^\star_{\text{int}} + 1\rfloor} W[b]\, N^\star_{\text{RX}|tx,0}\left(t^\star - (b-1) T^\star_{\text{int}}\right), \qquad (5.18)$$

where we apply the UCA. Eq. (2.43) is an analogous expression for the uth molecule source when it is also a transmitter, where W_u = {W_u[1], W_u[2], ...} is the uth source's binary sequence, T⋆_int,u is the bit interval, and N⋆_TX,u = 1 molecules are released at the start of the interval to send a binary 1.

Consider that all U sources of A molecules are transmitters.
From (2.42), the number of molecules expected at the RX is then

$$N^\star_{\text{RX}}(t^\star) = \sum_{u=1}^{U} \sum_{b=1}^{\lfloor t^\star/T^\star_{\text{int},u} + 1\rfloor} W_u[b]\, N^\star_{\text{RX}|u,0}\left(t^\star - t^\star_{u,0} - (b-1) T^\star_{\text{int},u}\right). \qquad (5.19)$$

5.4.2 Asymptotic Interference

Precise analysis of the performance of the receiver's detector can be made using (5.19), but we must have knowledge of every transmitter sequence W_u. We propose simplifying the analysis by applying our results in Section 5.3. For widest applicability, i.e., to include molecule degradation and flow in any direction, we assume that interfering transmitters are sufficiently far away to apply the uniform concentration assumption (this makes sense; an interferer that is very close to the receiver would likely result in an error probability that is too high for communication with the intended transmitter to be practical). The corresponding closed-form analysis is asymptotic in time, but this is acceptable because we can assume that interferers were transmitting for a long time before the start of our intended transmission (we will see in Section 5.6 that this is an easy assumption to satisfy). The remainder of this section can also be easily extended to the other special cases in Section 5.3.

Consider the asymptotic impact of a single interfering transmitter. The emissions of the uth transmitter must be approximated as a continuous function so that we can apply the results from our noise analysis. The effective emission rate is P₁N⋆_TX,u molecules every T⋆_int,u dimensionless time units. If we choose N_Aref = L²P₁N_TX,u/(T_int,u D), then the emission function of the uth transmitter can be approximated as N⋆_Gen,u(t⋆) = 1, t⋆ ≥ 0. From (5.9), we immediately have the expected asymptotic impact of the interfering transmitter, N⋆_RX|u, written as

$$N^\star_{\text{RX}|u} = \frac{V^\star_{\text{RX}}}{4\pi d^\star_u}\exp\left(\frac{d^\star_u v^\star_{\parallel,u}}{2} - \frac{d^\star_u}{2}\sqrt{v^{\star\,2}_{\parallel,u} + v^{\star\,2}_{\perp,u} + 4k^\star}\right), \qquad (5.20)$$

where we recall that, in general, we have adjusted the reference coordinate frame so that the uth transmitter lies on the negative x⋆-axis.
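As a numerical sketch (Python; the distances, flow components, and degradation rate below are hypothetical, not values from the thesis), (5.20) can be evaluated per interferer and summed by superposition. The sketch also checks Remark 5.1 (with k⋆ = 0 and v⋆⊥ = 0, a positive v⋆‖ leaves the asymptotic impact unchanged) and, via the no-flow build-up in (5.12), how slowly a source approaches its asymptotic impact:

```python
import math

def asymptotic_interference(d_u, v_par, v_perp, k, v_rx):
    # Expected asymptotic impact (5.20) of one interfering transmitter under the
    # UCA, with the reference number of molecules chosen so the emission rate is 1
    return v_rx / (4.0 * math.pi * d_u) * math.exp(
        d_u * v_par / 2.0 - (d_u / 2.0) * math.sqrt(v_par**2 + v_perp**2 + 4.0 * k))

# Hypothetical interferers: (distance, v_par, v_perp) in each local frame
interferers = [(4.0, 0.5, 0.0), (6.0, -0.2, 0.3)]
k, v_rx = 0.05, 1.0
total = sum(asymptotic_interference(d, vp, vt, k, v_rx) for d, vp, vt in interferers)

def fraction_of_asymptotic(t, d_n):
    # Ratio of the time-varying no-flow impact (5.12) to its asymptotic limit;
    # the common factor V_RX / (4 * pi * d_n) cancels in the ratio
    return 1.0 - math.erf(d_n / (2.0 * math.sqrt(t)))

# Remark 5.2: with d_n = 1 (reference distance equal to the source distance),
# the impact is still only ~94.4% of its asymptotic value at t = 100
build_up = fraction_of_asymptotic(100.0, 1.0)
```

The `fraction_of_asymptotic` check is what motivates treating interferers as having been active for a long time before the intended transmission begins.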
The time-varying impact can be found via numerical integration of (5.6), or, if v⋆‖ = v⋆⊥ = 0 and k⋆ = 0, i.e., if there is no advection or molecule degradation, via (5.12). The complete asymptotic multiuser interference is given by adding (5.20) for each of the U − 1 interfering transmitters.

We note that (5.20) is a constant approximation of what is in practice a signal that is expected to oscillate over time. The channel impulse response given by (5.2) has a definitive peak and tail. The interference can be envisioned as the most recent peak followed by all of the tails of prior transmissions. Even asymptotically, the expected impact at a given instant will depend on the time relative to the interferer's transmission intervals. So, over time, (5.20) will both overestimate and underestimate the impact of the interferer. However, we expect that, on average, (5.20) will tend to overestimate the impact more often. This is because the approximation of molecule emission as a continuous function effectively makes the release of molecules later than it actually is by spreading emissions over the entire bit interval instead of releasing all of them at the start of the bit interval. We will visualize the accuracy of (5.20) more clearly in Section 5.6.

5.5 Asymptotic Intersymbol Interference

In this section, we focus on characterizing the signal observed at the receiver due to the intended transmitter only. We seek a method to model some of the ISI asymptotically based on the previous analysis in this chapter.
Specifically, we model F prior bits explicitly (and not as a signal from a continuously-emitting source), and the impact of all earlier bits is approximated asymptotically as a continuously-emitting source. The choice of F enables a trade-off between accuracy and computational efficiency. We describe the application of our model for asymptotic (i.e., old) ISI to simplify the evaluation of the expected bit error probability of weighted sum detectors, which in general requires finding the expected probability of error of all possible transmitter sequences and taking an average. Other applications of our model for asymptotic ISI are simplifying the implementation of the maximum likelihood detector and the design of a weighted sum detector with adaptive weights. Adaptive weighting is physically realizable in biological systems; neurons sum inputs from synapses with dynamic weights (see [58, Ch. 12]). The design of an adaptive weighted sum detector for diffusive MC was considered in [129].

5.5.1 Decomposition of Received Signal

We now decompose the signal from the intended transmitter, i.e., N⋆_RX|tx(t⋆) as written in (5.18). We decompose (2.40) into three terms: molecules observed due to the current bit interval, N⋆_RX|tx,cur(t⋆), molecules observed that were released within F intervals before the current interval, N⋆_RX|tx,ISI(t⋆), and molecules observed that were released in any older bit interval, N⋆_RX|tx,old(t⋆). Specifically, (5.18) becomes

$$N^\star_{\text{RX}|tx}(t^\star) = W[b_{\text{cur}}]\, N^\star_{\text{RX}|tx,0}\left(t^\star - (b_{\text{cur}} - 1) T^\star_{\text{int}}\right) + \sum_{b=b_{\text{cur}}-F}^{b_{\text{cur}}-1} W[b]\, N^\star_{\text{RX}|tx,0}\left(t^\star - (b-1) T^\star_{\text{int}}\right) + \sum_{b=1}^{b_{\text{cur}}-F-1} W[b]\, N^\star_{\text{RX}|tx,0}\left(t^\star - (b-1) T^\star_{\text{int}}\right) \qquad (5.21)$$

$$= N^\star_{\text{RX}|tx,\text{cur}}(t^\star) + N^\star_{\text{RX}|tx,\text{ISI}}(t^\star) + N^\star_{\text{RX}|tx,\text{old}}(t^\star), \qquad (5.22)$$

where b_cur = ⌊t⋆/T⋆_int + 1⌋ is the index of the current bit interval, and we emphasize that each term in (5.22) is evaluated given the current transmitter sequence W.
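A minimal sketch of this decomposition (Python; the channel impulse response, bit sequence, and parameters are illustrative placeholders, using the no-flow, no-degradation UCA response) splits the expected received signal into the three terms of (5.21)-(5.22) and confirms that they sum to the full expression (5.18):

```python
import math

def h(tau, d=1.0, v_rx=1.0):
    # Placeholder expected channel impulse response (UCA, no flow, k = 0)
    if tau <= 0.0:
        return 0.0
    return v_rx * (4.0 * math.pi * tau) ** -1.5 * math.exp(-d * d / (4.0 * tau))

def decompose(t, w_bits, t_int, f):
    # Current, recent-ISI (last f bits), and old-ISI terms of (5.21)-(5.22);
    # bit b is emitted at the start of its interval, time (b - 1) * t_int
    b_cur = int(t / t_int) + 1
    cur = w_bits[b_cur - 1] * h(t - (b_cur - 1) * t_int)
    recent = sum(w_bits[b - 1] * h(t - (b - 1) * t_int)
                 for b in range(max(1, b_cur - f), b_cur))
    old = sum(w_bits[b - 1] * h(t - (b - 1) * t_int)
              for b in range(1, max(1, b_cur - f)))
    return cur, recent, old

w = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
cur, recent, old = decompose(t=4.6, w_bits=w, t_int=0.5, f=2)
full = sum(w[b - 1] * h(4.6 - (b - 1) * 0.5) for b in range(1, 11))
```

In the chapter's model, only the `old` term is subsequently replaced by an asymptotic, sequence-independent approximation, while `cur` and `recent` remain explicit.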
The decomposition enables us to simplify the expression for the signal observed due to molecules released by the transmitter if we can write an asymptotic expression for the expected value of N⋆_RX|tx,old(t⋆), i.e., N⋆_RX|tx,old. However, the analysis that we have derived in this chapter for the asymptotic impact of signals is dependent on the on-going emission of molecules that began at time t⋆ = 0. We present two methods to derive N⋆_RX|tx,old. First, we begin with the asymptotic expression in (5.20) for an interferer that is always emitting and then subtract the unconditional expected impact of molecules released within the last F + 1 bit intervals. We then write the expected and time-varying but asymptotic expression for N⋆_RX|tx,old(t⋆) as

$$N^\star_{\text{RX}|tx,\text{old}}(t^\star) = N^\star_{\text{RX}|tx} - N^\star_{\text{RX}|tx,\text{ISI}}(t^\star) - N^\star_{\text{RX}|tx,\text{cur}}(t^\star), \qquad (5.23)$$

where N⋆_RX|tx is in the same form as (5.20), and here N⋆_RX|tx,cur(t⋆) and N⋆_RX|tx,ISI(t⋆) do not depend on W because they are averaged over the 2 and 2^F possible corresponding bit sequences, respectively. Eq. (5.23) is time-varying because the expected impact that we subtract depends on the time within the current bit interval. From (5.23), N⋆_RX|tx,old(t⋆) is asymptotically a cyclostationary process; the expected mean is periodic with period T⋆_int. Although (5.23) is tractable, it is cumbersome to evaluate (because we need to average the expected impact of molecules released over all 2^{F+1} possible recent bit sequences, including the current bit) and is also not accurate (because (5.20) is based on continuous emission while N⋆_RX|tx,ISI(t⋆) and N⋆_RX|tx,cur(t⋆) are based on discrete emissions at the start of the corresponding bit intervals). A second method to derive N⋆_RX|tx,old, which requires less approximation, is to start with (5.6) and change the limits of integration over time, i.e.,

$$N^\star_{\text{RX}|tx,\text{old}}(t^\star) = \int_{t^\star - (b_{\text{cur}} - F - 1) T^\star_{\text{int}}}^{\infty} V^\star_{\text{RX}}\, C^\star\!\left(d^\star_{\text{ef}}, \tau^\star\right) d\tau^\star, \qquad (5.24)$$

where d⋆²_ef = (d⋆ − v⋆‖τ⋆)² + (v⋆⊥τ⋆)² and, if b_cur − F − 1 ≤ 0, then we do not yet have asymptotic ISI. Eq. (5.24) is also periodic with period T⋆_int. A special case of (5.24) occurs if we have v⋆‖ = v⋆⊥ = 0 and k⋆ = 0. In such an environment, we can subtract (5.12) from the asymptotic expression in (5.9). Specifically, we can write

$$N^\star_{\text{RX}|tx,\text{old}}(t^\star) = \frac{V^\star_{\text{RX}}}{4\pi d^\star}\operatorname{erf}\left(\frac{d^\star}{2\sqrt{t^\star - (b_{\text{cur}} - F - 1) T^\star_{\text{int}}}}\right). \qquad (5.25)$$

Depending on the environmental parameters and whether there is a preference for tractability or accuracy, either (5.23), (5.24), or (5.25) can be used for N⋆_RX|tx,old(t⋆) in (5.22). This asymptotic ISI term is independent of the actual transmitter data sequence W, so it can be pre-computed and used to assist in applications such as evaluating the expected bit error probability of a weighted sum detector.

5.5.2 Application: Weighted Sum Detection

We focus on a single type of detector at the receiver as a detailed example of the application of an asymptotic model of old ISI. We proposed the family of weighted sum detectors in Chapter 4 as detectors that can operate with limited memory and computational requirements. We envision such detectors to be physically practical because they can already be found in biological systems such as neurons; see [58, Ch. 12]. Here, we consider weighted sum detectors where the receiver makes M observations in every bit interval, and we assume that these observations are equally spaced such that the mth observation in the bth interval is made at time t⋆[b,m] = (b + m/M)T⋆_int, where b = {1, 2, ..., B}, m = {1, 2, ..., M}.

The dimensionless decision rule of the weighted sum detector in the bth bit interval is

$$\hat W[b] = \begin{cases} 1 & \text{if } \sum_{m=1}^{M} w_m N^\star_{\text{RX}}(t^\star[b,m]) \geq \xi^\star, \\ 0 & \text{otherwise}, \end{cases} \qquad (5.26)$$

where N⋆_RX(t⋆[b,m]) is the mth observation as given by (5.1), w_m is the weight of the mth observation, and ξ⋆
is the binary decision threshold (we note that we do not need to make the weights dimensionless because they already are). We assume that a constant optimal $\xi^\star$ for the given environment (and for the given formulation of ISI when evaluating the expected performance) is found via numerical search.

Given a particular transmitter sequence $\mathbf{W}$, we recall from (4.14) that the expected probability of error of the $b$th bit, $P_e[b|\mathbf{W}]$, is

\[ P_e[b|\mathbf{W}] = \begin{cases} \Pr\!\left(\sum_{m=1}^{M} w_m N^\star_{RX}(t^\star[b,m]) < \xi^\star\right) & \text{if } W[b] = 1, \\ \Pr\!\left(\sum_{m=1}^{M} w_m N^\star_{RX}(t^\star[b,m]) \ge \xi^\star\right) & \text{if } W[b] = 0. \end{cases} \quad (5.27) \]

In Chapter 4, we approximated the expected error probability of the $b$th bit averaged over all possible transmitter sequences, $P_e[b]$, by averaging (4.14) over a subset of all sequences. An error probability was determined for all $B$ bit intervals of every considered sequence. This analysis can be greatly simplified by evaluating the probability of error of a single bit that is sufficiently far from the start of the sequence, i.e., $b \to \infty$, and then modeling only the most recent $F$ intervals of ISI explicitly while representing all older intervals with $N^\star_{RX|tx,old}(t^\star)$. Furthermore, if the impacts of the external noise sources in (5.1) are represented asymptotically (whether they are interferers or other noise sources), or if there are no external noise sources present, then we only need to evaluate the expected probability of error of the last bit in $2^{F+1}$ sequences.

The evaluation of (5.27) depends on the statistics of the dimensional version of the weighted sum $\sum_{m=1}^{M} w_m N^\star_{RX}(t^\star[b,m])$. For simplicity, we limit our discussion to the special case where the weights are all equal, i.e., $w_m = 1\ \forall m$, such that we can assume that the (dimensional) observations are independent Poisson random variables (we also considered the general case, where we must approximate the observations as Gaussian random variables, in Chapter 4). Then, the sum of observations is also a Poisson random variable. We recall from (4.12) that the CDF of the weighted sum in the $b$th bit interval is then

\[ \Pr\!\left(\sum_{m=1}^{M} N_{RX}(t_b[m]) < \xi\right) = \exp\!\left(-\sum_{m=1}^{M} \overline{N}_{RX}(t_b[m])\right) \sum_{i=0}^{\xi-1} \frac{\left(\sum_{m=1}^{M} \overline{N}_{RX}(t_b[m])\right)^{i}}{i!}, \quad (5.28) \]

where, from (5.1),

\[ \overline{N}_{RX}(t_b[m]) = \overline{N}_{RX|tx}(t_b[m]) + \sum_{u=2}^{U} \overline{N}_{RX|u}, \quad (5.29) \]

and $\overline{N}_{RX|tx}(t)$ and $\overline{N}_{RX|u}$ are the dimensional forms of the number of molecules expected from the intended transmitter and the $u$th noise source, i.e., $\overline{N}^\star_{RX|tx}(t^\star)$ and $\overline{N}^\star_{RX|u}$, respectively, and we emphasize that we represent the noise sources asymptotically. We write (5.28) and (5.29) in dimensional form to emphasize that the observations are discrete. For the corresponding simulations in Section 5.6, we only consider $U = 1$ to focus on the accuracy of the asymptotic approximation of old ISI, and we evaluate the old ISI as given by (5.24) or (5.25) for $k \neq 0$ and $k = 0$, respectively.

5.6 Numerical Results

In this section, we present numerical and simulation results to verify the analysis of noise, multiuser interference, and ISI performed in this chapter. To clearly show the accuracy of all equations derived in this chapter, we simulate only one source at a time, measuring either 1) the impact of a noise source or an interfering transmitter, or 2) the receiver error probability when the intended transmitter is the only molecule source. In Table 5.3, we summarize the system parameters, which are based on System 1 defined in Table 2.1.

Table 5.3: System parameters used for numerical and simulation results.

Parameter                            | Symbol        | Value
Release rate of ideal noise source   | $N_{Gen,n}(t)$ | $1.2 \times 10^6$ molecules/s
Molecules per transmitter emission   | $N_{TX,u}$    | $10^4$
Probability of transmitter binary 1  | $P_1$         | 0.5
Length of transmitter sequence       | $B$           | 100 bits
Transmitter bit interval             | $T_{int,u}$   | 0.2 ms
Diffusion coefficient                | $D$           | $10^{-9}$ m$^2$/s
Radius of receiver                   | $R_{RX}$      | 50 nm
Step size for continuous noise       | $\Delta t^\star$ | 0.1
Step size for transmitters           | $\Delta t$    | 2 µs
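Before turning to the numerical results, the equal-weight decision rule (5.26) and the Poisson evaluation of the error probability in (5.27)-(5.28) can be sketched in a few lines of code. This is a minimal illustration only; the observation counts, weights, and threshold passed in are hypothetical, and this is not the simulation code used for the results below.

```python
import math

def weighted_sum_decision(observations, weights, threshold):
    """Decision rule (5.26): decide bit 1 if the weighted sum of the
    M observations in the bit interval meets the threshold."""
    s = sum(w * n for w, n in zip(weights, observations))
    return 1 if s >= threshold else 0

def poisson_cdf(mean, xi):
    """Pr(X < xi) for a Poisson random variable X with the given mean,
    as in (5.28): exp(-mean) * sum_{i=0}^{xi-1} mean^i / i!."""
    return math.exp(-mean) * sum(mean**i / math.factorial(i) for i in range(xi))

def error_probability(bit, expected_counts, xi):
    """Expected error probability (5.27) for equal weights, where
    expected_counts are the Poisson means of the M observations
    (intended signal plus asymptotic noise and old ISI, as in (5.29))."""
    mean = sum(expected_counts)      # a sum of independent Poissons is Poisson
    p_below = poisson_cdf(mean, xi)  # Pr(weighted sum < xi)
    return p_below if bit == 1 else 1.0 - p_below
```

Because the equal-weight sum of independent Poisson observations is itself Poisson, the two branches of (5.27) are complementary probabilities evaluated from the single CDF in (5.28).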
We use a larger simulation step size $\Delta t$ for simulations with a TX than that defined in Table 2.1 (2 µs instead of 0.5 µs) because we are not making observations at the RX as frequently as in the fastest cases in Chapter 4. When we simulate a noise source, we adjust the step size based on the values of the reference variables.

Most of the results in this section have been non-dimensionalized, with the reference distance $L$ depending on the distance from the source of molecules to the receiver. For reference, conversions between the dimensional variables that were simulated and their values in dimensionless form are listed in Table 5.4.

Table 5.4: Conversion between dimensional and dimensionless variables. The values of $t$, $v$, and $k$ correspond to $t^\star = 1$, $v^\star = 1$, and $k^\star = 1$, respectively.

$d_n$ [nm] | $L$ [nm] | $t$ [µs] | $v$ [mm/s] | $k$ [s$^{-1}$] | $N_{Aref}$
0    | 50   | 2.5  | 20  | $4 \times 10^5$    | 3
50   | 50   | 2.5  | 20  | $4 \times 10^5$    | 3
100  | 100  | 10   | 10  | $1 \times 10^5$    | 12
200  | 200  | 40   | 5   | $2.5 \times 10^4$  | 48
400  | 400  | 160  | 2.5 | $6.25 \times 10^3$ | 192
1000 | 1000 | 1000 | 1   | $10^3$             | 1200

5.6.1 Continuous Noise Source

We first present the time-varying impact of the continuously-emitting noise source that we analyzed in Section 5.3. The times between the release of consecutive molecules from the noise source are simulated as a continuous Poisson process, so that the times between molecule releases are independent. The expected release rate, $1.2 \times 10^6$ molecules/s, is chosen so that, asymptotically, one (dimensional) molecule is expected to be observed at the receiver due to a noise source placed 50 nm from the center of the receiver (this distance is actually at the edge of the receiver, as shown in Table 5.3). To accommodate the range of distances considered, we adjust the simulation time step $\Delta t$ so that 10 steps are made within every $t^\star = 1$ time unit. Simulations are averaged over $10^5$ independent realizations.
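As a sanity check on Table 5.4, the reference conversions follow directly from the standard diffusion scalings $t = (L^2/D)\,t^\star$, $v = (D/L)\,v^\star$, and $k = (D/L^2)\,k^\star$. The short sketch below reproduces the table rows, taking $D$ and the noise release rate from Table 5.3; the function name is our own.

```python
# Reference conversions behind Table 5.4 (a sketch). The scalings are the
# standard diffusion ones: t = (L^2/D) t*, v = (D/L) v*, k = (D/L^2) k*,
# and NAref = L^2 * N_Gen / D converts N*_RX into dimensional form.
D = 1e-9        # diffusion coefficient [m^2/s] (Table 5.3)
N_GEN = 1.2e6   # noise source release rate [molecules/s] (Table 5.3)

def reference_values(L):
    """Dimensional values corresponding to t* = v* = k* = 1 at reference
    distance L [m], plus the molecule-count conversion factor NAref."""
    t = L * L / D               # [s]
    v = D / L                   # [m/s]
    k = D / (L * L)             # [1/s]
    na_ref = L * L * N_GEN / D  # [molecules]
    return t, v, k, na_ref

# L = 50 nm row of Table 5.4: t = 2.5 us, v = 20 mm/s, k = 4e5 1/s, NAref = 3
t, v, k, na_ref = reference_values(50e-9)
```

Note that doubling $L$ quadruples $t$ and $N_{Aref}$ while halving $v$, which is the factor-of-4 increase in $N_{Aref}$ per doubling of distance referred to in the discussion of Fig. 5.1 below.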
The specific equations used for calculating the expected values, both time-varying and asymptotic, were chosen as appropriate from Table 5.2.

In Fig. 5.1, we show the time-varying impact of the noise source when there is no advection and no molecule degradation, i.e., $v^\star_\parallel = v^\star_\perp = 0$ and $k^\star = 0$. Under these conditions, we have the expected time-varying and asymptotic impact in closed form. For every distance shown, the impact approaches the asymptotic value as $t^\star \to 100$, as expected from Remark 5.2. The expected impact without the UCA is highly accurate for all time, and the expected impact with the UCA shows visible deviation only for $t^\star < 1$ when $d_n < 200$ nm, i.e., when the noise source is not far from the receiver. We also observe that the overall impact decreases as the noise source is placed further from the receiver; doubling the distance decreases $N^\star_{RX|n}(t^\star)$ by about a factor of 8, while the corresponding value of $N_{Aref}$, defined as $N_{Aref} = L^2 N_{Gen,n}/D$ and used to convert $N^\star_{RX|n}(t^\star)$ into dimensional form, only increases by a factor of 4 (see Table 5.4). The overall (dimensional) decrease in impact by a factor of 2 is as expected by Remark 5.3.

Figure 5.1: The dimensionless number of noise molecules observed at the receiver as a function of time when $v^\star_\parallel = v^\star_\perp = 0$ and $k^\star = 0$, i.e., when there is no advection or molecule degradation.

In Fig. 5.2, we consider the same environment as in Fig. 5.1, but we set the molecule degradation rate $k^\star = 1$. The accuracy of the expected expressions is comparable to that observed in Fig. 5.1, but here the asymptotic impact is approached about two orders of magnitude faster, as $t^\star \to 2$. The asymptotic impact at any distance is also less than half of that observed in Fig. 5.1 because of the molecule degradation.

In Fig.
5.3, we observe the impact of a noise source at the worst-case location, i.e., $d^\star_n = 0$, and we vary the molecule degradation rate $k^\star$. The expressions for the expected time-varying and asymptotic impact are both highly accurate. We see the general trend that the asymptotic impact decreases (as expected by Remark 5.4) and is reached sooner as $k^\star$ increases. Increasing $k^\star$ also degrades the signal from the desired transmitter, but this can be good for reducing ISI, as we will see in the following subsection. Furthermore, it is interesting that the impact of the noise source can be significantly reduced by increasing the rate of noise molecule degradation, even though the noise molecules are being emitted directly at the receiver. This implies that, if they were not degraded, significantly more noise molecules would have been observed by the receiver before diffusing away.

Figure 5.2: The dimensionless number of noise molecules observed at the receiver as a function of time when $v^\star_\parallel = v^\star_\perp = 0$ and $k^\star = 1$.

Figure 5.3: The dimensionless number of noise molecules observed at the receiver as a function of time when $d^\star_n = 0$, $v^\star_\parallel = v^\star_\perp = 0$, and the molecule degradation rate is $k^\star = \{0, 1, 2, 5, 10, 20, 50\}$.

In Figs. 5.4 and 5.5, we consider the effect of advection on the impact of noise without molecule degradation. For clarity, we observe $d_n = \{0, 100\}$ nm in Fig. 5.4 and $d_n = \{200, 400\}$ nm in Fig. 5.5, and we write $v^\star_{\parallel,n} = v^\star_\parallel$, $v^\star_{\perp,n} = v^\star_\perp$. When $d_n = 0$, only one flow direction is relevant because all flows are equivalent by symmetry. As with molecule degradation, we observe that the presence of advection reduces the time required for the impact of the noise source to become asymptotic, which here occurs by about $t^\star = 4$. Flows that are not in the direction of a line from the noise source to the receiver, i.e., $v^\star_\parallel < 0$ or $v^\star_\perp \neq 0$ (which we termed disruptive flows in Chapter 2), decrease the asymptotic impact of the noise source. However, the flow $v^\star_\parallel = 1$ results in about the same asymptotic impact as the no-flow case when $d_n \neq 0$ nm, which we expect from Remark 5.1, although it might not be an intuitive result.

Figure 5.4: The dimensionless number of noise molecules observed at the receiver as a function of time when $k^\star = 0$, we vary $v^\star_\parallel$ or $v^\star_\perp$, and we consider $d_n = 0$ nm and $d_n = 100$ nm.

Figure 5.5: The dimensionless number of noise molecules observed at the receiver as a function of time when $k^\star = 0$, we vary $v^\star_\parallel$ or $v^\star_\perp$, and we consider $d_n = 200$ nm and $d_n = 400$ nm.

5.6.2 Interference and Intersymbol Interference

We now assess the accuracy of approximating transmitters as continuously-emitting noise sources. First, we observe the impact of an interfering transmitter. Second, we assess the accuracy of evaluating the receiver error probability when we vary the number $F$ of symbols of ISI treated explicitly and approximate all older ISI as an asymptotic noise source. We consider transmitters with a common set of dimensional transmission parameters, as described in Table 5.3.

In Fig.
5.6, we show the time-varying impact on the receiver of a single interferer using binary-encoded impulse modulation, both with and without molecule degradation, for the interferer placed $d_2 = 400$ nm or 1 µm from the receiver (we emphasize that the only active molecule source is not the intended transmitter by using the subscript 2). At both distances, the same bit interval is used ($T_{int,2} = 0.2$ ms). The expected time-varying and asymptotic curves are evaluated using (5.6) and (5.20), respectively. The simulations are averaged over $10^5$ independent realizations, and in Fig. 5.6 we clearly observe oscillations in the simulated values above and below the expected curves. The relative amplitude of these oscillations is much greater when the interferer is closer to the receiver, and also greater when there is molecule degradation; when $d_2 = 400$ nm and $k^\star = 1$, the impact in the asymptotic regime varies from $4 \times 10^{-5}$ to over $6 \times 10^{-4}$, but when $d_2 = 1$ µm and $k^\star = 0$, the relative amplitude of the oscillations is an order of magnitude smaller. Thus, the impact of an interferer that is sufficiently far from the receiver can be accurately approximated with a non-oscillating function, and an interferer does not need to be transmitting for a very long time for us to assume that its impact is asymptotic (8 and 50 bit intervals are shown in Fig. 5.6 for the interferers at 400 nm and 1 µm, respectively; the difference is due to plotting on a dimensionless time axis). We note that the relative amplitude of the oscillations would also decrease if the interferer transmitted with a smaller bit interval.

Figure 5.6: The dimensionless number of interfering molecules observed at the receiver as a function of time for an interfering transmitter placed at $d_2 = 400$ nm and $d_2 = 1$ µm.

In Fig.
5.7, we measure the average bit error probability of the equal-weight detector when $M = 10$ samples are taken per bit interval and the optimal decision threshold is found numerically. The receiver is placed $d = 400$ nm from the transmitter, and we vary $k^\star$ to control the amount of ISI that we expect (since a faster molecule degradation rate means that emitted molecules are less likely to exist sufficiently long to interfere with future transmissions). We do not add any external noise or interference (i.e., there is only one source of information molecules), but we vary the number $F$ of bit intervals that are treated explicitly as ISI, i.e., the complexity of $N^\star_{RX|tx,ISI}(t^\star)$, in evaluating the expected error probability. Simulations are averaged over $10^4$ independent realizations, and we ignore the decisions made within the first 50 of the 100 bits in each sequence in order to approximate the old ISI as asymptotic. The old ISI, $N^\star_{RX|tx,old}(t^\star)$, is found by evaluating (5.24) (or by using (5.25) when $k^\star = 0$), and to emphasize the benefit of including this term we also consider evaluating the expected error probability with $N^\star_{RX|tx,old}(t^\star) = 0$.

Figure 5.7: Receiver error probability as a function of $F$, the number of bit intervals of ISI treated explicitly, for varying molecule degradation rate $k^\star$.

We generally observe in Fig. 5.7 that, as $F$ increases, the expected error probability becomes more accurate because we treat more of the ISI explicitly instead of as asymptotic noise via $N^\star_{RX|tx,old}(t^\star)$. The exception to this is when $k^\star = 0$ and we calculate the expected value using $N^\star_{RX|tx,old}(t^\star)$. The ISI in that case is much greater than when $k^\star > 0$, such that the expected bit error probability is more sensitive to the approximation for $N^\star_{RX|tx,old}(t^\star)$, which assumes that the release of molecules is continuous over the entire bit interval. This approximation means that the expected old ISI is overestimated and a higher expected bit error probability is calculated. When $k^\star > 0$, the expected bit error probability tends to underestimate that observed via simulation because the evaluation of the expected bit error probability assumes that all observed samples are independent, but this assumption loses accuracy for larger $M$. Importantly, the expected bit error probability tends to that observed via simulation much faster when $N^\star_{RX|tx,old}(t^\star)$ is included, even though it is an approximation. For all values of $k^\star$ considered, it is sufficient to consider only 2 or 3 intervals of ISI explicitly while approximating all prior intervals with $N^\star_{RX|tx,old}(t^\star)$. If we use $N^\star_{RX|tx,old}(t^\star) = 0$, as is common in the existing literature, then many more intervals of explicit ISI are needed for comparable accuracy ($F = 20$ is still not sufficient if $k^\star = 0$, although $F = 5$ might be acceptable if $k^\star = 0.2$). Since the computational complexity of evaluating the expected bit error probability increases exponentially with $F$ (because we need to evaluate the expected probability of error due to all $2^{F+1}$ bit sequences), approximating old ISI with $N^\star_{RX|tx,old}(t^\star)$ provides an effective means with which to reduce the complexity without making a significant sacrifice in accuracy.

5.7 Conclusion

In this chapter, we proposed a unifying model to account for the observation of unintended molecules by a passive receiver in a diffusive MC system, where the unintended molecules include those emitted by the intended transmitter in previous bit intervals, those emitted by interfering transmitters, and those emitted by other external noise sources that are continuously emitting molecules.
We presented the general time-varying expression for the expected impact of a noise source that is emitting continuously, and then we considered a series of special cases that facilitate time-varying or asymptotic solutions. Knowing the expected impact of noise sources enables us to find the effect of those sources on the bit error probability of a communication link. We used the analysis for asymptotic noise to approximate the impact of an interfering transmitter, which we extended to the general case of multiuser interference. Finally, we decomposed the signal received from the intended transmitter so that we could approximate old ISI as asymptotic interference. We showed how this approximation could be used to simplify the evaluation of the expected bit error probability of a weighted sum detector.

Our simulation results showed the high accuracy of our expressions for time-varying and asymptotic noise. We showed that an interfering transmitter placed sufficiently far from the receiver can be approximated as an asymptotic noise source soon after it begins transmitting, and that approximating old ISI as asymptotic noise is an effective method to reduce the computational complexity of evaluating the expected probability of error.

Chapter 6

Conclusions and Topics for Future Research

In this chapter, we conclude the dissertation and identify areas for future work. We review our findings and draw overall conclusions in Section 6.1. Comments on the direction of this field, including topics for future related research, are discussed in Section 6.2.

6.1 Conclusions

In this section, we review the main results from each chapter and then summarize the conclusions of the dissertation.

• Chapter 2 – Channel Model and Impulse Response: In Chapter 2, we defined a system model for 3-dimensional diffusive molecular communication (MC) with steady uniform flow and two methods for molecule degradation.
We derived the expected channel impulse response in dimensional and dimensionless forms, where the dimensionless form is scalable to any arbitrary dimension where the underlying assumptions are still valid. We repeated the derivation of the impulse response where we assumed that the concentration of molecules throughout the receiver is uniform and equal to that expected at the center of the receiver, i.e., the uniform concentration assumption (UCA). We used the expected channel impulse response to write the cumulative signal at the receiver due to all molecule sources, including interfering transmitters. We showed that the underlying statistics of the observed channel impulse response follow the Binomial distribution, and that they can be approximated with the Poisson or Gaussian distributions. We derived the mutual information between consecutive observations at the receiver as a measure of the independence of observations. We introduced a microscopic simulation framework that accounts for all of the phenomena in our reaction-diffusion model. The corresponding simulation results verified the expected channel impulse response, showed that the Poisson approximation of the channel statistics is more accurate than the Gaussian approximation (for the parameter values that we considered), assessed the validity of the UCA, and showed that the independence of consecutive observations grows as the time between them increases.

• Chapter 3 – Joint Channel Parameter Estimation: In Chapter 3, we studied the joint estimation of the underlying channel parameters of our diffusive MC model. We estimated the distance between the receiver and the transmitter, the coefficient of diffusion, the components of the flow vector, the molecule degradation rate, the number of molecules released by the transmitter, and the time when those molecules were released.
The joint estimation problem readily simplified to the estimation of any subset of the channel parameters. We used the Fisher Information Matrix (FIM) to derive the Cramer-Rao lower bound (CRLB) on the variance of estimation error of any locally unbiased estimator, where the receiver makes independent observations of an impulse of molecules released by the transmitter. We compared the CRLB with maximum likelihood (ML) estimation, which simplified to closed form for the estimation of some individual parameters. Simulations showed that, as more observations were taken, ML estimation was unbiased and approached the CRLB. If the FIM was singular or in the vicinity of a singularity, ML estimation was biased and could be better than the CRLB. We also proposed peak-based estimation as low-complexity estimation of any single channel parameter. Simulations of peak-based estimators showed that they were generally comparable to the CRLB for one observation made at the time when the largest observation is expected, and much worse than the CRLB for a large number of samples.

• Chapter 4 – Optimal and Suboptimal Receiver Design: In Chapter 4, we studied receiver design for our diffusive MC model when the transmitter encodes a binary sequence with impulsive ON/OFF keying. First, we proposed optimal sequence detection in an ML sense, which we implemented with a modified Viterbi algorithm that limits the number of explicit channel memory states but accounts for all existing intersymbol interference (ISI). Next, we introduced weighted sum detectors, which allow the receiver to assign greater importance to molecules observed in particular samples during the symbol interval. Weighted sum detectors use a threshold for making symbol-by-symbol decisions and are generally suboptimal but more physically realizable than the optimal sequence detector.
We derived the expected bit error probability of a weighted sum detector for every bit in a given transmitter sequence, which we could then average over all of the bits in the sequence and over all possible sequences. We showed that, in the absence of ISI, a weighted sum detector with matched filter weights has an expected bit error probability equal to that of the optimal detector. Other simulation results showed how flow and molecule degradation could improve receiver performance without the need for an optimal detector, how external noise sources degrade receiver performance, and how performance depends on the number of observations made in a given bit interval.

• Chapter 5 – A Unifying Model for External Noise Sources and Intersymbol Interference: In Chapter 5, we revisited our diffusive MC system model to derive the time-varying and asymptotic number of molecules expected at the receiver due to a noise source that releases molecules continuously. Closed-form expressions were available when we applied the UCA and under a number of other special cases. We applied asymptotic noise as an approximation for deriving the number of molecules expected due to ISI or multiuser interference (MUI) when the transmitters use impulsive ON/OFF keying. Simulation results verified the time-varying and asymptotic expressions for the number of molecules observed due to a continuous noise source, showed how MUI can be accurately modeled as random noise if the interferer is sufficiently far from the receiver, and showed how asymptotic noise can approximate old ISI when evaluating the expected bit error probability of the weighted sum detector.

We summarize the conclusions of this dissertation as follows:

• Channel Modeling: We claim to have introduced the most detailed diffusive MC model for which the time-domain channel impulse response is available in closed form.
Having an analytical closed-form expression enabled our communications analysis by making it possible to derive expressions for parameter estimation, detector performance, and the impact of noise sources. Our study was end-to-end in that we determined the relevant underlying physical phenomena, determined the channel impulse response, assessed whether a receiver could estimate the underlying channel parameters, studied the performance of the receiver given that the channel parameters are known, and accounted for the presence of other molecule sources. Our simulations incorporated all of the details and assumptions of the underlying system model, so the simulation results are an accurate reflection of the behavior of the model.

• Knowledge of Channel Parameters: We demonstrated that having precise knowledge of the channel parameters is a non-trivial assumption for nanoscale devices. We established the bounds on classical parameter estimation from a single impulse of molecules from the transmitter. Simple parameter estimation schemes can result in a very large error variance that must be accounted for when relying on the parameter values.

• Weighted Sum Detection: Weighted sum detection is a promising approach for receiver design, given its biological motivation and its reduced computational complexity. Even when using the simplest distribution of weights (i.e., equal weights), low bit error probabilities are achievable if a large number of samples can be taken.
Environmental phenomena such as flow and molecule degradation could result in significantly improved performance, which in some cases was comparable to that of the optimal sequence detector.

• Feasibility of Diffusive Molecular Communication: Diffusive MC is a promising natural communication strategy for adaptation in synthetic networks. The fundamental limits of communication via diffusion can accommodate the reliable transfer of much more data than the simple signaling messages that are used in cellular systems. Ultimately, there is a tradeoff between the simplicity of the individual devices and the communications performance, and understanding this tradeoff will help system designers in determining the communications potential of a nanonetwork.

6.2 Future Work

In this section, we organize our discussion of future work into two areas. First, we comment on the future of molecular communication from a broader perspective than that of this dissertation, i.e., generally, what problems should be considered? Then, in consideration of these problems, we present topics that are direct extensions of the contributions of this dissertation.

6.2.1 Directions of MC Research

In Chapter 1, we motivated our goal to seek an understanding of the fundamental limits of MC to transmit arbitrarily large amounts of information. This dissertation has focused on the design of devices with limited computational complexity that share information via diffusion. Ultimately, experimental verification is required for this or any other MC model. Existing work has provided interesting demonstrations of synthetic networks based on the transmission of molecules, including the tabletop flow experiments in [42,44], the excitation of calcium signaling in [131], the connection of nanotubes between mammalian cells in [132], and the observation of fluorescence from bacterial colonies in a microfluidic environment in [133]. However, these experiments
However, these experimentshave not verified the analytical models that have drawn the most attention in MCresearch. The experimental verification of diffusive MC models is essential for theprogression of the state-of-the-art towards practical implementation. We see this astwo complementary problems, which we define as follows:1. Improving Models: The existing diffusive MC models can be extended to184Chapter 6. Conclusions and Topics for Future Researchincorporate more realistic physical phenomena, so that they can be readilycompared with experimental results. Examples of extensions to the model inthis dissertation are discussed in the next subsection. It is quite possible thatcertain extensions may not facilitate closed-form analysis, and this would limitour ability to draw substantive conclusions. However, numerical evaluation orsimulation results would still be meaningful for comparison with experimentaldata.2. Obtaining Experimental Data: The ability to verify a model from experi-mental data means that we must have experimental data to make the compar-ison. Such data could be obtained in one of two ways. First, we could identifyexisting experimental data that was obtained from an environment that is suf-ficiently similar to that described by the model. This might not always bepossible, even for realistic environments. For example, the reaction-diffusionsystem in the neuromuscular junction has been a common environment for thedevelopment of physical models (see [101103]), but this is due in part to thepractical challenges of obtaining experimental observations from this environ-ment. Strategies to understand the neuromuscular junction might involve ananimal study with a combination of calcium imaging in the presynaptic cleft,electrophysiology to verify that the postsynaptic ion channels were opened, andfreeze fracture electron microscopy; see [58, Ch. 12] and [3, Ch. 1]. 
Alternatively to using existing experimental data, we could design, create, and observe a controlled environment. One example is a microfluidic platform where we could visualize the release and diffusion of fluorescent molecules.

It is clear that the two aforementioned problems are related, and both need to be addressed to bridge the gap between existing analytical efforts in MC and experimental verification. The second problem also draws attention to another potential application of MC research, which is to improve the understanding of natural MC systems, rather than focusing solely on the design of synthetic networks. For example, progress in this field might one day be applied to help understand environments where experimental verification is difficult (such as in the neuromuscular junction), or to provide guidance by predicting experimental results.

6.2.2 Modeling and Analysis

There are many interesting problems still to be considered in the modeling and analysis of diffusive MC systems. Primarily, we can expand the system model to incorporate more realistic physical phenomena, we can continue to gain inspiration from biological mechanisms for adaptation in network design, and we can continue to apply communications theory to assess the feasibility of diffusive MC. We identify a selection of interesting topics for future study as follows:

1. Simulation Methods: A custom microscopic simulation framework was developed for the simulations in this dissertation. One drawback of this approach is that computational efficiency becomes an obstacle when trying to consider larger environments or those with a large number of chemical reactions.
Even though we can extrapolate the expected channel impulse response to arbitrarily larger environments with our dimensional analysis, this extrapolation does not readily apply to the results that are based on the channel statistics (i.e., the performance of estimation or detection at the receiver). Larger environments would be easier to consider experimentally, so we are interested in improving the scalability of the simulations. An alternate approach to microscopic simulation is to use a mesoscopic method, where the propagation environment is partitioned into containers called subvolumes. If the molecular concentrations in each subvolume are homogeneous, then a mesoscopic simulation can accurately capture the behavior of the system; see [134]. A microscopic model has better spatial accuracy, but the advantages of a mesoscopic model include easier implementation of complex chemical reactions and better computational efficiency as the system dimensions grow. We are interested in developing a simulation framework based on a hybrid scheme that combines the microscopic and mesoscopic models, based on strategies proposed in [135]-[137]. We emphasize that we are ultimately interested in the accuracy of the statistics at the receiver; throughout this dissertation, we simulated the systems many thousands of times to generate the statistics. The behavior of the total system is not as important, so we are motivated to improve computational efficiency in regions that are not critical to the receiver statistics, i.e., regions that are sufficiently far from the communication link. Our early progress in this area has been reported in [69, 138].

2. Parameter and Channel Estimation: There are a number of problems that extend the work on parameter estimation performed in Chapter 3 but which have not yet been considered in the diffusive MC literature:

- Bayesian approaches to estimating the channel parameters can be explored, where we assume that the value of the unknown parameter is sampled from a known distribution; see [121, Ch. 10]. Generally, such approaches will be more accurate if the chosen distribution is accurate.

- We can consider estimating the channel impulse response itself, instead of the underlying channel parameters. For some applications, such as the design of the optimal detector or determining the matched filter weights of the weighted sum detector, we only need to determine the expected channel impulse response. In these scenarios, we do not even need the existence of a closed-form expression for the channel impulse response.

- We can study the impact of imperfect channel or parameter estimation on the performance of detectors that need this information. For example, such an analysis might show that applying equal weights is better than matched filter weights for the weighted sum detector. Similarly, we can assess the impact of imperfect information on the accuracy of predicting detector performance.

- There has been limited work thus far in proposing protocols to estimate the channel parameters. We are ultimately interested in finding low-complexity estimators without compromising too much on estimation accuracy. One possible approach is to not use the peak number of molecules observed, but to observe the signal's decay from the peak value. This approach could take advantage of the long tail of diffusion by collecting more samples to improve estimation accuracy. Another approach is to combine the information obtained by observing multiple impulses from the transmitter.
Finally, a numerical solution of the ML estimate using the expectation-maximization (EM) algorithm might provide insight into a practical estimator, if an appropriate decomposition of the signal into complete data can be found; see [121, Ch. 7].

3. Active Reception Techniques: This dissertation considered a receiver that is a passive observer. More realistically, information molecules could be detected at the receiver by binding to receptors on the surface of or inside the receiver. A system model that incorporates binding at the receiver would alter the system's boundary conditions and require different analysis to derive the channel impulse response. For example, a derivation of the channel impulse response from a point TX to an absorbing spherical RX was recently introduced to the MC literature in [33]. More generally, reaction mechanisms such as Michaelis-Menten kinetics or ligand-receptor binding could be considered. Ligand-receptor binding is similar to Michaelis-Menten kinetics but does not have a degradation reaction, and it was considered in deterministic form for the receiver in a diffusive MC environment in [105]. A general model that accommodates any reaction scheme at the receiver was studied in [32], where the channel impulse response can be found by numerically performing an inverse Laplace transform. It would be of great interest to integrate receiver reactions with the propagation environment studied in this dissertation.

4. Environment Flow: In this dissertation, we assumed that flow was always steady and uniform, which is analytically the simplest flow model but may not be sufficient to represent flow in biological environments. For example, the rate of blood flow generally varies over both time and space, such as in oscillating laminar flow; see [58, Ch. 5]. Laminar flow has been studied in the context of molecular communication in [76], [97]-[99].
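Any such extension will ultimately be compared against the steady uniform flow baseline, which is itself simple to evaluate numerically. The sketch below is a simplification that assumes the RX is small enough for the concentration throughout it to be approximated by the concentration at its center; the function name and all numerical values are illustrative assumptions, not values from this dissertation:

```python
import math

def expected_observations(t, N_tx=1e4, D=1e-9, k=0.0, d=1e-6,
                          v_par=0.0, v_perp=0.0, R_rx=1e-7):
    """Expected number of molecules inside a passive spherical RX at time t.

    A point TX at distance d (m) from the RX center releases N_tx molecules
    at t = 0. The steady uniform flow has components v_par and v_perp (m/s)
    parallel and perpendicular to the TX-RX axis, k (1/s) is a first-order
    degradation rate, and D (m^2/s) is the diffusion coefficient. Assumes
    the concentration is approximately uniform inside the (small) RX.
    """
    # Steady uniform flow only displaces the center of the expanding
    # Gaussian cloud of molecules, so it enters via an effective distance.
    r_eff_sq = (d - v_par * t) ** 2 + (v_perp * t) ** 2
    conc = (N_tx / (4.0 * math.pi * D * t) ** 1.5
            * math.exp(-k * t - r_eff_sq / (4.0 * D * t)))
    return (4.0 / 3.0) * math.pi * R_rx ** 3 * conc
```

With no flow and no degradation, maximizing the exponent shows the expected signal peaks at t = d^2/(6D), the kind of closed-form relationship that low-complexity peak-based estimators can exploit; a time-varying laminar flow profile would make the displacement, and hence the peak, time-dependent.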
The inclusion of laminar flow with a focus on practical receiver design has not yet been considered but would be an interesting problem.

Bibliography

[1] J. G. Proakis, Digital Communications, 4th ed. Boston: McGraw-Hill, 2001.
[2] W. K. Purves, D. Sadava, G. H. Orians, and H. C. Heller, Life: The Science of Biology, 6th ed. Sinauer Associates, 2001.
[3] B. Alberts, D. Bray, K. Hopkin, A. Johnson, J. Lewis, M. Raff, K. Roberts, and P. Walter, Essential Cell Biology, 3rd ed. Garland Science, 2010.
[4] I. F. Akyildiz, F. Brunetti, and C. Blazquez, Nanonetworks: A new communication paradigm, Computer Networks, vol. 52, no. 12, pp. 2260-2279, May 2008.
[5] T. Nakano, M. J. Moore, F. Wei, A. V. Vasilakos, and J. Shuai, Molecular communication and networking: Opportunities and challenges, IEEE Trans. Nanobiosci., vol. 11, no. 2, pp. 135-148, Jun. 2012.
[6] G. A. Truskey, F. Yuan, and D. F. Katz, Transport Phenomena in Biological Systems, 2nd ed. Pearson Prentice Hall, 2009.
[7] L. C. Antunes and R. B. Ferreira, Intercellular communication in bacteria, Critical Reviews in Microbiology, vol. 35, no. 2, pp. 69-80, 2009.
[8] T. Nakano, A. Eckford, and T. Haraguchi, Molecular Communication. Cambridge University Press, 2013.
[9] S. Hiyama, Y. Moritani, T. Suda, R. Egashira, A. Enomoto, M. Moore, and T. Nakano, Molecular communication, in Proc. 2005 NSTI Nanotech, May 2005, pp. 391-394.
[10] M. Pierobon and I. F. Akyildiz, A physical end-to-end model for molecular communication in nanonetworks, IEEE J. Sel. Areas Commun., vol. 28, no. 4, pp. 602-611, May 2010.
[11] M. Pierobon and I. F. Akyildiz, Diffusion-based noise analysis for molecular communication in nanonetworks, IEEE Trans. Signal Process., vol. 59, no. 6, pp. 2532-2547, Jun. 2011.
[12] M. Pierobon and I. F. Akyildiz, Noise analysis in ligand-binding reception for molecular communication in nanonetworks, IEEE Trans. Signal Process., vol. 59, no. 9, pp. 4168-4182, Sep. 2011.
[13] S. Kadloor and R. Adve, A framework to study the molecular communication system, in Proc. IEEE ICCCN, Aug. 2009, pp.
16.[14] B. Atakan and O. B. Akan, Deterministic capacity of information flow in molec-ular nanonetworks, Nano Commun. Net., vol. 1, no. 1, pp. 3142, Mar. 2010.[15] D. Miorandi, A stochastic model for molecular communications, Nano Com-mun. Net., vol. 2, no. 4, pp. 205212, Dec. 2011.[16] M. S. Kuran, H. B. Yilmaz, T. Tugcu, and B. Ozerman, Energy model forcommunication via diffusion in nanonetworks, Nano Commun. Net., vol. 1,no. 2, pp. 8695, Jun. 2010.[17] S. Kadloor, R. R. Adve, and A. W. Eckford, Molecular communication usingBrownian motion with drift, IEEE Trans. Nanobiosci., vol. 11, no. 2, pp. 8999, Jun. 2012.[18] K. V. Srinivas, A. W. Eckford, and R. S. Adve, Molecular communication influid media: The additive inverse Gaussian noise channel, IEEE Trans. Inf.Theory, vol. 58, no. 7, pp. 46784692, Jul. 2012.[19] H. ShahMohammadian, G. G. Messier, and S. Magierowski, Optimum receiverfor molecule shift keying modulation in diffusion-based molecular communica-tion channels, Nano Commun. Net., vol. 3, no. 3, pp. 183195, Sep. 2012.[20] M. Pierobon and I. F. Akyildiz, Capacity of a diffusion-based molecular com-munication system with channel memory and molecular noise, IEEE Trans.Inf. Theory, vol. 59, no. 2, pp. 942954, Feb. 2013.[21] C. T. Chou, Extended master equation models for molecular communicationnetworks, IEEE Trans. Nanobiosci,, vol. 12, no. 2, pp. 7992, Jun. 2013.[22] H. ShahMohammadian, G. G. Messier, and S. Magierowski, Nano-machinemolecular communication over a moving propagation medium, Nano Commun.Net., vol. 4, no. 3, pp. 142153, Sep. 2013.[23] H. J. Chiu, L. S. Meng, P. C. Yeh, and C. H. Lee, Near-optimal low complexityreceiver design for diffusion-based molecular communication, in Proc. IEEEGLOBECOM, Dec. 2013, pp. 33723377.[24] D. Kilinc and O. B. Akan, Receiver design for molecular communication,IEEE J. Sel. Areas Commun., vol. 31, no. 12, pp. 705714, Dec. 2013.[25] N.-R. Kim and C.-B. 
Chae, Novel modulation techniques using isomers asmessenger molecules for nano communication networks via diffusion, IEEE J.Sel. Areas Commun., vol. 31, no. 12, pp. 847856, Dec. 2013.191Bibliography[26] T. Nakano, Y. Okaie, and J.-Q. Liu, Channel model and capacity analysisof molecular communication with brownian motion, IEEE Commun. Lett.,vol. 16, no. 6, pp. 797800, Jun. 2012.[27] M. J. Moore, Y. Okaie, and T. Nakano, Diffusion-based multiple access bynano-transmitters to a micro-receiver, IEEE Commun. Lett., vol. 18, no. 3,pp. 385388, Mar. 2014.[28] M. H. Kabir and K. S. Kwak, Effect of memory on BER in molecular commu-nication, Electron. Lett., vol. 50, no. 2, pp. 7172, Jan. 2014.[29] S. Abadal, I. Llatser, E. Alarcon, and A. Cabellos-Aparicio, Cooperative sig-nal amplification for molecular communication in nanonetworks, Wireless Net-works, vol. 20, no. 6, pp. 116, Feb. 2014.[30] H. B. Yilmaz, N. R. Kim, and C. B. Chae, Effect of ISI mitigation on mod-ulation techniques in molecular communication via diffusion, in Proc. ACMNANOCOM, May 2014, pp. 19.[31] A. C. Heren, F. N. Kilicli, G. Genc, and T. Tugcu, Effect of messenger moleculedecomposition in communication via diffusion, in Proc. ACM NANOCOM,May 2014, pp. 15.[32] C. T. Chou, Molecular communication networks with general molecular circuitreceivers, in Proc. ACM NANOCOM, May 2014, pp. 19.[33] H. B. Yilmaz, A. C. Heren, T. Tugcu, and C.-B. Chae, Three-dimensional chan-nel characteristics for molecular communications with an absorbing receiver,IEEE Commun. Lett., vol. 18, no. 6, pp. 929932, Jun. 2014.[34] M. Pierobon and I. F. Akyildiz, A statistical-physical model of interferencein diffusion-based molecular nanonetworks, IEEE Trans. Commun., vol. 62,no. 6, pp. 20852095, Jun. 2014.[35] H. B. Yilmaz and C.-B. Chae, Simulation study of molecular communicationsystems with an absorbing receiver: Modulation and ISI mitigation techniques,Simulat. Modell. Pract. Theory, vol. 49, pp. 136150, Dec. 2014.[36] N.-R. 
Kim, A. W. Eckford, and C.-B. Chae, Symbol interval optimization formolecular communication with drift, IEEE Trans. Nanobiosci., vol. 13, no. 3,pp. 223229, Sep. 2014.[37] A. Singhal, R. K. Mallik, and B. Lall, Effect of molecular noise in diffusion-based molecular communication, IEEE Wireless Commun. Lett., vol. 3, no. 5,pp. 489492, Oct. 2014.192Bibliography[38] L.-S. Meng, P.-C. Yeh, K.-C. Chen, and I. F. Akyildiz, On receiver designfor diffusion-based molecular communication, IEEE Trans. Signal Process.,vol. 62, no. 22, pp. 60326044, Nov. 2014.[39] R. Mosayebi, H. Arjmandi, A. Gohari, M. Nasiri-Kenari, and U. Mitra, Re-ceivers for diffusion-based molecular communication: Exploiting memory andsampling rate, IEEE J. Sel. Areas Commun., vol. 32, no. 12, pp. 23682380,Dec. 2014.[40] A. Akkaya, H. B. Yilmaz, C.-B. Chae, and T. Tugcu, Effect of receptor densityand size on signal reception in molecular communication via diffusion with anabsorbing receiver, IEEE Commun. Lett., vol. 19, no. 2, pp. 155158, Feb.2015.[41] M. U. Mahfuz, D. Makrakis, and H. T. Mouftah, A comprehensive analysis ofstrength-based optimum signal detection in concentration-encoded molecularcommunication with spike transmission, IEEE Trans. Nanobiosci., vol. 14,no. 1, pp. 6682, Jan. 2015.[42] N. Farsad, W. Guo, and A. W. Eckford, Tabletop molecular communication:text messages through chemical signals, PLoS ONE, vol. 8, no. 12, p. e82935,Dec. 2013.[43] C. T. Chou, Impact of receiver reaction mechanisms on the performance ofmolecular communication networks, IEEE Trans. Nanotechnol., vol. 14, no. 2,pp. 304317, Mar. 2015.[44] N. Farsad, N.-R. Kim, A. W. Eckford, and C.-B. Chae, Channel and noisemodels for nonlinear molecular communication systems, IEEE J. Sel. AreasCommun., vol. 32, no. 12, pp. 23922401, Dec. 2014.[45] M. S. Kuran, H. B. Yilmaz, T. Tugcu, and I. F. Akyildiz, Modulation tech-niques for communication via diffusion in nanonetworks, in Proc. IEEE ICC,Jun. 2011, pp. 15.[46] A. 
Einolghozati, M. Sardari, and F. Fekri, Capacity of diffusion-based molecu-lar communication with ligand receptors, in Proc. IEEE ITW, Oct. 2011, pp.8589.[47] A. W. Eckford, K. V. Srinivas, and R. S. Adve, The peak constrained additiveinverse Gaussian noise channel, in Proc. IEEE ISIT, Jul. 2012, pp. 29732977.[48] H. Li, S. M. Moser, and D. Guo, Capacity of the memoryless additive inversegaussian noise channel, IEEE J. Sel. Areas Commun., vol. 32, no. 12, pp.23152329, Dec. 2014.193Bibliography[49] A. Einolghozati, M. Sardari, A. Beirami, and F. Fekri, Capacity of discretemolecular diffusion channels, in Proc. IEEE ISIT, Aug. 2011, pp. 723727.[50] T. Nakano, Y. Okaie, and A. V. Vasilakos, Throughput and efficiency of molec-ular communication between nanomachines, in Proc. IEEE WCNC, Apr. 2012,pp. 704708.[51] L.-S. Meng, P.-C. Yeh, K.-C. Chen, and I. F. Akyildiz, MIMO communicationsbased on molecular diffusion, in Proc. IEEE GLOBECOM, Dec. 2012, pp.56025607.[52] M. U. Mahfuz, D. Makrakis, and H. T. Mouftah, A comprehensive studyof concentration-encoded unicast molecular communication with binary pulsetransmission, in Proc. IEEE NANO, Aug. 2011, pp. 227232.[53] B. Atakan, S. Galmes, and O. B. Akan, Nanoscale communication with molec-ular arrays in nanonetworks, IEEE Trans. Nanobiosci., vol. 11, no. 2, pp.149160, Jun. 2012.[54] M. S. Leeson and M. D. Higgins, Forward error correction for molecular com-munications, Nano Commun. Net., vol. 3, no. 3, pp. 161167, Sep. 2012.[55] M. Pierobon and I. F. Akyildiz, Intersymbol and co-channel interference indiffusion-based molecular communication, in Proc. IEEE ICC MONACOM,Jun. 2012, pp. 61266131.[56] G. Genc, H. B. Yilmaz, and T. Tugcu, Reception enhancement with protru-sions in communication via diffusion, in Proc. IEEE BlackSeaCom, Jul. 2013,pp. 8993.[57] A. Aijaz and A.-H. Aghvami, Error performance of diffusion-based molecu-lar communication using pulse-based modulation, IEEE Trans. Nanobiosci.,vol. 14, no. 1, pp. 
145150, Jan. 2015.[58] P. Nelson, Biological Physics: Energy, Information, Life, updated 1st ed. W.H. Freeman and Company, 2008.[59] R. Chang, Physical Chemistry for the Biosciences. University Science Books,2005.[60] M. J. Moore, T. Suda, and K. Oiwa, Molecular communication: Modelingnoise effects on information rate, IEEE Trans. Nanobiosci., vol. 8, no. 2, pp.169180, Jun. 2009.[61] M. S. Kuran, H. B. Yilmaz, and T. Tugcu, A tunnel-based approach for signalshaping in molecular communication, in Proc. IEEE ICC MONACOM, Jun.2013, pp. 776781.194Bibliography[62] M. S. Kuran and T. Tugcu, Co-channel interference for communication viadiffusion system in molecular communication, in Proc. ICST BIONETICS,Dec. 2011, pp. 199212.[63] C. Jiang, Y. Chen, and K. J. R. Liu, Inter-user interference in molecular com-munication networks, in Proc. IEEE ICASSP, May 2014, pp. 57255729.[64] H. ShahMohammadian, G. Messier, and S. Magierowski, Blind synchronizationin diffusion-based molecular communication channels, IEEE Commun. Lett.,vol. 17, no. 11, pp. 21562159, Nov. 2013.[65] S. Ross, Introduction to Probability and Statistics for Engineers and Scientists,4th ed. Academic Press, 2009.[66] M. J. Moore and T. Nakano, Multiplexing over molecular communication chan-nels from nanomachines to a micro-scale sensor device, in Proc. IEEE GLOBE-COM, Dec. 2012, pp. 45184523.[67] C. T. Chou, Noise properties of linear molecular communication networks,Nano Commun. Net., vol. 4, no. 3, pp. 8797, Sep. 2013.[68] H. B. Yilmaz and C.-B. Chae, Arrival modelling for molecular communicationvia diffusion, Electron. Lett., vol. 50, no. 23, pp. 16671669, Nov. 2014.[69] A. Noel, K. C. Cheung, and R. Schober, On the statistics of reaction-diffusionsimulations for molecular communication, in Proc. ACM NANOCOM, Sept.2015, pp. 16. [Online]. Available: http://arxiv.org/abs/1505.05080[70] C.-H. Ho, White blood cell and platelet counts could affect whole blood vis-cosity, J. Chin. Med. Assoc., vol. 67, no. 8, pp. 
394397, Aug. 2004.[71] M. J. Moore, T. Nakano, A. Enomoto, and T. Suda, Measuring distance withmolecular communication feedback protocols, in Proc. ICST BIONETICS,Dec. 2010, pp. 113.[72] , Measuring distance from single spike feedback signals in molecular com-munication, IEEE Trans. Signal Process., vol. 60, no. 7, pp. 35763587, Jul.2012.[73] J. T. Huang, H. Y. Lai, Y. C. Lee, C. H. Lee, and P. C. Yeh, Distance es-timation in concentration-based molecular communications, in Proc. IEEEGLOBECOM, Dec. 2013, pp. 25872597.[74] M. J. Moore and T. Nakano, Comparing transmission, propagation, and re-ceiving options for nanomachines to measure distance by molecular communi-cation, in Proc. IEEE ICC, Jun. 2012, pp. 61326136.195Bibliography[75] , Oscillation and synchronization of molecular machines by the diffusionof inhibitory molecules, IEEE Trans. Nanotechnol., vol. 12, no. 4, pp. 601608,Jul. 2013.[76] L. Felicetti, M. Femminella, G. Reali, and P. Lio, A molecular communicationsystem in blood vessels for tumor detection, in Proc. ACM NANOCOM, May2014, pp. 19.[77] S. Qiu, W. Guo, and S. Wang, Experimental Nakagami distributed noise modelfor molecular communication channels with no drift, Electron. Lett., vol. 51,no. 8, pp. 611613, Apr. 2015.[78] M. U. Mahfuz, D. Makrakis, and H. T. Mouftah, A comprehensive study ofsampling-based optimum signal detection in concentration-encoded molecularcommunication, IEEE Trans. Nanobiosci., vol. 13, no. 3, pp. 208222, Sept.2014.[79] J. Crank, The Mathematics of Diffusion, 2nd ed. Oxford University Press,1980.[80] E. L. Cussler, Diffusion: Mass transfer in fluid systems. Cambridge UniversityPress, 1984.[81] M. U. Mahfuz, D. Makrakis, and H. T. Mouftah, An investigative analysison concentration-encoded subdiffusive molecular communication in nanonet-works, in Proc. IEEE NANO, Aug. 2014, pp. 244249.[82] I. Llatser, D. Demiray, A. Cabellos-Aparicio, D. T. Altilar, and E. 
Alarcon,N3Sim: Simulation framework for diffusion-based molecular communicationnanonetworks, Simulat. Modell. Pract. Theory, vol. 42, pp. 210222, Mar.2014.[83] T. Nakano and J.-Q. Liu, Design and analysis of molecular relay channels: Aninformation theoretic approach, IEEE Trans. Nanobiosci., vol. 9, no. 3, pp.213221, Sep. 2010.[84] B. D. Unluturk, D. Malak, and O. B. Akan, Rate-delay tradeoff with networkcoding in molecular nanonetworks, IEEE Trans. Nanotechnol., vol. 12, no. 2,pp. 120128, Mar. 2013.[85] A. Einolghozati, M. Sardari, and F. Fekri, Relaying in diffusion-based molec-ular communication, in Proc. IEEE ISIT, Jul. 2013, pp. 18441848.[86] Z. P. Li, J. Zhang, and T. C. Zhang, Concentration aware routing protocolin molecular communication nanonetworks, in Proc. ICMECT, Apr. 2014, pp.50245027.196Bibliography[87] A. Einolghozati, M. Sardari, and F. Fekri, Decode and forward relaying indiffusion-based molecular communication between two populations of biologicalagents, in Proc. IEEE ICC, Jun. 2014, pp. 39753980.[88] A. Ahmadzadeh, A. Noel, and R. Schober, Analysis and design of two-hopdiffusion-based molecular communication networks, in Proc. IEEE GLOBE-COM, Dec. 2014, pp. 28202825.[89] M. J. Moore, A. Enomoto, S. Watanabe, K. Oiwa, and T. Suda, Simulatingmolecular motor uni-cast information rate for molecular communication, inProc. IEEE CISS, Mar. 2009, pp. 859864.[90] Y. Okaie, T. Nakano, M. Moore, and J. Q. Liu, Information transmissionthrough a multiple access molecular communication channel, in Proc. IEEEICC, Jun. 2013, pp. 15.[91] W.-A. Lin, Y.-C. Lee, P.-C. Yeh, and C.-H. Lee, Signal detection and ISI can-cellation for quantity-based amplitude modulation in diffusion-based molecularcommunications, in Proc. IEEE GLOBECOM, Dec. 2012, pp. 45784583.[92] M. U. Mahfuz, D. Makrakis, and H. T. Mouftah, Strength-based optimum sig-nal detection in concentration-encoded pulse-transmitted OOK molecular com-munication with stochastic ligand-receptor binding, Simulat. 
Modell. Pract.Theory, vol. 42, pp. 189209, Mar. 2014.[93] R. B. Bird, W. E. Stewart, and E. N. Lightfoot, Transport Phenomena, 2nd ed.John Wiley & Sons, 2002.[94] P.-J. Shih, C.-H. Lee, and P.-C. Yeh, Channel codes for mitigating intersym-bol interference in diffusion-based molecular communications, in Proc. IEEEGLOBECOM, Dec. 2012, pp. 44444448.[95] P.-J. Shih, C.-H. Lee, P.-C. Yeh, and K.-C. Chen, Channel codes for reliabil-ity enhancement in molecular communication, IEEE J. Sel. Areas Commun.,vol. 31, no. 12, pp. 857867, Dec. 2013.[96] A. Singhal, R. K. Mallik, and B. Lall, Molecular communication with brownianmotion and a positive drift: Performance analysis of amplitude modulationschemes, IET Communications, vol. 8, no. 14, pp. 24132422, Sep. 2014.[97] L. Felicetti, M. Femminella, and G. Reali, Simulation of molecular signaling inblood vessels: Software design and application to atherogenesis, Nano Com-mun. Net., vol. 4, no. 3, pp. 98119, Sept. 2013.197Bibliography[98] A. O. Bicen and I. F. Akyildiz, System-theoretic analysis and least-squares de-sign of microfluidic channels for flow-induced molecular communication, IEEETrans. Signal Process., vol. 61, no. 20, pp. 50005013, Oct. 2013.[99] , End-to-end propagation noise and memory analysis for molecular com-munication over microfluidic channels, IEEE Trans. Commun., vol. 62, no. 7,pp. 24322443, Jul. 2014.[100] Y. Chahibi and I. F. Akyildiz, Molecular communication noise and capacityanalysis for particulate drug delivery systems, IEEE Trans. Commun., vol. 62,no. 11, pp. 38913903, Nov. 2014.[101] J. Barreda and H.-X. Zhou, A solvable model for the diffusion and reaction ofneurotransmitters in a synaptic junction, BMC Biophysics, vol. 4, no. 5, pp.17, Mar. 2011.[102] T. Naka, K. Shiba, and N. Sakamoto, A two-dimensional compartment modelfor the reaction-diffusion system of acetylcholine in the synaptic cleft at theneuromuscular junction, Biosystems, vol. 41, no. 1, pp. 1727, Jan. 1997.[103] Y. Cheng, J. K. 
Suen, Z. Radic, S. D. Bond, M. J. Holst, and J. A. McCammon,Continuum simulations of acetylcholine diffusion with reaction-determinedboundaries in neuromuscular junction models, Biophys. Chem., vol. 127, no. 3,pp. 129139, May 2007.[104] T. Nakano, Y. Okaie, and A. V. Vasilakos, Transmission rate control for molec-ular communication among biological nanomachines, IEEE J. Sel. Areas Com-mun., vol. 31, no. 12, pp. 835846, Dec. 2013.[105] H. ShahMohammadian, G. G. Messier, and S. Magierowski, Modelling the re-ception process in diffusion-based molecular communication channels, in Proc.IEEE ICC MONACOM, Jun. 2013, pp. 782786.[106] X. Wang, M. D. Higgins, and M. S. Leeson, Distance estimation schemes for dif-fusion based molecular communication systems, IEEE Commun. Lett., vol. 19,no. 3, pp. 399402, Mar. 2015.[107] L. Debnath, Nonlinear Partial Differential Equations for Scientists and Engi-neers, 2nd ed. Birkhaeuser, 2005.[108] H. C. Berg, Random Walks in Biology. Princeton University Press, 1993.[109] T. Szirtes, Applied Dimensional Analysis and Modeling, 2nd ed. Butterworth-Heinemann, 2007.198Bibliography[110] E. W. Ng and M. Geller, A table of integrals of the error functions, J. Res.Bur. Stand., vol. 73B, no. 1, pp. 120, Jan.-Mar. 1969.[111] I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series, and Products,5th ed. London: Academic Press, 1994.[112] A. Ilienko, Continuous counterparts of poisson and binomial distributions andtheir properties, Annales Univ. Sci. Budapest, Sect. Comp., vol. 39, pp. 137147, 2013.[113] T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd ed.Wiley-Interscience, 2006.[114] S. Plimpton, Fast parallel algorithms for short-range molecular dynamics, J.Comp. Phys., vol. 117, no. 1, pp. 119, 3/1 1995.[115] http://www.comsol.com.[116] S. S. Andrews, N. J. Addy, R. Brent, and A. P. Arkin, Detailed simulations ofcell biology with Smoldyn 2.1, PLoS Comput. Biol., vol. 6, no. 3, p. e1000705,Mar. 2010.[117] D. T. 
Gillespie, Stochastic simulation of chemical kinetics, Annu. Rev. Phys.Chem., vol. 58, no. 1, pp. 3555, May 2007.[118] K. A. Iyengar, L. A. Harris, and P. Clancy, Accurate implementation of leapingin space: the spatial partitioned-leaping algorithm, J. Chem. Phys., vol. 132,no. 9, p. 094101, Mar. 2010.[119] S. S. Andrews and D. Bray, Stochastic simulation of chemical reactions withspatial resolution and single molecule detail, Physical Biology, vol. 1, no. 3,pp. 137151, Aug. 2004.[120] A. A. Merrikh and J. L. Lage, Effect of blood flow on gas transport in apulmonary capillary, Journal of Biomech. Eng., vol. 127, no. 3, pp. 432439,Jun. 2005.[121] S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory.Prentice Hall, 1993, vol. 1.[122] P. Stoica and T. L. Marzetta, Parameter estimation problems with singularinformation matrices, IEEE Trans. Signal Process., vol. 49, no. 1, pp. 8790,Jan. 2001.[123] R. C. Liu and L. D. Brown, Nonexistence of informative unbiased estimatorsin singular problems, Ann. Stat., vol. 21, no. 1, pp. 113, Mar. 1993.199Bibliography[124] O. Landau and A. Weiss, On maximum likelihood estimation in the presenceof vanishing information measure, in Proc. IEEE ICASSP, vol. 3, May 2006,pp. 680682.[125] E. Bashan, A. Weiss, and Y. Bar-Shalom, Estimation near zero informationpoints: angle-of-arrival near the endfire, IEEE Trans. Aerosp. Electron. Syst.,vol. 43, no. 4, pp. 12501264, Oct. 2007.[126] I. A. Ibragimov and R. Z. Has'minskii, Statistical Estimation: Asymptotic The-ory. Springer-Verlag, 1981.[127] E. L. Lehmann and G. Casella, Theory of point estimation, 2nd ed. Springer,1998.[128] A. Duel-Hallen and C. Heegard, Delayed decision-feedback sequence estima-tion, IEEE Trans. Commun., vol. 37, no. 5, pp. 428436, May 1989.[129] A. Noel, K. C. Cheung, and R. Schober, Overcoming noise and multiuserinterference in diffusive molecular communication, in Proc. ACM NANOCOM,May 2014, pp. 19.[130] A. S. Ladokhin, W. C. Wimley, and S. H. 
White, Leakage of membrane vesiclecontents: determination of mechanism using fluorescence requenching, Bio-phys. Journ., vol. 69, no. 5, pp. 19641971, Nov. 1995.[131] T. Nakano, J. Shuai, T. Koujin, T. Suda, Y. Hiraoka, and T. Haraguchi, Bi-ological excitable media based on non-excitable cells and calcium signaling,Nano Commun. Net., vol. 1, no. 1, pp. 4349, 3 2010.[132] H. Zhang, S. Xu, G. D. M. Jeffries, O. Orwar, and A. Jesorka, Artificial nan-otube connections and transport of molecular cargo between mammalian cells,Nano Commun. Net., vol. 4, no. 4, pp. 197204, Dec. 2013.[133] B. Krishnaswamy, C. M. Austin, J. P. Bardill, D. Russakow, G. L. Holst, B. K.Hammer, C. R. Forest, and R. Sivakumar, Time-elapse communication: Bac-terial communication on a microfluidic chip, IEEE Trans. Commun., vol. 61,no. 12, pp. 51395151, Dec. 2013.[134] R. Ramaswamy and I. F. Sbalzarini, Exact on-lattice stochastic reaction-diffusion simulations using partial-propensity methods, J. Chem. Phys., vol.135, no. 24, p. 244103, Dec. 2011.[135] M. Klann, A. Ganguly, and H. Koeppl, Hybrid spatial Gillespie and particletracking simulation, Bioinformatics, vol. 28, no. 18, pp. i549i555, Sep. 2012.200Bibliography[136] M. B. Flegg, S. J. Chapman, L. Zheng, and R. Erban, Analysis of the two-regime method on square meshes, SIAM J. Sci. Comput., vol. 36, no. 3, pp.B561B588, Jun. 2014.[137] A. Hellander, S. Hellander, and P. Loetstedt, Coupled mesoscopic and mi-croscopic simulation of stochastic reaction-diffusion processes in mixed dimen-sions, Multiscale Model. Simul., vol. 10, no. 2, pp. 585611, May 2012.[138] A. Noel, K. C. Cheung, and R. Schober, Multi-scale stochastic simulation fordiffusive molecular communication, in Proc. IEEE ICC, Jun. 2015, pp. 27122718.201Appendix AList of Other PublicationsOther research works completed during the author's Ph.D. program at UBC but notincluded in this dissertation have been published or accepted as follows:ˆ A. Noel , K. C. Cheung, and R. 
Schober, On the Statistics of Reaction-Diffusion Simulations for Molecular Communication, to be presented at ACM NANOCOM, Sept. 2015.

- A. Ahmadzadeh, A. Noel, and R. Schober, Analysis and Design of Multi-Hop Diffusion-Based Molecular Communication Networks, to appear in IEEE Transactions on Molecular, Biological, and Multi-Scale Communications, 2015.

- A. Noel, K. C. Cheung, and R. Schober, Multi-Scale Stochastic Simulation for Diffusive Molecular Communication, in Proc. IEEE ICC 2015, pp. 2712-2718, Jun. 2015.

- A. Ahmadzadeh, A. Noel, and R. Schober, Analysis and Design of Two-Hop Diffusion-Based Molecular Communication Networks, in Proc. IEEE GLOBECOM, pp. 2820-2825, Dec. 2014.

- A. Noel, K. C. Cheung, and R. Schober, Overcoming Noise and Multiuser Interference in Diffusive Molecular Communication, in Proc. ACM NANOCOM, May 2014.

Appendix B

Proofs for Chapter 2

B.1 Proof of Theorem 2.1

In this proof, we solve the integration in (2.34). For convenience, we recall that (2.34) is

\bar{N}^\star_{RX|tx,0}(t^\star_A) = \int_0^{R^\star_{RX}} \int_0^{2\pi} \int_0^{\pi} C^\star_A \, r^{\star 2} \sin\theta \, d\theta \, d\phi \, dr^\star,   (B.1)

where we recall from (2.33) that

C^\star_A = \frac{1}{\left(4\pi t^\star_A\right)^{3/2}} \exp\left(-k^\star t^\star_A - \frac{r^{\star 2}_{TX,ef}}{4 t^\star_A}\right).   (B.2)

The integration in (B.1) is first found by converting the squared (effective) distance r^{\star 2}_{TX,ef} in (B.2) from Cartesian to spherical coordinates. We recall that r^\star_{TX,ef} is the effective distance from the TX to an arbitrary point inside the RX. We need to write r^{\star 2}_{TX,ef} in terms of the distance from the TX to the origin d^\star, the distance from the origin to an arbitrary point inside the RX r^\star, and the flow components v^\star_\parallel and v^\star_\perp. This can be found as

r^{\star 2}_{TX,ef} = r^{\star 2} + d^{\star 2} - 2 t^\star_A d^\star v^\star_\parallel - 2 d^\star r^\star \cos\phi \sin\theta + t^{\star 2}_A \left(v^{\star 2}_\parallel + v^{\star 2}_\perp\right) - 2 t^\star_A r^\star \left(v^\star_\parallel \cos\phi \sin\theta + v^\star_\perp \sin\phi \sin\theta\right),   (B.3)

where \phi = \tan^{-1}(y^\star/x^\star) and \theta = \cos^{-1}(z^\star/r^\star). Generally, (2.34) does not have a known closed-form solution, due to the sum of trigonometric terms in (B.3) which is placed inside the exponential in (B.2).
We can integrate (2.34) over r?using substitution, integration by parts, the definition of the error function, i.e., [110,Eq. (3.1.1)]erf (a) =2pi12∫ a0exp(−c2) dc, (B.4)and the integral [110, Eq. (4.1.4)]∫a erf (a) da =12erf (a)(a2 − 12)+a2pi12exp(−a2) . (B.5)It can then be shown that (2.34) becomesN?RX|tx,0 (t?A) =2pi∫0pi∫0(2pi32 )−1 exp(β21t?A + β2)sin θ[β1t?A12 exp(−β21t?A)+ pi 12/2× (1 + 2β21t?A)(erf (R?RXt?A− 12/2− β1t?A 12)− erf (−β1t?A 12))−(β1t?A12 +R?RXt?A− 12/2)exp(−(R?RXt?A− 12/2− β1t?A12 )2)]dθdφ,(B.6)whereβ1 =sin θ2(v?‖ cosφ+ v?⊥ sinφ+d?t?Acosφ), (B.7)β2 =d?v?‖2− t?A4(v?‖2 + v?⊥2)− d?24t?A− k?t?A. (B.8)For any general steady uniform flow v? 6= 0, (B.6) can be solved numerically forN?RX|tx,0 (t?A). However, it is not particularly insightful or intuitive. To arrive at aclosed-form solution, we consider the special case v?⊥ = 0, such that the only flow204Appendix B. Proofs for Chapter 2component v?‖ is parallel to the line between the TX and the center of the RX. Westart with (B.3) and the substitution d?ef,x = d?−v?‖ t?A, i.e., d?ef,x is the effective distancealong the x?-axis from the TX to the center of the RX. Eq. (B.3) then becomesr?TX2 = r?2 + d?ef,x2 − 2r?d?ef,x cosφ sin θ, (B.9)and the point concentration (B.2) becomesC?A =1(4pit?A)32exp(−k?t?A −r?2 + d?ef,x24t?A)exp(r?d?ef,x cosφ sin θ2t?A). (B.10)We can now integrate (B.1) with respect to φ by using [111, Eq. (3.339)]∫ pi0exp (a cos c) dc = piJ0(ja) , (B.11)where J0(ja) is the modified zeroth order Bessel function of the first kind and j =√−1. It can be shown that the integral of exp (a cos c) from c = pi to c = 2pi is alsopiJ0(ja), so from (B.1) we can write∫ 2pi0exp(r?d?ef,x cosφ sin θ2t?A)dφ = 2piJ0(jr?d?ef,x sin θ2t?A). (B.12)Next, we integrate with respect to θ by using [111, Eq. (6.681.8)]pi∫0sin (2β3c) J2β4(2b sin c) dc = pi sin (β3pi) Jβ4−β3(a) Jβ4+β3(a) , (B.13)where Ji(a) is the ith order Bessel function of the first kind. From (B.1), (B.12), and205Appendix B. 
(B.13) we can write

$$\int_0^\pi \sin\theta\, J_0\left(j \frac{r^\star d^\star_{ef,x} \sin\theta}{2 t^\star_A}\right) d\theta = \pi J_{-\frac{1}{2}}\left(j \frac{r^\star d^\star_{ef,x}}{4 t^\star_A}\right) J_{\frac{1}{2}}\left(j \frac{r^\star d^\star_{ef,x}}{4 t^\star_A}\right). \quad (B.14)$$

Using [111, Eqs. (1.311), (1.334), (8.464.1-2)], it can be shown that

$$J_{-\frac{1}{2}}\left(ja\right) J_{\frac{1}{2}}\left(ja\right) = \frac{1}{2\pi a} \left(\exp\left(2a\right) - \exp\left(-2a\right)\right), \quad (B.15)$$

so (B.1) is now reduced to

$$N^\star_{RX|tx,0}\left(t^\star_A\right) = \frac{1}{2 d^\star_{ef,x} \sqrt{\pi t^\star_A}} \exp\left(-k^\star t^\star_A\right) \int_0^{R^\star_{RX}} r^\star \left[\exp\left(-\frac{\left(d^\star_{ef,x} - r^\star\right)^2}{4 t^\star_A}\right) - \exp\left(-\frac{\left(d^\star_{ef,x} + r^\star\right)^2}{4 t^\star_A}\right)\right] dr^\star. \quad (B.16)$$

Using the substitution $a = \sqrt{c}\,\beta$, it can be shown that

$$\int_{\beta_3}^{\beta_4} \exp\left(-c\beta^2\right) d\beta = \frac{1}{2} \sqrt{\frac{\pi}{c}} \left(\mathrm{erf}\left(\sqrt{c}\,\beta_4\right) - \mathrm{erf}\left(\sqrt{c}\,\beta_3\right)\right), \quad (B.17)$$

which can be applied to (B.16) to arrive at (2.35).

B.2 Proof of Theorem 2.2

The integration in (2.62) to prove Theorem 2.2 can be divided into three parts, which we write as

$$\frac{3}{2 R_{RX}^3} \int_0^{R_{RX}} r^2 \left(\mathrm{erf}\left(\frac{R_{RX} - r}{2\sqrt{D_A \Delta t_{ob}}}\right) + \mathrm{erf}\left(\frac{R_{RX} + r}{2\sqrt{D_A \Delta t_{ob}}}\right)\right) dr, \quad (B.18)$$

$$-\frac{3}{R_{RX}^3} \sqrt{\frac{D_A \Delta t_{ob}}{\pi}} \int_0^{R_{RX}} r \exp\left(-\frac{\left(R_{RX} - r\right)^2}{4 D_A \Delta t_{ob}}\right) dr, \quad (B.19)$$

$$\frac{3}{R_{RX}^3} \sqrt{\frac{D_A \Delta t_{ob}}{\pi}} \int_0^{R_{RX}} r \exp\left(-\frac{\left(R_{RX} + r\right)^2}{4 D_A \Delta t_{ob}}\right) dr. \quad (B.20)$$

We begin with (B.19). It can be shown via the substitution $\beta = R_{RX} - r$ and the definition of the error function in (2.36) that (B.19) evaluates to

$$-\frac{3 D_A \Delta t_{ob}}{R_{RX}^2} \mathrm{erf}\left(\frac{R_{RX}}{2\sqrt{D_A \Delta t_{ob}}}\right) - \frac{6 \left(D_A \Delta t_{ob}\right)^{3/2}}{R_{RX}^3 \sqrt{\pi}} \left[\exp\left(\frac{-R_{RX}^2}{4 D_A \Delta t_{ob}}\right) - 1\right]. \quad (B.21)$$

Similarly, (B.20) evaluates to

$$\frac{6 \left(D_A \Delta t_{ob}\right)^{3/2}}{R_{RX}^3 \sqrt{\pi}} \left[\exp\left(\frac{-R_{RX}^2}{4 D_A \Delta t_{ob}}\right) - \exp\left(\frac{-R_{RX}^2}{D_A \Delta t_{ob}}\right)\right] + \frac{3 D_A \Delta t_{ob}}{R_{RX}^2} \left[\mathrm{erf}\left(\frac{R_{RX}}{2\sqrt{D_A \Delta t_{ob}}}\right) - \mathrm{erf}\left(\frac{R_{RX}}{\sqrt{D_A \Delta t_{ob}}}\right)\right]. \quad (B.22)$$

To solve (B.18), we first apply the substitutions $a = \left(R_{RX} \pm r\right)/\left(2\sqrt{D_A \Delta t_{ob}}\right)$ to the integrals containing the first and second error functions, respectively. This enables us to rewrite (B.18) with a single error function, as

$$\frac{3\sqrt{D_A \Delta t_{ob}}}{R_{RX}^3} \int_0^{\frac{R_{RX}}{\sqrt{D_A \Delta t_{ob}}}} \left(R_{RX} - 2a\sqrt{D_A \Delta t_{ob}}\right)^2 \mathrm{erf}\left(a\right) da. \quad (B.23)$$

Evaluating (B.23) requires the solution of three integrals, being the product of $\mathrm{erf}\left(a\right)$ with increasing powers of $a$. Beginning with the base case, from [111, Eq. (5.41)] we have

$$\int \mathrm{erf}\left(a\right) da = a\, \mathrm{erf}\left(a\right) + \frac{1}{\sqrt{\pi}} \exp\left(-a^2\right). \quad (B.24)$$

All terms in (B.23) can be evaluated using (B.24) and integration by parts, such that (B.18) evaluates to

$$\frac{1}{R_{RX}} \sqrt{\frac{D_A \Delta t_{ob}}{\pi}} \left[\left(1 + \frac{4 D_A \Delta t_{ob}}{R_{RX}^2}\right) \exp\left(\frac{-R_{RX}^2}{D_A \Delta t_{ob}}\right) - 3 - \frac{4 D_A \Delta t_{ob}}{R_{RX}^2}\right] + \left[\frac{3 D_A \Delta t_{ob}}{R_{RX}^2} + 1\right] \mathrm{erf}\left(\frac{R_{RX}}{\sqrt{D_A \Delta t_{ob}}}\right).$$
(B.25)

We can then combine (B.21), (B.22), and (B.25) to arrive at (2.63).

Appendix C

Proofs for Chapter 5

C.1 Proof of Theorem 5.1

The asymptotic integration (i.e., as $t^\star \to \infty$) in (5.8) to prove Theorem 5.1 can be written as the summation of four integrals which can be found by solving the following two integrals:

$$\int_0^\infty \mathrm{erf}\left(\frac{a}{\tau^{\star1/2}}\right) \exp\left(-k^\star \tau^\star\right) d\tau^\star, \quad (C.1)$$

$$\int_0^\infty \tau^{\star1/2} \exp\left(-\frac{\beta}{\tau^\star} - k^\star \tau^\star\right) d\tau^\star, \quad (C.2)$$

where $a$ could be positive or negative and the latter occurs only when $d^\star_n > R^\star_{RX}$.

To solve (C.1) for $a > 0$, we apply the substitution $c = a/\tau^{\star1/2}$ and use the definite integral [110, Eq. 4.3.28]

$$\int_0^\infty \mathrm{erf}\left(\beta_1 c\right) \exp\left(-\frac{\beta_2^2}{4c^2}\right) \frac{dc}{c^3} = \frac{2}{\beta_2^2} \left(1 - \exp\left(-\beta_1 \beta_2\right)\right), \quad (C.3)$$

so that we can write (C.1) as

$$\frac{1}{k^\star} \left(1 - \exp\left(-2a\sqrt{k^\star}\right)\right). \quad (C.4)$$

Recalling that $\mathrm{erf}\left(\cdot\right)$ is an odd function, i.e., $\mathrm{erf}\left(-c\right) = -\mathrm{erf}\left(c\right)$, we solve (C.1) for $a < 0$ as

$$\frac{1}{k^\star} \left(\exp\left(2a\sqrt{k^\star}\right) - 1\right). \quad (C.5)$$

We can solve (C.2) by directly applying [111, Eq. 3.471.16] as

$$\frac{\sqrt{\pi}}{2 k^\star} \exp\left(-2\sqrt{\beta k^\star}\right) \left(k^{\star-\frac{1}{2}} + 2\sqrt{\beta}\right). \quad (C.6)$$

By taking care to consider the sign of $R^\star_{RX} - d^\star_n$, we can combine (C.4), (C.5), and (C.6) to arrive at (5.11).

C.2 Proof of Theorem 5.2

Similarly to Appendix C.1, we can solve the integration in (5.8) up to any $t^\star$ by solving the following two integrals:

$$\int_0^{t^\star} \mathrm{erf}\left(\frac{a}{\tau^{\star1/2}}\right) d\tau^\star, \quad (C.7)$$

$$\int_0^{t^\star} \tau^{\star1/2} \exp\left(-\frac{\beta}{\tau^\star}\right) d\tau^\star, \quad (C.8)$$

where $a$ could be positive or negative and the latter occurs only when $d^\star_n > R^\star_{RX}$.

To solve (C.7) for $a > 0$, we apply the substitution $c = a/\tau^{\star1/2}$ and use the indefinite integrals [110, Eq. 4.1.14]

$$\int \mathrm{erf}\left(c\right) \frac{dc}{c^3} = -\frac{\mathrm{erf}\left(c\right)}{2c^2} + \frac{1}{\sqrt{\pi}} \int \exp\left(-c^2\right) \frac{dc}{c^2}, \quad (C.9)$$

and [111, Eq. 2.325.5]

$$\int \exp\left(-c^n\right) \frac{dc}{c^i} = \frac{1}{i-1} \left[-\frac{\exp\left(-c^n\right)}{c^{i-1}} - n \int \frac{\exp\left(-c^n\right)}{c^{i-n}}\, dc\right], \quad (C.10)$$

as well as the definition of the error function. It can then be shown that (C.7) for $a > 0$ is solved as

$$\left(2a^2 + t^\star\right) \mathrm{erf}\left(\frac{a}{\sqrt{t^\star}}\right) + 2a \sqrt{\frac{t^\star}{\pi}} \exp\left(-\frac{a^2}{t^\star}\right) - 2a^2. \quad (C.11)$$

Recalling that $\mathrm{erf}\left(\cdot\right)$ is an odd function, we solve (C.7) for $a < 0$ as

$$\left(2a^2 + t^\star\right) \mathrm{erf}\left(\frac{a}{\sqrt{t^\star}}\right) + 2a \sqrt{\frac{t^\star}{\pi}} \exp\left(-\frac{a^2}{t^\star}\right) + 2a^2.$$
(C.12)

To solve (C.8), we apply the substitution $c = \sqrt{\beta/\tau^\star}$, apply (C.10) twice, and use the definition of the error function. It can then be shown that (C.8) is solved as

$$\frac{2}{3} \sqrt{t^\star} \exp\left(-\frac{\beta}{t^\star}\right) \left(t^\star - 2\beta\right) + \frac{4}{3} \beta^{3/2} \sqrt{\pi} \left(1 - \mathrm{erf}\left(\sqrt{\frac{\beta}{t^\star}}\right)\right). \quad (C.13)$$

If we take care to consider the sign of $R^\star_{RX} - d^\star_n$, then we can arrive at (5.13) by combining (C.11), (C.12), and (C.13).
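The closed forms above are easy to cross-check numerically (this check is not part of the thesis). The sketch below compares (C.4) (the $a > 0$ case of (C.1)), (C.11) (the $a > 0$ case of (C.7)), and (C.13) (the solution of (C.8)) against a midpoint-rule quadrature of the corresponding integrals; parameter values and tolerances are illustrative, and the semi-infinite integral (C.1) is truncated where $\exp\left(-k^\star \tau^\star\right)$ makes the tail negligible.

```python
import math

def quad_midpoint(f, lo, hi, n=20000):
    """Midpoint-rule approximation of the integral of f over [lo, hi]."""
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

def c1_closed(a, k):
    """(C.4): closed form of (C.1) for a > 0."""
    return (1.0 - math.exp(-2.0 * a * math.sqrt(k))) / k

def c7_closed(a, t):
    """(C.11): closed form of (C.7) for a > 0."""
    return ((2.0 * a * a + t) * math.erf(a / math.sqrt(t))
            + 2.0 * a * math.sqrt(t / math.pi) * math.exp(-a * a / t)
            - 2.0 * a * a)

def c8_closed(beta, t):
    """(C.13): closed form of (C.8)."""
    return (2.0 / 3.0 * math.sqrt(t) * math.exp(-beta / t) * (t - 2.0 * beta)
            + 4.0 / 3.0 * beta ** 1.5 * math.sqrt(math.pi)
            * (1.0 - math.erf(math.sqrt(beta / t))))

if __name__ == "__main__":
    a, k, beta, t = 0.7, 1.0, 0.5, 2.0
    # Truncate (C.1) at tau = 40; exp(-k*tau) makes the remainder negligible.
    num_c1 = quad_midpoint(lambda x: math.erf(a / math.sqrt(x)) * math.exp(-k * x),
                           0.0, 40.0, 80000)
    num_c7 = quad_midpoint(lambda x: math.erf(a / math.sqrt(x)), 0.0, t)
    num_c8 = quad_midpoint(lambda x: math.sqrt(x) * math.exp(-beta / x), 0.0, t)
    print(num_c1 - c1_closed(a, k), num_c7 - c7_closed(a, t),
          num_c8 - c8_closed(beta, t))
```

Each difference printed should be close to zero; the residual is the quadrature error of the midpoint rule, not of the closed forms.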
