Open Collections

UBC Undergraduate Research

Adding τ leptons to the search for supersymmetry in events with three leptons and missing transverse momentum — Boone, Kyle. Jan 7, 2013

Full Text
Adding τ Leptons to the Search for Supersymmetry in Events with Three Leptons and Missing Transverse Momentum with the ATLAS Detector at the LHC

Kyle Boone
Supervisor: Dr. Colin Gay
ENPH 479, Engineering Physics
The University of British Columbia
ATLAS Project Number 1258
January 7, 2013

Executive Summary

The objective of this project was to investigate adding taus to the search for supersymmetry with the ATLAS detector at the LHC. This analysis was done using the framework set up by the group investigating the channel with a final state of 3 leptons, none of which are τ leptons. I modified this framework to include τ leptons and performed all of my analyses with it.

I began by studying the reconstruction of τ leptons. These particles are very short lived and only their decay products are observed in the detector, so they must be reconstructed in the analysis. After determining a suitable reconstruction method, I investigated reproducing an analysis similar to what has been done with the 3 light leptons in order to improve the sensitivity to the models targeted by that search. Adding taus proved to yield much weaker results than the standard 3 light lepton search and did not give any significant increase in sensitivity.

I then investigated a final state with two same sign light leptons and one tau. This configuration has a much smaller background than the standard 3 lepton search. I identified several models where this search would be able to outperform the 3 light lepton search and optimized signal regions for these models. Supersymmetry is a very promising theory for physics beyond the Standard Model, and current searches have little to no sensitivity to the chosen configuration, which motivates the addition of new channels. I began work on the background estimation but ran into some issues that the ATLAS SUSY multilepton group is currently trying to resolve.
The new signal regions that I have found have been included in the analysis of the full 2012 dataset.

Contents

1 Introduction
1.1 The Standard Model of particle physics
1.2 Supersymmetry
1.3 The Large Hadron Collider and the ATLAS experiment
1.4 Supersymmetry to 3 lepton channel
2 Discussion
2.1 Framework
2.2 Tau identification
2.3 Sensitivity to 3 light lepton search models
2.4 Same sign searches
2.5 Same sign models
2.6 Signal region optimization
2.7 Exclusion limits
2.8 Background Estimation
2.9 Status
3 Conclusion
4 Project Deliverables
4.1 List of Deliverables
4.2 Financial Summary
4.3 Ongoing commitments by team members
5 Recommendations
6 References

List of Tables

1 Particles in the MSSM grouped by charge
2 Comparison of the different tau selections
3 Base selection of different tau multiplicities normalized to 21 fb⁻¹. Errors are statistical only
4 Selection of different tau multiplicities normalized to 21 fb⁻¹ with a same sign request and b-jet veto applied. Errors are statistical only
5 Background distribution in finalized signal regions
6 Background distribution in verification regions

List of Figures

1 Standard Model of particle physics, source: SLAC [2]
2 Example of a Feynman diagram of supersymmetric particle production with a decay to 3 leptons
3 Feynman diagrams for the decays targeted by the 3 light lepton search
4 Best reconstructed Z boson mass in the Sherpa WZ sample (126893) using 2 light leptons (black), 1 light lepton and 1 tau (red) and 2 taus (blue), number of events at 21 fb⁻¹
5 Feynman diagrams for the main decay modes in the DGstau and noslepwh models
6 Exclusion reaches of the 3 light lepton searches in the DGstau and noslepwh grids
7 Missing E_T and max(M_T) distributions for each channel
8 Missing E_T and max(M_T) distributions for each channel
9 Exclusion limits for each of the three signal regions in both the DGstau and noslepwh grids
10 Estimated combined exclusion limits in the DGstau and noslepwh grids using Fisher's method

1 Introduction

The objective of my project was to include tau leptons in the search for supersymmetry in the 3 lepton channel with the ATLAS detector. The previous searches used 3 light leptons only, and taus had not previously been investigated for this analysis. Adding taus to the analysis opens up the possibility of extending the reach in models that have been investigated with other searches, or of probing entirely new models that have not been touched yet.
My task was to define a set of signal regions using taus which would be included in the analysis of the full 2012 dataset, and to determine the sensitivity of these signal regions in several models.

This report contains a large amount of technical information related to particle physics, ATLAS and the LHC, but I have tried to make it accessible to the average Engineering Physics student. I begin by covering the basics of particle physics and of supersymmetry in order to explain the physics signal that I am searching for. In the Discussion section I describe how I set up a framework to run on the ATLAS Monte Carlo simulations and actual data. I then explain how I went about picking models to target and how I optimized signal regions for those models. The final exclusion reaches using these signal regions are then shown for several models.

1.1 The Standard Model of particle physics

The Standard Model of particle physics is a theory which was developed to explain the electromagnetic, weak and strong nuclear interactions. The Standard Model predicts the existence of a large set of particles, all of which other than the Higgs boson are shown in Figure 1.

According to the Standard Model, each of the forces has a carrier particle which is responsible for it. The electromagnetic force acts between charged particles and is carried by the photon. The strong force acts only between quarks and is mediated by the gluon; it is the force that holds a proton (which is composed of two up quarks and a down quark) together. The weak force is what causes phenomena such as beta decay. It has three force carriers, the W± bosons and the Z boson. There is also the Higgs boson, labeled H, which explains the masses of fundamental particles. All of these particles except for the Higgs boson have been discovered and measured extensively.
A new particle has recently been observed at the LHC which is consistent with the Standard Model Higgs boson and is being studied further.

Ordinary matter is composed of up quarks, down quarks and electrons. The other quarks decay in a fraction of a second to up and down quarks. Similarly, muons and taus decay to electrons and neutrinos very rapidly and are not a component of normal matter. Neutrinos are actually quite abundant in the world (roughly 65 billion per second from the sun pass through every square centimeter on Earth), but they interact very little with normal matter and are effectively invisible if they are produced in particle accelerators. Most of the non-stable particles in the Standard Model require large amounts of energy to be produced and decay very rapidly to stable particles. Large particle accelerators are built to produce them and to study their properties.

Figure 1: Standard Model of particle physics, source: SLAC [2]

The Standard Model agrees with all measurements made at particle accelerators to date, but it has some major theoretical issues. For example, the Standard Model cannot explain why the weak force is 10³² times stronger than gravity, an issue referred to as the hierarchy problem. It also cannot explain phenomena like dark matter and dark energy which, according to cosmology experiments, together account for 95% of the mass-energy in the universe.

1.2 Supersymmetry

New theories have been proposed which can solve these problems. Supersymmetry (often referred to as SUSY) is one of the most promising such theories. Supersymmetry predicts that each Standard Model particle has a corresponding superpartner particle which hasn't been discovered yet. Supersymmetry is well motivated theoretically since it provides a dark matter candidate and it solves the hierarchy problem. It is also required by many more advanced theories, such as many variants of string theory.
The minimal extension to the Standard Model that incorporates supersymmetry is called the Minimal Supersymmetric Standard Model (referred to as the MSSM), and this supersymmetric model will be considered for the purpose of this project.

The supersymmetric particles are given names based on their Standard Model partners. For each Standard Model fermion (e.g. quarks and leptons) there is a corresponding sfermion, which is named by adding an "s" (for scalar) before the particle's name. For example, the superpartner of the top quark (t) is the stop quark (t̃). Superpartners for bosons are named by appending "-ino" to the particle's name. The Standard Model force carriers, called gauge bosons, have superpartners called gauginos. In the MSSM there is not a single Higgs boson but four, which are grouped in two doublets, two of which are neutral and two of which are charged. These Higgs particles each have a superpartner called a higgsino. Because of an effect called electroweak symmetry breaking, the electroweak gauginos (superpartners of Z, W± and γ) and the higgsinos form a quantum mechanical mixture. The mass eigenstates of these mixtures that would actually be produced in experiments are called charginos and neutralinos, and there are 4 of each.

Table 1: Particles in the MSSM grouped by charge

Standard Model particle            Corresponding MSSM particle        Mass eigenstate
quark (u, d, s, c, b, t)           squark (ũ, d̃, s̃, c̃, b̃, t̃)
lepton (e, μ, τ)                   slepton (ẽ, μ̃, τ̃)
neutrino (νe, νμ, ντ)              sneutrino (ν̃e, ν̃μ, ν̃τ)
gluon (g)                          gluino (g̃)
neutral Higgs bosons (Hu⁰, Hd⁰)    neutral higgsinos (H̃u⁰, H̃d⁰)      } neutralinos (χ̃₁⁰, χ̃₂⁰, χ̃₃⁰, χ̃₄⁰)
Z boson (Z⁰)                       zino (Z̃⁰)
photon (γ)                         photino (γ̃)
charged Higgs bosons (Hu⁺, Hd⁻)    charged higgsinos (H̃u⁺, H̃d⁻)      } charginos (χ̃₁⁺, χ̃₁⁻, χ̃₂⁺, χ̃₂⁻)
W bosons (W⁺, W⁻)                  winos (W̃⁺, W̃⁻)
Table 1 lists the Standard Model particles and their corresponding superpartners [1]. Superpartners are denoted symbolically by adding a tilde over the symbol of the corresponding Standard Model particle.

If supersymmetry were a perfect symmetry, then the superpartner particles would have the same masses as their Standard Model partners. If this were the case then we would have observed them already, so supersymmetry must be broken in some way that gives the superpartner particles larger masses than their Standard Model partners. The exact mechanism by which supersymmetry is broken is not known. Due to the unspecified method of supersymmetry breaking, the MSSM has ~120 free parameters, such as the masses of the superpartner particles. This makes it difficult to search for supersymmetry without making arbitrary assumptions for a large number of these parameters. However, if supersymmetry is to solve the hierarchy problem previously discussed, the masses of at least some of the superpartner particles must be on the order of 1 TeV (~1000 times the proton mass) or less, and at least some of them should be observable at the Large Hadron Collider.

The conservation of a term called R-parity is introduced in supersymmetry in order to forbid some processes which haven't been observed to date in very high precision experiments. When R-parity is conserved, the lightest supersymmetric particle (LSP) will not be able to decay to Standard Model particles and will be stable. If the LSP is a charged particle then it will be possible to observe it in the detector. If the LSP is neutral, then it won't be observed in the detector, and production of supersymmetric particles will result in a large amount of missing energy in the transverse plane of the detector. A neutral LSP is a candidate for dark matter.

1.3 The Large Hadron Collider and the ATLAS experiment

The Large Hadron Collider (LHC) is currently the world's largest particle accelerator.
It accelerates protons to ~99.999999% of the speed of light and collides them in several different detectors at a rate of 40 million collisions per second. It is currently operating at a center of mass energy of 8 TeV; at the end of 2012 it will go into a shutdown for a year to be upgraded to a center of mass energy of 14 TeV. It is expected to collect 20 fb⁻¹ of data in 2012 and 300 fb⁻¹ of data at 14 TeV by approximately 2021.

The ATLAS detector is a general purpose detector at the LHC with a cylindrical geometry. It is 45 meters long and 25 meters in diameter. It uses a combination of tracking detectors and calorimeters to measure particles emitted in both the forward and backward directions from the collision point in the center. Discovering supersymmetry is one of the primary objectives of the ATLAS detector and of the LHC experiment. Superparticles are expected to have masses on the order of 1 TeV or less, so they should be producible at the LHC. Searches for them are ongoing in many different channels.

The rate at which a process occurs is called its cross section and is quoted in units of "barns" (b). Processes such as the production of supersymmetric particles typically have cross sections on the order of 10⁻¹⁵ to 10⁻⁹ barns, so cross sections are usually given in nanobarns (nb), picobarns (pb) or femtobarns (fb). The amount of data collected at a particle accelerator is called the integrated luminosity and is measured in units of "inverse femtobarns" (fb⁻¹). 1 fb⁻¹ corresponds to roughly 7×10¹³ collisions for the LHC and the ATLAS detector. Multiplying the integrated luminosity by the cross section gives the number of times that a process is expected to have occurred. For example, in 2012, roughly 20 fb⁻¹ of data will be collected by the ATLAS detector.
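The yield arithmetic above (expected events = cross section × integrated luminosity) is easy to make concrete; the only subtlety is converting between barn prefixes. A minimal Python sketch, where the ~10 pb and 20 fb⁻¹ are the round numbers quoted in the text rather than precise ATLAS values:

```python
# Expected event count N = sigma * L, keeping track of barn prefixes.
# Conversion: 1 pb = 1000 fb, so sigma[fb] = sigma[pb] * 1e3.

def expected_events(sigma_pb: float, lumi_ifb: float) -> float:
    """Expected events for a cross section in pb and an
    integrated luminosity in fb^-1."""
    sigma_fb = sigma_pb * 1e3    # pb -> fb
    return sigma_fb * lumi_ifb   # fb * fb^-1 is dimensionless

# Rough Higgs production at 8 TeV: ~10 pb, with ~20 fb^-1 of 2012 data.
n_higgs = expected_events(sigma_pb=10.0, lumi_ifb=20.0)
print(n_higgs)  # -> 200000.0
```

This reproduces the ~200000 Higgs bosons expected in the 2012 dataset quoted below.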
The Higgs boson cross section is ~10 pb at a center of mass energy of 8 TeV, so we expect to see ~200000 Higgs bosons in the 2012 data.

However, separating signal processes from Standard Model background processes is not easy. Many processes can produce the same final state particles seen in the detector, and all of the background processes must be well understood in order to be able to determine whether a signal is present or not. Monte Carlo generators have been created which can simulate background processes all the way from the basic collision between two protons to the actual hits of scintillators in the detector. The ATLAS experiment has a large software framework called Athena which is used to run these Monte Carlo simulations and to process their results. Event processing is done using the LHC Computing Grid, a grid-based computer network with approximately 200000 processing cores.

1.4 Supersymmetry to 3 lepton channel

Current searches for supersymmetry have not led to any discoveries and have placed especially strong limits on the masses of the 1st and 2nd generation squarks and gluinos. In many supersymmetry models consistent with large masses for these particles, the neutralinos and charginos will be light and accessible at LHC energies. For my project I looked into the case where only the neutralino-1 (χ̃₁⁰), neutralino-2 (χ̃₂⁰), chargino-1 (χ̃₁±) and the sleptons are light, and every other supersymmetric particle is beyond the reach of the LHC. I assume that the χ̃₁⁰ is the LSP discussed previously, so it is stable.

This model has been investigated already, but only with a final state of 3 leptons which are all either muons or electrons. One possible Feynman diagram for the decay of these superparticles is shown in Figure 2. This diagram should be read with time increasing to the right along the x-axis. The two quarks from the protons collide and produce supersymmetric particles on the left hand side of the diagram.
The superparticles then decay via sleptons and sneutrinos to two χ̃₁⁰ particles, 3 leptons and a neutrino. The lines exiting the diagram to the right are the particles that were produced in the interaction.

Figure 2: Example of a Feynman diagram of supersymmetric particle production with a decay to 3 leptons

The ATLAS collaboration had previously only performed analyses in this channel with electrons and muons in the final state; taus had not yet been considered. My project involved adding taus to the search in order to improve the sensitivity.

2 Discussion

2.1 Framework

The first part of my project involved setting up a software framework which could perform the tasks required for my project. I started with the UC Irvine framework which was used for the 3 light lepton searches. I made several modifications to this framework which were necessary in order to study tau leptons, such as updating the Z identification functions to include the various possibilities for taus and their decays, and updating the transverse mass calculations. I added in all of the appropriate scale factors (electrons, muons and b-jets). My updated framework can be found on the UBC ATLAS servers at ~kboone/tau-2012/susynt-n0115 (replace n0115 with the desired UCI Ntuple tag).

The UCI framework was too slow for my purposes, taking over an hour to analyze the full set of Monte Carlo samples, while I needed to investigate hundreds of different selections. As a result, I modified the UCI framework to select all events with 3 leptons as a baseline and calculate all relevant parameters for those events. The results of these calculations are stored in new slimmed down ROOT Ntuples which can then be processed in a matter of seconds.

I wrote a secondary framework which uses these slimmed down Ntuples and which can produce arbitrary histograms of the results. This framework can be found on the UBC ATLAS servers at ~kboone/tau-2012/cut-ana/macros.
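The two-stage pattern described above — one slow pass that applies the 3-lepton baseline and precomputes derived quantities, then fast iteration over the slimmed records — can be sketched as follows. The event structure and field names here are invented for illustration; the real framework reads and writes ROOT Ntuples:

```python
# Sketch of the "slim once, iterate fast" pattern described above.
# Stage 1 (slow, run once): apply the 3-lepton baseline selection and
# precompute per-event quantities. Stage 2 (fast, run many times):
# scan cut values over the slimmed records.

def slim(events):
    """Keep only >=3-lepton events, with derived quantities precomputed."""
    slimmed = []
    for ev in events:
        if len(ev["leptons"]) >= 3:
            slimmed.append({
                "met": ev["met"],
                "n_taus": sum(1 for l in ev["leptons"] if l == "tau"),
            })
    return slimmed

def yield_after_cut(slimmed, met_cut):
    """Fast stage: count slimmed events passing a missing-E_T cut."""
    return sum(1 for ev in slimmed if ev["met"] > met_cut)

events = [
    {"leptons": ["e", "mu", "tau"], "met": 120.0},
    {"leptons": ["e", "mu"], "met": 300.0},          # fails 3-lepton baseline
    {"leptons": ["mu", "mu", "tau"], "met": 40.0},
]
slimmed = slim(events)
print(yield_after_cut(slimmed, met_cut=50.0))  # -> 1
```

The design point is that the expensive selection and bookkeeping run once, while hundreds of candidate cut values can then be evaluated against the small slimmed sample.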
The framework is controlled by specifying the samples to run over in the samples-n0115.C file and by setting parameters in the Parameters.C file. There are then several ROOT macros which read these files and perform a wide range of functions, including generating stacked histograms, producing significance plots as a function of a cut on 1 or 2 variables, calculating yields after a set of cuts, and plotting the significance across a grid. In addition, I wrote a set of macros using ROOT's TMVA package which run a suite of multivariate tools on the samples, including Boosted Decision Trees (BDTs).

I verified the results of this framework by comparing results with Nicky Santoyo, a graduate student at the University of Sussex who was also working on adding taus to the 3 lepton search using a separate framework. Nicky ended up optimizing signal regions for DGemt-like grids while I ended up optimizing for DGstau-like grids. Our frameworks had similar capabilities, so we verified each other's results at several stages in the analysis.

2.2 Tau identification

To start off the analysis, I investigated several tau reconstruction algorithms that are used in the ATLAS framework in order to find the best one for this channel. It is worth noting that when I refer to taus in my analysis I am implying that the taus decayed hadronically. Around 17% of taus will decay to a muon and neutrinos, and around the same fraction will decay to an electron and neutrinos. These leptonic decays are included in the light lepton searches. In the case of the hadronic decays, the detector actually observes narrow cones of hadrons and other particles called jets.
Jets can be produced by many other processes, so it is necessary to determine whether they came from a tau decay or from some other process.

The ATLAS collaboration has many people working on this problem, and several boosted decision trees (BDTs) have been produced which take parameters of the jet and output a value indicating whether the jet is more or less likely to be a tau decay. I used the results of both a BDT to identify electrons (eleBDT) and a BDT to identify hadronic jets (jetBDTSig) to separate taus from these objects. Several predefined cutoffs have been chosen for analyses to use, with various tau identification efficiencies. I investigated three of these cutoffs, labelled "Tight" (30% identification efficiency), "Medium" (50%) and "Loose" (60%). Increased identification efficiency comes with the trade-off of decreased background rejection.

As an initial step I perform the standard baseline cuts on taus of pT > 20 GeV, |η| < 2.5, author 1 or 3, number of tracks 1 or 3, and charge ±1. These cuts are used in most other tau analyses and are in place to take the detector geometry into account and to veto non-standard events.

To determine which of the three cutoffs was most effective, I ran over all of the major backgrounds expected to contribute to the reducible background in this search and investigated how the fake rates in these samples are affected by the different cutoffs. In order to determine the optimal selection, I calculated an estimate of how the significance will be affected by the selection using the formula T/√F, where T is the fraction of true taus passing the selection and F is the number of fake taus passing the selection. I then calculated the ratio of the significances using each of the different cutoff levels. The results of this investigation are shown in Table 2.

Table 2: Comparison of the different tau selections

Sample   Sample ID   Medium/Loose   Tight/Loose   Sig(Medium)/Sig(Loose)   Sig(Tight)/Sig(Loose)
WZ       126893      0.42           0.17          1.28                     1.22
tt̄       105861      0.47           0.17          1.22                     1.20
ZeeNp0   107650      0.17           0.04          2.05                     2.57
ZeeNp5   107655      0.36           0.11          1.40                     1.54
ZμμNp0   107660      0.52           0.28          1.16                     0.95
ZμμNp5   107665      0.50           0.19          1.18                     1.14
Weν      147774      0.34           0.11          1.43                     1.50
Wμν      147775      0.53           0.23          1.14                     1.04

As shown in this table, both the Tight and Medium selections show a large increase in significance over the Loose selection. The Medium and Tight selections are comparable, especially for the tt̄ sample, which is expected to be the predominant source of fakes. Since we are dealing with low statistics, though, a proper significance calculation will have additional contributions from the statistics. Hence the Medium selection will perform better than the Tight selection for any analysis where these are the predominant reducible backgrounds.

For these reasons the Medium selection was used for further analyses. I did investigate optimizing signal regions with Loose and Tight taus for comparison, but the Medium selections always ended up performing the best. Independent studies were performed by Nicky Santoyo, who optimized a specific signal region using the different selection levels. Her studies agreed with my conclusion that using the Medium tau selection was optimal for this analysis.

For muons and electrons I used the standard selection that has been used in the 3 light lepton analysis. I will not cover these selections in depth here (full details can be found in [4]), but I essentially require ET/pT > 10 GeV and |η| < 2.47/2.40 for electrons and muons respectively, with some additional requirements in the tracker and calorimeter.

2.3 Sensitivity to 3 light lepton search models

My initial approach to adding taus to the search involved trying to target the same models that had been targeted by the 3 light lepton search, in order to combine the results of the two searches and improve the sensitivity. I focused on what the SUSY group refers to as "Mode A" production, where a χ̃₁± and a χ̃₂⁰ are the initial supersymmetric particles produced, as was done in the previous analysis.
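The T/√F figure of merit used above for comparing tau identification working points is straightforward to compute. A small sketch with invented efficiency and fake counts (illustrative values, not the actual Table 2 inputs):

```python
from math import sqrt

def tau_id_merit(true_eff: float, n_fakes: float) -> float:
    """T / sqrt(F): fraction of true taus kept over the square root of
    the number of fake taus kept."""
    return true_eff / sqrt(n_fakes)

# Illustrative numbers only: suppose Loose keeps 0.60 of true taus with
# 100 fakes in some sample, and Medium keeps 0.50 with 42 fakes.
loose = tau_id_merit(0.60, 100.0)
medium = tau_id_merit(0.50, 42.0)
print(round(medium / loose, 2))  # -> 1.29 (ratio > 1 means Medium wins)
```

A ratio above one, as in most of the Table 2 entries, means the tighter working point rejects fakes faster than it loses true taus.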
There are two main decay paths which are targeted by the 3 light lepton search: one where all of the sleptons have masses between the χ̃₁⁰ and χ̃₂⁰ masses (called the "wA slep" model) and one where the sleptons have very high masses ("wA noslep"). The Feynman diagrams for the most common decays in these models are shown in Figure 3.

(a) wA slep  (b) wA noslep
Figure 3: Feynman diagrams for the decays targeted by the 3 light lepton search

Assuming that all of the sleptons have the same masses, the leptons produced in each of these two diagrams are equally likely to be electrons, muons or taus. Hence there are several possible final states: 3 light leptons (referred to as "3l"), 2 light leptons and 1 tau ("2l1τ"), 1 light lepton and 2 taus ("2τ1l"), or 3 taus ("3τ").

I investigated all of the possible final states with taus in the context of these models and compared them to the results with 3 light leptons. The results were not promising. Several major issues arise when replacing one of the light leptons with a tau in this search. The first is the signal branching ratio. I calculated that, without taking the tau decay into account, a 3l final state will occur 30% of the time, compared with 44% for 2l1τ, 22% for 2τ1l and 4% for 3τ. These numbers seem good for the taus, but there are several additional factors which need to be taken into account. Leptonic tau decays end up being registered as light leptons (which occurs for 33% of taus), so these end up being included in a different channel. Secondly, using the Medium identification discussed previously, the hadronic tau reconstruction only has a 50% efficiency, compared to a >90% efficiency for the light leptons. Hence the tau channels are already at a disadvantage compared to the light leptons simply from the branching ratios.
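The 30%/44%/22%/4% split quoted above follows from simple counting if each of the three leptons is equally likely to be an electron, muon or tau. A short check of that combinatorics:

```python
from itertools import product

# Each of the three leptons is e, mu or tau with probability 1/3.
# Count the 3^3 = 27 equally likely flavour combinations by tau
# multiplicity; this reproduces the quoted ~30% (3l), ~44% (2l1tau),
# ~22% (2tau1l) and ~4% (3tau) split.
counts = {0: 0, 1: 0, 2: 0, 3: 0}
for combo in product(["e", "mu", "tau"], repeat=3):
    counts[combo.count("tau")] += 1

for n_tau in sorted(counts):
    print(n_tau, "taus:", round(100 * counts[n_tau] / 27, 1), "%")
```

The counts are 8, 12, 6 and 1 out of 27, i.e. 29.6%, 44.4%, 22.2% and 3.7%, matching the rounded figures in the text.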
The results of all of these factors can be seen at the bottom of Table 3, which shows the number of events in each final state in Monte Carlo simulations of the different signals.

Table 3: Base selection of different tau multiplicities normalized to 21 fb⁻¹. Errors are statistical only

Background          3l                  2l1τ                 2τ1l                3τ
tt̄                  248.11 ± 8.53       1550.23 ± 21.29      701.05 ± 14.03      58.47 ± 4.03
Zee + jets          981.85 ± 69.81      46346.42 ± 588.01    506.78 ± 50.25      3.87 ± 3.87
Zμμ + jets          1685.19 ± 87.39     67256.19 ± 728.74    432.53 ± 46.41      0.00 ± 0.00
Zττ + jets          30.87 ± 12.83       985.91 ± 65.85       3975.02 ± 132.70    3582.22 ± 128.53
Single top          17.46 ± 2.74        125.51 ± 7.57        111.94 ± 7.84       19.04 ± 3.14
tt̄ + W              16.64 ± 0.39        11.45 ± 0.33         3.10 ± 0.17         0.32 ± 0.05
tt̄ + Z              30.09 ± 0.51        10.46 ± 0.30         2.62 ± 0.15         0.36 ± 0.05
WZ                  2421.97 ± 17.33     403.27 ± 7.08        95.14 ± 3.48        10.38 ± 1.19
WW                  6.35 ± 0.64         183.92 ± 3.59        45.69 ± 1.79        2.27 ± 0.40
ZZ                  1523.59 ± 16.23     257.32 ± 6.51        38.71 ± 2.61        4.41 ± 0.84
W + jets            0.00 ± 0.00         244.89 ± 43.02       7511.02 ± 298.37    1010.65 ± 225.38
triboson            23.47 ± 0.31        7.99 ± 0.18          1.18 ± 0.07         0.08 ± 0.02
Total background    6985.59 ± 115.41    117383.53 ± 940.01   13424.80 ± 334.06   4692.05 ± 259.54
noslep (250, 50)    40.47 ± 0.78        7.45 ± 0.34          2.41 ± 0.19         0.35 ± 0.07
slep (225, 125)     309.94 ± 5.75       69.09 ± 2.75         25.11 ± 1.66        3.47 ± 0.62

Another disadvantage when considering taus instead of light leptons is that the reducible background is much higher, since fake taus are much more likely than fake light leptons. As shown in Table 3, this means that there are 6986 background events in the 3l channel compared to 117384 in the 2l1τ channel, due primarily to a huge increase in the fakes from Z+jets.

Finally, adding taus also makes separation on variables such as missing transverse energy and transverse mass more difficult. This is because neutrinos are produced in the tau decays, which smear out readings of these values. These two variables were the major discriminating factors used in the 3 light lepton analysis.
Similarly, a Z boson requirement/veto based on the reconstructed mass of the Z boson is commonly used when 3 light leptons are present. Events produced by Z bosons give a peak around the mass of the Z boson if 2 light leptons are used. However, this peak is gone when 1 light lepton and a tau, or two taus, are considered, due to the energy lost to the neutrinos. The distribution of the reconstructed Z mass is shown in Figure 4 for the WZ sample, which is one of the most significant backgrounds for this search.

Figure 4: Best reconstructed Z boson mass in the Sherpa WZ sample (126893) using 2 light leptons (black), 1 light lepton and 1 tau (red) and 2 taus (blue), number of events at 21 fb⁻¹

Overall, there are many factors coming into play which all make a 3 light lepton search much more effective than any search with taus for the models that were considered. I spent several weeks looking into this and investigated over 50 kinematic variables to try to get an edge over the 3 light lepton search, but none of these achieved a sensitivity anywhere near comparable with the sensitivity of 3 light leptons. Hence I abandoned the idea of trying to compete directly with the 3 lepton search and decided to target other models and selections where taus can be more effective.

2.4 Same sign searches

After many discussions with Zoltan Gecse, a postdoc in the UBC ATLAS group, we realized that there is one way that a selection with taus can be very effective: same sign searches. If an interaction produces two taus and one light lepton, and one of the taus decays leptonically, we can end up with a final state in the detector with 1 tau and 2 light leptons which have the same sign. This final state is extremely uncommon in the backgrounds. It immediately eliminates most of the background contribution from Z+jets, which was dominant without any cuts (as seen in Table 3).
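The same sign 2l1τ requirement described above can be sketched as a simple event filter. The (flavour, charge) tuple representation is invented here for illustration and is not the framework's actual data model:

```python
def passes_ss_2l1tau(leptons):
    """Same sign 2l + 1 tau selection sketch: exactly two light leptons
    (e or mu) carrying the same charge plus exactly one hadronic tau.
    `leptons` is a list of (flavour, charge) tuples."""
    light_charges = [q for f, q in leptons if f in ("e", "mu")]
    n_taus = sum(1 for f, _ in leptons if f == "tau")
    return (len(light_charges) == 2
            and n_taus == 1
            and light_charges[0] == light_charges[1])

print(passes_ss_2l1tau([("mu", +1), ("e", +1), ("tau", -1)]))  # -> True
print(passes_ss_2l1tau([("mu", +1), ("e", -1), ("tau", -1)]))  # -> False
```

The opposite-sign pair in the second event is exactly the Z → l⁺l⁻ topology that dominates the untargeted selection, which is why the same sign requirement removes most of the Z+jets background.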
Since there is a significant amount of top background remaining, I also applied a b-jet veto, which cuts these backgrounds by a large factor without having a major effect on the signal.

After these cuts are applied, a much more manageable scenario is achieved. However, upon investigation, the flavour of the light leptons has a significant impact on the background composition at this point in the cuts. For this reason I decided to create three separate signal regions, one for each combination of light lepton flavours. The background composition after all of the cuts discussed are applied is shown in Table 4.

Table 4: Selection of different tau multiplicities normalized to 21 fb⁻¹ with a same sign request and b-jet veto applied. Errors are statistical only

Background           μμτ              μeτ              eeτ
tt̄                   2.15 ± 0.79      4.11 ± 1.13      4.00 ± 1.11
Zee + jets           0.00 ± 0.00      25.76 ± 18.49    123.61 ± 22.38
Zμμ + jets           0.00 ± 0.00      1.51 ± 1.51      0.00 ± 0.00
Zττ + jets           0.00 ± 0.00      12.78 ± 5.78     9.91 ± 7.08
Single top           0.56 ± 0.56      0.03 ± 0.03      0.00 ± 0.00
tt̄ + W               0.10 ± 0.03      0.22 ± 0.05      0.07 ± 0.02
tt̄ + Z               0.02 ± 0.01      0.05 ± 0.02      0.04 ± 0.02
WZ                   14.04 ± 1.29     24.69 ± 1.74     13.47 ± 1.30
WW                   0.24 ± 0.14      0.60 ± 0.20      0.25 ± 0.11
ZZ                   3.31 ± 0.74      8.01 ± 1.17      4.62 ± 0.87
W + jets             10.72 ± 6.11     13.73 ± 6.37     19.41 ± 8.26
triboson             0.61 ± 0.05      1.05 ± 0.07      0.52 ± 0.05
Total background     31.75 ± 6.36     92.53 ± 20.58    175.90 ± 24.95
noslep (250, 50)     0.17 ± 0.05      0.54 ± 0.09      0.30 ± 0.07
slep (225, 125)      2.68 ± 0.56      4.32 ± 0.68      2.80 ± 0.56
DGstau (160, 350)    12.09 ± 4.02     20.86 ± 5.87     8.15 ± 2.95
DGstau (210, 350)    5.35 ± 1.92      10.51 ± 2.76     6.45 ± 2.29
DGstau (250, 350)    4.41 ± 2.10      5.48 ± 2.23      3.29 ± 1.66
DGstau (300, 350)    3.71 ± 1.75      4.51 ± 1.61      1.54 ± 0.89

This table shows that the eeτ channel has over 5 times as many events as the μμτ channel. These additional events appear to be coming from the Z+jets samples. This behaviour is unusual, since the distribution of muons and electrons should be even at truth level.
I investigated these events individually and discovered that the tau is a fake and one of the electrons from the Z decay ends up being reconstructed with the wrong sign. This occurs primarily when the electron radiates a photon which undergoes pair production, and the 3 resulting electrons are close together and reconstructed as one with the opposite sign. This does not occur for muons since both the muon chambers and the tracker can be used to determine the sign of the muons, and a coincident error in both systems is extremely unlikely. Nevertheless, the background is at an acceptable level after the same sign cuts are applied and these channels are sensitive to some models as is.

2.5 Same sign models

While the same sign 2l1τ searches can in fact target some of the same models that the 3l searches target, the sensitivity there is much less than that of the 3l searches for the reasons discussed previously, even with the same sign requirement. Hence I looked for new configurations where the same sign 2l1τ search could outperform the 3l search.

One such model involves having a light stau but heavy smuons and selectrons. In this case, the χ̃₁± and χ̃₂⁰ will both decay via staus, giving an end state which always has 3 taus. The branching ratio to 3 light leptons in this case is quite low and the same sign search is much more effective. A grid was produced where such a scenario occurs, called the "DGstau" grid. A Feynman diagram of the main decay mode is shown in Figure 5 (a).

Another possible model involves replacing the Z boson in the noslep model with a Higgs boson. The χ̃₂⁰ can decay almost entirely to Higgs bosons instead of Z bosons, depending on its composition. A 125 GeV Higgs boson decays to 2 muons with a probability of about 2 × 10⁻⁴, and its decay to 2 electrons is negligible. This is in contrast with Z bosons, which decay to either 2 muons or 2 electrons with a probability of around 7 × 10⁻², more than two orders of magnitude higher.
In contrast, the Higgs boson decays to 2 tau leptons with a probability of about 6 × 10⁻². This means that leptonic Higgs boson decays will almost always result in two τ leptons. However, the Higgs boson can also decay to two W bosons. For a 125 GeV Higgs boson, back of the envelope calculations indicate that the probability of getting two light leptons out of the Higgs decay ends up being nearly identical to that of getting a light lepton and a tau. As a result the tau search outperforms the 3 lepton search, but not by the same margin that is seen in the DGstau grid, as will be seen in further sections. A grid was produced for this model, called the "noslepwh" grid. The Feynman diagram of the main decay mode is shown in Figure 5 (b).

Figure 5: Feynman diagrams for the decays targeted by this search in the (a) DGstau and (b) noslepwh grids

2.6 Signal region optimization

After models were chosen, the next step was to optimize signal regions that could target these models effectively. When the optimization was being done, the noslepwh grid was still being processed (and is in fact still only half available), so the DGstau grid was mainly used. The DGstau grid was generated in the context of the pMSSM, a simplified version of the MSSM with 19 free parameters. In this grid, 16 of these parameters are fixed in order to give the scenario described previously. There are then 3 varied parameters: M1 corresponding to the first gaugino mass, M2 corresponding to the second gaugino mass, and MU corresponding to the Higgsino mass. M1 essentially gives the mass of the χ̃₁⁰, while the lesser of MU and M2 essentially gives the χ̃₂⁰ and χ̃₁± mass. The noslepwh grid uses the mass of the χ̃₁⁰ and χ̃₂⁰ directly as parameters.

To begin the signal region optimization process, I investigated the reach of the current 3 light lepton search in both this grid and in the noslepwh grid. For the DGstau grid, I chose M1 to be fixed to 50 GeV and let both MU and M2 vary.
I used signal regions SR1a, SR1b and SR2b from the 13 fb⁻¹ analysis. The exclusion reaches of these signal regions in these two grids at 21 fb⁻¹ are shown in Figure 6. In this figure, and in all other figures where a significance is shown, the Zn algorithm is used for estimating significance, with a 30% systematic uncertainty applied. My framework draws a line at the 95% confidence exclusion level (corresponding to a significance of 1.64) indicating the region of the parameter space which has been excluded. As shown in this figure, these signal regions have little to no exclusion in both grids. It is worth noting that these grids have quite low statistics, so fluctuations are to be expected, which accounts for the unevenness that is seen.

Since the 3 light lepton search has no exclusion in this grid, I chose to optimize for two points. First, I targeted the bottom left corner of the grid, where the χ̃₂⁰ mass is around 110 GeV. Since the statistics for each individual point are quite low, I combined all of the points with 100 ≤ M2 ≤ 120 and 100 ≤ MU ≤ 120 (9 points total), which all have very similar distributions anyway. The second point that I targeted is a χ̃₂⁰ mass around 210 GeV. In order to get better statistics, I combined all of the points with either M2 = 210 GeV or MU = 210 GeV, since the χ̃₂⁰ mass is determined by the smaller of the two.

Figure 6: Exclusion reaches of the 3 light lepton search in the DGstau and noslepwh grids. Panels: (a) DGstau SR1a, (b) noslepwh SR1a, (c) DGstau SR1b, (d) noslepwh SR1b, (e) DGstau SR2b, (f) noslepwh SR2b

I also applied a trigger at this stage in the analysis in order to account for the fact that trigger efficiencies aren't perfect. I used the dilepton triggers from the 3 light lepton analysis (TRIG 2e12Tvh loose1, TRIG e24vh medium1 e7 medium1 and similar triggers). However, the efficiencies using only these triggers are only around 80%.
To increase the efficiency, I added single isolated lepton triggers (TRIG e24vhi medium1 and TRIG mu24i tight) for both the muons and electrons. When these are included, the trigger efficiency rises to around 94%, which is acceptable. Lepton-tau and isolated tau triggers should be added, but they weren't available in earlier steps of the UCI framework when I began this project, so I wasn't able to include them. They are in the process of being added to the framework and should be considered for addition to these signal regions when available.

I investigated a large variety of kinematic variables in order to find ones which are optimal to cut on for this analysis. I found two variables which gave a meaningful separation of signal from background: missing transverse energy (/ET) and the maximum transverse mass (max(MT)). The missing transverse energy is calculated by measuring the vector sum of the momenta of the particles that are seen in the plane perpendicular to the collision. Since there was no initial momentum in this plane, this value can be used to estimate the energies of particles that don't interact with the detector, such as neutrinos and the χ̃₁⁰. SUSY interactions typically produce many such particles, so they will on average have higher values of /ET than Standard Model interactions. The transverse mass is the reconstructed mass of the /ET and a lepton. For W boson decays with no invisible particles other than the neutrino from the W, the distribution has a sharp drop off after the W mass. Since we have 2 light leptons, it is not evident which one should be used in this calculation.
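The transverse mass just described is the standard construction from the lepton transverse momentum and the missing transverse momentum; a minimal sketch (the formula is generic, not specific to this analysis):

```python
import math

def transverse_mass(lep_pt, lep_phi, met, met_phi):
    """MT = sqrt(2 * pT * MET * (1 - cos(dphi))).

    For on-shell W -> l nu decays with no other invisible particles,
    this distribution has a Jacobian edge near the W mass.
    """
    dphi = lep_phi - met_phi
    return math.sqrt(2.0 * lep_pt * met * (1.0 - math.cos(dphi)))
```

With two light leptons in the event, the quantity actually cut on later is the maximum of the two per-lepton values.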
After trying out several combinations, I discovered that calculating the transverse mass for both leptons and taking the maximum value gives the best discrimination.

The lepton transverse momenta were the variables with the next best ability to distinguish signal from background, but additional cuts on them actually decreased the sensitivity due to low statistics. Hence only the /ET and max(MT) cuts were used. Initial plots of these two variables are shown for both of the signal combinations discussed previously in Figure 7: panels (a)/(b) show the μμτ /ET and max(MT) distributions, (c)/(d) the eμτ distributions and (e)/(f) the eeτ distributions, with the background samples and the DGstau M1=50 combined (110, 110) and (210, 210) signal points shown.
Figure 7: /ET and max(MT) distributions for each channel

In order to find a suitable cut on each of these variables, I generated 2-dimensional significance plots as a function of the cuts. These plots can be seen in Figure 8. These plots exhibit many variations due to the fact that the samples used to generate them have relatively low statistics. Since all of these plots have nearly identical distributions, I decided to use the same cuts for all three signal regions. This helps avoid cherry-picking cuts which achieve unrealistically high sensitivities by cutting out specific Monte Carlo events. Based on these plots, and also looking at the sensitivities in the full grid for different sets of cuts, I chose a cut on the /ET of >60 GeV and a cut on the max(MT) of >80 GeV. The background compositions after these cuts are shown in Table 5.

Figure 8: Significance as a function of the /ET and max(MT) cuts. Panels: (a) μμτ, (c) eμτ, (e) eeτ for the (110, 110) combination; (b) μμτ, (d) eμτ, (f) eeτ for the (210, 210) combination

As illustrated in Table 5, the signals are at similar levels in each of the three channels, but the background is much smaller in the μμτ and eμτ channels. The eeτ channel is plagued by one particularly bad Monte Carlo event from W+jets which had a weight of 12.2 due to the very low statistics in the W+jets sample that was available.
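The cut optimization above, scanning /ET and max(MT) thresholds and computing a significance for each pair, can be sketched as follows. The real analysis uses the Zn algorithm with a 30% systematic uncertainty (computed with the official ATLAS tools); here a common Asimov-style approximation with a relative background uncertainty stands in for it, and the event tuples are hypothetical stand-ins for the weighted Monte Carlo events.

```python
import math

def zn_asimov(s, b, rel_syst=0.3):
    """Asimov significance for s signal over b background events,
    with a relative systematic uncertainty on b. An approximation
    to the Zn used in the analysis, not the official implementation."""
    sigma2 = (rel_syst * b) ** 2
    n = s + b
    t1 = n * math.log(n * (b + sigma2) / (b * b + n * sigma2))
    t2 = (b * b / sigma2) * math.log(1.0 + sigma2 * s / (b * (b + sigma2)))
    return math.sqrt(max(2.0 * (t1 - t2), 0.0))

def scan_cuts(signal, background, met_cuts, mt_cuts):
    """2D scan: significance for every (/ET, max MT) cut combination.

    `signal` and `background` are lists of (met, max_mt, weight) tuples,
    a simplified stand-in for the weighted Monte Carlo events.
    """
    grid = {}
    for met_cut in met_cuts:
        for mt_cut in mt_cuts:
            s = sum(w for met, m, w in signal if met > met_cut and m > mt_cut)
            b = sum(w for met, m, w in background if met > met_cut and m > mt_cut)
            grid[(met_cut, mt_cut)] = zn_asimov(s, b) if s > 0 and b > 0 else 0.0
    return grid
```

Tightening the cuts on a toy sample raises the significance until the surviving statistics run out, which is exactly the low-statistics fluctuation problem seen in Figure 8.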
This will not affect the actual search, since the background will be estimated with data-driven methods. I expect that there will be some W+jets contribution to both the μμτ and eμτ channels as well. The final selection is summarized below:

- Signal muons and electrons as defined in the 3 light lepton search
- Medium tau selection, pT > 20 GeV, |η| < 2.5
- Split into μμτ, eμτ and eeτ channels
- Same sign light leptons, opposite sign tau
- b-jet veto
- /ET > 60 GeV
- max(MT) > 80 GeV

Table 5: Background distribution in the finalized signal regions

Sample               eeτ            μμτ            eμτ
WZ                   3.1 ± 0.6      2.3 ± 0.5      4.7 ± 0.8
ZZ                   0.5 ± 0.3      0.4 ± 0.2      0.7 ± 0.4
ttbar + Z            0.0 ± 0.0      0.0 ± 0.0      0.0 ± 0.0
ttbar + W            0.0 ± 0.0      0.1 ± 0.0      0.1 ± 0.0
ttbar + WW           0.0 ± 0.0      0.0 ± 0.0      0.0 ± 0.0
Tribosons            0.3 ± 0.1      0.3 ± 0.0      0.5 ± 0.1
Σ irreducible        3.9 ± 0.7      3.1 ± 0.5      6.0 ± 0.9
ttbar                1.5 ± 0.8      0.6 ± 0.4      1.5 ± 0.6
Single t             0.0 ± 0.0      0.0 ± 0.0      0.3 ± 0.3
WW                   0.1 ± 0.1      0.1 ± 0.1      0.2 ± 0.1
Z+jets               5.0 ± 3.4      0.0 ± 0.0      0.0 ± 0.0
W+jets               12.2 ± 12.2    0.0 ± 0.0      0.2 ± 0.2
Σ reducible          18.8 ± 12.7    0.7 ± 0.4      2.2 ± 0.7
Σ SM                 22.8 ± 12.7    3.7 ± 0.7      8.3 ± 1.1
DGstau (160, 350)    2.0 ± 1.3      4.0 ± 2.5      11.2 ± 4.4
DGstau (210, 350)    2.4 ± 1.4      3.7 ± 1.7      4.2 ± 1.8
DGstau (250, 350)    2.3 ± 1.3      1.0 ± 0.8      1.5 ± 1.3
DGstau (300, 350)    0.5 ± 0.5      2.4 ± 1.5      2.1 ± 1.0

2.7 Exclusion limits

I generated exclusion plots showing the reach of each of these signal regions in both the DGstau grid and in the noslepwh grid. These plots can be seen in Figure 9. In the DGstau grid these signal regions are capable of excluding a large number of points and perform much better than the 3 light lepton signal regions shown in Figure 6. These exclusion plots do exhibit some interesting islands, which are most likely due simply to statistical fluctuations. Higher statistics samples are in the process of being generated but are not yet available.

In the noslepwh grid, only a subset of the samples have been fully generated, so this result is very preliminary. None of the points reach the required 1.64 significance level to be excluded in each individual signal region, although several are quite close. The sensitivity in the 2l1τ
channels is better than the sensitivity in the 3l channels, although not by the same factor as in the DGstau case, due to the fact that there are H → WW decays producing light leptons, as discussed previously.

Figure 9: Exclusion limits for each of the three signal regions in both the DGstau and noslepwh grids. Panels: (a) DGstau μμτ, (b) noslepwh μμτ, (c) DGstau eμτ, (d) noslepwh eμτ, (e) DGstau eeτ, (f) noslepwh eeτ

An additional feature of the three 2l1τ signal regions is that they are inherently mutually exclusive. Hence their results can be combined in order to increase the exclusion limits. In order to get an estimate for how strong these exclusion limits are, I calculated an estimated combined significance by modifying the Zn function to use Fisher's method to combine the p-value estimates from the Zn function for each grid point. While this method is not rigorous by any means, it does give an initial estimate for the combined grid. Exclusion reaches in both the DGstau grid and the noslepwh grid are shown for this combination in Figure 10. A more rigorous combination should be done using the proper ATLAS tools, although I didn't have time to learn how to do this myself. The Fisher's method combination appears to give quite conservative results. With this basic combination, a large portion of the DGstau grid is excluded and there are points which are very close to being excluded in the noslepwh grid.

Figure 10: Estimated combined exclusion limits in the (a) DGstau and (b) noslepwh grids using Fisher's method

2.8 Background Estimation

Once the signal regions were properly defined, I began looking into validation regions.
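The Fisher's method combination used for Figure 10 can be sketched directly: the statistic X = -2 Σ ln(p_i) follows a chi-squared distribution with 2k degrees of freedom under the null hypothesis, and for even degrees of freedom the survival function has a closed form. This is a minimal, self-contained sketch of the method, not the framework's implementation.

```python
import math
from statistics import NormalDist

def fisher_combine(p_values):
    """Combine independent p-values (each in (0, 1]) with Fisher's method.

    Uses the closed form of the chi-squared survival function for
    2k degrees of freedom: P(X > x) = exp(-x/2) * sum_{i<k} (x/2)^i / i!
    """
    k = len(p_values)
    x = -2.0 * sum(math.log(p) for p in p_values)
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= (x / 2.0) / i
        total += term
    return math.exp(-x / 2.0) * total

def combined_significance(p_values):
    """Convert the Fisher-combined p-value back to a one-sided Z value."""
    return NormalDist().inv_cdf(1.0 - fisher_combine(p_values))
```

With a single input the combined p-value is just the input itself, and three channels each at p = 0.05 combine to a Z value well above the 1.64 exclusion threshold, which is why combining the mutually exclusive channels strengthens the limits.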
The first region that I defined was one to check how well the Z+jets background is modelled, since it represents the bulk of the events before the /ET and max(MT) cuts. The validation region that I defined for this purpose involves opposite sign leptons with /ET < 50 GeV, using the data up to period E (13.02 fb⁻¹). The results of the simulation and data estimates are shown in Table 6.

In both the μμτ and eeτ channels there is a large disagreement between the data and the Monte Carlo simulations. In both cases, the disagreement is of identical factors. Interestingly, this same disagreement is not observed in the eμτ channel. This disagreement has actually been observed by several other members of the 3 lepton SUSY group and is currently under investigation. It appears that fake taus are not being modelled properly when a Medium selection is applied, since the disagreement is not present when a Light selection is used. Further work needs to be done to understand this, and there are several people in the group currently working on it. However, the relative agreement in the eμτ channel is promising and seems to indicate that real taus are being modelled reasonably well. If this is the case, then fixing this error will in fact decrease the background without affecting the signal, increasing the sensitivity of my signal regions.

Table 6: Background distribution in the validation regions

Sample            μμτ                  eμτ               eeτ
ttbar             90.55 ± 4.04         160.05 ± 5.38     73.46 ± 3.72
Zee + jets        0.00 ± 0.00          0.00 ± 0.00       27453.05 ± 359.65
Zmumu + jets      40230.65 ± 442.59    0.01 ± 0.01       0.00 ± 0.00
Ztautau + jets    130.54 ± 18.30       299.82 ± 29.42    100.42 ± 15.93
Single top        8.55 ± 1.63          11.67 ± 1.87      6.41 ± 1.31
ttbar + boson     1.31 ± 0.08          0.85 ± 0.07       1.07 ± 0.07
WZ                57.67 ± 2.10         14.16 ± 1.04      45.82 ± 1.88
WW                15.28 ± 0.81         28.25 ± 1.12      11.69 ± 0.72
ZZ                68.35 ± 2.70         4.85 ± 0.73       51.69 ± 2.32
W + jets          5.36 ± 3.74          51.32 ± 18.12     19.53 ± 9.78
Triboson          0.38 ± 0.03          0.54 ± 0.04       0.29 ± 0.03
Full simulation   40608.63 ± 443.02    571.54 ± 35.06    27763.45 ± 360.17
Data              27014                545               18463
Data/sim          0.665                0.954             0.665

2.9 Status

My signal regions have been accepted by the SUSY multilepton group and will be included in the upcoming analysis. I wrote up all of my work in the group ATLAS note, which was submitted to the Editorial Board before Christmas. The target completion date for this analysis is the Moriond conference, which will happen in early March. From my side, the signal regions are effectively frozen now, so the only major thing left to do is the background estimation, which is a group effort. I would like to see this project through and will help out with the background estimation and any other tasks that are required. The data will be unblinded once everything is complete in late February or early March, which should be quite exciting.

3 Conclusion

For my project I investigated adding tau leptons to the 3 lepton SUSY searches with the ATLAS detector. I set up a framework based off of the UC Irvine framework currently being used in these searches and modified it to include taus. I performed a study of tau identification methods for the samples of interest and discovered that using a Medium selection gives the best sensitivity.

Using this framework, I studied signal regions similar to those used in the 3 lepton search, targeting the same models. Due to the large number of fake taus compared to fake light leptons, and the fact that the 2l1τ signal is much smaller than the 3l signal to begin with in these models, adding taus did not help increase the sensitivity at all.

I then took a very different approach and began requiring same sign light leptons and an opposite sign tau. This final state is very uncommon in the backgrounds, and applying this selection cuts the background down by a very large factor. I found two models which previous searches have no sensitivity to but where this selection excels.
I optimized a selection for these grids and managed to find a signal region which excludes a large portion of the DGstau grid.

Background estimation using this signal region is in progress, although there are some discrepancies between data and Monte Carlo simulations in the Z+jets samples. These discrepancies are currently being investigated by several members of the ATLAS SUSY multilepton group and will hopefully be understood soon. My work has been accepted by the group and my signal regions are being included in the upcoming analysis of the full 2012 dataset, which will be completed by late February or early March.

4 Project Deliverables

4.1 List of Deliverables

The deliverable for this project is an analysis framework which includes the 2l1τ channel and can be used on the actual 2012 ATLAS data. This analysis framework has been completed and includes a set of tuned tau reconstruction algorithms and an optimized signal selection. I have produced several presentations and sets of slides, shared with the multilepton group, detailing what I have done with the framework and how exactly I have been performing tau reconstruction. My results have been successfully reproduced by other group members using separate frameworks.

The full background estimation has not yet been completed due to some unexplained processes occurring in the Z+jets channel. This is not only a problem for my analysis: it has affected the analyses of several other members of the ATLAS multilepton group, so there are several people now working to understand it. I have already set up validation regions which will be used once this background estimation is completed and which will assist with figuring out the source of this excess.

4.2 Financial Summary

The total cost of my project was on the order of $10 billion, which probably sets some sort of a record for Engineering Physics projects.
I had no direct costs, though, since I simply used the CERN and UBC ATLAS computing resources already in place, so this section of the report is not otherwise relevant.

4.3 Ongoing commitments by team members

My signal regions have been accepted by the ATLAS SUSY multilepton group, but I still plan on continuing work on this analysis. I will assist with the background estimation, which should be completed within a few weeks. I plan on helping out wherever I am needed in the analysis efforts after that is completed. I am continuing work in this field in graduate school, so the more experience I can get as an undergraduate the better off I am. The results of this analysis will be presented at a conference at the beginning of March, so all of the work will be done by then.

5 Recommendations

There are several things that could be done to improve this analysis. The first is to use higher statistics signal and background samples. The current DGstau grid that I was optimizing with has quite poor statistics, and generating double the points would make a significant difference. A request for this has been submitted. Improving the statistics in the W+jets sample would also be very useful, since there are currently events with weights of up to 12, which is extremely high. Unfortunately, generating these samples takes a lot of CPU time, so it is unrealistic to recommend improved statistics in all of the samples.

The background estimation needs to be completed properly, and the discrepancy in the Z+jets sample needs to be understood. This is already in progress and several people are working on it.

A proper combination of the signal regions should be done. In the later stages of the analysis, after a background estimation has been completed, this will be performed using proper statistical packages. These packages require a lot of CPU time since they use Monte Carlo toys, so this most likely won't happen until the data has been unblinded and the actual results are being calculated.
The Fisher's method combination gives a reasonably good estimate with low computational cost and is probably acceptable until then.

For future analyses on larger datasets, there are several things that would be worth investigating. The first is a same sign light lepton signal region such as μ±e±e± or e±μ±μ±. Such signal regions would be sensitive to the same models that the 2l1τ search applies to, and the background in these signal regions is extremely low. I investigated these signal regions for this analysis, but there are too few events with 21 fb⁻¹ of data and the results are heavily statistics limited. With a larger dataset these channels have the potential to significantly extend the exclusion limits.

Another interesting thing to look into would be applying a Boosted Decision Tree (BDT) or other multivariate techniques to separate signal from background. I trained a BDT on the DGstau grid, but due to the low statistics in the Monte Carlo samples the BDT was not very effective and ended up picking up more fluctuations in the Monte Carlo samples than real trends. If the statistics are improved then using a BDT would become feasible and could significantly improve the limits.

