ABSTRACT

Title of dissertation: NOVEL DARK MATTER PHENOMENOLOGY AT COLLIDERS
Kyle Wardlow, Doctor of Philosophy, 2015
Dissertation directed by: Professor Kaustubh S. Agashe, Department of Physics

While a suitable candidate particle for dark matter (DM) has yet to be discovered, it is possible one will be found by experiments currently investigating physics on the weak scale. If discovered on that energy scale, the dark matter will likely be producible in significant quantities at colliders like the LHC, allowing the properties of and underlying physical model characterizing the dark matter to be precisely determined. I assume that the dark matter will be produced as one of the decay products of a new massive resonance related to physics beyond the Standard Model, and using the energy distributions of the associated visible decay products, develop techniques for determining the symmetry protecting these potential dark matter candidates from decaying into lighter Standard Model (SM) particles and for simultaneously measuring the masses of both the dark matter candidate and the particle from which it decays.

NOVEL DARK MATTER PHENOMENOLOGY AT COLLIDERS

by Kyle Patrick Wardlow

Dissertation submitted to the Faculty of the Graduate School of the University of Maryland, College Park in partial fulfillment of the requirements for the degree of Doctor of Philosophy, 2015

Advisory Committee:
Professor Kaustubh Agashe, Chair/Advisor
Professor Zakaria Chacko
Professor Thomas Cohen
Professor Nicolas Hadley
Professor Massimo Ricotti

© Copyright by Kyle Patrick Wardlow, 2015

Dedication

I would like to dedicate this dissertation to my grandmother, Ramona Wardlow, for providing constant inspiration and enduring, unconditional love and support as I grew up. The memory of her generous spirit, kind heart, and diligent mind continues to provide an example worth striving toward.

Acknowledgments

The work done in this thesis is the result of the help, support, and collaboration of so many people. I would foremost like to thank my advisor, Kaustubh Agashe, for his assistance and patience throughout the years. His help has greatly expanded my knowledge of particle physics and I owe him much gratitude for all he has done in the past five years.

I would also like to thank Doojin Kim, for his assistance in all things phenomenology and research; his insight was instrumental in helping me understand the ins and outs of properly analyzing our data. I also thank Roberto Franceschini, for his expert advice and ability to simplify the seemingly difficult problems encountered in our research into tractable, manageable hurdles.

This work was supported in part by NSF grants Nos. PHY-0652363, PHY-1315155, and PHY-0968854 and the Maryland Center for Fundamental Physics. I would like to thank all the members of the center, and especially Tom Cohen, Zakaria Chacko, Ted Jacobson, and Raman Sundrum, for their insightful and instructive lectures and discussions over the years. I would like to express my gratitude to Evan Berkowitz for his immense help with Mathematica, programming, life, the universe, and everything. I would also like to express my thanks to all of the grad students in the center for their camaraderie and enthusiasm.

My family deserves no small praise for supporting me and having high expectations that I would succeed and get this far: my mother, for more reasons than I can tell, and my father and family for their constant belief in me. And finally, Karen, for her patience, love, support, and so much more.
Table of Contents

List of Tables
List of Figures
List of Abbreviations

1 Introduction
  1.1 Dark Matter and WIMPs
  1.2 Dark matter and Colliders
2 Energy Peaks and Stabilization Symmetry
  2.1 Introduction
  2.2 Theoretical observations on kinematics
    2.2.1 The peak of the energy distribution of a visible daughter
      2.2.1.1 Two-body decay
      2.2.1.2 Three-body decay
    2.2.2 The kinematic endpoint of the MT2 distribution
      2.2.2.1 Two-body decay, one visible and one invisible
      2.2.2.2 Three-body decay, one visible and two invisibles
  2.3 General Strategy to distinguish Z2 and Z3
  2.4 Application to b quark partner decays
  2.5 Conclusions
3 Mass extraction
  3.1 Introduction and general strategy of the mass measurement
  3.2 A template for the energy spectrum of a massive child particle
  3.3 "Event mixing" to estimate the combinatorial background
  3.4 Application to the gluino decay
    3.4.1 Signal process: gluino decay
    3.4.2 Backgrounds
      3.4.2.1 Standard Model backgrounds and event selection
      3.4.2.2 Combinatorial background and mixed event subtraction
  3.5 Mass measurement results and discussion
    3.5.1 Measurement of gluino and neutralino masses
    3.5.2 Study of systematic effects
    3.5.3 Improving the mass measurement using the mbb endpoint
  3.6 Summary and Conclusions
4 Conclusions
  4.1 Summary
A Event Mixing: Signal and background
Bibliography

List of Tables

2.1 Cross-sections of Z2 and Z3 signals and dominant BG
3.1 Gluino signal and dominant BG cross sections before and after cuts
3.2 Fit results for massless and massive templates for all mass slices
3.3 Conditions for over-, under-, and consistent estimation of the masses
3.4 Samples used to isolate the primary source of error in the template fitting

List of Figures

1.1 DM Discovery Channels
1.2 DM Decay Topology
2.1 Z2 and Z3 signals
2.2 Z3 separation from Z2 peak
2.3 Effect of ST cuts on the dominant Z2 and Z3 backgrounds
2.4 MT2 distributions post-cuts for Z2 and Z3
2.5 Bottom energy distributions for Z2 and Z3 after cuts with BG
3.1 Real three-body decay and effective two-body decay used to form distributions
3.2 Mixed event and correct gluino signal di-b-jet invariant mass and R distributions
3.3 Di-b energy distributions and ratio for mbb = 300 GeV, 700 GeV
3.4 Mixed event signal and background interference
3.5 Bin-by-bin ratio of Ebb from mixed event subtraction and the correct pairings
3.6 Goodness of fit comparison for massive and massless template
3.7 Fit results for massless and massive templates for m̄bb = 250 GeV and m̄bb = 650 GeV
3.8 Mass extraction using the E*_bb parameter extracted from the massless and massive template fits
3.9 Contour plot in the plane of the gluino and DM mass around their best fit mass values
3.10 Determination of the source of bias in the fit results
3.11 Contour plot of improved mass parameters using the invariant mass endpoint

List of Abbreviations

SM      Standard Model of particle physics
LHC     Large Hadron Collider
DM      Dark Matter
Λ-CDM   Lambda Cold Dark Matter Cosmological Model
TeV     Tera-electronvolt
GeV     Giga-electronvolt
MeV     Mega-electronvolt
MT2     Stransverse Mass
ISR     Initial State Radiation
BSM     Beyond the Standard Model
Z2      Cyclic group of order two
Z3      Cyclic group of order three
MCFP    Maryland Center for Fundamental Physics
NSF     National Science Foundation
DOE     Department of Energy

Chapter 1: Introduction

While it has been extraordinarily successful in providing an accurate description for an extremely broad range of observed phenomena, the Standard Model does not constitute a complete accounting of the physics of the universe. Some of the observational discrepancies and omissions apparent in the model can be addressed by the well-motivated addition of complementary extensions and modifications, such as Λ-CDM, a Big Bang cosmological model, but problems remain even after taking these into account: there is no provision for new physics above the electro-weak scale, leaving the Planck-weak hierarchy problem unresolved, and there are no reasonable candidate particles for dark matter (DM) [1, 2].

The fact that the Standard Model includes no new physics beyond the electro-weak scale is not necessarily problematic per se; however, the theory seems strangely fine-tuned without the addition of new particles between the weak and Planck scales. We know that a new description of physics must take over at the Planck scale, because it is the scale at which quantum gravitational fluctuations become too large to neglect, but according to our best knowledge of the universe as it currently stands, there are no new states with masses between the weak scale (m_Weak = 246 GeV) and the Planck scale (M_Pl ≈ 10^18 GeV). The mass of the Higgs particle, the scalar boson that breaks the electro-weak symmetry of the Standard Model and gives the fundamental elementary leptons their masses, is susceptible to physics at higher energy scales, and without new physics between the Planck and weak scales, would receive corrections to its mass of order the Planck scale.
Because the Planck scale is so large, the fact that the observed mass of the Higgs is on the order of the weak scale, m_H = 126 GeV, suggests that there is a very precise cancellation of these higher-order corrections to the Higgs mass. This is highly suggestive of new physics at or above the weak scale. Since the energies below the weak scale have been well studied, we therefore expect that new physics that protects the Higgs mass will come in at around the TeV scale and that the LHC will be able to probe these new phenomena. An interesting possibility is that this new physics will also include a weakly-interacting, massive particle (WIMP) that constitutes a viable candidate for dark matter.

The existence of dark matter has been solidly established by astrophysical observation: the rotation curves of objects in many observed galaxies cannot be explained without either modifying gravity or assuming that there is additional, invisible matter keeping the luminous matter in the orbits seen. Modifying gravity is at the present moment strongly disfavored - observations such as the Bullet Cluster put strong bounds on the extent to which gravitational interactions can be altered from what is expected in general relativity - and so 'dark' matter is the likely explanation for the observed phenomena [1]. Observations also force us to conclude that the dark matter is not baryonic in nature - it does not interact with electromagnetic radiation - indeed, it has only been observed gravitationally at present. Cosmic microwave background data also corroborate that dark matter is a significant portion of the mass-energy of the known universe; in fact, it constitutes approximately 26.8% of this mass-energy, making it about 5.5 times more plentiful than ordinary matter [3].

Dark matter is quite possibly one of the most interesting topics in particle physics at the present time - we have a clear signal of its existence and we have good reason to believe that the discovery of its underlying physical basis is within our grasp. We must therefore discuss new, non-gravitational methods of probing the properties of this dark matter. To that end, I will discuss how dark matter can be seen directly and indirectly, and potentially produced experimentally. After this, I will elaborate on why WIMPs are an interesting candidate for this dark matter, lay out how this kind of dark matter relates to collider experiments, and discuss the ways in which it can be seen and probed at the current generation of experiments. I will then quickly outline the remainder of the dissertation by focusing on some of the most interesting properties of this dark matter and how these specifically can be measured at the LHC if and when dark matter is produced there. In-depth discussion and results from my phenomenological investigations into the possibility of determining these properties will then follow in the body of the dissertation, and I will conclude with a summation of these results and comments on the outlook for future areas of phenomenological interest. This dissertation essentially follows work done with Kaustubh Agashe, Roberto Franceschini, Doojin Kim, and Sung-Woo Hong, and is based on that work in chronological order [4, 5].

Figure 1.1: The ways in which dark matter can be observed.

1.1 Dark Matter and WIMPs

Generally speaking, if dark matter interacts with normal matter at all, it can in principle be detected via those interactions.
Direct detection involves dark matter scattering off of normal matter, which in the context of a direct detection experiment usually means a large, uniform mass highly shielded from external background radiation and sensitive Cherenkov-detecting apparatuses to detect the energy transferred in this scattering. Indirect detection typically involves the astrophysical observation of regions of high dark matter density and looking for unexpected lines of radiation; these unexpected lines would imply the bipartite self-annihilation of the dark matter into visible particles (typically photons) and would be an easy way to access the mass of the dark matter particles. Production is the method of observation that we shall direct our focus toward, and typically involves dark matter particles being produced at collider experiments as a result of the high energy collisions there. These dark matter particles can be produced in a variety of ways, but one of the most interesting is when they are the result of the decay of some heavier, new physics state produced in the collider experiment; the dark matter must be produced in association with visible particles in order for the events to be of interest, because it is essentially invisible to the detection equipment at current colliders.

As mentioned above, WIMP dark matter is an especially interesting candidate for the dark matter observed in the universe; this is due to what is known as the "WIMP Miracle." The WIMP Miracle, briefly, is the fact that particles that have an interaction cross section with normal matter and masses of order the weak scale give approximately the observed dark matter relic density left over after the Big Bang and the period known as inflation in the early universe. This is tantalizing; as discussed above, we have strong reason to suspect that there is new physics just around the corner at the TeV scale, which I consider approximately equivalent to the weak scale. This means that whatever new physics might be discovered at, say, the LHC may also naturally bring with it dark matter candidates at no extra theoretical cost. This raises the exciting possibility that we will see signs of dark matter candidates being produced at the LHC, and it is for this reason that I restrict our focus here to collider dark matter phenomenology. In the investigations undertaken in the following, a technical distinction must be made: I motivate this research with the conspicuous coincidence of the WIMP miracle as it relates to dark matter, but our results are in fact more general and can be applied to other situations where there are invisible particles. I use the heuristic shortcut of labeling these invisible dark matter candidates as dark matter, because the hope is that if similar invisible particles are discovered, they would match the criteria we already know from cosmological observations and thus earn the appellation "dark matter."

There are many theoretical extensions of the Standard Model that give rise to these invisible particles; in fact, most are constructed so that they include this attractive feature. Supersymmetry (SUSY) is perhaps the most famous, where the particles known as neutralinos act as excellent WIMP dark matter when they are the lightest (stable) supersymmetric partner to the Standard Model [6]. In order to make this class of dark matter stable, it must be imbued with a conserved charge that prevents it from decaying to lighter SM final states; this is typically implemented in the form of an R-parity symmetry.
But there are yet more interesting and exotic models that also give rise to dark matter candidates - extra dimensional models can easily incorporate dark matter in the form of the Kaluza-Klein modes of neutral Standard Model particles. If the extra dimensions are warped, one can also ameliorate the Planck-weak hierarchy problem, which makes this class of models competitive with SUSY [7, 8]. Extra dimensional models are also often distinct from SUSY in the way that they protect their dark matter candidates (among other things, such as protons) from decaying to lighter SM particles - typically this is done by giving particles a charge based on their baryon number (gauging and then breaking the baryon number symmetry in a prescribed way), which results in a different stabilization symmetry for these models; we shall return to this later in this chapter and in Chapter 2 [9]. There are also models that include axion dark matter, which is related to the resolution of the strong CP problem, and additional generations of highly decoupled neutrinos, known as sterile neutrinos; however, I do not consider these here, and my general program will be to consider only WIMP dark matter.

1.2 Dark matter and Colliders

I will now consider some general phenomenological characteristics of the behavior of dark matter in collision events. First, in order for the dark matter to be "seen" in a collider experiment, it must paradoxically escape the detector without decaying to visible particles. For this reason, I have required that the dark matter considered in this dissertation be stable. Of course, we know that astrophysical dark matter must be stable at least on the order of the lifetime of the universe, so this is a well-justified assumption.

Now let us turn to what collision events involving dark matter look like. Naturally, in order to make any determination of the properties of these dark matter particles, they must first be discovered. There are several means of making a definitive discovery of dark matter, both in the context of detector design and construction, and in the discovery channel, i.e. direct vs. indirect detection vs. production, as has already been mentioned; I will assume in the following that a definitive discovery has already been made, and the results that follow are predicated upon that. In general, the signature of dark matter in a collision event is a large amount of missing momentum and energy - the dark matter flies through the detector leaving no direct trace of its presence. Because of this, it must be produced along with visible, SM particles; otherwise the "event" has no way of registering in the detector, as described in the following schematic equation:

SM + SM → SM + DM = SM + E/T    (1.1)

There are a variety of events that could have this E/T + SM final state; we will consider only those where the dark matter is produced in pairs, so that the charge that the dark matter carries under its stabilization symmetry is conserved. To wit:

SM + SM → SM + DM + DM̄

The dark matter pairs could be the result of the decay of some heavy intermediate state produced directly by the collision of partons - this channel would require that the pair be produced in association with some initial state radiation in order for it to be seen, and the ISR would give a handle on the missing energy and momentum.
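Concretely, the E/T appearing in eq. (1.1) is nothing more than the imbalance of the visible transverse momenta: it is reconstructed as the magnitude of the negative vector sum of the transverse momenta of all visible objects. The following minimal sketch illustrates this bookkeeping; the event content and variable names are invented purely for illustration and are not taken from any experiment's software.

```python
import numpy as np

# (pT [GeV], phi [rad]) of the visible objects reconstructed in one illustrative event
visible = [(120.0, 0.3), (95.0, 2.8), (40.0, -1.9)]

# vector sum of the visible transverse momenta
px = sum(pt * np.cos(phi) for pt, phi in visible)
py = sum(pt * np.sin(phi) for pt, phi in visible)

# the missing transverse momentum is minus this sum; its magnitude is the E/T of eq. (1.1)
met = np.hypot(px, py)
met_phi = np.arctan2(-py, -px)
print(f"E/T = {met:.1f} GeV, pointing at phi = {met_phi:.2f} rad")
```

Any invisible particles in the event contribute only through this imbalance, which is why they must recoil against visible activity (for instance ISR) in order to be registered at all.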
They could also be produced as the subsequent decay products of heavier new physics states that are also charged under the stabilization symmetry; this is the case that we take under consideration for the remainder of this dissertation. In this category of production, there are further subcategories related to the mass and energy of the pair-produced intermediate states:

• Some or all of the intermediate states are on-shell, which means that the precise decay topology plays an important role in the ultimate kinematics of the decay products.

• The intermediate states are all far off their mass shell(s), leading to contact-like interactions, meaning that the decay is effectively N-body, where N is the total number of decay products.

Of these, our focus is directed toward the latter. The former is interesting in its own right, and the author would direct the interested reader to [10, 11].

Figure 1.2: Typical dark matter-inclusive decay topology from a pair-produced new heavy resonance.

Now that the production channel of the dark matter has been sufficiently specified, let us consider which dark matter properties are of interest and accessible from the data available to collider experiments. Of paramount interest are the masses of not just the dark matter, but of all the particles involved in the interaction leading to the emission of the dark matter particles. Knowing the masses of these particles would give physicists a much better handle on the parameter space and scale of the new physics, and would help to better direct theoretical development of the nascent BSM theory that the discovery of such particles would necessitate. There are other parameters one would generally like to know, such as the spin, but one that is often overlooked is the stabilization symmetry. It may seem at first glance that accessing this information is extraordinarily difficult, but it is in fact quite simple in some cases to garner insight about this symmetry from simple kinematics. It is these two categories of determination - the stabilization symmetry and the masses of the dark matter and its originating particle - that will occupy the rest of our attention in the following.

Broadly, the stabilization symmetry directly affects how many particles a massive parent that is also charged under the symmetry will ultimately decay to. For R-parity-like (Z2) symmetries, the decays are typically two-body, with one invisible and one SM particle in the final state, while for baryon number-like symmetries (Z3), the final state of a massive parent is three-body, two particles of which are invisible [10]. In the second chapter, I will develop the theory behind how this affects the kinematic distributions of the visible particles in the decay, and I will show how the stabilization symmetry can be determined, using events simulated under a largely model-independent toy process, from the peak location of the energy distribution of the visible particles. In this chapter, I will assume that the visible particles are massless, which is critical to the simplicity of the result.

In the third chapter, I will generalize the result in chapter two to massive visible decay products and, considering a different decay topology, discuss how the mass of both the parent and the dark matter particle can be extracted from the energy distribution of the decay-wise sum of the energy of the visible decay products.
In order to do this, I simulate events for an example process - pair-produced gluinos decaying to two SM particles and one invisible each, and fit the data in the pair- wise energy sum distribution of the visible particles to a function that allows me to ultimately extract the mass parameters of both the parent and DM particle. In order to do so, the constructed energy data must first be treated for a combinatoric ambiguity - there is generally more than one way to combine the particles in the overall multi-body final state, and without removing the distortion to distribution that is the inevitable result of this, one cannot extract any useful information. The critical take-away is the use of the energy distribution in both of these examples: typically, Lorentz invariants are used to extract information from parti- cle collision event samples, and it is remarkable (and truly novel) that the energy distribution proves to be so useful in this regard. I will close with some comments on this and on the outlook for further uses of novel kinematic observables, such as the energy distribution, going into the future. 11 Chapter 2: Energy Peaks and Stabilization Symmetry 2.1 Introduction As mentioned previously, there are many motivations for extensions to the Standard Model (SM) of particle physics at the TeV scale; perhaps the most im- portant among these motivations are the necessity of a fundamental mechanism for electroweak symmetry breaking (EWSB) and a resolution of the related Planck-weak hierarchy problem. In such extensions of the SM, there generally exists a new par- ticle at or below the TeV scale which cancels the quadratic divergence of the Higgs mass from the top quark loop in the SM. Such a particle is typically a color triplet with a significant coupling to the SM top quark, and has an electric charge of +2/3. Following the literature, I will generically call such particles “top partners” and denote them by T ′ 1. These top partners often come along with bottom partners, which I similarly denote as B′. The typical reason for this is that the left-handed (LH) top quark is in a doublet of SU(2)L with the LH bottom quark. I then ex- pect top and bottom quark-rich events from the production and decay of these new particles at the LHC. Because the aforementioned extensions also generally contain 1In this chapter this name applies as long as the partners have interactions with the relevant SM particle, even if the partners do not directly cancel the Higgs mass divergence. 12 candidate particle for dark matter [1], in the form of WIMPs or other more ex- otic candidates [2], and these scenarios will often involve heavier new particles that are charged under both the symmetry that keeps the DM stable and the SM gauge group. These new particles should then be copiously produced at the LHC and must decay into DM particles and SM states, given that the latter are not charged under the DM stabilization symmetry. Thus I expect this new physics to give rise to events at the LHC with large missing energy, in association with jets, leptons, and photons. We therefore explore scenarios which employ extensions that have the above characteristics; In this case, it is likely that the top and bottom partners are also charged under the DM stabilization symmetry. These extensions will then result in top and bottom quark-rich events at the LHC in which the new particles give rise to missing energy. 
There are many examples of these extensions [6–8], but in essence I find that a search for events exhibiting the characteristics of having a top or bottom partner and missing energy should be a top priority for the LHC. Once the existence of new physics has been established, the most urgent issue that will then have to be addressed is the determination of the details of the dynam- ics underlying this new physics. In particular, it will be crucial to determine the properties of the top and bottom partners using as model-independent an approach as possible. This detailed study would also offer major hints regarding the resolu- tion of the Planck-weak hierarchy problem. For largely model-independent work on fermionic bottom and top partners’ discovery potential at the LHC see Refs. [12,13] and for the determination of generic partners’ spin and mass see Refs. [14]. However, I remark that in this literature it has been assumed that the top or 13 bottom partner decays into only one DM particle, which is expected when the DM is stabilized by a Z2 symmetry. While Z2 is perhaps the simplest DM stabilization symmetry, it is by no means the only possibility: see references [15,16]. The point, especially in the case of such non-Z2 symmetries, is that more than one DM can appear in the decays of top and bottom (and other SM) partners: for example, two DM are allowed with Z3 as in [15], but not with Z2. I believe that a truly model-independent approach to the determination of the top and bottom partners’ properties should include this possibility of multiple DM in addition to different spins for the top and bottom partners. With this goal in mind, I aim to devise a strategy that uses experimental data to determine the number of DM in these decays and accordingly to identify the stabilization symmetry of the dark matter. Below, I outline a general strategy and then apply it to the specific case of bottom partner decays. I concentrate on the distinction between two general decay topologies: A→ bX and A→ bX Y (2.1) where b is a (single) SM visible particle, X and Y are two potentially different invisible particles and A is a heavier particle that belongs to the new physics sector. In the context of the models that I have discussed, A is the heavy particle charged under the DM stabilization symmetry and the particles labeled X and Y are the DM particles. In particular, I focus on scenarios where the two decays are mutually exclusive, i.e. where the stabilization symmetry and the charges of the involved particles are such that one decay can happen and not the other. This mutual 14 exclusivity can be the case with both Z2 and Z3 as the stabilization symmetry. To wit, if the SM particle b is not charged under the stabilization symmetry and all the new particles A,X, Y are, then the Z2 symmetry allows only for two-body decays of A. On the other hand, both the two and three-body decays of A are allowed by the Z3 symmetry by itself. However, I assume that other considerations forbid (or suppress) the two-body decay in this model. I choose to concentrate on this realization of the Z3 -symmetric model in part because this is the case that cannot be resolved using the results of previous work on the DM stabilization symmetry. This is the case, for instance, in Ref. [10], where purely two-body decays of A could be distinguished from mixed two- and three-body decays, but not from the purely three-body decays that I am now taking into consideration. 
In this chapter, I develop a method based primarily on the features of the energy distribution of the visible final state b to differentiate between the cases of purely two- and three-body decays. I remark that this is the first use of the energy distribution of the decay products to study the stabilization symmetry of the DM. In fact, other work has typically focused on using Lorentz-invariant quantities or quantities that are invariant under boosts along the beam direction of the collider. This is the case for Refs. [10, 11, 17, 18]. In particular, Refs. [10, 11, 17] used the endpoints of kinematic distributions to probe the stabilization symmetry of the DM, whereas our method relies quite directly on peak measurements and only marginally on endpoint measurements. Additionally, I note that the methods developed in Refs. [11, 18] apply only to the case where there is more than one visible particle per decay. Therefore, our result for cases where there is only one visible particle per decay is complementary to the results of the above references.

Our basic strategy is explained in the following. It relies on a new result: assuming massless visible decay products and the unpolarized production of the mother particles, I will show that in a three-body decay the peak of the observed energy of a massless decay product is smaller than its maximum energy in the rest frame of the mother. This observation can be used in conjunction with a previously observed kinematic characteristic of the two-body decay to distinguish the stabilization symmetry of the DM. Specifically, it was shown in Refs. [19, 20] that for an unpolarized mother particle, the peak of the laboratory frame energy distribution of a massless daughter from a two-body decay coincides with its (fixed) energy in the rest frame of the mother.

Clearly, to make use of these observations in distinguishing two- from three-body decays, I need to measure the "reference" values of the energy that are involved in these comparisons. Moreover, the procedure that is to be used to obtain this reference value from the experimental data should be applicable to both two- and three-body decays. To this end, I find that when the mother particles are pair produced, as happens in hadronic collisions, the MT2 variable can be used. Thus, these observations make counting the number of invisible decay products possible by looking only at the properties of the single detectable particle produced in the decay. However, it is worth noting that our proof of the above assertion regarding the kinematics of two- and three-body decays is only valid with a massless visible daughter and an unpolarized mother. Therefore, care must be taken when discussing cases with a massive daughter or a polarized mother.

To illustrate the proposed technique, I will study how to distinguish between pair-produced bottom partners each decaying into a b quark and one DM from pair-produced bottom partners each decaying into a b quark and two DM particles at the LHC.² As discussed above, a bottom partner appears in many motivated extensions to the SM, so I posit that this is a relevant example. Furthermore, I remark that the b quark is relatively light compared to the expected mass of the bottom partner, so that our theoretical observation for massless visible particles is expected to apply. Additionally, the production of bottom partners proceeds dominantly via QCD and is thus unpolarized. In this sense, the example of a bottom partner is well-suited to illustrate our technique.
Finally, it is known that the backgrounds to the production of bottom partners may be rendered more easily manageable than those of top partners [12], which would be a well-motivated alternative example.

Specializing to the example of bottom partners, our goal then is to distinguish the two processes illustrated in Figure 2.1 at the collider:

pp → B′ B̄′ → b b̄ χ χ   for Z2 ,    (2.2)
pp → B′ B̄′ → b b̄ χ χ χ̄ χ̄   for Z3 ,    (2.3)

where χ is an invisible particle and a bar denotes anti-particles. In these processes, I assume that there are no on-shell intermediate states. I consider the case where the decay into two χ can happen only if the stabilization symmetry of the DM is Z3, while the decay into one χ is characteristic of the Z2 case. As said before, I focus on this scenario because it has thus far been left uninvestigated by previous studies on the experimental determination of the stabilization symmetry of the dark matter [10, 11].

² To the best of our knowledge, none of the earlier work on distinguishing DM stabilization symmetries at colliders has studied this specific case.

Figure 2.1: The signal processes of interest for Z2 (left panel) and Z3 (right panel) stabilization symmetry of the dark matter particle χ.

From here, I organize our findings as follows: In Section 2.2, I review the current theory and I derive new results about the energy spectrum of the decay products of two- and three-body decays. These are then the foundation of the general technique presented in Section 2.3 for differentiating decays into one DM particle from those into two DM particles. In Section 2.4, I apply this technique to the specific case of bottom partners at the LHC. I conclude in Section 2.5.

2.2 Theoretical observations on kinematics

I begin by reviewing the relevant theoretical observations about the kinematics of two-body and three-body decays. Specifically, I review the remarks on two-body decays described in [19]. I then generalize this result to three-body decay kinematics and study the features that distinguish it from two-body decay kinematics. I also briefly review applications of the kinematic variable MT2 to two-body and three-body decays and discuss the distinct features of the two different decay processes [10, 21]. For the two-body decay, I assume that a heavy particle A decays into a massless visible daughter b and another daughter X which can be massive and invisible:

A → b X .    (2.4)

On the other hand, for a three-body decay the heavy particle A decays into particles b, X and another particle Y:

A → b X Y .    (2.5)

Like particle X, particle Y can also be massive and invisible, but it is not necessarily the same species as particle X.

2.2.1 The peak of the energy distribution of a visible daughter

2.2.1.1 Two-body decay

It is well known that the energy of particle b in the rest frame of its mother particle A is fixed, which implies a δ function-like distribution, and the simple analytic expression for this energy can be written in terms of the two mass parameters m_A and m_X:

E_b^* = \frac{m_A^2 - m_X^2}{2 m_A} .    (2.6)

Typically, the mother particle is produced in the laboratory frame at colliders with a boost that varies with each event. Since the energy is not an invariant quantity, it is clear that the δ function-like distribution for the energy as described in the rest frame of the mother is smeared as I go to the laboratory frame. Thus, naively it seems that the information encoded in eq. (2.6) might be lost or at least not easily accessed in the laboratory frame.
Nevertheless, it turns out that such information is retained. I denote the energy of the visible particle b as measured in the laboratory frame as E_b. Remarkably, the location of the peak of the laboratory frame energy distribution is the same as the fixed rest-frame energy given in eq. (2.6):

E_b^{\rm peak} = E_b^* ,    (2.7)

as was shown in [19, 20]. Let us briefly review the proof of this result while looking ahead to the discussion of the three-body case. As mentioned before, the rest-frame energy of particle b must be Lorentz-transformed. The energy in the laboratory frame is given by

E_b = E_b^* \gamma (1 + \beta \cos\theta^*) = E_b^* \left(\gamma + \sqrt{\gamma^2 - 1}\,\cos\theta^*\right) ,    (2.8)

where γ is the Lorentz boost factor of the mother in the laboratory frame, θ* defines the angle between the emission direction of the particle b in the rest frame of the mother and the direction of the boost β, and where I have used the relationship γβ = √(γ² − 1). If the mother particle is produced unpolarized, i.e., it is either a scalar particle or a particle with spin produced with equal likelihood in all possible polarization states, the probability distribution of cos θ* is flat, and thus so is that of E_b. Since cos θ* varies between −1 and +1 for any given γ, the shape of the distribution in E_b is simply given by a rectangle spanning the range

E_b \in \left[ E_b^* \left(\gamma - \sqrt{\gamma^2 - 1}\right),\; E_b^* \left(\gamma + \sqrt{\gamma^2 - 1}\right) \right] .    (2.9)

It is crucial to note that the lower and upper bounds of the above-given range are always smaller and greater, respectively, than E_b = E_b^* for any given γ, so that E_b^* is covered by every single rectangle. As long as the distribution of the mother particle boost is non-vanishing in a small region near γ = 1, E_b^* is the only value of E_b to have this feature. Furthermore, because the energy distribution is flat for any boost factor γ, no other energy value has a larger contribution to the distribution than E_b^*. Thus, the peak in the energy distribution of particle b is unambiguously located at E_b = E_b^*.

The existence of this peak can be understood formally. From the fact that the differential decay width in cos θ* is constant, I can derive the differential decay width in E_b for a fixed γ as follows:

\frac{1}{\Gamma}\frac{d\Gamma}{dE_b}\bigg|_{{\rm fixed}\ \gamma} = \frac{1}{\Gamma}\frac{d\Gamma}{d\cos\theta^*}\frac{d\cos\theta^*}{dE_b}\bigg|_{{\rm fixed}\ \gamma} = \frac{1}{2 E_b^* \sqrt{\gamma^2 - 1}}\,\Theta\!\left[\frac{E_b}{E_b^*} - (\gamma - \beta\gamma)\right]\Theta\!\left[-\frac{E_b}{E_b^*} + (\gamma + \beta\gamma)\right] ,    (2.10)

where the two Θ are the usual Heaviside step functions, which here merely define the range of E_b. To obtain the full expression for any given E_b, one should integrate over all γ factors contributing to this E_b. Letting g(γ) denote the probability distribution of the boost factor γ of the mother particles, the normalized energy distribution f_2-body(E_b) can be expressed as the following integral:

f_{2\text{-body}}(E_b) = \int_{\frac{1}{2}\left(\frac{E_b}{E_b^*} + \frac{E_b^*}{E_b}\right)}^{\infty} d\gamma\, \frac{g(\gamma)}{2 E_b^* \sqrt{\gamma^2 - 1}} .    (2.11)

The lower limit in the integral can be computed by solving the following equation for γ:

E_b = E_b^* \left( \gamma \pm \sqrt{\gamma^2 - 1} \right) ,    (2.12)

with the positive (negative) signature being relevant for E_b ≥ E_b^* (E_b < E_b^*). I can also calculate the first derivative of eq. (2.11) with respect to E_b as follows:

f'_{2\text{-body}}(E_b) = -\frac{1}{2 E_b^* E_b}\,{\rm sgn}\!\left(\frac{E_b}{E_b^*} - \frac{E_b^*}{E_b}\right) g\!\left(\frac{1}{2}\left(\frac{E_b}{E_b^*} + \frac{E_b^*}{E_b}\right)\right) .    (2.13)

The solutions of f'_2-body(E_b) = 0 give the extrema of f_2-body(E_b), and given the expression for f'_2-body(E_b) in eq. (2.13), these zeros originate from those of g(γ). For practical purposes, one can take g(γ) to be non-vanishing for particles produced at colliders for any finite value of γ greater than 1.³ As far as zeros are concerned, two possible cases arise for g(1) (corresponding to E_b = E_b^*). If it vanishes, f'_2-body(E_b = E_b^*) ∝ g(1) = 0, which implies that the distribution has a unique extremum at E_b = E_b^*. If g(1) ≠ 0, f'_2-body(E_b) has an overall sign change at E_b = E_b^*. As a result, the distribution has a cusp and is concave-down at E_b = E_b^*. Moreover, the function f_2-body(E_b) has to be positive to be physical, and has to vanish as E_b approaches either 0 or ∞, which is manifest from the fact that in those two limits the definite integral in eq. (2.11) is trivial. Combining all of these considerations, one can easily see that the point E_b = E_b^* is necessarily the peak value of the distribution in both cases.

³ It must be noted that due to the finite energy of the collider, there is a kinematic upper limit for the boost factor γ of the heavy mother particles. However, this kinematic limit is usually very large and can effectively be taken as infinite.
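As a quick numerical illustration of the result just derived (a sketch of my own, not code used in the dissertation), the following Monte Carlo draws a toy boost distribution g(γ) and a flat cos θ*, and confirms that the laboratory-frame energy histogram peaks at E_b^*; the masses and the exponential form of g(γ) are assumptions made purely for illustration.

```python
import numpy as np

m_A, m_X = 800.0, 100.0                        # illustrative masses in GeV
E_star = (m_A**2 - m_X**2) / (2.0 * m_A)       # rest-frame energy of b, eq. (2.6)

rng = np.random.default_rng(1)
n = 1_000_000
gamma = 1.0 + rng.exponential(scale=0.7, size=n)   # toy g(gamma), non-vanishing near gamma = 1
beta = np.sqrt(1.0 - 1.0 / gamma**2)
cos_theta = rng.uniform(-1.0, 1.0, size=n)          # flat for an unpolarized mother

E_lab = E_star * gamma * (1.0 + beta * cos_theta)   # eq. (2.8)

hist, edges = np.histogram(E_lab, bins=200, range=(0.0, 4.0 * E_star))
peak = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
print(f"E*_b = {E_star:.1f} GeV, histogram peak ~ {peak:.1f} GeV")
```

Changing the scale or the shape of the assumed g(γ) moves the width and the tails of the distribution but not the location of the peak, which is the content of eq. (2.7).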
2.2.1.2 Three-body decay

I now generalize the above argument to three-body decays. I denote the energy of the visible particle b measured in the rest frame of the mother particle A as Ē_b. I also denote the normalized rest-frame energy distribution of particle b as h(Ē_b). In the two-body decay, this rest-frame energy is single-valued (see eq. (2.6)), and so the corresponding distribution h(Ē_b) was trivially given by a δ-function. However, when another decay product is introduced, for instance, particle Y in eq. (2.5), then the energy of particle b is no longer fixed, even in the mother's rest frame: h(Ē_b) ≠ δ(Ē_b − E_b^*). Although the detailed shape of this rest-frame energy distribution is model-dependent, the kinematic upper and lower endpoints are model-independent. Since particle b is assumed massless, the lower endpoint corresponds to the case where energy-momentum conservation is satisfied by particles X and Y alone. On the other hand, the upper endpoint is obtained when the invariant mass of X and Y equals m_X + m_Y, which corresponds to the situation where X and Y are produced at rest in their overall center-of-mass frame. Thus, I have

\bar{E}_b^{\min} = 0 ,    (2.14)
\bar{E}_b^{\max} = \frac{m_A^2 - (m_X + m_Y)^2}{2 m_A} .    (2.15)

For any fixed γ, the differential decay width in the energy of particle b in the laboratory frame is no longer a simple rectangle, due to the non-trivial h(Ē_b). For any specific laboratory frame energy E_b, contributions should be taken from all relevant values of Ē_b and weighted by h(Ē_b). This can be written as

\frac{1}{\Gamma}\frac{d\Gamma}{dE_b}\bigg|_{{\rm fixed}\ \gamma} = \int_{\bar{E}_b^<}^{\bar{E}_b^>} d\bar{E}_b\, \frac{h(\bar{E}_b)}{2 \bar{E}_b \sqrt{\gamma^2 - 1}} ,    (2.16)

with the integration limits

\bar{E}_b^< = \max\left[ \bar{E}_b^{\min},\; \frac{E_b}{\gamma + \sqrt{\gamma^2 - 1}} \right] = \frac{E_b}{\gamma + \sqrt{\gamma^2 - 1}} ,    (2.17)
\bar{E}_b^> = \min\left[ \bar{E}_b^{\max},\; \frac{E_b}{\gamma - \sqrt{\gamma^2 - 1}} \right] ,    (2.18)

and with E_b running from 0 to Ē_b^max (γ + √(γ² − 1)). Again, since the visible particle is assumed massless, Ē_b^min is zero and so the second equality in eq. (2.17) holds trivially. Finding an analytic expression for the location of the peak is difficult because of the model-dependence of h(Ē_b), and it follows that the precise location of the peak is also model-dependent. Nevertheless, I can still obtain a bound on the position of the peak for fixed γ.

Suppose that I am interested in the functional value of the energy distribution at a certain value of E_b in the laboratory frame; according to the integral representation given above, the relevant contributions to this E_b come from a range of rest-frame energies which go from Ē′_b to Ē″_b, where these are defined by

\bar{E}'_b \left(\gamma + \sqrt{\gamma^2 - 1}\right) = E_b ,    (2.19)
\bar{E}''_b \left(\gamma - \sqrt{\gamma^2 - 1}\right) = E_b .    (2.20)

Each energy contributes with a weight described by h(Ē_b), as implied by eq. (2.16). Let us assume that Ē″_b = Ē_b^max and denote the corresponding energy in the laboratory frame as E_b^limit, given by

E_b^{\rm limit} = \bar{E}_b^{\max} \left(\gamma - \sqrt{\gamma^2 - 1}\right) .    (2.21)

From these considerations, it follows that all rest-frame energies in the range from Ē′_b = E_b^limit / (γ + √(γ² − 1)) to Ē″_b = Ē_b^max contribute to the chosen laboratory frame energy E_b^limit. On the other hand, any laboratory frame energy greater than E_b^limit receives contributions only from Ē′_b > E_b^limit / (γ + √(γ² − 1)) up to Ē″_b = Ē_b^max; the relevant range of rest-frame energy values shrinks, so that the peak cannot exceed E_b^limit:

E_b^{\rm peak}\big|_{{\rm fixed}\ \gamma} < \bar{E}_b^{\max} \left(\gamma - \sqrt{\gamma^2 - 1}\right) \le \bar{E}_b^{\max} \quad \text{for any fixed } \gamma .    (2.22)

In order to ensure that the first inequality holds even for γ = 1, I assume in the last equation that h(Ē_b^max) = 0, which is typically the case for a three-body decay.
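The bound of eq. (2.22) can be checked numerically in the same spirit. The sketch below (again my own illustration, not taken from the dissertation) builds the rest-frame spectrum h(Ē_b) for the special case of a constant matrix element (pure phase space) with two equal-mass invisible daughters, boosts it with the same toy g(γ) as before, and verifies that the laboratory-frame peak lies below Ē_b^max.

```python
import numpy as np

m_A, m_chi = 800.0, 100.0
E_max = (m_A**2 - (2.0 * m_chi)**2) / (2.0 * m_A)     # eq. (2.15) with m_X = m_Y = m_chi

rng = np.random.default_rng(2)
n = 1_000_000

# accept-reject sampling of h(E) for a constant matrix element:
# the phase-space weight is E times the momentum of X in the X-Y rest frame
E_try = rng.uniform(0.0, E_max, size=n)
m2_XY = m_A**2 - 2.0 * m_A * E_try                    # invariant mass squared of the X-Y pair
weight = E_try * np.sqrt(np.clip(m2_XY - 4.0 * m_chi**2, 0.0, None))
E_rest = E_try[rng.uniform(0.0, weight.max(), size=n) < weight]

# boost to the laboratory frame with a toy g(gamma), isotropic in the mother rest frame
gamma = 1.0 + rng.exponential(scale=0.7, size=E_rest.size)
beta = np.sqrt(1.0 - 1.0 / gamma**2)
cos_theta = rng.uniform(-1.0, 1.0, size=E_rest.size)
E_lab = E_rest * gamma * (1.0 + beta * cos_theta)

hist, edges = np.histogram(E_lab, bins=200, range=(0.0, 2.0 * E_max))
peak = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
print(f"E_max = {E_max:.1f} GeV, laboratory-frame peak ~ {peak:.1f} GeV")
```

In this pure phase-space example the rest-frame spectrum vanishes at Ē_b^max, i.e. h(Ē_b^max) = 0, so the first inequality of eq. (2.22) holds even for γ = 1, as noted above.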
In order to obtain the shape of the energy distribution of particle b in the laboratory frame, all relevant values of γ should be integrated over, as with the two-body kinematics in the previous section. Hence, the laboratory frame distribution reads

f_{3\text{-body}}(E_b) = \frac{1}{\Gamma}\frac{d\Gamma}{dE_b} = \int d\gamma\, g(\gamma) \int_{\bar{E}_b^<}^{\bar{E}_b^>} d\bar{E}_b\, \frac{h(\bar{E}_b)}{2 \bar{E}_b \sqrt{\gamma^2 - 1}} .

The event selection of eq. (2.36) consists of the following requirements:

no leptons with p_{T,l} > 20 GeV for l = e, µ, τ ,    (2.36a)
2 b-tagged jets with |η_b| < 2.5 and p_{T,b1} > 100 GeV, p_{T,b2} > 40 GeV ,    (2.36b)
E/T > 300 GeV ,    (2.36c)
S_T > 0.4 ,    (2.36d)
f > 0.3 ,    (2.36e)
Δφ_min(E/T, b_i) > 0.2 rad for all the selected b-jets b_i .    (2.36f)

Note that our cuts are of the same sort used in experimental searches for new physics in final states with large E/T, zero leptons, and jets including one or more b-jets (see, for instance, [25]). However, notice that in our analysis I privilege the strength of the signal over the statistical significance of the observation. As already mentioned, I imagine this investigation being carried out after the initial discovery of a B′ has taken place. Hence, I favor enhancing the signal to better study the detailed properties of the interaction(s) of B′. For this reason, I cut more aggressively on E/T and S_T than in experimental searches and other phenomenological literature focusing on the discovery of B′s (see, for example, [12]).

I consider quarks separated by ΔR > 0.7 as jets. With this as our condition on jet reconstruction, the cuts of eq. (2.36) can be readily applied to the signals and to the Z + bb̄ background; the resulting cross-sections are shown in Table 2.1. These cross-sections are computed from samples of events obtained using the Monte Carlo event generator MadGraph5 v1.4.7 [26] and the parton distribution functions CTEQ6L1 [27]. For the sake of completeness, I specify that in generating these event samples I assumed a fermionic B′ and a weakly interacting scalar χ. However, as already stressed, I anticipate that different choices of spin for these particles will not significantly affect our final result, because the production via QCD gives rise to an effectively unpolarized sample of b quark partners.

Cut                                        Z2 (B′ → bχ)    Z3 (B′ → bχχ)    Z + bb̄ (Z → νν̄)
No cuts                                    159.75           159.75            –
Precuts                                    139.89           136.73            2927
p_T^j1 > 100 GeV, p_T^j2 > 40 GeV          139.64           133.76            971.9
E/T > 300 GeV                              101.73           69.01             19.93
f > 0.3                                    89.66            65.21             19.40
Δφ_min > 0.2                               88.95            64.31             18.81
S_T > 0.4                                  30.03            16.07             1.96
2 b-tagged jets                            13.29            7.18              0.87

Table 2.1: Cross-sections in fb of the signals and the dominant background Z + bb̄ after the cuts of eq. (2.36). The mass spectrum for the signals is m_B′ = 800 GeV and m_χ = 100 GeV. The line "No cuts" is the inclusive cross-section of the signal. The line "Precuts" gives the cross-section after the cuts E/T > 60 GeV, p_{T,b} > 30 GeV, η_b < 2.5, ΔR_bb > 0.7 that are imposed solely to avoid a divergence in the leading-order computation of the background. In the last line, the rate of tagging b quarks is assumed to be 66% [28].
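For concreteness, the selection of eq. (2.36) can be summarized as a simple per-event filter. The sketch below is my own illustration rather than the analysis code actually used; the event record is a hypothetical dictionary, and S_T, f, and Δφ_min are assumed to be precomputed for each event as defined in the text.

```python
def passes_selection(event):
    """Apply the requirements of eq. (2.36) to a hypothetical event record."""
    # (2.36a): veto events with an identified e, mu or tau above 20 GeV
    if any(lep["pt"] > 20.0 for lep in event["leptons"]):
        return False
    # (2.36b): two central b-tagged jets with staggered pT thresholds
    bjets = sorted(event["b_jets"], key=lambda j: j["pt"], reverse=True)
    if len(bjets) < 2 or any(abs(j["eta"]) > 2.5 for j in bjets[:2]):
        return False
    if bjets[0]["pt"] < 100.0 or bjets[1]["pt"] < 40.0:
        return False
    # (2.36c)-(2.36e): missing transverse momentum and the S_T and f requirements
    if event["met"] < 300.0 or event["S_T"] < 0.4 or event["f"] < 0.3:
        return False
    # (2.36f): minimum azimuthal separation between E/T and each selected b-jet
    return event["dphi_min_met_b"] > 0.2
```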
The estimate of the reducible backgrounds requires more work, as it is particularly important to accurately model the possible causes that make pp → tt̄ → bb̄ + X and pp → W± + bb̄ a background to our 2b + E/T signal. In fact, these processes have larger cross sections than Z + bb̄. However, they also typically give rise to extra leptons or extra jets with respect to our selection criteria in eq. (2.36). Therefore, in order for us to consider them as background events, it is necessary for the extra leptons or jets to fail our selection criteria. Accordingly, the relevant cross-section for these processes is significantly reduced compared to the total. In fact, I find that tt̄ and W±bb̄ are subdominant background sources compared to Z + bb̄. In what follows, I describe how I estimated the background rate from tt̄ and W±bb̄.

An accurate determination of the proportion of tt̄ and W±bb̄ background events that pass the cuts in eq. (2.36) depends on the finer details of the detector used to observe these events. However, the most important causes for the extra jets and leptons in the reducible backgrounds to fail our jet and lepton identification criteria can be understood at the matrix element level. I estimate the rate of the reducible backgrounds by requiring that, at the matrix element level, a suitable number of final states from the tt̄ and W + bb̄ production fail the selections of eq. (2.36) for one of the following reasons:

• the lepton or quark is too soft, i.e., p_{T,l} < 20 GeV, p_{T,j} < 30 GeV;

• or the lepton or quark is not central, i.e. |η_{l,j}| > 2.5.

Additionally, when any quark or lepton is too close to a b quark, I consider them as having been merged by the detector, and the resulting object is counted as a b quark (i.e., ΔR_bl < 0.7, ΔR_bj < 0.7); if any light quark or lepton is too close to a light jet, they are likewise merged, and the resulting object is counted as a light quark (i.e., ΔR_jl < 0.7, ΔR_jj < 0.7). In the latter case, the light "jet" resulting from a merger must then also satisfy the p_T and η criteria given above for going undetected.
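A minimal sketch of these matrix-element-level criteria is given below. It is my own illustration rather than the actual code used for the estimate; the object records and the ΔR helper are assumed, and the kinematics of a merged object are approximated by those of the extra parton or lepton itself.

```python
import math

def delta_R(o1, o2):
    """Angular separation between two objects given as dicts with 'eta' and 'phi'."""
    dphi = abs(o1["phi"] - o2["phi"])
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return math.hypot(o1["eta"] - o2["eta"], dphi)

def goes_undetected(extra, b_quarks, light_jets):
    """Does an extra lepton or light quark fail the identification criteria listed above?"""
    # merged with a nearby b quark: absorbed into the b-jet, never counted separately
    if any(delta_R(extra, b) < 0.7 for b in b_quarks):
        return True
    # merged with a nearby light jet: the merged object is treated as a light quark
    is_lepton = extra.get("is_lepton", False)
    if any(delta_R(extra, j) < 0.7 for j in light_jets):
        is_lepton = False
    # otherwise the object escapes only if it is too soft or too forward
    pt_min = 20.0 if is_lepton else 30.0
    return extra["pt"] < pt_min or abs(extra["eta"]) > 2.5
```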
Using our method to estimate the results on the backgrounds in Ref. [12], the analysis of which was carried out with objects reconstructed at the detector level, I find that our estimates agree with Ref. [12] within a factor of two. Because I successfully captured the leading effect, I did not feel the necessity of pursuing detector simulations in our analysis.

Estimating the reducible background after the selections in eq. (2.36), I find that tt̄ and W + bb̄ are subdominant compared to Z + bb̄. The suppression of the reducible backgrounds, and in particular of tt̄, comes especially from the combination of the S_T and E/T cuts. This is shown in Fig. 2.3, where I plot the E/T distributions of the three backgrounds under different S_T cuts: S_T > 0, S_T > 0.2, and the cut S_T > 0.4, which is used in our final analysis. Clearly, one can see that for an E/T as large as our requirement in eq. (2.36), the dominant background is Z + bb̄, and that, in particular, the tt̄ is significantly suppressed by simultaneously requiring a large E/T and a moderate S_T cut (rightmost panel in the figure).

Figure 2.3: E/T distributions for the three backgrounds (Z + bb̄, W± + bb̄, and tt̄) with S_T cuts of increasing magnitude, S_T > 0.0, > 0.2, and > 0.4 from the left panel to the right panel. In each plot, the black solid, blue dot-dashed, and red dashed curves represent Z + bb̄, W± + bb̄, and tt̄, respectively.

As the first step in our analysis, I compute the MT2 distributions expected at the LHC for our two potential cases of new physics interactions, Z2 and Z3. The distributions for the two cases are shown in Fig. 2.4. Since I found that with the selections of eq. (2.36) the Z + bb̄ process is the dominant background, as seen in the figure, I consider it the only background process. The two distributions have been computed assuming a trial mass m̃ = 0 GeV and have an endpoint at 787.5 GeV and 750 GeV for the Z2 and the Z3 cases, respectively. Interpreting the distributions under the naïve assumption of one invisible particle per decay of the B′, I obtain from eq. (2.30) a C parameter that is 393.75 GeV and 375 GeV for Z2 and Z3, respectively. These are the reference values that I need for the analysis of the energy distributions.⁷

As the final step in our analysis, I need to compare the obtained reference values with the peaks of the energy distributions. These distributions are shown in Fig. 2.5. I clearly see that the location of the peak in the energy distribution in the Z2 case coincides with the associated reference value, whereas for the Z3 case the peak is, as expected, at an energy less than the associated reference value. I remark that in the Z3 case the peak of the energy distribution is significantly displaced with respect to the reference value. Therefore, I expect our test of the Z2 nature of the interactions of the B′ to be quite robust under the inclusion of both experimental and theoretical uncertainties, such as the smearing of the peak due to the resolution on the jet energy, the errors on the extraction of the reference value obtained from the MT2 analysis, and the shift of the peak that is expected due to radiative corrections to the leading order of the decay of the B′.

⁷ I remark that, as apparent from the figure, the signal rate is much larger than that of the background, and therefore the shape of the distribution expected at the LHC largely reflects the features of the signal. In this case, it seems particularly straightforward to extract the endpoint of the distribution. In other cases where the background is larger, the extraction of the endpoint may require a more elaborate procedure, especially for the Z3 case where the endpoint is much less sharp (see, for example, [10, 29-31]).
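The endpoints and reference values quoted above can be reproduced with a short calculation, under the assumption (made here purely for illustration) that for a massless trial mass the MT2 endpoint is (m_B′² − m_inv²)/m_B′, with m_inv the total invisible mass per decay side, and that the reference value extracted via eq. (2.30) is half of that endpoint:

```python
# cross-check of the numbers quoted above for m_B' = 800 GeV and m_chi = 100 GeV;
# the endpoint formula and the "half the endpoint" interpretation are assumptions
# of this illustration, not expressions quoted from the text
m_Bp, m_chi = 800.0, 100.0

for label, m_inv in (("Z2", m_chi), ("Z3", 2.0 * m_chi)):
    mt2_endpoint = (m_Bp**2 - m_inv**2) / m_Bp
    reference = 0.5 * mt2_endpoint
    print(f"{label}: MT2 endpoint = {mt2_endpoint:.2f} GeV, reference value = {reference:.2f} GeV")
# prints 787.50 / 393.75 GeV for Z2 and 750.00 / 375.00 GeV for Z3
```

For the Z2 spectrum this reference value coincides with E_b^* of eq. (2.6), which is exactly the comparison exploited in Fig. 2.5.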
Figure 2.4: MT2 distributions after the cuts of eq. (2.36). The chosen masses for the new particles are m_B′ = 800 GeV and m_χ = 100 GeV. The left panel is for the Z2 signal while the right panel is for Z3 (both in blue). In both cases, the background is Z + bb̄ (red). In both panels, the black line represents the sum of signal and background. The black vertical dashed lines denote the theoretical prediction for the endpoints.

Figure 2.5: Energy distributions of the b quarks after the cuts of eq. (2.36). The chosen masses for the new particles are m_B′ = 800 GeV and m_χ = 100 GeV. The left panel is for the Z2 signal, while the right panel is for Z3 (both in blue). In both cases, the background is Z + bb̄ (red). In both panels, the black line represents the sum of signal and background. The black vertical dashed lines denote the reference values extracted from the MT2 distributions of Fig. 2.4 using eq. (2.30).

2.5 Conclusions

In this chapter, I studied the problem of the experimental determination of the general structure of the interactions of an extension to the SM that hosts collider-stable WIMPs. If these new particles are charged under a new symmetry and the SM particles are not, then the lightest such WIMP is stable and is concomitantly a candidate for the DM of the universe. In the context of such DM models, the above is thus relevant for the determination of the stabilization symmetry of this DM. In more detail, such models typically have heavier new particles that are charged under both the SM gauge group and the DM stabilization symmetry. Thus, these particles can be produced via the collision of SM particles, and will decay into DM plus SM particles. The number of DM particles in such a decay depends on the DM stabilization symmetry. Our goal was to devise a strategy to count this number of DM and thus probe the nature of this symmetry, based only on the visible part of the decays.

To illustrate the technique, I studied models with fermionic b quark partners, i.e. colored fermions with electric charge −1/3 and a sizable coupling to the b quark. In our example, I considered the case of b quark partners with mass at or below the TeV scale. The possibility of such particles is motivated by extensions to the SM that solve the Planck-weak hierarchy problem, since they contain top partners and thus, by SU(2)L symmetry, bottom partners. In the same model, it is also possible to have a WIMP DM. The b quark partners, as typical states of the new physics sector, are charged under this stabilization symmetry and will then decay into a bottom quark plus DM. Furthermore, thanks to their color gauge interactions, the b quark partners have a large production cross-section at hadronic colliders. Therefore the study of b quark partners is very well-suited to illustrate our technique.

The literature on b quark partners thus far has only considered a single DM in each decay chain, as would be the case in models where the DM is stabilized by a Z2 symmetry.
However, in general, there can be more than one DM particle in this decay chain; for example, two DM particles are allowed in the case of a $Z_3$ stabilization symmetry, but not in the case of a $Z_2$ symmetry. So, the question I posed is whether I can distinguish the hypothesis of one vs. (say) two DM particles appearing in each of these decay chains. As mentioned above, in this way I can probe the nature of the DM stabilization symmetry. The question is non-trivial because in either case the detectable particles produced are the same, and so is the signature of $b$ quark partner production, i.e., $b\bar{b} + \slashed{E}_T$.

To distinguish between one and two DM particles in each $b$ quark partner decay chain, the first result I used is that the measured $M_{T2}$ endpoints can be fitted by the formula of eq. (2.30) irrespective of how many DM particles are produced. The value of the free parameter obtained by fitting eq. (2.30) to the data is used in the next step of our analysis as follows. The second theoretical observation is that the peak of the distribution of the $b$ quark energy in the laboratory frame is the same as the mother rest-frame value for the two-body decay, but is smaller than the maximum value in the mother rest frame for the three-body decay. The crux is that the rest-frame energy used as a reference value in this comparison is precisely the parameter obtained in the above $M_{T2}$ analysis. Combining these two facts, I showed that the peak of the observed bottom-jet energy being smaller than (vs. the same as) the reference value obtained from the $M_{T2}$ endpoint provides evidence for two (vs. one) DM particles in the decay of a $b$ quark partner, and thus a $Z_3$ symmetry can be distinguished from a $Z_2$.

I verified our theoretical observations in $B'$ pair production and decay at the LHC. To assess the feasibility of determining the stabilization symmetry with our method, I simulated the signal and the dominant SM backgrounds. Using suitable cuts, I showed that the background in this case is due mostly to $Z + b\bar{b}$. I studied in detail the case where the $b$ quark partner has a mass $m_{B'} = 800$ GeV and the invisible particles have a mass $m_\chi = 100$ GeV. In this case, the background can be made small compared to the signal using the cuts of eq. (2.36). In Figures 2.4 and 2.5, I show the resulting $M_{T2}$ and $b$ quark energy distributions relevant to our analysis. I observed that the peak in the $b$ quark energy distribution for $Z_2$ models is consistent with the reference value from the $M_{T2}$ endpoint, while that of $Z_3$ models is visibly less than the corresponding reference value.

The determinations of the peak of the energy distribution and of the reference value needed for our analysis are subject to uncertainties, e.g., those that propagate from the error in the determination of the $M_{T2}$ endpoint. However, the evidence for a $Z_3$ stabilization symmetry comes from a difference between the peak of the energy distribution and the reference value. The theoretical prediction for this difference is large enough compared to the relevant uncertainties that the proposed method seems to be quite robust, and should allow a clear discrimination of the stabilization symmetry of the DM.

In the next chapter I shall extend the theory of Section 2.2 to deal with massive visible decay products and tackle the issue of multi-body decay channels where there are potentially many identical particles in the final state. Having handled that, I will develop a method for simultaneously determining the masses of the DM and parent particles.
Chapter 3: Mass extraction

In previous works I have demonstrated how the energy distribution of massless decay products in two-body decays can be used to measure the mass of decaying particles. In this work I show how such results can be generalized to the case of multi-body decays. The key ideas that allow us to deal with multi-body final states are an extension of our previous results to the case of massive decay products and the factorization of the multi-body phase space. The mass measurement strategy that I propose is distinct from alternative methods because it does not require an accurate reconstruction of the entire event; it does not involve, for instance, the missing transverse momentum, but rather requires measuring only the visible products of the decay of interest. To demonstrate the general strategy, I study a supersymmetric model wherein pair-produced gluinos each decay to a stable neutralino and a bottom quark-antiquark pair via an off-shell bottom squark. The combinatorial background stemming from the indistinguishable visible final states on both decay sides can be treated by an "event mixing" technique, the performance of which is discussed in detail. Taking into account dominant backgrounds, I am able to show that the mass of the gluino and, in favorable cases, that of the neutralino can be determined by this mass measurement strategy.

3.1 Introduction and general strategy of the mass measurement

As previously mentioned, a stable WIMP is a well-motivated candidate for dark matter. Specifically, many models incorporating this WIMP-type DM contain particles that are not only heavier than the DM and charged under the DM stabilization symmetry, but that also interact via SM gauge bosons. If these heavier particles (dubbed "parent" particles) do interact via, say, QCD, they could be copiously produced at hadron colliders, followed by their subsequent decay into the concomitant DM and SM particles. By design, the DM particle leaves no visible trace in the particle detector, thus its presence in an event is typically inferred from the missing transverse momentum ($\slashed{p}_T$), which can be interpreted as a loss of specificity in the kinematic information of the event. The primary goal of this chapter is to devise a strategy for the simultaneous measurement of the masses of the parent and the DM particles in the associated processes despite this loss of information.

This strategy for the mass measurement also has further applications beyond the study of DM particles; it can be applied to any case where a new particle decays to a semi-invisible final state. Again, the invisible particle neither has to be a DM candidate nor absolutely stable; it need only be stable insofar as its time of flight out of the detector is concerned. However, for notational simplicity I shall still refer to it as "DM".

A full reconstruction of such a decay chain is typically not possible, given that it contains an invisible particle. On top of this, due to the DM stabilization symmetry the parent particles are typically pair-produced, implying that each event comes with two invisible particles. The presence of two invisible particles involves an even greater loss of kinematic information from each event, and poses a sizable challenge for the associated mass measurement. Methods using the $M_{T2}$ variable and its variations [21, 30, 32–35] have been proposed as a solution to overcome this challenge.
This class of variables is well known for its usefulness both in measuring the masses of particles [36] and in isolating new physics signals from their backgrounds (e.g., Ref. [37]). Despite their utility, these variables have a possible drawback when aiming at a precise mass measurement: they all require information about the total missing momentum. Unfortunately, a precise measurement of the missing momentum is often difficult, for instance due to the relatively poor reconstruction of the jets that are usually part of the overall event structure. This is an unpleasant feature of missing momentum measurements, especially in those cases in which many of the jets that enter the measurement of the missing momentum are not actually involved in the decay process of interest. Said another way, the missing transverse momentum is in general measured as the opposite of the sum of the momenta of all the reconstructed objects (leptons, jets, photons, ...) in the event, which means that the measurement of the missing momentum is an inherently "global" measurement of the event.

In light of this, I have recently proposed complementary methods for mass measurements which instead use only the energies of the visible particles. The reason to pursue this strategy is, of course, that it relies intrinsically on more "local" information, ideally using only a subset of the particles coming from a given decay chain (footnote 1).

The main idea behind the method that I propose is to use the energy spectra of the visible particles. The basic result upon which the method is predicated was shown in Ref. [19]: namely, for a massless child from the two-body decay of an unpolarized parent, the peak in the energy spectrum of the child (henceforth denoted as "energy-peak") seen in the laboratory frame (henceforth denoted by "laboratory-frame" energy) is the same as the fixed value of its energy in the rest frame of the associated parent (henceforth denoted by "rest-frame" energy). The latter value is given by a simple relation in terms of the masses of the two massive particles (the parent and the other child) involved in the decay, and hence can give information about these masses. In a subsequent paper [45], my colleagues then applied this observation to measuring the unknown masses in the semi-invisible decay of a heavy new particle involving a multi-step cascade of two-body decays (footnote 2).

In this chapter, I consider instead a single-step, three-body, semi-invisible decay of a heavier new particle, which I denote as

B \to A\, a\, b ,   (3.1)

where $a$, $b$ are visible SM particles and $A$, $B$ are massive new particles, with $A$ assumed invisible. In order to deal with this specific decay topology, I need to extend the result of Ref. [19] to multi-body decays. The key idea is to map a multi-body final state into a two-body one by the factorization of phase space. In carrying out this mapping, I will take particular care in correctly partitioning and grouping the multi-body decay products and in selecting an appropriate region of the multi-body phase space upon which to apply the above two-body result.

Footnote 1: See also Refs. [18, 38–42] for other recent methods of mass measurement that do not use the missing transverse momentum and Refs. [43, 44] for a general review of mass measurement methods.

Footnote 2: I also showed that our energy-peak result of Ref. [19] can be used for "counting" DM particles in decays [4], which is a powerful probe of the DM stabilization symmetry.
[Figure 3.1: The three-body decay of interest (left panel) and the effective two-body decay (right panel), with the mass of the visible system being $m_{ab}$. $(ab)$ denotes the effective visible pseudo-particle formed by the two visible particles $a$ and $b$.]

To reduce the multi-body final state to a two-body one, I first form a compound system made of all of the visible particles, labeled $a$ and $b$ in eq. (3.1). I denote this compound system as $(ab)$ and graphically represent the corresponding partitioning in Figure 3.1. After this partitioning the decay does not yet look like a truly two-body decay, because the compound system $(ab)$ does not have a fixed mass. The combination $(ab)$ has its own phase space in invariant mass. This is apparent from the well-known [46] recursive formula for the multi-particle phase space of $N$ particles of masses $m_1, \ldots, m_N$, which can be thought of as the sum of many two-body phase spaces: a single particle on its own and the remaining $(N-1)$ particles clustered into a single object whose mass depends on the momenta and the angles between the $N-1$ clustered particles:

\phi_N(m_1, \ldots, m_N) = \int d\mu \; d\phi_2\big(m_N, \mu(m_1, \ldots, m_{N-1})\big) \cdot d\phi_{N-1}(m_1, \ldots, m_{N-1}) \, .   (3.2)

Considering separately each value of the mass that the compound system $(ab)$ can take, I can regard the $N$-body final state as a weighted sum of a collection of two-body systems, each of which is characterized by the mass of the compound system, denoted by $\mu$, and its probability $d\phi_{N-1}$. This probability, together with the actual squared matrix element of the decay, would give the rate of decay in that particular kinematic configuration. In the following, I do not assume any knowledge of the matrix element of the decay and I shall make no use of these rates; all I need for our strategy to work is the ability to represent the multi-body final state as the sum of the collection of all possible two-body final states. The fact that I do not need to know the rate for each possible kinematic configuration of the multi-particle final state is a remarkable strength of our method; it is especially powerful when applied to newly discovered particles, whose matrix elements are a priori essentially unknown.

For the case of a three-body decay, the above procedure gives

B \to A\, a\, b \;=\; \sum_{m_{ab}} \big( B \to A\, (ab)_{m_{ab}} \big) ,

where the equality should be taken in the sense of an equivalence. I also remark that for practical reasons the integral in the phase-space factorization formula has been discretized. In this way, I can form a finite number of compound systems $(ab)_{m_{ab}}$ of mass $m_{ab} \pm \delta$ with $\delta \ll m_{ab}$. This procedure ensures that each of the compound systems has an approximately fixed invariant mass, and I can think of it as a "pseudo-particle" having a width of order $\delta$. This means that I partition, or "slice", the data according to the total invariant mass of the compound particle formed from the $a$ and $b$ particles and apply the result for two-body decays to each mass partition, as the overall system has been reduced to an effective two-body decay. In the rest frame of the parent particle, each partition of the pseudo-particle $(ab)$ has energy

E^*_{(ab)} = \frac{m_B^2 - m_A^2 + m_{ab}^2}{2 m_B} \, ,   (3.3)

based simply on two-body kinematics for the decay of $B$ into $A$ and $(ab)$. Using the appropriate extension of the result of Ref. [19] to the case $m_{ab} \neq 0$ (see more on this point in Section 3.2), I am able to extract $E^*_{(ab)}$ from the laboratory-frame energy distribution of that particular $(ab)$ compound particle.
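To fix ideas with a worked number (my own illustration, using the benchmark masses adopted later in Section 3.4, $m_B = 1.2$ TeV and $m_A = 100$ GeV, which are not implied by anything in this section), eq. (3.3) gives for a slice at $m_{ab} = 300$ GeV

E^*_{(ab)} = \frac{(1200\ \mathrm{GeV})^2 - (100\ \mathrm{GeV})^2 + (300\ \mathrm{GeV})^2}{2 \times 1200\ \mathrm{GeV}} \approx 633\ \mathrm{GeV},

and $E^*_{(ab)} = 800$ GeV for $m_{ab} = 700$ GeV. Each mass slice thus maps to its own rest-frame energy, and it is precisely this $m_{ab}$ dependence that the fit described next exploits.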
I then repeat this procedure for each of the mass partitions in the overall range of $m_{ab}$. When plotted versus $m_{ab}^2$, the fitted values of $E^*_{(ab)}$ extracted from the energy distributions should lie along a straight line, as per eq. (3.3). It is straightforward to see that $m_B$ can be determined from the slope of this line and that $m_A$ can be determined from the intercept on the vertical axis once $m_B$ has been determined (a minimal numerical sketch of this fit is given below). The available information can be fully utilized in constructing this straight line by analyzing the data in all slices (footnote 3). I remark that, although the characteristic signature of the production of an invisible particle $A$ is missing momentum, our method does not make explicit use of this quantity and yet still offers a way to measure the mass of this particle. In other words, any specific property of the invisible particle is almost completely irrelevant to our method, except for the assumption that there is at least one invisible particle per decay chain. I again emphasize that this is remarkable, especially in comparison with other mass measurement methods involving $\slashed{p}_T$, such as $M_{T2}$ and its variants.

Footnote 3: In practice, one could end up not using some of the slices if treating them becomes too problematic, e.g., because of backgrounds or sensitivity to the cuts. The fraction of unused slices will in any case be kept to a minimum.

Despite the simplicity of the general idea, there are still some potential issues that need to be properly dealt with in order to successfully execute the strategy outlined above. First, the compound particle $(ab)$, the visible child particle of the effective two-body decay, has a non-negligible mass; thus, it is essential to generalize the energy-peak result of Ref. [19] to the case of a massive visible child particle. I refer to work done by my collaborators to this end [47], which is devoted to studying how to deal with these massive child particles in more detail. In this dissertation, I shall merely report the final result of their work and use it for our present investigation. Nevertheless, the discussion presented here is largely independent of the derivation of this result.

As mentioned earlier, the DM model under consideration has the parent particles being produced in pairs. If both of them decay to the same final state, a combinatorial ambiguity arises in attempting to correctly partition and group the particles originating from the same parent; multiple pairs can be formed from the final state as seen in the detector, but it is not known a priori which is the correct pairing, that is, which pairing contains the particles that originated from the same decay. This partitioning is a crucial necessity in forming the $(ab)$ compound system that plays the role of the child pseudo-particle. For this reason, I allot Section 3.3 to a thorough discussion of the treatment of this combinatorial ambiguity. In particular, I propose an "event mixing" technique [48–51] as a way to remove the combinatorial background.

To illustrate our method I discuss in detail its application to a specific process. As a concrete example, I choose the pair production of gluinos in an R-parity-conserving supersymmetric model. Here the gluinos are assumed to decay into $b\bar{b}$ and an invisible, light, stable neutralino via an off-shell bottom squark:

pp \to \tilde{g}\tilde{g} \to b\bar{b}\, b\bar{b}\, \chi\chi .   (3.4)

This scenario is chosen primarily because it has been thoroughly studied in the literature, and thus should be familiar and interesting to a large audience.
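To make the slope-and-intercept extraction concrete, the following is a minimal sketch of the straight-line fit implied by eq. (3.3). The slice masses and $E^*$ values below are placeholders standing in for the per-slice template fits; they are not results of the analysis presented in this chapter.

```python
import numpy as np

# Hypothetical inputs: centers of the invariant-mass slices (GeV) and the E*_(ab)
# values (GeV) that a per-slice fit to the energy distributions would return.
m_ab   = np.array([300.0, 400.0, 500.0, 600.0, 700.0])
E_star = np.array([633.0, 662.0, 699.0, 746.0, 801.0])   # placeholder values

# Eq. (3.3): E* = (m_B^2 - m_A^2)/(2 m_B) + m_ab^2 / (2 m_B),
# i.e. a straight line in m_ab^2 with slope 1/(2 m_B) and
# intercept (m_B^2 - m_A^2)/(2 m_B).
slope, intercept = np.polyfit(m_ab**2, E_star, deg=1)

m_B = 1.0 / (2.0 * slope)                       # parent mass from the slope
m_A = np.sqrt(m_B**2 - 2.0 * m_B * intercept)   # invisible-child mass from the intercept

print(f"m_B ~ {m_B:.0f} GeV, m_A ~ {m_A:.0f} GeV")
```

With the placeholder inputs above, which were chosen to be roughly consistent with the benchmark of Section 3.4, the fit returns masses close to 1.2 TeV and 100 GeV; in the real analysis the $E^*$ values, their uncertainties, and the choice of slices all come from the template fits described in the following sections.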
Indeed, the process of eq. (3.4) has also been investigated at the LHC [37, 52–56]. In order to provide a fully realistic example, and to demonstrate some of the issues that arise in using our method, I shall incorporate the relevant Standard Model backgrounds in our analysis as well. I take particular care in devising cuts for background rejection so that these selections do not affect the shape of the energy spectrum near the peak, which is the critical region of interest for our energy-peak method. Obviously, the optimization of these cuts is a process-dependent issue, and so must be evaluated on a case-by-case basis. The goal of our discussion is to present potential systematic uncertainties and biases arising from the specific details of our method, such as those induced by phase-space slicing, event mixing, imperfect knowledge of the background, and overly restrictive event selection criteria. I also present other complementary observables that enhance the findings obtained using the energy-peak method, one of which is the kinematic endpoint of the di-jet invariant mass distribution.

The rest of the chapter is organized as follows. I continue in the next section with a discussion of a template function used to describe the energy spectrum of a massive child particle. Section 3.3 is devoted to dealing with the combinatorial ambiguity inherent in our chosen decay topology. Then in Section 3.4, I detail our selected signal process and the relevant backgrounds, both those from SM processes and the "event mixing" scheme for signal combinatorics. Section 3.5 contains the main results for the mass measurement of the aforementioned example process together with a discussion of several opportunities for improvement to the method. In Section 3.6 I present our conclusions and outlook.

3.2 A template for the energy spectrum of a massive child particle

As outlined in the previous section, the essence of our mass measurement technique is to fit the data to get the value of $E^*$ for each of the fixed masses of the compound system $(ab)$ and then fit these values onto the straight line of eq. (3.3). The mass of the compound system $(ab)$, being a system of two particles, is not fixed and in general spans a range set by the masses of all the particles in the decay. Since I am a priori unaware of the masses of the parent and invisible child particles, it is not possible to know whether or not a given value of $m_{ab}$ is small enough in comparison to those unknown masses to justifiably trust the validity of our previous results for effectively massless child particles [19]; thus I am motivated to extend the finding to massive child particles.

The primary difficulty in generalizing to a massive child is the potential loss of correlation between the peak location in the laboratory-frame energy distribution of the massive child and its energy in the rest frame of the parent. This can readily be seen when considering the decay of a massive particle $B \to X \psi$ at the kinematic endpoint of the phase space, i.e., $m_B = m_X + m_\psi$. For any value of $m_X$, $\psi$ will be at rest in the $B$ rest frame, hence $E^*_\psi = m_\psi$. If each particular event is boosted to the laboratory frame, the energy of $\psi$ becomes simply $\gamma_B m_\psi$, with $\gamma_B$ being the boost of particle $B$ relative to the laboratory. This direct linear relationship between $E_\psi$ and $\gamma_B$ implies that the shape of the energy distribution of particle $\psi$ in the laboratory frame should simply be that of the boost distribution of particle $B$.
In this case, it is clear that the peak of the energy distribution of the massive child $\psi$ carries essentially no information about the masses; rather, it carries information on the most probable boost of particle $B$. This is in contrast to the "invariance" that holds for a massless child: the energy-peak in the laboratory frame is the same as the rest-frame energy value irrespective of the details of the boost distribution of particle $B$.

For a more formal understanding of this problem, it is instructive to analyze the Lorentz transformation of a massive child particle from the rest frame of its parent particle, where it has energy-momentum $(E^*, p^*)$, to the laboratory frame. Given the boost factor $\gamma$ of the parent particle and the emission angle $\theta^*$ of the child relative to the boost direction, I find the energy of the child particle in the laboratory frame (denoted by $E$) to be

E = E^* \gamma \left( 1 + \beta^* \beta \cos\theta^* \right) ,   (3.5)

where I have used $p^* = \beta^* E^*$. I observe that the laboratory-frame energy $E$ becomes equal to the rest-frame energy $E^*$ only if

\cos\theta^* = -\frac{1}{\beta^* \beta} \left( \frac{\gamma - 1}{\gamma} \right) .   (3.6)

Denoting this $\theta^*$ as the "reference" angle, I see that any value of $\cos\theta^*$ smaller (larger) than that of eq. (3.6) gives rise to a laboratory-frame energy $E$ smaller (larger) than $E^*$. To ensure the existence of the reference angle, the boost factor should satisfy the relation

\gamma < \frac{1 + (\beta^*)^2}{1 - (\beta^*)^2} = 2(\gamma^*)^2 - 1 \equiv \gamma_{\mathrm{cr}} .   (3.7)

When this condition is satisfied, the energy distribution in the laboratory frame is non-zero at $E = E^*$, which is a necessary, but not sufficient, condition to have a maximum at $E^*$. Obviously, if for some $\gamma$ the condition set by eq. (3.7) is not satisfied, then $E > E^*$ for all $\cos\theta^*$, which potentially invalidates the statement that the peak in the laboratory-frame energy distribution appears at $E^*$. In fact, for the typical boost distributions of parent particles produced at hadron colliders, one can see that if any of the boosts of the parent particle(s) lie outside of the range given by eq. (3.7), it is then guaranteed that the peak of the energy distribution in the laboratory frame will not be located at the rest-frame energy value (footnote 4). A more rigorous characterization of the energy distribution of a massive child is presented in [47]. I hope that the argument above, while not as rigorous as that reference, will convince the reader that the maximum boost of the parent particle is the key parameter controlling the position of the peak in the laboratory-frame energy distribution of a massive child particle.

Footnote 4: The displacement of the maximum with respect to $E^*$ may still be small, but strictly speaking, the "invariance" that I demonstrated in Ref. [19] for the massless particle will be broken.

On top of affecting the peak position, the overall shape of the energy distribution for the massive child is expected to differ from that for the massless child. This means that the function used to fit the massless-child energy spectra in previous works cannot be used in the present work. In order to obtain a suitable description of the massive-child energy spectrum, I revisit the corresponding discussion for the case of a massless child particle. The value of the energy distribution at a given laboratory-frame energy $E$ is given by a Lebesgue-type integral over the range of $\gamma$ values contributing to that $E$, together with the associated weight for each $\gamma$ [19]. For the case at hand, this range is found by solving eq. (3.5) for $\gamma$ with $\cos\theta^* = \pm 1$, which gives
\gamma_+(E) \equiv \gamma^{*2} \left( \sqrt{1 - \frac{1}{\gamma^{*2}}}\, \sqrt{\frac{E^2}{E^{*2}} - \frac{1}{\gamma^{*2}}} + \frac{E}{E^*} \right) ,   (3.8)

\gamma_-(E) \equiv \gamma^{*2}\, \frac{E}{E^*} \left( 1 - \sqrt{1 - \frac{1}{\gamma^{*2}}}\, \sqrt{1 - \frac{E^{*2}}{\gamma^{*2} E^2}} \right) .   (3.9)

I see that for a massless child particle, i.e., $\gamma^* \to \infty$, $\gamma_+(E)$ diverges, whereas in the same limit $\gamma_-(E)$ converges to a finite value:

\lim_{\gamma^* \to \infty} \gamma_-(E) = \frac{1}{2} \left( \frac{E}{E^*} + \frac{E^*}{E} \right) \equiv \gamma_-^{(\infty)}(E) .   (3.10)

In light of eq. (3.10), I can express the energy spectrum for a massless child that I used in previous works [19] as

\exp\left[ -w \cdot \gamma_-^{(\infty)}(E) \right] .   (3.11)

Motivated by the success of this exponential form in the massless case [19, 45] and exploiting the identification in the massless limit, eq. (3.11), I propose the following ansatz for the shape of the laboratory-frame energy spectrum of a massive child:

f(E) = N \left( \exp\left[ -w \cdot \gamma_-(E) \right] - \exp\left[ -w \cdot \gamma_+(E) \right] \right) ,   (3.12)

where $N$ and $w$ are a normalization factor and the width of the function, respectively. A complete evaluation of the accuracy with which this function describes the laboratory-frame energy spectrum of a massive child is presented by my colleagues in the companion to the work presented here [47]. For the purposes of this chapter, it is sufficient to know that this function reproduces our ansatz for massless child particles as $\gamma^* \to \infty$. In any case, I will explicitly show below that this function provides a good description of simulation data for our example process.

A comment on the location of the maximum of this function is in order. The maximum of this function coincides with $E^*$ only in the limit $w \to \infty$, in which the function becomes a $\delta$ function. For all finite values of $w$, the actual location of the maximum of the function is slightly larger than $E^*$. However, I empirically observe that for parent particles that would typically be produced at colliders, and for $\gamma^*$ somewhat larger than 1, the typical value of $w$ is large enough that this effect is negligible. Therefore, I expect that eq. (3.12) properly describes a large class of energy spectra. Because the peak location and the $E^*$ parameter are no longer necessarily the same (footnote 5), the determinations of the best-fit values of $E^*$ and $w$ are interrelated when fitting the data to the massive template function, eq. (3.12). The fact that the maximum of the spectrum is a function of both $E^*$ and $w$ is an inherent feature of our ansatz for the massive-child energy spectrum, which did not exist in the massless case of Ref. [19]. In this chapter I will study the possible effects that arise in our explicit example due to this feature of eq. (3.12). For a fully general investigation of this issue I refer to the companion to this chapter [47].

Footnote 5: Again, for the case with massless children, $E^*$ coincides with the peak irrespective of $w$.

In Section 3.4 I shall fit the energy distribution of each mass partition of the pseudo-particle $(ab)$ both with the new template for massive children, eq. (3.12), and with its massless limit (i.e., $\gamma^* \to \infty$), the latter of which was the template employed previously for massless child particles. The comparison of the results from these two templates will allow us to demonstrate the necessity of using eq. (3.12) and of generalizing what I had used in our previous work [19, 45].
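As a practical aside, the template of eqs. (3.8), (3.9) and (3.12) is simple to encode. The sketch below is my own illustration, not code used for the results of this chapter; it takes $\gamma^* = E^*/m_{(ab)}$ as the boost of the pseudo-particle in the parent rest frame and is valid for $E \geq m_{(ab)}$. In practice $E^*$, $w$, and the normalization would be determined by fitting this function to the energy histogram of each mass slice.

```python
import numpy as np

def gamma_minus(E, E_star, gamma_star):
    """Lower boundary of the parent-boost range contributing to lab-frame energy E, eq. (3.9)."""
    x = E / E_star
    beta_star = np.sqrt(1.0 - 1.0 / gamma_star**2)
    return gamma_star**2 * (x - beta_star * np.sqrt(x**2 - 1.0 / gamma_star**2))

def gamma_plus(E, E_star, gamma_star):
    """Upper boundary of the parent-boost range contributing to lab-frame energy E, eq. (3.8)."""
    x = E / E_star
    beta_star = np.sqrt(1.0 - 1.0 / gamma_star**2)
    return gamma_star**2 * (x + beta_star * np.sqrt(x**2 - 1.0 / gamma_star**2))

def massive_child_template(E, E_star, gamma_star, w, norm=1.0):
    """Ansatz of eq. (3.12) for the lab-frame energy spectrum of a massive child."""
    return norm * (np.exp(-w * gamma_minus(E, E_star, gamma_star))
                   - np.exp(-w * gamma_plus(E, E_star, gamma_star)))
```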
3.3 "Event mixing" to estimate the combinatorial background

As explained in the Introduction and depicted in Figure 3.1, the success of our strategy hinges on correctly identifying the pairs of particles coming from the same decay side. If this identification is done correctly, the idea of phase-space factorization can safely be applied to reduce the multi-body final state to a two-body final state. Generally speaking, the identification of the correct pairs of particles to be grouped together is a tremendously difficult task, as I have no systematic way of knowing which particles are related to the same decay and can thus be correctly paired in the analysis.

One approach to surmounting this challenge is to attempt to identify the pairs of particles that come from the same decay by exploiting some preferential kinematic correlation between particles that originate from the same decay. This correlation could then be translated into a selection criterion that would pair the appropriate particles with relatively high accuracy. Of course, this selection process would not be 100% effective and would fail to pick the correct pairings for some fraction of events. As a result, I would be left with a certain amount of combinatoric pollution from pairings whose constituent particles did not come from the same decay. Several event-by-event strategies have been developed to identify which pairs of particles come from the same decay (see, for example, Refs. [29, 57–60]). It is however generally true that, in order to maximize the chance of pairing particles correctly, the kinematic selection criteria which form the basis of each method must be rather restrictive, so as to guarantee a sufficient rejection of unwanted pairings. Events that pass these highly constrained kinematic criteria will be preferentially selected from isolated regions of the kinematic phase space of the scattering, and therefore the kinematic distributions of the final state will be significantly altered by the imposition of these criteria. Our method of mass extraction relies critically on the fidelity of the energy distribution around the location of the peak; without this, the templates I use to extract the masses return biased information. Thus, because the above procedures for selecting correct decay siblings greatly disturb the resulting kinematic distributions, they are not suitable for use alongside our method.

For this reason, I do not even attempt to identify the correct pairings in each event; I instead try to obtain the energy distribution of the correct pairs without knowing which are the correct pairs event-by-event. In order to determine the distribution of a given observable as it would arise from picking just the correct pairs, I use an event mixing technique whose basic idea is as follows. I consider a scattering

pp \to B_1 B_2 \to (A_1 a_1 b_1)(A_2 a_2 b_2) ,   (3.13)

where two heavy particles $B_1$ and $B_2$ are produced and decay to final states of the same kind, $B \to A a b$, which are labeled to correspond with their respective parent (footnote 6). In the analysis of events of this nature, I follow the procedure laid out in Section 3.1 for making pseudo-particles out of the $a$ and $b$ particles for all possible equivalent pairings (e.g., $a_1$ with $b_1$, but also $a_1$ with $b_2$, and so on). From these pairs, I obtain a fully inclusive distribution of the observable of interest, which in our case is the total energy of the pair,

\frac{d\sigma}{dE_{ab}}(\text{all pairs}) .   (3.14)

In order to obtain the distribution stemming only from the pairs of particles coming from the same decay, it is sufficient to come up with an estimate of the distribution that stems from the pairs that I would like to discard and subtract it from the fully inclusive distribution:

\frac{d\sigma}{dE_{ab}}(\text{same-decay pairs}) = \frac{d\sigma}{dE_{ab}}(\text{all pairs}) - \frac{d\sigma}{dE_{ab}}(\text{different-decay pairs}) .   (3.15)

Footnote 6: Strictly speaking, the $A_i$ need not be invisible as long as they are distinguishable from the $a_i$ and $b_i$, which are assumed indistinguishable from each other.
This equality might seem trivial, but it is in fact very powerful. To wit, the object that I must have for our method to be successful, the distribution of pairings from the same decay, which is quite difficult to obtain in itself, is now expressed in terms of two objects that are much simpler to obtain. The first piece is obviously attainable, as it is simply the distribution from all the pairs that can be formed in each event. The second piece, the distribution from pairs not coming from the same decay, can be estimated by the "event mixing" technique, which is based on making a distribution from pairs of jets that come from different events.

The intuition that justifies the usefulness of the event mixing technique originates from the fact that the pairs $(a_1 b_2)$ and $(a_2 b_1)$ from the same event are made of particles which are produced with almost no kinematic correlation. Therefore, it seems reasonable to mimic the effect of these "incoherent" pairs with pairs of particles taken from different events, which intuitively have no correlation. More precisely, I can see that the phase-space point from which $a_1$ and $b_1$ originate in the decay of $B_1$ is very close to being uncorrelated with the phase-space point from which $a_2$ and $b_2$ originate in the decay of $B_2$. This approximately vanishing correlation between the products of different decays implies that pairs formed by particle $a_1$ taken from one event and particle $b_2$ taken from another event are expected to be statistically equivalent to pairs $(a_1 b_2)$ where both particles are taken from the same event. This means that the distributions of a quantity over a given sample of events obtained either from the $(a_1 b_2)$-type incorrect pairings within the same event or from pairing $a_1$ in one event with $b_2$ in another event are equivalent. This implies that I can estimate the distribution stemming from pairs of particles from the same decay as

\frac{d\sigma}{dE_{ab}}(\text{same-decay pairs}) \simeq \frac{d\sigma}{dE_{ab}}(\text{all pairs}) - \frac{d\sigma}{dE_{ab}}(\text{different-events pairs}) .   (3.16)

This observation is at the center of the event mixing idea, and I henceforth denote the procedure described by the right-hand side of eq. (3.16) as "mixed event subtraction".

The plausibility of the distribution obtained by pairing particles from different decays within the same event being equivalent to the distribution arising from pairings between different events can be seen intuitively in certain simple cases. To illustrate, imagine having an ensemble $\mathcal{E}$ of pairs of particles $(B_1, B_2)$,

\mathcal{E} = \left\{ (B_1^{(1)}, B_2^{(1)}),\; (B_1^{(2)}, B_2^{(2)}),\; (B_1^{(3)}, B_2^{(3)}),\; \ldots \right\} ,   (3.17)

where the superscripts denote the associated event number. For simplicity, I take the $B$ particles to be scalars sitting at rest in the laboratory, where they decay as $B \to a b A$. It is obvious that the distributions made from pairs coming from different $B$ particles in the same event of the ensemble are the same as the distributions made from pairs coming from $B$ particles in different events of the ensemble. In this example, I am simply sampling the phase space of the $B$ decay in two different ways: in one case taking kinematic information from instances of the decay that happen at the same time, and in the other taking that information from instances of the decay that are separated in time.

However, one must note that the situation described above may not correspond to reality.
As a concrete example, I take the pair $(B_1^{(i)}, B_2^{(i)})$ and assume that the system of the two $B$ particles has a center-of-mass energy $\sqrt{\hat{s}_i}$ ($> 2 m_B$). In the laboratory frame, the phase space accessible to the decay products of the two $B$ particles depends on $\sqrt{\hat{s}_i}$. Let us denote such a phase space as $\Phi(\hat{s}_i)$. If I then consider the $j$-th event, with the intention of mixing particles from the $i$-th and $j$-th events, I am forced to confront the fact that the center-of-mass energy in the $j$-th event, $\sqrt{\hat{s}_j}$, and thus the phase space accessible to the $j$-th event's particles, is different from that of the $i$-th event. The mismatch in the phase space accessible to particles in events at different $\sqrt{\hat{s}}$ is clearly a potential source of error in the identification of the "different-decay" and "different-event" distributions, which naturally poses a threat to the successful application of the event mixing technique. It is quite difficult to estimate the size of this inaccuracy, though I expect that for typical situations at hadron colliders the event mixing technique works quite well. One reason for this is the small variance of the boosts of the $B$ particles produced in typical collisions at hadron colliders. In addition to this effect, however, there may be other potential sources of error in the event mixing technique, and a case-by-case study is needed to check the performance of the method. Because of this, I take a pragmatic approach in the following and apply the event mixing technique to our example while explicitly checking its performance for our example process.

3.4 Application to the gluino decay

I now demonstrate how the general strategy detailed above is realized by taking as an example a particular gluino decay channel. I first illustrate the signal process and its particular characteristics, and then move to discussing the possible backgrounds of this signal process. The discussion of these backgrounds is separated into two categories: 1) the real background from SM processes, and 2) the systematic background from incorrect pairings of the final-state particles used in forming the invariant mass and energy distributions. As I detail the various processes involved, I shall use Monte Carlo simulation to generate the relevant event samples, construct and analyze the appropriate kinematic data from these samples, and end with a discussion of the effectiveness of our technique based on the results of this analysis.

3.4.1 Signal process: gluino decay

I apply the general idea developed in the previous sections to the case of pair-produced SUSY gluinos and their subsequent decay into two bottom quarks and a neutralino via a three-body decay:

pp \to \tilde{g}\tilde{g} \to b\bar{b}\, b\bar{b} + \chi\chi   (3.18)

at the 14 TeV LHC. In terms of the notation used in Section 3.1, the gluino and the neutralino correspond to particles $B$ and $A$, respectively, and the two visible particles $a$ and $b$ are the bottom quark and anti-quark in a decay chain. In reality, the particle detector cannot reliably discriminate between bottom and anti-bottom quarks, thus particles $a$ and $b$ in this example are considered indistinguishable.

Though I am using the specific decay above as a concrete example, I emphasize that the decay mode and underlying model at hand are chosen only to enable us to demonstrate the proposed technique and that the general idea can be applied to multi-body decay processes in other models.
I also point out that the applicability of the method is not affected in any way by the strengthening of bounds on supersymmetric particles, because the method is applicable for parent and invisible (child) particles of any mass. To illustrate our technique, I choose the masses of these particles to be $m_{\tilde{g}} = 1.2$ TeV and $m_\chi = 100$ GeV with a decoupled bottom squark, and assume that the only decay mode of the gluino is the three-body decay into $b\bar{b}\chi$ (footnote 7).

Footnote 7: During the completion of this work the limit on the gluino mass given by the LHC experiments has risen to about 1.4 TeV for light $\chi$ [55, 56]. Despite these new limits ruling out the spectrum I consider at the 95% confidence level, this spectrum still serves its purpose as an illustration of the technique. It should be remarked that I do not expect qualitative differences in the application of our strategy to the mass measurement of a heavier, but not yet experimentally excluded, gluino.

The Monte Carlo signal for our study is simulated using MadGraph5 v1.4.8 [26], and the structure of the proton is parametrized by the CTEQ6L1 parton distribution functions (PDFs) [27], evaluated with the default renormalization and factorization scale settings of MadGraph5. The production cross section of the gluino pair is computed with MadGraph5 and is reported in the first column of Table 3.1. Since I assume that all produced gluinos decay into $b\bar{b}\chi$ as described above, $\sigma(pp \to \tilde{g}\tilde{g})$ is equal to $\sigma(pp \to \tilde{g}\tilde{g} \to b\bar{b}b\bar{b}\chi\chi)$.

                 | $4b + \slashed{p}_T$ | $4b + \slashed{p}_T$ | $4b + Z(\to \nu\bar{\nu})$ | $t\bar{t}b\bar{b}$
                 | (before cuts)        | (after cuts)         | (after cuts)               | (after cuts)
  $\sigma$ [fb]  | 54.74                | 36.53                | 0.48                       | 0.15

Table 3.1: The cross sections for the signal process (before and after a set of cuts is imposed) and the (main) background processes (only after cuts are imposed). The cuts are described in eqs. (3.19)-(3.21), and also include all the identification and isolation criteria explained in the text. The effect of the b-tagging efficiency is not taken into account in the numbers in this table.

The neutralinos in the final state of our signal do not interact with the detectors of the LHC, resulting in missing transverse momentum. The four bottom quarks give rise to jets of hadrons, in particular B-hadrons. The particular characteristics of the B-hadrons in these jets allow us to distinguish this type of jet from jets that do not originate from a bottom quark, and it is possible to see the traces of b-quark-initiated jets and tag them in a large fraction of the events. With the requirement that four of the reconstructed jets in the final state have this tag, the signal will feature four bottom jets plus missing transverse momentum, $4b + \slashed{p}_T$.

Before closing this section, I remark that the chosen example process poses an extra challenge in the application of our method. In fact, all visible final-state particles are indistinguishable; hence there are three different ways to form pseudo-particles from these b-tagged final-state particles, and all of them must be considered in the analysis. Together with the SM backgrounds, this can be interpreted as another background, as explained in the next section.
3.4.2 Backgrounds

In this section, I discuss the backgrounds relevant to the signal process defined in the preceding section. As mentioned earlier, there are two types of backgrounds: those coming from Standard Model processes which give rise to the same signature as our supersymmetric process, and that which comes from taking the wrong pairings of two visible particles when I evaluate both the energy sum and the pair-wise invariant mass. I start by discussing the "real" background from the Standard Model and then discuss the combinatorial background.

3.4.2.1 Standard Model backgrounds and event selection

For our collider signature $4b + \slashed{p}_T$, the following two processes in the SM are identified as the major backgrounds:

pp \to b b \bar{b} \bar{b} + Z \to b b \bar{b} \bar{b} + \nu\bar{\nu}   and   pp \to t\bar{t} b\bar{b} .

The Monte Carlo generation of background events is done using the same event generator and input PDFs as those for the signal events. Since the detector signature from these interactions is exactly the same as the one used in our earlier work, Ref. [45], I adopt a similar strategy for handling these backgrounds, with only slight modifications. The $Z$ boson background is irreducible, whereas the $t\bar{t}b\bar{b}$ background is reducible and can be suppressed so that it becomes sub-dominant with respect to the $Z$ boson background. The $t\bar{t}b\bar{b}$ background might seem different from the signal process in terms of its partonic final state, but it can mimic our signal, and thus become a relevant background, by "losing" some of the final-state partons in the detectors. In order to match the signal's detector signature, the two $W$ bosons originating from the decay of the two top quarks must go undetected, which makes those $W$ bosons the main source of $\slashed{p}_T$ in the $t\bar{t}b\bar{b}$ background. Although the rate at which the detector misses the $W$ bosons is expected to be small, the sizable production rate of $pp \to t\bar{t}b\bar{b}$ can compensate for this, thus making the $pp \to t\bar{t}b\bar{b}$ process a possibly important background.

A $W$ boson will go unseen in the detector for primarily two reasons: 1) when its decay products are not within the experimental acceptance region of the detector due to having insufficient $p_T$, too large $|\eta|$, or both, and 2) when its decay products are not adequately isolated from other particles, i.e., they are merged with other particles in the reconstruction of a given event. For the first case, I define as missed any object that satisfies the following criteria:

• for jets, $p_{T,j} < 30$ GeV or $|\eta_j| > 5$,
• for leptons, $p_{T,l} < 10$ GeV or $|\eta_l| > 3$ with $l = e, \mu, \tau$.

In the second case, the following rules determine when a particle is missed:

• for merging jets, $\Delta R_{j_1 j_2} < 0.4$, with $j_1$ and $j_2$ denoting any jet pair, including b-jets,
• for merging leptons, $\Delta R_{jl} < 0.3$, with $j$ and $l$ denoting a jet and a lepton, respectively.

With the acceptance and isolation requirements listed above, I observe that most of the background events are from either the fully leptonic or the semi-leptonic decay channels of top quark pairs, because these channels require that fewer otherwise-visible partons be missed in comparison to the fully hadronic top decay channel.

To devise an adequate strategy for rejecting a large number of these background events while preferentially keeping the signal events, I must adopt event selection criteria that incorporate the kinematic differences between signal and background events. I observe first that, thanks to the heaviness of the parent particles, the signal events will be composed of jets that typically have a larger transverse momentum than those found in background events. This is a strong hint as to which cuts will be suitable in rejecting the background.
However, one must be especially cautious in selecting these cuts, because the method proposed here is based on extracting $E^*_{bb}$, which relies in part on a shape analysis of the energy distributions in each invariant mass slice. Therefore, cuts should be chosen such that they do not considerably distort the energy distributions. For this reason, I prefer using softer cuts than in most searches performed at the LHC, and I choose as our baseline selection criteria

p_{T,b} > 30 \text{ GeV}, \quad |\eta_b| < 5, \quad \Delta R_{bb} > 0.4,   (3.19)

for identifying the bottom jets in all events that I analyze.

In order to further suppress the backgrounds from Standard Model processes, I require that events have a large missing transverse momentum. In signal events, the missing transverse momentum is expected to be determined by some combination of the new particle masses, and thus will be large. On the other hand, the missing transverse momentum in background events is determined by the largest of the total hardness of the event, the mass of the $Z$ boson, or the mass of the top quark. Therefore, a large $\slashed{p}_T$ cut allows us to efficiently discriminate the signal events from the background. However, in our case special care is needed in deciding the scale of this cut, as there is a risk of the cut introducing unwanted bias in the energy distributions. In particular, the missing transverse momentum can be interpreted as the recoil of the invisible particles against the visible ones, which seems to imply that a large $\slashed{p}_T$ cut is likely to select only events with very hard visible particles and correspondingly induce some bias in the b-jet energy spectrum toward higher energies. As mentioned before, this could lead to a misidentification of the value of $E^*_{bb}$ and, as a consequence, an inaccurate measurement of the associated masses. Fortunately, the relatively large mass hierarchy between the gluino and the neutralino in our signal process ensures multiple hard b-jets on average and thus a sizable recoil for the invisible neutralinos. I therefore anticipate that the $E_{bb}$ distribution will only be mildly affected, even with a fairly hard $\slashed{p}_T$ cut. For our signal and backgrounds, I impose

\slashed{p}_T > 200 \text{ GeV} ,   (3.20)

which strongly suppresses the backgrounds with negligible deformation of the $E_{bb}$ distributions.

In addition to the $\slashed{p}_T$ cut, I introduce another cut that requires each b-jet transverse momentum vector to have some minimum angular separation from the $\vec{\slashed{p}}_T$ vector. This enables us to avoid events where the measured missing energy is caused by the mismeasurement of jets. For our analysis I require

\Delta\phi(\vec{\slashed{p}}_T, \vec{p}_T^{\,b}) > 0.2 ,   (3.21)

which has a negligible effect on the shape of the $E_{bb}$ distributions (the full baseline selection is collected in the short sketch at the end of this subsection).

In Table 3.1 I show the cross sections for both signal and background events after applying the set of cuts listed above. I clearly see that the $t\bar{t}b\bar{b}$ background is sub-dominant with respect to the $Z + bbbb$ background. I also remark that the expected signal-to-background ratio (S/B) is large, which is certainly favorable for extracting $E^*_{bb}$ from the $E_{bb}$ distribution. Indeed, if new physics particles are discovered in the forthcoming runs of the LHC, it would be natural to discuss measuring their masses in channels where there is a clean signal, and hence a large S/B. In this sense, the context in which I present our mass measurement technique is expected to be typical when attempting a mass measurement beyond order-of-magnitude precision, as I do here.
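The baseline selections of eqs. (3.19)-(3.21) can be summarized as follows; this is a minimal sketch assuming a simple array-based event layout (a hypothetical stand-in for whatever the analysis framework provides), not the actual analysis code.

```python
import numpy as np

def passes_selection(bjet_pt, bjet_eta, bjet_phi, met_x, met_y):
    """bjet_*: arrays of the b-jet candidates' kinematics; met_x/met_y: missing-pT components (GeV)."""
    # Eq. (3.19): b-jet identification cuts on pT and eta.
    if np.any(bjet_pt < 30.0) or np.any(np.abs(bjet_eta) > 5.0):
        return False
    # Eq. (3.19): pairwise separation Delta R_bb > 0.4.
    n = len(bjet_pt)
    for i in range(n):
        for j in range(i + 1, n):
            dphi = np.abs(bjet_phi[i] - bjet_phi[j])
            dphi = min(dphi, 2.0 * np.pi - dphi)
            deta = bjet_eta[i] - bjet_eta[j]
            if np.hypot(dphi, deta) < 0.4:
                return False
    # Eq. (3.20): missing transverse momentum above 200 GeV.
    if np.hypot(met_x, met_y) < 200.0:
        return False
    # Eq. (3.21): each b-jet separated in phi from the missing-momentum vector.
    met_phi = np.arctan2(met_y, met_x)
    for phi in bjet_phi:
        dphi = np.abs(phi - met_phi)
        dphi = min(dphi, 2.0 * np.pi - dphi)
        if dphi < 0.2:
            return False
    return True
```

As a rough cross-check of the statistics involved (my own estimate, not a number quoted in the text), the post-cut cross sections of Table 3.1 correspond, at the 3 ab$^{-1}$ used for the figures in this chapter and before b-tagging efficiencies, to roughly $36.53\ \text{fb} \times 3000\ \text{fb}^{-1} \approx 1.1 \times 10^5$ signal events against $(0.48 + 0.15)\ \text{fb} \times 3000\ \text{fb}^{-1} \approx 1.9 \times 10^3$ background events.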
Other than the above-mentioned backgrounds, QCD multi-jet production $pp \to b b \bar{b} \bar{b}$ is another possible source of background events from the SM, in which the missing transverse momentum typically arises from imperfectly measuring the energy of jets. Unsurprisingly, an accurate estimation of this background is quite challenging because it involves detector effects. I expect that the QCD multi-jet background would largely be suppressed by the cuts of eqs. (3.19), (3.20) and (3.21), to the point that it becomes sub-dominant, and in the following I do not take this background into account (see, for example, Ref. [45]).

3.4.2.2 Combinatorial background and mixed event subtraction

As mentioned earlier, the procedure of phase-space slicing inevitably requires forming the invariant mass of two visible objects. Since the process of eq. (3.18) under consideration has pair-produced parent particles which both decay to indistinguishable visible children, grouping the four b-jets into two pairs gives the correct choice in only one case out of the three possible combinations. Following the strategy outlined in Section 3.3, I form all possible pairs and obtain an inclusive energy distribution. I then subtract the contributions originating from the wrong combinations by estimating the corresponding distribution through the event mixing technique, as described before. In order to validate the performance of this mixed event subtraction scheme in our example, I first study a large number of pure signal events where no selection cuts are imposed and no backgrounds are included. I then discuss how the inclusion of backgrounds complicates the estimate of the combinatorial background when using the mixed event subtraction.

i) Pure signal: For our study it is necessary to check that the b-jet pairs' energy and invariant mass distributions are well reproduced by the mixed event subtraction scheme. In order to apply it as per eq. (3.16), I need to obtain the distributions of observables given by forming pairs of b-jets belonging to different events, as explained in Section 3.3. In principle, there are several options for choosing the two events used to compute these observables. For example, one can compute the observables using all possible pairs of events, meaning that each event is reused many times, or one can use a procedure such that each event is used only a few times. The detailed way in which the event mixing is done can, in principle, affect the result. However, in most cases, the alternative methods give rise to only minor differences in the relevant result (footnote 8). Among those possibilities, I show results for which the events were mixed as follows: given a sample of N events,

(1) I first randomly shuffle and reorder the events to remove any potential correlation between events arising from the way in which the events were generated,

(2) compute the different-event observables by taking b-jets in the i-th and (i+1)-th events (footnote 9), so that each event is used twice,

(3) finally, renormalize these distributions to weigh as much as the contribution from the incorrect pairings in the signal sample that I intend to remove, i.e., two thirds of the total number of events in the signal sample.

Footnote 8: One can also form "events" out of randomly selected sets of four particles from the entire event sample and compute the observables by forming pairs from the particles now constituting these new "events." I evaluated the efficacy of constructing the distribution to be subtracted using this alternative method and found little difference in the end results, both in the distributions and in the energy and mass values extracted.

Footnote 9: The last event is mixed with the first one.

I label the inclusive invariant mass distribution formed from all the pairs in the same event as $F_{SE}(m_{bb})$, and I denote as $F_{DE}(m_{bb})$ the distribution obtained from pairs in different events.
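The mixing steps (1)-(3) can be written compactly; the sketch below is illustrative only. It assumes each event is given as a list of b-jet four-momenta (a hypothetical data layout), and the number of cross-event pairings kept per pair of events is an arbitrary choice here, since the renormalization of step (3) fixes the overall weight.

```python
import random

def mixed_event_pairs(events, seed=0):
    """Steps (1)-(2): shuffle the events, then pair b-jets of event i with
    b-jets of event i+1 (the last event is mixed with the first one)."""
    rng = random.Random(seed)
    shuffled = events[:]           # events: list of lists of (E, px, py, pz) tuples
    rng.shuffle(shuffled)          # step (1): remove residual ordering correlations
    pairs = []
    n = len(shuffled)
    for i in range(n):             # step (2): mix event i with event i+1, cyclically
        for jet_a in shuffled[i]:
            for jet_b in shuffled[(i + 1) % n]:
                pairs.append((jet_a, jet_b))
    return pairs

def pair_energy(jet_a, jet_b):
    """E_bb observable: scalar sum of the two jet energies."""
    return jet_a[0] + jet_b[0]

# Step (3) then amounts to rescaling the histogram filled from these pairs so that
# its integral matches the weight of the wrong pairings to be removed via eq. (3.16),
# i.e. two thirds of the same-event, all-pairings sample.
```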
Then, given an invariant mass value, I take from the same-event sample all the pairs whose invariant mass lies within the range of interest and plot the spectrum of the energy sum of the two b-jets, $E_{bb}$. I repeat the same operation in the different-event sample, and I obtain the $E_{bb}$ spectrum for the fixed ranges of $m_{bb}$. Denoting the spectrum from the same-event pairs as $f_{SE}(E_{bb})$, the spectrum from different-event pairs as $f_{DE}(E_{bb})$, and the resultant spectrum from the mixed event subtraction as $f_S(E_{bb})$, our estimate of the energy distribution from the correct pairs is

f_S(E_{bb}) = f_{SE}(E_{bb}) - f_{DE}(E_{bb}) ,   (3.22)

from which I will ultimately extract the rest-frame energy value $E^*_{bb}$ for each fixed $m_{bb}$.

[Figure 3.2: The left panel shows the di-b-jet invariant mass distributions (correct pairings only, all pairings, and mixed events subtracted), normalized to an integrated luminosity of 3 ab$^{-1}$. The right panel shows the $R_i$ distribution over $m_{bb}$.]

Similarly to the subtraction for the $E_{bb}$ spectrum, I obtain an estimate of the overall invariant mass distribution for the correct pairings using the mixed event technique, which is

F_S(m_{bb}) = F_{SE}(m_{bb}) - F_{DE}(m_{bb}) .   (3.23)

Note that this distribution is not used to extract the masses, but serves as an example of the effectiveness of the mixed event subtraction scheme. To quantify the fidelity of the distribution obtained from the mixed event subtraction scheme, I define a ratio

R_i \equiv \frac{N_{i,S}}{N_{i,C}} ,   (3.24)

where $N_{i,S}$ is the $i$-th bin count of the "subtracted" distribution $F_S(m_{bb})$ and $N_{i,C}$ is the corresponding bin count in the distribution obtained considering only the correct pairs. The latter is obtained by exploiting the fact that in simulated events the full history of the particles is available. The left panel of Figure 3.2 compares the invariant mass distribution from the correct pairs, shown as the blue dot-dashed histogram, and the distribution obtained by mixed event subtraction, shown as the red solid histogram. To show how much the original invariant mass distribution is contaminated by the combinatorial background, the distribution before the subtraction procedure is also plotted as the green dashed histogram. Each bin count is normalized to an integrated luminosity of 3 ab$^{-1}$. I observe that the distribution obtained from the mixed event subtraction is very close to that obtained from the correct pairs. A more quantitative comparison is also provided in the right panel of Figure 3.2, showing the bin-by-bin ratio $R_i$ for the distribution of the events against $m_{bb}$. All of the bin counts are quite close to their associated theoretical values; in fact, $R_i \sim 1$ over the whole range of $m_{bb}$. I do not show the ratio $R_i$ in the vicinity of the two kinematic endpoints because the significantly smaller $N_{i,C}$ at the endpoints leads to unreliable $R_i$ values.
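The fidelity check of eq. (3.24), together with the FWHM-averaged version $\langle R \rangle$ used in the discussion that follows, is simple to implement. The sketch below is illustrative only; in particular, the FWHM region is approximated here as the set of bins above half of the maximum of the correct-pair histogram.

```python
import numpy as np

def fidelity_ratio(hist_subtracted, hist_correct):
    """Bin-by-bin ratio R_i = N_{i,S} / N_{i,C} of eq. (3.24); bins with no
    correct-pair entries are returned as NaN rather than divided by zero."""
    sub = np.asarray(hist_subtracted, dtype=float)
    cor = np.asarray(hist_correct, dtype=float)
    return np.where(cor > 0, sub / np.maximum(cor, 1e-12), np.nan)

def mean_R_in_fwhm(hist_subtracted, hist_correct):
    """Average <R> restricted to the (approximate) FWHM region of the correct-pair spectrum."""
    R = fidelity_ratio(hist_subtracted, hist_correct)
    cor = np.asarray(hist_correct, dtype=float)
    in_fwhm = cor >= 0.5 * cor.max()
    return np.nanmean(R[in_fwhm])
```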
Besides visualizing the effectiveness of the mixed event subtraction, this check enables us to confirm that, for a given invariant mass slice, a similar amount of data remains available after the mixed event subtraction compared to what would have been available were I able to completely eliminate the combinatorial background.

[Figure 3.3: The two plots in the left panel show the di-b-jet energy distributions for the 300 GeV (top) and 700 GeV (bottom) nominal mass slices (correct pairings only vs. mixed events subtracted), normalized to an integrated luminosity of 3 ab$^{-1}$; the annotated averages are $\langle R \rangle = 0.96$ for $m_{bb} \in [275, 325]$ GeV and $\langle R \rangle = 0.99$ for $m_{bb} \in [675, 725]$ GeV. The color code is the same as in Figure 3.2. The two plots in the right panel show the respective $R$ distributions over $E_{bb}$. For computing $\langle R \rangle$, only the data within the two black vertical dashed lines is taken into account.]

In Figure 3.3 I compare the energy distribution from the correct pairs with that obtained from the mixed event subtraction. The two distributions are shown for the mass slice 275 GeV $\leq m_{bb} \leq$ 325 GeV in the upper panel of the figure and for another mass slice, 675 GeV $\leq m_{bb} \leq$ 725 GeV, in the bottom panel. The ratio of the correct-pairs and the subtracted histograms is provided for each choice of $m_{bb}$ as well. Since the fit to extract $E^*_{bb}$ from the energy spectra will be performed using only the data around the peak, I show the bin-by-bin ratio only for the energy range that corresponds to the full width at half maximum (FWHM), which is indicated by black dashed lines in each plot. In this case, I see that the energy spectrum processed with the mixed event subtraction (blue histogram) is also quite close to the associated theory expectation (red histogram). To be more quantitative, I compute the average of $R_i$ in the FWHM range. This average is denoted as $\langle R \rangle$ and is close to 1, which suggests that the mixed event subtraction scheme works quite well, i.e., the shape of the energy spectrum is reasonably preserved. From this exploratory analysis, I expect that the extraction of $E^*_{bb}$ from the energy distribution obtained by the mixed event subtraction is unlikely to suffer major bias due to the subtraction.

ii) Background and "signal-background interference": Once the SM backgrounds come into play, there is a non-trivial complication introduced by the event mixing. Since I do not know a priori whether a given event is from the signal or the background, it is not possible to perform the event mixing using only the signal events. Therefore, the distribution returned by the whole mixed event subtraction procedure contains "signal-background interference", i.e., pairs made by picking one particle from a signal event and the other from a background event.
ii) Background and "signal-background interference": Once the SM backgrounds come into play, there is a non-trivial complication that is introduced by the event mixing. Since I do not know a priori whether a given event is from the signal or the background, it is not possible to perform the event mixing using only the signal events. Therefore, the distribution returned by the whole operation of the mixed event subtraction scheme contains "signal-background interference", i.e., pairs made by picking one particle from a signal event and the other from a background event. In principle, the overall kinematic characteristics of the background events differ from those of the signal events, and therefore these interference pairs will make the overall distribution deviate from that of the pure-signal or pure-background combinatorially-generated background. (For the dominant background in our case, i.e., Z + 4b, the pure background combinatorial distribution is somewhat tricky: the distinction between correct and incorrect pairings is meaningless because the associated event topology is ill-defined. However, in the interest of generality, I imagine that our background can also give rise to fictitious correct and wrong combinations.) As a consequence, a naively subtracted distribution would be distorted with respect to the distribution of a pure signal sample.

To understand the quantitative impact of the inclusion of physical backgrounds, I first need to assess the hierarchy of the effects that arise from the simple addition of these backgrounds and from the event mixing. I focus on the situation where nev events have been collected after the application of the selection cuts, e.g., eqs. (3.19)-(3.21). These events come both from the signal process and from background processes. In general, I have ns signal events and nb = nev − ns background events. (Since different types of backgrounds will, in principle, form different distributions, I here assume only a single type of background to avoid any potential complication.) However, in a situation in which a mass measurement is attempted, I expect that the signal will dominate the backgrounds, ns ≫ nb. Under this assumption, I can quantify how likely the event mixing procedure is to form pairs where both particles come from the signal process, both particles come from the background processes, or one particle comes from the signal and the other from the background. Clearly, the probabilities to select a particle from a signal or a background event in the sample are

ps = ns / (ns + nb) ≃ 1 − nb/ns ,   pb = nb / (ns + nb) ≃ nb/ns ,    (3.25)

for a signal and a background event, respectively. Therefore, most of the pairs formed in the event mixing procedure are made with two particles from the signal process. Pairs made of two particles from the background are much less abundant, and in fact arise only in a small fraction, n²b/n²s, of the cases. Strikingly, pairs made with one particle coming from the background and one particle from the signal are much more abundant than pure background pairs, as their probability is 2 nb/ns.
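The hierarchy implied by eq. (3.25) can be made concrete with a small numerical illustration; the event counts below are hypothetical and serve only to show the relative sizes of the three pair categories.

```python
# Hypothetical post-cut event counts, chosen only to illustrate eq. (3.25);
# they are not the actual yields of the analysis.
n_s, n_b = 10000, 500          # signal-dominated sample, n_s >> n_b

p_s = n_s / (n_s + n_b)        # probability to draw a signal event
p_b = n_b / (n_s + n_b)        # probability to draw a background event

# Fractions of mixed pairs by category (to leading order in n_b/n_s):
frac_ss = p_s ** 2             # both events from signal
frac_bb = p_b ** 2             # both events from background  ~ (n_b/n_s)^2
frac_sb = 2 * p_s * p_b        # one signal, one background    ~ 2 n_b/n_s

print(f"signal-signal         : {frac_ss:.4f}")
print(f"background-background : {frac_bb:.6f}")
print(f"signal-background     : {frac_sb:.4f}  (~ 2 n_b/n_s = {2 * n_b / n_s:.4f})")
```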
The effect of the pairs involving both background and signal is not predictable unless one specifies the signal. However, some general features of this "interference" contribution to the event-mixing estimate of the distribution for the correct signal pairings can easily be guessed. First, the "interference" distribution tends to produce an underestimation of the bin counts in the estimate, eq. (3.16), of the distribution consisting of pairs of b-jets originating from the same decay. The reason is that in the inclusive distribution from all pairs in the same event, the first term in eq. (3.16), there are contributions stemming from pairs whose particles come either both from a signal event or both from a background event, but there is no way to construct a hybrid of the two:

dσ/dEbb (all pairs) = dσS/dEbb + dσB/dEbb ,    (3.26)

where by σS and σB I mean the signal and background contributions, respectively. On the other hand, in mixed events I have

dσ/dEbb (different-event pairs) = dσSS/dEbb + dσSB/dEbb + dσBB/dEbb .    (3.27)

As suggested by Figure 3.3, the contribution from pairs where both events come from the signal, dσSS/dEbb, does a good job of estimating the effect of the pairs of b-jets not coming from the same decay in the signal. Similarly, the contribution from pairs of events where both events come from the background, dσBB/dEbb, is a good estimate of the combinatorial background generated by the background itself. Therefore, the contribution dσSB/dEbb is the piece that typically ruins the result, because it gives rise to an excessive subtraction in eq. (3.16). Obviously, this phenomenon cannot be avoided, since I am unable to distinguish signal and background events with absolute certainty. The presence of this type of "interference" background is inherent to the event mixing technique, and dealing with it requires special care. A more quantitative argument about the interference is available in App. A for interested readers.

Given its peculiar origin, it may be desirable in some cases to remove the "signal-background interference" contribution. In order to do so, one must discuss the shape of this distribution, which in general depends on the signal and therefore is a priori unknown. However, some general features of the "signal-background interference" distribution can be predicted using the following argument. The distribution that arises from pairs made of one background and one signal particle feels in part the kinematics of the signal events and in part that of the background events. The background is typically expected to have softer particles than the signal, and therefore the "interference" b-jet pair energy distribution is expected to be skewed towards energies that are somewhat larger than the characteristic values of the distribution for pairs made out of two background particles.

Figure 3.4: The di-b-jet energy distributions of the true background and the interference (legend: "Interference", "Background only", "Combined") for the 300 GeV (left panel) and 700 GeV (right panel) nominal mass slices, normalized to an integrated luminosity of 3 ab−1. The black distribution is obtained by subtracting the red one from the blue one. The vertical black dashed lines denote the associated fitting range for each slice. The bottom panels show the performance of the proposed fitting template for the effective backgrounds.

For our example process, I display the distribution from pairs of signal particles in Figure 3.3, while the distribution from pairs of background particles and the interference distribution are shown in Figure 3.4. Comparison of these distributions confirms our intuition from the argument above. From Figure 3.4 I see that the interference effect dominates the pure background effect essentially everywhere in the distribution.
As discussed above, this is due to the fact that I envision a situation where signal events are much more abundant than background events. The shape of the interference contribution is also quite different from that of the pure background contribution. When discussing results in a later section, I will study the effect of the removal of background contributions on our results. With this goal in mind, I study what functional form describes the total effect of backgrounds, that is, the distribution from the pure background pairs plus that from the "signal-background interference" pairs. The combination is shown in Figure 3.4; in the lower panels, I show a possible fit of this distribution. Due to the importance of the signal in determining the shape of the "signal-background interference" distribution, I decided to model the total effect of backgrounds with a function of the family of eq. (3.12). The fit result in Figure 3.4 is rather good, but I do not attach any special significance to this finding. In fact, a better description of this background may exist and might be preferred. More generally, I stress that the "signal-background interference" distribution is not universal, and our choice could be unreliable for other signals. In our application to the gluino decay process, the fairly good description provided by eq. (3.12) and shown in Figure 3.4 is satisfactory for our current purposes.

Figure 3.5: Average in the FWHM range of the bin-by-bin ratio of the energy distributions from the event mixing and from just the correct pairs of b-jets. 〈R〉 = 1 implies a good match.

3.5 Mass measurement results and discussion

In this section, I demonstrate the application of the proposed technique to the gluino decay. Results on the mass measurement from fitting the energy spectra of the compound system of two b-jets are presented in the following subsections, along with the possible issues and limitations of our method. In the final subsection, I discuss possible improvements of the mass measurement with the aid of the di-jet invariant mass endpoint.

3.5.1 Measurement of gluino and neutralino masses

Following the strategy outlined in the previous sections, I present results for the determination of the masses of the gluino and the neutralino from the b-jet energies. The energy spectra that I study are obtained from simulated event samples generated as described in Sec. 3.4.1 for both the signal and the dominant background processes at the 14 TeV LHC. I also recall that the relevant channel is characterized by large missing transverse momentum and four bottom-tagged jets, which are selected as per eqs. (3.19)-(3.21). Since the primary interest of this chapter is to study the theoretical aspects of energy peaks in a multi-body decay, rather than data analysis under realistic statistics, I take a sufficiently large number of events to minimize potential statistical fluctuations within the data sample, which is then normalized to an integrated luminosity of 3 ab−1.

Figure 3.6: Comparison of the goodness of fit between the massive template eq. (3.12) (blue) and the massless template eq. (3.11) (red) as fitters of the energy spectrum of the b-jet pair system, for various values of the invariant mass of the b-jet system m̄bb.
Note that I study the distribution of the sum of the energies of the b-jet pair (say, b1 and b2), i.e., Ebb = Eb1 + Eb2, for pairs whose invariant mass values belong to a narrow range such that

mbb ∈ [m̄bb − ∆mbb/2, m̄bb + ∆mbb/2] ,    (3.28)

where ∆mbb denotes the width of the mass window. I henceforth identify every individual mass window by its central value m̄bb. Due to the indistinguishable nature of the final state particles, I form the Ebb distribution from all possible pairs of b-jets in an event, and subsequently apply the mixed event subtraction technique described in Sec. 3.4.2.2 to eliminate the contamination from the pairs of b-jets not coming from the same gluino.
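The slicing of eq. (3.28) amounts to a simple selection on the per-pair invariant mass before histogramming Ebb; a minimal numpy sketch is given below, with hypothetical array names and a toy data set in place of the simulated events.

```python
import numpy as np

def ebb_spectrum_in_slice(m_bb, e_bb, m_center, dm=50.0, e_bins=None):
    """Histogram of E_bb for pairs whose invariant mass falls in
    [m_center - dm/2, m_center + dm/2], cf. eq. (3.28).

    m_bb, e_bb : arrays of per-pair invariant mass and energy sum (GeV)
    """
    if e_bins is None:
        e_bins = np.arange(0.0, 2520.0, 20.0)   # 20 GeV bins, as in Table 3.2
    in_slice = np.abs(m_bb - m_center) <= 0.5 * dm
    return np.histogram(e_bb[in_slice], bins=e_bins)

# toy usage on randomly generated pair kinematics (illustration only)
rng = np.random.default_rng(0)
m_pairs = rng.uniform(50.0, 1100.0, size=20000)
e_pairs = 600.0 + 0.4 * m_pairs + rng.normal(0.0, 150.0, size=m_pairs.size)
counts, edges = ebb_spectrum_in_slice(m_pairs, e_pairs, m_center=300.0)
print("pairs in the 300 GeV slice:", counts.sum())
```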
In Figure 3.5, I present the average of the bin-by-bin ratio of the distributions from only the correct pairs and from the mixed event subtraction technique at various values of m̄bb. I remark that for each point in the figure, the average is limited to the energy range defined by the full width at half maximum (FWHM) of the distribution. The figure suggests that the average deviation between the two distributions is, at most, about 8%, meaning that the mixed event subtraction scheme reproduces the original distribution fairly well. Although I do not show it here, I also remark that for a given m̄bb the standard deviation of the bin-by-bin ratio is small enough that the distribution from the mixed event subtraction consistently tracks the corresponding distribution from the correct pairs.

For each m̄bb, the rest-frame energy of the b-jet pair system (i.e., E∗bb) is extracted from the energy distribution by fitting the data to a template function. I have two possible template functions, given in eqs. (3.11) and (3.12), and I use both of them to see which one better suits the data and to see whether there are significant differences between the rest-frame energy values E∗ found by the two functions. Since eq. (3.11) is suitable only for the case where the relevant m̄bb is effectively negligible, I expect an increasing discrepancy between the results obtained by eqs. (3.11) and (3.12) as m̄bb grows. The reduced χ² values for each fit to the energy spectrum are shown in Figure 3.6. This χ² is a measure of how well the template function describes the data globally, but I do not necessarily attach any statistical meaning to it; I instead use it as a measure of the distance between the two template functions. Looking at the figure, I observe that for all m̄bb the massive template describes the data as well as or better than the massless template. As expected, the performance of the massless template becomes progressively worse as m̄bb increases. The massless template also seems inferior to the massive template in another respect, as it typically returns an estimate of the rest-frame energy that is larger than the expected value. On the other hand, the massive template does not introduce such a pronounced bias. To demonstrate the difference between fits made with the two templates, I show in Figure 3.7 sample fit results for two different nominal m̄bb values, 250 GeV and 650 GeV. In the left panels, I provide results from applying the massive template, while in the right panels I provide results from applying the massless template. The errors quoted for the extracted E∗bb were estimated at the 95% confidence interval (C.I.) from the variation of the χ² of the fit.

For both of the m̄bb values, I see that the massless template estimates E∗bb as being slightly larger than the estimate given by the massive template, and the discrepancy grows as I go to larger m̄bb. Although the discrepancy is within the 95% confidence interval, I consider this an important characteristic of the fit results. In fact, the massless template has a systematic tendency to return larger E∗bb in general, which implies the introduction of a possible bias into the mass measurement. The results of the fits for all of the values of m̄bb are reported in Table 3.2, from which I see that the massless template consistently overshoots the estimate of E∗bb obtained from the massive template, and that it also overestimates the theory values of E∗bb.

Figure 3.7: Sample fit results for extracting E∗bb using the massless template (right panels) and the massive template (left panels). The chosen invariant mass slices are mbb ∈ [225, 275] GeV (top panels) and mbb ∈ [625, 675] GeV (bottom panels). I report only statistical errors. Each fit range is chosen such that it roughly corresponds to the relevant FWHM. The fitted values displayed in the panels are E∗bb = 627.2 +18.7/−19.5 GeV (χ²/d.o.f. = 0.039, massive) and 634.8 +18.7/−19.5 GeV (χ²/d.o.f. = 0.035, massless) for the 250 GeV slice, and 802.2 +21.0/−36.9 GeV (χ²/d.o.f. = 0.030, massive) and 810.0 +26.8/−40.3 GeV (χ²/d.o.f. = 0.063, massless) for the 650 GeV slice.

Figure 3.8: The fit of the data points (m²bb, E∗bb) with eq. (3.29). The theoretical expectation for the given mass spectrum (mg̃ = 1200 GeV, mχ̃ = 100 GeV) is represented by the solid black line. The data points obtained by fitting the Ebb distributions with the massless and massive templates are marked by "cross" and "square" symbols, respectively. The mass measurement done with the cross symbols is represented by the red dashed line. For the blue dot-dashed line, the measurement is done with the data points for which the massless template works reasonably well. The best-fit values displayed in the legend correspond to eqs. (3.34) and (3.35), obtained from the mbb ∈ [200, 650] GeV (massive) and mbb ∈ [200, 600] GeV (massless) ranges.
Finally, I take all the values of E∗bb obtained from fitting the energy spectrum for each m̄bb and fit them to the line given by eq. (3.3), taking into account and displaying the associated errors from the fit procedure that I used to extract the values of E∗bb. The expression in eq. (3.3) can be adjusted to match our example process as follows:

E∗bb = (m²g̃ − m²χ + m̄²bb) / (2 mg̃) .    (3.29)

I perform the fit of eq. (3.29) on both the results from the massive template and those from the massless template. For the fits using the massive template, I use only the results obtained for m̄bb in the range from 200 GeV to 650 GeV, in which the errors from the fit of the energy spectra are quite small. Other choices of the m̄bb range give similar mass measurements, but I simply make a conservative choice for the range of m̄bb included so as to avoid the values of m̄bb where fewer events are expected.

m̄bb | Fit range | Theory | Massive [χ²/d.o.f.] | Massless [χ²/d.o.f.]
200 | [400, 1000] | 612.5 | 614.8 +21.4/−24.0 [0.064] | 619.1 +21.4/−24.2 [0.054]
250 | [400, 1000] | 621.9 | 627.2 +18.7/−19.5 [0.039] | 634.8 +18.7/−19.5 [0.035]
300 | [400, 1000] | 633.3 | 640.9 +15.5/−16.0 [0.089] | 654.3 +15.4/−16.0 [0.057]
350 | [440, 1000] | 646.9 | 659.2 +16.2/−16.2 [0.074] | 673.1 +15.9/−16.2 [0.063]
400 | [440, 1040] | 662.5 | 670.7 +14.3/−14.0 [0.074] | 693.7 +13.6/−13.5 [0.061]
450 | [500, 1040] | 680.2 | 694.8 +14.9/−15.5 [0.041] | 715.9 +14.8/−15.6 [0.037]
500 | [540, 1040] | 700.0 | 716.3 +14.8/−15.9 [0.033] | 738.8 +14.8/−16.2 [0.050]
550 | [600, 1100] | 721.9 | 742.1 +16.7/−21.1 [0.038] | 760.0 +18.5/−23.8 [0.064]
600 | [640, 1100] | 745.8 | 768.9 +17.6/−27.2 [0.026] | 787.9 +19.3/−26.3 [0.074]
650 | [700, 1200] | 771.9 | 802.2 +21.0/−36.9 [0.030] | 810.0 +29.0/−40.3 [0.063]
700 | [740, 1240] | 800.0 | 832.7 +23.4/−132.7 [0.011] | 840.9 +21.4/−45.0 [0.060]
750 | [800, 1300] | 830.2 | 871.4 +28.4/−121.5 [0.017] | 865.3 +39.3/−68.7 [0.060]
800 | [840, 1340] | 862.5 | 910.5 +28.3/−110.6 [0.024] | 908.3 +37.8/−64.3 [0.067]
850 | [880, 1340] | 896.9 | 952.8 +29.4/−102.9 [0.019] | 961.1 +34.4/−60.7 [0.12]
900 | [920, 1400] | 933.3 | 998.0 +29.7/−98.0 [0.040] | 1015.2 +31.6/−52.6 [0.21]

Table 3.2: The fit results for the fifteen invariant mass slices. For each fit, a mass slice of 50 GeV was chosen; for example, for m̄bb = 200, Ebb is selected such that the corresponding mbb is between 175 and 225 GeV. The bin size for all energy distributions is 20 GeV. The error estimation for each fit parameter is performed at the 95% confidence interval. All values but χ² are given in GeV.
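For orientation, a straight-line fit of the (m̄²bb, E∗bb) points of Table 3.2 can be sketched as below. It uses only the central values of the massive-template column for m̄bb = 200-650 GeV, with symmetrized errors and a simple weighted polynomial fit, so it is expected to land only approximately on the best-fit values quoted below in eq. (3.32); the actual fit uses the full (asymmetric) error information.

```python
import numpy as np

# Central values and symmetrized 95% C.I. errors from the "Massive" column of
# Table 3.2, for the slices m_bar = 200-650 GeV used in the nominal fit.
m_bar = np.array([200., 250., 300., 350., 400., 450., 500., 550., 600., 650.])
e_star = np.array([614.8, 627.2, 640.9, 659.2, 670.7, 694.8,
                   716.3, 742.1, 768.9, 802.2])
err = np.array([22.7, 19.1, 15.8, 16.2, 14.2, 15.2, 15.4, 18.9, 22.4, 29.0])

# Fit E* = s * m_bar^2 + y, cf. eq. (3.29) with s = 1/(2 m_gluino) and
# y = (m_gluino^2 - m_chi^2)/(2 m_gluino); weights are 1/sigma for polyfit.
x = m_bar ** 2
(s, y), cov = np.polyfit(x, e_star, deg=1, w=1.0 / err, cov=True)

print(f"slope s     = {s:.2e} GeV^-1")
print(f"intercept y = {y:.1f} GeV")
```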
From the results extracted using the massless template, I choose to fit eq. (3.29) only for m̄bb in the range from 200 GeV to 600 GeV, where it is more reasonably accurate to treat the b-jet pair system as massless. The fit parameters are the slope of eq. (3.29) and its vertical intercept (that is, E∗bb at m̄bb = 0), and I denote them as

s ≡ 1 / (2 mg̃)  and  y ≡ (m²g̃ − m²χ̃) / (2 mg̃) .    (3.30)

For our spectrum, the theory values are

s = 4.2 × 10⁻⁴ GeV⁻¹ ,  y = 595 GeV .    (3.31)

Fitting the line eq. (3.29) to the results obtained from the energy spectra, I obtain the best-fit lines shown in Figure 3.8, which correspond to

s = (4.8 ± 0.3) × 10⁻⁴ GeV⁻¹ ,  y = 597 ± 5 GeV ,    (3.32)

for the massive template and

s = (5.2 ± 0.3) × 10⁻⁴ GeV⁻¹ ,  y = 606 ± 6 GeV ,    (3.33)

for the massless template. Not surprisingly, the values extracted using the massive template are closer to the input values than those from the massless template, although even the massless template was able to give a rough estimate of the rest-frame energy values of the bb system for the chosen values of m̄bb.

Figure 3.9: Contour plots in the plane of s (= 1/(2mg̃)) vs. y (= (m²g̃ − m²χ̃)/(2mg̃)) around the best-fit values, for the fit results with the massive (left panel) and massless (right panel) templates.

In Figure 3.9, I also show the 68% and 95% confidence level contours obtained from the χ² variation of the fit of the slope and intercept parameters; the results with the massive template are in the left panel and those with the massless template in the right panel. One can clearly see that the distance between the theory values and the best-fit values for the case of the massive template (left panel) is smaller than that for the case of the massless template (right panel). As mentioned before, s and y can easily be converted into the masses of the gluino and the neutralino. Based on eqs. (3.32) and (3.33), I obtain the following measurements of the two masses:

Massive template: mg̃ = 1042 ± 65 GeV , m²χ = −159000 ± 59000 GeV² ,    (3.34)
Massless template: mg̃ = 964 ± 56 GeV , m²χ = −240000 ± 41000 GeV² .    (3.35)

I remark that the gluino mass, while quite precisely determined, is underestimated by about 20%, with the value from the massive template being closer to the true value than the value from the massless template. The neutralino mass is poorly determined using both the massive and the massless template. Possible causes of this poor estimation will be discussed in the next subsection.
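The conversion from the fitted (s, y) to the two masses follows from inverting eq. (3.30): mg̃ = 1/(2s) and m²χ = m²g̃ − 2 mg̃ y. The short check below uses the central values of eqs. (3.32) and (3.33) and ignores error propagation, so it reproduces the central values of eqs. (3.34) and (3.35) only up to the rounding of the quoted s and y.

```python
def masses_from_line(s, y):
    """Invert eq. (3.30): s = 1/(2 m_gluino), y = (m_gluino^2 - m_chi^2)/(2 m_gluino).

    Returns (m_gluino, m_chi^2); m_chi^2 can come out negative if the fitted
    line is biased, as happens in eqs. (3.34)-(3.35)."""
    m_gluino = 1.0 / (2.0 * s)
    m_chi_sq = m_gluino ** 2 - 2.0 * m_gluino * y
    return m_gluino, m_chi_sq

# central values of eqs. (3.32) and (3.33)
for label, s, y in [("massive ", 4.8e-4, 597.0), ("massless", 5.2e-4, 606.0)]:
    mg, mchi2 = masses_from_line(s, y)
    print(f"{label}: m_gluino ~ {mg:6.0f} GeV, m_chi^2 ~ {mchi2:+.2e} GeV^2")
```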
3.5.2 Study of systematic effects

The results in the previous subsection on the measurement of the gluino and neutralino masses are fairly good, considering the challenging circumstances of the mass measurement, particularly the fully indistinguishable character of the final state particles in our chosen signal process. Despite an adequate result for the gluino mass measurement, the neutralino mass measurement is very poor; the only conclusion that one is able to draw is that the neutralino in our example process is consistent with being massless.

| | slope steeper | slope consistent | slope shallower |
| y-intercept larger | mg̃,ext < mg̃,in ; m²χ,ext ≪ m²χ,in | mg̃,ext ≈ mg̃,in ; m²χ,ext < m²χ,in | mg̃,ext > mg̃,in ; m²χ,ext ≈ m²χ,in |
| y-intercept consistent | mg̃,ext < mg̃,in ; m²χ,ext < m²χ,in | mg̃,ext ≈ mg̃,in ; m²χ,ext ≈ m²χ,in | mg̃,ext > mg̃,in ; m²χ,ext > m²χ,in |
| y-intercept smaller | mg̃,ext < mg̃,in ; m²χ,ext ≈ m²χ,in | mg̃,ext ≈ mg̃,in ; m²χ,ext > m²χ,in | mg̃,ext > mg̃,in ; m²χ,ext ≫ m²χ,in |

Table 3.3: Comparisons of the extracted mass parameters ("ext") with the corresponding input values ("in") for the nine possible combinations of over-, under-, or consistent estimation of the slope and the intercept of the straight line eq. (3.29) fitted to the (m̄bb, E∗bb) data. The highlighted (orange) cell corresponds to the result of the fit of the data in Sec. 3.5.1.

As noted already, the measurement of E∗bb for each m̄bb is statistically compatible with the theory value, but E∗bb is nonetheless systematically overestimated, which results in a biased mass measurement. From the fit of the data in Figure 3.8, the mismeasurement of E∗bb primarily implies a slope larger than that predicted by theory, which consequently implies that the extracted gluino mass is biased towards values smaller than the true mass. This bias is not particularly worrisome per se, as it is about 10%. However, given the relation between the masses and the observables in eq. (3.30), it turns out that this underestimation of the gluino mass severely affects the neutralino mass determination. More generally, there are nine possible cases based on under-, over-, or consistent estimation of the slope and intercept of the straight line in eq. (3.29). The implication of each case in terms of the extracted mass parameters is summarized in Table 3.3.

It is interesting to examine possible causes of this bias in the best-fit line of Figure 3.8, which also serves as a basis for possible improvements of our method. In order to clarify the origin of the incorrect estimation of E∗, I study the following potential sources of inaccuracy in our fits of the energy spectra:

i) an imperfect fit of the data with the massive template eq. (3.12);
ii) contamination due to the background;
iii) biases introduced by the event mixing subtraction;
iv) the finite size of the mbb range used to discretize the multi-body phase space;
v) biases due to event selection.

For the first potential source, I recall the discussion in Section 3.2 where the massive template function was introduced; this template has a maximum at E∗bb only when w → ∞, which corresponds to producing the gluino(s) at rest. For practical cases, w is finite and the maximum of the function appears at a somewhat larger value than E∗bb. On the other hand, physical energy distributions can have their maximum at E∗bb, in particular for cases where mbb can be treated as effectively massless. Therefore, the relevant fit could result in a value that does not match the corresponding expectation. Fortunately, for the case at hand, I find that w is large enough to cause only a negligible shift in the peak position, i.e., such a potential mismatch is very tiny. Consequently, I do not ascribe the systematic overestimate of E∗bb to any inaccuracy of the template function eq. (3.12).

In order to see the effect of the other four potential sources of bias on the final result, I conduct a dedicated analysis for each. In each analysis, I repeat the same procedure as described in the previous section, that is to say, I extract the values of E∗bb from an event sample that incorporates the effect under study. The event samples for the study of these possible effects are denoted as "Check Samples" (CS). I then compare the results obtained from those Check Samples to those obtained from the Original Sample (OS). The attributes of the check samples that I have considered are summarized in Table 3.4 and are also described in the following.

The first check sample enables us to find the effect of the background on the extraction of E∗bb. I first study the pure background energy distributions in order to calibrate the template function describing them in the fit. This calibration is done for each of the m̄bb slices.
Although there are two types of backgrounds, the dominant SM background and the interference from the event mixing, I employ a single template of the form of eq. (3.12) to describe both of them collectively. I then repeat the fit of the energy spectra for each m̄bb, including the template function for the backgrounds as well. The results for the determination of E∗bb for each m̄bb are labeled as "CS I" and plotted as red open circles in Figure 3.10. In this figure, the left panel shows the absolute shift of the measured E∗bb from the corresponding theory value for each sample, while the right panel shows the ratio of the values of E∗bb from the check samples to the corresponding values in the original sample. I observe that the effect of background modeling is negligible for all m̄bb; from the right panel of Figure 3.10, I can in fact see that it engenders less than a 1% shift in E∗bb.

Sample | ∆mbb | Event mixing | Background | Background fit | Cuts
OS | 50 GeV | Yes | Included | No | Yes
CS I | 50 GeV | Yes | Included | Yes | Yes
CS II | 50 GeV | No | Included | No | Yes
CS III | 50 GeV | No | Not included | – | Yes
CS IV | 2 GeV | No | Not included | – | Yes
CS V | 2 GeV | No | Not included | – | No

Table 3.4: Description of the original sample (OS) and the selected check samples (CS). The width of the mbb ranges used to discretize the multi-body phase space is reported in the ∆mbb column. Samples marked "Yes" under "Event mixing" are those in which the mixed event subtraction has been carried out; in those marked "No", the correct pairs are identified from the event record, which eliminates the effect of the combinatorial background. The "Background" column indicates whether the SM background has been included or completely neglected. For the samples where the background has been added, the "Background fit" column reports whether a template for the background events has been added to the overall fit of the data. Finally, the "Cuts" column reports whether the selection cuts of eqs. (3.19)-(3.21) have been applied to the events.

The next potential source of bias that I study is the event mixing, which is studied with the second and third check samples, denoted by CS II and CS III in the following. In these samples I use the event record to identify the correct pairs of jets coming from the same gluino, and therefore I obtain the correct energy spectra without applying the mixed event subtraction. The two samples CS II and CS III differ by the inclusion of the SM background. The results for the determination of E∗bb are reported in Figure 3.10 by blue filled triangles and blue open triangles, respectively. From the figure I see that the determination of E∗bb is significantly improved. In fact, in the left panel of Figure 3.10 I can see that a mild positive shift of the determined E∗bb values still exists, but it is greatly reduced compared to what I had with the original sample. I remark further that only minor differences are found between the results obtained from the check samples CS II and CS III, which can be taken as another way of confirming that the effect from background events is negligible. Therefore, I conclude that the effect of event mixing is a major cause of the shift that I observe in the gluino mass determination.

Next, I study the effect of the discretization of the mbb spectrum by taking smaller ranges for the m̄bb window. This is performed on the check sample denoted as CS IV.
Figure 3.10: Comparisons of fit results from the five samples used to assess the effects of several potential sources of bias in the gluino mass determination, as described in Table 3.4.

This narrow mbb range analysis is intended to provide an improvement over the previous two analyses, in which the combinatorial issues were artificially resolved using the information in the event record. In fact, the check sample CS IV for this analysis is similar to CS III except for the value of ∆mbb. The results of this analysis are reported in Figure 3.10 by black filled rhombuses, and they suggest that the effect of the discretization of the mbb spectrum is negligible. This is not surprising in light of the following observation: one can easily figure out that, for a given nominal value of m̄bb, the E∗bb from eq. (3.29) for (m̄bb + 25) GeV ((m̄bb − 25) GeV) is at most about 15 GeV larger (smaller) than the E∗bb for the nominal m̄bb. The absolute size of this shift is already quite small, and it is further reduced by the fact that for each mbb range I observe the sum of all the contributions below and above m̄bb. In all, I expect a very small net effect, making the discretization of the mbb spectrum in increments of 50 GeV suitable for the precision sought.
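The "at most about 15 GeV" statement can be verified directly from eq. (3.29) with the input masses of the study point (mg̃ = 1200 GeV, mχ = 100 GeV); the short check below evaluates the shift of E∗bb between the centre and the edges of a 50 GeV slice for two representative values of m̄bb in the fitted range.

```python
def e_star_bb(m_bb, m_gluino=1200.0, m_chi=100.0):
    """Rest-frame energy of the b-jet pair, eq. (3.29), in GeV."""
    return (m_gluino**2 - m_chi**2 + m_bb**2) / (2.0 * m_gluino)

# Shift of E*_bb when moving from the slice centre to its edges (+/- 25 GeV).
for m_bar in (200.0, 650.0):
    up = e_star_bb(m_bar + 25.0) - e_star_bb(m_bar)
    down = e_star_bb(m_bar) - e_star_bb(m_bar - 25.0)
    print(f"m_bar = {m_bar:5.0f} GeV: +{up:4.1f} GeV / -{down:4.1f} GeV")
```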
Finally, I study the bias induced by the selection cuts. To assess their effect, I produce a sample along the lines of CS IV, but fully inclusive in the signal phase space. The results of fits performed on the energy spectra from this sample are reported in Figure 3.10 by black open rhombuses. The use of a fully inclusive sample gives values of E∗bb from the fits that agree with the theory predictions within a few percent up to m̄bb = 650 GeV.

For the check samples II, III, IV and V, I remark that the agreement of the fit results with the theory value deteriorates as one gets closer to the endpoint of the range covered by mbb. I observe that good agreement is retained up to m̄bb = 650 GeV, which is in the falling tail of the mbb distribution, as is apparent from Figure 3.2. I suspect that the mismatch between the fitted E∗bb and the theory values is connected to the massive template becoming less accurate in fitting the data. Indeed, in Figure 3.8 I can see that the error estimate on the fitted E∗bb becomes larger for m̄bb ≥ 650 GeV. In a realistic application of our mass measurement method, I would not know up to what precise value of mbb the massive template can be trusted. However, it is clear that the values of mbb for which the extracted E∗bb comes with a large error should be avoided. I remark that in Figure 3.8, all the fit results for mbb ≥ 650 GeV have a significantly larger error, thus clearly signaling a transition to a region of mbb where the fit template eq. (3.12) can no longer be trusted. A more detailed investigation of this transition boundary is beyond the scope of this chapter, and I instead refer to Ref. [47] for a more systematic study of it.

3.5.3 Improving the mass measurement using the mbb endpoint

In this section, I discuss a possible improvement of the mass measurement with the aid of the kinematic endpoint of the dijet invariant mass distribution. The point is that, without prior knowledge of the masses, it is not possible to say whether the measurement of eq. (3.34) has a bias. However, I can devise a check and an improvement of the obtained measurement using an independent observable. To this end, I study a possible combination of the results of fits to energy spectra with the measurement of the endpoint of the invariant mass distribution of the pairs of b-jets, which I denote as mmaxbb. This observable is a simple function of the gluino and neutralino masses,

mmaxbb = mg̃ − mχ ,    (3.36)

and is expected to be very useful in combination with the results of fits to energy spectra. In fact, eq. (3.29), in light of eq. (3.36), can be rewritten as

E∗bb = mmaxbb − ((mmaxbb)² − m̄²bb) / (2 mg̃) ,    (3.37)

which is a straight line in the plane (E∗bb, m̄²bb) described by just one free parameter. This should be compared with the previous equation I used to find the masses, eq. (3.29), where there are two independent parameters: the slope and the constant term of the straight line. If the mbb endpoint is assumed to be well measured, I can use eq. (3.37) to fit our results to a line in the plane (m²bb, E∗bb) more accurately. This relation is shown in Figure 3.11 for mmaxbb = 1.1 TeV.

Figure 3.11: The functional dependence of eq. (3.37) for various gluino masses (0.4, 0.8, 1.2, 1.6, and 2.0 TeV). The mbb endpoint is set to 1.1 TeV. The black solid line denotes the case where mg̃ is identical to that of our study point.

Using this relation as a template for the fit of the (m²bb, E∗bb) data points in Figure 3.8, along with eq. (3.36), I obtain the mass measurements

mg̃ = 1236 ± 31 GeV ,  mχ = 134 ± 31 GeV .    (3.38)

In this case, as well as for the analysis in the previous sections, the fit is performed between m̄bb = 200 GeV and m̄bb = 650 GeV, with the data points obtained from the massive template. This result is more accurate and in better agreement with the expected values than what I obtained in eq. (3.34) using only the energy distributions. Therefore, I conclude that, depending on the accuracy with which the mbb endpoint can be experimentally determined, the addition of information from the mbb endpoint can grant a very significant improvement over our results obtained from only the energy spectra.
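A minimal sketch of the one-parameter fit of eq. (3.37) is given below. It fixes the endpoint to its true value mmaxbb = 1100 GeV, uses the massive-template central values of Table 3.2 with symmetrized errors, and relies on scipy for the fit; since it neglects the asymmetric errors and any endpoint uncertainty, it should only approximately reproduce eq. (3.38).

```python
import numpy as np
from scipy.optimize import curve_fit

M_MAX = 1100.0   # assumed well-measured m_bb endpoint = m_gluino - m_chi (GeV)

def e_star_endpoint(m_bar_sq, m_gluino):
    """Eq. (3.37): E*_bb as a function of m_bar^2 with m_bb^max fixed."""
    return M_MAX - (M_MAX**2 - m_bar_sq) / (2.0 * m_gluino)

# massive-template results of Table 3.2 for the slices used in the fit (200-650 GeV)
m_bar = np.array([200., 250., 300., 350., 400., 450., 500., 550., 600., 650.])
e_star = np.array([614.8, 627.2, 640.9, 659.2, 670.7, 694.8,
                   716.3, 742.1, 768.9, 802.2])
err = np.array([22.7, 19.1, 15.8, 16.2, 14.2, 15.2, 15.4, 18.9, 22.4, 29.0])

popt, pcov = curve_fit(e_star_endpoint, m_bar**2, e_star,
                       p0=[1200.0], sigma=err, absolute_sigma=True)
m_gluino = popt[0]
m_chi = m_gluino - M_MAX   # from eq. (3.36)
print(f"m_gluino ~ {m_gluino:.0f} +/- {np.sqrt(pcov[0, 0]):.0f} GeV, m_chi ~ {m_chi:.0f} GeV")
```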
Armed with these results, I should be able to fit the energy spectra of the visible part of the fictitious two-body decay and extract an estimate of the masses involved in the process using the results of these fits.

A particularly challenging aspect of our analysis had to do with the fact that the decaying particle (the particle of interest) is typically produced in association with other particles. In this situation, it is possible that some of the "child" particles from the decay of the parent are identical to those contained in the rest of the event, which means that in the process of reducing the multi-body final state of the parent decay to one with fewer bodies, the particle pairings that I perform may unintentionally include particles which have nothing to do with the mass measurement at hand. In particular, the parent particles are often produced in pairs: if the two parents in each event undergo the same decay process, then it is clear that this combinatorial background is inevitable. These particles extraneous to the decay potentially hamper the mass measurement of the parent, and thus the contamination they add must be addressed.

Most of the general discussion above can be succinctly illustrated by the consideration of a suitable example. Furthermore, tackling a concrete example enables us to quantify the quality of the mass measurement that one can achieve using our method. With these goals in mind, I studied in detail the production of a pair of gluinos in a supersymmetric model in which R-parity is conserved and the gluinos decay directly to b b̄ χ̃01 (a three-body decay) via an off-shell bottom squark. To this end, I simulated events for the process pp → g̃g̃ → bbbb + missing transverse momentum, including the relevant SM contribution. I identified selection cuts that remove the SM backgrounds to a level that further clears a path toward the successful determination of the masses of the new physics states. Simultaneously, I attempted to minimize the changes in the shape of the energy spectra caused by these event selection criteria.

Our actual analysis then starts by forming all possible pairs of bottom quarks in each event that passed the selection in eqs. (3.19)-(3.21), from which I then obtained a distribution of the energy of the b quark pairs, Ebb. This distribution includes the contribution from pairs formed by bottom quarks not originating from the same decay, i.e., the wrong combinations. Note that the final state of each decay in our chosen process is made of two (visible) indistinguishable particles, so the pollution from combinations of particles not coming from the same decay is even more severe; in particular, each event gives 6 combinations of two bottom quarks, out of which only 2 are correct, cf. the case of distinct particles a and b from each decay, which would give 2 correct combinations out of 4. To remove this adulteration of the event sample, I subtracted an estimate that I obtained using the event mixing technique described in Sec. 3.3. This estimate was obtained from pairs of b quarks taken from different events, and I showed that the contribution from pairs of b quarks not coming from the same gluino can thus be effectively removed, as seen in Figure 3.3. In other words, this method has a natural tendency to maintain the shapes of the distributions that I need to analyze to carry out our measurement.
The energy spectra, once effectively rid of the contribution from pairs of b quarks coming from different gluinos, were fitted in a region around their peak with the function eq. (3.12), which is taken from Ref. [47] and briefly described in Sec. 3.2. For comparison, these energy spectra were also fitted with the function eq. (3.11), which was used in our original paper on the peak of energy spectra of massless particles. The comparison of the results from fits with the two functions highlights the improvement achieved by our new result for massive decay products. The better description of the energy spectra by the function eq. (3.12) can be seen in the comparison of the χ² values for the various fits performed, as reported in Figure 3.6.

The result of the fits of the energy spectra is the extraction of the function parameter E∗bb, which is exactly the energy of the system of the two b quarks in the rest frame of the gluino that generated them. The determination of E∗bb is the core of our procedure, as this value is connected to the masses of the gluino and the neutralino via eq. (3.29),

E∗bb = (m²g̃ − m²χ + m̄²bb) / (2 mg̃) ,

for a pair of b quarks of mass m̄bb. The results of the extraction of E∗bb for several choices of the mass of the two-b-quark system were shown in Figure 3.8. The determinations of E∗bb for the various mbb were fitted using the straight line eq. (3.29) given above. This fit is essentially our mass measurement, as the gluino mass corresponds to the slope of the line and the neutralino mass to the constant term of the straight line. The resulting mass measurement was given in eq. (3.34); the gluino mass was found to be within 20% of its true value, while the neutralino mass was rather poorly determined. This inaccuracy of the mass measurement is mainly due to the fact that in each fit of the energy spectra I tend to overestimate the energy E∗bb. In Sec. 3.5.2, I studied several possible causes for this error and in the end identified the modest shape changes due to the mixed event subtraction as the primary source.

In order to improve the mass measurement, I then studied how including information about the endpoint location in the mbb distribution altered the quality of the mass measurements. If well measured, this quantity corresponds to the mass difference between the gluino and the neutralino, and hence is expected to aid in the measurement of these two masses. Indeed, a much more accurate measurement was obtained using information from this observable, as shown in eq. (3.38).

Chapter 4: Conclusions

4.1 Summary

We know from astrophysical observations that dark matter exists in abundance throughout the known universe; we also know that to date it has stubbornly eluded non-gravitational human experiment and observation. However, there is good reason to suspect that the particle or particles that make up this dark matter are described by electroweak-scale extensions of the Standard Model, and that we are on the cusp of discovering these weakly interacting massive particles. While there are potentially many means of discovering these presumptive dark matter particles, the one upon which we focus our interest here is direct production at man-made particle colliders. For the sake of argument, we assume that the discovery of these invisible particles has already been made and fix our attention on determining various properties of these dark matter candidates.
Naturally, there are many quantities that characterize the various particles that at a fundamental level comprise the natural world; these dark matter candidates would be no different. Among their measurable properties would be such quantities as mass, spin, and couplings to various other particles, including both those already adequately described by the Standard Model and those that may lie beyond the scope of our current knowledge. The investigations above were undertaken primarily in order to develop and expand data analysis techniques that will consistently and accurately reveal the nature of these beguiling particles once they have been discovered. Focus was placed on using novel kinematic distributions, i.e., the energy (or energy sum) and MT2, in order to uncover the properties of interest. This opens up the possibility that there are further kinematic distributions that have been overlooked or underused, which could allow us to glean better knowledge from the collision data available to us currently and in the future.

I first focused on determining broad, model-level characteristics of dark-matter-like invisible particles. Typically, there is a stabilization symmetry underlying the extensions to the Standard Model that provide the framework from which these invisible particles arise, and it protects them from decaying into Standard Model particles. Naturally, knowing this property would immensely help the larger particle physics community in determining where to turn their attention in developing new physics models and in looking for new physics in the data. It was demonstrated that it is easily possible to determine this symmetry, with only a few modest assumptions.

I next turned my attention to the determination of the masses of an invisible new physics state and its parent. I demonstrated that the use of energy distributions, and in particular the region close to their peaks, is an effective way to measure the masses of new physics particles involved in a single-step multi-body decay. Taking the example of gluino production and decay in an R-parity conserving supersymmetric model, we have found that using only the visible decay products of the gluino decay g̃ → bbχ, it is possible to measure with good accuracy the masses of both the gluino and the neutralino, with the best results being obtained when both the energy and the invariant mass distributions of the pairs of b quarks are used. Rather strikingly, the mass measurement technique that we discussed does not actually use information about the missing momentum, except for event selection purposes. The example that we considered also required a proper removal of the effect from pairs of b quarks not coming from the same gluino. To this end, we have shown that the event mixing technique is especially well suited. We anticipate that the general methodology in Chapter 3 can be applied to mass measurements of other processes, and we emphasize that the technique to reduce the combinatorial background is a tool that every phenomenologist should have in their repertoire.

Appendix A: Event Mixing: Signal and background

We begin with the total number of pairings of b-jets (denoted by NNC) and with the data sample consisting of ns signal events and nb background events. Remembering that there are six possible pairings of the four b-jets in each event, we have

NNC = 2ns + 2nb + 4ns + 4nb ,    (A.1)

where the last two terms represent the total number of wrong pairings of b-jets, denoted by NWC = 4(ns + nb).
As mentioned before, our unawareness of which are signal and which are background events precludes us from performing the event mixing (symbolized by ⊗) only between signal events or only between background events. Therefore, if we perform the event mixing procedure on the total (ns + nb) events, we then have signal-signal mixing, signal-background mixing, and background-background mixing. Since there are four b-jets in each event, a single event mixing enables us to form up to 4 × 4 = 16 b-jet pairings. Of course, for practical purposes, one could use a subset of these 16 pairs, say m pairs. Since m is a common prefactor for every single event mixing, we then need to know the number of event mixings. The total number of signal-signal mixings is evaluated by the two-combination in the binomial coefficients:

ns ⊗ ns = C(ns, 2) = ns(ns − 1)/2 ,    (A.2)

and so is that of background-background mixings:

nb ⊗ nb = C(nb, 2) = nb(nb − 1)/2 .    (A.3)

Likewise, the number of signal-background mixings, i.e., of the interference, is

ns ⊗ nb = C(ns, 1) × C(nb, 1) = ns nb .    (A.4)

Suppose that we use m mixed pairings out of the 16 available in each event mixing. Denoting by NMC the sum of eqs. (A.2), (A.3), and (A.4), we eventually use m · NMC b-jet pairs to estimate the distribution of the NWC wrong pairs formed in the same-event distribution. (Note that the only quantity we know is ns + nb, not ns and nb separately.) Since in general m · NMC ≠ NWC, each mixed pair should be re-weighted in making the "different events" distributions, so as to match the contribution of the NWC wrong pairs.

Keeping this issue of the normalization in mind, let us first see how the wrong b-jet pairings in the signal events can be treated by the mixed event subtraction even in the presence of a small background. Since the total number of mixed pairings is normalized to NWC, the presence of the background affects the weight of the "different event" distributions made by just the signal events. In fact, in the "different event" distribution, once normalized so as to match the contribution of the NWC wrong pairs, the fraction of b-jet pairs stemming from the signal-signal mixing is given by eq. (A.2) times a rescaling factor to match the normalization to NWC, that is

[ns(ns − 1)/2] × 4(ns + nb) / [ns(ns − 1)/2 + ns nb + nb(nb − 1)/2] ≈ 4 ns (1 − nb/ns) ,    (A.5)

where the common prefactor m is omitted for simplicity and the approximation assumes ns ≫ nb and ns ≫ 1. The implication of this result is that the combinatorial background (caused by the signal itself) can be almost completely eliminated with the event mixing technique, thanks to the dominance of the signal assumed throughout this paper. In the total "different event" distribution, reweighted so as to match the expected number of wrong pairings NWC, we find that the fraction of b-jet pairs from the background-background mixing is

[nb(nb − 1)/2] × 4(ns + nb) / [ns(ns − 1)/2 + ns nb + nb(nb − 1)/2] ≈ 4 nb (nb/ns − 1/ns) .    (A.6)

In the same distribution, we find that the fraction of b-jet pairs from the signal-background mixing is

ns nb × 4(ns + nb) / [ns(ns − 1)/2 + ns nb + nb(nb − 1)/2] ≈ 8 nb (1 − nb/ns + 1/ns) .    (A.7)

Comparing eq. (A.6) and eq. (A.7), we conclude that the effect of the "interference" is, in general, more important than the contribution from the background-background mixing, as also argued in the main text.
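The approximations in eqs. (A.5)-(A.7) can be cross-checked numerically; the counts below are hypothetical and chosen only to satisfy ns ≫ nb, and the exact fractions are computed with the same rescaling factor NWC/NMC used above (the common prefactor m is again omitted).

```python
# Cross-check of the approximations in eqs. (A.5)-(A.7) for hypothetical counts.
n_s, n_b = 10000, 500                                 # signal-dominated sample

n_mc = n_s*(n_s - 1)/2 + n_s*n_b + n_b*(n_b - 1)/2    # N_MC: total number of event mixings
n_wc = 4*(n_s + n_b)                                  # N_WC: wrong same-event pairings
rescale = n_wc / n_mc                                 # normalization factor

exact = {
    "SS (A.5)": n_s*(n_s - 1)/2 * rescale,
    "BB (A.6)": n_b*(n_b - 1)/2 * rescale,
    "SB (A.7)": n_s*n_b * rescale,
}
approx = {
    "SS (A.5)": 4*n_s*(1 - n_b/n_s),
    "BB (A.6)": 4*n_b*(n_b/n_s - 1/n_s),
    "SB (A.7)": 8*n_b*(1 - n_b/n_s + 1/n_s),
}
for key in exact:
    print(f"{key}: exact = {exact[key]:9.1f}, approx = {approx[key]:9.1f}")
```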
Besides corroborating the argument given in the main text, these equations quantify more precisely the effect of background, which becomes more important when the mass measurement strat- egy described in this paper is applied to situations where S/B is less favorable than that in our example process. 119 Bibliography [1] G. Bertone, D. Hooper and J. Silk, “Particle dark matter: Evidence, candidates and constraints,” Phys. Rept. 405, 279 (2005) [arXiv:hep-ph/0404175]. [2] J. Feng, “Dark Matter Candidates from Particle Physics and Methods of Detec- tion,” Ann. Rev. Astron. Astrophys. 48: 495, (2010) [arXiv:1003.0904 [astro- ph.CO]]. [3] Planck Collaboration “Planck 2015 results. I. Overview of products and scien- tific results,” [arXiv:1502.01582 [astro-ph.CO]]. [4] K. Agashe, R. Franceschini, D. Kim and K. Wardlow, “Using Energy Peaks to Count Dark Matter Particles in Decays,” Phys. Dark Univ. 2, 72 (2013) [arXiv:1212.5230 [hep-ph]]. [5] K. Agashe, R. Franceschini, D. Kim and K. Wardlow, “Mass Measurement Using Energy Spectra in Three-body Decays,” [arXiv:1503.03836 [hep-ph]]. [6] G. Jungman, M. Kamionkowski and K. Griest, “Supersymmetric dark matter,” Phys. Rept. 267, 195 (1996) [arXiv:hep-ph/9506380]. [7] N. Arkani-Hamed, A. G. Cohen, T. Gregoire, J. G. Wacker, “Phenomenology of electroweak symmetry breaking from theory space,” JHEP 0208, 020 (2002). [hep-ph/0202089]; [8] H. C. Cheng and I. Low, “TeV symmetry and the little hierarchy problem,” JHEP 0309, 051 (2003) and [arXiv:hep-ph/0308199]. [9] K. Agashe and G. Servant, “Baryon number in warped GUTs: Model build- ing and (dark matter related) phenomenology,” JCAP 0502, 002 (2005) [hep- ph/0411254] 120 [10] K. Agashe, D. Kim, D. G. E. Walker, L. Zhu, “Using MT2 to Distinguish Dark Matter Stabilization Symmetries,” [arXiv:1012.4460 [hep-ph]]. [11] K. Agashe, D. Kim, M. Toharia, D. G. E. Walker, “Distinguishing Dark Matter Stabilization Symmetries Using Multiple Kinematic Edges and Cusps,” Phys. Rev. D82, 015007 (2010). [arXiv:1003.0899 [hep-ph]]. [12] J. Alwall, J. L. Feng, J. Kumar and S. Su, “B’s with Direct Decays: Tevatron and LHC Discovery Prospects in the bb¯ + MET Channel,” Phys. Rev. D 84, 074010 (2011) [arXiv:1107.2919 [hep-ph]]. [13] J. Alwall, J. L. Feng, J. Kumar, S. Su, “Dark Matter-Motivated Searches for Exotic 4th Generation Quarks in Tevatron and Early LHC Data,” Phys. Rev. D81, 114027 (2010). [arXiv:1002.3366 [hep-ph]]. [14] P. Meade, M. Reece, “Top partners at the LHC: Spin and mass measurement,” Phys. Rev. D74, 015010 (2006) [hep-ph/0601124]; [15] E. Ma, “Z3 dark matter and two-loop neutrino mass,” Physics Letters B 662 (Apr., 2008) 49–52, arXiv:0708.3371 [hep-ph]; [16] B. Batell, “Dark discrete gauge symmetries,” Phys.Rev D 83 no. 3, (Feb., 2011) 035006, arXiv:1007.0045 [hep-ph]; [17] G. F. Giudice, B. Gripaios, R. Mahbubani, “Counting dark matter particles in LHC events,” [arXiv:1108.1800 [hep-ph]]. [18] W. S. Cho, D. Kim, K. T. Matchev, and M. Park, “Cracking the dark matter code at the LHC,” ArXiv e-prints (June, 2012) , arXiv:1206.1546 [hep-ph]. [19] K. Agashe, R. Franceschini and D. Kim, “A simple, yet subtle ’invariance’ of two-body decay kinematics,” arXiv:1209.0772 [hep-ph]. [20] Similar results appeared in the cosmic rays physics literature, see, for example, F. W. Stecker, “Cosmic gamma rays,” NASA Special Publication 249 (1971) . [21] C. G. Lester and D. J. 
Summers, “Measuring masses of semi-invisibly decaying particle pairs produced at hadron colliders,” Physics Letters B 463 (Sept., 1999) 99–103, arXiv:hep-ph/9906349. [22] CMS Collaboration, “Search for supersymmetry in final states with missing transverse energy and 0, 1, 2, or at least 3 b-quark jets in 7 TeV pp collisions 121 using the variable alphaT,” ArXiv e-prints (Oct., 2012) , arXiv:1210.8115 [hep-ex]. [23] “Search for supersymmetry in final states with missing transverse energy and 0, 1, 2, 3, or at least 4 b-quark jets in 8 TeV pp collisions using the variable αT ,” CMS Collaboration CMS-PAS-SUS-12-028 (2012) . [24] R. K. Ellis, W. J. Stirling, and B. Webber, “QCD and collider physics,” Camb.Monogr.Part.Phys.Nucl.Phys.Cosmol. 8 (1996) 1–435. [25] CMS Collaboration, “Search for supersymmetry in events with b-quark jets and missing transverse energy in pp collisions at 7 TeV,” ArXiv e-prints (Aug., 2012) , arXiv:1208.4859 [hep-ex] and https://twiki.cern.ch/ twiki/bin/view/CMSPublic/PhysicsResultsSUS12003. [26] J. Alwall, M. Herquet, F. Maltoni, O. Mattelaer, and T. Stelzer, “Mad- Graph 5: going beyond,” Journal of High Energy Physics 6 (June, 2011) 128, arXiv:1106.0522 [hep-ph]. [27] J. Pumplin, D. Stump, J. Huston, H. Lai, P. M. Nadolsky, et al., “New gen- eration of parton distributions with uncertainties from global QCD analysis,” JHEP 0207 (2002) 012, arXiv:hep-ph/0201195 [hep-ph]. [28] CMS Collaboration, “Identification of b-quark jets with the CMS experiment,” ArXiv e-prints (Nov., 2012) , arXiv:1211.4462 [hep-ex]. [29] M. Blanke, D. Curtin and M. Perelstein, “SUSY-Yukawa Sum Rule at the LHC,” Phys. Rev. D 82, 035020 (2010) [arXiv:1004.5350 [hep-ph]]. [30] W. S. Cho, K. Choi, Y. G. Kim and C. B. Park, “Measuring superparticle masses at hadron collider using the transverse mass kink,” JHEP 0802, 035 (2008) [arXiv:0711.4526 [hep-ph]]. [31] D. Curtin, “Mixing It Up With MT2: Unbiased Mass Measurements at Hadron Colliders,” arXiv:1112.1095 [hep-ph]. [32] A. J. Barr, B. Gripaios, and C. G. Lester, “Weighing wimps with kinks at colliders: invisible particle mass measurements from endpoints,” Journal of High Energy Physics 2 (Feb., 2008) 14, arXiv:0711.4008 [hep-ph]. [33] For a guide to the literature on transverse mass variables, see, for example, A. J. Barr, T. J. Khoo, P. Konar, K. Kong, C. G. Lester, K. T. Matchev and M. Park, “Guide to transverse projections and mass-constraining variables,” Phys. Rev. D 84, 095031 (2011) arXiv:1105.2977 [hep-ph]. 122 [34] W. S. Cho, K. Choi, Y. G. Kim and C. B. Park, “Gluino Stransverse Mass,” Phys. Rev. Lett. 100, 171801 (2008), arXiv:0709.0288 [hep-ph]; [35] A. Barr, C. Lester, and P. Stephens, “A variable for measuring masses at hadron colliders when missing energy is expected mT2: the truth behind the glamour,” Journal of Physics G Nuclear Physics 29 (Oct., 2003) 2343–2363, arXiv:hep-ph/0304226. [36] ATLAS Collaboration, “Top quark mass measurement in the eµ channel using the mT2 variable at ATLAS,” Tech. Rep. ATLAS-CONF-2012-082, CERN, Geneva, Jul, 2012. [37] CMS Collaboration, “Search for supersymmetry in hadronic final states using MT2 with the CMS detector at √s = 8 TeV,” Tech. Rep. CMS-PAS-SUS- 13-019, CERN, Geneva, 2014. https://twiki.cern.ch/twiki/bin/view/ CMSPublic/PhysicsResultsSUS13019 [38] H.-C. Cheng, J. F. Gunion, Z. Han, G. Marandella, and B. McElrath, “Mass de- termination in SUSY-like events with missing energy,” Journal of High Energy Physics 12 (Dec., 2007) 76, arXiv:0707.0030 [hep-ph]. [39] H.-C. Cheng, D. 
[39] H.-C. Cheng, D. Engelhardt, J. F. Gunion, Z. Han and B. McElrath, “Accurate Mass Determinations in Decay Chains with Missing Energy,” Phys. Rev. Lett. 100, 252001 (2008) [arXiv:0802.4290 [hep-ph]].
[40] H.-C. Cheng, J. F. Gunion, Z. Han and B. McElrath, “Accurate mass determinations in decay chains with missing energy: II,” Phys. Rev. D 80, 035020 (2009) [arXiv:0905.1344 [hep-ph]].
[41] T. Han, I.-W. Kim and J. Song, “Kinematic cusps: Determining the missing particle mass at colliders,” Phys. Lett. B 693, 575 (2010) [arXiv:0906.5009 [hep-ph]]; “Kinematic Cusps With Two Missing Particles I: Antler Decay Topology,” [arXiv:1206.5633 [hep-ph]].
[42] T. Han, I.-W. Kim and J. Song, “Kinematic Cusps with Two Missing Particles II: Cascade Decay Topology,” Phys. Rev. D 87, 035004 (2013) [arXiv:1206.5641 [hep-ph]].
[43] A. J. Barr and C. G. Lester, “A review of the mass measurement techniques proposed for the Large Hadron Collider,” J. Phys. G 37, 123001 (2010) [arXiv:1004.2732 [hep-ph]].
[44] B. Gripaios, “Tools for Extracting New Physics in Events with Missing Transverse Momentum,” Int. J. Mod. Phys. A 26, 4881 (2011) [arXiv:1110.4502 [hep-ph]].
[45] K. Agashe, R. Franceschini and D. Kim, “Using Energy Peaks to Measure New Particle Masses,” [arXiv:1309.4776 [hep-ph]].
[46] K. A. Olive et al. (Particle Data Group), Chin. Phys. C 38, 090001 (2014).
[47] K. Agashe, R. Franceschini, S. Hong and D. Kim, in preparation.
[48] M. Albrow, B. Alper, J. Armitage, D. Aston, P. Benz, et al., “A Search for Narrow Resonances in Proton Proton Collisions at 53-GeV Center-Of-Mass Energy,” Nucl. Phys. B 114, 365 (1976).
[49] D. Drijard, H. G. Fischer and T. Nakada, “Study of Event Mixing and Its Application to the Extraction of Resonance Signals,” Nucl. Instrum. Meth. A 225, 367 (1984).
[50] DELPHI Collaboration, N. Kjaer and M. Mulders, “Mixed Lorentz boosted Z0’s.”
[51] CMS Collaboration, “Measurement of the top-quark mass in all-jets tt̄ events in pp collisions at √s = 7 TeV,” [arXiv:1307.4617 [hep-ex]].
[52] CMS Collaboration, “Search for supersymmetry in hadronic final states with missing transverse energy using the variables αT and b-quark multiplicity in pp collisions at 8 TeV,” [arXiv:1303.2985 [hep-ex]] and https://twiki.cern.ch/twiki/bin/view/CMSPublic/PhysicsResultsSUS12028.
[53] CMS Collaboration, “Search for gluino mediated bottom- and top-squark production in multijet final states in pp collisions at 8 TeV,” [arXiv:1305.2390 [hep-ex]] and https://twiki.cern.ch/twiki/bin/view/CMSPublic/PhysicsResultsSUS12024.
[54] ATLAS Collaboration, “Search for strong production of supersymmetric particles in final states with missing transverse momentum and at least three b-jets using 20.1/fb of pp collisions at √s = 8 TeV with the ATLAS Detector,” Tech. Rep. ATLAS-CONF-2013-061, CERN, Geneva, Jun. 2013, https://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/CONFNOTES/ATLAS-CONF-2013-061/.
[55] CMS Collaboration, “Exclusion limits on gluino and top-squark pair production in natural SUSY scenarios with inclusive razor and exclusive single-lepton searches at 8 TeV,” Tech. Rep. CMS-PAS-SUS-14-011, CERN, Geneva, 2014, https://twiki.cern.ch/twiki/bin/view/CMSPublic/PhysicsResultsSUS14011.
[56] ATLAS Collaboration, “Search for strong production of supersymmetric particles in final states with missing transverse momentum and at least three b-jets at √s = 8 TeV proton-proton collisions with the ATLAS detector,” [arXiv:1407.0600 [hep-ex]] and https://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/PAPERS/SUSY-2013-18.
[57] A. Rajaraman and F. Yu, “A New Method for Resolving Combinatorial Ambiguities at Hadron Colliders,” Phys. Lett. B 700, 126 (2011) [arXiv:1009.2751 [hep-ph]].
[58] Y. Bai and H.-C. Cheng, “Identifying Dark Matter Event Topologies at the LHC,” JHEP 1106, 021 (2011) [arXiv:1012.1863 [hep-ph]].
[59] P. Baringer, K. Kong, M. McCaskey and D. Noonan, “Revisiting Combinatorial Ambiguities at Hadron Colliders with MT2,” JHEP 1110, 101 (2011).
[60] S. Chatrchyan et al. [CMS Collaboration], “Measurement of masses in the tt̄ system by kinematic endpoints in pp collisions at √s = 7 TeV,” Eur. Phys. J. C 73, 2494 (2013) [arXiv:1304.5783 [hep-ex]].
[61] D. G. E. Walker, “Dark Matter Stabilization Symmetries from Spontaneous Symmetry Breaking,” [arXiv:0907.3146 [hep-ph]].
[62] D. G. E. Walker, “Dark Matter Stabilization Symmetries and Long-Lived Particles at the Large Hadron Collider,” [arXiv:0907.3142 [hep-ph]].
[63] M. Schmaltz and D. Tucker-Smith, “Little Higgs review,” Ann. Rev. Nucl. Part. Sci. 55, 229 (2005) [arXiv:hep-ph/0502182].
[64] M. Perelstein, “Little Higgs models and their phenomenology,” Prog. Part. Nucl. Phys. 58, 247 (2007) [arXiv:hep-ph/0512128].
[65] H. C. Cheng and I. Low, “Little hierarchy, little Higgses, and a little symmetry,” JHEP 0408, 061 (2004) [arXiv:hep-ph/0405243].
[66] I. Low, “T parity and the littlest Higgs,” JHEP 0410, 067 (2004) [arXiv:hep-ph/0409025].
[67] H. C. Cheng, I. Low and L.-T. Wang, “Top partners in little Higgs theories with T-parity,” Phys. Rev. D 74, 055001 (2006) [arXiv:hep-ph/0510225].
[68] A. Freitas, P. Schwaller and D. Wyler, “A Little Higgs Model with Exact Dark Matter Parity,” JHEP 0912, 027 (2009) [arXiv:0906.1816 [hep-ph]].
[69] T. Brown, C. Frugiuele and T. Gregoire, “UV friendly T-parity in the SU(6)/Sp(6) little Higgs model,” JHEP 1106, 108 (2011) [arXiv:1012.2060 [hep-ph]].
[70] T. Han, R. Mahbubani, D. G. E. Walker and L.-T. Wang, “Top Quark Pair plus Large Missing Energy at the LHC,” JHEP 0905, 117 (2009) [arXiv:0803.3820 [hep-ph]].
[71] C.-Y. Chen, A. Freitas, T. Han and K. S. M. Lee, “New Physics from the Top at the LHC,” JHEP 1211, 124 (2012) [arXiv:1207.4794 [hep-ph]].
[72] K. Agashe and G. Servant, “Warped unification, proton stability and dark matter,” Phys. Rev. Lett. 93, 231805 (2004) [arXiv:hep-ph/0403143].
[73] G. Belanger, K. Kannike, A. Pukhov and M. Raidal, “Z3 Scalar Singlet Dark Matter,” [arXiv:1211.1014 [hep-ph]].
[74] A. Adulpravitchai, B. Batell and J. Pradler, “Non-Abelian Discrete Dark Matter,” Phys. Lett. B 700, 207 (2011) [arXiv:1103.3053 [hep-ph]].
[75] M. Lisanti and J. G. Wacker, “Unification and dark matter in a minimal scalar extension of the standard model,” [arXiv:0704.2816 [hep-ph]].
[76] A. J. Barr, C. G. Lester, M. A. Parker, B. C. Allanach and P. Richardson, “Discovering anomaly-mediated supersymmetry at the LHC,” JHEP 0303, 045 (2003) [arXiv:hep-ph/0208214].
[77] A. Barr, C. Lester and P. Stephens, “m(T2): The Truth behind the glamour,” J. Phys. G 29, 2343 (2003) [arXiv:hep-ph/0304226].
[78] T. Han, I.-W. Kim and J. Song, “Kinematic cusps: Determining the missing particle mass at colliders,” Phys. Lett. B 693, 575 (2010) [arXiv:0906.5009 [hep-ph]].
[79] D. Kim and K. Matchev, in progress.