Commit c45b8b0d authored by Iñaki Lara

26/09/2019

parent 8314f67d
Excluded scenarios with long-lived Bino LSP
*******************************************
The exclusion of these points is based on the search 1504.05162, "Search for massive, long-lived particles using multitrack displaced vertices or displaced lepton pairs in pp collisions at √s = 8 TeV with the ATLAS detector". At each point, the proton-proton collision can produce a chargino-chargino or chargino-neutralino pair of dominant wino composition. The charginos and neutralinos rapidly decay to sneutrinos/smuons and muons/neutrinos, which afterwards decay to muons/neutrinos plus long-lived binos. The possible decays form the following combinations:
I) p p > cha chi > 3xmu nu 2xB_displaced
II) p p > cha cha > 2xmu 2xnu 2xB_displaced
III) p p > cha chi > 2xmu 2xnu 2xB_displaced
Finally, the displaced binos decay through 5 dominant channels that can be detected by the mentioned search:
a) B > 2xe nu
b) B > mu e nu
c) B > 2xmu nu
d) B > qq' mu
e) B > qq' e
Each of the 5 channels constitutes a different signal to search for. Thus, a point will be considered excluded if the number of events predicted for any of the 5 previous categories is above 3. The number of events is calculated, for each channel, as
Nev = Luminosity x [ Cross-section@8TeV(cha chi) x { eff_t_IX x BR(chi > mu smu) x BR(cha > mu sneu)
                                                   + eff_t_IX x BR(chi > mu smu) x BR(cha > nu smu)
                                                   + eff_t_IIIX x BR(chi > nu sneu) x BR(cha > mu sneu)
                                                   + eff_t_IIIX x BR(chi > nu sneu) x BR(cha > nu smu) }
                   + Cross-section@8TeV(cha cha) x { eff_t_IIX x (BR(cha > mu sneu) + BR(cha > nu smu))² } ] x eff_sel_x
Here eff_t_AX refers to the trigger efficiency associated with each intermediate chain A and each final decay X of the bino (for example, eff_t_Ia corresponds to the trigger efficiency when the binos are produced through channel I and decay to electrons and neutrinos), and eff_sel_x corresponds to the selection efficiency of the displaced binos for the decay channel x.
The values of eff_t_AX can be calculated using 'python3.7 trigg_bino.py x=[x_val] y=[y_val] channel=[]', where x_val corresponds to the mass of the wino-like chargino/neutralino and y_val to the mass of the bino. channel has to be one of {ai,bi,ci,di,ei,aii,...}, corresponding to the options described above. Note that x_val has to be within [60,700] and y_val within [60,350]; if x_val happens to be above 700 you can just take the value corresponding to 700, since the efficiency saturates above this point. If you need values below 60 GeV, or a more massive bino, please ask me.
Similarly, you can obtain the values of eff_sel_x using 'python3.7 Seff_bino.py ctau=[x_val] mass1=[mass_heavy] mass2=[mass_light] channel=[a,b,c,d,e]' with a self-explanatory syntax. Note that for the llnu channels ctau should be within [0.5,88700] mm, while the lqq channels require ctau to be within [1,10000] mm. Outside of these ranges Nev should be 0. Also, mass1 should be in the interval [mass2,1300] and mass2 in [50,1000].
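As an illustration, the event count above can be sketched as a short Python function. Every number below is a hypothetical placeholder: in practice the eff_t values come from trigg_bino.py, eff_sel from Seff_bino.py, and the cross-sections and branching ratios from the spectrum of the point.

```python
# Sketch of the Nev formula for one bino decay channel X.
# All efficiencies, cross-sections and branching ratios here are
# hypothetical placeholders, not outputs of trigg_bino.py/Seff_bino.py.

def n_events(lumi, xsec_cha_chi, xsec_cha_cha,
             eff_t_I, eff_t_II, eff_t_III, eff_sel,
             br_chi_mu_smu, br_chi_nu_sneu,
             br_cha_mu_sneu, br_cha_nu_smu):
    """Nev for a single bino decay channel, following the formula above."""
    # chargino-neutralino production, grouped by trigger chain (I or III)
    mixed = (eff_t_I * br_chi_mu_smu * (br_cha_mu_sneu + br_cha_nu_smu)
             + eff_t_III * br_chi_nu_sneu * (br_cha_mu_sneu + br_cha_nu_smu))
    # chargino-chargino production (trigger chain II)
    pair = eff_t_II * (br_cha_mu_sneu + br_cha_nu_smu) ** 2
    return lumi * (xsec_cha_chi * mixed + xsec_cha_cha * pair) * eff_sel

# Purely illustrative numbers (lumi in fb^-1, cross-sections in fb):
nev = n_events(lumi=20.3, xsec_cha_chi=10.0, xsec_cha_cha=8.0,
               eff_t_I=0.3, eff_t_II=0.25, eff_t_III=0.2, eff_sel=0.1,
               br_chi_mu_smu=0.5, br_chi_nu_sneu=0.5,
               br_cha_mu_sneu=0.4, br_cha_nu_smu=0.6)
excluded = nev > 3  # the point is excluded if any channel predicts > 3 events
```

The grouping mirrors the formula: the two eff_t_I terms and the two eff_t_III terms each factor into a common BR sum over the chargino decays.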
Excluded scenarios with short-lived Bino LSP
*******************************************
The exclusion of these points is based on the search 1908.08215, 'Search for electroweak production of charginos and sleptons decaying into final states with two leptons and missing transverse momentum in √s = 13 TeV pp collisions using the ATLAS detector', where we assume that non-prompt decays of the bino are not displaced enough to be detected in LLP searches, but displaced enough to be discarded as cosmic-ray background in prompt searches. Under this assumption we can compare the processes:
I) p p > cha cha > 2xW 2xBino_invisible
II) p p > cha cha > 2xnu/l 2xslepton/sneutrino > 2xnu 2xl 2xBino_invisible
with the diagrams of Fig. 1a and Fig. 1b, where invisible means that the Bino will be misidentified as missing transverse momentum regardless of its decay.
If the cross-section times branching fraction corresponding to each channel is higher than the limits shown in the accompanying plots, the point is excluded. The numbers in the plots are interpolated from the files in the folder, and the value (in fb) can be obtained with 'python3.7 SLP_sne.py x=[x_val] y=[y_val]'.
It is possible that some points fall outside the regions covered by the plots. In my opinion we should tag these points as non-excluded.
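A minimal sketch of this comparison, assuming the limit has already been obtained from SLP_sne.py (the numerical values used below are hypothetical):

```python
import math

def is_excluded(xsec_times_br_fb, limit_fb):
    """Exclude a point if sigma x BR (in fb) exceeds the interpolated
    upper limit (in fb). A NaN limit, returned when the point falls
    outside the interpolated region, is treated as non-excluded,
    as proposed above."""
    if math.isnan(limit_fb):
        return False
    return xsec_times_br_fb > limit_fb

# Hypothetical values for illustration:
is_excluded(5.0, 2.1)           # sigma x BR above the limit -> excluded
is_excluded(1.0, 2.1)           # below the limit -> not excluded
is_excluded(5.0, float('nan'))  # outside the interpolated region -> not excluded
```

Handling NaN explicitly matters because the interpolation scripts return nan outside the convex hull of the data points.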
Excluded scenarios with short-lived sneutrino LSP
*******************************************
The exclusion of these points is based on the search 1908.08215, 'Search for electroweak production of charginos and sleptons decaying into final states with two leptons and missing transverse momentum in √s = 13 TeV pp collisions using the ATLAS detector', where we assume that the invisible decay of the sneutrino allows us to compare the following process:
I) p p > cha cha > 2xmu 2xsneutrino_invisible
with the diagram of Fig. 1c.
If the cross-section times branching fraction corresponding to each channel is higher than the limits shown in the accompanying plot, the point is excluded. The numbers in the plot are interpolated from the file in the folder, and the value (in fb) can be obtained with 'python3.7 limit_sneuLSP.py x=[x_val] y=[y_val]'.
It is possible that some points fall outside the regions covered by the plots. In my opinion we should tag these points as non-excluded.
from scipy.interpolate import griddata
import numpy as np
import sys
#
#This takes the numbers displayed in https://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/PAPERS/SUSY-2016-24/figaux_31b.png , tessellates the input point set into 2-dimensional simplices, and interpolates linearly on each simplex.
#See for further info https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.griddata.html
#
#USAGE--------------------------------------------------------
#
#>> python<..> limit_sneuLSP.py x=[x_val] y=[y_val]
#>> [upper limit in fb]
#
# If the masses are outside of the interpolated region the subroutine returns nan
#Data for interpolation------------------------------------
x=[100,150,200,250,300,350,400,450,500,550,600,
125,175,225,275,325,375,425,475,525,575,625,
150,
175,200,250,300,350,400,450,500,550,600,
200,220,
225,275,325,375,425,475,525,575,625,
250,
250,275,300,350,400,450,500,550,600,
325,375,425,475,525,575,625,
350,400,450,500,550,600]
y=[0,0,0,0,0,0,0,0,0,0,0,
50,50,50,50,50,50,50,50,50,50,50,
75,
100,100,100,100,100,100,100,100,100,100,
125,125,
150,150,150,150,150,150,150,150,150,
175,
200,200,200,200,200,200,200,200,200,
250,250,250,250,250,250,250,
300,300,300,300,300,300]
z=[1199,18.3,9.2,3.5,2.0,1.9,1.9,1.9,1.9,1.9,1.5,
666,21.2,7.1,3.3,2.0,2.0,1.9,1.7,1.6,1.6,1.5,
173,
90.6,514,8.7,3.0,2.0,2.0,2.0,1.7,1.6,1.5,
75.1,19.5,
71.6,13.9,3.4,2.0,2.1,2.1,1.7,1.6,1.5,
71.3,
2991,49.9,31.3,5.9,2.1,1.9,1.9,1.6,1.6,
50.3,9.3,2.8,2.2,2.0,1.7,1.8,
10602,20.1,4.4,2.1,1.9,1.8]
kx=5
ky=5
s=0
input_data=[0,0]
#--------------------------------------------------------------------------------------------
#Routine for input data----------------------------------------------------------------------
for arg in sys.argv:
    if 'limit_sneuLSP.py'.lower() in arg.lower():
        pass
    elif arg.lower().split("=")[0]=="x":
        input_data[0]=float(arg.split("=")[1])  # convert to float for griddata
    elif arg.lower().split("=")[0]=="y":
        input_data[1]=float(arg.split("=")[1])
    else:
        print("Unexpected argument: ",arg)
#--------------------------------------------------------------------------------------------
#Calculating value on input point------------------------------------------------------------
data=[]
for n,el in enumerate(x):
    data.append([x[n],y[n]])
result=griddata(data, z, (input_data), method='linear')
#---------------------------------------------------------------------------------------------
#Output through Stdout
print(result)
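The script above relies on scipy's griddata, which triangulates the scattered (x, y) points and interpolates linearly inside their convex hull, returning nan outside it. A minimal self-contained illustration:

```python
import numpy as np
from scipy.interpolate import griddata

# Four corners of the unit square and their values.
pts = [[0, 0], [1, 0], [0, 1], [1, 1]]
vals = [0.0, 1.0, 1.0, 2.0]

inside = griddata(pts, vals, (0.5, 0.5), method='linear')   # interpolated value
outside = griddata(pts, vals, (2.0, 2.0), method='linear')  # outside the hull -> nan
```

This is why mass values outside the interpolated region come back as nan, and why such points have to be handled explicitly (here, tagged as non-excluded).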
#from scipy.interpolate import RectBivariateSpline
from scipy.interpolate import griddata
import matplotlib.pyplot as plt
import numpy as np
import math
x=[90,190,240,340,390,440,490,540,590,640,690,
90,
90,
140,
190,240,340,390,440,490,540,590,640,690,
190,
190,240,290,340,
240,290,340,390,440,490,540,590,640,690,
290,340,390,440,
340,440,490,540,590,640,690,
390,440,490,540,
440,490,690]
y=[0,0,0,0,0,0,0,0,0,0,0,
20,
40,
80,
100,100,100,100,100,100,100,100,100,100,
120,
140,140,140,140,
200,200,200,200,200,200,200,200,200,200,
240,240,240,240,
300,300,300,300,300,300,300,
340,340,340,340,
400,400,400]
z=[1276.3,15.6,7.3,0.5,0.4,0.3,0.3,0.3,0.3,0.2,0.2,
56785.2,
19884.3,
197.8,
35.6,13.4,0.5,0.4,0.4,0.3,0.3,0.3,0.3,0.3,
85.6,
4221.6,3.7,1.2,0.6,
839.2,3.6,1.1,0.6,0.5,0.4,0.3,0.3,0.3,0.3,
755.0,3.3,0.9,0.6,
245.2,0.9,0.5,0.4,0.4,0.3,0.3,
250.2,2.8,0.9,0.5,
90.3,3.0,0.3]
kx=3
ky=3
s=0
grid_x, grid_y = np.mgrid[100:700:2, 0:300:2]
input_data=[300,50]
#print(grid_x)
#print('\n\n')
#print(grid_y)
data=[]
for n,el in enumerate(x):
    data.append([x[n],y[n]])
v_method='linear'
#v_method='cubic'
#print(len(data),len(z))
spl = griddata(data, z, (grid_x, grid_y), method=v_method)
print(griddata(data, z, (input_data), method='cubic'))
#spl = RectBivariateSpline(x, y, z, kx=3, ky=3)
z_test=float(0)*grid_x
x_test=[]
y_test=[]
z_test_X=[]
#print(spl)
for n,el in enumerate(grid_x):
    for m,elm in enumerate(grid_x[n]):
        x_test.append(grid_x[n][m])
        y_test.append(grid_y[n][m])
        if grid_y[n][m]+50<grid_x[n][m]:
#            if float(spl[n][m])>0.0:
            z_test[n][m]=(float(spl[n][m]))
#            z_test[n][m]=math.log(float(spl[n][m]))
        else:
            z_test[n][m]=(spl[n][m])
#            z_test[n][m]=math.log(spl[n][m])
#            z_test[n][m]=10602
        z_test_X.append(z_test[n][m])
fig, ax = plt.subplots()
data={'x_test':x_test,'y_test':y_test,'z_test':z_test_X}
#print(data)
plt.scatter('x_test', 'y_test',c= 'z_test',data=data, zorder=0)
#plt.imshow(spl.T, extent=(100,600,0,300), origin='lower')
plt.ylabel('m_sneu [GeV]')
plt.xlabel('m_cha [GeV]')
cbar = plt.colorbar()
cbar.set_label("S_95 [fb]")
#plt.savefig('plot.eps', format='eps', dpi=200)
plt.show()