LiteHF.jl
Documentation for LiteHF.jl
LiteHF.ExpCounts — Type

    struct ExpCounts{T, M} # M is a long Tuple for unrolling purposes
        nominal::T
        modifier_names::Vector{Symbol}
        modifiers::M
    end

A callable struct that returns the expected count given the modifier nuisance-parameter values. The number of parameters passed must equal the length of modifiers. See _expkernel.
LiteHF.FlatPrior — Type

Pseudo flat prior in the sense that `logpdf()` always evaluates to zero, but `rand()`, `minimum()`, and `maximum()` behave like `Uniform(a, b)`.
LiteHF.Histosys — Type

Histosys is defined by two vectors representing the bin counts in hi_data and lo_data.
LiteHF.InterpCode0 — Type

    InterpCode0{T}

Callable struct for interpolation of an additive modifier. Code0 is the two-piece linear interpolation.
LiteHF.InterpCode1 — Type

    InterpCode1{T}

Callable struct for interpolation of a multiplicative modifier. Code1 is the exponential interpolation.
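Code0 and Code1 follow the standard HistFactory interpolation conventions. A minimal sketch of the two piecewise definitions (the function names and per-bin values here are hypothetical illustrations, not LiteHF API):

```julia
# Hypothetical helpers illustrating the standard HistFactory conventions;
# `nominal`, `lo`, `hi` are a single bin's template values.

# Code0: two-piece linear -- returns an *additive* shift Δ(α)
interp_code0(α, nominal, lo, hi) = α >= 0 ? α * (hi - nominal) : α * (nominal - lo)

# Code1: exponential -- returns a *multiplicative* factor f(α)
interp_code1(α, nominal, lo, hi) = α >= 0 ? (hi / nominal)^α : (lo / nominal)^(-α)

# At α = ±1 both codes reproduce the hi/lo templates exactly:
interp_code0(1.0, 10.0, 8.0, 13.0)   # 3.0  (10 + 3 = 13, the hi template)
interp_code1(-1.0, 10.0, 8.0, 13.0)  # 0.8  (10 * 0.8 = 8, the lo template)
```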
LiteHF.InterpCode4 — Type

    InterpCode4{T}

Callable struct for interpolation of an additive modifier. Code4 is the exponential + sixth-order polynomial interpolation.
LiteHF.Lumi — Type

Luminosity doesn't need interpolation; σ is provided at modifier construction time. In pyhf JSON, this information lives in the "Measurement" section, usually near the end of the JSON file.
LiteHF.MultOneHot — Type

    MultOneHot{T} <: AbstractVector{T}

Internal type used to avoid allocation for per-bin multiplicative systematics. It behaves as a vector of length nbins whose value at the nthbin-th index is α, with the rest being one(T). See also binidentity.
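As an illustration of the idea (a minimal sketch under a hypothetical name, not the actual LiteHF implementation), such a lazy one-hot multiplier can be written as:

```julia
# Sketch of a lazy per-bin multiplier: behaves like a length-nbins vector
# that is one(T) everywhere except index nthbin, where it is α. No backing
# array is allocated; elements are computed on demand.
struct MultOneHotSketch{T} <: AbstractVector{T}
    nbins::Int
    nthbin::Int
    α::T
end

Base.size(m::MultOneHotSketch) = (m.nbins,)
Base.getindex(m::MultOneHotSketch{T}, i::Int) where {T} =
    i == m.nthbin ? m.α : one(T)

v = MultOneHotSketch(4, 2, 1.5)
collect(v)  # [1.0, 1.5, 1.0, 1.0]
```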
LiteHF.Normfactor — Type

Normfactor is unconstrained, so interp is just the identity.
LiteHF.Normsys — Method

Normsys is defined by two multiplicative scalars.
LiteHF.PyHFModel — Type

    struct PyHFModel{E, O, P} <: AbstractModel
        expected::E
        observed::O
        priors::P
        prior_names
        inits::Vector{Float64}
    end

Struct holding the result of build_pyhf. The accessor functions are:

- expected(p::PyHFModel)
- observed(p::PyHFModel)
- priors(p::PyHFModel)
- prior_names(p::PyHFModel)
- inits(p::PyHFModel)
LiteHF.RelaxedPoisson — Type

    RelaxedPoisson

Poisson with a logpdf that is continuous in k, essentially obtained by replacing the factorial in the denominator with the gamma function.

The Distributions.logpdf has been redefined to logpdf(d::RelaxedPoisson, x) = logpdf(d, x*d.λ). This reproduces the Poisson constraint term in pyhf, which is a hack introduced for the Asimov dataset.
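A self-contained sketch of the continuous-in-k density (using a Lanczos approximation of log Γ so no external package is needed; LiteHF's actual implementation may differ):

```julia
# Minimal Lanczos approximation of log Γ(x) for x > 0, included only so this
# example runs without SpecialFunctions.jl.
function loggamma_approx(x)
    g = 7.0
    c = (0.99999999999980993, 676.5203681218851, -1259.1392167224028,
         771.32342877765313, -176.61502916214059, 12.507343278686905,
         -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7)
    x -= 1
    a = c[1]
    t = x + g + 0.5
    for i in 2:9
        a += c[i] / (x + i - 1)
    end
    (x + 0.5) * log(t) - t + 0.5 * log(2π) + log(a)
end

# Relaxed Poisson log-density: replace k! with Γ(k+1), making it continuous in k.
relaxed_pois_logpdf(λ, k) = k * log(λ) - λ - loggamma_approx(k + 1)

relaxed_pois_logpdf(3.0, 2.0)  # matches the ordinary Poisson logpdf at integer k
relaxed_pois_logpdf(3.0, 2.5)  # also defined between integers
```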
LiteHF.Shapefactor — Type

Shapefactor is unconstrained, so interp is just the identity. Unlike Normfactor, this is per-bin.
LiteHF.Shapesys — Type

Shapesys doesn't need interpolation, similar to Staterror.
LiteHF.Staterror — Type

Staterror doesn't need interpolation, but it is a per-bin modifier. Information about which bin is the target is recorded via binidentity.

The δ is the absolute yield uncertainty in each bin, and the relative uncertainty δ/nominal is taken to be the σ of the prior, i.e. α ~ Normal(1, δ/nominal).
LiteHF.T_q0 — Type

Test statistic for discovery of a positive signal, q_0 = \tilde{t}_0. See equation 12 in https://arxiv.org/pdf/1007.1727.pdf for reference. Note that this IS NOT a special case of q_\mu for \mu = 0.
LiteHF.T_qmu — Type

Test statistic for upper limits, q_\mu. See equation 14 in https://arxiv.org/pdf/1007.1727.pdf for reference. Note that q_0 IS NOT a special case of q_\mu for \mu = 0.
LiteHF.T_tmu — Type

    T_tmu

\[ t_\mu = -2\ln\lambda(\mu) \]
LiteHF.T_tmutilde — Type

    T_tmutilde(LL, inits)

\[ \widetilde{t_\mu} = -2\ln\widetilde{\lambda(\mu)} \]
LiteHF.AsimovModel — Method

    AsimovModel(model::PyHFModel, μ)::PyHFModel

Generate the Asimov model with μ (the POI) fixed to a value. Note that this changes the priors and observed fields compared to the original model.
LiteHF._expkernel — Method

    _expkernel(modifiers, nominal, αs)

The Unrolled.@unroll kernel function that computes the expected counts.
LiteHF.asimovdata — Method

    asimovdata(model::PyHFModel, μ)

Generate the Asimov dataset and Asimov priors: the expected counts after fixing the POI to μ and optimizing the nuisance parameters.
LiteHF.asymptotic_dists — Method

    asymptotic_dists(sqrtqmuA; base_dist = :normal)

Return the S+B and B-only test-statistic distributions.
LiteHF.binidentity — Method

    binidentity(nbins, nthbin)

A functional used to track per-bin systematics. Returns a closure over nbins and nthbin:

    α -> MultOneHot(nbins, nthbin, α)
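The closure pattern can be sketched in a few lines (hypothetical names; a plain Vector stands in for MultOneHot):

```julia
# Hypothetical sketch: expand a single nuisance parameter α into a
# per-bin multiplier vector that only affects bin `nthbin`.
binidentity_sketch(nbins, nthbin) = α -> [i == nthbin ? α : 1.0 for i in 1:nbins]

f = binidentity_sketch(3, 2)
f(0.7)  # [1.0, 0.7, 1.0]
```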
LiteHF.build_channel — Method

    build_channel(rawjdict[:channels][1][:samples][2]) => Dict{String, ExpCounts}
LiteHF.build_modifier! — Method

    build_modifier(rawjdict[:channels][1][:samples][2][:modifiers][1]) => <:AbstractModifier
LiteHF.build_modifier — Method

    build_modifier(...[:modifiers][1][:data], Type) => <:AbstractModifier
LiteHF.build_pyhf — Method

    build_pyhf(load_pyhfjson(path)) -> PyHFModel

The expected(αs) is a function that takes a vector or tuple of length N, where N is also the length of priors and prior_names. In other words, these three fields of the returned object are aligned.

The bins from different channels are put into an NTuple{Nbins, Vector}.
LiteHF.build_sample — Function

    build_sample(rawjdict[:channels][1][:samples][2]) => ExpCounts
LiteHF.get_condLL — Method

    get_condLL(LL, μ)

Given the original log-likelihood function and a value for the parameter of interest, return a function condLL(nuisance_θs) that takes one fewer argument than the original LL. The μ is assumed to be the first element of the input vector.
LiteHF.get_lnLR — Method

    get_lnLR(LL, inits)

A functional that returns a function lnLR(μ::Number) that evaluates to the log of the likelihood ratio:

\[\ln\lambda(\mu) = \ln\frac{L(\mu, \hat{\hat{\theta}})}{L(\hat{\mu}, \hat{\theta})} = LL(\mu, \hat{\hat{\theta}}) - LL(\hat{\mu}, \hat{\theta})\]

We assume the POI is the first element of the input array.
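As a toy illustration of the quantity being computed (the log-likelihood below is an assumed closed-form Gaussian, not anything produced by get_lnLR):

```julia
# Toy LL over (μ, θ): independent Gaussians, so both the conditional
# MLE θ̂̂ (at fixed μ) and the global MLE (μ̂, θ̂) are known in closed form.
LL(x) = -0.5 * ((x[1] - 1.0)^2 + (x[2] - 0.5)^2)

# Global MLE: μ̂ = 1.0, θ̂ = 0.5; conditional MLE at any fixed μ: θ̂̂ = 0.5.
lnLR(μ) = LL([μ, 0.5]) - LL([1.0, 0.5])

lnLR(1.0)  # 0.0 -- the ratio peaks at μ̂
lnLR(2.0)  # -0.5, so t_μ = -2 ln λ(2.0) = 1.0
```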
LiteHF.get_lnLRtilde — Method

    get_lnLRtilde(LL, inits)

A functional that returns a function lnLRtilde(μ::Number) that evaluates to the log of the likelihood ratio:

\[\ln\widetilde{\lambda(\mu)}\]

See equation 10 in https://arxiv.org/pdf/1007.1727.pdf for reference.
LiteHF.get_teststat — Method

    get_teststat(LL, inits, ::Type{T}) where T <: ATS

Return a callable function t(μ) that evaluates to the value of the corresponding test statistic.
LiteHF.internal_expected — Method

    internal_expected(Es, Vs, αs)

The @generated function that computes the expected counts in the expected(PyHFModel, parameters) evaluation. The Vs::NTuple{N, Vector{Int64}} has the same length as Es::NTuple{N, ExpCounts}.

In general αs is shorter than Es and Vs because a given nuisance parameter α may appear in multiple samples/modifiers.

If, for example, Vs[1] = [1,3,4], it means that the first ExpCounts in Es is evaluated with

    Es[1](@view αs[[1,3,4]])

and so on.
LiteHF.load_pyhfjson — Method

    load_pyhfjson(path)
LiteHF.pvalue — Method

    pvalue(teststat, s_plus_b_dist::AsymptoticDist, b_only_dist::AsymptoticDist) -> (CLsb, CLb, CLs)

Compute the confidence levels for S+B, B only, and S.
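The third value follows the standard CLs construction, CLs = CLsb / CLb (a sketch with a hypothetical name; the actual values come from the two AsymptoticDist arguments):

```julia
# Standard CLs construction: dividing by CLb protects against excluding a
# signal the experiment has no sensitivity to (where CLsb alone would be
# small simply because CLb is small).
cls_from_pvalues(CLsb, CLb) = CLsb / CLb

cls_from_pvalues(0.04, 0.8)  # 0.05 -- right at the 95% CL exclusion threshold
```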
LiteHF.pvalue — Method

    pvalue(d::AsymptoticDist, value) -> Real

Compute the p-value for a single test-statistic distribution.
LiteHF.pyhf_logjointof — Method

    pyhf_logjointof(expected, obs, priors)

Return a callable function that calculates the joint log-likelihood of the likelihood and the priors; equivalent to adding loglikelihood and logprior together.

The "constraint" terms are included here.
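A toy sketch of this composition (the closures and the single-bin model below are hypothetical, and normalization constants are dropped):

```julia
# Toy log-likelihood: Poisson terms up to a constant (dropping log(k!)).
loglikelihood_of(expected, obs) =
    αs -> sum(k * log(λ) - λ for (λ, k) in zip(expected(αs), obs))

# Toy log-prior: Normal constraint terms up to a constant.
logprior_of(priors) =
    αs -> sum(-0.5 * ((α - p.μ) / p.σ)^2 for (α, p) in zip(αs, priors))

# The joint log density is simply the sum of the two parts.
logjoint_of(LL, LP) = αs -> LL(αs) + LP(αs)

expected(αs) = [10.0 * αs[1]]   # hypothetical single-bin model
priors = [(μ = 1.0, σ = 0.1)]
LL = loglikelihood_of(expected, [12.0])
LJ = logjoint_of(LL, logprior_of(priors))

LJ([1.2]) - LL([1.2])  # -2.0, the prior ("constraint") contribution
```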
LiteHF.pyhf_loglikelihoodof — Method

    pyhf_loglikelihoodof(expected, obs)

Return a callable function L(αs) that calculates the log-likelihood; expected is a callable of αs as well.

The so-called "constraint" terms (from priors) are NOT included here.
LiteHF.pyhf_logpriorof — Method

    pyhf_logpriorof(priors)

Return a callable function L(αs) that calculates the log-likelihood of the priors. These are sometimes called the "constraint" terms.
LiteHF.sortpoi! — Method

Ensure the POI parameter always comes first in the input array.