{% raw %}
Title: Create a Markdown Blog Post Integrating Research Details and a Featured Paper
====================================================================================
This task involves generating a Markdown file (ready for a GitHub-served Jekyll site) that integrates our research details with a featured research paper. The output must follow the exact format and conventions described below.
====================================================================================
Output Format (Markdown):
------------------------------------------------------------------------------------
---
layout: post
title: "Simple and statistically sound recommendations for analysing physical
theories"
date: 2020-12-17
categories: papers
---


Content generated by [gemini-2.5-pro](https://deepmind.google/technologies/gemini/) using [this prompt](/prompts/content/2020-12-17-2012.09874.txt).
Image generated by [imagen-3.0-generate-002](https://deepmind.google/technologies/gemini/) using [this prompt](/prompts/images/2020-12-17-2012.09874.txt).
------------------------------------------------------------------------------------
====================================================================================
Please adhere strictly to the following instructions:
====================================================================================
Section 1: Content Creation Instructions
====================================================================================
1. **Generate the Page Body:**
- Write a well-composed, engaging narrative that is suitable for a scholarly audience interested in advanced AI and astrophysics.
- Ensure the narrative is original and reflects the tone, style, and subject matter of the "Homepage Content" block (provided below), without reusing its text.
- Use bullet points, subheadings, or other formatting to enhance readability.
2. **Highlight Key Research Details:**
- Emphasize the contributions and impact of the paper, focusing on its methodology, significance, and context within current research.
- Specifically highlight the lead author, Shehu S. AbdusSalam. When referencing any author, use Markdown links from the Author Information block (prefer academic or GitHub links over social media).
3. **Integrate Data from Multiple Sources:**
- Seamlessly weave information from the following:
- **Paper Metadata (YAML):** Essential details including the title and authors.
- **Paper Source (TeX):** Technical content from the paper.
- **Bibliographic Information (bbl):** Extract bibliographic references.
- **Author Information (YAML):** Profile details for constructing Markdown links.
- Merge insights from the Paper Metadata, TeX source, Bibliographic Information, and Author Information blocks into a coherent narrative—do not treat these as separate or isolated pieces.
- Insert the generated narrative between the designated opening and closing HTML comment markers.
4. **Generate Bibliographic References:**
- Review the Bibliographic Information block carefully.
- For each reference that includes a DOI or arXiv identifier:
- For DOIs, format the link as `[10.1234/xyz](https://doi.org/10.1234/xyz)`.
- For arXiv identifiers, format the link as `[2103.12345](https://arxiv.org/abs/2103.12345)` (a minimal sketch of this mapping appears at the end of this section).
- **Important:** Do not use any LaTeX citation commands (e.g., `\cite{...}`). Render every reference directly as a Markdown link:
- **Incorrect:** `\cite{10.1234/xyz}`
- **Correct:** `[10.1234/xyz](https://doi.org/10.1234/xyz)`
- Ensure that at least three (3) of the most relevant references are naturally integrated into the narrative.
- Ensure that the link to the Featured paper [2012.09874](https://arxiv.org/abs/2012.09874) is included in the first sentence.
5. **Final Formatting Requirements:**
- The output must be plain Markdown; do not wrap it in Markdown code fences.
- Preserve the YAML front matter exactly as provided.
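For illustration only (this sketch is not part of the required output, and the helper names are hypothetical), the DOI and arXiv link formats described in item 4 can be expressed as:

```python
def doi_link(doi: str) -> str:
    # e.g. doi_link("10.1234/xyz") -> "[10.1234/xyz](https://doi.org/10.1234/xyz)"
    return f"[{doi}](https://doi.org/{doi})"


def arxiv_link(arxiv_id: str) -> str:
    # e.g. arxiv_link("2103.12345") -> "[2103.12345](https://arxiv.org/abs/2103.12345)"
    return f"[{arxiv_id}](https://arxiv.org/abs/{arxiv_id})"


# The featured paper link required in the first sentence of the narrative:
print(arxiv_link("2012.09874"))  # [2012.09874](https://arxiv.org/abs/2012.09874)
```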
====================================================================================
Section 2: Provided Data for Integration
====================================================================================
1. **Homepage Content (Tone and Style Reference):**
```markdown
---
layout: home
---

The Handley Research Group stands at the forefront of cosmological exploration, pioneering novel approaches that fuse fundamental physics with the transformative power of artificial intelligence. We are a dynamic team of researchers, including PhD students, postdoctoral fellows, and project students, based at the University of Cambridge. Our mission is to unravel the mysteries of the Universe, from its earliest moments to its present-day structure and ultimate fate. We tackle fundamental questions in cosmology and astrophysics, with a particular focus on leveraging advanced Bayesian statistical methods and AI to push the frontiers of scientific discovery. Our research spans a wide array of topics, including the [primordial Universe](https://arxiv.org/abs/1907.08524), [inflation](https://arxiv.org/abs/1807.06211), the nature of [dark energy](https://arxiv.org/abs/2503.08658) and [dark matter](https://arxiv.org/abs/2405.17548), [21-cm cosmology](https://arxiv.org/abs/2210.07409), the [Cosmic Microwave Background (CMB)](https://arxiv.org/abs/1807.06209), and [gravitational wave astrophysics](https://arxiv.org/abs/2411.17663).
### Our Research Approach: Innovation at the Intersection of Physics and AI
At The Handley Research Group, we develop and apply cutting-edge computational techniques to analyze complex astronomical datasets. Our work is characterized by a deep commitment to principled [Bayesian inference](https://arxiv.org/abs/2205.15570) and the innovative application of [artificial intelligence (AI) and machine learning (ML)](https://arxiv.org/abs/2504.10230).
**Key Research Themes:**
* **Cosmology:** We investigate the early Universe, including [quantum initial conditions for inflation](https://arxiv.org/abs/2002.07042) and the generation of [primordial power spectra](https://arxiv.org/abs/2112.07547). We explore the enigmatic nature of [dark energy, using methods like non-parametric reconstructions](https://arxiv.org/abs/2503.08658), and search for new insights into [dark matter](https://arxiv.org/abs/2405.17548). A significant portion of our efforts is dedicated to [21-cm cosmology](https://arxiv.org/abs/2104.04336), aiming to detect faint signals from the Cosmic Dawn and the Epoch of Reionization.
* **Gravitational Wave Astrophysics:** We develop methods for [analyzing gravitational wave signals](https://arxiv.org/abs/2411.17663), extracting information about extreme astrophysical events and fundamental physics.
* **Bayesian Methods & AI for Physical Sciences:** A core component of our research is the development of novel statistical and AI-driven methodologies. This includes advancing [nested sampling techniques](https://arxiv.org/abs/1506.00171) (e.g., [PolyChord](https://arxiv.org/abs/1506.00171), [dynamic nested sampling](https://arxiv.org/abs/1704.03459), and [accelerated nested sampling with $\beta$-flows](https://arxiv.org/abs/2411.17663)), creating powerful [simulation-based inference (SBI) frameworks](https://arxiv.org/abs/2504.10230), and employing [machine learning for tasks such as radiometer calibration](https://arxiv.org/abs/2504.16791), [cosmological emulation](https://arxiv.org/abs/2503.13263), and [mitigating radio frequency interference](https://arxiv.org/abs/2211.15448). We also explore the potential of [foundation models for scientific discovery](https://arxiv.org/abs/2401.00096).
**Technical Contributions:**
Our group has a strong track record of developing widely-used scientific software. Notable examples include:
* [**PolyChord**](https://arxiv.org/abs/1506.00171): A next-generation nested sampling algorithm for Bayesian computation.
* [**anesthetic**](https://arxiv.org/abs/1905.04768): A Python package for processing and visualizing nested sampling runs.
* [**GLOBALEMU**](https://arxiv.org/abs/2104.04336): An emulator for the sky-averaged 21-cm signal.
* [**maxsmooth**](https://arxiv.org/abs/2007.14970): A tool for rapid maximally smooth function fitting.
* [**margarine**](https://arxiv.org/abs/2205.12841): For marginal Bayesian statistics using normalizing flows and KDEs.
* [**fgivenx**](https://arxiv.org/abs/1908.01711): A package for functional posterior plotting.
* [**nestcheck**](https://arxiv.org/abs/1804.06406): Diagnostic tests for nested sampling calculations.
### Impact and Discoveries
Our research has led to significant advancements in cosmological data analysis and yielded new insights into the Universe. Key achievements include:
* Pioneering the development and application of advanced Bayesian inference tools, such as [PolyChord](https://arxiv.org/abs/1506.00171), which has become a cornerstone for cosmological parameter estimation and model comparison globally.
* Making significant contributions to the analysis of major cosmological datasets, including the [Planck mission](https://arxiv.org/abs/1807.06209), providing some of the tightest constraints on cosmological parameters and models of [inflation](https://arxiv.org/abs/1807.06211).
* Developing novel AI-driven approaches for astrophysical challenges, such as using [machine learning for radiometer calibration in 21-cm experiments](https://arxiv.org/abs/2504.16791) and [simulation-based inference for extracting cosmological information from galaxy clusters](https://arxiv.org/abs/2504.10230).
* Probing the nature of dark energy through innovative [non-parametric reconstructions of its equation of state](https://arxiv.org/abs/2503.08658) from combined datasets.
* Advancing our understanding of the early Universe through detailed studies of [21-cm signals from the Cosmic Dawn and Epoch of Reionization](https://arxiv.org/abs/2301.03298), including the development of sophisticated foreground modelling techniques and emulators like [GLOBALEMU](https://arxiv.org/abs/2104.04336).
* Developing new statistical methods for quantifying tensions between cosmological datasets ([Quantifying tensions in cosmological parameters: Interpreting the DES evidence ratio](https://arxiv.org/abs/1902.04029)) and for robust Bayesian model selection ([Bayesian model selection without evidences: application to the dark energy equation-of-state](https://arxiv.org/abs/1506.09024)).
* Exploring fundamental physics questions such as potential [parity violation in the Large-Scale Structure using machine learning](https://arxiv.org/abs/2410.16030).
### Charting the Future: AI-Powered Cosmological Discovery
The Handley Research Group is poised to lead a new era of cosmological analysis, driven by the explosive growth in data from next-generation observatories and transformative advances in artificial intelligence. Our future ambitions are centred on harnessing these capabilities to address the most pressing questions in fundamental physics.
**Strategic Research Pillars:**
* **Next-Generation Simulation-Based Inference (SBI):** We are developing advanced SBI frameworks to move beyond traditional likelihood-based analyses. This involves creating sophisticated codes for simulating [Cosmic Microwave Background (CMB)](https://arxiv.org/abs/1908.00906) and [Baryon Acoustic Oscillation (BAO)](https://arxiv.org/abs/1607.00270) datasets from surveys like DESI and 4MOST, incorporating realistic astrophysical effects and systematic uncertainties. Our AI initiatives in this area focus on developing and implementing cutting-edge SBI algorithms, particularly [neural ratio estimation (NRE) methods](https://arxiv.org/abs/2407.15478), to enable robust and scalable inference from these complex simulations.
* **Probing Fundamental Physics:** Our enhanced analytical toolkit will be deployed to test the standard cosmological model ($\Lambda$CDM) with unprecedented precision and to explore [extensions to Einstein's General Relativity](https://arxiv.org/abs/2006.03581). We aim to constrain a wide range of theoretical models, from modified gravity to the nature of [dark matter](https://arxiv.org/abs/2106.02056) and [dark energy](https://arxiv.org/abs/1701.08165). This includes leveraging data from upcoming [gravitational wave observatories](https://arxiv.org/abs/1803.10210) like LISA, alongside CMB and large-scale structure surveys from facilities such as Euclid and JWST.
* **Synergies with Particle Physics:** We will continue to strengthen the connection between cosmology and particle physics by expanding the [GAMBIT framework](https://arxiv.org/abs/2009.03286) to interface with our new SBI tools. This will facilitate joint analyses of cosmological and particle physics data, providing a holistic approach to understanding the Universe's fundamental constituents.
* **AI-Driven Theoretical Exploration:** We are pioneering the use of AI, including [large language models and symbolic computation](https://arxiv.org/abs/2401.00096), to automate and accelerate the process of theoretical model building and testing. This innovative approach will allow us to explore a broader landscape of physical theories and derive new constraints from diverse astrophysical datasets, such as those from GAIA.
Our overarching goal is to remain at the forefront of scientific discovery by integrating the latest AI advancements into every stage of our research, from theoretical modeling to data analysis and interpretation. We are excited by the prospect of using these powerful new tools to unlock the secrets of the cosmos.
Content generated by [gemini-2.5-pro-preview-05-06](https://deepmind.google/technologies/gemini/) using [this prompt](/prompts/content/index.txt).
Image generated by [imagen-3.0-generate-002](https://deepmind.google/technologies/gemini/) using [this prompt](/prompts/images/index.txt).
```
2. **Paper Metadata:**
```yaml
!!python/object/new:feedparser.util.FeedParserDict
dictitems:
id: http://arxiv.org/abs/2012.09874v2
guidislink: true
link: http://arxiv.org/abs/2012.09874v2
updated: '2022-04-11T08:22:51Z'
updated_parsed: !!python/object/apply:time.struct_time
- !!python/tuple
- 2022
- 4
- 11
- 8
- 22
- 51
- 0
- 101
- 0
- tm_zone: null
tm_gmtoff: null
published: '2020-12-17T19:00:06Z'
published_parsed: !!python/object/apply:time.struct_time
- !!python/tuple
- 2020
- 12
- 17
- 19
- 0
- 6
- 3
- 352
- 0
- tm_zone: null
tm_gmtoff: null
title: "Simple and statistically sound recommendations for analysing physical\n\
\ theories"
title_detail: !!python/object/new:feedparser.util.FeedParserDict
dictitems:
type: text/plain
language: null
base: ''
value: "Simple and statistically sound recommendations for analysing physical\n\
\ theories"
summary: 'Physical theories that depend on many parameters or are tested against
data
from many different experiments pose unique challenges to statistical
inference. Many models in particle physics, astrophysics and cosmology fall
into one or both of these categories. These issues are often sidestepped with
statistically unsound ad hoc methods, involving intersection of parameter
intervals estimated by multiple experiments, and random or grid sampling of
model parameters. Whilst these methods are easy to apply, they exhibit
pathologies even in low-dimensional parameter spaces, and quickly become
problematic to use and interpret in higher dimensions. In this article we give
clear guidance for going beyond these procedures, suggesting where possible
simple methods for performing statistically sound inference, and
recommendations of readily-available software tools and standards that can
assist in doing so. Our aim is to provide any physicists lacking comprehensive
statistical training with recommendations for reaching correct scientific
conclusions, with only a modest increase in analysis burden. Our examples can
be reproduced with the code publicly available at
https://doi.org/10.5281/zenodo.4322283.'
summary_detail: !!python/object/new:feedparser.util.FeedParserDict
dictitems:
type: text/plain
language: null
base: ''
value: 'Physical theories that depend on many parameters or are tested against
data
from many different experiments pose unique challenges to statistical
inference. Many models in particle physics, astrophysics and cosmology fall
into one or both of these categories. These issues are often sidestepped with
statistically unsound ad hoc methods, involving intersection of parameter
intervals estimated by multiple experiments, and random or grid sampling of
model parameters. Whilst these methods are easy to apply, they exhibit
pathologies even in low-dimensional parameter spaces, and quickly become
problematic to use and interpret in higher dimensions. In this article we
give
clear guidance for going beyond these procedures, suggesting where possible
simple methods for performing statistically sound inference, and
recommendations of readily-available software tools and standards that can
assist in doing so. Our aim is to provide any physicists lacking comprehensive
statistical training with recommendations for reaching correct scientific
conclusions, with only a modest increase in analysis burden. Our examples
can
be reproduced with the code publicly available at
https://doi.org/10.5281/zenodo.4322283.'
authors:
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Shehu S. AbdusSalam
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Fruzsina J. Agocs
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Benjamin C. Allanach
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Peter Athron
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: "Csaba Bal\xE1zs"
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Emanuele Bagnaschi
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Philip Bechtle
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Oliver Buchmueller
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Ankit Beniwal
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Jihyun Bhom
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Sanjay Bloor
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Torsten Bringmann
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Andy Buckley
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Anja Butter
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: "Jos\xE9 Eliel Camargo-Molina"
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Marcin Chrzaszcz
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Jan Conrad
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Jonathan M. Cornell
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Matthias Danninger
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Jorge de Blas
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Albert De Roeck
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Klaus Desch
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Matthew Dolan
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Herbert Dreiner
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Otto Eberhardt
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: John Ellis
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Ben Farmer
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Marco Fedele
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: "Henning Fl\xE4cher"
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Andrew Fowlie
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: "Tom\xE1s E. Gonzalo"
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Philip Grace
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Matthias Hamer
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Will Handley
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Julia Harz
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Sven Heinemeyer
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Sebastian Hoof
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Selim Hotinli
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Paul Jackson
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Felix Kahlhoefer
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Kamila Kowalska
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: "Michael Kr\xE4mer"
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Anders Kvellestad
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Miriam Lucio Martinez
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Farvah Mahmoudi
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Diego Martinez Santos
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Gregory D. Martinez
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Satoshi Mishima
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Keith Olive
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Ayan Paul
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Markus Tobias Prim
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Werner Porod
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Are Raklev
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Janina J. Renk
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Christopher Rogan
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Leszek Roszkowski
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Roberto Ruiz de Austri
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Kazuki Sakurai
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Andre Scaffidi
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Pat Scott
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Enrico Maria Sessolo
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Tim Stefaniak
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: "Patrick St\xF6cker"
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Wei Su
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Sebastian Trojanowski
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Roberto Trotta
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Yue-Lin Sming Tsai
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Jeriek Van den Abeele
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Mauro Valli
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Aaron C. Vincent
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Georg Weiglein
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Martin White
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Peter Wienemann
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Lei Wu
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Yang Zhang
author_detail: !!python/object/new:feedparser.util.FeedParserDict
dictitems:
name: Yang Zhang
author: Yang Zhang
arxiv_doi: 10.1088/1361-6633/ac60ac
links:
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
title: doi
href: http://dx.doi.org/10.1088/1361-6633/ac60ac
rel: related
type: text/html
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
href: http://arxiv.org/abs/2012.09874v2
rel: alternate
type: text/html
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
title: pdf
href: http://arxiv.org/pdf/2012.09874v2
rel: related
type: application/pdf
arxiv_comment: "15 pages, 4 figures. extended discussions. closely matches version\n\
\ accepted for publication"
arxiv_journal_ref: Rep. Prog. Phys. 85 052201 (2022)
arxiv_primary_category:
term: hep-ph
scheme: http://arxiv.org/schemas/atom
tags:
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
term: hep-ph
scheme: http://arxiv.org/schemas/atom
label: null
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
term: astro-ph.CO
scheme: http://arxiv.org/schemas/atom
label: null
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
term: hep-ex
scheme: http://arxiv.org/schemas/atom
label: null
- !!python/object/new:feedparser.util.FeedParserDict
dictitems:
term: physics.data-an
scheme: http://arxiv.org/schemas/atom
label: null
```
3. **Paper Source (TeX):**
```tex
% texcount rules
% for wordcount, do texcount main.tex
%
%TC:macro \cite [ignore]
%TC:macro \cref [ignore]
%
\documentclass[fleqn,10pt,mtlplain]{wlscirep}
% fonts
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{mathpazo}
\usepackage{inconsolata}
\usepackage{microtype}
\makeatletter
\renewcommand\AB@authnote[1]{\textsuperscript{\sffamily\mdseries #1}}
\renewcommand\AB@affilnote[1]{\textsuperscript{\sffamily\mdseries #1}}
\makeatother
\usepackage{bm}
\usepackage{xspace}
\usepackage[normalem]{ulem}
\usepackage{gensymb}
\usepackage[symbol]{footmisc}
\usepackage{siunitx}
\usepackage{doi}
\usepackage{cleveref}
\crefname{figure}{Figure}{Figures}
\crefname{table}{Table}{Tables}
\crefname{chapter}{Chapter}{Chapters}
\crefname{section}{Section}{Sections}
\definecolor{lightmagenta}{RGB}{250,150,200}
\newcommand{\ab}[1]{\textcolor{lightmagenta}{AB: #1}}
%TC:macro \ab [ignore]
\newcommand{\CsB}[1]{\textcolor{blue}{CsB: #1}}
%TC:macro \CsB [ignore]
\newcommand{\af}[1]{\textcolor{brown}{AF: #1}}
%TC:macro \af [ignore]
\newcommand{\pa}[1]{\textcolor{red}{PA: #1}}
%TC:macro \pa [ignore]
\newcommand{\sh}[1]{\textcolor{orange}{SebH: #1}}
%TC:macro \sh [ignore]
\newcommand{\paadd}[1]{\textcolor{red}{#1}}
\newcommand{\yz}[1]{\textcolor{purple}{YZ: #1}}
%TC:macro \yz [ignore]
\newcommand{\ar}[1]{\textcolor{pink}{AR: #1}}
%TC:macro \ar [ignore]
\newcommand{\mtp}[1]{\textcolor{green}{MP: #1}}
%TC:macro \mtp [ignore]
\newcommand{\tb}[1]{\textcolor{violet}{TB: #1}}
%TC:macro \tb [ignore]
\newcommand{\ak}[1]{\textcolor{teal}{AK: #1}}
%TC:macro \tb [ignore]
\newcommand{\tg}[1]{\textcolor{olive}{TG: #1}}
%TC:macro \tg [ignore]
\newcommand{\pat}[1]{\textcolor{magenta}{Pat: #1}}
%TC:macro \pat [ignore]
\newcommand{\SB}[1]{\textcolor{purple}{SB: #1}}
%TC:macro \SB [ignore]
\newcommand{\EC}[1]{\textcolor{cyan}{EC: #1}}
%TC:macro \EC [ignore]
\newcommand{\mjw}[1]{\textcolor{gray}{Martin: #1}}
%TC:macro \mjw [ignore]
\newcommand{\recommendation}[1]{\vspace{\baselineskip}\noindent\emph{Recommendation:} #1}
\newcommand{\pvalue}{\emph{p}-value\xspace}
\newcommand{\aachen}{Institute for Theoretical Particle Physics and Cosmology (TTK), RWTH Aachen University, Sommerfeldstra\ss e 14, D-52056 Aachen, Germany}
\newcommand{\queens}{Department of Physics, Engineering Physics and Astronomy, Queen's University, Kingston ON K7L 3N6, Canada}
\newcommand{\imperial}{Department of Physics, Imperial College London, Blackett Laboratory, Prince Consort Road, London SW7 2AZ, UK}
\newcommand{\cambridge}{Cavendish Laboratory, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, UK}
\newcommand{\oslo}{Department of Physics, University of Oslo, Box 1048, Blindern, N-0316 Oslo, Norway}
\newcommand{\adelaide}{ARC Centre for Dark Matter Particle Physics, Department of Physics, University of Adelaide, Adelaide, SA 5005, Australia}
\newcommand{\louvain}{Centre for Cosmology, Particle Physics and Phenomenology (CP3), Universit\'{e} catholique de Louvain, B-1348 Louvain-la-Neuve, Belgium}
\newcommand{\monash}{School of Physics and Astronomy, Monash University, Melbourne, VIC 3800, Australia}
\newcommand{\mcdonald}{Arthur B. McDonald Canadian Astroparticle Physics Research Institute, Kingston ON K7L 3N6, Canada}
\newcommand{\nanjing}{Department of Physics and Institute of Theoretical Physics, Nanjing Normal University, Nanjing, Jiangsu 210023, China}
\newcommand{\okc}{Oskar Klein Centre for Cosmoparticle Physics, AlbaNova University Centre, SE-10691 Stockholm, Sweden}
\newcommand{\perimeter}{Perimeter Institute for Theoretical Physics, Waterloo ON N2L 2Y5, Canada}
\newcommand{\uq}{School of Mathematics and Physics, The University of Queensland, St.\ Lucia, Brisbane, QLD 4072, Australia}
\newcommand{\gottingen}{Institut f\"ur Astrophysik und Geophysik, Georg-August-Universit\"at G\"ottingen, Friedrich-Hund-Platz~1, D-37077 G\"ottingen, Germany}
\newcommand{\ioa}{Institute for Astronomy, University of Cambridge, Madingley Road, Cambridge, CB3 0HA, UK}
\newcommand{\kicc}{Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge, CB3 0HA, UK}
\newcommand{\caius}{Gonville \& Caius College, Trinity Street, Cambridge, CB2 1TA, UK}
\newcommand{\bonn}{University of Bonn, Physikalisches Institut, Nussallee 12, D-53115 Bonn, Germany}
\newcommand{\bom}{Bureau of Meteorology, Melbourne, VIC 3001, Australia}
\newcommand{\glasgow}{School of Physics and Astronomy, University of Glasgow, University Place, Glasgow, G12~8QQ, UK}
\newcommand{\sfu}{Department of Physics, Simon Fraser University, 8888 University Drive, Burnaby B.C., Canada}
\newcommand{\zzu}{School of Physics, Zhengzhou University, ZhengZhou 450001, China}
\newcommand{\lyon}{Universit\'e de Lyon, Universit\'e Claude Bernard Lyon 1, CNRS/IN2P3, Institut de Physique des 2 Infinis de Lyon, UMR 5822, F-69622, Villeurbanne, France}
\newcommand{\cernth}{Theoretical Physics Department, CERN, CH-1211 Geneva 23, Switzerland}
\newcommand{\cernex}{Experimental Physics Department, CERN, CH–1211 Geneva 23, Switzerland}
\newcommand{\infnt}{Istituto Nazionale di Fisica Nucleare, Sezione di Torino, via P. Giuria 1, I–10125 Torino, Italy}
\newcommand{\ifj}{Institute of Nuclear Physics, Polish Academy of Sciences, Krakow, Poland}
\newcommand{\ific}{Instituto de F\'isica Corpuscular, IFIC-UV/CSIC, Apt.\ Correus 22085, E-46071, Valencia, Spain}
\newcommand{\kansas}{Department of Physics and Astronomy, University of Kansas, Lawrence, KS 66045, USA}
\newcommand{\wsu}{Department of Physics, Weber State University, 1415 Edvalson St., Dept. 2508, Ogden, UT 84408, USA}
\newcommand{\tum}{Physik Department T70, James-Franck-Stra{\ss}e, Technische Universit\"at M\"unchen, D-85748 Garching, Germany}
\newcommand{\uppsala}{Department of Physics and Astronomy, Uppsala University, Box 516, SE-751 20 Uppsala, Sweden}
\newcommand{\desy}{Deutsches Elektronen-Synchrotron DESY, Notkestr.~85, 22607 Hamburg, Germany}
\newcommand{\kit}{Institut f\"ur Theoretische Teilchenphysik, Karlsruhe Institute of Technology, D-76131 Karlsruhe, Germany}
\newcommand{\ucla}{Physics and Astronomy Department, University of California, Los Angeles, CA 90095, USA}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\title{Simple and statistically sound recommendations for analysing physical theories}
\preprint{
PSI-PR-20-23; %Bagnaschi
BONN-TH-2020-11; %Bechtle, Desch, Dreiner
CP3-20-59; %Beniwal
KCL-PH-TH/2020-75; %Ellis
P3H-20-080; TTP20-044; %Fedele
TUM-HEP-1310/20; %Harz
IFT-UAM/CSIC-20-180; %Heinemeyer
TTK-20-47; %Kahlhoefer
CERN-TH-2020-215; %Mahmoudi
FTPI-MINN-20-36; UMN-TH-4005/20; %Olive
HU-EP-20/37; %Paul
DESY 20-222; %Paul, Stefaniak, Weiglein
ADP-20-33/T1143; %Su, White
Imperial/TP/2020/RT/04; %Trotta
UCI-TR-2020-19 %Valli
% gambit-review-2020 %GAMBIT; for arxiv field only
}
\author[1]{Shehu~S.~AbdusSalam}
\author[a,2,3]{Fruzsina~J.~Agocs}
\author[4]{Benjamin~C.~Allanach}
\author[a,5,6]{Peter~Athron}
\author[a,6]{Csaba~Bal{\'a}zs}
\author[b,7]{Emanuele Bagnaschi}
\author[c,8]{Philip Bechtle}
\author[b,9]{Oliver Buchmueller}
\author[a,10]{Ankit~Beniwal}
\author[a,11]{Jihyun Bhom}
\author[a,9,12]{Sanjay Bloor}
\author[a,13]{Torsten Bringmann}
\author[a,14]{Andy~Buckley}
\author[15]{Anja~Butter}
\author[a,16]{Jos{\'e}~Eliel~Camargo-Molina}
\author[a,11]{Marcin Chrzaszcz}
\author[a,17]{Jan Conrad}
\author[a,18]{Jonathan~M.~Cornell}
\author[a,19]{Matthias~Danninger}
\author[d,20]{Jorge de Blas}
\author[b,21]{Albert De Roeck}
\author[c,8]{Klaus Desch}
\author[b,22]{Matthew Dolan}
\author[c,8]{Herbert Dreiner}
\author[d,23]{Otto Eberhardt}
\author[b,24]{John Ellis}
\author[a,9,25]{Ben~Farmer}
\author[d,26]{Marco~Fedele}
\author[b,27]{Henning Fl{\"a}cher}
\author[a,5,*]{Andrew~Fowlie}
\author[a,6]{Tom{\'a}s~E.~Gonzalo}
\author[a,28]{Philip~Grace}
\author[c,8]{Matthias Hamer}
\author[a,2,3]{Will~Handley}
\author[a,29]{Julia~Harz}
\author[b,30]{Sven Heinemeyer}
\author[a,31]{Sebastian~Hoof}
\author[a,9]{Selim~Hotinli}
\author[a,28]{Paul~Jackson}
\author[a,32]{Felix~Kahlhoefer}
\author[e,33]{Kamila Kowalska}
\author[c,32]{Michael Kr\"amer}
\author[a,13]{Anders~Kvellestad}
\author[b,34]{Miriam Lucio Martinez}
\author[a,35,36]{Farvah~Mahmoudi}
\author[b,37]{Diego Martinez Santos}
\author[a,38]{Gregory~D.~Martinez}
\author[d,39]{Satoshi Mishima}
\author[b,40]{Keith Olive}
\author[d,41,42]{Ayan Paul}
\author[a,8]{Markus~Tobias~Prim}
\author[c,43]{Werner Porod}
\author[a,13]{Are~Raklev}
\author[a,9,12,17]{Janina~J.~Renk}
\author[a,44]{Christopher~Rogan}
\author[e,45,33]{Leszek Roszkowski}
\author[a,30]{Roberto~Ruiz~de~Austri}
\author[b,46]{Kazuki Sakurai}
\author[a,47]{Andre Scaffidi}
\author[a,9,12]{Pat~Scott}
\author[e,33]{Enrico~Maria~Sessolo}
\author[c,41]{Tim Stefaniak}
\author[a,32]{Patrick~St{\"o}cker}
\author[a,28,48]{Wei~Su}
\author[e,45,33]{Sebastian Trojanowski}
\author[9,49]{Roberto~Trotta}
\author[50]{Yue-Lin Sming Tsai}
\author[a,13]{Jeriek~Van~den~Abeele}
\author[d,51]{Mauro Valli}
\author[a,52,53,54]{Aaron~C.~Vincent}
\author[b,41,55]{Georg~Weiglein}
\author[a,28]{Martin~White}
\author[c,8]{Peter Wienemann}
\author[a,5]{Lei~Wu}
\author[a,6,56]{Yang~Zhang}
\affil[a]{The GAMBIT Community}
\affil[b]{The MasterCode Collaboration}
\affil[c]{The Fittino Collaboration}
\affil[d]{HEPfit}
\affil[e]{BayesFits Group\newline}
\affil[1]{Department of Physics, Shahid Beheshti University, Tehran, Iran}
\affil[2]{\cambridge}
\affil[3]{\kicc}
\affil[4]{DAMTP, University of Cambridge, Cambridge, CB3 0WA, UK}
\affil[5]{\nanjing}
\affil[6]{\monash}
\affil[7]{Paul Scherrer Institut, CH-5232 Villigen, Switzerland}
\affil[8]{\bonn}
\affil[9]{\imperial}
\affil[10]{\louvain}
\affil[11]{\ifj}
\affil[12]{\uq}
\affil[13]{\oslo}
\affil[14]{\glasgow}
\affil[15]{Institut f\"ur Theoretische Physik, Universit\"at Heidelberg, Germany}
\affil[16]{\uppsala}
\affil[17]{\okc}
\affil[18]{\wsu}
\affil[19]{\sfu}
\affil[20]{Institute of Particle Physics Phenomenology, Durham University, Durham DH1 3LE, UK}
\affil[21]{\cernex}
\affil[22]{ARC Centre of Excellence for Dark Matter Particle Physics, School of Physics, The University of Melbourne, Victoria 3010, Australia}
\affil[23]{\ific}
\affil[24]{Theoretical Particle Physics and Cosmology Group, Department of Physics, King’s College London, London WC2R 2LS, UK}
\affil[25]{\bom}
\affil[26]{\kit}
\affil[27]{H.~H.~Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol BS8 1TL, UK}
\affil[28]{\adelaide}
\affil[29]{\tum}
\affil[30]{Instituto de F\'isica Te\'orica UAM-CSIC, Cantoblanco, 28049, Madrid, Spain}
\affil[31]{\gottingen}
\affil[32]{\aachen}
\affil[33]{National Centre for Nuclear Research, ul. Pasteura 7, PL-02-093 Warsaw, Poland}
\affil[34]{Nikhef National Institute for Subatomic Physics, Amsterdam, Netherlands}
\affil[35]{\lyon}
\affil[36]{\cernth}
\affil[37]{Instituto Galego de F{\'i}sica de Altas Enerx{\'i}as, Universidade de Santiago de Compostela, Spain}
\affil[38]{\ucla}
\affil[39]{Theory Center, IPNS, KEK, Tsukuba, Ibaraki 305-0801, Japan}
\affil[40]{William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455, USA}
\affil[41]{\desy}
\affil[42]{Institut f\"ur Physik, Humboldt-Universit\"at zu Berlin, D-12489 Berlin, Germany}
\affil[43]{University of W\"urzburg, Emil-Hilb-Weg 22, D-97074 Würzburg, Germany}
\affil[44]{\kansas}
\affil[45]{Astrocent, Nicolaus Copernicus Astronomical Center Polish Academy of Sciences, Bartycka 18, PL-00-716 Warsaw, Poland}
\affil[46]{Institute of Theoretical Physics, Faculty of Physics, University of Warsaw, ul. Pasteura 5, PL-02-093 Warsaw, Poland}
\affil[47]{\infnt}
\affil[48]{Korea Institute for Advanced Study, Seoul 02455, Korea}
\affil[49]{SISSA International School for Advanced Studies, Via Bonomea 265, 34136, Trieste, Italy}
\affil[50]{Key Laboratory of Dark Matter and Space Astronomy, Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210033, China}
\affil[51]{Department of Physics and Astronomy, University of California, Irvine, California 92697, USA}
\affil[52]{\queens}
\affil[53]{\mcdonald}
\affil[54]{\perimeter}
\affil[55]{Institut f\"ur Theoretische Physik, Universit\"at Hamburg,Luruper Chaussee 149, 22761 Hamburg, Germany}
\affil[56]{\zzu}
\affil[*]{E-mail: andrew.j.fowlie@njnu.edu.cn}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{abstract}
Physical theories that depend on many parameters or are tested against data from many different experiments pose unique challenges to statistical inference. Many models in particle physics, astrophysics and cosmology fall into one or both of these categories. These issues are often sidestepped with statistically unsound \textit{ad~hoc} methods, involving intersection of parameter intervals estimated by multiple experiments, and random or grid sampling of model parameters. Whilst these methods are easy to apply, they exhibit pathologies even in low-dimensional parameter spaces, and quickly become problematic to use and interpret in higher dimensions. In this article we give clear guidance for going beyond these procedures, suggesting where possible simple methods for performing statistically sound inference, and recommendations of readily-available software tools and standards that can assist in doing so. Our aim is to provide any physicists lacking comprehensive statistical training with recommendations for reaching correct scientific conclusions, with only a modest increase in analysis burden. Our examples can be reproduced with the code publicly available at \href{https://doi.org/10.5281/zenodo.4322283}{Zenodo}.
\end{abstract}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\flushbottom
\maketitle
\clearpage
\thispagestyle{empty}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
The search for new particles is underway in a wide range of high-energy, astrophysical and precision experiments. These searches are made harder by the fact that theories for physics beyond the Standard Model almost always contain unknown parameters that cannot be uniquely derived from the theory itself. For example, in particle physics models of dark matter, these would be the dark matter mass and its couplings. Models usually make a range of different experimental predictions depending on the assumed values of their unknown parameters. Despite an ever-increasing wealth of experimental data, evidence for specific physics beyond the Standard Model has not yet emerged, leading to the proposal of increasingly complicated models. This increases the number of unknown parameters in the models, leading to high-dimensional parameter spaces. This problem is compounded by additional calibration and nuisance parameters that are required as experiments become more complicated. Unfortunately, high-dimensional parameter spaces, and the availability of relevant constraints from an increasing number of experiments, expose flaws in the simplistic methods sometimes employed in phenomenology to assess models. In this article, we recommend alternatives suitable for today's models and data, consistent with established statistical principles.
When assessing a model in light of data, physicists typically want answers to two questions: \textit{a})~Is the model favoured or allowed by the data? \textit{b})~What values of the unknown parameters are favoured or allowed by the data? In statistical language, these questions concern model testing and parameter estimation, respectively.
Parameter estimation allows us to understand what a model could predict, and design future experiments to test it. On the theory side, it allows us to construct theories that contain the model and naturally accommodate the observations. Model testing, on the other hand, allows us to test whether data indicate the presence of a new particle or new phenomena.
Many analyses of particle physics models suffer from two key deficiencies. First, they overlay exclusion curves from experiments and, second, they perform a random or grid scan of a high-dimensional parameter space. These techniques are often combined to perform a crude hypothesis test. In this article, we recapitulate relevant statistical principles, point out why both of these methods give unreliable results, and give concrete recommendations for what should be done instead. Despite the prevalence of these problems, we stress that there is diversity in the depth of statistical training in the physics community. Physicists contributed to major developments in statistical theory\cite{Jeffreys:1939xee,2008arXiv0808.2902R} and there are many statistically rigorous works in particle physics and related fields, including the famous Higgs discovery,\cite{Chatrchyan:2012ufa,Aad:2012tfa} and global fits of electroweak data.\cite{Baak:2014ora} Our goal is to make clear recommendations that would help lift all analyses closer to those standards, though we urge particular caution when testing hypotheses as unfortunately there are no simple recipes. The examples that we use to illustrate our recommendations can be reproduced with the code publicly available through \href{https://doi.org/10.5281/zenodo.4322283}{Zenodo}.\cite{zenodo_record}
Our discussion covers both Bayesian methods,\cite{giulio2003bayesian,gregory2005bayesian,sivia2006data,Trotta:2008qt,von2014bayesian,bailer2017practical} in which one directly considers the plausibility of a model and regions of its parameter space, and frequentist methods,\cite{lyons1989statistics,cowan1998statistical,james2006statistical,behnke2013data}
in which one compares the observed data to data that could have been observed in identical repeated experiments.%
\footnote{We cite here introductory textbooks about statistics by and for scientists. Refs.~\citen{sivia2006data,james2006statistical} are particularly concise.}
%
Our recommendations are agnostic about the relative merits of the two sets of methods, and apply whether one is an adherent of either form, or neither.
Both approaches usually involve the so-called likelihood function,\cite{Cousins:2020ntk} which tells us the probability of the observed data, assuming a particular model and a particular combination of numerical values for its unknown parameters.
In the following discussions, we assume that a likelihood is available and consider inferences based on it. In general, though, the likelihood alone is not enough in frequentist inference (as well as for reference priors and some methods in Bayesian statistics that use simulation). One requires the so-called sampling distribution; this is similar to the likelihood function, except that the data is not fixed to the observed data (see the likelihood principle\cite{berger1988likelihood} for further discussion). There are, furthermore, situations in which the likelihood is intractable. In such cases, likelihood-free techniques may be possible.\cite{Brehmer:2020cvb} In fact, in realistic applications in physics, the complete likelihood is almost always intractable. Typically, however, we create summaries of the data by e.g.\ binning collider events into histograms.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t]
\centering
\includegraphics[width=12cm]{phi_s.pdf}
\caption{Confidence intervals in \num{100} pseudo-experiments, from the combination of five measurements~(\textit{left}) or from the intersection of five individual confidence intervals~(\textit{right}). We show the true value of~$\phi_s$ with a vertical black line. Intervals that contain the true value are shown in blue; those that do not are shown in red. On the right-hand side, grey bands indicate cases where no value can be found where the 95\% intervals from all five measurements overlap. Each bar originates from five pseudo-measurements, as shown zoomed-in to the side for a few points.}
\label{fig:phi_s}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Problems of overlaying exclusion limits}\label{sec:limit_intersection}
Experimental searches for new phenomena are usually summarised by confidence regions, either for a particular model's parameters or for model-independent quantities more closely related to the experiment that can be interpreted in any model. For example, experiments performing direct searches for dark matter\cite{Undagoitia:2015gya} publish confidence regions for the mass and scattering cross section of the dark matter particle, rather than for any parameters included in the Lagrangian of a specific dark matter model. To apply those results to a given dark matter model, the confidence regions must be transformed to the parameter space of the specific model of interest. This can sometimes modify the statistical properties of the confidence regions, so care must be taken in performing the transformation.\cite{Bridges:2010de,Akrami11coverage,Strege12}
In the frequentist approach, if an experiment that measured a parameter were repeated over and over again, each repeat would lead to a different confidence region for the measured parameter. The coverage is the fraction of repeated experiments in which the resulting confidence region would contain the true parameter values.\cite{10.2307/91337}
The confidence level of a confidence region is the desired coverage.\footnote{Note that for discrete observations\cite{2010NIMPA.612..388C} or in the presence of nuisance parameters,\cite{Rolke:2004mj,Punzi:2005yq} confidence regions are often defined to include the true parameter values in \emph{at least} e.g.\ $95\%$ of repeated experiments,\cite{Zyla:2020zbs_conf_intervals} and that in some cases the nominal confidence level may not hold in practice.}
For example: a 95\% confidence region should contain the true values in 95\% of repeated experiments, and the rate at which we would wrongly exclude the true parameter values is controlled to be~5\%. Approximate confidence regions can often be found from the likelihood function alone using asymptotic assumptions about the sampling distribution, e.g., Wilks' theorem.\cite{wilks1938} However, it is important to check carefully that the required assumptions hold.\cite{Algeri:2019arh}
Confidence intervals may be constructed to be one- or two-tailed. By construction, in the absence of a new effect, a 95\% upper limit would exclude all effect sizes, including zero, at a rate of 5\%. The fact that confidence intervals may exclude effect sizes that the experiment had no power to discover was considered a problem in particle physics and lead to the creation of CL${}_s$ intervals.\cite{Read:2002hq} By construction, these intervals cannot exclude negligible effect sizes, and thus over-cover.
The analogous construct in Bayesian statistics is the credible region. First, prior information about the parameters and information from the observed data contained in the likelihood function are combined into the posterior using Bayes' theorem. Second, parameters that are not of interest are integrated over, resulting in a marginal posterior distribution. A 95\% credible region for the remaining parameters of interest is found from the marginal posterior by defining a region containing 95\% of the posterior probability. In general, credible regions only guarantee average coverage: suppose we re-sampled model parameters and pseudo-data from the model and constructed 95\% credible regions. In 95\% of such trials, the credible region would contain the sampled model parameters.\cite{10.2307/2347266,james2006statistical} % james p250, sec. 9.6.1
Whilst credible regions and confidence intervals are identical in some cases~(e.g.\ in normal linear models), the fact that they in general lead to different inferences remains a point of contention.\cite{Morey2016} For both credible regions and confidence intervals, the level only stipulates the size of the region. One requires an ordering rule to decide which region of that size is selected. For example, the Feldman-Cousins construction\cite{Feldman:1997qc} for confidence regions and the highest-posterior density ordering rule for credible regions naturally switch from a one-\ to a two-tailed result.
When several experiments report confidence regions, requiring that the true value must lie within all of those regions amounts to approximating the combined confidence region by the intersection of regions from the individual experiments. This quickly loses accuracy as more experiments are applied in sequence, and leads to much greater than nominal error rates. This is because by taking an intersection of $n$ independent 95\% confidence regions, a parameter point has $n$ chances to be excluded at a $5\%$ error rate, giving an error rate of $1 - 0.95^n$.\cite{Junk:2020azi}
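% Illustrative aside (not part of the original manuscript): for the $n = 5$ intervals
% used in \cref{fig:phi_s}, the intersection has nominal coverage $0.95^5 \approx 0.77$,
% i.e.\ an error rate of $1 - 0.95^5 \approx 23\%$ rather than the intended $5\%$,
% consistent with the $78\%$ coverage found in the pseudo-experiments below.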
This issue is illustrated in \cref{fig:phi_s} using the $B$-physics observable $\phi_s$, which is a well-measured phase characterising CP-violation in $B_s$ meson decays.\cite{Amhis:2019ckw} We perform 10,000~pseudo-experiments.\footnote{In a pseudo-experiment, we simulate the random nature of a real experimental measurement using a pseudo-random number generator on a computer. Pseudo-experiments may be used to learn about the expected distributions of repeated measurements.} Each pseudo-experiment consists of a set of five independent Gaussian measurements of an assumed true Standard Model value of $\phi_s = -0.037$ with statistical errors
0.078, % atlas_combined"
0.097, % cms
0.037, % lhcb_combined
0.285, and % lhcb_psi_2S_phi, averaged asymmetric +0.29/-0.28
0.17, % lhcb_DD
which are taken from real ATLAS, CMS and LHCb measurements.\footnote{See Eq.~(91) and Table~22 in Ref.~\citen{Amhis:2019ckw}.}
We can then obtain the $95\%$ confidence interval from the combination of the five measurements in each experiment,\footnote{We used the standard weighted-mean approach to combine the results.\cite{Zyla:2020zbs_weighted_mean}} and compare it to the interval resulting from taking the intersection of the five $95\%$ confidence intervals from the individual measurements. We show the first \num{100} pseudo-experiments in \cref{fig:phi_s}. As expected, the $95\%$ confidence interval from the combination contains the true value in $95\%$ of simulated experiments. The intersection of five individual $95\%$ confidence intervals, on the other hand, contains the true value in only $78\%$ of simulations. Thus, overlaying regions leads to inflated error rates and can create a misleading impression about the viable parameter space. Whilst this is a one-dimensional illustration, an identical issue would arise for the intersection of higher-dimensional confidence regions. Clearly, rather than taking the intersection of reported results, one should combine likelihood functions from multiple experiments. Good examples can be found in the literature.\cite{Ciuchini:2000de, deAustri:2006jwj, Allanach:2007qk, Buchmueller:2011ab, Bechtle:2012zk, Fowlie:2012im, Athron:2017qdc}
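% Illustrative aside (standard inverse-variance weighting, stated for clarity):
% the weighted mean of independent measurements $x_i \pm \sigma_i$ is
% $\bar{x} = \big(\sum_i x_i/\sigma_i^2\big) / \big(\sum_i 1/\sigma_i^2\big)$,
% with combined uncertainty $\sigma_{\bar{x}} = \big(\sum_i 1/\sigma_i^2\big)^{-1/2}$.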
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t]
\centering
\includegraphics[width=14cm]{contours_triangle_layout.pdf}
\caption{Starting from four individual likelihood functions~(\textit{top}; orange, blue, red and green, where lighter shades indicate greater likelihood), we compare overlaid $95\%$ contours~(\textit{bottom left}) versus a combination of the likelihoods~(\textit{bottom right}; blue contours). The dashed black line in both bottom panels is the intersection of the limits from the individual likelihoods. The red line in the bottom right panel is the resulting $95\%$ contour of the product of all likelihoods.}
\label{fig:contours}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
In \cref{fig:contours} we again show the dangers of simply overlaying confidence regions. We construct several toy two-dimensional likelihood functions (top), and find their $95\%$ confidence regions (bottom left). In the bottom right panel, we show the contours of the combined likelihood function (blue) and a combined $95\%$ confidence region (red contour). We see that the intersection of confidence regions~(dashed black curve) can both exclude points that are allowed by the combined confidence region, and allow points that should be excluded. It is often useful to plot both the contours of the combined likelihood (bottom right panel) and the contours from the individual likelihoods (bottom left panel), in order to better understand how each measurement or constraint contributes to the final combined confidence region.
\recommendation{Rather than overlaying confidence regions, combine likelihood functions. Derive a likelihood function for all the experimental data (this may be as simple as multiplying likelihood functions from independent experiments), and use it to compute approximate joint confidence or credible regions in the native parameter space of the model.}
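% Illustrative aside (the simple independent-experiment case mentioned above): for
% independent datasets, the combined likelihood is the product
% $\mathcal{L}_{\rm comb}(\bm{\theta}) = \prod_i \mathcal{L}_i(\bm{\theta})$, or equivalently
% $-2\ln\mathcal{L}_{\rm comb}(\bm{\theta}) = \sum_i \big[-2\ln\mathcal{L}_i(\bm{\theta})\big]$.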
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Problems of uniform random sampling and grid scanning}\label{sec:random}
Parameter estimation generally involves integration of a posterior or maximisation of a likelihood function. This is required to go from the full high-dimensional model to the one or two dimensions of interest or to compare different models. In most cases this cannot be done analytically. The likelihood function, furthermore, may be problematic in realistic settings. In particle physics,\cite{Balazs:2021uhg} it is usually moderately high-dimensional, and often contains distinct modes corresponding to different physical solutions, degeneracies in which several parameters can be adjusted simultaneously without impacting the fit, and plateaus in which the model is unphysical and the likelihood is zero.
On top of that, only noisy estimates of the likelihood may be available, such as from Monte Carlo simulations of collider searches for new particles, and derivatives of the likelihood function are usually unavailable.\cite{Balazs:2017moi} As even single evaluations of the likelihood function can be computationally expensive, the challenge is then to perform integration or maximisation in a high-dimensional parameter space using a tractable number of evaluations of the likelihood function.
Random and grid scans are common strategies in the high-energy phenomenology literature. In random scans, one evaluates the likelihood function at a number of randomly-chosen parameter points. Typically the parameters are drawn from a uniform distribution in each parameter in a particular parametrisation of the model, which introduces a dependency on the choice of parametrisation. In grid scans, one evaluates the likelihoods on a uniformly spaced grid with a fixed number of points per dimension.
It is then tempting to attribute statistical meaning to the number or density of samples found by random or grid scans. However, such an interpretation is very problematic, in particular when the scan is combined with the crude method described in \cref{sec:limit_intersection}, i.e.\ keeping only points that make predictions that lie within the confidence regions reported by every single experiment.
It is worth noting that random scans often outperform grid scans: consider 100 likelihood evaluations in a two-parameter model where the likelihood function depends much more strongly on the first parameter than on the second. A random scan would try~100 different parameter values of the important parameter, whereas the grid scan would try just~10. In a similar vein, quasi-random samples that cover the space more evenly than truly random samples can out-perform truly random sampling.\cite{JMLR:v13:bergstra12a}
This is illustrated in \cref{fig:quasi_random} with 256 samples in two-dimensions.
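% A minimal sketch, assuming Python with NumPy and SciPy (scipy.stats.qmc requires SciPy >= 1.7); illustrative only, not the script used to produce the figure. It generates 256 grid, uniform random and scrambled Sobol quasi-random samples on the unit square for a comparison like the one shown.
\begin{verbatim}
import numpy as np
from scipy.stats import qmc

n = 256
# 16 x 16 regular grid with points at cell centres
g = (np.arange(16) + 0.5) / 16
grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
# uniform random samples
random = np.random.default_rng(1).uniform(size=(n, 2))
# scrambled Sobol quasi-random samples, which cover the square more evenly
sobol = qmc.Sobol(d=2, scramble=True, seed=1).random(n)
\end{verbatim}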
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t]
\centering
\includegraphics[width=15cm]{quasi_random.pdf}
\caption{Grid, random and quasi-random sampling with 256 samples in two dimensions when the likelihood function is approximately one-dimensional. When the number of important parameters increases these methods perform poorly, as shown in \cref{fig:rosenbrock}.}
\label{fig:quasi_random}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
However, random, quasi-random and grid scans are all extremely inefficient in cases with even a few parameters. The ``curse of dimensionality''~\cite{bellman1961adaptive} is one of the well-known problems: the number of samples required for a fixed resolution per dimension scales exponentially with dimension $D$: just~10 samples per dimension requires $10^D$ samples. This quickly becomes an impossible task in high-dimensional problems. Similarly, consider a $D$-dimensional model in which the interesting or best-fitting region occupies a fraction $\epsilon$ of each dimension. A random scan would find points in that region with an efficiency of $\epsilon^D$, i.e.\ random scans are exponentially inefficient. See Ref.~\citen{blum2020foundations} for further discussion and examples.
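% Spelling out the scaling quoted above with the same numbers used in the text: for $D = 10$ and $\epsilon = 0.1$,
\begin{equation*}
\epsilon^{D} = 0.1^{10} = 10^{-10},
\end{equation*}
% so a random scan needs of order $10^{10}$ samples to place a single point in the interesting region, the same order as the $10^{D}$ evaluations required by a grid with 10 points per dimension.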
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[t]
\centering
\includegraphics[width=12cm]{rosenbrock.pdf}
\caption{Points found inside the $95\%$ confidence region of the likelihood function, in a two-dimensional plane of the four-dimensional Rosenbrock problem. Points are shown from scans using differential evolution (blue), random sampling (orange) and grid sampling (yellow). For reference, we also show the actual $95\%$ confidence level contour of the likelihood function (red). Note that due to the projection of the four-dimensional space down to just two dimensions, two of the points shown from the grid sampler actually consist of three points each in the full four-dimensional space.}
\label{fig:rosenbrock}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
These issues can be addressed by using more sophisticated algorithms that, for example, preferentially explore areas of the parameter space where the likelihood is larger. Which algorithm is best suited for a given study depends on the goal of the analysis. For Bayesian inference, it is common to draw samples from the posterior distribution or to compute an integral over the model's parameter space, which is relevant for Bayesian model selection. See Ref.~\citen{2020arXiv200406425M} for a review of Bayesian computation. For frequentist inference, one might want to determine the global optimum and obtain samples from any regions in which the likelihood function was moderate. This can be more challenging than Bayesian computation. In particular, algorithms for Bayesian computation might not be appropriate optimizers. For example, Markov chain Monte Carlo methods draw from the posterior. In high dimensions, the bulk of the posterior probability (the typical set) often lies well away from the maximum likelihood. This is another manifestation of the curse of dimensionality.
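% An idealised illustration of the ``typical set'' point above: if the posterior were a $D$-dimensional standard normal, the radius of a posterior draw concentrates around
\begin{equation*}
|\bm{\theta}| \approx \sqrt{D}\,,
\end{equation*}
% with fluctuations of order one, so for $D = 100$ essentially all posterior samples lie roughly ten standard deviations from the posterior mode, and a posterior sampler rarely visits the neighbourhood of the maximum likelihood.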
In \cref{fig:rosenbrock} we illustrate one such algorithm that overcomes the deficiencies of random and grid sampling and is suitable for frequentist inference. Here we assume that the logarithm of the likelihood function is given by a four-dimensional Rosenbrock function~\cite{10.1093/comjnl/3.3.175}
\begin{equation}\label{eq:rosenbrock}
-2\ln\mathcal{L}(\bm{x}) = 2 \sum_{i=1}^3 f(x_i,\,x_{i+1}), \quad\text{where } f(a, b) = (1 - a)^2 + 100 \, (b - a^2)^2 \, .
\end{equation}
This is a challenging likelihood function with a global maximum at $x_i = 1$ ($i=1,2,3,4$).
We show samples found with $-2\ln\mathcal{L}(\bm{x}) \le 5.99$. This constraint corresponds to the two-dimensional $95\%$ confidence region, which in the $(x_1, x_2)$ plane has a banana-like shape~(red contour). We find the points using uniform random sampling from $-5$ to $5$ for each parameter (orange dots), using a grid scan (yellow dots), and using an implementation of the differential evolution algorithm~\cite{StornPrice95,2020SciPy-NMeth} operating inside the same limits (blue dots).
With only \num{2e5} likelihood calls, the differential evolution scan finds more than 11,500~points in the high-likelihood region,\footnote{We used a population size of 50 and stopped once the coefficient of variation of the fitness of the population dropped below 1\%. See the associated code for the complete settings.\cite{zenodo_record}}
whereas in \num{e7} tries the random scan finds only~7 high-likelihood samples, and the grid scan just~10. The random and grid scans would need over \num{e10} likelihood calls to obtain a similar number of high-likelihood points as obtained by differential evolution in just \num{2e5} evaluations. If likelihood calls are expensive and dominate the run-time, this could make differential evolution about \num{e5} times faster.
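% A minimal sketch of this kind of scan, assuming Python with NumPy and SciPy's differential evolution; the settings are illustrative and the exact configuration used for the figure is given in the Zenodo record cited in the footnote above. Evaluations falling inside the $-2\ln\mathcal{L} \le 5.99$ region are recorded while the optimizer minimises the four-dimensional Rosenbrock log-likelihood over $[-5, 5]^4$.
\begin{verbatim}
import numpy as np
from scipy.optimize import differential_evolution

def neg2lnL(x):
    # -2 ln L for the four-dimensional Rosenbrock function defined in the text
    return 2.0 * sum((1 - x[i])**2 + 100.0 * (x[i + 1] - x[i]**2)**2
                     for i in range(3))

inside = []  # points found with -2 ln L <= 5.99

def recorded(x):
    value = neg2lnL(x)
    if value <= 5.99:
        inside.append(np.array(x))
    return value

result = differential_evolution(recorded, bounds=[(-5.0, 5.0)] * 4,
                                popsize=50, tol=0.01, seed=1, polish=False)
print(result.x, result.fun, len(inside))
\end{verbatim}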
\recommendation{Use efficient algorithms to analyse parameter spaces, rather than grid or random scans. The choice of algorithm should depend on the goal. Good examples for Bayesian analyses are Markov chain Monte Carlo\cite{Hogg:2017akh, brooks2011handbook} and nested sampling.\cite{Skilling:2006gxv} Good examples for maximizing and exploring the likelihood are simulated annealing,\cite{Kirkpatrick671} differential evolution,\cite{StornPrice95} genetic algorithms\cite{1995ApJS..101..309C} and local optimizers such as Nelder-Mead.\cite{10.1093/comjnl/7.4.308} These are widely available in various public software packages.\cite{2020MNRAS.tmp..280S, Feroz:2008xx, Handley:2015fda, ForemanMackey:2012ig, Workgroup:2017htr, James:1975dr, hans_dembinski_2020_3951328, 2020SciPy-NMeth}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Problems with model testing}\label{sec:testing}
Overlaying confidence regions and performing random scans are straightforward
methods for ``hypothesis tests'' of physical theories with many parameters or testable predictions. For example, it is tempting to say that a model is excluded if a uniform random or grid scan finds no samples for which the experimental predictions lie inside every 95\%~confidence region. This procedure is, however, prone to misinterpretation: just as in \cref{sec:limit_intersection}, it severely under-estimates error rates, and, just as in \cref{sec:random}, it easily misses solutions.
Testing and comparing individual models in a statistically defensible manner is challenging and contentious. On the frequentist side, one can calculate a global \pvalue: the probability of obtaining data as extreme or more extreme than observed, if the model in question is true.
%
The \pvalue features in two distinct statistical approaches:\cite{doi:10.1198/0003130031856} first, the \pvalue may be interpreted as a measure of evidence against a model.\cite{fisher} See Refs.~\citen{Hubbard2008, doi:10.1080/00031305.1996.10474380,doi:10.1080/01621459.1987.10478397,Senn2001,Murtaugh2014} for discussion of this approach. Second, we may use the \pvalue to control the rate at which we would wrongly reject the model when it was true.\cite{10.2307/91247} If we reject when $p < \alpha$, we would wrongly reject at a rate $\alpha$. In particle physics, we adopt the $5\sigma$ threshold, corresponding to $\alpha \simeq \num{e-7}$.\cite{Lyons:2013yja}
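% A one-line check of the threshold quoted above, using the one-sided convention common in particle physics, with $\Phi$ the standard normal distribution function:
\begin{equation*}
\alpha = 1 - \Phi(5) \approx 2.9\times 10^{-7}\,.
\end{equation*}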
%
When we compute \pvalue{}s, we should take into account all the tests that we might have performed. In the context of searches for new particles, this is known as the look-elsewhere effect. Whilst calculations can be greatly simplified by using asymptotic formulae,\cite{Cowan:2010js,Gross:2010qma} bear in mind that they may not apply.\cite{Algeri:2019arh} Also, care must be taken to avoid common misinterpretations of the \pvalue.\cite{GOODMAN2008135,Greenland2016} For example, the \pvalue is not the probability of the null hypothesis, or the probability that the observed data were produced by chance alone, or the probability of the observed data given the null hypothesis, or the rate at which we would wrongly reject the null hypothesis when it was true.
On the Bayesian side, one can perform Bayesian model comparison~\cite{Jeffreys:1939xee,Robert:1995oiy} to find any change brought about by data to the relative plausibility of two different models. The factor that updates the relative plausibility of two models is called a Bayes factor. The Bayes factor is a ratio of integrals that may be challenging to compute in high-dimensional models. Just as in Bayesian parameter inference, this requires constructing priors for the parameters of the two models, permitting one to coherently incorporate prior information. In this setting, however, inferences may be strongly prior dependent, even in cases with large data sets and where seemingly uninformative priors are used.\cite{berger2001objective,Cousins:2008gf}
This sensitivity can be particularly problematic in high-dimensional models. Unfortunately, there is no unique notion of an uninformative prior representing a state of indifference about a parameter,\cite{Robert:1996lhi} though in special cases symmetry considerations may help.\cite{4082152}
Neither of these approaches is simple, either philosophically or computationally, and the task of model testing and comparison is in general full of subtleties. For example, they depend differently on the amount of data collected, which leads to somewhat paradoxical differences between them.\!\!\cite{10.2307/2333251,Jeffreys:1939xee,Cousins:2013hry} See Refs.~\citen{Wagenmakers2007,doi:10.1177/1745691620958012,Benjamin2018,doi:10.1080/00031305.2018.1527253,Lakens2018} for recent discussions in other scientific settings. It is worth noting that there are connections between model testing and parameter inference in the case of nested models, i.e.\ when a model can be viewed as a subset of the parameter space of some larger, ``full'' model. A hypothesis test of a nested model can be equivalent to checking whether it lies inside a confidence region in the full model.\cite{kendall2a,Cousins:2018tiz} Similarly, the Bayes factor between nested models can be found from parameter inference in the full model alone through the Savage-Dickey ratio.\cite{10.2307/2958475} There are, furthermore, approaches beyond Bayesian model comparison and frequentist model testing that we do not discuss here.
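% A sketch of the Savage-Dickey ratio mentioned above, assuming the nested model $M_0$ fixes $\theta = \theta_0$ inside the full model $M_1$ and that the prior for the remaining parameters of $M_1$ is independent of $\theta$:
\begin{equation*}
B_{01} = \frac{p(\theta = \theta_0 \,|\, \text{data}, M_1)}{p(\theta = \theta_0 \,|\, M_1)}\,,
\end{equation*}
% i.e.\ the ratio of the marginal posterior to the marginal prior of the full model evaluated at the nested point, so the Bayes factor follows from parameter inference in $M_1$ alone.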
\recommendation{In Bayesian analyses, carefully consider the choice of priors and their potential impact, particularly in high dimensions, and check the prior sensitivity. In frequentist analyses, consider the look-elsewhere effect, check the validity of any asymptotic formulae and take care to avoid common misinterpretations of the \pvalue. If investigation of such subtleties falls outside the scope of the analysis, refrain from making strong statements on the overall validity of the theory under study.}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Summary}\label{sec:conclude}
As first steps towards addressing the challenges posed by physical theories with many parameters and many testable predictions, we make three recommendations: \emph{i}) construct a composite likelihood that combines constraints from individual experiments, \emph{ii}) use adaptive sampling algorithms (ones that target the interesting regions) to efficiently sample the parameter spaces, and \emph{iii}) avoid strong statements on the viability of a theory unless a proper model test has been performed.
The second recommendation can be easily achieved through the use of any one of a multitude of publicly-available implementations of efficient sampling algorithms (for examples see \cref{sec:random}). For the first recommendation, composite likelihoods are often relatively simple to construct, and can be as straightforward as a product of Gaussians for multiple independent measurements. Even for cases where constructing the composite likelihood is more complicated, software implementations are often publicly available already.\cite{Athron:2017ard,hepfit,brinckmann2018montepython,Bhom:2020bfe,LikeDM,Collaboration:2242860,Aghanim:2019ame,IC79_SUSY,IC22Methods}
Given the central role of the likelihood function in analysing experimental data, it is in the interest of experimental collaborations to make their likelihood functions (or a reasonable approximation) publicly available to truly harness the full potential of their results when confronted with new theories. Even for large and complex datasets, e.g.~those from the Large Hadron Collider, there exist various recommended methods for achieving this goal.\cite{Cousins:451612,Vischia:2019uul,Abdallah:2020pec}
Our recommendations can be taken separately when only one of the challenges exists, or where addressing them all is impractical. However, when confronted with both high-dimensional models and a multitude of relevant experimental constraints, we recommend that they are used together to maximise the validity and efficiency of analyses.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\bibliography{stats,phys}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section*{Acknowledgements}
BCA has been partially supported by the UK Science and Technology Facilities Council (STFC) Consolidated HEP theory grants ST/P000681/1 and ST/T000694/1. PA is supported by Australian Research Council (ARC) Future Fellowship FT160100274, and PS by FT190100814. PA, CB, TEG and MW are supported by ARC Discovery Project DP180102209. CB and YZ are supported by ARC Centre of Excellence CE110001104 (Particle Physics at the Tera-scale) and WS and MW by CE200100008 (Dark Matter Particle Physics). ABe is supported by F.N.R.S. through the F.6001.19 convention. ABuc is supported by the Royal Society grant UF160548. JECM is supported by the Carl Trygger Foundation grant no. CTS 17:139. JdB acknowledges support by STFC under grant ST/P001246/1. JE was supported in part by the STFC (UK) and by the Estonian Research Council. BF was supported by EU MSCA-IF project 752162 -- DarkGAMBIT. MF and FK are supported by the Deutsche Forschungsgemeinschaft (DFG) through the Collaborative Research Center TRR 257 ``Particle Physics Phenomenology after the Higgs Discovery'' under Grant~396021762 -- TRR 257 and FK also under the Emmy Noether Grant No.\ KA 4662/1-1. AF is supported by an NSFC Research Fund for International Young Scientists grant 11950410509. SHe was supported in part by the MEINCOP (Spain) under contract PID2019-110058GB-C21 and in part by the Spanish Agencia Estatal de Investigaci\'on (AEI) through the grant IFT Centro de Excelencia Severo Ochoa SEV-2016-0597. SHoof is supported by the Alexander von Humboldt Foundation. SHoof and MTP are supported by the Federal Ministry of Education and Research of Germany (BMBF). KK is supported in part by the National Science Centre (Poland) under research Grant No. 2017/26/E/ST2/00470, LR under No. 2015/18/A/ST2/00748, and EMS under No. 2017/26/D/ST2/00490. LR and ST are supported by grant AstroCeNT: Particle Astrophysics Science and Technology Centre, carried out within the International Research Agendas programme of the Foundation for Polish Science financed by the European Union under the European Regional Development Fund. MLM acknowledges support from NWO (Netherlands). SM is supported by JSPS KAKENHI Grant Number 17K05429. The work of K.A.O.~was supported in part by DOE grant DE-SC0011842 at the University of Minnesota. JJR is supported by the Swedish Research Council, contract 638-2013-8993. KS was partially supported by the National Science Centre, Poland, under research grants 2017/26/E/ST2/00135 and the Beethoven grants DEC-2016/23/G/ST2/04301. AS is supported by MIUR research grant No. 2017X7X85K and INFN. WS is supported by KIAS Individual Grant (PG084201) at Korea Institute for Advanced Study. ST is partially supported by the Polish Ministry of Science and Higher Education through its scholarship for young and outstanding scientists (decision no. 1190/E-78/STYP/14/2019). RT was partially supported by STFC under grant number ST/T000791/1. The work of MV is supported by the NSF Grant No.\ PHY-1915005. ACV is supported by the Arthur B. McDonald Canadian Astroparticle Physics Research Institute. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science, and Economic Development, and by the Province of Ontario through MEDJCT. LW is supported by the National Natural Science Foundation of China (NNSFC) under grant Nos. 117050934, by Jiangsu Specially Appointed Professor Program.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section*{Author contributions}
The project was led by AF and in preliminary stages by BF and FK.
ABe, AF, SHoof, AK, PSc and WS contributed to creating the figures.
PA, CB, TB, ABe, ABuc, AF, TEG, SHoof, AK, JECM, MTP, AR, PSc, ACV and YZ contributed to writing.
WH and FK performed official internal reviews of the article.
All authors read, endorsed and discussed the content and recommendations.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section*{Code availability}
The figures were prepared with \texttt{matplotlib}.\cite{Hunter:2007} We have made all scripts publicly available at Zenodo.\cite{zenodo_record}
\end{document}
```
4. **Bibliographic Information:**
```bbl
\begin{thebibliography}{100}
\urlstyle{rm}
\expandafter\ifx\csname url\endcsname\relax
\def\url#1{\texttt{#1}}\fi
\expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL }\fi
\expandafter\ifx\csname doiprefix\endcsname\relax\def\doiprefix{DOI: }\fi
\providecommand{\bibinfo}[2]{#2}
\providecommand{\eprint}[2]{[\href{https://arxiv.org/abs/#2}{#1:#2}]}
\bibitem{Jeffreys:1939xee}
\bibinfo{author}{Jeffreys, H.}
\newblock \emph{\bibinfo{title}{{The Theory of Probability}}}.
\newblock Oxford Classic Texts in the Physical Sciences
(\bibinfo{publisher}{Oxford University Press}, \bibinfo{year}{1939}).
\bibitem{2008arXiv0808.2902R}
\bibinfo{author}{{Robert}, C.} \& \bibinfo{author}{{Casella}, G.}
\newblock \bibinfo{journal}{\bibinfo{title}{{A Short History of Markov Chain
Monte Carlo: Subjective Recollections from Incomplete Data}}}.
\newblock {\emph{\JournalTitle{Statistical Science}}}
\textbf{\bibinfo{volume}{26}}, \bibinfo{pages}{102 -- 115},
\doi{10.1214/10-STS351} (\bibinfo{year}{2011}).
\newblock \eprint{arXiv}{0808.2902}.
\bibitem{Chatrchyan:2012ufa}
\bibinfo{author}{Chatrchyan, S.} \emph{et~al.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Observation of a New Boson at a
Mass of 125 GeV with the CMS Experiment at the LHC}}}.
\newblock {\emph{\JournalTitle{Phys. Lett. B}}} \textbf{\bibinfo{volume}{716}},
\bibinfo{pages}{30--61}, \doi{10.1016/j.physletb.2012.08.021}
(\bibinfo{year}{2012}).
\newblock \eprint{arXiv}{1207.7235}.
\bibitem{Aad:2012tfa}
\bibinfo{author}{Aad, G.} \emph{et~al.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Observation of a new particle in
the search for the Standard Model Higgs boson with the ATLAS detector at the
LHC}}}.
\newblock {\emph{\JournalTitle{Phys. Lett. B}}} \textbf{\bibinfo{volume}{716}},
\bibinfo{pages}{1--29}, \doi{10.1016/j.physletb.2012.08.020}
(\bibinfo{year}{2012}).
\newblock \eprint{arXiv}{1207.7214}.
\bibitem{Baak:2014ora}
\bibinfo{author}{Baak, M.} \emph{et~al.}
\newblock \bibinfo{journal}{\bibinfo{title}{{The global electroweak fit at NNLO
and prospects for the LHC and ILC}}}.
\newblock {\emph{\JournalTitle{Eur. Phys. J. C}}}
\textbf{\bibinfo{volume}{74}}, \bibinfo{pages}{3046},
\doi{10.1140/epjc/s10052-014-3046-5} (\bibinfo{year}{2014}).
\newblock \eprint{arXiv}{1407.3792}.
\bibitem{zenodo_record}
\bibinfo{author}{{GAMBIT Collaboration}}.
\newblock \bibinfo{title}{Supplementary code: Simple and statistically sound
recommendations for analysing physical theories},
\doi{10.5281/zenodo.4322283}.
\newblock \bibinfo{note}{This DOI represents all versions, and will always
resolve to the latest one}.
\bibitem{giulio2003bayesian}
\bibinfo{author}{D'Agostini, G.}
\newblock \emph{\bibinfo{title}{Bayesian Reasoning In Data Analysis: A Critical
Introduction}} (\bibinfo{publisher}{World Scientific Publishing Company},
\bibinfo{year}{2003}).
\bibitem{gregory2005bayesian}
\bibinfo{author}{Gregory, P.}
\newblock \emph{\bibinfo{title}{Bayesian Logical Data Analysis for the Physical
Sciences}} (\bibinfo{publisher}{Cambridge University Press},
\bibinfo{year}{2005}).
\bibitem{sivia2006data}
\bibinfo{author}{Sivia, D.} \& \bibinfo{author}{Skilling, J.}
\newblock \emph{\bibinfo{title}{Data Analysis: A Bayesian Tutorial}}
(\bibinfo{publisher}{Oxford University Press}, \bibinfo{year}{2006}).
\bibitem{Trotta:2008qt}
\bibinfo{author}{Trotta, R.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Bayes in the sky: Bayesian
inference and model selection in cosmology}}}.
\newblock {\emph{\JournalTitle{Contemp. Phys.}}} \textbf{\bibinfo{volume}{49}},
\bibinfo{pages}{71--104}, \doi{10.1080/00107510802066753}
(\bibinfo{year}{2008}).
\newblock \eprint{arXiv}{0803.4089}.
\bibitem{von2014bayesian}
\bibinfo{author}{von~der Linden, W.}, \bibinfo{author}{Dose, V.} \&
\bibinfo{author}{von Toussaint, U.}
\newblock \emph{\bibinfo{title}{Bayesian Probability Theory: Applications in
the Physical Sciences}} (\bibinfo{publisher}{Cambridge University Press},
\bibinfo{year}{2014}).
\bibitem{bailer2017practical}
\bibinfo{author}{Bailer-Jones, C.}
\newblock \emph{\bibinfo{title}{Practical Bayesian Inference: A Primer for
Physical Scientists}} (\bibinfo{publisher}{Cambridge University Press},
\bibinfo{year}{2017}).
\bibitem{lyons1989statistics}
\bibinfo{author}{Lyons, L.}
\newblock \emph{\bibinfo{title}{Statistics for Nuclear and Particle
Physicists}} (\bibinfo{publisher}{Cambridge University Press},
\bibinfo{year}{1989}).
\bibitem{cowan1998statistical}
\bibinfo{author}{Cowan, G.}
\newblock \emph{\bibinfo{title}{Statistical Data Analysis}}
(\bibinfo{publisher}{Clarendon Press}, \bibinfo{year}{1998}).
\bibitem{james2006statistical}
\bibinfo{author}{James, F.}
\newblock \emph{\bibinfo{title}{Statistical Methods in Experimental Physics}}
(\bibinfo{publisher}{World Scientific}, \bibinfo{year}{2006}).
\bibitem{behnke2013data}
\bibinfo{author}{Behnke, O.}, \bibinfo{author}{Kr{\"o}ninger, K.},
\bibinfo{author}{Schott, G.} \& \bibinfo{author}{Sch{\"o}rner-Sadenius, T.}
\newblock \emph{\bibinfo{title}{Data Analysis in High Energy Physics: A
Practical Guide to Statistical Methods}} (\bibinfo{publisher}{Wiley},
\bibinfo{year}{2013}).
\bibitem{Cousins:2020ntk}
\bibinfo{author}{Cousins, R.~D.}
\newblock \bibinfo{journal}{\bibinfo{title}{{What is the likelihood function,
and how is it used in particle physics?}}}
\newblock {\emph{\JournalTitle{arXiv preprint}}} (\bibinfo{year}{2020}).
\newblock \bibinfo{note}{\href{https://ep-news.web.cern.ch/node/3213}{CERN EP
Newsletter}}, \eprint{arXiv}{2010.00356}.
\bibitem{berger1988likelihood}
\bibinfo{author}{Berger, J.} \& \bibinfo{author}{Wolpert, R.}
\newblock \emph{\bibinfo{title}{The Likelihood Principle}},
vol.~\bibinfo{volume}{6} of \emph{\bibinfo{series}{Lecture notes --
monographs series}} (\bibinfo{publisher}{Institute of Mathematical
Statistics}, \bibinfo{year}{1988}), \bibinfo{edition}{second} edn.
\bibitem{Brehmer:2020cvb}
\bibinfo{author}{Brehmer, J.} \& \bibinfo{author}{Cranmer, K.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Simulation-based inference methods
for particle physics}}}.
\newblock {\emph{\JournalTitle{arXiv preprint}}} (\bibinfo{year}{2020}).
\newblock \eprint{arXiv}{2010.06439}.
\bibitem{Undagoitia:2015gya}
\bibinfo{author}{Marrod\'an~Undagoitia, T.} \& \bibinfo{author}{Rauch, L.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Dark matter direct-detection
experiments}}}.
\newblock {\emph{\JournalTitle{J. Phys. G}}} \textbf{\bibinfo{volume}{43}},
\bibinfo{pages}{013001}, \doi{10.1088/0954-3899/43/1/013001}
(\bibinfo{year}{2016}).
\newblock \eprint{arXiv}{1509.08767}.
\bibitem{Bridges:2010de}
\bibinfo{author}{Bridges, M.} \emph{et~al.}
\newblock \bibinfo{journal}{\bibinfo{title}{{A Coverage Study of the CMSSM
Based on ATLAS Sensitivity Using Fast Neural Networks Techniques}}}.
\newblock {\emph{\JournalTitle{JHEP}}} \textbf{\bibinfo{volume}{03}},
\bibinfo{pages}{012}, \doi{10.1007/JHEP03(2011)012} (\bibinfo{year}{2011}).
\newblock \eprint{arXiv}{1011.4306}.
\bibitem{Akrami11coverage}
\bibinfo{author}{{Akrami}, Y.}, \bibinfo{author}{{Savage}, C.},
\bibinfo{author}{{Scott}, P.}, \bibinfo{author}{{Conrad}, J.} \&
\bibinfo{author}{{Edsj{\"o}}, J.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Statistical coverage for
supersymmetric parameter estimation: a case study with direct detection of
dark matter}}}.
\newblock {\emph{\JournalTitle{JCAP}}} \textbf{\bibinfo{volume}{7}},
\bibinfo{pages}{2}, \doi{10.1088/1475-7516/2011/07/002}
(\bibinfo{year}{2011}).
\newblock \eprint{arXiv}{1011.4297}.
\bibitem{Strege12}
\bibinfo{author}{{Strege}, C.}, \bibinfo{author}{{Trotta}, R.},
\bibinfo{author}{{Bertone}, G.}, \bibinfo{author}{{Peter}, A.~H.~G.} \&
\bibinfo{author}{{Scott}, P.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Fundamental statistical
limitations of future dark matter direct detection experiments}}}.
\newblock {\emph{\JournalTitle{Phys. Rev. D}}} \textbf{\bibinfo{volume}{86}},
\bibinfo{pages}{023507}, \doi{10.1103/PhysRevD.86.023507}
(\bibinfo{year}{2012}).
\newblock \eprint{arXiv}{1201.3631}.
\bibitem{10.2307/91337}
\bibinfo{author}{Neyman, J.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Outline of a Theory of Statistical
Estimation Based on the Classical Theory of Probability}}}.
\newblock {\emph{\JournalTitle{Philos. Trans. Roy. Soc. London Ser. A}}}
\textbf{\bibinfo{volume}{236}}, \bibinfo{pages}{333--380},
\doi{10.1098/rsta.1937.0005} (\bibinfo{year}{1937}).
\bibitem{2010NIMPA.612..388C}
\bibinfo{author}{{Cousins}, R.~D.}, \bibinfo{author}{{Hymes}, K.~E.} \&
\bibinfo{author}{{Tucker}, J.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Frequentist evaluation of
intervals estimated for a binomial parameter and for the ratio of Poisson
means}}}.
\newblock {\emph{\JournalTitle{Nuclear Instruments and Methods in Physics
Research A}}} \textbf{\bibinfo{volume}{612}}, \bibinfo{pages}{388--398},
\doi{10.1016/j.nima.2009.10.156} (\bibinfo{year}{2010}).
\newblock \eprint{arXiv}{0905.3831}.
\bibitem{Rolke:2004mj}
\bibinfo{author}{Rolke, W.~A.}, \bibinfo{author}{Lopez, A.~M.} \&
\bibinfo{author}{Conrad, J.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Limits and confidence intervals in
the presence of nuisance parameters}}}.
\newblock {\emph{\JournalTitle{Nucl. Instrum. Meth. A}}}
\textbf{\bibinfo{volume}{551}}, \bibinfo{pages}{493--503},
\doi{10.1016/j.nima.2005.05.068} (\bibinfo{year}{2005}).
\newblock \eprint{arXiv}{physics/0403059}.
\bibitem{Punzi:2005yq}
\bibinfo{author}{Punzi, G.}
\newblock \bibinfo{title}{{Ordering algorithms and confidence intervals in the
presence of nuisance parameters}}.
\newblock In \emph{\bibinfo{booktitle}{{Statistical Problems in Particle
Physics, Astrophysics and Cosmology}}}, \doi{10.1142/9781860948985_0019}
(\bibinfo{year}{2005}).
\newblock \eprint{arXiv}{physics/0511202}.
\bibitem{Zyla:2020zbs_conf_intervals}
\bibinfo{author}{Zyla, P.} \emph{et~al.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Review of Particle Physics}}}.
\newblock {\emph{\JournalTitle{PTEP}}} \textbf{\bibinfo{volume}{2020}},
\bibinfo{pages}{083C01, chap.~40.4.2}, \doi{10.1093/ptep/ptaa104}
(\bibinfo{year}{2020}).
\bibitem{wilks1938}
\bibinfo{author}{Wilks, S.~S.}
\newblock \bibinfo{journal}{\bibinfo{title}{The large-sample distribution of
the likelihood ratio for testing composite hypotheses}}.
\newblock {\emph{\JournalTitle{Ann. Math. Statist.}}}
\textbf{\bibinfo{volume}{9}}, \bibinfo{pages}{60--62},
\doi{10.1214/aoms/1177732360} (\bibinfo{year}{1938}).
\bibitem{Algeri:2019arh}
\bibinfo{author}{Algeri, S.}, \bibinfo{author}{Aalbers, J.},
\bibinfo{author}{Dundas~Morå, K.} \& \bibinfo{author}{Conrad, J.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Searching for new physics with
profile likelihoods: Wilks and beyond}}}.
\newblock {\emph{\JournalTitle{{Nat. Rev. Phys}}}}
\textbf{\bibinfo{volume}{2}}, \bibinfo{pages}{245–252},
\doi{10.1038/s42254-020-0169-5} (\bibinfo{year}{2020}).
\newblock \eprint{arXiv}{1911.10237}.
\bibitem{Read:2002hq}
\bibinfo{author}{Read, A.~L.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Presentation of search results:
The CL(s) technique}}}.
\newblock {\emph{\JournalTitle{J. Phys. G}}} \textbf{\bibinfo{volume}{28}},
\bibinfo{pages}{2693--2704}, \doi{10.1088/0954-3899/28/10/313}
(\bibinfo{year}{2002}).
\bibitem{10.2307/2347266}
\bibinfo{author}{Rubin, D.~B.} \& \bibinfo{author}{Schenker, N.}
\newblock \bibinfo{journal}{\bibinfo{title}{Efficiently simulating the coverage
properties of interval estimates}}.
\newblock {\emph{\JournalTitle{Journal of the Royal Statistical Society. Series
C (Applied Statistics)}}} \textbf{\bibinfo{volume}{35}},
\bibinfo{pages}{159--167}, \doi{10.2307/2347266} (\bibinfo{year}{1986}).
\bibitem{Morey2016}
\bibinfo{author}{Morey, R.~D.}, \bibinfo{author}{Hoekstra, R.},
\bibinfo{author}{Rouder, J.~N.}, \bibinfo{author}{Lee, M.~D.} \&
\bibinfo{author}{Wagenmakers, E.-J.}
\newblock \bibinfo{journal}{\bibinfo{title}{The fallacy of placing confidence
in confidence intervals}}.
\newblock {\emph{\JournalTitle{Psychonomic Bulletin {\&} Review}}}
\textbf{\bibinfo{volume}{23}}, \bibinfo{pages}{103--123},
\doi{10.3758/s13423-015-0947-8} (\bibinfo{year}{2016}).
\bibitem{Feldman:1997qc}
\bibinfo{author}{Feldman, G.~J.} \& \bibinfo{author}{Cousins, R.~D.}
\newblock \bibinfo{journal}{\bibinfo{title}{{A Unified approach to the
classical statistical analysis of small signals}}}.
\newblock {\emph{\JournalTitle{Phys. Rev. D}}} \textbf{\bibinfo{volume}{57}},
\bibinfo{pages}{3873--3889}, \doi{10.1103/PhysRevD.57.3873}
(\bibinfo{year}{1998}).
\newblock \eprint{arXiv}{physics/9711021}.
\bibitem{Junk:2020azi}
\bibinfo{author}{Junk, T.~R.} \& \bibinfo{author}{Lyons, L.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Reproducibility and Replication of
Experimental Particle Physics Results}}}.
\newblock {\emph{\JournalTitle{Harvard Data Science Review}}}
\textbf{\bibinfo{volume}{2}}, \doi{10.1162/99608f92.250f995b}
(\bibinfo{year}{2020}).
\newblock \eprint{arXiv}{2009.06864}.
\bibitem{Amhis:2019ckw}
\bibinfo{author}{Amhis, Y.~S.} \emph{et~al.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Averages of $b$-hadron,
$c$-hadron, and $\tau $-lepton properties as of 2018}}}.
\newblock {\emph{\JournalTitle{Eur. Phys. J. C}}}
\textbf{\bibinfo{volume}{81}}, \bibinfo{pages}{226},
\doi{10.1140/epjc/s10052-020-8156-7} (\bibinfo{year}{2021}).
\newblock \eprint{arXiv}{1909.12524}.
\bibitem{Zyla:2020zbs_weighted_mean}
\bibinfo{author}{Zyla, P.} \emph{et~al.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Review of Particle Physics}}}.
\newblock {\emph{\JournalTitle{PTEP}}} \textbf{\bibinfo{volume}{2020}},
\bibinfo{pages}{083C01, chap.~40.2.1}, \doi{10.1093/ptep/ptaa104}
(\bibinfo{year}{2020}).
\bibitem{Ciuchini:2000de}
\bibinfo{author}{Ciuchini, M.} \emph{et~al.}
\newblock \bibinfo{journal}{\bibinfo{title}{{2000 CKM triangle analysis: A
Critical review with updated experimental inputs and theoretical
parameters}}}.
\newblock {\emph{\JournalTitle{JHEP}}} \textbf{\bibinfo{volume}{07}},
\bibinfo{pages}{013}, \doi{10.1088/1126-6708/2001/07/013}
(\bibinfo{year}{2001}).
\newblock \eprint{arXiv}{hep-ph/0012308}.
\bibitem{deAustri:2006jwj}
\bibinfo{author}{Ruiz~de Austri, R.}, \bibinfo{author}{Trotta, R.} \&
\bibinfo{author}{Roszkowski, L.}
\newblock \bibinfo{journal}{\bibinfo{title}{{A Markov chain Monte Carlo
analysis of the CMSSM}}}.
\newblock {\emph{\JournalTitle{JHEP}}} \textbf{\bibinfo{volume}{05}},
\bibinfo{pages}{002}, \doi{10.1088/1126-6708/2006/05/002}
(\bibinfo{year}{2006}).
\newblock \eprint{arXiv}{hep-ph/0602028}.
\bibitem{Allanach:2007qk}
\bibinfo{author}{Allanach, B.~C.}, \bibinfo{author}{Cranmer, K.},
\bibinfo{author}{Lester, C.~G.} \& \bibinfo{author}{Weber, A.~M.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Natural priors, CMSSM fits and LHC
weather forecasts}}}.
\newblock {\emph{\JournalTitle{JHEP}}} \textbf{\bibinfo{volume}{08}},
\bibinfo{pages}{023}, \doi{10.1088/1126-6708/2007/08/023}
(\bibinfo{year}{2007}).
\newblock \eprint{arXiv}{0705.0487}.
\bibitem{Buchmueller:2011ab}
\bibinfo{author}{Buchmueller, O.} \emph{et~al.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Higgs and Supersymmetry}}}.
\newblock {\emph{\JournalTitle{Eur. Phys. J. C}}}
\textbf{\bibinfo{volume}{72}}, \bibinfo{pages}{2020},
\doi{10.1140/epjc/s10052-012-2020-3} (\bibinfo{year}{2012}).
\newblock \eprint{arXiv}{1112.3564}.
\bibitem{Bechtle:2012zk}
\bibinfo{author}{Bechtle, P.} \emph{et~al.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Constrained Supersymmetry after
two years of LHC data: a global view with Fittino}}}.
\newblock {\emph{\JournalTitle{JHEP}}} \textbf{\bibinfo{volume}{06}},
\bibinfo{pages}{098}, \doi{10.1007/JHEP06(2012)098} (\bibinfo{year}{2012}).
\newblock \eprint{arXiv}{1204.4199}.
\bibitem{Fowlie:2012im}
\bibinfo{author}{Fowlie, A.} \emph{et~al.}
\newblock \bibinfo{journal}{\bibinfo{title}{{The CMSSM Favoring New
Territories: The Impact of New LHC Limits and a 125 GeV Higgs}}}.
\newblock {\emph{\JournalTitle{Phys. Rev. D}}} \textbf{\bibinfo{volume}{86}},
\bibinfo{pages}{075010}, \doi{10.1103/PhysRevD.86.075010}
(\bibinfo{year}{2012}).
\newblock \eprint{arXiv}{1206.0264}.
\bibitem{Athron:2017qdc}
\bibinfo{author}{Athron, P.} \emph{et~al.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Global fits of GUT-scale SUSY
models with GAMBIT}}}.
\newblock {\emph{\JournalTitle{Eur. Phys. J. C}}}
\textbf{\bibinfo{volume}{77}}, \bibinfo{pages}{824},
\doi{10.1140/epjc/s10052-017-5167-0} (\bibinfo{year}{2017}).
\newblock \eprint{arXiv}{1705.07935}.
\bibitem{Balazs:2021uhg}
\bibinfo{author}{Bal\'azs, C.} \emph{et~al.}
\newblock \bibinfo{journal}{\bibinfo{title}{{A comparison of optimisation
algorithms for high-dimensional particle and astrophysics applications}}}.
\newblock {\emph{\JournalTitle{JHEP}}} \textbf{\bibinfo{volume}{05}},
\bibinfo{pages}{108}, \doi{10.1007/JHEP05(2021)108} (\bibinfo{year}{2021}).
\newblock \eprint{arXiv}{2101.04525}.
\bibitem{Balazs:2017moi}
\bibinfo{author}{Bal\'azs, C.} \emph{et~al.}
\newblock \bibinfo{journal}{\bibinfo{title}{{ColliderBit: a GAMBIT module for
the calculation of high-energy collider observables and likelihoods}}}.
\newblock {\emph{\JournalTitle{Eur. Phys. J. C}}}
\textbf{\bibinfo{volume}{77}}, \bibinfo{pages}{795},
\doi{10.1140/epjc/s10052-017-5285-8} (\bibinfo{year}{2017}).
\newblock \eprint{arXiv}{1705.07919}.
\bibitem{JMLR:v13:bergstra12a}
\bibinfo{author}{Bergstra, J.} \& \bibinfo{author}{Bengio, Y.}
\newblock
\bibinfo{journal}{\bibinfo{title}{{\href{http://jmlr.org/papers/v13/bergstra12a.html}{Random
Search for Hyper-Parameter Optimization}}}}.
\newblock {\emph{\JournalTitle{Journal of Machine Learning Research}}}
\textbf{\bibinfo{volume}{13}}, \bibinfo{pages}{281--305}
(\bibinfo{year}{2012}).
\bibitem{bellman1961adaptive}
\bibinfo{author}{Bellman, R.}
\newblock \emph{\bibinfo{title}{Adaptive Control Processes: A Guided Tour}}.
\newblock Princeton Legacy Library (\bibinfo{publisher}{Princeton University
Press}, \bibinfo{year}{1961}).
\bibitem{blum2020foundations}
\bibinfo{author}{Blum, A.}, \bibinfo{author}{Hopcroft, J.} \&
\bibinfo{author}{Kannan, R.}
\newblock \emph{\bibinfo{title}{Foundations of data science}}
(\bibinfo{publisher}{Cambridge University Press}, \bibinfo{year}{2020}).
\newblock \bibinfo{note}{Chap.~2. High-Dimensional Space}.
\bibitem{2020arXiv200406425M}
\bibinfo{author}{{Martin}, G.~M.}, \bibinfo{author}{{Frazier}, D.~T.} \&
\bibinfo{author}{{Robert}, C.~P.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Computing Bayes: Bayesian
Computation from 1763 to the 21st Century}}}.
\newblock {\emph{\JournalTitle{arXiv e-prints}}} (\bibinfo{year}{2020}).
\newblock \eprint{arXiv}{2004.06425}.
\bibitem{10.1093/comjnl/3.3.175}
\bibinfo{author}{Rosenbrock, H.~H.}
\newblock \bibinfo{journal}{\bibinfo{title}{{An Automatic Method for Finding
the Greatest or Least Value of a Function}}}.
\newblock {\emph{\JournalTitle{The Computer Journal}}}
\textbf{\bibinfo{volume}{3}}, \bibinfo{pages}{175--184},
\doi{10.1093/comjnl/3.3.175} (\bibinfo{year}{1960}).
\bibitem{StornPrice95}
\bibinfo{author}{Storn, R.} \& \bibinfo{author}{Price, K.}
\newblock \bibinfo{journal}{\bibinfo{title}{Differential evolution: A simple
and efficient heuristic for global optimization over continuous spaces}}.
\newblock {\emph{\JournalTitle{Journal of Global Optimization}}}
\textbf{\bibinfo{volume}{11}}, \bibinfo{pages}{341--359},
\doi{10.1023/A:1008202821328} (\bibinfo{year}{1997}).
\bibitem{2020SciPy-NMeth}
\bibinfo{author}{{Virtanen}, P.} \emph{et~al.}
\newblock \bibinfo{journal}{\bibinfo{title}{{SciPy 1.0: Fundamental Algorithms
for Scientific Computing in Python}}}.
\newblock {\emph{\JournalTitle{Nature Methods}}} \textbf{\bibinfo{volume}{17}},
\bibinfo{pages}{261--272}, \doi{10.1038/s41592-019-0686-2}
(\bibinfo{year}{2020}).
\bibitem{Hogg:2017akh}
\bibinfo{author}{Hogg, D.~W.} \& \bibinfo{author}{Foreman-Mackey, D.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Data analysis recipes: Using
Markov Chain Monte Carlo}}}.
\newblock {\emph{\JournalTitle{Astrophys. J. Suppl.}}}
\textbf{\bibinfo{volume}{236}}, \bibinfo{pages}{11},
\doi{10.3847/1538-4365/aab76e} (\bibinfo{year}{2018}).
\newblock \eprint{arXiv}{1710.06068}.
\bibitem{brooks2011handbook}
\bibinfo{author}{Brooks, S.}, \bibinfo{author}{Gelman, A.},
\bibinfo{author}{Jones, G.} \& \bibinfo{author}{Meng, X.}
\newblock \emph{\bibinfo{title}{Handbook of Markov Chain Monte Carlo}}.
\newblock Chapman \& Hall/CRC Handbooks of Modern Statistical Methods
(\bibinfo{publisher}{CRC Press}, \bibinfo{year}{2011}).
\bibitem{Skilling:2006gxv}
\bibinfo{author}{Skilling, J.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Nested sampling for general
Bayesian computation}}}.
\newblock {\emph{\JournalTitle{Bayesian Analysis}}}
\textbf{\bibinfo{volume}{1}}, \bibinfo{pages}{833--859},
\doi{10.1214/06-BA127} (\bibinfo{year}{2006}).
\bibitem{Kirkpatrick671}
\bibinfo{author}{Kirkpatrick, S.}, \bibinfo{author}{Gelatt, C.~D.} \&
\bibinfo{author}{Vecchi, M.~P.}
\newblock \bibinfo{journal}{\bibinfo{title}{Optimization by simulated
annealing}}.
\newblock {\emph{\JournalTitle{Science}}} \textbf{\bibinfo{volume}{220}},
\bibinfo{pages}{671--680}, \doi{10.1126/science.220.4598.671}
(\bibinfo{year}{1983}).
\bibitem{1995ApJS..101..309C}
\bibinfo{author}{{Charbonneau}, P.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Genetic Algorithms in Astronomy
and Astrophysics}}}.
\newblock {\emph{\JournalTitle{ApJS}}} \textbf{\bibinfo{volume}{101}},
\bibinfo{pages}{309}, \doi{10.1086/192242} (\bibinfo{year}{1995}).
\bibitem{10.1093/comjnl/7.4.308}
\bibinfo{author}{Nelder, J.~A.} \& \bibinfo{author}{Mead, R.}
\newblock \bibinfo{journal}{\bibinfo{title}{{A Simplex Method for Function
Minimization}}}.
\newblock {\emph{\JournalTitle{The Computer Journal}}}
\textbf{\bibinfo{volume}{7}}, \bibinfo{pages}{308--313},
\doi{10.1093/comjnl/7.4.308} (\bibinfo{year}{1965}).
\bibitem{2020MNRAS.tmp..280S}
\bibinfo{author}{{Speagle}, J.~S.}
\newblock \bibinfo{journal}{\bibinfo{title}{{dynesty: A Dynamic Nested Sampling
Package for Estimating Bayesian Posteriors and Evidences}}}.
\newblock {\emph{\JournalTitle{Mon. Not. Roy. Astron. Soc.}}}
\doi{10.1093/mnras/staa278} (\bibinfo{year}{2020}).
\newblock \eprint{arXiv}{1904.02180}.
\bibitem{Feroz:2008xx}
\bibinfo{author}{Feroz, F.}, \bibinfo{author}{Hobson, M.~P.} \&
\bibinfo{author}{Bridges, M.}
\newblock \bibinfo{journal}{\bibinfo{title}{{MultiNest: an efficient and robust
Bayesian inference tool for cosmology and particle physics}}}.
\newblock {\emph{\JournalTitle{Mon. Not. Roy. Astron. Soc.}}}
\textbf{\bibinfo{volume}{398}}, \bibinfo{pages}{1601--1614},
\doi{10.1111/j.1365-2966.2009.14548.x} (\bibinfo{year}{2009}).
\newblock \eprint{arXiv}{0809.3437}.
\bibitem{Handley:2015fda}
\bibinfo{author}{Handley, W.~J.}, \bibinfo{author}{Hobson, M.~P.} \&
\bibinfo{author}{Lasenby, A.~N.}
\newblock \bibinfo{journal}{\bibinfo{title}{{PolyChord: nested sampling for
cosmology}}}.
\newblock {\emph{\JournalTitle{Mon. Not. Roy. Astron. Soc.}}}
\textbf{\bibinfo{volume}{450}}, \bibinfo{pages}{L61--L65},
\doi{10.1093/mnrasl/slv047} (\bibinfo{year}{2015}).
\newblock \eprint{arXiv}{1502.01856}.
\bibitem{ForemanMackey:2012ig}
\bibinfo{author}{Foreman-Mackey, D.}, \bibinfo{author}{Hogg, D.~W.},
\bibinfo{author}{Lang, D.} \& \bibinfo{author}{Goodman, J.}
\newblock \bibinfo{journal}{\bibinfo{title}{{emcee: The MCMC Hammer}}}.
\newblock {\emph{\JournalTitle{Publ. Astron. Soc. Pac.}}}
\textbf{\bibinfo{volume}{125}}, \bibinfo{pages}{306--312},
\doi{10.1086/670067} (\bibinfo{year}{2013}).
\newblock \eprint{arXiv}{1202.3665}.
\bibitem{Workgroup:2017htr}
\bibinfo{author}{Martinez, G.~D.} \emph{et~al.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Comparison of statistical sampling
methods with ScannerBit, the GAMBIT scanning module}}}.
\newblock {\emph{\JournalTitle{Eur. Phys. J. C}}}
\textbf{\bibinfo{volume}{77}}, \bibinfo{pages}{761},
\doi{10.1140/epjc/s10052-017-5274-y} (\bibinfo{year}{2017}).
\newblock \eprint{arXiv}{1705.07959}.
\bibitem{James:1975dr}
\bibinfo{author}{James, F.} \& \bibinfo{author}{Roos, M.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Minuit: A System for Function
Minimization and Analysis of the Parameter Errors and Correlations}}}.
\newblock {\emph{\JournalTitle{Comput. Phys. Commun.}}}
\textbf{\bibinfo{volume}{10}}, \bibinfo{pages}{343--367},
\doi{10.1016/0010-4655(75)90039-9} (\bibinfo{year}{1975}).
\bibitem{hans_dembinski_2020_3951328}
\bibinfo{author}{Dembinski, H.} \emph{et~al.}
\newblock \bibinfo{title}{scikit-hep/iminuit: v1.4.9},
\doi{10.5281/zenodo.3951328} (\bibinfo{year}{2020}).
\bibitem{doi:10.1198/0003130031856}
\bibinfo{author}{Hubbard, R.} \& \bibinfo{author}{Bayarri, M.~J.}
\newblock \bibinfo{journal}{\bibinfo{title}{Confusion over measures of evidence
($p$'s) versus errors ($\alpha$'s) in classical statistical testing}}.
\newblock {\emph{\JournalTitle{Am. Stat.}}} \textbf{\bibinfo{volume}{57}},
\bibinfo{pages}{171--178}, \doi{10.1198/0003130031856}
(\bibinfo{year}{2003}).
\bibitem{fisher}
\bibinfo{author}{Fisher, R.~A.}
\newblock \emph{\bibinfo{title}{Statistical Methods for Research Workers}}
(\bibinfo{publisher}{Oliver and Boyd}, \bibinfo{year}{1925}).
\bibitem{Hubbard2008}
\bibinfo{author}{Hubbard, R.} \& \bibinfo{author}{Lindsay, R.~M.}
\newblock \bibinfo{journal}{\bibinfo{title}{Why p values are not a useful
measure of evidence in statistical significance testing}}.
\newblock {\emph{\JournalTitle{Theory {\&} Psychology}}}
\textbf{\bibinfo{volume}{18}}, \bibinfo{pages}{69--88},
\doi{10.1177/0959354307086923} (\bibinfo{year}{2008}).
\bibitem{doi:10.1080/00031305.1996.10474380}
\bibinfo{author}{Schervish, M.~J.}
\newblock \bibinfo{journal}{\bibinfo{title}{P values: What they are and what
they are not}}.
\newblock {\emph{\JournalTitle{Am. Stat.}}} \textbf{\bibinfo{volume}{50}},
\bibinfo{pages}{203--206}, \doi{10.1080/00031305.1996.10474380}
(\bibinfo{year}{1996}).
\bibitem{doi:10.1080/01621459.1987.10478397}
\bibinfo{author}{Berger, J.~O.} \& \bibinfo{author}{Sellke, T.}
\newblock \bibinfo{journal}{\bibinfo{title}{Testing a point null hypothesis:
The irreconcilability of p values and evidence}}.
\newblock {\emph{\JournalTitle{J. Am. Stat. Assoc.}}}
\textbf{\bibinfo{volume}{82}}, \bibinfo{pages}{112--122},
\doi{10.1080/01621459.1987.10478397} (\bibinfo{year}{1987}).
\bibitem{Senn2001}
\bibinfo{author}{Senn, S.}
\newblock \bibinfo{journal}{\bibinfo{title}{Two cheers for p-values?}}
\newblock {\emph{\JournalTitle{Journal of Epidemiology and Biostatistics}}}
\textbf{\bibinfo{volume}{6}}, \bibinfo{pages}{193--204},
\doi{10.1080/135952201753172953} (\bibinfo{year}{2001}).
\bibitem{Murtaugh2014}
\bibinfo{author}{Murtaugh, P.~A.}
\newblock \bibinfo{journal}{\bibinfo{title}{In defense {of P values}}}.
\newblock {\emph{\JournalTitle{Ecology}}} \textbf{\bibinfo{volume}{95}},
\bibinfo{pages}{611--617}, \doi{10.1890/13-0590.1} (\bibinfo{year}{2014}).
\bibitem{10.2307/91247}
\bibinfo{author}{Neyman, J.} \& \bibinfo{author}{Pearson, E.~S.}
\newblock \bibinfo{journal}{\bibinfo{title}{On the problem of the most
efficient tests of statistical hypotheses}}.
\newblock {\emph{\JournalTitle{Philos. Trans. Roy. Soc. London Ser. A}}}
\textbf{\bibinfo{volume}{231}}, \bibinfo{pages}{289--337},
\doi{10.1098/rsta.1933.0009} (\bibinfo{year}{1933}).
\bibitem{Lyons:2013yja}
\bibinfo{author}{Lyons, L.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Discovering the Significance of 5
sigma}}}.
\newblock {\emph{\JournalTitle{arXiv preprint}}} (\bibinfo{year}{2013}).
\newblock \eprint{arXiv}{1310.1284}.
\bibitem{Cowan:2010js}
\bibinfo{author}{Cowan, G.}, \bibinfo{author}{Cranmer, K.},
\bibinfo{author}{Gross, E.} \& \bibinfo{author}{Vitells, O.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Asymptotic formulae for
likelihood-based tests of new physics}}}.
\newblock {\emph{\JournalTitle{Eur. Phys. J. C}}}
\textbf{\bibinfo{volume}{71}}, \bibinfo{pages}{1554},
\doi{10.1140/epjc/s10052-011-1554-0} (\bibinfo{year}{2011}).
\newblock \bibinfo{note}{[Erratum: \textit{Eur. Phys. J. C} \textbf{73}, 2501
(2013), \doi{10.1140/epjc/s10052-011-1554-0}]}, \eprint{arXiv}{1007.1727}.
\bibitem{Gross:2010qma}
\bibinfo{author}{Gross, E.} \& \bibinfo{author}{Vitells, O.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Trial factors for the look
elsewhere effect in high energy physics}}}.
\newblock {\emph{\JournalTitle{Eur. Phys. J. C}}}
\textbf{\bibinfo{volume}{70}}, \bibinfo{pages}{525--530},
\doi{10.1140/epjc/s10052-010-1470-8} (\bibinfo{year}{2010}).
\newblock \eprint{arXiv}{1005.1891}.
\bibitem{GOODMAN2008135}
\bibinfo{author}{Goodman, S.}
\newblock \bibinfo{journal}{\bibinfo{title}{A dirty dozen: Twelve p-value
misconceptions}}.
\newblock {\emph{\JournalTitle{Seminars in Hematology}}}
\textbf{\bibinfo{volume}{45}}, \bibinfo{pages}{135--140},
\doi{10.1053/j.seminhematol.2008.04.003} (\bibinfo{year}{2008}).
\bibitem{Greenland2016}
\bibinfo{author}{Greenland, S.} \emph{et~al.}
\newblock \bibinfo{journal}{\bibinfo{title}{Statistical tests, $p$ values,
confidence intervals, and power: a guide to misinterpretations}}.
\newblock {\emph{\JournalTitle{European Journal of Epidemiology}}}
\textbf{\bibinfo{volume}{31}}, \bibinfo{pages}{337--350},
\doi{10.1007/s10654-016-0149-3} (\bibinfo{year}{2016}).
\bibitem{Robert:1995oiy}
\bibinfo{author}{Kass, R.~E.} \& \bibinfo{author}{Raftery, A.~E.}
\newblock \bibinfo{journal}{\bibinfo{title}{Bayes factors}}.
\newblock {\emph{\JournalTitle{Journal of the American Statistical
Association}}} \textbf{\bibinfo{volume}{90}}, \bibinfo{pages}{773--795},
\doi{10.1080/01621459.1995.10476572} (\bibinfo{year}{1995}).
\bibitem{berger2001objective}
\bibinfo{author}{Berger, J.~O.} \& \bibinfo{author}{Pericchi, L.~R.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Objective Bayesian methods for
model selection: Introduction and comparison}}}.
\newblock {\emph{\JournalTitle{{IMS Lecture Notes -- Monograph Series}}}}
\textbf{\bibinfo{volume}{38}}, \bibinfo{pages}{135--207},
\doi{10.1214/lnms/1215540968} (\bibinfo{year}{2001}).
\bibitem{Cousins:2008gf}
\bibinfo{author}{Cousins, R.~D.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Comment on `Bayesian Analysis of
Pentaquark Signals from CLAS Data', with Response to the Reply by Ireland and
Protopopescu}}}.
\newblock {\emph{\JournalTitle{Phys. Rev. Lett.}}}
\textbf{\bibinfo{volume}{101}}, \bibinfo{pages}{029101},
\doi{10.1103/PhysRevLett.101.029101} (\bibinfo{year}{2008}).
\newblock \eprint{arXiv}{0807.1330}.
\bibitem{Robert:1996lhi}
\bibinfo{author}{Kass, R.~E.} \& \bibinfo{author}{Wasserman, L.}
\newblock \bibinfo{journal}{\bibinfo{title}{{The Selection of Prior
Distributions by Formal Rules}}}.
\newblock {\emph{\JournalTitle{Journal of the American Statistical
Association}}} \textbf{\bibinfo{volume}{91}}, \bibinfo{pages}{1343--1370},
\doi{10.1080/01621459.1996.10477003} (\bibinfo{year}{1996}).
\bibitem{4082152}
\bibinfo{author}{Jaynes, E.~T.}
\newblock \bibinfo{journal}{\bibinfo{title}{Prior probabilities}}.
\newblock {\emph{\JournalTitle{IEEE Transactions on Systems Science and
Cybernetics}}} \textbf{\bibinfo{volume}{4}}, \bibinfo{pages}{227--241},
\doi{10.1109/TSSC.1968.300117} (\bibinfo{year}{1968}).
\bibitem{10.2307/2333251}
\bibinfo{author}{Lindley, D.~V.}
\newblock \bibinfo{journal}{\bibinfo{title}{A statistical paradox}}.
\newblock {\emph{\JournalTitle{Biometrika}}} \textbf{\bibinfo{volume}{44}},
\bibinfo{pages}{187--192}, \doi{10.1093/biomet/44.1-2.187}
(\bibinfo{year}{1957}).
\bibitem{Cousins:2013hry}
\bibinfo{author}{Cousins, R.~D.}
\newblock \bibinfo{journal}{\bibinfo{title}{{The Jeffreys-Lindley paradox and
discovery criteria in high energy physics}}}.
\newblock {\emph{\JournalTitle{Synthese}}} \textbf{\bibinfo{volume}{194}},
\bibinfo{pages}{395--432}, \doi{10.1007/s11229-014-0525-z,
10.1007/s11229-015-0687-3} (\bibinfo{year}{2017}).
\newblock \eprint{arXiv}{1310.3791}.
\bibitem{Wagenmakers2007}
\bibinfo{author}{Wagenmakers, E.-J.}
\newblock \bibinfo{journal}{\bibinfo{title}{A practical solution to the
pervasive problems of $p$ values}}.
\newblock {\emph{\JournalTitle{Psychonomic Bulletin {\&} Review}}}
\textbf{\bibinfo{volume}{14}}, \bibinfo{pages}{779--804},
\doi{10.3758/BF03194105} (\bibinfo{year}{2007}).
\bibitem{doi:10.1177/1745691620958012}
\bibinfo{author}{Lakens, D.}
\newblock \bibinfo{journal}{\bibinfo{title}{The practical alternative to the
$p$ value is the correctly used $p$ value}}.
\newblock {\emph{\JournalTitle{Perspectives on Psychological Science}}}
\doi{10.1177/1745691620958012} (\bibinfo{year}{2021}).
\bibitem{Benjamin2018}
\bibinfo{author}{Benjamin, D.~J.} \emph{et~al.}
\newblock \bibinfo{journal}{\bibinfo{title}{Redefine statistical
significance}}.
\newblock {\emph{\JournalTitle{Nature Human Behaviour}}}
\textbf{\bibinfo{volume}{2}}, \bibinfo{pages}{6--10},
\doi{10.1038/s41562-017-0189-z} (\bibinfo{year}{2018}).
\bibitem{doi:10.1080/00031305.2018.1527253}
\bibinfo{author}{McShane, B.~B.}, \bibinfo{author}{Gal, D.},
\bibinfo{author}{Gelman, A.}, \bibinfo{author}{Robert, C.} \&
\bibinfo{author}{Tackett, J.~L.}
\newblock \bibinfo{journal}{\bibinfo{title}{Abandon statistical significance}}.
\newblock {\emph{\JournalTitle{The American Statistician}}}
\textbf{\bibinfo{volume}{73}}, \bibinfo{pages}{235--245},
\doi{10.1080/00031305.2018.1527253} (\bibinfo{year}{2019}).
\bibitem{Lakens2018}
\bibinfo{author}{Lakens, D.} \emph{et~al.}
\newblock \bibinfo{journal}{\bibinfo{title}{Justify your alpha}}.
\newblock {\emph{\JournalTitle{Nature Human Behaviour}}}
\textbf{\bibinfo{volume}{2}}, \bibinfo{pages}{168--171},
\doi{10.1038/s41562-018-0311-x} (\bibinfo{year}{2018}).
\bibitem{kendall2a}
\bibinfo{author}{Kendall, M.}, \bibinfo{author}{Stuart, A.},
\bibinfo{author}{Ord, J.} \& \bibinfo{author}{Arnold, S.}
\newblock \emph{\bibinfo{title}{Kendall's Advanced Theory of Statistics}}, vol.
\bibinfo{volume}{2A, chap. 21} (\bibinfo{publisher}{Wiley},
\bibinfo{year}{2009}), \bibinfo{edition}{sixth} edn.
\bibitem{Cousins:2018tiz}
\bibinfo{author}{Cousins, R.~D.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Lectures on Statistics in Theory:
Prelude to Statistics in Practice}}}.
\newblock {\emph{\JournalTitle{arXiv e-prints}}} (\bibinfo{year}{2018}).
\newblock \bibinfo{note}{See Sec. 7.4}, \eprint{arXiv}{1807.05996}.
\bibitem{10.2307/2958475}
\bibinfo{author}{Dickey, J.~M.}
\newblock \bibinfo{journal}{\bibinfo{title}{The weighted likelihood ratio,
linear hypotheses on normal location parameters}}.
\newblock {\emph{\JournalTitle{The Annals of Mathematical Statistics}}}
\textbf{\bibinfo{volume}{42}}, \bibinfo{pages}{204--223}
(\bibinfo{year}{1971}).
\bibitem{Athron:2017ard}
\bibinfo{author}{Athron, P.} \emph{et~al.}
\newblock \bibinfo{journal}{\bibinfo{title}{{GAMBIT: The Global and Modular
Beyond-the-Standard-Model Inference Tool}}}.
\newblock {\emph{\JournalTitle{Eur. Phys. J. C}}}
\textbf{\bibinfo{volume}{77}}, \bibinfo{pages}{784},
\doi{10.1140/epjc/s10052-017-5321-8} (\bibinfo{year}{2017}).
\newblock \bibinfo{note}{[Addendum: \textit{Eur.~Phys.~J.~C} \textbf{78}, 98
(2018), \doi{10.1140/epjc/s10052-017-5513-2}]}, \eprint{arXiv}{1705.07908}.
\bibitem{hepfit}
\bibinfo{author}{De~Blas, J.} \emph{et~al.}
\newblock \bibinfo{journal}{\bibinfo{title}{{$\texttt{HEPfit}$: a Code for the
Combination of Indirect and Direct Constraints on High Energy Physics
Models}}}.
\newblock {\emph{\JournalTitle{Eur. Phys. J. C}}}
\textbf{\bibinfo{volume}{80}}, \bibinfo{pages}{456},
\doi{10.1140/epjc/s10052-020-7904-z} (\bibinfo{year}{2019}).
\newblock \eprint{arXiv}{1910.14012}.
\bibitem{brinckmann2018montepython}
\bibinfo{author}{Brinckmann, T.} \& \bibinfo{author}{Lesgourgues, J.}
\newblock \bibinfo{title}{{MontePython 3: boosted MCMC sampler and other
features}}, \doi{10.1016/j.dark.2018.100260} (\bibinfo{year}{2019}).
\newblock \eprint{arXiv}{1804.07261}.
\bibitem{Bhom:2020bfe}
\bibinfo{author}{Bhom, J.} \& \bibinfo{author}{Chrzaszcz, M.}
\newblock \bibinfo{journal}{\bibinfo{title}{{HEPLike: an open source framework
for experimental likelihood evaluation}}}.
\newblock {\emph{\JournalTitle{Comput. Phys. Commun.}}}
\textbf{\bibinfo{volume}{254}}, \bibinfo{pages}{107235},
\doi{10.1016/j.cpc.2020.107235} (\bibinfo{year}{2020}).
\newblock \eprint{arXiv}{2003.03956}.
\bibitem{LikeDM}
\bibinfo{author}{Huang, X.}, \bibinfo{author}{Tsai, Y.-L.~S.} \&
\bibinfo{author}{Yuan, Q.}
\newblock \bibinfo{journal}{\bibinfo{title}{{LikeDM: likelihood calculator of
dark matter detection}}}.
\newblock {\emph{\JournalTitle{Comput. Phys. Commun.}}}
\textbf{\bibinfo{volume}{213}}, \bibinfo{pages}{252--263},
\doi{10.1016/j.cpc.2016.12.015} (\bibinfo{year}{2017}).
\newblock \eprint{arXiv}{1603.07119}.
\bibitem{Collaboration:2242860}
\bibinfo{title}{{Simplified likelihood for the re-interpretation of public CMS
results}}.
\newblock \bibinfo{type}{Tech. Rep.}
\bibinfo{number}{\href{https://cds.cern.ch/record/2242860}{CMS-NOTE-2017-001}},
\bibinfo{institution}{CERN}, \bibinfo{address}{Geneva}
(\bibinfo{year}{2017}).
\bibitem{Aghanim:2019ame}
\bibinfo{author}{Aghanim, N.} \emph{et~al.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Planck 2018 results. V. CMB power
spectra and likelihoods}}}.
\newblock {\emph{\JournalTitle{{Astronomy and Astrophysics}}}}
\textbf{\bibinfo{volume}{641}}, \bibinfo{pages}{A5},
\doi{10.1051/0004-6361/201936386} (\bibinfo{year}{2020}).
\newblock \eprint{arXiv}{1907.12875}.
\bibitem{IC79_SUSY}
\bibinfo{author}{{Aartsen}, M.~G.} \emph{et~al.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Improved limits on dark matter
annihilation in the Sun with the 79-string IceCube detector and implications
for supersymmetry}}}.
\newblock {\emph{\JournalTitle{JCAP}}} \textbf{\bibinfo{volume}{04}},
\bibinfo{pages}{022}, \doi{10.1088/1475-7516/2016/04/022}
(\bibinfo{year}{2016}).
\newblock \eprint{arXiv}{1601.00653}.
\bibitem{IC22Methods}
\bibinfo{author}{{Scott}, P.}, \bibinfo{author}{{Savage}, C.},
\bibinfo{author}{{Edsj{\"o}}, J.} \& \bibinfo{author}{{the IceCube
Collaboration: R.~Abbasi et al.}}
\newblock \bibinfo{journal}{\bibinfo{title}{{Use of event-level neutrino
telescope data in global fits for theories of new physics}}}.
\newblock {\emph{\JournalTitle{JCAP}}} \textbf{\bibinfo{volume}{11}},
\bibinfo{pages}{57}, \doi{10.1088/1475-7516/2012/11/057}
(\bibinfo{year}{2012}).
\newblock \eprint{arXiv}{1207.0810}.
\bibitem{Cousins:451612}
\bibinfo{author}{Cousins, R.~D.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Comments on methods for setting
confidence limits}}}.
\newblock {\emph{\JournalTitle{{Workshop on Confidence Limits}}}}
\doi{10.5170/CERN-2000-005.49} (\bibinfo{year}{2000}).
\newblock \bibinfo{note}{See point 5, p57}.
\bibitem{Vischia:2019uul}
\bibinfo{author}{Vischia, P.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Reporting results in High Energy
Physics publications: A manifesto}}}.
\newblock {\emph{\JournalTitle{Rev. Phys.}}} \textbf{\bibinfo{volume}{5}},
\bibinfo{pages}{100046}, \doi{10.1016/j.revip.2020.100046}
(\bibinfo{year}{2020}).
\newblock \eprint{arXiv}{1904.11718}.
\bibitem{Abdallah:2020pec}
\bibinfo{author}{Abdallah, W.} \emph{et~al.}
\newblock \bibinfo{journal}{\bibinfo{title}{{Reinterpretation of LHC Results
for New Physics: Status and Recommendations after Run 2}}}.
\newblock {\emph{\JournalTitle{SciPost Phys.}}} \textbf{\bibinfo{volume}{9}},
\bibinfo{pages}{022}, \doi{10.21468/SciPostPhys.9.2.022}
(\bibinfo{year}{2020}).
\newblock \eprint{arXiv}{2003.07868}.
\bibitem{Hunter:2007}
\bibinfo{author}{Hunter, J.~D.}
\newblock \bibinfo{journal}{\bibinfo{title}{Matplotlib: A 2d graphics
environment}}.
\newblock {\emph{\JournalTitle{Computing in Science \& Engineering}}}
\textbf{\bibinfo{volume}{9}}, \bibinfo{pages}{90--95},
\doi{10.1109/MCSE.2007.55} (\bibinfo{year}{2007}).
\end{thebibliography}
```
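As an illustrative aside for maintainers of this prompt (not part of the required output), the sketch below shows one way the `\doi{...}` and `\eprint{arXiv}{...}` fields in the bbl entries above can be rendered as the Markdown links the final instructions require. The regular expressions and the `markdown_links` helper are assumptions made for this sketch, not part of the task specification.

```python
import re

# Illustrative only: extract DOI and arXiv identifiers from bbl text and
# render them as Markdown links, mirroring the \doi{...} and
# \eprint{arXiv}{...} commands used in the bibliography block above.
DOI_RE = re.compile(r"\\doi\{([^}]+)\}")
ARXIV_RE = re.compile(r"\\eprint\{arXiv\}\{([^}]+)\}")

def markdown_links(bbl_text: str) -> list[str]:
    """Return one Markdown link per DOI or arXiv identifier found."""
    links = [f"[{d}](https://doi.org/{d})" for d in DOI_RE.findall(bbl_text)]
    links += [f"[{e}](https://arxiv.org/abs/{e})" for e in ARXIV_RE.findall(bbl_text)]
    return links

# Example with fields taken from entries above:
print(markdown_links(r"\doi{10.1109/MCSE.2007.55} \eprint{arXiv}{2003.03956}"))
# ['[10.1109/MCSE.2007.55](https://doi.org/10.1109/MCSE.2007.55)',
#  '[2003.03956](https://arxiv.org/abs/2003.03956)']
```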
5. **Author Information:**
- Lead Author: {'name': 'Shehu S. AbdusSalam'}
- Full Authors List:
```yaml
Shehu S. AbdusSalam: {}
Fruzsina Agocs:
phd:
start: 2017-10-01
end: 2021-09-01
supervisors:
- Will Handley
- Anthony Lasenby
- Mike Hobson
thesis: 'Primordial evolution of cosmological perturbations: Theory and computation'
partiii:
start: 2016-10-01
end: 2017-06-01
supervisors:
- Will Handley
thesis: "The Runge\u2013Kutta\u2013Wentzel\u2013Kramers\u2013Brillouin method\
\ and the primordial Universe"
original_image: images/originals/fruzsina_agocs.jpg
image: /assets/group/images/fruzsina_agocs.jpg
links:
Webpage: https://fruzsinaagocs.github.io/
Group webpage: https://www.simonsfoundation.org/people/fruzsina-agocs/
destination:
2021-10-01: 3y fellowship CCM New York
2024-10-01: Assistant Professor at Boulder, Colorado
Benjamin C. Allanach: {}
Peter Athron: {}
"Csaba Bal\xE1zs": {}
Emanuele Bagnaschi: {}
Philip Bechtle: {}
Oliver Buchmueller: {}
Ankit Beniwal: {}
Jihyun Bhom: {}
Sanjay Bloor: {}
Torsten Bringmann: {}
Andy Buckley: {}
Anja Butter: {}
"Jos\xE9 Eliel Camargo-Molina": {}
Marcin Chrzaszcz: {}
Jan Conrad: {}
Jonathan M. Cornell: {}
Matthias Danninger: {}
Jorge de Blas: {}
Albert De Roeck: {}
Klaus Desch: {}
Matthew Dolan: {}
Herbert Dreiner: {}
Otto Eberhardt: {}
John Ellis: {}
Ben Farmer: {}
Marco Fedele: {}
"Henning Fl\xE4cher": {}
Andrew Fowlie: {}
"Tom\xE1s E. Gonzalo": {}
Philip Grace: {}
Matthias Hamer: {}
Will Handley:
pi:
start: 2020-10-01
thesis: null
postdoc:
start: 2016-10-01
end: 2020-10-01
thesis: null
phd:
start: 2012-10-01
end: 2016-09-30
supervisors:
- Anthony Lasenby
- Mike Hobson
thesis: 'Kinetic initial conditions for inflation: theory, observation and methods'
original_image: images/originals/will_handley.jpeg
image: /assets/group/images/will_handley.jpg
links:
Webpage: https://willhandley.co.uk
Julia Harz: {}
Sven Heinemeyer: {}
Sebastian Hoof: {}
Selim Hotinli: {}
Paul Jackson: {}
Felix Kahlhoefer: {}
Kamila Kowalska: {}
"Michael Kr\xE4mer": {}
Anders Kvellestad: {}
Miriam Lucio Martinez: {}
Farvah Mahmoudi: {}
Diego Martinez Santos: {}
Gregory D. Martinez: {}
Satoshi Mishima: {}
Keith Olive: {}
Ayan Paul: {}
Markus Tobias Prim: {}
Werner Porod: {}
Are Raklev: {}
Janina J. Renk: {}
Christopher Rogan: {}
Leszek Roszkowski: {}
Roberto Ruiz de Austri: {}
Kazuki Sakurai: {}
Andre Scaffidi: {}
Pat Scott: {}
Enrico Maria Sessolo: {}
Tim Stefaniak: {}
"Patrick St\xF6cker": {}
Wei Su: {}
Sebastian Trojanowski: {}
Roberto Trotta: {}
Yue-Lin Sming Tsai: {}
Jeriek Van den Abeele: {}
Mauro Valli: {}
Aaron C. Vincent: {}
Georg Weiglein: {}
Martin White: {}
Peter Wienemann: {}
Lei Wu: {}
Yang Zhang: {}
```
This YAML file provides a concise snapshot of an academic research group. Members are listed by name; most entries are empty placeholders, while group members such as Fruzsina Agocs and Will Handley carry structured records of their roles, from Part III projects through PhD and postdoctoral positions to PI, with start and end dates, thesis titles, and supervisors. Supplementary metadata includes image paths, links to personal or institutional webpages, and, where known, subsequent career destinations, reflecting the group's mentoring network and research focus on cosmology, astrophysics, and Bayesian data analysis.
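For reference only, the sketch below illustrates how the `links` entries in this YAML block could be resolved into Markdown author links, preferring a personal or group webpage (or a GitHub page) over other links. The `author_link` helper, its preference heuristic, and the `authors.yaml` filename are assumptions for illustration, not part of the task specification.

```python
import yaml  # PyYAML, assumed available

# Illustrative only: turn the `links` field of an author entry into a
# Markdown link. Authors with empty entries fall back to plain text.
def author_link(name: str, authors: dict) -> str:
    links = (authors.get(name) or {}).get("links", {})
    for label, url in links.items():
        if "github" in url.lower() or "webpage" in label.lower():
            return f"[{name}]({url})"
    return name  # no suitable link recorded

with open("authors.yaml") as f:  # hypothetical filename for the block above
    authors = yaml.safe_load(f)

print(author_link("Will Handley", authors))         # [Will Handley](https://willhandley.co.uk)
print(author_link("Shehu S. AbdusSalam", authors))  # Shehu S. AbdusSalam
```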
====================================================================================
Final Output Instructions
====================================================================================
- Combine all data sources to create a seamless, engaging narrative.
- Follow the exact Markdown output format provided at the top.
- Do not include any extra explanation, commentary, or wrapping beyond the specified Markdown.
- Validate that every bibliographic reference with a DOI or arXiv identifier is converted into a Markdown link as per the examples.
- Validate that every Markdown author link corresponds to a link in the author information block.
- Before finalizing, confirm that no LaTeX citation commands or other undesired formatting remain (an illustrative validation sketch follows these instructions).
- Before finalizing, confirm that the link to the paper itself [2012.09874](https://arxiv.org/abs/2012.09874) is featured in the first sentence.
Generate only the final Markdown output that meets all these requirements.
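As a final editorial aside, the sketch below is a minimal pre-flight check, assuming the generated Markdown is available as a string, for the validation bullets above. The `validate` helper, its heuristics, and the post filename in the usage comment are illustrative assumptions; the requirement that the paper link appears in the first sentence still needs a manual check.

```python
import re

# Illustrative pre-flight checks mirroring the validation bullets above.
PAPER_LINK = "[2012.09874](https://arxiv.org/abs/2012.09874)"

def validate(markdown: str) -> list[str]:
    """Return a list of problems found in the generated Markdown (empty means pass)."""
    problems = []
    if re.search(r"\\cite\{", markdown):
        problems.append("LaTeX \\cite commands remain in the output")
    if PAPER_LINK not in markdown:
        problems.append("link to the featured paper is missing")
    if not re.search(r"\]\(https://(doi\.org|arxiv\.org/abs)/", markdown):
        problems.append("no DOI or arXiv reference rendered as a Markdown link")
    return problems

# Usage (hypothetical filename):
# print(validate(open("_posts/2020-12-17-2012.09874.md").read()))
```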
{% endraw %}