{% raw %} Title: Create a Markdown Blog Post Integrating Research Details and a Featured Paper ==================================================================================== This task involves generating a Markdown file (ready for a GitHub-served Jekyll site) that integrates our research details with a featured research paper. The output must follow the exact format and conventions described below. ==================================================================================== Output Format (Markdown): ------------------------------------------------------------------------------------ --- layout: post title: "Nested sampling cross-checks using order statistics" date: 2020-06-05 categories: papers --- ![AI generated image](/assets/images/posts/2020-06-05-2006.03371.png) Will Handley Content generated by [gemini-2.5-pro](https://deepmind.google/technologies/gemini/) using [this prompt](/prompts/content/2020-06-05-2006.03371.txt). Image generated by [imagen-3.0-generate-002](https://deepmind.google/technologies/gemini/) using [this prompt](/prompts/images/2020-06-05-2006.03371.txt). ------------------------------------------------------------------------------------ ==================================================================================== Please adhere strictly to the following instructions: ==================================================================================== Section 1: Content Creation Instructions ==================================================================================== 1. **Generate the Page Body:** - Write a well-composed, engaging narrative that is suitable for a scholarly audience interested in advanced AI and astrophysics. - Ensure the narrative is original and reflective of the tone, style, and content of the "Homepage Content" block (provided below), but do not reuse its content. - Use bullet points, subheadings, or other formatting to enhance readability. 2. **Highlight Key Research Details:** - Emphasize the contributions and impact of the paper, focusing on its methodology, significance, and context within current research. - Specifically highlight the lead author ({'name': 'Andrew Fowlie'}). When referencing any author, use Markdown links from the Author Information block (choose academic or GitHub links over social media). 3. **Integrate Data from Multiple Sources:** - Seamlessly weave information from the following: - **Paper Metadata (YAML):** Essential details including the title and authors. - **Paper Source (TeX):** Technical content from the paper. - **Bibliographic Information (bbl):** Extract bibliographic references. - **Author Information (YAML):** Profile details for constructing Markdown links. - Merge insights from the Paper Metadata, TeX source, Bibliographic Information, and Author Information blocks into a coherent narrative—do not treat these as separate or isolated pieces. - Insert the generated narrative between the designated HTML comment markers in the page body. 4. **Generate Bibliographic References:** - Review the Bibliographic Information block carefully. - For each reference that includes a DOI or arXiv identifier: - For DOIs, generate a link formatted as: [10.1234/xyz](https://doi.org/10.1234/xyz) - For arXiv entries, generate a link formatted as: [2103.12345](https://arxiv.org/abs/2103.12345) - **Important:** Do not use any LaTeX citation commands (e.g., `\cite{...}`). Every reference must be rendered directly as a Markdown link.
For example, instead of `\cite{mycitation}`, output `[mycitation](https://doi.org/mycitation)` - **Incorrect:** `\cite{10.1234/xyz}` - **Correct:** `[10.1234/xyz](https://doi.org/10.1234/xyz)` - Ensure that at least three (3) of the most relevant references are naturally integrated into the narrative. - Ensure that the link to the Featured paper [2006.03371](https://arxiv.org/abs/2006.03371) is included in the first sentence. 5. **Final Formatting Requirements:** - The output must be plain Markdown; do not wrap it in Markdown code fences. - Preserve the YAML front matter exactly as provided. ==================================================================================== Section 2: Provided Data for Integration ==================================================================================== 1. **Homepage Content (Tone and Style Reference):** ```markdown --- layout: home --- ![AI generated image](/assets/images/index.png) The Handley Research Group stands at the forefront of cosmological exploration, pioneering novel approaches that fuse fundamental physics with the transformative power of artificial intelligence. We are a dynamic team of researchers, including PhD students, postdoctoral fellows, and project students, based at the University of Cambridge. Our mission is to unravel the mysteries of the Universe, from its earliest moments to its present-day structure and ultimate fate. We tackle fundamental questions in cosmology and astrophysics, with a particular focus on leveraging advanced Bayesian statistical methods and AI to push the frontiers of scientific discovery. Our research spans a wide array of topics, including the [primordial Universe](https://arxiv.org/abs/1907.08524), [inflation](https://arxiv.org/abs/1807.06211), the nature of [dark energy](https://arxiv.org/abs/2503.08658) and [dark matter](https://arxiv.org/abs/2405.17548), [21-cm cosmology](https://arxiv.org/abs/2210.07409), the [Cosmic Microwave Background (CMB)](https://arxiv.org/abs/1807.06209), and [gravitational wave astrophysics](https://arxiv.org/abs/2411.17663). ### Our Research Approach: Innovation at the Intersection of Physics and AI At The Handley Research Group, we develop and apply cutting-edge computational techniques to analyze complex astronomical datasets. Our work is characterized by a deep commitment to principled [Bayesian inference](https://arxiv.org/abs/2205.15570) and the innovative application of [artificial intelligence (AI) and machine learning (ML)](https://arxiv.org/abs/2504.10230). **Key Research Themes:** * **Cosmology:** We investigate the early Universe, including [quantum initial conditions for inflation](https://arxiv.org/abs/2002.07042) and the generation of [primordial power spectra](https://arxiv.org/abs/2112.07547). We explore the enigmatic nature of [dark energy, using methods like non-parametric reconstructions](https://arxiv.org/abs/2503.08658), and search for new insights into [dark matter](https://arxiv.org/abs/2405.17548). A significant portion of our efforts is dedicated to [21-cm cosmology](https://arxiv.org/abs/2104.04336), aiming to detect faint signals from the Cosmic Dawn and the Epoch of Reionization. * **Gravitational Wave Astrophysics:** We develop methods for [analyzing gravitational wave signals](https://arxiv.org/abs/2411.17663), extracting information about extreme astrophysical events and fundamental physics. 
* **Bayesian Methods & AI for Physical Sciences:** A core component of our research is the development of novel statistical and AI-driven methodologies. This includes advancing [nested sampling techniques](https://arxiv.org/abs/1506.00171) (e.g., [PolyChord](https://arxiv.org/abs/1506.00171), [dynamic nested sampling](https://arxiv.org/abs/1704.03459), and [accelerated nested sampling with $\beta$-flows](https://arxiv.org/abs/2411.17663)), creating powerful [simulation-based inference (SBI) frameworks](https://arxiv.org/abs/2504.10230), and employing [machine learning for tasks such as radiometer calibration](https://arxiv.org/abs/2504.16791), [cosmological emulation](https://arxiv.org/abs/2503.13263), and [mitigating radio frequency interference](https://arxiv.org/abs/2211.15448). We also explore the potential of [foundation models for scientific discovery](https://arxiv.org/abs/2401.00096). **Technical Contributions:** Our group has a strong track record of developing widely-used scientific software. Notable examples include: * [**PolyChord**](https://arxiv.org/abs/1506.00171): A next-generation nested sampling algorithm for Bayesian computation. * [**anesthetic**](https://arxiv.org/abs/1905.04768): A Python package for processing and visualizing nested sampling runs. * [**GLOBALEMU**](https://arxiv.org/abs/2104.04336): An emulator for the sky-averaged 21-cm signal. * [**maxsmooth**](https://arxiv.org/abs/2007.14970): A tool for rapid maximally smooth function fitting. * [**margarine**](https://arxiv.org/abs/2205.12841): For marginal Bayesian statistics using normalizing flows and KDEs. * [**fgivenx**](https://arxiv.org/abs/1908.01711): A package for functional posterior plotting. * [**nestcheck**](https://arxiv.org/abs/1804.06406): Diagnostic tests for nested sampling calculations. ### Impact and Discoveries Our research has led to significant advancements in cosmological data analysis and yielded new insights into the Universe. Key achievements include: * Pioneering the development and application of advanced Bayesian inference tools, such as [PolyChord](https://arxiv.org/abs/1506.00171), which has become a cornerstone for cosmological parameter estimation and model comparison globally. * Making significant contributions to the analysis of major cosmological datasets, including the [Planck mission](https://arxiv.org/abs/1807.06209), providing some of the tightest constraints on cosmological parameters and models of [inflation](https://arxiv.org/abs/1807.06211). * Developing novel AI-driven approaches for astrophysical challenges, such as using [machine learning for radiometer calibration in 21-cm experiments](https://arxiv.org/abs/2504.16791) and [simulation-based inference for extracting cosmological information from galaxy clusters](https://arxiv.org/abs/2504.10230). * Probing the nature of dark energy through innovative [non-parametric reconstructions of its equation of state](https://arxiv.org/abs/2503.08658) from combined datasets. * Advancing our understanding of the early Universe through detailed studies of [21-cm signals from the Cosmic Dawn and Epoch of Reionization](https://arxiv.org/abs/2301.03298), including the development of sophisticated foreground modelling techniques and emulators like [GLOBALEMU](https://arxiv.org/abs/2104.04336). 
* Developing new statistical methods for quantifying tensions between cosmological datasets ([Quantifying tensions in cosmological parameters: Interpreting the DES evidence ratio](https://arxiv.org/abs/1902.04029)) and for robust Bayesian model selection ([Bayesian model selection without evidences: application to the dark energy equation-of-state](https://arxiv.org/abs/1506.09024)). * Exploring fundamental physics questions such as potential [parity violation in the Large-Scale Structure using machine learning](https://arxiv.org/abs/2410.16030). ### Charting the Future: AI-Powered Cosmological Discovery The Handley Research Group is poised to lead a new era of cosmological analysis, driven by the explosive growth in data from next-generation observatories and transformative advances in artificial intelligence. Our future ambitions are centred on harnessing these capabilities to address the most pressing questions in fundamental physics. **Strategic Research Pillars:** * **Next-Generation Simulation-Based Inference (SBI):** We are developing advanced SBI frameworks to move beyond traditional likelihood-based analyses. This involves creating sophisticated codes for simulating [Cosmic Microwave Background (CMB)](https://arxiv.org/abs/1908.00906) and [Baryon Acoustic Oscillation (BAO)](https://arxiv.org/abs/1607.00270) datasets from surveys like DESI and 4MOST, incorporating realistic astrophysical effects and systematic uncertainties. Our AI initiatives in this area focus on developing and implementing cutting-edge SBI algorithms, particularly [neural ratio estimation (NRE) methods](https://arxiv.org/abs/2407.15478), to enable robust and scalable inference from these complex simulations. * **Probing Fundamental Physics:** Our enhanced analytical toolkit will be deployed to test the standard cosmological model ($\Lambda$CDM) with unprecedented precision and to explore [extensions to Einstein's General Relativity](https://arxiv.org/abs/2006.03581). We aim to constrain a wide range of theoretical models, from modified gravity to the nature of [dark matter](https://arxiv.org/abs/2106.02056) and [dark energy](https://arxiv.org/abs/1701.08165). This includes leveraging data from upcoming [gravitational wave observatories](https://arxiv.org/abs/1803.10210) like LISA, alongside CMB and large-scale structure surveys from facilities such as Euclid and JWST. * **Synergies with Particle Physics:** We will continue to strengthen the connection between cosmology and particle physics by expanding the [GAMBIT framework](https://arxiv.org/abs/2009.03286) to interface with our new SBI tools. This will facilitate joint analyses of cosmological and particle physics data, providing a holistic approach to understanding the Universe's fundamental constituents. * **AI-Driven Theoretical Exploration:** We are pioneering the use of AI, including [large language models and symbolic computation](https://arxiv.org/abs/2401.00096), to automate and accelerate the process of theoretical model building and testing. This innovative approach will allow us to explore a broader landscape of physical theories and derive new constraints from diverse astrophysical datasets, such as those from GAIA. Our overarching goal is to remain at the forefront of scientific discovery by integrating the latest AI advancements into every stage of our research, from theoretical modeling to data analysis and interpretation. We are excited by the prospect of using these powerful new tools to unlock the secrets of the cosmos. 
Content generated by [gemini-2.5-pro-preview-05-06](https://deepmind.google/technologies/gemini/) using [this prompt](/prompts/content/index.txt). Image generated by [imagen-3.0-generate-002](https://deepmind.google/technologies/gemini/) using [this prompt](/prompts/images/index.txt). ``` 2. **Paper Metadata:** ```yaml !!python/object/new:feedparser.util.FeedParserDict dictitems: id: http://arxiv.org/abs/2006.03371v2 guidislink: true link: http://arxiv.org/abs/2006.03371v2 updated: '2020-08-24T03:52:06Z' updated_parsed: !!python/object/apply:time.struct_time - !!python/tuple - 2020 - 8 - 24 - 3 - 52 - 6 - 0 - 237 - 0 - tm_zone: null tm_gmtoff: null published: '2020-06-05T11:19:03Z' published_parsed: !!python/object/apply:time.struct_time - !!python/tuple - 2020 - 6 - 5 - 11 - 19 - 3 - 4 - 157 - 0 - tm_zone: null tm_gmtoff: null title: Nested sampling cross-checks using order statistics title_detail: !!python/object/new:feedparser.util.FeedParserDict dictitems: type: text/plain language: null base: '' value: Nested sampling cross-checks using order statistics summary: 'Nested sampling (NS) is an invaluable tool in data analysis in modern astrophysics, cosmology, gravitational wave astronomy and particle physics. We identify a previously unused property of NS related to order statistics: the insertion indexes of new live points into the existing live points should be uniformly distributed. This observation enabled us to create a novel cross-check of single NS runs. The tests can detect when an NS run failed to sample new live points from the constrained prior and plateaus in the likelihood function, which break an assumption of NS and thus leads to unreliable results. We applied our cross-check to NS runs on toy functions with known analytic results in 2 - 50 dimensions, showing that our approach can detect problematic runs on a variety of likelihoods, settings and dimensions. As an example of a realistic application, we cross-checked NS runs performed in the context of cosmological model selection. Since the cross-check is simple, we recommend that it become a mandatory test for every applicable NS run.' summary_detail: !!python/object/new:feedparser.util.FeedParserDict dictitems: type: text/plain language: null base: '' value: 'Nested sampling (NS) is an invaluable tool in data analysis in modern astrophysics, cosmology, gravitational wave astronomy and particle physics. We identify a previously unused property of NS related to order statistics: the insertion indexes of new live points into the existing live points should be uniformly distributed. This observation enabled us to create a novel cross-check of single NS runs. The tests can detect when an NS run failed to sample new live points from the constrained prior and plateaus in the likelihood function, which break an assumption of NS and thus leads to unreliable results. We applied our cross-check to NS runs on toy functions with known analytic results in 2 - 50 dimensions, showing that our approach can detect problematic runs on a variety of likelihoods, settings and dimensions. As an example of a realistic application, we cross-checked NS runs performed in the context of cosmological model selection. Since the cross-check is simple, we recommend that it become a mandatory test for every applicable NS run.' 
authors: - !!python/object/new:feedparser.util.FeedParserDict dictitems: name: Andrew Fowlie - !!python/object/new:feedparser.util.FeedParserDict dictitems: name: Will Handley - !!python/object/new:feedparser.util.FeedParserDict dictitems: name: Liangliang Su author_detail: !!python/object/new:feedparser.util.FeedParserDict dictitems: name: Liangliang Su author: Liangliang Su arxiv_doi: 10.1093/mnras/staa2345 links: - !!python/object/new:feedparser.util.FeedParserDict dictitems: title: doi href: http://dx.doi.org/10.1093/mnras/staa2345 rel: related type: text/html - !!python/object/new:feedparser.util.FeedParserDict dictitems: href: http://arxiv.org/abs/2006.03371v2 rel: alternate type: text/html - !!python/object/new:feedparser.util.FeedParserDict dictitems: title: pdf href: http://arxiv.org/pdf/2006.03371v2 rel: related type: application/pdf arxiv_comment: minor changes & clarifications. closely matches published version arxiv_primary_category: term: stat.CO scheme: http://arxiv.org/schemas/atom tags: - !!python/object/new:feedparser.util.FeedParserDict dictitems: term: stat.CO scheme: http://arxiv.org/schemas/atom label: null - !!python/object/new:feedparser.util.FeedParserDict dictitems: term: astro-ph.CO scheme: http://arxiv.org/schemas/atom label: null - !!python/object/new:feedparser.util.FeedParserDict dictitems: term: astro-ph.IM scheme: http://arxiv.org/schemas/atom label: null - !!python/object/new:feedparser.util.FeedParserDict dictitems: term: hep-ph scheme: http://arxiv.org/schemas/atom label: null - !!python/object/new:feedparser.util.FeedParserDict dictitems: term: physics.data-an scheme: http://arxiv.org/schemas/atom label: null ``` 3. **Paper Source (TeX):** ```tex BAO & 0.89 & 0.82 & 0.07 & 0.05\\ lensing+BAO & 0.72 & 0.54 & 0.19 & 0.43\\ lensing & 0.26 & 0.14 & 0.04 & 0.64\\ lensing+S$H_0$ES & 0.08 & 0.08 & 0.78 & 0.04\\ Planck+BAO & 0.39 & 0.56 & 0.14 & 0.43\\ Planck+lensing+BAO & 0.68 & 0.69 & 0.70 & 0.27\\ Planck+lensing & 0.94 & 0.49 & 0.89 & 0.72\\ Planck+lensing+S$H_0$ES & 0.92 & 0.92 & 0.33 & 0.82\\ Planck & 0.81 & 0.69 & 0.84 & 0.88\\ Planck+S$H_0$ES & 0.20 & 0.48 & 0.92 & 0.97\\ S$H_0$ES & 0.59 & 0.59 & 0.98 & 0.98\\\documentclass[a4paper,fleqn,usenatbib]{mnras} % MNRAS is set in Times font. If you don't have this installed (most LaTeX % installations will be fine) or prefer the old Computer Modern fonts, comment % out the following line \usepackage{newtxtext,newtxmath} % Depending on your LaTeX fonts installation, you might get better results with one of these: %\usepackage{mathptmx} %\usepackage{txfonts} % Use vector fonts, so it zooms properly in on-screen viewing software % Don't change these lines unless you know what you are doing \usepackage[T1]{fontenc} \usepackage{ae,aecompl} %%%%% AUTHORS - PLACE YOUR OWN PACKAGES HERE %%%%% % Only include extra packages if you really need them. 
Common packages are: \usepackage{graphicx} \usepackage{dcolumn} \usepackage{bm} \usepackage{hyperref} %\usepackage{natbib} \usepackage{xspace} \usepackage{soul} \usepackage[ruled,vlined]{algorithm2e} % adjust algorithm appearance \SetAlCapNameFnt{\normalsize} \SetAlCapFnt{\normalsize} % fonts %\usepackage[english]{babel} %\usepackage[T1]{fontenc} %\usepackage[utf8]{inputenc} %\usepackage[scaled=1.04]{biolinum} %\renewcommand*\familydefault{\rmdefault} %\usepackage{fourier} %\usepackage[scaled=0.83]{beramono} %\usepackage{microtype} % journal names \usepackage{aas_macros} \newcommand{\code}{\textsf} % nested sampling macros \newcommand{\nlive}{n_\text{live}} \newcommand{\niter}{n_\text{iter}} \newcommand{\pvalue}{\text{\textit{p}-value}\xspace} \newcommand{\pvalues}{\text{\pvalue{}s}\xspace} \newcommand{\Z}{\mathcal{Z}} \newcommand{\logZ}{\ensuremath{\log\Z}\xspace} \newcommand{\like}{\mathcal{L}} \newcommand{\threshold}{\like^\star} \newcommand{\pg}[2]{p\left(#1\,\rvert\, #2\right)} \newcommand{\p}[1]{p\left(#1\right)} \newcommand{\intd}{\text{d}} \newcommand{\params}{\mathbf{\Theta}} \newcommand{\stoppingtol}{\epsilon} \newcommand{\efr}{\ensuremath{\code{efr}}\xspace} \newcommand{\nr}{\ensuremath{n_r}\xspace} % distributions \newcommand{\loggamma}{\ln\Gamma} \newcommand{\uniform}{\mathcal{U}} \newcommand{\normal}{\mathcal{N}} % ref to sections etc \usepackage{cleveref} % comments \usepackage[usenames]{xcolor} \newcommand{\AF}[1]{{\color{blue}\textbf{TODO AF:} \textit{#1}}} \newcommand{\WH}[1]{{\color{red}\textbf{TODO WH:} \textit{#1}}} \newcommand{\LL}[1]{{\color{green}\textbf{TODO LL:} \textit{#1}}} % codes \newcommand{\MN}{\code{MultiNest}\xspace} \newcommand{\PC}{\code{PolyChord}\xspace} \newcommand{\MNVersion}{\code{\MN-3.12}\xspace} \newcommand{\PCVersion}{\code{\PC-1.17.1}\xspace} \newcommand{\anesthetic}{\code{anesthetic}} % settings we used \newcommand{\nliveSetting}{1000\xspace} \newcommand{\tolSetting}{0.01\xspace} \newcommand{\nrepeatSetting}{100\xspace} \newcommand{\nrepeatsPerfectSetting}{10,000\xspace} \newcommand{\nrepeatsMCSetting}{100,000\xspace} \newcommand{\niterPerfectSetting}{10,000\xspace} % Highlight changes in resubmission %\newcommand{\add}[1]{\textcolor{red}{\textbf{#1}}} %\newcommand{\remove}[1]{\textcolor{gray}{\textit{\st{#1}}}} % adjust header \makeatletter \def\@printed{} \def\@journal{} \def\@oddfoot{} \def\@evenfoot{} \makeatother % Title of the paper, and the short title which is used in the headers. % Keep the title short and informative. \title{Nested sampling cross-checks using order statistics} % The list of authors, and the short list which is used in the headers. % If you need two or more lines of authors, add an extra line using \newauthor \author[A. Fowlie et al.]{% Andrew Fowlie$^{1}$\thanks{andrew.j.fowlie@njnu.edu.cn}, Will Handley$^{2,3}$\thanks{wh260@cam.ac.uk}, and Liangliang Su$^{1}$\thanks{191002001@stu.njnu.edu.cn} \\ % List of institutions $^{1}$Department of Physics and Institute of Theoretical Physics, Nanjing Normal University, Nanjing, Jiangsu 210023, China\\ $^{2}$Astrophysics Group, Cavendish Laboratory, J.J.Thomson Avenue, Cambridge, CB3 0HE, UK\\ $^{3}$Kavli Institute for Cosmology, Madingley Road, Cambridge, CB3 0HA, UK } % These dates will be filled out by the publisher \date{} % \date{Accepted XXX. Received YYY; in original form ZZZ} % Enter the current year, for the copyright statements etc. 
\pubyear{2020} % Don't change these lines \begin{document} \label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \maketitle \begin{abstract} Nested sampling (NS) is an invaluable tool in data analysis in modern astrophysics, cosmology, gravitational wave astronomy and particle physics. % We identify a previously unused property of NS related to order statistics: the insertion indexes of new live points into the existing live points should be uniformly distributed. % This observation enabled us to create a novel cross-check of single NS runs. % The tests can detect when an NS run failed to sample new live points from the constrained prior and plateaus in the likelihood function, which break an assumption of NS and thus leads to unreliable results. % We applied our cross-check to NS runs on toy functions with known analytic results in $2$ -- $50$ dimensions, showing that our approach can detect problematic runs on a variety of likelihoods, settings and dimensions. % As an example of a realistic application, we cross-checked NS runs performed in the context of cosmological model selection. % Since the cross-check is simple, we recommend that it become a mandatory test for every applicable NS run. \end{abstract} % Select between one and six entries from the list of approved keywords. % Don't make up new ones. \begin{keywords} methods: statistical -- methods: data analysis -- methods: numerical \end{keywords} \section{Introduction} Nested sampling (NS) was introduced by Skilling in 2004~\citep{2004AIPC..735..395S,Skilling:2006gxv} as a novel algorithm for computing Bayesian evidences and posterior distributions. The algorithm requires few tuning parameters and can cope with traditionally-challenging multimodal and degenerate functions. As a result, popular implementations such as \MN~\citep{Feroz:2007kg,Feroz:2008xx,Feroz:2013hea}, \PC~\citep{Handley:2015fda,Handley:2015xxx} and \code{dynesty}~\citep{2020MNRAS.tmp..280S} have become invaluable tools in modern cosmology~\citep{Mukherjee:2005wg,Easther:2011yq,Martin:2013nzq,Hlozek:2014lca,2013JCAP...02..001A,Akrami:2018odb}, astrophysics~\citep{Trotta:2010mx,2007MNRAS.377L..74L,Buchner:2014nha}, gravitational wave astronomy~\citep{Veitch:2014wba,TheLIGOScientific:2016src,TheLIGOScientific:2016pea,Ashton:2018jfp}, and particle physics~\citep{Trotta:2008bp,Feroz:2008wr,Buchmueller:2013rsa,Workgroup:2017htr}. Other NS applications include statistical physics~\citep{PhysRevLett.120.250601,PhysRevX.4.031034,doi:10.1021/jp1012973,PhysRevE.89.022302,PhysRevE.96.043311,doi:10.1063/1.4821761}, condensed matter physics~\citep{PhysRevB.93.174108}, and biology~\citep{10.1093/sysbio/syy050,10.1093/bioinformatics/btu675}. In this work, we propose a cross-check of an important assumption in NS that works on single NS runs. This improves upon previous tests of NS that required toy functions with known analytic properties~\citep{2014arXiv1407.5459B} or multiple runs~\citep{Higson:2018cqj}. The cross-check detects faults in the compression of the parameter space that lead to biased estimates of the evidence. We demonstrate our method on toy functions and previous NS runs used for model selection in cosmology~\citep{Handley:2019tkm}. We anticipate that the cross-check could be applied as broadly as NS itself. The paper is structured as follows. After recapitulating the relevant aspects of NS in \cref{sec:intro}, we introduce our approach in \cref{sec:test}. We apply our methods to toy functions and a cosmological likelihood in \cref{sec:examples}. 
We briefly discuss the possibility of using the insertion indexes to debias NS evidence estimates in \cref{sec:debiasing} before concluding in \cref{sec:conclusions}. \section{NS algorithm}\label{sec:intro} To establish our notation and explain our cross-check, we briefly summarize the NS algorithm. For more detailed and pedagogical introductions, see e.g., \citep{Skilling:2006gxv,Feroz:2008xx,Handley:2015fda,2020MNRAS.tmp..280S}. NS is primarily an algorithm for computing the Bayesian evidence of a model in light of data. Consider a model with parameters $\params$. The evidence may be written \begin{equation}\label{eq:Z} \Z \equiv \int_{\Omega_\params} \like(\params) \, \pi(\params) \,\intd \params, \end{equation} where $\pi(\params)$ is a prior density for the parameters and $\like(\params)$ is a likelihood function describing the probability of the observed experimental data. The evidence is a critical ingredient in Bayesian model selection in which models are compared by Bayes factors, since Bayes factors are ratios of evidences for two models, \begin{equation} B_{10} \equiv \frac{\Z_1}{\Z_0}. \end{equation} The Bayes factor $B_{10}$ tells us how much more we should believe in model $1$ relative to model $0$ in light of experimental data. For an introduction to Bayes factors, see e.g., \citep{Kass:1995loi}. NS works by casting \cref{eq:Z} as a one-dimensional integral via the volume variable, \begin{equation}\label{eq:X} X(\lambda) = \int_{\like(\params) > \lambda} \pi(\params) \,\intd \params. \end{equation} This is the prior volume enclosed within the iso-likelihood contour defined by $\lambda$. The evidence may then be written as \begin{equation}\label{eq:Z1d} \Z = \int_0^1 \like(X) \,\intd X, \end{equation} where in the overloaded notation $\like(X)$ is the inverse of $X(\lambda)$. The remaining challenge is computing the one-dimensional integral in \cref{eq:Z1d}. In NS we begin from $\nlive$ live points drawn from the prior. At each iteration of the NS algorithm, we discard the point with the smallest likelihood, $\threshold$, and sample a replacement drawn from the constrained prior, that is, drawn from $\pi(\params)$ subject to $\like(\params) > \threshold$. By the statistical properties of random samples drawn from the constrained prior, we expect that the volume $X(\threshold)$ compresses by $t$ at each iteration, where \begin{equation}\label{eq:t} \langle \log t \rangle = -\frac{1}{\nlive}. \end{equation} This enables us to estimate the volume at the $i$-th iteration by $X_i \equiv X(\threshold_i) = e^{-i/\nlive}$ and write the one-dimensional integral using the trapezium rule, \begin{equation}\label{eq:Z_sum} \Z \approx \sum_i \threshold_i \, w_i, \qquad w_i = \tfrac12 \left(X_{i - 1} - X_{i + 1}\right). \end{equation} The algorithm terminates once an estimate of the maximum remaining evidence, $\Delta \Z$, is less than a specified fraction, $\stoppingtol$, of the total evidence found, \begin{equation} \frac{\Delta \Z}{\Z} < \stoppingtol. \label{eqn:stop} \end{equation} The main numerical problem in an implementation of NS is efficiently sampling from the constrained prior. \subsection{Sampling from the constrained prior}\label{sec:constrained_prior} % NS runs could fail to produce accurate estimates of the evidence for a variety of reasons. % For example, although NS is particularly robust to the problems posed by multimodal distributions, modes could be missed.
% We focus on one particular problem: implementations of NS may fail to correctly sample from the constrained prior and thus produce biased evidences. Indeed, because rejection sampling from the entire prior would be impractically slow as the volume compresses exponentially, implementations of NS typically employ specialised subalgorithms to sample from the constrained prior. When these subalgorithms fail, the evidences may be unreliable. This was considered the most severe drawback of the NS algorithm in \citep{2018arXiv180503924S}. One such subalgorithm is ellipsoidal sampling~\citep{Mukherjee:2005wg,Feroz:2007kg}, a rejection sampling algorithm in which the live points are bounded by a set of ellipsoids. Potential live points are sampled from the ellipsoids and accepted only if $\like > \threshold$. Ellipsoidal NS is implemented in \MN~\citep{Feroz:2007kg,Feroz:2008xx,Feroz:2013hea}. For this to faithfully sample from the constrained prior, the ellipsoids must completely enclose the iso-likelihood contour defined by $\threshold$. To ensure this is the case, the ellipsoids are expanded by a factor $1 / \efr$, with $\efr = 0.3$ recommended for reliable evidences. Slice sampling~\citep{neal} is an alternative scheme for sampling from the constrained prior~\citep{aitken,Handley:2015fda}. A chord is drawn from a live point across the entire region enclosed by the iso-likelihood contour and a candidate point is drawn uniformly from along the chord. This is repeated \nr times to reduce correlations between the new point and the original live point. Slice sampling is implemented in \PC~\citep{Handley:2015fda,Handley:2015xxx}. The recommended number of repeats is $\nr = 2d$ for a $d$-dimensional function. \subsection{Plateaus in the likelihood}\label{sec:plateaus} Plateaus in the likelihood function, i.e., regions in which $\like(\params) = \text{const.}$, were discussed in \citep{2004AIPC..735..395S,Skilling:2006gxv} and more recently in \citep{2020arXiv200508602S}. In \citep{2020arXiv200508602S} it was stressed that they can lead to faulty estimates of the compression. In such cases, the live points are not uniformly distributed in $X$ (\cref{eq:X}), violating assumptions in \cref{eq:t}. \section{Using insertion indexes}\label{sec:test} \begingroup \begin{table*} \centerline{% \begin{tabular}{cccccccccc} $\efr$ & $d$ & Analytic \logZ & Mean $\logZ\pm \Delta\logZ$ & $\sigma_{\logZ}$ & SEM \logZ & Inaccuracy & Bias & Median \pvalue & Median rolling \pvalue\\ \hline \hyperref[sec:gaussian]{Gaussian}\\ \hline \input{MN_gaussian.tex} \hline \hyperref[sec:rosenbrock]{Rosenbrock}\\ \hline \input{MN_rosenbrock.tex} \hline \hyperref[sec:shells]{Shells}\\ \hline \input{MN_shells.tex} \hline \hyperref[sec:gaussian-log-gamma]{Mixture}\\ \hline \input{MN_mixture.tex} \end{tabular} } \caption{\label{tab:MN_summary} Summary of results of our insertion index cross-check for \MN. The numerical results are the average from \nrepeatSetting runs. Biases and inaccuracies greater than $3$ and \pvalues less than $0.01$ are highlighted in red.} \end{table*} \endgroup By \emph{insertion index}, we mean the index at which an element must be inserted to maintain order in a sorted list. With a left-sided convention, the insertion index $i$ of a sample $y$ in a sorted list $o$ is such that \begin{equation}\label{eq:insertion_index} o_{i - 1} < y \le o_i.
\end{equation} The key idea in this paper is to use the insertion indexes of new live points relative to existing live points sorted by enclosed prior volume, $X$, to detect problems in sampling from the constrained prior. Since the relationship between volume and likelihood is monotonic, we can sort by volume by sorting by likelihood. If new live points are genuinely sampled from the constrained prior leading to a uniform distribution in $X$, the insertion indexes, $i$, should be discrete uniformly distributed from $0$ to $\nlive - 1$, \begin{equation}\label{EQ:UNIFORM} i \sim \uniform(0, \nlive - 1). \end{equation} This result from order statistics is proven in \cref{app:proof}. During a NS run of $\niter$ iterations we thus find $\niter$ insertion indexes that should be uniformly distributed. Imagine, however, that during a NS run using ellipsoidal sampling, the ellipsoids encroached on the true iso-likelihood contour. In that case, the insertion indexes near the lowest-likelihood live points could be disfavoured, and the distribution of insertion indexes would deviate from uniformity. Alternatively, imagine that the likelihood function contains a plateau. Any initial live points that lie in the plateau share the same insertion index, leading to many repeated indexes and a strong deviation from a uniform distribution. Thus, we can perform a statistical test on the insertion indexes to detect deviations from a uniform distribution. The choice of test isn't important to our general idea of using information in the insertion indexes, though in our examples we use a Kolmogorov-Smirnov (KS) test~\citep{smirnov1948,kolmogorov1933sulla}, which we found to be powerful, to compute a \pvalue from all the iterations. We describe the KS test in \cref{app:ks}. Excepting plateaus, deviations from uniformity are caused by a \emph{change} in the distribution of new live points with respect to the existing live points. Since there is no technical challenge in sampling the initial live points from the prior, failures should typically occur during a run and thus be accompanied by a change in the distribution. In runs with many iterations in which a change occurs only once, the power of the test may be diluted by the many iterations before and after the distribution changes, as the insertion indexes before and after the change should be uniformly distributed. To mitigate this, we also perform multiple tests on chunks of iterations, find the smallest resulting \pvalue and apply a correction for multiple testing. We later refer to this as the rolling \pvalue. Since the volume compresses by $e$ in $\nlive$ iterations, we pick $\nlive$ as a reasonable size for a chunk of iterations. We treat each chunk as independent. The procedure for computing the rolling \pvalue is detailed in \cref{algo:rolling_p_value}. For clarity, let us stress that we later present \pvalues from all the iterations and rolling \pvalues. Functionality to perform these tests on \MN and \PC output is now included in \code{anesthetic-1.3.6 }~\citep{Handley:2019mfs}. 
% \begin{figure} % \begin{algorithm}[H] % \SetAlgoLined % \KwIn{$\niter$ insertion indexes} % Compute empirical CDF for the insertion indexes\; % Compute expected uniform CDF for insertion indexes\; % Compute $D_n$ via \cref{eq:Dn}\; % \KwRet{\pvalue from KS test with $D_n$ and $n = \niter$} % \caption{Computing \pvalue from insertion indexes.} % \label{algo:p_value} % \end{algorithm} % \end{figure} \begin{algorithm}[h] \SetAlgoLined \KwIn{Set of $\niter$ insertion indexes} Split the insertion indexes into consecutive chunks of size $\nlive$. The size of the final chunk may be less than $\nlive$\; \ForEach{chunk of insertion indexes}{Apply KS test to obtain a \pvalue\;} Let $p$ equal the minimum of such \pvalues\; Let $n$ equal the number of chunks\; \KwRet{Rolling \pvalue{} --- minimum \pvalue adjusted for multiple tests,~$1 - (1 - p)^n$\;} \caption{The rolling \pvalue.} \label{algo:rolling_p_value} \end{algorithm} We furthermore neglect correlations between the insertion indexes. % We anticipate, however, that the insertion indexes \emph{repel} each other, possibly making tests that assume that the indexes are independent conservative. Finally, we stress that the magnitude of the deviation from uniform, as well as the \pvalue, should be noted. A small \pvalue alone isn't necessarily cause for concern if the departure from uniformity is negligible. \section{Examples}\label{sec:examples} \begingroup \begin{table*} \centerline{% \begin{tabular}{cccccccccc} $d/\nr$ & $d$ & Analytic \logZ & Mean $\logZ\pm \Delta\logZ$ & $\sigma_{\logZ}$ & SEM \logZ & Inaccuracy & Bias & Median \pvalue & Median rolling \pvalue\\ \hline \hyperref[sec:gaussian]{Gaussian}\\ \hline \input{PC_gaussian.tex} \hline \hyperref[sec:rosenbrock]{Rosenbrock}\\ \hline \input{PC_rosenbrock.tex} \hline \hyperref[sec:shells]{Shells}\\ \hline \input{PC_shells.tex} \hline \hyperref[sec:gaussian-log-gamma]{Mixture}\\ \hline \input{PC_mixture.tex} \end{tabular} } \caption{\label{tab:PC_summary} Summary of results of our insertion index cross-check for \PC. See \cref{tab:MN_summary} for further details. In this table we show $d / \nr$, which may be thought of as a ``\PC efficiency'' analogue of the \MN efficiency $\efr$.} \end{table*} \endgroup \subsection{Toy functions} We now present detailed numerical examples of our cross-check using NS runs on toy functions with \MNVersion~\citep{Feroz:2007kg,Feroz:2008xx,Feroz:2013hea} and \PCVersion~\citep{Handley:2015fda,Handley:2015xxx}. We chose toy functions with known analytic evidences or precisely known numerical estimates of the evidence to demonstrate that biased results from NS are detectable with our approach. The toy functions are described in \cref{app:toy_problems}. We performed \nrepeatSetting \MN and \PC runs on each toy function to study the statistical properties of their outputs. We used $\nlive = \nliveSetting$ and $\stoppingtol = \tolSetting$ throughout. To generate biased NS runs, we used inappropriate settings, e.g., $\efr > 1$ in \MN or few repeats \nr in \PC. [...] The prior is uniform in each parameter from $-30$ to $30$. Since the likelihood is a pdf in $\params$, the analytic $\logZ$ is governed by the prior normalization factor, $\logZ = \log(1/60^d) \approx -4.1 d$, modulo small truncation errors introduced by the prior. ```
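As an editorial aside, the insertion-index cross-check described in the TeX source above can be made concrete in a few lines of Python. The underlying property follows from exchangeability: a new point drawn from the same constrained prior as the surviving live points is equally likely to fall into any of the nlive rank positions. The sketch below is a minimal illustration, not the `anesthetic` implementation: it assumes only `numpy` and `scipy`, and simulates an idealised NS run on a one-dimensional toy problem (uniform prior on $[0, 1]$ with likelihood $\mathcal{L}(\theta) = \theta$, so a perfect constrained-prior draw is trivial) before applying a KS test for uniformity.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
nlive, niter = 500, 5000

# Idealised NS run: with a uniform prior on [0, 1] and likelihood L(theta) = theta,
# the constrained prior above a threshold L* is uniform on (L*, 1), so live points
# can be represented directly by their likelihood values.
live = np.sort(rng.uniform(0.0, 1.0, nlive))
indexes = []
for _ in range(niter):
    threshold = live[0]                # discard the lowest-likelihood live point
    new = rng.uniform(threshold, 1.0)  # a perfect constrained-prior draw
    indexes.append(np.searchsorted(live[1:], new))  # left-sided index in 0..nlive-1
    live = np.sort(np.append(live[1:], new))

# Under correct sampling the indexes are uniform on {0, ..., nlive - 1}. Map them
# to (0, 1) and apply a one-sample KS test against U(0, 1); the +0.5 offset is a
# crude continuity correction, adequate for nlive >> 1.
u = (np.asarray(indexes) + 0.5) / nlive
print(f"p-value = {stats.kstest(u, 'uniform').pvalue:.3f}")
```

Replacing the perfect draw with one that under-covers the constrained region (for example, sampling only from the lower half of the allowed likelihood range) drives the p-value towards zero, which is exactly the failure mode the cross-check targets.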
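The rolling p-value algorithm in the TeX source (split the indexes into chunks of size nlive, take the minimum chunk-wise KS p-value, and correct for multiple tests) admits an equally short sketch under the same assumptions; `indexes` and `nlive` here are the variables from the sketch above.

```python
import numpy as np
from scipy import stats

def rolling_p_value(indexes, nlive):
    """Minimum chunk-wise KS p-value, corrected for multiple testing: 1 - (1 - p)^n."""
    chunks = [indexes[i:i + nlive] for i in range(0, len(indexes), nlive)]
    pvalues = [
        stats.kstest((np.asarray(chunk) + 0.5) / nlive, "uniform").pvalue
        for chunk in chunks
    ]
    p, n = min(pvalues), len(pvalues)
    return 1.0 - (1.0 - p) ** n

print(f"rolling p-value = {rolling_p_value(indexes, nlive):.3f}")
```

Treating the chunks as independent mirrors the assumption stated in the paper; a change in the sampler's behaviour part-way through a long run then shows up in at least one chunk rather than being diluted across all iterations.

4.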
**Bibliographic Information:** ```bbl \begin{thebibliography}{} \makeatletter \relax \def\mn@urlcharsother{\let\do\@makeother \do\$\do\&\do\#\do\^\do\_\do\%\do\~} \def\mn@doi{\begingroup\mn@urlcharsother \@ifnextchar [ {\mn@doi@} {\mn@doi@[]}} \def\mn@doi@[#1]#2{\def\@tempa{#1}\ifx\@tempa\@empty \href {http://dx.doi.org/#2} {doi:#2}\else \href {http://dx.doi.org/#2} {#1}\fi \endgroup} \def\mn@eprint#1#2{\mn@eprint@#1:#2::\@nil} \def\mn@eprint@arXiv#1{\href {http://arxiv.org/abs/#1} {{\tt arXiv:#1}}} \def\mn@eprint@dblp#1{\href {http://dblp.uni-trier.de/rec/bibtex/#1.xml} {dblp:#1}} \def\mn@eprint@#1:#2:#3:#4\@nil{\def\@tempa {#1}\def\@tempb {#2}\def\@tempc {#3}\ifx \@tempc \@empty \let \@tempc \@tempb \let \@tempb \@tempa \fi \ifx \@tempb \@empty \def\@tempb {arXiv}\fi \@ifundefined {mn@eprint@\@tempb}{\@tempb:\@tempc}{\expandafter \expandafter \csname mn@eprint@\@tempb\endcsname \expandafter{\@tempc}}} \bibitem[\protect\citeauthoryear{Abbott et~al.}{Abbott et~al.}{2016a}]{TheLIGOScientific:2016pea} Abbott B.~P., et~al., 2016a, \mn@doi [Phys. Rev.] {10.1103/PhysRevX.6.041015, 10.1103/PhysRevX.8.039903}, X6, 041015 \bibitem[\protect\citeauthoryear{Abbott et~al.}{Abbott et~al.}{2016b}]{TheLIGOScientific:2016src} Abbott B.~P., et~al., 2016b, \mn@doi [Phys. Rev. Lett.] {10.1103/PhysRevLett.116.221101, 10.1103/PhysRevLett.121.129902}, 116, 221101 \bibitem[\protect\citeauthoryear{Aitken \& Akman}{Aitken \& Akman}{2013}]{aitken} Aitken S., Akman O.~E., 2013, \mn@doi [BMC Systems Biology] {10.1186/1752-0509-7-72}, 7, 72 \bibitem[\protect\citeauthoryear{Arnold \& Emerson}{Arnold \& Emerson}{2011}]{RJ-2011-016} Arnold T.~B., Emerson J.~W., 2011, \mn@doi [{The R Journal}] {10.32614/RJ-2011-016}, 3, 34 \bibitem[\protect\citeauthoryear{Ashton et~al.}{Ashton et~al.}{2019}]{Ashton:2018jfp} Ashton G., et~al., 2019, \mn@doi [Astrophys. J. Suppl.] {10.3847/1538-4365/ab06fc}, 241, 27 \bibitem[\protect\citeauthoryear{{Audren}, {Lesgourgues}, {Benabed} \& {Prunet}}{{Audren} et~al.}{2013}]{2013JCAP...02..001A} {Audren} B., {Lesgourgues} J., {Benabed} K., {Prunet} S., 2013, \mn@doi [JCAP] {10.1088/1475-7516/2013/02/001}, \href {https://ui.adsabs.harvard.edu/abs/2013JCAP...02..001A} {2013, 001} \bibitem[\protect\citeauthoryear{Baldock, P\'artay, Bart\'ok, Payne \& Cs\'anyi}{Baldock et~al.}{2016}]{PhysRevB.93.174108} Baldock R. J.~N., P\'artay L.~B., Bart\'ok A.~P., Payne M.~C., Cs\'anyi G., 2016, \mn@doi [Phys. Rev. B] {10.1103/PhysRevB.93.174108}, 93, 174108 \bibitem[\protect\citeauthoryear{Baldock, Bernstein, Salerno, P\'artay \& Cs\'anyi}{Baldock et~al.}{2017}]{PhysRevE.96.043311} Baldock R. J.~N., Bernstein N., Salerno K.~M., P\'artay L.~B., Cs\'anyi G., 2017, \mn@doi [Phys. Rev. E] {10.1103/PhysRevE.96.043311}, 96, 043311 \bibitem[\protect\citeauthoryear{{Beaujean} \& {Caldwell}}{{Beaujean} \& {Caldwell}}{2013}]{2013arXiv1304.7808B} {Beaujean} F., {Caldwell} A., 2013, arXiv e-prints, \href {https://ui.adsabs.harvard.edu/abs/2013arXiv1304.7808B} {p. arXiv:1304.7808} \bibitem[\protect\citeauthoryear{Bolhuis \& Cs\'anyi}{Bolhuis \& Cs\'anyi}{2018}]{PhysRevLett.120.250601} Bolhuis P.~G., Cs\'anyi G., 2018, \mn@doi [Phys. Rev. Lett.] {10.1103/PhysRevLett.120.250601}, 120, 250601 \bibitem[\protect\citeauthoryear{Buchmueller et~al.}{Buchmueller et~al.}{2014}]{Buchmueller:2013rsa} Buchmueller O., et~al., 2014, \mn@doi [Eur. Phys. J. 
C] {10.1140/epjc/s10052-014-2922-3}, 74, 2922 \bibitem[\protect\citeauthoryear{{Buchner}}{{Buchner}}{2016}]{2014arXiv1407.5459B} {Buchner} J., 2016, \mn@doi [Statistics and Computing] {10.1007/s11222-014-9512-y}, 26, 383 \bibitem[\protect\citeauthoryear{Buchner et~al.,}{Buchner et~al.}{2014}]{Buchner:2014nha} Buchner J., et~al., 2014, \mn@doi [Astron. Astrophys.] {10.1051/0004-6361/201322971}, 564, A125 \bibitem[\protect\citeauthoryear{Easther \& Peiris}{Easther \& Peiris}{2012}]{Easther:2011yq} Easther R., Peiris H.~V., 2012, \mn@doi [Phys. Rev. D] {10.1103/PhysRevD.85.103533}, 85, 103533 \bibitem[\protect\citeauthoryear{Feroz \& Hobson}{Feroz \& Hobson}{2008}]{Feroz:2007kg} Feroz F., Hobson M.~P., 2008, \mn@doi [Mon. Not. Roy. Astron. Soc.] {10.1111/j.1365-2966.2007.12353.x}, 384, 449 \bibitem[\protect\citeauthoryear{Feroz, Allanach, Hobson, AbdusSalam, Trotta \& Weber}{Feroz et~al.}{2008}]{Feroz:2008wr} Feroz F., Allanach B.~C., Hobson M., AbdusSalam S.~S., Trotta R., Weber A.~M., 2008, \mn@doi [JHEP] {10.1088/1126-6708/2008/10/064}, 10, 064 \bibitem[\protect\citeauthoryear{Feroz, Hobson \& Bridges}{Feroz et~al.}{2009}]{Feroz:2008xx} Feroz F., Hobson M.~P., Bridges M., 2009, \mn@doi [Mon. Not. Roy. Astron. Soc.] {10.1111/j.1365-2966.2009.14548.x}, 398, 1601 \bibitem[\protect\citeauthoryear{Feroz, Hobson, Cameron \& Pettitt}{Feroz et~al.}{2013}]{Feroz:2013hea} Feroz F., Hobson M.~P., Cameron E., Pettitt A.~N., 2013, \mn@doi [The Open Journal of Astrophysics] {10.21105/astro.1306.2144} \bibitem[\protect\citeauthoryear{Fowlie, Su \& Handley}{Fowlie et~al.}{2020}]{fowlie_andrew_2020_3958749} Fowlie A., Su L., Handley W., 2020, {Supplementary data for Nested sampling cross- checks using order statistics}, \mn@doi{10.5281/zenodo.3958749} \bibitem[\protect\citeauthoryear{Handley}{Handley}{2019a}]{will_handley_2019_3371152} Handley W., 2019a, {Curvature tension: evidence for a closed universe (supplementary inference products)}, \mn@doi{10.5281/zenodo.3371152} \bibitem[\protect\citeauthoryear{{Handley}}{{Handley}}{2019b}]{Handley:2019tkm} {Handley} W., 2019b, arXiv e-prints, \href {https://ui.adsabs.harvard.edu/abs/2019arXiv190809139H} {p. arXiv:1908.09139} \bibitem[\protect\citeauthoryear{Handley}{Handley}{2019c}]{Handley:2019mfs} Handley W., 2019c, \mn@doi [J. Open Source Softw.] {10.21105/joss.01414}, 4, 1414 \bibitem[\protect\citeauthoryear{Handley}{Handley}{2019d}]{anesthetic} Handley W., 2019d, \mn@doi [The Journal of Open Source Software] {10.21105/joss.01414}, 4, 1414 \bibitem[\protect\citeauthoryear{Handley, Hobson \& Lasenby}{Handley et~al.}{2015a}]{Handley:2015fda} Handley W.~J., Hobson M.~P., Lasenby A.~N., 2015a, \mn@doi [Mon. Not. Roy. Astron. Soc.] {10.1093/mnrasl/slv047}, 450, L61 \bibitem[\protect\citeauthoryear{{Handley}, {Hobson} \& {Lasenby}}{{Handley} et~al.}{2015b}]{Handley:2015xxx} {Handley} W.~J., {Hobson} M.~P., {Lasenby} A.~N., 2015b, \mn@doi [Mon. Not. Roy. Astron. Soc.] {10.1093/mnras/stv1911}, \href {https://ui.adsabs.harvard.edu/abs/2015MNRAS.453.4384H} {453, 4384} \bibitem[\protect\citeauthoryear{Higson, Handley, Hobson, Lasenby et~al.}{Higson et~al.}{2018}]{higson2018sampling} Higson E., Handley W., Hobson M., Lasenby A., et~al., 2018, Bayesian Analysis, 13, 873 \bibitem[\protect\citeauthoryear{Higson, Handley, Hobson \& Lasenby}{Higson et~al.}{2019}]{Higson:2018cqj} Higson E., Handley W., Hobson M., Lasenby A., 2019, \mn@doi [Mon. Not. Roy. Astron. Soc.] 
{10.1093/mnras/sty3090}, 483, 2044 \bibitem[\protect\citeauthoryear{Hlozek, Grin, Marsh \& Ferreira}{Hlozek et~al.}{2015}]{Hlozek:2014lca} Hlozek R., Grin D., Marsh D. J.~E., Ferreira P.~G., 2015, \mn@doi [Phys. Rev. D] {10.1103/PhysRevD.91.103512}, 91, 103512 \bibitem[\protect\citeauthoryear{Johnson, Kirk \& Stumpf}{Johnson et~al.}{2014}]{10.1093/bioinformatics/btu675} Johnson R., Kirk P., Stumpf M. P.~H., 2014, \mn@doi [Bioinformatics] {10.1093/bioinformatics/btu675}, 31, 604 \bibitem[\protect\citeauthoryear{Kass \& Raftery}{Kass \& Raftery}{1995}]{Kass:1995loi} Kass R.~E., Raftery A.~E., 1995, \mn@doi [J. Am. Statist. Assoc.] {10.1080/01621459.1995.10476572}, 90, 773 \bibitem[\protect\citeauthoryear{Kolmogorov}{Kolmogorov}{1933}]{kolmogorov1933sulla} Kolmogorov A., 1933, Giornale dell’ Instuto Italiano degli Attuari, 4, 83 \bibitem[\protect\citeauthoryear{{Liddle}}{{Liddle}}{2007}]{2007MNRAS.377L..74L} {Liddle} A.~R., 2007, \mn@doi [\mnras] {10.1111/j.1745-3933.2007.00306.x}, \href {https://ui.adsabs.harvard.edu/abs/2007MNRAS.377L..74L} {377, L74} \bibitem[\protect\citeauthoryear{Marsaglia, Tsang \& Wang}{Marsaglia et~al.}{2003}]{JSSv008i18} Marsaglia G., Tsang W.~W., Wang J., 2003, \mn@doi [Journal of Statistical Software, Articles] {10.18637/jss.v008.i18}, 8, 1 \bibitem[\protect\citeauthoryear{Martin, Ringeval, Trotta \& Vennin}{Martin et~al.}{2014}]{Martin:2013nzq} Martin J., Ringeval C., Trotta R., Vennin V., 2014, \mn@doi [JCAP] {10.1088/1475-7516/2014/03/039}, 03, 039 \bibitem[\protect\citeauthoryear{Martinez, McKay, Farmer, Scott, Roebber, Putze \& Conrad}{Martinez et~al.}{2017}]{Workgroup:2017htr} Martinez G.~D., McKay J., Farmer B., Scott P., Roebber E., Putze A., Conrad J., 2017, \mn@doi [Eur. Phys. J.] {10.1140/epjc/s10052-017-5274-y}, C77, 761 \bibitem[\protect\citeauthoryear{Martiniani, Stevenson, Wales \& Frenkel}{Martiniani et~al.}{2014}]{PhysRevX.4.031034} Martiniani S., Stevenson J.~D., Wales D.~J., Frenkel D., 2014, \mn@doi [Phys. Rev. X] {10.1103/PhysRevX.4.031034}, 4, 031034 \bibitem[\protect\citeauthoryear{Mukherjee, Parkinson \& Liddle}{Mukherjee et~al.}{2006}]{Mukherjee:2005wg} Mukherjee P., Parkinson D., Liddle A.~R., 2006, \mn@doi [Astrophys. J.] {10.1086/501068}, 638, L51 \bibitem[\protect\citeauthoryear{Neal}{Neal}{2003}]{neal} Neal R.~M., 2003, \mn@doi [Ann. Statist.] {10.1214/aos/1056562461}, 31, 705 \bibitem[\protect\citeauthoryear{Nielsen}{Nielsen}{2013}]{doi:10.1063/1.4821761} Nielsen S.~O., 2013, \mn@doi [The Journal of Chemical Physics] {10.1063/1.4821761}, 139, 124104 \bibitem[\protect\citeauthoryear{P\'artay, Bart\'ok \& Cs\'anyi}{P\'artay et~al.}{2014}]{PhysRevE.89.022302} P\'artay L.~B., Bart\'ok A.~P., Cs\'anyi G., 2014, \mn@doi [Phys. Rev. E] {10.1103/PhysRevE.89.022302}, 89, 022302 \bibitem[\protect\citeauthoryear{{Planck Collaboration} et~al.,}{{Planck Collaboration} et~al.}{2018}]{Akrami:2018odb} {Planck Collaboration} et~al., 2018, arXiv e-prints, \href {https://ui.adsabs.harvard.edu/abs/2018arXiv180706211P} {p. 
arXiv:1807.06211} \bibitem[\protect\citeauthoryear{Pártay, Bartók \& Csányi}{Pártay et~al.}{2010}]{doi:10.1021/jp1012973} Pártay L.~B., Bartók A.~P., Csányi G., 2010, \mn@doi [The Journal of Physical Chemistry B] {10.1021/jp1012973}, 114, 10502 \bibitem[\protect\citeauthoryear{Rosenbrock}{Rosenbrock}{1960}]{10.1093/comjnl/3.3.175} Rosenbrock H.~H., 1960, \mn@doi [The Computer Journal] {10.1093/comjnl/3.3.175}, 3, 175 \bibitem[\protect\citeauthoryear{Russel, Brewer, Klaere \& Bouckaert}{Russel et~al.}{2018}]{10.1093/sysbio/syy050} Russel P.~M., Brewer B.~J., Klaere S., Bouckaert R.~R., 2018, \mn@doi [Systematic Biology] {10.1093/sysbio/syy050}, 68, 219 \bibitem[\protect\citeauthoryear{{Salomone}, {South}, {Drovandi} \& {Kroese}}{{Salomone} et~al.}{2018}]{2018arXiv180503924S} {Salomone} R., {South} L.~F., {Drovandi} C.~C., {Kroese} D.~P., 2018, arXiv e-prints, \href {https://ui.adsabs.harvard.edu/abs/2018arXiv180503924S} {p. arXiv:1805.03924} \bibitem[\protect\citeauthoryear{{Schittenhelm} \& {Wacker}}{{Schittenhelm} \& {Wacker}}{2020}]{2020arXiv200508602S} {Schittenhelm} D., {Wacker} P., 2020, arXiv e-prints, \href {https://ui.adsabs.harvard.edu/abs/2020arXiv200508602S} {p. arXiv:2005.08602} \bibitem[\protect\citeauthoryear{{Skilling}}{{Skilling}}{2004}]{2004AIPC..735..395S} {Skilling} J., 2004, in {Fischer} R., {Preuss} R., {Toussaint} U.~V., eds, American Institute of Physics Conference Series Vol. 735, American Institute of Physics Conference Series. pp 395--405, \mn@doi{10.1063/1.1835238} \bibitem[\protect\citeauthoryear{Skilling}{Skilling}{2006}]{Skilling:2006gxv} Skilling J., 2006, \mn@doi [Bayesian Analysis] {10.1214/06-BA127}, 1, 833 \bibitem[\protect\citeauthoryear{Smirnov}{Smirnov}{1948}]{smirnov1948} Smirnov N., 1948, \mn@doi [Ann. Math. Statist.] {10.1214/aoms/1177730256}, 19, 279 \bibitem[\protect\citeauthoryear{{Speagle}}{{Speagle}}{2020}]{2020MNRAS.tmp..280S} {Speagle} J.~S., 2020, \mn@doi [Mon. Not. Roy. Astron. Soc.] {10.1093/mnras/staa278}, \href {https://ui.adsabs.harvard.edu/abs/2020MNRAS.tmp..280S} {} \bibitem[\protect\citeauthoryear{Trotta, Feroz, Hobson, Roszkowski \& Ruiz~de Austri}{Trotta et~al.}{2008}]{Trotta:2008bp} Trotta R., Feroz F., Hobson M.~P., Roszkowski L., Ruiz~de Austri R., 2008, \mn@doi [JHEP] {10.1088/1126-6708/2008/12/024}, 12, 024 \bibitem[\protect\citeauthoryear{Trotta, Jóhannesson, Moskalenko, Porter, de Austri \& Strong}{Trotta et~al.}{2011}]{Trotta:2010mx} Trotta R., Jóhannesson G., Moskalenko I.~V., Porter T.~A., de Austri R.~R., Strong A.~W., 2011, \mn@doi [Astrophys. J.] {10.1088/0004-637X/729/2/106}, 729, 106 \bibitem[\protect\citeauthoryear{Veitch et~al.}{Veitch et~al.}{2015}]{Veitch:2014wba} Veitch J., et~al., 2015, \mn@doi [Phys. Rev.] {10.1103/PhysRevD.91.042003}, D91, 042003 \bibitem[\protect\citeauthoryear{{Virtanen} et~al.}{{Virtanen} et~al.}{2020}]{2020SciPy-NMeth} {Virtanen} P., et~al., 2020, \mn@doi [Nature Methods] {https://doi.org/10.1038/s41592-019-0686-2}, \href {https://rdcu.be/b08Wh} {} \makeatother \end{thebibliography} ``` 5. 
**Author Information:** - Lead Author: {'name': 'Andrew Fowlie'} - Full Authors List: ```yaml Andrew Fowlie: {} Will Handley: pi: start: 2020-10-01 thesis: null postdoc: start: 2016-10-01 end: 2020-10-01 thesis: null phd: start: 2012-10-01 end: 2016-09-30 supervisors: - Anthony Lasenby - Mike Hobson thesis: 'Kinetic initial conditions for inflation: theory, observation and methods' original_image: images/originals/will_handley.jpeg image: /assets/group/images/will_handley.jpg links: Webpage: https://willhandley.co.uk Liangliang Su: {} ``` This YAML file provides a concise snapshot of an academic research group. It lists members by name along with their academic roles—ranging from Part III and summer projects to MPhil, PhD, and postdoctoral positions—with corresponding dates, thesis topics, and supervisor details. Supplementary metadata includes image paths and links to personal or departmental webpages. A dedicated "coi" section profiles senior researchers, highlighting the group’s collaborative mentoring network and career trajectories in cosmology, astrophysics, and Bayesian data analysis. ==================================================================================== Final Output Instructions ==================================================================================== - Combine all data sources to create a seamless, engaging narrative. - Follow the exact Markdown output format provided at the top. - Do not include any extra explanation, commentary, or wrapping beyond the specified Markdown. - Validate that every bibliographic reference with a DOI or arXiv identifier is converted into a Markdown link as per the examples. - Validate that every Markdown author link corresponds to a link in the author information block. - Before finalizing, confirm that no LaTeX citation commands or other undesired formatting remain. - Before finalizing, confirm that the link to the paper itself [2006.03371](https://arxiv.org/abs/2006.03371) is featured in the first sentence. Generate only the final Markdown output that meets all these requirements. {% endraw %}