{% raw %}
Title: Create a Markdown Blog Post Integrating Research Details and a Featured Paper
====================================================================================

This task involves generating a Markdown file (ready for a GitHub-served Jekyll site) that integrates our research details with a featured research paper. The output must follow the exact format and conventions described below.

====================================================================================
Output Format (Markdown):
------------------------------------------------------------------------------------
---
layout: post
title: "Nested sampling with any prior you like"
date: 2021-02-24
categories: papers
---

![AI generated image](/assets/images/posts/2021-02-24-2102.12478.png)

Will Handley

Content generated by [gemini-2.5-pro](https://deepmind.google/technologies/gemini/) using [this prompt](/prompts/content/2021-02-24-2102.12478.txt).

Image generated by [imagen-3.0-generate-002](https://deepmind.google/technologies/gemini/) using [this prompt](/prompts/images/2021-02-24-2102.12478.txt).
------------------------------------------------------------------------------------

====================================================================================
Please adhere strictly to the following instructions:
====================================================================================

Section 1: Content Creation Instructions
====================================================================================

1. **Generate the Page Body:**
   - Write a well-composed, engaging narrative that is suitable for a scholarly audience interested in advanced AI and astrophysics.
   - Ensure the narrative is original and reflective of the tone, style, and content of the "Homepage Content" block (provided below), but do not reuse its content.
   - Use bullet points, subheadings, or other formatting to enhance readability.

2. **Highlight Key Research Details:**
   - Emphasize the contributions and impact of the paper, focusing on its methodology, significance, and context within current research.
   - Specifically highlight the lead author ({'name': 'Justin Alsing'}). When referencing any author, use Markdown links from the Author Information block (choose academic or GitHub links over social media).

3. **Integrate Data from Multiple Sources:**
   - Seamlessly weave information from the following:
     - **Paper Metadata (YAML):** Essential details including the title and authors.
     - **Paper Source (TeX):** Technical content from the paper.
     - **Bibliographic Information (bbl):** Extract bibliographic references.
     - **Author Information (YAML):** Profile details for constructing Markdown links.
   - Merge insights from the Paper Metadata, TeX source, Bibliographic Information, and Author Information blocks into a coherent narrative—do not treat these as separate or isolated pieces.
   - Insert the generated narrative between the paired begin-content and end-content HTML comment markers in the output template.

4. **Generate Bibliographic References:**
   - Review the Bibliographic Information block carefully.
   - For each reference that includes a DOI or arXiv identifier:
     - For DOIs, generate a link formatted as: [10.1234/xyz](https://doi.org/10.1234/xyz)
     - For arXiv entries, generate a link formatted as: [2103.12345](https://arxiv.org/abs/2103.12345)
   - **Important:** Do not use any LaTeX citation commands (e.g., `\cite{...}`). Every reference must be rendered directly as a Markdown link.
For example, instead of `\cite{mycitation}`, output `[mycitation](https://doi.org/mycitation)` - **Incorrect:** `\cite{10.1234/xyz}` - **Correct:** `[10.1234/xyz](https://doi.org/10.1234/xyz)` - Ensure that at least three (3) of the most relevant references are naturally integrated into the narrative. - Ensure that the link to the Featured paper [2102.12478](https://arxiv.org/abs/2102.12478) is included in the first sentence. 5. **Final Formatting Requirements:** - The output must be plain Markdown; do not wrap it in Markdown code fences. - Preserve the YAML front matter exactly as provided. ==================================================================================== Section 2: Provided Data for Integration ==================================================================================== 1. **Homepage Content (Tone and Style Reference):** ```markdown --- layout: home --- ![AI generated image](/assets/images/index.png) The Handley Research Group stands at the forefront of cosmological exploration, pioneering novel approaches that fuse fundamental physics with the transformative power of artificial intelligence. We are a dynamic team of researchers, including PhD students, postdoctoral fellows, and project students, based at the University of Cambridge. Our mission is to unravel the mysteries of the Universe, from its earliest moments to its present-day structure and ultimate fate. We tackle fundamental questions in cosmology and astrophysics, with a particular focus on leveraging advanced Bayesian statistical methods and AI to push the frontiers of scientific discovery. Our research spans a wide array of topics, including the [primordial Universe](https://arxiv.org/abs/1907.08524), [inflation](https://arxiv.org/abs/1807.06211), the nature of [dark energy](https://arxiv.org/abs/2503.08658) and [dark matter](https://arxiv.org/abs/2405.17548), [21-cm cosmology](https://arxiv.org/abs/2210.07409), the [Cosmic Microwave Background (CMB)](https://arxiv.org/abs/1807.06209), and [gravitational wave astrophysics](https://arxiv.org/abs/2411.17663). ### Our Research Approach: Innovation at the Intersection of Physics and AI At The Handley Research Group, we develop and apply cutting-edge computational techniques to analyze complex astronomical datasets. Our work is characterized by a deep commitment to principled [Bayesian inference](https://arxiv.org/abs/2205.15570) and the innovative application of [artificial intelligence (AI) and machine learning (ML)](https://arxiv.org/abs/2504.10230). **Key Research Themes:** * **Cosmology:** We investigate the early Universe, including [quantum initial conditions for inflation](https://arxiv.org/abs/2002.07042) and the generation of [primordial power spectra](https://arxiv.org/abs/2112.07547). We explore the enigmatic nature of [dark energy, using methods like non-parametric reconstructions](https://arxiv.org/abs/2503.08658), and search for new insights into [dark matter](https://arxiv.org/abs/2405.17548). A significant portion of our efforts is dedicated to [21-cm cosmology](https://arxiv.org/abs/2104.04336), aiming to detect faint signals from the Cosmic Dawn and the Epoch of Reionization. * **Gravitational Wave Astrophysics:** We develop methods for [analyzing gravitational wave signals](https://arxiv.org/abs/2411.17663), extracting information about extreme astrophysical events and fundamental physics. 
* **Bayesian Methods & AI for Physical Sciences:** A core component of our research is the development of novel statistical and AI-driven methodologies. This includes advancing [nested sampling techniques](https://arxiv.org/abs/1506.00171) (e.g., [PolyChord](https://arxiv.org/abs/1506.00171), [dynamic nested sampling](https://arxiv.org/abs/1704.03459), and [accelerated nested sampling with $\beta$-flows](https://arxiv.org/abs/2411.17663)), creating powerful [simulation-based inference (SBI) frameworks](https://arxiv.org/abs/2504.10230), and employing [machine learning for tasks such as radiometer calibration](https://arxiv.org/abs/2504.16791), [cosmological emulation](https://arxiv.org/abs/2503.13263), and [mitigating radio frequency interference](https://arxiv.org/abs/2211.15448). We also explore the potential of [foundation models for scientific discovery](https://arxiv.org/abs/2401.00096). **Technical Contributions:** Our group has a strong track record of developing widely-used scientific software. Notable examples include: * [**PolyChord**](https://arxiv.org/abs/1506.00171): A next-generation nested sampling algorithm for Bayesian computation. * [**anesthetic**](https://arxiv.org/abs/1905.04768): A Python package for processing and visualizing nested sampling runs. * [**GLOBALEMU**](https://arxiv.org/abs/2104.04336): An emulator for the sky-averaged 21-cm signal. * [**maxsmooth**](https://arxiv.org/abs/2007.14970): A tool for rapid maximally smooth function fitting. * [**margarine**](https://arxiv.org/abs/2205.12841): For marginal Bayesian statistics using normalizing flows and KDEs. * [**fgivenx**](https://arxiv.org/abs/1908.01711): A package for functional posterior plotting. * [**nestcheck**](https://arxiv.org/abs/1804.06406): Diagnostic tests for nested sampling calculations. ### Impact and Discoveries Our research has led to significant advancements in cosmological data analysis and yielded new insights into the Universe. Key achievements include: * Pioneering the development and application of advanced Bayesian inference tools, such as [PolyChord](https://arxiv.org/abs/1506.00171), which has become a cornerstone for cosmological parameter estimation and model comparison globally. * Making significant contributions to the analysis of major cosmological datasets, including the [Planck mission](https://arxiv.org/abs/1807.06209), providing some of the tightest constraints on cosmological parameters and models of [inflation](https://arxiv.org/abs/1807.06211). * Developing novel AI-driven approaches for astrophysical challenges, such as using [machine learning for radiometer calibration in 21-cm experiments](https://arxiv.org/abs/2504.16791) and [simulation-based inference for extracting cosmological information from galaxy clusters](https://arxiv.org/abs/2504.10230). * Probing the nature of dark energy through innovative [non-parametric reconstructions of its equation of state](https://arxiv.org/abs/2503.08658) from combined datasets. * Advancing our understanding of the early Universe through detailed studies of [21-cm signals from the Cosmic Dawn and Epoch of Reionization](https://arxiv.org/abs/2301.03298), including the development of sophisticated foreground modelling techniques and emulators like [GLOBALEMU](https://arxiv.org/abs/2104.04336). 
* Developing new statistical methods for quantifying tensions between cosmological datasets ([Quantifying tensions in cosmological parameters: Interpreting the DES evidence ratio](https://arxiv.org/abs/1902.04029)) and for robust Bayesian model selection ([Bayesian model selection without evidences: application to the dark energy equation-of-state](https://arxiv.org/abs/1506.09024)). * Exploring fundamental physics questions such as potential [parity violation in the Large-Scale Structure using machine learning](https://arxiv.org/abs/2410.16030). ### Charting the Future: AI-Powered Cosmological Discovery The Handley Research Group is poised to lead a new era of cosmological analysis, driven by the explosive growth in data from next-generation observatories and transformative advances in artificial intelligence. Our future ambitions are centred on harnessing these capabilities to address the most pressing questions in fundamental physics. **Strategic Research Pillars:** * **Next-Generation Simulation-Based Inference (SBI):** We are developing advanced SBI frameworks to move beyond traditional likelihood-based analyses. This involves creating sophisticated codes for simulating [Cosmic Microwave Background (CMB)](https://arxiv.org/abs/1908.00906) and [Baryon Acoustic Oscillation (BAO)](https://arxiv.org/abs/1607.00270) datasets from surveys like DESI and 4MOST, incorporating realistic astrophysical effects and systematic uncertainties. Our AI initiatives in this area focus on developing and implementing cutting-edge SBI algorithms, particularly [neural ratio estimation (NRE) methods](https://arxiv.org/abs/2407.15478), to enable robust and scalable inference from these complex simulations. * **Probing Fundamental Physics:** Our enhanced analytical toolkit will be deployed to test the standard cosmological model ($\Lambda$CDM) with unprecedented precision and to explore [extensions to Einstein's General Relativity](https://arxiv.org/abs/2006.03581). We aim to constrain a wide range of theoretical models, from modified gravity to the nature of [dark matter](https://arxiv.org/abs/2106.02056) and [dark energy](https://arxiv.org/abs/1701.08165). This includes leveraging data from upcoming [gravitational wave observatories](https://arxiv.org/abs/1803.10210) like LISA, alongside CMB and large-scale structure surveys from facilities such as Euclid and JWST. * **Synergies with Particle Physics:** We will continue to strengthen the connection between cosmology and particle physics by expanding the [GAMBIT framework](https://arxiv.org/abs/2009.03286) to interface with our new SBI tools. This will facilitate joint analyses of cosmological and particle physics data, providing a holistic approach to understanding the Universe's fundamental constituents. * **AI-Driven Theoretical Exploration:** We are pioneering the use of AI, including [large language models and symbolic computation](https://arxiv.org/abs/2401.00096), to automate and accelerate the process of theoretical model building and testing. This innovative approach will allow us to explore a broader landscape of physical theories and derive new constraints from diverse astrophysical datasets, such as those from GAIA. Our overarching goal is to remain at the forefront of scientific discovery by integrating the latest AI advancements into every stage of our research, from theoretical modeling to data analysis and interpretation. We are excited by the prospect of using these powerful new tools to unlock the secrets of the cosmos. 
Content generated by [gemini-2.5-pro-preview-05-06](https://deepmind.google/technologies/gemini/) using [this prompt](/prompts/content/index.txt). Image generated by [imagen-3.0-generate-002](https://deepmind.google/technologies/gemini/) using [this prompt](/prompts/images/index.txt). ``` 2. **Paper Metadata:** ```yaml !!python/object/new:feedparser.util.FeedParserDict dictitems: id: http://arxiv.org/abs/2102.12478v3 guidislink: true link: http://arxiv.org/abs/2102.12478v3 updated: '2021-06-28T15:37:48Z' updated_parsed: !!python/object/apply:time.struct_time - !!python/tuple - 2021 - 6 - 28 - 15 - 37 - 48 - 0 - 179 - 0 - tm_zone: null tm_gmtoff: null published: '2021-02-24T18:45:13Z' published_parsed: !!python/object/apply:time.struct_time - !!python/tuple - 2021 - 2 - 24 - 18 - 45 - 13 - 2 - 55 - 0 - tm_zone: null tm_gmtoff: null title: Nested sampling with any prior you like title_detail: !!python/object/new:feedparser.util.FeedParserDict dictitems: type: text/plain language: null base: '' value: Nested sampling with any prior you like summary: 'Nested sampling is an important tool for conducting Bayesian analysis in Astronomy and other fields, both for sampling complicated posterior distributions for parameter inference, and for computing marginal likelihoods for model comparison. One technical obstacle to using nested sampling in practice is the requirement (for most common implementations) that prior distributions be provided in the form of transformations from the unit hyper-cube to the target prior density. For many applications - particularly when using the posterior from one experiment as the prior for another - such a transformation is not readily available. In this letter we show that parametric bijectors trained on samples from a desired prior density provide a general-purpose method for constructing transformations from the uniform base density to a target prior, enabling the practical use of nested sampling under arbitrary priors. We demonstrate the use of trained bijectors in conjunction with nested sampling on a number of examples from cosmology.' summary_detail: !!python/object/new:feedparser.util.FeedParserDict dictitems: type: text/plain language: null base: '' value: 'Nested sampling is an important tool for conducting Bayesian analysis in Astronomy and other fields, both for sampling complicated posterior distributions for parameter inference, and for computing marginal likelihoods for model comparison. One technical obstacle to using nested sampling in practice is the requirement (for most common implementations) that prior distributions be provided in the form of transformations from the unit hyper-cube to the target prior density. For many applications - particularly when using the posterior from one experiment as the prior for another - such a transformation is not readily available. In this letter we show that parametric bijectors trained on samples from a desired prior density provide a general-purpose method for constructing transformations from the uniform base density to a target prior, enabling the practical use of nested sampling under arbitrary priors. We demonstrate the use of trained bijectors in conjunction with nested sampling on a number of examples from cosmology.' 
authors: - !!python/object/new:feedparser.util.FeedParserDict dictitems: name: Justin Alsing - !!python/object/new:feedparser.util.FeedParserDict dictitems: name: Will Handley author_detail: !!python/object/new:feedparser.util.FeedParserDict dictitems: name: Will Handley author: Will Handley arxiv_doi: 10.1093/mnrasl/slab057 links: - !!python/object/new:feedparser.util.FeedParserDict dictitems: title: doi href: http://dx.doi.org/10.1093/mnrasl/slab057 rel: related type: text/html - !!python/object/new:feedparser.util.FeedParserDict dictitems: href: http://arxiv.org/abs/2102.12478v3 rel: alternate type: text/html - !!python/object/new:feedparser.util.FeedParserDict dictitems: title: pdf href: http://arxiv.org/pdf/2102.12478v3 rel: related type: application/pdf arxiv_comment: 5 pages, 2 figures, Published as an MNRAS letter arxiv_journal_ref: MNRAS 505, L95-L99 (2021) arxiv_primary_category: term: astro-ph.IM scheme: http://arxiv.org/schemas/atom tags: - !!python/object/new:feedparser.util.FeedParserDict dictitems: term: astro-ph.IM scheme: http://arxiv.org/schemas/atom label: null - !!python/object/new:feedparser.util.FeedParserDict dictitems: term: astro-ph.CO scheme: http://arxiv.org/schemas/atom label: null - !!python/object/new:feedparser.util.FeedParserDict dictitems: term: physics.data-an scheme: http://arxiv.org/schemas/atom label: null - !!python/object/new:feedparser.util.FeedParserDict dictitems: term: stat.CO scheme: http://arxiv.org/schemas/atom label: null - !!python/object/new:feedparser.util.FeedParserDict dictitems: term: stat.ML scheme: http://arxiv.org/schemas/atom label: null ``` 3. **Paper Source (TeX):** ```tex % mnras_template.tex % % LaTeX template for creating an MNRAS paper % % v3.0 released 14 May 2015 % (version numbers match those of mnras.cls) % % Copyright (C) Royal Astronomical Society 2015 % Authors: % Keith T. Smith (Royal Astronomical Society) % Change log % % v3.0 May 2015 % Renamed to match the new package name % Version number matches mnras.cls % A few minor tweaks to wording % v1.0 September 2013 % Beta testing only - never publicly released % First version: a simple (ish) template for creating an MNRAS paper %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Basic setup. Most papers should leave these options alone. \documentclass[fleqn,usenatbib]{mnras} % MNRAS is set in Times font. If you don't have this installed (most LaTeX % installations will be fine) or prefer the old Computer Modern fonts, comment % out the following line \usepackage{newtxtext,newtxmath} % Depending on your LaTeX fonts installation, you might get better results with one of these: %\usepackage{mathptmx} %\usepackage{txfonts} % Use vector fonts, so it zooms properly in on-screen viewing software % Don't change these lines unless you know what you are doing \usepackage[T1]{fontenc} % Allow "Thomas van Noord" and "Simon de Laguarde" and alike to be sorted by "N" and "L" etc. in the bibliography. % Write the name in the bibliography as "\VAN{Noord}{Van}{van} Noord, Thomas" \DeclareRobustCommand{\VAN}[3]{#2} \let\VANthebibliography\thebibliography \def\thebibliography{\DeclareRobustCommand{\VAN}[3]{##3}\VANthebibliography} %%%%% AUTHORS - PLACE YOUR OWN PACKAGES HERE %%%%% % Only include extra packages if you really need them. 
Common packages are:
\usepackage{graphicx}	% Including figure files
\usepackage{layouts}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%% AUTHORS - PLACE YOUR OWN COMMANDS HERE %%%%%

% Please keep new commands to a minimum, and use \newcommand not \def to avoid
% overwriting existing commands.
\newcommand{\ja}[1]{{\textcolor{red}{\sf{[JA: #1]}} }}
\newcommand{\wh}[1]{{\textcolor{red}{\sf{[WH: #1]}} }}
\newcommand{\hl}[1]{{\textcolor{red}{#1}}}
\newcommand{\btheta}{\boldsymbol{\theta}} % bold parameter vector, used throughout the text below

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%% TITLE PAGE %%%%%%%%%%%%%%%%%%%

% Title of the paper, and the short title which is used in the headers.
% Keep the title short and informative.
\title[Nested sampling with any prior you like]{Nested sampling with any prior you like}

% The list of authors, and the short list which is used in the headers.
% If you need two or more lines of authors, add an extra line using \newauthor
\author[J. Alsing and W. Handley]{
Justin Alsing$^{1,2}$\thanks{E-mail: justin.alsing@fysik.su.se}
and Will Handley$^{3,4}$
\\
% List of institutions
$^{1}$Oskar Klein Centre for Cosmoparticle Physics, Department of Physics, Stockholm University, Stockholm SE-106 91, Sweden\\
$^{2}$Imperial Centre for Inference and Cosmology, Department of Physics, Imperial College London, Blackett Laboratory, Prince Consort Road, London SW7 2AZ, UK\\
$^{3}$Astrophysics Group, Cavendish Laboratory, JJ Thomson Avenue, Cambridge CB3 0HA, UK\\
$^{4}$Kavli Institute for Cosmology, Madingley Road, Cambridge CB3 0HA, UK
}

% These dates will be filled out by the publisher
\date{Accepted XXX. Received YYY; in original form ZZZ}

% Enter the current year, for the copyright statements etc.
\pubyear{2020}

% Don't change these lines
\begin{document}
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\maketitle

% Abstract of the paper
\begin{abstract}
Nested sampling is an important tool for conducting Bayesian analysis in Astronomy and other fields, both for sampling complicated posterior distributions for parameter inference, and for computing marginal likelihoods for model comparison. One technical obstacle to using nested sampling in practice is the requirement (for most common implementations) that prior distributions be provided in the form of transformations from the unit hyper-cube to the target prior density. For many applications -- particularly when using the posterior from one experiment as the prior for another -- such a transformation is not readily available. In this letter we show that parametric bijectors trained on samples from a desired prior density provide a general-purpose method for constructing transformations from the uniform base density to a target prior, enabling the practical use of nested sampling under arbitrary priors. We demonstrate the use of trained bijectors in conjunction with nested sampling on a number of examples from cosmology.
%A suite of cosmological posteriors represented as bijectors (for convenient use as priors in subsequent Bayesian analyses), along with code for constructing parametric bijector representations of priors given a set of samples, is provided at \url{https://github.com/justinalsing/nested_flows}.
\end{abstract}

\begin{keywords}
data analysis: methods
\end{keywords}

%%%%%%%%%%%%%%%%% BODY OF PAPER %%%%%%%%%%%%%%%%%%

\section{Introduction}

Nested sampling has become one of the pillars of Bayesian computation in Astronomy (and other fields), enabling efficient sampling of complicated and high-dimensional posterior distributions for parameter inference, and estimation of marginal likelihoods for model comparison \citep{Skilling_2006,Feroz_2009,Handley_2015a, Handley_2015b}. In spite of its widespread popularity, nested sampling has had a practical limitation owing to the fact that, in several implementations, priors must be specified as a transform from the unit hyper-cube. As a consequence, nested sampling has largely only been practical for a somewhat restrictive class of priors, which have a readily available representation as a transform from the unit hyper-cube.

This limitation has been notably prohibitive for (commonly occurring) situations where one wants to use an interim posterior derived from one experiment as the prior for another. In these situations, while the interim posterior is computable up to a normalization constant so can be sampled from, no unit hyper-cube transform representation readily presents itself in general.

In this letter we address the need to provide priors in the form of transformations from the unit hyper-cube, by training parametric bijectors\footnote{Note that the required transformations do not always need to be bijective (i.e., invertible), for example in the case of discrete parameters. We focus on continuous parameters in this paper.} to represent priors given only a set of samples from the desired prior. For use cases where interim posteriors are to be used as priors, the bijector can be trained on a set of samples from the interim posterior.

In the context of cosmological data analysis, representing interim posteriors as (computationally inexpensive) bijectors has an additional advantage: it circumvents the need to either multiply computationally expensive likelihoods together, or introduce an extra importance sampling step, when performing joint analyses of multiple datasets. This will hence lead to substantial computational (and convenience) gains when performing multi-probe cosmological analyses.

We note that whilst most commonly used nested sampling implementations require priors as unit hyper-cube transforms (e.g., \texttt{multinest} \citep{Feroz_2009} and \texttt{polychord} \citep{Handley_2015a, Handley_2015b}), it is not a \emph{theoretical} requirement. Early implementations used Gaussian random-walk MCMC to draw prior samples, and some implementations still use alternative prior sampling strategies (e.g., \texttt{dnest4}; \citealp{brewer2016}). Regardless of implementation, if samples from one sampling run are to be used to construct a prior for a subsequent run, some additional engineering (such as the method presented in this letter) is required.

The structure of this letter is as follows. In \S \ref{sec:nested} we briefly review nested sampling theory. In \S \ref{sec:bijectors} we discuss representation of priors as bijective transformations from a base density. In \S \ref{sec:cosmological_bijectors} we use bijectors for representing cosmological parameter posteriors for a number of experiments, demonstrating that they are sufficiently accurate for practical use as priors for subsequent (multi-probe) cosmological analyses. We conclude in \S \ref{sec:conclusions}.
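%
% Illustrative sketch of the workflow developed below (a schematic editorial
% summary, not code from the letter), assuming a generic nested sampler and
% flow library rather than any specific implementation:
%   1. run nested sampling on experiment A and keep the posterior samples;
%   2. fit a parametric bijector f(u; w) to those samples (Section 3);
%   3. supply f to the nested sampler as its unit hyper-cube prior transform;
%   4. run nested sampling on experiment B's likelihood under this prior,
%      obtaining the joint posterior and evidence without re-evaluating
%      experiment A's likelihood.
%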
%
%
\section{Nested sampling and prior specification}
\label{sec:nested}

Given a likelihood function $\mathcal{L}(\theta)$ and prior distribution $\pi(\theta)$, nested sampling was introduced by \citet{Skilling_2006} as a Monte Carlo tool to compute the Bayesian evidence
\begin{equation}
\mathcal{Z} = \int \mathcal{L}(\theta)\pi(\theta) d\theta,
\label{eqn:evidence}
\end{equation}
whilst simultaneously generating samples drawn from the normalised posterior distribution $\mathcal{P}(\theta) = \mathcal{L}(\theta)\pi(\theta)/\mathcal{Z}$. Nested sampling performs these dual processes by evolving an ensemble of $n_\mathrm{live}$ \emph{live points}: the live points are initially drawn from the prior distribution $\pi$, and then at each subsequent iteration the algorithm (a) discards the lowest-likelihood live point and (b) replaces it by a live point drawn from the prior $\pi$ subject to the hard constraint that the new point has a higher likelihood than the discarded point. The set of discarded \emph{dead points} and likelihoods can be used to compute a statistical estimate of the Bayesian evidence \eqref{eqn:evidence}, and can be weighted to form a set of samples from the posterior distribution.

The nested sampling procedure therefore differs substantially from traditional strategies such as Gibbs sampling, Metropolis--Hastings or Hamiltonian Monte Carlo in that it acts as a likelihood scanner rather than a posterior sampler. It scans athermally, so is in fact capable of producing samples at any temperature and computing the full partition function in addition to the Bayesian evidence~\citep{Handley_2019}. Implementations of the nested sampling meta-algorithm, e.g.\ \texttt{MultiNest}~\citep{Feroz_2009} and \texttt{PolyChord}~\citep{Handley_2015a,Handley_2015b}, differ primarily in their choice of method by which to draw a new live point from the prior subject to a hard likelihood constraint.

Nested sampling algorithms typically assume that each parameter has a prior that is uniform over the prior range $[0,1]$, i.e.\ the \emph{unit hypercube}. As we shall see in the next section, this is not as restrictive as it sounds, since any proper prior may be transformed onto the unit hypercube via a bijective transformation. It should be noted that normalising flows and bijectors have been used by \citet{2020MNRAS.496..328M} and \citet{2021arXiv210211056W} in the context of techniques for generating new live points, but the approach in this letter of using them for prior specification is applicable to all existing nested sampling algorithms.
%
\begin{figure*}
\centerline{\includegraphics{cosmology_posterior}}
\caption{Demonstration of consistency between sampling using traditional and bijector-based priors. All plots show $66.7\%$ and $95\%$ credibility regions of marginal probability distributions projected into the $\Omega_K$--$H_0$ plane, and colours are chosen to be consistent across the grid. The first row indicates the prior placed on the $\Omega_K$--$H_0$ parameter space by \texttt{CAMB}, which rules certain portions of the parameter space unphysical (e.g.\ Universes with too much negative curvature collapsing, or unphysical lensing amplitudes); it should be noted that this prior is banana-like rather than flat. The second row shows the prior from the first row together with the posterior distributions derived using CMB data alone, CMB+BAO data, and BAO data alone respectively for each column, with appropriately coloured arrows indicating the application of data to the prior to produce a corresponding posterior distribution.
The final row demonstrates the comparison between approaches. In the first and final columns, we see the resultant posterior derived by using the corresponding posterior from the second row as a prior for the cosmology run. The central bottom panel compares the three routes to a combined data posterior and demonstrates them to be in general agreement to within sampling error. Dashed lines indicate the corresponding relationship between limits of the plots. Plot produced under \texttt{anesthetic} \citep{Handley_2019}.\label{fig:cosmology_posterior}}
\vspace{50pt}
\end{figure*}

\section{Representing priors as bijectors}
\label{sec:bijectors}

A bijector $\mathbf{f}: \mathbf{x}\rightarrow \mathbf{y}$ is an invertible function between the elements of two sets, with a one-to-one correspondence between elements, i.e., each element of one set is paired with precisely one element of the other set and there are no unpaired elements. In the context of probability theory, bijectors are convenient for defining mappings from one probability distribution to another. If a random variate $\mathbf{x}$ with probability density $p(\mathbf{x})$ is transformed under a bijector $\mathbf{f}: \mathbf{x}\rightarrow \mathbf{y}=\mathbf{f}(\mathbf{x})$, then samples and densities transform as:
\begin{align}
&\mathbf{y} = \mathbf{f}(\mathbf{x}),\; \mathbf{x} = \mathbf{f}^{-1}(\mathbf{y})\nonumber \\
&p(\mathbf{y}) = |\mathbf{J}^{-1}|p(\mathbf{x}),\;p(\mathbf{x}) = |\mathbf{J}|p(\mathbf{y})
\end{align}
where $J_{ij} = \partial f_i / \partial x_j $ is the Jacobian corresponding to the bijector function $\mathbf{f}$. Any probability density can be represented as a bijection from a base density, or equivalently, any target density can be completely specified by a base density and a bijector from that base density to the target. Note also that any composition of bijectors is also a bijector.

In the context of nested sampling, priors $\pi(\btheta)$ over model parameters $\btheta$ should be provided as a bijector that transforms from a base (unit) uniform density $\mathbf{u}\sim\mathcal{U}(0,1)$ to the desired prior $\pi(\boldsymbol\theta)$, i.e.,
\begin{align}
\btheta = \mathbf{f}(\mathbf{u}),
\end{align}
where
\begin{align}
&\pi(\btheta) = |\mathbf{J}^{-1}|\,\mathcal{U}(\mathbf{f}^{-1}(\btheta)).
\end{align}
In practice, all that is required for specifying nested sampling priors is the bijector function $\mathbf{f}(\mathbf{u})$.

In many use cases, such a bijector is readily available. For example, for any univariate prior density, the bijector from the unit uniform is simply the inverse cumulative distribution function (CDF) of the target prior,
\begin{align}
&f(u) = \mathrm{CDF}^{-1}(u), \nonumber \\
&\mathrm{CDF}(\theta) = \int_{-\infty}^\theta \pi(\theta^\prime)d\theta^\prime,
\end{align}
where the CDF can be interpolated and inverted numerically if the integral is intractable. However, for the general case of correlated multivariate priors, the requisite bijector is far more involved to compute analytically \citep[see \S 5.1 of][]{Feroz_2009} and in general must be obtained numerically by other means. One particularly common scenario where this arises is when one wants to use the (sampled) posterior from one experiment as the prior for another; in these cases, all that is available are samples from the desired prior and un-normalized values of its density at those points. The solution is to fit a parametric model for the required bijector $\mathbf{f}(\mathbf{u};\mathbf{w})$ (with parameters $\mathbf{w}$) to the target prior.
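%
% Illustrative sketch: as a concrete instance of the univariate inverse-CDF
% construction above, consider, say, an exponential prior
% pi(theta) = lambda exp(-lambda theta) on theta >= 0. Its CDF,
% CDF(theta) = 1 - exp(-lambda theta), inverts in closed form, giving the
% unit-uniform bijector
%   f(u) = CDF^{-1}(u) = -ln(1 - u)/lambda .
% Mapping u ~ U(0,1) through f yields exact samples from the prior; the
% parametric bijectors described next generalise this construction to
% correlated multivariate priors for which no closed-form inverse CDF exists.
%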
In the following sections we describe how to define expressive parametric bijector models, and fit them to target (prior) densities.
%
\subsection{Parametric bijectors}
\label{sec:bijector_models}

Parametric bijector models $\mathbf{f}(\mathbf{u};\mathbf{w})$ have recently been gaining popularity for solving complex density estimation tasks, probabilistic learning and variational inference, with a plethora of models available. In this section we give a brief overview of the key methods (in order of increasing complexity). For a more in-depth review, see \citet{papamakarios2019}.
%
\subsubsection*{Compositions of simple invertible functions}
%
Since any composition of bijectors is also a bijector, it is possible to construct flexible parametric bijectors by composing even simple invertible functions, such as affine transformations, exponentials, sigmoids, hyperbolic functions, etc. Some such transformations applied to a base density lead to already well-studied distribution families, such as the ``sinh-arcsinh'' distributions \citep{jones2009sinh}, which are generated by applying the bijector $a\,\mathrm{sinh}(b\,\mathrm{sinh}^{-1}(x) + c)$ (with some parameters $a$, $b$ and $c$) to a base normal random variate. Chaining a number of such transformations together, interleaved with linear transformations of the parameter space, can already lead to rather expressive families of distributions.
%
\subsubsection*{Autoregressive flows}
%
More sophisticated bijector models typically involve constructing normalizing flows parameterized by neural networks. Of these neural-network-based models, Inverse Autoregressive Flows (IAFs; \citealp{kingma2016}) have been demonstrated to be effective at representing complex densities, are fast to train, and are fast to evaluate once trained. IAFs define a series of $n$ bijectors composed into a normalizing flow, as follows:
%
\begin{align}
&\mathbf{z}^{(0)} = \mathbf{n} \sim \mathcal{N}(0, 1) \nonumber \\
&\mathbf{z}^{(1)} = \boldsymbol{\mu}^{(1)}[\mathbf{z}^{(0)}] + \boldsymbol\sigma^{(1)}[\mathbf{z}^{(0)}]\odot\mathbf{z}^{(0)} \nonumber \\
&\vdots\nonumber \\
&\mathbf{y} = \mathbf{z}^{(n)} = \boldsymbol{\mu}^{(n)}[\mathbf{z}^{(n-1)}] + \boldsymbol\sigma^{(n)}[\mathbf{z}^{(n-1)}]\odot\mathbf{z}^{(n-1)}
\end{align}
where the shifts $\boldsymbol{\mu}$ and scales $\boldsymbol\sigma$ of the subsequent affine transforms are autoregressive and are parameterized by neural networks, i.e.,
\begin{align}
\boldsymbol\mu^{(k)}_i = \boldsymbol\mu^{(k)}_i[\mathbf{z}^{(k-1)}_{1:i-1}; \mathbf{w}], \quad \boldsymbol\sigma^{(k)}_i = \boldsymbol\sigma^{(k)}_i[\mathbf{z}^{(k-1)}_{1:i-1}; \mathbf{w}],
\end{align}
where $\mathbf{w}$ denotes the weights and biases of the neural network(s). For more details see \citet{papamakarios2017}.
%
\subsubsection*{Continuous normalizing flows}
%
In order to take a further step forward in expressivity, one can replace the discrete set of transforms with an integral of continuous-time dynamics \citep{chen2018,grathwohl2018}, i.e., defining the bijector as the solution to an initial value problem:
\begin{align}
&\mathbf{z}(0) \sim \mathcal{N}(0,1) \nonumber \\
&\mathbf{y} = \mathbf{z}(t) = \mathbf{z}(0) + \int_0^t \mathbf{f}(\mathbf{z}, t^\prime; \mathbf{w})dt^\prime,
\end{align}
where the dynamics $\mathbf{f}(\mathbf{z}, t; \mathbf{w}) = d\mathbf{z}/dt$ are parameterized by a neural network with weights $\mathbf{w}$.
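%
% Illustrative sketch: taking, say, one-dimensional linear dynamics
% f(z, t; w) = a z with a single scalar parameter a, the initial value
% problem integrates to z(t) = z(0) exp(a t), and the instantaneous
% change-of-variables result of Chen et al. (2018),
%   d ln p(z(t))/dt = -Tr(df/dz) = -a,
% gives ln p(z(t)) = ln p(z(0)) - a t. Richer neural dynamics are handled
% identically: tracking the density requires only a trace, not a full
% Jacobian determinant.
%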
The elevation from discrete to continuous normalizing flows comes with a significant increase in expressivity, making such models appropriate for the most complex bijector representation tasks. See \citet{chen2018} and \citet{grathwohl2018} for more details.
%
\subsection{Fitting bijectors to target densities}
%
Fitting a bijector model to the target can be achieved by minimizing the Kullback-Leibler (KL) divergence between the target $\pi(\btheta)$ and the (model) bijective density, with respect to the model parameters $\mathbf{w}$. Since the KL divergence will not be analytically tractable in general, one can instead minimize a Monte Carlo estimate of the divergence given samples from the target prior $\{\btheta\} \sim \pi(\btheta)$, which up to an additive constant independent of $\mathbf{w}$ is
\begin{align}
\label{kl_est}
\mathcal{L}(\mathbf{w}) = \hat{\mathrm{KL}}(\mathbf{w}) = -\frac{1}{N}\sum_{i=1}^N\ln\,\pi^*(\btheta_i; \mathbf{w}),
\end{align}
where $\pi^*(\btheta;\mathbf{w}) = |\mathbf{J}^{-1}(\btheta;\mathbf{w})|\,\mathcal{U}(\mathbf{f}^{-1}(\btheta;\mathbf{w}))$ is the probability density corresponding to the parametric bijector model, with parameters $\mathbf{w}$. Alternatively, when one has access not only to samples from the target prior, but also to the values of the density at those samples, it can be advantageous to regress the model to the target density using the L2 norm, i.e., minimize the loss function
\begin{align}
\mathcal{L}(\mathbf{w}) = \sum_{i=1}^N ||\,\ln\,\pi(\btheta_i) - \ln\,\pi^*(\btheta_i; \mathbf{w})\,||^2_2.
\end{align}
This has been shown to be less noisy than minimizing the (sampled) KL-divergence in certain cases \citep{seljak2019} (and can be readily extended to exploit gradient information about the target density if it is available; see \citealp{seljak2019} for details).
%
\section{Cosmological posteriors as bijectors}
\label{sec:cosmological_bijectors}
%
\begin{figure}
\centerline{\includegraphics{cosmology_evidence}}
\caption{Along the same three paths as Figure~\protect\ref{fig:cosmology_posterior}, the Bayesian evidences $\log \mathcal{Z}$ for each of the three runs combine approximately consistently to within error, demonstrating that the bijective prior methodology can be used to recover reasonably accurate evidences in addition to posteriors. It should be noted that, in contrast with Figure~\protect\ref{fig:cosmology_posterior}, since the error bars on this plot represent the nested sampling errors in inferring the evidence (rather than the spread of the posterior probability density), one should only expect these plots to overlap to within the histogram width, in analogy with repeated independent measurements of a fixed quantity. We note that error estimates on the log-evidence are typically underestimated due to implementation-specific errors~\protect\citep{2019MNRAS.483.2044H}. Nonetheless, the inferred evidences are statistically consistent to within $3\sigma$, indicating that there may be a small amount of unquantified error from the use of bijectors, but that the evidences are sufficiently accurate for model comparison purposes (where $\Delta\log\mathcal{Z}\sim\mathcal{O}(1)$). Plot produced under \texttt{anesthetic} \citep{Handley_2019}.
\label{fig:cosmology_evidence}
}
\end{figure}

As a concrete and non-trivial cosmological example we choose the concordance $\Lambda$CDM model of the universe~\citep{scott2018standard} with an additional spatial curvature parameter $\Omega_K$.
Such models have been debated recently in the literature \citep{handley2019curvature,Di_Valentino_2019,Efstathiou_2020}. We choose this model not for its controversy, but merely for the fact that it yields non-trivial banana-like priors and posteriors and therefore proves a more challenging cosmological example for a bijector to learn than models without curvature.

For likelihoods we choose three datasets: (1) the \textit{Planck} baseline TT,TE,EE+lowE likelihood (without lensing)~\citep{2020A&A...641A...5P,2020A&A...641A...6P} (hereafter CMB), (2) Baryon Acoustic Oscillation and Redshift Space Distortion measurements from the Baryon Oscillation Spectroscopic Survey (BOSS) DR12~\citep{SDSS,SDSS2,SDSS3} (hereafter BAO), and (3) local cosmic distance ladder measurements of the expansion rate, using type Ia SNe calibrated by variable Cepheid stars from the S$H_0$ES collaboration~\citep{Riess2018} (hereafter S$H_0$ES). The bijector code is implemented as an extension to \texttt{CosmoChord}, using \texttt{forpy} to call \texttt{tensorflow} bijectors from FORTRAN.

Figure \ref{fig:cosmology_posterior} shows our key results for CMB and BAO data. In this case we compare the three routes to a combined CMB and BAO constraint on a $k\Lambda$CDM universe. The simplest approach is to run nested sampling with the default \texttt{CosmoMC} prior with both likelihoods turned on, $\pi\to$CMB+BAO. The bijector approach first runs nested sampling on the CMB dataset to update the prior, $\pi\to$CMB, then uses the posterior samples from this run to train a CMB bijector, which in turn provides the prior for the Bayesian update with the next dataset, CMB$\to$BAO. In Figure \ref{fig:cosmology_posterior} we show that doing this in either order recovers the same posterior.

One critical advantage of this approach is that, since much of the cost of a nested sampling run lies in compressing from prior to posterior, whilst the first run $\pi\to$CMB is expensive, the second update CMB$\to$BAO requires considerably fewer likelihood calls. Furthermore, since the journey from prior to posterior is shorter, less Poisson error accumulates over nested sampling iterations, making for more accurate posteriors and evidence estimates. For expensive models like those involving cosmic curvature, this is a significant advantage.

It should be noted that, as CMB and BAO are in mild tension, it has been argued that these combined constraints should be viewed with some scepticism \citep{handley2019curvature,2020arXiv201002230V}, and this tension is much stronger between CMB and S$H_0$ES data~\citep{HubbleTension}. Whilst this is cause for both concern and excitement scientifically speaking, in the context of this work it also serves to highlight the expressivity of these bijector models when combined with nested sampling chains. The tail behaviour of these bijective transformations is sufficiently powerful to still recover the correct answer even when combining likelihoods which are in strong tension. Figure \ref{fig:cosmology_evidence} demonstrates that the evidence estimates extracted from this run with \texttt{anesthetic} \citep{Handley_2019} are also consistent to within the errors on nested sampling.
%
\section{Conclusions}
\label{sec:conclusions}

Nested sampling implementations have been burdened by the practical limitation that priors need to be specified as bijective transforms from the unit hyper-cube, and in many cases such a representation is not readily available.
We have shown that nested sampling priors can be constructed by fitting flexible parametric bijectors to samples from the target prior, providing a general-purpose approach for constructing generic priors for nested sampling. This is of particular importance for use cases where an interim posterior from one experiment is to be used as the prior for another, where a parametric bijector can then simply be trained on samples from the interim posterior. We have demonstrated the use of parametric bijectors on some typical use-cases from cosmology. In the longer term, we plan to release a library of cosmological bijectors for use as nested sampling priors in order to save users computational resources and standardise cosmological prior choice. % \section*{Acknowledgements} % We thank Johannes Buchner for useful comments. Justin Alsing was supported by research project grant \emph{Fundamental Physics from Cosmological Surveys} funded by the Swedish Research Council (VR) under Dnr 2017-04212. Will Handley was supported by a Gonville \& Caius Research Fellowship, STFC grant number ST/T001054/1 and a Royal Society University Research Fellowship. % \section*{Data availability} % All the nested sampling runs, code and details for reproducing the results in this letter can be found on Zenodo \citep{Zenodo}. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%% REFERENCES %%%%%%%%%%%%%%%%%% % The best way to enter references is to use BibTeX: \bibliographystyle{mnras} \bibliography{nested_bijectors} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%% APPENDICES %%%%%%%%%%%%%%%%%%%%% \appendix %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Don't change these lines \bsp % typesetting comment \label{lastpage} \end{document} % End of mnras_template.tex ``` 4. **Bibliographic Information:** ```bbl \begin{thebibliography}{} \makeatletter \relax \def\mn@urlcharsother{\let\do\@makeother \do\$\do\&\do\#\do\^\do\_\do\%\do\~} \def\mn@doi{\begingroup\mn@urlcharsother \@ifnextchar [ {\mn@doi@} {\mn@doi@[]}} \def\mn@doi@[#1]#2{\def\@tempa{#1}\ifx\@tempa\@empty \href {http://dx.doi.org/#2} {doi:#2}\else \href {http://dx.doi.org/#2} {#1}\fi \endgroup} \def\mn@eprint#1#2{\mn@eprint@#1:#2::\@nil} \def\mn@eprint@arXiv#1{\href {http://arxiv.org/abs/#1} {{\tt arXiv:#1}}} \def\mn@eprint@dblp#1{\href {http://dblp.uni-trier.de/rec/bibtex/#1.xml} {dblp:#1}} \def\mn@eprint@#1:#2:#3:#4\@nil{\def\@tempa {#1}\def\@tempb {#2}\def\@tempc {#3}\ifx \@tempc \@empty \let \@tempc \@tempb \let \@tempb \@tempa \fi \ifx \@tempb \@empty \def\@tempb {arXiv}\fi \@ifundefined {mn@eprint@\@tempb}{\@tempb:\@tempc}{\expandafter \expandafter \csname mn@eprint@\@tempb\endcsname \expandafter{\@tempc}}} \bibitem[\protect\citeauthoryear{{Alam} \& et al.}{{Alam} \& et~al.}{2017}]{SDSS} {Alam} S., et al. 
2017, \mn@doi [\mnras] {10.1093/mnras/stx721}, \href {https://ui.adsabs.harvard.edu/\#abs/2017MNRAS.470.2617A} {470, 2617} \bibitem[\protect\citeauthoryear{{Beutler} et~al.,}{{Beutler} et~al.}{2011}]{SDSS2} {Beutler} F., et~al., 2011, \mn@doi [\mnras] {10.1111/j.1365-2966.2011.19250.x}, \href {https://ui.adsabs.harvard.edu/\#abs/2011MNRAS.416.3017B} {416, 3017} \bibitem[\protect\citeauthoryear{Brewer \& Foreman-Mackey}{Brewer \& Foreman-Mackey}{2016}]{brewer2016} Brewer B.~J., Foreman-Mackey D., 2016, arXiv preprint arXiv:1606.03757 \bibitem[\protect\citeauthoryear{Chen, Rubanova, Bettencourt \& Duvenaud}{Chen et~al.}{2018}]{chen2018} Chen R.~T., Rubanova Y., Bettencourt J., Duvenaud D.~K., 2018, in Advances in neural information processing systems. pp 6571--6583 \bibitem[\protect\citeauthoryear{Di~Valentino, Melchiorri \& Silk}{Di~Valentino et~al.}{2019}]{Di_Valentino_2019} Di~Valentino E., Melchiorri A., Silk J., 2019, \mn@doi [Nature Astronomy] {10.1038/s41550-019-0906-9}, 4, 196–203 \bibitem[\protect\citeauthoryear{Efstathiou \& Gratton}{Efstathiou \& Gratton}{2020}]{Efstathiou_2020} Efstathiou G., Gratton S., 2020, \mn@doi [Monthly Notices of the Royal Astronomical Society: Letters] {10.1093/mnrasl/slaa093}, 496, L91–L95 \bibitem[\protect\citeauthoryear{Feroz, Hobson \& Bridges}{Feroz et~al.}{2009}]{Feroz_2009} Feroz F., Hobson M.~P., Bridges M., 2009, \mn@doi [Monthly Notices of the Royal Astronomical Society] {10.1111/j.1365-2966.2009.14548.x}, 398, 1601–1614 \bibitem[\protect\citeauthoryear{Grathwohl, Chen, Bettencourt, Sutskever \& Duvenaud}{Grathwohl et~al.}{2018}]{grathwohl2018} Grathwohl W., Chen R.~T., Bettencourt J., Sutskever I., Duvenaud D., 2018, arXiv preprint arXiv:1810.01367 \bibitem[\protect\citeauthoryear{Handley}{Handley}{2019a}]{handley2019curvature} Handley W., 2019a, Curvature tension: evidence for a closed universe (\mn@eprint {arXiv} {1908.09139}) \bibitem[\protect\citeauthoryear{Handley}{Handley}{2019b}]{Handley_2019} Handley W., 2019b, \mn@doi [Journal of Open Source Software] {10.21105/joss.01414}, 4, 1414 \bibitem[\protect\citeauthoryear{Handley \& Alsing}{Handley \& Alsing}{2020}]{Zenodo} Handley W., Alsing J., 2020, Nested sampling with any prior you like (Supplementary inference products), \mn@doi{10.5281/zenodo.4247552}, \url {https://doi.org/10.5281/zenodo.4247552} \bibitem[\protect\citeauthoryear{Handley, Hobson \& Lasenby}{Handley et~al.}{2015a}]{Handley_2015a} Handley W.~J., Hobson M.~P., Lasenby A.~N., 2015a, \mn@doi [Monthly Notices of the Royal Astronomical Society: Letters] {10.1093/mnrasl/slv047}, 450, L61–L65 \bibitem[\protect\citeauthoryear{Handley, Hobson \& Lasenby}{Handley et~al.}{2015b}]{Handley_2015b} Handley W.~J., Hobson M.~P., Lasenby A.~N., 2015b, \mn@doi [Monthly Notices of the Royal Astronomical Society] {10.1093/mnras/stv1911}, 453, 4385–4399 \bibitem[\protect\citeauthoryear{{Higson}, {Handley}, {Hobson} \& {Lasenby}}{{Higson} et~al.}{2019}]{2019MNRAS.483.2044H} {Higson} E., {Handley} W., {Hobson} M., {Lasenby} A., 2019, \mn@doi [\mnras] {10.1093/mnras/sty3090}, \href {https://ui.adsabs.harvard.edu/abs/2019MNRAS.483.2044H} {483, 2044} \bibitem[\protect\citeauthoryear{Jones \& Pewsey}{Jones \& Pewsey}{2009}]{jones2009sinh} Jones M.~C., Pewsey A., 2009, Biometrika, 96, 761 \bibitem[\protect\citeauthoryear{Kingma, Salimans, Jozefowicz, Chen, Sutskever \& Welling}{Kingma et~al.}{2016}]{kingma2016} Kingma D.~P., Salimans T., Jozefowicz R., Chen X., Sutskever I., Welling M., 2016, in Advances in neural information processing systems. 
pp 4743--4751 \bibitem[\protect\citeauthoryear{{Moss}}{{Moss}}{2020}]{2020MNRAS.496..328M} {Moss} A., 2020, \mn@doi [\mnras] {10.1093/mnras/staa1469}, \href {https://ui.adsabs.harvard.edu/abs/2020MNRAS.496..328M} {496, 328} \bibitem[\protect\citeauthoryear{Papamakarios, Pavlakou \& Murray}{Papamakarios et~al.}{2017}]{papamakarios2017} Papamakarios G., Pavlakou T., Murray I., 2017, arXiv preprint arXiv:1705.07057 \bibitem[\protect\citeauthoryear{Papamakarios, Nalisnick, Rezende, Mohamed \& Lakshminarayanan}{Papamakarios et~al.}{2019}]{papamakarios2019} Papamakarios G., Nalisnick E., Rezende D.~J., Mohamed S., Lakshminarayanan B., 2019, arXiv preprint arXiv:1912.02762 \bibitem[\protect\citeauthoryear{{Planck Collaboration}}{{Planck Collaboration}}{2020a}]{2020A&A...641A...5P} {Planck Collaboration} 2020a, \mn@doi [\aap] {10.1051/0004-6361/201936386}, \href {https://ui.adsabs.harvard.edu/abs/2020A&A...641A...5P} {641, A5} \bibitem[\protect\citeauthoryear{{Planck Collaboration}}{{Planck Collaboration}}{2020b}]{2020A&A...641A...6P} {Planck Collaboration} 2020b, \mn@doi [\aap] {10.1051/0004-6361/201833910}, \href {https://ui.adsabs.harvard.edu/abs/2020A&A...641A...6P} {641, A6} \bibitem[\protect\citeauthoryear{{Ross}, {Samushia}, {Howlett}, {Percival}, {Burden} \& {Manera}}{{Ross} et~al.}{2015}]{SDSS3} {Ross} A.~J., {Samushia} L., {Howlett} C., {Percival} W.~J., {Burden} A., {Manera} M., 2015, \mn@doi [\mnras] {10.1093/mnras/stv154}, \href {https://ui.adsabs.harvard.edu/\#abs/2015MNRAS.449..835R} {449, 835} \bibitem[\protect\citeauthoryear{{S$H_0$ES Collaboration}}{{S$H_0$ES Collaboration}}{2018}]{Riess2018} {S$H_0$ES Collaboration} 2018, \mn@doi [\apj] {10.3847/1538-4357/aaadb7}, \href {https://ui.adsabs.harvard.edu/\#abs/2018ApJ...855..136R} {855, 136} \bibitem[\protect\citeauthoryear{Scott}{Scott}{2018}]{scott2018standard} Scott D., 2018, The Standard Model of Cosmology: A Skeptic's Guide (\mn@eprint {arXiv} {1804.01318}) \bibitem[\protect\citeauthoryear{Seljak \& Yu}{Seljak \& Yu}{2019}]{seljak2019} Seljak U., Yu B., 2019, arXiv preprint arXiv:1901.04454 \bibitem[\protect\citeauthoryear{Skilling}{Skilling}{2006}]{Skilling_2006} Skilling J., 2006, \mn@doi [Bayesian Anal.] {10.1214/06-BA127}, 1, 833 \bibitem[\protect\citeauthoryear{{Vagnozzi}, {Di Valentino}, {Gariazzo}, {Melchiorri}, {Mena} \& {Silk}}{{Vagnozzi} et~al.}{2020}]{2020arXiv201002230V} {Vagnozzi} S., {Di Valentino} E., {Gariazzo} S., {Melchiorri} A., {Mena} O., {Silk} J., 2020, arXiv e-prints, \href {https://ui.adsabs.harvard.edu/abs/2020arXiv201002230V} {p. arXiv:2010.02230} \bibitem[\protect\citeauthoryear{{Verde}, {Treu} \& {Riess}}{{Verde} et~al.}{2019}]{HubbleTension} {Verde} L., {Treu} T., {Riess} A.~G., 2019, arXiv e-prints, \href {https://ui.adsabs.harvard.edu/abs/2019arXiv190710625V} {p. arXiv:1907.10625} \bibitem[\protect\citeauthoryear{{Williams}, {Veitch} \& {Messenger}}{{Williams} et~al.}{2021}]{2021arXiv210211056W} {Williams} M.~J., {Veitch} J., {Messenger} C., 2021, arXiv e-prints, \href {https://ui.adsabs.harvard.edu/abs/2021arXiv210211056W} {p. arXiv:2102.11056} \makeatother \end{thebibliography} ``` 5. 
**Author Information:** - Lead Author: {'name': 'Justin Alsing'} - Full Authors List: ```yaml Justin Alsing: {} Will Handley: pi: start: 2020-10-01 thesis: null postdoc: start: 2016-10-01 end: 2020-10-01 thesis: null phd: start: 2012-10-01 end: 2016-09-30 supervisors: - Anthony Lasenby - Mike Hobson thesis: 'Kinetic initial conditions for inflation: theory, observation and methods' original_image: images/originals/will_handley.jpeg image: /assets/group/images/will_handley.jpg links: Webpage: https://willhandley.co.uk ``` This YAML file provides a concise snapshot of an academic research group. It lists members by name along with their academic roles—ranging from Part III and summer projects to MPhil, PhD, and postdoctoral positions—with corresponding dates, thesis topics, and supervisor details. Supplementary metadata includes image paths and links to personal or departmental webpages. A dedicated "coi" section profiles senior researchers, highlighting the group’s collaborative mentoring network and career trajectories in cosmology, astrophysics, and Bayesian data analysis. ==================================================================================== Final Output Instructions ==================================================================================== - Combine all data sources to create a seamless, engaging narrative. - Follow the exact Markdown output format provided at the top. - Do not include any extra explanation, commentary, or wrapping beyond the specified Markdown. - Validate that every bibliographic reference with a DOI or arXiv identifier is converted into a Markdown link as per the examples. - Validate that every Markdown author link corresponds to a link in the author information block. - Before finalizing, confirm that no LaTeX citation commands or other undesired formatting remain. - Before finalizing, confirm that the link to the paper itself [2102.12478](https://arxiv.org/abs/2102.12478) is featured in the first sentence. Generate only the final Markdown output that meets all these requirements. {% endraw %}