{% raw %}
Title: Create a Markdown Blog Post Integrating Research Details and a Featured Paper
====================================================================================

This task involves generating a Markdown file (ready for a GitHub-served Jekyll site) that integrates our research details with a featured research paper. The output must follow the exact format and conventions described below.

====================================================================================
Output Format (Markdown):
------------------------------------------------------------------------------------
---
layout: post
title: "$\texttt{unimpeded}$: A Public Nested Sampling Database for Bayesian Cosmology"
date: 2025-11-07
categories: papers
---

![AI generated image](/assets/images/posts/2025-11-07-2511.05470.png)

Dily Ong, Will Handley

Content generated by [gemini-2.5-pro](https://deepmind.google/technologies/gemini/) using [this prompt](/prompts/content/2025-11-07-2511.05470.txt).

Image generated by [imagen-4.0-generate-001](https://deepmind.google/technologies/gemini/) using [this prompt](/prompts/images/2025-11-07-2511.05470.txt).
------------------------------------------------------------------------------------
====================================================================================
Please adhere strictly to the following instructions:
====================================================================================
Section 1: Content Creation Instructions
====================================================================================

1. **Generate the Page Body:**
   - Write a well-composed, engaging narrative suitable for a scholarly audience interested in advanced AI and astrophysics.
   - Ensure the narrative is original and reflects the tone, style, and content of the "Homepage Content" block (provided below), but do not reuse its content.
   - Use bullet points, subheadings, or other formatting to enhance readability.
2.
**Highlight Key Research Details:**
   - Emphasize the contributions and impact of the paper, focusing on its methodology, significance, and context within current research.
   - Specifically highlight the lead author, Dily Duan Yi Ong. When referencing any author, use Markdown links from the Author Information block (prefer academic or GitHub links over social media).
3. **Integrate Data from Multiple Sources:**
   - Seamlessly weave information from the following:
     - **Paper Metadata (YAML):** essential details, including the title and authors.
     - **Paper Source (TeX):** technical content from the paper.
     - **Bibliographic Information (bbl):** bibliographic references to extract.
     - **Author Information (YAML):** profile details for constructing Markdown links.
   - Merge insights from the Paper Metadata, TeX source, Bibliographic Information, and Author Information blocks into a coherent narrative; do not treat them as separate or isolated pieces.
   - Insert the generated narrative between the HTML comments: and
4. **Generate Bibliographic References:**
   - Review the Bibliographic Information block carefully.
   - For each reference that includes a DOI or arXiv identifier:
     - For DOIs, generate a link formatted as: [10.1234/xyz](https://doi.org/10.1234/xyz)
     - For arXiv entries, generate a link formatted as: [2103.12345](https://arxiv.org/abs/2103.12345)
   - **Important:** Do not use any LaTeX citation commands (e.g., `\cite{...}`); render every reference directly as a Markdown link. For example, instead of `\cite{mycitation}`, output `[mycitation](https://doi.org/mycitation)`.
     - **Incorrect:** `\cite{10.1234/xyz}`
     - **Correct:** `[10.1234/xyz](https://doi.org/10.1234/xyz)`
   - Integrate at least three (3) of the most relevant references naturally into the narrative.
   - Include the link to the featured paper, [2511.05470](https://arxiv.org/abs/2511.05470), in the first sentence.
5.
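The DOI and arXiv linking convention above is mechanical enough to sketch in code. A minimal illustration, assuming identifiers arrive as bare strings; the helper name and regexes are ours for illustration, not part of this specification:

```python
import re

# Illustrative patterns for the two identifier families described above.
# DOI: "10.<registrant>/<suffix>"; arXiv: "YYMM.NNNNN" with optional version.
DOI_RE = re.compile(r"^10\.\d{4,9}/\S+$")
ARXIV_RE = re.compile(r"^\d{4}\.\d{4,5}(v\d+)?$")


def reference_link(identifier: str) -> str:
    """Render a bare DOI or arXiv identifier as a Markdown link,
    following the [id](resolver-url/id) convention required above."""
    if DOI_RE.match(identifier):
        return f"[{identifier}](https://doi.org/{identifier})"
    if ARXIV_RE.match(identifier):
        return f"[{identifier}](https://arxiv.org/abs/{identifier})"
    raise ValueError(f"Unrecognised identifier: {identifier!r}")
```

For instance, `reference_link("2511.05470")` produces exactly the featured-paper link that must appear in the first sentence.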
**Final Formatting Requirements:**
   - The output must be plain Markdown; do not wrap it in Markdown code fences.
   - Preserve the YAML front matter exactly as provided.

====================================================================================
Section 2: Provided Data for Integration
====================================================================================

1. **Homepage Content (Tone and Style Reference):**

```markdown
---
layout: home
---

![AI generated image](/assets/images/index.png)

The Handley Research Group stands at the forefront of cosmological exploration, pioneering novel approaches that fuse fundamental physics with the transformative power of artificial intelligence. We are a dynamic team of researchers, including PhD students, postdoctoral fellows, and project students, based at the University of Cambridge.

Our mission is to unravel the mysteries of the Universe, from its earliest moments to its present-day structure and ultimate fate. We tackle fundamental questions in cosmology and astrophysics, with a particular focus on leveraging advanced Bayesian statistical methods and AI to push the frontiers of scientific discovery. Our research spans a wide array of topics, including the [primordial Universe](https://arxiv.org/abs/1907.08524), [inflation](https://arxiv.org/abs/1807.06211), the nature of [dark energy](https://arxiv.org/abs/2503.08658) and [dark matter](https://arxiv.org/abs/2405.17548), [21-cm cosmology](https://arxiv.org/abs/2210.07409), the [Cosmic Microwave Background (CMB)](https://arxiv.org/abs/1807.06209), and [gravitational wave astrophysics](https://arxiv.org/abs/2411.17663).

### Our Research Approach: Innovation at the Intersection of Physics and AI

At The Handley Research Group, we develop and apply cutting-edge computational techniques to analyze complex astronomical datasets.
Our work is characterized by a deep commitment to principled [Bayesian inference](https://arxiv.org/abs/2205.15570) and the innovative application of [artificial intelligence (AI) and machine learning (ML)](https://arxiv.org/abs/2504.10230).

**Key Research Themes:**

* **Cosmology:** We investigate the early Universe, including [quantum initial conditions for inflation](https://arxiv.org/abs/2002.07042) and the generation of [primordial power spectra](https://arxiv.org/abs/2112.07547). We explore the enigmatic nature of [dark energy, using methods like non-parametric reconstructions](https://arxiv.org/abs/2503.08658), and search for new insights into [dark matter](https://arxiv.org/abs/2405.17548). A significant portion of our efforts is dedicated to [21-cm cosmology](https://arxiv.org/abs/2104.04336), aiming to detect faint signals from the Cosmic Dawn and the Epoch of Reionization.
* **Gravitational Wave Astrophysics:** We develop methods for [analyzing gravitational wave signals](https://arxiv.org/abs/2411.17663), extracting information about extreme astrophysical events and fundamental physics.
* **Bayesian Methods & AI for Physical Sciences:** A core component of our research is the development of novel statistical and AI-driven methodologies. This includes advancing [nested sampling techniques](https://arxiv.org/abs/1506.00171) (e.g., [PolyChord](https://arxiv.org/abs/1506.00171), [dynamic nested sampling](https://arxiv.org/abs/1704.03459), and [accelerated nested sampling with $\beta$-flows](https://arxiv.org/abs/2411.17663)), creating powerful [simulation-based inference (SBI) frameworks](https://arxiv.org/abs/2504.10230), and employing [machine learning for tasks such as radiometer calibration](https://arxiv.org/abs/2504.16791), [cosmological emulation](https://arxiv.org/abs/2503.13263), and [mitigating radio frequency interference](https://arxiv.org/abs/2211.15448).
  We also explore the potential of [foundation models for scientific discovery](https://arxiv.org/abs/2401.00096).

**Technical Contributions:**

Our group has a strong track record of developing widely-used scientific software. Notable examples include:

* [**PolyChord**](https://arxiv.org/abs/1506.00171): A next-generation nested sampling algorithm for Bayesian computation.
* [**anesthetic**](https://arxiv.org/abs/1905.04768): A Python package for processing and visualizing nested sampling runs.
* [**GLOBALEMU**](https://arxiv.org/abs/2104.04336): An emulator for the sky-averaged 21-cm signal.
* [**maxsmooth**](https://arxiv.org/abs/2007.14970): A tool for rapid maximally smooth function fitting.
* [**margarine**](https://arxiv.org/abs/2205.12841): For marginal Bayesian statistics using normalizing flows and KDEs.
* [**fgivenx**](https://arxiv.org/abs/1908.01711): A package for functional posterior plotting.
* [**nestcheck**](https://arxiv.org/abs/1804.06406): Diagnostic tests for nested sampling calculations.

### Impact and Discoveries

Our research has led to significant advancements in cosmological data analysis and yielded new insights into the Universe. Key achievements include:

* Pioneering the development and application of advanced Bayesian inference tools, such as [PolyChord](https://arxiv.org/abs/1506.00171), which has become a cornerstone for cosmological parameter estimation and model comparison globally.
* Making significant contributions to the analysis of major cosmological datasets, including the [Planck mission](https://arxiv.org/abs/1807.06209), providing some of the tightest constraints on cosmological parameters and models of [inflation](https://arxiv.org/abs/1807.06211).
* Developing novel AI-driven approaches for astrophysical challenges, such as using [machine learning for radiometer calibration in 21-cm experiments](https://arxiv.org/abs/2504.16791) and [simulation-based inference for extracting cosmological information from galaxy clusters](https://arxiv.org/abs/2504.10230).
* Probing the nature of dark energy through innovative [non-parametric reconstructions of its equation of state](https://arxiv.org/abs/2503.08658) from combined datasets.
* Advancing our understanding of the early Universe through detailed studies of [21-cm signals from the Cosmic Dawn and Epoch of Reionization](https://arxiv.org/abs/2301.03298), including the development of sophisticated foreground modelling techniques and emulators like [GLOBALEMU](https://arxiv.org/abs/2104.04336).
* Developing new statistical methods for quantifying tensions between cosmological datasets ([Quantifying tensions in cosmological parameters: Interpreting the DES evidence ratio](https://arxiv.org/abs/1902.04029)) and for robust Bayesian model selection ([Bayesian model selection without evidences: application to the dark energy equation-of-state](https://arxiv.org/abs/1506.09024)).
* Exploring fundamental physics questions such as potential [parity violation in the Large-Scale Structure using machine learning](https://arxiv.org/abs/2410.16030).

### Charting the Future: AI-Powered Cosmological Discovery

The Handley Research Group is poised to lead a new era of cosmological analysis, driven by the explosive growth in data from next-generation observatories and transformative advances in artificial intelligence. Our future ambitions are centred on harnessing these capabilities to address the most pressing questions in fundamental physics.

**Strategic Research Pillars:**

* **Next-Generation Simulation-Based Inference (SBI):** We are developing advanced SBI frameworks to move beyond traditional likelihood-based analyses.
  This involves creating sophisticated codes for simulating [Cosmic Microwave Background (CMB)](https://arxiv.org/abs/1908.00906) and [Baryon Acoustic Oscillation (BAO)](https://arxiv.org/abs/1607.00270) datasets from surveys like DESI and 4MOST, incorporating realistic astrophysical effects and systematic uncertainties. Our AI initiatives in this area focus on developing and implementing cutting-edge SBI algorithms, particularly [neural ratio estimation (NRE) methods](https://arxiv.org/abs/2407.15478), to enable robust and scalable inference from these complex simulations.
* **Probing Fundamental Physics:** Our enhanced analytical toolkit will be deployed to test the standard cosmological model ($\Lambda$CDM) with unprecedented precision and to explore [extensions to Einstein's General Relativity](https://arxiv.org/abs/2006.03581). We aim to constrain a wide range of theoretical models, from modified gravity to the nature of [dark matter](https://arxiv.org/abs/2106.02056) and [dark energy](https://arxiv.org/abs/1701.08165). This includes leveraging data from upcoming [gravitational wave observatories](https://arxiv.org/abs/1803.10210) like LISA, alongside CMB and large-scale structure surveys from facilities such as Euclid and JWST.
* **Synergies with Particle Physics:** We will continue to strengthen the connection between cosmology and particle physics by expanding the [GAMBIT framework](https://arxiv.org/abs/2009.03286) to interface with our new SBI tools. This will facilitate joint analyses of cosmological and particle physics data, providing a holistic approach to understanding the Universe's fundamental constituents.
* **AI-Driven Theoretical Exploration:** We are pioneering the use of AI, including [large language models and symbolic computation](https://arxiv.org/abs/2401.00096), to automate and accelerate the process of theoretical model building and testing.
This innovative approach will allow us to explore a broader landscape of physical theories and derive new constraints from diverse astrophysical datasets, such as those from GAIA. Our overarching goal is to remain at the forefront of scientific discovery by integrating the latest AI advancements into every stage of our research, from theoretical modeling to data analysis and interpretation. We are excited by the prospect of using these powerful new tools to unlock the secrets of the cosmos. Content generated by [gemini-2.5-pro-preview-05-06](https://deepmind.google/technologies/gemini/) using [this prompt](/prompts/content/index.txt). Image generated by [imagen-3.0-generate-002](https://deepmind.google/technologies/gemini/) using [this prompt](/prompts/images/index.txt). ``` 2. **Paper Metadata:** ```yaml !!python/object/new:feedparser.util.FeedParserDict dictitems: id: http://arxiv.org/abs/2511.05470v1 guidislink: true link: https://arxiv.org/abs/2511.05470v1 title: '$\texttt{unimpeded}$: A Public Nested Sampling Database for Bayesian Cosmology' title_detail: !!python/object/new:feedparser.util.FeedParserDict dictitems: type: text/plain language: null base: '' value: '$\texttt{unimpeded}$: A Public Nested Sampling Database for Bayesian Cosmology' updated: '2025-11-07T18:30:44Z' updated_parsed: !!python/object/apply:time.struct_time - !!python/tuple - 2025 - 11 - 7 - 18 - 30 - 44 - 4 - 311 - 0 - tm_zone: null tm_gmtoff: null links: - !!python/object/new:feedparser.util.FeedParserDict dictitems: href: https://arxiv.org/abs/2511.05470v1 rel: alternate type: text/html - !!python/object/new:feedparser.util.FeedParserDict dictitems: href: https://arxiv.org/pdf/2511.05470v1 rel: related type: application/pdf title: pdf summary: "Bayesian inference is central to modern cosmology. 
While parameter estimation\ \ is achievable with unnormalised posteriors traditionally obtained via MCMC methods,\ \ comprehensive model comparison and tension quantification require Bayesian evidences\ \ and normalised posteriors, which remain computationally prohibitive for many\ \ researchers. To address this, we present $\\texttt{unimpeded}$, a publicly available\ \ Python library and data repository providing DiRAC-funded (DP192 and 264) pre-computed\ \ nested sampling and MCMC chains with their normalised posterior samples, computed\ \ using $\\texttt{Cobaya}$ and the Boltzmann solver $\\texttt{CAMB}$. $\\texttt{unimpeded}$\ \ delivers systematic analysis across a grid of eight cosmological models (including\ \ $\u039B$CDM and seven extensions) and 39 modern cosmological datasets (comprising\ \ individual probes and their pairwise combinations). The built-in tension statistics\ \ calculator enables rapid computation of six tension quantification metrics.\ \ All chains are hosted on Zenodo with permanent access via the unimpeded API,\ \ analogous to the renowned Planck Legacy Archive but utilising nested sampling\ \ in addition to traditional MCMC methods." summary_detail: !!python/object/new:feedparser.util.FeedParserDict dictitems: type: text/plain language: null base: '' value: "Bayesian inference is central to modern cosmology. While parameter estimation\ \ is achievable with unnormalised posteriors traditionally obtained via MCMC\ \ methods, comprehensive model comparison and tension quantification require\ \ Bayesian evidences and normalised posteriors, which remain computationally\ \ prohibitive for many researchers. To address this, we present $\\texttt{unimpeded}$,\ \ a publicly available Python library and data repository providing DiRAC-funded\ \ (DP192 and 264) pre-computed nested sampling and MCMC chains with their\ \ normalised posterior samples, computed using $\\texttt{Cobaya}$ and the\ \ Boltzmann solver $\\texttt{CAMB}$. 
$\\texttt{unimpeded}$ delivers systematic\ \ analysis across a grid of eight cosmological models (including $\u039B$CDM\ \ and seven extensions) and 39 modern cosmological datasets (comprising individual\ \ probes and their pairwise combinations). The built-in tension statistics\ \ calculator enables rapid computation of six tension quantification metrics.\ \ All chains are hosted on Zenodo with permanent access via the unimpeded\ \ API, analogous to the renowned Planck Legacy Archive but utilising nested\ \ sampling in addition to traditional MCMC methods." tags: - !!python/object/new:feedparser.util.FeedParserDict dictitems: term: astro-ph.IM scheme: http://arxiv.org/schemas/atom label: null - !!python/object/new:feedparser.util.FeedParserDict dictitems: term: astro-ph.CO scheme: http://arxiv.org/schemas/atom label: null published: '2025-11-07T18:30:44Z' published_parsed: !!python/object/apply:time.struct_time - !!python/tuple - 2025 - 11 - 7 - 18 - 30 - 44 - 4 - 311 - 0 - tm_zone: null tm_gmtoff: null arxiv_primary_category: term: astro-ph.IM authors: - !!python/object/new:feedparser.util.FeedParserDict dictitems: name: Dily Duan Yi Ong - !!python/object/new:feedparser.util.FeedParserDict dictitems: name: Will Handley author_detail: !!python/object/new:feedparser.util.FeedParserDict dictitems: name: Will Handley author: Will Handley ``` 3. 
**Paper Source (TeX):** ```tex \documentclass[10pt,a4paper,onecolumn]{article} \usepackage{marginnote} \usepackage{graphicx} \usepackage[rgb,svgnames]{xcolor} \usepackage{authblk,etoolbox} \usepackage{titlesec} \usepackage{calc} \usepackage{tikz} \usepackage[pdfa]{hyperref} \usepackage{hyperxmp} \hypersetup{% unicode=true, pdfapart=3, pdfaconformance=B, pdftitle={unimpeded: A Public Nested Sampling Database for Bayesian Cosmology}, pdfauthor={Dily Duan Yi Ong, Will Handley}, pdfpublication={Journal of Open Source Software}, pdfpublisher={Open Journals}, pdfissn={2475-9066}, pdfpubtype={journal}, pdfvolumenum={}, pdfissuenum={}, pdfdoi={10.xxxxxx/draft}, pdfcopyright={Copyright (c) 1970, Dily Duan Yi Ong, Will Handley}, pdflicenseurl={http://creativecommons.org/licenses/by/4.0/}, colorlinks=true, linkcolor=[rgb]{0.0, 0.5, 1.0}, citecolor=Blue, urlcolor=[rgb]{0.0, 0.5, 1.0}, breaklinks=true } \usepackage{caption} \usepackage{orcidlink} \usepackage{tcolorbox} \usepackage{amssymb,amsmath} \usepackage{ifxetex,ifluatex} \usepackage{seqsplit} \usepackage{xstring} \usepackage{float} \let\origfigure\figure \let\endorigfigure\endfigure \renewenvironment{figure}[1][2] { \expandafter\origfigure\expandafter[H] } { \endorigfigure } \usepackage{fixltx2e} % provides \textsubscript % definitions for citeproc citations \NewDocumentCommand\citeproctext{}{} \NewDocumentCommand\citeproc{mm}{% \begingroup\def\citeproctext{#2}\cite{#1}\endgroup} \makeatletter % allow citations to break across lines \let\@cite@ofmt\@firstofone % avoid brackets around text for \cite: \def\@biblabel#1{} \def\@cite#1#2{{#1\if@tempswa , #2\fi}} \makeatother \newlength{\cslhangindent} \setlength{\cslhangindent}{1.5em} \newlength{\csllabelwidth} \setlength{\csllabelwidth}{3em} \newenvironment{CSLReferences}[2] % #1 hanging-indent, #2 entry-spacing {\begin{list}{}{% \setlength{\itemindent}{0pt} \setlength{\leftmargin}{0pt} \setlength{\parsep}{0pt} % turn on hanging indent if param 1 is 1 \ifodd #1 
\setlength{\leftmargin}{\cslhangindent} \setlength{\itemindent}{-1\cslhangindent} \fi % set entry spacing \setlength{\itemsep}{#2\baselineskip}}} {\end{list}} \usepackage{calc} \newcommand{\CSLBlock}[1]{\hfill\break\parbox[t]{\linewidth}{\strut\ignorespaces#1\strut}} \newcommand{\CSLLeftMargin}[1]{\parbox[t]{\csllabelwidth}{\strut#1\strut}} \newcommand{\CSLRightInline}[1]{\parbox[t]{\linewidth - \csllabelwidth}{\strut#1\strut}} \newcommand{\CSLIndent}[1]{\hspace{\cslhangindent}#1} % --- Page layout ------------------------------------------------------------- \usepackage[top=3.5cm, bottom=3cm, right=1.5cm, left=1.0cm, headheight=2.2cm, reversemp, includemp, marginparwidth=4.5cm]{geometry} % --- Default font ------------------------------------------------------------ \renewcommand\familydefault{\sfdefault} % --- Style ------------------------------------------------------------------- \renewcommand{\captionfont}{\small\sffamily} \renewcommand{\captionlabelfont}{\bfseries} % --- Section/SubSection/SubSubSection ---------------------------------------- \titleformat{\section} {\normalfont\sffamily\Large\bfseries} {}{0pt}{} \titleformat{\subsection} {\normalfont\sffamily\large\bfseries} {}{0pt}{} \titleformat{\subsubsection} {\normalfont\sffamily\bfseries} {}{0pt}{} \titleformat*{\paragraph} {\sffamily\normalsize} % --- Header / Footer --------------------------------------------------------- \usepackage{fancyhdr} \pagestyle{fancy} \fancyhf{} %\renewcommand{\headrulewidth}{0.50pt} \renewcommand{\headrulewidth}{0pt} \fancyhead[L]{\hspace{-0.75cm}\includegraphics[width=5.5cm]{joss/logo.png}} \fancyhead[C]{} \fancyhead[R]{} \renewcommand{\footrulewidth}{0.25pt} \fancyfoot[L]{\parbox[t]{0.98\headwidth}{\footnotesize{\sffamily Ong, \& Handley. (2025). unimpeded: A Public Nested Sampling Database for Bayesian Cosmology. \emph{Journal of Open Source Software}, \emph{¿VOL?}(¿ISSUE?), ¿PAGE? 
\url{https://doi.org/10.xxxxxx/draft}}.}} \fancyfoot[R]{\sffamily \thepage} \makeatletter \let\ps@plain\ps@fancy \fancyheadoffset[L]{4.5cm} \fancyfootoffset[L]{4.5cm} % --- Macros --------- \definecolor{linky}{rgb}{0.0, 0.5, 1.0} \newtcolorbox{repobox} {colback=red, colframe=red!75!black, boxrule=0.5pt, arc=2pt, left=6pt, right=6pt, top=3pt, bottom=3pt} \newcommand{\ExternalLink}{% \tikz[x=1.2ex, y=1.2ex, baseline=-0.05ex]{% \begin{scope}[x=1ex, y=1ex] \clip (-0.1,-0.1) --++ (-0, 1.2) --++ (0.6, 0) --++ (0, -0.6) --++ (0.6, 0) --++ (0, -1); \path[draw, line width = 0.5, rounded corners=0.5] (0,0) rectangle (1,1); \end{scope} \path[draw, line width = 0.5] (0.5, 0.5) -- (1, 1); \path[draw, line width = 0.5] (0.6, 1) -- (1, 1) -- (1, 0.6); } } \definecolor{c53baa1}{RGB}{83,186,161} \definecolor{c202826}{RGB}{32,40,38} \def \rorglobalscale {0.1} \newcommand{\rorlogo}{% \begin{tikzpicture}[y=1cm, x=1cm, yscale=\rorglobalscale,xscale=\rorglobalscale, every node/.append style={scale=\rorglobalscale}, inner sep=0pt, outer sep=0pt] \begin{scope}[even odd rule,line join=round,miter limit=2.0,shift={(-0.025, 0.0216)}] \path[fill=c53baa1,nonzero rule,line join=round,miter limit=2.0] (1.8164, 3.012) -- (1.4954, 2.5204) -- (1.1742, 3.012) -- (1.8164, 3.012) -- cycle; \path[fill=c53baa1,nonzero rule,line join=round,miter limit=2.0] (3.1594, 3.012) -- (2.8385, 2.5204) -- (2.5172, 3.012) -- (3.1594, 3.012) -- cycle; \path[fill=c53baa1,nonzero rule,line join=round,miter limit=2.0] (1.1742, 0.0669) -- (1.4954, 0.5588) -- (1.8164, 0.0669) -- (1.1742, 0.0669) -- cycle; \path[fill=c53baa1,nonzero rule,line join=round,miter limit=2.0] (2.5172, 0.0669) -- (2.8385, 0.5588) -- (3.1594, 0.0669) -- (2.5172, 0.0669) -- cycle; \path[fill=c202826,nonzero rule,line join=round,miter limit=2.0] (3.8505, 1.4364).. controls (3.9643, 1.4576) and (4.0508, 1.5081) .. (4.1098, 1.5878).. controls (4.169, 1.6674) and (4.1984, 1.7642) .. (4.1984, 1.8777).. controls (4.1984, 1.9719) and (4.182, 2.0503) .. 
(4.1495, 2.1132).. controls (4.1169, 2.1762) and (4.0727, 2.2262) .. (4.0174, 2.2635).. controls (3.9621, 2.3006) and (3.8976, 2.3273) .. (3.824, 2.3432).. controls (3.7505, 2.359) and (3.6727, 2.367) .. (3.5909, 2.367) -- (2.9676, 2.367) -- (2.9676, 1.8688).. controls (2.9625, 1.8833) and (2.9572, 1.8976) .. (2.9514, 1.9119).. controls (2.9083, 2.0164) and (2.848, 2.1056) .. (2.7705, 2.1791).. controls (2.6929, 2.2527) and (2.6014, 2.3093) .. (2.495, 2.3487).. controls (2.3889, 2.3881) and (2.2728, 2.408) .. (2.1468, 2.408).. controls (2.0209, 2.408) and (1.905, 2.3881) .. (1.7986, 2.3487).. controls (1.6925, 2.3093) and (1.6007, 2.2527) .. (1.5232, 2.1791).. controls (1.4539, 2.1132) and (1.3983, 2.0346) .. (1.3565, 1.9436).. controls (1.3504, 2.009) and (1.3351, 2.0656) .. (1.3105, 2.1132).. controls (1.2779, 2.1762) and (1.2338, 2.2262) .. (1.1785, 2.2635).. controls (1.1232, 2.3006) and (1.0586, 2.3273) .. (0.985, 2.3432).. controls (0.9115, 2.359) and (0.8337, 2.367) .. (0.7519, 2.367) -- (0.1289, 2.367) -- (0.1289, 0.7562) -- (0.4837, 0.7562) -- (0.4837, 1.4002) -- (0.6588, 1.4002) -- (0.9956, 0.7562) -- (1.4211, 0.7562) -- (1.0118, 1.4364).. controls (1.1255, 1.4576) and (1.2121, 1.5081) .. (1.2711, 1.5878).. controls (1.2737, 1.5915) and (1.2761, 1.5954) .. (1.2787, 1.5991).. controls (1.2782, 1.5867) and (1.2779, 1.5743) .. (1.2779, 1.5616).. controls (1.2779, 1.4327) and (1.2996, 1.3158) .. (1.3428, 1.2113).. controls (1.3859, 1.1068) and (1.4462, 1.0176) .. (1.5237, 0.944).. controls (1.601, 0.8705) and (1.6928, 0.8139) .. (1.7992, 0.7744).. controls (1.9053, 0.735) and (2.0214, 0.7152) .. (2.1474, 0.7152).. controls (2.2733, 0.7152) and (2.3892, 0.735) .. (2.4956, 0.7744).. controls (2.6016, 0.8139) and (2.6935, 0.8705) .. (2.771, 0.944).. controls (2.8482, 1.0176) and (2.9086, 1.1068) .. (2.952, 1.2113).. controls (2.9578, 1.2253) and (2.9631, 1.2398) .. 
(2.9681, 1.2544) -- (2.9681, 0.7562) -- (3.3229, 0.7562) -- (3.3229, 1.4002) -- (3.4981, 1.4002) -- (3.8349, 0.7562) -- (4.2603, 0.7562) -- (3.8505, 1.4364) -- cycle(0.9628, 1.7777).. controls (0.9438, 1.7534) and (0.92, 1.7357) .. (0.8911, 1.7243).. controls (0.8623, 1.7129) and (0.83, 1.706) .. (0.7945, 1.7039).. controls (0.7588, 1.7015) and (0.7252, 1.7005) .. (0.6932, 1.7005) -- (0.4839, 1.7005) -- (0.4839, 2.0667) -- (0.716, 2.0667).. controls (0.7477, 2.0667) and (0.7805, 2.0643) .. (0.8139, 2.0598).. controls (0.8472, 2.0553) and (0.8768, 2.0466) .. (0.9025, 2.0336).. controls (0.9282, 2.0206) and (0.9496, 2.0021) .. (0.9663, 1.9778).. controls (0.9829, 1.9534) and (0.9914, 1.9209) .. (0.9914, 1.8799).. controls (0.9914, 1.8362) and (0.9819, 1.8021) .. (0.9628, 1.7777) -- cycle(2.6125, 1.3533).. controls (2.5889, 1.2904) and (2.5553, 1.2359) .. (2.5112, 1.1896).. controls (2.4672, 1.1433) and (2.4146, 1.1073) .. (2.3529, 1.0814).. controls (2.2916, 1.0554) and (2.2228, 1.0427) .. (2.1471, 1.0427).. controls (2.0712, 1.0427) and (2.0026, 1.0557) .. (1.9412, 1.0814).. controls (1.8799, 1.107) and (1.8272, 1.1433) .. (1.783, 1.1896).. controls (1.7391, 1.2359) and (1.7052, 1.2904) .. (1.6817, 1.3533).. controls (1.6581, 1.4163) and (1.6465, 1.4856) .. (1.6465, 1.5616).. controls (1.6465, 1.6359) and (1.6581, 1.705) .. (1.6817, 1.7687).. controls (1.7052, 1.8325) and (1.7388, 1.8873) .. (1.783, 1.9336).. controls (1.8269, 1.9799) and (1.8796, 2.0159) .. (1.9412, 2.0418).. controls (2.0026, 2.0675) and (2.0712, 2.0804) .. (2.1471, 2.0804).. controls (2.223, 2.0804) and (2.2916, 2.0675) .. (2.3529, 2.0418).. controls (2.4143, 2.0161) and (2.467, 1.9799) .. (2.5112, 1.9336).. controls (2.5551, 1.8873) and (2.5889, 1.8322) .. (2.6125, 1.7687).. controls (2.636, 1.705) and (2.6477, 1.6359) .. (2.6477, 1.5616).. controls (2.6477, 1.4856) and (2.636, 1.4163) .. (2.6125, 1.3533) -- cycle(3.8015, 1.7777).. controls (3.7825, 1.7534) and (3.7587, 1.7357) .. 
(3.7298, 1.7243).. controls (3.701, 1.7129) and (3.6687, 1.706) .. (3.6333, 1.7039).. controls (3.5975, 1.7015) and (3.5639, 1.7005) .. (3.5319, 1.7005) -- (3.3226, 1.7005) -- (3.3226, 2.0667) -- (3.5547, 2.0667).. controls (3.5864, 2.0667) and (3.6192, 2.0643) .. (3.6526, 2.0598).. controls (3.6859, 2.0553) and (3.7155, 2.0466) .. (3.7412, 2.0336).. controls (3.7669, 2.0206) and (3.7883, 2.0021) .. (3.805, 1.9778).. controls (3.8216, 1.9534) and (3.8301, 1.9209) .. (3.8301, 1.8799).. controls (3.8301, 1.8362) and (3.8206, 1.8021) .. (3.8015, 1.7777) -- cycle; \end{scope} \end{tikzpicture} } % --- Title / Authors --------------------------------------------------------- % patch \maketitle so that it doesn't center \patchcmd{\@maketitle}{center}{flushleft}{}{} \patchcmd{\@maketitle}{center}{flushleft}{}{} % patch \maketitle so that the font size for the title is normal \patchcmd{\@maketitle}{\LARGE}{\LARGE\sffamily}{}{} % patch the patch by authblk so that the author block is flush left \def\maketitle{{% \renewenvironment{tabular}[2][] {\begin{flushleft}} {\end{flushleft}} \AB@maketitle}} \renewcommand\AB@affilsepx{ \protect\Affilfont} %\renewcommand\AB@affilnote[1]{{\bfseries #1}\hspace{2pt}} \renewcommand\AB@affilnote[1]{{\bfseries #1}\hspace{3pt}} \renewcommand{\affil}[2][]% {\newaffiltrue\let\AB@blk@and\AB@pand \if\relax#1\relax\def\AB@note{\AB@thenote}\else\def\AB@note{#1}% \setcounter{Maxaffil}{0}\fi \begingroup \let\href=\href@Orig \let\protect\@unexpandable@protect \def\thanks{\protect\thanks}\def\footnote{\protect\footnote}% \@temptokena=\expandafter{\AB@authors}% {\def\\{\protect\\\protect\Affilfont}\xdef\AB@temp{#2}}% \xdef\AB@authors{\the\@temptokena\AB@las\AB@au@str \protect\\[\affilsep]\protect\Affilfont\AB@temp}% \gdef\AB@las{}\gdef\AB@au@str{}% {\def\\{, \ignorespaces}\xdef\AB@temp{#2}}% \@temptokena=\expandafter{\AB@affillist}% \xdef\AB@affillist{\the\@temptokena \AB@affilsep \AB@affilnote{\AB@note}\protect\Affilfont\AB@temp}% \endgroup 
\let\AB@affilsep\AB@affilsepx } \makeatother \renewcommand\Authfont{\sffamily\bfseries} \renewcommand\Affilfont{\sffamily\small\mdseries} \setlength{\affilsep}{1em} \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \else % if luatex or xelatex \ifxetex \usepackage{mathspec} \usepackage{fontspec} \else \usepackage{fontspec} \fi \defaultfontfeatures{Scale=MatchLowercase} \defaultfontfeatures[\sffamily]{Ligatures=TeX} \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1} \fi % use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} % use microtype if available \IfFileExists{microtype.sty}{% \usepackage{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \PassOptionsToPackage{usenames,dvipsnames}{color} % color is loaded by hyperref \urlstyle{same} % don't use monospace font for urls \ifLuaTeX \usepackage[bidi=basic]{babel} \else \usepackage[bidi=default]{babel} \fi \babelprovide[main,import]{american} % get rid of language-specific shorthands (see #6817): \let\LanguageShortHands\languageshorthands \def\languageshorthands#1{} \usepackage{color} \usepackage{fancyvrb} \newcommand{\VerbBar}{|} \newcommand{\VERB}{\Verb[commandchars=\\\{\}]} \DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} % Add ',fontsize=\small' for more characters per line \newenvironment{Shaded}{}{} \newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{#1}}} \newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}} \newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.49,0.56,0.16}{#1}} \newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}} \newcommand{\BuiltInTok}[1]{\textcolor[rgb]{0.00,0.50,0.00}{#1}} \newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}} \newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{#1}}} 
\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}} \newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.53,0.00,0.00}{#1}} \newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{#1}}} \newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{#1}} \newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}} \newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.73,0.13,0.13}{\textit{#1}}} \newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{#1}}} \newcommand{\ExtensionTok}[1]{#1} \newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{#1}} \newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{#1}} \newcommand{\ImportTok}[1]{\textcolor[rgb]{0.00,0.50,0.00}{\textbf{#1}}} \newcommand{\InformationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}} \newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{#1}}} \newcommand{\NormalTok}[1]{#1} \newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.40,0.40,0.40}{#1}} \newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{#1}} \newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.74,0.48,0.00}{#1}} \newcommand{\RegionMarkerTok}[1]{#1} \newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}} \newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.73,0.40,0.53}{#1}} \newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}} \newcommand{\VariableTok}[1]{\textcolor[rgb]{0.10,0.09,0.49}{#1}} \newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{#1}} \newcommand{\WarningTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{#1}}}} \usepackage{graphicx} \makeatletter \newsavebox\pandoc@box \newcommand*\pandocbounded[1]{% scales image to fit in text height/width \sbox\pandoc@box{#1}% \Gscale@div\@tempa{\textheight}{\dimexpr\ht\pandoc@box+\dp\pandoc@box\relax}% \Gscale@div\@tempb{\linewidth}{\wd\pandoc@box}% \ifdim\@tempb\p@<\@tempa\p@\let\@tempa\@tempb\fi% select the smaller of both 
\ifdim\@tempa\p@<\p@\scalebox{\@tempa}{\usebox\pandoc@box}% \else\usebox{\pandoc@box}% \fi% } % Set default figure placement to htbp \def\fps@figure{htbp} \makeatother \usepackage{graphicx,grffile} \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} \def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt} } \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \setcounter{secnumdepth}{0} % Redefines (sub)paragraphs to behave more like sections \ifx\paragraph\undefined\else \let\oldparagraph\paragraph \renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}} \fi \ifx\subparagraph\undefined\else \let\oldsubparagraph\subparagraph \renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}} \fi \ifLuaTeX \usepackage{selnolig} % disable illegal ligatures \fi \title{unimpeded: A Public Nested Sampling Database for Bayesian Cosmology} \author[1,2,3% % \ensuremath\mathparagraph]{Dily Duan Yi Ong% \,\orcidlink{0009-0004-8688-5088}\,% } \author[1,2% % ]{Will Handley% \,\orcidlink{0000-0002-5866-0445}\,% } \affil[1]{Kavli Institute for Cosmology, Madingley Road, Cambridge, CB3 0HA, UK% } \affil[2]{Astrophysics Group, Cavendish Laboratory, J.J. 
Thomson Avenue, Cambridge, CB3 0HE, UK% } \affil[3]{Newnham College, Sidgwick Avenue, Cambridge, CB3 9DF, UK% } \affil[$\mathparagraph$]{Corresponding author} \date{\vspace{-2.5ex}} \begin{document} \maketitle \marginpar{ \begin{flushleft} %\hrule \sffamily\small {\bfseries DOI:} \href{https://doi.org/10.xxxxxx/draft}{\color{linky}{10.xxxxxx/draft}} \vspace{2mm} {\bfseries Software} \begin{itemize} \setlength\itemsep{0em} \item \href{https://github.com/openjournals}{\color{linky}{Review}} \ExternalLink \item \href{https://github.com/openjournals}{\color{linky}{Repository}} \ExternalLink \item \href{https://doi.org/10.5281}{\color{linky}{Archive}} \ExternalLink \end{itemize} \vspace{2mm} \par\noindent\hrulefill\par \vspace{2mm} {\bfseries Editor:} \href{https://joss.theoj.org}{Open Journals} \ExternalLink \\ \vspace{1mm} {\bfseries Reviewers:} \begin{itemize} \setlength\itemsep{0em} \item \href{https://github.com/openjournals}{@openjournals} \end{itemize} \vspace{2mm} {\bfseries Submitted:} unsubmitted\\ {\bfseries Published:} unpublished \vspace{2mm} {\bfseries License}\\ Authors of papers retain copyright and release the work under a Creative Commons Attribution 4.0 International License (\href{https://creativecommons.org/licenses/by/4.0/}{\color{linky}{CC BY 4.0}}). \end{flushleft} } \section{Summary}\label{summary} Bayesian inference is central to modern cosmology. While parameter estimation is achievable with unnormalised posteriors traditionally obtained via MCMC methods, comprehensive model comparison and tension quantification require Bayesian evidences and normalised posteriors, which remain computationally prohibitive for many researchers. 
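For reference, the central quantity here is the Bayesian evidence (a standard definition): for data \(D\) and a model \(\mathcal{M}\) with parameters \(\theta\), likelihood \(\mathcal{L}(\theta)\) and prior \(\pi(\theta)\),

```latex
\[
  \mathcal{Z} \equiv P(D \mid \mathcal{M}) = \int \mathcal{L}(\theta)\,\pi(\theta)\,\mathrm{d}\theta ,
\]
```

a marginalisation over the full parameter space, which nested sampling estimates directly alongside the posterior.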
To address this, we present \texttt{unimpeded}, a publicly available Python library and data repository providing pre-computed nested sampling and MCMC chains, funded by DiRAC (DP192 and 264), together with their normalised posterior samples, computed using \texttt{Cobaya} (\citeproc{ref-Torrado2021}{Torrado \& Lewis, 2021}) and the Boltzmann solver \texttt{CAMB} (\citeproc{ref-Lewis1999}{Lewis et al., 2000}; \citeproc{ref-Lewis2002}{Lewis \& Bridle, 2002}). \texttt{unimpeded} delivers systematic analysis across a grid of eight cosmological models (including \(\Lambda\)CDM and seven extensions) and 39 modern cosmological datasets (comprising individual probes and their pairwise combinations). The built-in tension statistics calculator enables rapid computation of six tension quantification metrics. All chains are hosted on Zenodo\footnote{https://zenodo.org/} with permanent access via the \texttt{unimpeded} API, analogous to the renowned Planck Legacy Archive (\citeproc{ref-Dupac2015}{Dupac et al., 2015}) but utilising nested sampling (\citeproc{ref-Skilling2006}{Skilling, 2006}) in addition to traditional MCMC methods.

\section{\texorpdfstring{\texttt{unimpeded}}{unimpeded}}\label{unimpeded}

\texttt{unimpeded} addresses these challenges directly. It provides a pip-installable tool that leverages the \texttt{anesthetic} package (\citeproc{ref-Handley2019}{W. Handley, 2019}) for analysis and introduces seamless Zenodo integration for data management. The nested sampling theory and methodology are detailed in (\citeproc{ref-Ong2025}{Ong \& Handley, 2025}). Its main features are:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  \textbf{A Public Nested Sampling Grid:} The package provides access to a pre-computed grid of nested sampling chains and MCMC chains for 8 cosmological models (standard \(\Lambda\)CDM and seven extensions), run against 39 datasets (comprising individual probes and their pairwise combinations).
This saves the community significant computational resources and provides a common baseline for new analyses. Evidences and Kullback-Leibler divergences can be calculated jointly with \texttt{anesthetic} for model comparison and for quantifying the constraining power of datasets and models, respectively. The scientific results from this grid are presented in (\citeproc{ref-Ong2025}{Ong \& Handley, 2025}).
\item
  \textbf{Archival and Reproducibility via Zenodo:} \texttt{unimpeded} automates the process of archiving analysis products. The \texttt{DatabaseCreator} class bundles chains and metadata, uploading them to a Zenodo community to generate a permanent, citable Digital Object Identifier (DOI). The \texttt{DatabaseExplorer} class allows public users to download and analyse these chains, promoting open science and effortless reproducibility. Figure 1 illustrates the \texttt{unimpeded} ecosystem, detailing its three core functions: for data generation, it configures YAML files for HPC nested sampling; it then archives the chains on Zenodo, ensuring reproducibility with permanent DOIs; and finally it provides an interface for post-processing analysis and visualisation with \texttt{anesthetic}.
The following example demonstrates downloading chains:
\end{enumerate}

\begin{Shaded}
\begin{Highlighting}[]
\ImportTok{from}\NormalTok{ unimpeded.database }\ImportTok{import}\NormalTok{ DatabaseExplorer}
\CommentTok{\# Initialise DatabaseExplorer}
\NormalTok{dbe }\OperatorTok{=}\NormalTok{ DatabaseExplorer()}
\CommentTok{\# Get a list of currently available models and datasets}
\NormalTok{models\_list }\OperatorTok{=}\NormalTok{ dbe.models}
\NormalTok{datasets\_list }\OperatorTok{=}\NormalTok{ dbe.datasets}
\CommentTok{\# Choose model, dataset and sampling method}
\NormalTok{method }\OperatorTok{=} \StringTok{\textquotesingle{}ns\textquotesingle{}} \CommentTok{\# \textquotesingle{}ns\textquotesingle{} for nested sampling, \textquotesingle{}mcmc\textquotesingle{} for MCMC}
\NormalTok{model }\OperatorTok{=} \StringTok{"klcdm"} \CommentTok{\# from models\_list}
\NormalTok{dataset }\OperatorTok{=} \StringTok{"des\_y1.joint+planck\_2018\_CamSpec"} \CommentTok{\# from datasets\_list}
\CommentTok{\# Download samples chain}
\NormalTok{samples }\OperatorTok{=}\NormalTok{ dbe.download\_samples(method, model, dataset)}
\CommentTok{\# Download Cobaya and PolyChord run settings}
\NormalTok{info }\OperatorTok{=}\NormalTok{ dbe.download\_info(method, model, dataset)}
\end{Highlighting}
\end{Shaded}

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\tightlist
\item
  \textbf{Tension Statistics Calculator:} With the nested sampling chains and the built-in tension statistics calculator, six tension quantification metrics are available: the \(R\) statistic, information ratio \(I\), suspiciousness \(S\), Gaussian model dimensionality \(d_G\), tension significance in units of \(\sigma\), and the p-value. Each has characteristics optimised for different tasks, thoroughly discussed in (\citeproc{ref-Ong2025}{Ong \& Handley, 2025}).
\texttt{unimpeded} implements these statistics with the necessary correction to account for discarded prior volume (\citeproc{ref-Handley2019a}{W. Handley \& Lemos, 2019}; \citeproc{ref-Handley2021}{Will Handley \& Lemos, 2021}; \citeproc{ref-Ong2025}{Ong \& Handley, 2025}). Figure 2 demonstrates the tension calculator output, showing p-value-derived tension significance (\(\sigma\)) for 31 pairwise dataset combinations across 8 cosmological models, sorted by significance to highlight the dataset pairs in tension; caution should be exercised before combining such datasets. The following minimal example demonstrates the usage:
\end{enumerate}

\begin{Shaded}
\begin{Highlighting}[]
\ImportTok{from}\NormalTok{ unimpeded.tension }\ImportTok{import}\NormalTok{ tension\_calculator}
\NormalTok{tension\_samples }\OperatorTok{=}\NormalTok{ tension\_calculator(method}\OperatorTok{=}\StringTok{\textquotesingle{}ns\textquotesingle{}}\NormalTok{,}
\NormalTok{ model}\OperatorTok{=}\StringTok{\textquotesingle{}lcdm\textquotesingle{}}\NormalTok{,}
\NormalTok{ datasetA}\OperatorTok{=}\StringTok{\textquotesingle{}planck\_2018\_CamSpec\textquotesingle{}}\NormalTok{,}
\NormalTok{ datasetB}\OperatorTok{=}\StringTok{\textquotesingle{}des\_y1.joint\textquotesingle{}}\NormalTok{,}
\NormalTok{ nsamples}\OperatorTok{=}\DecValTok{1000}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{figure}
\centering
\pandocbounded{\includegraphics[keepaspectratio]{flowchart.pdf}}
\caption{The \texttt{unimpeded} ecosystem and workflow. At the centre, \texttt{unimpeded} manages data archival and retrieval through Zenodo, providing permanent DOIs and public access to pre-computed chains. For data generation, \texttt{unimpeded} configures YAML files for resource-intensive HPC nested sampling using \texttt{Cobaya}, \texttt{PolyChord}, and \texttt{CAMB}.
For analysis, users download chains via \texttt{DatabaseExplorer} and leverage \texttt{anesthetic} for visualisation (corner plots, posterior distributions, constraint contours) and tension quantification (six metrics: the \(R\) statistic, information ratio \(I\), suspiciousness \(S\), Gaussian model dimensionality \(d_G\), significance \(\sigma\), and \(p\)-value).\label{fig:workflow}}
\end{figure}

\begin{figure}
\centering
\pandocbounded{\includegraphics[keepaspectratio]{tension_stats_p_sorted_by_p.pdf}}
\caption{Tension analysis heatmap produced by \texttt{unimpeded} and \texttt{anesthetic}, displaying p-value-derived tension significance (\(\sigma\) values) for 31 pairwise dataset combinations across 8 cosmological models. Rows are sorted by significance, with the dataset pairs in greatest tension at the top. This demonstrates \texttt{unimpeded}'s capability to systematically quantify tensions and their model dependence.\label{fig:tension_heatmap}}
\end{figure}

While tools like \texttt{getdist} (\citeproc{ref-Lewis2019}{Lewis, 2019}) are excellent for MCMC analysis, and frameworks like \texttt{CosmoSIS} (\citeproc{ref-Zuntz2015}{Zuntz et al., 2015}) or \texttt{MontePython} (\citeproc{ref-Brinckmann2019}{Brinckmann \& Lesgourgues, 2019}) are used for running inference pipelines with samplers like \texttt{Cobaya} (\citeproc{ref-Torrado2021}{Torrado \& Lewis, 2021}), \texttt{unimpeded} fills a unique niche: it is not a sampler but a high-level analysis and database-management tool that extends the capabilities of its underlying engine, \texttt{anesthetic}, to create a public, reproducible, and statistically robust nested sampling resource for the cosmology community. The package is fully documented, tested, and available for installation via the Python Package Index (PyPI). A Jupyter notebook tutorial is also available to help users get started.
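As an illustration of how the six tension statistics relate to one another, the following self-contained sketch computes suspiciousness and its Gaussian-limit significance from log-evidences, Kullback-Leibler divergences, and model dimensionalities, following the conventions of W. Handley \& Lemos (2019). The numerical inputs are made-up toy values, not outputs of \texttt{unimpeded}:

```python
# Package-independent sketch of the tension statistics, using toy
# (made-up) values for illustration only.  Conventions follow
# Handley & Lemos (2019): R combines evidences, I combines KL
# divergences, S = R/I, and d_G is the shared model dimensionality.
from scipy.stats import chi2, norm

# Toy inputs for datasets A, B and their combination AB:
logZ_A, logZ_B, logZ_AB = -1415.2, -208.7, -1630.1   # log-evidences
D_A, D_B, D_AB = 18.3, 5.1, 21.9                     # KL divergences
d_A, d_B, d_AB = 6.2, 4.8, 6.7                       # model dimensionalities

log_R = logZ_AB - logZ_A - logZ_B   # evidence ratio R = Z_AB / (Z_A * Z_B)
log_I = D_A + D_B - D_AB            # information ratio
log_S = log_R - log_I               # suspiciousness (prior-independent)
d_G = d_A + d_B - d_AB              # dimensions constrained by both datasets

# In the Gaussian limit, d_G - 2 log S follows a chi^2_{d_G} distribution,
# yielding a tension probability p and an equivalent two-tailed sigma.
p = chi2.sf(d_G - 2 * log_S, df=d_G)
sigma = norm.isf(p / 2)
print(f"log R = {log_R:.2f}, log S = {log_S:.2f}, "
      f"d_G = {d_G:.2f}, p = {p:.2e}, sigma = {sigma:.2f}")
```

In practice, \texttt{unimpeded} and \texttt{anesthetic} compute these quantities directly from the nested sampling chains, including the prior-volume corrections discussed above.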
\section{Acknowledgements}\label{acknowledgements} We thank the developers of the open-source packages that this work relies upon, including \texttt{anesthetic}, \texttt{numpy}, \texttt{scipy}, \texttt{pandas}, and \texttt{corner.py}. This work was performed using the Cambridge Service for Data Driven Discovery (CSD3), operated by the University of Cambridge Research Computing Service, provided by Dell EMC and Intel using Tier-2 funding from the Engineering and Physical Sciences Research Council (capital grant EP/P020259/1), and DiRAC funding from the Science and Technology Facilities Council (www.dirac.ac.uk). \section*{References}\label{references} \addcontentsline{toc}{section}{References} \phantomsection\label{refs} \begin{CSLReferences}{1}{0.5} \bibitem[\citeproctext]{ref-Brinckmann2019} Brinckmann, T., \& Lesgourgues, J. (2019). {MontePython 3: boosted MCMC sampler and other features}. \emph{Physics of the Dark Universe}, \emph{24}. \url{https://doi.org/10.1016/j.dark.2018.100260} \bibitem[\citeproctext]{ref-Dupac2015} Dupac, X., Arviset, C., Fernandez Barreiro, M., Lopez-Caniego, M., \& Tauber, J. (2015, December). {The Planck Legacy Archive}. \emph{Science Operations 2015: Science Data Management}. \url{https://doi.org/10.5281/zenodo.34639} \bibitem[\citeproctext]{ref-Handley2019} Handley, W. (2019). {anesthetic: nested sampling visualisation}. \emph{Journal of Open Source Software}, \emph{4}(37), 1414. \url{https://doi.org/10.21105/joss.01414} \bibitem[\citeproctext]{ref-Handley2019a} Handley, W., \& Lemos, P. (2019). {Quantifying tension: interpreting the DES evidence ratio}. \emph{Physical Review D}, \emph{100}(4). \url{https://doi.org/10.1103/PhysRevD.100.043504} \bibitem[\citeproctext]{ref-Handley2021} Handley, Will, \& Lemos, P. (2021). {Quantifying the global parameter tensions between ACT, SPT, and Planck}. \emph{Physical Review D}, \emph{103}(6). \url{https://doi.org/10.1103/PhysRevD.103.063529} \bibitem[\citeproctext]{ref-Lewis2019} Lewis, A. 
(2019). {GetDist: a Python package for analysing Monte Carlo samples}. \emph{arXiv e-Prints}. \url{https://arxiv.org/abs/1910.13970} \bibitem[\citeproctext]{ref-Lewis2002} Lewis, A., \& Bridle, S. (2002). {Cosmological parameters from CMB and other data: A Monte Carlo approach}. \emph{Physical Review D}, \emph{66}(10), 103511. \url{https://doi.org/10.1103/PhysRevD.66.103511} \bibitem[\citeproctext]{ref-Lewis1999} Lewis, A., Challinor, A., \& Lasenby, A. (2000). {Efficient Computation of Cosmic Microwave Background Anisotropies in Closed Friedmann-Robertson-Walker Models}. \emph{The Astrophysical Journal}, \emph{538}, 473--476. \url{https://doi.org/10.1086/309179} \bibitem[\citeproctext]{ref-Ong2025} Ong, D. D. Y., \& Handley, W. (2025). {Tension statistics for nested sampling}. \emph{arXiv e-Prints}. \url{https://arxiv.org/abs/2511.04661} \bibitem[\citeproctext]{ref-Skilling2006} Skilling, J. (2006). Nested sampling for general bayesian computation. \emph{Bayesian Analysis}, \emph{1}(4), 833--859. \url{https://doi.org/10.1214/06-BA127} \bibitem[\citeproctext]{ref-Torrado2021} Torrado, J., \& Lewis, A. (2021). {Cobaya: Code for Bayesian analysis of hierarchical physical models}. \emph{Journal of Cosmology and Astroparticle Physics}, \emph{2021}(05), 057. \url{https://doi.org/10.1088/1475-7516/2021/05/057} \bibitem[\citeproctext]{ref-Zuntz2015} Zuntz, J., Paterno, M., Jennings, E., Rudd, D., Manzotti, A., Dodelson, S., Bridle, S., Sehrish, S., \& Kowalkowski, J. (2015). {CosmoSIS: Modular cosmological parameter estimation}. \emph{Astronomy and Computing}, \emph{12}, 45--59. \url{https://doi.org/10.1016/j.ascom.2015.05.005} \end{CSLReferences} \end{document} ``` 4. **Bibliographic Information:** ```bbl ``` 5. 
**Author Information:** - Lead Author: {'name': 'Dily Duan Yi Ong'} - Full Authors List: ```yaml Dily Ong: phd: start: 2023-10-01 supervisors: - Will Handley thesis: null original_image: images/originals/dily_ong.jpg image: /assets/group/images/dily_ong.jpg Will Handley: pi: start: 2020-10-01 thesis: null postdoc: start: 2016-10-01 end: 2020-10-01 thesis: null phd: start: 2012-10-01 end: 2016-09-30 supervisors: - Anthony Lasenby - Mike Hobson thesis: 'Kinetic initial conditions for inflation: theory, observation and methods' original_image: images/originals/will_handley.jpeg image: /assets/group/images/will_handley.jpg links: Webpage: https://willhandley.co.uk ``` This YAML file provides a concise snapshot of an academic research group. It lists members by name along with their academic roles—ranging from Part III and summer projects to MPhil, PhD, and postdoctoral positions—with corresponding dates, thesis topics, and supervisor details. Supplementary metadata includes image paths and links to personal or departmental webpages. A dedicated "coi" section profiles senior researchers, highlighting the group’s collaborative mentoring network and career trajectories in cosmology, astrophysics, and Bayesian data analysis. ==================================================================================== Final Output Instructions ==================================================================================== - Combine all data sources to create a seamless, engaging narrative. - Follow the exact Markdown output format provided at the top. - Do not include any extra explanation, commentary, or wrapping beyond the specified Markdown. - Validate that every bibliographic reference with a DOI or arXiv identifier is converted into a Markdown link as per the examples. - Validate that every Markdown author link corresponds to a link in the author information block. - Before finalizing, confirm that no LaTeX citation commands or other undesired formatting remain. 
- Before finalizing, confirm that the link to the paper itself [2511.05470](https://arxiv.org/abs/2511.05470) is featured in the first sentence. Generate only the final Markdown output that meets all these requirements. {% endraw %}