{% raw %} Title: Create a Markdown Blog Post Integrating Research Details and a Featured Paper ==================================================================================== This task involves generating a Markdown file (ready for a GitHub-served Jekyll site) that integrates our research details with a featured research paper. The output must follow the exact format and conventions described below. ==================================================================================== Output Format (Markdown): ------------------------------------------------------------------------------------ --- layout: post title: "$\texttt{unimpeded}$: A Public Grid of Nested Sampling Chains for Cosmological Model Comparison and Tension Analysis" date: 2025-11-06 categories: papers --- ![AI generated image](/assets/images/posts/2025-11-06-2511.04661.png) Dily Ong, Will Handley Content generated by [gemini-2.5-pro](https://deepmind.google/technologies/gemini/) using [this prompt](/prompts/content/2025-11-06-2511.04661.txt). Image generated by [imagen-4.0-generate-001](https://deepmind.google/technologies/gemini/) using [this prompt](/prompts/images/2025-11-06-2511.04661.txt). ------------------------------------------------------------------------------------ ==================================================================================== Please adhere strictly to the following instructions: ==================================================================================== Section 1: Content Creation Instructions ==================================================================================== 1. **Generate the Page Body:** - Write a well-composed, engaging narrative that is suitable for a scholarly audience interested in advanced AI and astrophysics. - Ensure the narrative is original and reflective of the tone, style, and content in the "Homepage Content" block (provided below), but do not reuse its content. - Use bullet points, subheadings, or other formatting to enhance readability. 2. **Highlight Key Research Details:** - Emphasize the contributions and impact of the paper, focusing on its methodology, significance, and context within current research. - Specifically highlight the lead author ({'name': 'Dily Duan Yi Ong'}). When referencing any author, use Markdown links from the Author Information block (choose academic or GitHub links over social media). 3. **Integrate Data from Multiple Sources:** - Seamlessly weave information from the following: - **Paper Metadata (YAML):** Essential details including the title and authors. - **Paper Source (TeX):** Technical content from the paper. - **Bibliographic Information (bbl):** Extract bibliographic references. - **Author Information (YAML):** Profile details for constructing Markdown links. - Merge insights from the Paper Metadata, TeX source, Bibliographic Information, and Author Information blocks into a coherent narrative—do not treat these as separate or isolated pieces. - Insert the generated narrative between the HTML comments: and 4. **Generate Bibliographic References:** - Review the Bibliographic Information block carefully. - For each reference that includes a DOI or arXiv identifier: - For DOIs, generate a link formatted as: [10.1234/xyz](https://doi.org/10.1234/xyz) - For arXiv entries, generate a link formatted as: [2103.12345](https://arxiv.org/abs/2103.12345) - **Important:** Do not use any LaTeX citation commands (e.g., `\cite{...}`). Every reference must be rendered directly as a Markdown link.
For example, instead of `\cite{mycitation}`, output `[mycitation](https://doi.org/mycitation)` - **Incorrect:** `\cite{10.1234/xyz}` - **Correct:** `[10.1234/xyz](https://doi.org/10.1234/xyz)` - Ensure that at least three (3) of the most relevant references are naturally integrated into the narrative. - Ensure that the link to the Featured paper [2511.04661](https://arxiv.org/abs/2511.04661) is included in the first sentence. 5. **Final Formatting Requirements:** - The output must be plain Markdown; do not wrap it in Markdown code fences. - Preserve the YAML front matter exactly as provided. ==================================================================================== Section 2: Provided Data for Integration ==================================================================================== 1. **Homepage Content (Tone and Style Reference):** ```markdown --- layout: home --- ![AI generated image](/assets/images/index.png) The Handley Research Group stands at the forefront of cosmological exploration, pioneering novel approaches that fuse fundamental physics with the transformative power of artificial intelligence. We are a dynamic team of researchers, including PhD students, postdoctoral fellows, and project students, based at the University of Cambridge. Our mission is to unravel the mysteries of the Universe, from its earliest moments to its present-day structure and ultimate fate. We tackle fundamental questions in cosmology and astrophysics, with a particular focus on leveraging advanced Bayesian statistical methods and AI to push the frontiers of scientific discovery. Our research spans a wide array of topics, including the [primordial Universe](https://arxiv.org/abs/1907.08524), [inflation](https://arxiv.org/abs/1807.06211), the nature of [dark energy](https://arxiv.org/abs/2503.08658) and [dark matter](https://arxiv.org/abs/2405.17548), [21-cm cosmology](https://arxiv.org/abs/2210.07409), the [Cosmic Microwave Background (CMB)](https://arxiv.org/abs/1807.06209), and [gravitational wave astrophysics](https://arxiv.org/abs/2411.17663). ### Our Research Approach: Innovation at the Intersection of Physics and AI At The Handley Research Group, we develop and apply cutting-edge computational techniques to analyze complex astronomical datasets. Our work is characterized by a deep commitment to principled [Bayesian inference](https://arxiv.org/abs/2205.15570) and the innovative application of [artificial intelligence (AI) and machine learning (ML)](https://arxiv.org/abs/2504.10230). **Key Research Themes:** * **Cosmology:** We investigate the early Universe, including [quantum initial conditions for inflation](https://arxiv.org/abs/2002.07042) and the generation of [primordial power spectra](https://arxiv.org/abs/2112.07547). We explore the enigmatic nature of [dark energy, using methods like non-parametric reconstructions](https://arxiv.org/abs/2503.08658), and search for new insights into [dark matter](https://arxiv.org/abs/2405.17548). A significant portion of our efforts is dedicated to [21-cm cosmology](https://arxiv.org/abs/2104.04336), aiming to detect faint signals from the Cosmic Dawn and the Epoch of Reionization. * **Gravitational Wave Astrophysics:** We develop methods for [analyzing gravitational wave signals](https://arxiv.org/abs/2411.17663), extracting information about extreme astrophysical events and fundamental physics. 
* **Bayesian Methods & AI for Physical Sciences:** A core component of our research is the development of novel statistical and AI-driven methodologies. This includes advancing [nested sampling techniques](https://arxiv.org/abs/1506.00171) (e.g., [PolyChord](https://arxiv.org/abs/1506.00171), [dynamic nested sampling](https://arxiv.org/abs/1704.03459), and [accelerated nested sampling with $\beta$-flows](https://arxiv.org/abs/2411.17663)), creating powerful [simulation-based inference (SBI) frameworks](https://arxiv.org/abs/2504.10230), and employing [machine learning for tasks such as radiometer calibration](https://arxiv.org/abs/2504.16791), [cosmological emulation](https://arxiv.org/abs/2503.13263), and [mitigating radio frequency interference](https://arxiv.org/abs/2211.15448). We also explore the potential of [foundation models for scientific discovery](https://arxiv.org/abs/2401.00096). **Technical Contributions:** Our group has a strong track record of developing widely-used scientific software. Notable examples include: * [**PolyChord**](https://arxiv.org/abs/1506.00171): A next-generation nested sampling algorithm for Bayesian computation. * [**anesthetic**](https://arxiv.org/abs/1905.04768): A Python package for processing and visualizing nested sampling runs. * [**GLOBALEMU**](https://arxiv.org/abs/2104.04336): An emulator for the sky-averaged 21-cm signal. * [**maxsmooth**](https://arxiv.org/abs/2007.14970): A tool for rapid maximally smooth function fitting. * [**margarine**](https://arxiv.org/abs/2205.12841): For marginal Bayesian statistics using normalizing flows and KDEs. * [**fgivenx**](https://arxiv.org/abs/1908.01711): A package for functional posterior plotting. * [**nestcheck**](https://arxiv.org/abs/1804.06406): Diagnostic tests for nested sampling calculations. ### Impact and Discoveries Our research has led to significant advancements in cosmological data analysis and yielded new insights into the Universe. Key achievements include: * Pioneering the development and application of advanced Bayesian inference tools, such as [PolyChord](https://arxiv.org/abs/1506.00171), which has become a cornerstone for cosmological parameter estimation and model comparison globally. * Making significant contributions to the analysis of major cosmological datasets, including the [Planck mission](https://arxiv.org/abs/1807.06209), providing some of the tightest constraints on cosmological parameters and models of [inflation](https://arxiv.org/abs/1807.06211). * Developing novel AI-driven approaches for astrophysical challenges, such as using [machine learning for radiometer calibration in 21-cm experiments](https://arxiv.org/abs/2504.16791) and [simulation-based inference for extracting cosmological information from galaxy clusters](https://arxiv.org/abs/2504.10230). * Probing the nature of dark energy through innovative [non-parametric reconstructions of its equation of state](https://arxiv.org/abs/2503.08658) from combined datasets. * Advancing our understanding of the early Universe through detailed studies of [21-cm signals from the Cosmic Dawn and Epoch of Reionization](https://arxiv.org/abs/2301.03298), including the development of sophisticated foreground modelling techniques and emulators like [GLOBALEMU](https://arxiv.org/abs/2104.04336). 
* Developing new statistical methods for quantifying tensions between cosmological datasets ([Quantifying tensions in cosmological parameters: Interpreting the DES evidence ratio](https://arxiv.org/abs/1902.04029)) and for robust Bayesian model selection ([Bayesian model selection without evidences: application to the dark energy equation-of-state](https://arxiv.org/abs/1506.09024)). * Exploring fundamental physics questions such as potential [parity violation in the Large-Scale Structure using machine learning](https://arxiv.org/abs/2410.16030). ### Charting the Future: AI-Powered Cosmological Discovery The Handley Research Group is poised to lead a new era of cosmological analysis, driven by the explosive growth in data from next-generation observatories and transformative advances in artificial intelligence. Our future ambitions are centred on harnessing these capabilities to address the most pressing questions in fundamental physics. **Strategic Research Pillars:** * **Next-Generation Simulation-Based Inference (SBI):** We are developing advanced SBI frameworks to move beyond traditional likelihood-based analyses. This involves creating sophisticated codes for simulating [Cosmic Microwave Background (CMB)](https://arxiv.org/abs/1908.00906) and [Baryon Acoustic Oscillation (BAO)](https://arxiv.org/abs/1607.00270) datasets from surveys like DESI and 4MOST, incorporating realistic astrophysical effects and systematic uncertainties. Our AI initiatives in this area focus on developing and implementing cutting-edge SBI algorithms, particularly [neural ratio estimation (NRE) methods](https://arxiv.org/abs/2407.15478), to enable robust and scalable inference from these complex simulations. * **Probing Fundamental Physics:** Our enhanced analytical toolkit will be deployed to test the standard cosmological model ($\Lambda$CDM) with unprecedented precision and to explore [extensions to Einstein's General Relativity](https://arxiv.org/abs/2006.03581). We aim to constrain a wide range of theoretical models, from modified gravity to the nature of [dark matter](https://arxiv.org/abs/2106.02056) and [dark energy](https://arxiv.org/abs/1701.08165). This includes leveraging data from upcoming [gravitational wave observatories](https://arxiv.org/abs/1803.10210) like LISA, alongside CMB and large-scale structure surveys from facilities such as Euclid and JWST. * **Synergies with Particle Physics:** We will continue to strengthen the connection between cosmology and particle physics by expanding the [GAMBIT framework](https://arxiv.org/abs/2009.03286) to interface with our new SBI tools. This will facilitate joint analyses of cosmological and particle physics data, providing a holistic approach to understanding the Universe's fundamental constituents. * **AI-Driven Theoretical Exploration:** We are pioneering the use of AI, including [large language models and symbolic computation](https://arxiv.org/abs/2401.00096), to automate and accelerate the process of theoretical model building and testing. This innovative approach will allow us to explore a broader landscape of physical theories and derive new constraints from diverse astrophysical datasets, such as those from GAIA. Our overarching goal is to remain at the forefront of scientific discovery by integrating the latest AI advancements into every stage of our research, from theoretical modeling to data analysis and interpretation. We are excited by the prospect of using these powerful new tools to unlock the secrets of the cosmos. 
Content generated by [gemini-2.5-pro-preview-05-06](https://deepmind.google/technologies/gemini/) using [this prompt](/prompts/content/index.txt). Image generated by [imagen-3.0-generate-002](https://deepmind.google/technologies/gemini/) using [this prompt](/prompts/images/index.txt). ``` 2. **Paper Metadata:** ```yaml !!python/object/new:feedparser.util.FeedParserDict dictitems: id: http://arxiv.org/abs/2511.04661v1 guidislink: true link: https://arxiv.org/abs/2511.04661v1 title: '$\texttt{unimpeded}$: A Public Grid of Nested Sampling Chains for Cosmological Model Comparison and Tension Analysis' title_detail: !!python/object/new:feedparser.util.FeedParserDict dictitems: type: text/plain language: null base: '' value: '$\texttt{unimpeded}$: A Public Grid of Nested Sampling Chains for Cosmological Model Comparison and Tension Analysis' updated: '2025-11-06T18:48:39Z' updated_parsed: !!python/object/apply:time.struct_time - !!python/tuple - 2025 - 11 - 6 - 18 - 48 - 39 - 3 - 310 - 0 - tm_zone: null tm_gmtoff: null links: - !!python/object/new:feedparser.util.FeedParserDict dictitems: href: https://arxiv.org/abs/2511.04661v1 rel: alternate type: text/html - !!python/object/new:feedparser.util.FeedParserDict dictitems: href: https://arxiv.org/pdf/2511.04661v1 rel: related type: application/pdf title: pdf summary: "Bayesian inference is central to modern cosmology, yet comprehensive model\ \ comparison and tension quantification remain computationally prohibitive for\ \ many researchers. To address this, we release $\\texttt{unimpeded}$, a publicly\ \ available Python library and data repository providing pre-computed nested sampling\ \ and MCMC chains. We apply this resource to conduct a systematic analysis across\ \ a grid of eight cosmological models, including $\u039B$CDM and seven extensions,\ \ and 39 datasets, including individual probes and their pairwise combinations.\ \ Our model comparison reveals that whilst individual datasets show varied preferences\ \ for model extensions, the base $\u039B$CDM model is most frequently preferred\ \ in combined analyses, with the general trend suggesting that evidence for new\ \ physics is diluted when probes are combined. Using five complementary statistics,\ \ we quantify tensions, finding the most significant to be between DES and Planck\ \ (3.57$\u03C3$) and SH0ES and Planck (3.27$\u03C3$) within $\u039B$CDM. We characterise\ \ the $S_8$ tension as high-dimensional ($d_G=6.62$) and resolvable in extended\ \ models, whereas the Hubble tension is low-dimensional and persists across the\ \ model space. Caution should be exercised when combining datasets in tension.\ \ The $\\texttt{unimpeded}$ data products, hosted on Zenodo, provide a powerful\ \ resource for reproducible cosmological analysis and underscore the robustness\ \ of the $\u039B$CDM model against the current compendium of data." summary_detail: !!python/object/new:feedparser.util.FeedParserDict dictitems: type: text/plain language: null base: '' value: "Bayesian inference is central to modern cosmology, yet comprehensive\ \ model comparison and tension quantification remain computationally prohibitive\ \ for many researchers. To address this, we release $\\texttt{unimpeded}$,\ \ a publicly available Python library and data repository providing pre-computed\ \ nested sampling and MCMC chains. 
We apply this resource to conduct a systematic\ \ analysis across a grid of eight cosmological models, including $\u039B$CDM\ \ and seven extensions, and 39 datasets, including individual probes and their\ \ pairwise combinations. Our model comparison reveals that whilst individual\ \ datasets show varied preferences for model extensions, the base $\u039B\ $CDM model is most frequently preferred in combined analyses, with the general\ \ trend suggesting that evidence for new physics is diluted when probes are\ \ combined. Using five complementary statistics, we quantify tensions, finding\ \ the most significant to be between DES and Planck (3.57$\u03C3$) and SH0ES\ \ and Planck (3.27$\u03C3$) within $\u039B$CDM. We characterise the $S_8$\ \ tension as high-dimensional ($d_G=6.62$) and resolvable in extended models,\ \ whereas the Hubble tension is low-dimensional and persists across the model\ \ space. Caution should be exercised when combining datasets in tension. The\ \ $\\texttt{unimpeded}$ data products, hosted on Zenodo, provide a powerful\ \ resource for reproducible cosmological analysis and underscore the robustness\ \ of the $\u039B$CDM model against the current compendium of data." tags: - !!python/object/new:feedparser.util.FeedParserDict dictitems: term: astro-ph.CO scheme: http://arxiv.org/schemas/atom label: null - !!python/object/new:feedparser.util.FeedParserDict dictitems: term: astro-ph.IM scheme: http://arxiv.org/schemas/atom label: null published: '2025-11-06T18:48:39Z' published_parsed: !!python/object/apply:time.struct_time - !!python/tuple - 2025 - 11 - 6 - 18 - 48 - 39 - 3 - 310 - 0 - tm_zone: null tm_gmtoff: null arxiv_comment: 47 pages, 13 figures arxiv_primary_category: term: astro-ph.CO authors: - !!python/object/new:feedparser.util.FeedParserDict dictitems: name: Dily Duan Yi Ong - !!python/object/new:feedparser.util.FeedParserDict dictitems: name: Will Handley author_detail: !!python/object/new:feedparser.util.FeedParserDict dictitems: name: Will Handley author: Will Handley ``` 3. 
**Paper Source (TeX):** ```tex \documentclass[a4paper,11pt]{article} \usepackage{jcappub} % for details on the use of the package, please see the JINST-author-manual % \usepackage{lineno} % Uncomment for line numbers if needed % \linenumbers % Additional packages \usepackage{cleveref} \usepackage{amsmath} \usepackage{tabularx} \usepackage{booktabs} \usepackage{subcaption} % --- JOURNAL ABBREVIATIONS --- \newcommand{\mnras}{Monthly Notices of the Royal Astronomical Society} \newcommand{\apj}{The Astrophysical Journal} \newcommand{\prd}{Phys.\ Rev.\ D} \newcommand{\jcap}{Journal of Cosmology and Astroparticle Physics} \newcommand{\prl}{Physical Review Letters} % --- Probability shorthands --- \newcommand{\Prob}{\text{P}} % Probability \newcommand{\posterior}{\mathcal{P}} % Posterior \newcommand{\likelihood}{\mathcal{L}} % Likelihood \newcommand{\prior}{\pi} % Prior \newcommand{\evidence}{\mathcal{Z}} % Evidence \newcommand{\data}{D} % Data \newcommand{\model}{\mathcal{M}} % Model \newcommand{\params}{\theta} % parameters \newcommand{\paramsM}{\theta_\mathcal{M}} % parameters_model \newcommand{\KL}{\mathcal{D}_{\text{KL}}} % Kullback-Leibler Divergence \newcommand{\vprior}{V_\pi} % Prior Volume \newcommand{\shannon}{\mathcal{I}} % Shannon Information \newcommand{\vposterior}{V_\mathcal{P}} % Posterior Volume \arxivnumber{[INSERT ARXIV NUMBER IF AVAILABLE]} % Only if you have one \title{\boldmath \texttt{unimpeded}: A Public Grid of Nested Sampling Chains for Cosmological Model Comparison and Tension Analysis} % Authors \author[1,2]{Dily Duan Yi Ong\note{Corresponding author.}} \author[1,2]{and Will Handley} \affiliation[1]{Kavli Institute for Cosmology, University of Cambridge,\\Madingley Road, Cambridge, CB3 0HA, U.K.} \affiliation[2]{Cavendish Laboratory, University of Cambridge,\\J.J. Thomson Avenue, Cambridge, CB3 0HE, U.K.} % E-mail addresses: only for the corresponding author \emailAdd{dlo26@cam.ac.uk} \abstract{Bayesian inference is central to modern cosmology, yet comprehensive model comparison and tension quantification remain computationally prohibitive for many researchers. To address this, we release \texttt{unimpeded}, a publicly available Python library and data repository providing pre-computed nested sampling and MCMC chains. We apply this resource to conduct a systematic analysis across a grid of eight cosmological models, including $\Lambda$CDM and seven extensions, and 39 datasets, including individual probes and their pairwise combinations. Our model comparison reveals that whilst individual datasets show varied preferences for model extensions, the base $\Lambda$CDM model is most frequently preferred in combined analyses, with the general trend suggesting that evidence for new physics is diluted when probes are combined. Using five complementary statistics, we quantify tensions, finding the most significant to be between DES and Planck ($\sigma=3.57$) and SH0ES and Planck ($\sigma=3.27$) within $\Lambda$CDM. We characterise the $S_8$ tension as high-dimensional ($d_G=6.62$) and resolvable in extended models, whereas the Hubble tension is low-dimensional and persists across the model space. Caution should be exercised when combining datasets in tension. The \texttt{unimpeded} data products, hosted on Zenodo, provide a powerful resource for reproducible cosmological analysis and underscore the robustness of the $\Lambda$CDM model against the current compendium of data. 
} \begin{document} \maketitle \flushbottom \section{Introduction} \label{sec:introduction} Bayesian methods of inference are widely used in modern cosmology for parameter estimation, model comparison and tension quantification. Parameter estimation refers to the process of determining, from observed data, the values of the cosmological parameters which describe the properties of a model. Model comparison refers to the evaluation and selection between different cosmological models, and tension quantification is the measurement and study of discrepancies between different observed datasets which are theoretically predicted to be in agreement by a cosmological model. The last two of these have gained more prominence in recent times due to disparities that have emerged within the context of the concordance model regarding the estimated value of the Hubble constant $H_{0}$~\cite{Verde2019NatAs} using Cosmic Microwave Background (CMB) and Supernovae data (commonly referred to as the Hubble tension), the clustering amplitude $\sigma_{8}$~\cite{Joudaki2017MNRAS} using CMB and weak lensing, and the curvature $\Omega_{K}$~\cite{Handley2021PRD,DiValentino2020NatAs} using CMB and lensing/BAO, and between CMB datasets. These tensions may, excitingly, point towards physics beyond the standard $\Lambda$ Cold Dark Matter ($\Lambda$CDM) concordance model. Parameter estimation has commonly been performed using Markov chain Monte Carlo (MCMC) methods, which are effective for exploring the posterior distributions of model parameters given a set of data and a model. The Planck Legacy Archive (PLA)~\cite{Planck2018params} has been an invaluable community resource, providing MCMC chains for a grid of models and datasets, primarily facilitating parameter estimation. However, MCMC methods are not suitable for calculating Bayesian evidence, which is essential for model comparison and tension quantification. Nested sampling~\cite{Skilling2006,Lemos2021,Handley2019,Lemos2020} has emerged as a powerful alternative, specifically tailored for model comparison and tension quantification. It is a Monte Carlo sampling technique used to efficiently compute the evidence and concurrently generate samples from the posterior distribution as a by-product, hence enabling parameter estimation without extra expense. However, the computational cost of nested sampling is still significant, especially when considering the large parameter spaces and complex likelihoods involved in modern cosmological analyses. This paper introduces \texttt{unimpeded}\footnote{The \texttt{unimpeded} library and its source code are available at \url{https://github.com/handley-lab/unimpeded}.}, a publicly available pip-installable Python library and associated data repository. The primary aim of \texttt{unimpeded} is to provide a grid analogous to the PLA but utilising nested sampling chains, thereby enabling robust model comparison and tension quantification alongside parameter estimation. This initiative directly supports the goals of our DiRAC-funded projects (DP192 and 264), which seek a systematic examination of model-dataset combinations to uncover patterns that might illuminate the path towards resolving current cosmological puzzles or identifying a successor to $\Lambda$CDM. The \texttt{unimpeded} grid is designed to incorporate a broad variety of modern datasets and to expand as new data and models become relevant.
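% For orientation, the evidence that these nested sampling chains make accessible is accumulated
% as a quadrature over the sequence of discarded (``dead'') points. The following is a minimal
% sketch of the standard estimator of ref.~\cite{Skilling2006}, assuming a constant number of
% live points $n_{\text{live}}$; it is illustrative only, not a description of any particular
% sampler's implementation:
\begin{equation*}
\evidence \approx \sum_i \likelihood_i \left(X_{i-1} - X_i\right), \qquad X_i \approx e^{-i/n_{\text{live}}},
\end{equation*}
% where $\likelihood_i$ is the likelihood of the $i$-th dead point and $X_i$ the prior volume it encloses.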
All associated data products, including nested sampling and MCMC chains, are made publicly and permanently available on Zenodo\footnote{\url{https://zenodo.org/}}. This paper is structured as follows. In Section~\ref{sec:theory}, we review the theoretical foundations of Bayesian inference and the three pillars of Bayesian cosmological analysis, namely parameter estimation, model comparison and tension quantification. We also discuss the concordance $\Lambda$CDM model and other models. We then detail our specific methodological approach with nested sampling in Section~\ref{sec:methodology}. The \texttt{unimpeded} library and its core functionalities are introduced in Section~\ref{sec:unimpeded_action}. We apply this framework to cosmological data, presenting our main findings in Section~\ref{sec:results}, which comprises wide grids of model comparison and tension statistics. \section{Theory} \label{sec:theory} \subsection{Nomenclature} \label{ssec:nomenclature} The Universe provides a single laboratory for physics, but one whose experimental settings we can only observe, not control. Our goal is to utilise observables to rigorously test predictive cosmological models, seeking both to quantify how likely they are to be true given the data and to improve their parameter constraints. Bayesian inference provides a framework for systematically updating our beliefs about models and their parameters in light of new data. A predictive model $\model$ contains a set of variable parameters $\params$, to be confronted with some observed dataset $\data$. $\data$ is typically a collection of measurements or observations, such as the Cosmic Microwave Background (CMB), baryon acoustic oscillations (BAO), supernovae data, weak lensing data and gravitational waves. Cosmological models $\model$ are theoretical frameworks that describe the physical properties and evolution of the Universe, typically expressed by a metric with a set of cosmological parameters $\paramsM$. $\paramsM$ has the subscript $\model$ to indicate that the parameters are specific to the model $\model$. For example, the concordance $\Lambda$CDM model has 6 parameters: the Hubble constant $H_0$, the baryon density $\Omega_b h^2$, the cold dark matter density $\Omega_c h^2$, the scalar spectral index $n_s$, the amplitude of primordial scalar perturbations $A_s$ and the reionisation optical depth $\tau_{\mathrm{reio}}$. When only one model is considered, we can drop the subscript $\model$ and write $\params$ instead of $\paramsM$. \subsubsection{Bayesian Inference} Before analysing any data, we can express our beliefs about the parameters $\params$ for a specific model $\model$, termed the prior, \begin{equation} \Prob(\params | \model) \equiv \prior(\params). \end{equation} Common choices for the prior include uniform or log-uniform distributions over a range of theoretically and physically allowed values or Gaussian distributions centred around expected values. After considering the observed data $\data$, we can update our beliefs about the parameters $\params$, termed the posterior, \begin{equation} \Prob(\params | \data, \model) \equiv {\posterior}(\params). \end{equation} Both the prior and posterior are probability density functions (PDFs), which integrate to 1 over all possible values of $\params$. The likelihood function describes how probable the observed data $\data$ is given a specific set of cosmological parameter values $\params$ and a specific model $\model$, \begin{equation} \Prob(\data | \params, \model) \equiv \likelihood(\params).
\end{equation} While $\Prob(\data | \params, \model)$ treats $\data$ as the variable and $\params$ and $\model$ as fixed parameters, $\likelihood(\params)$ treats $\params$ as the variable. This equivalence represents a shift from prediction to inference, which is fundamental in cosmology as we only have one universe to observe. We cannot create new, independent universes to measure, and therefore only have one $\data$. $\Prob(\data | \params, \model)$ is therefore repurposed into a function of $\params$, which we can evaluate over a wide range of $\params$ values to find the values that maximise $\likelihood(\params)$ and best explain our single cosmic observation. The likelihood is not a PDF like the prior and posterior, and it does not necessarily integrate to 1 over $\params$ space. The evidence, or the marginal likelihood~\cite{2008ConPh..49...71T}, is the probability of observing $\data$ given $\model$, derived from the likelihood by integrating over all parameters, weighted by the prior, \begin{equation} \Prob(\data | \model) \equiv \evidence = \int \Prob(\data | \params, \model) \Prob(\params | \model) d\params. \end{equation} Dropping the model dependence, we have: \begin{equation} \evidence = \int \likelihood(\params) \prior(\params) d\params. \end{equation} It can be intuitively understood as a ``prior-weighted average likelihood''. Mathematically, it is the normalising constant that ensures the posterior integrates to unity. The evidence is usually ignored during parameter estimation (see~\Cref{ssec:param_estimation}) but plays a crucial role in model comparison (see~\Cref{ssec:model_comparison}). \subsubsection{Kullback-Leibler Divergence} \label{sssec:kl_divergence} The Kullback-Leibler (KL) divergence, $\KL$, quantifies the information gain, or compression, between the prior distribution $\prior(\params)$ and the posterior distribution $\posterior(\params)$~\cite{kullback1951information}. It has been widely used by cosmologists~\cite{2014PhRvD..90b3533S,2019JCAP...01..011N,2004PhRvL..92n1302H,2013PDU.....2..166V,2016PhRvD..93j3507S,2016JCAP...05..034G,2016arXiv160606273R,2016MNRAS.455.2461H,2016MNRAS.463.1416G,2017NatAs...1..627Z,2017JCAP...10..045N} and is defined as the average of the Shannon Information, $\shannon(\params)$, over the posterior: \begin{equation} \shannon(\params) = \log\frac{\posterior(\params)}{\prior(\params)}, \label{eq:shannon_info} \end{equation} \begin{equation} \KL = \int \posterior(\params) \log\frac{\posterior(\params)}{\prior(\params)}\,d\params = \left\langle \log\frac{\mathcal{P}}{\pi}\right\rangle_\mathcal{P} = \left\langle \shannon \right\rangle_{\posterior} \approx \log\left(\frac{V_{\pi}}{V_{P}}\right). \label{eq:kl_divergence_def} \end{equation} A higher $\KL$ indicates a larger information gain when moving from the prior to the posterior and is consequently a useful measure of the constraining power of the data. $\KL$ can be understood as approximately the logarithm of the ratio of the prior volume, $V_\pi$, to the posterior volume, $V_\mathcal{P}$. This relationship is exact in the case of uniform (``top-hat'') prior and posterior distributions, but remains highly accurate when a broad prior is used, where the prior is ``locally flat'' around the posterior peak. $\KL$ is a strong function of the prior, and it inherits the property of being additive for independent parameters from the Shannon Information.
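% A minimal worked illustration of the volumetric reading of~\Cref{eq:kl_divergence_def},
% assuming a one-dimensional Gaussian posterior of standard deviation $\sigma$ lying well
% inside a uniform prior of width $L$ (the numbers are illustrative, not taken from any
% dataset in this work):
\begin{equation*}
\KL = \log L - \tfrac{1}{2}\log(2\pi e\sigma^2) = \log\frac{L}{\sigma\sqrt{2\pi e}},
\qquad L = 10,\ \sigma = 0.1 \;\Rightarrow\; \KL \approx 3.2~\text{nats}.
\end{equation*}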
A key practical consideration is that its calculation requires a properly normalised posterior distribution, which in turn requires knowledge of the Bayesian evidence, $\evidence$. Consequently, this quantity is not attainable with common MCMC sampling techniques, which typically generate samples from an unnormalised posterior. To compute it, more computationally intensive algorithms such as nested sampling are necessary. The KL divergence can be related directly to the Bayesian evidence via the expression~\cite{Hergt2021Bayesian}: \begin{equation} \log \evidence = \langle \log \likelihood \rangle_{\posterior} - \KL, \label{eq:logZ_KL} \end{equation} where $\langle \log \likelihood \rangle_{\posterior}$ is the posterior average of the log-likelihood. This relation, sometimes referred to as the Occam's Razor equation~\cite{Handley_dimensionality_2019}, illustrates how the evidence naturally implements Occam's Razor by penalising unnecessary model complexity. As discussed further in~\Cref{ssec:model_comparison}, the penalty factor between competing cosmological models can be approximated using the difference in their respective $\KL$. Using the nested sampling chains from \texttt{unimpeded}, we compute $\KL$ for every model-dataset combination in our grid, enabling a systematic comparison of the constraining power of different datasets and models. These results are presented in~\Cref{ssec:constraining_power}. \subsection{Parameter estimation} \label{ssec:param_estimation} The goal of parameter estimation is to determine the posterior probability distribution of the parameters, $\Prob(\params | \data, \model)$. Combining the prior, likelihood and evidence from~\Cref{ssec:nomenclature}, $\Prob(\params | \data, \model)$ is given by Bayes' theorem as: \begin{align} \Prob(\params | \data, \model) &= \frac{\Prob(\data | \params, \model) \Prob(\params | \model)}{\Prob(\data | \model)}, \\ \posterior(\params) &= \frac{\likelihood(\params) \times \prior(\params)}{\evidence}. \end{align} It describes how our initial beliefs $\prior(\params)$ about $\params$ are updated in light of the observed data $\data$ under the assumed model $\model$. MCMC algorithms are typically used for exploring the posterior, particularly in high-dimensional parameter spaces~\cite{2002PhRvD..66j3511L}. However, the evidence cannot be obtained for technical reasons, so MCMC methods yield only the unnormalised posterior $\Prob(\params | \data, \model) \propto \likelihood(\params) \times \prior(\params)$. In contrast, specialised algorithms like nested sampling, as implemented in codes such as \texttt{PolyChord} \cite{Handley2015PolychordI, Handley2015PolychordII}, can efficiently generate posterior samples while simultaneously calculating the evidence. The \texttt{unimpeded} library provides access to both nested sampling and MCMC chains for all model-dataset combinations, enabling comprehensive parameter estimation analysis. We present parameter constraints for key cosmological parameters across our grid in~\Cref{ssec:parameter_estimation}. \subsection{Model comparison} \label{ssec:model_comparison} Model comparison addresses the question of how much the data $\data$ support each of the competing models \{$\model_1,\model_2,\cdots$\}, where each model $\model_i$ has its own set of parameters $\theta_{\model_i}$. The goal is to compute the posterior probability $\Prob(\model_i | \data)$ that a model $\model_i$ is true given the data $\data$, which can be used to rank and select models.
From Bayes' theorem, we have: \begin{align} \Prob(\model_i|\data) &= \frac{\Prob(\data|\model_i) \Prob(\model_i)}{\Prob(\data)}, \\ &= \frac{\evidence_i\prior_i}{\displaystyle\sum_j \evidence_j\prior_j}. \label{eq:model_posterior} \end{align} The evidence is the inner product of the likelihood function and the prior function over the parameter space; it can also be viewed as the prior-weighted average likelihood. A model's evidence is maximised when its most predictive region of parameter space, i.e. where the prior is highest, coincides with the region of highest likelihood. A model is penalised, however, in two key scenarios: first, a direct conflict where the data favour parameter values the prior deemed unlikely; and second, a penalty for complexity, where a wide prior dilutes the evidence by spreading its predictive power too thinly. Since the prior must integrate to unity, a broader prior implies a lower height, reducing the evidence integral even if the data are well-fit within that space. The evidence thus naturally rewards models that are both predictive and simple; the latter requirement naturally and quantitatively implements Occam's Razor\footnote{Among competing hypotheses, the one with the fewest assumptions should be selected.}. A common approach is to make no prior assumption between models: each model is assigned a uniform prior $\prior_i=\Prob(\model_i)=\mathrm{constant}$, i.e. $\prior_i=\prior_j$, and $\Prob(\model_i|\data)$ simplifies to the ratio of just the evidence of model $\model_i$ to the sum of evidences of all models under comparison, \begin{equation} \Prob(\model_i|\data) = \frac{\evidence_i}{\displaystyle\sum_j \evidence_j}. \label{eq:model_prob} \end{equation} While the Bayes Factor is widely used for comparing two models, the approach in \Cref{eq:model_prob} provides the advantage of yielding the normalised posterior probability for each model: it is not limited to pairwise comparisons and provides an intuitive ranking among the entire set of competing models. We therefore adopt this method for the analysis in \Cref{ssec:model_comparison_results}. Using the evidence values computed from nested sampling in \texttt{unimpeded}, we calculate model probabilities for all eight cosmological models across all datasets, revealing which model extensions are preferred by individual probes and their combinations. \subsection{Tension Quantification} \label{ssec:tension_quant_theory} Tension quantification assesses the statistical consistency between different datasets, say $\data_A$ and $\data_B$, when interpreted under a common underlying model $\model$. In a Bayesian context, several metrics can be employed to diagnose and quantify the degree of agreement or disagreement. The following sections describe five such statistics utilised in this work, drawing on established methods from the literature~\cite{Handley_dimensionality_2019,Lemos2021,Lemos2021TensionMetrics}. For readers interested in alternative approaches to quantifying tensions, the DES collaboration paper~\cite{Lemos2021TensionMetrics} provides a comprehensive comparison of different tension metrics and their applications to cosmological data. The pre-computed chains from \texttt{unimpeded} enable application of any preferred tension metric beyond those presented here. We apply these five complementary statistics to quantify tensions across all pairwise dataset combinations in our grid, with comprehensive results presented in~\Cref{ssec:tension_quantification_results}.
\subsubsection{Combining likelihoods} To perform a joint analysis of two statistically independent datasets, $\data_A$ and $\data_B$, their likelihoods are combined multiplicatively: $\likelihood_{AB} = \likelihood_A \likelihood_B$. The posteriors and evidences for the individual and joint datasets are defined as: \begin{gather} \posterior_A = \frac{\likelihood_A\prior_A}{\evidence_A}, \quad \posterior_B = \frac{\likelihood_B\prior_B}{\evidence_B}, \quad \posterior_{AB} = \frac{\likelihood_A\likelihood_B\prior_{AB}}{\evidence_{AB}}. \label{eqn:Pdef} \\ \evidence_A = \int\likelihood_A\prior_A\,d{\params}, \quad \evidence_B = \int\likelihood_B\prior_B\,d{\params},\nonumber\\ \evidence_{AB} = \int\likelihood_A\likelihood_B\prior_{AB}\,d{\params}. \label{eqn:Zdef} \end{gather} Here, $\prior_A$, $\prior_B$, and $\prior_{AB}$ denote the prior distributions for the individual and joint analyses. In this work, we assume that the priors agree on the shared parameters~\cite{Bevins2022}, such that $\prior_A(\params_{\text{shared}}) = \prior_B(\params_{\text{shared}}) = \prior_{AB}(\params_{\text{shared}})$, while nuisance parameters unique to each dataset retain their respective priors. $\params$ is taken to be the complete set for the joint analysis, including all cosmological parameters and any nuisance parameters unique to each dataset. \subsubsection{The \texorpdfstring{$R$}{R} statistic} \label{sssec:r_statistic} The $R$ statistic quantifies the consistency between two datasets, denoted by subscripts $A$ and $B$, within a shared underlying model $\model$~\cite{Marshall2006}. It is defined through a series of equivalent expressions that relate the evidences and conditional probabilities of the datasets: \begin{equation} R = \frac{\evidence_{AB}}{\evidence_A \evidence_B} = \frac{\Prob(\data_A, \data_B)}{\Prob(\data_A)\Prob(\data_B)} = \frac{\Prob(\data_A|\data_B)}{\Prob(\data_A)} = \frac{\Prob(\data_B|\data_A)}{\Prob(\data_B)}. \label{eq:R_statistic_full} \end{equation} $R$ provides a direct measure of inter-dataset consistency, interpreted with respect to unity. If $R \gg 1$, knowledge of one dataset has strengthened our confidence in the other by a factor of $R$, indicating concordance. If $R \ll 1$, the datasets are inconsistent. The introduction of the second dataset diminishes our confidence in the first under the assumed model, prompting a re-evaluation of the shared model or the datasets themselves. While this establishes a clear framework for consistency, it is crucial to remember that the magnitude of $R$ does not represent an absolute degree of tension, as its value is always conditional on the chosen model and prior. The $R$ statistic satisfies several desirable properties for a tension metric: it is dimensionally consistent, symmetric with respect to the datasets ($R_{AB}=R_{BA}$), invariant under reparametrisation, and constructed from fundamental Bayesian quantities. However, a crucial property of $R$ is its strong dependence on the prior probability distribution, $\prior(\params)$~\cite{Handley2019}.
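% To see this prior sensitivity concretely, consider a Gaussian sketch in the spirit of
% ref.~\cite{Handley2019} (the symbols below are introduced purely for illustration):
% approximate both posteriors as $d$-dimensional Gaussians with means $\mu_A$, $\mu_B$ and
% covariances $\Sigma_A$, $\Sigma_B$, lying well inside a uniform prior of volume $\vprior$. Then
\begin{equation*}
\log R \approx \log \vprior - \frac{d}{2}\log(2\pi) - \frac{1}{2}\log\left|\Sigma_A + \Sigma_B\right|
- \frac{1}{2}(\mu_A - \mu_B)^{\mathrm{T}}(\Sigma_A + \Sigma_B)^{-1}(\mu_A - \mu_B),
\end{equation*}
% so broadening the prior raises $\log R$ through the $\log\vprior$ term, irrespective of how
% well the two datasets actually agree.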
This dependency can be made explicit by rewriting $R$ in terms of the posteriors $\posterior_A$ and $\posterior_B$: \begin{equation} \begin{aligned} R &= \frac{1}{\evidence_A \evidence_B} \int \likelihood_A(\params) \likelihood_B(\params) \prior(\params) d\params \\ &= \int \frac{\likelihood_A(\params) \prior(\params)}{\evidence_A} \frac{\likelihood_B(\params) \prior(\params)}{\evidence_B} \frac{1}{\prior(\params)} d\params \\ &= \int \frac{\posterior_A(\params) \posterior_B(\params)}{\prior(\params)} d\params \\ &= \bigg\langle \frac{\posterior_B}{\prior} \bigg\rangle_{\posterior_A} = \bigg\langle \frac{\posterior_A}{\prior} \bigg\rangle_{\posterior_B}, \end{aligned} \label{eq:R_prior_dependence} \end{equation} where we have assumed the datasets are independent. The final line shows that $R$ can be thought of as the ratio of one posterior to the shared prior, averaged over the other posterior. One should note that reducing the width of the prior on shared, constrained parameters will reduce the value of $R$, thereby increasing the apparent tension between the datasets. This behaviour is opposite to the prior's effect on evidence alone, where narrower priors typically increase the evidence. This creates an attractive balance: one cannot arbitrarily tune priors to increase evidence for a model without simultaneously making it more susceptible to tension if the datasets are not in perfect agreement. While this prior dependence is a feature of a coherent Bayesian analysis, it means that the interpretation of a single $R$ value requires care. If $R$ indicates discordance, this conclusion is robust, since the prior volume effect typically acts to increase $R$ and mask tension. However, if $R$ indicates agreement, one must consider whether this is merely the result of an overly wide prior. %A pragmatic approach involves choosing physically reasonable priors and then examining the sensitivity of the conclusions to sensible alterations. \subsubsection{The Information Ratio} %\subsubsection{The Information Ratio (\texorpdfstring{$I$}{I})} \label{sssec:information_ratio} The information ratio, $I$, is defined in terms of the Kullback-Leibler divergences $\KL$ from the individual data ($\data_A,\data_B$) and joint data ($\data_{AB}$) analyses~\cite{Handley2019}: \begin{equation} \log I = \KL^A + \KL^B - \KL^{AB}. \label{eq:information_ratio} \end{equation} To understand the behaviour of $I$, we can employ the volumetric approximation of~\Cref{eq:kl_divergence_def}, $\KL \approx \log(\vprior) - \log(\vposterior)$, as discussed in~\Cref{sssec:kl_divergence}. Substituting this into~\Cref{eq:information_ratio} yields: \begin{equation} \log I \approx \log(\vprior) - \log(\vposterior^A) - \log(\vposterior^B) + \log(\vposterior^{AB}). \label{eq:logI_volume_approx} \end{equation} In a Bayesian interpretation of probability, a highly improbable event is a highly surprising one. $I$ quantifies this ``surprise'' of agreement, i.e. different datasets making the same predictions. Considering the $\log(\vprior)$ term, a larger prior volume $\vprior$ signifies greater initial uncertainty, making the subsequent agreement of two constraining datasets (with small posterior volumes $\vposterior^A$ and $\vposterior^B$) a more surprising outcome, which results in a larger value of $\log I$ and mathematically encodes this greater degree of surprise.
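% A small numerical illustration of~\Cref{eq:logI_volume_approx}, with toy volume ratios chosen
% purely for illustration: if each dataset individually compresses a shared prior by a factor of
% $10^{3}$ and their posteriors overlap fully, so that $\vposterior^{AB} \approx \vposterior^{A} = \vposterior^{B}$, then
\begin{equation*}
\log I \approx \log\frac{\vprior}{\vposterior^{A}} \approx \log 10^{3} \approx 6.9,
\end{equation*}
% whereas if tension between the datasets shrinks the joint posterior by a further factor of
% $10^{2}$, $\log I$ drops by $\log 10^{2} \approx 4.6$.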
In addition, a more constraining dataset results in a smaller $\vposterior$, making $\log(\vposterior)$ a larger negative number and thus $-\log(\vposterior)$ a larger positive number. Consequently, the terms $-\log(\vposterior^A)$ and $-\log(\vposterior^B)$ increase the value of $\log I$. This is intuitive: if two highly constraining posteriors (tiny $\vposterior$) end up agreeing, it is far more surprising than if two vague, less constraining posteriors (large $\vposterior$) happen to agree. Conversely, if the datasets are in tension, their posteriors barely overlap, causing the joint posterior volume $\vposterior^{AB}$ to become extremely small. This makes $\log(\vposterior^{AB})$ a large negative number, which in turn significantly decreases the value of $\log I$. Therefore, a very low or negative $\log I$ is a strong signal of dataset disagreement. %$\log I$ quantifies the surprise of agreement based on the prior volume and the datasets' individual constraining powers, a concept distinct from their actual statistical mismatch. \subsubsection{Suspiciousness} \label{sssec:suspiciousness} While the information ratio quantifies the surprise of agreement, the suspiciousness $S$ quantifies the statistical conflict between the likelihoods $\likelihood$ of the two datasets. It is defined in terms of both the prior-dependent information ratio $I$ and the $R$ statistic~\cite{Handley2019}, and has been applied to quantify tensions in various cosmological contexts~\cite{Joudaki2017MNRAS,Lemos2021}: \begin{equation} S = \frac{R}{I}, \quad \log S = \log R - \log I. \label{eq:suspiciousness_def} \end{equation} Since $R$ and $I$ transform similarly under prior volume alterations, $S$ is largely unaffected by changing the prior widths, as long as this change does not significantly alter the posterior. However, this prior-independence comes at the cost of the direct probabilistic interpretation inherent in $R$, requiring more care to calibrate its scale for significance. Substituting the definitions of $\log R$ and $\log I$ from~\Cref{eq:R_statistic_full,eq:logZ_KL} and~\Cref{eq:information_ratio} into~\Cref{eq:suspiciousness_def}, $\log S$ can be expressed directly in terms of posterior-averaged log-likelihoods: \begin{equation} \log S = \langle \log{\likelihood_{AB}} \rangle_{\posterior_{AB}} - \langle \log{\likelihood_{A}} \rangle_{\posterior_{A}} - \langle \log{\likelihood_{B}} \rangle_{\posterior_{B}}. \label{eq:suspiciousness_likelihood_avg} \end{equation} When the likelihoods $\likelihood_A$ and $\likelihood_B$ are in strong agreement, $\log S$ is zero or positive. Conversely, if the likelihoods are in tension, $\log S$ becomes negative, with larger negative values indicating stronger tension. $\log S$ can also be calibrated into a tension probability, $p$, and an equivalent significance in Gaussian standard deviations, $\sigma$ (see~\Cref{sssec:p_and_sigma}). % \subsubsection{Bayesian Model Dimensionality (\texorpdfstring{$d$}{d})} \subsubsection{Bayesian Model Dimensionality} \label{sssec:bayesian_model_dimensionality} While the Kullback-Leibler divergence, $\KL$, discussed in~\Cref{sssec:kl_divergence} provides a single value for the total information gain, it marginalises out any information about individual parameters. It cannot tell us how many parameters are being constrained by the data, nor what each parameter is constraining.
For instance, a strong, correlated constraint between two parameters can yield the same $\KL$ as two well-constrained but independent parameters (visually demonstrated in~\cite{Handley_dimensionality_2019}). In high-dimensional cosmological analyses, where corner plots~\cite{2016JOSS....1...24F} show only marginalised views and can hide complex degeneracies, a metric is needed to quantify the effective number of constrained parameters. The Bayesian Model Dimensionality, $d$, was introduced to fulfil this role~\cite{Handley_dimensionality_2019} and is defined as:%It is defined such that a simple Gaussian constraint on a single parameter corresponds to $d=1$, as does a tight, correlated constraint between two parameters. A full derivation and visual demonstration can be found in ref.~\cite{Handley2019a}. \begin{equation} \begin{split} \frac{d}{2} &= \int \posterior(\params) \left(\log\frac{\posterior(\params)}{\prior(\params)} - \KL\right)^2 d\params \\ & = \left\langle{\left(\log\frac{\posterior}{\prior}\right)}^2\right\rangle_{\posterior} - {\left\langle\log\frac{\posterior}{\prior}\right\rangle}_{\posterior}^2 \\ & = \mathrm{var}(\shannon)_{\posterior} \\ & = \langle(\log \likelihood)^2\rangle_{\posterior} - \langle\log\likelihood\rangle_{\posterior}^2, \end{split} \label{eq:bayesian_dimensionality} \end{equation} where $\shannon = \log\frac{\posterior}{\prior}$ is the Shannon Information mentioned in~\Cref{eq:shannon_info}. $d$ is the variance of $\shannon$ over the posterior, and hence a higher-order statistic than the KL divergence. The Bayesian Model Dimensionality possesses several important properties. Crucially, it is only weakly prior-dependent, as the evidence contributions required to normalise the posterior and prior in the $\shannon$ term in~\Cref{eq:bayesian_dimensionality} effectively cancel out. Furthermore, like the KL divergence, it is additive for independent parameters and invariant under a change of variables. When combining datasets, the number of parameters that are constrained in common becomes: \begin{equation} d_{A \cap B} = d_A + d_B - d_{AB}. \end{equation} % \subsubsection{Tension Probability and Significance (\texorpdfstring{$p$}{p} and \texorpdfstring{$\sigma$}{\textsigma})} \subsubsection{Tension Probability and Significance} \label{sssec:p_and_sigma} The suspiciousness discussed in~\Cref{sssec:suspiciousness} can be calibrated into a more intuitive tension probability, $p$, and an equivalent significance expressed in Gaussian standard deviations, $\sigma$. This calibration relies on the approximation that, in the case of a Gaussian likelihood, the quantity $d - 2\log S$ follows a $\chi^2$ distribution. The number of degrees of freedom is given by the Bayesian model dimensionality, $d$, as discussed in~\Cref{sssec:bayesian_model_dimensionality}. $p$ represents the probability that a level of discordance at least as large as the one observed could arise by chance. It is calculated using the survival function of the $\chi_d^2$ distribution: \begin{equation} p = \int_{d-2\log S}^{\infty} \chi_d^2(x)\,\mathrm{d}x = \int_{d-2\log S}^{\infty} \frac{x^{d/2-1}e^{-x/2}}{2^{d/2}\Gamma(d/2)}\,\mathrm{d}x. \label{eq:tension_probability} \end{equation} This $p$-value can then be converted into an equivalent significance on a Gaussian scale, $\sigma$, using the inverse complementary error function ($\mathrm{Erfc}^{-1}$): \begin{equation} \sigma = \sqrt{2}\,\mathrm{Erfc}^{-1}(p).
\label{eq:sigma_conversion} \end{equation} Following standard conventions, if $p \lesssim 0.05$ (corresponding to $\sigma \gtrsim 2$), the datasets are considered to be in moderate tension, while $p \lesssim 0.003$ ($\sigma \gtrsim 3$) corresponds to strong tension. \subsubsection{The Look Elsewhere Effect} \label{sssec:look_elsewhere_effect} The Look Elsewhere Effect (LEE) arises when multiple statistical tests are performed, increasing the probability of finding a seemingly significant result purely by chance. This effect is particularly relevant to our analysis, where we systematically evaluate tension statistics for $N=248$ distinct model-dataset combinations~(see \Cref{sec:results}). Without accounting for multiple comparisons, the likelihood of encountering at least one false positive becomes substantial. Rather than applying a Bonferroni correction to each individual $p$-value (which would change as the grid expands), we instead adopt a significance threshold that naturally accounts for the look elsewhere effect. Under the null hypothesis of no genuine tension, $p$-values are uniformly distributed between 0 and 1. Therefore, if we perform $N=248$ independent tests, we expect, on average, one result to have $p \leq 1/N$ purely by chance. This provides a natural threshold: \begin{equation} \sigma_{\text{threshold}} = \sqrt{2}\,\mathrm{Erfc}^{-1}\left(\frac{1}{N}\right). \label{eq:sigma_threshold_corrected} \end{equation} For our grid with $N=248$, this gives $\sigma_{\text{threshold}} \approx 2.88$. This threshold is not arbitrary: it represents the significance level at which we would expect, on average, only one false positive across all 248 tests if there were no genuine tensions. Any result highlighted above this threshold is more extreme than what we would expect from random fluctuations alone. This approach has the advantage that the threshold itself reflects the scope of the analysis, while individual $p$-values and $\sigma$ values remain interpretable independently of the grid size. % To account for this, we adopt the conservative Bonferroni correction, which adjusts the p-value of each individual test according to the total number of tests, $N$: % \begin{equation} % p_{\text{corr}} = \min(1, N \times p), % \label{eq:bonferroni_correction} % \end{equation} % where $p$ is the uncorrected p-value and $p_{\text{corr}}$ is capped at 1 because it is a probability. The corresponding corrected significance, $\sigma_{\text{corr}}$, is then calculated by replacing $p$ with $p_{\text{corr}}$ in the standard conversion: % \begin{equation} % \sigma_{\text{corr}} = \sqrt{2} \, \mathrm{Erfc}^{-1}(p_{\text{corr}}). % \label{eq:sigma_conversion_corrected} % \end{equation} \subsubsection{Model-Weighted Average Tension Statistics} \label{sssec:model_weighted_average} To evaluate the overall tension between datasets across our entire model space, we employ a model-weighted average for each tension statistic. This approach provides a single, summary ranking of datasets that accounts for the fact that some models are better supported by the data than others, and that their tension statistics should therefore carry a heavier weighting under the Bayesian framework. The tension statistic for each model is weighted by its posterior probability, $\Prob(\model_i|\data)$, calculated by~\Cref{eq:model_posterior}.
For a generic tension metric between datasets $\data_A$ and $\data_B$, this average is computed as: \begin{equation} \langle \text{Statistic}(\data_A, \data_B) \rangle_\model = \displaystyle\sum_{i} \Prob(\model_i|\data) \times \text{Statistic}(\model_i, \data_A, \data_B). \label{eq:model_weighted_average} \end{equation} For example, the model-weighted Kullback-Leibler divergence is: \begin{equation} \langle \KL \rangle_{\Prob(\model)} = \displaystyle\sum_{i} \Prob(\model_i|\data) \times \KL(\model_i). \label{eq:model_weighted_kl} \end{equation} Similarly, the model-weighted tension significance is: \begin{equation} \langle \sigma \rangle_{\Prob(\model)} = \displaystyle\sum_{i} \Prob(\model_i|\data) \times \sigma(\model_i). \label{eq:model_weighted_sigma} \end{equation} These model-weighted statistics are presented in the heatmaps in~\Cref{ssec:model_comparison_results,ssec:tension_quantification_results}. \subsection{Cosmological Models} \label{ssec:cosmological_models} We consider a comprehensive set of cosmological models extending the standard $\Lambda$CDM paradigm. All models are described within the framework of the Friedmann-Lemaître-Robertson-Walker (FLRW) metric with general relativity. The background expansion is governed by the Friedmann equation, and initial conditions are set by nearly scale-invariant, adiabatic Gaussian scalar perturbations~\cite{Planck2013params}. All models are implemented using the Cobaya framework~\cite{cobayaascl,Torrado2021Cobaya}, which interfaces with the CAMB Boltzmann code~\cite{Lewis:1999bs} to compute theoretical predictions for the observables. Each model is elaborated in detail in the following subsubsections. \subsubsection{Baseline: $\Lambda$CDM} \label{sssec:lcdm} The baseline $\Lambda$CDM model describes a spatially flat universe with a cosmological constant (dark energy equation of state $w = -1$), cold dark matter, and a power-law spectrum of adiabatic scalar perturbations. The cold dark matter paradigm was established by~\cite{Blumenthal1984}, while the cosmological constant component emerged from the discovery of cosmic acceleration through Type Ia supernovae observations~\cite{Riess1998,Perlmutter1999}. The model has been extensively validated by cosmic microwave background measurements~\cite{Planck2013params}. The model is characterized by six fundamental parameters: the baryon density parameter $\omega_b = \Omega_b h^2$, the cold dark matter density parameter $\omega_c = \Omega_c h^2$, the angular scale of the sound horizon at recombination $\theta_*$ (often parameterised as $100\theta_{MC}$), the reionisation optical depth $\tau$, the scalar spectral index $n_s$, and the amplitude of scalar perturbations $\ln(10^{10}A_s)$. The Hubble parameter evolves as: \begin{equation} H^2(a) = H_0^2 \left[ \Omega_r a^{-4} + \Omega_m a^{-3} + \Omega_\Lambda \right], \label{eq:friedmann_lcdm} \end{equation} where $a$ is the scale factor, $\Omega_r$ is the radiation density parameter, $\Omega_m$ is the total matter density parameter, and $\Omega_\Lambda$ is the cosmological constant density parameter. The flatness constraint imposes $\Omega_r + \Omega_m + \Omega_\Lambda = 1$. The primordial scalar power spectrum is: \begin{equation} \mathcal{P}_s(k) = A_s \left( \frac{k}{k_0} \right)^{n_s - 1}, \label{eq:power_spectrum_lcdm} \end{equation} where $k_0 = 0.05\,\text{Mpc}^{-1}$ is the pivot scale. The baseline assumes three standard neutrinos with $N_{\text{eff}} = 3.046$ and minimal neutrino mass $\Sigma m_\nu = 0.06\,\text{eV}$. 
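% As a quick plug-in of this baseline value into the standard conversion used later
% in~\Cref{sssec:mnulcdm} (illustrative arithmetic only):
\begin{equation*}
\Omega_\nu h^2 = \frac{\Sigma m_\nu}{93.14\,\text{eV}} = \frac{0.06\,\text{eV}}{93.14\,\text{eV}} \approx 6.4\times10^{-4}.
\end{equation*}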
The helium abundance is computed consistently with Big Bang nucleosynthesis. \textbf{Free parameters:} $\omega_b$, $\omega_c$, $\theta_*$, $\tau$, $n_s$, $\ln(10^{10}A_s)$. \subsubsection{Varying curvature: $\Omega_k\Lambda$CDM} \label{sssec:klcdm} This extension allows for spatial curvature by freeing the curvature density parameter $\Omega_k$~\cite{Planck2013params}. The Friedmann equation becomes: \begin{equation} H^2(a) = H_0^2 \left[ \Omega_r a^{-4} + \Omega_m a^{-3} + \Omega_k a^{-2} + \Omega_\Lambda \right], \label{eq:friedmann_klcdm} \end{equation} with the constraint $\Omega_r + \Omega_m + \Omega_k + \Omega_\Lambda = 1$. Positive $\Omega_k$ corresponds to an open universe (negative spatial curvature), while negative $\Omega_k$ describes a closed universe (positive spatial curvature). Non-zero curvature alters the universe's spatial geometry, modifying the evolution of metric perturbations and photon geodesics. Boltzmann codes account for this by solving the perturbation equations on a curved background, which causes a characteristic angular shift in the CMB power spectrum's acoustic peaks and modifies the late-time integrated Sachs-Wolfe effect. \textbf{Free parameters:} $\Lambda$CDM parameters + $\Omega_k$. \subsubsection{Constant dark energy equation of state: $w$CDM} \label{sssec:wcdm} This model generalizes the cosmological constant to a dark energy fluid with constant equation of state $w = p_{\text{DE}}/\rho_{\text{DE}}$~\cite{Turner1997}. The dark energy density evolves as $\rho_{\text{DE}}(a) \propto a^{-3(1+w)}$, modifying the Friedmann equation to: \begin{equation} H^2(a) = H_0^2 \left[ \Omega_r a^{-4} + \Omega_m a^{-3} + \Omega_{\text{DE}} a^{-3(1+w)} \right]. \label{eq:friedmann_wcdm} \end{equation} The model assumes spatial flatness. Within the Parameterised Post-Friedmann (PPF) framework, dark energy is treated as a perfect fluid with a sound speed typically set to unity ($c_s^2=1$). Consequently, dark energy perturbations are negligible on sub-horizon scales, influencing structure growth primarily through the modified background expansion and the late-time integrated Sachs-Wolfe (ISW) effect on the CMB. \textbf{Free parameters:} $\Lambda$CDM parameters + $w$. \subsubsection{Time-varying dark energy: $w_0w_a$CDM} \label{sssec:w0wacdm} This model allows for time-varying dark energy using the Chevallier-Polarski-Linder (CPL) parameterisation~\cite{Chevallier2001,Linder2003}: \begin{equation} w(a) = w_0 + w_a(1-a), \label{eq:cpl_parameterisation} \end{equation} where $w_0$ is the present-day equation of state and $w_a$ characterizes its time evolution. The dark energy density evolves as: \begin{equation} \rho_{\text{DE}}(a) = \rho_{\text{DE},0} \, a^{-3(1+w_0+w_a)} \exp[-3w_a(1-a)]. \label{eq:rho_de_cpl} \end{equation} The model assumes spatial flatness. Similar to $w$CDM, this model uses the PPF formalism where dark energy is a smooth component with $c_s^2=1$, preventing it from clustering. The time-varying equation of state produces a more complex background evolution, altering the growth history of matter perturbations and creating a distinct signature in the late-time ISW effect compared to a constant $w$. \textbf{Free parameters:} $\Lambda$CDM parameters + $w_0$ + $w_a$. % \subsubsection{Varying neutrino masses: $m_\nu\Lambda$CDM} % \label{sssec:mnu_lcdm} % This extension allows the sum of neutrino masses $\Sigma m_\nu$ to vary, testing constraints on the absolute neutrino mass scale. 
The possibility of using cosmology to constrain neutrino masses was first recognized by~\cite{Cowsik1972}, with comprehensive modern treatments provided by~\cite{Lesgourgues2006}. The neutrino contribution to the energy density is: % \begin{equation} % \Omega_\nu h^2 = \frac{\Sigma m_\nu}{93.14\,\text{eV}}, % \label{eq:neutrino_density} % \end{equation} % where the numerical factor corresponds to the standard conversion. Neutrino free-streaming effects on structure formation are included consistently. The effective number of relativistic species remains fixed at $N_{\text{eff}} = 3.046$. % % \textbf{Free parameters:} $\Lambda$CDM parameters + $\Sigma m_\nu$. \subsubsection{Varying lensing amplitude: $A_L\Lambda$CDM} \label{sssec:alcdm} This model introduces a phenomenological parameter $A_L$ that scales the lensing potential power spectrum, allowing for deviations from the standard lensing predictions~\cite{Planck2013params}. The parameter modifies the lensed CMB power spectra by scaling the lensing potential correlations: \begin{equation} C_\ell^{\text{lensed}} = C_\ell^{\text{unlensed}} + A_L \Delta C_\ell^{\text{lensing}}, \label{eq:lensing_amplitude} \end{equation} where $\Delta C_\ell^{\text{lensing}}$ represents the correction due to gravitational lensing. The standard $\Lambda$CDM prediction corresponds to $A_L = 1$. Values $A_L > 1$ indicate enhanced lensing effects, while $A_L < 1$ suggest reduced lensing. This extension was motivated by the Planck collaboration's observation of a preference for $A_L > 1$ in the CMB temperature data, providing a way to test the consistency of gravitational lensing predictions. This parameter does not alter the physical evolution of perturbations but acts as a phenomenological scaling of the gravitational lensing potential. In Boltzmann codes, the calculated lensing potential power spectrum is multiplied by $A_L$, which directly modifies the smoothing of the CMB acoustic peaks and the amplitude of the lensing-induced B-mode spectrum. \textbf{Free parameters:} $\Lambda$CDM parameters + $A_L$. \subsubsection{Varying neutrino masses: $m_\nu\Lambda$CDM} \label{sssec:mnulcdm} This model extends the standard $\Lambda$CDM framework by allowing the sum of the three active neutrino masses, $\Sigma m_\nu$, to vary as a free parameter. The effective number of relativistic species is held fixed at the standard value, $N_{\text{eff}} = 3.046$~\cite{Mangano2005}. The contribution of massive neutrinos to the cosmic energy budget today is: \begin{equation} \Omega_\nu h^2 = \frac{\Sigma m_\nu}{93.14\,\text{eV}}. \label{eq:neutrino_density} \end{equation} Massive neutrinos act as hot dark matter, suppressing the growth of structure below their free-streaming length due to their large thermal velocities. Boltzmann codes solve the full neutrino Boltzmann equation to model this effect, which manifests as a distinct, scale-dependent suppression in the matter power spectrum and subtly alters the CMB via the early ISW effect. \textbf{Free parameters:} $\Lambda$CDM parameters + $\Sigma m_\nu$. \subsubsection{Running spectral index: $n_{\text{run}}\Lambda$CDM} \label{sssec:running_lcdm} This model extends the primordial power spectrum beyond a simple power law by allowing the scalar spectral index $n_s$ to vary with scale $k$. This scale dependence is parameterised by the ``running of the spectral index,'' defined as $n_{\text{run}} \equiv dn_s/d\ln k$~\cite{Kosowsky1995,Planck2013params}. 
While the baseline $\Lambda$CDM model assumes $n_{\text{run}} = 0$, a non-zero running is a generic prediction of many inflationary models. In the context of single-field slow-roll inflation, the running is a second-order effect in the slow-roll parameters and is predicted to be very small. A detection of a significant non-zero running would therefore challenge the simplest inflationary scenarios and provide crucial insights into the shape of the inflaton potential or point towards more complex physics in the early universe. The primordial scalar power spectrum is modified to include a logarithmic scale-dependent term in the exponent: \begin{equation} \mathcal{P}_s(k) = A_s \left( \frac{k}{k_0} \right)^{n_s - 1 + \frac{1}{2}n_{\text{run}} \ln\left(\frac{k}{k_0}\right)}, \label{eq:running_spectrum} \end{equation} where $n_s$ is the spectral index and $n_{\text{run}}$ is the running, both evaluated at the pivot scale $k_0 = 0.05\,\text{Mpc}^{-1}$. This form arises from a first-order Taylor expansion of the spectral index $n_s(k)$ in $\ln k$. \textbf{Free parameters:} $\Lambda$CDM parameters + $n_{\text{run}}$. \subsubsection{Primordial gravitational waves: $r\Lambda$CDM} \label{sssec:r_lcdm} This model includes primordial tensor perturbations (gravitational waves) characterized by the tensor-to-scalar ratio $r$ at the pivot scale, providing a key test of inflationary theory~\cite{Guth1981,Starobinsky1980}. The spectrum of relic gravitational waves from inflation was first calculated by~\cite{Starobinsky1979}. The tensor power spectrum is: \begin{equation} \mathcal{P}_t(k) = A_t \left( \frac{k}{k_p} \right)^{n_t}, \label{eq:tensor_spectrum} \end{equation} where $r = A_t/A_s$ is evaluated at a chosen pivot scale (typically $k_p = 0.002\,\text{Mpc}^{-1}$ for tensor modes). The tensor spectral index $n_t$ is often constrained by the inflationary consistency relation $n_t = -r/8$. \textbf{Free parameters:} $\Lambda$CDM parameters + $r$ (and optionally $n_t$ if not fixed by consistency relation). % \subsection{Cosmological datasets} % \label{ssec:cosmological_datasets} % We analyse a comprehensive set of cosmological observations to constrain the model parameters. The datasets are organized by observational type and combined to provide robust parameter constraints. Table~\ref{tab:datasets} summarizes the observational datasets used in this analysis. \section{Methodology} \label{sec:methodology} \subsection{Nested Sampling} \label{ssec:nested_sampling} Nested sampling, introduced by Skilling (2004)~\cite{Skilling2004,Skilling2006}, is a Monte Carlo method designed for Bayesian computation. It is particularly powerful for calculating the Bayesian evidence (or marginal likelihood), a key quantity for model comparison and tension quantification, while simultaneously producing posterior samples for parameter estimation. The algorithm transforms the multi-dimensional evidence integral into a one-dimensional integral over prior volume, which is then solved numerically. Nested sampling is generally considered the ``ground truth'' method for evidence calculation, representing the reference standard against which other approaches are compared. While alternative methods exist that aim to calculate evidence more efficiently, such as harmonic mean estimators~\cite{Piras2024harmonic} and MC evidence~\cite{Heavens2017MCEvidence}, these typically require validation against nested sampling results to establish their accuracy. 
The pre-computed grid of nested sampling chains provided by \texttt{unimpeded} therefore serves not only as a resource for cosmological model comparison and tension quantification, but also as a reference dataset for assessing the performance of alternative evidence estimation techniques. Recent comprehensive reviews of nested sampling methodology and applications can be found in~\cite{Buchner2023,Ashton2022NRvMP}. We use the publicly available \texttt{PolyChord} sampler~\cite{Handley2015PolychordI,Handley2015PolychordII}, which provides a robust and efficient implementation of nested sampling well suited to the high-dimensional parameter spaces typical of modern cosmology. This section outlines the core methodology of the nested sampling algorithm.
\subsubsection{Generating Samples and Increasing Likelihood}
\label{sssec:nested_sampling_algorithm}
The fundamental principle of nested sampling is to explore the parameter space $\params$ by iteratively moving through nested contours of constant likelihood $\likelihood_i$. The algorithm begins by drawing $n_0$ initial ``live points'' from the prior distribution $\pi(\params)$. At each iteration $i$, the point with the lowest likelihood, $\likelihood_i$, among the current set of live points is identified. Depending on the variant of the algorithm (see~\Cref{ssec:dynamic_ns}), this point is then removed from the live set and added to a collection of ``dead points'', and may be replaced with one or more new points drawn from the prior $\pi(\params)$, subject to the hard constraint that their likelihood $\likelihood(\params)$ must be greater than $\likelihood_i$. This ensures that the likelihoods of the dead points, $\{\likelihood_1, \likelihood_2, \likelihood_3, \dots\}$, form a monotonically increasing sequence.
\subsubsection{Prior Volume Contraction}
This iterative deletion of live points with the lowest likelihood systematically contracts the region of parameter space, leading to the peak(s) of the posterior. The prior mass $X(\likelihood)$ is the fraction of the prior mass\footnote{In the context of nested sampling, ``prior volume'' and ``prior mass'' are used interchangeably to refer to the same fundamental concept.} contained within an iso-likelihood contour $\likelihood(\params) = \likelihood$. It is calculated by integrating the element of prior mass $dX = \prior(\params)\,d\params$ over the region with likelihood values greater than $\likelihood$~\cite{Skilling2006}: \begin{equation} X(\likelihood) = \int_{\likelihood(\params)>\likelihood} \pi(\params) d\params. \label{eq:prior_volume} \end{equation} By the construction described in~\Cref{sssec:nested_sampling_algorithm}, the algorithm generates a sequence of increasing likelihoods $\likelihood_1 < \likelihood_2 < \dots < \likelihood_i$ corresponding to a sequence of shrinking prior volumes $X_i = X(\likelihood_i)$, where $X_1 > X_2 > \dots > X_i$. At each iteration $i$, the removal of the point with likelihood $\likelihood_i$ corresponds to shrinking the prior volume from $X_{i-1}$ to $X_i$, so the ratio of successive volumes is $t_i = X_i/X_{i-1}$. The initial prior volume is $X_0 = 1$ (the entire prior). More formally, the shrinkage of the prior volume is a stochastic process with distribution ${P(t_i) = n_i t_i^{n_i-1}}$, where $n_i$ is the live point count at iteration $i$. The expected logarithm of the prior volume at iteration $i$ is given by the sum over the live point count $n_k$ at each preceding iteration $k$~\cite{Hu2023aeons}: \begin{equation} \langle \log X_i \rangle = -\sum_{k=1}^{i} \frac{1}{n_k}.
\label{eq:logX_general} \end{equation} In the simplified case of a constant number of live points, $n_k = n_{\text{live}}$ for all $k$, this sum reduces to $\langle \log X_i \rangle = -i/n_{\text{live}}$. This leads to the well-known exponential approximation for the prior volume, $\langle X_i \rangle \approx e^{-i/n_{\text{live}}}$. This exponential compression allows the algorithm to efficiently traverse the parameter space from the broad prior towards the narrow, high-likelihood regions where the posterior mass is concentrated. However, as we discuss in~\Cref{ssec:dynamic_ns}, \Cref{eq:logX_general} provides the rigorous framework necessary for analysing runs where the number of live points varies. Each dead point is associated with a specific set of parameters $\params_i$, likelihood $\likelihood_i$ and an estimated prior volume $X_i$, enabling the reconstruction of the evidence integral, discussed in~\Cref{sssec:evidence_estimation}. \subsubsection{Evidence Estimation} \label{sssec:evidence_estimation} The primary strength of nested sampling is its ability to directly calculate the Bayesian evidence, $\evidence = \int \likelihood(\params) \pi(\params) d\params$. Instead of integrating the likelihood over all possible parameters, which is computationally prohibitive in high dimensions, this integral can be reformulated in terms of the prior volume $X$. Since $\likelihood$ can be expressed as an inverse function of its enclosed prior volume, $\likelihood(X)$, the evidence integral can be rewritten as a one-dimensional integral from $X=0$ to $X=1$~\cite{Skilling2006}: \begin{equation} \evidence = \int_0^1 \likelihood(X) dX. \label{eq:evidence_integral} \end{equation} The nested sampling algorithm provides a discrete sequence of points $(\likelihood_i, X_i)$ that allows for a numerical approximation of this integral. Using a simple quadrature scheme, the evidence can be estimated as a weighted sum over the discarded ``dead'' points: \begin{equation} \evidence \approx \displaystyle\sum_{i \in \text{dead}} w_i \likelihood_i, \label{eq:evidence_sum} \end{equation} where $\likelihood_i$ is the likelihood of the $i$-th discarded point and $w_i$ is the associated prior volume, or weight. This weight represents the prior mass contained within the shell between successive likelihood contours, $w_i = X_{i-1} - X_i$. In practice, we approximate $X_i \approx e^{-i/n_{\text{live}}}$ and can thereby estimate the weights for the summation. The evidence can then be used in model comparison and tension quantification. \subsubsection{Importance Weight and Posterior Estimation} In addition to evidence calculation, the collection of live and dead points can be used to derive posterior inferences, and hence, for parameter estimation. Each ``dead point'' $\params_i$ is associated with a likelihood $\likelihood_i$ and a prior mass weight, $w_i$, which represents the element of prior mass of the shell in which point $\params_i$ was sampled. The importance weight, or the posterior, $p_i$ for each dead point is its contribution to the evidence $\evidence_i = w_i \likelihood_i$, normalised by the total evidence $\evidence$ from~\Cref{eq:evidence_sum}: \begin{equation} p_i = \frac{w_i \likelihood_i}{\evidence}. \label{eq:importance_weight} \end{equation} %The set of all dead points and their corresponding importance weights, $\{p_i\}$, constitute a weighted sample from the posterior distribution. 
% To complete the posterior sample, the remaining prior mass $X_{\text{final}}=1-\Sigma w_i$ at the end of the run is distributed among the final set of live points equally, i.e. each live point has a prior mass weight of $w_{\text{live}} = X_{\text{final}}/N$ and an importance weight of $p_{\text{live}} = w_{\text{live}} \likelihood_{\text{live}}/\evidence$. The posterior expectation value for a function of the parameters, $f(\params)$, is the weighted sum: \begin{equation} \langle f(\params) \rangle \approx \displaystyle\sum_{j \in \{\text{dead}\}} p_j f(\params_j). \end{equation} This allows for the construction of marginalised posterior distributions, credibility intervals, and other standard Bayesian parameter summaries. \subsubsection{Algorithm Termination and Stopping Criterion} As iterations repeat, prior mass weights $w_i$ monotonically decrease, and the likelihoods $\likelihood_i$ monotonically increase. Live points are therefore concentrated in regions of high likelihood, and are associated with tiny prior mass. The nested sampling algorithm is terminated at the $i$-th iteration when the remaining posterior mass is some small fraction of the currently calculated evidence: \begin{equation} Z_{\text{live}} \approx \langle \likelihood_{\text{live}} \rangle X_i, \label{eq:live_evidence} \end{equation} where $\langle \likelihood_{\text{live}} \rangle$ is the average likelihood of the current live points. By this stage, the estimated remaining evidence from the live points is a negligible fraction of the evidence accumulated thus far. A common stopping criterion is to halt the process when the expected future contribution to the evidence is smaller than a user-defined tolerance $\epsilon$: \begin{equation} Z_{\text{live}} < \epsilon Z_{\text{dead}}. \end{equation} The stopping criterion ensures that the final evidence estimate and posterior samples are robust and that computational effort is not wasted on regions of the parameter space with insignificant posterior mass. \subsubsection{Evidence Correction for Unphysical Parameter Space} \label{ssec:evidence_correction} The standard nested sampling algorithm, as outlined in the preceding sections, implicitly assumes that the entire prior volume is accessible and yields a non-zero likelihood. In practice, many cosmological models possess parameter spaces with regions that are ``unphysical.'' These are regions where the model violates fundamental physical constraints, such as predicting a negative age for the Universe, failing to converge during numerical evolution, or producing spectra with unphysical features. In our analysis pipeline, these unphysical points are assigned a minimal log-likelihood value, effectively a numerical log-zero. We therefore partition the parameter space $\params \in \Omega$ into two disjoint subspaces: the ``physical'' subspace $\Omega_{\text{phys}}$, where the likelihood $\likelihood(\data|\params,\model) > 0$, and the ``unphysical'' subspace $\Omega_{\text{unphys}}$, where $\likelihood(\data|\params,\model) \le 0$. The process of generating samples from the prior $\prior(\params|\model)$ typically begins by drawing a point from a unit hypercube, a $D$-dimensional space $[0,1]^D$, where $D$ is the number of model parameters $\paramsM$, with each coordinate sampled uniformly as $(u_1, u_2, \dots, u_D)$ where $u_i \in [0,1]$, which is then transformed into the physical parameter space via the prior transformation. 
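For separable priors this transformation acts on each unit hypercube coordinate independently. As a minimal illustration (a sketch only, using uniform priors with the example ranges quoted below rather than the full prior specification of our runs):

\noindent\framebox[\linewidth][l]{\parbox{0.95\linewidth}{\ttfamily\small
import numpy as np\\
\\
\# Draw a point in the unit hypercube (here D = 2)\\
u = np.random.rand(2)\\
\\
\# Uniform prior transformation to physical parameters\\
\# (illustrative ranges only)\\
H0 = 60.0 + u[0] * (80.0 - 60.0)\\
obh2 = 0.019 + u[1] * (0.026 - 0.019)
}}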
The physical parameter space corresponds to the actual parameter ranges (e.g., $H_0 \in [60, 80]$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\text{b}}h^2 \in [0.019, 0.026]$), obtained by applying the appropriate prior type (uniform, Gaussian, log-uniform, etc.) to each unit hypercube coordinate. This transformed point may fall into either $\Omega_{\text{phys}}$ or $\Omega_{\text{unphys}}$. Since only points in $\Omega_{\text{phys}}$ will enter the nested sampling algorithm and contribute to the evidence integral, it is crucial to account for the fraction of the prior volume that is inaccessible due to unphysicality. This accessible volume fraction can be estimated via rejection sampling: \begin{equation} V_{\text{phys}} \approx \frac{n_{\text{prior}}}{n_{\text{total}}} = \frac{\text{\# of points in } \Omega_{\text{phys}}}{\text{\# of points in } \Omega_{\text{phys}} + \text{\# of points in } \Omega_{\text{unphys}}}. \end{equation} The corrected, true evidence is therefore: \begin{equation} \evidence_{\text{true}} = \evidence_{\text{raw}} \times \left( \frac{n_{\text{prior}}}{n_{\text{total}}} \right). \end{equation} In logarithmic form, which is used for all computations, the correction is additive: \begin{equation} \log(\evidence_{\text{true}}) = \log(\evidence_{\text{raw}}) + \log\left(\frac{n_{\text{prior}}}{n_{\text{total}}}\right). \label{eq:evidence_correction} \end{equation} It is important to note that the correction factor is $n_{\text{prior}}/n_{\text{total}}$, not $n_{\text{live}}/n_{\text{total}}$. This is because the volume shrinkage during the initial compression phase (where $n_{\text{prior}}$ is reduced to $n_{\text{live}}$) is already correctly tracked by \texttt{PolyChord}'s \texttt{update\_evidence()} routine at each nested sampling step. The ratio $n_{\text{prior}}/n_{\text{total}}$ accounts only for the fraction of parameter space that is physical versus unphysical, as determined by the initial rejection sampling. % \subsubsection{Dynamic Nested Sampling} % \label{ssec:dynamic_ns} % To implement this correction, we employ \texttt{PolyChord}'s capability to account for the inaccessible prior volume and leverage the information from all sampled points to improve the evidence estimate. The algorithm proceeds in four distinct phases, which we illustrate with a concrete example from our runs: a total sample count of $n_{\text{total}}=12,000$\footnote{In the \texttt{PolyChord} source code and output files, this quantity is labeled \texttt{ndiscarded}. We use the notation $n_{\text{total}}$ here to avoid confusion, as $n_{\text{total}}$ includes all sampled points—both the $n_{\text{prior}}$ physical points that are retained and the $n_{\text{total}} - n_{\text{prior}}$ unphysical points that are rejected.}, a prior point count of $n_{\text{prior}}=10,000$, and a final live point count of $n_{\text{live}}=1,000$. % \textbf{Phase 1: Prior Volume Estimation.} The algorithm begins by drawing $n_{\text{total}}$ samples from the prior $\prior(\params|\model)$. For each sample, the likelihood is evaluated. The first $n_{\text{prior}}$ samples that fall within $\Omega_{\text{phys}}$ are retained. In our example, 12,000 total points were drawn to find 10,000 physical points. This initial rejection sampling phase provides a direct Monte Carlo estimate of the accessible prior volume fraction: % \begin{equation} % V_{\text{phys}} \approx \frac{n_{\text{prior}}}{n_{\text{total}}}. % \end{equation} % % The $n_{\text{prior}}$ physical points are sorted by their likelihood values. 
% \textbf{Phase 2: Initial Compression.} This phase constitutes the initial steps of the nested sampling process. Starting with the $n_{\text{prior}}$ sorted physical points, the algorithm iteratively removes the point with the lowest likelihood without replacing it. This process is repeated $n_{\text{prior}} - n_{\text{live}}$ times, reducing the set of active points from $n_{\text{prior}}$ down to $n_{\text{live}}$. In our example, this corresponds to the first $10,000 - 1,000 = 9,000$ nested sampling steps. % \textbf{Phase 3: Standard Nested Sampling.} Once the number of active points has been reduced to $n_{\text{live}}$, the algorithm transitions to the standard nested sampling procedure described in the preceding sections. At each subsequent step, the lowest-likelihood point is removed and replaced with a new point drawn from the prior, constrained to have a likelihood greater than that of the point just removed. \texttt{PolyChord} uses slice sampling for this constrained sampling. This phase continues until the stopping criterion is met. % \textbf{Phase 4: Final Evidence Contribution.} Upon termination, the remaining $n_{\text{live}}$ points are used to compute the final contribution to the evidence integral, accounting for the prior volume enclosed by the likelihood contour of the lowest-likelihood remaining point. % The crucial insight of this dynamic approach is that all $n_{\text{prior}}$ initial points, not just the final $n_{\text{live}}$ points, contribute to the evidence calculation. The evidence integral is approximated as a sum over all discarded points. The ``raw'' evidence, $\evidence_{\text{raw}}$, which is conditioned on the physical subspace $\Omega_{\text{phys}}$, is calculated as: % \begin{equation} % \evidence_{\text{raw}} = \sum_{i=1}^{N_{\text{dead}}} \likelihood_i w_i + \frac{1}{n_{\text{live}}} \sum_{j=1}^{n_{\text{live}}} \likelihood_j X_{N_{\text{dead}}}, % \end{equation} % where $\likelihood_i$ and $w_i$ are the likelihood and weight of the $i$-th discarded point, and the final term is the contribution from the remaining live points. The weights $w_i = X_{i-1} - X_i$ are determined by the shrinkage of the prior volume $X_i$. % In the dynamic deletion phase (Phase 2), the number of active points decreases at each step. For the first step ($i=1$), there are $N=n_{\text{prior}}$ points, for the second step $N=n_{\text{prior}}-1$, and so on, until $N=n_{\text{live}}+1$ for the last step of this phase. In the standard phase (Phase 3), the number of points is constant at $N=n_{\text{live}}$. This changing number of active points is correctly accounted for when calculating the prior volume shrinkage at each step. For our example, the raw evidence sum explicitly includes terms from the 9,000 dynamically deleted points: % \begin{equation} % \evidence_{\text{raw}} = \underbrace{\likelihood_1 w_1 + \likelihood_2 w_2 + \dots + \likelihood_{9000} w_{9000}}_{\text{Phase 2: Dynamic Deletion}} + \underbrace{\sum_{i=9001}^{N_{\text{dead}}} \likelihood_i w_i + \dots}_{\text{Phase 3 \& 4}}. % \end{equation} % This ensures that the likelihood evaluations of all $n_{\text{prior}}$ initial points are used, not just a subset of size $n_{\text{live}}$. % The rationale for this approach is twofold. First, it is computationally efficient. The vast, low-likelihood regions ($n_{\text{prior}}$) of the prior are explored cheaply via simple rejection sampling. 
The more expensive slice sampling is reserved for the high-likelihood regions ($n_{\text{live}}$) where fine-grained exploration is necessary. Second, it preserves information by integrating over the entire range of prior. Rather than discarding the $n_{\text{prior}} - n_{\text{live}}$ lower-likelihood physical points, the initial compression phase seamlessly integrates them as the first steps of the nested sampling run, providing a more accurate and robust estimate of the evidence integral. % The values of $n_{\text{prior}}$ and $n_{\text{total}}$ for each model-dataset combination analysed in this work are provided in our public data release on Zenodo. All Bayesian evidence values and posterior distributions presented in this paper have been calculated using this methodology and include the correction factor described in Eq.~\ref{eq:evidence_correction}. \subsubsection{Dynamic Nested Sampling \& Synchronous Parallel Sampling} \label{ssec:dynamic_ns} Our statistical analysis is performed using the dynamic nested sampling framework implemented in \texttt{PolyChord}~\cite{Handley2015PolychordI,Handley2015PolychordII}. While this framework supports adaptive live point allocation~\cite{Higson2018dns}, we use a constant target number of live points during the main sampling phase, leveraging the framework's efficient synchronous parallelisation across HPC cores and integrated termination scheme. Our runs proceed through three distinct phases, as illustrated in Figure~\ref{fig:dns_plots}. \textbf{Initial Compression.} The process begins with an initial set of $n_{\text{prior}} \approx 10{,}000$ live points sampled directly from the prior and verified to yield physical solutions (i.e., $\likelihood(\mathcal{D}|\boldsymbol{\theta},\mathcal{M}) > 0$). The compression phase consists of the first $n_{\text{prior}} - n_{\text{live}} \approx 9{,}000$ iterations of the nested sampling algorithm, during which the lowest-likelihood points are sequentially deleted without replacement, reducing the live point count from $n_{\text{prior}}$ to the target value $n_{\text{live}} \approx 1{,}000$, which will enter the main nested sampling stage. During this phase, the number of active points $n_k$ in \Cref{eq:logX_general} decreases from $n_{\text{prior}}$ down to $n_{\text{live}}$. Each deleted point contributes to the evidence integral with its appropriate prior volume weight $w_i = X_{i-1} - X_i$, where the prior volume shrinks according to the decreasing live point count. This phase efficiently accumulates evidence from the vast, low-likelihood regions of the prior volume. \textbf{Synchronous Parallel Sampling.} During the main sampling phase, the live point count oscillates above the target value $n_{\text{live}} \approx 1{,}000$, corresponding to $n_k \approx n_{\text{live}}$ in \Cref{eq:logX_general}. These oscillations, clearly visible in Figure~\ref{fig:dns_plots}, are a characteristic feature of \texttt{PolyChord}'s synchronous parallelisation scheme. In each iteration, a batch of the lowest-likelihood points (equal to the number of parallel cores, in our case 760) is discarded, and the same number of new points are generated simultaneously. This synchronous approach, where all cores must wait for the slowest likelihood evaluation in the batch to complete, is crucial for preventing statistical bias. An asynchronous approach would preferentially sample regions of parameter space with faster likelihood evaluations (e.g., flat universes over curved ones), leading to an incorrect posterior. 
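To make the prior-volume bookkeeping of \Cref{eq:logX_general} concrete, the following minimal Python sketch (pure \texttt{numpy}, with a placeholder live-point schedule and placeholder log-likelihoods rather than outputs of our pipeline) applies \Cref{eq:logX_general,eq:evidence_sum,eq:importance_weight} to a run whose live point count first decreases during compression and then remains approximately constant:

\noindent\framebox[\linewidth][l]{\parbox{0.95\linewidth}{\ttfamily\small
import numpy as np\\
\\
\# Placeholder live-point schedule: compression from 10000 to 1000,\\
\# followed by a constant-n\_live main phase\\
nk = np.concatenate([np.arange(10000, 1000, -1), np.full(5000, 1000)])\\
\\
\# Placeholder (monotonically increasing) dead-point log-likelihoods\\
logL = np.sort(np.random.randn(nk.size))\\
\\
\# Expected log prior volume at each iteration\\
logX = -np.cumsum(1.0 / nk)\\
\\
\# Shell weights w\_i = X\_(i-1) - X\_i, with X\_0 = 1\\
X = np.exp(logX)\\
w = np.concatenate([[1.0 - X[0]], X[:-1] - X[1:]])\\
\\
\# Evidence Z approx sum of w\_i L\_i, and posterior weights p\_i\\
Z = np.sum(w * np.exp(logL))\\
p = w * np.exp(logL) / Z
}}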
\textbf{Final Deletion.} The run terminates with a final phase where all remaining live points are systematically removed one by one. In this stage, the active point count $n_k$ decreases from $n_{\text{live}}$ down to 1. This process is the dynamic framework's integrated termination procedure, which replaces the separate calculation of the remaining evidence $Z_{\text{live}} \approx \langle \likelihood \rangle_{\text{live}} X_i$ found in earlier nested sampling implementations~\cite{Handley2015PolychordII}. The total number of iterations required is related to the Kullback--Leibler divergence between the prior and the posterior. However, a strict precision criterion can cause nested sampling to continue beyond the point of maximum information gain from prior to posterior (as reflected by $\KL$). For example, in \Cref{fig:dns_plots}, the iteration count ratio for Pantheon between $w_0w_a$CDM and $\Lambda$CDM is approximately 1:2.66, whilst the corresponding $\KL$ ratio (shown in \Cref{fig:dkl_single}) is only 1:14. This discrepancy arises because the precision criterion requires nested sampling to carry on sampling even after the bulk of information gain has been extracted, ensuring convergence to a high-precision evidence estimate. \textbf{Prior Volume in a Three-Phase Run.} The varying number of live points during the compression and deletion phases means the simple exponential approximation for prior volume does not hold throughout the entire run. The rigorous relationship between iteration number and prior volume is given by \Cref{eq:logX_general}, which correctly accounts for the changing live point count $n_k$ throughout all three phases of the run~\cite{Hu2023aeons}. This provides a precise mapping between the iteration number (x-axis of Figure~\ref{fig:dns_plots}) and the expected log-prior volume being explored. Figure~\ref{fig:dns_plots} demonstrates two key features: (1) for the same dataset but different models (e.g., $\Lambda$CDM and $w_0w_a$CDM for \texttt{Pantheon}, shown in orange and green), the iteration count increases only slightly for the more complex model; (2) for the same model but different datasets (e.g., $\Lambda$CDM with \texttt{Planck+lensing} vs.\ \texttt{Pantheon}, shown in blue and orange), the iteration count varies significantly, with \texttt{Planck}'s larger parameter space requiring substantially longer run times. This behaviour is also illustrated in Figures~\ref{fig:dkl_single}, \ref{fig:dkl_combo_part1}, and \ref{fig:dkl_combo_part2}, where the KL divergence values remain similar across models (across rows) but vary greatly across datasets (across columns). \begin{figure*} \centering \includegraphics[width=\textwidth]{figs/points_vs_iterations_comparison.pdf} \caption{Evolution of the live point count throughout nested sampling runs for $\Lambda$CDM and $w_0w_a$CDM models with Planck with CMB lensing and Pantheon datasets. 
Three distinct phases are visible: (1) initial compression where the first $n_{\text{prior}} - n_{\text{live}} \approx 9{,}000$ iterations sequentially delete the lowest-likelihood points without replacement, reducing the live point count from $n_{\text{prior}} \approx 10^4$ to the target value $n_{\text{live}} \approx 10^3$, which will enter the main nested sampling stage; (2) main sampling phase where the live point count oscillates above $n_{\text{live}}$ due to synchronous parallel processing with 760 cores; (3) final deletion phase where the live point count decreases from $n_{\text{live}}$ to zero as the remaining points are systematically removed one by one. The iteration number $i$ ($x$-axis) maps to the compressed log-prior volume via $\langle \log X_i \rangle = -\sum_{k=1}^{i} 1/n_k$, where $n_k$ is the live point count at iteration $k$, as shown by the $y$-axis (see~\Cref{ssec:dynamic_ns} for details)~\cite{Hu2023aeons}. The different termination points reflect the different Kullback--Leibler divergences between prior and posterior for each model-dataset combination. More complex models like $w_0w_a$CDM (green) require slightly more iterations than simpler models like $\Lambda$CDM (orange), but this effect is not as dominant as the variation across different datasets, with Planck with CMB lensing (blue) requiring substantially more iterations than Pantheon (orange, green). } \label{fig:dns_plots} \end{figure*} \subsection{Cosmological Datasets} \label{ssec:cosmological_datasets} We analyse a comprehensive set of cosmological observations spanning multiple redshift ranges and probing different physical phenomena. CMB observations probe the early universe at $z \approx 1100$, while late-universe probes including baryon acoustic oscillations, Type Ia supernovae, and weak gravitational lensing constrain the expansion history and structure formation across cosmic time. To constrain the model parameters, we perform MCMC and nested sampling runs using the Cobaya framework~\cite{Torrado2021Cobaya}, which interfaces with the PolyChord sampler~\cite{Handley2015PolychordI,Handley2015PolychordII} and the CAMB Boltzmann code~\cite{Lewis:1999bs}. A full list of the likelihood packages used for this analysis is provided in Table~\ref{tab:datasets}. 
\begin{table*} \centering \begin{tabular}{p{7cm}p{8cm}} \hline\hline \textbf{Dataset} & \textbf{Likelihood} \\ \hline\hline \multicolumn{2}{l}{\textbf{Cosmic Microwave Background}} \\ \hline Planck~\cite{Planck2020likelihoods,PlanckClik} & \texttt{planck\_2018\_lowl.TT} \\ & \texttt{planck\_2018\_lowl.EE} \\ & \texttt{planck\_2018\_highl\_plik.TTTEEE} \\ & \texttt{planck\_2018\_highl\_plik.SZ} \\[0.3ex] Planck with CMB lensing~\cite{Planck2020likelihoods,Planck2020lensing,PlanckClik} & \texttt{planck\_2018\_lowl.TT} \\ & \texttt{planck\_2018\_lowl.EE} \\ & \texttt{planck\_2018\_highl\_plik.TTTEEE} \\ & \texttt{planck\_2018\_highl\_plik.SZ} \\ & \texttt{planck\_2018\_lensing.clik} \\[0.3ex] CamSpec~\cite{CamSpec2021} & \texttt{planck\_2018\_lowl.TT} \\ & \texttt{planck\_2018\_lowl.EE} \\ & \texttt{planck\_2018\_highl\_CamSpec2021.TTTEEE} \\[0.3ex] CamSpec with CMB lensing~\cite{CamSpec2021,Planck2020lensing} & \texttt{planck\_2018\_lowl.TT} \\ & \texttt{planck\_2018\_lowl.EE} \\ & \texttt{planck\_2018\_highl\_CamSpec2021.TTTEEE} \\ & \texttt{planck\_2018\_lensing.clik} \\[0.3ex] CMB Lensing~\cite{Planck2020lensing} & \texttt{planck\_2018\_lensing.clik} \\[0.3ex] BICEP~\cite{BICEP2018,BICEPKeckData} & \texttt{bicep\_keck\_2018} \\ \hline \multicolumn{2}{l}{\textbf{Baryon Acoustic Oscillations}} \\ \hline SDSS~\cite{Beutler2011,Ross2015,Alam2021,BAOData} & \texttt{bao.sixdf\_2011\_bao} \\ & \texttt{bao.sdss\_dr7\_mgs} \\ & \texttt{bao.sdss\_dr16\_baoplus\_lrg} \\ & \texttt{bao.sdss\_dr16\_baoplus\_elg} \\ & \texttt{bao.sdss\_dr16\_baoplus\_qso} \\ & \texttt{bao.sdss\_dr16\_baoplus\_lyauto} \\ & \texttt{bao.sdss\_dr16\_baoplus\_lyxqso} \\ \hline \multicolumn{2}{l}{\textbf{Type Ia Supernovae}} \\ \hline SH$_0$ES~\cite{Riess2021,Scolnic2018} & \texttt{H0.riess2020Mb} \\ & \texttt{sn.pantheon} \\[0.3ex] Pantheon~\cite{Scolnic2018} & \texttt{sn.pantheon} \\ \hline \multicolumn{2}{l}{\textbf{Weak Lensing}} \\ \hline DES~\cite{Abbott2018} & \texttt{des\_y1.joint} \\ \hline\hline \end{tabular} \caption{Cosmological datasets and their corresponding likelihood components used in the analysis. Datasets are grouped by observational type with references to the actual data packages and implementation repositories used. Likelihood names correspond to those used by \texttt{Cobaya}.} \label{tab:datasets} \end{table*} \subsubsection{Planck} This dataset comprises high-precision measurements of CMB temperature and polarisation anisotropies from the surface of last scattering ($z \approx 1100$) using the \emph{Plik} high-$\ell$ likelihood~\cite{Planck2020params,PlanckClik}. Our analysis utilises four likelihood components: low-$\ell$ temperature and E-mode polarisation (\texttt{planck\_2018\_lowl.TT}, \texttt{planck\_2018\_lowl.EE}) covering $\ell = 2$--29, the high-$\ell$ TTTEEE likelihood (\texttt{planck\_2018\_highl\_plik.TTTEEE}) spanning $\ell = 30$--2508, and a Sunyaev-Zel'dovich (SZ) foreground prior (\texttt{planck\_2018\_highl\_plik.SZ}). Temperature fluctuations arise from acoustic oscillations in the primordial photon-baryon plasma~\cite{Peebles1970,HuWhite1997}, whilst polarisation E-modes trace Thomson scattering during the recombination and reionisation epochs. This dataset provides strong constraints on the fundamental cosmological parameters: the baryon density $\Omega_b h^2$, cold dark matter density $\Omega_c h^2$, Hubble parameter $H_0$, primordial amplitude $A_s$, spectral index $n_s$, and reionisation optical depth $\tau$. 
\subsubsection{Planck with CMB lensing} This dataset combines the Planck 2018 CMB measurements with the CMB lensing reconstruction (\texttt{planck\_2018\_lensing.clik})~\cite{Planck2020params,Planck2020lensing,PlanckClik}. The lensing likelihood is added to the four baseline Planck components (low-$\ell$ TT and EE, high-$\ell$ TTTEEE, and SZ foreground). This combination provides enhanced constraints by breaking geometric degeneracies and improving measurements of the matter density $\Omega_m$ and the clustering amplitude $\sigma_8$ through an independent probe of large-scale structure growth. \subsubsection{CamSpec} This dataset represents an alternative high-$\ell$ analysis of Planck 2018 data, using the CamSpec 2021 likelihood (\texttt{planck\_2018\_highl\_CamSpec2021.TTTEEE}) combined with the same low-$\ell$ likelihoods as the baseline Planck analysis~\cite{CamSpec2021}. CamSpec employs distinct foreground modelling and power spectrum estimation compared to the official \emph{Plik} pipeline, including different approaches to dust cleaning and the treatment of systematics. It provides an independent systematic cross-check, which is particularly valuable for assessing the robustness of cosmological parameter constraints to the choice of analysis methodology. \subsubsection{CamSpec with CMB lensing} This dataset combines the CamSpec CMB analysis (low-$\ell$ TT and EE plus high-$\ell$ CamSpec2021 TTTEEE) with the CMB lensing reconstruction (\texttt{planck\_2018\_lensing.clik})~\cite{CamSpec2021,Planck2020lensing}. It provides an independent systematic cross-check with enhanced parameter constraints from lensing, and is particularly valuable for assessing whether tensions in $\Omega_m$ and $\sigma_8$ persist across different CMB analysis pipelines. \subsubsection{CMB Lensing} This dataset is the standalone measurement of the lensing potential power spectrum, derived from the gravitational deflection of CMB photons by intervening large-scale structure~\cite{Planck2020lensing}. The lensing reconstruction (\texttt{planck\_2018\_lensing.clik}) uses quadratic estimators~\cite{HuOkamoto2002} on Planck temperature and polarisation maps to extract the lensing convergence signal. It probes the matter distribution and structure growth over cosmic history (primarily $z \sim 0.5$--5), breaking geometric degeneracies and enhancing constraints on $\Omega_m$, $\sigma_8$, and the sum of neutrino masses $\sum m_\nu$. We run CMB lensing alone to enable tension quantification analysis on the rest of the CMB data with and without lensing. \subsubsection{BICEP} This dataset consists of degree-scale B-mode polarisation measurements from the BICEP/Keck Array 2018 data release (\texttt{bicep\_keck\_2018}), which searches for primordial gravitational waves from inflation~\cite{BICEP2018,BICEPKeckData}. Observations at 95, 150, and 220 GHz from the South Pole target the cleanest sky region with the lowest Galactic foreground contamination. B-modes can originate from tensor perturbations (inflationary gravitational waves) or from the weak gravitational lensing of E-modes, which acts as a foreground in this search. The BK18-only constraints on the tensor-to-scalar ratio are $r_{0.05} < 0.06$ (95\% CL), directly probing the inflationary energy scale through the relation $V^{1/4} \propto r^{1/4}$~\cite{Planck2018params}. 
\subsubsection{SDSS} This dataset is a compilation of baryon acoustic oscillation (BAO) measurements from seven independent surveys spanning $z = 0.1$ to $z > 2$~\cite{Beutler2011,Ross2015,Alam2021,BAOData}. We combine measurements from: 6dFGS (\texttt{bao.sixdf\_2011\_bao}, $z = 0.106$), SDSS DR7 MGS (\texttt{bao.sdss\_dr7\_mgs}, $z = 0.15$), and five eBOSS DR16 tracers covering $z = 0.698$--2.33 using luminous red galaxies (LRG), emission line galaxies (ELG), quasars (QSO), plus the Lyman-$\alpha$ forest auto-correlation and its cross-correlation with quasars at $z > 2$. The BAO feature represents the imprint of primordial sound waves at recombination~\cite{EisensteinHu1998,Eisenstein2005,Cole2005}, with a characteristic sound horizon scale of $r_{\rm drag} \approx 147$ Mpc for Planck-like $\Lambda$CDM cosmologies~\cite{Planck2020params}. This provides a standard ruler that measures both the angular diameter distance $D_A(z)$ and the Hubble parameter $H(z)$ as functions of redshift, thereby constraining $\Omega_m$, $H_0$, and dark energy dynamics. \subsubsection{SH$_0$ES} This dataset provides a Gaussian prior on the Type Ia supernova absolute magnitude $M_b$, derived from the local distance ladder~\cite{Riess2021,Scolnic2018,SNData}. We implement this through the \texttt{H0.riess2020Mb} likelihood, which sets $M_b = -19.253 \pm 0.027$ mag based on HST observations of Cepheid variable stars in SN Ia host galaxies with three geometric anchors (Milky Way parallaxes, the Large Magellanic Cloud, and the NGC 4258 water maser). This $M_b$ prior is used alongside the Pantheon SN Ia dataset (\texttt{sn.pantheon} with \texttt{use\_abs\_mag: true}) to derive an $H_0$ value of $73.2 \pm 1.3$ km s$^{-1}$ Mpc$^{-1}$, anchoring the local expansion rate. This result creates a tension of approximately $4\sigma$ with the CMB-inferred value of $H_0 \approx 67$ km s$^{-1}$ Mpc$^{-1}$ within the $\Lambda$CDM model, motivating searches for new physics or systematic effects. \subsubsection{Pantheon} The Pantheon sample is a compilation of 1048 spectroscopically confirmed Type Ia supernovae from Pan-STARRS1 (PS1), SDSS, SNLS, low-$z$ surveys, and HST (\texttt{sn.pantheon})~\cite{Scolnic2018,SNData}. It covers the redshift range $0.01 < z < 2.3$ with standardised peak magnitudes corrected for light-curve shape and colour using the SALT2 fitter. The luminosity distance-redshift relation $d_L(z)$ directly probes the expansion history $H(z) = H_0 E(z)$ through the integral $d_L(z) = c(1+z) \int_0^z dz'/H(z')$, providing evidence for cosmic acceleration~\cite{Riess1998,Perlmutter1999} at $z \sim 0.5$ and constraining $\Omega_m$ and the dark energy equation of state $w$. When analysed without an external $H_0$ calibration, this dataset constrains the degenerate product $H_0 \times M_b$ rather than absolute distances. \subsubsection{DES} This dataset is the Dark Energy Survey Year 1 (DES Y1) ``3×2pt'' analysis (\texttt{des\_y1.joint})~\cite{Abbott2018,Kilbinger2015}, which combines three two-point correlation functions: cosmic shear (the weak lensing auto-correlation of background galaxies at $z \sim 0.2$--1.3), galaxy clustering (the angular auto-correlation of foreground lens galaxies in five tomographic redshift bins), and galaxy-galaxy lensing (the cross-correlation between lens positions and source shears). This joint analysis, spanning 1321 deg$^2$, probes both the expansion history through geometric effects and structure growth through gravitational lensing. 
It primarily constrains the matter density $\Omega_m$ and the clustering amplitude through the parameter combination $S_8 = \sigma_8(\Omega_m/0.3)^{0.5}$. The DES Y1 3×2pt analysis finds $S_8 = 0.773 \pm 0.026$, showing a mild tension of approximately $2\sigma$ with the higher value of $S_8 \approx 0.83$ inferred from Planck.
\subsection{Tools}
This work utilises a modified version of \texttt{Cobaya 3.5.2}\footnote{\url{https://github.com/AdamOrmondroyd/cobaya}}~\citep{cobayaascl, Torrado2021Cobaya} as its sampling and modelling framework, which interfaces the likelihoods of the different datasets with the Boltzmann code \texttt{CAMB 1.4.2.1}~\citep{Lewis:1999bs,Howlett:2012012mh,Mead_2016}. The specific likelihoods used for each dataset are listed in~\Cref{tab:datasets}. \texttt{PolyChord 1.22.1}\footnote{\url{https://github.com/PolyChord/PolyChordLite}} was used as the nested sampler to explore parameter spaces and generate posterior samples with 1000 live points. Subsequent analysis, including plots and the computation of tension statistics, was performed using \texttt{anesthetic}\footnote{\url{https://github.com/handley-lab/anesthetic}} and \texttt{unimpeded}.
\section{\texttt{unimpeded} in action}
\label{sec:unimpeded_action}
\texttt{unimpeded} is a Python-based tool designed to streamline access to pre-computed cosmological chains and facilitate Bayesian analyses. This section outlines its installation, available data, and basic usage.
\subsection{Installation}
\label{ssec:installation}
The Python library \texttt{unimpeded} is publicly available on GitHub. To ensure a clean installation and avoid conflicts with other packages, we highly recommend creating and activating a dedicated Python virtual environment before proceeding.
\begin{center}
\fbox{\parbox{\dimexpr\linewidth-2\fboxsep-2\fboxrule}{%
\ttfamily
\begin{tabular}{@{}l@{}}
\hspace{1em}python -m venv venv\\
\hspace{1em}source venv/bin/activate\\
\end{tabular}%
}}
\end{center}
The simplest method is to install the latest stable release from the Python Package Index (PyPI) using \texttt{pip}:
\begin{center}
\fbox{\parbox{\dimexpr\linewidth-2\fboxsep-2\fboxrule}{\ttfamily
\hspace{1em}pip install unimpeded
}}
\end{center}
Alternatively, for users interested in modifying the source code or contributing to development, an editable version can be installed directly from the GitHub repository:
\begin{center}
\fbox{\parbox{\dimexpr\linewidth-2\fboxsep-2\fboxrule}{%
\ttfamily
\begin{tabular}{@{}l@{}}
\hspace{1em}git clone https://github.com/handley-lab/unimpeded\\
\hspace{1em}cd unimpeded\\
\hspace{1em}pip install -e .\\
\end{tabular}%
}}
\end{center}
The full source code, along with further documentation and examples, is hosted at the GitHub repository: \url{https://github.com/handley-lab/unimpeded}. To use \texttt{unimpeded}'s tension statistics calculator (see~\Cref{ssec:tension_calculator}) or its chain analysis and visualisation tools (see~\Cref{ssec:sampling_anesthetic}), \texttt{anesthetic}~\cite{Handley2019anesthetic} must be installed in the same virtual environment (\texttt{venv}) as \texttt{unimpeded}.
This can be done via \texttt{pip}:
\begin{center}
\fbox{\parbox{\dimexpr\linewidth-2\fboxsep-2\fboxrule}{%
\ttfamily
\begin{tabular}{@{}l@{}}
\hspace{1em}pip install anesthetic
\end{tabular}%
}}
\end{center}
Alternatively, an editable version of \texttt{anesthetic} can be installed from the GitHub repository:
\begin{center}
\fbox{\parbox{\dimexpr\linewidth-2\fboxsep-2\fboxrule}{%
\ttfamily
\begin{tabular}{@{}l@{}}
\hspace{1em}git clone https://github.com/handley-lab/anesthetic.git\\
\hspace{1em}cd anesthetic\\
\hspace{1em}pip install -e .\\
\end{tabular}%
}}
\end{center}
\subsection{Available models and datasets}
\label{ssec:available_models_datasets}
\texttt{unimpeded} provides access to a growing grid of both nested sampling chains and MCMC chains generated using \texttt{Cobaya}~\cite{Torrado2021Cobaya}, with nested sampling performed by \texttt{PolyChord}~\cite{Handley2015PolychordI,Handley2015PolychordII}. \texttt{unimpeded} currently covers 8 cosmological models, detailed in~\Cref{ssec:cosmological_models}, with prior ranges summarised in~\Cref{tab:cosmological_models}, and provides 10 datasets and their pairwise combinations, detailed in~\Cref{ssec:cosmological_datasets}. The likelihood(s) used by \texttt{Cobaya} for each dataset's nested sampling and MCMC runs are listed in~\Cref{tab:datasets}. These chains are stored on Zenodo in CSV format and are accessible directly through the \texttt{unimpeded} API (see~\Cref{ssec:loading_chains}), or from the Zenodo website.
\begin{table}
\centering
\begin{tabular}{p{2.2cm}p{1.8cm}p{2.2cm}p{6.8cm}}
\hline\hline
\textbf{Model} & \textbf{Parameter} & \textbf{Prior range} & \textbf{Definition} \\
\hline
$\Lambda$CDM & $H_0$ & [20, 100] & Hubble constant (km s$^{-1}$ Mpc$^{-1}$) \\
 & $\tau_{\text{reio}}$ & [0.01, 0.8] & Optical depth to reionization \\
 & $\Omega_b h^2$ & [0.005, 0.1] & Baryon density parameter \\
 & $\Omega_c h^2$ & [0.001, 0.99] & Cold dark matter density parameter \\
 & $\log(10^{10}A_s)$ & [1.61, 3.91] & Amplitude of scalar perturbations \\
 & $n_s$ & [0.8, 1.2] & Scalar spectral index \\
\hline
$\Omega_k\Lambda$CDM & $\Omega_k$ & [-0.3, 0.3] & Curvature density parameter (varying curvature) \\[0.5ex]
$w$CDM & $w$ & [-3, -0.333] & Constant dark energy equation of state \\[0.5ex]
$w_0w_a$CDM & $w_0$ & [-3, 1] & Present-day dark energy equation of state \\
 & $w_a$ & [-3, 2] & Dark energy equation of state evolution (CPL parameterisation) \\[0.5ex]
$m_\nu\Lambda$CDM & $\Sigma m_\nu$ & [0.06, 2] & Sum of neutrino masses (eV) \\[0.5ex]
$A_L\Lambda$CDM & $A_L$ & [0, 10] & Lensing amplitude parameter \\[0.5ex]
$n_{\text{run}}\Lambda$CDM & $n_{\text{run}}$ & [-1, 1] & Running of spectral index ($dn_s/d\ln k$) \\[0.5ex]
$r\Lambda$CDM & $r$ & [0, 3] & Tensor-to-scalar ratio \\
\hline
\end{tabular}
\caption{Cosmological parameters for the models analysed in this work. The baseline $\Lambda$CDM model contains six fundamental parameters, with extensions adding additional parameters to test specific physical hypotheses.
Prior ranges are specified based on theoretical constraints and observational bounds.}
\label{tab:cosmological_models}
\end{table}
\subsection{Loading chains and information}
\label{ssec:loading_chains}
The primary interface for accessing the pre-computed results is the \texttt{DatabaseExplorer} class in \texttt{unimpeded.database}. It provides a programmatic workflow for downloading nested sampling and MCMC chains and their associated metadata. The example below demonstrates the standard user workflow in Python. The process begins by instantiating the \texttt{DatabaseExplorer}, which lists the available content through its \texttt{.models} and \texttt{.datasets} attributes. Subsequently, both nested sampling (\texttt{ns}) and MCMC (\texttt{mcmc}) chains, along with their corresponding metadata files, can be downloaded for a selected model-dataset combination. The code shows an example of downloading the nested sampling (\texttt{'ns'}) chains for the $\Omega_k\Lambda$CDM model (\texttt{'klcdm'}) constrained by the joint DES and CamSpec-with-CMB-lensing dataset (\texttt{'des\_y1.joint+planck\_2018\_CamSpec'}). The correspondence between cosmological models and datasets and their \texttt{unimpeded} input strings is provided in~\Cref{tab:unimpeded_models} and~\Cref{tab:unimpeded_datasets}, respectively. The call to \texttt{dbe.download\_samples} returns a \texttt{samples} object containing the full posterior samples and prior samples, including their parameter values and importance weights.
Complementarily, \texttt{dbe.download\_info} retrieves the \texttt{info} object, which is a YAML file containing the complete run settings used by \texttt{Cobaya} and \texttt{PolyChord} for the analysis. Both \texttt{samples} and \texttt{info} are immediately ready for analysis with tools like \texttt{anesthetic}~\cite{Handley2019anesthetic}.
\noindent\framebox[\linewidth][l]{\parbox{0.95\linewidth}{\ttfamily\small
from unimpeded.database import DatabaseExplorer\\
\\
\# Initialise DatabaseExplorer\\
dbe = DatabaseExplorer()\\
\\
\# Get a list of currently available models and datasets\\
models\_list = dbe.models\\
datasets\_list = dbe.datasets\\
\\
\# Choose model, dataset and sampling method\\
method = 'ns' \# 'ns' for nested sampling, 'mcmc' for MCMC\\
model = 'klcdm' \# from models\_list\\
dataset = 'des\_y1.joint+planck\_2018\_CamSpec' \# from datasets\_list\\
\\
\# Download the samples chain\\
samples = dbe.download\_samples(method, model, dataset)\\
\\
\# Download \texttt{Cobaya} and \texttt{PolyChord} run settings\\
info = dbe.download\_info(method, model, dataset)
}}
\begin{table}[htbp]
\centering
\begin{tabular}{p{4cm}p{3cm}}
\hline\hline
\textbf{Model} & \textbf{\texttt{unimpeded} Input} \\
\hline
$\Lambda$CDM & \texttt{``lcdm''} \\[0.3ex]
$\Omega_k\Lambda$CDM & \texttt{``klcdm''} \\[0.3ex]
$w$CDM & \texttt{``wlcdm''} \\[0.3ex]
$w_0w_a$CDM & \texttt{``walcdm''} \\[0.3ex]
$A_L\Lambda$CDM & \texttt{``Alcdm''} \\[0.3ex]
$m_\nu\Lambda$CDM & \texttt{``mlcdm''} \\[0.3ex]
$n_{\text{run}}\Lambda$CDM & \texttt{``nrunlcdm''} \\[0.3ex]
$r\Lambda$CDM & \texttt{``rlcdm''} \\
\hline
\end{tabular}
\caption{Correspondence between cosmological models described in \Cref{ssec:cosmological_models} and their \texttt{unimpeded} input strings.}
\label{tab:unimpeded_models}
\end{table}
\begin{table}[htbp]
\centering
\begin{tabular}{p{6cm}p{5cm}}
\hline\hline
\textbf{Dataset} & \textbf{\texttt{unimpeded} Input} \\
\hline
Planck & \texttt{``planck\_2018\_plik\_nolens''} \\[0.3ex]
Planck with CMB lensing & \texttt{``planck\_2018\_plik''} \\[0.3ex]
CamSpec & \texttt{``planck\_2018\_CamSpec\_nolens''} \\[0.3ex]
CamSpec with CMB lensing & \texttt{``planck\_2018\_CamSpec''} \\[0.3ex]
CMB Lensing & \texttt{``planck\_2018\_lensing''} \\[0.3ex]
BICEP & \texttt{``bicep\_keck\_2018''} \\[0.3ex]
SDSS & \texttt{``bao.sdss\_dr16''} \\[0.3ex]
SH$_0$ES & \texttt{``H0.riess2020Mb''} \\[0.3ex]
Pantheon & \texttt{``sn.pantheon''} \\[0.3ex]
DES & \texttt{``des\_y1.joint''} \\
\hline
\end{tabular}
\caption{Correspondence between cosmological datasets described in \Cref{tab:datasets} and their \texttt{unimpeded} input strings.}
\label{tab:unimpeded_datasets}
\end{table}
\subsection{Tension Statistics Calculator}
\label{ssec:tension_calculator}
To perform a tension analysis between two datasets, $\data_A$ and $\data_B$, one must first run three separate nested sampling analyses to obtain the chains for: (1) $\data_A$ alone, (2) $\data_B$ alone, and (3) the joint dataset $\data_{AB}$. These full nested sampling runs across a collection of models and datasets took months to complete on a high-performance computer, but \texttt{unimpeded} enables users to access these chains in seconds on a laptop, with only two lines of code, as demonstrated in the minimal working example below. Please note that this functionality requires \texttt{anesthetic} to be installed in the same Python environment as \texttt{unimpeded} (see~\Cref{ssec:installation}).
\noindent\framebox[\linewidth][l]{\parbox{0.95\linewidth}{\ttfamily\small from unimpeded.tension import tension\_calculator\\ \\ tension\_samples = tension\_calculator(method='ns',\\ \phantom{tension\_samples = tension\_calculator(}model='lcdm',\\ \phantom{tension\_samples = tension\_calculator(}datasetA='planck\_2018\_CamSpec',\\ \phantom{tension\_samples = tension\_calculator(}datasetB='des\_y1.joint',\\ \phantom{tension\_samples = tension\_calculator(}nsamples=1000) }} The output of \texttt{tension\_calculator()} is an \texttt{anesthetic.samples.Samples} data structure containing the values of the tension statistics detailed in~\Cref{ssec:tension_quant_theory}, which directly correspond to the theoretical quantities defined previously: \begin{itemize} \item \textbf{$R$ statistic}: The function calculates \texttt{logR} as $\log\evidence_{AB} - \log\evidence_A - \log\evidence_B$, matching the definition of the logarithmic $R$ statistic (\Cref{sssec:r_statistic}). \item \textbf{Information Ratio}: \texttt{I} is computed as $\KL^A + \KL^B - \KL^{AB}$, as defined in \Cref{sssec:information_ratio}. \item \textbf{Suspiciousness}: \texttt{logS} is calculated as $\langle\log\likelihood\rangle_{\posterior_{AB}} - \langle\log\likelihood\rangle_{\posterior_A} - \langle\log\likelihood\rangle_{\posterior_B}$, corresponding to the practical computational form of suspiciousness from \Cref{eq:suspiciousness_likelihood_avg}. \item \textbf{Bayesian Model Dimensionality}: The dimensionality of the shared parameter space, \texttt{d\_G}, is computed as $d_A + d_B - d_{AB}$, as defined in \Cref{sssec:bayesian_model_dimensionality}. \item \textbf{Tension Probability and Significance}: The function uses \texttt{d\_G} and \texttt{logS} to compute the $p$-value (\texttt{p}) and its equivalent Gaussian significance (\texttt{tension}) in units of $\sigma$, as described in \Cref{sssec:p_and_sigma}. \end{itemize} This automated calculation provides a consistent and reproducible method for applying the full suite of Bayesian tension metrics across the large grid of datasets and models provided by \texttt{unimpeded}. \subsection{Analysing chains with \texttt{anesthetic}} \label{ssec:sampling_anesthetic} The nested sampling chains generated by \texttt{unimpeded} are readily processed and analysed using the \texttt{anesthetic} package~\cite{Handley2019anesthetic}. This package provides both quantitative statistical measures and powerful visualisation tools. A key function is \texttt{NestedSamples.stats()}, which computes summary statistics essential for cosmological inference, including the evidence \texttt{logZ}, which underpins model comparison (see~\Cref{ssec:model_comparison}), and \texttt{logL\_P} (the posterior-averaged log-likelihood $\langle \ln \likelihood \rangle_P$). In addition to these quantitative diagnostics, \texttt{anesthetic} can be used to generate corner plots to visualise the one- and two-dimensional marginalised distributions. This functionality is particularly useful as it can plot the distributions for both the posterior and the prior samples, allowing for a direct visual assessment of the information gain for each parameter. \subsection{Future functions} \label{ssec:future_functions} Future work on \texttt{unimpeded} will focus on expanding its capabilities in three primary directions. First, we plan to develop machine learning emulators for both likelihoods and full posterior distributions.
Trained on the extensive set of nested sampling chains generated by \texttt{unimpeded}, these emulators will facilitate extremely rapid parameter estimation and model exploration. Furthermore, this infrastructure will enable the application of simulation-based inference (SBI) methodologies, which are essential for cosmological analyses where the likelihood function is intractable. Second, the pre-computed grid of cosmological background and perturbation quantities will be expanded. This crucial update will incorporate the latest astronomical data from current and future surveys, including but not limited to ACT, Pantheon+, DESI, Euclid, and LISA, ensuring that \texttt{unimpeded} remains relevant for modern cosmology. Finally, we will implement importance sampling. This feature will provide a computationally inexpensive method for re-weighting existing posterior samples to account for different model assumptions, thereby significantly accelerating the process of updating cosmological constraints. % \section{Results} % \subsection{Tension quantification grid} % \label{ssec:tension_quant_grid} % A primary scientific goal enabled by \texttt{unimpeded} is the systematic quantification of tensions across a wide grid of cosmological models and dataset combinations. This grid leverages the nested sampling chains, which provide the necessary Bayesian evidences for many tension metrics. % \label{ssec:heatmap} % To visualise and identify patterns in inter-dataset consistency, we can generate heatmaps of tension statistics. For example, the poster presented a preliminary heatmap of the $\log \mathcal{R}$ statistic calculated for 10 pairwise dataset combinations across 9 different cosmological models. % % [FIGURE B: Heatmap of log R tension statistic for selected pairwise datasets across multiple cosmological models. (Based on poster figure 'tension_2.png')]. % Such a heatmap allows for a quick visual assessment of which dataset pairs exhibit strong agreement (e.g., $\log \mathcal{R} \sim 0$) or significant tension (e.g., $\log \mathcal{R} \ll 0$) within the context of each cosmological model. Systematic variations across models or datasets can indicate robust tensions or, conversely, model-dependent resolutions. The specific scientific findings from such a heatmap would be detailed in the Results section based on the authors' analysis. \section{Results} \label{sec:results} % The primary results presented in this paper are the public release of the \texttt{unimpeded} tool and the initial population of its associated data repository with nested sampling and MCMC chains. This resource significantly lowers the barrier to entry for advanced cosmological analyses, particularly model comparison and tension quantification. \subsection{Public Release of Chains via \texttt{unimpeded}} %\paragraph{1. Public Release of Chains via \texttt{unimpeded}} We have publicly released a library of nested sampling and MCMC chains spanning 8 cosmological models, 10 datasets and 31 pairwise dataset combinations, as detailed in~\Cref{ssec:available_models_datasets}. These chains are accessible via the \texttt{unimpeded} Python package and are stored on Zenodo, ensuring permanent public access and citable DOIs for specific data releases. This fulfils a key objective of our DiRAC-funded projects (DP192 and 264) by providing a community resource analogous to, but extending the capabilities of, the Planck Legacy Archive.
\subsection{Parameter Estimation} \label{ssec:parameter_estimation} The \texttt{unimpeded} framework enables efficient parameter estimation by providing direct access to pre-computed full nested sampling chains and MCMC chains. This allows users to bypass the computationally intensive step of generating these chains themselves, facilitating rapid and robust cosmological inference. We demonstrate this capability by constraining the parameters of the $\Omega_k\Lambda$CDM model using a combination of SH0ES and DES Year 1 data\footnote{The specific \texttt{unimpeded} input for this dataset combination is \texttt{H0.riess2020Mb+des\_y1.joint} (see~\Cref{tab:datasets}).}. \Cref{fig:prior_posterior} illustrates how the posteriors (orange) for certain parameters are significantly more constrained than the broad priors (blue). The diagonal panels show the marginalised posterior probability distribution for each parameter. For parameters well-constrained by these datasets, such as $\log(10^{10} A_\mathrm{s})$, $n_\mathrm{s}$, and $\Omega_k$, the posteriors appear as narrow peaks, demonstrating substantial information gain. However, for parameters that these particular datasets do not strongly constrain, such as $\tau_\mathrm{reio}$, the posteriors remain broad and similar to the priors. This visualisation was created using \texttt{anesthetic}. \begin{figure}[htbp] \centering \includegraphics[width=\textwidth]{figs/prior_posterior.pdf} \caption{Corner plot showing posterior distributions for the $\Omega_k\Lambda$CDM cosmological model constrained by Planck with CMB lensing + SDSS data. The diagonal panels show the one-dimensional marginalised prior (blue) and Planck with CMB lensing + SDSS posterior (orange) distributions, demonstrating the constraining power of the observational data. The lower triangular panels display the two-dimensional joint posterior and prior, where the inner (darker blue and darker orange) and outer (lighter blue and lighter orange) contours correspond to the 68\% ($1\sigma$) and 95\% ($2\sigma$) credible regions, respectively. The upper triangular panels show scatter plots of samples drawn from the posterior, visually representing parameter correlations. The posterior volume (orange) is much smaller than the prior volume (blue). This corner plot was created using \texttt{anesthetic}.} \label{fig:prior_posterior} \end{figure} \subsection{Model Comparison} \label{ssec:model_comparison_results} The Bayesian evidence values computed via nested sampling from \texttt{unimpeded} form the basis for rigorous model comparison. Here, we present a systematic comparison of eight cosmological models using both individual and combined datasets, as outlined in \Cref{ssec:available_models_datasets}. Since we used uniform priors for the set of competing models, $\Prob(\model_i) = \mathrm{constant}$, the posterior probability of a model given the data $\data$, $\Prob(\model_i|\data) = \evidence_i / \sum_j \evidence_j$ (\Cref{eq:model_prob}), provides a self-contained, normalised probability distribution over the models, allowing for a direct and intuitive ranking of their relative support from the data. Since the posterior probabilities can span many orders of magnitude, we present the natural logarithm of the model probability, $\log \Prob(\model_i|\data)$, where higher values (i.e., less negative) indicate stronger evidence in favour of a given model. The results of our model comparison are summarised in \Cref{fig:model_comp_single,fig:model_comp_combined_part1,fig:model_comp_combined_part2}.
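For concreteness, the normalisation underlying these figures can be sketched in a few lines of \texttt{python}. The snippet below is a minimal illustration only: the \texttt{logZ} values are placeholders rather than numbers from our grid, and it simply evaluates $\log \Prob(\model_i|\data) = \log\evidence_i - \log\sum_j\evidence_j$ under uniform model priors.

\noindent\framebox[\linewidth][l]{\parbox{0.95\linewidth}{\ttfamily\small from scipy.special import logsumexp\\ \\ \# Illustrative log-evidences for one dataset (placeholder values)\\ logZ = \{'lcdm': -1420.3, 'klcdm': -1422.1, 'wlcdm': -1421.0\}\\ \\ \# log P(M|D) = log Z\_i - log(sum\_j Z\_j), assuming uniform model priors\\ log\_norm = logsumexp(list(logZ.values()))\\ log\_prob = \{m: lz - log\_norm for m, lz in logZ.items()\} }}

The figures below apply this normalisation, with the log-evidences taken from the \texttt{unimpeded} nested sampling runs for each dataset or dataset combination.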
\Cref{fig:model_comp_single} presents a heatmap of $\log \Prob(\model_i|\data)$ for each of the eight models tested against 10 individual datasets. The colour scale indicates the level of support, with bluer colours corresponding to higher $\Prob(\model_i|\data)$ and redder colours indicating that a model is more disfavoured relative to the others. To structure the visualisation, the datasets ($y$-axis) are sorted by descending constraining power (model-posterior-weighted average $\langle\KL\rangle$), whilst the models ($x$-axis) are sorted in descending order of their $\KL$ values for the Planck with CMB lensing dataset. \Cref{fig:model_comp_combined_part1,fig:model_comp_combined_part2} show a similar analysis but for various combinations of datasets, designed to leverage their complementary constraining power. One should note that in~\Cref{eq:model_prob}, the sum of evidences is taken over all models being compared for a specific dataset or combination of datasets. Therefore, the numerical values of $\log \Prob(\model_i|\data)$ are only comparable horizontally across models for a fixed dataset, and not vertically across datasets for a fixed model. To enable recovery of the raw log-evidence values, the final column (in yellow) of each heatmap displays the normalising factor $\log\left(\sum_j \evidence_j\right)$, which is the logarithm of the denominator of~\Cref{eq:model_prob}. The raw log-evidence for any specific model-dataset combination can therefore be recovered by adding the $\log \Prob(\model_i|\data)$ value in that cell to the normalising factor of that row. The model preference exhibits a dependence on the specific dataset being considered. As shown in \Cref{fig:model_comp_single}, an analysis of individual datasets reveals a diversity in the preferred cosmological model. No single model is universally favoured. Instead, different probes indicate a weak preference for different extensions to the base $\Lambda$CDM model. For instance, the SDSS dataset weakly prefers the $A_L\Lambda$CDM model, whilst DES weakly prefers a non-flat universe ($\Omega_k\Lambda$CDM). The Planck primary and CamSpec datasets both weakly prefer the $w$CDM model, characterised by a constant but non-standard dark energy equation of state. Other datasets show weak preferences for a running spectral index ($n_{\mathrm{run}}\Lambda$CDM for Pantheon) or massive neutrinos ($m_\nu\Lambda$CDM for SH0ES), whilst BICEP weakly prefers the base $\Lambda$CDM model itself. Notably, the $\Lambda$CDM model, though not always exhibiting the highest $\log \Prob(\model_i|\data)$, emerges as the most consistently well-performing model amongst the eight models considered. This picture changes when datasets are combined, as illustrated in \Cref{fig:model_comp_combined_part1,fig:model_comp_combined_part2}. In the combined analyses, the base $\Lambda$CDM model is most often the preferred scenario. We emphasise that this analysis involves comparing model performance horizontally within each dataset row; due to differences in data normalisation, a vertical comparison of log-evidence values across different datasets for a fixed model is not meaningful. %While some combinations show a slight preference for extensions such as $A_L\Lambda$CDM, the overarching trend is that the evidence for any single extension diminishes when multiple, independent probes are considered simultaneously.
% The tendency for individual dataset preferences for extended models to be averaged out upon combination suggests that these preferences may be driven by dataset-specific systematic effects or statistical fluctuations, rather than a true signal of new physics. The statistical power of the combined datasets appears to dilute these individual tendencies, reinforcing the status of $\Lambda$CDM as the most efficient and sufficient description of the current cosmological data compendium. \begin{figure}[htbp] \centering \includegraphics[width=\textwidth]{figs/model_comparison_single_sorted_by_dkl.pdf} \caption{Heatmap of the log-posterior model probabilities, $\log \Prob(\model_i|\data)$, for each cosmological model ($x$-axis) evaluated against individual datasets ($y$-axis). Bluer colours indicate stronger statistical support for a model given the data. Comparison should only be made horizontally across models for a fixed dataset, as the sum of evidences in~\Cref{eq:model_prob} is taken over all models for that specific dataset. The final column (in yellow) shows the normalising factor $\log\left(\sum_j \evidence_j\right)$, the logarithm of the denominator of~\Cref{eq:model_prob}. The raw log-evidence for any model-dataset combination can be recovered by adding the $\log \Prob(\model_i|\data)$ value in that cell to the normalising factor of that row. The results show that while different datasets favour different model extensions, the base $\Lambda$CDM model emerges as the most consistently well-performing model across all individual datasets (overall blue).} \label{fig:model_comp_single} \end{figure} \begin{figure}[p] \vspace{-1cm} \centering \includegraphics[width=\textwidth]{figs/model_comparison_combo_sorted_by_dkl_part1.pdf} \caption{Same as \Cref{fig:model_comp_single}, but for combinations of datasets. The final column (in yellow) shows the normalising factor $\log\left(\sum_j \evidence_j\right)$, the logarithm of the denominator of~\Cref{eq:model_prob}, allowing recovery of raw log-evidence values by adding the value in each cell to the normalising factor of that row. The combination of multiple probes sharpens the model comparison, further strengthening the preference for $\Lambda$CDM and increasing the degree to which extended models are disfavoured. Part 1 of combined datasets.} \label{fig:model_comp_combined_part1} \end{figure} \begin{figure}[p] \centering \includegraphics[width=\textwidth]{figs/model_comparison_combo_sorted_by_dkl_part2.pdf} \caption{Same as \Cref{fig:model_comp_single}, but for combinations of datasets. The final column (in yellow) shows the normalising factor $\log\left(\sum_j \evidence_j\right)$, the logarithm of the denominator of~\Cref{eq:model_prob}, allowing recovery of raw log-evidence values by adding the value in each cell to the normalising factor of that row. The combination of multiple probes sharpens the model comparison, further strengthening the preference for $\Lambda$CDM and increasing the degree to which extended models are disfavoured. Part 2 of combined datasets.} \label{fig:model_comp_combined_part2} \end{figure} \subsection{Constraining Power of Models and Datasets} \label{ssec:constraining_power} The Kullback-Leibler divergence $\KL$ quantifies the information gain from prior to posterior after taking into account the data, providing a measure of how much the data constrains each model-dataset combination (see~\Cref{sssec:kl_divergence} for the theoretical details).
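Concretely, such information gains can be read directly off the released chains. The following \texttt{python} sketch is illustrative only: it assumes the \texttt{stats} interface and column names of recent \texttt{anesthetic} releases (e.g.\ \texttt{logZ} and \texttt{D\_KL}), which should be checked against the installed version.

\noindent\framebox[\linewidth][l]{\parbox{0.95\linewidth}{\ttfamily\small from unimpeded.database import DatabaseExplorer\\ \\ dbe = DatabaseExplorer()\\ samples = dbe.download\_samples('ns', 'lcdm', 'planck\_2018\_plik')\\ \\ \# Summary statistics: one row per draw of the sampling uncertainty\\ stats = samples.stats(nsamples=1000)\\ print(stats['logZ'].mean(), stats['logZ'].std())~~\# evidence\\ print(stats['D\_KL'].mean(), stats['D\_KL'].std())~~\# KL divergence }}

Looping such a call over the grid of models and datasets provides the ingredients for heatmaps of the kind shown below.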
\Cref{fig:dkl_single} presents a heatmap of $\KL$ for each of the eight models tested against 10 individual datasets. \Cref{fig:dkl_combo_part1,fig:dkl_combo_part2} show a similar analysis but for various combinations of datasets. Higher values of $\KL$ indicate that the dataset provides stronger constraints on the model parameters, representing greater information gain from the prior to the posterior. The datasets ($y$-axis) are sorted in descending order by the model posterior $\Prob(\model_i|\data)$-weighted average $\KL$ (\Cref{eq:model_weighted_kl}), thereby ranking each dataset by its constraining power, weighted by the posterior probability of each model. Similarly, the models along the $x$-axis are sorted in descending order according to their $\KL$ values for the Planck with CMB lensing dataset. \begin{figure}[htbp] \centering \includegraphics[width=\textwidth]{figs/dkl_single_sorted_by_dkl.pdf} \caption{This heatmap illustrates the Kullback-Leibler divergence ($\KL$) for each dataset ($y$-axis) and model ($x$-axis) combination, with higher values (bluer colours) indicating a greater overall constraint. Datasets are sorted vertically by their model-posterior-weighted average $\langle \KL \rangle_{\Prob(\model)}$ (\Cref{eq:model_weighted_kl}), while models are sorted horizontally by their $\KL$ from the Planck with CMB lensing dataset. A prominent feature is the strong vertical gradient, showing that $\KL$ varies significantly among datasets but remains relatively constant across models for a given dataset. This indicates that the information gain is predominantly determined by the statistical power of the observational probe, with more constraining, information-rich datasets naturally yielding higher $\KL$ values.} \label{fig:dkl_single} \end{figure} \begin{figure}[p] \centering \includegraphics[width=\textwidth]{figs/dkl_combo_sorted_by_dkl_part1.pdf} \caption{Same as \Cref{fig:dkl_single}, but for combinations of datasets ($y$-axis). Combined datasets yield substantially higher $\KL$ values compared to individual datasets, reflecting the enhanced constraining power from multiple complementary observational probes. The strong vertical gradient persists, with $\KL$ varying significantly among dataset combinations but remaining relatively constant across models for a given combination. Part 1 of combined datasets.} \label{fig:dkl_combo_part1} \end{figure} \begin{figure}[p] \centering \includegraphics[width=\textwidth]{figs/dkl_combo_sorted_by_dkl_part2.pdf} \caption{Same as \Cref{fig:dkl_single}, but for combinations of datasets ($y$-axis). Combined datasets yield substantially higher $\KL$ values compared to individual datasets, reflecting the enhanced constraining power from multiple complementary observational probes. The strong vertical gradient persists, with $\KL$ varying significantly among dataset combinations but remaining relatively constant across models for a given combination. Part 2 of combined datasets.} \label{fig:dkl_combo_part2} \end{figure} \subsection{Tension Quantification} \label{ssec:tension_quantification_results} To systematically quantify the consistency between the various cosmological datasets employed in this work, we utilised the tension statistics calculator implemented in the \texttt{unimpeded} package and the nested sampling chains it offers. We performed a comprehensive tension analysis across 31 pairwise dataset combinations for each of the 8 cosmological models under consideration. 
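Schematically, this grid can be assembled by looping the \texttt{tension\_calculator} interface of \Cref{ssec:tension_calculator} over models and dataset pairs. The snippet below is a sketch rather than the production pipeline: the subset of models and pairs is arbitrary, and the \texttt{tension} column is the Gaussian-equivalent significance described in \Cref{ssec:tension_calculator}.

\noindent\framebox[\linewidth][l]{\parbox{0.95\linewidth}{\ttfamily\small from unimpeded.tension import tension\_calculator\\ \\ models = ['lcdm', 'klcdm', 'wlcdm']~~\# illustrative subset\\ pairs = [('planck\_2018\_CamSpec', 'des\_y1.joint'),\\ ~~~~~~~~~('planck\_2018\_CamSpec', 'H0.riess2020Mb')]\\ \\ results = \{\}\\ for model in models:\\ ~~~~for dataA, dataB in pairs:\\ ~~~~~~~~ts = tension\_calculator(method='ns', model=model,\\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~datasetA=dataA, datasetB=dataB,\\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~nsamples=1000)\\ ~~~~~~~~\# mean Gaussian-equivalent significance over the draws\\ ~~~~~~~~results[(model, dataA, dataB)] = ts['tension'].mean() }}

Each call reuses the pre-computed chains for $\data_A$, $\data_B$ and $\data_{AB}$, so the full grid can be evaluated quickly on a laptop.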
For each pair, we computed the tension statistics discussed in~\cref{ssec:tension_quant_theory}, including the $p$-value significance $\sigma$ (\Cref{fig:tension_heatmap}), Bayesian Model Dimensionality ($d_G$) (\Cref{fig:tension_d_G}), Information Ratio (\Cref{fig:tension_I}), $R$ statistic (\Cref{fig:tension_logR}), and Suspiciousness (\Cref{fig:tension_logS}), providing a comprehensive view of the statistical agreement between datasets. For each of the tension statistics, the dataset pairs ($y$-axis) are ranked in ascending or descending order of their model-posterior-weighted average (weighted by $\Prob(\model_i|\data)$; \Cref{eq:model_weighted_average}). The results of this extensive analysis are summarised in~\Cref{fig:tension_heatmap}, which presents the tension significance expressed as the equivalent Gaussian sigma ($\sigma$) of the $p$-value (see~\Cref{ssec:tension_quant_theory} for the theory and equations). A crucial feature of this representation is that the numerical $\sigma$ values are directly comparable both across rows and down columns, unlike the model comparison heatmap in~\Cref{ssec:model_comparison_results}. Each of the five heatmaps is sorted to bring the most concerning dataset combinations to the top, providing an immediate visual guide to potential tensions. For the $p$-value significance in~\Cref{fig:tension_heatmap}, rows are sorted in descending order of their average $\sigma$ across all models. The subsequent heatmaps for the $\log R$, $\log I$, and $\log S$ statistics (\Cref{fig:tension_logR,fig:tension_I,fig:tension_logS}) are sorted in ascending order to place the most negative values, indicating strong tension, at the top. The Bayesian model dimensionality in~\Cref{fig:tension_d_G} is also sorted in ascending order. We employ red highlighting to flag values that cross specific thresholds of concern. For the $p$-value, we highlight $\sigma > 2.88$, a threshold calculated by \Cref{eq:sigma_threshold_corrected} in \Cref{sssec:look_elsewhere_effect} that accounts for the look-elsewhere effect across our 248 analyses (8 models $\times$ 31 dataset pairs). This threshold is not arbitrary: if there were no genuine tensions, we would expect exactly one result to reach $\sigma = 2.88$ purely by chance. \Cref{fig:tension_heatmap} shows 14 dataset-model combinations with $\sigma > 2.88$, significantly more than the single false positive expected under the null hypothesis. Rather than correcting individual $p$-values or $\sigma$ values (which would change as the grid expands), we instead apply this single threshold, ensuring that the quoted $\sigma$ values remain directly interpretable independently of the grid size. For the other statistics, red flags indicate $\log R < 0$ and $\log S < 0$, signalling dataset inconsistency and direct likelihood conflict, respectively. An analysis of the $p$-value significance in~\Cref{fig:tension_heatmap} immediately identifies the well-known tensions in cosmology. The comparisons of DES vs Planck ($\sigma=3.57$ in $\Lambda$CDM) and SH0ES vs Planck ($\sigma=3.27$ in $\Lambda$CDM) exhibit the highest significance, exceeding our $\sigma > 2.88$ threshold. Other comparisons involving these datasets, such as DES vs CamSpec and SH0ES vs CamSpec, also show tension in $\Lambda$CDM ($\sigma=3.19$ and $\sigma=2.84$, respectively).
The results demonstrate model dependence; for instance, the DES vs Planck tension is alleviated in all model extensions (e.g., dropping to $\sigma=1.90$ in $w$CDM), whereas the SH0ES vs Planck tension remains above $2\sigma$ in most models, only falling below this mark for the $w_0w_a$CDM and $\Omega_k\Lambda$CDM models. This suggests that the physics introduced in the extended models is more effective at resolving the $S_8$ tension than the Hubble tension. We note that the DES vs Planck tension is expected to relax when we extend the grid to include the DES Year 3 (Y3) data release. The suite of five statistics provides a far more nuanced picture than the $p$-value alone, revealing crucial differences in the nature of these tensions. The rankings of the most problematic dataset pairs are broadly consistent across the $p$-value, Information Ratio (\Cref{fig:tension_I}), and Suspiciousness (\Cref{fig:tension_logS}) heatmaps. For example, DES vs Planck and SH0ES vs Planck comparisons populate the top rows of all three, showing highly negative values for $\log I$ and $\log S$ (e.g., for DES vs Planck in $\Lambda$CDM, $\log I = -3.36$ and $\log S = -4.67$), confirming a genuine conflict between their likelihoods and minimal posterior overlap. However, a stark disagreement emerges when comparing these to the $R$ statistic (\Cref{fig:tension_logR}). For SH0ES vs Planck, while $\log S$ is strongly negative ($-4.19$ in $\Lambda$CDM), $\log R$ is positive ($+1.19$), indicating concordance. This discrepancy arises because the Suspiciousness is prior-independent, whereas the $R$ statistic is not. The positive $\log R$ signifies that despite the likelihood conflict, the combined posterior is still substantially more constraining than the prior, a common feature in high-dimensional parameter spaces. This highlights the value of using the prior-independent Suspiciousness to isolate direct data conflict. A multi-metric analysis allows a deeper physical interpretation of the tensions. The Hubble tension (SH0ES vs CMB comparisons) is characterised by high $\sigma$, negative $\log I$ and $\log S$, but a low Bayesian dimensionality (e.g., $d_G = 1.36$ for SH0ES vs Planck in $\Lambda$CDM, see~\Cref{fig:tension_d_G}). This confirms that the conflict is sharp but concentrated in a very small number of parameter dimensions, principally $H_0$. In stark contrast, the $S_8$ tension (DES vs CMB comparisons) appears as a more systemic disagreement. For DES vs Planck in $\Lambda$CDM, not only are $\sigma$, $\log I$ and $\log S$ all indicative of tension, but the dimensionality is very high ($d_G = 6.62$). This indicates that the datasets disagree across a wide range of parameter directions, representing a more fundamental inconsistency within the $\Lambda$CDM framework. The fact that this high-dimensional tension is largely resolved in extended models reinforces the interpretation that it may be a signature of new physics. In summary, this comprehensive five-statistic analysis provides a detailed and robust characterisation of the consistency landscape. We find that relying on a single metric like the $p$-value can be misleading. The combined view confirms that the DES vs CMB and SH0ES vs CMB tensions are the most significant statistical conflicts in the data, but their natures are profoundly different. 
The $S_8$ tension is a high-dimensional problem that is effectively resolved by allowing for new physical degrees of freedom, whereas the Hubble tension is a sharp, low-dimensional conflict that persists across models and is only flagged as a severe issue by prior-independent metrics like Suspiciousness. This nuanced understanding, gained by synthesising information from multiple complementary statistics, is crucial for guiding future model building and determining which dataset combinations can be reliably used for joint cosmological analyses. Caution should be exercised when combining datasets in tension. Conversely, pairs at the bottom of the rankings, such as BICEP vs Pantheon, show excellent agreement across all five metrics ($\sigma \approx 0$, $\log R > 0$) and can be combined with confidence. Our findings are consistent with the curvature tension analysis of~\cite{Handley2021PRD}, which reported similar moderate tensions between Planck 2018 and CMB lensing ($\sigma = 2.49 \pm 0.07$) and between Planck 2018 and BAO ($\sigma = 3.03 \pm 0.06$) in the context of curved $\Omega_K\Lambda$CDM cosmologies. However, our model comparison results show lower Bayes factors (1.85 log units for $\Omega_k\Lambda$CDM vs $\Lambda$CDM compared to 4 log units in that work), which can be attributed to the deliberately wider priors adopted in our analysis using the Cobaya defaults. These wider priors provide greater flexibility for importance reweighting if tighter priors are desired in future analyses. Whilst that work focused specifically on the curvature parameter $\Omega_K$, our systematic analysis across eight model extensions and 31 dataset pairs provides a broader view of the tension landscape, demonstrating that the methodology is robust and the tensions persist across multiple cosmological frameworks. \begin{figure}[p] \vspace{-3cm} \centering \includegraphics[width=\textwidth]{figs/tension_stats_p_sorted_by_p.pdf} \caption{A heatmap quantifying the tension between 31 pairwise dataset combinations ($y$-axis) across 8 cosmological models ($x$-axis). The tension is expressed as the significance in equivalent Gaussian standard deviations ($\sigma$), derived from the $p$-value, allowing for direct comparison across the grid. The dataset pairs are sorted vertically in descending order of their average tension across all models (\Cref{eq:model_weighted_sigma}), placing the most discordant combinations at the top. Values with $\sigma > 2.88$, highlighted in red, exceed the significance threshold that accounts for the look-elsewhere effect across all 248 analyses performed. This threshold is defined such that if no genuine tensions existed, only one false positive would be expected by chance (see~\Cref{sssec:look_elsewhere_effect}). We observe 14 such instances.} \label{fig:tension_heatmap} \end{figure} \begin{figure}[p] \vspace{-3cm} \centering \includegraphics[width=\textwidth]{figs/tension_stats_d_G_sorted_by_d_G.pdf} \caption{A heatmap quantifying the Bayesian Model Dimensionality ($d_G$) for 31 pairwise dataset combinations ($y$-axis) across 8 cosmological models ($x$-axis). $d_G$ measures the effective number of constrained parameters in the shared parameter space of two datasets (see~\Cref{sssec:bayesian_model_dimensionality}), allowing for direct comparison across the grid. The dataset pairs are sorted vertically in ascending order of their average dimensionality across all models. 
This metric distinguishes between sharp, low-dimensional conflicts and broader, systemic disagreements.} \label{fig:tension_d_G} \end{figure} \begin{figure}[p] \vspace{-3cm} \centering \includegraphics[width=\textwidth]{figs/tension_stats_I_sorted_by_I.pdf} \caption{A heatmap quantifying the tension using the Information Ratio ($I$) for 31 pairwise dataset combinations ($y$-axis) across 8 cosmological models ($x$-axis). $I$ quantifies tension by comparing the $\KL$ of the combined posterior relative to the individual posteriors (see~\Cref{sssec:information_ratio}), allowing for direct comparison across the grid. The dataset pairs are sorted vertically in ascending order of their average $\log I$ across all models, placing the combinations with the most negative $\log I$ values, and thus the strongest tension, at the top. A negative $\log I$ ($\log I < 0$) signifies that the volume of the combined posterior is substantially smaller than would be expected from statistically consistent datasets, pointing to minimal overlap between their individual parameter constraints. This metric therefore provides an intuitive, volume-based measure of statistical surprise.} \label{fig:tension_I} \end{figure} \begin{figure}[p] \vspace{-3.2cm} \centering \includegraphics[width=\textwidth]{figs/tension_stats_logR_sorted_by_logR.pdf} \caption{A heatmap quantifying inter-dataset consistency using the logarithmic $R$ statistic ($\log R$) for 31 pairwise dataset combinations ($y$-axis) across 8 cosmological models ($x$-axis). The $R$ statistic is a prior-dependent measure of consistency that compares the joint Bayesian evidence of two datasets to the product of their individual evidences (see~\Cref{sssec:r_statistic}), and is interpreted relative to unity. The dataset pairs are sorted vertically in ascending order of their average $\log R$ across all models, placing the most inconsistent combinations at the top. Values of $R > 1$ ($\log R > 0$) indicate concordance, where each dataset strengthens the probability of the other. Conversely, values with $\log R < 0$ ($R < 1$), highlighted in red, indicate inconsistency, signifying that the joint probability of the data is lower than would be expected if the datasets were independent under the assumed model.} \label{fig:tension_logR} \end{figure} \begin{figure}[p] \vspace{-3cm} \centering \includegraphics[width=\textwidth]{figs/tension_stats_logS_sorted_by_logS.pdf} \caption{A heatmap quantifying tension using the logarithmic Suspiciousness ($\log S$) for 31 pairwise dataset combinations ($y$-axis) across 8 cosmological models ($x$-axis). $S$ is a prior-independent metric that quantifies the statistical conflict between the likelihoods of two datasets (see~\Cref{sssec:suspiciousness}), allowing for direct comparison across the grid. The dataset pairs are sorted vertically in ascending order of their average $\log S$ (\Cref{eq:model_weighted_average}) across all models, placing the combinations with the most negative $\log S$ values, and thus the strongest tensions, at the top. Values with $\log S \ge 0$ indicate agreement, while values with $\log S < 0$, highlighted in red, indicate tension, with more negative values signifying a stronger conflict between the datasets.} \label{fig:tension_logS} \end{figure} \clearpage % \section{Discussion} % \label{sec:discussion} % The release of \texttt{unimpeded} and its associated data grid represents a significant step towards democratizing advanced cosmological analysis and systematically addressing the growing number of tensions in the field.
% The ability to readily access and analyse nested sampling chains is transformative. Previously, generating such chains for even a single model-dataset combination required considerable computational resources and expertise. \texttt{unimpeded} allows researchers to bypass this costly step for a wide range of standard scenarios, freeing up resources for novel theoretical work or analysis of more exotic models. This significantly enhances the community's capacity to perform robust Bayesian model comparison, moving beyond simple parameter goodness-of-fit to quantitatively assess the relative probabilities of competing cosmological paradigms. % The systematic application of tension statistics across the grid, as exemplified by the conceptual $\log \mathcal{R}$ heatmap, provides a powerful diagnostic tool. It allows for a global view of (in)consistencies, helping to distinguish isolated issues from broader patterns. For instance, if a particular dataset consistently shows tension with multiple other independent datasets across various models, it might point to unaddressed systematics within that dataset. Conversely, if a specific model consistently alleviates tensions seen in simpler models, it could lend support to that new physics scenario. % This work directly fulfills key aims of our DiRAC grant, which proposed the creation of a publicly accessible grid of nested sampling chains to facilitate model comparison and tension quantification. The \texttt{unimpeded} framework is the realisation of this goal, providing an updated and extended analogue to the PLA. The ongoing population and analysis of this grid will continue to inform our understanding of systematic errors, the limitations of the $\Lambda$CDM model, and potential avenues for new physics. By making both the tools and the data products openly available, we aim to foster collaborative efforts and accelerate progress in resolving the current cosmological tensions. % The insights gained from such systematic studies are crucial. They can guide observational strategies for future surveys, inform the development of new theoretical models, and ultimately help us to build a more complete and consistent picture of the Universe. % \section{Conclusion} % \label{sec:conclusion} % In this paper, we have introduced \texttt{unimpeded}, a new Python library and data repository designed to facilitate advanced Bayesian inference in cosmology. The key contributions are: % \begin{enumerate} % \item The public release of \texttt{unimpeded}, providing easy access to a comprehensive grid of pre-computed nested sampling and MCMC chains for numerous cosmological models and datasets. % \item The hosting of these data products on Zenodo, ensuring open access and long-term availability for the scientific community. % \item A demonstration of how \texttt{unimpeded}, in conjunction with tools like \texttt{anesthetic}, enables systematic tension quantification across this grid, providing a global view of dataset consistency. % \end{enumerate} % \texttt{unimpeded} significantly lowers the computational barrier for researchers wishing to perform Bayesian model comparison and rigorously assess inter-dataset tensions. This work serves as an update to our DiRAC ``Case for Support,'' realising its objective to create a next-generation analysis resource for the cosmology community. 
% \section{Future Work} \section{Conclusions} \label{sec:conclusions} In this work, we have introduced \texttt{unimpeded}, a comprehensive and publicly available resource for Bayesian cosmological analysis. We have performed a systematic nested sampling analysis of eight cosmological models, from the base $\Lambda$CDM paradigm to seven well-motivated extensions, constrained by a suite of 39 individual and combined datasets. The primary data product of this analysis is an extensive repository of MCMC and nested sampling chains, hosted on Zenodo, which we provide to the community to facilitate reproducible and extensible cosmological research. The use of deliberately wide priors ensures that these chains are a versatile resource, suitable for importance reweighting and a wide range of future studies. Our analysis yields two principal scientific conclusions. First, through a comprehensive model comparison, we find that whilst individual datasets show varied preferences for model extensions, the base $\Lambda$CDM model is most frequently preferred in combined analyses, with the general trend suggesting that evidence for new physics is diluted when probes are combined. This reinforces the predictive power and economy of the standard cosmological model. Second, by employing five complementary tension statistics, we systematically quantified the discordances between key datasets. We find the most significant tensions to be between SH0ES and Planck ($\sigma=3.27$) and between DES(Y1) and Planck ($\sigma=3.57$), within $\Lambda$CDM. Our multi-metric approach reveals that these tensions have profoundly different natures: the $S_8$ tension between DES and Planck is a high-dimensional disagreement ($d_G=6.62$) that is mildly alleviated in models with a varying dark energy equation of state, whereas the Hubble tension between SH0ES and Planck is a sharp, low-dimensional conflict ($d_G=1.36$) that persists across almost all model extensions considered. The \texttt{unimpeded} resource provides a powerful platform for future investigations. The upgrade to DES Year 3 data is expected to clarify the status of the $S_8$ tension, and our framework provides the ideal foundation for a rapid and consistent analysis of this and other forthcoming datasets. Caution should be exercised when combining datasets in tension. By providing a standardised and accessible suite of Bayesian analysis products, we hope to accelerate progress in understanding the remaining tensions within the cosmological landscape and to robustly test the limits of the $\Lambda$CDM model. \appendix \acknowledgments This work was performed using the Cambridge Service for Data Driven Discovery (CSD3), part of which is operated by the University of Cambridge Research Computing on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The DiRAC component of CSD3 was funded by BEIS capital funding via STFC capital grants ST/P002307/1 and ST/R002452/1 and STFC operations grant ST/R00689X/1. DiRAC is part of the National e-Infrastructure. W.H. acknowledges support from a Royal Society University Research Fellowship. % Bibliography % [A] Recommended: using JHEP.bst file \bibliographystyle{JHEP} \bibliography{biblio} % Ensure your biblio.bib file is in the same directory %% or %% [B] Manual formatting (see below) %% (i) We suggest to always provide author, title and journal data or doi: %% in short all the informations that clearly identify a document. 
%% (iiii) please avoid comments such as ``For a review'', ''For some examples", %% ``and references therein'' or move them in the text. In general, please leave only references in the bibliography and move all %% accessory text in footnotes. %% (iii) Also, please have only one work for each \bibitem. % \begin{thebibliography}{99} % \bibitem{a} % Author, % \emph{Title}, % \emph{J. Abbrev.} {\bf vol} (year) pg. % \bibitem{b} % Author, % \emph{Title}, % arxiv:1234.5678. % \bibitem{c} % Author, % \emph{Title}, % Publisher (year). % \end{thebibliography} \end{document} ``` 4. **Bibliographic Information:** ```bbl ``` 5. **Author Information:** - Lead Author: {'name': 'Dily Duan Yi Ong'} - Full Authors List: ```yaml Dily Ong: phd: start: 2023-10-01 supervisors: - Will Handley thesis: null original_image: images/originals/dily_ong.jpg image: /assets/group/images/dily_ong.jpg Will Handley: pi: start: 2020-10-01 thesis: null postdoc: start: 2016-10-01 end: 2020-10-01 thesis: null phd: start: 2012-10-01 end: 2016-09-30 supervisors: - Anthony Lasenby - Mike Hobson thesis: 'Kinetic initial conditions for inflation: theory, observation and methods' original_image: images/originals/will_handley.jpeg image: /assets/group/images/will_handley.jpg links: Webpage: https://willhandley.co.uk ``` This YAML file provides a concise snapshot of an academic research group. It lists members by name along with their academic roles—ranging from Part III and summer projects to MPhil, PhD, and postdoctoral positions—with corresponding dates, thesis topics, and supervisor details. Supplementary metadata includes image paths and links to personal or departmental webpages. A dedicated "coi" section profiles senior researchers, highlighting the group’s collaborative mentoring network and career trajectories in cosmology, astrophysics, and Bayesian data analysis. ==================================================================================== Final Output Instructions ==================================================================================== - Combine all data sources to create a seamless, engaging narrative. - Follow the exact Markdown output format provided at the top. - Do not include any extra explanation, commentary, or wrapping beyond the specified Markdown. - Validate that every bibliographic reference with a DOI or arXiv identifier is converted into a Markdown link as per the examples. - Validate that every Markdown author link corresponds to a link in the author information block. - Before finalizing, confirm that no LaTeX citation commands or other undesired formatting remain. - Before finalizing, confirm that the link to the paper itself [2511.04661](https://arxiv.org/abs/2511.04661) is featured in the first sentence. Generate only the final Markdown output that meets all these requirements. {% endraw %}