{% raw %}
Title: Create a Markdown Blog Post Integrating Research Details and a Featured Paper
====================================================================================
This task involves generating a Markdown file (ready for a GitHub-served Jekyll site) that integrates our research details with a featured research paper. The output must follow the exact format and conventions described below.
====================================================================================
Output Format (Markdown):
------------------------------------------------------------------------------------
---
layout: post
title: "Non-Gaussian Likelihoods for Type Ia Supernovae Cosmology: Implications
for Dark Energy and $H_0$"
date: 2023-12-04
categories: papers
---



Content generated by [gemini-2.5-pro](https://deepmind.google/technologies/gemini/) using [this prompt](/prompts/content/2023-12-04-2312.02075.txt).
Image generated by [imagen-3.0-generate-002](https://deepmind.google/technologies/gemini/) using [this prompt](/prompts/images/2023-12-04-2312.02075.txt).
------------------------------------------------------------------------------------
====================================================================================
Please adhere strictly to the following instructions:
====================================================================================
Section 1: Content Creation Instructions
====================================================================================
1. **Generate the Page Body:**
- Write a well-composed, engaging narrative that is suitable for a scholarly audience interested in advanced AI and astrophysics.
- Ensure the narrative is original, reflecting the tone, style, and content of the "Homepage Content" block (provided below) without reusing its text.
- Use bullet points, subheadings, or other formatting to enhance readability.
2. **Highlight Key Research Details:**
- Emphasize the contributions and impact of the paper, focusing on its methodology, significance, and context within current research.
- Specifically highlight the lead author (Toby Lovick). When referencing any author, use Markdown links from the Author Information block (prefer academic or GitHub links over social media links).
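Author links of this kind can be sketched with a small helper; note that the field names `name` and `url` here are assumptions for illustration only, since the actual Author Information block defines its own schema:

```python
def author_link(author: dict) -> str:
    """Render an author profile as a Markdown link.

    Hypothetical sketch: assumes a 'name' field and an optional 'url'
    field; the real Author Information block may use different keys.
    """
    url = author.get("url")
    # Fall back to plain text when no profile link is available.
    return f"[{author['name']}]({url})" if url else author["name"]
```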
3. **Integrate Data from Multiple Sources:**
- Seamlessly weave information from the following:
- **Paper Metadata (YAML):** Essential details including the title and authors.
- **Paper Source (TeX):** Technical content from the paper.
- **Bibliographic Information (bbl):** Bibliographic references for the works the paper cites.
- **Author Information (YAML):** Profile details for constructing Markdown links.
- Merge insights from the Paper Metadata, TeX source, Bibliographic Information, and Author Information blocks into a coherent narrative—do not treat these as separate or isolated pieces.
- Insert the generated narrative between the opening and closing HTML comment markers shown in the output format above.
4. **Generate Bibliographic References:**
- Review the Bibliographic Information block carefully.
- For each reference that includes a DOI or arXiv identifier:
- For DOIs, generate a link formatted as:
[10.1234/xyz](https://doi.org/10.1234/xyz)
- For arXiv entries, generate a link formatted as:
[2103.12345](https://arxiv.org/abs/2103.12345)
- **Important:** Do not use any LaTeX citation commands (e.g., `\cite{...}`). Every reference must be rendered directly as a Markdown link. For example, instead of `\cite{mycitation}`, output `[mycitation](https://doi.org/mycitation)`.
- **Incorrect:** `\cite{10.1234/xyz}`
- **Correct:** `[10.1234/xyz](https://doi.org/10.1234/xyz)`
- Ensure that at least three (3) of the most relevant references are naturally integrated into the narrative.
- Ensure that the link to the Featured paper [2312.02075](https://arxiv.org/abs/2312.02075) is included in the first sentence.
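The two link formats above can be captured in a small helper, shown here purely as an illustrative sketch (the function name and the arXiv-identifier pattern are assumptions, not part of the required output):

```python
import re


def reference_link(identifier: str) -> str:
    """Render a DOI or arXiv identifier as a Markdown link.

    Illustrative sketch only: assumes modern arXiv IDs of the form
    NNNN.NNNNN (optionally versioned) and DOIs with a '10.' prefix.
    """
    # arXiv identifiers, e.g. 2103.12345 or 2312.02075v1
    if re.fullmatch(r"\d{4}\.\d{4,5}(v\d+)?", identifier):
        return f"[{identifier}](https://arxiv.org/abs/{identifier})"
    # DOIs always begin with the '10.' registrant prefix, e.g. 10.1234/xyz
    if identifier.startswith("10."):
        return f"[{identifier}](https://doi.org/{identifier})"
    raise ValueError(f"Unrecognised identifier: {identifier}")
```

For instance, `reference_link("2312.02075")` yields the featured-paper link required in the first sentence.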
5. **Final Formatting Requirements:**
- The output must be plain Markdown; do not wrap it in Markdown code fences.
- Preserve the YAML front matter exactly as provided.
====================================================================================
Section 2: Provided Data for Integration
====================================================================================
1. **Homepage Content (Tone and Style Reference):**
```markdown
---
layout: home
---

The Handley Research Group stands at the forefront of cosmological exploration, pioneering novel approaches that fuse fundamental physics with the transformative power of artificial intelligence. We are a dynamic team of researchers, including PhD students, postdoctoral fellows, and project students, based at the University of Cambridge. Our mission is to unravel the mysteries of the Universe, from its earliest moments to its present-day structure and ultimate fate. We tackle fundamental questions in cosmology and astrophysics, with a particular focus on leveraging advanced Bayesian statistical methods and AI to push the frontiers of scientific discovery. Our research spans a wide array of topics, including the [primordial Universe](https://arxiv.org/abs/1907.08524), [inflation](https://arxiv.org/abs/1807.06211), the nature of [dark energy](https://arxiv.org/abs/2503.08658) and [dark matter](https://arxiv.org/abs/2405.17548), [21-cm cosmology](https://arxiv.org/abs/2210.07409), the [Cosmic Microwave Background (CMB)](https://arxiv.org/abs/1807.06209), and [gravitational wave astrophysics](https://arxiv.org/abs/2411.17663).
### Our Research Approach: Innovation at the Intersection of Physics and AI
At The Handley Research Group, we develop and apply cutting-edge computational techniques to analyze complex astronomical datasets. Our work is characterized by a deep commitment to principled [Bayesian inference](https://arxiv.org/abs/2205.15570) and the innovative application of [artificial intelligence (AI) and machine learning (ML)](https://arxiv.org/abs/2504.10230).
**Key Research Themes:**
* **Cosmology:** We investigate the early Universe, including [quantum initial conditions for inflation](https://arxiv.org/abs/2002.07042) and the generation of [primordial power spectra](https://arxiv.org/abs/2112.07547). We explore the enigmatic nature of [dark energy, using methods like non-parametric reconstructions](https://arxiv.org/abs/2503.08658), and search for new insights into [dark matter](https://arxiv.org/abs/2405.17548). A significant portion of our efforts is dedicated to [21-cm cosmology](https://arxiv.org/abs/2104.04336), aiming to detect faint signals from the Cosmic Dawn and the Epoch of Reionization.
* **Gravitational Wave Astrophysics:** We develop methods for [analyzing gravitational wave signals](https://arxiv.org/abs/2411.17663), extracting information about extreme astrophysical events and fundamental physics.
* **Bayesian Methods & AI for Physical Sciences:** A core component of our research is the development of novel statistical and AI-driven methodologies. This includes advancing [nested sampling techniques](https://arxiv.org/abs/1506.00171) (e.g., [PolyChord](https://arxiv.org/abs/1506.00171), [dynamic nested sampling](https://arxiv.org/abs/1704.03459), and [accelerated nested sampling with $\beta$-flows](https://arxiv.org/abs/2411.17663)), creating powerful [simulation-based inference (SBI) frameworks](https://arxiv.org/abs/2504.10230), and employing [machine learning for tasks such as radiometer calibration](https://arxiv.org/abs/2504.16791), [cosmological emulation](https://arxiv.org/abs/2503.13263), and [mitigating radio frequency interference](https://arxiv.org/abs/2211.15448). We also explore the potential of [foundation models for scientific discovery](https://arxiv.org/abs/2401.00096).
**Technical Contributions:**
Our group has a strong track record of developing widely-used scientific software. Notable examples include:
* [**PolyChord**](https://arxiv.org/abs/1506.00171): A next-generation nested sampling algorithm for Bayesian computation.
* [**anesthetic**](https://arxiv.org/abs/1905.04768): A Python package for processing and visualizing nested sampling runs.
* [**GLOBALEMU**](https://arxiv.org/abs/2104.04336): An emulator for the sky-averaged 21-cm signal.
* [**maxsmooth**](https://arxiv.org/abs/2007.14970): A tool for rapid maximally smooth function fitting.
* [**margarine**](https://arxiv.org/abs/2205.12841): For marginal Bayesian statistics using normalizing flows and KDEs.
* [**fgivenx**](https://arxiv.org/abs/1908.01711): A package for functional posterior plotting.
* [**nestcheck**](https://arxiv.org/abs/1804.06406): Diagnostic tests for nested sampling calculations.
### Impact and Discoveries
Our research has led to significant advancements in cosmological data analysis and yielded new insights into the Universe. Key achievements include:
* Pioneering the development and application of advanced Bayesian inference tools, such as [PolyChord](https://arxiv.org/abs/1506.00171), which has become a cornerstone for cosmological parameter estimation and model comparison globally.
* Making significant contributions to the analysis of major cosmological datasets, including the [Planck mission](https://arxiv.org/abs/1807.06209), providing some of the tightest constraints on cosmological parameters and models of [inflation](https://arxiv.org/abs/1807.06211).
* Developing novel AI-driven approaches for astrophysical challenges, such as using [machine learning for radiometer calibration in 21-cm experiments](https://arxiv.org/abs/2504.16791) and [simulation-based inference for extracting cosmological information from galaxy clusters](https://arxiv.org/abs/2504.10230).
* Probing the nature of dark energy through innovative [non-parametric reconstructions of its equation of state](https://arxiv.org/abs/2503.08658) from combined datasets.
* Advancing our understanding of the early Universe through detailed studies of [21-cm signals from the Cosmic Dawn and Epoch of Reionization](https://arxiv.org/abs/2301.03298), including the development of sophisticated foreground modelling techniques and emulators like [GLOBALEMU](https://arxiv.org/abs/2104.04336).
* Developing new statistical methods for quantifying tensions between cosmological datasets ([Quantifying tensions in cosmological parameters: Interpreting the DES evidence ratio](https://arxiv.org/abs/1902.04029)) and for robust Bayesian model selection ([Bayesian model selection without evidences: application to the dark energy equation-of-state](https://arxiv.org/abs/1506.09024)).
* Exploring fundamental physics questions such as potential [parity violation in the Large-Scale Structure using machine learning](https://arxiv.org/abs/2410.16030).
### Charting the Future: AI-Powered Cosmological Discovery
The Handley Research Group is poised to lead a new era of cosmological analysis, driven by the explosive growth in data from next-generation observatories and transformative advances in artificial intelligence. Our future ambitions are centred on harnessing these capabilities to address the most pressing questions in fundamental physics.
**Strategic Research Pillars:**
* **Next-Generation Simulation-Based Inference (SBI):** We are developing advanced SBI frameworks to move beyond traditional likelihood-based analyses. This involves creating sophisticated codes for simulating [Cosmic Microwave Background (CMB)](https://arxiv.org/abs/1908.00906) and [Baryon Acoustic Oscillation (BAO)](https://arxiv.org/abs/1607.00270) datasets from surveys like DESI and 4MOST, incorporating realistic astrophysical effects and systematic uncertainties. Our AI initiatives in this area focus on developing and implementing cutting-edge SBI algorithms, particularly [neural ratio estimation (NRE) methods](https://arxiv.org/abs/2407.15478), to enable robust and scalable inference from these complex simulations.
* **Probing Fundamental Physics:** Our enhanced analytical toolkit will be deployed to test the standard cosmological model ($\Lambda$CDM) with unprecedented precision and to explore [extensions to Einstein's General Relativity](https://arxiv.org/abs/2006.03581). We aim to constrain a wide range of theoretical models, from modified gravity to the nature of [dark matter](https://arxiv.org/abs/2106.02056) and [dark energy](https://arxiv.org/abs/1701.08165). This includes leveraging data from upcoming [gravitational wave observatories](https://arxiv.org/abs/1803.10210) like LISA, alongside CMB and large-scale structure surveys from facilities such as Euclid and JWST.
* **Synergies with Particle Physics:** We will continue to strengthen the connection between cosmology and particle physics by expanding the [GAMBIT framework](https://arxiv.org/abs/2009.03286) to interface with our new SBI tools. This will facilitate joint analyses of cosmological and particle physics data, providing a holistic approach to understanding the Universe's fundamental constituents.
* **AI-Driven Theoretical Exploration:** We are pioneering the use of AI, including [large language models and symbolic computation](https://arxiv.org/abs/2401.00096), to automate and accelerate the process of theoretical model building and testing. This innovative approach will allow us to explore a broader landscape of physical theories and derive new constraints from diverse astrophysical datasets, such as those from GAIA.
Our overarching goal is to remain at the forefront of scientific discovery by integrating the latest AI advancements into every stage of our research, from theoretical modeling to data analysis and interpretation. We are excited by the prospect of using these powerful new tools to unlock the secrets of the cosmos.
Content generated by [gemini-2.5-pro-preview-05-06](https://deepmind.google/technologies/gemini/) using [this prompt](/prompts/content/index.txt).
Image generated by [imagen-3.0-generate-002](https://deepmind.google/technologies/gemini/) using [this prompt](/prompts/images/index.txt).
```
2. **Paper Metadata:**
```yaml
!!python/object/new:feedparser.util.FeedParserDict
dictitems:
  id: http://arxiv.org/abs/2312.02075v1
  guidislink: true
  link: http://arxiv.org/abs/2312.02075v1
  updated: '2023-12-04T17:39:47Z'
  updated_parsed: !!python/object/apply:time.struct_time
  - !!python/tuple
    - 2023
    - 12
    - 4
    - 17
    - 39
    - 47
    - 0
    - 338
    - 0
  - tm_zone: null
    tm_gmtoff: null
  published: '2023-12-04T17:39:47Z'
  published_parsed: !!python/object/apply:time.struct_time
  - !!python/tuple
    - 2023
    - 12
    - 4
    - 17
    - 39
    - 47
    - 0
    - 338
    - 0
  - tm_zone: null
    tm_gmtoff: null
  title: "Non-Gaussian Likelihoods for Type Ia Supernovae Cosmology: Implications\n\
    \ for Dark Energy and $H_0$"
  title_detail: !!python/object/new:feedparser.util.FeedParserDict
    dictitems:
      type: text/plain
      language: null
      base: ''
      value: "Non-Gaussian Likelihoods for Type Ia Supernovae Cosmology: Implications\n\
        \ for Dark Energy and $H_0$"
  summary: 'The latest improvements in the scale and calibration of Type Ia supernovae
    catalogues allow us to constrain the specific nature and evolution of dark
    energy through its effect on the expansion history of the universe. We present
    the results of Bayesian cosmological model comparison on the SNe~Ia catalogue
    Pantheon+, where Flat $\Lambda$CDM is preferred by the data over all other
    models and we find moderate evidence ($\Delta \log \mathcal{Z} \sim 2.5$) to
    reject a number of the alternate dark energy models. The effect of peculiar
    velocity corrections on model comparison is analysed, where we show that
    removing the peculiar velocity corrections results in a varying fit on
    non-$\Lambda$CDM parameters. As well as comparing cosmological models, the
    Bayesian methodology is extended to comparing the scatter model of the data,
    testing for non-gaussianity in the Pantheon+ Hubble residuals. We find that
    adding a scale parameter to the Pantheon+ covariances, or alternately using a
    multivariate Student''s t-distribution fits the data better than the fiducial
    analysis, producing a cosmology independent evidence increase of $\Delta \log
    \mathcal{Z} = 2.29 $ and $2.46$ respectively. This improved treatment of the
    scatter decreases the uncertainty in the constraint on the Hubble constant,
    finding $H_0 = 73.67 \pm 0.99 $ km s$^{-1}$ Mpc$^{-1}$, in $ 5.7 \sigma$
    tension with Planck. We also explore $M_B$ transition models as a potential
    solution for the Hubble tension, finding no evidence to support these models
    among the SNe data.'
  summary_detail: !!python/object/new:feedparser.util.FeedParserDict
    dictitems:
      type: text/plain
      language: null
      base: ''
      value: 'The latest improvements in the scale and calibration of Type Ia supernovae
        catalogues allow us to constrain the specific nature and evolution of dark
        energy through its effect on the expansion history of the universe. We present
        the results of Bayesian cosmological model comparison on the SNe~Ia catalogue
        Pantheon+, where Flat $\Lambda$CDM is preferred by the data over all other
        models and we find moderate evidence ($\Delta \log \mathcal{Z} \sim 2.5$) to
        reject a number of the alternate dark energy models. The effect of peculiar
        velocity corrections on model comparison is analysed, where we show that
        removing the peculiar velocity corrections results in a varying fit on
        non-$\Lambda$CDM parameters. As well as comparing cosmological models, the
        Bayesian methodology is extended to comparing the scatter model of the data,
        testing for non-gaussianity in the Pantheon+ Hubble residuals. We find that
        adding a scale parameter to the Pantheon+ covariances, or alternately using a
        multivariate Student''s t-distribution fits the data better than the fiducial
        analysis, producing a cosmology independent evidence increase of $\Delta \log
        \mathcal{Z} = 2.29 $ and $2.46$ respectively. This improved treatment of the
        scatter decreases the uncertainty in the constraint on the Hubble constant,
        finding $H_0 = 73.67 \pm 0.99 $ km s$^{-1}$ Mpc$^{-1}$, in $ 5.7 \sigma$
        tension with Planck. We also explore $M_B$ transition models as a potential
        solution for the Hubble tension, finding no evidence to support these models
        among the SNe data.'
  authors:
  - !!python/object/new:feedparser.util.FeedParserDict
    dictitems:
      name: Toby Lovick
  - !!python/object/new:feedparser.util.FeedParserDict
    dictitems:
      name: Suhail Dhawan
  - !!python/object/new:feedparser.util.FeedParserDict
    dictitems:
      name: Will Handley
  author_detail: !!python/object/new:feedparser.util.FeedParserDict
    dictitems:
      name: Will Handley
  author: Will Handley
  arxiv_comment: 13 pages, 13 figures
  links:
  - !!python/object/new:feedparser.util.FeedParserDict
    dictitems:
      href: http://arxiv.org/abs/2312.02075v1
      rel: alternate
      type: text/html
  - !!python/object/new:feedparser.util.FeedParserDict
    dictitems:
      title: pdf
      href: http://arxiv.org/pdf/2312.02075v1
      rel: related
      type: application/pdf
  arxiv_primary_category:
    term: astro-ph.CO
    scheme: http://arxiv.org/schemas/atom
  tags:
  - !!python/object/new:feedparser.util.FeedParserDict
    dictitems:
      term: astro-ph.CO
      scheme: http://arxiv.org/schemas/atom
      label: null
```
3. **Paper Source (TeX):**
```tex
\documentclass[fleqn,usenatbib]{mnras}
% MNRAS is set in Times font. If you don't have this installed (most LaTeX
% installations will be fine) or prefer the old Computer Modern fonts, comment
% out the following line
\usepackage{newtxmath,newtxtext}
\usepackage{siunitx}
\usepackage{pythonhighlight}
\usepackage[noabbrev]{cleveref}
% Depending on your LaTeX fonts installation, you might get better results with one of these:
%\usepackage{mathptmx}
%\usepackage{txfonts}
% Use vector fonts, so it zooms properly in on-screen viewing software
% Don't change these lines unless you know what you are doing
\usepackage[T1]{fontenc}
% Allow "Thomas van Noord" and "Simon de Laguarde" and alike to be sorted by "N" and "L" etc. in the bibliography.
% Write the name in the bibliography as "\VAN{Noord}{Van}{van} Noord, Thomas"
\DeclareRobustCommand{\VAN}[3]{#2}
\let\VANthebibliography\thebibliography
\def\thebibliography{\DeclareRobustCommand{\VAN}[3]{##3}\VANthebibliography}
%%%%% AUTHORS - PLACE YOUR OWN PACKAGES HERE %%%%%
% Only include extra packages if you really need them. Common packages are:
\usepackage{graphicx} % Including figure files
%\usepackage{amsmath} % Advanced maths commands
% \usepackage{amssymb} % Extra maths symbols
\usepackage{bm}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%% AUTHORS - PLACE YOUR OWN COMMANDS HERE %%%%%
% Please keep new commands to a minimum, and use \newcommand not \def to avoid
% overwriting existing commands. Example:
%\newcommand{\pcm}{\,cm$^{-2}$} % per cm-squared
\newcommand{\err}[3]{$#1^{+ #2}_{- #3}$}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newcommand{\sdt}[1]{{\color[rgb]{0.9,0.0,0.0} {\textbf{#1}}}}
%%%%%%%%%%%%%%%%%%% TITLE PAGE %%%%%%%%%%%%%%%%%%%
% Title of the paper, and the short title which is used in the headers.
% Keep the title short and informative.
\title[Dark energy model comparison]{Non-Gaussian Likelihoods for Type Ia Supernovae Cosmology: Implications for Dark Energy and $H_0$}
% The list of authors, and the short list which is used in the headers.
% If you need two or more lines of authors, add an extra line using \newauthor
\author[T. Lovick et al.]{
Toby Lovick,$^{1}$
Suhail Dhawan,$^{1}$
Will Handley $^{1,2}$
\\
% List of institutions
$^{1}$ Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK \\
$^{2}$ Astrophysics Group, Cavendish Laboratory University of Cambridge,
JJ Thomson Avenue, Cambridge, CB3 0HE, UK
}
% These dates will be filled out by the publisher
\date{4 December 2023}
% Enter the current year, for the copyright statements etc.
\pubyear{0000}
% Don't change these lines
\begin{document}
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\maketitle
\begin{abstract}
The latest improvements in the scale and calibration of Type Ia supernovae catalogues allow us to constrain the specific nature and evolution of dark energy through its effect on the expansion history of the universe. We present the results of Bayesian cosmological model comparison on the SNe~Ia catalogue Pantheon+, where Flat $\Lambda$CDM is preferred by the data over all other models and we find moderate evidence ($\Delta \log \mathcal{Z} \sim 2.5$) to reject a number of the alternate dark energy models. The effect of peculiar velocity corrections on model comparison is analysed, where we show that removing the peculiar velocity corrections results in a varying fit on non-$\Lambda$CDM parameters. As well as comparing cosmological models, the Bayesian methodology is extended to comparing the scatter model of the data, testing for non-gaussianity in the Pantheon+ Hubble residuals. We find that adding a scale parameter to the Pantheon+ covariances, or alternately using a multivariate Student's t-distribution fits the data better than the fiducial analysis, producing a cosmology independent evidence increase of $\Delta \log \mathcal{Z} = 2.29 $ and $2.46$ respectively. This improved treatment of the scatter decreases the uncertainty in the constraint on the Hubble constant, finding $H_0 = 73.67 \pm 0.99 $ \si{km.s^{-1}.Mpc^{-1}}, in $ 5.7 \sigma$ tension with Planck. We also explore $M_B$ transition models as a potential solution for the Hubble tension, finding no evidence to support these models among the SNe data.
\end{abstract}
% Select between one and six entries from the list of approved keywords.
% Don't make up new ones.
\begin{keywords}
cosmological parameters, dark energy, methods: statistical
\end{keywords}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%% BODY OF PAPER %%%%%%%%%%%%%%%%%%
\section{Introduction}
Type Ia supernovae (SNe~Ia) are extremely bright thermonuclear explosions of a carbon-oxygen white dwarf in a binary system \citep[see][for a review of their observational properties]{Maguire2017}. After correcting for the lightcurve shape \citep{phillips1993}, colour \citep{tripp1998} and host galaxy properties \citep[e.g.][]{Kelly2010,Sullivan_2010}, they have a small dispersion in their brightness, making them excellent distance indicators for cosmology \citep[e.g., see][for a review of their cosmological utility]{Goobar2011}. They were instrumental in the discovery of universal accelerated expansion \citep{Riess1998,Perlmutter_1999} and are crucial for precision measurements of dark energy properties, e.g. energy density, equation of state and changes with cosmic time \citep{brout_2022}.
%While supernovae in general have a very wide range of light curves and physical origins, the discovery of the much more standard class of Type Ia SNe gave cosmologists access to one of the most important cosmological probes in estimating the evolution history and composition of the universe. They are visible out to high redshifts, relatively abundant, and with modern analysis have their intrinsic brightness scatter down to $\sim 0.1$mag \citep{ruiter2019}, and it's for these reasons that Ia SNe were the first observation to provide evidence for the acceleration of the universe \citep{Riess1998,Perlmutter_1999}. In the decades since, cosmologists have been able to establish and tightly constrain the standard cosmology $\Lambda$CDM.
Observations of the SN~Ia magnitude-redshift relation, as well as high-precision early and late universe probes, e.g. the cosmic microwave background power spectrum \citep{Planck2018}, baryon acoustic oscillations \citep{Bourboux2020}, and the distribution and evolution of galaxies \citep{d_Amico_2020}, have led to the inception of a standard cosmological model, termed $\Lambda$CDM. It is a model with only 6 parameters, explaining a wide range of datasets and phenomena. However, there exist significant theoretical and observational problems with dark energy that are unsolved by $\Lambda$CDM \citep{Perivolaropoulos_2022}, which motivates looking into physics beyond our standard cosmology. The most relevant problem with $\Lambda$CDM for this paper is the Hubble tension, the observed $5 \sigma$ tension between the CMB-measured value of $H_0 \sim 67.4 \pm 0.5$ \si{km.s^{-1}.Mpc^{-1}} \citep{Planck2018}, and the distance measure Hubble constant, constrained on the Pantheon+ data as $H_0 = 73.04 \pm 1.04$ \si{km.s^{-1}.Mpc^{-1}} \citep{Riess2022}. Tensions (Hubble or otherwise) in our current theory motivate searching for physics beyond the standard cosmology; alternate formulations of dark energy.
Despite the mathematical simplicity of a cosmological constant there is no reason a priori to assume that dark energy takes this form, and many alternative theories exist that also function as an accelerating factor in the dynamics of the scale of the universe. The simplest modification is $w$CDM, where the equation of state parameter $P = w\rho$ is no longer fixed at $w = -1$ for dark energy. Other models allow for a dynamical dark energy, where instead of acting like a cosmological constant, the accelerating power of dark energy evolves with cosmological time. A final category of models allows modifications to General Relativity; since our evidence for dark energy comes from Einstein's equations, it's possible to justify the appearance of an accelerating universe via a novel theory of gravity. The list of models considered in this paper are drawn largely from \cite{Dhawan2020} and cover a wide range of physical motivations and theoretical approaches to modelling dark energy. %Those original dark energy Ia SNe papers, \cite{Riess1998,Perlmutter_1999}, were only concerned with the existence of dark energy, but now with the latest Pantheon+ catalogue \cite{brout_2022}, containing 1701 observations from 1550 distinct Ia SNe, we are able to constrain the specific nature of dark energy.
Pantheon+ is the culmination of a series of efforts to standardise the luminosity of SNe~Ia, to improve their constraining power as a cosmological probe. The data have been corrected in a number of ways, accounting for statistical and systematic effects on the data: see \cite{brout_2022} Figure 1 for a full list of the companion papers that form the total catalogue. All of these effects lead to the corrected apparent magnitudes in the Pantheon+ catalogue, with their intrinsic scatter reduced to $\sim 0.1$ mag.
The usage of Bayesian techniques in cosmology has greatly increased in recent years \citep{hobson2010bayesian}. The Bayesian approach is to update our current belief, the prior distribution, using the observed data and its likelihood to form a new belief about a hypothesis, the posterior distribution. This allows us to take an uninformative prior (such as $\Omega_m \sim U[0,1]$) and update it using the supernova data to infer the value of parameters within a dark energy model. Beyond parameter inference, we can update our level of belief in competing dark energy models, as in \cite{Kurek_2008}, \cite{Handley_2021}. Starting from the belief that all dark energy models are equally likely, we can calculate the posterior probabilities of competing models as they are favoured by the data.
A review of the relative merits and demerits of a Bayesian model selection analysis are explored in detail in \cite{bernado2009,lindley2000,bayesinthesky}. For comparing various different models, a key advantage is that the Bayes factor, i.e. the logarithm of the ratio of the Bayesian evidence, penalises models with larger degrees of freedom if the additional degrees do not lead to a significant improvement in the quality of the fit \citep{Hergt_2021}. This effectively applies an ``Occam's razor" in the analysis. In this paper, we move beyond just dark energy model comparison and use the Bayes factor to not only compare different cosmological models using SNe~Ia data but also different likelihood models and how well they describe the Pantheon+ data. We present our methodology in \cref{sec:method}, our results in \cref{sec:results} and discuss them in light of the current literature in \cref{sec:discussion}. Our conclusions are presented in \cref{sec:conclusion}
\section{Methodology}\label{sec:method}
\subsection{Bayes' Theorem} \label{Bayes}
All equations in this section are contained in \cite{bayesinthesky}. \\
In the context of parameter inference and model comparison, Bayes' theorem says that the distribution of a set of parameters $\theta$ within a model $\mathcal{M}$ gets updated by data $d$ as:
\begin{equation}
p(\theta|d,\mathcal{M}) = \frac{p(d|\theta,\mathcal{M}) p(\theta|\mathcal{M})}{p(d|\mathcal{M})},
\label{posterior}
\end{equation}
or
\begin{equation*}
\text{Posterior} = \frac{\text{Likelihood}\times\text{Prior}}{\text{Evidence}} \equiv \mathcal{P} = \frac{\mathcal{L}\pi}{\mathcal{Z}}.
\end{equation*}
In this analysis we take the prior distribution of models to be uniform, i.e.
\begin{equation}
p(\mathcal{M}) = \frac{1}{\text{Number of models}},
\end{equation}
so that model comparison is quantified by the evidence ratio:
\begin{equation}
\frac{p(\mathcal{M}_0|d)}{p(\mathcal{M}_1|d)} = \frac{p(d|\mathcal{M}_0)}{p(d|\mathcal{M}_1)} = B_{01},
\label{Bayes Factor Equation}
\end{equation}
where $B_{01}$ is the Bayes factor. We compute these evidences by integrating out the posterior distribution over parameter space in \cref{posterior} to find
\begin{equation}
p(d|\mathcal{M}) = \int_{\Omega_\theta} p(d|\theta,\mathcal{M}) p(\theta|\mathcal{M}) d\theta = \langle \mathcal{L} \rangle _{\pi}.
\label{EvidenceEquation}
\end{equation}
From here on the evidence of a model is labelled with $\mathcal{Z}$, and all model comparisons are expressed in terms of $\Delta \log \mathcal{Z} = \log(\mathcal{Z}_0/\mathcal{Z}_1)$. \cref{table:jeff} shows the Jeffreys' Scale \citep{jeffreys1961} which we use to judge the significance of our results.
\begin{table}
\centering
\caption{The odds, probabilities, and interpretations of various Bayes factor as given by \protect\cite{jeffreys1961}. Here the probabilities are the posterior probability of the superior model, starting with an even prior $p(\mathcal{M}_0) = p(\mathcal{M}_1) = 0.5$.}
\hspace{5ex} \\
\begin{tabular}{@{\extracolsep{\fill}} l c c c}
\hline
$|\log B_{01}|$ & Odds & Probability & Strength of evidence \\ [0.5ex]
\hline
$<1.0$ & $\lesssim 3:1$ & $<0.750$ & Inconclusive \\
$1.0$ & $\sim 3:1$ & $0.750$ & Weak \\
$2.5$ & $\sim 12:1$ & $0.923$& Moderate \\
$5$ & $\sim 150:1$ & $0.993$ & Strong \\ [1ex]
\hline
\end{tabular}
\label{table:jeff}
\end{table}
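The mapping in \cref{table:jeff} between $|\log B_{01}|$, odds, and posterior probabilities can be verified numerically. The following Python sketch (not part of the analysis pipeline; note the table quotes rounded odds, so $|\log B| = 1$ gives $e \approx 2.7{:}1$ rather than exactly $3{:}1$) converts a log Bayes factor to the posterior probability of the favoured model under even model priors:

```python
import math

def bayes_factor_stats(log_B):
    """Convert |log B_01| into the Bayes factor (odds) and the posterior
    probability of the favoured model, assuming even prior odds."""
    B = math.exp(abs(log_B))      # Bayes factor, favoured : disfavoured
    return B, B / (1.0 + B)       # odds and posterior probability

for lb in (1.0, 2.5, 5.0):
    odds, prob = bayes_factor_stats(lb)
    print(f"|log B| = {lb}: odds {odds:.0f}:1, probability {prob:.3f}")
```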
\subsection{Occam Penalty} \label{occam}
As mentioned previously, Bayesian model comparison has a built-in Occam's razor term that works to rule out over-complicated models. This takes the form of the Kullback–Leibler divergence \citep{kullback}, the statistical distance between the posterior and the prior:
\begin{equation}
\mathcal{D}_\textrm{KL}(\theta|\mathcal{M}) = \int p(\theta | d,\mathcal{M}) \log \frac{p(\theta | d,\mathcal{M})}{p(\theta | \mathcal{M})}d\theta = \int \mathcal{P} \log \frac{\mathcal{L}}{\mathcal{Z}}d\theta.
\end{equation}
Splitting up the logarithm demonstrates how this term penalises the evidence \citep{Hergt_2021}:
\begin{equation}
\log \mathcal{Z} = \langle \log\mathcal{L} \rangle _{\mathcal{P}} - \mathcal{D}_\textrm{KL},
\label{DKLequation}
\end{equation}
so the evidence of a model is equal to the goodness of fit, as measured by the posterior averaged log likelihood, minus a penalty term, the KL-divergence. $\Delta \log \mathcal{Z}$ can be dissected even further, using the approximation:
\begin{equation} \label{laplaceapprox}
\langle \log \mathcal{L}\rangle_\mathcal{P} \approx \log \mathcal{L}_{\text{max}} -\frac{n}{2},
\end{equation}
where $n$ is the model dimension. This expression is exact when the likelihood is a multivariate Gaussian in $\theta$. This can be generalised by replacing $n$ with $\hat{d}$, the Bayesian Model Dimensionality \citep{complexity}, which is a measure of the effective number of constrained parameters in a theory. This holds for parameters with approximately Gaussian or uniform posteriors, i.e. over ``well-behaved" parameter spaces. Substituting this into \cref{DKLequation}:
\begin{equation} \label{occampenaltyapprox}
\Delta \log \mathcal{Z} \approx \Delta(\log \mathcal{L}_{\text{max}} ) - \frac{\Delta\hat{d}}{2} - \Delta(\mathcal{D}_\textrm{KL}).
\end{equation}
The evidence difference is (approximately) the difference in the best possible fit each model can find, minus a $\mathcal{D}_\textrm{KL}$ penalty and a dimensionality penalty.
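The identity in \cref{DKLequation} can be checked explicitly on a toy problem. The Python sketch below (an illustrative check under assumed toy settings, not the paper's code) uses a single datum with a unit Gaussian likelihood and a uniform prior, and confirms numerically that the log-evidence equals the posterior-averaged log-likelihood minus the KL divergence:

```python
import numpy as np

# Toy 1D check of log Z = <log L>_P - D_KL: one datum d = 0 with a unit
# Gaussian likelihood in theta, and a uniform prior theta ~ U[-10, 10].
theta = np.linspace(-10, 10, 20001)
dt = theta[1] - theta[0]
log_L = -0.5 * theta**2 - 0.5 * np.log(2 * np.pi)   # log N(d=0; theta, 1)
prior = np.full_like(theta, 1 / 20.0)               # uniform prior density

Z = np.sum(np.exp(log_L) * prior) * dt              # evidence
post = np.exp(log_L) * prior / Z                    # posterior density
avg_log_L = np.sum(post * log_L) * dt               # <log L>_P
D_KL = np.sum(post * np.log(post / prior)) * dt     # Occam penalty

print(np.log(Z), avg_log_L - D_KL)                  # the two agree
```

Here the prior occupies a much larger volume than the posterior, so the Occam penalty is substantial even though the fit is perfect.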
\subsection{Cosmological Model Fitting}
Given some parameters $\theta$ in a given cosmology, we calculate luminosity distances with
\begin{equation}
\label{DL}
d_L(z) = \frac{c(1+z)}{H_0\sqrt{|\Omega_k|}} \text{sinn} \left(\sqrt{|\Omega_k|} \int ^{z}_0 \frac{dz'}{E(z';\theta)}\right),
\end{equation}
where $E^2(z) = H^2(z)/H^2_0$ is the dimensionless Hubble parameter, which relates the expansion history to cosmological parameters. This is related to the distance modulus $\mu$ by:
\begin{equation}
\mu(z) = 5\log_{10}\left(\frac{d_L(z)}{10\,\text{pc}}\right).
\label{moduli}
\end{equation}
% \sdt{this assumes flatness, modify to include the sinnh function and the explicit dependence on $\Omega_{\rm K}$ (I'VE MENTIONED THIS IN APPENDIX A BUT COULD BE MOVED TO MAIN TEXT?)}
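The calculation in \cref{DL,moduli} can be sketched numerically as follows (an assumed Python implementation for illustration; sinn is $\sinh$ for open, $\sin$ for closed, and the identity for flat cosmologies, shown here for curved $\Lambda$CDM):

```python
import numpy as np

C_KM_S = 299792.458  # speed of light in km/s

def E(z, Om, Ok=0.0):
    """Dimensionless Hubble parameter for (curved) LCDM."""
    return np.sqrt(Om * (1 + z)**3 + Ok * (1 + z)**2 + (1 - Om - Ok))

def luminosity_distance(z, H0=70.0, Om=0.3, Ok=0.0, n=10000):
    """d_L in Mpc, integrating 1/E(z) with the midpoint rule and applying
    the sinn function according to the sign of the curvature density."""
    edges = np.linspace(0.0, z, n + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    chi = np.sum(1.0 / E(mids, Om, Ok)) * (z / n)   # dimensionless integral
    if Ok > 0:
        chi = np.sinh(np.sqrt(Ok) * chi) / np.sqrt(Ok)
    elif Ok < 0:
        chi = np.sin(np.sqrt(-Ok) * chi) / np.sqrt(-Ok)
    return C_KM_S / H0 * (1 + z) * chi

def distance_modulus(z, **cosmo):
    """mu = 5 log10(d_L / 10 pc), with d_L converted from Mpc to pc."""
    return 5 * np.log10(luminosity_distance(z, **cosmo) * 1e6 / 10)
```

For the illustrative values $H_0 = 70$, $\Omega_m = 0.3$ in a flat cosmology, this gives $d_L(0.5) \approx 2833$ Mpc and $\mu(0.5) \approx 42.26$.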
We now calculate Hubble residuals, the deviation of the observed distances to the model:
\begin{equation}
\Delta_i = \mu_i - \mu_{\text{model}}(z_i) = (m_{B,i}-M_B) -\mu_{\text{model}}(z_i).
\end{equation}
The corrected $m_B$ values are given in Pantheon+ as:
\begin{equation}
m_B^{corr} = m_B + \alpha x_1 - \beta c - \delta_{\text{bias}} +\delta_{\text{host}}.
\label{mbcorr}
\end{equation}
This equation standardises across light curves via the Tripp parameters $\alpha,\beta$ \citep{tripp1998}, as well as correcting for host galaxy/survey specific selection biases \citep{brout_2022}.
For Calibrator observations (SNe that have an independent distance measure from Cepheid Variable stars), residuals are calculated as:
\begin{equation} \label{cephres}
\Delta = (m_B-M_B)-\mu_{\text{ceph}}.
\end{equation}
This cosmology-independent term only constrains $M_B$, which propagates through the data so $H_0$ can be estimated on the remaining ``Hubble Flow" SNe. This data is shown in \cref{hubblediagram}, along with the redshift cut-off applied to the Hubble Flow SNe data at $z = 0.023$ (as implemented in the SH$_0$ES calculation of $H_0$ \citep{Riess2022}). We then construct a Gaussian likelihood:
\begin{equation}
\log\mathcal{L} = -\frac{1}{2}\left( \Delta^T \bm{C}^{-1} \Delta + \log |2\pi \bm{C}| \right),
\label{gaussianlikelihood}
\end{equation}
where $\bm{C}$ is the covariance matrix of the Pantheon+ data. The process of defining this matrix is detailed in Section 2.2 of \cite{Dhawan2020}. This likelihood function, as well as the prior distributions detailed in \cref{table:prior} are fed into the nested sampling software \texttt{PolyChord} \citep{polychord,Polychord2}. While nested sampling is a numerical method, all errors in the evidence values are $< 0.05$, so are quoted/plotted without error bars.
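A standard way to evaluate \cref{gaussianlikelihood} without explicitly inverting $\bm{C}$ is a Cholesky factorisation, which is both faster and numerically stabler for a large covariance matrix. A minimal Python sketch (not the paper's pipeline):

```python
import numpy as np

def gaussian_loglike(residuals, cov):
    """log L = -0.5 * (Delta^T C^{-1} Delta + log|2 pi C|), evaluated via a
    Cholesky factorisation C = L L^T for numerical stability."""
    L = np.linalg.cholesky(cov)
    alpha = np.linalg.solve(L, residuals)     # solves L alpha = Delta
    chi2 = alpha @ alpha                      # Delta^T C^{-1} Delta
    logdet = 2 * np.sum(np.log(np.diag(L)))   # log|C|
    n = len(residuals)
    return -0.5 * (chi2 + logdet + n * np.log(2 * np.pi))
```

The factorisation is $\mathcal{O}(n^3)$ but need only be computed once per covariance matrix, with each likelihood call then reduced to a triangular solve.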
\subsection{Dark Energy Models}\label{DEmodels}
This section describes the set of dark energy models considered in this paper, with brief explanations of the motivation and mechanism of each model. The priors used for each model parameter are given in \cref{table:prior}. From here on the fiducial 6-parameter cosmology is referred to as Flat/F$\Lambda$CDM, to distinguish it from $\Lambda$CDM with non-zero curvature.
\subsection*{$\Lambda$CDM and $w$CDM}
All of the models considered in this paper act as modifications to F$\Lambda$CDM and, given its many successful predictions, must behave much like it to be worth considering. Despite F$\Lambda$CDM being a 6-parameter model, many of the parameters do not directly affect the expansion history, so as a dark energy model it has a single parameter $\Omega_m = 1-\Omega_\Lambda$.
The first four models of this paper, F$\Lambda$CDM, $\Lambda$CDM, $w$CDM, and Flat $w$CDM (F$w$CDM), treat dark energy as having a static equation of state throughout cosmic evolution. $\Lambda$CDM allows for an over- or under-dense universe, $\Omega_m + \Omega_\Lambda \neq 1$, which we parameterise with a curvature density: $\Omega_m + \Omega_\Lambda = 1 - \Omega_k$. As well as affecting the evolution of the Hubble parameter, the presence of curvature also affects the luminosity distance geometrically, as on large scales the universe is no longer Euclidean \citep{cosmologicalmodels}.
$w$CDM is a phenomenological model where the equation of state is not fixed to $-1$ (the value corresponding to a cosmological constant) but is instead a free parameter. We test this model both as a flat cosmology and with the curvature density as a free parameter.
These four models are nested in one another; starting from curved $w$CDM, setting $w = -1$, $\Omega_k = 0$, or both leads to $\Lambda$CDM, F$w$CDM and F$\Lambda$CDM respectively. The dimensionless Hubble parameter for $w$CDM is
\begin{equation}
\frac{H^2}{H^2_0} = \Omega_M(1+z)^3 +\Omega_k(1+z)^2 + (1-\Omega_M - \Omega_k)(1+z)^{3(1+w)}.
\end{equation}
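The expression above, and the nesting of the four constant-$w$ models, can be transcribed directly (a Python sketch for illustration):

```python
def E2_wcdm(z, Om, Ok=0.0, w=-1.0):
    """H^2/H0^2 for curved wCDM; the nested models follow by fixing
    w = -1 (LCDM), Ok = 0 (FwCDM), or both (FLCDM)."""
    return (Om * (1 + z)**3 + Ok * (1 + z)**2
            + (1 - Om - Ok) * (1 + z)**(3 * (1 + w)))
```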
\begin{figure}
\centering
\includegraphics{hubblediagram.pdf}
\caption{The redshift-apparent magnitude data in the Pantheon+ catalogue, where here $m_B^*$ refers to the corrected Pantheon+ magnitudes. The dashed grey line shows the redshift cutoff applied to the data, at $z = 0.023$. This data is drawn from the ``zHD" and ``m\textunderscore b\textunderscore corr" columns of Pantheon+.}
\label{hubblediagram}
\end{figure}
\subsection*{One-Parameter Slow Roll Dark Energy}
Scalar field dark energy theories model current day cosmic acceleration and early time cosmic inflation with the same mechanism; a scalar field rolling down a potential. \cite{SLepian2014} showed that, for a potential satisfying the ``Slow-Roll Conditions" that allow it to behave like dark energy, the generic behaviour of this form of dark energy is independent of the initial value and shape of the potential. This generality makes it a highly desirable model, as it has only a single additional parameter $\delta w_0$ in its Hubble parameter expression:
\begin{equation}
\frac{H^2}{H^2_0} = \Omega_M(1+z)^3 + (1-\Omega_M)\left( \frac{(1+z)^3}{\Omega_M(1+z)^3+1-\Omega_M}\right)^{\frac{\delta w_0}{1-\Omega_M}}.
\end{equation}
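As a transcription of the expression above (a Python sketch; $\delta w_0 = 0$ recovers flat $\Lambda$CDM):

```python
def E2_slowroll(z, Om, dw0):
    """H^2/H0^2 for one-parameter slow-roll dark energy; dw0 = 0
    recovers flat LCDM."""
    a3 = (1 + z)**3
    ratio = a3 / (Om * a3 + 1 - Om)
    return Om * a3 + (1 - Om) * ratio**(dw0 / (1 - Om))
```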
% This scalar field acts like a particle moving in one dimension, rolling down a sloped potential.
\subsection*{Growing Neutrino Mass}
% The Growing Neutrino Mass model initially evolves as a scalar field, and then transitions into a constant dark energy at low redshift.
This model aims to address the ``cosmological coincidence" problem; a model that proposes an interaction with the scalar field that halts its evolution can justify why $\Omega_m \sim \Omega_\Lambda$, if the interaction only happens at a typical matter/energy density. In the case of this growing neutrino mass model \citep{Fardon2004,Wetterich2007}, the scalar field stops evolving due to neutrinos becoming non-relativistic (at a value of $\rho_\Lambda$ typical of an epoch transition such as is observed today). This model introduces two additional parameters, the early dark energy density $\Omega_e$, and the current day neutrino density $\Omega_\nu$.
In this cosmology the dark energy density has different late/early time behaviour:
\begin{equation*}
\Omega_\textrm{de}(a) =
\begin{cases}
\frac{\Omega_{de}a^3+2\Omega_\nu (a^{3/2}-a^3)}{1-\Omega_{de}(1-a^3)+2\Omega_\nu (a^{3/2}-a^3)} & \text{; } a > a_t \\ \\
\Omega_e & \text{; } a < a_t.
\end{cases}
\end{equation*}
The Hubble expression is then:
\begin{equation}
\frac{H^2}{H^2_0} = \frac{\Omega_m (1+z)^3}{1-\Omega_{de}(a)}.
\end{equation}
\subsection*{Algebraic Thawing}
Thawing models contain a dark energy with $w = -1$ at high redshift that departs from this value over cosmic time to less negative values. One way of realising this evolution is Algebraic Thawing \citep{Linder_2007}, a dark energy that evolves like a slow-roll dark energy during matter domination; as dark energy becomes more dominant, the evolution diverges from the scalar field dynamics. Thawing models act like a cosmological constant for most of cosmic history, only diverging at low redshift, so they match the early universe predictions of $\Lambda$CDM. This model contains two additional parameters beyond $\Omega_m$: the current day equation of state $w_0$, and $p$, a shape parameter that controls how fast the equation of state thaws. It has Hubble parameter expression
\begin{equation}
\begin{split}
\frac{H^2}{H^2_0} & = \Omega_m(1+z)^3 + (1-\Omega_m) \times \\ & \exp \left( \frac{3(1+w_0)}{\alpha p} \left[1-\left(1-\alpha +\frac{\alpha}{(1+z)^3}\right)^{p/3}\right] \right),
\end{split}
\end{equation}
where $\alpha = \frac{1}{1+b}$, $b = 0.3$.
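Transcribing the thawing expression (a Python sketch; $w_0 = -1$ recovers flat $\Lambda$CDM, and note $p = 0$ is singular in this parameterisation, although the limit exists):

```python
import math

def E2_thawing(z, Om, w0, p, b=0.3):
    """H^2/H0^2 for algebraic thawing dark energy; w0 = -1 recovers
    flat LCDM. p = 0 needs special handling (not done here)."""
    alpha = 1.0 / (1.0 + b)
    inner = 1.0 - (1.0 - alpha + alpha / (1 + z)**3)**(p / 3.0)
    de = math.exp(3.0 * (1.0 + w0) / (alpha * p) * inner)
    return Om * (1 + z)**3 + (1.0 - Om) * de
```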
\subsection*{Dark Energy Transitions}
Dark energy transition models attempt to resolve the Hubble tension by inducing a large step-like response in $H(z)$ at low redshift via large fluctuations in the equation of state parameter \citep{Mortonson_2009}. This model adds a single parameter $\delta$, where the local observed Hubble constant is $H_0 \approx (1+\delta)\Tilde{H}_0$, and $\Tilde{H}_0$ is the Hubble constant of a reference $\Lambda$CDM model. \cite{Mortonson_2009} suggest a transition at $z_t \approx 0.02$; however, our redshift cutoff is above this value, so we use $z_t = 0.1$ to make use of the low-redshift SNe up to $z = 0.1$, as in \cite{Dhawan2020}. This model has Hubble parameter
\begin{equation}
\frac{H^2}{\Tilde{H}^2_0} = \Tilde{\Omega}_m (1+z)^3 + \left[1+ \frac{2\delta \times \mathcal{S}(z)}{(1-\Tilde{\Omega}_m)\mathcal{S}(0)} \right] (1-\Tilde{\Omega}_m),
\end{equation}
where
\begin{equation*}
\mathcal{S}(z) = \frac{1}{2}\left [ 1 - \tanh\left ( \frac{z-z_t}{\Delta z}\right)\right].
\end{equation*}
The local observed Hubble constant and matter density are then given by
\begin{align*}
H_0 &= \Tilde{H}_0 \sqrt{1+2\delta}, \\
\Omega_m &= \Tilde{\Omega}_m \sqrt{1+2\delta}.
\end{align*}
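The step response can be sketched as follows (Python, for illustration; the smoothing width $\Delta z$ is not specified in this section, so the value used here is purely illustrative). Setting $\delta = 0$ recovers $\Lambda$CDM, and at $z = 0$ the expression reduces to $H^2/\Tilde{H}_0^2 = 1 + 2\delta$, consistent with $H_0 = \Tilde{H}_0\sqrt{1+2\delta}$:

```python
import math

def E2_transition(z, Om_t, delta, z_t=0.1, dz=0.01):
    """H^2/Htilde0^2 for the dark energy transition model; delta = 0
    recovers LCDM. dz is an assumed, illustrative smoothing width."""
    S = lambda x: 0.5 * (1.0 - math.tanh((x - z_t) / dz))
    step = 1.0 + 2.0 * delta * S(z) / ((1.0 - Om_t) * S(0.0))
    return Om_t * (1 + z)**3 + step * (1.0 - Om_t)
```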
\subsection*{Bimetric Gravity}
Any deviation from a cosmological constant can also be realised by modifications to the equations of GR. Bimetric gravity adds a massive particle to mediate the gravitational force by having two interacting metrics for space-time instead of one \citep{deRham2011,mortsell2018}. In this paper we consider the simplest case of bimetric gravity, including only the linear term of this interaction, which leads to a single additional parameter $B_1$ (where $B_1 = 0$ collapses the theory back down to $\Lambda$CDM). The Hubble parameter expression is given by:
\begin{equation}
\frac{H^2}{H^2_0} = \frac{\Omega_M(1+z)^3}{2} + \frac{B_0}{6} + \sqrt{\left(\frac{\Omega_M(1+z)^3}{2}+\frac{B_0}{6}\right) ^2+\frac{B_1^2}{3}},
\end{equation}
where
\begin{equation*}
B_0 = 3(1-\Omega_M)-B_1^2.
\end{equation*}
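Transcribing the bimetric expression (a Python sketch; with $B_1 = 0$ the square root collapses and flat $\Lambda$CDM is recovered):

```python
import math

def E2_bimetric(z, Om, B1):
    """H^2/H0^2 for linear bimetric gravity; B1 = 0 collapses the
    theory to flat LCDM."""
    B0 = 3.0 * (1.0 - Om) - B1**2
    m = 0.5 * Om * (1 + z)**3 + B0 / 6.0
    return m + math.sqrt(m * m + B1**2 / 3.0)
```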
\subsection{Likelihood Models}\label{LKmodels}
\Cref{residuals} shows the normalised residuals $\bm{C}^{-1/2} \Delta$ for best fit F$\Lambda$CDM parameters. The likelihood function in \cref{gaussianlikelihood} assumes that these residuals will form a unit Gaussian, shown by the continuous black line, but the underlying distribution appears to be slightly tighter. A likelihood function that assigns more accurate weighting to the residuals, so that observations with more or less uncertainty contribute more or less to the likelihood function, will have higher likelihood values, and therefore will be favoured by the evidence.
The discrepancy in \cref{residuals} motivates a series of tests that adjust the form of the likelihood function in \cref{gaussianlikelihood}. Here we apply the Bayesian model comparison methodology not just to the cosmological model, but also to the model of the apparent magnitude scatter. A pair of well-motivated models is considered: a generalised Gaussian, and the multivariate Student's t-distribution. \cite{Dainotti} demonstrated a preference for a univariate Student's t-distribution on the normalised residuals, finding a Bayesian Information Criterion difference of $\Delta_\textrm{BIC} = 134$, which corresponds to an approximate log Bayes factor of $67$. We expand upon this by using evidences instead of the BIC, as well as including the non-diagonal elements of the covariance matrix in our analysis.
\begin{figure}
\begin{minipage}[c]{0.5\textwidth}
\centering
\includegraphics{residuals.pdf}
\end{minipage}\hfill
\begin{minipage}[c]{0.5\textwidth}
\caption{The normalised Hubble residuals of the Pantheon+ data in a best fit F$\Lambda$CDM universe, over-plotted with a standard Gaussian in the solid black line. The residuals are normalised in a way that takes into account covariances, $\Delta_{\text{norm}} = \bm{C}^{-1/2}\Delta$. As the figure shows, the data do not normalise perfectly to the Gaussian distribution, showing a higher peak. The dashed line shows a Gaussian with variance $\sigma^2 = 0.88$.}
\label{residuals}
\end{minipage}
\end{figure}
\subsection*{Generalised Gaussian}
Modelling 21 \si{cm} cosmology data with artificially added scatter, \cite{Scheutwinkel_2023} found that for data with an unknown ``ground-truth" likelihood function, a generalised Gaussian distribution is a good first-order approximation, and will generally outperform a Gaussian in its Bayesian evidence (unless the underlying scatter is truly Gaussian). In the multivariate case it extends the Gaussian with two additional parameters, scale $A$ and shape $B$ \footnotemark{}, with likelihood function \citep{Pascal_2013}:
\footnotetext{These parameters are often denoted $(\alpha,\beta)$ in the literature, but these characters are reserved for the Tripp light-curve fitting parameters here, so $(A,B)$ are used instead.}
\begin{equation}\label{ABGGD}
% \mathcal{L} = \frac{B \Gamma(N/2)}{|2^{1/B}\pi A \bm{C}|\Gamma(N/2B)}\exp\left( -\frac{1}{2} \left(\frac{\Delta^T\bm{\Sigma}^{-1}\Delta}{A}\right)^{B} \right)
\mathcal{L} = \mathcal{N}(A,B,\bm{C})\exp\left( -\frac{1}{2} \left(\frac{\Delta^T\bm{C}^{-1}\Delta}{A}\right)^{B} \right).
\end{equation}
This has covariance matrix $\bm{M}$ given by:
\begin{equation} \label{ABcovariance}
\bm{M} = A \frac{2^{1/B} \Gamma(\frac{N+2}{2B})}{N \Gamma(\frac{N}{2B})} \bm{C}.
\end{equation}
We extend this distribution even further by introducing a ``skew" parameter $K$, to test the residuals for asymmetry. This combines to give the full distribution:
\begin{equation} \label{ABKGGD}
% \mathcal{L} = \frac{B \Gamma(N/2)}{|2^{1/B}\pi A \bm{C}|\Gamma(N/2B)}\exp\left( -\frac{1}{2} \left(\frac{\Delta^T\bm{\Sigma}^{-1}\Delta}{A}\right)^{B} \right)
\mathcal{L} = \frac{\mathcal{N}(A,B,\bm{C})}{\prod_i (1-K[(A\bm{C})^{-1/2} \Delta]_i)} \exp\left( -\frac{1}{2} (V^T \cdot V)^{B} \right),
\end{equation}
where $V$ are skewed, scaled, normalised residuals:
\begin{equation}
V_i = \frac{1}{K} \log \left( 1 - K [(A\bm{C})^{-1/2} \Delta]_i \right).
\end{equation}
If any of the residuals are too large ($K[(A\bm{C})^{-1/2} \Delta]_i > 1$) then the distribution is set to 0, so for higher $|K|$, large parts of the parameter space are ruled out.
The full ``Doubly-Generalised" distribution tends back to \cref{ABGGD} as $K \rightarrow 0$, and reverts back to the pure Gaussian in \cref{gaussianlikelihood} for $(A,B,K) = (1,1,0)$. For $A$ and $B$ we take priors $A \sim U[0.5,2]$, $B \sim U(0,3]$, which are sufficiently wide as these bounds extend out to highly non-Gaussian forms. For the skew parameter we take the thin prior $K \sim U[-0.2,0.2]$. Since we have $>1000$ data points we expect to find values at $>3\sigma$ even with the best fit parameters, so allowing $\frac{1}{|K|} < 4$ would give likelihood 0 on almost all of the cosmological parameter space. The univariate case of these distributions is shown for a selection of $B, K$ values in \cref{generalisedgrid}.
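The univariate form of the doubly-generalised distribution can be sketched in a few lines of Python (an illustrative transcription of \cref{ABKGGD}; the normalisation $\mathcal{N}(A,B,\bm{C})$ is handled numerically here rather than analytically):

```python
import numpy as np

def doubly_generalised_pdf(x, A=1.0, B=1.0, K=0.0):
    """Unnormalised univariate doubly-generalised Gaussian: skewed,
    scaled residuals V = log(1 - K*u)/K with u = x/sqrt(A), density
    exp(-0.5 (V^2)^B) / (1 - K*u), set to 0 where K*u >= 1."""
    u = x / np.sqrt(A)
    arg = 1.0 - K * u
    dens = np.zeros_like(x)
    ok = arg > 0
    if K == 0:
        V = u[ok]                     # K -> 0 limit (sign is irrelevant in V^2)
    else:
        V = np.log(arg[ok]) / K
    dens[ok] = np.exp(-0.5 * (V**2)**B) / arg[ok]
    return dens

# Normalise numerically; (A, B, K) = (1, 1, 0) recovers a unit Gaussian.
x = np.linspace(-8, 8, 4001)
p = doubly_generalised_pdf(x)
p = p / (p.sum() * (x[1] - x[0]))
```

Varying $B$ and $K$ here reproduces the qualitative shapes shown in \cref{generalisedgrid}.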
\begin{figure}
\centering
\includegraphics{GeneralisedGrid.pdf}
\caption{Probability densities of a univariate generalised Gaussian from \cref{ABKGGD} for varying values of $B$ and $K$. The scale parameter and covariance matrix are set to unity, $A = \bm{C} = 1$.}
\label{generalisedgrid}
\end{figure}
\subsection*{Student's t}
While the generalised Gaussian is a broad first attempt at finding non-Gaussianity in the data, we can also try a more specific distribution. The 1D t-distribution is constructed from a Gaussian variable divided by the square root of an independent $\chi^2_\nu/\nu$ variable, so arises as the posterior predictive when working with a Gaussian distribution of unknown variance. Analogously, the multivariate t-distribution appears as the posterior predictive in a number of marginalisation problems, in particular when estimating the covariance matrix with $\nu$ sums of pairwise deviation products. The process of creating the covariance matrix has uncertainty attached to it, and neglecting this effect by putting in fixed values for the covariance matrix could misrepresent the scatter of the data. This distribution has likelihood function:
\begin{equation} \label{stulikelihood}
% \mathcal{L} = \frac{\Gamma((\nu + k)/2)}{\Gamma(\nu/2)(\nu \pi)^{k/2}|\bm{\Sigma}|^{1/2}}\left[ 1+ \frac{1}{\nu}(\Delta^T\bm{\Sigma}^{-1}\Delta)\right]^{-(\nu+p)/2}
\mathcal{L} =\mathcal{N}(\nu,\bm{C})\left[ 1+ \frac{1}{\nu}(\Delta^T\bm{\Sigma}^{-1}\Delta)\right]^{-(\nu+k)/2},
\end{equation}
where $k$ is the length of the residuals vector, the degrees of freedom $\nu$ is the shape parameter of the distribution, and $\bm{\Sigma}$ is related to the covariance matrix by:
\begin{equation}
\bm{\Sigma} = \frac{\nu-2}{\nu} \bm{C}.
\end{equation}
For this test we use the prior $\nu \sim \log U(2,10^6]$. The lower bound is chosen because below $\nu = 2$ the t-distribution has no defined variance, and the range is extended to high $\nu$ since we are running this on a large number of data points. In this case a uniform prior would be inappropriate, since it would over-weight the high values $\nu > 10^5$.
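The multivariate Student's t log-likelihood in \cref{stulikelihood}, with $\bm{\Sigma} = \frac{\nu-2}{\nu}\bm{C}$ so that $\bm{C}$ remains the covariance, can be sketched as follows (a Python illustration, not the paper's code; for large $\nu$ it converges to the Gaussian log-likelihood):

```python
import numpy as np
from math import lgamma, log, pi

def student_t_loglike(residuals, cov, nu):
    """Multivariate Student's t log-likelihood with Sigma = (nu-2)/nu * C,
    so that C is the covariance of the distribution (requires nu > 2)."""
    k = len(residuals)
    Sigma = (nu - 2.0) / nu * cov
    L = np.linalg.cholesky(Sigma)
    alpha = np.linalg.solve(L, residuals)
    maha = alpha @ alpha                        # Delta^T Sigma^{-1} Delta
    logdet = 2 * np.sum(np.log(np.diag(L)))     # log|Sigma|
    return (lgamma((nu + k) / 2) - lgamma(nu / 2)
            - 0.5 * k * log(nu * pi) - 0.5 * logdet
            - 0.5 * (nu + k) * log(1 + maha / nu))
```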
\begin{table}
\centering
\caption{Prior distributions for all model parameters in the dark energy models as well as likelihood models tested in this work. For parameters without strong theoretical bounds, wide uniform priors are used.}
%, largely following \protect\cite{Dhawan2020} for the cosmological parameters. While they have been slightly constrained by theoretical predictions, the parameter spaces are left intentionally wide to make sure that the bounds contain the entire posterior bulk.}
\hspace{5ex} \\
\begin{tabular}{@{\extracolsep{\fill}} l c l}
\hline
Parameter & Prior & Model \\ [0.5ex]
\hline
$M_B$& U[-20,-18] & All \\
$H_0$& U[50,100] & All\\
$\Omega_m$ & U[0,1] & All\\
$w$& U[-2,0] & $w$CDM (Flat and Curved) \\
$\Omega_k$& U[-0.5,0.5] & $w$CDM and $\Lambda$CDM\\
$w_0$& U[-2,2] & Algebraic thawing\\
$p$& U[-4,4] & Algebraic thawing\\
$B_1$& U[0,6] & Bimetric gravity\\
$\delta w_0$& U[-2,1] & One parameter slow-roll dark energy \\
$\Omega_e$& U[0,0.25] & Growing $\nu$ mass \\
$\Omega_\nu$& U[0,0.4] & Growing $\nu$ mass \\
$\delta$& U[-0.4,0.6] & Dark energy transition \\ [1ex]
\hline
$A$& U[0.5,2] & Generalised Gaussian \\
$B$& U(0,3] & Generalised Gaussian \\
$K$& U[-0.2,0.2] & Generalised Gaussian \\
$\nu$& $\log$U(2,$10^6$] & Student's t \\ [1ex]
\hline
\end{tabular}
\label{table:prior}
\end{table}
\renewcommand{\arraystretch}{1.1}
\begin{table*}
\centering
\caption{Parameter estimations and evidences (relative to F$\Lambda$CDM) for each model. All parameter values are quoted as median $\pm$ Bayesian credibility interval. Included also are the odds of each model, given by $\mathcal{Z} \colon \mathcal{Z}_0$.}
\begin{tabular}{@{\extracolsep{\fill}} l c c c c | c c}
\hline
Model & $H_0$ & $\Omega_m$ & Parameter 2 & Parameter 3 & $\Delta \log \mathcal{Z}$ & Odds \\ [0.5ex]
\hline
Flat $\Lambda$ CDM & \err{73.70}{0.98}{1.07} & \err{0.33}{0.02}{0.02} & -- & -- & 0 & -- \\
$\Lambda$ CDM & \err{73.65}{1.0}{1.05} & \err{0.32}{0.05}{0.06} & $\Omega_k =$ \err{0.03}{0.13}{0.13} & -- & -1.06 & $<$1:2 \\
Flat $w$CDM & \err{73.69}{1.01}{1.07} & \err{0.32}{0.07}{0.06} & $w =$ \err{-0.98}{0.17}{0.15} & -- & -2.33 & $<$1:10 \\
$w$CDM & \err{73.68}{1.07}{1.02} & \err{0.29}{0.07}{0.06} & $\Omega_k =$ \err{0.09}{0.33}{0.21} & $w =$ \err{-1.09}{0.5}{0.29} & -2.33 & $<$1:10 \\
Bimetric Gravity & \err{73.78}{1.0}{1.04} & \err{0.35}{0.03}{0.04} & $B_1 = $ \err{0.57}{0.23}{0.57} & -- & -1.65 & $<$1:5 \\
Slow Roll Dark Energy & \err{73.69}{1.06}{1.02} & \err{0.33}{0.05}{0.04} & $\delta w_0 = $ \err{0.01}{0.16}{0.13} & -- & -2.1 & $<$1:8 \\
Algebraic Thawing & \err{73.69}{1.05}{1.03} & \err{0.33}{0.05}{0.05} & $w_0 =$ \err{-1.0}{0.18}{0.14} & $p = $ \err{0.56}{2.71}{2.22} & -2.37 & $<$1:10 \\
Growing Neutrino Mass & \err{73.57}{1.01}{1.02} & \err{0.25}{0.07}{0.05} & $\Omega_e =$ \err{0.13}{0.1}{0.07} & $\Omega_\nu=$ \err{0.1}{0.05}{0.1} & -0.75 & $<$1:2 \\
Dark Energy Transition & \err{73.59}{1.48}{1.48} & \err{0.33}{0.04}{0.04} & $\delta =$ \err{0.0}{0.03}{0.03} & -- & -2.57 & $<$1:13 \\ [1ex]
\hline
\end{tabular}
\label{parametertable}
\end{table*}
\renewcommand{\arraystretch}{1}
\section{Results}
\label{sec:results}
\subsection{Cosmological Model Comparison}\label{Firstanalysis}
In this section we present the results of the model comparison and parameter estimation on each of the cosmological models.
\Cref{parametertable} gives the posterior estimates for each cosmological parameter with a Gaussian likelihood, as well as the relative evidences of each model. The parameters are quoted with their median values, and the intervals given are Highest Posterior Density Intervals \citep{HPDI}, a form of Bayesian credibility interval.
\Cref{wCDMTriangle} shows an example of the 1 and 2D posteriors for F$\Lambda$CDM and F$w$CDM, with the median/interval values over plotted. We can see that both models find a consistent $H_0$ value, showing $w$'s independence from $H_0$, as well as the effect on $\Omega_m$'s constraint by adding the additional parameter.
The relative evidence values are plotted in \cref{evidence1}, where four of the models are disfavoured at or near the moderate evidence level (shown by the dashed blue line). The black lines in \cref{evidence1} show the relative $\mathcal{D}_\textrm{KL}$ of each model, the size of the ``Occam Penalty" of each model compared to F$\Lambda$CDM. As well as the evidence values favouring F$\Lambda$CDM, the $\log \mathcal{L}_\textrm{max}$ values show that none of the models are able to find a better fit than F$\Lambda$CDM with their additional parameters.
\begin{figure}
\centering
\includegraphics[width=.48\textwidth]{wcdmtriangle.pdf}
\vspace{-0.1cm}
\caption{The posterior distribution of $H_0$, $\Omega_m$ and $w$ for a Gaussian likelihood. Over-plotted are the $H_0$ and $\Omega_m$ posteriors from F$\Lambda$CDM, which find similar values for both parameters since $w$ is constrained tightly to the null $w = -1$.}
\label{wCDMTriangle}
\end{figure}
\begin{figure}
\includegraphics[width = .48\textwidth, trim = 0 20 0 0]{evidence.pdf}
\vspace{-0.1cm}
\caption{The evidences (relative to F$\Lambda$CDM) of each model, run on the set of all SNe with $z>0.023$. The black lines show the $\mathcal{D}_{\textrm{KL}}$ of the models, i.e. the Occam penalty applied to them by the Bayesian evidence. Also shown is the ``Moderate Evidence" cut-off at $\Delta \log \mathcal{Z} = -2.5$, which one model, the dark energy transition, falls beyond. The black dots show the maximum likelihood point (again relative to F$\Lambda$CDM) achieved by each model.}
\label{evidence1}
\end{figure}
\subsection{Peculiar Velocity Corrections} \label{PVC}
The peculiar velocities (PVs) of the SNe can be up to $300$ \si{km.s^{-1}}, which accounts for $\sim 10\%$ of the total redshift at $z = 0.01$. For higher redshifts, PVs still contribute to the total uncertainty, and the $v_{\text{pec}}$ values used to correct the measured redshifts of the SNe in Pantheon+ are evaluated in \cite{Peterson_2022} and \cite{Carr_2022}. If the process of modelling the large-scale velocity field of the universe out to high redshift has assumed a standard cosmological model, this may have an effect on model comparison, by artificially increasing the fit and evidence for F$\Lambda$CDM. By analysing the Pantheon+ data with and without the peculiar velocity corrections, we can explore whether any such bias appears in the relative Bayesian evidences of the models.
This concern is addressed in \cite{Carr_2022}, where parameter estimation is compared on the corrected and uncorrected velocities. They found that the tested parameters ($H_0$ and $w$) do not change appreciably.
In this study, we present a complementary analysis to the parameter estimation in \citet{Carr_2022}.
For this test, instead of using the fiducial case, wherein the redshifts are corrected for the peculiar velocities, we perform model comparison using the CMB frame redshifts with no correction. The relative evidences are shown in \cref{evidenceVPEC}, with the relative evidences from \cref{evidence1} over-plotted. Note that each set of tests (CMB frame and fiducial) is plotted relative to its own F$\Lambda$CDM, as the evidences of the CMB frame tests are much lower than the fiducial. While the distribution of models is roughly the same for both, in the uncorrected data all models perform slightly better as compared to F$\Lambda$CDM, with $|\Delta \log \mathcal{Z}| \sim 0.5$ smaller on average.
\begin{figure}
\includegraphics[scale = 0.9]{vpecevidence.pdf}
\caption{The relative evidences of each model with and without peculiar velocity corrections. Without peculiar velocity corrections no models fall beyond the moderate evidence line, and all models except Bimetric gravity find a better maximum fit relative to F$\Lambda$CDM than the first analysis \cref{evidence1}, shown by the black $\log \mathcal{L}_\textrm{max}$ dots.}
\label{evidenceVPEC}
\end{figure}
\subsection{Alternate Likelihood Models} \label{altlike}
The initial test is repeated, taking the same Hubble Flow and Calibrator SNe set, but now through each of the alternate likelihood models described in \cref{LKmodels}. For the generalised Gaussian, we run a test with each combination of the parameters free/fixed, giving seven alternate forms of the likelihood function, and these tests are referred to by their free parameters: $A$, $B$, $K$, $AB$, $AK$, $BK$, $ABK$. This gives nine total likelihoods, along with the regular Gaussian and the Student's t. The evidences of each cosmology/likelihood model pair are given in \cref{margtable}, and all cosmology-marginalised parameter values are given in \cref{scatterparametertable}.
For the set of generalised Gaussian models, the results of the model comparison in \cref{margevscatter} can be explained by the estimates of each of the parameters $A$, $B$, and $K$, as well as their relationships to each other. As shown in \cref{Agausscomp} $A$ is centered on $0.89 \pm 0.03$, finding a distinctly non-null value, and this results in a better fit. On the other hand, both $B$ and $K$ are tightly constrained to their null values: $ B = 1.02 \pm 0.005$ and $K = 0.02 \pm 0.02$. This means that the $B$, $K$, and $BK$ tests all underperform the Gaussian; they introduce an unnecessary parameter and are punished according to Occam's razor.
For a multivariate distribution with $N = 1448$, the covariance of the generalised Gaussian, given in \cref{ABcovariance}, is extremely sensitive to $B$, leading in this case to a very tight posterior centred on $B=1$. This can also be seen when considering that for a normal distribution, the distance term $(\Delta^T \bm{C}^{-1}\Delta)^B \sim N^B$, so any non-unity $B$ in the exponent has an exponential effect on the size of this term. As well as this, $A$ and $B$ have a degenerate effect on the scale of the covariance, which means that in the $AB$ test this degeneracy breaks the strong $A = 0.89$ preference that had benefited the model. This also results in the full $ABK$ model finding a disfavoured evidence, in combination with the data finding a null value of the skew.
Finally, $A$ and $K$ are almost independent in their effect on the model comparison. This is shown in the ratio of their evidences:
\begin{equation}
\frac{\mathcal{Z}_K}{\mathcal{Z}_\text{Gaussian}} \approx \frac{\mathcal{Z}_{AK}}{\mathcal{Z}_A}.
\end{equation}
This result is reassuring: $A$ is analogous to the covariances being uniformly smaller/larger, which is independent from the role of $K$ in skewing the distribution (especially since it skews the residuals after normalisation), so $K$'s effect on the evidence shouldn't change based on whether $A$ is free or fixed.
The t-distribution also finds an improvement over the Gaussian of $\Delta \log \mathcal{Z} = +2.46$, or a Bayes factor of $B = e^{2.46} = 11.7$. \Cref{numarg} shows the 1D posterior for $\nu$ on each cosmological model. If the posterior were one-sided and increasing towards higher $\nu$, this would suggest that the more Gaussian the distribution, the better the fit; but while the value of $\nu$ is not tightly constrained, it peaks at $\nu \sim 150$ on each model. The posterior and evidence show a preference for a non-Gaussian form.
\begin{figure}
\centering
\includegraphics{AGausscomp.pdf}
\caption{The posterior for $A$ and $H_0$ in A-scaled F$\Lambda$CDM (blue), compared with the fiducial analysis (orange). $A$ takes on a distinctly non-null value, and as a result the $H_0$ estimate is slightly tighter.}
\label{Agausscomp}
\end{figure}
\begin{figure}
\centering
\includegraphics{nu.pdf}
\caption{The 1D posterior distribution of $\nu$ on each cosmological model. The data shows a clear preference for lower $\nu$ values, with the posterior dying off where the distribution limits towards a Gaussian.}
\label{numarg}
\end{figure}
\subsection{Model Marginalisation} \label{modelmarg}
In the case of having both a selection of cosmological models and a selection of scatter models, we can marginalise out one or both of these choices to obtain either: \textbf{a)} Cosmology independent evidences for each scatter model or \textbf{b)} Scatter-model independent evidences for each cosmology.
Labeling the cosmological model as $\mathcal{M}_c$ and scatter model $\mathcal{M}_s$, we marginalise the evidence (as in \cref{Marginalisation}):
\begin{equation} \label{modelmarg2}
p(d|\mathcal{M}_s) = \sum_{\mathcal{M}_c} p(d|\mathcal{M}_c,\mathcal{M}_s)p(\mathcal{M}_c|\mathcal{M}_s),
\end{equation}
where, since $p(\mathcal{M}_c|\mathcal{M}_s)$ is not conditioned on the data, it takes its prior value, which is uniform.
This equation also holds with ``$c$'' and ``$s$'' swapped, to marginalise out the scatter model from the cosmological evidences. The results of these marginalisations are shown in \cref{margevcosmo,margevscatter}. Since all of the summing in \cref{modelmarg2} takes place outside of the $\log$, the results are weighted very strongly towards high-evidence models, i.e. the marginalised likelihood comparison does not differ appreciably from the likelihood comparison for F$\Lambda$CDM.
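As an illustrative sketch (not the analysis pipeline itself), the marginalisation in \cref{modelmarg2} with a uniform model prior reduces to a numerically stable log-sum-exp over the per-model log-evidences, which makes the domination by high-evidence models explicit. The example values are the Gaussian-column evidences of the first three cosmologies in \cref{margtable}:

```python
import numpy as np

def marginalised_log_evidence(log_z):
    """log p(d|M_s) = log[(1/N) * sum_c exp(log p(d|M_c, M_s))],
    computed stably by subtracting the maximum before exponentiating."""
    log_z = np.asarray(log_z, dtype=float)
    m = log_z.max()
    return m + np.log(np.exp(log_z - m).sum()) - np.log(len(log_z))

# Gaussian-column log-evidences for the first three cosmologies in the table:
marg = marginalised_log_evidence([811.2, 810.2, 808.9])
```

The sum is dominated by its largest term, so the marginalised value sits within roughly one log-unit of the single highest evidence, as noted in the text.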
We also calculate the total $\mathcal{D}_\textrm{KL}$ for each cosmology/scatter model, by considering the $\mathcal{D}_\textrm{KL}$ of the joint distribution over model and parameter space:
\begin{equation}
p(\theta , \mathcal{M}_s|d,\mathcal{M}_c) = p(\theta|d,\mathcal{M}_c,\mathcal{M}_s) p(\mathcal{M}_s|d,\mathcal{M}_c).
\end{equation}
From \cite{kroupa2023}, the joint $\mathcal{D}_\textrm{KL}$ can be broken down as:
\begin{equation}
\begin{split}
\mathcal{D}_\textrm{KL}(\theta,\mathcal{M}_s|\mathcal{M}_c) & = \mathcal{D}_\textrm{KL}(\mathcal{M}_s|\mathcal{M}_c) + \\ & \sum_{\mathcal{M}_s} p(\mathcal{M}_s|d,\mathcal{M}_c) \mathcal{D}_\textrm{KL}(\theta| \mathcal{M}_s, \mathcal{M}_c),
\end{split}
\end{equation}
i.e. the posterior-averaged $\mathcal{D}_\textrm{KL}$ from each scatter model, plus the $\mathcal{D}_\textrm{KL}$ of the (discrete) distribution of scatter models.
These $\mathcal{D}_\textrm{KL}$ are plotted on the marginalised evidence plots \cref{margevcosmo,margevscatter}. For the cosmological models, each model finds the same fit as F$\Lambda$CDM, and the $\mathcal{D}_\textrm{KL}$s line up at $\Delta \log \mathcal{Z} \approx -0.5$ just as in the Gaussian case.
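The decomposition above can be sketched numerically. This is a minimal illustration with hypothetical model probabilities and per-model $\mathcal{D}_\textrm{KL}$ values, not the paper's computation:

```python
import numpy as np

def joint_dkl(post_model_probs, prior_model_probs, dkl_params):
    """D_KL(theta, M_s) = D_KL over the discrete model space plus the
    posterior-weighted average of the per-model parameter D_KL."""
    p = np.asarray(post_model_probs, dtype=float)
    q = np.asarray(prior_model_probs, dtype=float)
    d = np.asarray(dkl_params, dtype=float)
    mask = p > 0  # models with zero posterior probability contribute nothing
    discrete = np.sum(p[mask] * np.log(p[mask] / q[mask]))
    return discrete + np.sum(p * d)

# Hypothetical example: two scatter models, uniform prior, equal posterior odds,
# so the discrete term vanishes and only the average parameter D_KL remains.
joint = joint_dkl([0.5, 0.5], [0.5, 0.5], [1.0, 3.0])
```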
Fully marginalising over both cosmology and scatter models should give the best estimate of $H_0$ that SNe are able to offer, using the data to the fullest extent. Marginalising over models gives:
\begin{equation} \label{Marginalisation}
p(H_0|d) = \sum_{\mathcal{M}_c,\mathcal{M}_s} p(H_0|d,\mathcal{M}_c,\mathcal{M}_s) p(\mathcal{M}_c,\mathcal{M}_s|d).
\end{equation}
Over the marginalised posterior, we find a value of $H_0 = 73.67\pm0.99$ \si{km.s^{-1}.Mpc^{-1}}, in agreement with the results of the SH$_0$ES collaboration \cite{Riess2022}, $H_0 = 73.04 \pm 1.04$
\si{km.s^{-1}.Mpc^{-1}}, although with slightly tighter constraints owing to the more general treatment of the distribution of the residuals.
All other parameters are similarly marginalised, and given in \cref{margparametertable,scatterparametertable}, along with the marginalised evidences and odds.
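The fully marginalised posterior of \cref{Marginalisation} is a mixture of the per-model posteriors, weighted by the posterior model probabilities. A minimal sketch, assuming each model's posterior is available as an array of samples (e.g. from nested sampling chains) and a uniform model prior:

```python
import numpy as np

rng = np.random.default_rng(0)

def marginalise_posterior(samples_per_model, log_z, n_draws=10000):
    """Draw from p(H0|d) = sum_M p(H0|d,M) p(M|d): allocate draws to models
    with probability proportional to their evidence (uniform model prior),
    then resample each model's posterior samples."""
    log_z = np.asarray(log_z, dtype=float)
    weights = np.exp(log_z - log_z.max())
    weights /= weights.sum()
    counts = rng.multinomial(n_draws, weights)
    draws = [rng.choice(np.asarray(s), size=c, replace=True)
             for s, c in zip(samples_per_model, counts)]
    return np.concatenate(draws)
```

With a large evidence gap the mixture is dominated by the favoured model, matching the log-sum-exp behaviour noted for the marginalised evidences.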
\begin{table*}
\centering
\caption{Evidences for each cosmology/likelihood model pair. The marginalised evidences are given for cosmology- and scatter-independent model comparisons, as described in \cref{modelmarg}.}
\begin{tabular}{@{\extracolsep{\fill}} l c c c c c c c c c| c }
\hline
Model & Gaussian & $A$ & B & K & $AB$ & $AK$ & $BK$ & $ABK$ & Student's T & \textbf{Scatter Independent} \\ [0.5ex]
\hline
Flat $\Lambda$CDM & 811.2 & 813.5 & 810.9 & 810.3 & 811.0 & 812.5 & 809.9 & 810.1 & 813.7 & 812.4 \\
$\Lambda$CDM & 810.2 & 812.5 & 809.8 & 809.2 & 809.8 & 811.3 & 808.8 & 808.9 & 812.6 & 811.3 \\
Flat $w$CDM & 808.9 & 811.2 & 808.7 & 808.1 & 808.6 & 810.2 & 807.6 & 807.8 & 811.4 & 810.1 \\
$w$CDM & 808.9 & 811.2 & 808.5 & 808.1 & 808.7 & 810.1 & 807.6 & 807.7 & 811.3 & 810.0 \\
Bimetric Gravity & 809.6 & 811.8 & 809.3 & 808.7 & 809.3 & 810.8 & 808.2 & 808.4 & 812.1 & 810.7 \\
Slow Roll Inflation & 809.1 & 811.4 & 808.8 & 808.2 & 808.8 & 810.3 & 807.8 & 808.0 & 811.5 & 810.3 \\
Algebraic Thawing & 808.9 & 811.2 & 808.5 & 808.1 & 808.6 & 810.2 & 807.6 & 807.8 & 811.3 & 810.0 \\
Growing Neutrino Mass & 810.5 & 812.9 & 810.2 & 809.6 & 810.3 & 811.7 & 809.1 & 809.3 & 813.0 & 811.7 \\
Dark Energy Transition & 808.7 & 811.0 & 808.8 & 807.7 & 808.3 & 809.8 & 807.4 & 807.4 & 811.1 & 809.8 \\ \hline
\textbf{Cosmology Independent} & 810.0 & 812.3 & 809.7 & 809.0 & 809.7 & 811.2 & 808.6 & 808.8 & 812.4 & -- \\
\hline
\end{tabular}
\label{margtable}
\end{table*}
\begin{figure}
\includegraphics{cosmarg.pdf}
\caption{The relative evidences of each cosmology, marginalised over the scatter model. F$\Lambda$CDM has the highest evidence, providing similar results to the Gaussian analysis (\cref{evidence1}). The black lines show the joint $\mathcal{D}_\textrm{KL}$ of the scatter model/parameters.}
\label{margevcosmo}
\end{figure}
\begin{figure}
\includegraphics{scatmarg.pdf}
\caption{The relative evidences of each scatter model, marginalised over the cosmological model. Two likelihood functions, the $A$-scaled Gaussian and the Student's t-distribution are favoured over the Gaussian. The black lines show the joint $\mathcal{D}_\textrm{KL}$ of the cosmological model/parameters.}
\label{margevscatter}
\end{figure}
\subsection{Transitional and Changing Absolute Magnitudes} \label{MBTRANS}
The dark energy transition model attempts to resolve the Hubble tension by inducing a step-like transition in $H_0$ at low redshift via a spike in $w(z)$. This model is highly phenomenological, and the required change in $w(z)$ is theoretically permitted, but not theoretically motivated. Since $M_B$ and $H_0$ have a degenerate effect on our Hubble residuals, a step-like change in $H_0$ can also be realised by a changing absolute magnitude. We test two additional models, both working in a F$\Lambda$CDM cosmology:
\begin{equation}
\label{doubleMBequation}
M_B =
\begin{cases}
M_{B1} & \text{if } z \leq 0.1 \\
M_{B2} & \text{if } z > 0.1
\end{cases}
\end{equation}
\begin{equation}
\label{linearMBequation}
M_B = M_{B1} + \frac{z}{z_{\textrm{max}}}\times(M_{B2} - M_{B1}).
\end{equation}
\Cref{doubleMBequation} mimics the effect of the dark energy transition model on $H_0$, a sharp step in $M_B$ at $z = 0.1$. This adds an additional step to our distance ladder; the Cepheid distances constrain $M_{B1}$ independently of $H_0$, so that the ``near'' supernovae can constrain $H_0$, which is used to constrain $M_{B2}$.
\Cref{linearMBequation} is a first-order test for an evolving $M_B$, including a steady $M_B - z$ gradient. \cite{Kistler_2013} and \cite{Meng2011} both suggest that SNe~Ia evolve with metallicity (which can change the amount of radioactive material in the explosion, and hence the luminosity of the SN~Ia). Since SNe~Ia measure dark energy via relative distance measurements, any intrinsic evolution with cosmic time must be accounted for. The triangle plots for both models are given in \cref{doublemb,linearMB}, as well as the evidence values relative to F$\Lambda$CDM run on the same dataset.
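The two $M_B$ models of \cref{doubleMBequation,linearMBequation} can be written directly as functions of redshift. This is a sketch for illustration only; the example magnitudes are hypothetical values near the Cepheid-calibrated magnitude, and $z_{\textrm{max}}$ is passed as an argument rather than fixed to the sample's value:

```python
import numpy as np

def mb_step(z, mb1, mb2, z_t=0.1):
    """Step model: M_B = M_B1 for z <= z_t, and M_B2 for z > z_t."""
    return np.where(np.asarray(z) <= z_t, mb1, mb2)

def mb_linear(z, mb1, mb2, z_max):
    """Linear model: M_B drifts from M_B1 at z = 0 to M_B2 at z = z_max."""
    return mb1 + (np.asarray(z) / z_max) * (mb2 - mb1)

# Hypothetical magnitudes bracketing the Cepheid-calibrated value:
near_and_far = mb_step([0.05, 0.5], -19.3, -19.1)
```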
Both dynamic $M_B$ models come out with lower evidence than F$\Lambda$CDM, and similarly to the dark energy models have their additional parameters constrained to their null values. The data shows no support for a dynamic $M_B$.
The $M_B$ transition model is ruled out more strongly than the dark energy transition model, at $\Delta \log \mathcal{Z} = -4.0$ relative to F$\Lambda$CDM, likely because the prior range of $M_{B1} \in [-20,-18]$ incurs a much larger Occam penalty than that of the dark energy transition parameter $\delta \in [-0.4,0.6]$.
The lack of an $M_B$ transition shown in \cref{doublemb} can be attributed to the same reason a dark energy transition is not favoured by the evidence; the effect may not occur at $z_t = 0.1$. However, due to the high peculiar velocity error in the lower end of this data set, we are unable to test the suggested lower value of $z_t = 0.02$.
For the linear model, the $M_B$ gradient is not tightly constrained, as its effect is reduced at lower redshifts (where much of the data lies), and it shows some degeneracy with $\Omega_m$ in \cref{linearMB}. This suggests that neglecting the effects of a magnitude gradient could lead to over- or underestimating the matter density. However, SNe~Ia evolution is likely to be reflected in the light curves of the SNe, which is precisely what the Pantheon+ data have been calibrated to standardise. Therefore, the lack of an effect here suggests that the $M_B$ corrections used in the Pantheon+ catalogue have already corrected for a first-order $M_B$ gradient. The negative $\Delta \log \mathcal{Z} = -1.1$ of this model reflects the lack of improvement in fit, and the weakness of the evidence difference reflects the weakness of the constraint on the $M_B$ gradient (as in the model comparison discussion).
Importantly, for both of these models the $H_0$ estimate remains the same, and does not shift towards the Planck value (although the uncertainty in $H_0$ is doubled by the inclusion of the $M_{B2}$ parameter). This paper finds no evidence to suggest that the Hubble tension can be resolved via a step-like or linear $M_B - z$ relationship.
\section{Discussion}\label{sec:discussion}
\begin{figure}
\centering
\includegraphics{doubleMB.pdf}
\caption{The posterior for an $M_B$ transition model, showing the near and far magnitudes (split about redshift $z = 0.1$). Both values are tightly centred on the Cepheid-calibrated value.}
\label{doublemb}
\end{figure}
\renewcommand{\arraystretch}{1.1}
\begin{table*}
\centering
\caption{Parameter estimations and evidences for each cosmological model, now marginalised over the scatter model. All parameter values are quoted as median $\pm$ Bayesian credibility interval, and again the (marginalised) odds and evidences of each model are listed, with odds given as $\mathcal{Z}\colon \mathcal{Z}_0$.}
\begin{tabular}{@{\extracolsep{\fill}} l c c c c | c c}
\hline
Model & $H_0$ & $\Omega_m$ & Parameter 2 & Parameter 3 & $\Delta \log \mathcal{Z}$ & Odds \\ [0.5ex]
\hline
Flat $\Lambda$CDM & \err{73.7}{0.93}{1.0}& \err{0.33}{0.02}{0.02}& -- & -- & 0 & -- \\
$\Lambda$CDM & \err{73.67}{0.96}{0.99}& \err{0.32}{0.05}{0.05}& $\Omega_k =$ \err{0.03}{0.12}{0.12}& -- & -1.11 & $<$1:3 \\
Flat $w$CDM & \err{73.69}{0.97}{0.99}& \err{0.32}{0.06}{0.06}& $w =$\err{-0.98}{0.16}{0.14}& -- & -2.30 & $<$1:10 \\
$w$CDM &\err{73.71}{0.98}{1.0}& \err{0.29}{0.07}{0.06}& $\Omega_k =$ \err{0.09}{0.32}{0.22}& $w =$\err{-1.09}{0.52}{0.28} & -2.37 & $<$1:10 \\
Bimetric Gravity & \err{73.77}{0.97}{0.97}& \err{0.35}{0.03}{0.04}& $B_1 =$ \err{0.55}{0.22}{0.55}& -- & -1.65 & $<$1:5 \\
Slow Roll Inflation & \err{73.69}{0.96}{1.01}& \err{0.33}{0.05}{0.04}& $\delta w_0 =$\err{0.01}{0.15}{0.13}& -- & -2.13 & $<$1:8 \\
Algebraic Thawing & \err{73.7}{0.97}{1.0}& \err{0.33}{0.05}{0.05}& $w_0 =$ \err{-1.0}{0.16}{0.14}& $p = $ \err{0.55}{2.54}{2.4} & -2.36 & $<$1:10 \\
Growing Neutrino Mass & \err{73.58}{0.94}{1.0}& \err{0.26}{0.07}{0.05}& $\Omega_e = $ \err{0.13}{0.11}{0.06}& $\Omega_\nu =$ \err{0.1}{0.04}{0.1} & -0.69 & $<$1:2 \\
Dark Energy Transition & \err{73.61}{1.43}{1.37}& \err{0.33}{0.03}{0.04}& $\delta = $ \err{0.00}{0.03}{0.03}& -- & -2.56 & $<$1:13 \\ [1ex]
\hline
\end{tabular}
\label{margparametertable}
\end{table*}
\renewcommand{\arraystretch}{1}
\begin{figure}
\centering
\includegraphics{linearMB.pdf}
\caption{The posterior for a linear $M_B$ model. The figure shows a degeneracy between the $M_B$ gradient and $\Omega_m$, approximately tripling the size of the $\Omega_m$ error bounds compared to F$\Lambda$CDM.}
\label{linearMB}
\end{figure}
\subsection{Alternate Likelihood Distributions}
From the forms of the various likelihood functions in \cref{gaussianlikelihood,ABGGD,ABKGGD,stulikelihood}, each distribution is (naturally) peaked by minimising the residuals $\Delta^T \bm{C}^{-1} \Delta$ (except for the skewed distributions with $K\neq 0$, which have their peak shifted only by a small factor). As a result, the choice of likelihood function does not change the central value of the parameter estimation, and therefore the conclusions of the model comparison do not change across different scatter models. Additionally, since all alternate cosmologies converge to their F$\Lambda$CDM limit, the scatter model parameters are not sensitive to the specific cosmological model. This leads to an independence in this analysis between the scatter models and the cosmological models.
The two scatter models with the greatest evidence, a scaled Gaussian with only $A$ as a free parameter and the Student's t-distribution, appear to succeed for unrelated reasons, suggesting there could be more than one factor contributing to the improvement of fit in each case.
It is clear why the $A$-scaled Gaussian succeeds: from \cref{residuals}, the Hubble residuals are packed more tightly than the Pantheon+ covariance predicts, and therefore tightening the distribution with $A = 0.89$ allows the smaller residuals to contribute more to the likelihood and find a better fit. \Cref{Agausscomp} also shows that uniformly scaling the Pantheon+ covariances down makes the uncertainty on other model parameters smaller, a natural result considering that for a uniform prior, the likelihood and posterior are proportional. $A$-scaled F$\Lambda$CDM achieves the tightest $H_0$ constraint of any model, at $\pm 0.93$. This, more clearly than any marginalised result, displays the usefulness of treating the residuals properly in achieving better parameter estimates. As motivation to look into the root of this result: if we had an a priori reason to insert this scaled covariance matrix (and ignore all of the other tested likelihoods), then we would gain the full benefit of the lower variances and could, in good faith, report this even tighter $H_0$ estimate.
The t-distribution is less readily interpreted. In a full Bayesian treatment, instead of using the Pantheon+ covariance matrix as given, we would incorporate the observations that led to the creation of this covariance matrix (such as SNe simulations), as well as a prior for the covariance matrix, into our analysis. This would allow us to marginalise out the true covariance and construct a posterior predictive distribution, i.e. a likelihood that is conditioned on the previously observed SNe data. In the case that the prior for the covariance matrix was an uninformative inverse-Wishart distribution, the correct likelihood for the data would be a multivariate t-distribution, where the shape parameter $\Sigma$ is the sample sum of squares of the input data. In this case the degrees of freedom $\nu = n-k$, where $n$ is the number of observations used to construct the covariance matrix, and $k$ is the size of the covariance matrix (1448).
While a 1D t-distribution with degrees of freedom $\nu \geq 10$ is indistinguishable from a Gaussian by eye, in the multivariate case much higher $\nu$ values are relevant, due to the higher dimensionality. This can be seen in the $\nu+k$ term in the exponent of \cref{stulikelihood}. Over 1448 data points, this difference can be resolved from the standard Gaussian distribution, and a statistical preference for $\nu \sim 150$ can be identified. The construction described above is not exactly how this inference runs, but treating the covariance matrix as a sample covariance matrix estimated as a sum of deviations, and accounting for the uncertainty in this via a t-distribution, is preferred by the evidence, in a cosmology-independent way, by a Bayes factor of $e^{2.46} \approx 12$. Although the bound is not very tight, the posterior on $\nu$ implies that the covariance matrix was effectively estimated with $n = k+\nu \sim 2000$ observations. For a full derivation of these results, see Chapter 3.6 of \cite{BDA}.
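The Gaussian and multivariate-t log-likelihoods discussed here differ only through the $\nu$-dependent terms, which at $k = 1448$ remain resolvable even for $\nu \sim 150$. The following is a self-contained sketch of the two densities in their standard textbook forms, not the analysis code; the small test covariance is hypothetical:

```python
import numpy as np
from math import lgamma, log, pi

def log_gauss(delta, cov):
    """Multivariate Gaussian log-density of residual vector delta
    with covariance matrix cov."""
    k = len(delta)
    chi2 = float(delta @ np.linalg.solve(cov, delta))
    logdet = np.linalg.slogdet(cov)[1]
    return -0.5 * (k * log(2 * pi) + logdet + chi2)

def log_mvt(delta, cov, nu):
    """Multivariate Student's t log-density with scale matrix cov and
    nu degrees of freedom; recovers log_gauss as nu -> infinity."""
    k = len(delta)
    chi2 = float(delta @ np.linalg.solve(cov, delta))
    logdet = np.linalg.slogdet(cov)[1]
    return (lgamma((nu + k) / 2) - lgamma(nu / 2)
            - 0.5 * k * log(nu * pi) - 0.5 * logdet
            - 0.5 * (nu + k) * np.log1p(chi2 / nu))
```

Even in two dimensions the two densities agree to high accuracy once $\nu$ is very large, while a finite $\nu$ leaves a small but resolvable offset; over 1448 correlated data points such offsets accumulate into the evidence difference reported above.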
\renewcommand{\arraystretch}{1.1}
\begin{table*}
\centering
\caption{Parameter estimations and evidences for each scatter model, marginalised over the cosmological model. All parameter values are quoted as median $\pm$ Bayesian credibility interval, and the marginalised odds and evidences of each scatter model are listed, with odds given as $\mathcal{Z}\colon \mathcal{Z}_0$.}
\begin{tabular}{@{\extracolsep{\fill}} l c c c | c c}
\hline
Model & Parameter 1 & Parameter 2 & Parameter 3 & $\Delta \log \mathcal{Z}$ & Odds \\ [0.5ex]
\hline
Gaussian & -- & -- & -- & 0 & -- \\
$A$ & $A$ = \err{0.89}{0.03}{0.03}& -- & -- & +2.29 & >10:1\\
$B$ & $B$=\err{1.02}{0.005}{0.005}& -- & -- & -0.31 & <1:1.3\\
$K$ & $K$=\err{0.02}{0.02}{0.02}& -- & -- &-0.94 & <1:3\\
$AB$ & $A$=\err{1.31}{0.68}{0.27}& $B$=\err{1.05}{0.06}{0.03}& -- &-0.27 & <1:1.3\\
$AK$& $A$=\err{0.89}{0.03}{0.03}& $K$=\err{0.02}{0.02}{0.02}& -- & +1.21 & >3:1\\
$BK$& $B$=\err{1.02}{0.005}{0.005}& $K$=\err{0.02}{0.02}{0.02}& -- & -1.34 & <1:4\\
$ABK$& $A$=\err{1.42}{0.58}{0.26}& $B$=\err{1.06}{0.05}{0.03}& $K$=\err{0.02}{0.02}{0.02}& -1.17 & <1:3 \\
Student's t & $\log_{10}(\nu)$=\err{2.09}{0.72}{0.77}& -- & -- & +2.46 & >12:1\\
[1ex]
\hline
\end{tabular}
\label{scatterparametertable}
\end{table*}
\renewcommand{\arraystretch}{1}
\subsection{Model Comparison}\label{modelcompdiss}
From \cref{margevcosmo}, the Occam penalty approximation \cref{occampenaltyapprox} holds; the difference between where the black $\mathcal{D}_\textrm{KL}$ lines end ($\langle \log \mathcal{L}\rangle_\mathcal{P}$) and the black $\log \mathcal{L}_{\textrm{max}}$ dots is in all cases $\approx 1/2$, and each model has $\Delta \hat{d} \approx 1$.
In both the Gaussian and the marginalised analysis (\cref{parametertable,margparametertable}) all parameters lie within $1\sigma$ of their null value, i.e. the value that turns the theory back into F$\Lambda$CDM, and therefore all models find the same fit as F$\Lambda$CDM. The alternate dark energy models do not achieve a significantly better fit on the data, and due to the Occam penalty on their expanded prior space have lower evidence than F$\Lambda$CDM in all cases, where the scale of the evidence difference is determined solely by the $\mathcal{D}_\textrm{KL}$.
With some simplifying assumptions we can calculate the theoretical $\Delta \log \mathcal{Z}$ of a future model comparison. Consider a one-parameter extension to F$\Lambda$CDM, where the posterior of the additional parameter is still centred on its null value (so that $\Delta \log \mathcal{L}_{\textrm{max}} \approx 0$), and the additional parameter $\theta^*$ is approximately independent of $\Omega_m$ (so that $\Delta \mathcal{D}_\textrm{KL} \approx \mathcal{D}_\textrm{KL}(\theta^*)$). For simplicity, give this parameter a uniform prior of width $b$ and a Gaussian posterior with variance $\sigma^2$. Additionally, assume that the prior is sufficiently wide that $\mathcal{D}_\textrm{KL}$ can be effectively evaluated over $\pm\infty$. Calculating $\mathcal{D}_\textrm{KL}(\theta^*)$ analytically, we find:
\begin{equation}
\int P(\theta^*)\log(P(\theta^*)/\pi (\theta^*)) d \theta^* = \log(b/\sigma) -\frac{1}{2}\log (2\pi) -\frac{1}{2}.
\end{equation}
With $\Delta \hat{d} = 1$ this provides a difference in evidence of
\begin{equation} \label{approxdeltalogz}
\begin{split}
\Delta \log \mathcal{Z} & = (\langle \log \mathcal{L}\rangle_\mathcal{P}-\mathcal{D}_\textrm{KL})_1-(\langle \log \mathcal{L}\rangle_\mathcal{P}-\mathcal{D}_\textrm{KL})_0 \\ & \approx - \mathcal{D}_\textrm{KL}(\theta*) - \frac{1}{2} = \frac{1}{2}\log (2\pi)-\log(b/\sigma).
\end{split}
\end{equation}
So for the well-behaved, one-parameter extensions that centre around F$\Lambda$CDM (F$w$CDM, $\Lambda$CDM, Slow Roll), the Bayes factor has a clear relationship to the constraint on the additional parameter: $B_{01} \propto \sigma^{-1}$. While this result is only exact in a highly idealised case, as a heuristic it tells us how much more precise future surveys need to become to rule out parameters such as curvature with SNe alone. The strength of our belief in discarding an extension to our base model is directly proportional to how tightly we can constrain its added parameters to zero.
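The closed-form $\mathcal{D}_\textrm{KL}$ above, and the resulting approximate $\Delta \log \mathcal{Z}$, can be checked numerically. This sketch verifies the analytic expression against a Monte Carlo estimate under the same idealised assumptions (a Gaussian posterior centred in a wide uniform prior); the prior width and posterior scale are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def dkl_closed_form(sigma, b):
    """D_KL of a Gaussian posterior N(0, sigma^2) against a uniform prior
    of width b (prior wide enough to integrate over +/- infinity)."""
    return np.log(b / sigma) - 0.5 * np.log(2 * np.pi) - 0.5

def approx_delta_log_z(sigma, b):
    """Approximate evidence change of a one-parameter extension whose extra
    parameter stays at its null value: Delta log Z = -D_KL - 1/2."""
    return 0.5 * np.log(2 * np.pi) - np.log(b / sigma)

def dkl_monte_carlo(sigma, b, n=200000):
    """Monte Carlo check of the closed form: D_KL = E_P[log P - log prior]."""
    x = rng.normal(0.0, sigma, n)
    log_p = -0.5 * np.log(2 * np.pi * sigma**2) - x**2 / (2 * sigma**2)
    return np.mean(log_p + np.log(b))
```

Halving $\sigma$ at fixed $b$ lowers $\Delta \log \mathcal{Z}$ by $\log 2$, which is the $B_{01} \propto \sigma^{-1}$ scaling quoted above.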
\subsection{Peculiar Velocity Corrections}
From the evidence diagram \cref{evidenceVPEC}, the inference using CMB-frame redshifts does not distinguish between the dark energy models as well as the inference with peculiar velocity corrections added to the redshifts.
For each model, the Bayesian evidence for the data with peculiar velocity corrections is significantly greater, by $\Delta \log \mathcal{Z} \sim 10$, than for the data without corrections. The smaller evidence differences in the uncorrected analysis could be caused by the data having less constraining power, due to the additional noise: from \cref{approxdeltalogz}, if additional parameters cannot be constrained as tightly, then evidence differences will be smaller.
In the first analysis (\cref{evidence1}), the black $\log \mathcal{L}_{\textrm{max}}$ dots line up with F$\Lambda$CDM; the alternate models provide no improvement in fit. As discussed before, because all alternate models contain F$\Lambda$CDM, this implies that their additional parameters are all being constrained around their null values. For the uncorrected data, however, all models except for Bimetric gravity achieve a slightly better fit, and the evidence difference is no longer just the penalty terms. The additional scatter introduced by removing peculiar velocity corrections is fit well by all models except Bimetric gravity, whereas the scatter in the corrected analysis does not favour any model over F$\Lambda$CDM. As a result, the growing neutrino mass model is now equally favoured by the evidence, at $\Delta\log\mathcal{Z} = +0.11$. In order for a model to achieve an improvement in fit, one of its parameters must stray from its null value, displayed most clearly with curvature in $\Lambda$CDM. \Cref{vpeccomp} shows the posterior of $\Lambda$CDM with and without velocity corrections, where there is a distinct preference for a non-zero curvature in the uncorrected analysis.
\begin{figure}
\includegraphics{vpeccomp.pdf}
\caption{The scatter added by removing peculiar velocity corrections appears to be better fit by a non-zero curvature, as shown in the blue posterior. This is the reason that the evidence for curved $\Lambda$CDM shifts closer to the default cosmology when run on the uncorrected data.}
\label{vpeccomp}
\end{figure}
\section{Conclusions}\label{sec:conclusion}
In this paper we have considered nine dark energy models, covering $\Lambda$CDM and its extensions, dynamical dark energy theories, and a theory of modified gravity. We find that Flat $\Lambda$CDM is preferred by the data over all models (\cref{margevcosmo}), and this preference holds independent of the form of the likelihood function.
When removing peculiar velocity redshift corrections, the growing neutrino mass model is favoured ahead of F$\Lambda$CDM (although not at any significance), and for all other models the strength of the model comparison is weakened without the corrections. This could simply be caused by a lack of constraining power in the uncorrected data; however, the values of $\Delta \log \mathcal{L}_{\textrm{max}}$ suggest that the models find a different fit on the uncorrected redshifts. This could suggest that the assumed F$\Lambda$CDM model used in velocity field reconstruction introduces a slight bias towards F$\Lambda$CDM when performing model comparison.
For the handful of alternate likelihood functions tested, we find evidence to suggest that the covariance matrix should be uniformly scaled down, or equal evidence that the residuals should be modelled with a Student's t-distribution. The success of these two distributions appears to contradict one another: the t-distribution succeeds because it correctly models the additional uncertainty in the estimation of the covariance matrix, whereas the $A$-only test succeeds by uniformly reducing the uncertainty. We believe that a more precise formulation of the t-distribution on this data could clarify the emergence of this non-Gaussian result, and reconcile this apparent contradiction.
As a result of this treatment of the residuals with a more general class of likelihood functions, the uncertainty on the Hubble constant is reduced by $5\%$. We have tightened the $H_0$ constraint with no new data by more appropriately modelling the scatter, and regardless of the increase in precision, the scatter-independent parameter estimates of \cref{margparametertable} are a more honest reflection of the data, as they marginalise out our uncertainty about the form of the likelihood function. Future work into likelihood-free inference could make it possible to constrain $H_0$, and other parameters, even further. This would improve not only current estimation and model comparison, but also set us up well for the next generation of telescopes, which are expected to increase the size of SNe catalogues by an order of magnitude \citep{rose2021}.
It is worth stressing that all of the conclusions of this paper are based on SNe~Ia alone, achieving precision cosmology results without high-redshift constraints, and evaluating the cosmologies through only a single mechanism (luminosity distances). Incorporating other refined probes (e.g. BAOs, the CMB) into this improved analysis of SNe could lead to a similar increase in precision to that found in this paper, across a much broader set of parameters and cosmological tests.
\section*{Acknowledgements}
This work was performed using the Cambridge Service for Data Driven Discovery (CSD3), part of which is operated by the University of Cambridge Research Computing on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The DiRAC component of CSD3 was funded by BEIS capital funding via STFC capital grants ST/P002307/1 and ST/R002452/1 and STFC operations grant ST/R00689X/1. DiRAC is part of the National e-Infrastructure.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section*{Data Availability}
The Pantheon+ dataset is publicly available on the Pantheon+ repository, at \url{https://github.com/PantheonPlusSH0ES/DataRelease/}.
The nested sampling chains underlying this article are available on Zenodo, at \url{https://doi.org/10.5281/zenodo.10026539}.
%%%%%%%%%%%%%%%%%%%% REFERENCES %%%%%%%%%%%%%%%%%%
% The best way to enter references is to use BibTeX:
\bibliographystyle{mnras}
\bibliography{nongaussianityinSNE}
%%%%%%%%%%%%%%%%% APPENDICES %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Don't change these lines
\bsp % typesetting comment
\label{lastpage}
\end{document}
```
4. **Bibliographic Information:**
```bbl
\begin{thebibliography}{}
\makeatletter
\relax
\def\mn@urlcharsother{\let\do\@makeother \do\$\do\&\do\#\do\^\do\_\do\%\do\~}
\def\mn@doi{\begingroup\mn@urlcharsother \@ifnextchar [ {\mn@doi@}
{\mn@doi@[]}}
\def\mn@doi@[#1]#2{\def\@tempa{#1}\ifx\@tempa\@empty \href
{http://dx.doi.org/#2} {doi:#2}\else \href {http://dx.doi.org/#2} {#1}\fi
\endgroup}
\def\mn@eprint#1#2{\mn@eprint@#1:#2::\@nil}
\def\mn@eprint@arXiv#1{\href {http://arxiv.org/abs/#1} {{\tt arXiv:#1}}}
\def\mn@eprint@dblp#1{\href {http://dblp.uni-trier.de/rec/bibtex/#1.xml}
{dblp:#1}}
\def\mn@eprint@#1:#2:#3:#4\@nil{\def\@tempa {#1}\def\@tempb {#2}\def\@tempc
{#3}\ifx \@tempc \@empty \let \@tempc \@tempb \let \@tempb \@tempa \fi \ifx
\@tempb \@empty \def\@tempb {arXiv}\fi \@ifundefined
{mn@eprint@\@tempb}{\@tempb:\@tempc}{\expandafter \expandafter \csname
mn@eprint@\@tempb\endcsname \expandafter{\@tempc}}}
\bibitem[\protect\citeauthoryear{Bernardo \& Smith}{Bernardo \&
Smith}{2009}]{bernado2009}
Bernardo J.~M., Smith A.~F., 2009, Bayesian theory.
~ Vol. 405, John Wiley \& Sons
\bibitem[\protect\citeauthoryear{{Brout} et~al.}{{Brout}
et~al.}{2022}]{brout_2022}
{Brout} D., et~al., 2022, \mn@doi [\apj] {10.3847/1538-4357/ac8e04}, \href
{https://ui.adsabs.harvard.edu/abs/2022ApJ...938..110B} {938, 110}
\bibitem[\protect\citeauthoryear{Carr, Davis, Scolnic, Said, Brout, Peterson
\& Kessler}{Carr et~al.}{2022}]{Carr_2022}
Carr A., Davis T.~M., Scolnic D., Said K., Brout D., Peterson E.~R.,
Kessler R., 2022, \mn@doi [Publications of the Astronomical Society of
Australia] {10.1017/pasa.2022.41}, 39
\bibitem[\protect\citeauthoryear{Dainotti, Bargiacchi, Bogdan, Capozziello \&
Nagataki}{Dainotti et~al.}{2023}]{Dainotti}
Dainotti M.~G., Bargiacchi G., Bogdan M., Capozziello S., Nagataki S.,
2023, Reduced uncertainties up to 43\% on the Hubble constant and the matter
density with the SNe Ia with a new statistical analysis (\mn@eprint {arXiv}
{2303.06974})
\bibitem[\protect\citeauthoryear{{Dhawan}, {Brout}, {Scolnic}, {Goobar},
{Riess} \& {Miranda}}{{Dhawan} et~al.}{2020}]{Dhawan2020}
{Dhawan} S., {Brout} D., {Scolnic} D., {Goobar} A., {Riess} A.~G.,
{Miranda} V., 2020, \mn@doi [\apj] {10.3847/1538-4357/ab7fb0}, \href
{https://ui.adsabs.harvard.edu/abs/2020ApJ...894...54D} {894, 54}
\bibitem[\protect\citeauthoryear{Ellis \& van Elst}{Ellis \& van
Elst}{2008}]{cosmologicalmodels}
Ellis G. F.~R., van Elst H., 2008, Cosmological models (Carg\`{e}se lectures
1998) (\mn@eprint {arXiv} {gr-qc/9812046})
\bibitem[\protect\citeauthoryear{{Fardon}, {Nelson} \& {Weiner}}{{Fardon}
et~al.}{2004}]{Fardon2004}
{Fardon} R., {Nelson} A.~E., {Weiner} N., 2004, \mn@doi [\jcap]
{10.1088/1475-7516/2004/10/005}, \href
{https://ui.adsabs.harvard.edu/abs/2004JCAP...10..005F} {2004, 005}
\bibitem[\protect\citeauthoryear{{Goobar} \& {Leibundgut}}{{Goobar} \&
{Leibundgut}}{2011}]{Goobar2011}
{Goobar} A., {Leibundgut} B., 2011, \mn@doi [Annual Review of Nuclear and
Particle Science] {10.1146/annurev-nucl-102010-130434}, \href
{https://ui.adsabs.harvard.edu/abs/2011ARNPS..61..251G} {61, 251}
\bibitem[\protect\citeauthoryear{Handley}{Handley}{2021}]{Handley_2021}
Handley W., 2021, \mn@doi [Physical Review D] {10.1103/physrevd.103.l041301},
103
\bibitem[\protect\citeauthoryear{Handley \& Lemos}{Handley \&
Lemos}{2019}]{complexity}
Handley W., Lemos P., 2019, \mn@doi [Phys. Rev. D]
{10.1103/PhysRevD.100.023512}, 100, 023512
\bibitem[\protect\citeauthoryear{Handley, Hobson \& Lasenby}{Handley
et~al.}{2015a}]{Polychord2}
Handley W.~J., Hobson M.~P., Lasenby A.~N., 2015a, \mn@doi [Monthly Notices
of the Royal Astronomical Society: Letters] {10.1093/mnrasl/slv047}, 450, L61
\bibitem[\protect\citeauthoryear{{Handley}, {Hobson} \& {Lasenby}}{{Handley}
et~al.}{2015b}]{polychord}
{Handley} W.~J., {Hobson} M.~P., {Lasenby} A.~N., 2015b, \mn@doi [\mnras]
{10.1093/mnras/stv1911}, \href
{https://ui.adsabs.harvard.edu/abs/2015MNRAS.453.4384H} {453, 4384}
\bibitem[\protect\citeauthoryear{Hergt, Handley, Hobson \& Lasenby}{Hergt
et~al.}{2021}]{Hergt_2021}
Hergt L., Handley W., Hobson M., Lasenby A., 2021, \mn@doi [Physical
Review D] {10.1103/physrevd.103.123511}, 103
\bibitem[\protect\citeauthoryear{Hewson}{Hewson}{2015}]{BDA}
Hewson P., 2015, Journal of the Royal Statistical Society Series A, 178, 301
\bibitem[\protect\citeauthoryear{Hobson, Jaffe, Liddle, Mukherjee \&
Parkinson}{Hobson et~al.}{2010}]{hobson2010bayesian}
Hobson M.~P., Jaffe A.~H., Liddle A.~R., Mukherjee P., Parkinson D.,
2010, Bayesian methods in cosmology.
Cambridge University Press
\bibitem[\protect\citeauthoryear{Hyndman}{Hyndman}{1996}]{HPDI}
Hyndman R.~J., 1996, \mn@doi [The American Statistician]
{10.1080/00031305.1996.10474359}, 50, 120
\bibitem[\protect\citeauthoryear{Jeffreys}{Jeffreys}{1961}]{jeffreys1961}
Jeffreys H., 1961, Theory of Probability.
International series of monographs on physics, Clarendon Press, \url
{https://books.google.co.uk/books?id=AavQAAAAMAAJ}
\bibitem[\protect\citeauthoryear{{Kelly}, {Hicken}, {Burke}, {Mandel} \&
{Kirshner}}{{Kelly} et~al.}{2010}]{Kelly2010}
{Kelly} P.~L., {Hicken} M., {Burke} D.~L., {Mandel} K.~S., {Kirshner}
R.~P., 2010, \mn@doi [\apj] {10.1088/0004-637X/715/2/743}, \href
{https://ui.adsabs.harvard.edu/abs/2010ApJ...715..743K} {715, 743}
\bibitem[\protect\citeauthoryear{Kistler, Stanek, Kochanek, Prieto \&
Thompson}{Kistler et~al.}{2013}]{Kistler_2013}
Kistler M.~D., Stanek K.~Z., Kochanek C.~S., Prieto J.~L., Thompson T.~A.,
2013, \mn@doi [The Astrophysical Journal] {10.1088/0004-637x/770/2/88}, 770,
88
\bibitem[\protect\citeauthoryear{Kroupa, Yallup, Handley \& Hobson}{Kroupa
et~al.}{2023}]{kroupa2023}
Kroupa N., Yallup D., Handley W., Hobson M., 2023, Kernel-, mean- and
noise-marginalised Gaussian processes for exoplanet transits and $H_0$
inference (\mn@eprint {arXiv} {2311.04153})
\bibitem[\protect\citeauthoryear{Kullback \& Leibler}{Kullback \&
Leibler}{1951}]{kullback}
Kullback S., Leibler R.~A., 1951, The Annals of Mathematical Statistics, 22,
79
\bibitem[\protect\citeauthoryear{Kurek \& Szydłowski}{Kurek \&
Szydłowski}{2008}]{Kurek_2008}
Kurek A., Szydłowski M., 2008, \mn@doi [The Astrophysical Journal]
{10.1086/526333}, 675, 1
\bibitem[\protect\citeauthoryear{Linder}{Linder}{2007}]{Linder_2007}
Linder E.~V., 2007, \mn@doi [General Relativity and Gravitation]
{10.1007/s10714-007-0550-z}, 40, 329
\bibitem[\protect\citeauthoryear{Lindley}{Lindley}{2000}]{lindley2000}
Lindley D.~V., 2000, Journal of the Royal Statistical Society Series D: The
Statistician, 49, 293
\bibitem[\protect\citeauthoryear{{Maguire}}{{Maguire}}{2017}]{Maguire2017}
{Maguire} K., 2017, in {Alsabti} A.~W., {Murdin} P., eds, Handbook of
Supernovae.
Springer International Publishing, p.~293,
\mn@doi{10.1007/978-3-319-21846-5_36}
\bibitem[\protect\citeauthoryear{Meng, Li \& Yang}{Meng
et~al.}{2011}]{Meng2011}
Meng X.~C., Li Z.~M., Yang W.~M., 2011, \mn@doi [Publications of the
Astronomical Society of Japan] {10.1093/pasj/63.4.L31}, 63, L31
\bibitem[\protect\citeauthoryear{Mortonson, Hu \& Huterer}{Mortonson
et~al.}{2009}]{Mortonson_2009}
Mortonson M., Hu W., Huterer D., 2009, \mn@doi [Physical Review D]
{10.1103/physrevd.80.067301}, 80
\bibitem[\protect\citeauthoryear{{M{\"o}rtsell} \& {Dhawan}}{{M{\"o}rtsell} \&
{Dhawan}}{2018}]{mortsell2018}
{M{\"o}rtsell} E., {Dhawan} S., 2018, \mn@doi [\jcap]
{10.1088/1475-7516/2018/09/025}, \href
{https://ui.adsabs.harvard.edu/abs/2018JCAP...09..025M} {2018, 025}
\bibitem[\protect\citeauthoryear{Pascal, Bombrun, Tourneret \&
Berthoumieu}{Pascal et~al.}{2013}]{Pascal_2013}
Pascal F., Bombrun L., Tourneret J.-Y., Berthoumieu Y., 2013, \mn@doi
[{IEEE} Transactions on Signal Processing] {10.1109/tsp.2013.2282909}, 61,
5960
\bibitem[\protect\citeauthoryear{Perivolaropoulos \& Skara}{Perivolaropoulos \&
Skara}{2022}]{Perivolaropoulos_2022}
Perivolaropoulos L., Skara F., 2022, \mn@doi [New Astronomy Reviews]
{10.1016/j.newar.2022.101659}, 95, 101659
\bibitem[\protect\citeauthoryear{Perlmutter et~al.}{Perlmutter
et~al.}{1999}]{Perlmutter_1999}
Perlmutter S., et~al., 1999, \mn@doi [The Astrophysical Journal]
{10.1086/307221}, 517, 565
\bibitem[\protect\citeauthoryear{Peterson et~al.}{Peterson
et~al.}{2022}]{Peterson_2022}
Peterson E.~R., et~al., 2022, \mn@doi [The Astrophysical Journal]
{10.3847/1538-4357/ac4698}, 938, 112
\bibitem[\protect\citeauthoryear{{Phillips}}{{Phillips}}{1993}]{phillips1993}
{Phillips} M.~M., 1993, \mn@doi [\apjl] {10.1086/186970}, \href
{https://ui.adsabs.harvard.edu/abs/1993ApJ...413L.105P} {413, L105}
\bibitem[\protect\citeauthoryear{{Planck Collaboration} et~al.}{{Planck
Collaboration} et~al.}{2020}]{Planck2018}
{Planck Collaboration} et~al., 2020, \mn@doi [\aap]
{10.1051/0004-6361/201833910}, 641, A6
\bibitem[\protect\citeauthoryear{{Riess} et~al.}{{Riess}
et~al.}{1998}]{Riess1998}
{Riess} A.~G., et~al., 1998, \mn@doi [\aj] {10.1086/300499}, \href
{https://ui.adsabs.harvard.edu/abs/1998AJ....116.1009R} {116, 1009}
\bibitem[\protect\citeauthoryear{{Riess} et~al.}{{Riess}
et~al.}{2022}]{Riess2022}
{Riess} A.~G., et~al., 2022, \mn@doi [\apjl] {10.3847/2041-8213/ac5c5b}, \href
{https://ui.adsabs.harvard.edu/abs/2022ApJ...934L...7R} {934, L7}
\bibitem[\protect\citeauthoryear{Rose et~al.}{Rose et~al.}{2021}]{rose2021}
Rose B.~M., et~al., 2021, A Reference Survey for Supernova Cosmology with the
Nancy Grace Roman Space Telescope (\mn@eprint {arXiv} {2111.03081})
\bibitem[\protect\citeauthoryear{Scheutwinkel, Handley \& de
Lera~Acedo}{Scheutwinkel et~al.}{2023}]{Scheutwinkel_2023}
Scheutwinkel K.~H., Handley W., de Lera~Acedo E., 2023, \mn@doi
[Publications of the Astronomical Society of Australia]
{10.1017/pasa.2023.16}, 40
\bibitem[\protect\citeauthoryear{{Slepian}, {Gott} \& {Zinn}}{{Slepian}
et~al.}{2014}]{SLepian2014}
{Slepian} Z., {Gott} J.~R., {Zinn} J., 2014, \mn@doi [\mnras]
{10.1093/mnras/stt2195}, \href
{https://ui.adsabs.harvard.edu/abs/2014MNRAS.438.1948S} {438, 1948}
\bibitem[\protect\citeauthoryear{Sullivan et~al.}{Sullivan
et~al.}{2010}]{Sullivan_2010}
Sullivan M., et~al., 2010, \mn@doi [Monthly Notices of the Royal Astronomical
  Society] {10.1111/j.1365-2966.2010.16731.x}, 406, 782
\bibitem[\protect\citeauthoryear{{Tripp}}{{Tripp}}{1998}]{tripp1998}
{Tripp} R., 1998, \aap, \href
{https://ui.adsabs.harvard.edu/abs/1998A&A...331..815T} {331, 815}
\bibitem[\protect\citeauthoryear{Trotta}{Trotta}{2008}]{bayesinthesky}
Trotta R., 2008, \mn@doi [Contemporary Physics] {10.1080/00107510802066753},
49, 71
\bibitem[\protect\citeauthoryear{{Wetterich}}{{Wetterich}}{2007}]{Wetterich2007}
{Wetterich} C., 2007, \mn@doi [Physics Letters B]
{10.1016/j.physletb.2007.08.060}, \href
{https://ui.adsabs.harvard.edu/abs/2007PhLB..655..201W} {655, 201}
\bibitem[\protect\citeauthoryear{d{\textquotesingle}Amico, Gleyzes, Kokron,
Markovic, Senatore, Zhang, Beutler \&
Gil-Mar{\'{\i}}n}{d{\textquotesingle}Amico et~al.}{2020}]{d_Amico_2020}
d{\textquotesingle}Amico G., Gleyzes J., Kokron N., Markovic K., Senatore
L., Zhang P., Beutler F., Gil-Mar{\'{\i}}n H., 2020, \mn@doi [Journal of
Cosmology and Astroparticle Physics] {10.1088/1475-7516/2020/05/005}, 2020,
005
\bibitem[\protect\citeauthoryear{{de Rham}, {Gabadadze} \& {Tolley}}{{de Rham}
et~al.}{2011}]{deRham2011}
{de Rham} C., {Gabadadze} G., {Tolley} A.~J., 2011, \mn@doi [\prl]
{10.1103/PhysRevLett.106.231101}, \href
{https://ui.adsabs.harvard.edu/abs/2011PhRvL.106w1101D} {106, 231101}
\bibitem[\protect\citeauthoryear{du~Mas~des Bourboux et~al.}{du~Mas~des
Bourboux et~al.}{2020}]{Bourboux2020}
du~Mas~des Bourboux H., et~al., 2020, \mn@doi [The Astrophysical Journal]
{10.3847/1538-4357/abb085}, 901, 153
\makeatother
\end{thebibliography}
```
5. **Author Information:**
- Lead Author: Toby Lovick
- Full Authors List:
```yaml
Toby Lovick:
phd:
start: 2024-10-01
supervisors:
- Will Handley
thesis: null
partiii:
start: 2022-10-01
end: 2023-06-01
supervisors:
- Suhail Dhawan
- Will Handley
thesis: null
summer:
start: 2023-07-24
end: 2023-09-15
supervisors:
- Suhail Dhawan
thesis: null
original_image: images/originals/toby_lovick.jpeg
image: /assets/group/images/toby_lovick.jpg
Suhail Dhawan:
coi:
start: 2024-10-01
thesis: null
image: https://www.lucy.cam.ac.uk/sites/default/files/styles/650x650_scale_and_crop/public/fellow-images/Suhail%20Dhawan.jpg?itok=IinGhbdp
links:
Department webpage: https://www.lucy.cam.ac.uk/fellows/dr-suhail-dhawan
Will Handley:
pi:
start: 2020-10-01
thesis: null
postdoc:
start: 2016-10-01
end: 2020-10-01
thesis: null
phd:
start: 2012-10-01
end: 2016-09-30
supervisors:
- Anthony Lasenby
- Mike Hobson
thesis: 'Kinetic initial conditions for inflation: theory, observation and methods'
original_image: images/originals/will_handley.jpeg
image: /assets/group/images/will_handley.jpg
links:
Webpage: https://willhandley.co.uk
```
This YAML file provides a concise snapshot of an academic research group. It lists members by name along with their academic roles, ranging from Part III and summer projects to PhD, postdoctoral, and PI positions, with corresponding dates, thesis topics, and supervisor details. Supplementary metadata includes image paths and links to personal or departmental webpages. A dedicated "coi" entry profiles senior researchers, highlighting the group's collaborative mentoring network and career trajectories in cosmology, astrophysics, and Bayesian data analysis.
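As an illustrative sketch (not part of the required Markdown output), the structure above can be parsed with PyYAML to enumerate each member's roles; the role-key set and the abbreviated `GROUP_YAML` excerpt below mirror the block above:

```python
# Sketch: parse an excerpt of the group YAML and list each member's roles.
# Assumes PyYAML is installed; field names mirror the block above.
import yaml

GROUP_YAML = """
Toby Lovick:
  phd:
    start: 2024-10-01
    supervisors:
    - Will Handley
Will Handley:
  pi:
    start: 2020-10-01
"""

# Keys that denote an academic role (as opposed to images or links).
ROLE_KEYS = {"phd", "partiii", "summer", "postdoc", "pi", "coi"}

def roles(group: dict) -> dict:
    """Map each member name to the sorted list of roles they hold."""
    return {name: sorted(k for k in info if k in ROLE_KEYS)
            for name, info in group.items()}

group = yaml.safe_load(GROUP_YAML)
print(roles(group))  # {'Toby Lovick': ['phd'], 'Will Handley': ['pi']}
```

Note that `yaml.safe_load` parses the `start`/`end` scalars directly into `datetime.date` objects, which is convenient when rendering role date ranges.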
====================================================================================
Final Output Instructions
====================================================================================
- Combine all data sources to create a seamless, engaging narrative.
- Follow the exact Markdown output format provided at the top.
- Do not include any extra explanation, commentary, or wrapping beyond the specified Markdown.
- Validate that every bibliographic reference with a DOI or arXiv identifier is converted into a Markdown link as per the examples.
- Validate that every Markdown author link corresponds to a link in the author information block.
- Before finalizing, confirm that no LaTeX citation commands or other undesired formatting remain.
- Before finalizing, confirm that the link to the paper itself [2312.02075](https://arxiv.org/abs/2312.02075) is featured in the first sentence.
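As a minimal sketch of the link conversions the checklist requires (the helper names `doi_link` and `arxiv_link` are illustrative, not part of the required output):

```python
# Sketch: render a DOI or arXiv identifier in the Markdown link form
# required by the validation checklist above.

def doi_link(text: str, doi: str) -> str:
    """Render a citation as [text](https://doi.org/<doi>)."""
    return f"[{text}](https://doi.org/{doi})"

def arxiv_link(arxiv_id: str) -> str:
    """Render an arXiv identifier as [id](https://arxiv.org/abs/<id>)."""
    return f"[{arxiv_id}](https://arxiv.org/abs/{arxiv_id})"

# Identifiers taken from the bibliography and paper reference above.
print(doi_link("Riess et al. 2022", "10.3847/2041-8213/ac5c5b"))
print(arxiv_link("2312.02075"))
```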
Generate only the final Markdown output that meets all these requirements.
{% endraw %}