A General Bayesian Framework to Account for Foreground Map Errors in Global 21-cm Experiments
In our latest paper, A General Bayesian Framework to Account for Foreground Map Errors in Global 21-cm Experiments, we introduce a robust statistical framework to tackle one of the most significant obstacles in modern cosmology: the overwhelming glare of galactic foregrounds obscuring the faint signal from the early Universe. The quest to detect the sky-averaged 21-cm signal from Cosmic Dawn and the Epoch of Reionization is a data analysis challenge of epic proportions. This signal holds the key to understanding the formation of the first stars and galaxies, but it is buried beneath astrophysical foregrounds that are two to five orders of magnitude brighter.
The critical difficulty lies in separating this faint cosmological whisper from the foreground shout. The tantalizing, but fiercely debated, detection reported by the EDGES collaboration (Bowman et al., 2018) has intensified the community’s focus on meticulously characterizing and mitigating all possible systematic errors. Our work, led by Michael Pagano, confronts a crucial and previously under-addressed systematic: the inherent errors within the foreground sky maps themselves, which are foundational to any forward-modelling analysis.
A More Flexible Foreground Model
Existing low-frequency radio sky maps, while invaluable, contain uncertainties from their own measurement and calibration processes. These can manifest as an overall offset in brightness or as complex, spatially varying temperature perturbations relative to the true sky. If unaccounted for, these errors can propagate through the analysis pipeline and masquerade as a cosmological signal, leading to a biased or false detection.
Building on the physically motivated Bayesian pipeline developed for the Radio Experiment for the Analysis of Cosmic Hydrogen (REACH) (Anstey et al., 2021), our new framework enhances the foreground model with the flexibility to self-correct for these map deficiencies. We achieve this by:
- Partitioning the Sky: The foreground base map is divided into a number of regions, N_a.
- Introducing Amplitude Scale Factors: For each region, we introduce a multiplicative scaling parameter, a_i, which adjusts the brightness of our model sky.
- Fitting a Monopole Offset: We include a global offset term, γ_offset, to account for any overall zero-level error in the map. A sketch of how these terms enter a forward model follows this list.
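To make the structure of these corrections concrete, here is a minimal Python sketch of how per-region scale factors and a monopole offset might enter a forward model. The function name, the single power-law extrapolation, and the unweighted sky average are illustrative assumptions for this post, not the REACH implementation, which folds the corrected map through a full chromatic beam model.

```python
import numpy as np

def corrected_foreground_model(freqs, base_map, region_labels, a, beta,
                               gamma_offset, nu_ref=408.0):
    """Illustrative forward model with per-region amplitude corrections.

    freqs         : observing frequencies in MHz, shape (N_freq,)
    base_map      : base-map pixel temperatures [K] at nu_ref, shape (N_pix,)
    region_labels : integer region index (0 .. N_a - 1) for each pixel
    a             : amplitude scale factors a_i, shape (N_a,)
    beta          : spectral index for each pixel, shape (N_pix,)
    gamma_offset  : global monopole offset [K] absorbing a zero-level error
    """
    # Rescale each pixel by its region's scale factor and add the global
    # offset: this is the map-error correction being fitted.
    corrected = a[region_labels] * base_map + gamma_offset
    # Extrapolate each pixel to the observing frequencies with a power law
    # and average over the sky (a real pipeline would weight by the beam).
    spectra = corrected[None, :] * (freqs[:, None] / nu_ref) ** (-beta[None, :])
    return spectra.mean(axis=1)
```

Note that with a = np.ones(N_a) and gamma_offset = 0 this reduces exactly to the uncorrected base map, so the extra parameters only act when the data demand a correction.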
Principled Model Selection with Bayesian Evidence
A key feature of our approach is determining the appropriate model complexity directly from the data. Adding too few parameters might fail to capture the true errors, while adding too many can lead to overfitting. We navigate this trade-off using Bayesian model selection, a cornerstone of our group’s philosophy. By computing the Bayesian evidence for models with varying numbers of spectral regions (N_β) and amplitude scale factors (N_a), we can identify the simplest model that adequately describes the data. This computationally intensive task is made feasible by the powerful nested sampling algorithm PolyChord, developed in-house by Will Handley.
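In practice, the comparison reduces to ranking the log-evidences, ln Z, returned by separate PolyChord runs for each candidate model. The snippet below shows this bookkeeping with invented ln Z values (purely illustrative numbers, not results from the paper): the preferred model maximises the evidence, and the log Bayes factor between two models is the difference of their ln Z.

```python
# Hypothetical log-evidences from separate PolyChord runs, keyed by the
# number of amplitude scale factors N_a; values are made up for illustration.
log_Z = {1: -4521.3, 2: -4498.7, 4: -4490.2, 8: -4491.5}

best = max(log_Z, key=log_Z.get)  # the model with the highest evidence wins
for n_a, lnZ in sorted(log_Z.items()):
    print(f"N_a = {n_a}: ln Z = {lnZ:.1f}, ln B vs best = {lnZ - log_Z[best]:.1f}")
```

On one common reading of the Jeffreys scale, |ln B| ≳ 5 counts as strong evidence, so in this toy example N_a = 4 would be decisively preferred over N_a = 1, while pushing on to N_a = 8 buys no further improvement.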
Our simulations show that this framework is highly effective. We find that in the presence of realistic foreground map errors, models without our scaling factors consistently fail to recover the true 21-cm signal. Furthermore, we demonstrate that amplitude errors and spectral complexities are distinct challenges; simply increasing the number of spectral regions cannot compensate for spatial errors in the base map, and vice versa. Our joint-fitting approach robustly disentangles these effects.
This work, a collaboration involving Dominic Anstey, Will Handley, and Eloy de Lera Acedo, provides a vital tool for the global 21-cm community. By enabling higher-fidelity foreground mitigation, this framework represents a significant step towards a robust and definitive detection of the signal from our cosmic dawn.
Content generated by gemini-2.5-pro using this prompt.
Image generated by imagen-3.0-generate-002 using this prompt.