Climate Models, Calibration, and Confirmation
Abstract
We argue that concerns about double-counting—using the same evidence both to calibrate or tune climate models and also to confirm or verify that the models are adequate—deserve more careful scrutiny in climate modelling circles. It is widely held that double-counting is bad and that separate data must be used for calibration and confirmation. We show that this is far from obviously true, and that climate scientists may be confusing their targets. Our analysis turns on a Bayesian/relative-likelihood approach to incremental confirmation. According to this approach, double-counting is entirely proper. We go on to discuss plausible difficulties with calibrating climate models, and we distinguish more and less ambitious notions of confirmation. Strong claims of confirmation may not, in many cases, be warranted, but it would be a mistake to regard double-counting as the culprit.
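The abstract's appeal to a "Bayesian/relative-likelihood approach to incremental confirmation" can be glossed with the standard Bayes-factor formulation below. This is a minimal illustrative sketch in our own notation, not the paper's formalism: $H_1$ and $H_2$ stand for rival model hypotheses and $E$ for the data used in calibration.

% Illustrative sketch only: standard Bayesian/relative-likelihood notation,
% not quoted from the paper. H_1, H_2 are rival model hypotheses; E is the
% calibration data.
\[
  \underbrace{\frac{P(H_1 \mid E)}{P(H_2 \mid E)}}_{\text{posterior odds}}
  \;=\;
  \underbrace{\frac{P(E \mid H_1)}{P(E \mid H_2)}}_{\text{likelihood ratio}}
  \times
  \underbrace{\frac{P(H_1)}{P(H_2)}}_{\text{prior odds}}
\]
% When H_i has free (tunable) parameters theta, P(E | H_i) is the marginal
% likelihood \int P(E | \theta, H_i) P(\theta | H_i) d\theta. On this view,
% E comparatively confirms H_1 over H_2 exactly when P(E | H_1) > P(E | H_2),
% and the same E may simultaneously calibrate the parameters within each
% hypothesis; using E for both purposes involves no fallacy of double-counting.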
1 Introduction
2 Remarks about Models and Adequacy-for-Purpose
3 Evidence for Calibration Can Also Yield Comparative Confirmation
3.1 Double-counting I
3.2 Double-counting II
4 Climate Science Examples: Comparative Confirmation in Practice
4.1 Confirmation due to better and worse best fits
4.2 Confirmation due to more and less plausible forcings values
5 Old Evidence
6 Doubts about the Relevance of Past Data
7 Non-comparative Confirmation and Catch-Alls
8 Climate Science Example: Non-comparative Confirmation and Catch-Alls in Practice
9 Concluding Remarks






