The current goal of my research is to develop and elucidate a solution to the characterization, or epistemological, problem of induction: the problem of describing a universal theory of inference.

Even though this is ostensibly a philosophical problem, the roots of my interest lie in a problem at the heart of Quantum Theory, namely a vagueness concerning what it says about how the world *is* versus its prescriptive content about how to make predictions for 'experimental outcomes'. Despite the theory's success and apparent consistency, I believe this vagueness hamstrings future progress in theoretical physics.

The vagueness becomes apparent if one (I think rightly) considers the probabilities in Quantum Theory to be subjective in some way. That is, the probabilities should be determined by certain factors, such as what is assumed and the logical structure of those assumptions. Quantum Theory does not determine its probabilities in this way and is thus, under this view, incomplete and possibly incoherent. The problem is then to derive the probabilities of Quantum Theory, or some successor theory, from well-motivated principles that fix those determining factors.

But what are the determining factors? In Bayesian Probability Theory, they are a combination of logical structure (conjunctions and disjunctions of propositions constrain probabilities through the product and sum rules) and gut feeling, since the probabilities are considered 'degrees of belief' in some sense. My own gut feeling is that this is fine for many of the statistical uses to which the theory is applied, where one deals with very imprecise hypotheses and 'information', but it is insufficient for the fidelity and range of representation required for deriving Quantum Theory from first principles.
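For concreteness, the product and sum rules referred to above take their standard form as follows (a sketch in conventional notation, where $C$ stands for the background assumptions):

$$P(A \wedge B \mid C) = P(A \mid C)\,P(B \mid A \wedge C)$$

$$P(A \vee B \mid C) = P(A \mid C) + P(B \mid C) - P(A \wedge B \mid C)$$

These rules constrain how the probabilities of compound propositions relate to those of their components, but they do not by themselves fix the probabilities of the atomic propositions.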

For example, perhaps propositions associated with possible states of affairs are not actually mutually exclusive and exhaustive (and these properties emerge somehow). This could potentially lead to an explanation of interference, since the probability of the disjunction of the propositions of two possible states would then acquire an extra 'interference' term. But because such a situation is so divorced from our everyday experience, there is no gut feeling telling us what these probabilities should be. Moreover, the Bayesian approach has its own, independent, problems.
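To make the extra term concrete: for mutually exclusive propositions the sum rule reduces to $P(A \vee B) = P(A) + P(B)$. Dropping exclusivity restores a cross term, and a sketch of the suggested generalization (the notation $I(A,B)$ is mine, purely illustrative) is

$$P(A \vee B) = P(A) + P(B) + I(A, B),$$

which is formally analogous to the interference term in the quantum expression $\lvert\psi_A + \psi_B\rvert^2 = \lvert\psi_A\rvert^2 + \lvert\psi_B\rvert^2 + 2\,\mathrm{Re}(\psi_A^{*}\psi_B)$.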

I am of the view that, ideally, a probability should be uniquely determined by the assumptions made. This is the view taken by those who hold that probabilities should be *logical*. It is a minority view, as there are a few well-cited problems: Goodman's 'grue' problem (a predicate like 'grue', meaning green if observed before some time $t$ and blue otherwise, has the same evidential form as 'green' yet licenses different predictions) and the problem of the uniqueness of priors under redescription. It is my contention that these are pseudo-problems, dissolved if one takes care to distinguish between logic as a component of an apparatus of inference and logic as a tool for precise expression involving logical languages. For example, the 'grue' problem relies on probabilities being determined by the *form* of the sentences or propositions assumed; but if the notion of form can be considered a linguistic notion, and hence irrelevant to probabilities, then the problem dissolves.

The goal, then, is to characterize logic in a way that divorces it from language and from all the conceptual baggage of the history of analytic philosophy (such as form, reference, and analyticity), while keeping it a rich enough concept to serve as the structure within which (ideally all) theories can be understood. In the theory I am developing, the richness comes from the integration of logic with a new decision theory, motivated independently of the above considerations. The motivations range from a certain argument of pragmatic flavour for licensing the acceptance of hypotheses for which one does not have much evidence (an argument that differs in important ways from the classic one of William James), thereby providing an explanation where Bayesian Probability Theory has an explanatory gap, to problems relating to counterfactual conditionals.

For more information, visit: https://ronininstitute.academia.edu/CaelHasse

Contact Cael at cael.hasse@ronininstitute.org