Instead, I aim to develop novel numerical algorithms that explicitly estimate their own error, incorporating all relevant sources of error, and that adaptively assign computation so as to reduce overall risk. Probabilistic Numerics is a new, rigorous framework for the quantification of computational error in numerical tasks. Born of recent developments in the interpretation of numerical methods, it provides new tools for ensuring AI safety. Numerical algorithms estimate latent (non-analytic) quantities from the results of tractable ("observable") computations. Their task can thus be described as inference in the statistical sense, and numerical algorithms can be cast as learning machines that actively collect (compute) data to infer a non-analytic quantity. Importantly, this notion applies even if the quantity in question is entirely deterministic: uncertainty can be assigned to quantities that are not stochastic, merely unknown. Probabilistic Numerics is the treatment of numerical computation as inference, yielding algorithms that take in probability distributions over their inputs and return probability distributions over their outputs, such that the output distribution reflects uncertainty caused both by the uncertain inputs and by the imperfect internal computation. Moreover, through its estimates of how uncertain, and hence how valuable, a computation is, Probabilistic Numerics allows the allocation of computation to itself be optimised. As a result, probabilistic numerical algorithms have been shown to offer significantly lower computational costs than classical alternatives. Intelligent allocation of computation can also improve safety, by forcing computation to explore troublesome edge cases that might otherwise be neglected.
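To make the framework concrete, the sketch below implements Bayesian quadrature, a canonical probabilistic numerical method: a Gaussian process prior over the integrand turns estimation of the integral of f over [0, 1] into statistical inference, returning a Gaussian posterior whose variance quantifies the error left by a finite set of function evaluations. The test integrand, kernel, and lengthscale are illustrative choices of mine, not prescribed by the text above.

```python
# Minimal Bayesian quadrature sketch: a GP prior with RBF kernel
# k(x, x') = exp(-(x - x')^2 / (2 ell^2)) over the integrand induces a
# Gaussian posterior over \int_0^1 f(x) dx, so the algorithm returns a
# distribution over its output rather than a point estimate.
import numpy as np
from scipy.special import erf

def bayesian_quadrature(f, nodes, ell=0.2, jitter=1e-10):
    """Posterior mean and standard deviation of the integral of f on [0, 1]."""
    x = np.asarray(nodes, dtype=float)
    y = f(x)
    # Gram matrix of the kernel at the evaluation nodes.
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * ell ** 2))
    K += jitter * np.eye(len(x))  # numerical stabilisation
    # Kernel mean z(x_i) = \int_0^1 k(x_i, x') dx'  (closed form via erf).
    s = np.sqrt(2) * ell
    z = ell * np.sqrt(np.pi / 2) * (erf((1 - x) / s) + erf(x / s))
    # Double integral \int_0^1 \int_0^1 k(x, x') dx dx'.
    zbar = s * np.sqrt(np.pi) * erf(1 / s) + s ** 2 * (np.exp(-1 / s ** 2) - 1)
    w = np.linalg.solve(K, z)   # quadrature weights
    mean = w @ y                # posterior mean of the integral
    var = zbar - z @ w          # posterior variance of the integral
    return mean, np.sqrt(max(var, 0.0))

f = lambda x: np.exp(np.sin(3 * x))  # an integrand with no analytic antiderivative
for n in (3, 6, 12):
    m, sd = bayesian_quadrature(f, np.linspace(0, 1, n))
    print(f"n={n:2d}  estimate={m:.6f}  posterior sd={sd:.2e}")
```

The shrinking posterior standard deviation as n grows is the explicit error estimate discussed above; because it can be computed before committing to further evaluations, it is also the quantity an adaptive scheme could use to decide where, and whether, to spend more computation.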
I aim to apply the probabilistic numerical framework to the identification and communication of computational errors within composite AI systems. Probabilistic numerical methods offer the promise of monitoring the assumptions underlying a running computation, yielding a regime that can safely interrupt algorithms overwhelmed by the complexity of their task. This approach will allow AI systems to monitor the extent to which their own internal model matches external data, and to respond with appropriate caution.
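As a purely hypothetical illustration of such a monitoring regime (the class name, threshold, and drifting data stream below are my own inventions, not an existing API or the proposed method itself), the sketch scores each incoming observation against the model's Gaussian predictive distribution and interrupts the computation once the data become too surprising for the model's assumptions.

```python
# Hypothetical runtime monitor: halt a computation when observations are
# inconsistent with the model's own predictive distribution.
import numpy as np

class ModelMismatchMonitor:
    """Flags observations that fall too many predictive standard
    deviations from the model's predictive mean."""

    def __init__(self, z_threshold=4.0):
        self.z_threshold = z_threshold

    def check(self, observation, pred_mean, pred_std):
        """Return True if the observation is consistent with the model,
        False if it should trigger a cautious halt."""
        z = abs(observation - pred_mean) / max(pred_std, 1e-12)
        return z <= self.z_threshold

# Toy usage: a model that believes the data are standard normal is fed a
# slowly drifting stream; the monitor interrupts once the mismatch is extreme.
rng = np.random.default_rng(0)
monitor = ModelMismatchMonitor(z_threshold=4.0)
for t in range(1000):
    y = rng.normal(loc=0.01 * t, scale=1.0)  # environment drifts away from the model
    if not monitor.check(y, pred_mean=0.0, pred_std=1.0):
        print(f"halted at step {t}: observation {y:.2f} inconsistent with model")
        break
```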