# Thomas Bayes

Thomas Bayes

Portrait purportedly of Bayes used in a 1936 book,[1] but it is doubtful whether the portrait is actually of him.[2] No earlier portrait or claimed portrait survives.

- Born: c. 1701
- Died: 7 April 1761 (aged 59), Tunbridge Wells, Kent, England
- Nationality: British
- Alma mater: University of Edinburgh
- Known for: Bayes' theorem
- Fields: Probability

Thomas Bayes (/beɪz/; c. 1701 – 7 April 1761[2][3][note 1]) was an English statistician, philosopher and Presbyterian minister who is known for formulating a specific case of the theorem that bears his name: Bayes' theorem. Bayes never published what would become his most famous accomplishment; his notes were edited and published after his death by Richard Price.[4]

### Biography

Mount Sion Chapel, where Bayes served as minister.

Thomas Bayes was the son of London Presbyterian minister Joshua Bayes,[5] and was possibly born in Hertfordshire.[6] He came from a prominent nonconformist family from Sheffield. In 1719, he enrolled at the University of Edinburgh to study logic and theology. On his return around 1722, he assisted his father at the latter's chapel in London before moving to Tunbridge Wells, Kent, around 1734. There he was minister of the Mount Sion Chapel until 1752.[7]

He is known to have published two works in his lifetime, one theological and one mathematical:

1. Divine Benevolence, or an Attempt to Prove That the Principal End of the Divine Providence and Government is the Happiness of His Creatures (1731)
2. An Introduction to the Doctrine of Fluxions, and a Defence of the Mathematicians Against the Objections of the Author of The Analyst (published anonymously in 1736), in which he defended the logical foundation of Isaac Newton's calculus ("fluxions") against the criticism by George Berkeley, a bishop and noted philosopher, the author of The Analyst

It is speculated that Bayes was elected as a Fellow of the Royal Society in 1742[8] on the strength of the Introduction to the Doctrine of Fluxions, as he is not known to have published any other mathematical works during his lifetime.

In his later years he took a deep interest in probability. Professor Stephen Stigler, historian of statistical science, thinks that Bayes became interested in the subject while reviewing a work written in 1755 by Thomas Simpson,[9] but George Alfred Barnard thinks he learned mathematics and probability from a book by Abraham de Moivre.[10] Others speculate he was motivated to rebut David Hume's argument against believing in miracles on the evidence of testimony in An Enquiry Concerning Human Understanding.[11] His work and findings on probability theory were passed in manuscript form to his friend Richard Price after his death.

Monument to members of the Bayes and Cotton families, including Thomas Bayes and his father Joshua, in Bunhill Fields burial ground

His health had declined by 1755, and he died in Tunbridge Wells in 1761. He was buried in Bunhill Fields burial ground in Moorgate, London, where many nonconformists lie.

### Bayes' theorem

Bayes's solution to a problem of inverse probability was presented in "An Essay towards solving a Problem in the Doctrine of Chances" which was read to the Royal Society in 1763 after Bayes' death. Richard Price shepherded the work through this presentation and its publication in the Philosophical Transactions of the Royal Society of London the following year.[12] This was an argument for using a uniform prior distribution for a binomial parameter and not merely a general postulate.[13] This essay gives the following theorem (stated here in present-day terminology).

Suppose a quantity R is uniformly distributed between 0 and 1. Suppose each of X1, ..., Xn is equal to either 1 or 0 and the conditional probability that any of them is equal to 1, given the value of R, is R. Suppose they are conditionally independent given the value of R. Then the conditional probability distribution of R, given the values of X1, ..., Xn, is

${\displaystyle {\frac {(n+1)!}{S!(n-S)!}}r^{S}(1-r)^{n-S}\,dr\quad {\text{for }}0\leq r\leq 1,{\text{ where }}S=X_{1}+\cdots +X_{n}.}$

Thus, for example,

${\displaystyle \Pr(R\leq r_{0}\mid X_{1},\ldots ,X_{n})={\frac {(n+1)!}{S!(n-S)!}}\int _{0}^{r_{0}}r^{S}(1-r)^{n-S}\,dr.}$

This is a special case of Bayes' theorem.
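The theorem can be checked numerically. The following sketch (the function name and sample data are invented for illustration) integrates the stated density, recognisable in modern terms as a Beta(S + 1, n − S + 1) distribution, by the midpoint rule:

```python
import math

def posterior_cdf(r0, xs):
    """Pr(R <= r0 | X1, ..., Xn) from the formula above, by numerical
    integration of the density (n+1)!/(S!(n-S)!) * r^S * (1-r)^(n-S),
    i.e. a Beta(S+1, n-S+1) distribution in modern terms."""
    n, s = len(xs), sum(xs)
    coeff = math.factorial(n + 1) // (math.factorial(s) * math.factorial(n - s))
    steps = 10_000                      # midpoint rule on [0, r0]
    h = r0 / steps
    return coeff * sum(((i + 0.5) * h) ** s * (1 - (i + 0.5) * h) ** (n - s)
                       for i in range(steps)) * h

xs = [1, 0, 1, 1, 0]                    # hypothetical data: n = 5, S = 3
print(posterior_cdf(0.5, xs))           # 22/64 = 0.34375 for these data
```

For these data the posterior is Beta(4, 3), whose distribution function at 0.5 is 22/64 = 0.34375, which the integration reproduces.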

In the first decades of the eighteenth century, many problems concerning the probability of certain events, given specified conditions, were solved. For example: given a specified number of white and black balls in an urn, what is the probability of drawing a black ball? Or the converse: given that one or more balls have been drawn, what can be said about the number of white and black balls in the urn? These are sometimes called "inverse probability" problems.
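An inverse problem of this kind can be sketched with a discrete application of Bayes' theorem. In this hypothetical setup (the three-ball urn and the uniform prior are invented for illustration), one ball is drawn, observed to be black, and the posterior over urn compositions is computed:

```python
from fractions import Fraction

# Hypothetical urn: 3 balls, each composition (k black, 3 - k white) equally
# likely a priori. One ball is drawn and observed to be black; what is the
# posterior over compositions? This is an "inverse probability" problem.
prior = {k: Fraction(1, 4) for k in range(4)}       # k = number of black balls
likelihood = {k: Fraction(k, 3) for k in range(4)}  # P(black draw | k)

evidence = sum(prior[k] * likelihood[k] for k in prior)
posterior = {k: prior[k] * likelihood[k] / evidence for k in prior}

for k, p in posterior.items():
    print(k, p)   # posterior[k] = k/6; e.g. posterior[3] == Fraction(1, 2)
```

With a uniform prior the posterior weight of each composition is proportional to its likelihood, so the all-black urn is the most plausible after one black draw.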

Bayes's "Essay" contains his solution to a similar problem posed by Abraham de Moivre, author of The Doctrine of Chances (1718).

In addition, a paper by Bayes on asymptotic series was published posthumously.

### Bayesianism

Bayesian probability is the name given to several related interpretations of probability as an amount of epistemic confidence – the strength of beliefs, hypotheses, etc. – rather than a frequency. This allows the application of probability to all sorts of propositions rather than just ones that come with a reference class. "Bayesian" has been used in this sense since about 1950. Since its rebirth in the 1950s, advancements in computing technology have allowed scientists from many disciplines to pair traditional Bayesian statistics with random walk (Markov chain Monte Carlo) techniques. The use of Bayes' theorem has been extended in science and in other fields.[14]
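As one illustration of pairing Bayesian statistics with random walk techniques, the following sketch (all names, data and tuning choices are assumptions, not from the source) uses a random-walk Metropolis sampler to draw from the Beta-shaped posterior appearing in Bayes's theorem:

```python
import random

def metropolis_beta(s, n, steps=50_000, seed=0):
    """Random-walk Metropolis sampler for the unnormalised posterior
    density r^s * (1 - r)^(n - s) on (0, 1)."""
    rng = random.Random(seed)
    def target(r):
        return r ** s * (1 - r) ** (n - s) if 0 < r < 1 else 0.0
    r = 0.5
    samples = []
    for _ in range(steps):
        proposal = r + rng.gauss(0, 0.1)   # symmetric random-walk step
        if rng.random() < target(proposal) / target(r):
            r = proposal                   # accept with Metropolis probability
        samples.append(r)
    return samples

samples = metropolis_beta(s=3, n=5)
print(sum(samples) / len(samples))         # near (s + 1)/(n + 2) = 4/7
```

The sample mean approaches the posterior mean (S + 1)/(n + 2), the closed-form answer one obtains directly from the Beta distribution.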

Bayes himself might not have embraced the broad interpretation now called Bayesian, which was in fact pioneered and popularised by Pierre-Simon Laplace;[15] it is difficult to assess Bayes's philosophical views on probability, since his essay does not go into questions of interpretation. There, Bayes defines the probability of an event as (Definition 5) "the ratio between the value at which an expectation depending on the happening of the event ought to be computed, and the value of the thing expected upon its happening".

Within modern utility theory, the same definition would result by rearranging the definition of expected utility (the probability of an event times the payoff received in case of that event – including the special cases of buying risk for small amounts or buying security for big amounts) to solve for the probability.

As Stigler points out,[9] this is a subjective definition, and does not require repeated events; however, it does require that the event in question be observable, for otherwise it could never be said to have "happened". Stigler argues that Bayes intended his results in a more limited way than modern Bayesians. Given Bayes's definition of probability, his result concerning the parameter of a binomial distribution makes sense only to the extent that one can bet on its observable consequences.
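Definition 5 can be turned into a small worked example (the stakes are hypothetical): if a contract paying N upon the happening of the event ought to be valued at a, then Bayes's definition gives the probability of the event as a/N.

```python
from fractions import Fraction

def bayes_probability(fair_price, payoff):
    """Bayes's Definition 5: the probability of an event is the value at
    which an expectation depending on the event ought to be computed,
    divided by the value of the thing expected upon its happening.
    Equivalently, solve fair_price = p * payoff for p."""
    return Fraction(fair_price) / Fraction(payoff)

# Hypothetical stakes: a ticket paying 10 shillings if the event happens
# is fairly priced at 3 shillings, so the event has probability 3/10.
print(bayes_probability(3, 10))   # Fraction(3, 10)
```

This is the rearrangement of expected utility described above: the definition is subjective, requiring only a fair price rather than repeated trials.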