Computer Proofs: Are they Rigorous?

Author: Casey O'Donnell

Math for Everyone Series

Just how reliable are computers, anyway? Can they be trusted to carry out long, complex mathematical computations without erring? And how worrying is a single tiny glitch buried somewhere inside a massive computation?

Joshua Cooper, professor of mathematics at the University of South Carolina, encouraged students to ask these questions and more in a talk he gave at Notre Dame on Thursday, October 9. Part of Notre Dame’s monthly “Math for Everyone” series, Cooper’s talk, entitled “Computer Proofs: Are they Rigorous?,” analyzed the increasing dependence on computers in the world of modern proofs.

“Computers are becoming more and more integral to the process of mathematical proofs,” Cooper said. “So the question of their reliability is something people are really seriously thinking about.”

To explore that question, Cooper focused in particular on the long-debated “Sudoku Problem,” which asks for the minimum number of clues needed to give a Sudoku grid a single, unique solution. The search involved is far too massive to be done by hand, and the problem remained open for many years. In 2012, however, Professor Gary McGuire and his colleagues at University College Dublin (UCD) in Ireland released a proof that appears to have finally answered the question: at least 17 clues are required, because no 16-clue puzzle has a unique solution. To reach that conclusion, the UCD team ran computations on a massive supercomputer for roughly an entire year.
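The check at the heart of the minimum-clue question can be made concrete in a few lines of code: given a set of clues, does the grid admit exactly one completion? The sketch below is a minimal illustration, assuming a 9x9 grid stored as a list of lists with 0 marking empty cells; it is not the UCD team's code, which had to run this kind of test across an enormous, carefully pruned search space.

```python
# Minimal sketch: does a set of Sudoku clues force a unique solution?
# Grid representation (an assumption for this sketch): 9x9 list of lists,
# digits 1-9 for clues, 0 for empty cells.

def valid(grid, r, c, d):
    """Return True if digit d can be placed at (r, c) without a clash."""
    if any(grid[r][j] == d for j in range(9)):
        return False
    if any(grid[i][c] == d for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != d for i in range(3) for j in range(3))

def count_solutions(grid, limit=2):
    """Count completions of the grid by backtracking, stopping once `limit` is reached."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                total = 0
                for d in range(1, 10):
                    if valid(grid, r, c, d):
                        grid[r][c] = d
                        total += count_solutions(grid, limit - total)
                        grid[r][c] = 0
                        if total >= limit:
                            return total
                return total
    return 1  # no empty cells left: exactly one completed grid reached

def has_unique_solution(grid):
    return count_solutions(grid) == 1
```

Capping the count at two lets the search stop the moment uniqueness fails, the same kind of early exit any serious solver relies on; the difficulty in the full problem is not this check but the astronomical number of candidate clue sets it must be applied to.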

Cooper invited the 100-plus students in attendance to consider the many ways such a massive, year-long computation could go wrong. The possibilities ranged from the familiar software bug to the rarely considered interference of solar particles, and any one of them could have corrupted the final output. Although Cooper acknowledged that some of these failures are vanishingly rare, he cited research showing that others are surprisingly common.

“Certainly, humans can and do make mistakes in their proofs,” Cooper conceded. “But there’s one major difference with computers. You can’t examine their internal processes the way you can look at a person’s written proof. Computers break the chain of evidence.”

Cooper qualified that statement by suggesting that having a computation output a proof certificate would provide a checkable record of its result. To close his presentation, however, Cooper stressed that the aim of mathematicians should always be human-readable proofs.
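In the Sudoku setting, the certificate idea can be sketched as follows, assuming the same 9x9 list-of-lists representation as above; the function names are illustrative rather than drawn from the talk. The expensive part of the work is the search, but a claimed completion of a grid, or a pair of distinct completions witnessing that a set of clues fails to force a unique answer, can be re-checked quickly and independently of the machine that produced it.

```python
# Sketch of the proof-certificate idea for Sudoku (illustrative names,
# assuming the 9x9 list-of-lists grid representation used above).

def check_certificate(clues, completed):
    """Check that `completed` is a valid Sudoku grid extending `clues`."""
    digits = set(range(1, 10))
    # The certificate must agree with every original clue.
    for r in range(9):
        for c in range(9):
            if clues[r][c] != 0 and clues[r][c] != completed[r][c]:
                return False
    # Every row and column must contain the digits 1-9 exactly once.
    for i in range(9):
        if set(completed[i]) != digits:
            return False
        if {completed[r][i] for r in range(9)} != digits:
            return False
    # Every 3x3 box must contain the digits 1-9 exactly once.
    for br in range(0, 9, 3):
        for bc in range(0, 9, 3):
            box = {completed[br + i][bc + j] for i in range(3) for j in range(3)}
            if box != digits:
                return False
    return True

def certifies_nonuniqueness(clues, grid_a, grid_b):
    """Two distinct valid completions prove the clues do not force a unique grid."""
    return (grid_a != grid_b
            and check_certificate(clues, grid_a)
            and check_certificate(clues, grid_b))
```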