Discussion about this post

Burnt Eliot:

It is noteworthy that, in the Theory of Computation, Gödel's result shows up concretely: when a machine encounters such an undecidable question, it never stops calculating. It can establish neither the existence of a proof nor the non-existence of one; it simply never halts. This is interesting for AI because probability arithmetic is subject to Gödel's theorem, and many potentially undecidable questions are put before the LLM and its calculator. The "AI solution" is simply to halt the machine by fiat after a certain number of cycles (or dollars) and present whatever answer it is currently evaluating. This accounts for a great many of the hallucinations, lies, and ethical breaches spilling out of these machines. For questions of this kind, they have no way to decide "I don't know." These AI calculators cannot do that, so the owners, developers, and promoters are themselves hallucinating, lying, and/or behaving unethically when they tell us that Gödel does not apply to their machines or to the "answers" they spew out.
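The "halt by fiat" behavior described above can be sketched in a few lines. This is a minimal illustration, not any real LLM internals; all names here are hypothetical. It contrasts a search that admits ignorance when its step budget runs out with one that reports its current state as if it were an answer:

```python
def bounded_search(is_proof, candidates, budget):
    """Search `candidates` for a witness accepted by `is_proof`,
    giving up honestly after `budget` steps."""
    for step, c in enumerate(candidates):
        if step >= budget:
            return "unknown"          # the honest answer: "I don't know"
        if is_proof(c):
            return "proved"
    return "refuted"                  # finite candidate space exhausted

def halt_by_fiat(is_proof, candidates, budget):
    """The behavior criticized above: when the budget is spent,
    present the current (unjustified) guess as the answer."""
    for step, c in enumerate(candidates):
        if step >= budget:
            return "no proof exists"  # stated as fact, though undecided
        if is_proof(c):
            return "proved"
    return "no proof exists"
```

For a genuinely undecidable question the candidate stream never ends, so the only honest outcomes are "proved" or "unknown"; `halt_by_fiat` instead converts a spent budget into a confident-sounding claim.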

We wonder whether these machines are intelligent! Krishnamurti had an idea about intelligence: he said it is the ability to know when we do not know something, and to act on that knowledge. Gödel does not apply to this kind of intelligence.

