If all mathematics are derived from the fundamental axioms, why can't we teach an AI to learn the axioms and build upon it and come up with new theorems?

Attached: 0*kdGu1ZHL-MMbxktv.jpg (1190x595, 209K)

Your assumption (or, if I may say, axiom) that maths is based on the same set of axioms is wrong.

No, you have the AI make its own axioms. Also,

read a single book in your lifetime before posting please

>what is abstract thinking?

Whatever, pick a system of arithmetic and generate theorems for it. He didn't even assume that, but it's easily solved anyway.

>ur dumb, I won't explain how, but u r

What are you implying? In fact, what are you talking about?

I don't give a shit about this topic but this is where you start if you want to find out more about this issue.

en.wikipedia.org/wiki/Gödel's_incompleteness_theorems

I know Gödel's work much better than you. You're not presenting anything; posting a Wikipedia page with an "I could totally explain, but I won't" attitude is at the same level of argumentation as smug anime girls. Either answer my questions or admit you don't have an answer.

This is what automatic theorem provers do. They still suck though.
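
For anyone wondering what that looks like in practice, here's a minimal sketch using the Z3 SMT solver's Python bindings (the z3-solver package); the two example statements are my own, picked only to show the workflow, not anything from this thread.

```python
# Minimal sketch of what an automated theorem prover does, using the Z3
# SMT solver's Python API (pip install z3-solver). prove() asks Z3 to show
# that the negation of the statement is unsatisfiable.
from z3 import Bools, Int, ForAll, Implies, And, prove

p, q = Bools("p q")
prove(Implies(And(p, Implies(p, q)), q))   # modus ponens -> prints "proved"

x = Int("x")
prove(ForAll([x], x + 1 > x))              # trivial arithmetic fact -> "proved"
```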

Yeah, mostly because coming up with anything novel is reducible to 3SAT anyway.

I admit I don't have an answer, since I don't know shit about this topic and thus don't care; I just heard in old math classes that axioms are still an active topic, thanks to link related. This is literally all I know. If we can't get a complete picture of level 0, then we can't build anything on it, much like math in general.

Gödel essentially proved that maths isn't complete, that is, that there are statements which are true but impossible to prove. We can still derive new theorems though.
There's a class of statements for which proving that they cannot be disproven would itself prove them. I'm not joking. If the Riemann hypothesis cannot be disproven, that means it's true: it is equivalent to a statement whose counterexample, if one existed, could be found and verified by a finite computation, so if it were false it would be disprovable.

No you idiot.
It's about consistency and completeness of formal theories.
You cannot have both at the same time (for any effectively axiomatized theory containing arithmetic); Gödel basically answered your question from the past.

>Gödel essentially proved that maths isn't complete, that is, that there are statements which are true but impossible to prove
No, he didn't, you poledditor. The incompleteness theorems put bounds on what axiomatic systems can decide. The first states that for every such system capable of expressing basic arithmetic, there are true but unprovable statements. E.g. there are statements that cannot be proven or disproven within the ubiquitous ZFC set theory, but can be in MK set theory. In no possible way can you infer "math is incomplete" from this, you popsci wikitard.

Mathematics is really a cool meme.

>Your assumption (or, if I may say, axiom)

Attached: john_carmack_oculus_rift-e1425653634513.jpg (750x600, 39K)

kys

Because learning machines come in two flavors: function approximators and inference engines.
Function approximators don't "learn" a whole lot; rather, they key in on features and indicate when particular combinations of features are present.
Inference engines clamp an output to see how the inputs change.
Neither of those kinds of things will ever do maths.
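
To make the "function approximator" flavor concrete, here's a minimal sketch in plain Python (the cat/dog features and the data are invented purely for illustration): a perceptron that learns weights over input features and fires when the combination of features it keyed in on is present.

```python
# Minimal "function approximator": a perceptron that learns feature weights
# and indicates when a particular combination of features is present.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                     # perceptron update rule
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Features: [has_fur, barks, meows]; label: 1 = dog, 0 = cat (made-up data).
X = [[1, 1, 0], [1, 0, 1], [1, 1, 0], [1, 0, 1]]
y = [1, 0, 1, 0]
w, b = train_perceptron(X, y)

new = [1, 1, 0]   # furry and barks
print("dog" if sum(wi * xi for wi, xi in zip(w, new)) + b > 0 else "cat")
```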

We can do exactly that with proof checking. The bigger problem is letting a computer randomly come up with new theorems. In all likelihood it'd just spiral off making correct theorems about shit that doesn't matter to us in the present. It would also be difficult to know when to make jumps and which abstraction levels are worth working at.
For example, the computer has just proven the square root of 2 to be irrational. Does it then continue on and try to develop similar proofs for the rationality or irrationality of other numbers? Or does it develop some general proof applicable to all integers and their square roots? When does it know to break out of this cycle? Does it work on what we consider to be lower-level proofs or higher-level proofs from there? How can it know what a higher-level proof is?
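
On the "we can do exactly that with proof checking" part: proof assistants already accept the √2 example as a machine-checked theorem. A minimal Lean 4 sketch, assuming the mathlib library and its irrational_sqrt_two lemma (the names are mathlib's, not anything from this thread):

```lean
import Mathlib

-- Machine-checked statement that √2 is irrational. The heavy lifting is
-- mathlib's `irrational_sqrt_two` lemma; Lean's kernel re-checks the proof.
example : Irrational (Real.sqrt 2) := irrational_sqrt_two
```

Checking a proof is the easy half; deciding which theorem is worth stating next is exactly the open problem described above.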

Attached: carmack.png (444x614, 212K)

Symbolic AI did this already in the 60s, what are machine-generated proofs

Silicon can't think
Prove me wrong

No. But most of the interesting stuff beyond 3SAT is totally intractable, at least to standard theorem-proving algorithms.

>In no possible way can you infer "math is incomplete" from this.
The correct inference is that no single consistent set of axioms will ever suffice, because for any effectively axiomatized consistent system that includes arithmetic there are always statements that can neither be proven nor disproven within it. (Gödel's construction shows that even pretty trivial systems can express a statement that effectively asserts its own unprovability; if the system is consistent, that statement is true but unprovable, which is how the apparent paradox is avoided. The way out of the hole is incompleteness, and it has deep links to algorithm nontermination in computing systems.)
The trivial consequence of this is that mathematics is incomplete, and always will be: there will always be more to discover and prove. I think that's great.
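
To illustrate the claimed link between unprovability and nontermination, here's a toy sketch (my own example, using Hofstadter's MIU string-rewriting system, not anything from the thread): a brute-force prover that searches for a derivation of a target string. For a derivable target it halts with a proof; for the famously underivable target "MU" it would search forever, which is why the artificial cutoff is there, and hitting the cutoff proves nothing.

```python
from collections import deque

# Brute-force proof search in the MIU string-rewriting system, starting
# from the single axiom "MI". If the target is underivable (like "MU"),
# the search never halts on its own; we can only impose a cutoff, and
# reaching the cutoff is NOT a proof of unprovability.
def successors(s):
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                      # rule 1: xI   -> xIU
    if s.startswith("M"):
        out.add(s + s[1:])                    # rule 2: Mx   -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])  # rule 3: xIIIy -> xUy
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])        # rule 4: xUUy  -> xy
    return out

def derivable(target, max_nodes=20_000):
    seen, queue = {"MI"}, deque(["MI"])       # breadth-first search
    while queue and len(seen) <= max_nodes:
        s = queue.popleft()
        if s == target:
            return True
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return False  # gave up at the cutoff; not a disproof

print(derivable("MIU"))  # True: MI -> MIU by rule 1
print(derivable("MU"))   # underivable, so the cutoff kicks in: False
```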

What the hell are you talking about?

Even with the best, most state-of-the-art AI that we have, we teach something by giving it examples of previous instances of the task. E.g. the problem of classifying and distinguishing between cats and dogs: feed a CNN lots of images of both and then run the resulting model on newer ones. There are other approaches, but they are similar in spirit.
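
A minimal sketch of that cats-vs-dogs setup in PyTorch; the architecture, the random stand-in data, and the hyperparameters are invented purely for illustration.

```python
import torch
import torch.nn as nn

# Toy CNN in the spirit of the cats-vs-dogs example: it learns statistical
# regularities from labelled images and is then run on unseen ones.
# Random tensors stand in for a real labelled dataset.
class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 16 * 16, 2)  # 2 classes: cat, dog

    def forward(self, x):
        x = self.features(x)                 # extract local image features
        return self.classifier(x.flatten(1))

model = TinyCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for "lots of images of both": random 64x64 RGB tensors + labels.
images = torch.randn(32, 3, 64, 64)
labels = torch.randint(0, 2, (32,))

for _ in range(5):                           # a few gradient steps
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# "Run the resulting model on newer ones": predict on an unseen image.
new_image = torch.randn(1, 3, 64, 64)
print(model(new_image).argmax(dim=1))        # 0 = cat, 1 = dog (toy labels)
```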

This kind of "learning" is basically just learning the statistical properties of old data and using it to gauge new data.

The "robots" and "AI systems" you hear of are not magically learning to think on themselves but are made of a collection of many subrocesses (regression, classification etc.) and it's not limited to AI either, you would be surprised at how many of these "automatic" systems have hand written rules which decide what happens in different cases. The science for doing all of this is still flawed and relatively nascent.

You're talking about the problem of teaching a machine, which is slightly different. You have to consider:
1. What are you teaching it
2. How does the model represent what you teach it
3. How does it REASON over what you teach it to move beyond and extrapolate on what you taught it

This is an open research area and people are still working on it, but it is NOT this "hurr durr magic thing" which can learn axioms on its own and just "come up" with new theorems.

You have to ask yourself: what is the theorem? How would you interpret it? How do you learn it? How do you learn it if all you were given is an example? What kind of example(s) would you even use? How do you store that knowledge? What are the basic steps in the process of what you call "coming up with new theorems"? How do you reason over the knowledge you stored so that you can implement this process?

As I said, when working on something like this, you have to take into account a lot of details. AI does not mean that it magically does everything for you; there's a lot more that goes into it.

OP btfo

Finding new mathematics that's also useful requires human creativity. Machines aren't capable of it and never will be.

Homotopy type theory is literally this; however, it's more complicated than you realise.
Rather than use a computer to generate proofs symbol by symbol, you use a computer to find paths (identifications) in the space of all possible proofs.

Nobody is arguing that mathematics is complete. But new theorems are being discovered every day using the previous ones, something which an AI should be able to do.

>AI
No such thing.