I don’t see why the example requiring training for humans to understand is unfortunate.
Humans aren’t innately good at math. I wouldn’t have been able to prove the statement without looking things up. I certainly would not be able to come up with the Peano Axioms, or anything comparable, on my own. Most people, even educated people, probably wouldn’t understand what there is to prove. Actually, I’m not sure if I do.
It’s not clear why such deficiencies among humans do not argue against human consciousness.
A leading AI has far more training than would ever be possible for any human, yet it still doesn’t grasp basic concepts, even though its knowledge far exceeds that of any human.
That’s dubious. LLMs are trained on more text than a human ever sees, but humans are trained on data from several senses. I guess it’s not entirely clear how much data that is, but it’s a lot and very high quality. Humans are trained on that sense data and not on text. Humans read text and may learn from it.
Being conscious is not just to know what the words mean, but to understand what they mean.
Just because you can’t make a mathematical proof doesn’t mean you don’t understand the very simple truth of the statement.
If I can’t prove it, I don’t know how I can claim to understand it.
It’s axiomatic that equality is symmetric. It’s also axiomatic that 1+1=2. There is not a whole lot to understand. I have memorized that. Actually, having now thought about this for a bit, I think I can prove it.
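For what it’s worth, the claim “I think I can prove it” checks out. Assuming the statement under discussion is 1+1=2 (the original example isn’t quoted in this thread), here is a minimal sketch in Lean 4 using a Peano-style definition of the naturals; the names `N`, `add`, `one`, `two` are introduced here for illustration, not taken from the discussion:

```lean
-- Peano-style natural numbers: zero, and a successor for each number.
inductive N where
  | zero : N
  | succ : N → N

-- Addition defined by recursion on the second argument,
-- as in the usual Peano axioms for +.
def add : N → N → N
  | a, N.zero   => a
  | a, N.succ b => N.succ (add a b)

def one : N := N.succ N.zero
def two : N := N.succ one

-- 1 + 1 = 2 holds by unfolding the definition of add:
-- add one one = succ (add one zero) = succ one = two.
theorem one_add_one : add one one = two := rfl

-- Symmetry of equality is likewise immediate.
theorem eq_symm (a b : N) (h : a = b) : b = a := h.symm
```

So with the definitions in hand the proof really is as short as the commenter suspects: the computation rule for `add` does all the work.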
What makes the difference between a human learning these things and an AI being trained for them?
I think if I could describe that, I might actually have solved the problem of strong AI.
Then how will you know the difference between strong AI and not-strong AI?
I’ve already stated that that is a problem:
From a previous answer to you:
Obviously the Turing test doesn’t cut it, as I already suspected back then. And I’m sure that when we finally have a self-aware, conscious AI, it will be debated violently.
Because I don’t think we have a sure methodology.
“I think, therefore I am” only works for the conscious mind itself.
I can’t prove that other people are conscious, although I’m 100% confident they are.
In exactly the same way we can’t prove when we have a conscious AI.
But we may be able to prove that it is NOT conscious, which I think is clearly the case with current-level AI. Although you don’t accept the example I provided, I believe it is clear evidence of a lack of consciousness behind the high level of intelligence it clearly has.
What might an operational definition look like?
You are asking unreasonable questions.