• 0 Posts
  • 4 Comments
Joined 2 years ago
Cake day: July 1st, 2023


  • Let’s get something straight: no, I’m not saying we have our modern definition of AGI, but we’ve practically met the original definition, coined before LLMs were a thing, which was that a proposed AGI agent should maximise “the ability to satisfy goals in a wide range of environments”. I personally think we’ve just moved the goalposts a bit.

    Whether we’ll ever have thinking, reasoning, and possibly conscious AGI is a separate question. But I do think current AI is similar to the brains that exist today.

    Do you not agree that animal brains are just prediction machines?

    That we have our own hallucinations all the time? Think visual tricks, lapses in memory, deja vu, or just the many mental disorders people can have.

    Do you think our brain doesn’t follow the path of least resistance in processing? Or do you think our thoughts come from somewhere else?

    I seriously don’t think animal brains, or human brains specifically, are so special that neural networks are beneath them. Sure, people didn’t like being likened to animals, but it was the truth, and I, like many AI researchers, liken us to AI.

    AI is primitive now, yet it can still pass the bar exam and medical licensing exams, work through complex physics problems, and write a book (soulless as it may be, like some authors’) in seconds.

    Whilst we may not have AGI, the question was about math. The paper examined how the model computed 36+59, and it did it in an interesting way: it half predicted what the tens column would be, ‘knew’ exactly what the units column was, then put the two together (there’s a toy sketch of this at the end of this comment). Although that’s not how you or I might do it, there are probably people who do it similarly.

    All I argue is that AI is closer to how our brains think than people admit, and with our brains being irrational quite often, it shouldn’t be surprising that neural networks are also irrational at times.
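
    A toy sketch of that two-pathway strategy in Python, for anyone curious. The function names and the noise model are my own invention to illustrate the idea, not the paper’s actual mechanism:

    ```python
    import random

    def units_digit(a, b):
        # Exact pathway: the ones digit of a + b depends only on the
        # ones digits of the inputs (6 + 9 ends in 5).
        return (a % 10 + b % 10) % 10

    def rough_sum(a, b):
        # Fuzzy pathway: a "ninety-ish" magnitude estimate rather than
        # an exact computation, simulated here with a bit of noise.
        return a + b + random.randint(-4, 4)

    def add_like_the_model(a, b):
        units = units_digit(a, b)
        estimate = rough_sum(a, b)
        # Combine: snap the rough estimate to the nearest number whose
        # ones digit agrees with the exact pathway.
        base = estimate - (estimate % 10) + units
        return min((base - 10, base, base + 10),
                   key=lambda n: abs(n - estimate))

    print(add_like_the_model(36, 59))  # 95
    ```

    Because the fuzzy estimate stays within a few units of the true sum, snapping it to the correct ones digit always lands on the exact answer, which is the “half predicted the tens, knew the units” behaviour described above.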


  • I agree. This is the exact problem I think people need to face with neural-network AIs: they work much the same way we do. Even if we analysed a human brain, it would look like wires connected to wires, with different resistances all over the place and some other chemical influences on top (see the sketch at the end of this comment).

    I think everyone forgets that neural networks were introduced in AI to replicate how animal brains work, and clearly, if it worked for us to get smart, it should work for something synthetic. Well, we’ve certainly answered that now.

    Everyone saying “oh, it’s just a predictive model, it’s all math, and math can’t be intelligent” is calling into question exactly how their own brains work. We are just prediction machines: the brain releases dopamine when it correctly predicts things, and it teaches itself by guessing how the world works and correcting those guesses. We modelled AI on ourselves, and if we don’t understand how we work, of course we’re not going to understand how it works.
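
    A minimal sketch of that analogy in Python: a textbook artificial neuron plus the classic delta rule, with weights standing in for the “resistances” and the error-driven update loosely standing in for dopamine’s prediction-error signal. The names are mine, and real neurons are far messier:

    ```python
    # A single artificial neuron: inputs flow through "wires" whose
    # weights play the role of variable resistances.
    def neuron(inputs, weights, bias):
        return sum(x * w for x, w in zip(inputs, weights)) + bias

    # Learning as prediction-error correction (the delta rule):
    # nudge each weight in proportion to how wrong the guess was.
    def learn(inputs, weights, bias, target, lr=0.1):
        error = target - neuron(inputs, weights, bias)
        weights = [w + lr * error * x for w, x in zip(weights, inputs)]
        return weights, bias + lr * error

    w, b = [0.0, 0.0], 0.0
    for _ in range(50):  # repeatedly guess, compare, and correct
        w, b = learn([1.0, 2.0], w, b, target=1.0)
    print(round(neuron([1.0, 2.0], w, b), 3))  # ~1.0
    ```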


  • I understand all the concerns about losing jobs and being left behind, but that’s also what happened when the power loom was invented. An entire profession, gone. Looms were destroyed in protests, people died over the embrace of the new machines, and the inventors of every new version had their lives threatened. But imagine if we were still hand-weaving all our clothes today. Maybe they would be more durable than what we have now, but you wouldn’t have many clothes, and a large portion of the population would do nothing but weave fabric.

    The same thing happened with threshing machines, steam pumps, cranes, and the printing press. History repeats itself: jobs are lost to new innovation, but look at what new jobs and careers those inventions sparked.

    It’s hard to see it now, but automation is a good thing. It will drive new technology, and in it we will once again find new jobs and careers.

    Believe me, as someone still getting into a career that’s being threatened by AI, I’m certain there will still be work that isn’t just manual labor.