How are you using new AI technology? Maybe you're only using tools like ChatGPT to summarize long texts or draft mindless emails. But what are you losing by taking these shortcuts? And is this tech taking away our ability to think?
The thing is… AI is making me smarter! I use AI as a learning tool. The absolute best thing about AI is the ability to follow up with additional questions and get a better understanding of a subject. I use it to ask about technical topics and flesh out a better understanding than I ever got from just a textbook. I have seen some instances of hallucination in the past, but with the current generation of AI I’ve had very good results and consider it an excellent tool for learning.
For reference, I’m an engineer with over 25 years of experience, and I am considered an expert in my field.
The article says stupid, not dumb. If I’m not mistaken, the difference is like being intelligent versus being smart. When you stop using the brain muscle that’s responsible for researching, digging through trash and a bunch of obscure websites for info, using critical thinking to filter and refine your results, etc., that muscle will atrophy.
You have essentially gone from being a researcher to being a reader.
By that logic you probably shouldn’t use a search engine; you should go to a library and look things up manually in a book, like I did.
“digging through trash and a bunch of obscure websites for info, using critical thinking to filter and refine your results”
You’re highlighting a barrier to learning that in and of itself has no value. It’s like arguing that kids today should learn cursive because you had to and it exercises the brain! Don’t fool yourself into thinking that just because you did something one way, it’s the best way. The goal is to learn and find solutions to problems. Whatever tool lets you get there most easily is the best one.
Learning through textbooks and one-way absorption of information is not an efficient way to learn. Having the ability to ask questions and challenge a teacher (in this case the AI) is a far superior way to learn, IMHO.
Why bother learning anything when you can get the answer in a fraction of a second?
“You’re highlighting a barrier to learning that in and of itself has no value.”
It has no value as long as those tools are available to you. Like calculators: nowadays everyone’s so used to them that people have become pretty bad at doing math in their heads. While that’s indeed not an issue, since calculators are widely available to everyone, we’re not really talking about doing math here, but about using critical thinking, which is a very important skill in daily life.
EDIT: Disclaimer: I’m an avid AI user and I’ve defended it here before, but I’m not about to start kidding myself that letting the AI analyze and think for me makes me more intelligent.
“Like calculators: nowadays everyone’s so used to them that people have become pretty bad at doing math in their heads.”
Were people ever very good at math in their heads?
There are those who have become calculator-dependent who might not have if there were no calculators, but I’d say they’re a small middle ground. Some people are still good at math in their heads, and even then, they should use a calculator when one is available to double-check their math when it might be in question.
At the lower end of the scale, there are people who never would have been able to do math in their heads, but who with a calculator can do math all day without a problem, except when they mis-key the question and have no idea that the answer is wrong, because they have no sense of math without the calculator. Mis-key 47 × 520 instead of 47 × 52 and the display reads 24,440; a rough mental estimate of 50 × 50 = 2,500 would flag that instantly, but without it nothing trips the alarm.
The brain pathways used to control the fine-motor skills for cursive writing can doubtless be put to other uses.
Disagree: when I use an LLM to help me find textbooks to begin my academic journey, I have only used the LLM to kickstart the learning process.
That’s not really what I was talking about. It would be closer to asking ChatGPT to make a summary of said books instead of reading them.
Same. I use it to send me down research paths. I don’t take anything it tells me at face value, but often it will introduce me to ideas in a particular field, which I can then independently research by looking them up on Kagi.
Instead of saying “write me some code which will generate a series of caverns in a videogame”, I ask “what are 5 common procedural level generation algorithms, and give me a brief synopsis of them”, and then I can take each one of those and look them up.
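For anyone curious what one of those algorithms looks like in practice, here’s a minimal sketch of a common answer to that kind of prompt: cellular-automata cave generation. This is an illustrative toy under my own assumptions; the function and parameter names are made up, not taken from any engine or library.

```python
import random

def generate_caverns(width, height, fill_prob=0.45, steps=5):
    """Carve a cave map with the classic cellular-automata '4-5 rule'."""
    # Seed the grid with random noise: True = wall, False = open floor.
    grid = [[random.random() < fill_prob for _ in range(width)]
            for _ in range(height)]

    for _ in range(steps):
        new_grid = [[False] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                # Count walls in the 3x3 neighborhood, treating
                # out-of-bounds cells as walls so the edges stay solid.
                walls = 0
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < height and 0 <= nx < width:
                            walls += grid[ny][nx]
                        else:
                            walls += 1
                # A cell ends up as wall when 5 or more of the 9 cells
                # around it (itself included) are walls.
                new_grid[y][x] = walls >= 5
        grid = new_grid
    return grid

if __name__ == "__main__":
    # Print an ASCII map: '#' for walls, '.' for open cave floor.
    for row in generate_caverns(60, 20):
        print("".join("#" if wall else "." for wall in row))
```

Each smoothing pass erodes the initial noise into rounder, more connected cavern shapes; fill_prob trades open floor for wall density. The point of asking the LLM for names rather than code is that once you know the term “cellular automata,” you can look up the real treatments yourself.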
$100 billion and the electricity consumption of France seems a tad pricey to save a few minutes looking in a book…
I recently read that LLMs are effective for improving learning outcomes. When I read one of the meta studies, however, it seemed that many of the benefits were indirect: LLMs improved accessibility by allowing teachers to quickly tailor lessons to individual students, for example. It also seems that some students ask questions more freely and without embarrassment when chatting with an LLM, which can improve learning for those students - and this aligns with what you mention in your post. I personally have withheld follow-up questions in lectures because I didn’t want to look foolish or reveal my imperfect understanding of the topic, so I can see how an LLM could help me that way.
What the studies did not (yet) examine was whether the speed and ease of learning with LLMs were somehow detrimental to, say, retention. Sure, I can save time studying for an exam/technical interview with an LLM, but will I remember what I learned in 6 months? For some learning tasks, the long struggle is essential to a good understanding and retention (for example, writing your own code implementation of an algorithm vs. reading someone else’s). Will my reliance on AI somehow damage my ability to learn in some circumstances? I think that LLMs might be like powered exoskeletons for the mind - the operator slowly wastes away from lack of exercise.
It seems like a paradox, but learning “more, faster” might be worse in the long run.