How are you using new AI technology? Maybe you're only deploying things like ChatGPT to summarize long texts or draft up mindless emails. But what are you losing by taking these shortcuts? And is this tech taking away our ability to think?
See, this is the problem I’m talking about. You think you can gauge if the code works or not, but even for small pieces (and in some cases, especially for small pieces) there is a world of very bad, very dangerous shit that lies between “works” and “not works”.
And it is just as dangerous when you trust it to explain something to you. By definition it's something you don't know, and therefore can't check.
I mean I can literally test it immediately lol, a Node-RED js function isn't going to be dangerous lol
Or an AHK script that displays a keystroke on screen, or cleaning up a docker command into docker compose, simple shit lol
Oh yeah, you absolutely can test it.
And then it gives you (and this is a real example, with real function names removed)
find_something > dirpath
… rm -rf $dirpath/*
do_something_in_the_dir(dirpath)
And it will work, but if that first command fails, instead of failing gracefully it wipes your hard drive clean.
You can find shit like that on the regular Internet too, but the difference is, it will be downvoted and some nerd will leave a snarky comment explaining why it's stupid. When an LLM gives it to you, you have no way to distinguish working code from a slow-boiling trap.
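The failure mode in that example can be made concrete. Here is a minimal sh sketch of it; "find_something" is the placeholder name from the thread, everything else (the guard, the variable handling) is an assumption added for illustration, and the destructive command is only echoed, never run:

```shell
#!/bin/sh
# Sketch of the trap above. Nothing destructive is executed:
# the dangerous command is only printed, never run.

dirpath="$(false)"    # simulate find_something failing: dirpath ends up empty

# Unsafe pattern: with dirpath empty, the command degrades to "rm -rf /*".
echo "would run: rm -rf $dirpath/*"    # prints: would run: rm -rf /*

# Safer pattern (an assumed fix, not from the thread): refuse to do
# anything destructive when the result is empty or not a directory.
if [ -n "$dirpath" ] && [ -d "$dirpath" ]; then
    echo "would run: rm -rf ${dirpath:?}/*"    # ${var:?} aborts if var is empty
else
    echo "refusing: dirpath is empty or not a directory" >&2
fi
```

The point is that the code "works" on the happy path: when the first command succeeds, the string `$dirpath/*` expands to the intended directory contents. It's only on failure that the empty variable silently turns it into `/*`, which is exactly the kind of gap between "works" and "not works" being described.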