I agree with the other comments that ChatGPT isn’t really that good for programming: it hallucinates often, and you end up working too hard just trying to figure out what it got wrong. However, I have found a good AI engine, phind.com, that has started to replace my Google searches. It’s just a wrapper for ChatGPT, but it cites its sources so you can verify or dig deeper, provides search engine results in a sidebar, and has upvote/downvote options to help it improve. So it feels like a personal Google “agent” that runs off and googles something for you and comes back with a concise report.
Personally, I just can’t work with a system that lies to me, even only a little, but all the time.
I tried ChatGPT, the Bing bot, and phind.com a few times, and every time I got answers that looked real and correct but were slightly (and a few times completely) wrong.
Every time I have to reread the documentation, check the links, and investigate whether there’s a reason the LLM answered that way: maybe I’m the one who’s wrong this time and the LLM found something I didn’t…
I agree that phind.com gets the best results, but every small inaccuracy here and there irks me and makes me question both myself and the answer as a whole.
Upd: for general questions, like when you’re trying to investigate a new field, technology, or tooling suite, an LLM is very, very good, i.e. when you want something like an overview of a topic you’re interested in.