Mind explaining a bit your workflow at the moment?
I don’t think I understand your point; are you saying there is no benefit to running locally and that websites or APIs are more convenient?
improves my experience coding in unfamiliar languages
Alan Perlis said “A programming language that doesn’t change the way you think is not worth learning.”
So… if you code in another language without actually “getting it”, merely ending up with a usable result, what is actually the point of changing languages?
FWIW I did try a lot (LLMs, code, generative AI for images, 3D models) in a lot of ways (CLI, web based, chat bot), both locally and using APIs.
I don’t use any on a daily basis. I find it exciting that we can theoretically do a lot “more” automatically, but… so far the results have not been worth the effort. Sadly, some of the best use cases are exactly what you highlighted, i.e. low-effort engagement for spam. Overall I find that working with a professional (script writer, 3D modeler, dev, designer, etc.) is not only a lot more rewarding but also more efficient, which itself makes it cheaper.
For use cases where customization helps while quality doesn’t matter much due to scale, i.e. spam, LLMs and related tools are amazing.
PS: I’d love to hear the opinion of an actual spammer; maybe they also think it’s not that efficient.
I like Ollama and recommend it for tinkering, but I admit this “LLM Explorer” is quite neat thanks to sections like “LLMs Fit 16GB VRAM”.
Ollama just works, but it doesn’t help you pick which model best fits your needs.
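As a back-of-the-envelope illustration of why a “fits 16GB VRAM” filter is handy (this is not Ollama’s or LLM Explorer’s actual logic, and the 20% overhead factor is a rough assumption), you can estimate whether a model’s weights fit in a given amount of VRAM from its parameter count and quantization:

```python
# Rough rule-of-thumb VRAM estimate for a quantized LLM.
# Real usage also depends on context length, KV cache and runtime
# overhead; the overhead_factor below is an assumed ballpark.

def estimate_vram_gb(params_billions: float, bits_per_weight: float,
                     overhead_factor: float = 1.2) -> float:
    """Approximate VRAM in GB: weight bytes plus ~20% overhead."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead_factor / 1e9

# A 13B-parameter model at 4-bit quantization:
print(round(estimate_vram_gb(13, 4), 1))   # ≈ 7.8 GB, fits in 16 GB
# The same model at 16-bit precision:
print(round(estimate_vram_gb(13, 16), 1))  # ≈ 31.2 GB, does not fit
```

So the same 13B model fits comfortably quantized but not at full precision, which is exactly the kind of filtering those “fits X GB VRAM” sections do for you.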
Yes, I’m talking about DeArrow. Well yes, but to be more precise: they initially “block” the add-on from working for a few hours, then they let you use it without paying. Slightly different; again, I’m not criticizing, just highlighting that this is not how most add-ons work.
Interesting that this extension is pay-only, the first time I’ve seen this. Again, it makes sense as a stand against a business model that is “free” of cost but too expensive for sanity.
I find YouTube itself to be so adversarial that I don’t even use it anymore.
Still, I’m installing both this and SponsorBlock to symbolically show support for this kind of project, which IMHO shows that I want the Web MY way. I don’t want to browse in whatever way maximizes attention and distraction to increase the profit margins of surveillance capitalism.
You’re just making another assumption; maybe the dorm has optical fiber with more bandwidth and lower latency than most home and business connections. Maybe OP doesn’t care about 120 Hz, only about heat. I don’t think you are getting my point if you are pointing out imperfections in the current technology: it’s possible.
Right, and I mentioned CUDA earlier as one of the reasons for their success, so it’s definitely something important. Clients might be interested in e.g. Google TPUs, startups like Etched, Tenstorrent, Groq or Cerebras Systems, or heck, even designing their own, but are probably limited by their current stack relying on CUDA. I imagine though that if the backlog persists there will be abstraction libraries, at least for the most popular frameworks, e.g. TensorFlow, JAX or PyTorch, simply because the cost of waiting is too high.
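To sketch what such an abstraction layer buys you (a generic, hypothetical example, not any real framework’s API): application code asks for “the best available backend” instead of hard-coding CUDA, so supporting a new accelerator only means registering one more entry:

```python
# Hypothetical hardware-abstraction dispatch layer: backend names and
# availability flags are made up for illustration.

BACKENDS = {
    "cuda": False,   # pretend no NVIDIA GPU is present
    "tpu": False,    # pretend no TPU either
    "cpu": True,     # always-available fallback
}

PREFERENCE = ["cuda", "tpu", "cpu"]

def pick_backend() -> str:
    """Return the first available backend in preference order."""
    for name in PREFERENCE:
        if BACKENDS.get(name):
            return name
    raise RuntimeError("no backend available")

print(pick_backend())  # "cpu" with the availability flags above
```

This is roughly what PyTorch, JAX and TensorFlow already do with device selection, which is why the lock-in sits more in the optimized CUDA kernels underneath than in user code.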
Anyway, what I meant isn’t about hardware or software but rather ROI, namely when Goldman Sachs and others issue analyst reports saying that the promise itself isn’t up to par with actual usage by paying customers.
I’m also no stockologist and I agree, but that’s not my point. The stock should be high, but that might already have been factored in; namely, this is not a new situation, so theoretically it’s been priced in since investors understood it. My point anyway isn’t about the price itself but rather the narrative (or reason, like the examples you mention on backlog and lack of competition) that investors themselves believe.
I’m not sure if you played PCVR in the summer, but imagine that in a tiny room… it’s just way too hot. Again, I’m NOT saying it’s good or bad, I’m only saying you made assumptions about OP’s usage. I’m not sure if you tried CloudXR, but basically it works and it’s not that complex to set up (e.g. 1 h), so it’s relatively faster and cheaper than building and owning a gaming PC.
I don’t understand why you are even arguing about a legitimate usage.
Sure, yet it’s a perfectly legitimate one. I’m not OP; it might be exactly their use case.
You do if you are rendering in the cloud, e.g NVIDIA CloudXR. Not sure what OP plans to do.
Not a lawyer, but if you have an email that says you can, I’d argue it overrides the ToS, assuming the person giving permission actually legally can.
Anyway, I bet what they want to avoid is reselling access, so I believe as long as you don’t pay for access yourself and then resell it to others, you’ll be OK.
Stuff like LLMs or ConvNets (and the likes) can already be used to do some pretty amazing stuff that we could not do a decade ago, there is really no need to shit rainbows and puke glitter all over it.
I’m shitting rainbows and puking glitter on a daily basis, BUT it’s not against AI as a field, it’s not against AI research; rather, it’s against:
I’m sure I’m forgetting a few, but basically none of those criticisms are technical. None of them is about the current progress made. Rather, they are about business practices.
Their valuation is because there’s STILL a lineup a mile long for their flagship GPUs.
Genuinely curious, how do you know where the valuation, any valuation, comes from?
This is an interesting story, and it might be factually true, but as far as I know, unless someone has actually asked the biggest investors WHY they bet on a stock, nobody knows why a valuation is what it is. We might have guesses, and they might even be correct, but they also change.
I mentioned it a few times here before, but my bet is yes, what you mentioned, BUT also because the same investors do not know where else to put their money yet and thus simply can’t jump ship. They are stuck there, and it might again be because they initially thought demand was high and nobody else could fulfill it, but I believe that’s not correct anymore.
Unfortunately it’s part of the marketing; thanks, OpenAI, for that “Oh no… we can’t share GPT-2, too dangerous”, and then… here it is. Definitely interesting, but not world-shattering. Same for GPT-3… but through an exclusive partnership with Microsoft, all closed; rinse and repeat for GPT-4. It’s a scare tactic to lock down what was initially open, both directly and by closing the door behind them through regulation, or at least trying to.
I’m sure whatever the next fad is will require a GPU to run huge calculations.
I also bet it will, cf. my earlier comment on rendering farms and looking for what “recycles” old GPUs https://lemmy.world/comment/12221218 namely that it makes sense to prepare for it now and look for what comes next BASED on the current most popular architecture. It might not be the most efficient, but it will probably be the most economical.
Thanks for that, it was quite interesting, and I agree that completion too early (even… in general) can be distracting.
I did mean AI though: how you manage to integrate it in your workflow to “automate the boring parts”, as I’m curious which parts are “boring” for you and which tools you actually use, and how, to solve the problem. In particular, how you estimate whether something can be automated with AI, how long it might take, how often you are correct about that bet, how you store and possibly share past attempts to automate, etc.