What’s wrong with the tongue test for 9 volts? I know it tickles some but is it actually harmful? I’ve been doing that for over 30 years…
That seems more like an argument for free higher education than for restricting which corpora a deep learning model can train on.
I take it you don’t understand how startups work?
OpenAI is not making any profit and is losing money hand over fist today. Valuation and raising investment rounds aren’t profit.
Eh? That article says nothing about their profit margins. Today they have something like $3.5B in ARR (not really; that’s annualized from their latest peak, and in Feb they had more like $2B ARR). Meanwhile they have operating costs over $7B, meaning they are losing money hand over fist and not making a profit.
I’m not suggesting anything else, just that they are not profitable and personally I don’t see a road to profitability beyond subsidizing themselves with investment.
OpenAI is burning billions of dollars not making profit.
I live upon morsels you happen to drop
This regulation (and similar being proposed in California) would not be applied retroactively.
https://arewereorganizedyet.com/ lol already updated
He dun messed up.
How does lemmy federation work in this case? Conceivably, after being restored from backup, the lemmy.ml instance could see those few hours of lost history as already federated out to other lemmy instances and resync them back as the host instance. Obviously I’m vastly oversimplifying things, but what happens today?
https://github.com/Mozilla-Ocho/Memory-Cache is the actual project if you want to use it.
Basically it’s a Firefox extension that saves a page as a PDF into a directory that is symlinked to your local PrivateGPT install, which then ingests the docs. It doesn’t seem to me that it provides any in-browser querying of PrivateGPT, but I haven’t tried setting it up to confirm that.
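As far as I can tell the wiring is literally just a shared folder; roughly something like this quick Python sketch (the paths, the source_documents folder, and the ingest.py script are my assumptions based on the original privateGPT repo, not something I’ve verified against Memory-Cache):

```python
import os
import subprocess

# Where the Firefox extension drops pages saved as PDFs (assumed path)
saved_pages = os.path.expanduser("~/MemoryCache")

# PrivateGPT ingests whatever sits in its source_documents folder,
# so point that folder at the saved-pages directory
privategpt_dir = os.path.expanduser("~/privateGPT")
docs_link = os.path.join(privategpt_dir, "source_documents")
if not os.path.lexists(docs_link):
    os.symlink(saved_pages, docs_link)

# Ingestion then happens on the PrivateGPT side, e.g. running its ingest script
subprocess.run(["python", "ingest.py"], cwd=privategpt_dir, check=True)
```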
Re thumb-key: do you have any recommended tutorials for getting comfortable with it? I found touch-typing tutorials didn’t really help, both because they’re generally made for desktop environments and because they’re geared towards QWERTY layouts (e.g., get comfortable with the home row first, etc.). I tried forcing myself to use it for a full 24 hours, since the concept makes a ton of sense to me, but I got very frustrated with myself, then dug into the world of which layout to choose, got overwhelmed, and switched back to whatever QWERTY layout Samsung One UI provides on Galaxys.
I think that is overly simplistic. Embeddings used by LLMs definitely do encode a concept of what things mean and how things relate to other things.
E.g., compare the embeddings of Paris, Athens, and London to those of other cities and they’ll have a small cosine distance between them. Compare France, Greece, and England and you get the same. Then, very interestingly, look at Paris - France, Athens - Greece, and London - England and you’ll find the resulting vectors all align (fundamentally, the vector subtraction seems to capture the relationship “is the capital of”). Go a step further and compare those vectors to Paris - US, Athens - US, and London - Canada. You’ll see the previous set isn’t aligned with these nearly as much, but these are aligned with each other (the relationship being something like “is a smaller city in this country, named after a famous city in some other country”).
The way attention works, there is a whole bunch of semantic meaning baked into the embeddings, and by comparing embeddings you can get at pragmatic meaning as well.
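If you want to see this concretely, here’s a minimal sketch using gensim’s pretrained GloVe vectors (the model name, the tokens, and the rough similarity values are my assumptions; any decent word-embedding model shows roughly the same pattern):

```python
import numpy as np
import gensim.downloader as api

# Pretrained GloVe word vectors; downloads the vectors on first run
model = api.load("glove-wiki-gigaword-100")

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Capitals sit close together in embedding space
print(cosine(model["paris"], model["athens"]))    # fairly high

# The "capital minus country" offsets point in roughly the same direction
paris_france = model["paris"] - model["france"]
athens_greece = model["athens"] - model["greece"]
london_england = model["london"] - model["england"]
print(cosine(paris_france, london_england))       # aligned
print(cosine(paris_france, athens_greece))        # aligned

# Offsets against a *different* country (the Paris, Texas / London, Ontario idea)
# align with each other more than with the "is the capital of" offsets
paris_usa = model["paris"] - model["usa"]
london_canada = model["london"] - model["canada"]
print(cosine(paris_usa, london_canada))           # aligned with each other
print(cosine(paris_france, paris_usa))            # noticeably lower
```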
I agree. As a family of 5, many hotels require us to get 2 rooms. Plus, having no option to cook meals usually makes for a much more expensive stay. At least that was the case until a few years ago, when Airbnb went insane with the cleaning fees, cleaning requirements, and all that nonsense.
There are likely a lot of reasons one wouldn’t want to work with DHH. It’s unfortunate, because he is clearly really brilliant but seemingly fails to realize that a huge amount of software engineering involves working together with other people, many of whom aren’t that brilliant.
Did you not read the article? Their whole point is that ChatGPT isn’t good at this type of writing because OpenAI intentionally trained it to produce very non-threatening, corporate-friendly speech. While code-davinci-002 (which predates GPT-3.5/4) is objectively weaker, this limitation wasn’t baked into it, and it demonstrates that LLMs can certainly write jokes and edgy humor.
Don’t forget the furries!
There’s also the UCSF Benioff Children’s Hospital.
This is correct. And given the parent license is MIT I’d just include an MIT LICENSE file in my fork and be done with it.
The Live Music Archive is still down, for example 😞