What’s the progress of the Human Connectome Project?
I remember what the standards committee did to XMPP: users wanted to share photos, send files, and make audio/video calls; XMPP said “we’re not going to standardize that, but each application can use its own extensions”… and then it all went to hell.
Banks are allowed to use fractional reserve lending to lend out several times more than they actually hold; governments merely force banks to have an entity that pinky-swears to write down up to a certain amount in everyone’s accounts in case the banks can’t pay. Neither skill nor labor produce money: central banks produce money as a loan with a repayment obligation, while skill and labor only shift around the fractional obligations banks created from thin air. Crypto, on the other hand, is actually generated as an effect of the skill and labor required to secure its own ledger. People use golf courses to claim carbon offsets they sell in get-rich-quick schemes, or stamp collections, or digital collectibles, or natural gas extraction plants, or a thousand other schemes; everything can be, and is being, used to scam someone somewhere at every moment. That doesn’t mean everything is a scam.
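To put rough numbers on the “several times more” part: with a required reserve ratio r, an initial deposit can expand into roughly 1/r times as much account money, because each loan gets redeposited and lent out again. A minimal sketch in Python; the 10% ratio and $1,000 deposit are illustrative figures only, not any particular country’s rule:

```python
# Fractional reserve expansion: deposit -> loan -> redeposit, repeated.
# The 10% reserve ratio and $1,000 deposit are illustrative numbers only.

def total_deposits_created(initial_deposit: float, reserve_ratio: float,
                           rounds: int = 100) -> float:
    """Sum the deposit/loan chain over a number of lending rounds."""
    total, deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1.0 - reserve_ratio)  # keep the reserve, lend the rest
    return total

print(total_deposits_created(1_000.0, 0.10))  # ~9,999.7 after 100 rounds
print(1_000.0 / 0.10)                         # 10,000.0: the geometric limit
```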
Someone had real gold coins in their coffer, then someone convinced them that credit written down as numbers on slips of paper had the same value, and that they could trust the bank’s computers to keep track of the total… and everyone clapped.
Recently saw a report on cocaine, apparently the prices haven’t changed since the 1990s… just the purity has gone down and it now comes laced with fentanyl.
Shining a light on a problem is a good first step toward making people realize there is a problem in the first place.
What the fuck are you going to do about it?
Start a meme campaign targeted at countries with privacy legislation, aimed at making their future governments ask for higher bribes… er, more lobbying… before signing away taxpayer money on Microsoft contracts…
I mean, ideally Microsoft would rethink its approach, like Meta is rethinking its approach with Instagram, but let’s start with something simple.
Too late, it has already learned it:
Default (GPT-3.5)
User: Translate the following text into Esperanto: “I’m just going to start posting in Esperanto. Even AI won’t be interested in learning Esperanto.”
ChatGPT: “Mi ĵus komencos afiŝi en Esperanto. Eĉ la intelekta artifiko ne estos interesita lerni Esperanton.”
Didn’t Mozilla get most of its funding from Google for promoting its search engine? Or has that changed?
> encrypted body of the message
Encrypted what? LinkedIn lets you add a key/cert to send you encrypted emails?
Unless you followed up by installing gpg… in which case you failed. There are tons of uses for it, not necessarily encrypting emails (or more precisely, it kind of sucks at encrypting emails).
As has been prophesied… and it’s only starting.
“Scared” is a strong word… more like “curious”, to see how it goes. I’m mostly waiting for the “autonomous rifle dog fails” videos, hoping to not be part of them.
Part of the reason for “adding AI” to everything, even “dumb AI”, is to reduce reaction times and increase obedience… er, mission completion rates. Meaning: to cut the human out of the loop.
It’s being sold as a “smart” move.
Nukes are becoming a problem because China is ramping up production, and it will be only natural for India to do the same. From a two-way MAD situation, we’re getting into a four-way Mexican standoff. That’s… really bad.
There won’t be an “AI insurgency”, just enough people plugging in dumb AIs that tell them they can win the standoff. Let’s hope they don’t also put AIs in charge of the multiple nuclear launch buttons… or let the people in charge check with their own dumb AIs, like on a smartphone, telling them to go ahead.
Climate change is clearly a done deal, unless we get something like unlimited fusion power to start some terraforming projects (which seems unlikely).
You have a point with insects, but I think that’s just linked to climate change; populations will migrate wherever they get something to eat, even if that turns out to be Antarctica.
We used to run “machine learning” and “neural networks” over 25 years ago. The “AI” term has always been kind of a sci-fi thing, somewhere between a buzzword and a moving target, and undefined, since we lack a fixed, comprehensive definition of “intelligence” to begin with. The limiting factors of the models have always been the number of neurons one could run in real time and the availability of good training data sets. Both have increased over a million-fold in that time, progressively turning more and more previously intractable problems into solvable ones, to the point where the results are equal to or better than what people can do, and/or faster.
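For scale, here’s roughly what a late-1990s “neural network” amounted to: a handful of neurons trained with plain backpropagation on a toy problem, something a desktop of that era could handle. A generic sketch, not any specific historical code:

```python
# A tiny multilayer perceptron learning XOR by backpropagation;
# generic textbook stuff, of the size we could run 25+ years ago.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR truth table

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> 4 hidden neurons
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> 1 output neuron

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # squared-error gradient at output
    d_h = (d_out @ W2.T) * h * (1 - h)       # backprop into the hidden layer
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

That whole model is 17 parameters. The difference between then and now isn’t the math, it’s the million-fold jump in neurons and training data.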
Right now, there are supercomputers out there orders of magnitude more capable than whatever runs the likes of ChatGPT, DALL·E, or any of the other public-facing "AI"s that made the news. Bigger ones keep getting built… and memristors are coming, set to become a game changer the moment they can be integrated at anywhere near current GPU/CPU densities.
For starters, a supercomputer with neural network processing power equivalent to a human brain’s is expected for 2024… that’s next year… but it won’t be able to “run a human brain”, because we lack the data on how “all of” the human brain works. It will likely be made obsolete by machines several orders of magnitude more powerful well before we can simulate an actual human brain… but the question is: do we need to? Does a neural network need to mimic a human brain in order to surpass it? A calculator already surpasses us at arithmetic, and it doesn’t use a neural network at all. At what point does the integration of some size and kind of neural network with some kind of “classical” computer start running circles around any human… or around all of humanity taken together?
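The usual back-of-the-envelope behind “brain-equivalent by 2024”, for what it’s worth; every figure below is a rough literature ballpark that varies by an order of magnitude or more depending on who you ask:

```python
# Rough "brain-equivalent compute" arithmetic; all figures are common
# ballpark estimates, not measurements.

neurons = 86e9              # ~86 billion neurons in a human brain
synapses_per_neuron = 1e4   # ~10,000 synapses per neuron, on average
firing_rate_hz = 10         # ~10 Hz average firing rate (very rough)

# treat one synaptic event as one multiply-accumulate:
brain_ops = neurons * synapses_per_neuron * firing_rate_hz
print(f"{brain_ops:.1e} synaptic ops/s")        # ~8.6e+15

exaflop = 1e18  # what an exascale supercomputer does per second
print(f"headroom: {exaflop / brain_ops:.0f}x")  # ~116x
```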
And of course we’ll still have to deal with the issue of dumb humans telling dumb "AI"s to do things way over their heads, and trusting the results… but I’m afraid any attempt at “regulation” is going to end up like “international law”: those who want to, obey it; those who should, DGAF.
Even if all the tech giants and all the lawmakers agreed on the strictest regulations imaginable, like giving all "AI"s the same treatment as weapons of mass destruction, there’s a snowball’s chance in hell that any military in the world would care about any of it.
Then we’ll need an AI running in ring -10 of every CPU to make sure you don’t run some unlicensed AI…
At some point ML (machine learning) becomes indistinguishable from BL (biological learning).
Whether there is any actual “intelligence” involved in either, hasn’t been proven yet.
The real risk is that humans will use AIs to assess the risks/benefits of starting a war… and an AI will give them the “go ahead” without considering the mutually assured destruction that follows from everyone else doing exactly the same.
It’s not that AIs will get super-human, it’s that humans will blindly trust limited AIs and exterminate each other.
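Here’s that failure mode in miniature: give each side’s “advisor” a payoff table (numbers invented purely for illustration) and let it optimize unilaterally, without modeling the other side running the exact same optimizer:

```python
# payoffs[(my_move, their_move)] -> my payoff; made-up numbers with a
# classic prisoner's-dilemma structure.
payoffs = {
    ("wait",   "wait"):   0,     # status quo
    ("strike", "wait"):   10,    # "win" the standoff
    ("wait",   "strike"): -100,  # get hit first
    ("strike", "strike"): -90,   # mutually assured destruction
}

def naive_advisor(p_they_strike: float) -> str:
    """Pick the move with the best expected payoff, treating the other
    side as a fixed probability -- NOT as a mirror running the same code."""
    def ev(move: str) -> float:
        return (p_they_strike * payoffs[(move, "strike")]
                + (1 - p_they_strike) * payoffs[(move, "wait")])
    return max(["wait", "strike"], key=ev)

for p in (0.01, 0.5, 0.99):
    print(p, naive_advisor(p))  # "strike" every time: it strictly dominates
# ...yet if both sides take that advice, each ends up at -90 instead of 0.
```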
> three-way race between AI, climate change, and nuclear weapons proliferation
Bold of you to assume that the people behind maximizing profits (high-frequency trading bot developers) and the people behind weapons proliferation (wargame strategy simulation planners) aren’t using AI… or haven’t been using it for well over a decade… or won’t keep developing AIs that blindly optimize for their limited goals.
The first StarCraft AI competition was held in 2010; think about that.
Who’s paying him? Seriously: