Based on the comments it appears the prompt doesn’t really even fully work. It mainly seems to be something to laugh at while despairing over the writer’s nonexistent command of logic.
The simple solution here is to record to flash when the Wi-Fi dies. Yes, wired is nice, but half of these are consumer-installed.
Scam Altman-Fried strikes again
The whole idea behind Manjaro’s update scheme is just generally a landmine. LTS releases typically work by maintaining an older branch that still gets updates; that way you delay features, not patches. If you run Firefox on such a system it will be Firefox ESR with this week’s patches (this is kinda important for security reasons). Manjaro doesn’t do this; instead it just holds everything back artificially by one or two weeks.
Bluntly, doing this with a browser or other security-critical software should be a crime.
Manjaro just generally feels very amateurish, and its history of taking down Arch’s servers isn’t helping here.
It’s worth pointing out here that this script was probably written by a human.
Edit: reporting now indicates that it was human-written: https://arstechnica.com/ai/2024/01/george-carlins-heirs-sue-comedy-podcast-over-ai-generated-impression/
From the perspective of a computer engineer, SSDs are painfully slow. Waiting for data on disk is slow enough that it is typically handled by asking the OS for the data and having the OS schedule another process onto the CPU while it waits. RAM is also slow, although not nearly as slow. Ideally you want your data in the L1 cache, which is fast enough to only minimally stall the CPU. The L2 and L3 caches are slower but larger, and more likely to have the data you want. If the caches are empty and you have to read RAM, your CPU will either do a lot of speculative execution or, more likely, stall.
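If you want to see those gaps yourself, here’s a rough sketch of a pointer-chasing microbenchmark in C (my own toy example; the working-set sizes and iteration count are ballpark guesses, and exact cache sizes vary by CPU). It walks a single random cycle so the prefetcher can’t hide the latency, and reports the average time per dependent load; on a typical desktop you’ll see a jump from a couple of nanoseconds for the L1-sized set to very roughly 100 ns once the working set only fits in RAM.

```c
/* Rough sketch, not a rigorous benchmark. Compile with e.g. gcc -O2. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double chase_ns_per_load(size_t n_elems, size_t iters)
{
    size_t *next = malloc(n_elems * sizeof *next);
    if (!next) { perror("malloc"); exit(1); }

    /* Identity array, then Sattolo's shuffle: yields a single random cycle,
     * so the traversal touches every element and the hardware prefetcher
     * can't guess the next address. */
    for (size_t i = 0; i < n_elems; i++) next[i] = i;
    for (size_t i = n_elems - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;                    /* j < i */
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    struct timespec t0, t1;
    volatile size_t idx = 0;                              /* keeps the loop alive */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t k = 0; k < iters; k++)
        idx = next[idx];                                  /* each load depends on the last */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    free(next);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    return ns / (double)iters;
}

int main(void)
{
    /* Working sets sized to land roughly in L1, in L2/L3, and in RAM. */
    size_t bytes[] = { 4u << 10, 256u << 10, 64u << 20 };
    for (int i = 0; i < 3; i++) {
        size_t n = bytes[i] / sizeof(size_t);
        printf("%6zu KiB working set: ~%.1f ns per dependent load\n",
               bytes[i] >> 10, chase_ns_per_load(n, 10u * 1000 * 1000));
    }
    return 0;
}
```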
Speculative execution on CPUs is a desperate attempt to deal with the fact that all memory access is slow by just continuing through the code as if you already know what is in memory. If the speculation turns out to be wrong, a lot of work gets thrown out (hopefully nothing unsound happens) and the delay is more noticeable.
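Here’s another toy sketch (again my own illustration, nothing authoritative) of the cost of getting that speculation wrong: the classic sorted-vs-unsorted branch demo. The work is identical in both runs; only the predictability of the branch changes. Compile around -O1, since at higher optimization levels the compiler may turn the branch into a conditional move or vectorize the loop, which hides the effect.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define N      (1 << 20)
#define PASSES 100

static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

/* Sum every element >= 128 and report the wall-clock time taken. */
static double time_sum(const int *data, long long *out)
{
    struct timespec t0, t1;
    long long sum = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int pass = 0; pass < PASSES; pass++)
        for (int i = 0; i < N; i++)
            if (data[i] >= 128)          /* ~50/50 and unpredictable on random data */
                sum += data[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    *out = sum;
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    int *random_data = malloc(N * sizeof *random_data);
    int *sorted_data = malloc(N * sizeof *sorted_data);
    if (!random_data || !sorted_data) { perror("malloc"); return 1; }

    for (int i = 0; i < N; i++) random_data[i] = rand() % 256;
    memcpy(sorted_data, random_data, N * sizeof *sorted_data);
    qsort(sorted_data, N, sizeof *sorted_data, cmp_int);   /* same values, predictable branch */

    long long s_rand, s_sort;
    double t_rand = time_sum(random_data, &s_rand);
    double t_sort = time_sum(sorted_data, &s_sort);
    printf("unsorted: %.2fs   sorted: %.2fs   (sums match: %lld == %lld)\n",
           t_rand, t_sort, s_rand, s_sort);

    free(random_data);
    free(sorted_data);
    return 0;
}
```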
Bluntly, an SSD-only system would probably be an order of magnitude slower. I’m also not sure switching to a new process (or even thread) while loading from SSD would be viable without RAM, as it would likely invalidate a lot of cache, triggering more loads.
Anti-virtualization checks are sometimes used in copy protection. Altering the virtualization layer to defeat those checks might count as circumvention under the DMCA.
In new games sure. I was referring to old titles
Getting anti-cheat that technically already works enabled on Linux has been a lot of work, and Epic still won’t enable it. Piracy-protection systems will also be an issue. Most EA games inspect your CPU on startup to see if they like it (I think this uses VMProtect and some non-OS x86 instructions, but don’t quote me on that). These kinds of anti-virtualization checks are really common (not just in games; ProctorU and LockDown Browser do them too). I don’t think Valve running an open virtualization layer will be well received by these companies, and they will probably ban it from running their games. MMOs (due to botting) and anything with anti-cheat will look particularly askance at this. I also suspect Valve won’t want to try hiding the VM’s signatures, as that borders on violating the DMCA.
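For the curious, the simplest form an anti-virtualization check can take is reading the CPUID “hypervisor present” bit; the sketch below (assuming x86-64 with GCC or Clang) does just that. Real protectors layer far more on top of this (timing checks, vendor leaf strings, device and driver fingerprints), so treat it as an illustration, not how any particular game actually does it.

```c
#include <stdio.h>
#include <cpuid.h>   /* GCC/Clang helper for the CPUID instruction */

/* CPUID leaf 1, ECX bit 31 is reserved as 0 on bare metal and set to 1
 * by hypervisors ("hypervisor present"). This is about the cheapest
 * anti-virtualization check there is. */
static int hypervisor_present(void)
{
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 0;                /* leaf unsupported; treat as bare metal */
    return (ecx >> 31) & 1;
}

int main(void)
{
    puts(hypervisor_present() ? "Hypervisor bit set: looks like a VM"
                              : "Hypervisor bit clear: looks like bare metal");
    return 0;
}
```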
Newer games will probably get ported if a large part of the market buys into ARM. Unity stuff might get re-released, since it’s .NET, if the publishers can be bothered. Minecraft Java Edition will also always love you (the launcher might not, though).
The main advantage of ARM right now is that there are low-power cores available. The actual instruction set is largely unrelated to this advantage. If Intel or AMD put more serious effort into power efficiency, most of that advantage goes out the window.
As for instruction set changes impacting what software you can run, I think that is still a big issue. Yes, porting to ARM is straightforward in more modern programming environments, but most software under active development has a lot of old cruft that won’t easily port, if the engineers can even be convinced to touch it. Most businesses are dependent on old software, not all of which is still maintained. Most gamers are even more tied to old software that is not going to get ported and often has annoying anti-virtualization checks (see games breaking on systems with Intel E-cores enabled).
I am not sure how large the modern non-gaming personal PC market is (tablets, phones, work computers, and Chromebooks have probably taken a chunk out of it), but that could be in play.
Trilium is great. It has a copy of Excalidraw with history, which is nice. You can also automate things inside it with scripting
Decades from now I will have to explain what the “3D Objects” folder is to some kid
Probably the wheels falling off their cars. But I don’t know, I actually read the article
Not really seeing anonymous sources cited here. This looks like good old-fashioned speculation.
I’m not holding my breath here. People seem far too willing to put up with stuff they wouldn’t otherwise because it’s Twitter. Far too many news orgs still point people to Twitter accounts.
I will not stand for slander of the Arch Wiki.
Also, start with Linux Mint XFCE (unless they’ve fixed the stability problems with Cinnamon)
The problem isn’t hosting, it’s paying for the production of content. His existing stuff will probably stay up, and I find it unlikely that YouTube will take down the mean things John Oliver has said about China. The issue with shows that take that amount of research and production is that they need a lot of money to keep making content.
I feel personally attacked. Yes, I’ve actually done this (minus sending them money). I had a server (which I’m pretty sure sent headers to the effect that it ran x86) with logs indicating someone had tried to download an ARM IoT botnet onto it. So I downloaded the binary and ran it through a decompiler, and found a UPX stub; the rest was compressed. So I tried the UPX unpacker. That didn’t work because it was built with a modified copy of UPX. So I hauled out a Raspberry Pi, reflashed the OS, and tried running it under GDB in hopes of just dumping the unpacked bit from memory. Nothing. So I downloaded QEMU and set up an aarch64 ARMv9 image. Still nothing. So I tried 32-bit ARM, again in QEMU. At this point I gave up
Prigozhin tries to march on C8 but he chickens out and winds up hiding in F7 instead
Probably wiping the process control code from the systems; that code contains tons of fiddly, hard-to-find constants and other information.