  • mlg@lemmy.world to Technology@lemmy.world: “What the hell Proton!”

    is this saying you can still be fucked on public WiFi even if you connect through a VPN?

    The quick and dirty answer is no, unless an attacker can figure out a way to get your VPN to strip its encryption (doubt you’ll ever see this outside something like DEF CON, but you never know lol).

    The long answer is that not all VPNs are equal depending on what you are trying to accomplish.

    A VPN will simply tunnel your internet traffic over an encrypted channel to a server anywhere in the world.

    On a technical level, this means your internet traffic is guaranteed to be unreadable until it exits the VPN server, which does mean it can make using a public wifi/hotspot more secure.
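
    As a concrete illustration, here’s roughly what a WireGuard client config for such a tunnel looks like (a minimal sketch; the keys, addresses, and endpoint are placeholders, not a working setup):

    ```
    [Interface]
    # This machine's key and its address inside the tunnel (placeholders)
    PrivateKey = <client-private-key>
    Address = 10.0.0.2/32

    [Peer]
    # The VPN server, which can be anywhere in the world
    PublicKey = <server-public-key>
    Endpoint = vpn.example.com:51820
    # 0.0.0.0/0 and ::/0 route *all* traffic through the encrypted tunnel
    AllowedIPs = 0.0.0.0/0, ::/0
    ```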

    Of course privacy is actually a massive security iceberg, so some caveats in no particular order are:

    • Modern protocols like HTTPS already encrypt your traffic, although someone can still mess with stripping and poisoning techniques, so having a VPN running is extra peace of mind.

    • A VPN won’t give you privacy from companies like Google, Facebook, etc. if you don’t also use a fresh browser session (incognito), because they can easily track your identity via cookies and accounts.

    • Even if you use a fresh session and a dedicated VPN account, the aforementioned tech companies can still identify you via statistical modeling based on your activity. They don’t really care what your IP is unless they need to pay tax in a country or follow some random media-block law.

    • Your privacy from the government is nonexistent because most VPN companies will share your info if the government requests it.

    • Lots of VPNs choose to block torrenting so they don’t have to deal with protecting their customers (although lots also don’t).

    • Even if you set up your own VPN via a VPS in an anonymous way, the government can still watch your exit traffic and link the origin back to you by inspecting the VPN packets (which is why Tor exists, a much different solution to the privacy problem).

    You should use a VPN if:

    • You want to torrent copyrighted material (yar har piracy)
    • You want to spoof your location to get access to geolocked content
    • You want to negate an attacker’s ability to mess with your connections on public WiFi
    • You want a secure channel between two of your own locations (make two separate networks accessible to each other, or VPN to home/work to access resources on that network).
    • ^ same thing but remote access etc.

    You should not use a VPN if:

    • You need to hide what you do on the internet from the government (See Tor, journalists stuck in shithole regimes).
    • You want privacy from internet megacorps (you’d have to keep fresh sessions or use them sparingly, which you can 90% do without a VPN anyway)
    • You want to hide anything after it reaches the VPN server (public VPN services, doesn’t apply if you VPN to something you physically own and access only its local resources).

    After all that, the use case basically becomes:

    • VPN to within your own country to secure your connection on public WiFi
    • VPN to home or work to access network
    • VPN with a good public service to other countries to watch or torrent media



  • I don’t know why the guy just assumed every Linux and BSD machine runs cups-browsed by default.

    It took me literally 5 seconds to check that it’s disabled on Fedora by default.
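
    For anyone who wants to repeat the check, it’s one command on a systemd distro like Fedora:

    ```
    # Prints "disabled" on a stock Fedora install
    # (or an error if the unit isn't installed at all)
    systemctl is-enabled cups-browsed

    # Confirm it isn't currently running either
    systemctl is-active cups-browsed
    ```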

    Then he wrote a whole paragraph about how no one should use CUPS for printing because, based on his own analysis, it’s some insanely crappy and insecure system.

    Which is actually stupid, because the only alternative is Windows, which is universally known for printer driver and spooler vulnerabilities.

    Then he got mad at the maintainer for patching before his disclosure…


  • After 15 years of Wayland development hell, I’m honestly open to anything. The problem is I can definitely see an experimental branch being just as heavily scrutinized. One of the core issues highlighted was that features and requests were rejected because of hypotheticals and the maintainers trying to avoid fragmentation like early Xorg.

    Basic features from X11 are still missing. Everyone ended up somewhat fragmenting anyway via compositors, because Weston wasn’t really useful for developers beyond a demo. Wayfire started out as a Compiz redux, and now it’s being considered by several DEs like XFCE as the default compositor they should standardize around.

    Regardless, I really hope they nail it down in the next year, because the halfway migration to Wayland is seriously harming the Linux desktop, especially when lots of frontend UI was done perfectly decades ago on X11 while Wayland still doesn’t properly support new features like HDR.



  • People fear the same thing about Valve.

    One wrong person and we could all end up in the same money milk machine as EA.

    I know people complain about Linus hurling insults at merge requests, but his rigidness is what keeps the kernel viable. If it weren’t for him, Google would have already shit all over it with a mega fork and essentially cornered the market like they did with Android and HTTP/3.

    Both are technically “open source”, yet Google essentially dictates what they want or need for their economic purposes, like ignoring JPEG XL, forcing AVIF, making browsers bloaty, and using Manifest V3. Android is even worse and may as well be considered separate from Linux, because it’s just Google’s walled garden running on the Linux kernel.

    He is open to new technology, but he understands the fundamental effects of design choices and will fight people over them to prevent the project from fracturing due to feature-breaking changes, especially ones involving userspace.



  • I’d support anything to see Nintendo get kicked in the nuts for shutting down yuzu, which could have easily continued legally by removing like 2 paragraphs and probably a few lines of code.

    Also Citra which was 100% legal.

    EDIT:

    I also wanna mention that current Pokemon gameplay sucks, and I would also kill to see GameFreak’s billion-dollar franchise burn. Maybe 15 or 20 years ago, when hardware was “limited”, a low-asset turn-based RPG focused around pocket monsters was a fun game. Ain’t no way a game with PS1-looking graphics and practically zero changes to the formula can be considered a AAA title in 2024. And even then they’ve somehow made it into an A-button-press simulator by nuking the difficulty.

    Being completely honest, the DS hardware was not that limited (it got 2 generations with significant upgrades despite being the same console). BW2 was probably the golden era, with very well done animated sprites, overworld, features, etc. The moment the series hit the 3DS, it started showing its cracks, with GF continuing to develop the games without expanding the team to meet development demand.

    Palworld isn’t even the first challenger. TemTem gained some popularity purely for showing how much of an upgrade it was from Pokemon only a few years ago.


  • I’d say about 99% is the same.

    Two notable things that were different were:

    • Podman’s config file is different; I needed to edit where containers are stored (graphroot in /etc/containers/storage.conf) since I have a dedicated location I want to use
    • The preferred method for running Nvidia GPUs in containers is CDI, which imo is much more concise than Docker’s Nvidia GPU device setup.

    The second one is also documented on the NVIDIA Container Toolkit site, and it’s very easy to edit a compose file to use CDI instead.
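
    For reference, the CDI workflow boils down to generating a spec once and then passing a CDI device name to the container. These are the commands as shown in the NVIDIA Container Toolkit docs; exact paths and flags may differ by distro:

    ```
    # Generate the CDI spec for the installed NVIDIA driver
    sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

    # List the device names the spec exposes
    nvidia-ctk cdi list

    # Run a container against the GPU via CDI
    # (--security-opt is needed on SELinux distros like Fedora)
    podman run --rm --device nvidia.com/gpu=all \
        --security-opt=label=disable ubuntu nvidia-smi -L
    ```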

    There are also some small differences here and there, like podman asking for a preferred remote source instead of defaulting to Docker Hub.


  • Other than the stuff already mentioned here, people (probably fairly accurately) thought Linus was a salesman douche with no real knowledge of computers back in the NCIX days.

    It was partially true; he was basically a warehouse manager who happened to get lucky making a successful YouTube channel, which he turned into his own media business after NCIX died.

    But that’s the key term: it’s Linus Media Group. Their top goal is to create content that generates views for revenue, not content that might be useful or takes a lot of effort to make.

    Which is why you will almost never see any heavy IT people watching his videos. There are so many examples of people running entire data centers in their houses better than LMG could with an actual budget, server space, and hardware. They used to use Windows Server for everything because they didn’t have anyone who knew Linux lol.


  • Yeah I should have mentioned the context is FBLA, and Google partially fixed the prompt.

    Original from a few weeks ago:

    BPA is another student org, Business Professionals of America.

    The AI ignores the subject context and just goes with whatever the most common meaning of the acronym is.

    They lazy-patched it by making the model do a subject check on the result, but not on the prompt, so it still comes back with the chemical lol.




  • iirc due to some antitrust lawsuits, they cannot do that anymore.

    But it’s still easy to coerce OEMs into shipping Windows, because Microsoft offers stuff like quick support and standardized IT support.

    If an OEM ships Linux, they don’t want to have to make an entire department to help troubleshoot the OS for users who will inevitably call for help. Ignoring them would only result in returns and loss of sales.

    I think some ThinkPads actually do ship with a distro like Red Hat or openSUSE as an option, but that’s because ThinkPads are very popular in the business space, which means lots of CS people use them, so it helps save some cost from a Windows license that won’t get used.

    Like I said though, if Windows really dives into the deep end, I think a potential market would open up and some OEM will take a chance on it.


  • Not to be that guy but why not use Curve25519?

    I still remember all the conspiracies surrounding NIST and now 25519 is the default standard.

    In 2013, interest began to increase considerably when it was discovered that the NSA had potentially implemented a backdoor into the P-256 curve based Dual_EC_DRBG algorithm.[11] While not directly related,[12] suspicious aspects of the NIST’s P curve constants[13] led to concerns[14] that the NSA had chosen values that gave them an advantage in breaking the encryption.[15][16]
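
    For anyone who wants to prefer Curve25519 over the NIST curves in practice, common tooling supports it out of the box, e.g.:

    ```
    # SSH: Ed25519 keys (Curve25519-based) instead of ECDSA over NIST P-256
    ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519

    # OpenSSL: X25519 key for key agreement
    openssl genpkey -algorithm X25519 -out x25519.pem
    ```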


  • There’s plenty of videos on YouTube of people trying Linux for the first time, and it can be painful to watch how poorly they try to fix something or unintentionally break their system.

    That’s not to say windows is any better, because they’d do the same thing there.

    But people will only switch permanently if windows really falls off hard, which may or may not happen.

    You have to think of it like how people first learned to use a mouse and double-click back in the 90s. It’s not immediately intuitive for everyone; they often have to start over.

    That being said, having a big OEM ship Linux would do wonders, but Microsoft fights hard to make sure that almost never happens.


  • Yeah that means the driver is loaded fine, but it looks like it is selecting the iGPU by default. You have several options to fix this.

    1. You can disable integrated graphics in the BIOS if there is an option for it. This is the easiest, but if you’re on a laptop, leaving it enabled might save some battery, in which case go to option 2.

    2. You can tell either each program or the OS to prefer the Nvidia GPU. The way you do this also depends on how the GPU is set up (most laptops have it as secondary).

    You can test this by running __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxgears in one terminal, and nvidia-smi in a second terminal to verify that a program (in this case glxgears) is running on the Nvidia GPU.

    I’ll try to find a good guide, but depending on the setup, it could be a simple MUX switch you can flip to change between the iGPU and the Nvidia GPU, or some preference selector tool (I think it was called prime?).

    It’s confusing because lots of laptops essentially use the Nvidia GPU as an offload device, which makes it a bit tricky to coax it into using the correct one.
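
    In the meantime, the offload env vars can be wrapped in a tiny script, a sketch along the lines of Arch’s prime-run (the script name here is made up):

    ```
    #!/bin/sh
    # nv-run: run a program on the Nvidia GPU via PRIME render offload
    export __NV_PRIME_RENDER_OFFLOAD=1
    export __GLX_VENDOR_LIBRARY_NAME=nvidia
    export __VK_LAYER_NV_optimus=NVIDIA_only   # same idea for Vulkan apps
    exec "$@"
    ```

    Then nv-run glxgears (or any other program) should show up in nvidia-smi.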



  • It’s not 90s tech though, especially for China.

    Their latest x86 CPU is comparable in cycle speed to Kaby Lake, which is only 8 years old, except it comes with more cores and supports DDR5, so it might as well be a first-gen Ryzen 7.

    They still haven’t revealed how they fabricated it or what process they used, probably because they want to keep the production chain and size a secret.

    Enriching uranium and making nukes, in comparison, is banging rocks together.

    No it isn’t, especially for weapons-grade uranium. Look at Iran: they’ve been perpetually “10% away from a bomb” for more than 20 years and still haven’t succeeded.

    The ridiculously high precision required to make the centrifuges, and then the scale required to make hundreds of thousands of them per plant just to reach 20% enrichment, is insane.

    Reaching 90% is like taking all that and ramping it up several hundred times.

    The only reason Pakistan succeeded was because they got (stole) the critical design parameters needed for the centrifuges to work, along with a rather brilliant metallurgist who took several years to figure out how to manufacture the centrifuges consistently at scale, plus an entire team of physicists just to work out the centrifuge physics in a way that let them maximize refinement across dozens of design variables. It still took them a decade, but they eventually got it.

    It’s a pretty good comparison to lithography machines, which require similarly extreme precision, with each decrease in transistor size demanding an order of magnitude more precision in quality engineering.

    Also I don’t think the US is involved in this, at least not directly:

    I doubt it, because they’ve been making it a pretty big deal for the past 4 years. Tons of Chinese tech OEMs are blacklisted, and the trade war keeps escalating with new bans/tariffs/exclusions every year. Plus they dumped billions of dollars into Intel and TSMC in a desperate attempt to build a fab on the home front.

    It doesn’t matter that it’s DUV, they just want to ensure they make it harder for China to catch up, so even last gen tech is on the line because they believe it can be studied and reverse engineered.

    imo it’s a stupid, shortsighted policy, but it’s nothing new for the US to pull these kinds of moves. I just wish for once they’d see that it’ll only delay the inevitable, and maybe they should put that effort into actually making quality products at home instead of throwing money at chip OEMs and expecting them to move out of Taiwan overnight.