• 0 Posts
  • 69 Comments
Joined 1 year ago
Cake day: August 8th, 2023


  • I was just about to post the same thing. I’ve been using Linux for almost 10 years and I never really understood the folder layout in this much detail anyway. My reasoning always was that /lib was more system-wide and /usr/lib was for stuff installed for me only. That never made sense though, since there is only one /usr and not one for every user. But I never really thought about it further, I just let it be.
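
    For what it’s worth, on most modern distros the split turns out to be mostly historical: with the “usr merge”, /lib, /bin and /sbin are usually just symlinks into /usr nowadays. A tiny Python sketch (nothing distro-specific assumed, just a quick look at the top-level dirs) shows this on a merged-/usr system:

    ```python
    # Minimal sketch: check whether the classic top-level dirs are just symlinks into /usr.
    # On a merged-/usr distro you'd typically see /lib -> usr/lib, /bin -> usr/bin, etc.
    from pathlib import Path

    for name in ("/lib", "/lib64", "/bin", "/sbin"):
        p = Path(name)
        if p.is_symlink():
            print(f"{name} -> {p.readlink()}")   # Path.readlink() needs Python 3.9+
        elif p.exists():
            print(f"{name} is a real directory")
        else:
            print(f"{name} does not exist here")
    ```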


  • I try to steer as many people as I know to Signal, but I don’t want to be the type of person who accepts no compromise, so I also use a bunch of others. WhatsApp is the most common, as pretty much everyone here in the Netherlands uses it. I used to use Telegram, but nowadays I trust it less than WhatsApp and all my Telegram chats have moved to Signal. SMS is only there as a backup and for older people who don’t use other apps. And Discord is there for people who want their messages to never be read, because that app is a dumpster fire that constantly makes me miss messages.



  • No, people just don’t like crypto because it’s a huge waste of energy that has no use for the average person at the moment and is only used by rich people to get richer without much regulation. Don’t get me wrong, it may well become useful when used correctly in the future: not wasting as much energy by ditching proof of work, becoming actually useful for normal transactions, etc. But right now it’s just an overhyped technology for obnoxious cryptobros.



  • Machine learning and compression have always been closely tied together. Machine learning tries to learn the “rules” that describe the data rather than memorizing all the data.

    I remember implementing a paper older than me in our “Information Theory” course at university that treated the creation of a decision tree as compression. Their algorithm considered the cost of sending the decision tree itself plus all the exceptions to it. If a node in the tree increased the overall message size, it would simply be pruned (rough sketch of that idea below). This way they ensured that you wouldn’t draw conclusions from very little data and would only keep the big patterns in the data.

    Fundamentally it is just compression; it’s just a much better method of compression than all the models we had before.

    EDIT: The paper I’m talking about is “Inferring Decision Trees Using the Minimum Description Length Principle” - J. Ross Quinlan & Ronald L. Rivest
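
    To make the pruning criterion concrete, here’s a minimal, hypothetical Python sketch of the MDL-style decision. The bit counts and helper names are mine for illustration, not the paper’s exact encoding; the idea is just that a subtree is kept only if describing it costs fewer bits than the exceptions it removes.

    ```python
    # Rough MDL-style pruning check, in the spirit of Quinlan & Rivest (1989).
    # NOTE: the bit costs and function names here are illustrative assumptions,
    # not the paper's actual encoding scheme.
    import math

    def exception_bits(n_errors: int, n_total: int) -> float:
        """Bits needed to point out which of n_total examples are exceptions."""
        if n_errors == 0 or n_total == 0:
            return 0.0
        return math.log2(math.comb(n_total, n_errors))

    def keep_or_prune(subtree_bits: float, subtree_errors: int,
                      leaf_bits: float, leaf_errors: int, n_examples: int) -> str:
        """Compare total message length with and without the subtree."""
        keep_cost = subtree_bits + exception_bits(subtree_errors, n_examples)
        prune_cost = leaf_bits + exception_bits(leaf_errors, n_examples)
        return "keep" if keep_cost < prune_cost else "prune"

    # A complicated subtree that only fixes one extra mistake isn't worth its bits:
    print(keep_or_prune(subtree_bits=40.0, subtree_errors=1,
                        leaf_bits=3.0, leaf_errors=2, n_examples=10))   # -> "prune"
    ```

    The point is that a node only survives if it “pays for itself” in saved exception bits, which is exactly the compression framing.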


  • I’m on Arch (actually a converted Antergos) and I have an NVIDIA card as well. My first attempt a few months ago was horrible, bricking my system and requiring a bootable USB and a whole evening to get Linux working again.

    My second attempt was recently, and it went a lot better. X11 no longer seems to work, so I’m kinda stuck with Wayland, but it feels snappy as long as my second monitor is disconnected. I’ve yet to try some gaming. My main monitor is a VRR 144Hz panel with garbage-tier HDR. The HDR worked out of the box on KDE Plasma, with the same shitty quality as on Windows, so I immediately turned it off again. When my second monitor is connected I get terrible hitching: every second or so the screen just freezes for hundreds of milliseconds. Something about it (1280x1024, 75Hz, DVI) must not make Wayland happy. No settings seem to change anything; only physically disconnecting the monitor helps.


  • I bought a ThinkPad new in 2014 for my studies for like 1200 euros. She’s still happily purring today. Around 2019 I made the mistake of accidentally emptying a cup of tea into the ThinkPad and then holding it upside down to get the water out. I think I should’ve just let it leak out of the bottom, since the laptop has drainage holes for that, but I panicked. This broke the keyboard, but not the rest of the laptop. I got an official new keyboard for like 100 euros, which came with a tool and simple instructions, and since then everything has been working flawlessly.

    So I recommend ThinkPads, although I can’t really say anything about the compatibility of newer models.


  • I’m not so sure that laypeople will, but I do expect a shift. Personally I’m still running Windows 10 alongside Linux. Most of my time is still spent on Windows, because it’s generally a bit more stable and hassle-free thanks to the Windows monopoly: software is written for Windows, so sadly it’s usually just a better experience.

    But so many things I read about Win 11 (and beyond) piss me off. It’s my computer; I don’t want them deciding things for me or farming my data. I’m mentally preparing for the transition to Linux-only. 90% of the software I use will work out of the box, and I think with some effort I can get another 8% or so to work. It’ll be a lot of effort, but Micro$oft has pushed so far that I’m really starting to consider it.

    Multiple friends and colleagues (all programmers) I’ve spoken to are feeling the same way. I think Linux may double its number of full-time desktop users within a few years if this goes on.





  • You call it “quick to judge and superficial”, but imo that’s the wrong attitude. Every tool we use as humans should be designed to be as intuitive as possible; that makes it easiest for people to learn how to use a new tool. That doesn’t mean a tool cannot be complex or customizable, but the default experience should make it easy for new users to quickly achieve something. Once they grow accustomed to the tool, they can tailor it to their own way of working.

    No tool has to do this, but if it wants to be widely used then this is kinda necessary.

    There’s a reason why there are whole fields of study into human-media interaction, and why software companies hire UI designers. Everything that doesn’t have to be explained in words because it is intuitive saves mental overhead for the user and makes the application more accessible.



  • Can, but not by default, and the default setup is what leaves an impression on most users. Most users opening GIMP for the first time expect to be able to find the stuff they need, not to first spend a lot of time getting familiar with all of its options. It shouldn’t be necessary to spend time opening all the sane default windows and re-aligning stuff the first time you boot it. At least, that shouldn’t be the case if GIMP wants to be as popular with non-technical users as Krita is.

    Also, the toolbar still doesn’t have the nice separations between tool functions, and it still feels a bit more chaotic. Not sure if it’s the icons or the order.


  • Now admittedly I’m not someone who often uses drawing programs, but my biggest issue in GIMP is that I never seem to be able to find what I’m looking for.

    In the two images you posted you can actually see an example of such a case. In Krita all the tools (or whatever you’d call them) in the bar on the left are ordered in a logical way, and separate types of tools are also visually separated by separator lines. The bar with tools is also only 2 icons wide, which makes scanning for the right tool a bit easier, since you can mostly just scan along the vertical axis. In GIMP it’s just a pile of low contrast icons in seemingly random order. Unless you’ve used it enough to know the order, you’re gonna have to do a lot more searching. And searching will be way harder since you’ll have to search horizontally and vertically.

    It’s like reading a website where the text takes up the whole width of the screen without paragraph breaks (GIMP) vs reading a website where the line length is constrained, the text is horizontally centered, and there are proper paragraphs (Krita).

    I feel like this example reflects my personal experience with both. I’ve used quite a few different types of image editing programs, and with most of them I can fairly easily find the stuff I need. Using GIMP however, I used to be quite lost. Nowadays it’s gotten better because the windows are not all floating around and I’ve used it more. But still, I only found Krita after using a fair bit of GIMP, and yet I felt instantly more at home because the UI was easier to navigate.

    Edit: That being said, GIMP is a very cool program. I don’t want to hate on it too much. It’s helped me countless times. The UI has already improved a lot since the floaty window days, and I hope that continues.


  • Damn, it’s so weird hearing the kind of rebellious teen mind in a voice and culture from so long ago. She sounds so mature, so aware of what’s going on. I remember thinking how hypocritical adults were, wanting to break free of all these stupid rules, yet also wanting guidance. But I never put it this eloquently. And yet I suspect that her rebellion wasn’t fully positively received, despite how nicely it actually was put.


  • When I was a student I never understood how something like this could happen. “Just rewrite it,” I thought, how hard can that be? But working in a corporate environment I now totally understand it. Everything you write will at some point become part of code that, to the vast majority of colleagues, is just a black box they’ve never touched and don’t intend to. Despite my urge to test and document everything, I’ve already more than once complained about my own code, only realizing later that I wrote it myself. Often you can still find out what it does, but the “why” gets lost, and because of that people are afraid to change it.


  • I’m not a hundred percent sure, but afaik it has to do with how random the output of the GPT model will be. At 0 it will always pick the most probable next continuation of a piece of text according to its own prediction. The higher the temperature, the more chance there is for less probable outputs to get picked. So it’s most likely to pick 42, but as the temperature increases you see the chance of (according to the model) less likely numbers increase.

    This is how temperature works in the softmax function, which is often used in deep learning (rough sketch below).
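
    For the curious, here’s a minimal, self-contained Python sketch of temperature-scaled softmax sampling. The token names and logit values are made up for illustration; the point is just how the distribution sharpens or flattens with temperature, and why temperature approaching 0 degenerates into always picking the top choice.

    ```python
    # Illustrative temperature-scaled softmax; the logits below are invented, not real GPT output.
    import math

    def softmax_with_temperature(logits, temperature):
        """p_i = exp(z_i / T) / sum_j exp(z_j / T); lower T sharpens, higher T flattens."""
        scaled = [z / temperature for z in logits]
        m = max(scaled)                          # subtract the max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        return [e / total for e in exps]

    tokens = ["42", "7", "69", "100"]            # hypothetical candidate numbers
    logits = [4.0, 2.0, 1.5, 1.0]                # the model "prefers" 42

    for t in (0.2, 1.0, 2.0):
        probs = softmax_with_temperature(logits, t)
        print(t, {tok: round(p, 3) for tok, p in zip(tokens, probs)})
    # As t -> 0 the distribution collapses onto the single most likely token (greedy/argmax),
    # which matches "at 0 it will always pick the most probable continuation".
    ```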