Tinkering is all fun and games, until it’s 4 am, your vision is blurry, and thinking straight becomes a non-option, or perhaps you just get overly confident, type something and press enter before considering the consequences of the command you’re about to execute… And then all you have is a kernel panic and one thought bouncing in your head: “damn, what did I expect to happen?”.

Off the top of my head I remember 2 of those. Both happened a while ago, so I don’t remember all the details, unfortunately.

For the warmup: removing PAM. I was trying to convert my Artix install to a regular Arch without reinstalling everything. Should be kinda simple: change repos, install systemd, uninstall dinit and its units, profit. Yet after doing just that I was left with some PAM errors… So, I -Rdd'ed libpam instead of just using --overwrite. Needless to say, I had to search for a live USB yet again.
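
(For context, the two roads here look roughly like this; the package name is from memory, so treat it as a sketch:)

# the sane option: let pacman overwrite the conflicting files
pacman -S pam --overwrite '*'

# what I actually did: rip the package out, ignoring all dependency checks
pacman -Rdd pam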

And the one that I, at least, find quite funny. After about a year of using Arch I was considering myself a confident enough user, and it so happened that I wanted to install something that was packaged for Debian. A reasonable person would, perhaps, write a PKGBUILD that would unpack the .deb and install its contents properly along with all the necessary dependencies. But not me, I installed dpkg. The package refused to either work or install complaining that the version of glibc was incorrect… So, I installed glibc from Debian’s repos. After a few seconds, which my poor PC probably spent staring in disbelief at the sheer stupidity of the meatbag behind the keyboard, I was met with a reboot, a kernel panic, and a need to find another PC to flash an archiso to a flash drive ('cause ofc I didn’t have one at the time).
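
(For the curious, the “reasonable person” route is a small PKGBUILD along these lines; the name, version and URL below are made up for illustration:)

# hypothetical PKGBUILD that repacks a .deb for pacman
pkgname=some-debian-app
pkgver=1.0
pkgrel=1
pkgdesc="Example of repackaging a .deb"
arch=('x86_64')
depends=('glibc')
source=("https://example.com/${pkgname}_${pkgver}_amd64.deb")
sha256sums=('SKIP')

package() {
    # a .deb is an ar archive; makepkg has already unpacked it into $srcdir,
    # so drop the payload (data.tar.xz, or .zst/.gz depending on the package) into place
    bsdtar -xf "$srcdir"/data.tar.xz -C "$pkgdir"
}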

Anyways, what are your stories?

  • Binzy_Boi@lemmy.ca · 90 points · 10 months ago

    Perhaps not the same definition of “broken” that you’re looking for, but when I first started using Linux, I was using Kubuntu as my first distro, after some brief experimenting with Manjaro.

    Anyway, back then, I for some reason had the Skype snap installed. Can’t recall why I had it to begin with, but I decided later on that I didn’t need Skype, and of course uninstalled the snap.

    A few days later, I ran into some storage issues, with only a limited amount of space left on my SSD. I sat there a little confused, since I could have sworn I was using less storage, but I did a thorough cleaning of my computer, deleting files I didn’t really need and uninstalling any programs that I hardly ever used. That seemed to do the job, even if I was still left with less free space than before…

    Until the next day, when the storage was full again. After getting some help from someone, I found that Skype, despite being uninstalled, was still running in the background, and that there were residual files left over. The residual stuff running in the background was trying to communicate with what I had uninstalled, and was logging multiple errors per second into a plaintext file that ended up being 176 GB.

    Whether I did something wrong or if there was something up with the snap, I still don’t know as this was over a year ago and I was still learning the ropes of Linux at the time.
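
    (For anyone hitting the same thing: something like this is how I’d hunt for the space eater today; the paths and size threshold are just examples.)

    # biggest directories on the root filesystem, largest first
    sudo du -xh / 2>/dev/null | sort -rh | head -n 20

    # or go straight for huge individual files
    sudo find / -xdev -type f -size +10G -exec ls -lh {} \;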

    • Petter1@lemm.ee · 13 points · 10 months ago

      I agree with blaming Snap for that 😂 Good ol’ apt would have done a better job, I guess.

      • Turun@feddit.de · 3 points · 10 months ago

        I had this problem before as well. Something was spamming log messages and filled up the boot drive. No snap needed.

    • nyan@lemmy.cafe · 7 points · 10 months ago

      I would blame Skype itself for being a corporate-owned closed-source flaming pit of doom in this case, not your actions or the snap.

  • krimson@feddit.nl · 64 points · 10 months ago · edited

    Many many years ago I wanted to clean up my freshly installed Slackware system by removing old files.

    find / -mtime +30 -exec rm -f {} \;

    Bad idea.
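
    (What I should have done, of course: a dry run first, and nowhere near /. Something like this, with a made-up path:)

    # preview what would be deleted, without deleting anything
    find /home/me/old-stuff -xdev -mtime +30 -print

    # only then swap -print for the destructive part
    find /home/me/old-stuff -xdev -mtime +30 -exec rm -f {} \;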

  • jordanlund@lemmy.world · 54 points (1 downvote) · 10 months ago

    Not me, but one I saw… dude used chmod to lock down permissions across the board… including root… including the chmod command.

    “What do I do?”

    🤔

    “Re-install?”

    • Bilbo@jlai.lu · 30 points · 10 months ago

      You could boot from a USB, mount the filesystem and change the permissions. But if the dude changed a whole lot of permissions, reinstalling might be the smart thing to do…
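
      (Roughly this from the live environment, with /dev/sda2 standing in for whatever the actual root partition is:)

      # mount the broken install and give chmod back its execute bit
      sudo mount /dev/sda2 /mnt
      sudo chmod 755 /mnt/usr/bin/chmod
      # ...then start putting everything else back, piece by piece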

    • fl42v@lemmy.ml (OP) · 10 points · 10 months ago

      Yeah, a very unfortunate one: probably the most painful to recover from. I’d just reinstall, honestly 😅 At least with mine I could simply add the necessary stuff from chroot or pacstrap and not spend a metric ton of time tracking down all the files with incorrect permissions.

    • Captain Janeway@lemmy.world · 5 points · 10 months ago

      There have got to be other tools, though, that could change the file permissions on chmod itself, right? Though I suppose you’d need permission to use them and/or download them.

      • fl42v@lemmy.ml (OP) · 8 points · 10 months ago

        You can dump the permissions from the working system and restore them. Quite useful when working with archives that don’t support those attributes or when you run random stuff from the web 😁
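
        (Something along these lines while the system is still healthy; the paths are just examples:)

        # dump owners, groups and permission bits for the important trees
        sudo getfacl -R --absolute-names /usr /etc /var > /root/perms.backup

        # ...and put them back after something mangles them
        sudo setfacl --restore=/root/perms.backup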

        • Petter1@lemm.ee · 5 points · 10 months ago · edited

          Many distros offer an automated file/directory ownership restore feature on their live OS.

    • cmnybo@discuss.tchncs.de · 4 points · 10 months ago

      I managed to do that back when I was new. Luckily it was a fresh install, so I didn’t lose much when I had to reinstall.

      So far, that has been the only time I really screwed something up outside of a virtual machine.

    • rhys the great@mastodon.rhys.wtf · 3 points · 10 months ago

      @jordanlund @fl42v I *think* this one could be recoverable if they had a terminal still active by using the dynamic loader to call chmod — or by booting from a liveCD and chmodding from there.

      That’d likely get you to a ‘working’ state quickly, but it’d take forever to get back to a ‘sane’ state with correct permissions on everything.
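
      For reference, the dynamic-loader trick is roughly this (the loader path is the usual one for x86_64 glibc and may differ on other setups):

      # the loader doesn't care that chmod itself lost its execute bit
      /lib64/ld-linux-x86-64.so.2 /bin/chmod +x /bin/chmod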

      • jordanlund@lemmy.world · 2 points · 10 months ago · edited

        Exactly. There’s no way to even know what the previous permissions were for everything.

        They were TRYING to recursively change permissions in a single directory, and accidentally hit the whole system instead. :(

  • Avid Amoeba@lemmy.ca · 46 points · 10 months ago

    Tried to convert Ubuntu to Debian by replacing the repos in sources.list and apt dist-upgrading. 💣 Teenagers…

  • topperharlie@lemmy.world · 43 points · 10 months ago

    One that I can remember from many years ago is the classic: trying to do something on a flash drive and dd’ing my main HDD instead.

    Funny thing: since this was a 5400 rpm drive and I noticed relatively quickly (say, 1-2 minutes in), I could Ctrl-C the dd, make a backup of most of my personal files (being very careful not to reboot) and after that safely reformat and reinstall.

    To this day it amazes me how Linux managed not to crash with a half-broken root file system (I mean, sure, things were crashing right and left, but given the situation, having enough left to back up most things was like magic).

    • Serinus@lemmy.world · 12 points · 10 months ago · edited

      Many years ago I was dual booting Linux and Windows XP. I was having issues with the Linux install, and decided to just reinstall. It wasn’t giving me the option to reinstall fresh, only to modify the existing install.

      So I had the bright idea to just rm -rf /

      Surely it’ll let me do a fresh Linux install then.

      Immediately after hitting enter I realized that my Windows partitions would be mounted. I clearly did the only sensible thing and pulled the plug.

      I think I recovered all of my files. Kind of. I only lost all the file paths and file names. There was plenty to recover if I just sorted through 00000000.file, 00000001.file, 00000002.file, etc. Was 00000004.file going to be a Word document or a binary from the system32 directory? Your guess is as good as mine!

  • evatronic@lemm.ee · 34 points · 10 months ago · edited

    sudo rm -f /lib /usr/share/backup/blah blah.tar.gz

    Note the space.

      • evatronic@lemm.ee · 10 points · 10 months ago

        Oh no, this was back in the days when we loaded our distros by way of a stack of floppy disks.

    • martinb@lemmy.sdf.org · 2 points · 10 months ago

      Top tip: if tired, replace the rm -f part of the command with something innocuous for a first run. Actually, it’s better to make this mistake once so that the two important lessons are learned… backups (obviously; in your case it was backups, but the point still stands) and double-checking your command if it has potential for destruction 👍
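
      For example, reusing the command above with ls instead of rm:

      # first pass: just look at what the arguments actually resolve to
      sudo ls -ld /lib /usr/share/backup/blah blah.tar.gz

      That would list /lib as its own, very-much-alive directory and complain about the split-up filename: exactly the kind of surprise you want to catch before the real run.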

  • raoul@lemmy.sdf.org · 34 points · 10 months ago · edited

    First, the classic typo in a bash script:

    FOLDER=/some/folder

    rm -rf ${FODLER}/

    which is why I like to add a set -u at the beginning of a script.
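
    With set -u the script dies right at the typo instead of quietly expanding the misspelled variable to an empty string:

    #!/bin/bash
    set -u    # treat unset variables as an error

    FOLDER=/some/folder
    rm -rf "${FODLER}/"    # bash aborts here: FODLER: unbound variable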

    The second one was not on a Linux box but on a mainframe running AIX:

    While on Linux killall java kills all Java processes, on AIX it just ignores the arguments and kills every process the user is allowed to kill. Adios, CICS region 😬 (on the test env, thankfully).

    • NaibofTabr@infosec.pub · 17 points · 10 months ago

      While on Linux killall java kills all Java processes, on AIX it just ignores the arguments and kills every process the user is allowed to kill.

      jfc, is ignoring arguments the intended behavior?

      • aard@kyu.de · 19 points (1 downvote) · 10 months ago

        On a real UNIX (not only AIX) killall is part of the shutdown process - it gets called by init at the stage when you want to kill everything left before reboot/shutdown.

        Linux is pretty unique in using that for something else.

        • raoul@lemmy.sdf.org · 10 points · 10 months ago

          I didn’t know that, good to know.

          They could have sent a SIGTERM by default instead of a SIGKILL. I would not have corrupted everything 😅

          • aard@kyu.de · 11 points · 10 months ago · edited

            killall typically sends SIGTERM by default. It accepts a single argument, the signal to send - so shutdown would call it once with SIGTERM, then with SIGKILL. killall is not meant to be called interactively - which worked fine, until people who had their first contact with UNIX-like systems on Linux started getting access to traditional UNIX systems.

            It used to be common to discourage new Linux users from using killall interactively for exactly that reason. Just checked, there’s even a warning about that in the killall manpage on Linux.

    • Yuumi@lemmy.world · 5 points · 10 months ago

      after reading what “set -u” does, bro this should be default behavior, wtf?

  • shadowintheday2@lemmy.world · 29 points · 10 months ago · edited

    I thoroughly backed up my slow NVMe drive before installing a new, faster one. I actually didn’t even want to reuse the installation, just the files at /home.

    So I mounted it at /mnt/backupnvme0n1, 2, etc., and rsynced.

    The first few dry runs showed a lot of data was redundant, so I brilliantly thought “wow, I should delete some of these”. And that’s when I did a classic sudo rm -rf in the /mnt root folder instead of /mnt/dirthathadthoseredundantfiles.
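
    (The dry runs themselves were just rsync’s -n flag, for reference; paths here are made up:)

    # -n shows what would be transferred without copying anything
    rsync -avn /mnt/backupnvme0n1/home/ /home/

    # drop the -n once the list looks right
    rsync -av /mnt/backupnvme0n1/home/ /home/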

  • mozz@mbin.grits.dev · 29 points · 10 months ago

    I can’t even remember how I did this, but overwriting the partition table on the main production server at our small startup (back when “the server” would usually live on the premises of the startup). I remember my boss starting to hyperventilate from panic while I reconstructed it from memory / notes, and all the filesystems came back and he calmed down.
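
    These days I’d keep a dump of the partition table around so it’s never a memory exercise again; something along these lines (device name is an example):

    # save the partition layout somewhere safe, off the machine
    sudo sfdisk -d /dev/sda > sda-partition-table.txt

    # and write it back if it ever gets clobbered again
    sudo sfdisk /dev/sda < sda-partition-table.txt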

    Same job, they gave me a little embedded-systems unit for me to use to build a prototype on. I hooked it up, nothing worked. I brought it back to them.

    Hey, this one doesn’t work.

    Huh… that’s weird, it was working before. Did you break it?

    I don’t think so. Can I have one that works?

    They literally told me, as they were handing me the second one: Okay, here’s another one. Don’t break it.

    I figured it out literally seconds after breaking the second one… I was hooking it up to 12 volts of power when it needed 5. Second dead computer. Explaining that and that I needed a third one now was fun.

    • MrFunnyMoustache@lemmy.ml · 11 points · 10 months ago

      Had something similar several years ago: prototype stopped working for some reason, one of the hardware engineers and I were troubleshooting on a second prototype, and we exploded a large capacitor… The rest of the team were not amused that we destroyed 2 out of 3 working prototypes within 10 minutes.

  • jws_shadotak@sh.itjust.works · 27 points · 10 months ago

    Not quite catastrophic but:

    I’m in the process of switching my main server over from Windows to Linux.

    I went with Deb 12 and it all works smoothly, but I don’t have enough room to back up the data to change the drive formats, so they’re still NTFS. I was looking at my main media HDD and thought “oh, I’ll at least delete those Windows partitions and leave the main partition intact.”

    I found out the hard way that NTFS partitions can’t just reclaim space like that. It shuffles all the data when you change the partition. It’s currently 23 hours into the job and it’s 33% done.

    I did this to reclaim 30 MB of space on a 14 TB drive.

    • fl42v@lemmy.ml (OP) · 10 points · 10 months ago

      You mean you’ve removed the service partitions used by Windows and grown the main one into the freed space? Then yes, that’s not the way. 'Cause creating a new partition instead of growing the existing one shouldn’t have touched the latter at all :/

  • INeedMana@lemmy.world · 26 points · 10 months ago · edited

    Tinkering is all fun and games, until it’s 4 am, your vision is blurry, and thinking straight becomes a non-option, or perhaps you just get overly confident, type something and press enter before considering the consequences of the command you’re about to execute… And then all you have is a kernel panic and one thought bouncing in your head: “damn, what did I expect to happen?”.

    Nah, that’s when the fun really starts! ;)

    The package refused to either work or install complaining that the version of glibc was incorrect… So, I installed glibc from Debian’s repos.

    :D That one is a classic. Most distributions don’t ship package managers from other distros because 99% of the time it’s a bad idea. But with Arch you can do whatever you want, of course.

    My two things:

    • I’d heard about some new coreutils (rm, cp, cat… this time the name really fits the contents :D) and decided to test it out. Of course it conflicted with my current coreutils package, and I couldn’t just replace it, because deleting the old package would break dependencies. So without thinking I forced the package manager to delete it (“I’ll install the new one in just a second”). Turns out it’s hard to install a package without cp, etc. :D
    • I don’t remember what I was doing, but I overwrote the first bytes of the HDD. Meaning my partition table disappeared. Nothing could be mounted, no partitions found. Seemingly a brick.
      Turns out, if you boot a rescue ISO and ask it to try to recognize the partitions and recreate the table without formatting, Linux will come back to life as if nothing happened (rough sketch of that step below).
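
    For anyone in the same spot: tools like testdisk can do that scan-and-recreate of the partition table; roughly (device name is an example):

    # scan the disk for lost partitions and offer to write a fresh table
    sudo testdisk /dev/sdX
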
    • fl42v@lemmy.ml (OP) · 12 points · 10 months ago

      Nah, that’s when the fun really starts! ;)

      Well, on the upside, it definitely works better than coffee or energy drinks :D

      Also, nice save with the last one!

  • glibg10b@lemmy.ml · 24 points · 10 months ago · edited

    Before installing Arch on a USB flash drive, I disabled ext4 journaling in order to reduce disk reads and writes, being fully aware of the implications (file corruption after unexpected power loss). I was confident that I would never have to pull the plug or the drive without issuing a normal shutdown first. Unfortunately, there was one possibility I hadn’t considered: sometimes, there’s that one service preventing your PC from turning off, and at that stage there’s no way to kill it (besides waiting for systemd to time out, but I was impatient).
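
    (The knob in question is roughly this, for anyone curious; run it on the unmounted filesystem, and the device name is just a placeholder:)

    # drop the journal from an ext4 filesystem to cut down on writes
    tune2fs -O ^has_journal /dev/sdX1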

    So I pulled the plug. The system booted fine, but was missing some binaries. Unfortunately, I couldn’t use pacman to restore them because some of the files it relied on were also destroyed.

    This was not the last time I went through this. Luckily I’ve learned my lesson by now

  • msmc101@lemmy.blahaj.zone · 24 points · 10 months ago

    First time trying Linux, I went with an Arch install because I Googled “best version of Linux” and Arch is what came up. Followed a guide to the point of drive formatting and decided to go with a setup with drive encryption. I didn’t understand what I was doing, ended up locking myself out of my hard drives, and couldn’t get Windows to reinstall on them. I used a MacBook for a week until I installed Ubuntu and managed to wipe and reset my drives and reinstall. Needless to say, I am going to read up a little more before I try that again.

    • fl42v@lemmy.ml (OP) · 12 points · 10 months ago

      Been there, and even without encryption: it took me a few reinstalls before I realized I could just chroot back in and repair things 😅
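
      (Roughly this, from the archiso, with the partition name being whatever yours actually is:)

      # mount the broken install and hop back in to fix things
      mount /dev/sda2 /mnt
      arch-chroot /mnt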

    • Petter1@lemm.ee · 5 points · 10 months ago

      The archinstall Python script is your friend 😄😉 I tried installing Arch manually, but when I learned that not even sudo is included in the essential Linux packages, I stopped the process and went back to the automated script install, lol, got no time for that S*** 😂