• otacon239@lemmy.world · 2 months ago

    Yeah, I figured all the parts talking about losing files were jokes. All the new guys know about unrm. Also useful is ssh root@[remote] unshutdown

  • zitrone 🍋@lemmings.world · 1 month ago

    not only does unrm not exist, the shell also expands * to whatever is in the current directory, which is empty because anon just rm’d everything

    • zitrone 🍋@lemmings.world · 1 month ago

      bash is weird

      ```
      $ bash -c 'echo e*'
      e*
      $ fish -c 'echo e*'
      fish: No matches for wildcard 'e*'. See `help wildcards-globbing`.
      echo e*
           ^^
      ```
    • PoliteDudeInTheMood@lemmy.ca · 1 month ago

      I installed Mint on my buddy’s computer yesterday and the install defaulted to ext4, and then had the audacity to offer “Enable Backups” in the welcome dialog with only rsync available.

      It gave me the option to do my own partitions during setup but every time I clicked the link to do so the installer would hang. My buddy started getting nervous so I just left it on default. But I was very annoyed that I couldn’t easily just switch to BTRFS.

  • 🍉 Albert 🍉@lemmy.world · 2 months ago

    dumb question, how hard would it be to implement?

    when most files are deleted, the data isn’t removed from the disk, just the index entries pointing to it.

    how about rm just marks the index entry as discardable, so the space can be reused when a new file needs it, but until then the rm can be reversed?

    • pcn@lemmy.world · 2 months ago

      Filesystems are either pretty simple or really complex. The old DOS FAT filesystem just overwrote the first byte of the file name with a marker (0xE5), so you could usually undelete with a utility that changed the name back, as long as nothing had reused the blocks.

      Modern filesystems are an absolute wonder of spinning wheels inside of spinning wheels allocating ranges of blocks, and then doing bookkeeping to reorganize linked data structures as fast as an SSD can write or as efficiently as possible on spinning rust.

      Some filesystems can do special snapshotting, either automatically (like NetApp ONTAP) or manually (like ZFS): once you take a snapshot, any further changes preserve a view of the data at that point in time, exposed as a special directory you can cd into and copy data back out of, as long as you have the space. Windows supports this kind of functionality through the VSS API if the underlying FS tech supports it.

      The downside to these approaches is that they tend to cause fragmentation, can tie up a lot of extra space (after all, if you delete a TB, it may be because you meant to and needed the space back, so why hasn’t it gone away?), and add a lot of complexity that 99% of the time, 99% of people don’t want to think about or pay for (pay as in: it’s slower, uses more space, and the complexity leads to more failure modes).
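
      For illustration, the manual ZFS flavor of that workflow looks roughly like this (the dataset name tank/home, its mountpoint, and the file name are made up for the example):

      ```
      $ zfs snapshot tank/home@before-cleanup        # point-in-time snapshot
      $ ls /tank/home/.zfs/snapshot/before-cleanup/  # browse it like a directory
      $ cp /tank/home/.zfs/snapshot/before-cleanup/notes.txt /tank/home/
      $ zfs destroy tank/home@before-cleanup         # reclaim the space
      ```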

    • theit8514@lemmy.world · 2 months ago

      Sometimes distros will alias rm to rm -i so it prompts for each file. An annoyance, but it makes you stop and think before continuing.
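
      For anyone curious, the aliases in question are one-liners in ~/.bashrc; GNU rm also has -I, a less annoying middle ground:

      ```shell
      # In ~/.bashrc -- pick one:
      alias rm='rm -i'   # prompt before every single removal
      alias rm='rm -I'   # GNU rm: prompt once before removing more than
                         # three files or before any recursive delete
      ```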

      • myotheraccount@lemmy.world · 2 months ago

        Confirmation is not very effective unless you use the command rarely. If you use it a lot, confirming just goes into muscle memory. The “shit, I didn’t mean to do that” moment is really when conscious thought kicks in again. That’s why undo is so great.

    • dadarobot@lemmy.sdf.org · 2 months ago

      i think the better way would be to replace rm with something that just moves files to a trash bin, like graphical file managers do.

      if you were just pulling the data back off the disk and you didn’t notice the deletion IMMEDIATELY, or a background process was writing data, it could still end up corrupted.

      there was something like that i had on win3.2, called undel.exe or something, but same deal: the data was often corrupted somehow by the time i was recovering it

      • thebestaquaman@lemmy.world · 2 months ago

        I usually don’t think about it at all, but every now and then I’m struck by how terrifyingly destructive rm -r can be.

        I’ll use it to delete some build files or whatever, then I’ll suddenly have a streak of paranoia and need to triple check that I’m actually deleting the right thing. It would be nice to have a “safe” option that made recovery trivial, then I could just toggle “safe” to be on by default.

          • thebestaquaman@lemmy.world · 2 months ago

            Honestly, after re-reading my own comment, I’m considering just putting some stupid-simple wrapper around mv that moves files to a dedicated trash bin. I’ll just delete the trash bin every now and then…

            -Proceeds to collect 300 GB of build files and scrapped virtual environments over the coming month-

              • thebestaquaman@lemmy.world · 2 months ago

                My thought wasn’t to alias rm, but rather to make a function like rmv <file> that would move the file to a trash directory.

                But of course this already exists. Thanks for pointing me to the resource :)
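
                For what it’s worth, a stupid-simple sketch of that rmv (the ~/.trash location and the timestamp suffix for name collisions are made up for the sketch; the real tools follow the freedesktop Trash spec instead):

                ```shell
                # Hypothetical rmv: move files into ~/.trash instead of deleting.
                # No Trash-spec metadata; collisions get a crude timestamp suffix.
                rmv() {
                    trash_dir="${HOME}/.trash"
                    mkdir -p "$trash_dir"
                    for f in "$@"; do
                        mv -- "$f" "$trash_dir/$(basename -- "$f").$(date +%s)"
                    done
                }
                ```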

  • NeatNit@discuss.tchncs.de · 1 month ago

    Among many other reasons, this is one more why I always prefer to use a GUI over a terminal shell. The default delete operation just sends files to the trash, and that’s easily undoable. I think you can even press Ctrl+Z to undo it (can’t check atm).

    I don’t even know how to do that from the command line.

    (one online search later…)

    There’s a package for that, but as best I can tell there’s no universal way.

    • kazaika@lemmy.world · 1 month ago

      Trash-cli probably implements what the XDG desktop spec defines, is my guess, which is probably the same as most GUI file managers. Trashing a file just means moving it to some hidden directory instead of deleting it, though, so different implementations could use different locations, which may make sense for the desktop they were written for.

    • BanMe@lemmy.world · 1 month ago

      The fear is real, but in 30 years of Unix and Linux work I’ve never actually deleted anything I didn’t mean to.

      • HereIAm@lemmy.world · 1 month ago

        The first time I accidentally lost a number of files was when I wrote a script to rename some images from the format ddmmyyyy to yyyy-mm-dd. But I did the parsing and saved the variable only once, outside the for loop, so all the files ended up overwriting each other. Learnt my lesson about running untested scripts on files without a backup.
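
        For the record, the fix is just to do the parsing inside the loop, so each file gets its own date. A sketch, assuming names like 24032023.jpg (the .jpg extension and the directory argument are assumptions):

        ```shell
        # Rename ddmmyyyy.jpg -> yyyy-mm-dd.jpg in the given directory.
        # The parsing happens inside the loop, once per file.
        rename_ddmmyyyy() {
            dir=$1
            for f in "$dir"/*.jpg; do
                base=${f##*/}          # strip the directory
                base=${base%.jpg}      # strip the extension
                case $base in          # only touch 8-digit names
                    [0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]) ;;
                    *) continue ;;
                esac
                dd=${base%??????}      # first two digits
                rest=${base#??}
                mm=${rest%????}        # middle two digits
                yyyy=${base#????}      # last four digits
                mv -- "$f" "$dir/${yyyy}-${mm}-${dd}.jpg"
            done
        }
        ```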