Here’s one I have saved in my shell aliases.
nscript() {
  local name="${1:-nscript-$(printf '%s' $(echo "$RANDOM" | md5sum) | cut -c 1-10)}"
  echo -e "#!/usr/bin/env bash\n#set -Eeuxo pipefail\nset -e" > ./"$name".sh && chmod +x ./"$name".sh && hx ./"$name".sh
}
alias nsh='nscript'
Admittedly much more complicated than necessary, but it’s pretty full-featured. The first line builds the filename for the new script: a user-provided name if one is given, otherwise “nscript-” followed by a 10 character random hash.
The second line writes out the shebang and a few oft-used bash flags, makes the file executable, and opens it in my editor (Helix in my case).
The third line is just a shortened alias for the function.
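Usage ends up looking like this (assuming the function is sourced in your shell rc and hx points at your editor of choice):
nsh                # creates and opens ./nscript-<10-char-hash>.sh
nsh backup-photos  # creates and opens ./backup-photos.sh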
I see now; that makes sense why you’re building the image, since it was set up that way. I don’t know why projects set up the compose file to build the image when they already have a publicly available one; it just creates unnecessary friction for people who want to test out the software. Anyway, using that image should work for you, but feel free to ask if you run into any issues.
Why are you building the image yourself? Not that there’s a problem with that necessarily, but it seems a bit wasteful of your resources unless you have a specific reason to do so. There’s a docker image (quay.io/invidious/invidious:latest) built by the developers that gets updated pretty frequently. I’ve been using it for years now and it’s been working perfectly fine for me the whole time.
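If you want to switch over, it should just be a matter of swapping the build stanza for the published image in your compose file. Roughly (the service name here is a guess at the project’s layout):
services:
  invidious:
    # build: .   <- drop the build stanza
    image: quay.io/invidious/invidious:latest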
Even if you need something just once, just install it and then uninstall it, takes like 10 seconds.
apt install foo && apt remove foo
That’s essentially what nix-shell -p does. Not a special feature of nix, just nix’s way of doing the above.
Actually using it though is pretty convenient; it disappears on its own when I exit the shell. I used it just the other day with nix-shell -p ventoy to install ventoy onto an ssd; I may not need that program again for years. Just used it with audible-cli to download my library and strip the DRM with ffmpeg. Probably won’t be needing that for a while either.
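You can also list several packages at once for a single throwaway shell; something like this (package names assume what’s in your nixpkgs channel):
# both tools exist only inside this shell; nothing is installed into your profile
nix-shell -p audible-cli ffmpeg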
The other thing to keep in mind is that since Nix is meant to be declarative, everything goes in a config file, which screams semi-permanent. Having to do that for ventoy and audible-cli would be pretty inconvenient. That’s why nix-shell -p exists: because of how Nix works, you need a subcommand for temporary one-off operations.
If you’re ok with just file storage, sftpgo has been solid for me for years now. It does SFTP, FTP, and WebDAV (like nextcloud). The web UI isn’t as pretty, but it’s fast. Mobile apps will be various sync apps with SFTP or WebDAV support; on Android, FolderSync Pro is pretty good for keeping documents and pictures backed up.
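If you want to try it, a minimal compose sketch looks something like this (image name and ports are from sftpgo’s docs; the volume path is an assumption):
services:
  sftpgo:
    image: drakkan/sftpgo:latest
    ports:
      - "2022:2022"   # SFTP
      - "8080:8080"   # web UI (WebDAV gets enabled in the config)
    volumes:
      - ./sftpgo:/srv/sftpgo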
They did. It’s called AirMessage. It’s been around for almost 3 years now.
Full disclosure: I’ve never used 1Password so I can’t really comment on it compared with others, but I’m currently running a selfhosted Bitwarden re-implementation (vaultwarden) and am generally pretty happy with it. I’ve only ever used LastPass as a password manager before (aside from a seeding algo back in the day), and while I really don’t like their business practices or security history, their extension has, or at least had, a bit better consistency on Firefox than Bitwarden’s, at least with regards to detecting username/password fields and offering to save newly created credentials automatically. That being said, it’s something I can live with considering it’s free software. As far as I’m aware, all the big players in that space are pretty evenly matched on features, though I do remember 1Password offering some advanced feature over the others, maybe related to privileged access management in enterprise.
Not trying to out myself, but I may be one of the few people that actually owned that shirt lol
Kopia repo on a separate disk dedicated to backups. Have Kopia on my servers as well, sending to my local s3 gateway with a second copy to wasabi.
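Roughly what that looks like in kopia terms, with placeholder paths, bucket, and endpoint rather than my real values:
# repo on the dedicated backup disk
kopia repository create filesystem --path /mnt/backup/kopia
# on the servers, pointing at the local s3 gateway
kopia repository create s3 --bucket backups --endpoint s3gw.lan:9000 --access-key <key> --secret-access-key <secret>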
Today’s episode of Veronica Explains is brought to you in part by corporate greed.
Less than 5 seconds in and I already know I’m going to like this video.
Well that’s disappointing. I’ll have to investigate further I guess. I was really hoping to set it up (at least initially) without any type of media storage.
Oh I see, I definitely misunderstood what you were asking. How is your caddy server set up? Is it serving one site per subdomain (site.your.domain) or is it one site per path (your.domain/site/)? I am running traefik so I probably won’t be able to help with specifics, but it’s worth a shot.
The way I have my monitoring set up is to poll the containers from behind the proxy layer. For example, if I’m trying to poll Portainer:
---
services:
  portainer:
    ...
With the service name portainer, polling from uptime-kuma within the same docker network would look like this:
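portainer:9000
(9000 being Portainer’s internal HTTP port, or 9443 for HTTPS; the hostname is just the compose service name)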
Can confirm this works correctly for monitoring that the service is reachable. It doesn’t, however, ensure that you can reach it from your computer, because that depends on whether your reverse proxy is configured correctly and isn’t down, but that’s what I wanted in my case.
Edit: If you’re wanting to poll the http endpoint, you would prepend the scheme, like http://whatever_service:whatever_port
I believe Pictrs is a hard dependency and Lemmy just won’t work without it, and there is no way to disable the caching
I’ll have to double check this but I’m almost certain pictrs isn’t a hard dependency. Saw either the author or one of the contributors mention a few days ago that pictrs could be discarded by editing the config.hjson to remove the pictrs block. Was playing around with deploying a test instance a few days ago and found it to be true, at least prior to finalizing the server setup. I didn’t spin up the pictrs container at all, so I know that it will at least start and let me configure the server.
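For reference, the block in question looks roughly like this in config.hjson (the URL shown is the common docker-compose default, not necessarily what any given instance uses):
{
  # ...
  # removing or commenting out this block is what disables pictrs
  pictrs: {
    url: "http://pictrs:8080/"
  }
}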
The one thing I’m not sure of, however, is whether any caching data is written to the container layer in lieu of being sent to pictrs, as I didn’t get that far (yet). I haven’t seen any mention that the backend even does local storage, so I’m assuming that no caching is taking place when pictrs is not being used.
Edit: Clarifications
Thanks for sharing! I’ll definitely be looking into adding this to my infra alerting stack. Should pair well with webhooks using ntfy for notifications. Currently just have bash scripts push to uptime-kuma for disk usage monitoring as a dead man trigger, but this should be better as a first-line method. Not to mention all the other functionalities it has baked in.
Edit: Would also be great if there were an already compiled binary in each release so I could use it bare-metal, but the container on ghcr.io is most likely what I’ll be using anyway. Thanks for not uploading only to docker hub.
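In case it helps anyone, the push side of those scripts is tiny; a minimal sketch (the kuma URL, push token, and threshold here are made up):
#!/usr/bin/env bash
# report "up" only while root disk usage is under 90%;
# if this stops reporting, the push monitor flips to down
usage=$(df --output=pcent / | tail -1 | tr -dc '0-9')
if [ "$usage" -lt 90 ]; then
  curl -fsS "https://kuma.example.com/api/push/abc123?status=up&msg=OK&ping=$usage" > /dev/null
fi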
I have reservations about running either the agent or portainer itself on something external to my LAN.
I don’t feel like it’s safe enough personally either, so I just have portainer edge-agent nodes connected to the primary on my intranet through vpn tunnels. I really, really would prefer to never open ports on my local firewall, but being able to monitor and control remote docker hosts is also pretty convenient, so my solution has been decent for me.
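For anyone wanting to replicate that, the remote side is just the standard Edge Agent container dialing out to the primary over the tunnel; a rough sketch (the ID/key values come from the Portainer UI when you add the environment):
docker run -d \
  --name portainer_edge_agent \
  --restart always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_agent_data:/data \
  -e EDGE=1 \
  -e EDGE_ID=<id-from-ui> \
  -e EDGE_KEY=<key-from-ui> \
  portainer/agent:latest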
They most likely used a Reddit Data Request. It’s kind of like Google Takeout, if you’ve ever used that. If you’ve already deleted your reddit account it won’t work, IIRC. I scrubbed my comments and posts with PowerDeleteSuite a day or two after I submitted the data request but before I actually received the data (took 20 days), and all of my comments and posts still showed up in the data request. Don’t know if that points to reddit actually keeping that data in their database and just hiding it on the site or not, but either way, if it’s not visible on the site it becomes worthless to them unless they decide to provide that data to someone behind the scenes.
You don’t. It works perfectly fine OOTB. Can’t speak for the Pinecil v2 with Bluetooth and the companion app, but I have the v1 and the software has been stable and bug-free enough that I’ve never even given a thought to updating the firmware on it.