I don’t know anything about GPU design but expandable VRAM is a really interesting idea. Feels too consumer friendly for Nvidia and maybe even AMD though.
I can’t believe someone has paid for that domain name for 23 years… O_O
I like the friendlier feeling of Seaford (the o shapes have a little tilt to them rather than being straight on the grid), but I’m guessing they leaned towards the most “generic” of the five because as a default font you want it to become “invisible” almost. I think a more unique font would stand out and then become a little grating over time given how much it would be seen.
Yup; hopefully there are some advances in the training space, but I’d guess that having large quantities of VRAM is always going to be necessary in some capacity for training specifically.
So I’m no expert at running local LLMs, but I did download one (the 7B Vicuna model recommended by the LocalLLM subreddit wiki) and try my hand at training a LoRA on some structured data I have.
Based on my experience, the VRAM available to you is going to be way more of a bottleneck than PCIe speeds.
I could barely hold a 7B model in 10 GB of VRAM on my 3080, so 8 GB might be impossible or very tight. IMO, to get good results with local models you really need large quantities of VRAM and to be running 13B or larger models.
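As a rough sanity check on why 7B is tight in 10 GB, here’s the back-of-the-envelope math for the weights alone. This is a sketch, not a benchmark: it ignores activations, the KV cache, and optimizer state, which is exactly the extra stuff that made training blow past my VRAM.

```python
# Back-of-the-envelope VRAM estimate for just holding model weights.

def weights_gb(n_params_billion: float, bytes_per_param: float) -> float:
    """Approximate GB of VRAM needed to load the weights alone."""
    return n_params_billion * 1e9 * bytes_per_param / 1e9

# A 7B-parameter model at different precisions:
print(weights_gb(7, 2))    # fp16:  14.0 GB -- doesn't fit in 10 GB at all
print(weights_gb(7, 1))    # int8:   7.0 GB -- fits, but with little headroom
print(weights_gb(7, 0.5))  # 4-bit:  3.5 GB -- comfortable, at a quality cost
```

Which is why quantized models are basically mandatory on consumer cards, and why 13B+ at decent precision pushes you into 24 GB territory.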
Additionally, when you’re training a LoRA the model + training data gets loaded into VRAM. My training dataset wasn’t very large, and even so, I kept running into VRAM constraints with training.
In the end I concluded that, in its current state, running a local LLM is an interesting exercise but only really shines on enthusiast-level hardware with loads of VRAM (4090s, etc.).
Using it to separate work from other uses makes sense to me - I think if I worked from my desktop rather than the company laptop, I’d be more inclined to use the virtual desktops.
Pinning a floating window was always something I wanted on Windows, so I was excited to see it natively supported by KDE.
Agree on disliking alt-tab because it’s non-deterministic! Cycling through a whole list of apps has always felt clunky to me so I never use it.
I really wish I could load Sway on my desktop… unfortunately I’ve got an Nvidia card and I couldn’t get the live ISO to boot with Sway. :<
Very tempting to try it on my laptop though! All the setups I’ve seen using it look really clean.
How far away from your monitor do you sit to see all of the 49”?! It must all be in your peripheral vision, haha. (Edit: oh, I overlooked the ultra wide mention and was picturing a 49” tv type thing, haha. Ultra wide makes more sense!)
I actually went down from two monitors on my desktop to one… nothing wrong with the second monitor now sitting in my closet, but I’m liking the extra space on my desk and it feels more ergonomic to not be swiveling my neck as much.
I’ve accidentally tried to switch workspaces with the i3 shortcuts when on a Windows machine before! that muscle memory, haha.
when I’m booting Windows on my desktop, I use MS PowerToys to snap windows around which gives me the same feeling of nice organization as tiling but feels more intuitive in the Windows environment for me.
That’s helpful; this sounds like a docker issue or qBit issue then. The default qBit location for torrents is /downloads, but you’d need to make sure to point it towards the container volume mapping you’re setting up in docker.
my relevant qBittorrent compose volume mapping is as follows:

```yaml
volumes:
  - /volume1/shared/torrents:/data/torrents
```
Personally, I don’t separate my torrent downloads by type; I use incoming & completed folders. Here’s how I set up my qBittorrent config:
| Original Value | New Value |
|---|---|
| `Session\DefaultSavePath=/downloads/` | `Session\DefaultSavePath=/data/torrents/1_completed/` |
| `Session\TempPath=/downloads/incomplete/` | `Session\TempPath=/data/torrents/2_incoming/` |
| `Downloads\SavePath=/downloads/` | `Downloads\SavePath=/data/torrents/1_completed/` |
| `Downloads\TempPath=/downloads/incomplete/` | `Downloads\TempPath=/data/torrents/2_incoming/` |
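If you’d rather script those edits than make them by hand, here’s a hypothetical sketch in Python (the filename and paths are from my setup, adjust to yours; stop qBittorrent before editing, since it rewrites its config on exit). It seeds a sample config with the default values so the sketch runs standalone; in practice the file already exists under the container’s /config volume.

```python
from pathlib import Path

conf = Path("qBittorrent.conf")  # normally under the container's /config volume

# Seed a sample config for demonstration purposes only.
conf.write_text(
    "Session\\DefaultSavePath=/downloads/\n"
    "Session\\TempPath=/downloads/incomplete/\n"
    "Downloads\\SavePath=/downloads/\n"
    "Downloads\\TempPath=/downloads/incomplete/\n"
)

# Order matters: rewrite the longer /downloads/incomplete/ path first, so the
# bare /downloads/ replacement doesn't match inside it.
text = conf.read_text()
text = text.replace("/downloads/incomplete/", "/data/torrents/2_incoming/")
text = text.replace("/downloads/", "/data/torrents/1_completed/")
conf.write_text(text)

print(conf.read_text())
```

Same result as editing the four lines manually; the replacement-order comment is the only real gotcha.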
This should just be part of configuring Sonarr/Radarr settings correctly. A few things to check:

- Do you have a red message in the settings saying a download client is missing, or have you filled out the download clients settings section with your torrent client info?
- If you have, have you checked the “auto import from client” box?
- Have you set your root library folder in the media management section?
Interesting. I wonder if it’s worth putting a Faraday cage around a home NAS – but it sounds like the electrical surge from it being plugged in might fry it as well.
I really can’t tell - does it work on top of Markdown flat files or not? Based on the mention of an exporter, I’d guess not.
Part of the reason I moved from TiddlyWiki to Obsidian was to get my writing into plain text files, so anything that doesn’t interface with the OS file system is off the table for me from the get-go. (Part of the reason I care about this is so that I’m not locked into a specific app and can use VS Code to browse and edit as needed, or build a static site from my files, etc.)
Reminds me of this neocities site that has a collection of Gameboy Camera photos collected from the internet.
ahh yeah I forgot about that; it’s been a while since I created my accounts. not sure if that’s an option that instance owners can set by default for new accounts or not.
This has happened to me once, where I only saw I had a comment I could reply to by going to the post itself.
I’m seeing other issues with federation not working so I suspect this has something to do with that…?
Interesting, thanks for this! I’ve got a reasonably sized wiki I exported from TiddlyWiki into Obsidian and it works alright; but now I’m curious if Logseq would be a better fit. All my daily and review entries in TiddlyWiki were bullet-pointed, so it should feel natural in that respect.
I’m curious, how are you discovering new music this way? my understanding of Soulseek and Nicotine+ is that they’re great for finding music by artists you already know, but idk how they would work for discovery…?