Too Many Drives

Following up on my minor server upgrade last year, I’ve made a few new changes to fix some issues with my general home setup.

One of the worst things about my server has always been the case. When I built it, I bought the cheapest MicroATX case I could find on NewEgg that still had space for a few 3.5” drives. That case only had three drive bays, and I’ve been running six drives in it, with them basically just stuck anywhere they’d fit. The system drive, a 2.5” SATA SSD, literally rested on the CPU cooler because it was the only place I could fit it.

It was definitely due for an upgrade. I’ve been eyeing the Fractal Design Node 804 for a while, and finally went for it. It’s a fantastic case, with great airflow, loads of drive bays, excellent removable drive cages, and plenty of space to work in. Along with the new case, I moved over my third 12 TB HDD from my main PC, as a dedicated YouTube mirror drive, because I’ve reached a point where that’s a thing I need for some reason.

With the latest changes, my disks now look like this:

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 111.8G  0 disk
├─sda1   8:1    0   1.9M  0 part
└─sda2   8:2    0 111.8G  0 part /
sdb      8:16   0  10.9T  0 disk
sdc      8:32   0  10.9T  0 disk /storage
sdd      8:48   0   7.3T  0 disk
sde      8:64   0   7.3T  0 disk /archive
sdf      8:80   0  10.9T  0 disk
└─sdf1   8:81   0  10.9T  0 part /youtube
sdg      8:96   0   2.7T  0 disk
└─sdg1   8:97   0   2.7T  0 part /scratch

I’m still using btrfs for the RAID1 /archive and RAID0 /storage volumes. I added the /scratch volume on a dedicated disk, used as a cache drive and for heavy random IO that’s too big for the SSD (temporary transcoding, compiling Android ROMs, etc.).

The newly-added 12 TB drive mounted at /youtube is a good sign of my /r/datahoarder tendencies, as I’m already using 65% of its capacity. 3.4 TiB of that is just my Nerd³ archive, which I think has every video publicly listed on every official Nerd³ channel, and the complete UnofficialNerdCubedLive collection.

I still eventually plan to move to a many-disk setup with at least 8×12 TB drives in either RAID10 or a parity setup of some kind, likely on ZFS if I can justify the RAM requirement. I’ll probably also move off of Arch Linux as my host OS, maybe even going for something like ESXi. Arch is great, but having to reboot as often as I do for updates is somewhat annoying when I have a bunch of VMs, containers, and other services running. I’ll probably just end up using Debian or something and running everything in Docker containers.

I also really want to upgrade the network card. I’d love at least a 2.5 Gbps connection on it, as my main PC’s new motherboard has dual Ethernet with a 2.5 Gbps port. I could even just use a direct connection with a crossover adapter and skip buying a switch for now, since the only device that’d actually sustain > 1 Gbps is my main PC when I’m doing large file transfers. That’ll probably be my next upgrade.

WireGuard

I’ve always had various ways of connecting to my local network externally, from unencrypted VNC connections directly to my PC in the early days, to RDP, SSH tunnels, and eventually proper VPN setups.

Most recently, I was using SoftEther on my file server as my way into my local network. It’s nice because it bundles support for most of the modern protocols out of the box, but it’s a bit of a pain to keep running correctly on Arch, and it was overkill for what I was doing. Not to mention that having something like 10 ports open for an occasional single VPN connection felt really silly.

Now though, I’m using WireGuard, and it is a much better experience. With native support in the Linux kernel, it “just works” out of the box on the client and server ends (both just peers in WireGuard’s implementation), and is really clean to set up and maintain. It only uses a single UDP port for incoming traffic, and works much better with my slightly weird networking setup.

As with basically everything on Linux, the Arch Wiki article on WireGuard is fantastic and has basically everything you could need to set it up.

Server configuration

My “server” configuration at /etc/wireguard/wg0.conf:

[Interface]
Address = 10.200.200.1/24
ListenPort = 51820
PrivateKey = [key here]

PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o enp1s0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o enp1s0 -j MASQUERADE

[Peer]
PublicKey = [key here]
PresharedKey = [psk here]
AllowedIPs = 10.200.200.2/32

[Peer]
PublicKey = [key here]
PresharedKey = [psk here]
AllowedIPs = 10.200.200.3/32

The Interface address should be a new subnet that is only used for assigning addresses to the WireGuard peers themselves. The PostUp and PostDown settings are used to update iptables to forward IP traffic from the peers through your primary network interface. Replace enp1s0 with whatever your interface name is (you can use ip link to list them).
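One thing this config assumes that’s easy to miss: for those FORWARD/MASQUERADE rules to actually route peer traffic out to the rest of your network, IP forwarding needs to be enabled on the server. A minimal way to do that (the file name under /etc/sysctl.d is arbitrary):

sysctl -w net.ipv4.ip_forward=1                                   # enable immediately
echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/99-wireguard.conf  # persist across reboots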

If your server is behind a NAT router or firewall, you’ll also need a rule forwarding (or allowing) UDP traffic to it on the ListenPort you defined.

You can generate a private key with wg genkey, and generate a pre-shared key to give the clients with wg genpsk.
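For reference, generating everything from the command line looks something like this (the output file names are just examples):

umask 077                                           # keep the generated key files private
wg genkey | tee privatekey | wg pubkey > publickey  # private key plus its derived public key
wg genpsk > peer1.psk                               # one pre-shared key per peer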

The Peer sections here are the “clients” in the network. You’ll want to generate a PSK to add here and to the peer when configuring it, then let the peer generate its own key pair and add its public key to the server’s config. Each peer needs its own unique address within the subnet (hence the /32s); WireGuard doesn’t assign addresses dynamically, so the AllowedIPs entry here has to match the Address the peer configures on its own end.
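You can also add a peer to a running interface without editing the file, using wg directly; just note that wg-quick won’t persist this, so the peer still needs to be added to wg0.conf to survive a restart (the key, PSK path, and address below are placeholders):

wg set wg0 peer [peer public key here] preshared-key /path/to/peer.psk allowed-ips 10.200.200.4/32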

Once configured, you can use wg-quick up wg0 directly to start your server, or manage it as a systemd service:

systemctl start wg-quick@wg0.service
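Or, if you want it to come up automatically at boot:

systemctl enable --now wg-quick@wg0.service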

Client configuration

When adding a client peer, it works best to let the client generate its own key pair and just add its public key as a new peer. A client’s config should look something like this:

[Interface]
Address = 10.200.200.2/24
PrivateKey = [auto-generated private key here]
DNS = 1.1.1.1

[Peer]
PublicKey = [server public key here]
PresharedKey = [psk here]
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = wireguard.example.net:51820

The Address should use the same IP that’s listed in that peer’s AllowedIPs entry in the server’s configuration, and the Endpoint should be the hostname and port of your server. DNS can be set to any DNS server that’s accessible once you’re connected. If you’re not forwarding traffic on the server end, this will need to be a DNS server reachable within the WireGuard subnet if you want name resolution to work.
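Bringing the tunnel up on the client works exactly like it does on the server, and wg show is a quick way to confirm the two peers are actually talking:

wg-quick up wg0
wg show wg0    # should list the server peer with a recent "latest handshake" and non-zero transfer counters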

Home Server Updates

The home server I re-set up last January has been working wonderfully. I’ve continued to run Arch Linux as the host OS, with a few VMs running various things from Minecraft servers to Pi-Hole, and kept btrfs as my filesystem for the storage drives. As I continue downloading and archiving far too much crap (my YouTube archive alone is now several terabytes), I needed to expand my capacity this year, swapping two of the 8 TB drives from the RAID10 array for 12 TB ones in a RAID0 setup.

My current btrfs setup now has two filesystems. The /archive filesystem is running two 8 TB WD Red drives under btrfs RAID1, used for backups and other long-term storage where resiliency is important. The /storage filesystem is running two 12 TB Seagate drives under btrfs RAID0, with metadata on RAID1, and is manually backed up to cold storage periodically. This has worked out really well, giving me lots of room to grow and excellent performance. The disks never struggle to max out the 1 Gbps network connection on the server (which is just a cheap AMD A8 desktop), and I’ve had excellent reliability so far.
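For anyone curious, creating a two-device filesystem like /storage is a one-liner; this is a sketch with placeholder device names rather than my exact commands:

mkfs.btrfs -L storage -d raid0 -m raid1 /dev/sdb /dev/sdc   # data striped across both disks, metadata mirrored
mount /dev/sdb /storage                                     # mounting either member device mounts the whole filesystem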

I did have one issue earlier this year: multiple power outages, combined with a bad RAM stick, ended up causing problems with the old 8 TB array, which partially prompted the upgrade. The filesystem was readable with btrfs’s recovery tools, but refused to mount, so I duplicated everything to the 12 TB drives, then created a new filesystem and moved data around until I reached my current setup. I also bought a third 12 TB drive, which I now use for my Downloads folder on my desktop PC (I’m really bad at cleaning up old downloads); having that extra space to keep things temporarily was also useful during the migration.
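For reference, the sort of btrfs recovery tooling I mean looks roughly like this; a sketch with placeholder devices and paths, not the exact commands I ran:

btrfs check --readonly /dev/sdd              # inspect the filesystem without writing anything
mount -o ro,usebackuproot /dev/sdd /mnt/old  # try mounting read-only from a backup tree root
btrfs restore -v /dev/sdd /mnt/recovery      # last resort: copy files off the unmountable filesystem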

Long-term, I plan on completely replacing this server. It works great for my current use-cases, but once I buy a house in the next year or so, I’d like to set up a server rack with a rack-mounted 10 Gbps switch and a proper server, likely running a new EPYC CPU instead of an old A8. I’d love to have a setup where I can run a large number of VMs, all with dedicated storage devices and plenty of RAM, so I’ll likely be looking at building something with a large number of SSDs, and maybe having my bulk storage on a separate server with basically just HDDs, like a Storinator or a Backblaze pod. I don’t really have a reason to have that kind of storage, but I’d love to be able to dedicate huge amounts of storage and bandwidth to things like Archive Team warriors and other long-term archiving efforts, both personal and public.

I’ve already started building my own archiving tools, like my comic archiver, and plan on continuing to expand those efforts to cover everything I might miss in the future if it were to disappear from the Internet. Even just the things I’ve wget -r’d have grown into quite a large collection, so continuing to grow my storage makes sense to me. I don’t see myself ever truly needing petabyte-scale storage, but I love the idea of having a petabyte of storage just because it’s awesome, so I may even go that far as drive capacities continue to increase.

A New Blog Style

I’ve rebuilt my blog’s visual style again, this time on Tailwind CSS with some fun new tooling. I was able to re-use most of my HTML from the old templates, since there’s not much to the theme; I mostly just swapped Bootstrap utility classes like mb-sm-3 for their Tailwind equivalents like sm:mb-3. The new CSS itself was fine, but the tooling around it was a bit more interesting.

This was my first time using PurgeCSS. I’m obsessive about keeping this blog loading as fast as possible, so making the new stylesheet tiny was part of my intent with this update. Tailwind generates a big stylesheet by default, breaking 500 KB even when minified; using PurgeCSS I was able to get it down to about 9 KB minified. It works really well with Jekyll, since Jekyll generates final HTML for every page on the site, which makes PurgeCSS’s selector matching quite painless. If you want to see the full setup (it’s fairly simple), check out this project’s webpack.mix.js, which uses Laravel Mix with PostCSS.
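My actual setup runs PurgeCSS through Laravel Mix and PostCSS, but the same idea can be tried standalone with the PurgeCSS CLI pointed at Jekyll’s generated _site directory (the paths here are assumptions about a typical Jekyll layout, not my real ones):

npx purgecss --css css/main.css --content '_site/**/*.html' --output dist/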

I also set up Puppeteer with a simple script to generate a PDF of my Resume, since I haven’t had an up-to-date PDF for it in years. I’ve used Puppeteer in the past when building the StructHub print export feature and found it to be an excellent way to generate complex PDFs, with some annoying limitations since you’re still working in the context of a web page. Particularly for StructHub, a table of contents would have been amazing, but wasn’t really possible to generate in a reasonable way since page numbers in the final print view aren’t known until after the PDF is generated. We could have tried to render things out in JS with page-sized containers, but that could have led to all sorts of hard-to-fix bugs with weird content sizing inconsistencies. This new PDF is much simpler than that.

For a long time on this site, I refused to include any client-side JavaScript. Recently though, I set up a locally-hosted copy of instant.page, a JavaScript module that prefetches pages when there’s an interaction with a link to them (either via hover or touchstart). I never intend to add JS to this site that affects usability, or any user tracking, but this tiny module works great at making the website feel even faster than it already would as a purely traditional, lightweight HTML site.

I also have no analytics on this site (and don’t even really keep server logging), so I have no idea how many people, if any, read it, but it’s still fun to work on as a side thing to experiment with new frontend technologies, new server configurations, and new design concepts. I’ll likely continue to redesign it every year or two as I get bored with whatever it is at the time.

Maybe I’ll even leave Jekyll. But probably not.

Inconsistency in the Windows experience

I still don’t like Windows 10. All of my original complaints are still issues, but there’s so much more to the Windows experience that is just bad.

One thing I’m frequently frustrated by is application management. On Linux, if you use it correctly, every application (graphical and CLI), every library, and even your OS itself is installed and updated consistently via the package manager. macOS isn’t quite as seamless, but applications are only really installed in three ways: the App Store, a .dmg image, and a package installer. You can use brew too if you want, which isn’t perfect, but works well enough.

On Windows, applications come with infinite varieties of installers, and install all over the place. Microsoft tried to standardize this with MSI packages and the Program Files directory, but both of those have minor limitations that many developers work around by just not using them. Chrome, for example, installs to an unprotected directory under the user’s AppData folder, using a completely custom installer. Microsoft’s own installers are incredibly inconsistent as well, with Office, Visual Studio, VS Code, and Teams all using completely different installers (none of them MSIs), though they at least all install the bulk of their executables in Program Files.

Microsoft’s own first-party apps are also a great example of how inconsistent Microsoft is with visual styles. Even without installing anything extra, Windows 10 ships with WordPad, Calculator, and Paint, all of which look like they’re running on completely different operating systems built by completely different companies. It gets even worse with applications like Visual Studio and Office, where they have very custom UIs that don’t seem to use any of the native Windows UI.

The command-line experience on Windows is a mess too. Many things still rely on cmd, and that’s where most users’ knowledge of the Windows command line lives, but with PowerShell now the default it makes sense to want to transition. This shouldn’t be too difficult, except that PowerShell is almost completely incompatible with cmd. It’s also just bad 🙃

PowerShell is excessively verbose. Things like:

MKLINK C:\src C:\dest

become:

New-Item -Path C:\src -ItemType SymbolicLink -Value C:\dest

…for some reason. You can always use cmd.exe /c ... from PS, but it’s incredibly stupid that that’s necessary. It’s also incredibly ridiculous that Windows by default doesn’t even let you run scripts in PowerShell. I get that running scripts can potentially be insecure, but so is running any executable and those don’t require any system configuration changes to enable (yet). PowerShell is decent for Windows system administration, but for development purposes where everyone’s been using bash already for decades, it just feels so terribly implemented.

PC Settings is another great place to look if you want to see just how unfinished and pieced-together Windows 10 is. It’s gotten a lot better over the years, but about 2/3 of the advanced options in PC Settings still just open Control Panel windows for things, and many of the things that have been migrated over are missing much of the functionality of the original control panel items. Control Panel was really, really bad, but at least it had everything you needed somewhere in there.

Windows 10’s Dark Mode is a welcome new feature, but it needs so much more work to feel complete; right now it’s only usable as a neat re-skin rather than something that actually makes your whole OS dark. Linux has had fantastic theme support (including light/dark options) through GTK and Qt forever, and Apple went all-out on their dark theme in Mojave, completely updating all of their first-party apps and making a fully redesigned dark appearance in AppKit available to third-party apps without their devs needing to do much at all to implement it.

Overall, Windows 10 has definitely improved over the years, but it still has a long way to go to be an OS I’d consider good.