Building the Web, Properly

It’s 2018. The web has more potential to be incredible now than it ever has before, with all major browsers supporting a huge number of powerful modern technologies, including HTTP/2, SVG, and CSS Flexbox and Grid. With all of this consistency in support, and all of this optimization in engines and network stacks, our websites should be faster and more robust than ever.

But they’re not.

We’ve reached a point in the industry where most new web developers learn just enough code to start building, but are never exposed to the greater needs surrounding development. Things like accessibility, compatibility, and performance are afterthoughts rather than core concepts in a website’s architecture. Even simple things like UI state changes in response to user input are being reimplemented with buggy JavaScript that only occasionally approximates what the browser could do natively, if only we’d let it.

Let’s take the new web interface for YouTube as an example. It’s fairly clean, UI-wise, but it’s built on Polymer, which relies on polyfills in every browser except Chrome and Safari, resulting in very poor performance throughout the site, even on incredibly high-end PCs. An even more painful example is Google Play Music, which is also built on Polymer, but uses such a huge number of custom element instances that it absolutely tanks even Chrome’s fast native implementation when doing something as simple as scrolling through a playlist.

And then you have sites like the Lego Shop. It was rebuilt in React a couple of years ago, right before their big winter holiday sale, and the rewrite completely broke a huge portion of the major features. To this day, core features like adding to the shopping cart and wishlist take a really long time to complete, give no visual indication of activity, and often don’t complete at all. Session data breaks as you navigate through the site, with the AJAX calls that update quantities and other user-specific information firing on huge delays, sometimes taking tens of seconds to show that your cart isn’t empty, even on the cart page itself! There’s no real reason for this: even with React and a fully static frontend, most of this data could be safely persisted between page loads, and critical data used widely throughout the experience could even be prefetched.

The new Bitbucket UI is also poorly engineered in many ways. Despite being a single-page application, it sometimes reloads the entire navigation UI with a several-second AJAX call, even when you just click a tab within the existing navigation. It also takes an incredibly long time to load very common information, like file listings and settings pages, the latter of which should be trivial to render on the server side. I really have no idea why Bitbucket is so slow.

Not everyone is doing a terrible job, though. GitHub is probably my favorite example of a website built just about perfectly. It’s incredibly fast, using a well-judged hybrid of AJAX and full page loads with minimal assets on each page, resulting in navigation that’s basically instant and a UI that stays visible and interactive throughout. The site is designed so well that even the URLs are intuitive and hackable. Unlike Bitbucket, which seemingly uses random names for many parts of the application that don’t correspond to the UI, GitHub structures its entire site in a way that feels obvious, which really makes me wonder why other people over-complicate things so much. GitHub even goes as far as using the <details> element for dropdown menus, which is more semantic than a CSS checkbox hack and doesn’t require any JavaScript to function. I really admire that level of care in engineering.
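For anyone who hasn’t played with <details> yet, the whole dropdown can be as simple as this (a minimal sketch of the technique, not GitHub’s actual markup):

<details>
  <summary>Menu</summary>
  <!-- the browser toggles the open/closed state itself, no JavaScript required -->
  <a href="/settings">Settings</a>
  <a href="/logout">Sign out</a>
</details>

Style the <summary> to look like a button and you’re most of the way there, though as of right now Edge still needs a polyfill or a fallback for it.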

Personally, I’m trying to keep everything I build as standards-compliant, cross-browser friendly, and performant as I can. I try to find a good balance between cacheable static pages and completely dynamic experiences, and I still avoid JavaScript where possible to keep things instantly usable and more resilient. Where I do need complexity, I usually stick with Vue.js, so I can avoid adding overhead to pages and site components that don’t need much custom behavior. Minifying assets, setting good TTLs, and serving things over HTTP/2 are all easy wins for performance too, but the easiest thing you can do is just remove code that doesn’t need to be there. If you’re shipping several megabytes of JavaScript, you’re almost certainly doing something very, very wrong.
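None of those wins require exotic tooling, either. In nginx, for instance, HTTP/2 and long TTLs for static assets come down to a few lines of config (a generic sketch, not my actual server configuration):

server {
    # HTTP/2 in browsers only works over TLS
    listen 443 ssl http2;
    server_name example.com;

    # versioned static assets can be cached for a long time
    location ~* \.(css|js|svg|woff2)$ {
        expires 30d;
        add_header Cache-Control "public";
    }
}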

Stop wasting so much bandwidth :P

Too Many Phones

I started this year off with a Galaxy S8+ and an iPhone 2G that I’d purchased late last year, and was content with my S8 as my primary device.

Since then, I’ve purchased two Pixel 2 XLs (one 64 GB model locked to Verizon, and one 128 GB unlocked), an iPhone 5C (new in box), an Epik One Pebble K400 (the cheapest Android Go phone on Amazon, only $40!), and most recently, a 256 GB iPhone XS in Gold, because why not.

Since my first smartphone in 2013, I’ve been fairly satisfied with Android. After initially using Samsung phones with their over-engineered TouchWiz UI, I quickly moved to the stock Android experience, starting with the Nexus 5 and later the Nexus 6P. Once Google introduced the Pixel line, though, I started to question whether I still liked where Android was going. Google was essentially releasing the same Nexus devices as they always had, but with a massive price increase for seemingly no reason. I skipped the first Pixel, and even went back to Samsung with the S8, since it had so many more features for a similar price.

After the release of the Pixel 2, I tried my luck with stock Android again, and was… a bit disappointed. The stock experience felt very clean and fast at first, but the Pixel 2 XL has a surprisingly large number of bugs and slowdowns for a high-end, stock reference Android device. This summer, I decided on a whim to buy an iPhone 5C (which you can apparently still get new on eBay for around $100) and tried it out as a secondary phone for a while. Obviously it was significantly slower than my Pixel 2, and it’s limited in features since it predates Touch ID and tops out at iOS 10, but it was still a great phone for the price, and I started to somewhat like iOS again.

The Epik One Pebble K400 I bought a few weeks ago, though, is an impressive little phone. If you want a reliable, fully-supported Android Oreo phone with dual-SIM, you really can’t ask for much more for $40. I bought it primarily as a low-end testing device for app development, but also a bit out of curiosity about what a cheap phone can do these days. It’s incredibly slow, and the display and camera aren’t great, but it’d honestly handle just about anything most people need from a day-to-day device.

This September, though, Apple basically said “hey, we released the same phone with a new CPU, but it comes in gold”, so I bought one. $1150 later, and I’m an iPhone XS owner. I haven’t yet decided whether I’m switching to iOS as my primary mobile OS, but I have to say I like what Apple’s offering these days. iOS used to be so far behind Android in power-user features, but it’s really come a long way in the last few years. Between iOS 11 introducing Files and Control Center, and Apple finally ditching the home button and huge bezels, the iPhone X, XS, and XR are all pretty fantastic phones. Overpriced, surely, but fantastic nonetheless.

So far I really like my iPhone. Face ID is still worse than Touch ID in a lot of ways, but it’s implemented well, and as silly a concept as Animoji were, I actually quite like Memoji for the more personal touch they add. The new gesture navigation is fantastic, too. After using Android Pie’s gesture nav for a few months, the iPhone’s gesture navigation feels far more fluid and polished.

There are still plenty of small annoyances. I’ve noticed over the last few days that push notifications arrive significantly faster on Android than on iOS for most of the apps I use, and I’d love the ability to install third-party apps without using my own Xcode certificates that expire after only a few days, but overall the iPhone is a fairly decent phone these days.

And back again.

After a bit of time trying some other ventures, including a few things we definitely intend to get back to at some point because they’re awesome, we’re back on StructHub!

Mike and I are hard at work again building a proper company out of our crazy ideas, and it’s going fairly well this time around. I’m currently building most of our software stack on Laravel, including an instruction editor, a media file manager, and a consumer app. When the time comes, we’ll most likely use Flutter, which has been looking increasingly usable lately, to build our iOS and Android apps.

We’re also planning on releasing our first beta really soon. To make that happen, the pace of development needs to increase dramatically, and since I’m the only developer right now, that means I need more time to work on it. So, I’m cutting my time at my current day job down to only two days a week. That way I can keep a bit of income to cover living expenses, while still having a good portion of a proper work week to dedicate to StructHub.

Check it out!

YABS4S

This blog is now yet another Bootstrap 4-based site. I can’t resist the awesomeness that is Bootstrap 4 on any of my sites lately. Aesthetically it’s quite similar to what it was before; if anything, the site is a tiny bit worse off now, since it has to load much more CSS to produce nearly the same layout and overall look it already had. Neat.

Why would I do this?

Performance-wise, it makes nearly no difference. Only the Bootstrap components I’m actually using are included, and the core styles gained by relying on Bootstrap mostly just ensure a better experience in browsers I don’t use as often. The change also means a switch back from Grid to Flexbox for the layouts, which gains a lot of backwards compatibility. The site doesn’t look as unique as it used to in its current form, but I could rely on the Bootstrap grid to bring back the previous site layout without needing CSS Grid (which I’m increasingly finding to be more trouble than it’s worth in its current state).
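If you’re wondering how partial Bootstrap builds work: Bootstrap 4 ships its Sass source as individual partials, so you can import only what you need. A rough sketch, assuming the npm package, with an illustrative (not my actual) list of imports:

// these three are required by everything else
@import "node_modules/bootstrap/scss/functions";
@import "node_modules/bootstrap/scss/variables";
@import "node_modules/bootstrap/scss/mixins";

// then cherry-pick just the parts the site actually uses
@import "node_modules/bootstrap/scss/reboot";
@import "node_modules/bootstrap/scss/type";
@import "node_modules/bootstrap/scss/grid";
@import "node_modules/bootstrap/scss/utilities";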

More than anything, I made the change for consistency between my projects. Since everything I’m working on right now is built on Bootstrap 4, it’s nice to be able to use the same helper classes and variables without having to remember which project I’m in. That’s really what Bootstrap has become for me: a way to have consistent standards between projects. And it’s awesome.

New Year, New Server

TL;DR: 4x8TB WD + 2x3TB Seagate on Arch w/ btrfs RAID10 is fun.

I’m obsessive about keeping data. I pretty much never delete anything, and I’ve started archiving data from sources that could potentially disappear in the future, like websites, YouTube videos, and even old operating system installer images. Doing this takes quite a bit of disk space, and my old server, with only 6 terabytes of usable redundant storage, just wasn’t cutting it. I had only a few gigabytes left available on that server, plus a few terabytes of non-redundant data on my main desktop PC with nowhere better to live, so I decided it was finally time to rebuild.

The first step was to find the best value I could on large hard drives. As a frequent lurker on r/datahoarder, I’d seen people mention that the WD easystore 8 TB drives were frequently on sale for around 50% off at Best Buy, and often had proper WD Red drives inside. I set up an IFTTT recipe to notify me when the price changed on the Best Buy website, and got 4 drives for only $150 a piece, quite a savings over pretty much any other 8 TB drive on the market. Unfortunately, I wasn’t thorough about checking the model numbers on the boxes, so I ended up with generic non-Red drives once I opened up the enclosures, but since I’m paranoid and was planning on running RAID10 with checksums anyway, I didn’t really care that much.

Weirdly though, two of the four drives wouldn’t power on with a standard SATA power connector. After a bit of research, there seemed to be two solutions, one a bit more elegant than the other, but both a bit weird. Some people found that you could add a resistor between certain leads on the cable to trick the drive into powering on, while others found that a Molex to SATA power adapter fixed the problem as well. (The culprit, as far as I can tell, is the power-disable feature from the SATA 3.3 spec, which repurposes the 3.3 V pin; on a PSU that still supplies 3.3 V there, the drive never spins up, and Molex adapters work because Molex has no 3.3 V line at all.) One hacky chain of adapters later, SATA > Molex > SATA > 4-way SATA splitter, and the drives were up and running! Now for the fun part.

My last server, Vista, ran Ubuntu 14.04 with mdadm RAID10 on 4x 3 TB HDDs. This time around, I wanted to use the best OS ever, Arch Linux, and my favorite filesystem toy, btrfs. I put in a 120 GB ADATA SSD, and in no time at all, my Arch server was up with a basic RAID10 filesystem running on my new drives. Then I started up rsync, and called it a night.
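For reference, creating a btrfs RAID10 volume really is about this simple (a sketch with made-up device names, not the exact commands from the build):

# one volume across all four 8 TB drives, with both data
# and metadata stored as RAID10
mkfs.btrfs -L storage -d raid10 -m raid10 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# mounting any member device mounts the whole volume
mount /dev/sda /storage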

Around 6 TB of data later, the new server just needed a bit of software setup to match the old server’s feature set, and it was good to go. I swapped the IPs of the two machines on my DHCP server to keep existing static entries working, and after a few days of running the old one to make sure everything had made it over, I took it down. Since the new server still had a few spare SATA ports, I moved two of the 3 TB Seagate HDDs from the old server over and added them to the btrfs volume, put another in my desktop PC, Rin, and put the fourth into one of the enclosures left over from the 8 TB drives. Finally, plenty of disk space everywhere I need it.
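Growing the volume with the old Seagates was just as painless (again, device names here are illustrative):

# add the two 3 TB drives to the mounted volume
btrfs device add /dev/sde /dev/sdf /storage

# rebalance so existing chunks get spread across the new drives too
btrfs balance start /storage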

A look at the current filesystem status shows just how wonderfully overkill this is for a home file server:

[alan@longhorn ~]$ btrfs filesystem usage /storage -H
Overall:
    Device size:                           38.01TB
    Device allocated:                      12.46TB
    Device unallocated:                    25.54TB
    Device missing:                         8.00TB
    Used:                                  12.44TB
    Free (estimated):                      12.78TB      (min: 12.78TB)
    Data ratio:                               2.00
    Metadata ratio:                           2.00
    Global reserve:                       536.87MB      (used: 0.00B)

I still need to go through a few terabytes of data on my desktop PC, and eventually organize it all and move it over to Longhorn, but the result I’ve got now is something I’m incredibly satisfied with. The new setup leaves Longhorn with 38 terabytes of raw hard drive capacity, giving a usable 19 terabytes after RAID10’s mirroring and the overhead of btrfs’s metadata storage.