2/21 update
Configs are evidently not perfect yet; I'm digging into what exactly happened and will try to make the failover actually work as intended.
Apologies for the downtime, everyone. I'll outline what exactly happened here (at least as much as I can), along with my plans to prevent it from ever happening again.
1. The proxy host fucking died
This is why there was downtime at all, but not why that downtime was such an issue. As much as having an Eastern European host that doesn't care benefits a site like this one, at the end of the day it's still run by the cock.li guys ##it's not cockbox##. The following points will outline why I couldn't mitigate this and how I plan to in the future.
2. I only had one proxy
I kept putting off setting up a second proxy. There's no other explanation: I kept putting it off and eventually forgot. That, paired with the not-so-reliable nature of the single proxy's host, was enough to fuck the site. I will be setting up a second, cheap proxy that will function more as a fallback with its own subdomain (there's a rough sketch of what that could look like at the end of this post). Balancing an array like I did previously is a huge pain and often broke. I've reviewed a few different hosts that will probably not cuck out like the last ones did, and a $5 VPS with them will be perfectly serviceable.
And for the dumbest, most bullyable mistake:
3. I didn't back up my fucking proxy configs
All of the nginx config files used for the proxies have changed substantially each time I've had to make server changes, and I didn't back shit up when I set up the current one. That left me unable to rapidly spin up a dirt-cheap VPS to work as a stop-gap in the meantime, regardless of their TOS. ##We'd be off it before they could handle any reports anyway##
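For point 2, the fallback is basically just a plain nginx reverse proxy answering on its own subdomain, something like the sketch below. To be clear, every hostname and address in it is a placeholder, not the real setup, and this time the actual file gets kept somewhere it can be redeployed from in minutes instead of living only on the box (see point 3).

# Rough sketch of a fallback proxy config -- hostnames and addresses are placeholders.
# Keep this file backed up / in version control so it can be dropped onto any cheap VPS quickly.
server {
    listen 80;
    listen [::]:80;
    server_name fallback.example.net;   # separate subdomain, no balanced upstream array

    location / {
        proxy_pass http://203.0.113.10:8080;          # placeholder backend address
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Because it sits on its own subdomain rather than in a load-balanced array, there's nothing to silently break when the main proxy dies; if the main address goes down, you just hit the fallback one.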
Last edited by sturgeon