Configs are evidently not perfect yet, I'm digging to see what exactly happened and will try to make the failover actually work as intended.
Apologies for the downtime everyone, I will outline what exactly happened (at least as much as I can) here along with my plans to prevent it from ever happening again.
1. The proxy host fucking died
This is why there was downtime at all, but not why that downtime was such an issue. As much as having an eastern European host that doesn't care benefits a site like this, at the end of the day it's still run by the cock.li guys ##it's not cockbox##. The following points will outline why I could not mitigate this and how I plan to in the future.
2. I only had one proxy
I kept putting off setting up a second proxy. There's no other explanation, I kept putting it off and eventually forgot. This paired with the not-so-reliable nature of the single proxy's host was enough to fuck the site. I will be setting up a second, cheap proxy that will function more as a fallback with its own subdomain. Balancing an array like previously is a huge pain, and often broke. I've reviewed a few different hosts that will probably not cuck out like the last ones did, and a $5 vps with them will be perfectly serviceable.
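For anyone curious what a standby proxy vhost looks like, a minimal sketch is below. Every name in it (domains, cert paths, the upstream address) is a placeholder, not the real config.

```nginx
# Hypothetical fallback proxy vhost -- all names are placeholders.
server {
    listen 443 ssl;
    server_name fallback.example.net;            # the fallback subdomain
    ssl_certificate     /etc/ssl/fallback.crt;   # placeholder cert paths
    ssl_certificate_key /etc/ssl/fallback.key;

    location / {
        proxy_pass https://appserver.example.net;  # placeholder appserver
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Since it's a separate subdomain rather than part of a balanced array, it can just sit there doing nothing until it's needed.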
And for the dumbest, most bullyable mistake:
3. I didn't back up my fucking proxy configs
All of the nginx config files used for the proxies have changed substantially each time I have had to make server changes, and I didn't back shit up when I set up the current one. This left me unable to rapidly set up a dirt cheap vps to work as a stop-gap in the meantime regardless of their TOS. ##We'd be off it before they could handle any reports anyway##
So in short, a combination of laziness and dumb mistakes on my part is the reason the site went down at all. Host reliability should not have been a factor.
Short version of the plan going forward:
1. get 1-2 more proxies on international hosts
2. set them up as subdomains, potentially keep one on standby and not accessible except for emergencies. Maybe even a dedicated phantom proxy.
2a. I will probably test running the onion through one of these unlisted proxies. I will announce this, but keep an eye out.
3. I've already backed up my shit and will be automating the backup of relevant files across all servers.
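The backup automation in point 3 doesn't need to be fancy. A sketch of the idea is below; the paths, filenames, and cron cadence are assumptions for illustration, not the actual setup.

```shell
#!/bin/sh
# Hypothetical config-backup sketch -- paths and schedule are placeholders.
backup_configs() {
    src="$1"   # e.g. /etc/nginx
    dest="$2"  # local staging dir, later synced off-box
    stamp=$(date +%Y%m%d-%H%M%S)
    out="$dest/nginx-configs-$stamp.tar.gz"
    # archive the whole config tree with a timestamped name
    tar -czf "$out" -C "$src" . || return 1
    echo "$out"
}

# A cron entry on each server would then run this nightly and push the
# archive somewhere off the box (rsync/scp), e.g.:
#   0 4 * * * /usr/local/bin/backup_configs.sh
```

With timestamped archives on hand, standing up a stop-gap proxy on a dirt cheap vps becomes an untar-and-go job instead of rewriting configs from memory.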
This is the dedicated bully thread now. I earned it.
i red thanks for explain
>I didn't backup my fucking proxy configs
I bet you also use google docs to make notes and stream your anime.
your mum gay
good job dipshit
At least don't fuck up the spoiler formatting immediately after returning, fish.
AND editing the post.
I can't believe how worried I was at the idea that I lost my home yet again. Fuck, I'm too invested in this for my own good, aren't I?
Glad to have the site back, but I propose that we now call you Gay Fish as a nickname, to eternally bully you.
I can't believe dolphin won
I am moving to oceanchan™
>This is the dedicated bully thread now.
I'm gonna step on you, who wants to get stepped on and called a faggot
Why does the onion need to be proxied in the first place? Isn't the entire point of a hidden service that its host server is hidden?
Proxies also cache requests, and it comes out cheaper to have the onion piggyback off the work the proxies do rather than get a vps with enough storage to cache a meaningful amount on its own. It's also way easier to route through the proxies than it is to add to the appserver whitelist.
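The caching side of a proxy is a few directives. A rough sketch, with the zone name, sizes, and upstream address invented for illustration (the `proxy_cache_path` line belongs in the `http` block of the real config):

```nginx
# Hypothetical caching sketch -- zone name, sizes, upstream are made up.
proxy_cache_path /var/cache/nginx levels=1:2
                 keys_zone=board_cache:10m max_size=5g inactive=60m;

server {
    listen 80;
    server_name onion-proxy.example.net;   # placeholder

    location / {
        proxy_cache board_cache;
        proxy_cache_valid 200 302 10m;     # reuse cached pages for 10 min
        proxy_cache_use_stale error timeout updating;
        proxy_pass http://127.0.0.1:8080;  # placeholder appserver address
    }
}
```

Once one proxy has pulled and cached a page, every route through it (onion included) gets served from disk instead of hitting the appserver again.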
if only you listened to age old wisdom
>##We'd be off it before they could handle any reports anyway##
You're good BO don't sweat it.
He's not the BO
>go to bed
>no new posts so it was dead all night
I'm going fishing
If it's any consolation it was a new issue that I am working on a new solution for. Appserver host had to restart part of their infra and my shit didn't get rebooted automatically. I'm looking to set up a nagios server to automatically kick things that are down and scream at me when it can't.
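The "kick things that are down" part is basically this loop, which nagios would run and alert on. A sketch of the idea; the URL, service name, and thresholds are placeholders, not the actual monitoring setup.

```shell
#!/bin/sh
# Hypothetical watchdog sketch -- URL and service name are placeholders.

# Decide whether an HTTP status code means the service needs a kick.
needs_kick() {
    case "$1" in
        2??|3??) return 1 ;;  # healthy-ish: no restart needed
        *)       return 0 ;;  # errors or 000 (no response at all)
    esac
}

check_and_kick() {
    url="$1"; service="$2"
    status=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$url" \
        2>/dev/null) || status=000
    if needs_kick "$status"; then
        systemctl restart "$service"   # the "kick"
        echo "restarted $service (status $status)"
    fi
}

# e.g. from cron every minute:
#   check_and_kick https://example.net/ nginx
```

The "scream at me" half is just alerting on the same check when the restart doesn't bring the thing back.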
sall gravy baby
its good to be back thanks for the hard work