There is going to be a couple of hours of downtime this evening (Pacific Time) while we move all equipment to a new data center. Of course, if you try to reach the site while it's down you won't see this message, but at least you'll know after the fact, I suppose.
Oh, and in case anyone is wondering, we are moving all equipment from here: https://kionetworks.com/en/ to here: https://www.scalematrix.com/ It's a pretty neat setup... each rack is sealed up with its own A/C zone, you can get up to 45kW of power per rack (a crazy amount for a rack), etc. https://www.scalematrix.com/service/colocation-solutions
These guys are still the primary 8 servers... https://forums.digitalpoint.com/threads/new-server-hardware-for-the-geeks.2654797/
Great Joseph, their website ALONE would make me reject using them... Nothing like a two-minute page load of broken, inaccessible, "gee ain't it neat" script-kiddie animated BS to MAYBE convey 1k of text, with near-useless navigation to boot. The real shame is throwing all this hardware at it instead of fixing things. The real laugh is that even as a bloated mess, XenForo is STILL leaner than the latest iterations of vBulletin.
Equipment is all in the new facility. There are still some network-related issues (hopefully just routing issues) I need to work out... but I'm too tired to mess with that this very second.
Network issues should be resolved now. Super weird, but in case anyone ever runs into the same thing and stumbles across this thread: we couldn't get individual connections going faster than about 0.8 Mbit/sec even though there was plenty of bandwidth to spare. The uplink ports on the switch were set to auto-negotiate, but the switch negotiated them wrong, setting itself to half duplex instead of full. Forcing the ports to full duplex fixed it, and everything seems good to go now.
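For anyone who wants to sanity-check this on their own boxes: on Linux the kernel exposes the negotiated link settings under /sys/class/net. A minimal sketch (the interface names are just examples; this assumes a Linux host):

```python
# Minimal sketch: check negotiated speed/duplex on Linux via sysfs.
from pathlib import Path

def link_settings(iface: str) -> tuple[str, str]:
    """Return (speed, duplex) as reported by the kernel for iface."""
    base = Path("/sys/class/net") / iface
    speed = (base / "speed").read_text().strip()    # e.g. "1000" (Mbit/s)
    duplex = (base / "duplex").read_text().strip()  # "full" or "half"
    return speed, duplex

if __name__ == "__main__":
    for iface in ("eth0", "eth1"):  # hypothetical uplink interfaces
        try:
            speed, duplex = link_settings(iface)
        except OSError:
            continue  # interface absent or link down
        flag = "  <-- mismatch suspect" if duplex != "full" else ""
        print(f"{iface}: {speed} Mbit/s, {duplex} duplex{flag}")
```

On the switch side the actual fix is vendor-specific, but it amounts to pinning the uplink ports to a fixed speed and full duplex instead of letting auto-negotiation decide.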
I'm noticing a HUGE difference here, particularly when posting. Hoping that holds up. No longer am I hitting post, leaving the tab open, and browsing other tabs HOPING the post goes through... so... good job? I don't trust auto-anything when it comes to routers and switches; it has bitten me in the ass one too many times, so I can't say I'm surprised that was the bottleneck on a new config. It's like installing downloads from CNET: ALWAYS choose the "custom" install so you can decline, decline, decline the bloatware. Probably why I prefer my hosts to install just enough of Debian that I can SSH or KVM into it and set up the rest my damned self! Particularly when the defaults for things like php.ini or my.cnf are utter rubbish.
Weird... Truthfully I don't notice a difference speed-wise. It was fast for me before and is still fast. My best guess is your ISP probably had a crappy route to the old data center somehow. Nothing changed on this end other than moving the equipment about 15 miles: same equipment, nothing changed on the backend (other than the IPs, of course). If you are bored, try running a traceroute to the old data center (redit.com) vs. the new one (scalematrix.com).
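If you don't want to eyeball the traceroute output by hand, here's a rough sketch that wraps the system `traceroute` command (use `tracert` on Windows) and compares crude hop counts. Counting output lines is approximate, since the first line is a header and unreachable hops still print a row:

```python
# Rough sketch: compare hop counts to the old vs. new data center
# by wrapping the system traceroute (-m caps the max hops / TTL).
import subprocess

def hop_count(host: str, max_hops: int = 30) -> int:
    out = subprocess.run(
        ["traceroute", "-m", str(max_hops), host],
        capture_output=True, text=True, check=False,
    ).stdout
    # First line is a header; the rest are one line per hop.
    return max(len(out.splitlines()) - 1, 0)

for host in ("redit.com", "scalematrix.com"):
    print(f"{host}: ~{hop_count(host)} hops")
```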
Well... running traces to the data centers won't necessarily tell you much, since you're using Cloudflare. If I, for instance, trace to the two data centers, I get a quicker response from redit.com (16 hops) than from scalematrix.com (18 hops to the first ScaleMatrix server; after that it times out until it hits the 30-hop limit). But if I trace directly to digitalpoint, I get a much quicker response from Cloudflare (13 hops).
Somewhat true, I suppose... the pages themselves still need to be backhauled from Cloudflare to the data center since they are dynamic. But if it's fast for @deathshadow now and was slow before, it's probably not a Cloudflare issue, since he would still be hitting the same Cloudflare data center. I suppose it could have been a routing issue between his local Cloudflare data center and our old data center. Honestly not really sure... I guess at the end of the day it's working better, so w/e...
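One way to at least confirm which Cloudflare edge you're hitting: every response served through Cloudflare carries a CF-RAY header, and the suffix after the dash is the IATA airport code of the data center that handled the request. A quick sketch, assuming the third-party requests package (the timing here includes the backhaul to the origin, since the page is dynamic):

```python
# Sketch: show which Cloudflare edge location serves you, plus total
# response time (edge + backhaul to origin for a dynamic page).
import time
import requests

start = time.monotonic()
resp = requests.get("https://forums.digitalpoint.com/", timeout=30)
elapsed = time.monotonic() - start

# CF-RAY looks like "8a1b2c3d4e5f1234-LAX"; the suffix after the dash
# is the IATA code of the Cloudflare data center that handled it.
ray = resp.headers.get("CF-RAY", "")
colo = ray.rsplit("-", 1)[-1] if "-" in ray else "unknown"
print(f"Edge: {colo}, status {resp.status_code}, {elapsed:.2f}s total")
```

If two people report very different speeds, comparing those edge codes is a quick way to tell whether they are even hitting the same Cloudflare data center.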