classic33
Leg End Member
It made it very clear who was responsible: "Cloudflare, London."

Nay lad
Shown as Cloudflare, Manchester up here.
That's because the system resolved for you to use Cloudflare's London servers. Connect from e.g. an IP address in the Netherlands and you'll get the same message but with "Cloudflare, Amsterdam" (and I tried through several countries, and it was always "Cloudflare, <country you are connecting from>").
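If anyone wants to see which Cloudflare site they're actually being routed to, Cloudflare publishes a plain-text trace page, and its "colo" field gives the code of the data centre handling the connection (LHR for London, MAN for Manchester, AMS for Amsterdam). A minimal sketch in Python, assuming the third-party requests library is installed:

```python
# Rough sketch: ask Cloudflare's trace endpoint which data centre ("colo")
# is serving this connection. Assumes the third-party "requests" package.
import requests

resp = requests.get("https://www.cloudflare.com/cdn-cgi/trace", timeout=10)

# The response is plain text, one "key=value" pair per line.
fields = dict(
    line.split("=", 1) for line in resp.text.strip().splitlines() if "=" in line
)

# "colo" is the code of the Cloudflare site handling the request,
# e.g. LHR (London), MAN (Manchester), AMS (Amsterdam).
print("Served by Cloudflare colo:", fields.get("colo", "unknown"))
```

Run it through a VPN exit in another country and the colo should change, which lines up with the different "Cloudflare, …" locations people saw above.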
We used to have to pay telephone providers a lot of money to ensure multiple data lines went through different conduits and over different routes, so e.g. one rogue digger didn't take everything out. Also added radio data links in case data lines failed.

There is much truth in what you say. I used to work in disaster recovery testing for a while in the 90s. We took it all very seriously. I think there's less rigour about it these days. But it's a lot cheaper.
I used to manage the installation of resilient WAN circuit upgrades, which typically involved a primary fibre from a local exchange and a secondary fibre from another exchange. Depending on the distances involved, the installation charges could be eye-wateringly expensive back then.

Yup
I was talking to a techy person at a techy event I went to when I was an IT techy
He worked in a company that had a major data centre in London
They had 2 comms runs out of it - one on the North of the building and one on the South
as far separated as possible
One of the 2 went out and communicated via company A (might have been Vodafone)
the other went through company B - something like BT
again totally separate and different in every way
so they were confident that in the event of the main one failing they could switch to the other
and they did every few weeks to ensure it was all working
then "something somewhere" failed and their datacentre was cut off from the outside worlk
so the switched and it was the same
turned ou tthat Company A and Company B merged their comms at some point far distant from the datacentre
so all the redundancy in the world was useless
unless they knew every bit of comms equipment it went through everywhere - which is impractical

Some of you guys were much nearer to the nuts and bolts of resilience than me. I was just doing things like dress rehearsals for disasters.
My favourite experience of that was when we got the sparks to simulate a power failure. The UPS kicked in (good), the diesel generators fired up (good), smoky exhaust from the diesels went in through a window and set off the fire alarms (er ... sub-optimal).