What was that about?


I like Skol

A Minging Manc...
Nay lad
Shown as Cloudflare, Manchester up here.

I've been suffering in Manchester for most of the day and the error page made it perfectly clear that London was to blame!
 

Psamathe

Über Member
It made it very clear who was responsible: "Cloudflare, London."
That's because the system resolved you to Cloudflare's London servers. Connect from e.g. an IP address in the Netherlands and you'll get the same message but with "Cloudflare, Amsterdam" (I tried it through several countries and it was always "Cloudflare, <wherever you connected from>").
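If you want to see which Cloudflare data centre you're actually being routed to, the trace endpoint that Cloudflare-fronted sites expose reports it directly - a quick Python sketch (assuming the standard /cdn-cgi/trace endpoint; the "colo" value is the data centre code, e.g. LHR, MAN or AMS, and "loc" is the country):

import urllib.request

# Fetch Cloudflare's trace endpoint; it returns plain key=value lines.
with urllib.request.urlopen("https://www.cloudflare.com/cdn-cgi/trace") as resp:
    trace = resp.read().decode()

# Show the data centre ("colo") and country ("loc") Cloudflare thinks you're connecting from.
for line in trace.splitlines():
    if line.startswith(("colo=", "loc=")):
        print(line)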
 

Psamathe

Über Member
There is much truth in what you say. I used to work in disaster recovery testing for a while in the 90s. We took it all very seriously. I think there's less rigour about it these days. But it's a lot cheaper.
We used to have to pay telephone providers a lot of money to ensure multiple data lines went through different conduits and over different routes, so that e.g. one rogue digger couldn't take everything out. We also added radio data links in case the data lines failed.
 
We used to have to pay telephone providers a lot of money to ensure multiple data lines went through different conduits and over different routes, so that e.g. one rogue digger couldn't take everything out. We also added radio data links in case the data lines failed.

Yup

I was talking to a techy person at a techy event I went to when I was an IT techy.

He worked at a company that had a major data centre in London.

They had two comms runs out of it - one on the north of the building and one on the south,
as far separated as possible.
One of the two went out and communicated via company A (might have been Vodafone),
the other went through company B - something like BT.

Again, totally separate and different in every way.

So they were confident that, in the event of the main one failing, they could switch to the other,
and they did so every few weeks to ensure it was all working.


Then "something somewhere" failed and their data centre was cut off from the outside world,
so they switched and it was the same.

Turned out that Company A and Company B had merged their comms at some point far distant from the data centre,
so all the redundancy in the world was useless
unless they knew every bit of comms equipment it went through everywhere - which is impractical.
 
That's because the system resolved you to Cloudflare's London servers. Connect from e.g. an IP address in the Netherlands and you'll get the same message but with "Cloudflare, Amsterdam" (I tried it through several countries and it was always "Cloudflare, <wherever you connected from>").

Just checked and yes, I'm still in Germany.
 

lazybloke

Ginger biscuits and cheddar
Location
Leafy Surrey
Yup

I was talking to a techy person at a techy event I went to when I was an IT techy.

He worked at a company that had a major data centre in London.

They had two comms runs out of it - one on the north of the building and one on the south,
as far separated as possible.
One of the two went out and communicated via company A (might have been Vodafone),
the other went through company B - something like BT.

Again, totally separate and different in every way.

So they were confident that, in the event of the main one failing, they could switch to the other,
and they did so every few weeks to ensure it was all working.


Then "something somewhere" failed and their data centre was cut off from the outside world,
so they switched and it was the same.

Turned out that Company A and Company B had merged their comms at some point far distant from the data centre,
so all the redundancy in the world was useless
unless they knew every bit of comms equipment it went through everywhere - which is impractical.
I used to manage the installation of resilient WAN circuit upgrades, which typically involved a primary fibre from a local exchange and a secondary fibre from another exchange. Depending on the distances involved, the installation charges could be eye-wateringly expensive back then.

For one building, the telco provided an amazingly low quote and short installation time. They explained this could be achieved by using existing fibres that covered most of the routes to both exchanges.

During installation, they discovered their records were inaccurate, revealing a long gap in the secondary fibre run. They had to lay several KILOMETRES of new underground fibre.

It took weeks of work including road closures and night working.
And bless them, they didn't object when I suggested they honour their original quotation; it was great to deliver that project under budget!
 

Dogtrousers

Lefty tighty. Get it righty.
Some of you guys were much nearer to the nuts and bolts of resilience than me. I was just doing things like dress rehearsals for disasters.

My favourite experience of that was when we got the sparks to simulate a power failure. The UPS kicked in (good), the diesel generators fired up (good), then smoky exhaust from the diesels went in through a window and set off the fire alarms (er... sub-optimal) :laugh:
 

Gwylan

Guru
Location
All at sea⛵
Some of you guys were much nearer to the nuts and bolts of resilience than me. I was just doing things like dress rehearsals for disasters.

My favourite experience of that was when we got the sparks to simulate a power failure. The UPS kicked in (good), the diesel generators fired up (good), then smoky exhaust from the diesels went in through a window and set off the fire alarms (er... sub-optimal) :laugh:

No! Stick with "it was planned to expose our vulnerabilities. Now we learn and rebuild".
 