
Challenge: Have you ever caused a bug in a live application? And on a scale of 1-10, how much chaos did it cause?

  • March 24, 2026
  • 10 replies
  • 152 views

lewisp707

... Because I have, a couple of times. The most chaotic one was while working on a national health application in the UK, where my refactor of the API, intended to make the application more testable, meant that patients nationwide couldn't access information about their prescriptions for a couple of hours during the day. The panic set in when I realised my changes had gone to production too soon... Revert, revert!!!

I would rate that a 7 out of 10... Do you agree? Anyone got a more severe bug than that?


Share your failure story in the comments, and I will pick three winners who will each get the book “Contract Testing in Action: With Pact, PactFlow, and GitHub Actions” from ShiftSync.

The challenge closes in 10 days.

 

 

10 replies

PolinaKr
  • Community Manager
  • March 24, 2026

I once managed to “break” our entire HubSpot communication flow.

We were preparing a campaign and I wanted to clean things up a bit: remove duplicates, fix some workflows, make the segmentation more precise (it always starts with good intentions).
At some point, I updated one of the lifecycle stage rules and tweaked a workflow filter that didn’t look right to me. 
What I didn’t realize was that this workflow was powering multiple automations (email sends, lead assignments, you name it). It turned out I had effectively blocked contacts from entering several key workflows. At first, when nothing new was coming into the flow, I thought: “Quiet day, nice.”
But then other departments started panicking, and I realized what had happened.

Fixing it took way longer than causing it!! 😂


I set a device under test (a battery) on fire by not properly connecting the wires 🔥 Everyone was safe, because it was in a lab with proper safety measures. But I was super scared 😅


IOan
  • March 25, 2026

It was a while ago, when I first read about load testing. Before Black Friday, I ran a mini load test on a local vendor's website, with no intention of breaking it. It turned out even my mini test was enough to crash the site's search. They did recover in time for Black Friday. Who knows... maybe my mini test helped :)
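For anyone curious what a "mini" load test like this can look like, here is a rough sketch. The request count, concurrency, and search path are made up for illustration, and the script points at a throwaway local server so it is self-contained; against a real site, even modest numbers like these can be enough to topple an unoptimised search endpoint.

```python
# A minimal concurrent load test using only the standard library.
# The target here is a throwaway local server; swap the URL for a real
# endpoint only if you have permission to load-test it!
import http.server
import threading
import urllib.request
from concurrent.futures import ThreadPoolExecutor


class SearchHandler(http.server.BaseHTTPRequestHandler):
    """Stand-in for a vendor's search endpoint."""

    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the demo output quiet


def hit(url: str) -> int:
    """Fire one GET request and return the HTTP status (0 on failure)."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status
    except OSError:
        return 0


def mini_load_test(url: str, requests: int = 50, workers: int = 10) -> dict:
    """Send `requests` GETs using `workers` concurrent threads; tally outcomes."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        statuses = list(pool.map(hit, [url] * requests))
    return {"ok": statuses.count(200), "failed": len(statuses) - statuses.count(200)}


# Spin up the local server on a random free port, hammer it, then shut down.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), SearchHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
result = mini_load_test(f"http://127.0.0.1:{server.server_address[1]}/search?q=tv")
server.shutdown()
print(result)
```

A real search endpoint does far more work per request (query parsing, index lookups, ranking), which is why even ten concurrent threads can be enough to take one down.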


dharmendratak

A change that looked “simple” almost turned into a production nightmare.

Earlier, all images across our app and web were exposed via S3 URLs.
Everyone knew it. Even the client.

Then came a request:
“Let’s secure the S3 URLs.”

Sounds straightforward, right?

 

The change was implemented.
But there was a catch — those URLs were deeply embedded across almost every screen and flow in the app. Around the same time, I had to step away for a few days due to health issues. Before leaving, I ran one round of testing and caught a few issues related to the new URL logic.

When I came back, production was scheduled for the next day. From a surface view, things looked stable, and I had already tested earlier. But something didn’t feel right: the QA instincts kicked in.

 

I said:
“If nothing has changed, we should be good…
but I’d still like to run one more cycle.”

That one decision changed everything.

 

Just before production, multiple issues surfaced:

  • Event creation flows breaking
  • Switching business within the app causing inconsistencies
  • iPhone app crashing on event listing
  • And then… AI-integrated features started failing

 

We had to stop the release. Then postpone again the next day. For a moment, it felt like everything was connected… and everything was breaking.

 

But here’s the part that mattered:

The client didn’t panic.
They said — “This is how we learn. Let’s fix it properly.”

 

Lesson?

Sometimes the biggest risk is not the change itself…
It’s underestimating how deeply that change is connected to everything else.

And sometimes, the most important QA decision is simply this: “Let me test it one more time.”

 

Chaos level?

A solid 8/10: just one push away from becoming a production incident.


PolinaKr
  • Community Manager
  • March 25, 2026

“A change that looked ‘simple’ almost turned into a production nightmare.” [...]

I feel you! I've also messed up URLs once. A little differently, but still!


PolinaKr
  • Community Manager
  • March 25, 2026

“Who knows... maybe my mini test helped :)”

For sure it did help! 


ersourabhskjain
@PolinaKr @lewisp707 Once, I missed the higher-level authorisation while testing, and tested and passed the story with lower-level credentials. It was in the testing environment. 😁

Only later did I realise what mistake I had made.

I went back over the story requirements and retested against them.

From that day on, I have always checked the authorised users.


PolinaKr
  • Community Manager
  • March 25, 2026

@ersourabhskjain That’s why mistakes are useful, right? We learn so much from them.


  • Apprentice
  • March 25, 2026

I once sent out a marketing email without testing all the links first. The filler ‘xxx’ was still present when the email went out to 800k senior citizens, linking them directly to porn.



@ersourabhskjain That’s why mistakes are useful, right? We learn so much from them 

Yes @PolinaKr, from then onwards I always make sure to check authorisation, and I made a note of it for quality.