
When a bug slips into production: causes, learnings & prevention

  • January 28, 2026
  • 4 replies
  • 506 views
Mukta Sharma

This happened a few years ago, when I was working as an SDET in a product company. At the time, it felt like just another sprint. Looking back now, it became one of those moments that reshaped how I test.

It taught me that users don’t read acceptance criteria, and that testing isn’t only about validating what’s written — it’s about predicting how real people will behave once the feature is live. They click around, skip steps, and try things we never imagine during sprint planning.

We were an agile team working on a transport and tourism e‑commerce platform based in London. This particular sprint focused on improving the customer journey, and one of the key deliverables was a new user‑details form that appeared during the purchase flow.

Some fields — like Full Name, Email Address, and Payment Details — were mandatory for checkout, while others — such as Alternate Phone Number, Special Requests, or Promo Code — were optional.

Our goal was to ensure the entire end‑to‑end experience worked smoothly, from filling in these details to completing payment and receiving confirmation.
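For concreteness, the split between mandatory and optional fields could be modelled roughly like the sketch below. This is only an illustration in TypeScript; the field names follow the description above, and the helper function is hypothetical, not the platform's actual code.

    // Hypothetical sketch of the user-details form and its mandatory-field check.
    interface UserDetailsForm {
      fullName: string;         // mandatory
      email: string;            // mandatory
      paymentDetails: string;   // mandatory
      alternatePhone?: string;  // optional
      specialRequests?: string; // optional
      promoCode?: string;       // optional
    }

    // Returns the labels of mandatory fields that are missing or blank.
    function missingMandatoryFields(form: UserDetailsForm): string[] {
      const mandatory: Array<[string, string | undefined]> = [
        ["Full Name", form.fullName],
        ["Email Address", form.email],
        ["Payment Details", form.paymentDetails],
      ];
      return mandatory
        .filter(([, value]) => !value || value.trim() === "")
        .map(([label]) => label);
    }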

Once the new user‑details form was built and integrated, I went through all the manual test cases we had for it. Everything behaved exactly the way we expected.

Our automation regression suite ran overnight and didn’t flag any failures. There were no high‑ or critical‑severity bugs open, and nothing in the backlog looked risky. The Product Owner reviewed the story and gave us the green signal. So, as planned, this user story went out with the rest of the sprint’s deliverables.

The team was confident. I was confident. It felt like a release that wouldn’t cause any surprises once it went live.

The bug that didn’t look like a bug

Two days later, our manager dropped a message in the team channel saying a customer had reported an issue. It wasn’t about a broken feature or a crash — just something that “didn’t feel right.”

The issue was this: the user‑details form was being submitted even when mandatory fields were empty, but only if the customer navigated back and forth using the browser’s Back and Forward buttons.

When I dug deeper, I realized this wasn’t a missed validation or a simple UI glitch. The browser was restoring old, cached form data when the user navigated back, and our system was still treating that stale data as valid input.

So, from the customer’s perspective, the form looked empty — but the system thought it already had the previous values. That combination of unexpected user behaviour and cached data created a scenario none of our tests had covered.
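To make that failure mode concrete, here is one plausible way such a mismatch can arise, sketched in TypeScript. It is purely illustrative: the selectors, field names, and in-memory model are assumptions, not the platform's real implementation.

    // Illustrative sketch of the failure mode, not the real implementation.
    // The form keeps an in-memory model of what the user typed. If the browser
    // restores the page from its back/forward cache, that model can survive
    // while the visible inputs no longer match it, so validation passes on
    // stale values the customer can no longer see.
    const formModel: Record<string, string> = {};

    document
      .querySelectorAll<HTMLInputElement>("#user-details input")
      .forEach((input) => {
        input.addEventListener("input", () => {
          formModel[input.name] = input.value; // model updated as the user types
        });
      });

    function onSubmit(event: Event): void {
      event.preventDefault();
      // Bug: the check runs against the cached model, not the current inputs.
      const missing = ["fullName", "email", "paymentDetails"].filter(
        (field) => !formModel[field]?.trim()
      );
      if (missing.length === 0) {
        sendToCheckout(formModel); // proceeds even though the form looks empty
      }
    }

    document
      .querySelector<HTMLFormElement>("#user-details")
      ?.addEventListener("submit", onSubmit);

    // Assumed to exist elsewhere in the checkout flow.
    declare function sendToCheckout(data: Record<string, string>): void;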

How did we miss this?

Once we managed to reproduce the issue, the real question wasn’t what went wrong — it was why. And the answer was obvious.

We had tested exactly what the acceptance criteria told us to test. We walked through the happy paths, step by step, and every expected flow behaved exactly the way it should. But we also assumed users would move through the journey in the same clean, linear order we had in mind.

This customer didn’t.

They went back. They jumped forward. They changed a few fields, and they skipped some mandatory ones entirely. That unusual mix of actions — never mentioned in the requirements and never discussed during planning — exposed the flaw.

Manual testing missed it because we didn’t test beyond the scenarios we expected.

Automation missed it because our scripts only checked the ideal flow with the ideal inputs. They never explored what happened when the user behaved differently.

And sprint pressure didn’t help. As the deadline got closer, the time we normally kept aside for exploratory testing just wasn’t there anymore. We were focused on finishing the planned test cases and closing the story, not exploring the “what‑ifs” that weren’t part of the acceptance criteria.

Fixing it. And fixing ourselves

Once we understood the root cause, the fix itself was straightforward. I walked the developer through the exact steps, and a small code change ensured the form data refreshed correctly when users navigated back and forth.
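The general shape of that kind of change, sketched here as an assumption rather than the team's actual patch, is to listen for the browser's pageshow event and reset both the visible form and any cached client-side state when the page comes back from the back/forward cache:

    // Illustrative fix sketch (not the actual patch).
    // `pageshow` fires with `event.persisted === true` when the browser restores
    // the page from its back/forward cache. Resetting the form and the cached
    // model keeps validation in sync with what the customer actually sees.
    declare const formModel: Record<string, string>; // the hypothetical model from the earlier sketch

    window.addEventListener("pageshow", (event: PageTransitionEvent) => {
      if (event.persisted) {
        document.querySelector<HTMLFormElement>("#user-details")?.reset();
        for (const key of Object.keys(formModel)) {
          delete formModel[key]; // drop stale values so they cannot be resubmitted
        }
      }
    });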

Before the fix went live, I retraced the customer’s journey — and then pushed it further. I tried the same flow on desktop and mobile, using different paths and variations to make sure nothing else slipped through.

On the automation side, I added a new test to cover this scenario. Not because automation had failed, but because it needed to evolve based on what we learned.
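As one example of what such a regression test can look like, here is a Playwright-flavoured sketch. The URL, selectors, and error message are made up for illustration, and the original suite may well have used a different tool:

    // Hypothetical Playwright test for the back/forward navigation scenario.
    import { test, expect } from "@playwright/test";

    test("empty mandatory field still blocks checkout after back/forward", async ({ page }) => {
      await page.goto("https://example.com/checkout/user-details");

      // Fill the form once and continue, like a typical customer.
      await page.fill("#fullName", "Jane Doe");
      await page.fill("#email", "jane@example.com");
      await page.fill("#paymentDetails", "4111 1111 1111 1111");
      await page.click("#continue");

      // Replay the customer's unusual journey: back, forward, back again.
      await page.goBack();
      await page.goForward();
      await page.goBack();

      // Clear a mandatory field so the form is visibly incomplete.
      await page.fill("#email", "");
      await page.click("#continue");

      // Stale cached values must not count as valid input.
      await expect(page.locator(".field-error")).toContainText("Email Address is required");
      await expect(page).toHaveURL(/user-details/);
    });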

When the fix reached production, the issue was gone and the customer was unblocked.
The bug was fixed — but the lesson stayed.

What that bug taught me

This bug taught me a few things that still shape how I test today:

  • Users don’t read acceptance criteria.
  • Exploratory testing isn’t optional, even when automation is all green.
  • Automation needs to evolve with real user behaviour, not just the specifications.
  • Assumptions eventually show up as production bugs.

Most importantly, every escaped bug is a chance to improve how we test, not just fix a line of code.

Since that incident, I’ve approached releases differently.

I don’t stop at “Does this work?” I also ask, “How could this break if someone uses it differently?”

That small shift — from checking expected behaviour to being curious about unexpected behaviour — reduced production issues far more than adding more test cases ever did. Some bugs hurt. Some bugs teach. This one did both.

Thank you for reading! I hope my experience helps you prevent bugs from slipping into production.

About author:
I am a quality engineering professional with deep expertise built through years of hands-on experience in manual testing, test automation, and software delivery. I am passionate about simplifying complex testing concepts and excited to collaborate with ShiftSync and contribute to the global testing community. Enjoy reading my articles here!

4 replies

ujjwal.kumar.singh

In my previous company, I always added buffer time to my testing estimates despite questions from EMs, PMs, etc., and I would use this buffer time for exploratory testing. I continue with the same strategy in my current organization. The reason is that exploratory testing is creative, interesting, and a bit uncertain in nature, and if no bug is found after exploratory testing, it is a bit difficult to explain where and how we did the testing. What I have observed is that there is a stereotype that if any testing is done, bugs are supposed to be found. If that doesn't happen, the problem supposedly lies in our testing. So, to avoid that common stereotype, I never explicitly mention exploratory testing, but it is always part of my testing.

 

Another thing I have noticed is that exploratory testing is important for the backend, but for the front end it is crucial, as that is what the client will actually use.


Mukta Sharma
  • Author
  • Specialist
  • January 30, 2026

Thanks Ujjwal! I really appreciate the honesty here — this is a reality many testers sadly live in.

The “buffer time” approach makes sense, especially in orgs where testing is still judged by the number of bugs found instead of the confidence gained. Exploratory testing is creative and uncertain, and you’re right — explaining HOW you explored when no bugs appear is often harder than explaining a defect.

That stereotype of “no bugs = bad testing” is very real, and it’s why exploratory work often gets neglected. Hopefully, as an industry, we keep moving toward valuing learning and risk coverage, not just defect counts.

Also fully agree on frontend — backend exploratory is important, but frontend exploratory is crucial because that’s what users actually experience. Don't you think?

Thanks for sharing your experience here. I am glad my article could provide some insight into production defects. Have a great day ahead! Keep learning! Keep testing!

 


Ankur
  • Ensign
  • February 1, 2026

This is a really good article (topic) and an eye opener for every QA team.

My opinion here is that it's not possible to mention everything in acceptance criteria.

So, when anyone starts testing, they need to consider exploratory testing as part of their story tasks.

The reason I say that is because a tester always needs to think about how a customer might use the application. You can't tell a customer or user to use the application in a specific way.

 

In many projects, my team and I experienced that everything was tested, working fine, and matching the requirements, yet the client still reported issues the team had never thought of, because we never thought about how the user would actually use the application.

 

And the bug mentioned in this topic is a great example showing that the QA team's responsibility is not limited to validating the acceptance criteria or requirements; it's also about securing the user experience and making sure the user doesn't suffer because of what we deliver.


Mukta Sharma
  • Author
  • Specialist
  • February 3, 2026

Thanks for sharing your thoughts, Ankur. I completely agree with you.

Acceptance criteria can never cover every possible scenario, and that’s exactly why exploratory testing is so important. Once formal test cases are done, testers really need to step into the user’s shoes and explore the application the way a real customer would.

Like you said, users don’t follow a “defined path.” They use the product in ways we often don’t expect, and many issues only surface when we think beyond requirements. I’ve had similar experiences where everything passed as per ACs, but the client still found issues because the real usage was different from what we assumed.

The example mentioned in this topic clearly shows that QA responsibility is not just about validating acceptance criteria. It’s also about safeguarding the overall user experience and making sure users don’t face issues due to edge cases or overlooked scenarios. Exploratory thinking really helps bridge that gap.

I am glad my experience resonated with you. ✨️✌️