Are you using AI in testing?
Great, this article is for you.
If you’ve just spent three months learning AI-assisted testing, this is for you. If your company just spent its budget on AI tools and you’re still seeing production bugs, this is for you too.
Here is why:
The industry wants speed. Faster test generation. Faster automation. Faster execution. Faster Go-To-Market (GTM).
But here is what twenty years in testing has taught me: speed was never the problem.
Strategy was.
AI is making us test faster. But faster bad testing is still bad testing.
We are optimizing the wrong thing
Most testers I mentor do not have a testing speed problem. They have a "we don't know what to test" problem.
They have a "We use AI agents to test. Still have production bugs" problem. They optimize for coverage metrics instead of business risk.
AI does not fix this (at least as of now).
Without proper human oversight, it may amplify the problem.
I recently spoke with a company leader who mentioned they automated hundreds of test cases with AI. Management was thrilled. Coverage increased significantly, with fewer testers supporting the process.
After launch, they hit a critical production bug. It cost them thousands of dollars.
That bug was never caught by AI.
Here is another example:
One of my mentees spent two months learning AI to generate tests. Took courses. Practiced daily. Built impressive automation. Generated 600+ test cases for his project. Management praised his productivity. He felt accomplished.
The product launched. A major customer reported a critical data sync issue. None of his 600+ AI-generated tests caught it.
Both optimized for quantity. Both got quantity. But quality?
Two ways we are getting this wrong
Volume over value
Many QAs are living this dream now: "Generate 100 test cases in 5 minutes! Achieve 90% code coverage!"
I have seen testers do exactly that. Generate hundreds of test cases. Hit impressive coverage metrics. Feel increased productivity. Ship the product.
The product breaks once it is live.
Here is what actually happened: they generated volume. Fast. Those 100 test cases tested happy paths. Low-hanging fruit. Obvious scenarios. Standard inputs. The tests ran. They passed. Test reports were green everywhere. Coverage reports looked beautiful.
But what did they actually cover?
Low-risk admin screens. Configuration pages used twice a year. Default error handlers that never trigger. Login with valid credentials and some invalid ones. Forms with perfect inputs.
But the critical edge cases? Not tested.
The sync issue when two actions happen simultaneously? Never considered.
The integration failure under load? Missed completely. The timeout scenario? Not covered.
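To make that gap concrete, here is a minimal pytest sketch. The CheckoutService and its payment gateway are hypothetical stand-ins, not any real product's code; the contrast is between the test AI happily generates by default and the test that maps to an actual production risk.

```python
import pytest


class PaymentGatewayTimeout(Exception):
    """Raised when the (hypothetical) payment gateway does not respond in time."""


class CheckoutService:
    """Toy checkout flow: charge the gateway, then record the order."""

    def __init__(self, charge):
        self._charge = charge  # callable that talks to the payment gateway
        self.orders = []

    def checkout(self, cart_id):
        self._charge(cart_id)  # may raise PaymentGatewayTimeout
        self.orders.append(cart_id)


# The kind of test AI generates a hundred of: perfect input, obvious assertion.
def test_checkout_happy_path():
    service = CheckoutService(charge=lambda cart_id: None)
    service.checkout("cart-1")
    assert service.orders == ["cart-1"]


# The kind of test that maps to a production incident: the gateway times out
# mid-checkout. The assertion that matters is the business one: no phantom order.
def test_checkout_gateway_timeout_creates_no_order():
    def timing_out_charge(cart_id):
        raise PaymentGatewayTimeout("gateway did not respond")

    service = CheckoutService(charge=timing_out_charge)
    with pytest.raises(PaymentGatewayTimeout):
        service.checkout("cart-1")
    assert service.orders == []  # nothing half-completed, nothing to refund
```

Generating the first kind by the hundred takes seconds. Deciding that the second kind needs to exist at all is the part no tool decides for you.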
Whether you used company budget or your own time and money to learn this, the result is the same.
Impressive numbers. Superficial testing. Critical bugs in production.
I ask testers (and, these days, developers too): "How long did it take you to generate those test cases?"
"Just five minutes with AI!" they answer, with a glow in their eyes.
"And how long did it take to decide which of those test cases actually matter?" I ask.
Silence.
"What about that 90% coverage - what are you actually covering?"
"Um... the code?", they replied. This time the glow is missing.
"But is it the code that matters to users ? Is it testing the scenarios that may break in production?", I asked a straight question.
This time long silence.
They invested time in execution. Chased metrics. But skipped thinking.
An SDET once proudly showed me her test suite. "Look: 700+ automated tests, all generated in just 10 minutes. 87% coverage!"
I asked her to show me their production bugs. The majority were payment failures. Data sync issues. Race conditions. Timeouts.
Then I looked at those 700+ tests. Not one covered those failure scenarios.
Volume is not value.
Coverage is not quality. Speed is not strategy.
You can generate 1,000 test cases and corresponding automation scripts in minutes with AI.
But if they test the wrong things, you have just spent time creating an expensive illusion of quality.
And it is dangerous.
Execution over thinking
Here is the truth. And it is a bitter one.
We are using AI to replace the easy part of testing (and testers). Running tests. Generating test data. Writing basic assertions.
We are not using AI for the hard part.
Knowing what actually matters. Understanding business risk. Deciding what to test when you cannot test everything. Asking "what could go wrong that would cost us money or trust?"
That is what differentiates good testers from the rest. And AI is not solving that part. At all.
Whether you are an individual contributor investing personal time or a company investing budget, we are solving the wrong problem.
The hard questions are still hard:
- What actually matters to the business?
- What actually matters to users?
- Which failures would cost us the most?
- Where are the integration points that can break?
- What happens under stress, under load, under unusual conditions?
- What do users actually do versus what we think they do?
AI does not answer these questions. You do.
AI can execute your strategy. At scale. Loyally.
But if you do not have a strategy, AI just executes confusion faster.
So, what should we be solving?
The problem is not "how do we test faster?"
The problem is "how do we test smarter?"
Not more test cases, but the right test cases. Not higher coverage, but risk-based coverage.
Not just automation, but strategic automation of high-value scenarios.
Not AI adoption just for the sake of it, but real use cases where it actually assists you.
If your testing strategy is already solid, AI will amplify it. You will test smarter and faster. Your time investment or company budget will pay off.
But if your strategy itself is broken, AI will just help you fail faster.
Your effort gets wasted.
I have seen teams and individual testers transform their testing by asking better questions first:
"What are the top 5 business risks in this release?"
"What failures would cost us the most money or trust?"
"If we can only test 20% of scenarios, which 20% matters most?"
"What broke in production in the last release that our tests missed?"
"Which failure scenarios are we still not testing at all?"
Answer those first.
Then use AI to execute that strategy faster. That is how you make your time investment or company budget actually matter.
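If you want that to feel concrete, here is a minimal sketch of risk-based prioritization. The scenarios and scores are invented for illustration; the real work is a human deciding what impact and likelihood mean for this product.

```python
from dataclasses import dataclass


# A made-up scoring exercise: rank scenarios by business risk before generating tests.
@dataclass
class Scenario:
    name: str
    impact: int      # 1-5: cost in money or trust if this fails in production
    likelihood: int  # 1-5: how plausible the failure is, based on past incidents

    @property
    def risk(self) -> int:
        return self.impact * self.likelihood


scenarios = [
    Scenario("Login with valid credentials", impact=2, likelihood=1),
    Scenario("Payment gateway timeout mid-checkout", impact=5, likelihood=3),
    Scenario("Two users editing the same record (sync conflict)", impact=5, likelihood=4),
    Scenario("Rarely used admin configuration page", impact=1, likelihood=1),
]

# Test the top of this list first, and deepest; let AI generate volume for the rest.
for s in sorted(scenarios, key=lambda s: s.risk, reverse=True):
    print(f"{s.risk:>2}  {s.name}")
```

The output ranking is not the point. The point is that strategy (which 20% matters most) comes first, and only then does AI get to generate and execute at speed.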
The uncomfortable truth
If AI is not helping your testing, maybe your testing approach was already broken.
The tool is exposing the problem. Not creating it. Period.
After two decades of experience, I can tell you this: good testers do not need AI to be good. But good testers with AI become exceptional.
Whether you spent personal time learning, or your company spent money buying AI tools, the question is not, "Should we adopt AI in testing?"
The question is, "Do we actually understand what good testing looks like?" or “How to Adopt AI and not replace testers?”.
Fix the foundation first.
Start here
I know what you are thinking.
Maybe you already spent two months learning AI in testing. You cannot exactly get that time or money back.
Fair.
But you can change how you use what you have learned.
Stop asking: "How many test cases can AI generate? What coverage can we achieve?"
Start asking: "Does this testing align with actual business risk? Are we testing what actually breaks? How can I use this AI tool to add value, not volume ?"
Stop chasing: Coverage metrics and automation counts
Start measuring: Critical bugs found before production. Customer-impacting issues prevented. Business risk mitigated.
Stop treating: AI as test executor and metric booster.
Start using: AI as a thinking partner for strategy.
Stop investing time in: More courses on AI test generation
Start investing time in: Understanding what actually needs testing and why
The industry will not tell you this. It is busy selling tools and courses.
Your company will not admit this. It has invested, and it wants to adopt AI and market itself.
You might not want to hear this. You just invested your time and/or money.
But after experimenting and using AI, here is my take:
“AI is not the solution if you do not know the problem.”
Stop investing time or money to fix strategy problems with tools.
Start building strategy. Then amplify it with tools.
Years from now, people and companies will remember whether you shipped quality software and a good user experience. Not which AI or automation tools you used.
Think first, then use.
It's not AI.
It's I + AI.