Question

Answer Chris and Sanjay's Question - The Cost of Quality

  • November 20, 2025
  • 15 replies
  • 156 views

Mustafa

Where are you in your adoption of AI in your QE practice? Just starting? Pilot running? Seeing ROI? If so, what does that look like for you?

15 replies

  • Ensign
  • November 20, 2025

I have used AI for a year in daily QA work: analyzing requirements, creating test cases, preparing test plans, writing test designs, implementation, and review… It has increased my productivity by about 30%.


We’re in the early stages of adopting AI within our QE practice. Right now, we’re exploring use cases, evaluating tools, and building internal understanding of where AI can add the most value, particularly in areas like test case generation, defect prediction, and accelerating root-cause analysis.
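For teams at this stage, requirement-to-test-case generation is often the first concrete win. Below is a minimal sketch of that idea, assuming the OpenAI Python SDK; the model name, prompt, and requirement are illustrative, not a recommendation of a specific tool.

```python
# Minimal sketch: drafting test cases from a requirement with an LLM.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable. Model name is illustrative.
from openai import OpenAI

client = OpenAI()

requirement = (
    "Users can reset their password via an emailed link that "
    "expires after 30 minutes."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model works here
    messages=[
        {"role": "system",
         "content": "You are a QA engineer. Write concise test cases "
                    "as a numbered list: title, steps, expected result."},
        {"role": "user",
         "content": f"Requirement:\n{requirement}\n\n"
                    "Include negative and edge cases (expired link, reuse)."},
    ],
)

print(response.choices[0].message.content)
```

Generated cases still need human review; treat them as a draft to edit, not a finished suite.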


  • Space Cadet
  • November 20, 2025

Just starting: we are exploring AI tools in our QE practice, evaluating opportunities to automate repetitive tasks like test generation, defect analysis, and reporting. At present we are using tools like TestRigor and Applitools.


  • Apprentice
  • November 20, 2025

As a team of manual QAs, we’re only at the beginning of our journey in integrating AI into our workflow. We’ve started using it to summarise and simplify complex acceptance criteria. We also utilise it to help us structure our test cases so we don’t miss anything.

We’re already seeing significant benefits in terms of speed and test coverage.

 


  • Ensign
  • November 20, 2025

We currently use Testim, and we are exploring autonomous test generation tools.


  • Ensign
  • November 20, 2025

I have been using AI in our regular testing for the past year. I use it to create test cases after finalizing requirements, verify and refine them, and also support our overall testing environment.


  • Ensign
  • November 20, 2025

Just starting


  • Apprentice
  • November 20, 2025

We’ve used AI for test case creation, test data creation, test automation development, and fixing test automation bugs. Each use case has improved productivity, test coverage, and test effectiveness. Our largest opportunities are defect prediction and accelerating root-cause analysis.
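On the test data creation point, much of the ground can be covered with a rule-based generator before any AI enters the picture. Here is a minimal sketch using the Faker library; the schema and field names are hypothetical.

```python
# Minimal sketch of synthetic test data generation with Faker
# (pip install faker). Field names and row count are hypothetical;
# adapt to your schema.
import csv
from faker import Faker

fake = Faker()
Faker.seed(42)  # reproducible data across test runs

with open("users.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "email", "signup_date"])
    writer.writeheader()
    for _ in range(100):
        writer.writerow({
            "name": fake.name(),
            "email": fake.unique.email(),  # unique avoids key collisions
            "signup_date": fake.date_between(start_date="-2y").isoformat(),
        })
```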


Bharat2609
  • Ensign
  • November 20, 2025

@Mustafa 

I’m already using AI actively in my QA workflow. Not at the “just exploring” stage anymore. I’d say I’m between pilot and clear ROI.

How I’m using it

  • Generating and refining test cases

  • Writing and debugging automation scripts faster

  • Using self-healing locators to cut maintenance (see the sketch after this list)

  • Creating synthetic test data

  • Getting early risk insights from AI-based analysis
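For readers unfamiliar with the self-healing idea above, here is a minimal sketch of the fallback-locator pattern in Selenium; commercial tools also learn and persist new locators, which this sketch does not attempt.

```python
# Minimal self-healing locator sketch: try a primary selector, then
# ordered fallbacks, logging each "heal" so locators can be updated.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

def find_with_healing(driver, locators):
    """locators: ordered list of (By, selector) pairs, primary first."""
    for strategy, selector in locators:
        try:
            return driver.find_element(strategy, selector)
        except NoSuchElementException:
            print(f"heal: {selector!r} failed, trying next fallback")
    raise NoSuchElementException(f"all locators failed: {locators}")

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # hypothetical page
login = find_with_healing(driver, [
    (By.ID, "login-btn"),                       # primary, breaks on redesign
    (By.CSS_SELECTOR, "button[type=submit]"),   # structural fallback
    (By.XPATH, "//button[contains(., 'Log in')]"),  # text-based fallback
])
```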

What ROI looks like

  • Faster test cycles

  • Better coverage

  • Fewer flaky tests

  • Less manual effort

  • Earlier defect detection

Tech I follow
GenAI automation tools, self-healing frameworks, AI-driven API testing, autonomous testing agents, and AI-powered dashboards.

 


  • Ensign
  • November 20, 2025

Just starting


سامان ذوالفقاریان
Our approach has moved past the initial pilot phase; we are currently focused on widespread deployment and on continuously optimizing and scaling AI models across the entire software development life cycle (SDLC).

Here is a breakdown of our status and results:

1. Current Stage: Strategic Scaling

We are leveraging generative AI (e.g., Gemini) not just for basic test automation but for intelligent dev-asset generation: producing synthetic test data, generating code snippets, and, crucially, automatically identifying and creating the complex, high-impact edge-case scenarios that human testers often miss.

2. Key Initiative: The Astra Digital Twin Project

Our flagship project in this domain is the deployment and scaling of the "Astra Digital Twin v5", a simulation model powered by LLMs such as Gemini and designed to simulate authentic user behavior and personas in production-like environments.

Astra lets us shift QA from a cycle-end phase to a quality practice embedded throughout development: it automatically traverses critical user paths and provides real-time quality reports.

3. Return on Investment (ROI) and Impact

Yes, the ROI is clearly evident, and it has fundamentally shifted our cost-of-quality (CoQ) model:

  • Reduced appraisal and failure costs: by automating script generation with generative AI, we have cut the time spent on traditional test script creation and maintenance by over 70%.

  • Increased critical bug detection: the strong shift-left strategy Astra enables has given us a 45% increase in the rate of critical bugs caught during development and staging, which sharply reduces the cost of fixing them in production.

  • Business acceleration: beyond the monetary savings, AI provides confidence and velocity, turning QA from a potential bottleneck into a business enabler and letting our teams release high-quality products faster and with greater assurance.

In summary, AI is not merely an automation tool for us; it is a driver of quality innovation that helps our products meet exceptional standards while reaching the market faster.
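The post above doesn't show Astra's internals, so the following is only a heavily simplified sketch of the persona-simulation idea, using the google-generativeai SDK; "Astra" itself, the persona text, and the user paths are hypothetical stand-ins.

```python
# Heavily simplified sketch of LLM-driven persona simulation
# (pip install google-generativeai). The persona and path list are
# hypothetical; a real digital twin would drive a browser or API
# client with these decisions rather than just print them.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumption: key supplied here
model = genai.GenerativeModel("gemini-1.5-flash")  # model name illustrative

persona = ("You are a first-time user on a slow connection who abandons "
           "any flow that takes more than three steps.")

for path in ["sign-up", "checkout", "password reset"]:
    reply = model.generate_content(
        f"{persona}\nDescribe, step by step, how you attempt the "
        f"'{path}' flow and where you would give up or get confused."
    )
    print(f"--- {path} ---\n{reply.text}\n")
```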


  • Ensign
  • November 20, 2025

I'm just exploring on my own; my industry hasn't started adopting these AI tools yet, so I'm at the starting stage.


  • Apprentice
  • November 20, 2025

I am a developer who does QA as well, and I have been using AI for the last couple of years to assist in all areas of the SDLC. In testing specifically, I have been using AI (primarily ChatGPT) to create fully detailed test case scenarios, from descriptions to expected results, as well as to analyze production code and create code snippets that demonstrate specific functionality.
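As an illustration of that last use case, here is a minimal sketch that feeds a function's source to a ChatGPT model and asks for a pytest module; slugify() is a hypothetical function under test, and the prompt is illustrative.

```python
# Minimal sketch: asking an LLM to draft a pytest module for an
# existing function. Assumes the OpenAI SDK (pip install openai);
# slugify() is a hypothetical function under test.
import inspect
from openai import OpenAI

def slugify(title: str) -> str:
    """Hypothetical production function under test."""
    return "-".join(title.lower().split())

client = OpenAI()
source = inspect.getsource(slugify)  # hand the model the real code

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Write a pytest module covering normal, empty, and "
                   f"unicode inputs for this function:\n\n{source}",
    }],
)
print(response.choices[0].message.content)  # review before committing
```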


  • Ensign
  • November 20, 2025

The adoption of AI in Quality Engineering (QE) typically falls into a few maturity stages, and organizations vary widely in where they are. Here’s how it usually looks:

1. Just Starting

  • Focus: Exploring AI concepts, identifying use cases (e.g., predictive defect analysis, intelligent test case generation).
  • Tools: Experimenting with AI-enabled features in existing test automation tools.
  • Challenges: Skills gap, unclear ROI, cultural resistance.

2. Pilot Running

  • Focus: Running small-scale pilots in areas like:
    • Test data generation using AI.
    • Self-healing test automation.
    • Defect prediction models (see the sketch after this section).
  • Outcome: Proof of concept for feasibility and cost-benefit.
  • Challenges: Integration with existing pipelines, data quality.
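As referenced above, a defect prediction pilot can start very small. Below is a minimal sketch using scikit-learn on per-file process metrics; the features and training rows are hypothetical placeholders for data you would mine from version control and the bug tracker.

```python
# Minimal defect-prediction sketch (pip install scikit-learn).
# Features per file: [recent commits, lines churned, past bug fixes].
# The data here is hypothetical; real pilots mine it from git history
# and the issue tracker.
from sklearn.linear_model import LogisticRegression

X_train = [
    [12, 450, 5],   # hot file with a bug history
    [2,  30,  0],   # quiet file
    [8,  300, 3],
    [1,  10,  0],
]
y_train = [1, 0, 1, 0]  # 1 = defect found in the next release

model = LogisticRegression().fit(X_train, y_train)

# Rank current files by predicted defect risk to focus test effort.
candidates = {"checkout.py": [10, 380, 4], "utils.py": [1, 15, 0]}
for name, feats in candidates.items():
    risk = model.predict_proba([feats])[0][1]
    print(f"{name}: defect risk {risk:.2f}")
```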

3. Seeing ROI

  • Indicators of ROI:
    • Reduced Test Cycle Time: AI-driven prioritization and automation can cut regression cycles by 30–50% (a prioritization sketch follows this list).
    • Improved Defect Detection: Predictive analytics reduces production defects by 20–40%.
    • Cost Savings: Lower manual effort in test design and maintenance.
    • Enhanced Coverage: AI helps identify gaps and generate edge cases.
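The cycle-time numbers above typically come from running the riskiest tests first. Here is a minimal sketch of one such prioritization heuristic; the test metadata and scoring weights are hypothetical, and real pipelines would pull them from CI history and the diff.

```python
# Minimal test-prioritization sketch: order regression tests by a
# risk score combining overlap with changed files and recent failure
# rate. All metadata below is hypothetical.
changed_files = {"checkout.py", "cart.py"}

tests = [
    {"name": "test_checkout_flow", "touches": {"checkout.py"}, "fail_rate": 0.20},
    {"name": "test_search",        "touches": {"search.py"},   "fail_rate": 0.01},
    {"name": "test_cart_totals",   "touches": {"cart.py"},     "fail_rate": 0.05},
]

def risk(test):
    overlap = len(test["touches"] & changed_files)
    # Change impact dominates; failure rate breaks ties between tests.
    return overlap + test["fail_rate"]

for t in sorted(tests, key=risk, reverse=True):
    print(f"{risk(t):.2f}  {t['name']}")
```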

What ROI Looks Like

  • Quantitative: Faster releases, fewer defects, reduced cost of quality.
  • Qualitative: Improved customer experience, better risk management, and higher confidence in releases.

We're in the pilot-to-scaled adoption phase, having moved beyond initial experimentation about 8 months ago. We've deployed AI in specific areas where we're seeing measurable impact, while still exploring opportunities in others.