Question

Seeking Expert Guidance on Generative AI for Test Automation

  • December 19, 2025
  • 3 replies
  • 31 views

Hi everyone, I'm currently working on integrating Generative AI into our test automation framework and could use some expert advice. Has anyone had experience using AI to enhance test case generation or maintenance? Any recommendations on best practices or tools would be greatly appreciated!

3 replies


Hello @CharmingMoore121,

 

I can’t claim to be an expert, but I have done some PoCs, and based on that experience I believe I can help.

 

If I understood your query correctly, you want to use GenAI to enhance test case generation. That works best with proper context, meaning: the data GenAI needs to understand the feature under test, formatting instructions, and even a sample test case.
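To illustrate, here is a minimal sketch of assembling that context into a single prompt. The function name, fields, and example strings are all hypothetical, and the actual model call is left out; adapt this to whichever LLM client you use.

```python
def build_test_gen_prompt(feature_description, format_instructions, sample_test):
    """Assemble the context a GenAI model needs to draft test cases:
    data describing the feature, formatting instructions, and a
    sample test case to imitate."""
    return (
        "You are a QA engineer. Draft test cases for the feature below.\n\n"
        f"Feature under test:\n{feature_description}\n\n"
        f"Format every test case exactly like this:\n{format_instructions}\n\n"
        f"Sample test case to imitate:\n{sample_test}\n"
    )

# Hypothetical usage: the resulting string would be sent to your LLM of choice.
prompt = build_test_gen_prompt(
    "Login form accepts email + password, locks after 5 failed attempts",
    "ID | Title | Steps | Expected result",
    "TC-001 | Valid login | Enter valid creds, submit | User lands on dashboard",
)
```

The point is simply that the more of this context you provide up front, the closer the generated cases are to your house format.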

 

Looking forward to helping more on this.


Bharat2609
  • Ensign
  • January 3, 2026

Hello @CharmingMoore121

AI can help generate test cases, suggest edge scenarios, and reduce maintenance effort, but human review is essential to validate business logic and test accuracy. I’ve seen better results when AI assists in drafting tests while testers review, refine, and approve them.

Best practice is to start small—use AI for test case suggestions or refactoring, keep humans in control of decisions, and gradually expand usage. This balance ensures quality, trust, and reliable automation.

Hope this helps 👍

 

Please let me know if you require any help or suggestions.


ujjwal.kumar.singh


There are many use cases of Gen-AI in test automation. We are currently using Claude Code and Cursor:

  1. We feed our automation code and the PRD to the model to check whether the scripts and requirements are in sync; if there is any gap, we add the missing coverage.
  2. We also use Gen-AI to write new scripts for our existing automation framework.
  3. We further use Gen-AI to optimize and refactor existing automation code.
  4. Then there is self-healing: either through AI-based testing tools that support it, or, as we have done, by integrating an LLM into the automation framework so it can self-heal a script when it fails.
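Point 4 above can be sketched as a simple locator-fallback loop. This is only an illustration of the self-healing idea, not our actual integration; `find` stands in for whatever element-lookup call your framework exposes (e.g. a Selenium `find_element` wrapped to return `None` on failure), and the LLM would be the thing proposing the fallback locators.

```python
def self_heal_find(find, locators):
    """Try each locator in order and return the first match along with
    the locator that worked, so the script survives minor UI changes.

    `find` is any callable that returns an element or None on miss."""
    for locator in locators:
        element = find(locator)
        if element is not None:
            if locator != locators[0]:
                # A fallback worked: log it so the script can be updated.
                print(f"Self-healed: fell back to {locator!r}")
            return locator, element
    raise LookupError(f"No locator matched: {locators}")
```

In practice the fallback list would come from the LLM (or from locator history), and any healed locator should be flagged for human review rather than silently committed.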

 

We are using the BrowserStack Test Management tool for test case management, so we use its AI test case generator. The generated test cases are okay, but they need manual review before being taken into consideration.

For test case generation we have to provide a detailed prompt describing what kinds of test cases we expect: positive, negative, edge cases, etc. Even then it won’t guarantee complete coverage, but it helps a lot up to a certain extent.
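As an example of such a detailed prompt (the wording and helper are my own illustration, not BrowserStack’s generator), spelling out the expected case categories explicitly:

```python
# Categories we want the generator to cover explicitly; extend as needed.
CASE_CATEGORIES = ["positive", "negative", "edge", "boundary"]

def detailed_prompt(requirement):
    """Build a prompt that names the kinds of test cases expected,
    nudging the generator toward broader (but still not complete) coverage."""
    wanted = ", ".join(CASE_CATEGORIES)
    return (
        f"Generate test cases ({wanted}) for the requirement below. "
        "Flag any assumptions you make so a human reviewer can check them.\n\n"
        f"Requirement:\n{requirement}\n"
    )
```

Naming the categories reliably pulls in negative and edge cases the generator would otherwise skip, but the output still needs the manual review mentioned above.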