Day 2: Capabilities of AI in Optimizing Testing Strategies

  • 13 June 2024


🔍 Let's dive into Day 2 of our AI Testing Myths vs. Realities Challenge!

Time for some deep dive discussions! Share your thoughts on the capabilities of AI in optimizing testing strategies.

Join the conversation, share your insights, tell us which statement you think is the reality and which is the myth, and explain your reasoning in the comments below for a chance to win a ShiftSync gift box!

Click here to check the rest of the questions.

Tune in tomorrow for the Day 3 Challenge.


2 replies


There are a lot of cases where one can incorporate AI into the work, including testing.

I will try to express some of them in a different way:

 

In my world, where testing’s key,

AI is said to bring efficiency.

Much like refining LEGO's art,

It can optimize each crucial part.

 

1. Automated Test Case Generation

 

AI crafts test cases with great care,

From requirements and past wear.

Like LEGO sets it studies deep,

Ensuring every piece will keep.
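
Stepping out of verse for a second: a minimal sketch of what LLM-assisted case generation could look like, assuming a hypothetical llm_complete() helper in place of a real model client, and a JSON contract of my own choosing.

```python
# Sketch only: llm_complete() is a hypothetical stand-in for your LLM client.
import json

def llm_complete(prompt: str) -> str:
    """Hypothetical LLM call; wire up your provider's client here."""
    raise NotImplementedError

def generate_test_cases(requirement: str) -> list[dict]:
    prompt = (
        "From the requirement below, produce test cases as a JSON array of "
        'objects with "title", "steps" and "expected" fields.\n\n'
        f"Requirement: {requirement}"
    )
    # Generated cases still need human review before they join the suite.
    return json.loads(llm_complete(prompt))
```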

 

2. Intelligent Test Prioritization

Thousands of pieces in a new design,

AI finds weak points, saves our time.

A LEGO bridge with joints at risk,

AI tests those first, no chance they miss.
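
In plainer terms, a toy version of that prioritization, assuming we track each test's failure history and which files it covers (all names and numbers are illustrative):

```python
# Toy prioritization: rank tests by overlap with changed files, then by
# historical failure rate. All data here is illustrative.
def prioritize(tests, changed_files):
    def score(t):
        overlap = len(set(t["covers"]) & set(changed_files))
        return (overlap, t["failures"] / max(t["runs"], 1))
    return sorted(tests, key=score, reverse=True)

tests = [
    {"name": "test_login",  "covers": ["auth.py"],   "failures": 4, "runs": 50},
    {"name": "test_search", "covers": ["search.py"], "failures": 1, "runs": 50},
]
print([t["name"] for t in prioritize(tests, ["auth.py"])])
# -> ['test_login', 'test_search']
```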

 

3. Adaptive Test Automation

LEGO evolves, so must our scripts,

AI adapts with gentle shifts.

Like a builder skilled and wise,

It keeps our tests up-to-size.
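
One concrete form of this is self-healing locators. A rough sketch, matching a stored attribute fingerprint against whatever elements survive a UI change; this is purely illustrative, not any vendor's actual algorithm:

```python
# Self-healing locator sketch: when the recorded selector no longer matches,
# fall back to the element whose attributes best resemble the old fingerprint.
from difflib import SequenceMatcher

def similarity(a: dict, b: dict) -> float:
    return SequenceMatcher(None, str(sorted(a.items())),
                           str(sorted(b.items()))).ratio()

def heal_locator(fingerprint: dict, candidates: list) -> dict:
    return max(candidates, key=lambda c: similarity(fingerprint, c))

recorded = {"tag": "button", "text": "Submit", "id": "submit-btn"}
page_now = [
    {"tag": "button", "text": "Submit", "id": "submit-button"},  # renamed id
    {"tag": "a", "text": "Cancel", "id": "cancel"},
]
print(heal_locator(recorded, page_now)["id"])  # -> submit-button
```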

 

4. Predictive Analytics for Defect Detection

Predicting flaws from patterns seen,

AI ensures our code stays clean.

Like knowing towers weak at base,

It tests new builds with steady pace.
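
In code, defect prediction often amounts to a classifier over code metrics. A minimal sketch with scikit-learn, on made-up data:

```python
# Defect-prediction sketch: fit a classifier on past modules and score new
# ones. Features and data are made up purely for illustration.
from sklearn.linear_model import LogisticRegression

# Features per module: [lines changed, past defects, cyclomatic complexity]
X_hist = [[500, 7, 30], [40, 0, 5], [300, 4, 22], [60, 1, 8]]
y_hist = [1, 0, 1, 0]          # 1 = module later shipped a defect

model = LogisticRegression().fit(X_hist, y_hist)

new_modules = [[420, 5, 25], [35, 0, 4]]
for mod, risk in zip(new_modules, model.predict_proba(new_modules)[:, 1]):
    print(mod, f"defect risk ~{risk:.0%}")   # test the risky builds first
```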

 

5. Enhanced Performance Testing

Complex cities made of bricks,

AI ensures no weak point sticks.

Simulating loads and strains,

It tests each piece, ensures it gains.
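
And a bare-bones load-generation sketch to close the set; call_endpoint() stands in for a real HTTP request, and all the numbers are arbitrary:

```python
# Bare-bones load generation: fire concurrent "requests" and report latency
# percentiles. call_endpoint() stands in for a real HTTP call.
import random, statistics, time
from concurrent.futures import ThreadPoolExecutor

def call_endpoint() -> float:
    start = time.perf_counter()
    time.sleep(random.uniform(0.01, 0.05))   # replace with a real request
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=20) as pool:
    latencies = list(pool.map(lambda _: call_endpoint(), range(200)))

p50 = statistics.median(latencies) * 1000
p95 = statistics.quantiles(latencies, n=20)[18] * 1000   # 95th percentile
print(f"p50={p50:.1f} ms  p95={p95:.1f} ms")
```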

 

Sorry for the LEGO examples, but that is my go-to place when I want to give a clear example people will understand.


AI-Assisted Test Case Design is the myth; smart impact analysis is the reality.

 

AI-Assisted Test Case Design: AI can state whatever it produces with great confidence, yet be absolutely wrong, and it can even justify its reasoning. It can design the test cases, but we need to validate the results once again; it all depends on the data we give the AI models. The reason I call this a myth is that I do use AI in my test case design, but I am not just copying and pasting the content it provides. I use the outputs from the AI while designing the test cases myself. Most of the time, prompting matters, and, as a matter of fact, chain-of-thought prompting has worked well for me.
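
To make that concrete, a chain-of-thought style prompt for test design might look something like this (the requirement and wording are illustrative only):

```python
# A chain-of-thought style prompt for test design; the requirement and the
# exact wording are illustrative, not a fixed recipe.
requirement = "Users can reset their password via an emailed link valid for 1 hour."

prompt = f"""You are a QA engineer. Requirement: {requirement}

Think step by step:
1. List the user flows and inputs involved.
2. For each, identify boundary values and failure modes.
3. Only then write the test cases (title, steps, expected result)."""

print(prompt)   # send this to your model of choice, then review every case
```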

 

Smart impact analysis: I tried a tool called Avo Assure, which has an essential feature: once the application is upgraded, it can run an impact analysis to determine which controls have changed. Based on that, we can perform a risk analysis and identify the high-risk areas where we should focus more of our testing.

In a similar way, I have come across our own Tosca Copilot, which uses Gen AI capabilities through prompts. For example, where we feel a web page is producing more errors, we could ask it to run the test cases that include that module (I am not sure whether this works yet, but I believe it will very soon). Beyond that, it is useful for people who struggle to write TQL queries, and it can clean up unused modules and controls, which is a much-needed feature.
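
The underlying idea is easy to sketch: diff the application's control inventory between versions and select only the tests that touch what changed. A toy model, not how Avo Assure or Tosca Copilot actually implement it:

```python
# Toy impact analysis: diff the control inventory between app versions and
# select only the tests that touch what changed. Not any vendor's real logic.
def changed_controls(old: dict, new: dict) -> set:
    return {c for c in old.keys() | new.keys() if old.get(c) != new.get(c)}

old_ui = {"login.submit": "button#go", "search.box": "input#q"}
new_ui = {"login.submit": "button#login-go", "search.box": "input#q"}

test_map = {"test_login": {"login.submit"}, "test_search": {"search.box"}}
impacted = changed_controls(old_ui, new_ui)
to_run = [t for t, ctrls in test_map.items() if ctrls & impacted]
print(impacted, to_run)   # -> {'login.submit'} ['test_login']
```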

So I believe smart impact analysis is here, but it still needs human intervention.
