Challenge

Day 1: Basics of AI Testing

  • 12 June 2024
  • 2 replies
  • 63 views


Day 1 of our AI Testing Myths vs. Realities Challenge kicks off today!

The challenge is to select which option is the Myth and which is the Reality, and to explain your reasoning.

We’re starting by exploring the capabilities of AI in testing.

Join the conversation, share your insights, and tell us which one you think is the reality and which is the myth in the comments below for a chance to win a ShiftSync gift box!

Click here to check the rest of the questions.

Tune in tomorrow for the Day 2 Challenge.


2 replies


From a general point of view, the first one is the myth and the second is the reality. However, if we look more closely, both still have room for improvement.

 

Giving an AI a URL and having it perform complex scenarios, report any bugs, clarify issues, create change requests, and so on is not possible at the moment (and I hope it will stay that way in the future).

 

AI is of course able to give you test cases based on text or prompts, but there are a few issues here as well. On the one hand, do you really want an AI to know your product and your requirements? How private or public is that information after you have shared it with an AI? Where is your competitive advantage? Also, for an AI to really give you complex scenarios, the prompt would have to contain essentially all of your requirements; otherwise you will get really easy, simple, and boring test cases :).

 

What do you think?


Based on my experience with AI in testing, the Myth and Reality statements are correctly aligned.

 

Myth: Autonomous testing by just giving access to a URL is out of reach, at least for now. Without humans, AI cannot simply test an application and provide the inputs on its own. Even though AI keeps evolving, it is not human. If you consider a single application and you provide the controls, it can give you test cases with data, which is a good thing; but to cover complex scenarios it needs to know the system inside and out, and even then we cannot assume those test cases will make sense. In the end, AI-designed test cases still need additional validation.

 

Fact: Automatic test case generation is already live. I tried it with both ChatGPT and Gemini models; both worked well and produced detailed results. We still need some manual intervention to make it work like a charm (it is in an evolving phase, and eventually it will get there).

 

We need to create a good prompt for that: mention the user story, perhaps include existing test cases, and describe how the application looks; in effect, we need to coach the model so it writes better test cases. It is like being handed one web page and asked to write test cases for it: we can write plenty, but the AI also needs the exact requirements to produce good ones. Telling it to "cover the edge cases, positive cases, negative cases, as well as cases that most testers usually miss" worked well for me. A minimal sketch of that prompting pattern is shown below.
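To illustrate, here is a minimal sketch of that prompting pattern in Python, using the OpenAI client purely as an example; the model name, the helper function, and the sample user story are my own assumptions rather than anything from this thread, and the same prompt works with Gemini or any other chat model.

# Minimal sketch of the prompting approach described above: give the model the
# user story, any existing test cases, and an explicit instruction to cover
# edge, positive, negative, and commonly missed cases.
# Assumes the `openai` Python package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def generate_test_cases(user_story: str, existing_cases: str = "") -> str:
    """Ask the model for test cases derived from a user story."""
    prompt = (
        "You are a senior QA engineer. Write detailed test cases for the "
        "user story below. Cover the edge cases, positive cases, negative "
        "cases, as well as cases that most testers usually miss.\n\n"
        f"User story:\n{user_story}\n"
    )
    if existing_cases:
        prompt += f"\nExisting test cases for reference:\n{existing_cases}\n"

    response = client.chat.completions.create(
        model="gpt-4o",  # example model name; substitute whichever model you use
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    story = "As a user, I can reset my password via an emailed link."
    print(generate_test_cases(story))

The specific API matters less than the prompt: the quality of the output tracks how much of the user story, UI description, and requirements you feed in, which is exactly the manual intervention mentioned above.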

Thanks for the wonderful question, @Mustafa.
