
πŸš€ AI Testing Vlogathon! πŸš€ Share Your 1-5 Minute Video on Transforming Testing with AI and Compete for Rewards! Gain Recognition Among Experts and Peers!

  • November 26, 2024
  • 32 replies
  • 14663 views

  • Apprentice
  • 1 reply
  • December 20, 2024
komalgc wrote:

My Entry in the AI Testing Vlogathon

A simple thumbs-up  is sweet πŸ€—, but a thoughtful comment would be the cherry on top! πŸ˜Ž

Augmenting LLMs for Risk Identification 

 

1. The Heuristic Bias Challenge

Because identifying risks is a heuristic-based activity, it is subject to bias. This means that we can, at times, miss potential risks that require our attention.

2. A Tester's List of Risks Is Limited and Biased

3. Role of LLMs in Mitigating Bias

We can use LLMs as an additional tool to help us consider different paths and perhaps highlight potential risks we hadn't considered.
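
To make this concrete, below is a minimal sketch (not part of the original entry) of prompting an LLM for risks a tester may have missed. It assumes the openai Python package (v1 or later) and an OPENAI_API_KEY in the environment; the feature description, starting risk list, model name, and prompt wording are all placeholder assumptions.

```python
# Illustrative sketch only: ask an LLM for risks beyond the tester's own list.
# Assumes openai>=1.0 is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical feature and a deliberately short, biased starting list of risks.
feature = "Bulk CSV import of customer records into the billing module"
known_risks = [
    "Malformed rows are silently dropped",
    "Duplicate customer IDs overwrite existing records",
]

prompt = (
    "You are helping a software tester broaden their risk analysis.\n"
    f"Feature under test: {feature}\n"
    f"Risks already identified: {'; '.join(known_risks)}\n"
    "Suggest 10 additional, distinct risks the tester may have missed, one per line."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model would do
    messages=[{"role": "user", "content": prompt}],
)

# Treat the output as raw material for human review, not a finished risk list.
print(response.choices[0].message.content)
```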


4. Evaluating LLM Outputs

Importance of filtering useful risks: some risks are highly relevant, while others may overlap or seem redundant.
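
As a rough sketch of that filtering step (again not part of the original entry, and using only the Python standard library), near-duplicate suggestions can be screened out with a simple text-similarity check before a human reviews what remains; the 0.8 threshold is an arbitrary assumption.

```python
# Illustrative sketch only: drop LLM-suggested risks that largely repeat ones we
# already have, using a simple fuzzy match from the standard library.
from difflib import SequenceMatcher

def is_near_duplicate(candidate: str, existing: list[str], threshold: float = 0.8) -> bool:
    """Return True if the candidate closely matches any risk already on the list."""
    return any(
        SequenceMatcher(None, candidate.lower(), known.lower()).ratio() >= threshold
        for known in existing
    )

known_risks = [
    "Malformed rows are silently dropped",
    "Duplicate customer IDs overwrite existing records",
]

llm_suggestions = [
    "Duplicate customer IDs overwrite existing records.",  # redundant
    "Import fails on files larger than the upload limit",  # new
    "Character encoding issues corrupt non-ASCII names",   # new
]

# Only genuinely new candidates are passed on for human judgement of relevance.
filtered = [risk for risk in llm_suggestions if not is_near_duplicate(risk, known_risks)]
print(filtered)
```

Deduplication only removes repetition; deciding which of the remaining risks actually matter is still the tester's call.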
  


5. Balancing LLM Support and Tester Expertise


  • LLMs are valuable tools to explore new paths for risk identification.
  • They help testers break creative blocks and "shake things up" during the testing process.
  • A balanced approach combining LLM insights and human expertise leads to better exploratory testing outcomes.

 

 

References:

  1. Software Testing with Generative AI - Mark Winteringham
  2. https://thetesteye.com/posters/TheTestEye_SoftwareQualityCharacteristics.pdf

Thank you @parwalrahul for the shoutout to submit my entry and @Kat for sharing the challenge; this was fun to create!

https://www.linkedin.com/posts/komal-chowdhary-1b701051_shiftsynctricentis-tricentis-llm-activity-7275132355831308288-_cRL?utm_source=share&utm_medium=member_desktop

-Komal Chowdhary

 

Well Done and Thanks for the information πŸ‘ 


  • Apprentice
  • 1 reply
  • December 20, 2024
Suman wrote:

Well Done and Thanks for the information πŸ‘ 

Thank you 😊 


Mustafa
  • Technical Community Manager
  • 73 replies
  • January 2, 2025

Hello everyone,

Thank you to everyone who submitted their vlogs and for the contributions you made on this post. And thank you to everyone who joined the challenge by voting on the vlogs.

We’re happy to announce that the winners of this challenge are:

We will reach out soon, via the email addresses you used to register on ShiftSync, to coordinate how you will receive your prizes, so be on the lookout for those.

Congratulations to all the winners; we hope you enjoy your prizes.

And to everyone else, we hope you have a happy new year 2025. πŸŽ‡


Rishikeshvajre

Thank you 😊 @Mustafa. Happy New Year to all 🍻 πŸŽ‰


Mustafa wrote:

Congratulations to all the winners; we hope you enjoy your prizes.

Thank you so much, Mustafa πŸ™. It means a lot to me.


komalgc
  • Ensign
  • 5 replies
  • January 6, 2025
Mustafa wrote:

Congratulations to all the winners; we hope you enjoy your prizes.

Thank you @Mustafa … Yay! And happy new year to you too :)


dankopetrovic

Dear community,

 

I was still building the AI tool, so I couldn't officially participate, but I'm happy to share it here.

As part of https://testingtools.ai, a directory of AI-powered testing tools, I am also building free AI testing tools.

I’m happy to present the FREE AI Test Case Generator.

It generates manual test cases from requirements within seconds, and it's completely free to use. Each user gets 10 generations free each month; for heavier usage, users can provide their own OpenAI API key and use it without limits.

Key Features & Benefits

  • Comprehensive AI-generated test cases (positive, negative, edge cases)
  • Editing and manual adjustments for full control
  • Nicely formatted test cases
  • Copy options for easy transfer into Word, Confluence, and more
  • 10 free generations per month (unlimited with own OpenAI API key)
  • Secure data handling with encrypted API keys and zero storage of sensitive data

For more details on how AI Test Case Generator works, please watch the Demo video on YouTube.
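
For readers curious about the general pattern behind tools like this, the snippet below is a hedged sketch of "requirement in, manual test cases out" with a user-supplied OpenAI API key. It is not the tool's actual implementation; the model name, prompt, and JSON output shape are assumptions.

```python
# Rough sketch of generating manual test cases from a requirement with an LLM.
# NOT the actual implementation of the AI Test Case Generator; the prompt, model
# name, and output shape are illustrative assumptions.
import json
from openai import OpenAI

def generate_test_cases(requirement: str, api_key: str) -> list[dict]:
    """Ask an LLM for positive, negative, and edge-case tests for one requirement."""
    client = OpenAI(api_key=api_key)  # e.g. a key supplied by the user
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model would work
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": (
                "Write manual test cases (positive, negative, and edge cases) for "
                "the requirement below. Respond with a JSON object containing a "
                "'test_cases' array; each item has 'title', 'steps', and "
                "'expected_result'.\n\n"
                f"Requirement: {requirement}"
            ),
        }],
    )
    return json.loads(response.choices[0].message.content)["test_cases"]

# Hypothetical usage with a placeholder key.
cases = generate_test_cases(
    "Users must be able to reset their password via an emailed, single-use link",
    api_key="sk-...",
)
print(json.dumps(cases, indent=2))
```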

I will be creating more free AI testing tools, such as a test data generator and a user requirements analyzer.

I’m looking forward to your feedback.

 

Thanks,

Danko


