
We’re excited to announce our first AI Testing Vlogathon, and we’re inviting YOU to participate and showcase your creativity and technical prowess! 

This isn’t just another contest—it’s your chance to connect with fellow innovators, share your expertise, and earn recognition for your work. Oh, and did we mention the prizes? 

To showcase the power of AI in testing, we’re launching a Vlogathon where you can submit 1-5 minute videos on some thrilling AI testing topics. 

💡 Topics to Choose From: 

 🎥 Test Case Generation: Show how AI creates test cases from simple requirements, like a login form, and covers scenarios like empty fields or invalid inputs. 

 🎥 Defect Prediction: Use AI to predict high-risk areas in code and demonstrate how it highlights bug-prone modules for better testing focus. 

 🎥 Automated Test Script Maintenance: Show how AI updates broken test scripts after UI changes, saving time and effort. 

🎥 Your Own Topic! Got a unique idea on how AI solves testing challenges? This is your chance to share it and inspire others.

🎁 Prizes for the Top 3 Winners: 

  • 🥇 1st Place: A prize amount valued at $100 USD
  • 🥈 2nd Place: A prize amount valued at $50 USD, plus a book of your choice
  • 🥉 3rd Place: A prize amount valued at $50 USD

But that’s not all! Participants with exceptional entries will also receive certificates of recognition and badges to flaunt in our community. 

📅 Key Dates to Remember: 

  • Submission Deadline: Dec. 18
  • Assessment Period: Dec. 18-20 
  • Winners Announced: Dec. 20

🌟 How It Works: 

  • Community Votes: Rally your network to vote for your video. Voting stays open until the assessment period begins.
  • Expert Board Review: Our expert panel will evaluate videos based on creativity, clarity, and technical depth. 
  • Final Scores: Community votes + expert evaluations = our winners! 

🎬 How to Participate: 

  1. Create a 1-5 minute original video on any of the topics listed above.

  2. Submit your video (the file or a link) in the comments.

  3. Promote your video and encourage your community to vote!

🗳️ Who Can Vote? 

 Anyone! (Except yourself 😉) Ask your friends and colleagues to cast their votes and help you climb the leaderboard. 

Don’t miss this opportunity to share your knowledge, educate others, and win prizes. Let’s put AI testing in the spotlight! 

📗 Use these resources to get inspiration:

Questions? Need help? Reach out to us at shiftsync@tricentis.com. 

Get ready to innovate, inspire, and win! 

🚀 Thrilled to Announce My Entry in the AI Testing Vlogathon! 🚀

I’m excited to share my project IntelliGen—a RAG-based Generative AI chatbot that’s redefining the way we approach testing!

About IntelliGen

IntelliGen is a versatile tool designed to simplify testing processes by harnessing the power of Azure OpenAI, LangChain, and Gradio. The application offers two core modes:

🔹 Design Mode:

  • Generate test cases from requirement documents in a structured format.
  • Create detailed test plans and strategies based on release notes.
  • Automatically generate and manage Jira Epics and User Stories, integrating test cases seamlessly.

🔹 Ops Mode:

  • Summarize automation and performance test reports using AI.
  • Analyze log files and identify defects with detailed insights.
  • Integrate with Jira to summarize issues and propose solutions.

Why Did I Build It?

After years of experience in the software testing domain, I know how laborious it is to analyze requirement documents to develop test scenarios and automation scripts. So, I thought—why not automate the whole process?

What Can IntelliGen Do?

1️⃣ Ask Me Anything: Ask questions, and it responds like a smart teammate.

2️⃣ Chat with Documents: Upload files (Word, PDF, Excel, PowerPoint, etc.) and chat with them.

3️⃣ Test Case Generator: Creates manual or automated test plans and test cases from requirements.

4️⃣ Report Summarization: Simplifies automation and performance reports into key insights.

5️⃣ Log Analysis: Analyzes logs to identify and explain failures.

 

🤖 Powered By:

  • LLM: Azure OpenAI GPT-4o mini (smart and cost-efficient!)
  • Vector Store: FAISS (open-source and budget-friendly!)
  • UI: Gradio (because simplicity is key).
  • Framework: LangChain (for all the heavy lifting).
  • Embedding Model: text-embedding-3-large
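
For anyone curious about the wiring, here is a minimal sketch of how such a RAG pipeline can be assembled with LangChain (my illustration, not the IntelliGen source; the file name, query, and deployment IDs are placeholders, and Azure credentials are assumed to be set in the environment):

```python
# Minimal RAG sketch (illustrative only, not the IntelliGen source).
# Assumes AZURE_OPENAI_ENDPOINT / AZURE_OPENAI_API_KEY / OPENAI_API_VERSION
# are set in the environment; deployment names are placeholders.
from langchain_openai import AzureChatOpenAI, AzureOpenAIEmbeddings
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.vectorstores import FAISS
from langchain_text_splitters import RecursiveCharacterTextSplitter

# 1. Load a requirement document and split it into overlapping chunks.
docs = PyPDFLoader("requirements.pdf").load()
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100).split_documents(docs)

# 2. Embed the chunks and index them in a local FAISS store.
embeddings = AzureOpenAIEmbeddings(azure_deployment="text-embedding-3-large")
index = FAISS.from_documents(chunks, embeddings)

# 3. Retrieve the chunks relevant to a query and ask the LLM for test cases.
llm = AzureChatOpenAI(azure_deployment="gpt-4o-mini", temperature=0)
relevant = index.similarity_search("login form requirements", k=4)
context = "\n\n".join(doc.page_content for doc in relevant)
answer = llm.invoke(
    "Using only the context below, draft test cases as a table with "
    f"ID, steps, and expected result.\n\n{context}")
print(answer.content)
```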

I have attached a zip file containing the demo recording. This particular demo highlights the tool's chatbot capabilities, where users can ask any query and receive answers. It also demonstrates summarizing the requirement document, generating test cases from the requirement document, and automating test scripting using Pytest. Furthermore, the tool can export the test cases in a proper tabular format in Excel.

https://bit.ly/3D1mGfs


Shift Sync Community managers - As always, you're the best at conducting activities that are definitely useful for all of us. Kudos, Team!

To the community - For those who are getting started with integrating GenAI into your testing life cycle, I suggest you check out TestingTitbits by @parwalrahul, and for prompting basics, the Prompting Guide.

@Kat - Thanks for sharing those noteworthy resources. The content is excellent.

I tried recording the video multiple times because the content I wanted to cover was extensive; this final attempt is the one where I got the correct output. I hope the attached mind map covers it and speaks for itself 🚀.

I've attached my video as a Zip file, and I hope you'll enjoy it. Just drop me a message so we can discuss AI further.

A quick like 👍 would be appreciated, but a detailed comment would be a real treat! 🍬


Thank you for sharing this information 


Thank you for sharing this valuable information.


Good blog



Thank you, Dinesh, for sharing this information.



Great article


Performance Testing of LLM

https://drive.google.com/drive/folders/1hK1Xh6LXmVyAqB-lwmB2pgoo0UDOJjCQ?usp=drive_link


I am more on the practical side, so I made a small 5-minute example of how you can use AI and your browser to make your work faster and easier.

 

Check the video below

 

https://www.youtube.com/watch?v=eVQIP3c1jVw



Loved it, @IOan. You just need to know what you want to do and then transform it into a prompt. Going to try it. And I believe it's even going to add value for accessibility testing, helping low-vision testers check hyperlinks and buttons.



Adding to my earlier post: the video I shared is about leveraging GenAI capabilities in test case creation and also in test effort estimation (manual/automation).

Other than that, irrespective of your role or technical expertise (coder or non-coder), I've designed this so that anyone can use it with minimal changes. The main aim is to let testers concentrate on understanding the requirement and drafting more exploratory test scenarios, which are far more worthwhile areas to spend time on than hours of test case writing.

Here are some practical notes from my experience utilising GenAI (a small prompt sketch follows the list):

  1. Some LLMs perform poorly because of token constraints, so be precise about how many test cases you need.
  2. LLMs can hallucinate because of the breadth of data they were trained on, so ask them what they are considering while drafting the test cases.
  3. They're like us (not that close, though): they can forget. Since you've already spent time understanding the requirements, if you feel the model has missed a test case, just prompt it with something like "I believe you've covered all the test cases except this scenario: XXXX" so that it drafts a new one.
  4. I'm in the process of doing a POC where I'm training an LLM specifically for this; the results are actually promising.
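
As promised, here is a tiny sketch of the note-3 follow-up pattern (my illustration; the model name and the missed scenario are placeholders, and it assumes the openai Python package):

```python
# Sketch of the "remind the model of a missed scenario" follow-up.
# Model name and the missed scenario are placeholders.
from openai import OpenAI

client = OpenAI()
history = [{"role": "user", "content":
            "Draft exactly 10 test cases for the login form requirement "
            "below.\n<requirement text here>"}]
first = client.chat.completions.create(model="gpt-4o-mini", messages=history)
history.append({"role": "assistant",
                "content": first.choices[0].message.content})

# Instead of starting over, point out the one scenario it forgot.
history.append({"role": "user", "content":
                "I believe you've covered all the test cases except this "
                "scenario: password field rejects inputs over 64 characters. "
                "Please draft it."})
followup = client.chat.completions.create(model="gpt-4o-mini",
                                          messages=history)
print(followup.choices[0].message.content)
```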

 

Happy learning.


CHALLENGE - 🚀 AI Testing Vlogathon! Submission

 

🎥 Happily Sharing My Entry in the AI Testing #Vlogathon! 🚀

 

🔗 Watch Me Here (just like my comment to vote) 👉 

 

This video is my entry in the AI Testing Vlogathon, where I've explored AI's capabilities and limitations in automated test execution.

 

From showcasing a working example, to analyzing edge case failures, to providing practical solutions with #Playwright + #ZerostepAI, this video is a deep dive into how AI can transform testing processes.

 

💡 Here’s what you’ll find in my video:

1️⃣ A generic working example of AI in testing.

2️⃣ An edge case failure analysis that reveals current limitations.

3️⃣ A Playwright-powered solution that addresses these challenges (a rough sketch follows this list).

4️⃣ Key takeaways for the future of AI in test automation.
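
If you want to tinker before watching, here is a rough, generic sketch of the fallback idea from point 3️⃣: try a preferred locator first and fall back to a deterministic one for edge cases. It is my illustration only, not the script from the video, and the URL and selectors are placeholders:

```python
# Generic sketch: resilient locator with a deterministic fallback.
# Illustrative only; URL and selectors are placeholders.
from playwright.sync_api import sync_playwright

def submit_login(page, username, password):
    page.goto("https://example.com/login")
    page.get_by_label("Username").fill(username)
    page.get_by_label("Password").fill(password)
    # Prefer a resilient, role-based locator...
    button = page.get_by_role("button", name="Log in")
    if button.count() == 0:
        # ...but fall back to a structural selector for edge cases,
        # e.g. when the button's accessible name is localized.
        button = page.locator("form button[type=submit]")
    button.first.click()

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    submit_login(page, "demo-user", "demo-pass")
    print(page.title())
    browser.close()
```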

 

🗳️ How Can You Support Me?

I’d love for you to check out my video and cast your vote! Your support not only helps me but also promotes innovation in AI testing.

 

🔗 Video Link Here: 

 

 

💬 Let’s Connect: Have thoughts on AI in testing? Drop them in the comments or DM me—I’d love to discuss the remaining scripts and solutions!

 

Let’s spotlight the future of testing together! 🌟

 

#AITesting #TestAutomation #Playwright #zerostepai #Vlogathon

 

Thank you, Rahul Parwal, for encouraging me to make this video!

 

Count me in the game! Sharing it with 💖 Kateryna Gandzeichuk, Mustafa Elshabrawy & Tricentis


Rishikesh Vajre | TestTales.com 




Do check this…

Test Mate is an innovative application powered by Open-Source Large Language Models (LLMs) that empowers software testers to streamline their workflow and boost their efficiency.

 

With Test Mate, you can 👉:

• Generate test cases based on requirements.

• Optimize test cases: it reviews and refines your test cases for maximum effectiveness.

• Q&A: ask questions about your uploaded test cases and get instant answers.

• Convert manual test cases into boilerplate automation code (rough sketch below).

 

It is a multi-model application, i.e. you can select a model based on your needs.
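
This is not the Test Mate source, but here is a rough sketch of the manual-to-boilerplate step against a local open-source model (it assumes an Ollama server exposing its OpenAI-compatible endpoint on localhost; the model name is a placeholder):

```python
# Sketch: turn a manual test case into boilerplate automation code with a
# local open-source LLM. Assumes an Ollama server on localhost exposing
# its OpenAI-compatible API; not the actual Test Mate implementation.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

manual_case = """Title: Valid login
Steps: open /login, enter a valid username and password, click Log in
Expected: the user lands on the dashboard"""

resp = client.chat.completions.create(
    model="llama3.1",  # placeholder; pick whichever local model you prefer
    messages=[{"role": "user", "content":
               "Convert this manual test case into a Pytest + Playwright "
               f"boilerplate test:\n{manual_case}"}],
)
print(resp.choices[0].message.content)
```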

 

 




This is super cool!

Thank you 😀


My Entry in the AI Testing Vlogathon

A simple thumbs-up is sweet 🤗, but a thoughtful comment would be the cherry on top! 😎

Augmenting LLMs for Risk Identification 

 

1. The Heuristic Bias Challenge

Because identifying risks is a heuristic-based activity, it is subject to bias. This means that we can, at times, miss potential risks that require our attention.

2. A Tester's List of Risks Is Limited and Biased

3. The Role of LLMs in Mitigating Bias

We can use LLMs as an additional tool to help us consider different paths and perhaps highlight potential risks we hadn't considered.

4. Evaluating LLM Outputs

Filtering for useful risks matters: some risks are highly relevant, while others may overlap or seem redundant.

5. Balancing LLM Support and Tester Expertise

LLMs are valuable tools for exploring new paths in risk identification. They help testers break creative blocks and "shake things up" during the testing process. A balanced approach combining LLM insights and human expertise leads to better exploratory testing outcomes.
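
If you would like to try the augmentation step yourself, here is a rough sketch (my illustration, not a prescribed implementation; the model name, feature description, and risk list are placeholders, assuming the openai package):

```python
# Sketch: ask an LLM for risks beyond the tester's own list, then review
# and filter the output manually. All inputs below are placeholders.
from openai import OpenAI

client = OpenAI()
tester_risks = ["payment declines are not retried",
                "session expires mid-checkout"]
prompt = ("Feature: guest checkout for an e-commerce site.\n"
          f"I have already identified these risks: {tester_risks}.\n"
          "List 10 additional, distinct quality risks I may have missed, "
          "one per line, each tagged with the quality characteristic it "
          "threatens.")
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
# The tester stays in charge: discard overlapping or irrelevant suggestions.
for line in resp.choices[0].message.content.splitlines():
    print(line)
```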

 

 

References:

  1. Software Testing with Generative AI - Mark Winteringham
  2. https://thetesteye.com/posters/TheTestEye_SoftwareQualityCharacteristics.pdf

Thank you @parwalrahul for the shoutout to submit my entry, and @Kat for sharing the challenge. This was fun to create!

https://www.linkedin.com/posts/komal-chowdhary-1b701051_shiftsynctricentis-tricentis-llm-activity-7275132355831308288-_cRL?utm_source=share&utm_medium=member_desktop

-Komal Chowdhary

 


Sharing My Entry in the AI Testing #Vlogathon! 🚀

Postbot is an AI tool in Postman that can help with API development by: 

  • Generating tests: Postbot can create tests from scratch or update existing ones. It can also suggest likely endpoint and parameter values to help ensure that all necessary endpoints and parameters are tested. 
  • Visualizing responses: Postbot can visualize received responses in a table, line chart, or bar chart. 
  • Writing documentation: Postbot can automatically write clear, concise, and up-to-date documentation for APIs. 
  • Debugging APIs: Postbot can help debug APIs. 
  • Answering questions: Postbot can answer questions about building things in Postman. 

Other features of Postbot include: 

  • Improved accuracy: Postbot's machine learning algorithms can make more accurate predictions than a human. 
  • Error reduction: Postbot can reduce errors that can occur when manually entering values. 
  • Increased efficiency: Postbot can help software QA engineers be more efficient by reducing the time and effort required to create and run API tests. 
  • Faster performance: The latest version of Postbot is over twice as fast as before. 

     


Video here

AI in testing, through my journey as an automation engineer, is the best thing that has ever happened to me.

Thanks ​@Kat for letting us know about the initiative and thanks ​@Dinesh_Gujarathi 

 

I learnt about this vlogathon through your post. Last minute, no edits, lots of practice. But I made it across!

In this video on LinkedIn, I cover two of the tools I conceptualized and implemented in my organization based on pain points I had.

1️⃣ Test Case Generator 

➡️Objective: To automate the repetitive process of test case creation

➡️Tech stack: Power Automate Cloud with Azure DevOps and Teams integration, OpenAI API (any latest version)

➡️Pros: Creates test cases with a time efficiency gain of 59% and an accuracy of 80% (measured across 5 products over a period of 1.5 years)

2️⃣ Text Comparator

➡️Objective: To address the grey area of comparing text between two sites during migration in different Nordic languages, such as Danish. One site is the baseline and the other the actually migrated solution.

➡️Tech stack: Power Automate Cloud, Power Automate Desktop and Teams integration, OpenAI API (any latest version)

➡️Pros: Give it the web page links and, ta-da, your comparison is done. No language barriers (it's so inclusive, right?). Time efficiency is around 44% and accuracy is 100% (depending on the prompt). A rough sketch of the comparison step follows.
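
The original runs in Power Automate; purely as an illustration, here is roughly what the comparison step could look like in Python (the URLs and model name are placeholders, and it assumes the requests and openai packages):

```python
# Rough Python equivalent of the Text Comparator step (the original uses
# Power Automate Cloud/Desktop). URLs and model name are placeholders.
import requests
from openai import OpenAI

baseline = requests.get("https://old.example.dk/page").text
migrated = requests.get("https://new.example.dk/page").text

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content":
               "Compare the visible text of these two pages (Danish or any "
               "other language is fine) and list wording differences as "
               "bullet points.\n"
               # Truncate crudely to respect token limits; a real flow
               # would extract visible text first.
               f"BASELINE:\n{baseline[:4000]}\n\nMIGRATED:\n{migrated[:4000]}"}],
)
print(resp.choices[0].message.content)
```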

 


Video Link

 

Using AI for Better Test Cases

Learning: In this video, I show how AI can make test case creation and improvement much easier in software testing. AI can help automate repetitive tasks, predict tricky test cases, and improve test coverage, making testing faster and more reliable.

Description: I explain how AI tools can create test cases that cover a wide range of possibilities, ensuring thorough testing. I also talk about how AI can improve existing test cases by finding unnecessary ones, focusing on high-risk tests, and speeding up the testing process (a tiny redundancy-check sketch follows). The video includes examples and tips for using AI in your testing workflow.
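
As one concrete illustration of the redundancy check (my sketch, not taken from the video; the model name, sample cases, and threshold are placeholders):

```python
# Sketch: flag near-duplicate (likely unnecessary) test cases by embedding
# similarity. Model, sample cases, and threshold are placeholders.
import numpy as np
from openai import OpenAI

cases = ["Login succeeds with valid credentials",
         "User can log in with a valid username and password",
         "Login fails with an empty password"]

client = OpenAI()
vecs = [item.embedding for item in client.embeddings.create(
    model="text-embedding-3-small", input=cases).data]

def cosine(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for i in range(len(cases)):
    for j in range(i + 1, len(cases)):
        if cosine(vecs[i], vecs[j]) > 0.85:  # tune the threshold per suite
            print(f"Possible duplicates: {cases[i]!r} ~ {cases[j]!r}")
```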

Key Takeaway: AI is a powerful tool that can make testing more efficient and reliable. It helps cover more test scenarios, reduce mistakes, and speed up testing, ultimately making your software better.

Personal Reflection: While working on this, I realized how much easier testing can be with AI. It lets us focus on more important tasks, while AI handles the repetitive work. It’s a great way to improve both testing speed and quality.

 

 





 

Thank you so much for sharing these great prompting techniques, Dinesh. I'm from a development background with little testing knowledge, but this detailed flowchart is something I can refer to when I need to apply test cases to my code. I have gone through a few of the applications you mentioned, and they are amazing. You have proven again that everything is available on the internet; we just need to ask for it in the right way.

I am looking forward to more such interesting content from you.

Thanks again.

 



This looks quite interesting. I am following it.



You make us learn so much. Thank you so much for the wonderful content. 



Wonderful content. Thanks for sharing 



 

Loved the illustration, Komal :)

Thank you :)

