Question

As testers, should we or shouldn't we talk about AI tools like ChatGPT?

  • 10 May 2023
  • 6 replies
  • 219 views

Userlevel 6
Badge +2

The talk of the town, ChatGPT, can provide some benefits for testing activities, but its effectiveness in speeding up the testing process or generating test cases will depend on the specific use case and the quality of the input provided to it. While people do not want to miss the train, they have to be very clear about where and how they should use AI. In my opinion, ChatGPT can help with testing in the following ways:

  1. Test data generation: ChatGPT can be used to generate synthetic test data based on specific scenarios or requirements. For instance, it can generate test data that simulates specific user interactions or system responses to validate the functionality of a software application (see the sketch just after this list).

  2. Test case generation: ChatGPT can be used to generate test cases based on specific requirements or test scenarios. However, the effectiveness of this approach will depend on the quality of the input data provided to the model and the level of detail required for the test cases.

  3. Test script automation: ChatGPT can be used to automate test scripts for repetitive tasks, such as test data creation or validation. This can help speed up the testing process and improve the efficiency of the testing team.
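
To make the first point a little more concrete, below is a minimal sketch of how a test utility might ask a chat model for synthetic test data. It assumes the openai Python package and an API key in the environment; the model name, prompt wording and field list are illustrative assumptions, not recommendations.

```python
# Minimal sketch: asking a chat model to produce synthetic test data.
# Assumes the `openai` package (>=1.0) and an OPENAI_API_KEY in the environment;
# the model name, prompt wording and field list are illustrative, not prescriptive.
import json
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Generate 5 rows of synthetic test data for a user-registration form as a JSON array. "
    "Fields: username, email, date_of_birth (YYYY-MM-DD), country. "
    "Include edge cases such as very long usernames and boundary dates."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",          # any available chat model
    messages=[{"role": "user", "content": PROMPT}],
)

raw = response.choices[0].message.content
try:
    rows = json.loads(raw)          # the model may wrap the JSON in prose, so parsing can fail
except json.JSONDecodeError:
    rows = []                       # a human still has to review/repair the output

for row in rows:
    print(row)
```

Even in a toy like this, the parsing fallback matters: the model may wrap or malform the JSON, so the output still needs a human review (or at least a schema check) before it is used in a real test run.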

Having listed the above benefits, it is important to note that while ChatGPT can help with testing activities, it is not a replacement for human testers. People may start to worry that it could replace them and take their jobs. However, human testers are still needed to validate the results generated by the model and to provide the context and domain knowledge that the model may not capture.


What are your thoughts?



6 replies

Userlevel 2
Badge

Hi Vipin,

There are a few points we may need to consider before integrating our application with ChatGPT.

1. Where the data is stored and how it is used, as well as the security aspects of that data.

2. ChatGPT lacks the functional knowledge of testing, so we may need to train the system with relevant data.

3. We may need to check whether the data generated by ChatGPT contains any biases.

4. Sometimes the data generated by ChatGPT might simply be wrong.

Userlevel 5
Badge +3

There's a lot of debate over privacy & data capture just now, which should be taken into consideration. Sharing private company data or project info with ChatGPT might therefore cause concern.

Putting that aside, some generalised use cases:

  • Code generation for TA scripts
  • Searching relevant open source example TA code suggestions
  • Decision table building, randomised range building, flow path selection - useful for test case building
  • Test data generation from random data off the Internet
  • Chatbot & answer gathering (more narrowed, specific Google search replacement for testing questions)
  • Text parsing and extraction using neural network type understanding
  • Gathering of test focus advice based on info available on Internet (given a web frontend - rest web service - database backend, what are appropriate tests to perform, what are the highest performance testing priorities)
  • Chaos engineering area of failure suggestions
  • Tool suggestions and comparisons
  • Test case generation (e.g. uploading a generalised set of test steps for a positive test case, with all company data, URLs etc. removed, & asking for negative test case versions of it to be generated - see the sketch just after this list)
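
As a rough illustration of that last bullet, the sanitising step before the prompt is built might look something like the sketch below. The regexes, placeholder tokens and example company name are assumptions made up for the example; this is nowhere near a complete redaction solution, so review anything before it leaves your network.

```python
# Minimal sketch of the "sanitise before you share" idea from the last bullet.
# The regexes and placeholder names are illustrative assumptions, not a complete
# redaction solution - review the output before sending anything to an external service.
import re

REDACTIONS = [
    (re.compile(r"https?://\S+"), "<URL>"),                     # strip URLs
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),    # strip email addresses
    (re.compile(r"\bACME\w*\b", re.IGNORECASE), "<COMPANY>"),   # hypothetical company name
]

def sanitise(step: str) -> str:
    for pattern, placeholder in REDACTIONS:
        step = pattern.sub(placeholder, step)
    return step

positive_case = [
    "Open https://intranet.acme.example/login",
    "Log in as qa.user@acme.example",
    "Verify the ACME dashboard loads",
]

sanitised = [sanitise(s) for s in positive_case]
prompt = (
    "Here is a positive test case:\n"
    + "\n".join(f"- {s}" for s in sanitised)
    + "\nSuggest negative test case variations of it."
)
print(prompt)
```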


For internal ML where data concerns aren't an issue, far more options:

  • Defect content generation (perhaps difficult to generalise)
  • Defect priority & severity selection aid
  • Relevant test case selection (per release, per defect fix etc.)
  • Contact points, focus & scope + requirements, example code of other projects using a similar or the same system under test
  • Usage statistics of testing - frequency, ROI calculations, time saving estimations
  • Test need analysis - time, people, cost estimations
  • Trends over time analysis - failure points of System Under Test, environment, defect measurements, developer fixing measurements
  • Predictive System Under Test change test needs (LiveCompare, Page Object Model generation changes impacted etc.)
  • Monitoring log parsing, analysis and prediction - system usage, performance trends per day + time, comparison between environments statistics (a rough sketch follows this list)
  • Reporting enhancements, data merging
  • Needs & architecture analysis commonality & trends in projects across an organisation
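
To give one of those a concrete shape, here is a small sketch against the monitoring-log bullet above: pull response times out of log lines and summarise the hourly trend. The log format, field name and 500 ms threshold are invented for the example.

```python
# Rough sketch of the "monitoring log parsing, analysis and prediction" bullet:
# pull response times out of log lines and summarise the trend per hour.
# The log format and threshold are assumptions - adapt them to the real system's logs.
import re
from collections import defaultdict
from statistics import mean

LOG_LINE = re.compile(r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}):\d{2}:\d{2} .* response_ms=(?P<ms>\d+)")

sample_logs = [
    "2023-05-10 09:01:12 GET /orders response_ms=120",
    "2023-05-10 09:30:45 GET /orders response_ms=180",
    "2023-05-10 10:02:03 GET /orders response_ms=950",
]

per_hour = defaultdict(list)
for line in sample_logs:
    match = LOG_LINE.match(line)
    if match:
        per_hour[match["ts"]].append(int(match["ms"]))

for hour, timings in sorted(per_hour.items()):
    avg = mean(timings)
    flag = "  <-- investigate" if avg > 500 else ""   # arbitrary example threshold
    print(f"{hour}:00  avg={avg:.0f} ms over {len(timings)} requests{flag}")
```
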
Userlevel 1


That's a very comprehensive list, thank you for sharing.

Userlevel 5
Badge +3

Probably 1000s of bits I didn't think of too… I am looking forward to seeing how this post grows & what ideas others come up with in addition! 😀

Userlevel 4
Badge +4

I'm seeing a lot of shallow and reckless talk about what ChatGPT can do for testers.

We do need to investigate what can be done with AI-based tools. But just telling people "YOU CAN DO THIS! AND THIIIISSS!" does not serve testers or the industry. It is irresponsible.

I have seen several demonstrations of people generating test cases and scripts using ChatGPT and what I've seen is a hot mess without much critical thinking. Yes, ChatGPT can do interesting things, but it is not able to do them consistently and reliably.

A student recently responded to an exercise I gave him by asking ChatGPT to write a program to tell whether a string of dates (provided as output of a program that purported to produce random dates) was or was not random. The program cheerfully and confidently supplied by ChatGPT was profoundly unsuited to the task, but the tester did not have the mathematical or programming skills necessary to realize that. He just passed the program on to me with the assertion that it solved the problem.

We should not be encouraging people to behave that way.

In the absence of consistency and reliability, I would say anyone who relies on ChatGPT is putting their job at risk. I would encourage employers to make it a firing offense to rely on it-- just as it would be if you discovered that developers were outsourcing their work to other programmers and passing it off as their own.

I do think AI can have a role. Specifically, ChatGPT can be useful as a sort of instant tutorial generator if you are trying to get started using a particular tool or technology. I also think it can be used to aid in brainstorming or training testers.

For instance, a good training exercise might be to ask ChatGPT "what are three test cases for a printer?" and then ask a tester to review and fix that answer (because ChatGPT's answer will be far below a professional standard in important ways).

Or you can describe a feature and ask ChatGPT for test ideas. It will give you the obvious ones-- probably-- and maybe a couple of strange ones. But then you need to delete some and add others.

Here's a professional rule I have when I use ChatGPT: I never ask it for answers that I do not have the personal knowledge and skill to evaluate on the spot, because more than half the time that I DO have such knowledge and skill, I see important errors in ChatGPT output.

Userlevel 5
Badge +3

I don't subscribe to the "It is irresponsible" comment made, myself.


It is perfectly responsible, & part of our profession, to treat everything with a critical and cautious eye; however, saying not to use such tools & approaches kills passion, experimentation, ambition, enthusiasm & inspiration. If everyone shared the same attitude, testing would remain stagnant and never progress.


There is no denying that ChatGPT itself is in its infancy & has many flaws (quite a few funny resulting ones online, at that!), & warning of these - as well as sharing ideas to try out - gives the best of both worlds. Having the testing community trial things with it and then encouraging active discussion - sharing the different functionalities tried, what works, what doesn't, sending reports to the vendor etc. - is all going to move the community forward.


AI (ML, whatever we call it) is being implemented in many top market testing tools and in many company testing teams such as mine, and is touted as a software testing trend by the likes of Gartner etc. ChatGPT is cementing itself as a part of this AI trend quite heavily, whether we like it or not, so learning about, and sharing, its pros and cons can keep the testing profession up to date and modern too.
