Hi Vipin,
There are a few points we may need to consider before integrating our application with ChatGPT.
1. Where the data is stored, how it is used, and the security aspects of that data.
2. ChatGPT lacks functional knowledge of testing, so we may need to train the system with relevant data.
3. We may need to check whether the data generated by ChatGPT contains any biases.
4. Sometimes the data generated by ChatGPT might simply be wrong.
There's a lot of debate over privacy & data capture just now, which should be taken into consideration. Sharing private company data or project info with ChatGPT might therefore cause concern.
Putting that aside, some generalised use cases:
- Code generation for test automation (TA) scripts
- Searching relevant open source example TA code suggestions
- Decision table building, randomised range building, flow path selection - useful for test case building
- Test data generation from random data off the Internet
- Chatbot & answer gathering (a narrower, more specific Google-search replacement for testing questions)
- Text parsing and extraction using neural network type understanding
- Gathering test focus advice based on info available on the Internet (given a web frontend - REST web service - database backend, what are the appropriate tests to perform, and what are the highest performance testing priorities?)
- Chaos engineering area of failure suggestions
- Tool suggestions and comparisons
- Test case generation (e.g. taking the generalised test steps of a positive test case, with all company data, URLs etc. removed, and asking for negative test case versions of it to be generated - a rough sketch follows this list)
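As a rough illustration of that last idea, here is a minimal Python sketch that sends an already-sanitised positive test case to the OpenAI chat completions API and asks for negative variants. The model name, prompt wording, and test case are placeholders I've assumed for illustration, and the same privacy caveats apply to anything sent over the API as to the chat UI.

```python
# Hypothetical sketch: asking a ChatGPT model for negative test case variants
# of a sanitised positive test case. Assumes the openai Python package (v1+)
# and an OPENAI_API_KEY environment variable; model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Positive test case with all company data, URLs etc. already removed.
positive_case = """
Title: Successful login
Steps:
1. Open the login page.
2. Enter a valid username and password.
3. Click "Log in".
Expected: The user dashboard is displayed.
"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a software test analyst."},
        {"role": "user",
         "content": "Given this positive test case, write three negative "
                    "test case versions in the same format:\n" + positive_case},
    ],
)

# The output still needs human review before it goes anywhere near a test plan.
print(response.choices[0].message.content)
```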
For internal ML, where data concerns aren't an issue, there are far more options:
- Defect content generation (perhaps difficult to generalise)
- Defect priority & severity selection aid
- Relevant test case selection (per release, per defect fix etc.)
- Contact points, focus & scope + requirements, and example code of other projects using a similar or the same system under test
- Usage statistics of testing - frequency, ROI calculations, time saving estimations
- Test need analysis - time, people, cost estimations
- Trends over time analysis - failure points of System Under Test, environment, defect measurements, developer fixing measurements
- Predictive System Under Test change test needs (LiveCompare, Page Object Model generation changes impacted etc.)
- Monitoring log parsing, analysis and prediction - system usage, performance trends per day + time, comparison statistics between environments (see the sketch after this list)
- Reporting enhancements, data merging
- Needs & architecture analysis - commonality & trends in projects across an organisation
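To make the monitoring-log idea a little more concrete, here is a minimal pandas sketch that summarises response-time trends per hour and per environment from an exported log. The file name, column names, and environment labels are assumptions for illustration only, not any real project layout.

```python
# Hypothetical sketch of the "monitoring log parsing" idea: summarising
# response-time trends per hour and per environment from an exported log.
import pandas as pd

# Assumed columns: timestamp, environment, endpoint, response_ms
logs = pd.read_csv("monitoring_export.csv", parse_dates=["timestamp"])

# Average response time per environment for each hour of the day.
logs["hour"] = logs["timestamp"].dt.hour
trend = (
    logs.groupby(["environment", "hour"])["response_ms"]
        .mean()
        .unstack("environment")  # one column per environment for comparison
)
print(trend.round(1))

# Flag hours where one environment is markedly slower than another,
# e.g. as a starting point for performance test focus.
if {"test", "prod"} <= set(trend.columns):
    print(trend[trend["test"] > 1.5 * trend["prod"]])
```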
That's a very comprehensive list, thank you for sharing.
probably 1000s of bits I didn't think of too… am looking forward to seeing how this post grows & what ideas others come up with in addition!
I'm seeing a lot of shallow and reckless talk about what ChatGPT can do for testers.
We do need to investigate what can be done with AI-based tools. But just telling people "YOU CAN DO THIS! AND THIIIISSS!" does not serve testers or the industry. It is irresponsible.
I have seen several demonstrations of people generating test cases and scripts using ChatGPT, and what I've seen is a hot mess without much critical thinking. Yes, ChatGPT can do interesting things, but it is not able to do them consistently and reliably.
A student recently responded to an exercise I gave him by asking ChatGPT to write a program to tell whether a string of dates (provided as output of a program that purported to produce random dates) was or was not random. The program cheerfully and confidently supplied by ChatGPT was profoundly unsuited to the task, but the tester did not have the mathematical or programming skills necessary to realize that. He just passed the program on to me with the assertion that it solved the problem.
We should not be encouraging people to behave that way.
In the absence of consistency and reliability, I would say anyone who relies on ChatGPT is putting their job at risk. I would encourage employers to make it a firing offense to rely on it-- just as it would be if you discovered that developers were outsourcing their work to other programmers and passing it off as their own.
I do think AI can have a role. Specifically, ChatGPT can be useful as a sort of instant tutorial generator if you are trying to get started using a particular tool or technology. I also think it can be used to aid in brainstorming or training testers.
For instance, a good training exercise might be to ask ChatGPT "what are three test cases for a printer?" and then ask a tester to review and fix that answer (because ChatGPT's answer will be far below a professional standard in important ways).
Or you can describe a feature and ask ChatGPT for test ideas. It will give you the obvious ones-- probably-- and maybe a couple of strange ones. But then you need to delete some and add others.
Here's a professional rule I have when I use ChatGPT: I never ask it for answers that I do not have the personal knowledge and skill to evaluate on the spot, because more than half the time that I DO have such knowledge and skill, I see important errors in ChatGPT output.
I don't subscribe to the "It is irresponsible" comment made, myself.
It is perfectly responsible, and very much part of our profession, to treat everything with a critical and cautious eye; however, telling people not to use such tools & approaches kills passion, experimentation, ambition, enthusiasm & inspiration. If everyone shared that attitude, testing would remain stagnant and never progress.
There is no denying that ChatGPT itself is in its infancy & has many flaws (quite a few funny examples of them online, at that!), and warning of these - as well as sharing ideas to try out - gives the best of both worlds. Having the testing community trial things with it and then encouraging active discussion - sharing the different functionalities tried, what works, what doesn't, sending reports to the vendor etc. - will all move the community forward.
AI (ML, whatever we call it) is already being implemented in many top market testing tools and in many company testing teams such as mine, and is touted as a software testing trend by the likes of Gartner etc. ChatGPT is cementing itself quite heavily as a part of this AI trend, whether we like it or not, so learning about it and sharing its pros and cons can also keep the testing profession up to date and modern.