
Week 3 Exercise - Refining Prompt for Testing Task


parwalrahul

Objective:


The goal of this activity is to help you recognize the characteristics of a well-structured prompt and practice refining prompts to make them clear, specific, and actionable for the testing tasks.

Steps:

  1. Review the Given Prompt: Here’s a poor prompt for a testing task:

    "Give me some test data for this app."
     
  2. Identify Issues: Consider why this prompt might not yield useful results. Think about missing details like the type of app, testing scope, expected tests, usage of the output produced, output format, or any specific testing techniques.
     
  3. Improve the Prompt: Rewrite the prompt to make it more precise and useful
     
  4. Share and Discuss: Share your refined prompt with others and discuss how different refinements impacted AI responses.

49 replies

  • Specialist
  • 6 replies
  • March 20, 2025

Refined Prompt: "Provide a set of test data for functional testing of a mobile banking app. The data should include valid and invalid user credentials, transaction details, and account information. Please provide the data in a CSV format suitable for automated testing scripts."


  • Ensign
  • 1 reply
  • March 20, 2025

Hi @parwalrahul,

Regarding the given prompt, "Give me some test data for this app.", I have listed a few issues:

  • The prompt does not specify what kind of app is being tested.
  • It does not mention the type of testing.
  • It does not specify what format the test data should be in.
  • It does not indicate what kind of test cases the data should support.
  • It does not define normal, boundary, or invalid test data requirements.

The prompt that I came up with is:

"Generate test data for a login feature in a web application. The data should cover valid login credentials, invalid passwords, and negative scenarios.
Provide the test data with columns for 'Username', 'Password', and 'Expected Outcome'."
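A refined prompt like this produces data that can drop straight into a data-driven test. As a minimal sketch (the `login` function is a hypothetical stand-in for the system under test, and the CSV rows are illustrative):

```python
import csv
import io

# Hypothetical CSV in the shape the refined prompt asks for.
CSV_DATA = """Username,Password,Expected Outcome
alice,CorrectHorse1!,success
alice,wrongpass,failure
,CorrectHorse1!,failure
alice,,failure
"""

def login(username, password):
    """Stand-in for the system under test (assumption for this sketch)."""
    return "success" if (username == "alice" and password == "CorrectHorse1!") else "failure"

def run_cases():
    """Run every CSV row through login() and collect mismatches."""
    failures = []
    for row in csv.DictReader(io.StringIO(CSV_DATA)):
        actual = login(row["Username"], row["Password"])
        if actual != row["Expected Outcome"]:
            failures.append((row["Username"], row["Password"], actual))
    return failures
```

Because the prompt pins down the columns, each row maps onto one assertion; an empty `run_cases()` result means every scenario behaved as expected.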


I then gave both prompts to GPT and compared which one is more specific.
1. "Give me some test data for this app."
output -

Sure! Could you specify what kind of app you're testing? Is it a web app, mobile app, or desktop app? Also, what type of data do you need—user credentials, form inputs, transaction records, etc.?

2. "Generate test data for a login feature in a web application. The data should cover valid login credentials, invalid passwords, and negative scenarios.
Provide the test data with columns for 'Username', 'Password', and 'Expected Outcome'."

output - (the generated table was shared as a screenshot, not captured in this export)

Frank Kokoska

Hello

 

From my perspective, I would say the following

 

Identified Issues with the Prompt:

  • No App Context
  • No Testing Techniques
  • No Expected Tests
  • No Usage Context
  • No Output Format
  • No Testing Scope

 

Improved Prompt:
Develop test data for a web-based insurance calculator application that includes user registration, product search, and checkout features. The test cases should consider all relevant scenarios, including typical user inputs, erroneous inputs, and edge cases. The data should be delivered in a CSV file with clearly labeled columns for seamless integration into manual and automated testing workflows.

 

Regards 
Frank


  • Ensign
  • 3 replies
  • March 20, 2025


When I used a vague prompt like "give me test data for app," the response included data for different types of apps, along with valid and invalid scenarios. However, the depth of these scenarios was quite limited, and there was a lot of unnecessary detail that wasn’t relevant.

On the other hand, when I provided a detailed prompt specifying the app and its specific feature, the AI generated highly relevant test cases. The response was well-structured in an Excel format with expected outcomes, which was quite impressive. The test cases covered all critical aspects, including in-depth scenarios and edge cases.


This experience showed me that giving clear and detailed prompts makes AI much more useful. A well-written prompt helps get accurate, well-structured, and complete answers, making AI a more powerful and efficient tool.


parwalrahul
  • Author
  • Chief Specialist
  • 91 replies
  • March 20, 2025

@ghanesh : nice, quick submission!

My only suggestion would be to try adding a persona, e.g. ask it to “Act as a tester specialised in test data generation”, and check if it makes a difference.


parwalrahul
  • Author
  • Chief Specialist
  • 91 replies
  • March 20, 2025

@Harsha23 : Nice one. What I like about your response is that you doubled down on “being specific”.

Your prompt shows that you know what you wanted.

Good one.

One great place to use as a reference for amazing test data is Test Heuristics Cheat Sheet | Ministry of Testing

You can also try giving some stuff from here for a good in-context learning and contextualized prompt.


parwalrahul
  • Author
  • Chief Specialist
  • 91 replies
  • March 20, 2025

@Frank Kokoska nice one, mate. this is by far one of the best prompts added by someone on this challenge.

the bar is high now.

looking forward to more submissions now!


shashwata
  • Ensign
  • 7 replies
  • March 21, 2025

Issues:

  • Lack of Context – No details about the app (e.g., type, purpose, functionality).
  • Unclear Testing Scope – Doesn't specify which feature or aspect needs test data.
  • No Test Coverage Details – Fails to mention types of test cases (valid, invalid, boundary, etc.).
  • Missing Output Format – Doesn't specify how the data should be structured (e.g., JSON, CSV).
  • No Expected Outcome – Lacks information on how the test data will be used or validated.

Improved Prompt:
Provide a set of test data for a [type of app, e.g., e-commerce checkout system], covering valid, invalid, boundary, and edge cases. Ensure the data aligns with [specific feature, e.g., payment processing] and includes necessary formats (e.g., JSON, CSV). Highlight expected outcomes for each test scenario.

 

 

 


parwalrahul
  • Author
  • Chief Specialist
  • 91 replies
  • March 21, 2025

@shashwata you have identified the issues very well.

 

Also, I like your template for creating a prompt for test data needs as per the [context].

 

I also see that you have asked it to “highlight expected outcomes for each test scenario”, which is a nice oracle to ask for.


Have you tried running it for any specific use case and evaluating if the outcome matches your expectations?



Generate test data for the below fields :

STest1 ID - Text(13) (External ID) (Unique Case Sensitive)
S4Test Id - Number(10) (External ID) (Case Sensitive)

Example:   2000009626

Create 20 records.

The above fields are updated in the Salesforce System through Integration. 


  • Ensign
  • 9 replies
  • March 21, 2025

@parwalrahul 

Analysis of the Given Prompt

Original Prompt: "Give me some test data for this app."

Issues Identified:
  1. Lack of Context – It does not specify what the app does (e.g., e-commerce, finance, healthcare).
  2. Ambiguous Testing Scope – Is it for functional testing, performance testing, security testing, etc.?
  3. No Output Format – Should the response be in JSON, CSV, SQL scripts, or plain text?
  4. No Test Data Constraints – What kind of test data is needed? (e.g., edge cases, valid/invalid inputs, boundary values).
  5. No Use Case Mentioned – The purpose of the test data is unclear (e.g., for login testing, payment validation, API response checks).
     

Refined Prompt for Better Testing Data Generation

"Generate structured test data for an e-commerce web application that includes user accounts, product inventory, and order transactions. The data should include both valid and invalid inputs for testing user registration, product purchase, and payment processing. Output the test data in JSON format, covering normal cases, boundary values, and edge cases for fields like email, password, product prices, and discount codes."
 

Why This Prompt Is Better:

  • Clearly defines the app type (e-commerce web application)
  • Specifies the key modules (user accounts, inventory, transactions)
  • Lists the required test cases (valid & invalid inputs, edge cases, boundary values)
  • Mentions the output format (JSON)
  • Includes a variety of testing considerations (registration, purchase, payment processing)
 

Further Enhancements for Specific Testing Scenarios
 

If testing an API, we can refine the prompt as:
"Generate API test data for a RESTful e-commerce application, focusing on authentication (login/logout), product retrieval, and checkout endpoints. Include valid, invalid, and boundary value test cases in JSON format, considering HTTP methods (GET, POST, PUT, DELETE) and response validation."

If testing performance, we can refine the prompt as:
"Provide load testing data for simulating 10,000 concurrent users on an e-commerce checkout system, including various payment methods (Credit Card, PayPal) and high-volume transactions."
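The API variant asks for method-aware JSON cases. A minimal sketch of what such output could look like, and how a driver might organize it (the endpoints, body fields, and `expected_status` key are assumptions for illustration, not any real API):

```python
import json

# Hypothetical JSON test data shaped as the refined API prompt requests.
API_CASES = json.loads("""
[
  {"endpoint": "/login",    "method": "POST",   "body": {"user": "a@b.co", "pass": "Valid1!"}, "expected_status": 200},
  {"endpoint": "/login",    "method": "POST",   "body": {"user": "a@b.co", "pass": ""},        "expected_status": 401},
  {"endpoint": "/products", "method": "GET",    "body": null,                                  "expected_status": 200},
  {"endpoint": "/checkout", "method": "DELETE", "body": null,                                  "expected_status": 405}
]
""")

def cases_by_method(cases):
    """Group test cases by HTTP method so each verb can run as its own suite."""
    grouped = {}
    for case in cases:
        grouped.setdefault(case["method"], []).append(case)
    return grouped
```

Grouping by verb mirrors how the prompt asks for coverage "considering HTTP methods (GET, POST, PUT, DELETE)".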
 

Key Takeaways

  1. Be Specific – Always mention the app type, modules, and test scope.
  2. Define Expected Output – Mention the format like JSON, CSV, or database entries.
  3. Include Test Coverage – Specify valid, invalid, boundary, and edge cases.
  4. Tailor for the Use Case – Differentiate between functional, API, performance, or security testing needs.

shashwata
  • Ensign
  • 7 replies
  • March 21, 2025
parwalrahul wrote:

Have you tried running it for any specific use case and evaluating if the outcome matches your expectations?

Yes, I tested a similar prompt for an e-commerce checkout system's payment processing.

The test data produced for valid, invalid, boundary, and edge cases was in JSON and CSV formats for automation.

Each scenario included expected results, making test assertions readable. The result was comprehensive coverage, smooth debugging, and a reduction of manual work.


parwalrahul
  • Author
  • Chief Specialist
  • 91 replies
  • March 21, 2025

@VimalPatel nice one. Your prompt is a good one, great job with that!


parwalrahul
  • Author
  • Chief Specialist
  • 91 replies
  • March 21, 2025

@shashwata  cool, thanks!


Analysis of the prompt- 

Give me some test data for this app.

 

  • What's wrong with this prompt?

    This prompt is not clear and doesn’t give enough details.

  1. It doesn’t say what kind of app it is – domain, web, mobile, API, etc.
  2. It doesn’t mention which feature or part of the app we are testing – login, signup, payment?
  3. It doesn’t say what kind of testing is expected – positive, negative, boundary?
  4. No mention of how the test data should look – what format? CSV, JSON, or just plain text?
  5. It doesn’t tell us what the test data will be used for – manual testing or automation?
  6. No rules or conditions are given – for example, should the email be valid or invalid? Any password rules?

Redefined Prompt-

Create structured test data in CSV format for a web-based insurance calculator application covering the following modules:

  • User Registration: Fields include Full Name, Email, Password, Phone Number, and Date of Birth. Include valid inputs, invalid formats, empty values, and edge cases (e.g., minimum/maximum length, special characters).
  • Product/Policy Search: Fields include Location (Country, Zip Code), Insurance Type (e.g., car, health), and Date of Coverage. Include test cases with incorrect values, partial matches, and no match found.
  • Checkout & Purchase: Fields include Payment Method (Card/Bank), Policy ID, Amount, and Discount Code. Include expired codes, invalid card numbers, and payment failures.

Ensure the CSV contains columns like TestCaseID, Module, Field, Input Value, Input Type (Valid/Invalid), and Expected Outcome. Prepare at least 30 data sets covering positive, negative, and boundary scenarios. The data will be used in both automated and manual test executions across different environments.
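The requested column layout can be sketched in code; serializing even a couple of rows this way is a quick check that the CSV really is automation-ready (the two sample rows are illustrative, not AI-generated output):

```python
import csv
import io

COLUMNS = ["TestCaseID", "Module", "Field", "Input Value",
           "Input Type (Valid/Invalid)", "Expected Outcome"]

def build_csv(rows):
    """Serialize test-data rows into the CSV layout the prompt specifies."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(COLUMNS)
    writer.writerows(rows)
    return buf.getvalue()

sample = build_csv([
    ["TC001", "User Registration", "Email", "a@b.co", "Valid", "Account created"],
    ["TC002", "User Registration", "Email", "a@b", "Invalid", "Validation error shown"],
])
```

Anything the AI returns can be round-tripped through the same reader/writer to confirm it parses cleanly before it reaches a test runner.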

 

Analysis for Improved prompt-

  • Clear app type & purpose – It tells us it’s a web-based insurance calculator with modules like registration, product search, and checkout.
  • Field-level details – It lists specific input fields to be covered, so there’s no guessing.
  • Covers different test types – It asks for valid, invalid, empty, and edge case data.
  • Defined output format – Requests the data in CSV format with specific column names, making it ready for automation/manual use.
  • Testing objective is clear – It states the data is for both manual and automated testing across environments.

parwalrahul
  • Author
  • Chief Specialist
  • 91 replies
  • March 22, 2025

really good one, @Nikhilkulkarni!

your in-depth analysis and improvement are on point. cheers!


  • Ensign
  • 4 replies
  • March 24, 2025

Given Prompt: "Give me some test data for this app."

Issues:

  • Vagueness: "This app" lacks specific identification. What app are we talking about? A web app? A mobile app? A desktop application? We need a name or clear description.
  • Unclear Scope: "Some test data" is far too broad. What aspects of the app need testing? Login functionality? API endpoints? Database interactions? User interface elements? The prompt needs to define the target area.
  • Missing Context: What kind of test data is needed? Valid input? Boundary conditions? Invalid input to test error handling? Performance testing data? Security testing data?
  • Undefined Output Format: How should the test data be delivered? A CSV file? A JSON object? A list within the response itself? A SQL script to populate a database?
  • No Mention of Usage: How will the generated test data be used? Manual testing? Automated testing? What specific tools or frameworks will be involved?

Improved Prompt (Example 1 - Focusing on UI testing of a specific web application):

"Generate test data for UI testing of the 'Amazon Online Shopping' web application (www.amazon.com). I need data specifically for the 'Checkout' form, including valid and invalid inputs for the following fields: 'First Name', 'Last Name', 'Address', 'City', 'State', 'Zip Code', 'Credit Card Number', 'Expiry Date', and 'CVV'. The data should be provided as a CSV file with a header row. I will use this data for automated UI testing."

Improved Prompt (Example 2 - Focusing on Performance Testing):

"Generate a large dataset (approximately 10,000 records) of realistic user data for load testing the 'Social Media Platform' application.  The data should include usernames, email addresses, and post content.  The data should be provided in a SQL script format compatible with a PostgreSQL database. I will use this data to simulate high user activity during performance testing."
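For the performance-testing prompt, the requested SQL script could also be produced mechanically rather than by hand; a minimal sketch (the `users` table name and its columns are assumptions for illustration):

```python
def make_insert_script(n):
    """Emit a PostgreSQL-compatible script with n synthetic user rows."""
    lines = []
    for i in range(1, n + 1):
        lines.append(
            f"INSERT INTO users (username, email, post_content) "
            f"VALUES ('user{i}', 'user{i}@example.com', 'load test post {i}');"
        )
    return "\n".join(lines)
```

Scaling `n` to 10,000 gives the record count the prompt asks for without any manual effort.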

Observation:

By adding specifics about the application, the target area for testing, the types of test data required, the desired output format, and the intended usage, the refined prompts become significantly more actionable. The difference between the original vague prompt and the refined prompts demonstrates the importance of clarity and specificity when interacting with AI tools for software testing tasks.


parwalrahul
  • Author
  • Chief Specialist
  • 91 replies
  • March 25, 2025

@Jeethu good one. to the point.

also, i like how you have specifically mentioned the names of the fields and added the context that they are specifically for the checkout form.


  • Ensign
  • 3 replies
  • March 25, 2025

@parwalrahul 

I just gave this prompt to the LLM:

"Give me some test data for this app."

Upon giving this prompt, the response I received was very appealing:

I apologize for the confusion, but I don't have any context about a specific app or what kind of test data you need. To provide relevant and useful test data, I'll need more information. Could you please provide some details about:

What kind of app are you referring to?
What is the purpose of the app?
What type of data does the app handle?
Are there any specific fields or data structures you need test data for?
How much test data do you need (e.g., number of records)?
Are there any particular scenarios or edge cases you want to test?
Once you provide more context, I'll be better equipped to generate appropriate test data for your needs.

Is there any extra context you would like to add that might make my answer more accurate?
------------------
This response was actually a starting point: it prompted me to create a prompt that answers all the questions asked by the LLM, and then to see the output we are looking for.

 

Hence, I created a detailed prompt with examples for LLM to create the test data

Prompt: 

Create comprehensive test data for the specified fields with considerations for valid inputs, boundary values, and potential errors. We'll need to define some basic patterns or constraints for each field, especially considering the validation rules that might apply. For simplicity, let's assume the following general guidelines. Provide me the set of 5 test data for each category and display the details in a tabular format.

  • First Name: Typically, alphabetic characters with optional spaces. Length limits can vary, but for testing, we'll use common lengths (e.g., 2-8 characters).

  • Last Name: Typically, alphabetic characters with optional spaces. Length limits can vary, but for testing, we'll use common lengths (e.g., 2-8 characters).

  • Phone Number: A format that could be accepted is '(XXX) XXX-XXXX' or simply a local number 'XXX-XXXX', where X represents digits. For simplicity, we'll stick to the latter as it's more common in global testing scenarios.

  • Country (represented by a 2-letter ISO code): Valid country codes.

  • Email Address: A basic format could be 'first.last@domain.extension', where each segment has different valid ranges.

  • Password: Typically requires at least one uppercase letter, one lowercase letter, one number, and is between 8-20 characters long.

The response was really what was expected

 

1. Valid Inputs

First Name | Last Name | Phone Number | Country | Email Address | Password
John | Smith | 555-1234 | US | john.smith@email.com | P@ssw0rd123
María | García | 123-4567 | ES | maria.garcia@test.es | Str0ngPass!
Yuki | Tanaka | 987-6543 | JP | yuki.tanaka@mail.jp | J@pan3se2023
Sophie | Martin | 246-8135 | FR | sophie.martin@domain.fr | Fr3nchT0ast!
Ahmed | Ali | 369-2580 | AE | ahmed.ali@company.ae | Dub@i2023OK

2. Boundary Values

First Name | Last Name | Phone Number | Country | Email Address | Password
Jo | Li | 000-0000 | AD | a@b.co | A1b2c3d4!
Alexander | Washington | 999-9999 | ZW | verylongemail.address.123456@verylongdomainname.com | Abcdefghijklmnopqrst1!
X | Y | 100-0000 | AA | x@y.z | A1b2C3d4!
Ab | Cd | 999-0000 | ZZ | ab.cd@mail.longextension | LongP@ssw0rd123456789
Bob-Alice | Van-Der | 555-5555 | XX | bob-alice.van-der@mail.co.uk | P@ssw0rd-With-Hyphens!

3. Potential Errors

First Name | Last Name | Phone Number | Country | Email Address | Password
123 | #$% | abc-defg | U1 | @invalid.com | password
"" | "" | 555-55555 | USA | john@smith@email.com | short
John1 | Smith2 | 555-123 | us | john.smith@.com | PASSWORD123
J0hn | Sm!th | (555)1234 | AA1 | johnsmith@email | Onlylowercase123
VeryLongFirstNameExceedingTypicalLimits | O'Neill-Smith-Jones | +1-555-123-4567 | USA | john.smith@email..com | NoSpecialChars123

4. Creative/Edge Cases

First Name | Last Name | Phone Number | Country | Email Address | Password
Æon | Ümlaut | 000-0001 | ÅL | user+test@email.co | P@ssw0rd!123
Neil-Patrick | O'Brien | 999-9998 | GB | "quoted"@email.com | !@#$%^&*()_+{}:"<>?
张伟 | | 123-ABCD | CN | 用户@例子.中国 | 密码PassWord123!
Señor | Añejo | 555-0000 | ES | señor.añejo@correo.es | ContraseñA123!
Mary Ann | De La Cruz | N/A | PH | mary_ann_de_la_cruz@very.long.subdomain.com | P@ssw0rd With Spaces

5. Security/Penetration Testing

First Name | Last Name | Phone Number | Country | Email Address | Password
<script>alert('XSS')</script> | '; DROP TABLE users;-- | 555-1337 | XX | admin@example.com | ' OR '1'='1
Robert'); DROP TABLE Students;-- | O'Neill | 000-0000 | ZZ | email@[123.123.123.123] | P@ssw0rd123456789012345678901234567890
null | undefined | 127-0-0-1 | AA | email@localhost | password123
%00 | %0A%0D | 555-%00 | %00 | %0A%0D@email.com | %00P@ssw0rd
Max+Length+First+Name+250+Characters+Lorem+ipsum+dolor+sit+amet+consectetur+adipiscing+elit+sed+do+eiusmod+tempor+incididunt+ut+labore+et+dolore+magna+aliqua+Ut+enim+ad+minim+veniam+quis+nostrud+exercitation+ullamco+laboris+nisi+ut+aliquip+ex+ea+commodo+consequat+Duis+aute+irure+dolor | Max+Length+Last+Name+250+Characters+Lorem+ipsum+dolor+sit+amet+consectetur+adipiscing+elit+sed+do+eiusmod+tempor+incididunt+ut+labore+et+dolore+magna+aliqua+Ut+enim+ad+minim+veniam+quis+nostrud+exercitation+ullamco+laboris+nisi+ut+aliquip+ex+ea+commodo+consequat+Duis+aute+irure+dolor | 555-HACK | ZZ | very.long.email.address.that.exceeds.typical.maximum.length.limits.for.email.fields.in.most.systems.and.might.cause.unexpected.behavior.or.errors@very.long.domain.name.that.also.exceeds.typical.length.limits.for.domains.in.email.addresses.com | VeryLongPasswordThatExceedsTypicalMaximumLengthLimitsForPasswordFieldsInMostSystems123!@#

 

Analysis of Initial vs. Improved Prompt:

  1. Specificity and Context:

    • Initial: "Give me some test data for this app" is extremely vague and provides no context.
    • Improved: Specifies exact fields (First Name, Last Name, Phone Number, Country, Email Address, Password) and provides guidelines for each.
    • Result: The improved prompt leads to much more relevant and targeted test data.
  2. Structure and Format:

    • Initial: No structure or format specified.
    • Improved: Requests data in a tabular format with 5 test cases for each category.
    • Result: The improved prompt results in a more organized and easily readable output.
  3. Test Data Categories:

    • Initial: No categories mentioned.
    • Improved: Specifies categories like valid inputs, boundary values, and potential errors.
    • Result: The improved prompt leads to a more comprehensive set of test cases covering different scenarios.
  4. Field-Specific Considerations:

    • Initial: No field information provided.
    • Improved: Gives specific guidelines for each field (e.g., length limits, format requirements).
    • Result: The improved prompt generates test data that is more realistic and aligned with actual field constraints.
  5. Quantity and Depth:

    • Initial: No quantity specified.
    • Improved: Requests 5 test cases for each category.
    • Result: The improved prompt leads to a more substantial and varied set of test data.
  6. Validation Rules:

    • Initial: No mention of validation rules.
    • Improved: Includes basic validation rules (e.g., password requirements, phone number format).
    • Result: The improved prompt generates test data that can better test the system's validation mechanisms.
  7. Realism and Applicability:

    • Initial: Could result in completely random or irrelevant data.
    • Improved: Guides towards realistic data that could be used in an actual user registration scenario.
    • Result: The improved prompt produces more practical and useful test data.

Major Differences in Results:

  1. Relevance: The improved prompt results in test data directly applicable to user registration fields, while the initial prompt could lead to any type of data.

  2. Comprehensiveness: The improved prompt leads to a wider range of test cases, including edge cases and potential errors, which the initial prompt might miss.

  3. Structure: The improved prompt results in a well-organized, tabular output, making it easier to use and analyze the test data.

  4. Validity: Test data from the improved prompt is more likely to include both valid and invalid inputs based on specified constraints, providing better coverage for testing.

  5. Consistency: The improved prompt ensures a consistent number of test cases across different categories, which the initial prompt doesn't guarantee.

  6. Field-Specific Testing: The improved prompt allows for targeted testing of each field's specific requirements and potential issues.

  7. Realistic Scenarios: The improved prompt is more likely to generate data that represents real-world user inputs, including potential user errors or edge cases.

In conclusion, the improved prompt significantly enhances the quality, relevance, and usefulness of the generated test data. It provides a clear structure and guidelines that result in a comprehensive set of test cases, covering various scenarios that are crucial for thorough testing of a user registration system. This improvement demonstrates the importance of providing detailed context and specific requirements when requesting test data generation from an AI system.
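The field guidelines in the refined prompt also double as executable oracles. A minimal sketch of two of them as validators (the rules mirror the prompt's stated assumptions, not any real system's policy):

```python
import re

# Password: at least one uppercase letter, one lowercase letter, one digit,
# and 8-20 characters total (the guideline stated in the prompt).
PASSWORD_RE = re.compile(r"^(?=.*[A-Z])(?=.*[a-z])(?=.*\d).{8,20}$")

# Phone: local 'XXX-XXXX' format (the guideline stated in the prompt).
PHONE_RE = re.compile(r"^\d{3}-\d{4}$")

def is_valid_password(value):
    """True if the value satisfies the prompt's password guideline."""
    return PASSWORD_RE.fullmatch(value) is not None

def is_valid_phone(value):
    """True if the value matches the prompt's local phone format."""
    return PHONE_RE.fullmatch(value) is not None
```

Running the generated rows through checks like these quickly flags any case the tables above mislabel as valid or invalid.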

 

 


parwalrahul wrote:

Review the Given Prompt: Here’s a poor prompt for a testing task: "Give me some test data for this app."

Deep Issues in the Original Prompt:

Issue | Why It’s a Problem (Deep Dive)
Vague Subject ("this app") | AI (or anyone) doesn’t know what the app does. A social media app and a healthcare app require totally different test data. No business logic = no context.
Ambiguous Objective | “Some test data” is non-specific. Are you asking for happy path data? Edge cases? Negative cases? Are we validating input formats or testing workflows?
Lacks Scope | The app might have multiple modules: login, profile creation, search, payment. Without specifying which part needs data, you risk getting generic or irrelevant data.
No Audience/Use-case | Is this for manual testing, automation, or demo data for UI? The nature of the data (realistic vs synthetic vs boundary-based) can change depending on use.
No Format Indicated | Output might come as plain text, when a table, JSON, or CSV would be easier to plug into a test script or test case management tool.

 

Deep Dive into the Refined Prompt:
"Generate a set of test data for the user registration form of a mobile banking app. The form includes fields for Full Name, Email, Phone Number, Password, and Date of Birth. The test data should include:

Valid and invalid inputs for each field (including edge cases)

At least 5 data sets per field

Clearly labeled as Valid or Invalid

Formatted as a table with one row per test case

This data will be used for manual exploratory testing and for creating automated input validation test cases."

Layer-by-Layer

 
Element | Deep Reasoning
App Context Given | "Mobile banking app" implies security, strict validation, PII handling, etc. This shapes the type of test data AI will generate: not just random names or numbers, but context-aware entries.
Functional Area Specified | "User registration form" limits the scope. Instead of random data across the app, we focus on one form, reducing noise and increasing usefulness.
Fields Listed | This provides AI with structure. Knowing the fields means it can generate field-specific data: e.g., invalid email formats, future DOBs, weak passwords.
Types of Data Requested | “Valid and invalid inputs including edge cases” pushes AI to generate more diverse and useful data, helping uncover potential bugs.
Quantity Defined | Saying “at least 5 data sets per field” sets expectations and ensures depth, not just shallow examples.
Purpose Stated | Explaining it's for manual and automated testing ensures data is test-ready: realistic for humans, structured for machines.
Output Format Stated | A table makes it clean and consumable. A format like this helps testers copy-paste or directly import the data for testing.

 

When crafting prompts for AI in testing, think like you're writing a mini test case. Your prompt should reflect:

  • Who / What is under test (system/component)

  • What kind of input you're testing

  • Why you're testing it

  • How you want the output presented


  • Ensign
  • 14 replies
  • March 26, 2025

Given the original prompt, the AI could not respond directly; it asked for the following details:

Please provide details about the app for which you need test data. Specifically, I need to know:

  1. The type of app (e.g., e-commerce, social media, etc.).
  2. The kind of data you need (e.g., user accounts, product listings, transaction records).
  3. Any specific fields or attributes you want in the data (e.g., name, email, price, date).
  4. The volume of data you require (e.g., number of records).

With this information, I can generate the appropriate test data for you!

 

Refined prompt: Give me some test data for a social media Instagram app. The kind of data I need: user accounts. Specific fields I want in the data: name, email.

 

AI response: 

Sure! Below is a sample dataset for user accounts that you might use for a social media application like Instagram. The dataset includes the fields you specified: name and email.

 

Sample Test Data for User Accounts

json

[
  {"name": "Alice Johnson", "email": "alice.johnson@example.com"},
  {"name": "Bob Smith", "email": "bob.smith@example.com"},
  {"name": "Charlie Brown", "email": "charlie.brown@example.com"},
  {"name": "Diana Prince", "email": "diana.prince@example.com"},
  {"name": "Edward Elric", "email": "edward.elric@example.com"},
  {"name": "Fiona Gallagher", "email": "fiona.gallagher@example.com"},
  {"name": "George Weasley", "email": "george.weasley@example.com"},
  {"name": "Hannah Baker", "email": "hannah.baker@example.com"},
  {"name": "Isaac Newton", "email": "isaac.newton@example.com"},
  {"name": "Jessica Jones", "email": "jessica.jones@example.com"},
  {"name": "Kevin Hart", "email": "kevin.hart@example.com"},
  {"name": "Liam Neeson", "email": "liam.neeson@example.com"},
  {"name": "Mia Wallace", "email": "mia.wallace@example.com"},
  {"name": "Nathan Drake", "email": "nathan.drake@example.com"},
  {"name": "Olivia Benson", "email": "olivia.benson@example.com"}
]

 

Usage

You can use this data to:

  • Create user accounts in your social media app.
  • Test user registration and profile functionality.
  • Validate email uniqueness during account creation.
  • Simulate user interactions in your testing scenarios.
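One suggested use, validating email uniqueness, is easy to sketch against this dataset (a minimal check assuming the JSON list shape shown above; the third account below is an added example to demonstrate a case-insensitive duplicate):

```python
def find_duplicate_emails(accounts):
    """Return emails that appear more than once, compared case-insensitively."""
    seen, duplicates = set(), set()
    for account in accounts:
        email = account["email"].lower()
        if email in seen:
            duplicates.add(email)
        seen.add(email)
    return sorted(duplicates)

accounts = [
    {"name": "Alice Johnson", "email": "alice.johnson@example.com"},
    {"name": "Bob Smith", "email": "bob.smith@example.com"},
    {"name": "Alice J.", "email": "Alice.Johnson@example.com"},
]
```

A registration flow would run this kind of check before accepting a new account.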

 

With the help of a refined prompt, AI knows what and how to respond. From my point of view, it is always recommended to provide proper details to AI models so that they can process an accurate solution which can be useful for the users.


parwalrahul
  • Author
  • Chief Specialist
  • 91 replies
  • March 26, 2025

@ameet213 superb! loved the learning style that you followed.

Starting with confusion and ambiguity and ending with effectiveness and clarity.

a nice example of learning and evolution through the bootstrap approach.

i liked the fact that you actually took the time to read and reflect on all the questions asked by gpt after your initial prompt.

kudos! cheers :)


parwalrahul
  • Author
  • Chief Specialist
  • 91 replies
  • March 26, 2025

@Saravanan s nice one.

you doubled down on the 5W approach and that’s a really nice way to elaborate anything well.

In real life as well as in the artificial life (i.e. using ai / llm tools). cheers!


parwalrahul
  • Author
  • Chief Specialist
  • 91 replies
  • March 26, 2025

@Charmi07 nice one. also, i see it generated answers for you in JSON format.

Did you specifically ask for it, or was it because of some previous prompts or chats with the llm tool?


  • Ensign
  • 14 replies
  • March 26, 2025
parwalrahul wrote:

Did you specifically ask for it, or was it because of some previous prompts or chats with the llm tool?

No, I didn’t specify the format; maybe it was because of previous prompts.

