Question

How to get a test's Fail/Pass status during execution, to prevent storing untrustworthy data in the repository

  • 17 November 2023
  • 3 replies
  • 87 views


Hi,

In our tests, runtime data is stored in a repository while the tests are running. Each test must complete its execution flow, so interrupting a test mid-run is not an option.

The data that is supposed to be stored in the repository becomes unreliable when a soft failure occurs during test execution.

How can soft failures be detected during execution, so that unreliable data is not saved to the repository, while at the same time not interrupting the test?

Thanks


3 replies

I think we should use a framework that supports soft assertions: if a test case fails, it is marked as failed at the end, but the test continues to run.
Another way is to control this with custom conditional programming in the testing framework: if the assertion of a post-condition fails, a warning is logged and the data is not saved.
For example, in Robot Framework with the power of Python, we can implement a test case that returns some data to be saved for the next call; with custom keywords we can implement the logic that if a post-condition fails, a data-save flag is set to false.
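The flag-based approach above can be sketched as a small custom keyword library in Python. This is a minimal illustration, not Robot Framework's built-in API: the class name, keyword names, and flag are all hypothetical, and the checks print instead of logging via `robot.api.logger` so the sketch stays self-contained.

```python
class SoftVerifyLibrary:
    """Hypothetical custom keyword library: soft verifications that
    record mismatches and clear a data-save flag instead of failing
    the test, so runtime data is persisted only if all checks pass."""

    def __init__(self):
        self._data_save_ok = True   # flips to False on first soft failure
        self._failures = []         # collected for end-of-test reporting

    def soft_verify_equal(self, actual, expected, message=""):
        """Compare values; on mismatch, record the failure and clear
        the data-save flag instead of raising an AssertionError."""
        if actual != expected:
            self._data_save_ok = False
            self._failures.append(
                message or f"expected {expected!r}, got {actual!r}"
            )

    def data_can_be_saved(self):
        """Return True only if no soft verification has failed so far."""
        return self._data_save_ok

    def get_soft_failures(self):
        """Return the list of recorded soft failures."""
        return list(self._failures)


# Usage sketch: save runtime data only when every soft check passed.
lib = SoftVerifyLibrary()
lib.soft_verify_equal(42, 42)
lib.soft_verify_equal("draft", "final", "status mismatch")
if lib.data_can_be_saved():
    print("saving data to repository")
else:
    print("skipping save:", lib.get_soft_failures())
```

In a real Robot Framework setup, the test would call these as keywords (`Soft Verify Equal`, `Data Can Be Saved`) and only run the save step when the flag is still true; the failures list can be reported at the end so the test still surfaces every mismatch without being interrupted.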
 


I think we should use a framework that supports soft assertions: if a test case fails, it is marked as failed at the end, but the test continues to run.
Another way is to control this with custom conditional programming in the testing framework: if the assertion of a post-condition fails, a warning is logged and the data is not saved.
For example, in Robot Framework with the power of Python, we can implement a test case that returns some data to be saved for the next call; with custom keywords we can implement the logic that if a post-condition fails, a data-save flag is set to false.
 

Value verification in Tosca tests is a “soft failure”: a mismatched value is reported as a failure at the end of the test execution, but the execution itself is not stopped. So Tosca does have soft assertions; the question is how to get that list of soft-assertion failures from inside the test.

I think we should use a framework that supports soft assertions.

Do you mean using such a framework with Tosca? If so, could you please share more insight or links on how to do that?

 

I think we should use a framework that supports soft assertions: if a test case fails, it is marked as failed at the end, but the test continues to run.
Another way is to control this with custom conditional programming in the testing framework: if the assertion of a post-condition fails, a warning is logged and the data is not saved.
For example, in Robot Framework with the power of Python, we can implement a test case that returns some data to be saved for the next call; with custom keywords we can implement the logic that if a post-condition fails, a data-save flag is set to false.
 

Value verification in Tosca tests is a “soft failure”: a mismatched value is reported as a failure at the end of the test execution, but the execution itself is not stopped. So Tosca does have soft assertions; the question is how to get that list of soft-assertion failures from inside the test.

I think we should use a framework that supports soft assertions.

Do you mean using such a framework with Tosca? If so, could you please share more insight or links on how to do that?

In practice, I haven’t worked with Tosca, so I think this link might be helpful.

 
