Blogathon 📚: ShiftSync presents its first blog competition


Userlevel 7
Badge +2
  • Community Manager
  • 105 replies

We're thrilled to announce our first blogathon, and we warmly invite you to participate and showcase your exceptional technical and writing skills!

This is not your average competition, folks. It's a chance to connect with like-minded individuals, flaunt your expertise, give your article the spotlight it deserves, and who knows, you might just snag some prizes along the way!

To encourage more people toward blog writing, ShiftSync is launching a one-month Blogathon contest from July 19, 2023, to August 18, 2023.

Top 3 winners will get monetary prizes, badges, and certificates of recognition.  

  • 🥇First prize – 300 USD + Swag 

  • 🥈Second prize – 150 USD + Swag 

  • 🥉Third prize – 50 USD + Book by an industry expert 

In addition, some outstanding entries will receive certificates of participation and ShiftSync badges (based on your points). 

 If you have any concerns or need assistance, feel free to reach out to shiftsync@tricentis.com or message @Kat or @Daria.

Watch out for these dates:

  • Blog submission last date – July 30, 2023 

  • ShiftSync assessment – August 16, 2023 

  • Winner announcement – August 18, 2023 

After blog submission, you can vote for your fellow users’ blogs. The most popular blogs in terms of likes and comments will be shortlisted. Public voting is open for 14 days (about 2 weeks) after submission. In other words, if you submit the blog on July 19, your votes will be counted for 14 days, starting from July 19 to August 1. If you submit your blog on July 30, you will still get 14 days of voting, which means your votes will be counted from July 30 till August 12.  
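Reading the examples above as a 14-day window counted inclusively (the submission day is day 1), the end of a voting window can be sketched like this; the function name is just for illustration:

```python
from datetime import date, timedelta

def voting_window(submission: date) -> tuple:
    """Return the first and last day of the 14-day (inclusive) voting window."""
    # 14 days counted inclusively: the submission day itself is day 1,
    # so the last day is 13 days later.
    return submission, submission + timedelta(days=13)

start, end = voting_window(date(2023, 7, 19))
print(start, end)  # 2023-07-19 2023-08-01
```

This reproduces both examples: July 19 → August 1, and July 30 → August 12.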

Please make sure that you submit your blogs on or before July 30, 2023. Post July 30, your blogs won’t be accepted.  

ShiftSync will finalize the top 3 winners based on public voting.  

This is an opportunity for you all to create blogs on the new community and win prizes. We encourage you to create meaningful and engaging content with utmost brevity. Get ready to awaken your creative muse, and grab the opportunity to educate your fellow users as well as encourage them to take part.  

How to be a part of this Blogathon? 

  1. Submit your blog to the ShiftSync community. Please reply to the blogathon topic. Screenshot attached. 

 

  2. Create your original blog posts (between 300-1000 words) covering ONE of the following topics: 
  • 📃Test Automation for Mobile Applications – This is a broad topic, so feel free to pick your own angle: how it contributes to company success, or examples from your experience (both wins and challenges, with learnings).
  • 📃What's enough testing?
  • 📃Integrating OAuth 2.0 in C# Microservices: A Step-by-Step Guide. 
  3. Follow the rules:
  • An author can submit multiple blogs; however, they must cover the topics listed above   
  • The entry should be in English 
  • The entry should not have multiple authors 
  • Don't forget to mention the title of the topic that you chose  
  • Please include images (no more than 5) to make your blog visually appealing. You can use images from Pexels or Unsplash 

Who can vote? 

  • Anyone 
  • You cannot vote for your own blog, but go ahead and activate your network and friends: have them vote for you and get your name and article to the top of the list! 

22 replies

Userlevel 1

 

What’s enough testing?

Testing is a delicate balancing act performed by testers to bring order to the chaos within the system under test. It is closely connected to human cognition and state of mind. If you are a satisfied person, it will show in your testing. Conversely, when the mind is unsettled, it may veer toward two extremes: either you take the easier route of lethargically delegating work to others and losing precision, or you fall into an endless perfection loop. The latter pushes everyone to work excessively, resulting in burnout while still feeling discontented with the amount of testing achieved in the given time.

Tester’s balance ∝ Quality Testing

So how do we know whether the system under test has been vetted properly and the golden middle attained? We must develop a guideline for ourselves and the team, keeping in mind that it is not a one-size-fits-all best-practice document. Each context is unique, and approaches should be adaptive. Would a doctor treat all diseases with the same medicine? That would be disastrous for both the doctor and the patient! 

 

A useful guideline would be to set some traps, let the system under test roam freely, reveal its weaknesses, and then find a way to overcome them. If it cannot navigate the traps, dig for the root cause. It is essential to acknowledge that what you find may not be a defect; rather, it could be information useful for setting up traps in the future! Testers possess primordial hunter instincts; in this case, they hunt for bugs. Bugs are an inherent part of the software development cycle as long as humans are involved in the process.

 

Regardless of your motivation, be it squashing bugs or gaining visibility as an expert tester, the ultimate aim should be to achieve quality. Call yourself a Quality Architect or Quality Critic rather than a mere tester, because as a tester you continuously change hats with one aim: bringing more quality to the product. I would end by saying testing is enough when you think the desired quality is achieved.

 

To further improve our testing process, let's delve into some concrete examples and testing techniques. For instance, exploratory testing allows us to simulate real-world scenarios, regression testing helps ensure new changes don't introduce new issues, and boundary testing verifies that the system handles extreme values correctly. By incorporating these techniques judiciously, we can augment our testing efforts and deliver a higher quality product.
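The boundary-testing idea above can be sketched with a hypothetical discount function: probe the edges of each valid partition and one step beyond them. The function and its age thresholds are invented for illustration:

```python
def discount(age: int) -> float:
    """Hypothetical pricing rule: 10% discount for ages 0-17 and 65+, else 0%."""
    if age < 0 or age > 120:
        raise ValueError("age out of range")
    return 0.10 if age <= 17 or age >= 65 else 0.0

# Boundary testing: check the extreme values and the off-by-one neighbours.
assert discount(0) == 0.10      # lower boundary of the valid range
assert discount(17) == 0.10     # last value of the youth partition
assert discount(18) == 0.0      # first value of the adult partition
assert discount(64) == 0.0      # last value before the senior partition
assert discount(65) == 0.10     # first value of the senior partition
assert discount(120) == 0.10    # upper boundary of the valid range
for bad in (-1, 121):           # one step outside the valid range
    try:
        discount(bad)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Off-by-one defects cluster exactly at these transition points, which is why boundary values give high defect yield per test case.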

 

One challenge we often face in testing is time constraints. It's crucial to acknowledge this reality and prioritize our efforts effectively. Identify the critical functionalities and high-risk areas, and focus on thorough testing there, while still giving reasonable attention to other parts of the system.
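The prioritization idea above can be sketched as a simple risk score (likelihood of failure × business impact) used to order test areas. The feature areas and scores below are made up for illustration:

```python
# Hypothetical feature areas scored by failure likelihood and business impact (1-5).
areas = {
    "checkout":     {"likelihood": 4, "impact": 5},
    "login":        {"likelihood": 2, "impact": 5},
    "profile page": {"likelihood": 3, "impact": 2},
    "help section": {"likelihood": 1, "impact": 1},
}

def risk(scores: dict) -> int:
    """Classic risk-based-testing score: likelihood times impact."""
    return scores["likelihood"] * scores["impact"]

# Spend testing time on the riskiest areas first.
ordered = sorted(areas, key=lambda a: risk(areas[a]), reverse=True)
print(ordered)  # ['checkout', 'login', 'profile page', 'help section']
```

When the deadline arrives, whatever is left untested is, by construction, the lowest-risk part of the system.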

 

In conclusion, testing is enough when we believe the desired level of quality is attained. By following a well-defined guideline, employing various testing techniques, and acknowledging time constraints, testers can effectively balance their efforts and contribute to the success of the product they are testing. Embrace the dynamic nature of testing and continuously adapt and refine your approach to meet the unique demands of each project.

Userlevel 4
Badge

What’s enough testing?

 

What’s enough testing? I say it’s a never-ending process!

“Enough testing” refers to conducting a sufficient amount of testing to ensure the quality and reliability of a software product, system, or application before it is released to end users. The primary goal of testing is to identify and fix defects, bugs, or any issues that may affect the functionality and performance of the software.

 

The exact amount of testing required can vary depending on factors such as the complexity of the software, the criticality of the application, the intended user base, and the project’s timeline and budget. Here are some common types of testing that can contribute to the overall testing effort:

 

TYPES OF TESTING

 

To determine “enough testing,” the development team and stakeholders must consider the level of risk they are willing to accept and how confident they need to be in the software’s quality. Aiming for 100% bug-free software is usually impractical, but testing should be comprehensive enough to mitigate major risks and ensure the software meets the minimum acceptable level of quality.

 

How much testing is enough for Quality Assurance?

 

It’s also essential to balance testing efforts with project constraints, such as time and budget, as exhaustive testing may not always be feasible. Utilizing test management practices and risk-based testing approaches can help focus testing efforts on areas that are most critical to the software's success. Additionally, obtaining feedback from end-users and incorporating their input into the testing process can be valuable in refining the software and making it more user-centric. 

 

Remember that testing is an ongoing process, and it’s essential to continually monitor the software’s performance in real-world conditions and update your tests as needed to maintain their accuracy and effectiveness over time. It’s important to acknowledge that “enough testing” doesn't mean the software is entirely defect-free or risk-free. Testing provides valuable information about the quality of the product, but it cannot guarantee that the software is entirely bug-free. Therefore, it’s crucial to find a balance between practical constraints and the level of confidence in the software’s performance.

 

To conclude: testing is an endless process, and it continues for as long as the software exists. It is practically impossible to find all the defects in software, but that alone is not the decisive factor for stopping testing.

Userlevel 1

@Kat Why is nobody able to like my post? My colleagues say it shows as liked for a moment and then reverts. Can you please assist?

Userlevel 7
Badge +2

@Kat Why is nobody able to like my post? My colleagues say it shows as liked for a moment and then reverts. Can you please assist?

Hi, I checked it and everything works. Maybe they are not registered in the community. 

Userlevel 1

@Kat Why is nobody able to like my post? My colleagues say it shows as liked for a moment and then reverts. Can you please assist?

Hi, I checked it and everything works. Maybe they are not registered in the community. 

They have registered, but they're complaining that their like disappears the moment they press the like button on the page. I tested it out too, and it's happening.

Userlevel 7
Badge +2

@Kat Why is nobody able to like my post? My colleagues say it shows as liked for a moment and then reverts. Can you please assist?

Hi, I checked it and everything works. Maybe they are not registered in the community. 

They have registered, but they're complaining that their like disappears the moment they press the like button on the page. I tested it out too, and it's happening.

Could you please send me their usernames in direct messages? It may happen because they haven’t confirmed their emails. 

Title: Test Automation for Mobile Applications

In today's fast-paced digital age, mobile applications have become a fundamental part of our lives. The rise in mobile app usage has led to an increased demand for high-quality, bug-free, and user-friendly applications. To meet these expectations, developers must ensure that their mobile apps undergo rigorous testing before being released to the public. Test automation for mobile applications has emerged as a crucial solution to expedite the testing process while maintaining the app's quality and reliability.

Mobile application development involves dealing with a myriad of devices, operating systems, and screen sizes. Manual testing, though essential, is time-consuming and inefficient when it comes to covering the vast array of test scenarios. Test automation offers several key benefits that contribute to a more streamlined and robust testing process:

Accelerated Testing: Automation tools allow developers to run multiple tests simultaneously, significantly reducing testing time compared to manual testing. This accelerated testing ensures quicker release cycles and faster time-to-market for mobile apps.

Increased Test Coverage: With an ever-growing number of mobile devices and operating system versions, it is practically impossible to cover all combinations manually. Test automation enables comprehensive test coverage by executing test scripts on various devices and configurations.

Consistency and Repeatability: Automated tests produce consistent and repeatable results, ensuring that the same tests can be executed multiple times without the risk of human errors affecting the outcome.

Early Bug Detection: Identifying bugs and issues early in the development process saves time and resources. Test automation enables continuous integration and early bug detection, facilitating prompt fixes and smoother development cycles.

Cost-Effectiveness: While the initial setup of test automation may require investment, it proves cost-effective in the long run due to reduced manual effort and faster time-to-market.

Choosing the Right Test Automation Tools

Selecting the appropriate test automation tools is crucial for successful mobile app testing. A wide range of tools are available, each with its strengths and weaknesses. Some popular test automation frameworks for mobile apps include:

Appium: Appium is an open-source, cross-platform automation tool that allows developers to test native, hybrid, and mobile web applications. It supports various programming languages and provides seamless integration with popular development environments.

 

Espresso: Developed by Google, Espresso is a powerful automation framework designed for testing Android applications. It offers a rich set of features, including synchronization with the app's UI thread and the ability to handle complex gestures.

 

XCUITest: XCUITest is Apple's official automation framework for iOS apps. It provides excellent support for iOS-specific elements and interactions, making it a go-to choice for testing applications on iPhones and iPads.

Selendroid: Selendroid is an open-source framework specifically tailored for testing Android applications. It supports multiple devices simultaneously and offers robust handling of native and hybrid apps.

Detox: Detox is a popular end-to-end testing framework for React Native applications. It allows developers to simulate user interactions and verify app behavior on both Android and iOS devices.

Best Practices for Test Automation in Mobile App Development

While test automation can enhance efficiency and effectiveness, its success relies on following best practices:

Early Test Inclusion: Incorporate testing from the initial stages of app development to identify and address issues as early as possible. This reduces the chances of defects accumulating and affecting the app's overall quality.

Device and OS Coverage: Aim to test on a diverse range of real devices and operating system versions. Emulators and simulators are valuable for initial testing but may not replicate all real-world scenarios.

Maintainable Test Scripts: Create well-structured and maintainable test scripts to ensure scalability as the app evolves. Frequent updates and changes should not lead to a complete overhaul of the test suite.
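One common way to achieve such maintainability is the Page Object pattern: all locators for a screen live in one class, so a UI change means editing one place rather than every test. A minimal sketch with a stubbed driver (no real Appium/Espresso session; the element ids are invented):

```python
class StubDriver:
    """Stand-in for a real mobile driver; records the element ids it taps."""
    def __init__(self):
        self.tapped = []

    def find_and_tap(self, element_id: str):
        self.tapped.append(element_id)

class LoginPage:
    # All locators in one place: if the app's ids change,
    # only this class changes, not every test that logs in.
    USERNAME = "login_username_field"
    SUBMIT = "login_submit_button"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self):
        self.driver.find_and_tap(self.USERNAME)
        self.driver.find_and_tap(self.SUBMIT)

driver = StubDriver()
LoginPage(driver).log_in()
print(driver.tapped)  # ['login_username_field', 'login_submit_button']
```

Tests then read as user intentions (`LoginPage(driver).log_in()`) rather than raw element lookups, which keeps the suite stable as the app evolves.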

Continuous Integration: Implement continuous integration practices to run tests automatically after each code commit. This aids in early bug detection and ensures that the app remains stable throughout the development process.

User Experience Testing: Apart from functional testing, prioritize user experience testing to ensure the app is intuitive and user-friendly.

In the ever-expanding world of mobile applications, test automation has become an indispensable part of the development process. By leveraging automation tools and following best practices, developers can enhance test coverage, identify bugs early, and deliver high-quality mobile apps to users in a more efficient and cost-effective manner. Embracing test automation not only saves time and resources but also contributes to building a loyal user base by providing reliable and user-friendly mobile applications. As the mobile app landscape continues to evolve, test automation will remain a vital component in delivering exceptional app experiences to users worldwide.

Userlevel 4
Badge


TITLE: Test Automation for Mobile Apps: Its Challenges and Some Tools to Do It 
 

 

With ever-growing populations and technology, the number of mobile users is rapidly increasing, pushing mobile application developers to introduce new features and make their applications more scalable, secure, and fully functional. 

But with rapidly changing market requirements, maintaining good quality under a deadline can become hectic for developers, and things may be left out of scope. 

To close the gaps between prototype and final product, a lot of testing is required. Because of that growing demand, testing and distribution of the final product need to happen fast, and keeping up with manual testing alone is hard and more time-consuming than ideal. 

This is where automated testing of mobile apps comes in: the work done by the manual testing team can be automated to make the process more efficient and to scale deployment in a better way. 

Now, this sounds interesting and fairly simple. Automation works well when you are doing a basic task with few rules, but once you scale it to a larger number of rules and more checks, the process becomes more complex than it sounds. 

Nonetheless, automated testing achieves a lot: it saves time, multiple test cases can run simultaneously, and the same test cases can be reused for repeated scenarios, among other benefits. 

Now for the bitter side: the challenges of automated testing of mobile applications: 
--Frequent and faster demand for new features.
--Supporting multiple platforms/OS versions. 
--Seamless integration and functionality. 
And some more. 

Let’s have a look at the different tools available for automated testing of mobile applications: 
 

  1. Appium: An open-source mobile test automation tool for Android and iOS apps. It supports native, mobile web, and hybrid apps using the WebDriver interface, allowing code reuse between platforms.

  2. Robotium: Another open-source tool for Android app testing. It lets you write powerful Java-based test cases for hybrid and native apps quickly.

  3. MonkeyRunner: Designed for framework/functional testing of Android apps, offering features like multi-device control, regression testing, and Python-based scripting.

  4. UI Automator: Developed by Google for functional Android UI test cases, supporting devices from Android 4.1 onwards.

  5. Selendroid: Leading test automation software for Android's hybrid, native, and mobile web apps with multi-device interaction.

  6. MonkeyTalk: Automates functional testing for Android and iOS apps with user-friendly scripts, XML/HTML reporting, and screenshot capture.

  7. Testdroid: Cloud-based mobile app testing platform for iOS and Android devices with different configurations, reducing development costs and improving app quality.

  8. Calabash: Supports .NET, Ruby, Flex, Java, and more to test native and hybrid mobile apps with Cucumber framework integration.

  9. Frank: Focuses on testing iOS apps, combining JSON and Cucumber, and provides detailed app information with the app inspector "Symbiote."

  10. SeeTest: A cross-platform test automation tool for websites and mobile apps, supporting iOS, Android, Symbian, Blackberry, and Windows Phone.

  11. Kobiton: Assists in building top-notch mobile experiences with real device testing, appium script generation, and device lab management.

  12. TestComplete: A functional automated testing platform by SmartBear Software for Web, Android, iOS, and Windows apps.

  13. Espresso: Google's open-source test automation framework for Android apps, known for its simplicity and scalability.

  14. Ranorex Studio: A powerful GUI test automation framework for web-based, mobile, and desktop apps, utilizing languages like C# and VB.NET.

  15. Eggplant: An AI-assisted testing tool ensuring fast app release, business continuity, and scalability with cloud support.

Last but not least, here are some things to keep in mind while writing automation scripts for mobile apps: 

Use analytics data for device and OS prioritization, include at least one device per screen-size range, test at various network speeds for optimal performance, break scenarios down for efficiency, use OOP for code organization, and test consistently to maintain high app quality. Following these practices ensures a seamless user experience and a successful application.
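The "use analytics data for device and OS prioritization" advice can be sketched as picking the smallest set of devices that covers, say, 80% of your user base. The devices and usage shares below are invented for illustration:

```python
# Hypothetical device usage shares from app analytics (fractions of active users).
usage = {
    "Pixel 7 / Android 13": 0.30,
    "Galaxy S22 / Android 13": 0.25,
    "Galaxy A12 / Android 11": 0.15,
    "Pixel 4a / Android 12": 0.12,
    "Old tablet / Android 9": 0.05,
}

def devices_to_cover(usage: dict, target: float = 0.8) -> list:
    """Greedily pick the most-used devices until `target` coverage is reached."""
    picked, covered = [], 0.0
    for device, share in sorted(usage.items(), key=lambda kv: kv[1], reverse=True):
        if covered >= target:
            break
        picked.append(device)
        covered += share
    return picked

print(devices_to_cover(usage))  # the four most-used devices reach 82% coverage
```

Long-tail devices can then be exercised occasionally on a cloud device farm instead of in every regression run.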

 

 

Userlevel 1
Badge +1

Hi @Kat - I am unable to post my blog article, as it says the 30,000-character limit is exceeded. I tried with both Reply and Quote & Reply. MS Word says it has about 5,000 characters including spaces. I've attached a screenshot for your reference. Please do the needful. Thank you so much for your support.

 

Userlevel 1
Badge +1
Page-1
Page-2
Page-3

 

Userlevel 6
Badge +2

How much Testing is Enough Testing?

Doesn’t this topic sound straightforward? People will say that when you stop finding bugs, testing should stop, OR that when all the tests have been executed and pass, testing is enough. Many other theories can serve as answers too. Yet just as we finish testing, or feel it is done, new test ideas come up. Hence it is very important to know when to stop testing. From my experience as a junior tester, senior tester, and testing manager over 20 years, I have found a few ways to tell whether the testing performed is enough.

 

  1. All the test ideas discussed in meetings, provided by managers, or tossed around during testing are implemented: You went through each of the sample scenarios that came with the story. This means you only executed the test cases created by someone else. It might feel like something is left out; this is often the case in teams with many senior and junior testers. Junior testers are asked to remain within the scope of testing, hence they know to stop when the list is complete.
  2. Testing time is over: With SDLC, Agile, and other practices in place, testing is often a time-bound activity. As the race to market intensifies, someone can always declare that time is up and the software needs to be shipped. Like it or not, we have a finite amount of time for testing, so these may be good indicators that it’s time to stop. In such cases, testing is done based on the priority of test cases.
  3. Testing is not producing desirable results: It’s been months of repeating the same tests over and over. Automation may have been implemented. If management has stopped adding new features, testing need not be done as intensively as before. You know that even if there is a hidden bug, it would be a very simple error, so there is no point wasting resources on further testing. We can stop.
  4. Tired testers: Whether it is toward the end of the day or the end of the product’s life, testers can feel exhausted, and that is when we should stop. If I cannot find a bug after 5-6 hours of testing, chances are I am losing focus because I am mentally tired. I may want to stop testing for today and continue tomorrow. 
  5. Irrelevant test ideas: Sometimes the test ideas we put in front of management are dismissed, and we hear that those tests are not worthwhile and the product can live with any bug related to them. In such cases we can stop testing that particular functional area.

Apart from these general guidelines, several other situations determine the extent and duration of software testing. After conducting a series of tests you might be satisfied with the quality of the code or the bug percentage, but the reality might be a completely different picture; the quality, bug percentage, etc. could point in exactly the opposite direction. If you release the software in this state, it may prove expensive and detrimental to your business. You can control this by evaluating the possibility of finding more defects, which is inferred from analysis of the results obtained so far. The analysis is important because:

●       A newfound defect is a strong indication to continue testing, as defects often lead to other defects.

●       If you found defects in only a small portion of the overall functionality, you should continue testing.

●       If repeated testing of the software's functionality is not revealing any defects, it may be the right time to stop testing.

●       Before putting a stop to software testing, you need to ensure that its significant features are mostly or entirely tested for satisfactory performance.

●       One should always rely on comprehensive tests and results, and not base decisions on a veneer of false confidence. It is not uncommon for developers to skip tests entirely, claiming that as long as the code compiles, it has reached the acceptance-testing benchmark.

Success comes from identifying risks early, which is a good indicator of when to stop software testing. The risk factors determine your level of testing. If the various kinds of testing (unit testing, system testing, regression testing, and so on) are giving positive results, you can stop testing.

We can surmise that it is time to stop software testing when you are no longer finding additional defects and the various tests give you a high degree of confidence that the software is ready to be released.
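As a purely illustrative sketch of "evaluating the possibility of finding more defects," the falling defect-discovery trend can be expressed as a toy heuristic. The function name, window, and threshold below are my own assumptions, not a rule from any testing standard; a real stop decision also weighs risk, coverage, and deadlines.

```python
def defect_discovery_slowing(defects_per_cycle: list[int],
                             window: int = 3,
                             threshold: float = 1.0) -> bool:
    """Toy heuristic: suggest that testing may be winding down when the
    average number of new defects found over the last `window` test
    cycles is at or below `threshold`. Numbers here are illustrative."""
    if len(defects_per_cycle) < window:
        return False  # not enough history to judge a trend
    recent = defects_per_cycle[-window:]
    return sum(recent) / window <= threshold
```

For example, a history of `[12, 8, 5, 1, 0, 1]` defects per cycle would trip the heuristic, while `[12, 9, 7, 6, 5]` would not.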

Conclusion:

As we have seen, there are no predefined rules you can apply every time to determine how much testing is enough. Weigh the project situation, project readiness, and timelines to decide for yourself.

 

Userlevel 5
Badge +3

Why MAST is a must?

Mobile applications are a huge deal these days. Nowadays there is an app for almost everything. Want to know what song you heard on the radio recently? Or which tram station is nearest to your location right now? Or with whom your partner met the other day when they claimed they went to the gym alone? Well, there is an app for that as well. But that is not the topic of today's article. With over 255 billion mobile applications downloaded in 2022, there is no denying that mobile applications are prevalent. This puts a lot of pressure on mobile application testing, and not only functional testing but non-functional testing as well. In today's article, I'd like to talk about security testing and show an example of how an attacker can exploit a poorly tested application.

Introducing the problem

Imagine you are a small company that came up with the idea of an app that lets users rate the pubs and breweries they have visited. Due to the immense popularity of Android within your target user group, you have decided to build an Android application, publish it on Google Play, and have users download it from there.

Many people perceive mobile applications as secure by nature. The developers write Java (or Kotlin) code, compile it, bundle it together with other resources, and publish it as an executable file. To a normal human being, an executable file is just an unreadable mess. If something goes horribly wrong during development, no one should be able to figure that out, right? Wrong.

Let’s return to our story. The company published the application, but it also attracted the attention of some shady characters on the internet. We will play the role of these individuals. The first thing we have to obtain is the application itself. Luckily, there are sites such as APKpure that allow us to do exactly that.

But the file that gets downloaded is just an APK. To get to the code, the attacker would use a decompiler: software that reverses the compilation process. There are various decompilers available, but in this case the simple JADX (Dex to Java) decompiler should do the trick. What is great about this tool is that it also has a GUI version, which makes the reverse engineering process much easier.

With the code available, the attacker can start looking for interesting parts of the codebase. In this case, that means secrets. A secret is, for example, a database password or an API token for a third-party service you might be using: something you would not want any attacker to get their hands on. There are many ways for an attacker to look for secrets. One of them is to write a simple regex looking for strings with a given pattern (for example, AWS access key IDs usually take the form "AKIA" followed by 16 characters that can be either digits or capital letters). In this example, we used a commercial secret scanner.
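The regex approach mentioned above can be sketched in a few lines of Python. This is a minimal illustration, not a production secret scanner (real tools also check entropy, many more patterns, and binary resources); the function name is my own, and only the "AKIA" + 16 uppercase alphanumerics pattern comes from the text.

```python
import re
from pathlib import Path

# AWS access key IDs start with "AKIA" followed by 16 characters
# that are digits or capital letters -- a simple pattern to scan for.
AWS_KEY_RE = re.compile(r"AKIA[0-9A-Z]{16}")

def scan_for_secrets(root: str) -> list[tuple[str, str]]:
    """Scan decompiled .java sources under `root` for AWS-style key IDs.

    Returns (file path, matched string) pairs for every hit found.
    """
    hits = []
    for path in Path(root).rglob("*.java"):
        text = path.read_text(errors="ignore")
        for match in AWS_KEY_RE.findall(text):
            hits.append((str(path), match))
    return hits
```

Running this over a directory of JADX output would surface any hard-coded AWS-style credentials in seconds, which is exactly why leaving them in the source is so dangerous.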

A simple mistake that could have been prevented right in the development phase was discovered from the final executable. In the worst-case scenario, this mistake could bring down the small company and its service. This puts a lot of emphasis on testing, and more specifically on automated security testing.

MAST as solution

So now that we have walked through a real-life scenario and showcased the problem, we should also talk about how to prevent it. Prevention should start during development, with the use of various automated MAST tools. We have used the term MAST, but what exactly does it stand for?

MAST stands for "mobile application security testing", and it should be part of every organization's mobile application development process. It consists of a combination of tools that can run automatically throughout the build and testing phases of the SDLC. Most of these tools can be seamlessly integrated into CI/CD and, based on their output, we can let the build fail for not meeting the security requirements.

Speaking about tools, let’s do a quick rundown of each tool category:

  • Static application security testing (SAST) scans the application source code to identify vulnerabilities. These tools can run very early in your CI/CD, or even as an IDE plugin while you code, and are considered "white box" testing.
  • Dynamic application security testing (DAST) checks security at runtime by running common attack types against the running app. Compared to SAST, these tools can only be used much later in the development process and are an example of "black box" testing.
  • Interactive application security testing (IAST) checks security at runtime via application scanning and analysis of internal application flows. It is a blend of white-box and black-box testing, as it links the findings of DAST to the source code scanned by SAST.
  • Software composition analysis (SCA) tracks third-party code dependencies, which is useful when your solution relies on many open source libraries. Developers can use these tools to discover components, their supporting libraries, and indirect dependencies that might result in vulnerabilities and possible exploits.

Ideally, you would integrate these tools into your development pipeline in succession: SAST and SCA as early as possible (even in the IDE), IAST during the testing phase, and DAST against running builds later on.
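The "let the build fail" idea from earlier can be sketched as a thin CI wrapper around whatever scanner CLI your organization uses. The command shown in the comment is a placeholder, not a real tool; the only assumption is the common convention that scanners exit non-zero when they report findings.

```python
import subprocess

def security_gate(cmd: list[str]) -> int:
    """Run a security scanner CLI and gate the build on its exit code.

    Most SAST/secret-scanning tools exit non-zero when they find
    issues, so a CI step can simply propagate that as a build failure.
    Returns 0 (pass) or 1 (fail) for the CI system to act on.
    """
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print("Security scan reported findings -- failing the build:")
        print(result.stdout or result.stderr)
        return 1
    print("Security scan passed.")
    return 0

# Example (placeholder command -- substitute your actual scanner):
#   security_gate(["my-scanner", "--fail-on-findings", "src/"])
```

The same wrapper works for any of the tool categories above, which is what makes wiring them all into one pipeline practical.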

Conclusion

With the rise of mobile applications, it is vital to perform not only functional testing but security testing as well. Considering the magnitude of attack vectors, the only feasible way forward is to use automated testing tools that can perform scanning activities throughout the whole development phase.

Userlevel 6

What’s enough testing?

 


 

Testing: the term is often loosely used as a synonym for finding and fixing problems. In software development, testing means evaluating the product and verifying that it does what it is supposed to do.

But now a million-dollar question arises!

 

At what time do we have to stop testing, or what's enough testing?

 

The answer lies in two factors: the QA functionality and how scrupulously that functionality is applied in testing, because "quality matters more than quantity!"

 

“Enough testing” refers to the amount of testing which ensures that a software product meets the requirements as defined.

Testing is all about finding errors, and using the right test cases enables this. It increases the effectiveness of the work and the accuracy rate, but it cannot guarantee a 100% error-free result.

In my opinion, there are two main reasons why no software/application can be tested completely:

  1. For a particular input, there may be “n” possible tests, and
  2. there are too many paths through which the product can be tested.

Testing Myths:

  1. Thinking that what is covered in the client meeting is all you need to develop error-free software: we often assume that the points discussed in meetings are enough to achieve 100% accuracy, but this is wrong. There may be edge cases that were missed during testing.
  2. If testers believe their role is to verify that the product works perfectly, they are doing themselves an injustice, because in the real world no program is one hundred percent bug-free.

Most developers catch and fix the majority of their mistakes in a product before handing it over for testing, so it is the tester's role to find what remains. Suppose 1% is neglected by the developers because they assume end users would never take that route while using the product; as a tester, you can't make that assumption. The main goal is a stable, top-quality product.

Approaches to define Enough Testing:

  1. Time for testing ran out: the estimate provided to the client clearly states the time allotted for testing the program, and the work should be completed within it.
  2. Ideas discussed in the meetings: the test ideas given or discussed by the manager, team leads, or other members of the team may all have been covered.
  3. Diminishing returns: suppose you stopped finding bugs a while ago, and the remaining test ideas are hard to run and likely to yield little. Even if they did reveal errors, those errors would be small or would only occur in a complex, rare setup. It is not worth continuing to test.
  4. The ideas put on the table were out of scope or unrelated to the given work.

Conclusion: Although testing is a never-ending process, the project situation, the amount of work ready for delivery, and the project deadlines can be weighed to decide when to stop testing.

 

Userlevel 7
Badge +2

Hi @Kat - I am unable to post my blog article; it says there is a 30,000-character limit. I tried both Reply and Quote & Reply. MS Word says it has 5K characters including spaces. I have attached a screenshot for your reference. Please do the needful. Thank you so much for your support.

 

Hi, the problem may be the images, if they are too large. Please publish the article with fewer (or, better, no) pictures. 

Userlevel 1
Badge +1

Hi @Kat - Thank you for your response. I tried to publish the document without the image, as per your suggestion. It was submitted; however, I am unable to view it on this page. Please let me know what needs to be done. Thanks!

Userlevel 7
Badge +2

I messaged you 😊

Userlevel 7
Badge +2

The article is written by @arunkumardutta 

Introduction:

After working in the IT industry for over 17.5 years as a tester across many parts of the globe, and having published over 64 blog articles, contributed to 2 community books, and delivered 20+ international conference talks and 11 global webinars, I still feel that “what’s enough testing” is a very thought-provoking topic.

Back in August 2020, I published an article in EuroSTAR Huddle, “Testing everything is impossible” (https://huddle.eurostarsoftwaretesting.com/testing-everything-is-impossible/). In it, I wrote: “In today’s fast paced industry, where accelerating delivery is the only way to survive, testing everything or every combination is simply impossible. Even in most favourable cases, where we assume that we tested everything, there will always be a chance of missing something. In both cases, continuous monitoring will assist to uncover missing cases well in advance and proactively before the actual end-users are affected.”

I am glad that Tricentis ShiftSync is conducting its first Blogathon contest and has asked the testing community to pen down their thoughts on this interesting topic: “what’s enough testing”.

Enough testing- is simply Impossible:

There is no doubt that even at the last stages of the SDLC (Software Development Life Cycle) we cannot confirm that we have conducted enough testing and that no more testing is required. Nor can we say there will not be any more bugs or issues because we have already tested enough. Even in a software tester's dream, this is impossible.

If we look closely, there are so many possible combinations to test (many technology platforms, operating systems, browsers, devices, versions, and networks) that covering them all is simply unattainable. Yes, leveraging automation adds value and can save an incredible amount of time, but even then, testing all of these is not possible within a short sprint in accelerated delivery. Adding more resources is also not possible within the project budget. Test early, test often, test fast, test as much as possible, test even in production, and test the right things: that is the overall objective.

I personally think quality always matters more than quantity. So we should not dwell on what enough testing is and when we can say enough testing has been done; rather, we should think about the product's end users. As testers, our objective is to ensure that end users are happy with the different quality attributes: functionality, performance, security, usability, availability, reliability, and accessibility, for example.

Enough testing- No Data points rather Consent as a Team:

Simply put, there are no data points (nor will there ever be) for what enough testing is. "Enough testing" is just the consent of a group of different types of testers, after conducting different types of testing within a specified time frame, to go live. Ultimately, what I am trying to say is that there is no formula or set of data points for enough testing.

I think testers should take a destructive attitude towards the software product in order to make it stronger and better for its end users.

In the end, software products go live only when the product risks have been verified through different types of comprehensive testing and the whole group agrees to move to production.

Enough testing- Comprehensive Test Strategy Document can assist:

While we testers can never say that enough testing has been done, we can confirm whether an application is ready to go live once all our test ideas have been thoroughly exercised, followed by a discussion as a team.

Please note that testing should be everyone's job, and when I say everyone, I mean everyone from architects to developers, database folks to the different types of testers, product owners to the operations team to businesspeople.

Test ideas include all positive, negative, and applicable out-of-the-box scenarios, and all of them must be executed within the time specified in the test strategy document. So a comprehensive test strategy document helps you go live in a production environment, but it does not unfold what enough testing is: no matter how comprehensive the test ideas are, there is always a chance of missing some.

Enough testing- Continuous Monitoring can assist:

While enough testing is impossible to confirm, continuous monitoring in all its aspects can assist:

a. continuous project monitoring of the test process (progress, comparison, re-prioritization, confirmation);

b. continuous application performance monitoring: dashboards, trend charts, and automated alerts for proactively resolving issues across all application components, APIs, and interfaces before they impact end users;

c. continuous security monitoring: identifying security vulnerabilities, source-code issues, and attacks before they affect end users;

d. full-stack observability: full visibility across the whole live system.

For me, what "enough testing" means in today's world is full-stack observability to support end users proactively.

Conclusion:

Even after working as a passionate tester for so long, I don't know what enough testing is. For me, it is a combination of a comprehensive test strategy, collaboration as a whole team, and a move towards full-stack observability: ensuring an enhanced end-user experience and enduring in the market long enough to establish a brand.

About Arun Kumar Dutta

Arun has over 17 and a half years of experience managing end-to-end performance testing delivery. He has been selected for multiple international testing conferences and global webinars. His blogs have been published in different global testing forums. Additionally, he contributed to 2 community books with other amazing testers across the globe. He has also won various internal and external awards.

He currently works as an Associate Director in the Enterprise Performance & Resiliency Testing Practice at LTIMindtree.


Userlevel 7
Badge +2

IMPORTANT NOTICE: We want to ensure fairness for all participants in this blog contest. After careful consideration, we have decided to make some adjustments to the rules to enhance fairness and provide a better chance for every participant. Going forward, community votes will be combined with evaluations from our Tricentis expert board. This ensures a level playing field and a more impartial selection of the winning article. Good luck! 

Userlevel 6
Badge +2

This is a great thought. It will bring a lot of fairness to the blogs.

Userlevel 7
Badge +2

The Blogathon contest has come to an end. Thank you all for participating. 😊 We will announce the winners on Monday. 

Reply