Analyst View: Software engineering leaders must understand the potential of synthetic data
https://sdtimes.com/data/analyst-view-software-engineering-leaders-must-understand-the-potential-of-synthetic-data/ (May 17, 2024)

Synthetic data is a class of data artificially generated through advanced methods like machine learning that can be used when real-world data is unavailable. It offers a multitude of compelling advantages, such as its flexibility and control, which allows engineers to model a wide range of scenarios that might not be possible with production data.

Market awareness of synthetic data for software testing remains low, and software engineering leaders have yet to realize its potential. Gartner has found that 34% of software engineering leaders identify improving software quality as one of their top three performance objectives.

However, many software engineering leaders are inadequately equipped to achieve these objectives because their teams rely on antiquated development and testing strategies. These leaders should evaluate the feasibility of synthetic data to boost software quality and accelerate delivery.

Take Advantage of the Benefits of Synthetic Data

While market awareness of synthetic data is generally low, it is rising, and compared to large language models, synthetic data generation is a relatively mature market. Synthetically generated data for software testing offers a number of benefits, including:

  • Security and compliance: Synthetic data mitigates the risk of exposing sensitive or confidential information, helping teams comply with data privacy regulations.
  • Reliability: Synthetic data allows control over specific data characteristics, such as age, income or location, to match target customer demographics. Software engineers can generate data that fits their product's testing needs, and update it as use cases change. Once generated, datasets can be retained for reliable, repeatable testing scenarios.
  • Customization: Synthetic data generation techniques and platforms provide customization capabilities to include diverse data patterns and edge cases. Because the data is artificially generated, test data can be made available even when a feature has no production data, making it possible to test new features and improving test coverage.
  • Data on demand: Quality engineers can create any volume of data they need without the limitations or delays associated with real-world data acquisition. This is particularly valuable for testing features with limited real-world data or for large-scale performance testing.
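To make the "data on demand" point concrete, here is a minimal Python sketch that generates reproducible synthetic customer records and appends an edge case that production data might never contain. The field names, value ranges, and distribution parameters are illustrative assumptions, not a reference schema.

```python
import random
import string

def synthetic_customers(n, seed=42):
    """Yield reproducible synthetic customer records for testing."""
    rng = random.Random(seed)  # fixed seed -> the same dataset on every run
    regions = ["NA", "EMEA", "APAC", "LATAM"]
    for _ in range(n):
        yield {
            "customer_id": "".join(rng.choices(string.ascii_uppercase + string.digits, k=8)),
            "age": rng.randint(18, 90),
            "income": round(rng.lognormvariate(10.5, 0.6), 2),
            "region": rng.choice(regions),
        }

records = list(synthetic_customers(1000))
# Edge cases that production data may never contain can simply be appended.
records.append({"customer_id": "EDGE0001", "age": 17, "income": 0.0, "region": "APAC"})
```

Because the generator is seeded, every test run sees the same dataset, which is what makes synthetic data usable for repeatable regression scenarios.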

Software engineering leaders can enhance development cycle efficiency by strategically transitioning to synthetic data for testing. This enables teams to conduct secure, efficient and comprehensive tests, resulting in high-quality software.

Calculate ROI for Using Synthetic Data for Software Testing

Today’s challenging economic climate is driving companies to prioritize cost-cutting initiatives, with ROI meticulously examined before any investment is made. While the benefits of using synthetic data are evident, it’s essential to delve into the costs organizations may encounter during its implementation.

It is vital to build an ROI case that outlines the strategic significance, expected returns and methods for mitigating risks, in order to generate the requisite support and secure budget for a synthetic data investment.

To accurately determine ROI, software engineering leaders should include non-financial benefits such as improved compliance, data security, and innovation. Benchmark ROI against other investment opportunities to determine the best allocation of capital. Reassess ROI yearly as actual data comes in and update projections to reflect any changes.
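The arithmetic itself is simple; the hard part is estimating the inputs. A minimal sketch, with placeholder figures only, computing ROI as (benefits - costs) / costs:

```python
def roi(total_benefits, total_costs):
    """ROI as a fraction of cost: (benefits - costs) / costs."""
    return (total_benefits - total_costs) / total_costs

# Placeholder figures -- substitute your own estimates. Non-financial
# benefits (compliance, security, innovation) must be monetized to count here.
costs = 120_000     # tooling, integration and training for synthetic data
benefits = 190_000  # e.g. saved tester hours plus avoided compliance exposure
print(f"ROI: {roi(benefits, costs):.0%}")  # ROI: 58%
```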

Bad habits that stop engineering teams from high-performance
https://sdtimes.com/agile/bad-habits-that-stop-engineering-teams-from-high-performance/ (Feb 26, 2024)

I’ve been working in and managing Agile engineering teams for over a decade, and whilst I won’t profess to know everything you should be doing, I can share some insight on things you definitely should not be doing. All learned from screwups, I might add.

You don't write out the story

You'll find excuses, like "Oh, I'll get back to it later," or "Come on, it's half a point; everyone knows what to do." Don't do it.

Realize as you spout these self-platitudes that you are being an arse – not to me, but to future-you and future-you’s team. That’s not cool. Write out the story. It’ll take you two minutes, but it’ll force you to think about what you actually want to get out of this effort and why. That’s rather important in most endeavors.

You only talk at stand-up

I once worked at a job like this and quit after about three months because it was utterly soul-destroying. Most humans want to work in a team, so find a way to work as one. Giving a two-minute, fact-based update in a 15-minute meeting once a day doesn't cut it, and you risk losing the half (or all) of your team who feel isolated.

Communication is hard. So is software development. So, the idea that we all wander off into our silos for 24 hours once standup is done, and nothing will still be unclear, hard or confusing in the meantime, is just plain silly.

If your team isn’t talking a lot during the day, it might mean they’re all super-humans. Or, more likely, your culture is bad, and they’re afraid or unwilling to communicate. 

Some things that I’ve seen work well to overcome this are Perma-calls, Kick-off chats and setting clear expectations for what junior devs should do when blocked.

Planning sessions of 2.5 hours

Your workload is not an impossible-to-plan anomaly. You do not need several hours to agree on what is coming into the sprint for the next week or two. What is actually happening is that you’re doing planning wrong. 

Instead of a chat that sounds like, “Let’s do these {n} things in the sprint, any concerns/emergencies/issues?” – what is almost definitely happening is that you’re discovering a bunch of new information in the planning session, which is leading to a re-refinement (or first refinement if you’re terrible) of the work.

Instead, do refinements. I won't go into how to do a refinement; go Google it. But please do them. One of the hardest parts of Agile development is getting what to work on (and why) properly defined for the whole team. Focus on "defined and aligned." If your team hasn't defined what success means and isn't in full agreement, the story shouldn't be in planning. Send it back to the refinement stage.

PM-&-lead-only work-scoping

Delegation is easy, but it’s the thing I see most people screw up most often, myself very much included. But it must be overcome if you want a team to get good at planning, scoping and estimating work.

To be explicit:

  • Everyone in the team should be involved in scoping out tickets before the refinement.
  • Everyone in the team should be actively involved in the refinement session itself.

When teams don't do this, they're missing out on a bunch of things: experience gained for junior devs, seniors learning to better explain and share their thoughts, and the team internalizing the code as something they own.

Delaying releases until all the stories are done

If you're not delivering "continuously", then please go back to 2003. We're not FTPing our files onto production servers as a deployment process anymore.

The faster you integrate code (i.e. make it part of the main branch), the earlier your team discovers divergences between the code they're writing and everyone else's. The longer this time-to-integrate is, the further things will have diverged, and thus, the more time will be wasted picking it apart.

The faster you deploy your code, the quicker your work is getting out to your customers, and the sooner you’ll know (if you have a robust error monitoring setup at least) whether you’ve introduced a new bug as part of the work, meaning the time-to-fix is vastly reduced. A nice side benefit is that the less that has been queued up to deploy, the smaller the “deploy diff” will be. That’s going to make it a heck of a lot easier to fix, in my experience.

Bad excuses I've heard over the years as to why you can't do this include "it takes up too much time" (deploying should be a click of a button), "it's not all ready yet" (meaning you're planning your work wrong), or "we're not allowed to because of X regulation." For the latter, read The Phoenix Project for some good lessons on how to address concerns here.

Caveat: Sometimes it’s physically not possible to do regular deployments, like if you’re writing software for a cruise missile. But otherwise, if SpaceX can deliver software to their satellites every week, you can do continuous delivery, mate.

We’ll add the tests later

Add them now or admit (ideally in a signed pact with the devil, written in your firstborn’s blood) that you just don’t care about whether your code works or not.

I suspect that if I have to argue this point further, then you’re already too far gone to be saved, but put succinctly-ish: untested code is code that probably doesn’t work. It’s impossible to know if it does work because the tests that should describe the functionality aren’t there. It’s also impossible to change because you don’t know what you’re breaking when you change it. Untested code is immediately legacy code that must either be immediately fixed (with tests) or completely replaced.
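As a minimal illustration of "add them now," here is what a first pytest file might look like, written alongside the feature rather than deferred. The discount function is a hypothetical stand-in for whatever you are actually building.

```python
# test_discounts.py -- written alongside the feature, not "later".
import pytest

def apply_discount(price, percent):
    """The unit under test; a stand-in for your real function."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_typical_discount():
    assert apply_discount(100.0, 20) == 80.0

def test_zero_discount_is_identity():
    assert apply_discount(59.99, 0) == 59.99

def test_invalid_discount_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, 120)
```

These tests describe the functionality, which is exactly what makes the code safe to change later.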

Report: 85% of CEOs may not be properly testing software before its release
https://sdtimes.com/software-testing/report-85-of-ceos-may-not-be-properly-testing-software-before-its-release/ (June 30, 2022)

New research released by the no-code software test automation company Leapwork has revealed that 85% of U.S. CEOs do not see a problem with releasing software that has not been properly tested, so long as it is patch tested later. On top of this, 79% of testers reported that 40% of software is sent to market without sufficient testing being completed.  

Ultimately, this has led to 52% of testers claiming that their teams spend 5 to 10 days per year patching software. 

The report also showed that despite the majority of testers expressing concern that insufficiently tested software is going to market, 94% of CEOs still say that they are confident that their software is tested regularly.

Additionally, 95% of CEOs and 76% of testers surveyed reported concerns about losing their jobs in the wake of a software failure. Both groups also agreed that insufficiently tested software poses a risk to the company as a whole, with 77% of CEOs saying that software failures have harmed their company's reputation in the last 5 years.

“Our research shows the widespread issues that exist in software testing today. While CEOs and testers understand the consequences of releasing software that hasn’t been tested properly, an alarming number still think it’s acceptable to issue it and prefer to rely on patch testing afterwards to fix any problems,” said Christian Brink Frederiksen, co-founder and CEO at Leapwork. “This often comes down to not thinking there is a viable option and choosing speed over stability – a devil’s dilemma. But what’s more concerning is the disconnect between CEOs and their developer teams, indicating that testing issues are falling under the radar and not being escalated until it’s too late.”

When asked why software was not being properly tested before it was released, 39% of CEOs cited 'reliance on manual testing' as the main reason. However, many testers blamed a failure to invest in test automation, with only 43% claiming that they use some element of automation.

Testers also reported that there is a lack of time (34%), and an inability to test all software because of the increased frequency of development (29%). 

CEOs and testers both found fault in the lack of skilled developers, with 34% and 42% respectively citing this as a key issue.

Lastly, more than one third of CEOs said that the ‘underinvestment in testing personnel including continuous professional development’ is the main reason why software is not tested properly.

“We’ve seen the implications of huge software failures in the news, so on the current trajectory, more and more companies will struggle with failures and outages which could cost them a significant amount in financial and reputational damage. Businesses need to urgently consider a different approach and embrace no code test automation systems that don’t require coding skills and free up their skilled teams to focus on the most high-value tasks,” Frederiksen said.

 

Software test automation for the survival of business
https://sdtimes.com/test/software-test-automation-for-the-survival-of-business/ (July 6, 2021)

In this two part series, we explore the two sides of testing: automated and manual. In this article, we examine why automated testing should be done. To read the other side of the argument, go here

In today’s business environment, stakeholders rely on their enterprise applications to work quickly and efficiently, with absolutely no downtime. Anything short of that could result in a slew of business performance issues and ultimately lost revenue. Take the recent incident in which CDN provider Fastly failed to detect a software bug which resulted in massive global outages for government agencies, news outlets and other vital institutions. 

Effective and thorough testing is mission-critical for software development across categories including business software, consumer applications and IoT solutions. But as continuous deployment demands ramp up and companies face an ongoing tech talent shortage, inefficient software testing has become a serious pain point for enterprise developers, and they’ve needed to rely on new technologies to improve the process.

The Benefits of Test Automation

As with many other disciplines, the key to quickly implementing continuous software development and deployment is robust automation. Converting manual tests to automated tests not only reduces the amount of time it takes to test, but also reduces the chance of human error and lets fewer defects escape into production. Just by converting manual testing to automated testing, companies can compress three to four days of manual testing into a single eight-hour overnight session, which means testing does not even have to happen during peak usage hours.

Automation solutions also allow organizations to test more per cycle in less time by running tests across distributed functional testing infrastructures and in parallel with cross-browser and cross-device mobile testing. Furthermore, if a team lacks mobile devices to test on, it can leverage solutions to enable devices and emulators to be controlled through an enterprise-wide mobile lab manager.
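As a hedged sketch of what parallel, cross-browser execution can look like, here is a pytest/Selenium test parametrized over two browsers. The Grid hub URL is a hypothetical placeholder, and parallelism comes from pytest-xdist rather than anything in the test itself.

```python
# test_homepage.py -- each parametrized case can run in parallel via pytest-xdist:
#   pytest -n auto test_homepage.py
import pytest
from selenium import webdriver
from selenium.webdriver.chrome.options import Options as ChromeOptions
from selenium.webdriver.firefox.options import Options as FirefoxOptions

HUB_URL = "http://selenium-hub.internal:4444/wd/hub"  # hypothetical Grid endpoint

@pytest.mark.parametrize("make_options", [ChromeOptions, FirefoxOptions])
def test_homepage_loads(make_options):
    # One test body, many browsers: the Grid routes each session to a node.
    driver = webdriver.Remote(command_executor=HUB_URL, options=make_options())
    try:
        driver.get("https://example.com")
        assert "Example" in driver.title
    finally:
        driver.quit()
```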

Challenges in Test Automation

Despite all the benefits of automated software testing, many companies are still facing challenges that prevent them from reaping the full benefits of automation. One of those key challenges is managing the complexities of today’s software testing environment, with an increasing pace of releases and proliferation of platforms on which applications need to run (native Android, native iOS, mobile browsers, desktop browsers, etc.). With so many conflicting specifications and platform-specific features, there are many more requirements for automated testing – meaning there are just as many potential pitfalls.

Software releases and application upgrades are also happening at a much quicker pace in recent years. The faster rollout of software releases, while necessary, can break test automation scripts due to fragile, properties-based object identification, or even worse, bitmap-based identification. Due to the varying properties across platforms, tests must be properly replicated and administered on each platform – which can take immense time and effort.
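The fragility described above is easy to see in locator code. A brief Selenium sketch, against a hypothetical login page, contrasting a positional XPath (which a re-layout silently breaks) with a locator anchored to a stable attribute the team controls:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # hypothetical page under test

# Fragile: a positional XPath breaks the moment the page layout shifts.
fragile = (By.XPATH, "/html/body/div[3]/div[2]/form/input[1]")

# More resilient: anchored to a stable attribute the team deliberately maintains.
resilient = (By.CSS_SELECTOR, "[data-testid='login-email']")

driver.find_element(*resilient).send_keys("tester@example.com")
driver.quit()
```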

Therefore, robust and effective test automation also requires an elevated skill set, especially in today's complex, multi-ecosystem application environment. Record-and-playback testing, a technique in which a tool records a tester's interactions and executes them many times over, is no longer sufficient.

With all of these challenges to navigate, including how difficult it can be to find the right talent, how can companies increase release frequency without sacrificing quality and security?

Ensuring Robust Automation with Artificial Intelligence

To meet the high demands of software testing, automation must be coupled with Artificial Intelligence (AI). Truly robust automation must be resilient, and must not depend on product code being complete before tests can be created. It must be well-integrated into an organization's product pipelines, adequately data-driven and in full alignment with the business logic.

Organizations can allow quality assurance teams to begin testing earlier – even in the mock-up phase – through AI-enabled capabilities that create a single script that automatically executes on multiple platforms, devices and browsers. With AI alone, companies can experience major increases in test design speed as well as significant decreases in maintenance costs.

Furthermore, with the proliferation of low-code/no-code solutions, AI-infused test automation is even more critical for ensuring product quality. Solutions that infuse AI object recognition can enable test automation to be created from mockups, facilitating test automation in the pipeline even before product code has been generated or configured. These systems can provide immediate feedback once products are initially released into their first environments, providing for more resilient, successful software releases.

To remain competitive, all businesses need to be as productive and efficient as possible, and the key to that lies in properly tested, functioning, performant enterprise applications. Cumbersome manual testing is no longer sufficient, and enterprises that continue to rely on it will be caught flat-footed, outperformed and out-innovated. Investing in automation and AI-powered development tools will give enterprises the edge they need to stay ahead of the competition.

The Open Testing Platform
https://sdtimes.com/test/the-open-testing-platform/ (Jan 13, 2021)

This is a rather unique time in the evolution of software testing.  Teams worldwide are facing new challenges associated with working from home. Digital transformation initiatives are placing unprecedented pressure on innovation.  Speed is the new currency for software development and testing. The penalty for software failure is at an all-time high as news of outages and end-user frustration go viral on social media. Open-source point tools are good at steering interfaces but are not a complete solution for test automation.

Meanwhile, testers are being asked to do more while reducing costs.

Now is the time to re-think the software testing life cycle with an eye towards more comprehensive automation. Testing organizations need a platform that enables incremental process improvement, and data curated for the purpose of optimizing software testing must be at the center of this solution. Organizations that leverage multiple open-source or proprietary testing tools must consider an Open Testing Platform to keep pace with Agile and enterprise DevOps initiatives.   

What is an Open Testing Platform?
An Open Testing Platform (OTP) is a collaboration hub that helps testers keep pace with change. It transforms observations into action – enabling organizations to inform testers about critical environment and system changes, act upon observations to zero in on what precisely needs to be tested, and automate the acquisition of test data required for effective test coverage.

RELATED CONTENT:
Testing tools deliver quality – NOT!
The de-evolution of software testing

The most important feature of an Open Testing Platform is that it taps essential information across the application development and delivery ecosystem to effectively test software. Beyond accessing an API, an OTP leverages an organization’s existing infrastructure tools without causing disruption—unlocking valuable data across the infrastructure. An OTP allows any tester (technical or non-technical) to access data, correlate observations and automate action. 

Model in the middle
At the core of an Open Testing Platform is a model. The model is an abstracted representation of the transactions that are strategic to the business. The model can represent new user stories that are in-flight, system transactions that are critical for business continuity, and flows that are pivotal for the end-user experience.

In an OTP, the model is also the centerpiece for collaboration. All tasks and data observations either optimize the value of the model or ensure that the tests generated from the model can execute without interruption.  Since an OTP is focused on the software testing life cycle, we can take advantage of known usage patterns and create workflows to accelerate testing. For example, with a stable model at the core of the testing activity:

  •   The impact of change is visualized and shared across teams
  •   The demand for test data is established by the model and reused for team members
  •   The validation data sets are fit to the logic identified by the model
  •   The prioritization of test runs can dynamically fit the stage of the process for each team, optimizing for vectors such as speed, change, business-risk, maintenance, etc.

Models allow teams to identify critical change impacts quickly and visually. And since models express test logic abstracted from independent applications or services, they also provide context to help testers collaborate across team boundaries.
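To make the "model in the middle" idea concrete, here is a toy sketch in which business flows form a small directed graph and test paths are enumerated from it. Real OTP models carry far more metadata (test data demands, validation sets, priorities), and the flow names here are invented.

```python
# A toy "model in the middle": strategic business flows as a directed graph,
# from which concrete end-to-end test paths are enumerated.
MODEL = {
    "start":       ["search", "login"],
    "login":       ["search"],
    "search":      ["add_to_cart"],
    "add_to_cart": ["checkout"],
    "checkout":    [],
}

def paths(node="start", trail=None):
    """Yield every path from `node` to a terminal step."""
    trail = (trail or []) + [node]
    successors = MODEL[node]
    if not successors:
        yield trail
    for nxt in successors:
        yield from paths(nxt, trail)

for p in paths():
    print(" -> ".join(p))
# start -> search -> add_to_cart -> checkout
# start -> login -> search -> add_to_cart -> checkout
```

Because tests are derived from the model rather than hand-written scripts, a change to one flow step regenerates every affected path instead of breaking them.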

Data curated for testing software
Automation must be driven by data. An infrastructure that can access real-time observations as well as reference a historical baseline is required to understand the impact of change. Accessing data within the software testing life cycle does not have to be intrusive or depend on a complex array of proprietary agents deployed across an environment. In an overwhelming majority of use cases, accessing data via an API provides enough depth and detail to achieve significant productivity gains.  Furthermore, accessing data via an API from the current monitoring or management infrastructure systems eliminates the need for additional scripts or code that require maintenance and interfere with overall system performance.

 Many of the data points required to optimize the process of testing exist, but they are scattered across an array of monitoring and infrastructure management tools such as Application Performance Monitoring (APM), Version Control, Agile Requirements Management, Test Management, Web Analytics, Defect Management, API Management, etc.

An Open Testing Platform curates data for software testing by applying known patterns and machine learning to expose change. This new learning system turns observations into action to improve the effectiveness of testing and accelerate release cycles. 
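A hedged sketch of what such curation might look like in practice: pull observations from two monitoring APIs and correlate them to find heavily used transactions that touch recently changed services. Every endpoint and payload field below is hypothetical; real APM and version-control APIs differ in names, authentication and shape.

```python
import requests

# Hypothetical endpoints -- most monitoring tools expose something similar,
# but the URLs, auth and payload shapes will differ in your environment.
APM_URL = "https://apm.internal/api/v1/transactions?window=24h"
VCS_URL = "https://vcs.internal/api/v1/changes?since=last_release"

apm = requests.get(APM_URL, timeout=10).json()
vcs = requests.get(VCS_URL, timeout=10).json()

# Correlate: which heavily used transactions touch recently changed services?
changed_services = {c["service"] for c in vcs["changes"]}
hot_spots = [t for t in apm["transactions"]
             if t["service"] in changed_services and t["calls_per_hour"] > 1000]

for t in sorted(hot_spots, key=lambda t: -t["calls_per_hour"]):
    print(f"retest first: {t['name']} ({t['calls_per_hour']}/h)")
```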

Why is an Open Testing Platform required today?
Despite industry leaders trying to posture software testing as value-added, the fact is that an overwhelming majority of organizations identify testing as a cost center. The software testing life cycle is a rich target for automation since any costs eliminated from testing can be leveraged for more innovative initiatives.

If you look at industry trends in automation for software testing, automating test case development hovers around 30%.  If you assess the level of automation across all facets of the software testing life cycle, then automation averages about 20%.  This low average automation rate highlights that testing still requires a high degree of manual intervention which slows the software testing process and therefore delays software release cycles.

But why have automation rates remained so low for software testing when initiatives like DevOps have focused on accelerating the release cycle? There are four core issues that have impacted automation rates:

  •   Years of outsourcing depleted internal testing skills
  •   Testers had limited access to critical information
  •   Test tools created siloes
  •   Environment changes hampered automation

Outsourcing depleted internal testing skills
The general concept here is that senior managers traded domestic, internal expertise in business and testing processes for offshore labor, reducing Opex. With this practice, known as labor arbitrage, an organization could reduce headcount and shift the responsibility for software testing to an army of outsourced resources trained on the task of software testing. This shift to outsourcing had three main detrimental impacts on software testing: the model promoted manual task execution, the adoption of automation was sidelined, and there was a business process "brain-drain," or knowledge drain.

With the expansion of Agile and the adoption of enterprise DevOps, organizations must execute the software testing life cycle rapidly and effectively. Organizations will need to consider tightly integrating the software testing life cycle within the development cycle, which will challenge organizations using an offshore model for testing. Teams must also think beyond the simple bottom-up approach to testing and re-invent the software testing life cycle to meet the increasing demands of the business.

Testers had limited access to critical information 
Perhaps the greatest challenge facing individuals responsible for software testing is staying informed about change. This can be requirements-driven changes to dependent applications or services, changes in usage patterns, or late changes in the release plan which impact the testers' ability to react within the required timelines.

Interestingly, most of the data required for testers to do their job is available in the monitoring and infrastructure management tools across production and pre-production. However, this information just isn’t aggregated and optimized for the purpose of software testing. Access to APIs and advancements in the ability to manage and analyze big data changes this dynamic in favor of testers. 

Test tools created siloes
Although each organization is structurally and culturally unique, the one commonality found among Agile teams is that the practice of testing software has become siloed. The silo is usually constrained to the team or constrained to a single application that might be built by multiple teams. These constraints create barriers since tests must execute across componentized and distributed system architectures.

Ubiquitous access to best-of-breed open-source and proprietary tools also contributed to these silos. Point tools became very good at driving automated tests. However, test logic became trapped as scripts across an array of tools. Giving self-governing teams the freedom to adopt a broad array of tools comes at a cost:  a significant degree of redundancy, limited understanding of coverage across silos, and a high amount of test maintenance. 

The good news is that point tools (both open-source and proprietary) have become reliable at driving automation. However, what's missing today is an Open Testing Platform that helps drive productivity across teams and their independent testing tools.

Environment changes hampered automation
Remarkably, the automated development of tests hovers at about 30%, but the automated execution of tests is half that rate, at 15%. This means that tests that are built to be automated are not likely to be executed automatically – manual intervention is still required. Why? Because it takes more than the ability to steer a test for automation to yield results. For an automated test to run automatically, you need:

  •   Access to a test environment
  •   A clean environment, configured specifically for the scope of tests to be executed
  •   Access to compliant test data
  •   Validation assertions synchronized for the test data and logic

 As a result, individuals who are responsible for testing need awareness of broader environment data points located throughout the pre-production environment. Without automating the sub-tasks across the software testing life cycle, test automation will continue to have anemic results.

An Open Testing Platform levels the playing field 
Despite the hampered evolution of test automation, testers and software development engineers in test (SDETs) are being asked to do more than ever before. As systems become more distributed and complex, the challenges associated with testing compound. Yet the same individuals are under pressure to support new applications and new technologies – all while facing a distinct increase in the frequency of application changes and releases. Something has got to change.

An Open Testing Platform gives software testers the information and workflow automation tools to make open-source and proprietary testing point tools more productive in light of constant change.  An OTP provides a layer of abstraction on top of the teams’ point testing tools, optimizing the sub-tasks that are required to generate effective test scripts or no-code tests. This approach gives organizations an amazing degree of flexibility while significantly lowering the cost to construct and maintain tests.

 An Open Testing Platform is a critical enabler to both the speed and effectiveness of testing.  The OTP follows a prescriptive pattern to assist an organization to continuously improve the software testing life cycle.  This pattern is ‘inform, act and automate.’ An OTP offers immediate value to an organization by giving teams the missing infrastructure to effectively manage change. 

The value of an Open Testing Platform

Inform the team as change happens
What delays software testing? Change, specifically late changes that were not promptly communicated to the team responsible for testing. One of the big differentiators for an Open Testing Platform is the ability to observe and correlate a diverse set of data points and inform the team of critical changes as change happens. An OTP automatically analyzes data to alert the team of specific changes that impact the current release cycle.

 Act on observations

Identifying and communicating change is critically important, but an Open Testing Platform has the most impact when testers are triggered to act. In some cases, observed changes can automatically update the test suite, test execution priority or surrounding sub-tasks associated with software testing. Common optimizations such as risk-based prioritization or change-based prioritization of test execution can be automatically triggered by the CI/CD pipeline. Other triggers to act are presented within the model-based interface as recommendations based on known software testing algorithms.
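A minimal sketch of change-based prioritization, assuming a coverage map from tests to source files and a simple failure history; the scoring weights are illustrative, not a published algorithm.

```python
def prioritize(tests, changed_files, failure_history):
    """Order tests so change-covering, historically failing tests run first.

    `tests` maps test name -> set of source files it covers.
    """
    def score(name):
        coverage_hits = len(tests[name] & changed_files)  # touches the change?
        past_failures = failure_history.get(name, 0)      # has it failed before?
        return 3 * coverage_hits + past_failures          # illustrative weights
    return sorted(tests, key=score, reverse=True)

tests = {
    "test_checkout": {"cart.py", "payment.py"},
    "test_search":   {"search.py"},
    "test_profile":  {"user.py"},
}
order = prioritize(tests,
                   changed_files={"payment.py"},
                   failure_history={"test_search": 2})
print(order)  # ['test_checkout', 'test_search', 'test_profile']
```

In an OTP, the `changed_files` and `failure_history` inputs would come from the curated observations described above rather than being supplied by hand.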

Automate software testing tasks 
When people speak of "automation" in software testing, they are typically speaking about automating test logic against a UI or API. The scope of what can be automated in the software testing life cycle (STLC), however, goes far beyond the test itself. Automation patterns can be applied to:

  •   Requirements analysis
  •   Test planning
  •   Test data
  •   Environment provisioning
  •   Test prioritization
  •   Test execution
  •   Test execution analysis
  •   Test process optimization

Key business benefits of an Open Testing Platform
By automating functions within the software testing life cycle, or augmenting them with automation, an Open Testing Platform can provide significant business benefits to an organization. For example:

  • Accelerating testing will improve release cycles
  • Bringing together data that had previously been siloed allows more complete insight
  • Increasing the speed and consistency of test execution builds trust in the process
  • Identifying issues early improves capacity
  • Automating repetitive tasks allows teams to focus on higher-value optimization
  • Eliminating mundane work enables humans to focus on higher-order problems, yielding greater productivity and better morale

Software testing tools have evolved to deliver dependable "raw automation," meaning that the ability to steer an application automatically is sustainable with either open-source or commercial tools. If you look across published industry research, you will find that software testing organizations report test automation rates to be (on average) 30%. These same organizations also report that automated test execution is (on average) 16%. The gap between the creation of an automated test and the ability to execute it automatically lies in the many manual tasks required to run the test. Software testing will always be a delay in the release process if organizations cannot close this gap.

Automation is not as easy as applying automated techniques for each of the software testing life cycle sub-processes.  There are really three core challenges that need to be addressed:

  1. Testers need to be informed about changes that impact testing efforts. This requires interrogating the array of monitoring and infrastructure tools and curating the data that impacts testing.
  2. Testers need to be able to act on changes as fast as possible. This means that business rules will automatically augment the model that drives testing – allowing the team to test more effectively.
  3. Testers need to be able to automate the sub-tasks that exist throughout the software testing lifecycle.  Automation must be flexible to accommodate each team need yet simple enough to make incremental changes as the environment and infrastructure shifts.

Software testing needs to begin its own digital transformation journey. Just as digital transformation initiatives are not tool initiatives, the transformation to sustainable continuous testing will require a shift in mindset.  This is not shift-left.  This is not shift-right. It is really the first step towards Software Quality Governance.  Organizations that leverage multiple open-source or proprietary testing tools must consider an Open Testing Platform to keep pace with Agile and enterprise DevOps initiatives.  

SD Times Open-Source Project of the Week: VHS
https://sdtimes.com/open-source/sd-times-open-source-project-of-the-week-vhs/ (Dec 18, 2020)

Performance testing company StormForge has launched a new open-source project designed to improve and advance application performance and optimization test creation. The project, VHS, records live traffic to test performance against "reality instead of just an educated guess," Noah Abrahams, open source advocate at StormForge, explained in a post.

“VHS started as a project that filled a need related to our performance testing and optimization portfolio, namely, accurate load generation,” Abrahams said. “Our mission as a company is to extend the concept of application performance from being a reactive mindset focused on operations teams, to a proactive, automatic and continuous process that includes and empowers the application developers themselves. Part of that mission is ensuring that developers in the community are not only aware that proactive solutions are available to them, but that they’re able to contribute and help build tomorrow’s application performance solutions.”

According to the company, current methods for recording and replaying app traffic did not provide clear or precise enough results. VHS aims to provide load generation aligned with actual live production traffic to better support performance testing and traffic forecasting.

As part of the community-driven project initiative, StormForge is asking the open-source community to help rename the project in Q1 of 2021. “The name VHS wouldn’t be particularly easy to find in a Google search, anyway, and the acronym is already taken in most places that matter, so the rename will be happening sooner rather than later,” Abrahams wrote. 

App testing: how companies are getting it right — and wrong
https://sdtimes.com/test/app-testing-how-companies-are-getting-it-right-and-wrong/ (Nov 20, 2020)

As we enter the fourth quarter of an explosively eventful year, important trends are emerging within the app testing industry – trends that will surely extend into 2021.

The most important is the accelerated pace at which companies are moving to the cloud. The speed-up is being driven by the need to support remote teams that no longer have physical access to in-house device labs due to COVID-19. This move was driven by the pandemic, but it will have benefits that extend beyond the current state of affairs. Remote work is here to stay, and having a test infrastructure in the cloud allows anywhere, anytime access, which can quickly translate into productivity.

A second trend is an increase in the speed at which teams are moving to automate their testing. While manual testing will still play an important role – not everything can be automated – it’s clear that automation is crucial for companies that want to scale the quick release of new versions without compromising quality. 

Speed vs. quality: A false choice
The quality bar has been set very high by industry leaders, and the days of moving fast and breaking things are long gone. In fact, "breaking things" – releasing code that has not been properly tested – can have horrendous consequences. For example, a software error at Knight Capital Group resulted in a roughly $440 million loss that nearly bankrupted the firm and forced its sale. Provident Financial Group lost $2.2 billion in market value due to an app failure. These are extreme cases of what can go wrong when companies release buggy code, but untested code hurts many more companies in ways that don't make the headlines.

Today’s users are unforgiving, and bugs can kill any momentum an app may have. According to one survey, a single negative review drives away 22 percent of prospective customers, and three bad reviews lead to a loss of almost 60 percent. Nonetheless, many companies still feel they need to choose between quality and speed. All too often, quality loses the battle. This can mean rushing the testing teams, or it can mean limiting the scope of testing and ignoring the wide variety of devices used around the world. Either way, the result is unhappy users, negative reviews, poor sales and ultimately poor financial performance. 

There are two best practices that can address the speed vs. quality challenge. The first is automating as much of the testing process as possible. Automation doesn’t replace human judgment. Rather, it frees test engineers from repetitive, time-consuming tasks so they can do a better job. 

A second best practice is breaking down silos and eliminating the “toss-it-over-the-wall” attitude towards testing. Instead of receiving finished code, test engineers should work hand-in-hand with developers in an agile fashion while features are being developed. This ensures that quality is built into the product rather than bolted on as an afterthought. 

The automation scorecard
At BrowserStack, we have classified companies into innovators and late adopters of automation. The results clearly indicate the value of automation. Specifically, innovators:

  • run 6X fewer manual tests
  • run 12X more tests per day
  • produce 40X more builds per day
  • produce each build 9X faster and 5X smaller
  • have failure rates that are 4X lower

To summarize, innovators produce more builds per day, run more tests with more coverage, and have lower failure rates. 

Speed and quality can co-exist. Netflix and Amazon, for example, release code hundreds of times every day without introducing severe bugs. A combination of collaboration and automation is behind that success, and these best practices are available to any company that wants to eliminate developer pain and boost quality output.

Testing tools deliver quality – NOT!
https://sdtimes.com/test/testing-tools-deliver-quality-not/ (Nov 19, 2020)

I was recently hired to do an in-depth analysis of the software testing tool marketplace. By the way, there are more tools in the software testing space than in a do-it-yourself home improvement warehouse. Given this opportunity to survey a broad set of software testing tool vendors, it was pretty interesting to look at the promises they make to the market. These promises can be split up into four general categories:

  • We provide better quality
  • We have AI and we are smarter than you
  • We allow you to do things faster
  • We are open-source – give it a go

What struck me most was the very large swath of software testing tool vendors who are selling the idea of delivering or providing "quality." To put this into a pointed analogy: claiming that a testing tool provides quality is like claiming that COVID testing prevents you from being infected. The fact is, when a testing tool finds a defect, "quality" has already been compromised, just as when you receive a positive COVID test, you are already infected.

Let’s get this next argument out of the way.  Yes, testing is critical in the quality process; however the tool that detects the defect DOES NOT deliver quality.  Back to the COVID test analogy, the action of wearing masks and limiting your exposure to the public prevents the spread of the infection. A COVID test can assist you to make a downstream decision to quarantine in order to stop the spread of infection or an upstream decision to be more vigilant in wearing a mask or limiting your exposure to high-risk environments.  I’m going to drop the COVID example at this point out of sheer exhaustion on the topic.  

But let’s continue the analogy with weight loss – a very popular topic as we approach the holidays.  Software testing is like a scale, it can give you an assessment of your weight.  Software delivery is like the pair of pants you want to wear over the holidays. Weighing yourself is a pretty good indicator of your chances to fit into the pair of pants at a particular point in time.  

Using the body weight analogy is interesting because a single scale might not give you all the information you need, and you might have the option to wear a different pair of pants.  Let me unpack this a bit.  

The scale(s)
We cannot rely on a single measurement nor a single instance of that measurement to make an assessment of the quality of an application.  In fact, it requires the confluence of many measurements both quantitative and qualitative to assess the quality of software at any particular point in time. At a very high level there are really only three types of software defects:

  • Bad Code
    • The code is poorly written
    • The code does not implement the user story as defined
  • Bad User Story
    • The user story is wrong or poorly defined
  • Missing User Story 
    • There is missing functionality that is critical for the release

Using this high-level framework, radically different testing approaches are required. If we want to assess bad code, then we would rely on development testing techniques like static code analysis to measure the quality of the code. We would use unit testing or perhaps test-driven development (TDD) as a preliminary measurement to understand if the code is aligned to a critical function or component of the user story. If we want to assess a bad user story, this is where BDD, manual testing, functional testing (UI and API) and non-functional testing take over to assess whether the user story is adequately delivered in the code. And finally, if we want to understand whether there is a missing user story, that is usually an outcome of exploratory testing, when you get that 'A-ha' moment that something critical is missing.

The pants
Let’s refresh the analogy quickly.  The scale is like a software testing tool and we want to weigh ourselves to make sure we can fit into our pants, which is our release objective. The critical concept here is that not all pants are designed to fit the same and the same is true for software releases.  Let’s face it, our software does not have to be perfect and, to be blunt, “perfection” comes at a cost that is far beyond an organization’s resources to achieve. Therefore, we have to understand that some pants are tight with more restrictions and some pants are loose, which give you more comfort. So, you might have a skinny jeans release or a sweatpants release.    

Our challenge in the software development and delivery industry is that we don't differentiate between skinny jeans and sweatpants. This leads us to a test-everything approach, which is a distinct burden on both speed and costs. The alternative, the "test what we can" approach, is also suboptimal.

So, what's the conclusion? I think we need to worry about fitting into our pants at a particular point in time. There is enough information that currently exists throughout the software development life cycle and production to guide us in creating and executing the optimal set of tests. The next evolution of software testing will not solely be AI. The next evolution will be using the data that already exists to optimize both what to test and how to test it. Or in other terms, we will understand the constraints associated with each pair of pants and we will use our scale effectively to make sure we fit in them in time for the holiday get-together of less than 10 close family members.

Get back to the fun part of testing
https://sdtimes.com/test/get-back-to-the-fun-part-of-testing/ (Oct 22, 2020)

In an ideal world, software testing is all about bringing vital information to light so our teams can deliver amazing products that grow the business (to paraphrase James Bach). Investigation and exploration lie at the heart of testing. The obsession with uncovering critical defects before they unfurl into business problems is what gets under our skin and makes us want to answer all those “what if…” questions before sending each release off into the world. 

But before the exploring can begin, some work is required. If you’re tracking down literal bugs in the wilderness, you’re not going to experience any gratification until after you check the weather forecasts, study your maps and field guides, gear up, slather on the sunscreen and mosquito repellant, and make it out into the field. If your metaphorical hunting grounds are actually software applications, these mundane tasks are called “checking.” This includes both the rote work of ensuring that nothing broke when you last made a code change (regression testing) and that the basic tenets of the requirement are actually met (progression testing).

RELATED CONTENT: Testing in a complex digital world

This work is rarely described as “fun.” It’s not what keeps us going through those late-night bug hunts (along with pizza and beverages of choice). So, what do we do? We automate it! Now we’re talking… There is always primal joy in creating something, and automation is no different. The rush you get when your cursor moves, the request is sent, the API is called…all without you moving a finger…can make you feel fulfilled. Powerful, even. For a moment, the application is your universe, and you are its master.

You now breathe a sigh of relief and put your feet up, satisfied with your efforts. Tomorrow is now clear to be spent exploring. Back to the bug hunting! The next day, you flip open your laptop, ready to roll up your sleeves and dive into the fun stuff. But what’s that? Build failed? Awesome! Your work is already paying off. Your automated checks have already surfaced some issues…or have they?

No… not really. It was just an XPath change. No problem; you won’t make that mistake again. You fix it up, and run the tests again. Wait, that element has a dynamic ID? Since when? Ok ok ok…fine! You utter the incantation and summon the arcane power of Regex, silently praying that you never have to debug this part of your test again. At some point, you glance at the clock. Another day has passed without any time for real exploration. This work was not fun. It was frustrating. No longer are you the master of this universe, but an eternal servant at the whims of an ever-growing list of flaky, capricious tests. 

Turns out, the trick to getting past all the mind-numbing grunt work isn’t outsmarting the traditional script-based UI test automation that everyone’s been battling for years. It’s enlisting innately smarter automation—automation that understands what you need it to do so you can focus on what you actually want to do: explore!  

With the latest generation of AI-driven test automation based on optical recognition, you can delegate all the automation logistics to a machine—so you can focus on the creative aspects that truly make a product successful. (Full disclosure: Several companies offer AI-driven UI test automation based on optical recognition…and I’m leading the development of this technology at one of them.)

The idea behind this approach is to tap an engine that can understand and drive the UI like a human would. This human behavior is simulated using various AI and machine learning strategies—for example, deep convolutional neural networks combined with advanced heuristics—to deliver stable, self-healing, platform-agnostic UI automation. 

From the tester perspective, you provide a natural language description of what actions to perform, and the engine translates that to the appropriate UI interactions. UI elements are identified based on their appearance rather than their technical properties. If some UI element is redesigned or the entire application is re-implemented using a new technology, it doesn’t matter at all. Like a human, the automation will simply figure it out and adapt.

Making sure this works with the necessary speed and accuracy across all the technologies you need to test is the hard part—but that’s our job, not your problem. You can just roll up your sleeves, tell it what you want to test, and let the automation handle the rest. Then the fun can begin. 

Here are two core ways that this “AI-driven UI test automation” approach helps you get back to the fun part of testing…

Automation without aggravation
I’m no stranger to automation. I’ve worked in automation for over a decade, managing automation teams, implementing automation, and even doing some work on the Selenium project. Building stable automation at a technical level is invariably tedious, no matter what you’re automating. And aside from some very high-level guiding principles like separation of concerns, data abstraction, and design patterns, your mastery of automating one technology doesn’t really translate when it’s time to automate another. 

Automatically driving a browser or mobile interface is a lot different than “steering” a desktop application, a mainframe, or some custom/packaged app that’s highly specialized. Technologies like model-based test automation remove complexity, adding an abstraction layer that lets you work at the business layer instead of the technical layer. However, it’s not always feasible to apply model-based approaches to extremely old applications, applications running on remote environments (e.g., accessed via Citrix), highly-specialized applications for your company/industry, etc.

With image-based test automation, the underlying technology is irrelevant. If you can exercise the application via a UI, you can build automation for it. No scripting. No learning curve. Just exercise it naturally—like you probably already do as you’re checking each new user story—and you can get all the repeatability and scalability of automation without any of the work or hassle. 

Technology Stockholm Syndrome
Back when I was a university student, I “learned” that nothing would ever be developed that wasn’t a big heavy C thick client. There was some talk of thin clients, but of course those wouldn’t last. After I graduated, everyone was scrambling to rewrite their thick clients using the shiny new service-oriented architecture. Then came mobile. And containerization and Kubernetes.

By this time, I figured out that I have a problem: let’s call it Technology Stockholm Syndrome. I was held captive by the ever-changing landscape of technology. And the strange thing was that I kind of liked it because this ever-changing, ever-shifting set of goalposts was so much fun.

This is a good problem to have in terms of ensuring the continued value and viability of your organization’s applications. You want your dev teams to stay on top of the latest trends and take advantage of new approaches that improve application flexibility, reliability, security, and speed.  But if you’re responsible for building and maintaining the test automation, each change can be torture. In most cases, a technical shift means you need to scrap the existing test automation (which likely represents a significant investment of time and resources) and start over—rebuilding what’s essentially the same thing from scratch. Not fun.

 Also, not really required anymore. You’d be surprised at how few fundamental changes are introduced into an application’s UI from generation to generation. (Don’t believe it? Just take a trip back in web history on The Wayback Machine and see for yourself.)

Although the underlying implementation and specific look and feel might shift dramatically, most of the same core test sequences typically still apply (for example, enter maxmusterman under username, enter 12345 under password, and click the login button). If those same general actions remain valid, then image-based test automation should still be able to identify the appropriate UI elements and complete the required set of actions.

Remember Java's mantra of "write once, run anywhere"? (Go ahead, insert your own Java joke here.) Done right, this testing approach should actually deliver on that promise. Of course, you'll probably want to add/extend/prune some tests as the app evolves from generation to generation – but you certainly don't need to start with a blank slate each time the application is optimized or re-architected.

Deep down, testing is truly fun
UI test automation is undeniably a critical component of a mature enterprise test strategy.  But wrestling with all the nitty gritty bits and bytes of it isn’t fun for anyone. Go search across engineers, professional testers, business users, and all sorts of other project stakeholders, and I guarantee you won’t find anyone with a burning desire to deal with it. But testing as a discipline, as a creative, problem-solving process, is truly fun at its core. 

Peel away the layers of scripting, flakiness, and constant failures, and you can (re)focus on the experimentation, exploration, questioning, modeling… all the things that make it fun and fulfilling.

Testing in a complex digital world
https://sdtimes.com/test/testing-in-a-complex-digital-world/ (Oct 1, 2020)

About a decade ago, application testing was fairly straightforward, albeit a manual effort and somewhat of a drag on delivery. Test cases were written, functional and UI tests were done, regression, pen and load testing would happen, and the application was deemed 'good to go.'

Today’s digital world of APIs, open-source components, mobile devices, IoT endpoints, DevOps pipelines and containers — not to mention the squeezed timelines for application delivery — render manual testing almost completely ineffective.

Yes, the testing world has evolved. We’re seeing automated testing, continuous testing and security testing emerge, as well as non-traditional testing such as feature experimentation and chaos engineering advancing to keep pace with organizational demands in the digital age.

This showcase is a guide to some of the companies that provide testing tools, and each comes at the issue from a different perspective. We hope you find it useful, and encourage you to reach out to these solution providers to learn more.

The Future of Testing is AI: Visual AI
Parasoft Leads Testing Innovation
Supercharge Testing with Mobile Labs
Automate Mobile Testing with Kobiton
Fix Penetration Testing Finds Faster
Software Testing Showcase
