automated testing Archives - SD Times (https://sdtimes.com/tag/automated-testing/), Software Development News, Fri, 20 Oct 2023

Running automation in circle: Promoting cross-functional collaboration with automated testing
https://sdtimes.com/test/running-automation-in-circle-promoting-cross-functional-collaboration-with-automated-testing/ (Mon, 17 Jul 2023)

As technology stacks become more complex and companies rely on infrastructure as code to manage their systems, it is becoming increasingly important to ensure the quality of not just the product, but the entire user experience. In order to do this effectively, testing teams must look beyond their product to understand and optimize the entire ecosystem that supports it. Any number of third-party vendors or service disruptions can affect the user experience, and no matter the root cause, users will almost certainly associate a negative experience with the product.

Traditionally, testing teams operate in a silo and focus solely on their product. The supply chain industry, in particular, has undergone a digital transformation in recent years. Technology stacks are constantly growing deeper and more varied. Supply chain and fulfillment software must integrate with multiple sales channels, Warehouse Management Systems (WMS), and booking platforms in addition to the usual software technology stack (networks and infrastructure, code repositories, cloud services, databases, etc.). These multiple connection points make the product flexible enough to meet the demands of the end user, but ultimately make it vulnerable as well. This is why it’s critical to step out of the testing silo and equip cross-functional teams to test the end-to-end process just as a customer would experience it.

Cross-Functional Collaboration

The greatest opportunity testers have to collaborate cross-functionally to ensure that the end-to-end process is tested thoroughly is with their DevOps team. DevOps has a high-level view of the entire system, including many of the connection points. They monitor for network problems, data layer problems, and upstream dependency issues, and can greatly benefit from self-service automated testing.

To effectively empower DevOps teams to run their own tests and identify problem areas within current supply chain projects, it’s important to implement best practices and workstreams. One effective workstream is to build templated tests that DevOps can easily run remotely on their own. 

One practical use case I’ve encountered is implementing an internal API automation suite built using Axios and AVA. In this case, the team configured the tests to run in CircleCI (a CI/CD tool), allowing test runs to be kicked off and the results viewed by anyone, as well as incorporating them into product pipelines. They also added the tests to a nightly schedule, which allowed the team to check for any errors in the code. To enable the DevOps team to test applications outside of the core product, the team gave DevOps the ability to kick off a test suite using Slack. Any Slack user could kick off tests and receive links to the results, even if they were not familiar with the CI/CD tool. This allowed the DevOps team to run any test suite on any of the environments.
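A suite like that typically boils each endpoint check down to a small assertion function. The sketch below is illustrative only: the function and field names are invented, and in AVA the check would live inside a `test()` block with the HTTP call made via axios; a canned payload stands in here so the example is self-contained.

```javascript
// Illustrative response check of the kind an internal API suite codifies.
// validateOrderResponse and the field names are hypothetical, not from the article.
function validateOrderResponse(res) {
  const errors = [];
  if (res.status !== 200) errors.push(`unexpected status: ${res.status}`);
  for (const field of ["orderId", "warehouse", "carrier"]) {
    if (!(field in res.body)) errors.push(`missing field: ${field}`);
  }
  return errors;
}

// A canned payload stands in for a live axios call here.
const ok = { status: 200, body: { orderId: "A-1", warehouse: "W-9", carrier: "UPS" } };
const bad = { status: 500, body: {} };
console.log(validateOrderResponse(ok));  // []
console.log(validateOrderResponse(bad)); // lists the bad status and the three missing fields
```

Returning a list of errors (rather than throwing on the first one) is what lets a DevOps engineer reading a Slack-posted result see every problem from a single run.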

The team then added schedules to run the automation tests continuously throughout the day, and to log any failures to Datadog (a monitoring tool). This allowed the DevOps team to monitor the results in order to identify code issues, problems with cloud providers, configuration issues after releases, performance issues with the API gateway, and more.

Like with any testing protocol, there must also be a plan for follow-through. Once issues within the system are identified, it’s important to have clear processes in place for mitigating and responding to them. This can involve creating a centralized repository for tracking issues and assigning ownership to specific team members. It can also involve setting up automated alerts to notify the relevant teams when issues arise, so that they can be addressed in a timely manner.
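One lightweight way to encode that ownership is a routing table from failure category to owning team. The categories and channel names below are invented for illustration; a real setup would pull them from the team's issue tracker or alerting config.

```javascript
// Hypothetical mapping of failure categories to owning teams' alert channels.
const ALERT_ROUTES = {
  network: "#devops",
  database: "#data-platform",
  "third-party": "#integrations",
};

// Route a failure to its owner, falling back to a triage channel
// so no issue goes unassigned.
function routeAlert(failure) {
  return ALERT_ROUTES[failure.category] ?? "#qa-triage";
}

console.log(routeAlert({ category: "network" })); // #devops
console.log(routeAlert({ category: "unknown" })); // #qa-triage
```

The explicit fallback is the important design choice: an alert with no owner is an alert nobody responds to.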

Setting up these protocols is a time investment that will pay dividends for your organization and create greater collaboration across departments. By empowering DevOps to test and simulate the end-to-end user experience, you can optimize your entire ecosystem and drive greater user satisfaction with your product.

Ultimately, the goal of empowering DevOps teams to test the end-to-end process is to ensure that issues are resolved quickly and efficiently, often before customers are even aware of them. By working together and leveraging each other’s skills, testing and DevOps teams can create a more effective and reliable platform that delivers a seamless customer experience.

Report: Test automation coverage has rebounded after a dip last year
https://sdtimes.com/test/report-test-automation-coverage-has-rebounded-after-a-dip-last-year/ (Wed, 09 Nov 2022)

Test automation coverage has rebounded after a dip last year, according to SmartBear’s State of Quality Testing 2022 report. 

SmartBear conducted a global online survey over the course of five weeks earlier this year. The findings are based upon aggregated responses from more than 1,500 software developers, testers, IT/operations professionals, and business leaders across many different industries.

Last year, 11% of companies performed their tests completely manually; that number fell to 7% this year, nearly returning to the pre-pandemic level of 5%.

This year also saw slightly higher numbers than ever before for respondents who said 50-99% of their tests are automated. The biggest jump was in the 76-99% group, which rose more than 10 percentage points over the last year to 16%. The share of respondents who said their tests are all automated regained ground, returning to the pre-pandemic level of 4%.

When looking at the different types of tests and how they are performed, over half of respondents reported using manual testing for usability and user acceptance tests. Unit tests, performance tests, and BDD framework tests showed the highest rates of automation.

Another finding is that the time spent testing increased for traditional testers but decreased for developers. However, the average percentage of time spent testing remained the same as last year, at 63% across the organization.

QA engineers/automation engineers spend the most time testing, averaging 76% of their week, up from 72% last year. While developer testing inched up from 2018 to 2021, reaching 47%, it sank to 40% this year. Testing done by architects plummeted from 49% to 30% over the last year.

This year, the most time-consuming activity was performing manual and exploratory tests, cited by 26% of respondents, up from 18% last year. In the same period, the share citing learning how to use test tools as their most time-consuming challenge fell from 22% to just 8%.

The biggest challenges organizations reported for test automation varied by company size. Companies with 1-25 employees cited “not having the correct tools” as their biggest challenge, while companies with 501-1,000 employees cited “not having the right testing environments available.” Both differ from last year’s most-cited problem, “not enough time to test,” at 37%.

Using Data to Sustain a Quality Engineering Transformation
https://sdtimes.com/test/using-data-to-sustain-a-quality-engineering-transformation/ (Thu, 03 Nov 2022)

DevOps and quality engineering enable better development practices and improve business resiliency, but many teams struggle to sustain this transformation outside of an initial proof of concept. One of the key challenges with scaling DevOps and quality engineering is determining how software testing fits into an overall business strategy.

By leveraging automated testing tools that collect valuable data, organizations can create shared goals across teams that foster a DevOps culture and drive the business forward. Testing data also helps tie quality engineering to customer experiences, leading to better business outcomes in the long run.

Creating Shared Data-Driven Goals

Collaborative testing is essential for scaling DevOps sustainably because it encourages developers to have shared responsibility over software quality. Setting unified goals backed by in-depth testing data can help every team involved with a software project take ownership over its quality. This collaborative approach helps break down the silos that have traditionally prevented organizations from scaling DevOps across teams.

More specifically, testing data and trend reports that can be easily shared across teams make it easier for organizations to maintain focus on the same core goals. Sharing this testing knowledge better aligns testing and development so that quality goals are considered throughout every stage of the software development lifecycle (SDLC). 

When software-related insights can move seamlessly between developers, testers, and product owners, organizations can deliver a higher quality product faster than before. This reinforces the benefits of sharing responsibility for software quality and helps get more teams on board with DevOps and quality engineering throughout the organization.

In short, tracking testing data is crucial for setting goals that scale DevOps adoption across multiple teams and throughout the SDLC. Intelligent reporting and test maintenance also help quality engineering teams implement quality improvements that directly impact DevOps transformation and business outcomes.

Tying Quality Engineering to Customer Experiences

Sharing data and goals can help encourage developer participation with quality engineering efforts, but tying quality to customer outcomes can encourage investment in software quality from the broader organization. The key is using testing data to adapt quality engineering to new features and customer use patterns.

In our previous article, we discussed how quality engineering connects development teams to customers. A quality-centric approach can help retain customers and lead to a more resilient business over time, because a poor user experience encourages customers to consider a competitor’s product.

For example, tracking data from quality testing can reveal a decline in application performance before it’s noticeable to users. These types of changes can build up over time and be difficult to detect without data analysis. By sharing these data insights with the development team, however, the issue can be resolved before it leads to a poor customer experience. This means testing data forms an essential link between code and customers.
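As a concrete illustration of catching a decline before users notice, such a check might compare recent performance-test timings against a stored baseline and flag a sustained regression. The function name, 20% tolerance, and numbers below are assumptions for the sketch, not from the article.

```javascript
// Flag a performance regression when the recent average response time
// exceeds the stored baseline by more than a tolerance (20% here, an assumed value).
function hasRegressed(baselineMs, recentTimingsMs, tolerance = 0.2) {
  const avg = recentTimingsMs.reduce((a, b) => a + b, 0) / recentTimingsMs.length;
  return avg > baselineMs * (1 + tolerance);
}

// Against a 250ms baseline: a 280ms average is within tolerance, 330ms is not.
console.log(hasRegressed(250, [270, 280, 290])); // false
console.log(hasRegressed(250, [320, 330, 340])); // true
```

Averaging several recent runs, rather than alerting on a single slow test, is what separates a genuine trend from one-off noise.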

Actionable insights from testing data can drive a quality engineering strategy that makes a lasting improvement to customer experiences. And that leads to positive business results that encourage larger investments in software quality throughout the organization. Using data to tie software quality to customer experiences therefore reinforces the role of quality engineering as a key part of DevOps adoption.

Sustainable Quality Engineering and DevOps

As organizations struggle to build sustainable DevOps practices, they should consider how they can leverage the quality engineering team as an enabler. Quality engineering teams have an enormous amount of testing data that can help development teams improve their processes for delivering high-quality software much faster.

However, testing data is only useful if it can be easily shared with the right stakeholders, whether it’s developers or product managers. This requires collaborative testing tools that integrate throughout the SDLC and empower teams to access data that improves their workflows related to software delivery.

In short, testing data can transform a small-scale adoption of DevOps practices into an organization-wide culture of quality. Data-driven collaboration helps align code to customers through shared goals and insights. Over time, this leads to stronger customer experiences and greater business resilience.

Content provided by Mabl

Instilling QA in AI Model Development
https://sdtimes.com/ai/instilling-qa-in-ai-model-development/ (Mon, 17 Oct 2022)

In the 1990s, when software started to become ubiquitous in the business world, quality was still a big issue. It was common for new software and upgrades to be buggy and unreliable, and rollouts were difficult. Software testing was mostly a manual process, and the people developing the software typically also tested it. Seeing a need in the market, consultancies started offering outsourced software testing. While it was still primarily manual, it was more thorough. Eventually, automated testing companies emerged, performing high-volume, accurate feature and load testing. Soon after, automated software monitoring tools emerged to help ensure software quality in production. Eventually, automated testing and monitoring became the standard, and software quality soared, which of course helped accelerate software adoption.

AI model development is at a similar inflection point. AI and Machine Learning technologies are being adopted at a rapid pace, but quality varies. Often, the data scientists developing the models are also the ones manually testing them, and that can lead to blind spots. Testing is manual and slow. Monitoring is nascent and ad hoc. And AI model quality is suffering, becoming a gating factor for the successful adoption of AI. In fact, Gartner estimates that 85 percent of AI projects fail.

The stakes are getting higher. While AI was first primarily used for low-stakes decisions such as movie recommendations and delivery ETAs, more and more often, AI is now the basis for models that can have a big impact on people’s lives and on businesses. Consider credit scoring models that can impact a person’s ability to get a mortgage, and the Zillow home-buying model debacle that led to the closure of the company’s multi-billion dollar line of business buying and flipping homes. Many organizations learned too late that Covid broke their models – changing market conditions left models with outdated variables that no longer made sense (for instance, basing credit decisions for a travel-related credit card on volume of travel, at a time when all non-essential travel had halted).

Not to mention, regulators are watching.

Enterprises must do a better job with AI model testing if they want to gain stakeholder buy-in and achieve a return on their AI investments. And history tells us that automated testing and monitoring is how we do it.

Emulating testing approaches in software development

First, let’s recognize that testing traditional software and testing AI models require significantly different processes. That is because AI bugs are different: they are complex statistical and data anomalies, not functional bugs, and the black-box nature of AI makes them hard to identify and debug. As a result, AI development tools and methodologies are immature and unprepared for high-stakes use cases.

AI model development differs from software development in three important ways:

  • It involves iterative training/experimentation vs being task and completion oriented;
  • It’s predictive vs functional; and 
  • Models are created via black-box automation vs human designed.

Machine learning also presents unique technical challenges that aren’t present in traditional software – chiefly:

  • Opaqueness/Black box nature
  • Bias and fairness
  • Overfitting and unsoundness
  • Model reliability
  • Drift

The training data that AI and ML model development depend on can also be problematic. In the software world, you could purchase generic software testing data, and it could work across different types of applications. In the AI world, training data sets need to be specifically formulated for the industry and model type in order to work. Even synthetic data, while safer and easier to work with for testing, has to be tailored for a purpose. 
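Drift in particular is a statistical anomaly rather than a functional bug, so conventional pass/fail assertions miss it. The toy check below (function name, threshold, and numbers all invented for the sketch) compares a feature's live mean against its training-time mean; production systems use proper distribution tests such as PSI or Kolmogorov-Smirnov rather than simple means.

```javascript
// Toy drift check: relative shift of a feature's live mean from its
// training-time mean. Real monitors compare full distributions, not means.
function meanDrift(trainMean, liveValues) {
  const liveMean = liveValues.reduce((a, b) => a + b, 0) / liveValues.length;
  return Math.abs(liveMean - trainMean) / Math.abs(trainMean);
}

// E.g. a travel-volume feature trained pre-pandemic vs. observed during lockdown:
console.log(meanDrift(10, [10.2, 9.8, 10.1])); // ≈ 0.003, inputs look stable
console.log(meanDrift(10, [1, 0, 2]));         // 0.9, model inputs have shifted badly
```

A monitor that alerts when this score crosses a threshold is exactly the kind of automated check that would have surfaced the outdated travel-related variables described above before the model's decisions degraded.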

Taking proactive steps to ensure AI model quality

So what should companies leveraging AI models do now? Take proactive steps to work automated testing and monitoring into the AI model lifecycle. 

A solid AI model quality strategy will encompass four categories:

  • Real-world model performance, including conceptual soundness, stability/monitoring and reliability, and segment and global performance.
  • Societal factors, including fairness and transparency, and security and privacy
  • Operational factors, such as explainability and collaboration, and documentation
  • Data quality, including missing and bad data

All are crucial to ensuring AI model quality.

For AI models to become ubiquitous in the business world – as software eventually did – the industry has to dedicate time and resources to quality assurance. We are nowhere near the five nines of quality that’s expected for software, but automated testing and monitoring is putting us on the path to get there.

Perforce updates Helix ALM with enhanced automated testing support
https://sdtimes.com/test/perforce-updates-helix-alm-with-enhanced-automated-testing-support/ (Wed, 05 Oct 2022)

Perforce has announced the latest version of its testing solution Helix ALM. Version 2022.2 introduces enhanced support for automated testing. 

With this release, customers can use a single tool for manual and automated testing. Bringing these together into one tool increases efficiency, reduces risk, and enables a more holistic testing strategy, according to Perforce.

“We’re excited to deliver this milestone release to our customers,” said Brad Hart, chief technology officer at Perforce. “With enhanced support for test automation, Helix ALM can help customers manage automated testing in a more consistent, controlled, and trusted way – from digital apps and software development to medical device production, life sciences, semiconductor, and beyond.”

The test automation support enhancements of the new release include out-of-the-box automated testing support, the ability to create test automation suites, the ability to consolidate and automatically map automated test results, native Jenkins integration, and rapid failure analysis.

More information on the latest release is available here.

Automated testing for mobile is a huge struggle
https://sdtimes.com/test/automated-testing-for-mobile-is-a-huge-struggle/ (Tue, 19 Jul 2022)

Organizations realize the importance of test automation but many struggle to make a move to automation on mobile. 

The inception of mobile testing wasn’t as user-friendly for developers when compared to web testing, for example, and the difficulties still last today, according to Kobiton’s DevOps evangelist Shannon Lee, in the SD Times Live! webinar, “Creating and implementing a test automation strategy for mobile app quality.”

“For the web, people made it so that it’s more friendly to develop together. Whereas mobile applications, we really saw kind of that capitalism come into a place where we are now divided; we have the Android platform and we have the iOS platform,” Lee explained. “The iOS platform really only works well with other iOS tools, whereas Android is a little bit more agnostic and open. The rules of the road are just a little bit more complicated.”

Also, while Selenium was released in 2007, paving the way for additional open-source frameworks for web development, Appium for mobile wasn’t released until 2014 and the number of additional frameworks was limited due to the complexity with mobile, Lee added. 

Lee found that many teams falter because these open-source frameworks struggle to keep up with new technologies, such as image injection or Face ID, and with environments such as varying network conditions, locations, and other virtualized services.

Now, the pressure to increase the speed to market has resulted in enormous pressure for developers and testers. Monthly releases are not cutting it anymore, and without a strong automation strategy in place, releasing weekly or daily is a herculean task.

“There are features constantly being released to keep up with, so there are more tests to write and of course, as I’m alluding to less time to write them. And with that complexity and less time, it becomes hard to deliver stable code,” Lee said. “So if you do find that you have time to automate a test case, you want to ensure that if you do it so quickly and you kind of do it haphazardly, it’s not going to be the best stable code. And that kind of proves itself pointless in a sense if you get past false negatives or false positives.” 

Teams can combine the best of both scriptless and scripted test cases to test faster, Lee explained. Scriptless can be used for UI and end-to-end tests, and scripted test cases should be used for APIs and any additional tests. 

Teams should also start with critical test cases first and automate and execute end-to-end tests to cover UI and back-end services. 

To learn more, watch the webinar, “Creating and implementing a test automation strategy for mobile app quality,” on-demand now.

UserTesting enhances ML-powered post-test analysis and improves collaboration
https://sdtimes.com/test/usertesting-enhances-ml-powered-post-test-analysis-and-improves-collaboration/ (Wed, 13 Jul 2022)

UserTesting announced new advanced Instant Insight features that are powered by machine learning to speed up human insights. 

It also announced the UserTesting Human Insight Platform, which can detect patterns and anomalies within customer data and automatically display high-value insights within Customer Experience Narratives. 

The new features in this product release include UserTesting’s test-level Instant Insight feature, which utilizes data-driven automation and machine learning models, and a UserTesting navigation redesign, which gives customers readier access to core functionality through a new user interface, folder management, easily accessible resources, and a workspace switcher.

“More than ever, it’s imperative that companies know how their customers feel, and why. UserTesting is continuously innovating its platform to help companies gain actionable insights so they can make smarter and faster business decisions,” said Kaj van de Loo, CTO at UserTesting. “UserTesting’s data-driven automation helps customers speed up analysis of video feedback, so they can make decisions quicker than ever. The platform helps companies optimize the use of human insights, so that they can better understand what is driving customer behavior, and adapt to any changes in the market.”

New features also include enhanced card sorting capabilities so that users can view video feedback alongside card sorting metrics and also the ability to securely upload audio, video, and other media assets directly onto the Human Insight Platform. 

UserTesting is also now available in French in addition to English and German.

Report: Fully automated testing remains elusive for organizations
https://sdtimes.com/ai-testing/report-fully-automated-testing-remains-elusive-for-organizations/ (Thu, 30 Jun 2022)

Despite the growing complexity of the software that drives organizations, few companies have fully automated testing or are using AI, according to new research conducted by Forrester and commissioned by Keysight. 

For the study, Forrester conducted an online survey in December 2021 that involved 406 test operations decision-makers at organizations in North America, EMEA, and APAC to evaluate current testing capabilities for electronic design and development and to hear their thoughts on investing in automation. It found that only 11% of respondents have fully automated testing. Eighty-four percent of respondents said that the majority of testing involves complex environments. 

Most companies reported that they’re moderately or very satisfied with their testing methods, and three-fourths of them use a combination of automated and manual testing. However, 45% of companies say that they’re willing to move to a fully automated testing environment within the next three years to increase productivity, gain the ability to simulate product function and performance, and shorten time to market.

Companies are also looking to add AI for integrating complex test suites, an area of test automation that is severely lacking, with only 16% of companies using it today. 

“Despite their reported high satisfaction levels with their testing methods, companies are interested in moving to more automated approaches and using AI for integrating complex test suites. They understand this will increase their productivity, simulate product function or performance, and shorten design cycles, thereby, reducing product time to market,” the research stated. “In turn, this improvement in the testing and development process will yield higher customer satisfaction and increase product sales or revenue. They recognize that reducing time to market can be achieved by better analytics on current test and measurement data, integrated software tools across the product development lifecycle, and an improved ability to share data across teams.”

Reduce test execution times to keep up with pace of delivery
https://sdtimes.com/test/reduce-test-execution-times-to-keep-up-with-pace-of-delivery/ (Mon, 10 Jan 2022)

In this era of Agile software development, the life of a product manager, who has to talk about or plan a single feature, is easy. The life of a developer, who has to code one feature, is easy.  For the designer and DevOps engineer, designing and deploying one feature is easy.

You know whose life isn’t easy? The tester’s. He has to test that new feature, or product, while at the same time testing all the old features as well. And he has to do it in a very small amount of time.

And, according to Mudit Singh, director of marketing and growth at cloud testing platform provider LambdaTest, it’s not just regression testing. It’s more like a progression of tests. “A progression is more of that I have tested once, and found a bug. The developer has fixed it. I come back and test it again,” he explained. “But in general, it’s the first level of testing itself. He’s asked to test one feature, plus 1500 old features as well, in that small amount of time.”

This is because continuous deployment is happening, and even though people are moving to microservices architectures – and there are mitigations to that, Singh said – the state of testing remains the same, in that you have to test everything to be sure it’s right. But, he said, “you can make the process more intelligent. The brute force way still has limits. So people are saying, I cannot do manual testing of each and every feature; I’ll automate the tasks that are repetitive. And that’s where automation testing comes in.”

The result is that when a developer commits code to the repository, that act triggers a set of automated tests that verifies whether the committed code works as expected.

In a small enterprise, at a small scale, this process works. It’s in larger enterprises, which might have hundreds of thousands of tests, that the test time for a whole build stretches to hours. As Singh pointed out, “I write code today, and I commit it, and the test starts running. The whole test process will take, let’s say, four or five hours. I am now dependent upon the test to complete and it’ll be four or five hours until I get feedback on what I’ve written is right or not. So now I’m reading, twiddling my thumbs.”

Commonly, organizations will write code all day and run the tests overnight and get the feedback the next morning. But by then, the developer is out of the zone, and has to remember what is there to debug what is breaking, Singh said, “so the whole productivity starts to break down.”

But productivity could continue if you run the test in an hour, or 30 minutes, and remediate issues while they’re still fresh in your mind, with much less time wasted waiting for feedback from the test results.

Platforms such as LambdaTest can run tests at massive scale, across multiple machines at the same time, in parallel, reducing overall test execution time many times over.

“If I have 100 test cases, each test case takes a minute to execute,” Singh said. “If I run them sequentially, it would have taken me 100 minutes to do the whole test suite. But if I run these tests in a parallel setting in 10 different machines, I run 10 tests at the same time. So the whole test execution time drops by a factor of 10. In 10 minutes, now my whole test is complete. If I’m doing 100 parallel, the whole 100 minutes has been reduced down to one minute.”
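Singh's arithmetic above generalizes to a simple formula: with tests spread evenly across parallel sessions, wall-clock time is the per-test time multiplied by the size of the busiest session. A minimal sketch:

```python
import math

def wall_clock_minutes(num_tests, minutes_per_test, parallel_sessions):
    # Tests are spread across sessions; the busiest session sets the wall-clock time.
    return math.ceil(num_tests / parallel_sessions) * minutes_per_test

print(wall_clock_minutes(100, 1, 1))    # sequential: 100 minutes
print(wall_clock_minutes(100, 1, 10))   # 10 parallel sessions: 10 minutes
print(wall_clock_minutes(100, 1, 100))  # 100 parallel sessions: 1 minute
```

Note the `ceil`: 105 one-minute tests on 10 machines still take 11 minutes, not 10.5, because the last session has to finish its extra test.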

This is something that Singh maintains is difficult for organizations to do in-house, as opposed to leveraging the scalability of cloud computing. “There was a time when enterprises and big-scale companies used to set up their own in-house device labs, their own in-house VMs and everything. But one of the biggest challenges was, of course, maintenance of these devices, maintaining some of these VMs anytime a new operating system comes, anytime a new security patch comes, anytime a new browser version comes,” he said. 

As to the difficulty of doing this in-house, Singh noted that a new version of Chrome is released once every two months, and Firefox updates once every three months; in a single year, you could see 10 to 12 different browser versions – not to mention new mobile devices and operating systems. Samsung, for example, releases some 17 new devices each year, and if a company bought every one, that’s at least a $20,000 expenditure for each developer, since almost all developers now work remotely, away from an in-house installation.

Further, while the developer is coding rather than testing, the developer’s test lab sits underutilized. But if that lab is connected via the cloud, four or five people, or more, can share it simultaneously, making it far more cost-effective.

Using a platform such as LambdaTest, you get full browser and device coverage, and perhaps most significantly in this age of instant gratification, test execution times are reduced as well.


Content provided by SD Times and LambdaTest

The post Reduce test execution times to keep up with pace of delivery appeared first on SD Times.

]]>
Software test automation for the survival of business https://sdtimes.com/test/software-test-automation-for-the-survival-of-business/ Tue, 06 Jul 2021 13:15:35 +0000 https://sdtimes.com/?p=44626 In this two part series, we explore the two sides of testing: automated and manual. In this article, we examine why automated testing should be done. To read the other side of the argument, go here.  In today’s business environment, stakeholders rely on their enterprise applications to work quickly and efficiently, with absolutely no downtime. … continue reading

The post Software test automation for the survival of business appeared first on SD Times.

]]>
In this two-part series, we explore the two sides of testing: automated and manual. In this article, we examine why automated testing should be done. To read the other side of the argument, go here.

In today’s business environment, stakeholders rely on their enterprise applications to work quickly and efficiently, with absolutely no downtime. Anything short of that could result in a slew of business performance issues and ultimately lost revenue. Take the recent incident in which CDN provider Fastly failed to detect a software bug which resulted in massive global outages for government agencies, news outlets and other vital institutions. 

Effective and thorough testing is mission-critical for software development across categories including business software, consumer applications and IoT solutions. But as continuous deployment demands ramp up and companies face an ongoing tech talent shortage, inefficient software testing has become a serious pain point for enterprise developers, and they’ve needed to rely on new technologies to improve the process.

The Benefits of Test Automation

As with many other disciplines, the key to quickly implementing continuous software development and deployment is robust automation. Converting manual tests to automated tests not only reduces the time it takes to test, but also reduces the chance of human error and lets far fewer defects escape into production. Just by converting manual testing to automated testing, companies can reduce three to four days of manual testing time to a single eight-hour overnight session, meaning testing no longer has to run during peak usage hours.

Automation solutions also allow organizations to test more per cycle in less time by running tests across distributed functional testing infrastructures and in parallel with cross-browser and cross-device mobile testing. Furthermore, if a team lacks mobile devices to test on, it can leverage solutions to enable devices and emulators to be controlled through an enterprise-wide mobile lab manager.

Challenges in Test Automation

Despite all the benefits of automated software testing, many companies are still facing challenges that prevent them from reaping the full benefits of automation. One of those key challenges is managing the complexities of today’s software testing environment, with an increasing pace of releases and proliferation of platforms on which applications need to run (native Android, native iOS, mobile browsers, desktop browsers, etc.). With so many conflicting specifications and platform-specific features, there are many more requirements for automated testing – meaning there are just as many potential pitfalls.

Software releases and application upgrades are also happening at a much quicker pace in recent years. The faster rollout of software releases, while necessary, can break test automation scripts that rely on fragile, properties-based object identification or, even worse, bitmap-based identification. And because properties vary across platforms, tests must be properly replicated and administered on each platform, which can take immense time and effort.
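The fragility of position-based object identification can be shown with a toy model; this is not a real automation framework's API, just a list of dicts standing in for a page, with two hypothetical locator strategies:

```python
def find_by_position(page, index):
    # Brittle: mimics an absolute, position-based locator such as /html/body/div[2]
    return page[index]

def find_by_stable_id(page, element_id):
    # Resilient: mimics locating by a stable id or data-testid attribute
    return next(e for e in page if e["id"] == element_id)

page = [{"id": "header"}, {"id": "login-button"}, {"id": "footer"}]

assert find_by_position(page, 1)["id"] == "login-button"
assert find_by_stable_id(page, "login-button")["id"] == "login-button"

# A new release inserts a promo banner above the button...
page.insert(1, {"id": "promo-banner"})

# ...and the positional locator now silently grabs the wrong element,
assert find_by_position(page, 1)["id"] == "promo-banner"
# while the attribute-based locator still finds the right one.
assert find_by_stable_id(page, "login-button")["id"] == "login-button"
```

The same failure mode plays out in real UI automation: any layout change breaks positional selectors, while locators anchored to stable attributes survive.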

Robust and effective test automation therefore requires an elevated skill set, especially in today’s complex, multi-ecosystem application environment. Record-and-playback testing, a technique that records a tester’s interactions and replays them many times over, is no longer sufficient.

With all of these challenges to navigate, including how difficult it can be to find the right talent, how can companies increase release frequency without sacrificing quality and security?

Ensuring Robust Automation with Artificial Intelligence

To meet the high demands of software testing, automation must be coupled with Artificial Intelligence (AI). Truly robust automation must be resilient, and must not depend on completed product code in order to be created. It must be well-integrated into an organization’s product pipelines, adequately data-driven and in full alignment with the business logic.

Organizations can allow quality assurance teams to begin testing earlier – even in the mock-up phase – through AI-enabled capabilities that create a single script that automatically executes on multiple platforms, devices and browsers. With AI alone, companies can experience major increases in test design speed as well as significant decreases in maintenance costs.
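The "one script, many platforms" pattern boils down to keeping platform details out of the test body and injecting them at run time. A minimal sketch, with hypothetical names (this is the general pattern, not how any particular AI tool works internally):

```python
PLATFORMS = [
    {"os": "Windows 11", "browser": "chrome"},
    {"os": "macOS", "browser": "safari"},
    {"os": "Android 14", "browser": "chrome-mobile"},
]

def checkout_flow(platform):
    # One test script: the steps are identical everywhere, and the
    # platform details are injected rather than hard-coded. A real
    # runner would launch a session on the given OS/browser here.
    steps_completed = ["open_cart", "enter_payment", "confirm_order"]
    return {"platform": platform, "steps": steps_completed, "passed": True}

# The same script fans out across every configured platform.
results = [checkout_flow(p) for p in PLATFORMS]
print(all(r["passed"] for r in results))
```

Because the script holds no platform-specific logic, adding a new device or browser means adding one entry to the configuration, not writing or maintaining another test.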

Furthermore, with the proliferation of low-code/no-code solutions, AI-infused test automation is even more critical for ensuring product quality. Solutions that infuse AI object recognition can enable test automation to be created from mockups, facilitating test automation in the pipeline even before product code has been generated or configured. These systems can provide immediate feedback once products are initially released into their first environments, providing for more resilient, successful software releases.

To remain competitive, all businesses need to be as productive and efficient as possible, and the key to that lies in properly tested, functioning, performant enterprise applications. Cumbersome manual testing is no longer sufficient, and enterprises that continue to rely on it will be caught flat-footed, outperformed and out-innovated. Investing in automation and AI-powered development tools will give enterprises the edge they need to stay ahead of the competition.

The post Software test automation for the survival of business appeared first on SD Times.

]]>