Mabl now offers automated mobile testing | https://sdtimes.com/test/mabl-now-offers-automated-mobile-testing/ | Tue, 23 Apr 2024

The testing company mabl has announced that it now offers automated mobile testing capabilities in its platform, which already offered testing for web and APIs. 

The new mobile testing capability was designed to give full coverage of the unique functionality of varying mobile devices and their operating systems.

With this new offering, tests are created through a low-code interface, meaning that developers can create tests for their mobile apps in a matter of minutes and non-developers can also use the platform. According to the company, this makes testing more accessible and will help build a culture of quality throughout the organization.

Tests can be executed in parallel across multiple devices, ensuring that testing teams are able to get results faster.  

The platform also offers capabilities designed to increase trust in test results by minimizing the occurrence of flaky tests, which are tests that return both passing and failing results. These include things like auto-healing, which is when tests rewrite themselves to adapt to code changes, and Intelligent Wait, which tailors testing wait times to the normal pace of the application. 
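The general idea behind an adaptive wait is simple: rather than pausing for a fixed interval, the test polls for a condition and derives its timeout from how quickly the application has recently been responding. The Python sketch below illustrates only that concept; it is not mabl's implementation, and the `check` callable and `recent_load_times_ms` history are hypothetical placeholders for whatever condition and timing data a test framework would supply.

```python
import time
import statistics

def adaptive_wait(check, recent_load_times_ms, poll_interval=0.25, padding=2.0):
    """Poll check() until it returns True, with a timeout sized from the
    application's recent response times (a conceptual sketch of an adaptive
    wait, not any vendor's actual algorithm)."""
    # Budget the wait from the app's normal pace: median recent load time, padded.
    baseline_s = statistics.median(recent_load_times_ms) / 1000.0
    timeout_s = baseline_s * padding

    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():                  # e.g. "is the target element present and visible?"
            return True
        time.sleep(poll_interval)    # keep polling instead of sleeping a fixed worst case
    raise TimeoutError(f"condition not met within {timeout_s:.1f}s "
                       f"(baseline {baseline_s:.2f}s)")
```

A fixed sleep either wastes time in fast environments or flakes in slow ones; deriving the wait budget from observed timings is one way to reduce both failure modes.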

In addition to offering a low-code interface, mabl’s mobile testing capabilities also utilize AI to help make testers even more productive by reducing manual selector tests, speeding up test creation, automatically discovering gaps in test coverage, and identifying performance degradation issues. 

“Ensuring the highest quality software across the entire user experience is critical for organizations today. End user transactions globally occur primarily on smartphones, yet the mobile app testing and deployment process has failed to catch up to the pace of change, and largely continued to be arcane, time-intensive, and highly piecemeal in its focus on the testing experience,” said Dan Belcher, cofounder at mabl. “In this climate, organizations that don’t put mobile quality front and center will fail to attract and maintain their user base. At mabl, we’ve seen firsthand that organizations that embrace AI-powered, automated testing solutions have a competitive advantage, by democratizing mobile app testing and accelerating time to market.”

The impact of AI use by developers on quality engineering | https://sdtimes.com/test/the-impact-of-ai-use-by-developers-on-quality-engineering/ | Tue, 23 Jan 2024

Generative AI is starting to help software engineers solve problems in their code. The impact of this on quality engineers is already being felt.

According to data from Stack Overflow’s 2023 Developer Survey, 70% of all respondents are using or are planning to use AI tools in their development process. Further, the study of 90,000 developers found that 86% of professional developers want to use AI to help them write code.

The next largest use for AI, cited by about 54% of professional developers, is debugging code. Next, 40% of that cohort said they’d use AI for documenting code. And fourth, 32% said they want to use AI to learn about code.

Each of these use cases actually creates significant opportunities for speeding creation and delivery of code, but according to Gevorg Hovsepyan, head of product at low-code test automation platform mabl, each also creates significant risk in terms of quality. The impact of AI on software quality is only just being assessed, but consumer expectations continue to rise.

Though AI can quickly produce large quantities of information, the quality of those results is often lacking. One study by Purdue University discovered, for example, that ChatGPT answered 52% of software engineering questions incorrectly. Accuracy varies across different models and tools, and is likely to improve as the market matures, but software teams still need to ensure that quality is maintained as AI becomes an integral part of development cycles. 

Hovsepyan explained that engineering leaders should consider how — and who — AI is affecting their development pipelines. Developer AI tools can help increase their productivity, but unless QA also embraces AI support, any productivity increases will be lost to testing delays, bugs in production, or slower mean times to resolution (MTTR). 

“We saw this trend with DevOps transformation: companies invest in developer tools, then wonder why their entire organization hasn’t seen improvements. AI will have the same impact unless we look at how everyone in the ecosystem is affected. Otherwise, we’ll have the same frustrations and slower transformation,” Hovsepyan said. 

AI can also further lower the barrier to entry for non-technical people, breaking down long-standing silos across DevOps teams and empowering more people to contribute to software development. For software companies, this opportunity can help reduce the risk of AI experimentation. Hovsepyan shared:

“No one knows your customers better than manual testers and QA teams, because they live in the product and spend much of their time thinking about how to better account for customer behavior. If you give those people AI tools and the resources to learn new technologies, you reduce the risk of AI-generated code breaking the product and upsetting your users.”

So if AI is not yet at the point where it can be fully trusted, what can quality engineers do to mitigate those risks? Hovsepyan said you can’t address all of those risks, but you can position yourself in the best possible way to handle them.

By that, he means learning about AI, its capabilities and flaws. First, he said, it’s “incredibly important for quality engineers to figure out a way to get out of the day-to-day tactical, and start thinking about some of these major risks that are coming our way.”

He went on to say that the use of intelligent testing can help organizations win time to focus on bigger picture questions. “If you do test planning, you can do it with intelligent testing solutions. If you do maintenance, you remove some of that burden, and win the time back. In my mind, that’s number one. Make sure you get out of the tactical day-to-day work that can be done by the same tool itself.”

His second point is that quality engineers need to start to understand AI tools. “Educate, educate, educate,” he said. “I know it’s not necessarily a solution for today’s risks. But if those risks are realized and become an issue tomorrow, and our quality engineers aren’t educated on the subject, we’re in trouble.”

DevOps success starts with quality engineering | https://sdtimes.com/devops/devops-success-starts-with-quality-engineering/ | Fri, 06 Oct 2023

Studies show that DevOps adoption is still a moving target for the vast majority of software development teams, with just 11% reporting full DevOps maturity in 2022. Navigating this transition requires organization-wide metrics that help everyone understand their role. To that end, Google developed the DORA (DevOps Research and Assessment) metrics to give development teams a straightforward way to measure DevOps maturity. 

Fernando Mattos, Director of Product Marketing at low-code test automation platform mabl, put it this way: “DORA metrics capture the productivity and the stability of development pipelines, which can impact a business’ ability to innovate and keep customers happy. If an organization is struggling to balance higher deployment frequency with lowering change failure rates, quality engineering is critical for bridging that gap.”

DORA metrics were created in 2014 to help development organizations understand what strategies make teams elite, and in turn, help more companies mature their DevOps practices. This cements the correlation between engineering efficiency and hitting the goals of the business. Delivering new features faster, fixing defects faster, and providing a better customer experience results in more business, higher conversion rates, and lower customer churn.

Mattos went on to explain that mabl sees test automation as a critical piece in the delivery chain. “Lead time for change and change failure rate are two key metrics we see impacted there,” he said. “Change lead time is the time it takes from committing a piece of code to when it’s released to production. It’s a straightforward metric that captures a complex process.” He gave the example of a team that has streamlined its code review process, automated its entire pipeline, but still needs to do testing before a feature can be released to production. “A thorough software testing strategy can include unit testing, UI testing, API testing, end-to-end testing, and even non-functional tests like accessibility and performance. And these are essential for reducing change failure rates. But if it takes too long, then it extends the lead time for change, which negatively impacts the business. So, all the improvements that they did in other parts of the process just go down the drain.

“So, by integrating quality engineering and test automation specifically,” he continued, “development teams can shorten the time needed for comprehensive testing and really optimize their outcomes.”
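To make the two metrics Mattos highlights concrete, the following Python sketch shows one way to compute them, assuming you can export commit and deployment timestamps plus an incident flag from your pipeline; the record format here is hypothetical, not any particular tool's schema.

```python
from datetime import datetime
from statistics import median

# Hypothetical export from a CI/CD system: one record per production deployment.
deployments = [
    {"committed_at": datetime(2023, 9, 1, 10, 0), "deployed_at": datetime(2023, 9, 2, 9, 30), "caused_incident": False},
    {"committed_at": datetime(2023, 9, 3, 14, 0), "deployed_at": datetime(2023, 9, 3, 16, 45), "caused_incident": True},
    {"committed_at": datetime(2023, 9, 4, 11, 0), "deployed_at": datetime(2023, 9, 4, 12, 10), "caused_incident": False},
]

# Lead time for change: elapsed time from commit to production, summarized by the median.
lead_times_h = [(d["deployed_at"] - d["committed_at"]).total_seconds() / 3600 for d in deployments]
print(f"Median lead time for change: {median(lead_times_h):.1f} hours")

# Change failure rate: the share of deployments that degraded production.
failure_rate = sum(d["caused_incident"] for d in deployments) / len(deployments)
print(f"Change failure rate: {failure_rate:.0%}")
```

Tracking both numbers together is what surfaces the trade-off Mattos describes: testing thorough enough to hold the failure rate down, but fast enough not to inflate the lead time.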

Mattos went on to stress that it’s critically important to ensure that test coverage is focused on the customer experience, which will lower the change failure rate. “Lots of customers we talk to have high test coverage, but it’s removed from the customer experience. So they feel like they’re testing everything, but when they release (the software) to production, defects still emerge, especially if there’s an integration with third-party tools, which is very difficult to test using traditional test automation tools.”

Mabl is trying to help teams build end-to-end continuous testing that’s focused on the customer, according to Mattos. “That’s what customers care about when making purchasing decisions, the experience that they go through – functional and non-functional. Connecting to usage metrics tools, understanding what user journeys are most important to customers… those flows must have high coverage.” Mabl helps development organizations create and scale a quality engineering practice that supports DORA improvements and high-quality customer experiences, so businesses see a positive impact on their overall goals. “When your team has an automated testing practice that reflects the customer experience, deployments can happen more often without introducing defects. DORA metrics improve and customers are happier.”

Content created by SD Times and Mabl

Buyers Guide: AI and the evolution of test automation | https://sdtimes.com/test/buyers-guide-the-evolution-of-test-automation/ | Fri, 22 Sep 2023

Test automation has undergone quite an evolution in the decades since it first became possible. 

Yet despite the obvious benefits, the digitalization of the software development industry has created some new challenges.

It comes down to three big things, according to Kevin Parker, vice president of product at Appvance. The first is velocity and how organizations “can keep pace with the rate at which developers are moving fast and improving things, so that when they deliver new code, we can test it and make sure it’s good enough to go on to the next phase in whatever your life cycle is,” he said. 

The second area is coverage. Parker said it’s important to understand that enough testing is being done, and being done in the right places, to the right depth. And, he added, “It’s got to be the right kind of testing. If you Google test types, it comes back with several hundred kinds of testing.”

How do you know when you’ve tested enough? “If your experience is anything like mine,” Parker said, “the first bugs that get reported when we put a new release out there, are from when the user goes off the script and does something unexpected, something we didn’t test for. So how do we get ahead of that?”

And the final, and perhaps most important, area is the user interface, as this is where the rubber meets the road for customers and users of the applications. “The user interfaces are becoming so exciting, so revolutionary, and the amount of psychology in the design of user interfaces is breathtaking. But that presents even more challenges now for the automation engineer,” Parker said.

Adoption and challenges

According to a report by Research Nester, the test automation market is expected to grow to more than $108 billion by 2031, up from about $17 billion in 2021. Yet as for uptake, it’s difficult to measure the extent to which organizations are successfully using automated testing.

“I think if you tried to ask anyone, ‘are you doing DevOps? Are you doing Agile?’ Everyone will say yes,” said Jonathan Wright, chief technologist at Keysight, which owns the Eggplant testing software. “And everyone we speak to says, ‘yes, we’re already doing automation.’ And then you dig a little bit deeper, they say, ‘well, we’re running some Selenium, running some RPM, running some Postman script.’ So I think, yes, they are doing something.”

Wright said most enterprises that are having success with test automation have invested heavily in it, and have established automation as its own discipline. “They’ve got hundreds of people involved to keep this to a point where they can run thousands of scripts,” he said of these organizations. But in the same breath, he noted that the conversation around test case optimization, and risk-based testing, still needs to be had. “Is over-testing a problem?” he posited. “There’s a continuous view that we’re in a bit of a tech crunch at the moment. We’re expected to do more with less, and testing, as always, is one of those areas that have been put under pressure. And now, just saying I’ve got 5,000 scripts kind of means nothing. Why don’t you have 6,000 or 10,000? You have to understand that you’re not just adding a whole stack of tech debt into a regression folder that’s giving you this feel-good feeling that I’m running 5,000 scripts a day, but they’re not actually adding any value because they’re not covering new features.”

Testing at the speed of DevOps

One effect of the need to release software faster is the ever-increasing reliance on open-source software, which may or may not have been tested fully before being let out into the wild.

Arthur Hicken, chief evangelist at Parasoft, said he believes it’s a little forward thinking to assume that developers aren’t writing code anymore, that they’re simply gluing things together and standing them up. “That’s as forward thinking as the people who presume that AI can generate all your code and all your tests now,” he said. “The interesting thing about this is that your cloud native world is relying on a massive amount of component reuse. The promises are really great. But it’s also a trust assumption that the people who built those pieces did a good job. We don’t yet have certification standards for components that help us understand what the quality of this component is.”

He suggested the industry create a bill of materials that includes testing. “This thing was built according to these standards, whatever they are, and tested and passed. And the more we move toward a world where lots of code is built by people assembling components, the more important it will be that those components are well built, well tested and well understood.”

Appvance’s Parker suggests doing testing as close to code delivery as possible. “If you remember when you went to test automation school, we were always taught that we don’t test the code, we test against the requirements,” he said. “But the modern technologies that we use for test automation require us to have the code handy. Until we actually see the code, we can’t find those [selectors]. So we’ve got to find ways where we can do just that, that is bring our test automation technology as far left in the development lifecycle as possible. It would be ideal if we had the ability to use the same source that the developers use to be able to write our tests, so that as dev finishes, test finishes, and we’re able to test immediately, and of course, if we use the same source that dev is using, then we will find that Holy Grail and be testing against requirements. So for me, that’s where we have to get to, we have to get to that place where dev and test can work in parallel.”

As Parker noted earlier, there are hundreds of types of testing tools on the market – for functional testing, performance testing, UI testing, security testing, and more. And Parasoft’s Hicken pointed out the tension organizations have between using specialized, discrete tools or tools that work well together. “In an old school traditional environment, you might have an IT department where developers write some tests. And then testers write some tests, even though the developers already wrote tests, and then the performance engineers write some tests, and it’s extremely inefficient. So having performance tools, end-to-end tools, functional tools and unit test tools that understand each other and can talk to each other, certainly is going to improve not just the speed at which you can do things and the amount of effort, but also the collaboration that goes on between the teams, because now the performance team picks up a functional scenario. And they’re just going to enhance it, which means the next time, the functional team gets a better test, and it’s a virtuous circle rather than a vicious one. So I think that having a good platform that does a lot of this can help you.”

Coverage: How much is enough?

Fernando Mattos, director of product marketing at test company mabl, believes that test coverage for flows that are very important should come as close to 100% as possible. But determining what those flows are is the hard part, he said. “We have reports within mabl that we try to make easy for our customers to understand. Here are all the different pages that I have on my application. Here’s the complexity of each of those. And here are the tests that have touched on those, the elements on those pages. So at least you can see where you have gaps.”
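The kind of gap report Mattos describes boils down to cross-referencing the pages or flows an application exposes with the ones its tests actually touch. Here is a rough Python sketch of that idea; the page names and importance weights are invented for illustration, not drawn from mabl's reports.

```python
# Hypothetical inventory: pages/flows with a business-importance weight (1-10).
pages = {"checkout": 10, "login": 9, "search": 7, "account_settings": 4, "help_center": 2}

# Pages touched by at least one automated test (also hypothetical).
tested = {"login", "search", "help_center"}

# Rank untested pages by how much they matter, so the riskiest gaps surface first.
gaps = sorted(((weight, page) for page, weight in pages.items() if page not in tested), reverse=True)
for weight, page in gaps:
    print(f"GAP: {page} (importance {weight}) has no test coverage")

covered_weight = sum(weight for page, weight in pages.items() if page in tested)
print(f"Weighted coverage: {covered_weight / sum(pages.values()):.0%}")
```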

It is common practice today for organizations to emphasize thorough testing of the critical pieces of an application, but Mattos said it comes down to balancing the time you have for testing and the quality that you’re shooting for, and the risk that a bug would introduce.

“If the risk is low, you don’t have time, and it’s better for your business to be introducing new features faster than necessarily having a bug go out that can be fixed relatively quickly… and maybe that’s fine,” he said.

Parker said AI can help with coverage when it comes to testing every conceivable user experience. “The problem there,” he said, “is this word conceivable, because it’s humans conceiving, and our imagination is limited. Whereas with AI, it’s essentially an unlimited resource to follow every potential possible path through the application. And that’s what I was saying earlier about those first bugs that get reported after a new release, when the end user goes off the script. We need to bring AI so that we can not only autonomously generate tests based on what we read in the test cases, but that we can also test things that nobody even thought about testing, so that the delivery of software is as close to being bug free as is technically possible.”

Parasoft’s Hicken holds the view that testing without coverage isn’t meaningful.  “If I turn a tool loose and it creates a whole bunch of new tests, is it improving the quality of my testing or just the quantity? We need to have a qualitative analysis and at the moment, coverage gives us one of the better ones. In and of itself, coverage is not a great goal. But the lack of coverage is certainly indicative of insufficient testing. So my pet peeve is that some people say, it’s not how much you test, it’s what you test. No. You need to have as broad code coverage as you can have.”

The all-important user experience

It’s important to have someone who is very close to the customer, who understands the customer journey but not necessarily anything about writing code, creating tests, according to mabl’s Mattos. “Unless it’s manual testing, it tends to be technical, requiring writing code and updating test scripts. That’s why we think low code can really be powerful, because it can allow somebody who’s close to the customer but not technical… customer support, customer success. They are not typically the ones who can understand GitHub and code and how to write it and update that – or even understand what was tested. So we think low code can bridge this gap. That’s what we do.”

Where is this all going?

The use of generative AI to write tests is the evolution everyone wants to see, Mattos said. “We’ll get better results by combining human insights. We’re specifically working on AI technology that will allow implementing and creating test scripts, but still using human intellect to understand what is actually important for the user. What’s important for the business? What are those flows, for example, that go to my application on my website, or my mobile app that actually generates revenue?”

“We want to combine that with the machine,” he continued. “So the human understands the customer, the machine can replicate and create several different scenarios that traverse those. But of course, right, lots of companies are investing in allowing the machine to just navigate through your website and find out the different corners, but they weren’t able to prioritize for us. We don’t believe that they’re gonna be able to prioritize which ones are the most important for your company.”

Keysight’s Wright said the company is seeing value in generative AI capabilities. “Is it game changing? Yes. Is it going to get rid of manual testers? Absolutely not. It still requires human intelligence around requirements, engineering, feeding in requirements, and then humans identifying that what it’s giving you is trustworthy and is valid. If it suggests that I should test (my application) with every single language and every single country, is it really going to find anything I might do? But in essence, it’s just boundary value testing, it’s not really anything that spectacular and revolutionary.”

Wright said organizations that have dabbled with automation over the years and have had some levels of success are now just trying to get that extra 10% to 20% of value from automation, and get wider adoption across the organization. “We’ve seen a shift toward not tools but how do we bring a platform together to help organizations get to that point where they can really leverage all the benefits of automation. And I think a lot of that has been driven by open testing.” 

“As easy as it should be to get your test,” he continued, “you should also be able to move that into what’s referred to in some industries as an automation framework, something that’s in a standardized format for reporting purposes. That way, when you start shifting up, and shifting the quality conversation, you can look at metrics. And the shift has gone from how many tests am I running, to what are the business-oriented metrics? What’s the confidence rating? Are we going to hit the deadlines? So we’re seeing a move toward risk-based testing, and really more agility within large-scale enterprises.”


A guide to automated testing tools | https://sdtimes.com/test/a-guide-to-automated-testing-tools-5/ | Fri, 22 Sep 2023

The following is a listing of automated testing tool providers, along with a brief description of their offerings.

FEATURED PROVIDERS

APPVANCE is the leader in generative AI for Software Quality. Its premier product AIQ is an AI-native, unified software quality platform that delivers unprecedented levels of productivity to accelerate digital transformation in the enterprise. Leveraging generative AI and machine learning, AIQ robots autonomously validate all the possible user flows to achieve complete application coverage.

KEYSIGHT is a leader in test automation, where our AI-driven, digital twin-based solutions help innovators push the boundaries of test case design, scheduling, and execution. Whether you’re looking to secure the best experience for application users, analyze high-fidelity models of complex systems, or take proactive control of network security and performance, easy-to-use solutions including Eggplant and our broad array of network, security, traffic emulation, and application test software help you conquer the complexities of continuous integration, deployment, and test.

MABL is the enterprise SaaS leader of intelligent, low-code test automation that empowers high-velocity software teams to embed automated end-to-end tests into the entire development lifecycle. Mabl’s platform for easily creating, executing, and maintaining reliable browser, API and mobile web tests helps teams quickly deliver high-quality applications with confidence. That’s why brands like Charles Schwab, jetBlue, Dollar Shave Club, Stack Overflow, and more rely on mabl to create the digital experiences their customers demand.

PARASOFT helps organizations continuously deliver high-quality software with its AI-powered software testing platform and automated test solutions. Supporting embedded and enterprise markets, Parasoft’s proven technologies reduce the time, effort, and cost of delivering secure, reliable, and compliant software by integrating everything from deep code analysis and unit testing to UI and API testing, plus service virtualization and complete code coverage, into the delivery pipeline. 

OTHER PROVIDERS

Applitools is built to test all the elements that appear on a screen with just one line of code, across all devices, browsers and all screen sizes. We support all major test automation frameworks and programming languages covering web, mobile, and desktop apps.

Digital.ai Continuous Testing provides expansive test coverage across 2,000+ real mobile devices and web browsers, and seamlessly integrates with best-in-class tools throughout the DevOps/DevSecOps pipeline.

IBM: Quality is essential and the combination of automated testing and service virtualization from IBM Rational Test Workbench allows teams to assess their software throughout their delivery life cycle. IBM has a market leading solution for the continuous testing of end-to-end scenarios covering mobile, cloud, cognitive, mainframe and more. 

Micro Focus enables customers to accelerate test automation with one intelligent functional testing tool for web, mobile, API and enterprise apps. Users can test both the front-end functionality and back-end service parts of an application to increase test coverage across the UI and API.

Kobiton offers GigaFox on-premises or hosted, and solves mobile device sharing and management challenges during development, debugging, manual testing, and automated testing. A pre-installed and pre-configured Appium server provides “instant on” Appium test automation.

Orasi is a leading provider of software testing services, utilizing test management, test automation, enterprise testing, Continuous Delivery, monitoring, and mobile testing technology. 

ProdPerfect is an autonomous, end-to-end (E2E) regression testing solution that continuously identifies, builds and evolves E2E test suites via data-driven, machine-led analysis of live user behavior data. It addresses critical test coverage gaps, eliminates long test suite runtimes and costly bugs in production.  

Progress Software’s Telerik Test Studio is a test automation solution that helps teams be more efficient in functional, performance and load testing, improving test coverage and reducing the number of bugs that slip into production. 

Sauce Labs provides a cloud-based platform for automated testing of web and mobile applications. Optimized for use in CI and CD environment, and built with an emphasis on security, reliability and scalability, users can run tests written in any language or framework using Selenium or Appium.

SmartBear offers tools for software development teams worldwide, ensuring visibility and end-to-end quality through test management, automation, API development, and application stability. Popular tools include SwaggerHub, TestComplete, BugSnag, ReadyAPI, Zephyr, and others. 

testRigor helps organizations dramatically reduce time spent on test maintenance, improve test stability, and dramatically improve the speed of test creation. This is achieved through its support of “plain English” language that allows users to describe how to find elements on the screen and what to do with those elements from the end-user’s perspective. People creating tests on their system build 2,000+ tests per year per person. On top of it,  testRigor helps teams deploy their analytics library in production that will make systems automatically produce tests reflecting the most frequently used end-to-end flows from production.


Quality engineering should be an enterprise-wide endeavor | https://sdtimes.com/testing/quality-engineering-should-be-an-enterprise-wide-endeavor/ | Thu, 22 Jun 2023

Everyone, it seems, wants to shift all the steps required to produce and deliver quality, performant software to the left. The assumption is that by asking developers to take on a greater role in quality assurance and security, the cost to remediate problems is lowered by discovering those issues earlier. 

The downside of this is that developers now say that they spend not even half of their time on coding, meaning that instead of working on innovative new products or features, they’re learning how to test all aspects of their application or trying to understand how to secure their code. (Thanks, “you build it, you own it!”)  

Many of these same developers also report in surveys that testing is a big headache for them. “Rather than reducing stress for developers, shift left has introduced new obstacles,” said Gevorg Hovsepyan, head of product at test automation platform mabl. “They built something. They try to deploy but testing breaks. Then they are responsible for fixing the test, trying to update the test, then running the test again. And if things don’t work out, they can spend a lot of time trapped in this cycle.” 

But developers often lack the proper tools and proper training to handle the burden of testing. And, as mandated delivery cycles grow ever shorter, it’s easy to see how testing becomes a significant stress driver for developers. 

As deployment frequency increases, so does the level of testing required to ensure that quality is maintained. This is where test automation can relieve many of the mundane tasks that slow developers down. “We see in our most recent Testing in DevOps Report that teams who have high test coverage have less stress in deployment,” Hovsepyan said. “In fact, if we compare the teams that have high coverage to the teams that don’t, the high-coverage teams are twice as likely to have stress-free deployments.” 

But, Hovsepyan noted, it’s not only about automation – it’s also about testing strategies. 

And this is where shifting testing to the left also impacts quality assurance engineers, whose role is changing – from writing and running the tests to becoming the architects of quality. In these scenarios, QA becomes a center of excellence that enables product teams – developers, owners, and designers – to deliver high-quality software. 

QA engineers in organizations that have shifted testing left are defining what quality looks like and enabling more people to participate in building quality software. “We’re seeing people in our customer base taking more of a leadership role in thinking about what quality really means and what it looks like in the supply chain, versus focusing exclusively on automating test cases.” 

Quality is a team effort 

Organizations doing very frequent deployments must understand that the test effort should be shared, and everyone should participate in building quality into the software, Hovsepyan said. 

“This means that developers are not always the ones building the tests, which makes it much easier for developers to support testing,” he explained. “They might build the initial set, but low code means that test creation is faster, and more roles can contribute there. That’s number one, and number two, quality efforts are built around improving the customer experience, rather than a binary pass/fail mindset. This helps everyone – developers, QA, and business stakeholders – understand the true impact of quality engineering and see the value in testing efforts.”

With quality engineering, teams test the full user journey by going through the steps of logging in, finding what they need, checking out, and those steps in between. “I think using an outcomes-focused approach to testing that’s focused on the user journey versus spending cycles of editing the code in the script helps improve testing efficiency and accelerates development cycles.” 
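As a concrete illustration of a journey-level test that walks through logging in, finding an item, and checking out, here is a short sketch using Playwright's Python API. The URL, selectors, and credentials are hypothetical, and this is generic scripted automation rather than mabl's low-code approach.

```python
from playwright.sync_api import sync_playwright, expect

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()

    # Step 1: log in (URL and selectors are placeholders).
    page.goto("https://shop.example.test/login")
    page.fill("#email", "demo@example.test")
    page.fill("#password", "not-a-real-password")
    page.click("button:has-text('Log in')")

    # Step 2: find what the user needs and add it to the cart.
    page.fill("#search", "running shoes")
    page.keyboard.press("Enter")
    page.click(".product-card >> nth=0")
    page.click("button:has-text('Add to cart')")

    # Step 3: check out and confirm the journey completed end to end.
    page.goto("https://shop.example.test/checkout")
    page.click("button:has-text('Place order')")
    expect(page.locator(".order-confirmation")).to_be_visible()

    browser.close()
```

The point of testing at this level is that each step exercises the same path a customer takes, so a failure maps directly to a broken user experience rather than a broken unit.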

Reduce the test effort 

To ease the load on developers, organizations need to start thinking beyond the typical, traditional technology stack and looking into modern testing solutions, Hovsepyan suggested. For instance, he said, adopting low-code automation tools enables teams beyond developers to join the testing practices. 

These cloud-based solutions often employ AI to reduce the burden of test creation and maintenance for everyone, including developers, he noted. These modern tools leverage intelligence to help teams reap the benefits of thorough testing without slowing development cycles.  

Finally, Hovsepyan pointed out that organizations following Agile practices – iterate on small changes, get customer feedback, and iterate again – further reduce the risk of delivering suboptimal experiences.  

“These transformations – cloud, Agile, and quality engineering – all fit together,” he said. “Modern technology stacks give teams the means to deploy multiple times per day, methodologies like Agile give them the processes, and quality engineering ensures that changes only improve the customer experience.” 

Content provided by SD Times and mabl

Mabl’s load testing offering provides increased insight into app performance | https://sdtimes.com/test/mabls-load-testing-offering-provides-increased-insight-into-app-performance/ | Wed, 03 May 2023

Low-code intelligent automation company mabl today announced its new load testing offering, aimed at allowing engineering teams to assess how their application will perform under production load.

This capability integrates into mabl’s SaaS platform so that users can enhance the value of existing functional tests, move performance testing to an earlier phase of the development lifecycle, and cut down on infrastructure and operations costs.

“The primary goal is to help customers test application changes under production load before they release them so that they can detect any new bottlenecks or things that they would have experienced as the changes hit production before release,” said Dan Belcher, co-founder of mabl.

According to the company, these API load testing capabilities allow for the unification of functional and non-functional testing by utilizing functional API tests for performance and importing Postman Collections to cut down on the time it takes to create tests. 
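Conceptually, an API load test reuses the same request a functional test makes, but issues it many times concurrently and examines the latency distribution and error rate instead of a single pass/fail. The following Python sketch illustrates that idea with the standard library; the endpoint is a placeholder, and this is not how mabl implements its load testing.

```python
import time
import statistics
import concurrent.futures
import urllib.request

ENDPOINT = "https://api.example.test/health"   # placeholder URL
REQUESTS = 200
CONCURRENCY = 20

def timed_call(_):
    # Issue one request, recording its latency and whether it succeeded.
    start = time.monotonic()
    try:
        with urllib.request.urlopen(ENDPOINT, timeout=10) as resp:
            ok = 200 <= resp.status < 300
    except Exception:
        ok = False
    return (time.monotonic() - start) * 1000, ok   # latency in ms, success flag

with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(timed_call, range(REQUESTS)))

latencies = sorted(ms for ms, _ in results)
errors = sum(1 for _, ok in results if not ok)
p95 = latencies[int(0.95 * (len(latencies) - 1))]
print(f"p50={statistics.median(latencies):.0f}ms  p95={p95:.0f}ms  errors={errors}/{REQUESTS}")
```

Run against a staging environment before release, a report like this surfaces new bottlenecks as a shift in p95 latency or error count rather than as a production incident.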

Mabl also stated that this performance testing lowers the barrier to a sustainable and collaborative performance testing practice, even for teams that do not have dedicated performance testers or specific performance testing tools. 

“Anyone within the software team can use it, so it is not limited to just the software developers or just the performance experts,” Belcher said. “Because we’re low-code and already handling the functional testing, it makes it super easy for the teams to be able to define and execute performance tests on their own without requiring specialized skills.”

Furthermore, these tests can also be configured to run alongside functional tests on demand, on a schedule, or as a part of CI/CD pipelines. 

A guide to automated testing tools | https://sdtimes.com/test/a-guide-to-automated-testing-tools-4/ | Thu, 01 Dec 2022

The following is a listing of automated testing tool providers, along with a brief description of their offerings.
FEATURED PROVIDERS

mabl is the enterprise SaaS leader of intelligent, low-code test automation that empowers high-velocity software teams to embed automated end-to-end tests into the entire development lifecycle. Customer-centric brands rely on mabl’s unified platform for creating, managing, and running automated tests that result in faster delivery of high-quality, business critical applications. Learn more at https://www.mabl.com; follow @mablhq on Twitter and @mabl on LinkedIn.

Parasoft helps organizations continuously deliver quality software with its market-proven automated software testing solutions. Parasoft’s AI-enhanced technologies reduce the time, effort, and cost of delivering secure, reliable, compliant software with everything from deep code analysis and unit testing to web UI and API testing, plus service virtualization and merged code coverage. Bringing all this together, Parasoft’s award-winning reporting and analytics dashboard delivers a centralized view of application quality, enabling organizations to deliver with confidence.

testRigor helps organizations dramatically reduce time spent on test maintenance, improve test stability, and dramatically improve the speed of test creation. This is achieved through its support of “plain English” language that allows users to describe how to find elements on the screen and what to do with those elements from the end-user’s perspective. People creating tests on their system build 2,000+ tests per year per person. On top of it,  testRigor helps teams deploy their analytics library in production that will make systems automatically produce tests reflecting the most frequently used end-to-end flows from production.

OTHER PROVIDERS

Applitools is built to test all the elements that appear on a screen with just one line of code. Using Visual AI, you can automatically verify that your web or mobile app functions and appears correctly across all devices, all browsers and all screen sizes. It is designed to integrate with your existing tests. We support all major test automation frameworks and programming languages covering web, mobile, and desktop apps.

Appvance IQ can generate its own tests, surfacing critical bugs in minutes with limited human involvement in web and mobile applications. AIQ empowers enterprises to improve the quality, performance and security of their most critical applications, while transforming the efficiency and output of their testing teams and lowering QA costs.

Digital.ai Continuous Testing enables organizations to reduce risk and provide their customers satisfying, error-free experiences — across all devices and browsers. Digital.ai Continuous Testing provides expansive test coverage across 2000+ real mobile devices and web browsers, and seamlessly integrates with best-in-class tools throughout the DevOps/DevSecOps pipeline.

HCL Software develops, markets, sells, and supports over 20 product families with particular focus on Customer Experience, Digital Solutions, Secure DevOps, and Security & Automation. Its mission is to drive ultimate customer success of their IT investments through relentless innovation of our software products. 

HPE Software’s automated testing solutions simplify software testing within fast-moving agile teams and for continuous integration scenarios. Integrated with DevOps tools and ALM solutions, HPE automated testing solutions keep quality at the center of today’s modern applications and hybrid infrastructures. 

IBM: Quality is essential and the combination of automated testing and service virtualization from IBM Rational Test Workbench allows teams to assess their software throughout their delivery life cycle. IBM has a market leading solution for the continuous testing of end-to-end scenarios covering mobile, cloud, cognitive, mainframe and more. 

Keysight Technologies Digital Automation Intelligence (DAI) platform is the first AI-driven test automation solution with unique capabilities that make the testing process faster and easier. With DAI, you can automate 95% of activities, including test-case design, test execution, and results analysis. This enables teams to rapidly accelerate testing, improve the quality of software and integrate with DevOps at speed. The intelligent automation reduces time to market and ensures a consistent experience across all devices.

Micro Focus enables customers to accelerate test automation with one intelligent functional testing tool for web, mobile, API and enterprise apps. Users can test both the front-end functionality and back-end service parts of an application to increase test coverage across the UI and API.

Microsoft’s Visual Studio helps developers create, manage, and run unit tests by offering the Microsoft unit test framework or one of several third-party and open-source frameworks. The company provides a specialized tool set for testers that delivers an integrated experience starting from Agile planning to test and release management, on-premises or in the cloud. 

Kobiton offers its patented GigaFox on-premises or hosted, and solves mobile device sharing and management challenges during development, debugging, manual testing, and automated testing. A pre-installed and pre-configured Appium server provides “instant on” Appium test automation.

NowSecure identifies the broadest array of security threats, compliance gaps and privacy issues in custom-developed, commercial, and business-critical mobile apps. NowSecure customers can choose automated software on-premises or in the cloud, expert professional penetration testing and managed services, or a combination of all as needed. 

Orasi is a leading provider of software testing services, utilizing test management, test automation, enterprise testing, Continuous Delivery, monitoring, and mobile testing technology. 

Perfecto users can pair their favorite frameworks with Perfecto to automate advanced testing capabilities, like GPS, device conditions, audio injection, and more. It also includes full integration into the CI/CD pipeline, so continuous testing improves efficiencies across all of DevOps.

ProdPerfect is an autonomous, end-to-end (E2E) regression testing solution that continuously identifies, builds and evolves E2E test suites via data-driven, machine-led analysis of live user behavior data. It addresses critical test coverage gaps, eliminates long test suite runtimes and costly bugs in production, and removes the QA burden that consumes massive engineering resources.  

Progress Software’s Telerik Test Studio is a test automation solution that helps teams be more efficient in functional, performance and load testing, improving test coverage and reducing the number of bugs that slip into production. 

Sauce Labs provides a cloud-based platform for automated testing of web and mobile applications. Optimized for use in CI and CD environment, and built with an emphasis on security, reliability and scalability, users can run tests written in any language or framework using Selenium or Appium.

SmartBear tools are built to streamline your process while seamlessly working with your existing products. Whether it’s TestComplete, Swagger, Cucumber, ReadyAPI, Zephyr, or one of our other tools, we span test automation, API life cycle, collaboration, performance testing, test management, and more. 

Synopsys offers a powerful and highly configurable test automation flow that provides seamless integration of all Synopsys TestMAX capabilities. Early validation of complex DFT logic is supported through full RTL integration while maintaining physical, timing and power awareness through direct links into the Synopsys Fusion Design Platform.

SOASTA’s Digital Performance Management (DPM) Platform includes five technologies: TouchTest mobile functional test automation; mPulse real user monitoring (RUM); the CloudTest platform for continuous load testing; Digital Operation Center (DOC) for a unified view of contextual intelligence accessible from any device; and Data Science Workbench, simplifying analysis of current and historical web and mobile user performance data. 

Tricentis Tosca, the #1 continuous test automation platform, accelerates testing with a script-less, AI-based, no-code approach for end-to-end test automation. With support for over 160+ technologies and enterprise applications, Tosca provides resilient test automation for any use case. 

How these companies can help with your automated testing initiatives | https://sdtimes.com/test/how-these-companies-can-help-with-your-automated-testing-initiatives/ | Thu, 01 Dec 2022

We asked these tool providers to share more information on how their solutions help companies with automated testing. Their responses are below.

Darrel Farris, manager of solutions engineering at mabl

Software development teams are realizing that automated testing is key to accelerating product velocity and reaching the full potential of DevOps. When fully integrated into a company’s development pipeline, testing becomes an early alert system for short-term defects as well as long-term performance issues. The key to realizing this potential: simple test creation and rich reporting features. 

Mabl is low-code, intelligent test software that allows everyone to create automated tests covering web UIs, APIs, and mobile browsers with 80% less effort. Quality teams can extend the value of end-to-end tests even further with automated accessibility checks that help ensure every user has a delightful experience, regardless of access needs. Machine learning and AI features like auto-healing and Intelligent Wait help teams create more reliable tests and reduce test maintenance. Results from every test are tracked within mabl’s comprehensive suite of reporting features, making it easy to understand product quality trends. With test creation simplified and quality data at their fingertips, everyone can focus on resolving defects quickly and improving product quality. 

Mabl also includes native integrations with tools like Microsoft Teams, Slack, and Jira, so that testing information can be seamlessly integrated into workflows and everyone can benefit from mabl’s rich diagnostic data. Teams can monitor performance with speed indexes for all web pages, and manage API quality with data on the response time for each API endpoint. This allows teams to shift from reacting to failed tests and customer complaints to proactively managing product quality, improving the customer experience.

Arthur Hicken, chief evangelist at Parasoft

At Parasoft, we have various AI components and capabilities that augment the testers’ work at every layer of the testing pyramid. 

Our AI improves the static analysis experience with fewer false positives, better prioritization and understanding of risk models, and it has the necessary standards such as ISO 26262, PCI-DSS, OWASP, and CWE for compliance in certain industries. 

On top of that, we have advanced test creation with the generation of mocks and stubs to follow the best practices of unit testing in isolation and we have the tools that can help you determine how you can expand a test to provide additional code coverage. 

Test impact analysis helps you understand what tests you need to run when there are changes in code, tests, or requirements. 

We also have AI for API testing to record manual tester behavior and automatically convert that into API tests that are highly maintainable and execute quickly. We can apply AI to create test assets that not only perform functional testing, but you can automatically apply additional testing like security tests, or load and performance tests.

Further, we can use AI to capture a manual test and use it to create a test that can be run automatically because it can be automated and integrated in regression, have AI-based self-healing capabilities and perform security tests without additional tester effort or special training.

Parasoft’s solution can perform deep code analysis, which provides users with the ability to find structural problems. It also helps in functional testing, whether API testing, UI testing, or automated testing. We have a unique position in testing because our solutions cover both a white-box view at the code level as well as a black-box view at the functional and application level. Because we have both views, it enables us to make inferences that wouldn’t be possible otherwise. So, we can start to correlate literally what’s going on at the code analysis level and the unit and functional test level with what the external tests are doing and use this to provide better advice on where a problem exists in the code and how to repair it.

Parasoft’s capability of using AI to automate testing and having a full understanding from deep code analysis all the way through the external testing lets us provide a better experience to the end user. 

Artem Golubev, CEO at testRigor

testRigor empowers manual testers to build functional end-to-end test automation at any degree of complexity, without the need for engineering knowledge in the mix. If a user can express manual test case steps in English, they’ll be able to build tests on the platform. testRigor will then execute the test for you from a human’s standpoint, interacting with a web, native, or mobile application. 

Any person, including those who don’t necessarily have coding skills, will be able to edit, maintain, and upgrade those tests, in addition to creating them. Also, our tests were measured to be 200 times more stable than Selenium tests, and our customers are typically spending 95% less time managing these tests.

The QA teams can then be freed from click-through manual regression testing and maintaining automated scripts because the issue of maintenance with testRigor is eliminated for good. 

Just ask Keith Powe, VP of Engineering at IDT Corporation. His team could automate only four test cases a week per person, but with testRigor, they have increased their testing coverage from less than 34% to more than 91% in under 9 months. Spending a maximum of 0.1% of the time in test maintenance, IDT has a 90% reduction in bugs and a more effective CI/CD. Many other companies such as Upgrade, DataHerald, and others have cited drastic improvements in their testing strategy with the benefits that testRigor offers. 

Be sure to visit our site https://testrigor.com/ to learn more about how testRigor can help solve the biggest challenges that you’re facing with automated testing today.

 

To read the full Buyers Guide, click here. To see the guide to automated testing tools, click here.

The post How these companies can help with your automated testing initiatives appeared first on SD Times.

While automated testing has rebounded this year, it still has a long way to go https://sdtimes.com/test/while-automated-testing-has-rebounded-this-year-it-still-has-a-long-way-to-go/ Thu, 01 Dec 2022 20:47:49 +0000

Despite all the changes automated software testing has undergone in recent years, data shows that it still has some way to go to accelerate delivery of value and quality to the business, according to Forrester. 

However, while test automation coverage saw a notable dip during the pandemic, it rebounded last year, according to SmartBear’s State of Quality Testing 2022 report.

Last year, the share of companies performing only manual tests was 11%; that number dwindled to 7% this year, almost returning to the pre-pandemic level of 5%.

When looking at the different types of tests and how they are performed, over half of respondents reported using manual testing for usability and user acceptance tests.

Unit tests, performance tests, and BDD framework tests were highest among all automated testing. 

This year, the most time-consuming activity was performing manual and exploratory tests, cited by 26% of respondents, up from 18% last year. Over the same period, the share citing learning how to use test tools as their most time-consuming testing challenge fell from 22% to just 8%.

In the Agile and DevOps realm there are higher levels of automation than at companies still in the waterfall stages, according to Diego Lo Giudice, VP and principal analyst at Forrester. This is inherent to DevOps, because if most of the testing is manual, it’s just going to slow down the rest of the team.

“With DevOps and all the automation going on around it, testing needs to be very high, it needs to be above 80%. You kind of see that only for a few companies or specific projects inside an organization, but if you look at the rest of the market, probably it’s less than 30%,” Lo Giudice said. “I would say we’ve made some progress, but there’s more automation that’s needed.”

In fact, some of the companies that are adopting agile or DevOps methods find that testing sometimes becomes the bottleneck to rapid delivery, according to Darrel Farris, manager of solutions engineering at mabl. Testing in DevOps must be integrated into the pipeline so developers aren’t throwing code over to QA that hasn’t been tested – especially if teams are deploying multiple times per week or month.

Some of the big challenges to implementing automated testing are a lack of skills and the fact that test automation requires change within the organization.

“So there are a number of changes regarding people, processes, and technology, it’s not just getting a tool. And automating tests, this is about organizing, testing completely in a different way,” Lo Giudice added. 

Challenges with getting automated testing just right 

“One of the challenges we see from people is that they’re fundamentally approaching this wrong. We’ve had some of our customers talk about this, how they had to change the way they were thinking. The kind of common, obvious symptom that you see about this today is people saying, ‘We had a whole bunch of manual testers, and so we’ll build a whole strategy on recording what they do and playing it back and building from there.’ And this is just fundamentally the wrong approach,” said Arthur Hicken, chief evangelist at Parasoft.

Another challenge is that automated tests can become incredibly time-consuming to maintain due to the sheer number of tests that are generated. 

“The largest issue is that once a person builds 300 tests, it becomes a full-time job to maintain those tests and you hit the ceiling,” said Artem Golubev, CEO at testRigor. “Coupled with the fact that budgets are limited, people just can’t build more automations.”

Golubev added that this difficulty of maintaining all automated tests is the main reason why the majority of tests are still executed manually today. Automating tests can also be futile if it’s focused on the wrong areas.

“QA teams are spending 80% of their weeks maintaining scripts due to rapidly changing UIs, instead of focusing on growing functional test coverage or expanding the types of testing they are doing on their application, such as accessibility or performance testing,” mabl’s Farris said. 

“I believe the testing pyramid is built on false assumptions that have never been correct in the first place,” Golubev said. “In a perfect vacuum, of course this is how things work and there are maybe one or two companies which have done it that way. In a real scenario, it’s always been more of an hourglass shape of testing.” 

He explained that this is because engineers who mostly write unit tests are very unlikely to contribute to end-to-end tests, very few engineers write integration tests since they are such a pain to maintain, and end-to-end tests pile up to the point where people work on them full-time.

While the value of an integration test is to make sure the system integrates properly, that doesn’t matter if a user goes in and the system doesn’t work properly, Golubev continued. End-to-end tests are actually the ones covering integration, because those are the tests that prove your system is usable by your end users.

“Let’s say you’re logging into a banking application and they can’t transfer money from account A to account B, then it does not matter. Even if all your integration tests are green and all your unit tests pass through it, it’s completely useless,” Golubev said. “So the most important tests are end-to-end tests, only then can that system function as intended. And therefore end-to-end tests should be the bulk of the tests that are done.”

The best way to optimize end-to-end tests and make them run faster is to prioritize, because end-to-end tests will inherently be much slower than unit tests.

“With every type of testing in the organization, people need to assess whether they need to really leverage automation? Is it worth it? Is it something that will be repeated over and over that changes continuously? If you have to run a test, the same test more than three, four times you start asking yourself, well, maybe I should automate this,” Forrester’s Lo Giudice said. “So I don’t think 100% is what customers will achieve and will keep it more towards 80% as I said.”

One of the most efficient ways to make sure that all testing resources are aligned correctly is to align as a team on a testing strategy by starting with the most critical test cases that will ensure a high quality application experience for users, according to mabl’s Farris. This can be done by taking on a few test cases at first, then layering in additional test cases over time.

One way to do this is to create a quality center of excellence or a “quality champion” in the organization. This person or group is a testing expert who can advise and coach everyone from developers to product owners on testing best practices, Farris explained.

Some of the manual testing is changing too because of the increasing use of exploratory testing, Lo Giudice explained. This type of manual testing is where the tester sits down with the developer and they work out the issues together. The tester puts the application through certain scenarios, the developer sees the problems and tries to fix them, and they spend about two hours a day working like that.

The structure around automated testing is shifting

Companies’ attitudes toward testing, and who gets involved in it, have both shifted. As testing becomes more federated, you no longer have a centralized team that does all the testing as an afterthought, according to Lo Giudice.

Now, testers are moving into the development teams and the product teams to get all of the testing done together. What remains in the central team are specialized testing resources that choose the tools and define what the new practices will look like, whether that’s shifting testing to the left or suggesting test-driven development or behavior-driven development.

The test center is now much smaller and works in a consulting role with the teams, while testers move into the teams themselves, Lo Giudice explained.

“So the typical manual tester that used to put a test case in an Excel sheet and run it through the application looking at what the test case told him to do suddenly now finds himself with a tool that is quite technical where he needs to write code to automate what he was doing manually,” Lo Giudice said. To solve this, there’s a trend among vendors to raise the level of abstraction of the tools so that a manual tester or even a person on the business side can test using a low-code testing tool.

Then come the technologies, platforms, and tools, because an organization needs testing tools that are integrated into CI/CD pipelines with the rest of the development and delivery tooling, and that integrate effectively with CI servers in the cloud.

“The point really is that testing takes a village and it takes all these different personas in an organization: business tester, and a subject matter expert in testing who is technical but not a coder, and developers that also may be doing API testing, lower level infrastructure testing within their IDE at a very technical level,” Lo Giudice said. 

According to testRigor’s Golubev, directors of QA will benefit the most from automated testing, since they’ll be able to cover far more functionality faster than they ever could before. However, engineers, manual testers, and product management will also benefit from automated testing tooling, since they’ll be able to collaborate on the same tool.

Previously, it was companies in the banking and health sectors that were getting automated testing right, but now it’s organizations like Lenovo or Volkswagen that have these highly complex software test, build, and deploy systems that are the envy of anybody, Parasoft’s Hicken said. Ultimately, it’s one of the things companies are going to do because that is what their competitors are moving toward.

AI helps with various levels of testing 

When you take the data from all the tests that have run, including the log files and the bugs, and feed it to AI, it can start telling you what you need to test, and how, when a change is coming. It also helps determine whether to run all of the tests or just select the few that will be impacted by the change.
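
A minimal sketch of the underlying idea, change-based test selection, might look like this in Python; the coverage map and file names are invented, and real tools derive this data from instrumentation and history rather than a hand-maintained dictionary:

```python
# A minimal sketch of change-based test selection: given a mapping of which
# source files each test touched on its last run (e.g. from coverage data),
# pick only the tests impacted by the files changed in a commit.
coverage_map = {
    "test_login": {"auth/session.py", "auth/forms.py"},
    "test_checkout": {"cart/pricing.py", "payments/gateway.py"},
    "test_profile": {"users/profile.py"},
}

changed_files = {"payments/gateway.py"}   # e.g. parsed from `git diff --name-only`


def select_impacted_tests(coverage_map, changed_files):
    """Return tests whose covered files intersect the changed files."""
    return sorted(
        test for test, files in coverage_map.items()
        if files & changed_files
    )


print(select_impacted_tests(coverage_map, changed_files))
# -> ['test_checkout']
```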

There have been impressive improvements in computer vision to enable visual testing, Lo Giudice said. There’s a tool out there that sees what the human eye sees when looking at the application and notices things that are going wrong. It can also do this on applications that move too fast for the human eye to capture.

AI can also be taught not to fail tests in certain scenarios, which helps with self-healing. For example, tests can sometimes fail simply because an object moved on the screen: the same application may be laid out differently in a browser than on a mobile device, and that is not necessarily a bug. One can now teach the algorithm not to fail the test even though the object is not in the same position, because it can find that object’s locator somewhere else, Lo Giudice explained.
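
Stripped of the machine learning, the fallback behavior can be pictured with a simple Selenium-style Python sketch; the locators and element names are hypothetical, and real self-healing tools learn the alternates automatically rather than having them hard-coded:

```python
# A simplified sketch of the "don't fail just because the locator moved" idea:
# try a primary locator, then fall back to alternates before giving up.
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By


def find_with_healing(driver, locators):
    """Try each (by, value) locator in order and return the first element found."""
    for by, value in locators:
        try:
            return driver.find_element(by, value)
        except NoSuchElementException:
            continue   # element moved or was renamed; try the next known locator
    raise NoSuchElementException(f"none of the locators matched: {locators}")


def click_submit(driver):
    # Usage: the ID changed between releases, but the button can still be
    # found by its type or its visible text.
    submit = find_with_healing(driver, [
        (By.ID, "submit-btn"),
        (By.CSS_SELECTOR, "button[type='submit']"),
        (By.XPATH, "//button[normalize-space()='Submit']"),
    ])
    submit.click()
```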

There are also AI models that help minimize tests to solve the maintenance problem.

“This is the idea of the AI guiding a person to create tests that are more stable. The Holy Grail is that you create a set of tests that maximize coverage, but minimize the number of tests so that you have less to maintain, and that they’re not brittle,” Hicken said. “You want tests that have proper levels of abstraction, so that you aren’t spending more on keeping them alive than you were in creating them in the first place.”
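
The “maximize coverage, minimize the number of tests” goal Hicken describes is essentially a set-cover problem. A toy greedy sketch of the idea, with coverage data invented for illustration, might look like this:

```python
# A toy greedy sketch of "maximize coverage with as few tests as possible":
# repeatedly pick the test that covers the most still-uncovered code.
coverage = {
    "test_login": {"auth.py", "session.py"},
    "test_checkout": {"cart.py", "pricing.py", "session.py"},
    "test_pricing_only": {"pricing.py"},
}


def minimize_suite(coverage):
    remaining = set().union(*coverage.values())
    chosen = []
    while remaining:
        # Greedy step: the test adding the most new coverage wins this round.
        best = max(coverage, key=lambda t: len(coverage[t] & remaining))
        if not coverage[best] & remaining:
            break
        chosen.append(best)
        remaining -= coverage[best]
    return chosen


print(minimize_suite(coverage))   # e.g. ['test_checkout', 'test_login']
```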

With error clustering, AI can also help find and classify bugs in a way that lets a tester quickly recognize them, and it can suggest the right developer to fix a bug, reducing mean time to repair. It can use data from production to find out which features of the application are used most frequently. There’s even a tool that generates unit tests as you code, which Forrester refers to as a tester TuringBot.
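
To make the error-clustering idea concrete, a toy Python sketch that groups failing tests by message similarity might look like this; real tools use much richer signals than plain string similarity, and the failure data here is invented:

```python
# A toy sketch of error clustering: group failing tests whose messages look
# similar so a tester can triage one representative per cluster.
from difflib import SequenceMatcher

failures = [
    ("test_login", "TimeoutError: element '#submit' not found after 30s"),
    ("test_signup", "TimeoutError: element '#submit' not found after 30s"),
    ("test_checkout", "AssertionError: expected total 42.00, got 0.00"),
]

clusters = []
for name, message in failures:
    for cluster in clusters:
        # Attach to an existing cluster if the messages are close enough.
        if SequenceMatcher(None, message, cluster[0][1]).ratio() > 0.8:
            cluster.append((name, message))
            break
    else:
        clusters.append([(name, message)])

for i, cluster in enumerate(clusters, start=1):
    print(f"cluster {i}: {[name for name, _ in cluster]}")
```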

“AI can also support the execution of more stable tests. For example, tests running in the cloud can execute almost too fast, before your application is in a loaded state,” mabl’s Farris said. “It applies intelligence that can slow down or speed up the execution of your tests by automatically adjusting wait times.”
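
The general mechanism behind such intelligent waits can be sketched generically; this is not mabl’s implementation, just a polling helper that proceeds as soon as an application-readiness condition holds instead of sleeping for a fixed time:

```python
# A generic sketch of an adaptive wait: poll a readiness condition and
# proceed as soon as it holds, up to a timeout. The condition function is
# whatever your test considers "loaded".
import time


def wait_until(condition, timeout=10.0, poll_interval=0.25):
    """Poll `condition()` until it returns truthy or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_interval)
    raise TimeoutError("application did not reach the expected state in time")


# Example (hypothetical page object): wait for a spinner to disappear
# before asserting on the page.
# wait_until(lambda: not page.spinner_visible(), timeout=15)
```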

“So AI is infusing along the entire software development lifecycle. And testing is one of the stages where it’s actually more mature than any other stage of the development lifecycle,” Forrester’s Lo Giudice said. 

To read how providers are helping with automated testing initiatives, click here. To read the guide to automated testing tools, click here.

The post While automated testing has rebounded this year, it still has a long way to go appeared first on SD Times.
