test automation Archives - SD Times https://sdtimes.com/tag/test-automation/ Software Development News

SmartBear Boosts Testing Efficiency and Software Quality with Integrated Load Testing in TestComplete https://sdtimes.com/test/smartbear-boosts-testing-efficiency-and-software-quality-with-integrated-load-testing-in-testcomplete/ Wed, 02 Oct 2024

SmartBear, a provider of software quality and visibility solutions, has integrated the load testing engine of LoadNinja into its automated testing tool, TestComplete. Testers can now re-use their functional tests and run them as a load test in a single workflow, optimizing efficiency and productivity, enhancing test coverage, and reducing costs – solving the pain of needing point solutions to run a complete UI testing suite.

“By integrating load testing capabilities into TestComplete, SmartBear is empowering its customers to enhance software quality through streamlined testing workflows,” said Prashant Mohan, Senior Director of Product Management at SmartBear. “With the ability to automate both functional and load testing from a single location, our customers can now conduct thorough and efficient testing, ensuring their applications perform reliably under heavy loads.”

This integration allows testers to quickly convert existing functional tests into load tests, providing a faster and more efficient way to prepare an application for peak usage. Testers can also leverage AI-driven self-healing, featuring SmartBear HaloAI, to ensure that load tests remain relevant and effective even as the application evolves. With this integration, testers can now automate a full-scale UI test suite composed of essential testing types, including functional testing, visual testing, device cloud testing, and load testing, giving teams exceptional testing coverage within a single tool.

“We can execute tests from our end users’ point of view, unlike other tools that can’t accurately understand the differences from yesterday’s build to today’s,” said Alexei Karas, software test automation professional. “Self-healing with TestComplete also makes load testing easier by enabling us to check, verify, and guarantee that my load tests recorded in the past still correspond to today’s reality.”

SmartBear has also launched test data generation featuring HaloAI. This new offering provides additional test data generation capabilities, delivering a more advanced and customizable solution. With the power of HaloAI, users can put their growing LLM skills to work, inputting simple-text commands to generate tailored datasets that better suit their specific testing needs. This approach not only streamlines the creation process but also ensures the security of customer data by generating it directly within TestComplete, eliminating the risk of using external LLM tools. Testers can quickly and easily create realistic and diverse datasets, enhancing the quality and effectiveness of their data-driven tests.
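The in-tool generation described above is proprietary, but the underlying idea of synthesizing realistic records locally, rather than sending anything to an external LLM, can be sketched in a few lines. All field names and name lists below are illustrative and have no relation to SmartBear's actual API:

```python
import random

# Illustrative sketch of local synthetic test data generation: realistic-looking
# customer records are produced from a fixed seed, so no real production data
# or external service is ever involved.
FIRST_NAMES = ["Ana", "Ben", "Chen", "Dara", "Elif"]
LAST_NAMES = ["Ito", "Khan", "Lopez", "Nguyen", "Okafor"]

def generate_customers(n, seed=0):
    rng = random.Random(seed)  # fixed seed keeps data-driven tests repeatable
    rows = []
    for i in range(n):
        first = rng.choice(FIRST_NAMES)
        last = rng.choice(LAST_NAMES)
        rows.append({
            "id": i + 1,
            "name": f"{first} {last}",
            # the index suffix guarantees unique addresses across the dataset
            "email": f"{first.lower()}.{last.lower()}.{i}@example.com",
            "balance": round(rng.uniform(0, 5000.0), 2),
        })
    return rows

customers = generate_customers(100)
```

Seeding the generator matters in practice: a data-driven test that fails can be re-run against byte-identical data.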

To learn more, register for the webinar, “Test Automation Just Got Stronger & Simpler,” at: https://smartbear.com/resources/webinars/launch-alert-test-automation-just-got-stronger-sim/

For more information, go to: https://smartbear.com/product/testcomplete/

The power of automation and AI in testing environments https://sdtimes.com/test/the-power-of-automation-and-ai-in-testing-environments/ Mon, 25 Mar 2024

Software testing is a critical aspect of the SDLC, but constraints on time and resources can cause software companies to treat testing as an afterthought, rather than a linchpin in product quality.

The primary challenge in the field of testing is the scarcity of talent and expertise, particularly in automation testing, according to Nilesh Patel, Senior Director of Software Services at KMS Technology. Many organizations struggle due to a lack of skilled testers capable of implementing and managing automated testing frameworks. As a result, companies often seek external assistance to fill this gap and are increasingly turning to AI/ML. 

Many organizations possess some level of automation but fail to leverage it fully, resorting to manual testing, which limits their efficiency and effectiveness in identifying and addressing software issues, Patel added. 

Another significant issue is the instability of testing environments and inadequate test data. Organizations frequently encounter difficulties with unstable cloud setups or lack the necessary devices for comprehensive testing, which hampers their ability to conduct efficient and effective tests. The challenge of securing realistic and sufficient test data further complicates the testing process. 

The potential solution for this, KMS’s Patel said, lies in leveraging advanced technologies, such as AI and machine learning, to predict and generate relevant test data, improving test coverage and the reliability of testing outcomes. 

Patel emphasized that applications are becoming more intricate than ever before, so AI/ML technologies are not only essential for managing that complexity but also play a crucial role in enhancing testing coverage by identifying gaps that could have been previously overlooked. 

“If you have GenAI or LLM models, they have algorithms that are actually looking at user actions and how the customers or end users are using the application itself, and they can predict what data sets you need,” Patel told SD Times. “So it helps increase test coverage as well. The AI can find gaps in your testing that you didn’t know about before.”
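The prediction Patel describes can be approximated, in heavily simplified form, by mining usage logs for frequent user paths that no existing test covers. A toy sketch, with invented paths and test names rather than any real product's pipeline:

```python
from collections import Counter

# Toy illustration of usage-driven gap detection: count logged user journeys,
# then surface the frequent paths that no current test exercises.
actions_log = [
    ("login", "search", "checkout"),
    ("login", "search", "checkout"),
    ("login", "profile", "logout"),
]
tested_paths = {("login", "search", "checkout")}

path_counts = Counter(actions_log)
coverage_gaps = [(path, count) for path, count in path_counts.most_common()
                 if path not in tested_paths]
```

A real system would generalize over noisy, high-volume telemetry, but the output is the same in spirit: a ranked list of real user behavior the test suite never sees.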

In an environment characterized by heightened complexity, rapid release expectations, and intense competition, with thousands of applications offering similar functionalities, Patel emphasizes the critical importance of launching high-quality software to ensure user retention despite these challenges. 

This challenge is particularly pronounced in the context of highly regulated industries like banking and health care, where AI and ML technologies can offer significant advantages, not only by streamlining the development process but also by facilitating the extensive documentation requirements inherent to these sectors.

“The level of detail is through the roof and you have to plan a lot more. It’s not as easy as just saying ‘I’m testing it, it works, I’ll take your word for it.’ No, you have to show evidence and have the buy-ins and it’s those [applications] that will probably have longer release cycles,” Patel said. “But that’s where you can use AI and GenAI again because those technologies will help figure out patterns that your business can use.”

Such a system can monitor and analyze user actions and interactions, and predict potential defects. Compliance-driven industries in particular generate vast amounts of data that can be leveraged to improve product testing and coverage. By learning from every possible data point, including the outcomes of test cases, the algorithm enhances its ability to ensure more comprehensive coverage for subsequent releases.

Testing is becoming all hands on deck

More people in the organization are actively engaged in testing to make sure that the application works for their part of the organization, Patel explained. 

“I would say everyone is involved now. In the old days, it used to be just the quality team or the testing team, or maybe some of the software developers involved in testing, but I see it from everyone now. Everyone has to have high-quality products. Even the sales team, they’re doing demos right to their clients, and it has to work, so they have opinions on quality and in that case even serve as your end users,” Patel said.

“Then when they’re selling, they’re getting actual feedback on how the app works. When you see how it works, or how they’re using it, the testers can take that information and generate test cases based on that. So it’s hand in hand. It’s everyone’s responsibility,” he added. 

In the realm of quality assurance, the emphasis is placed on ensuring that business workflows are thoroughly tested and aligned with the end users’ actual experiences. This approach underscores the importance of moving beyond isolated or siloed tests to embrace a comprehensive testing strategy that mirrors real-world usage. Such a strategy highlights potential gaps in functionality that might not be apparent when testing components in isolation. 

To achieve this, according to Patel, it’s crucial to incorporate feedback and observations from all stakeholders, including sales teams, end users, and customers, into the testing process. This feedback should inform the creation of scenarios and test cases that accurately reflect the users’ experiences and challenges. 

By doing so, quality assurance can validate the effectiveness and efficiency of business workflows, ensuring that the product not only meets but exceeds the high standards expected by its users. This holistic approach to testing is essential for identifying and addressing issues before they affect the customer experience, ultimately leading to a more robust and reliable product.

Buyers Guide: AI and the evolution of test automation https://sdtimes.com/test/buyers-guide-the-evolution-of-test-automation/ Fri, 22 Sep 2023

Test automation has undergone quite an evolution in the decades since it first became possible. 

Yet despite the obvious benefits, the digitalization of the software development industry has created some new challenges.

It comes down to three big things, according to Kevin Parker, vice president of product at Appvance. The first is velocity and how organizations “can keep pace with the rate at which developers are moving fast and improving things, so that when they deliver new code, we can test it and make sure it’s good enough to go on to the next phase in whatever your life cycle is,” he said. 

RELATED CONTENT:
A guide to automated testing tools
Take advantage of AI-augmented software testing

The second area is coverage. Parker said it’s important to understand that enough testing is being done, and being done in the right places, to the right depth. And, he added, “It’s got to be the right kind of testing. If you Google test types, it comes back with several hundred kinds of testing.”

How do you know when you’ve tested enough? “If your experience is anything like mine,” Parker said, “the first bugs that get reported when we put a new release out there, are from when the user goes off the script and does something unexpected, something we didn’t test for. So how do we get ahead of that?”

And the final, and perhaps most important, area is the user interface, as this is where the rubber meets the road for customers and users of the applications. “The user interfaces are becoming so exciting, so revolutionary, and the amount of psychology in the design of user interfaces is breathtaking. But that presents even more challenges now for the automation engineer,” Parker said.

Adoption and challenges

According to a report by Research Nester, the test automation market is expected to grow to more than $108 billion by 2031, up from about $17 billion in 2021. Yet as for uptake, it’s difficult to measure the extent to which organizations are successfully using automated testing.

“I think if you tried to ask anyone, ‘are you doing DevOps? Are you doing Agile?’ everyone will say yes,” said Jonathan Wright, chief technologist at Keysight, which owns the Eggplant testing software. “And everyone we speak to says, ‘yes, we’re already doing automation.’ And then you dig a little bit deeper, and they say, ‘well, we’re running some Selenium, running some RPM, running some Postman scripts.’ So I think, yes, they are doing something.”

Wright said most enterprises that are having success with test automation have invested heavily in it, and have established automation as its own discipline. “They’ve got hundreds of people involved to keep this to a point where they can run thousands of scripts,” he said. But in the same breath, he noted that the conversation around test case optimization and risk-based testing still needs to be had. “Is over-testing a problem?” he posited. “There’s a continuous view that we’re in a bit of a tech crunch at the moment. We’re expected to do more with less, and testing, as always, is one of those areas that have been put under pressure. And now, just saying I’ve got 5,000 scripts kind of means nothing. Why don’t you have 6,000 or 10,000? You have to understand that you’re not just adding a whole stack of tech debt into a regression folder that’s giving you this feel-good feeling that I’m running 5,000 scripts a day, but they’re not actually adding any value because they’re not covering new features.”
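Wright’s point about unprioritized script counts is essentially an argument for risk-based test selection. A minimal sketch of that idea, with invented scoring weights and fields rather than any vendor’s actual algorithm:

```python
# Risk-based test selection, heavily simplified: score each regression test by
# its recent failure rate and the churn of the code it exercises, then run only
# the highest-risk slice instead of the whole 5,000-script folder.
tests = [
    {"name": "test_checkout", "fail_rate": 0.12, "code_churn": 40},
    {"name": "test_legacy_report", "fail_rate": 0.00, "code_churn": 0},
    {"name": "test_new_feature", "fail_rate": 0.05, "code_churn": 90},
]

def risk(t):
    # Weights are illustrative; real schemes also factor in business criticality.
    return 0.6 * t["fail_rate"] + 0.4 * (t["code_churn"] / 100)

prioritized = sorted(tests, key=risk, reverse=True)
selected = [t["name"] for t in prioritized[:2]]  # run top 2 this cycle
```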

RELATED CONTENT:
How Cox Automotive found value in automated testing
Accessibility testing
Training the model for testing

Testing at the speed of DevOps

One effect of the need to release software faster is the ever-increasing reliance on open-source software, which may or may not have been tested fully before being let out into the wild.

Arthur Hicken, chief evangelist at Parasoft, said he believes it’s a little forward thinking to assume that developers aren’t writing code anymore, that they’re simply gluing things together and standing them up. “That’s as forward thinking as the people who presume that AI can generate all your code and all your tests now,” he said. “The interesting thing about this is that your cloud native world is relying on a massive amount of component reuse. The promises are really great. But it’s also a trust assumption that the people who built those pieces did a good job. We don’t yet have certification standards for components that help us understand what the quality of this component is.”

He suggested the industry create a bill of materials that includes testing. “This thing was built according to these standards, whatever they are, and tested and passed. And the more we move toward a world where lots of code is built by people assembling components, the more important it will be that those components are well built, well tested and well understood.”

Appvance’s Parker suggests doing testing as close to code delivery as possible. “If you remember when you went to test automation school, we were always taught that we don’t test the code, we test against the requirements,” he said. “But the modern technologies that we use for test automation require us to have the code handy. Until we actually see the code, we can’t find those [selectors]. So we’ve got to find ways where we can do just that, that is, bring our test automation technology as far left in the development lifecycle as possible. It would be ideal if we had the ability to use the same source that the developers use to be able to write our tests, so that as dev finishes, test finishes, and we’re able to test immediately. And of course, if we use the same source that dev is using, then we will find that Holy Grail and be testing against requirements. So for me, that’s where we have to get to: we have to get to that place where dev and test can work in parallel.”

As Parker noted earlier, there are hundreds of types of testing tools on the market – for functional testing, performance testing, UI testing, security testing, and more. And Parasoft’s Hicken pointed out the tension organizations have between using specialized, discrete tools or tools that work well together. “In an old school traditional environment, you might have an IT department where developers write some tests. And then testers write some tests, even though the developers already wrote tests, and then the performance engineers write some tests, and it’s extremely inefficient. So having performance tools, end-to-end tools, functional tools and unit test tools that understand each other and can talk to each other, certainly is going to improve not just the speed at which you can do things and the amount of effort, but also the collaboration that goes on between the teams, because now the performance team picks up a functional scenario. And they’re just going to enhance it, which means the next time, the functional team gets a better test, and it’s a virtuous circle rather than a vicious one. So I think that having a good platform that does a lot of this can help you.”

Coverage: How much is enough?

Fernando Mattos, director of product marketing at test company mabl, believes that test coverage for flows that are very important should come as close to 100% as possible. But determining what those flows are is the hard part, he said. “We have reports within mabl that we try to make easy for our customers to understand. Here are all the different pages that I have on my application. Here’s the complexity of each of those. And here are the tests that have touched on those, the elements on those pages. So at least you can see where you have gaps.”
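A coverage-gap report of the kind Mattos describes reduces, at its core, to a set difference between an application’s pages and the pages its tests touch. A toy version, with illustrative page and test names rather than mabl’s actual schema:

```python
# Toy page-level coverage report: which pages does any test touch, and which
# pages have no test at all?
pages = {"login", "search", "checkout", "profile", "admin"}
tests = {
    "test_login_flow": {"login"},
    "test_purchase": {"login", "search", "checkout"},
}

covered = set().union(*tests.values())  # pages touched by at least one test
gaps = sorted(pages - covered)          # pages no test ever reaches
print(f"covered {len(covered)}/{len(pages)} pages; gaps: {gaps}")
```

Real tools track coverage per element rather than per page, but the question answered is the same: where are the untested areas?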

It is common practice today for organizations to emphasize thorough testing of the critical pieces of an application, but Mattos said it comes down to balancing the time you have for testing and the quality that you’re shooting for, and the risk that a bug would introduce.

“If the risk is low, you don’t have time, and it’s better for your business to be introducing new features faster than necessarily having a bug go out that can be fixed relatively quickly… and maybe that’s fine,” he said.

Parker said AI can help with coverage when it comes to testing every conceivable user experience. “The problem there,” he said, “is this word conceivable, because it’s humans conceiving, and our imagination is limited. Whereas with AI, it’s essentially an unlimited resource to follow every potential possible path through the application. And that’s what I was saying earlier about those first bugs that get reported after a new release, when the end user goes off the script. We need to bring AI so that we can not only autonomously generate tests based on what we read in the test cases, but that we can also test things that nobody even thought about testing, so that the delivery of software is as close to being bug free as is technically possible.”

Parasoft’s Hicken holds the view that testing without coverage isn’t meaningful.  “If I turn a tool loose and it creates a whole bunch of new tests, is it improving the quality of my testing or just the quantity? We need to have a qualitative analysis and at the moment, coverage gives us one of the better ones. In and of itself, coverage is not a great goal. But the lack of coverage is certainly indicative of insufficient testing. So my pet peeve is that some people say, it’s not how much you test, it’s what you test. No. You need to have as broad code coverage as you can have.”

The all-important user experience

It’s important to have someone creating tests who is very close to the customer and understands the customer journey, but who doesn’t necessarily know anything about writing code, according to mabl’s Mattos. “Unless it’s manual testing, it tends to be technical, requiring writing code and updating test scripts. That’s why we think low code can really be powerful, because it can allow somebody who’s close to the customer but not technical… customer support, customer success. They are not typically the ones who can understand GitHub and code and how to write it and update that – or even understand what was tested. So we think low code can bridge this gap. That’s what we do.”

Where is this all going?

The use of generative AI to write tests is the evolution everyone wants to see, Mattos said. “We’ll get better results by combining human insights. We’re specifically working on AI technology that will allow implementing and creating test scripts, but still using human intellect to understand what is actually important for the user. What’s important for the business? What are those flows, for example, that go to my application on my website, or my mobile app that actually generates revenue?”

“We want to combine that with the machine,” he continued. “So the human understands the customer, the machine can replicate and create several different scenarios that traverse those. Of course, lots of companies are investing in allowing the machine to just navigate through your website and find the different corners, but we don’t believe that they’re going to be able to prioritize which ones are the most important for your company.”

Keysight’s Wright said the company is seeing value in generative AI capabilities. “Is it game changing? Yes. Is it going to get rid of manual testers? Absolutely not. It still requires human intelligence around requirements, engineering, feeding in requirements, and then humans identifying that what it’s giving you is trustworthy and is valid. If it suggests that I should test (my application) with every single language and every single country, is it really going to find anything I might do? But in essence, it’s just boundary value testing, it’s not really anything that spectacular and revolutionary.”

Wright said organizations that have dabbled with automation over the years and have had some levels of success are now just trying to get that extra 10% to 20% of value from automation, and get wider adoption across the organization. “We’ve seen a shift toward not tools but how do we bring a platform together to help organizations get to that point where they can really leverage all the benefits of automation. And I think a lot of that has been driven by open testing.” 

“As easy as it should be to get your test,” he continued, “you should also be able to move that into what’s referred to in some industries as an automation framework, something that’s in a standardized format for reporting purposes. That way, when you start shifting up, and shifting the quality conversation, you can look at metrics. And the shift has gone from how many tests am I running, to what are the business-oriented metrics? What’s the confidence rating? Are we going to hit the deadlines? So we’re seeing a move toward risk-based testing, and really more agility within large-scale enterprises.”

Take advantage of AI-augmented software testing https://sdtimes.com/test/take-advantage-of-ai-augmented-software-testing/ Thu, 21 Sep 2023

The artificial intelligence-augmented software-testing market continues to rapidly evolve. As applications become increasingly complex, AI-augmented testing plays a critical role in helping teams deliver high-quality applications at speed. 

By 2027, 80% of enterprises will have integrated AI-augmented testing tools into their software engineering toolchain, which is a significant increase from 10% in 2022, according to Gartner. AI-augmented software-testing tools assist humans in their testing efforts and reduce the need for human intervention. Overall, these tools streamline, accelerate and improve the test workflow. 

The future of the AI-augmented testing market

Many organizations continue to rely heavily on manual testing and aging technology, but market conditions demand a shift to automation, as well as more intelligent testing that is context-aware. AI-augmented software-testing tools will amplify testing capacity and help to eliminate steps that can be performed more efficiently by intelligent technologies. 

Over the next few years, there will be several trends that drive the adoption of AI-augmented software-testing tools, including increasing complexity of applications, increased adoption of agile and DevOps, shortage of skilled automation engineers and the need for maintainability. All of these factors will continue to drive an increasing need for AI and machine learning (ML) to increase the effectiveness of test creation, reduce the cost of maintenance and drive efficient test loops. Additionally, investment in AI-augmented testing will help software engineering leaders to delight their customers beyond their expectations and ensure production incidents are resolved quickly. 

AI augmentation is the next step in the evolution of software testing and is a crucial element for a strategy to reduce significant business continuity risks when critical applications and services are severely compromised or stop working. 

How generative AI can improve software quality and testing 

AI is transforming software testing by enabling improved test efficacy and faster delivery cycle times. AI-augmented software-testing tools use algorithmic approaches to enhance the productivity of testers and offer a wide range of capabilities across different areas of the test workflow.

There are currently several ways in which generative AI tools can assist software engineering leaders and their teams when it comes to software quality and testing:

  • Authoring test automation code is possible across unit, application programming interface (API) and user interface (UI) for both functional and nonfunctional checks and evaluation. 
  • Generative AI can help with general impact analysis, such as comparing different versions of user stories, code files and test results for potential risks and causes, as well as triaging flaky tests and defects. 
  • Test data can be generated for populating a database or driving test cases. This could be common sales data, customer relationship management (CRM) and customer contact information, inventory information, or location data with realistic addresses. 
  • Generative AI offers testers a pairing opportunity for training, evaluating and experimenting in new methods and technologies. This will be of less value than that of human peers who actively suggest improved alternatives during pairing exercises. 
  • Converting existing automated test cases from one framework to another is possible, but will require more human engineering effort, and is currently best used as a pairing and learning activity rather than an autonomous one. 
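The first bullet above, authoring functional checks, typically yields table-driven tests that enumerate boundary and negative cases. A hand-written sketch of that shape, with an invented function and invented cases:

```python
# Table-driven boundary checks of the kind AI-assisted authoring tends to
# produce: cases enumerated as data, driven through one loop.
def apply_discount(price, pct):
    if not 0 <= pct <= 100:
        raise ValueError("pct out of range")
    return round(price * (1 - pct / 100), 2)

cases = [
    (100.0, 0, 100.0),    # lower boundary: no discount
    (100.0, 100, 0.0),    # upper boundary: full discount
    (19.99, 15, 16.99),   # typical mid-range case
]
results = [apply_discount(price, pct) for price, pct, _expected in cases]

# Negative case: out-of-range input must be rejected, not silently accepted.
try:
    apply_discount(10.0, 101)
    rejected = False
except ValueError:
    rejected = True
```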

While testers can leverage generative AI technology to assist in their roles, they should also expect a wave of mobile testing applications that are using generative capabilities. 

Software engineering leaders and their teams can exploit the positive impact of AI tools that use large language models (LLMs), as long as humans remain in the loop and integration with the broad landscape of development and testing tools continues to improve. However, avoid creating prompts to feed into LLM-based systems if they have the potential to contravene intellectual property laws, or expose a system’s design or its vulnerabilities. 

Software engineering leaders can maximize the value of AI by identifying areas of software testing in their organizations where AI will be most applicable and impactful. Modernize teams’ testing capabilities by establishing a community of practice to share information and lessons and budgeting for training. 

Perforce adds generative AI to test automation platform https://sdtimes.com/ai/perforce-adds-generative-ai-to-test-automation-platform/ Mon, 18 Sep 2023

Perforce Software, a DevOps solutions provider, has introduced Test Data Pro by BlazeMeter, an advanced component of its continuous testing platform. 

Test Data Pro utilizes AI technology to streamline test data generation and make it more accessible. The primary goal is to address the significant challenge of obtaining accurate and synchronized test data, which is particularly crucial as organizations embrace a “shift left” approach in testing, Perforce explained.

“Obtaining test data from production is a time-consuming process involving multiple teams. PII data has to be properly scrubbed, and the data has to be synchronized across the testing landscape,” explains Stephen Feloney, VP of continuous testing at Perforce. “Because of this lengthy process, testers refresh data less often than they should. Now consider today’s world of rapid releases. There is no time to get data and prep it. Developers and agile testers needed to test yesterday.”

One of its standout features is the utilization of generative AI technology to swiftly profile and create data-generating functions and test data from scratch. This level of precision ensures that users have access to highly accurate and tailored data necessary for executing tests, ultimately leading to increased testing speed and accuracy. Moreover, Test Data Pro excels in synchronizing data across various aspects, including the data driving the test, data in mock or virtual services, and data in systems under test.

This solution also addresses the need for expanded testing coverage. By creating diverse sets of data, Test Data Pro enables comprehensive testing across a wide range of scenarios, even encompassing negative testing. 

In addition to enhancing testing efficiency, Test Data Pro also places a strong emphasis on data privacy. It achieves this by automatically generating synthetic, realistic test data. This approach ensures that testing environments do not utilize real production data, eliminating concerns related to data privacy and compliance risks.

Lastly, Test Data Pro introduces the concept of chaos testing for system resilience. By integrating both positive and negative test data during test executions, it empowers users to assess the resilience of systems and validate the performance of applications under circumstances that they might not have tested under conventional methods. This innovative approach helps organizations identify and address vulnerabilities, ultimately enhancing the robustness of their software systems.

Tricentis Acquires Codeless Mobile Test Automation Platform Waldo https://sdtimes.com/softwaredev/tricentis-acquires-codeless-mobile-test-automation-platform-waldo/ Fri, 07 Jul 2023 16:54:43 +0000 https://sdtimes.com/?p=51662 AUSTIN, Texas–(BUSINESS WIRE)–Tricentis, a global leader in continuous testing and quality engineering, announced today the acquisition of Waldo, a SaaS-based, no-code, zero-footprint mobile test automation platform. Waldo complements and extends Tricentis’ mobile testing offerings with new test automation capabilities, including native, hybrid, and web mobile application testing using virtual devices supporting iOS simulators and Android emulators. … continue reading

The post Tricentis Acquires Codeless Mobile Test Automation Platform Waldo appeared first on SD Times.

AUSTIN, Texas–(BUSINESS WIRE)–Tricentis, a global leader in continuous testing and quality engineering, announced today the acquisition of Waldo, a SaaS-based, no-code, zero-footprint mobile test automation platform. Waldo complements and extends Tricentis’ mobile testing offerings with new test automation capabilities, including native, hybrid, and web mobile application testing using virtual devices supporting iOS simulators and Android emulators.

Mobile applications are a central part of our lives, enabling business operations, accelerating communications around the world, and providing an effortless way to connect and collaborate. It is no surprise that according to Statista, mobile devices now generate nearly 59% of global website traffic. However, managing a mobile test infrastructure as part of a continuous integration/continuous delivery (CI/CD) pipeline can be time-consuming and expensive for organizations striving to achieve mobile application quality.

Waldo addresses these challenges by delivering SaaS-based test automation that simplifies mobile test authoring, management, and execution directly from a browser. As organizations shift left and increase their focus on shipping quality code, Waldo allows them to cost-effectively test every commit through emulation using virtual mobile devices.

“The number and complexity of mobile applications continue to increase with no signs of slowing down,” said Kevin Thompson, Chairman and CEO, Tricentis. “We believe the combination of what Waldo brings from a depth-of-knowledge and technology perspective combined with what Tricentis offers in our breadth-of-test automation expertise will allow us to deliver higher-quality mobile applications at the speed and scale businesses require.”

Along with Tricentis Testim Mobile and Tricentis Tosca Mobile, Waldo adds unmatched value to the Tricentis mobile application testing offerings and AI-based test automation platform. Mobile application development teams gain simplicity and speed authoring tests and power and flexibility through execution on a virtual device cloud.

“We are very excited to join Tricentis,” said Laurent Sigal, Co-Founder and CTO of Waldo. “As a leader in test automation with a comprehensive set of mobile application testing offerings, the company has uniquely positioned itself to support mobile app quality holistically across the software development lifecycle. We see a world of possibilities in how we can leverage our technologies.”

About Tricentis

Tricentis is a global leader in continuous testing and quality engineering. The Tricentis AI-based, continuous testing portfolio of products provide a new and fundamentally different way to perform software testing through an approach that’s totally automated, fully codeless, and intelligently driven by AI. This approach addresses both agile development and complex enterprise apps, enabling organizations to accelerate their digital transformation initiatives by dramatically increasing software release speed, reducing costs, and improving software quality. Widely credited for reinventing software testing for DevOps, cloud, and enterprise applications, Tricentis has been recognized as a leader by all major industry analysts, including Forrester, Gartner, and IDC. Tricentis has more than 2,500 customers, including the largest brands in the world, such as McKesson, Allianz, Telstra, Dolby, and Vodafone. To learn more, visit https://www.tricentis.com.

In the low-code era, codeless testing tools deliver the efficiency and profitability coded test automation can’t https://sdtimes.com/test/in-the-low-code-era-codeless-testing-tools-deliver-the-efficiency-and-profitability-coded-test-automation-cant/ Mon, 08 May 2023 17:05:49 +0000 https://sdtimes.com/?p=51097 The use of low code and no code gained traction in recent years as demand continues to rise for faster and more efficient application development. To keep pace with the influx of newly built applications, many IT leaders are investing in testing automation — a market that’s projected to show a compound annual growth rate of … continue reading

The post In the low-code era, codeless testing tools deliver the efficiency and profitability coded test automation can’t appeared first on SD Times.

The use of low code and no code gained traction in recent years as demand continues to rise for faster and more efficient application development. To keep pace with the influx of newly built applications, many IT leaders are investing in testing automation — a market that’s projected to show a compound annual growth rate of 16.4% through 2027.

Software development engineers in test (SDETs) have historically relied on coded test automation as the go-to approach for quality assurance. However, coded test automation calls for extensive coding that’s resource-intensive and challenging to maintain. Although it’s based on free, open-source frameworks, coded test automation requires skilled labor that’s scarce and costly — constraints that hamstring overburdened tech teams. 

Fortunately, not all testing requires coded automation. New advancements in test automation are emerging, and codeless platforms present a key opportunity to streamline software testing.

Coded automation not the only option 

Coded test automation still plays an important role in scenarios like unit testing and component-level testing. But the development arena has changed in the last 20 years, underscoring the fact that coded test automation isn’t an optimal approach to quality assurance for certain use cases — like functional testing.

Coded test automation requires skilled SDETs or software developers to not only write hundreds of lines of code, but also maintain them. That’s increasingly difficult to accomplish with engineers stretched thin and employers facing ongoing talent shortages. As a result, many development teams lack the resources to maintain copious amounts of code once an application is deployed. Supporting code for coded test automation is also expensive, especially if the test framework requires regular updates or modifications.

It’s clear that new testing approaches are needed to maintain software quality and keep pace with technological advancements. And codeless test automation is gaining momentum — fast. 

Revolutionize testing with codeless automation

Codeless automated testing platforms are now available in the commercial marketplace, eliminating the need to write code for automated tests. With these tools, quality assurance (QA) professionals who lack coding skills can develop automated tests alongside SDETs and developers.

Some developers may hesitate to lean on codeless automation. After all, many developers have spent the lion’s share of their careers writing lines of code. But coded test automation isn’t going away — it’s just becoming one of several approaches developers can turn to. In fact, coded automation remains critical in many testing scenarios. 

However, for functional testing, end-to-end testing, data validation, and regression testing, codeless platforms offer a streamlined approach for both user interface (UI) and application programming interface (API) testing that can cut costs and reduce time-to-market.

Consider the benefits that codeless automation can provide:

  • Reduced reliance on technical expertise: Codeless testing platforms enable developers to shift testing responsibilities to QA teams, who can focus solely on testing rather than coding and debugging. Codeless platforms also help free up developers’ time and empower them to focus on new technologies and complex software development.
  • Accelerated development cycles: Codeless platforms enable QA teams to use pre-built and visual components to develop automated tests, which is a much faster process than writing net-new code. This enables testers to create more test cases in a fraction of the time, which increases test coverage and results in higher quality software. An added bonus? Shorter development cycles also reduce costs.
  • Easier maintenance: Codeless testing eliminates the need for programming skills that are typically required to maintain and update coded test suites. This makes maintenance faster and easier when an application changes. Some codeless automation platforms even have self-healing capabilities that enable the testing tool to automatically fix test scripts or test cases when a test fails or the software changes.

There’s always a learning curve when adopting a new approach. But the barrier to entry is low and the rewards are high when it comes to deploying codeless test automation tools. In the current no- and low-code era, the swift pace of innovation demands agile and efficient workflows.

Consider all the factors when determining whether codeless automated testing is right for a specific use case, from resource availability to the category of testing required. But when you discover codeless is the right fit for a use case, your entire team can test faster with greater efficiency and coverage — ultimately reducing time-to-market for new products while maintaining product quality.

Automated testing still lags https://sdtimes.com/test/automated-testing-still-lags/ Tue, 02 Aug 2022 20:20:17 +0000 https://sdtimes.com/?p=48461 Automated testing initiatives still lag behind in many organizations as increasingly complex testing environments are met with a lack of skilled personnel to set up tests.  Recent research conducted by Forrester and commissioned by Keysight found that while only 11% of respondents had fully automated testing, 84% percent of respondents said that the majority of … continue reading

The post Automated testing still lags appeared first on SD Times.

Automated testing initiatives still lag behind in many organizations as increasingly complex testing environments are met with a lack of skilled personnel to set up tests. 

Recent research conducted by Forrester and commissioned by Keysight found that while only 11% of respondents had fully automated testing, 84% of respondents said that the majority of testing involves complex environments. 

For the study, Forrester conducted an online survey in December 2021 that involved 406 test operations decision-makers at organizations in North America, EMEA, and APAC to evaluate current testing capabilities for electronic design and development and to hear their thoughts on investing in automation.

The complexity of testing has increased the number of tests, according to 75% of the respondents. Sixty-seven percent of respondents said the time to complete tests has risen too.

Challenges with automated testing 

Those that do utilize automated testing often have difficulty making the tests stable in these complex environments, according to Paulina Gatkowska, head of quality assurance at STX Next, a Python software house. 

One area where developers often find challenges is UI testing, in which the tests work like a user: they use the browser, click through the application, fill in fields, and more. These tests are quite heavy, Gatkowska continued: a test that passes in a developer's local environment sometimes fails in another environment, passes only half the time, or works for the first week and then starts to be flaky. 

“What’s the point of writing and running the tests, if sometimes they fail even though there is no bug? To avoid this problem, it’s important to have a good architecture of the tests and good quality of the code. The tests should be independent, so they don’t interfere with each other, and you should have methods for repetitive code to change it only in one place when something changes in the application,” Gatkowska said. “You should also attach great importance to ‘waits’ – the conditions that must be met before the test proceeds. Having this in mind, you’ll be able to avoid the horror of maintaining flaky tests.”
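The "waits" Gatkowska describes can be illustrated with a small, framework-agnostic polling helper in Python. This is a hand-rolled stand-in for the explicit-wait utilities that UI frameworks such as Selenium provide; the `wait_until` helper and the fake `page` dict are illustrative names, not part of any particular tool:

```python
import time

def wait_until(condition, timeout=10.0, poll_interval=0.25):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    Returns the truthy value, or raises TimeoutError so the test fails
    with a clear message instead of flaking on a fixed sleep.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll_interval)
    raise TimeoutError(f"condition not met within {timeout}s")

# Simulated application state that a UI test would observe.
page = {}

def render_button():
    page["submit"] = "enabled"

render_button()
button_state = wait_until(lambda: page.get("submit"), timeout=2.0)
```

Replacing fixed `time.sleep()` calls with condition-based waits like this is one of the simplest ways to reduce flakiness: the test proceeds as soon as the condition holds and fails loudly when it never does.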

Then there are network issues that can impede automated tests, according to Kavin Patel, founder and CEO of Convrrt, a landing page builder. A common difficulty for QA teams is network disconnection: shaky connections cut off access to databases, VPNs, third-party services, APIs, and certain testing environments, adding needless time to the testing process. The inability to access the virtual environments that testers typically use to test programs is also a worry. 

Because some teams lack the expertise to implement automated testing, manual testing is still used to fill any automation gaps. This creates a disconnect with the R&D team, which is usually two steps ahead, according to Kenny Kline, president of Barbend, an online platform for strength sports training and nutrition.

“To keep up with them, testers must finish their cycles within four to six hours, but manual testing cannot keep up with the rate of development. Then, it is moved to the conclusion of the cycle,” Kline said. “Consequently, teams must include a manual regression, sometimes known as a stabilization phase, at the end of each sprint. They extend the release cadence rather than lowering it.”

Companies are shifting towards full test automation 

Forrester’s research also found that 45% of companies say that they’re willing to move to a fully automated testing environment within the next three years to increase productivity, gain the ability to simulate product function and performance, and shorten the time to market. 

The companies that have implemented automated testing right have reaped many rewards, according to Michael Urbanovich, head of the testing department at a1qa, an international quality assurance company. The ones relying on robotic process automation (RPA), AI, ML, natural language processing (NLP), and computer vision for automated testing have attained greater efficiency, sped up time to market, and freed up more resources to focus on strategic business initiatives. RPA alone can lower the time required for repetitive tasks by up to 25%, according to research by Automation Alley. 

For those looking to gain even more from their automation initiatives, a1qa’s Urbanovich suggests looking into continuous test execution, implementing self-healing capabilities, RPA, API automation, regression testing, and UAT automation. 

Urbanovich emphasized that the decision to introduce automated QA workflows must be conscious. Rather than running with the crowd to follow the hype, organizations must calculate ROI based on their individual business needs and wisely choose the scope for automation and a fit-for-purpose strategy. 

“To meet quality gates, companies need to decide which automated tests to run and how to run them in the first place, especially considering that the majority of Agile-driven sprints last for up to only several weeks,” Urbanovich said. 

Although some may hope it were this easy, testers can’t just spawn automated tests and sit back like Paley’s watchmaker gods. The tests need to be guided and nurtured. 

“The number one challenge with automated testing is making sure you have a test for all possibilities. Covering all possibilities is an ongoing process, but executives especially hear that you have automated testing now and forget that it only covers what you actually are testing and not all possibilities,” said David Garthe, founder of Gravyware, a social media management tool. “As your application is a living thing, so are the tests that are for it. You need to factor in maintenance costs and expectations within your budget.” 

Also, just because a test worked last sprint, doesn’t mean it will work as expected this sprint, Garthe added. As applications change, testers have to make sure that the automated tests cover the new process correctly as well. 

Garthe said that he has had a great experience using Selenium, referring to it as the “gold standard” with regard to automated testing. It has the largest group of developers that can step in and work on a new project. 

“We’ve used other applications for testing, and they work fine for a small application, but if there’s a learning curve, they all fall short somewhere,” Garthe said. “Selenium will allow your team to jump right in and there are so many examples already written that you can shortcut the test creation time.”

And, there are many other choices to weave through to start the automated testing process.

“When you think about test automation, first of all you have to choose the framework. What language should it be? Do you want to have frontend or backend tests, or both? Do you want to use gherkin in your tests?” STX Next’s Gatkowska said. “Then of course you need to have your favorite code editor, and it would be annoying to run the tests only on your local machine, so it’s important to configure jobs in the CI/CD tool. In the end, it’s good to see valuable output in a reporting tool.”

Choosing the right tool and automated testing framework, though, might pose a challenge for some because different tools excel at different conditions, according to Robert Warner, Head of Marketing at VirtualValley, a UK-based virtual assistant company.

“Testing product vendors overstate their goods’ abilities. Many vendors believe they have a secret sauce for automation, but this produces misunderstandings and confusion. Many of us don’t conduct enough study before buying commercial tools, that’s why we buy them without proper evaluation,” Warner said. “Choosing a test tool is like marrying, in my opinion. Incompatible marriages tend to fail. Without a good test tool, test automation will fail.”

AI is augmenting the automated testing experience

Fifty-two percent of the companies that responded to the Forrester survey said they would consider using AI for integrating complex test suites in the next three years.

The use of AI for integrated testing provides both better (not necessarily more) testing coverage and the ability to support agile product development and release, according to the Forrester report.

Companies are also looking to add AI for integrating complex test suites, an area of test automation that is severely lacking, with only 16% of companies using it today. 

a1qa’s Urbanovich explained that one of the best ways to cope with boosted software complexity and tight deadlines is to apply a risk-based approach. For that, AI is indispensable. Apart from removing redundant test cases, generating self-healing scripts, and predicting defects, it streamlines priority-setting. 

“In comparison with the previous year, the number of IT leaders leveraging AI for test prioritization has risen to 43%. Why so?” Urbanovich continued, alluding to the World Quality Report 2021-2022. “When you prioritize automated tests, you put customer needs FIRST because you care about the features that end users apply the most. Another vivid gain is that software teams can organize a more structured and thoughtful QA strategy. Identifying risks makes it easier to define the scope and execution sequence.”

Most of the time, companies are looking to implement AI in testing to leverage the speed improvements and increased scope of testing, according to Kevin Surace, CTO at Appvance, an AI-driven software testing provider.

“You can’t write a script in 10 minutes, maybe one if you’re a Selenium master. Okay, the machine can write 5,000 in 10 minutes. And yes, they’re valid. And yes, they cover your use cases that you care about. And yes, they have 1,000s of validations, whatever you want to do. And all you did was spend one time teaching it your application, no different than walking into a room of 100 manual testers that you just hired, and you’re teaching them the application: do this, don’t do this, this is the outcome, these are the outcomes we want,” Surace said. “That’s what I’ve done, I got 100 little robots or however many we need that need to be taught what to do and what not to do, but mostly what not to do.”

QA has difficulty grasping how to handle AI in testing 

Appvance’s Surace said that testing ultimately needs to become completely hands-off, with no humans involved.

“If you just step back and say what’s going on in this industry, I need a 4,000 times productivity improvement in order to find essentially all the bugs that the CEO wants me to find, which is find all the bugs before users do,” Surace said. “Well, if you’ve got to increase productivity 4,000 times you cannot have people involved in the creation of very many use cases, or certainly not the maintenance of them. That has to come off the table just like you can’t put people in a spaceship and tell them to drive it, there’s too much that has to be done to control it.”  

Humans are still good at prioritizing which bugs to tackle based on what the business goals are, because only humans can really look at something and say, well, we’ll just leave it, it’s okay, we’re not gonna deal with it, or say this is really critical and push it to the developers’ side to fix it before release, Surace continued. 

“A number of people are all excited about using AI and machine learning to prioritize which tests you should run, and that entire concept is wrong. The entire concept should be, I don’t care what you change in application, and I don’t understand your source code enough to know the impacts and on every particular outcome. Instead, I should be able to create 10,000 scripts and run them in the next hour, and give you the results across the entire application,” Surace said. “Job one, two, and three of QA is to make sure that you found the bugs before your users do. That’s it, then you can decide what to do with them. Every time a user finds a bug, I can guarantee you it’s in something you didn’t test or you chose to let the bug out. So when you think about it, that way users find bugs and the things we didn’t test. So what do we need to do? We need to test a lot more, not less.”

A challenge with AI is that it is a foreign concept to QA people so teaching them how to train AI is a whole different field, according to Surace. 

First off, many people on the QA team are scared of AI, Surace continued, because they see themselves as QA people but really have the skillset of a Selenium tester who writes Selenium scripts and tests them. Now that work has been taken away, much as RPA disrupted industries such as customer support and insurance claims processing. 

The second challenge is that they’re not trained in it.

“So one problem we have is: how do you explain how the algorithms work?” Surace said. “In AI, one of the challenges we have in QA and across the AI industry is how do we make people comfortable with a machine that they may never be able to understand. It’s beyond their skill set to actually understand the algorithms at work here, why they work, and how neural networks work, so they now have to trust that the machine will get them from point A to point B, just like we trust the car to get from point A to point B.”

However, there are some areas of testing in which AI is not as applicable: for example, a form-based application where there is nothing for the application to do other than guide you through the form, such as in a financial services application. 

“There’s nothing else to do with an AI that can add much value, because one script that’s data-driven already handles the one use case that you care about. There are no more use cases. So AI is used to augment your use cases, but if you only have one, you should write it. But that’s few and far between; most applications have hundreds of thousands of use cases, or thousands of possible combinatorial use cases,” Surace said. 

According to Eli Lopian, CEO at Typemock, a provider of unit testing tools to developers worldwide, QA teams are still very effective at handling UI testing because the UI can often change without the behavior changing behind the scenes. 

“The QA teams are really good at doing that because they have a feel for the UI and for how easy it is for the end user to use the code, and they can look at things from a product point of view rather than a ‘does it work or doesn’t it’ point of view, which is really essential if you want an application to succeed,” Lopian said. 

Dan Belcher, co-founder of mabl, said that there is still plenty of room for a human in the loop when it comes to AI-driven testing. 

“So far, what we’re doing is supercharging quality engineers, so the human is certainly in the loop. It’s eliminating repetitive tasks where their intellect isn’t adding as much value and doing things that require high speed, because when you’re deploying every few minutes, you can’t really rely on a human to be involved in that loop of executing tests. And so what we’re empowering them to do is to focus on higher-level concerns, like: do I have the right test coverage? Are the things that we’re seeing good or bad for the users?” Belcher said.

AI/ML excels at writing tests from unit to end-to-end scale

One area where AI/ML in testing excels is unit testing on legacy code, according to Typemock’s Lopian.

“Software groups often have this legacy code which could be a piece of code that maybe they didn’t do a unit test beforehand, or there was some kind of crisis, and they had to do it quickly, and they didn’t do the test. So you had this little piece of code that doesn’t have any unit tests. And that grows,” Lopian said. “Even though it’s a difficult piece of code, it wasn’t built for testability in mind, we have the technology to both write those tests for those kinds of code and to generate them in an automatic manner using the ML.”

The AI/ML can then make sure that the code runs in a clean and modernized way, and with those tests in place the code can be refactored to work in a secure manner, Lopian added. 
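What Lopian describes is close to what is often called characterization (or "golden master") testing: before touching legacy code, record its current outputs as assertions so any refactoring that changes behavior gets caught. Below is a minimal hand-written sketch of the idea; a generation tool would produce such cases automatically, and `legacy_price` is a made-up example function, not from any real codebase:

```python
def legacy_price(qty, unit_price):
    """Untested legacy code: its current behavior is the specification."""
    total = qty * unit_price
    if qty > 10:
        total *= 0.9  # undocumented bulk discount, discovered by the tests
    return round(total, 2)

# Characterization tests: pin down today's outputs before refactoring.
# (A tool would generate many such cases by exploring the input space.)
cases = [(1, 5.0), (10, 5.0), (20, 5.0)]
baseline = {args: legacy_price(*args) for args in cases}

def check_refactoring(new_impl):
    """Return True only if the new implementation preserves old behavior."""
    return all(new_impl(*args) == expected
               for args, expected in baseline.items())
```

Once the baseline exists, any rewritten implementation can be checked against it before it ships, which is what makes refactoring the legacy code safe.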

AI-driven testing is also beneficial for UI testing because testers don’t have to explicitly design the way elements are referenced in the UI; they can let the AI figure that out, according to mabl’s Belcher. And when the UI changes, typical test automation results in a lot of failures, whereas the AI can learn and update the tests automatically, resulting in an 85-90% reduction in the amount of time engineers spend creating and maintaining tests. 

In the UI testing space, AI can be used for auto healing, intelligent timing, detecting visual changes automatically in the UI, and detecting anomalies and performance. 
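The "auto healing" capability can be sketched in a few lines: keep several candidate locators per element, fall back when the primary one stops matching, and remember the locator that worked. Everything below, including the `SelfHealingLocator` class and the dict standing in for a DOM, is a hypothetical illustration of the idea, not any vendor's implementation:

```python
# A fake "DOM": maps locator strings to element text. The old id
# "#join-btn" was renamed, which would break a single hard-coded locator.
dom = {"#signup-btn": "Sign up"}

class SelfHealingLocator:
    """Try each candidate locator in order; promote the one that works."""

    def __init__(self, candidates):
        self.candidates = list(candidates)

    def find(self, dom):
        for i, locator in enumerate(self.candidates):
            if locator in dom:
                if i != 0:
                    # "Heal": remember the working locator as the new primary.
                    self.candidates.insert(0, self.candidates.pop(i))
                return dom[locator]
        raise LookupError(f"no candidate matched: {self.candidates}")

button = SelfHealingLocator(["#join-btn", "#signup-btn", "button.primary"])
text = button.find(dom)           # falls back to "#signup-btn"
healed = button.candidates[0]     # "#signup-btn" is now tried first
```

A real AI-driven tool goes further, inferring new candidate locators from element attributes, position, and visual appearance rather than relying on a fixed list.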

According to Belcher, AI can be the vital component in creating a more holistic approach to end-to-end testing. 

“We’ve all known that the answer to improving quality was to bring together the insights that you get when you think about all facets of quality, whether that’s functional or performance, or accessibility, or UX. And, and to think about that holistically, whether it’s API or web or mobile. And so the area that will see the most innovation is when you can start to answer questions like, based on my UI tests, what API tests should I have? And how do they relate? So when the UI test fails? Was it an API issue? And then, when a functional test fails, did anything change from the user experience that could be related to that?,” Belcher said. “And so the key is to do this is we have to bring kind of all of the kind of end-to-end testing together and all the data that’s produced, and then you can really layer in some incredibly innovative intelligence, once you have all of that data, and you can correlate it and make predictions based on that.”

6 types of Automated Testing Frameworks 
  1. Linear Automation Framework – also known as a record-and-playback framework, in which testers don’t need to write code to create functions and the steps are written in sequential order. Testers record steps such as navigation, user input, or checkpoints, and the tool then plays the script back automatically to conduct the test.
  2. Modular-Based Testing Framework – one in which testers divide the application being tested into separate units, functions, or sections, each of which can then be tested in isolation. Test scripts are created for each part and then combined to build larger tests. 
  3. Library Architecture Testing Framework – in this testing framework, similar tasks within the scripts are identified and later grouped by function, so the application is ultimately broken down by common objectives. 
  4. Data-Driven Framework – test data is separated from script logic and testers can store data externally. The test scripts are connected to the external data source and told to read and populate the necessary data when needed. 
  5. Keyword-Driven Framework – each function of the application is laid out in a table with instructions in a consecutive order for each test that needs to be run. 
  6. Hybrid Testing Framework – a combination of any of the previously mentioned frameworks set up to leverage the advantages of some and mitigate the weaknesses of others.

Source: https://smartbear.com/learn/automated-testing/test-automation-frameworks/
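As a concrete sketch of the data-driven approach from the list above, the same test logic below runs once per externally stored data row. The inline CSV string stands in for the external file or database a real suite would read, and `login` is a toy system under test invented for the example:

```python
import csv
import io

# External test data: would normally live in a CSV file or database,
# kept separate from the test logic.
TEST_DATA = """username,password,expected
alice,correct-horse,success
alice,wrong-pass,failure
,correct-horse,failure
"""

def login(username, password):
    """Toy system under test with one hard-coded valid credential pair."""
    if username == "alice" and password == "correct-horse":
        return "success"
    return "failure"

def run_data_driven_tests():
    """Run the same test logic once per data row."""
    results = []
    for row in csv.DictReader(io.StringIO(TEST_DATA)):
        actual = login(row["username"], row["password"])
        results.append(actual == row["expected"])
    return results

outcomes = run_data_driven_tests()  # one pass/fail per data row
```

Adding a new scenario then means adding a data row rather than new code, which is why data-driven suites tend to grow coverage faster than hard-coded scripts.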

Disrupting the economics of software testing through AI https://sdtimes.com/test/disrupting-the-economics-of-software-testing-through-ai/ Fri, 14 Jan 2022 21:20:27 +0000 https://sdtimes.com/?p=46357 EMA (Enterprise Management Associates) recently released a report titled “Disrupting the Economics of Software Testing Through AI.” In this report, author Torsten Volk, managing research director at EMA, discusses the reasons why traditional approaches to software quality cannot scale to meet the needs of modern software delivery. He highlights five key categories of AI and … continue reading

The post Disrupting the economics of software testing through AI appeared first on SD Times.

EMA (Enterprise Management Associates) recently released a report titled “Disrupting the Economics of Software Testing Through AI.” In this report, author Torsten Volk, managing research director at EMA, discusses the reasons why traditional approaches to software quality cannot scale to meet the needs of modern software delivery. He highlights five key categories of AI and six critical pain points of test automation that AI addresses. 

We sat down with Torsten and talked about the report and his insights into the impact AI is having on software testing:

Q: What’s wrong with the current state of testing? Why do we need AI?

Organizations reliant upon traditional testing tools and techniques fail to scale to the needs of today’s digital demands and are quickly falling behind their competitors. Due to increasing application complexity and time to market demands from the business, it’s difficult for software delivery teams to keep up. There is a growing need to optimize the process with AI to help root out the mundane and repetitive tasks and control the costs of quality that have gotten out of control.

Q: How can AI help and with what?

There are five key capabilities where AI can help: smart crawling/Natural Language Processing (NLP)-driven test creation, self-healing, coverage detection, anomaly detection, and visual inspection. The report highlights six critical pain points these capabilities address: false positives, test maintenance, inefficient feedback loops, rising application complexity, device sprawl, and toolchain complexity.

Leading organizations have already adopted some level of self-healing and AI-driven test creation, but by far the most impactful capability is visual inspection (or Visual AI), which provides complete and accurate coverage of the user experience. It is able to learn and adapt to new situations without the need to write and maintain code-based rules.
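The self-healing idea mentioned above can be sketched simply. This is an illustrative toy, not Applitools' or any vendor's actual implementation: the "page" is modeled as a dict of locator strings, and the locator names are hypothetical. Real tools heal against the live DOM using attributes captured at authoring time.

```python
# Minimal self-healing locator sketch (illustrative only, not a real tool's API).
# A "page" is modeled as a dict of locator -> element for simplicity.
def find_with_healing(page, locators):
    """Try each recorded locator in priority order; report which one found the element."""
    for locator in locators:
        if locator in page:
            return page[locator], locator
    raise LookupError("no locator matched; flag the test for human review")

page_v2 = {"css:#sign-in-btn": "<button>"}         # the element's id changed between releases
recorded = ["css:#login-btn", "css:#sign-in-btn"]  # primary locator + fallback captured earlier

element, used = find_with_healing(page_v2, recorded)
print(used)  # the fallback locator that "healed" the broken test
```

Instead of failing the moment the primary locator breaks, the test degrades gracefully and reports which fallback it used, so maintenance becomes a review task rather than an emergency.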

Q: Are people adopting AI?

Yes, AI adoption is on the rise for many reasons, but to my mind people are not adopting AI as such – they're adopting technical capabilities that happen to be built on AI. For example, people want the ability to do NLP-based test automation for a specific use case. People are more interested in the ROI gained from the speed and scalability of leveraging AI in the development process, and not necessarily how the sausage is being made.

Q: How does the role of the developer / tester change with the implementation of AI?

When you look at test automation, developers and testers need to decide what belongs under test automation and how it is categorized. Then all you need to do is set the framework for the AI to operate in and provide it with feedback so it continuously improves its performance over time.

Once this happens, developers and testers are freed up to do more creative, interesting and valuable work by eliminating the toil of mundane or repetitive work – the work that isn’t valuable in and of itself but has to be done correctly every time. 

For example, reviewing thousands of webpage renderings. Some of them have little differences, but they don’t matter. If I can have the machine filter out all of the ones that don’t matter and just highlight the few that may or may not be a defect, I’ve now cut my work down from thousands to a very small handful. 

Auto-classification is a great example of being able to reduce your work. Reducing repetitive work also means you don't miss things: if I'm looking at what looks like the same page each time, I might miss something, whereas if the AI tells me this one page is slightly different from the others I've been looking at, and why, it eliminates repetitive, mundane tasks and reduces the chance of error-prone outcomes.
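The filtering workflow Torsten describes can be sketched as follows. This is a deliberately crude illustration: real Visual AI uses learned perceptual models, not the raw pixel-difference count and hand-picked threshold used here, and the page names and data are hypothetical.

```python
# Illustrative auto-classification sketch: surface only renderings whose
# difference from the baseline exceeds a noise threshold, so a human reviews
# a handful of pages instead of thousands.
def classify(baseline, renderings, threshold=3):
    def diff(a, b):
        # Naive pixel-level difference; a stand-in for a perceptual model.
        return sum(1 for x, y in zip(a, b) if x != y) + abs(len(a) - len(b))
    return [name for name, pixels in renderings.items() if diff(baseline, pixels) > threshold]

baseline = [0] * 100
renderings = {
    "page_001": [0] * 100,            # identical to the baseline
    "page_002": [0] * 98 + [1, 1],    # 2 pixels off: ignorable rendering noise
    "page_003": [1] * 10 + [0] * 90,  # 10 pixels off: worth a human look
}
print(classify(baseline, renderings))  # only the meaningful difference survives
```

The machine does the thousands of comparisons; the human only adjudicates the small handful the classifier could not dismiss.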

Q: Do I need to hire AI experts or develop an internal AI practice?

The short answer is no. There are lots of vendor solutions available that give you the ability to take advantage of the AI, machine learning and training data already in place.

If you want to implement AI yourself, you actually need people with two sets of domain knowledge: first, the domain in which you want to apply AI, and second, a deep understanding of what is possible with AI and how you can chain those capabilities together. Oftentimes, that combination is too expensive and too rare.

If your core deliverable is not the AI itself but the ROI the AI can deliver, then it's much better to find a tool or service that does it for you and lets you focus on your domain expertise. This makes life much easier, because a company will have many more people who understand that domain than the small handful who only understand AI.

Q: You talk about the Visual Inspection capability being the highest impact – how does that help?

Training deep learning models to inspect an application through the eyes of the end user is critical to removing a lot of the mundane repetitive tasks that cause humans to be inefficient. 

Smart crawling, self-healing, anomaly detection, and coverage detection are each point solutions that help organizations lower their risk of blind spots while decreasing human workload. But visual inspection goes even further by aiming to understand application workflows and business requirements.

Q: Where should I start today? Can I integrate AI into my existing Test Automation practice?

Yes – Applitools Visual AI is one example of a capability that can be integrated into an existing test automation practice.

Q: What’s the future state?

Autonomous testing is the vision for the future, but we have to ask ourselves: why don't we have an autonomous car yet? It's because today, we're still chaining together models and models of models. Ultimately, where we're striving to get to is a state where AI takes care of all of the tactical and repetitive decisions, and humans think more strategically at the end of the process, where they are more valuable from a business-focused perspective.

Thanks to Torsten for spending the time with us. If you are interested, the full report is available at http://applitools.info/sdtimes.

The post Disrupting the economics of software testing through AI appeared first on SD Times.

SD Times news digest: Qt acquires froglogic, the Embedded Software Testing & Compliance Summit, and Catchpoint's virtual SRE community event
https://sdtimes.com/softwaredev/sd-times-news-digest-qt-acquires-froglogic-the-embedded-software-testing-compliance-summit-and-catchpoints-virtual-sre-community-event/ – Wed, 14 Apr 2021

Qt announced that it will acquire froglogic GmbH, a major provider of quality assurance tools, to bring froglogic’s test automation tools into the Qt product portfolio.

“As The Qt Company continues its growth, the acquisition of froglogic is an important milestone in broadening Qt’s best-in-class software development tools and building in automated testing and code coverage analysis directly into our suite of products. Understanding that speed of delivery for new products is crucial to our customers, our goal is to improve developer productivity and make the product development process as streamlined as possible,” said Juha Varelius, president and CEO of Qt Group Plc. 

Froglogic GmbH offers tooling to support GUI test automation, code coverage analysis and test result management, enabling customers to assess and steer their quality assurance efforts across an application’s life cycle. 

Embedded Software Testing & Compliance Summit announced
Parasoft announced that it is hosting a live virtual event on May 6th in which industry leaders will share their embedded software quality stories of overcoming safety-critical compliance and security challenges with automated software testing solutions. 

“Companies across all industries need to have confidence in their software quality and deliver safe and secure software to their users,” said Arthur Hicken, evangelist and event moderator at Parasoft. “Many embedded software companies are turning to automated and integrated testing that includes static code analysis, unit testing, regression testing, code coverage, and requirements traceability to ensure compliance with functional safety, security, and coding standards. In this summit you’ll hear how organizations are solving real safety and security software issues.”

The talks will cover how a medical device technology company successfully adopted a unit testing solution, how an avionics developer and manufacturer achieved code compliance and streamlined productivity, and much more. 

Additional details on the event are available here.

Catchpoint announces virtual SRE community event on June 10th
Catchpoint announced that it will launch its SRE from Anywhere, a virtual, interactive event that focuses on helping SREs connect with peers to share best practices, industry trends and organizational dynamics.

The event will feature panel discussions, practitioner sessions and lightning talks to foster an open forum for inclusion and learning. 

Other talks include results from the 2021 SRE survey sponsored by Catchpoint, VMware Tanzu, and the DevOps Institute, covering true observability, DevOps principles, and the latest use cases and trends such as Platform Ops. 

Accolade for Smart Products
Sopheon announced Accolade for Smart Products, a new management solution that brings together traditionally siloed software and physical product development. 

The solution aims to foster cross-functional collaboration and synchronization that results in trusted, timely data for faster, better and more dynamic decision making.

“As the digital and physical worlds collide, many companies struggle to find the best ways to manage innovation across different disciplines. Accolade for Smart Products enables companies – from traditional manufacturers to new technology stars – to accelerate product delivery, while also implementing the best practices needed for product reliability without dragging down innovation,” said Paul Heller, the chief technology officer of Sopheon.

The post SD Times news digest: Qt acquires froglogic, the Embedded Software Testing & Compliance Summit, and Catchpoint’s virtual SRE community event appeared first on SD Times.
