UI testing Archives - SD Times
https://sdtimes.com/tag/ui-testing/
Software Development News | Fri, 27 Oct 2023 14:56:42 +0000

Tricentis expands its mobile testing portfolio with release of Tricentis Device Cloud
https://sdtimes.com/test/tricentis-expands-its-mobile-testing-portfolio-with-release-of-tricentis-device-cloud/
Wed, 02 Aug 2023 17:18:16 +0000

The post Tricentis expands its mobile testing portfolio with release of Tricentis Device Cloud appeared first on SD Times.

Tricentis has introduced Tricentis Device Cloud (TDC) as a new addition to its mobile testing product lineup. 

With this addition, organizations can effectively manage, create, execute, and analyze applications on physical mobile devices from various manufacturers such as Apple, Samsung, and Google throughout the development process. 

This eliminates the need to maintain costly and unreliable in-house devices. By identifying crucial mobile failures and performance problems, development teams can swiftly address defects and expedite high-quality releases through their CI/CD pipelines.

“We believe all the pain points for mobile testing are not yet solved, and we’re on a mission to address them in a simplified, seamless way,” said Mav Turner, CTO of DevOps at Tricentis. “Tricentis Device Cloud is another key piece of technology supporting our commitment to helping organizations innovate on high-quality mobile apps faster so they can deliver seamless digital experiences, increase customer engagement and satisfaction, and generate more revenue.”

The Mobile AI engine utilizes machine learning to analyze vast volumes of data and detect potential issues at an early stage. It monitors over 130 Key Performance Indicators (KPIs), including audio-visual quality, network connectivity, and image changes, which enables application development teams to identify bottlenecks and address problems promptly.

Key features include single-tenant and multi-tenant global deployment; real-device and cross-device testing; UI testing; and performance optimization focused on front-end, single-user performance testing.

Additional details are available here.

Parasoft enhances API and UI testing with 2020.2 release
https://sdtimes.com/test/parasoft-enhances-api-and-ui-testing-with-2020-2-release/
Tue, 06 Oct 2020 23:35:47 +0000

The post Parasoft enhances API and UI testing with 2020.2 release appeared first on SD Times.

Parasoft revealed version 2020.2 of its enterprise portfolio at STARWEST Virtual 2020 this week. The release comes with updates to SOAtest, Virtualize, Selenic, and Continuous Testing Platform (CTP). 

A key feature of the release is the set of platform-specific locators for the Salesforce and Guidewire low-code development environments. According to the company, this will help ensure the testability and quality of applications. 

“As organizations accelerate their digital transformation to leverage enterprise platforms and cloud technologies, they need confidence that their applications will continue to run smoothly and provide a positive user experience. Automated testing helps them ensure they cover all the bases for unit, API, and UI levels at speed. Smart companies choose the Parasoft solution to make sure they can meet their business and technical goals,” said Richard Sherrard, vice president of products at Parasoft.

The company’s portfolio also includes:

  • SOAtest, which is designed to automatically capture underlying API traffic and leverage artificial intelligence to convert the traffic into API tests
  • Selenic, which aims to validate end-user experience with AI-powered self-healing and recommendations for UI tests
  • Virtualize and CTP, which use simulated services and APIs to test interactions earlier in the development process. 
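The idea behind service virtualization (simulating dependencies so interactions can be tested earlier) can be sketched in a few lines. The class and endpoints below are illustrative stand-ins, not Parasoft's API:

```python
# Minimal sketch of service virtualization: a stand-in for a real
# payment API that returns canned responses, letting integration
# tests run before the real service exists. All names are made up.

class VirtualService:
    """Simulates an API endpoint with recorded request/response pairs."""

    def __init__(self):
        self._responses = {}

    def record(self, method, path, response):
        # Register a canned response for a method/path pair.
        self._responses[(method, path)] = response

    def handle(self, method, path):
        # Return the canned response, or a 404-style default.
        return self._responses.get((method, path), {"status": 404})

# Client logic can now be exercised against the virtual service.
payments = VirtualService()
payments.record("POST", "/charge", {"status": 200, "body": {"charged": True}})

result = payments.handle("POST", "/charge")
unknown = payments.handle("GET", "/refund")
```

The same pattern scales up: record traffic once, replay it in every test run, and the real dependency is no longer a bottleneck.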

Engineering practices that advance testing
https://sdtimes.com/test/engineering-practices-that-advance-testing/
Wed, 02 Sep 2020 16:00:05 +0000

The post Engineering practices that advance testing appeared first on SD Times.

Testing practices are shifting left and right, shaping the way software engineering is done. In addition to the many types of tests described in this Deeper Look, test-driven development (TDD), progressive engineering and chaos engineering are also considered testing today.

TDD
TDD has become popular with Agile and DevOps teams because it saves time. Tests are written from requirements, in the form of use cases and user stories, and then code is written to pass those tests. TDD further advances the concept of building smaller pieces of code, and the little code-quality successes along the way add up to big ones. TDD grew out of the older practice of extreme programming (XP).
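A minimal illustration of that cycle, using a hypothetical user story and Python's unittest. In practice the test class would be written first and fail until the implementation appears:

```python
# TDD sketch: the test for a (hypothetical) user story is written
# first, then just enough code is written to make it pass.

import unittest

# Step 2: the minimal implementation written to satisfy the tests below.
def apply_discount(total, loyalty_years):
    """Story: 'As a returning customer, I get 5% off after 2 loyalty years.'"""
    return round(total * 0.95, 2) if loyalty_years >= 2 else total

# Step 1 (written first in TDD): acceptance criteria encoded as tests.
class DiscountTest(unittest.TestCase):
    def test_new_customer_pays_full_price(self):
        self.assertEqual(apply_discount(100.0, 0), 100.0)

    def test_loyal_customer_gets_five_percent_off(self):
        self.assertEqual(apply_discount(100.0, 3), 95.0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```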

RELATED CONTENT: There’s more to testing than simply testing

“Test-driven development helps drive quality from the beginning and [helps developers] find defects in the requirements before they need to write code,” said Thomas Murphy, senior director analyst at Gartner.

Todd Lemmonds, QA architect at health benefits company Anthem, said his team is having a hard time with it because they’re stuck in an interim phase.

“TDD is the first step to kind of move in the Agile direction,” said Lemmonds. “How I explain it to people is you’re basically focusing all your attention on [validating] these acceptance criteria based on this one story. And then they’re like, OK what tests do I need to create and pass before this thing can move to the next level? They’re validating technical specifications whereas [acceptance test driven development] is validating business specifications and that’s what’s presented to the stakeholders at the end of the day.”

Progressive Software Delivery
Progressive software delivery is often misdefined by parsing the words: the thinking is that if testing is moving forward (becoming more modern or maturing), then it’s “progressive.” In practice, progressive delivery is something Agile and DevOps teams with a CI/CD pipeline use to deliver higher-quality applications that users actually like, faster. It can involve a variety of tests and deployments, including A/B and multivariate testing using feature flags, blue-green and canary deployments, as well as observability. The “progressive” part is rolling out a feature to progressively larger audiences.
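The "progressively larger audiences" mechanic is commonly implemented by bucketing users with a stable hash. This sketch assumes a simple percentage-based flag; the flag and user names are illustrative:

```python
# Sketch of a percentage rollout behind a feature flag: a stable hash
# of the user ID maps each user to a bucket from 0-99, and the flag is
# on for buckets below the rollout percentage. The mapping is
# deterministic, so a given user's experience doesn't flip between
# requests, and raising the percentage only ever adds users.

import hashlib

def is_enabled(flag_name, user_id, rollout_percent):
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent

# At 10% rollout roughly one in ten users sees the feature; the 10%
# cohort is a strict subset of the 50% cohort.
users = [f"user-{i}" for i in range(1000)]
at_10 = {u for u in users if is_enabled("new-checkout", u, 10)}
at_50 = {u for u in users if is_enabled("new-checkout", u, 50)}
```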

“Progressive software delivery is an effective strategy to mitigate the risk to business operations caused by product changes,” said Nancy Kastl, executive director of testing services at digital transformation agency SPR. “The purpose is to learn from the experiences of the pilot group, quickly resolve any issues that may arise and plan improvements for the full rollout.”

Other benefits Kastl perceives include:

  • Verification of correctness of permissions setup for business users
  • Discovery of business workflow issues or data inaccuracy not detected during testing activities
  • Effective training on the software product
  • The ability to provide responsive support during first-time product usage
  • The ability to monitor performance and stability of the software product under actual production conditions including servers and networks

“Global companies with a very large software product user base and custom configurations by country or region often use this approach for planning rollout of software products,” Kastl said.

Chaos Engineering
Chaos engineering is literally testing the effects of chaos (infrastructure, network and application failures) as it relates to an application’s resiliency. The idea originated at Netflix with a program called “Chaos Monkey,” which randomly chooses a server and disables it. Eventually, Netflix created an entire suite of open-source tools called the “Simian Army” to test for more types of failures, such as a network failure or an AWS region or availability zone drop. 
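A toy version of the Chaos Monkey idea described above, with a simulated fleet standing in for real servers:

```python
# Chaos-engineering sketch: randomly terminate one instance from a
# (simulated) fleet, then check that the service still has the quorum
# it needs. Instance names are made up.

import random

class Fleet:
    """Simulated fleet of service instances."""

    def __init__(self, instances):
        self.instances = set(instances)

    def terminate(self, instance):
        self.instances.discard(instance)

    def is_healthy(self, min_instances=2):
        # The service tolerates failure as long as a quorum remains.
        return len(self.instances) >= min_instances

def chaos_monkey(fleet, rng=None):
    # Pick a random victim and kill it; what matters is whether the
    # rest of the system keeps working afterward.
    rng = rng or random.Random()
    victim = rng.choice(sorted(fleet.instances))
    fleet.terminate(victim)
    return victim

fleet = Fleet(["i-01", "i-02", "i-03"])
killed = chaos_monkey(fleet)
assert fleet.is_healthy(), "service lost its quorum after the failure"
```

Real chaos tools operate on live infrastructure, of course; the point of the sketch is that the experiment is an assertion about resilience, not about the failure itself.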

The Simian Army project is no longer actively maintained but some of its functionality has been moved to other Netflix projects. Chaos engineering lives on. In fact, Gartner is seeing a lot of interest in it.

“Now what you’re starting to see are a couple of commercial implementations. For chaos to be accepted more broadly, often you need something more commercial,” said Gartner’s Murphy. “It’s not that you need commercial software, it’s going to be a community around it so if I need something, someone can help me understand how to do it safely.”

Chaos engineering is not something teams suddenly just do. It usually takes a couple of years because they’ll experiment in phases, such as lab testing, application testing and pre-production. 

Chris Lewis, engineering director at technology consulting firm DMW Group, said his firm has tried chaos engineering on a small scale, introducing the concept to DMW’s rather conservative clientele.

“We’ve introduced it in a pilot sense showing them it can be used to get under the hood of non-functional requirements and showing that they’re actually being met,” said Lewis. “I think very few of them would be willing to push the button on it in production because they’re still nervous. People in leadership positions [at those client organizations] have come from a much more traditional background.”

Chaos engineering is more common among digital disruptors and smaller innovative companies that distinguish themselves using the latest technologies and techniques.

Proceed with caution

Adopting more testing techniques can be beneficial when organizations are actually prepared to do so. One common mistake is taking on too much too soon and then failing to reap the intended benefits. Raj Kanuparthi, founder and CEO of custom software development company Narwal, said in some cases people need to be more realistic. 

“If I don’t have anything in place, then I get my basics right, [create] a road map, then step-by-step instrument. You can do it really fast, but you have to know how you’re approaching it,” said Kanuparthi, who is a big proponent of Tricentis. “So many take on too much and try 10 things but don’t make meaningful progress on anything and then say, ‘It doesn’t work.’”

There’s more to testing than simply testing
https://sdtimes.com/test/theres-more-to-testing-than-simply-testing/
Wed, 02 Sep 2020 13:30:44 +0000

The post There’s more to testing than simply testing appeared first on SD Times.

Rapid innovation and the digitalization of everything are increasing application complexity and the complexity of the environments in which applications run. While there’s an increasing emphasis on continuous testing as more DevOps teams embrace CI/CD, some organizations are still disproportionately focused on functional testing.

“Just because it works doesn’t mean it’s a good experience,” said Thomas Murphy, senior director analyst at Gartner. “If it’s my employee, sometimes I make them suffer but that means I’m going to lose productivity and it may impact employee retention. If it’s my customers, I can lose retention because I did not meet the objectives in the first place.”

Today’s applications should help facilitate the organization’s business goals while providing the kind of experience end users expect. To accomplish that, software teams must take a more holistic approach to testing than they have traditionally, one that involves more types of tests and more roles in testing.

“The patterns of practice come from architecture and the whole idea of designing patterns,” said Murphy. “The best practices 10 years ago are not best practices today and the best practices three years ago are probably not the best practices today. The leading practices are the things Google, Facebook and Netflix were doing three to five years ago.”

Chris Lewis, engineering director at technology consulting firm DMW Group, said his enterprise clients are seeing the positive impact a test-first mindset has had over the past couple of years.

“The things I’ve seen [are] particularly in the security and infrastructure world where historically testing hasn’t been something that’s been on the agenda. Those people tend to come from more traditional, typically full-stack software development backgrounds and they’re now wanting more control of the development processes end to end,” said Lewis. “They started to inject testing thinking across the life cycle.”

Nancy Kastl, executive director of testing services at digital transformation agency SPR, said a philosophical evolution is occurring regarding what to test, when to test and who does the testing. 

“Regarding what to test, the movement continues away from both manual [and] automated UI testing methods and toward API and unit-level testing. This allows testing to be done sooner, more efficiently and fosters better test coverage,” said Kastl.

“When” means testing earlier and throughout the SDLC.

“Companies are continuing to adopt Agile or improve the way they are using Agile to achieve its benefits of continuous delivery,” said Kastl. “With the current movement to continuous integration and delivery, the ‘shift-left’ philosophy is now embedded in continuous testing.”

However, when everyone’s responsible for testing, arguably nobody is, unless it’s clear who should test what, when, and how. Testing can no longer be the sole domain of testers and QA engineers because finding and fixing bugs late in the SDLC is inadequate, unnecessarily costly and untenable as application teams continue to shrink their delivery cycles. As a result, testing must necessarily shift left to developers and right to production, involving more roles.

“This continues to be a matter of debate. Is it the developers, testers, business analysts, product owners, business users, project managers [or] someone else?” said Kastl. “With an emphasis on test automation requiring coding skills, some argue for developers to do the testing beyond just unit tests.”

Meanwhile, the scope of tests continues to expand beyond unit, integration, system and user acceptance testing (UAT) to include security, performance, UX, smoke, and regression testing. Feature flags, progressive software delivery, chaos engineering and test-driven development are also considered part of the testing mix today.

Security goes beyond penetration testing
Organizations irrespective of industry are prioritizing security testing to minimize vulnerabilities and manage threats more effectively.

“Threat modeling would be a starting point. The other thing is that AI and machine learning are giving me more informed views of both code and code quality,” said Gartner’s Murphy. “There are so many different kinds of attacks that occur and sometimes we think we’ve taken these precautions but the problem is that while you were able to stop [an attack] one way, they’re going to find different ways to launch it, different ways it’s going to behave, different ways that it will be hidden so you don’t detect it.”

In addition to penetration testing, organizations may use a combination of tools and services that can vary based on the application. Some of the more common ones are static and dynamic application security testing, mobile application security testing, database security testing, software composition analysis and appsec testing as a service.

DMW Group’s Lewis said his organization helps clients improve the way they define their compliance and security rules as code, typically working with people in conventional security architecture and compliance functions.

“We get them to think about what the outcomes are that they really want to achieve and then provide them with expertise to actually turn those into code,” said Lewis.

SPR’s Kastl said continuous delivery requires continuous security verification to provide early insight into potential security vulnerabilities.

“Security, like quality, is hard to build in at the end of a software project and should be prioritized throughout the project life cycle,” said Kastl. “The Application Security Verification Standard (ASVS) is a framework of security requirements and controls that defines a secure application for teams developing and testing modern applications.”

Kastl said that includes:

  • adding security requirements to the product backlog with the same attention to coverage as the application’s functionality;
  • a standards-based test repository that includes reusable test cases for manual testing and to build automated tests for Level 1 requirements in the ASVS categories, which include authentication, session management, and function-level access control;
  • in-sprint security testing that’s integrated into the development process while leveraging existing approaches such as Agile, CI/CD and DevOps;
  • post-production security testing that surfaces vulnerabilities requiring immediate attention before opting for a full penetration test;
  • and, penetration testing to find and exploit vulnerabilities and to determine if previously detected vulnerabilities have been fixed. 
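As a sketch of what one in-sprint, reusable access-control test from such a repository might look like (the routes and toy handler below are hypothetical stand-ins, not ASVS tooling or a real framework):

```python
# ASVS-style function-level access-control check: every protected
# route must reject a request that lacks a valid session. The handler
# here is a toy stand-in for the application under test.

PROTECTED_ROUTES = ["/account", "/orders", "/admin"]

def handle_request(path, session_token=None):
    # Hypothetical app logic: protected paths require a valid token.
    if path in PROTECTED_ROUTES and session_token != "valid-token":
        return 401
    return 200

def test_function_level_access_control():
    # Reusable test: sweep all protected routes with no credentials.
    failures = [p for p in PROTECTED_ROUTES if handle_request(p) != 401]
    assert not failures, f"unauthenticated access allowed: {failures}"

test_function_level_access_control()
authorized = handle_request("/account", session_token="valid-token")
```

Because the check sweeps a route list rather than hard-coding one page, adding a new protected route to the backlog automatically extends the test's coverage.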

“The OWASP Top 10 is a list of the most common security vulnerabilities,” said Kastl. “It’s based on data gathered from hundreds of organizations and over 100,000 real-world applications and APIs.”

Performance testing beyond load testing
Load testing ensures that the application continues to operate as intended as the workload increases with emphasis on the upper limit. By comparison, scalability testing considers both minimum and maximum loads. In addition, it’s wise to test outside of normal workloads (stress testing), to see how the application performs when workloads suddenly spike (spike testing) and how well a normal workload endures over time (endurance testing).
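The difference between these workload shapes comes down to how the request rate varies over time. A harness might generate schedules like these (all numbers are illustrative):

```python
# Request-rate schedules (requests per second for each minute of a
# ten-minute run) for three of the workload shapes described above.

def load_profile(minutes=10, base=100):
    # Load test: ramp steadily toward the expected upper limit.
    return [base + i * (base // minutes) for i in range(minutes)]

def spike_profile(minutes=10, base=100, spike=1000, spike_at=5):
    # Spike test: normal load with a sudden burst partway through.
    return [spike if i == spike_at else base for i in range(minutes)]

def endurance_profile(minutes=10, base=100):
    # Endurance test: a normal workload held flat over a long period
    # (in practice, hours rather than minutes).
    return [base] * minutes

schedules = {
    "load": load_profile(),
    "spike": spike_profile(),
    "endurance": endurance_profile(),
}
```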

“Performance really impacts people from a usability perspective. It used to be if your page didn’t load within this amount of time, they’d click away and then it wasn’t just about the page, it was about the performance of specific elements that could be mapped to shopping cart behavior,” said Gartner’s Murphy.

For example, GPS navigation and wearable technology company Garmin suffered a multi-day outage when it was hit by a ransomware attack in July 2020. Its devices were unable to upload activity to Strava’s mobile app and website for runners and cyclists. The situation underscores the fact that cybersecurity breaches can have downstream effects.

“I think Strava had a 40% drop in data uploads. Pretty soon, all this data in the last three or four days is going to start uploading to them so they’re going to get hit with a spike of data, so those types of things can happen,” said Murphy.

To prepare for that sort of thing, one could run performance and stress tests on every build or use feature flags to compare performance with the prior build.

Instead of waiting for a load test at the end of a project to detect potential performance issues, performance tests can be used to baseline the performance of an application under development.

“By measuring the response time for a single user performing specific functions, these metrics can be gathered and compared for each build of the application,” said Kastl. “This provides an early warning of potential performance issues. These baseline performance tests can be integrated with your CI/CD pipeline for continuous monitoring of the application’s performance.”
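One possible shape for such a baseline check, with the timed operation and the 20% tolerance chosen purely for illustration:

```python
# Per-build performance baseline sketch: time a single-user operation,
# compare against the stored baseline, and fail the pipeline if the
# response time regressed beyond a tolerance.

import time

def measure(fn, runs=5):
    # Median of several runs to damp timing noise.
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return sorted(samples)[len(samples) // 2]

def check_against_baseline(measured, baseline, tolerance=0.20):
    # Allow up to a 20% slowdown before flagging a regression.
    return measured <= baseline * (1 + tolerance)

def search_catalog():
    # Stand-in for the real user-facing operation being baselined.
    sum(i * i for i in range(10_000))

elapsed = measure(search_catalog)
ok = check_against_baseline(elapsed, baseline=elapsed)  # demo: trivially passes
```

In a pipeline, `baseline` would come from the previous build's stored measurement rather than the current run.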

Mobile and IoT devices, such as wearables, have increased the need for more comprehensive performance testing and there’s still a lot of room for improvement.

“As the industry has moved more to cloud-based technology, performance testing has become more paramount,” said Todd Lemmonds, QA architect at health benefits company Anthem, a Sauce Labs customer. “One of my current initiatives is to integrate performance testing into the CI/CD pipeline. It’s always done more toward UAT which, in my mind, is too late.”

To effect that change, developers need to think about performance and about how analytics need to be structured in a way that allows the business to make decisions. The artifacts can be used later during a full systems performance test.

“We’ve migrated three channels on to cloud, [but] we’ve never done a performance test of all three channels working at capacity,” said Lemmonds. “We need to think about that stuff and predict the growth pattern over the next five years. We need to make sure that not only can our cloud technologies handle that but what the full system performance is going to look like. Then, you run into issues like all of our subsystems are not able to handle the database connections so we have to come up with all kinds of ways to virtualize the services, which is nothing new to Google and Amazon, but [for] a company like Anthem, it’s very difficult.”

DMW Group’s Lewis said some of his clients have ignored performance testing in cloud environments since cloud environments are elastic.

“We have to bring them back to reality and say, ‘Look, there is an art form here that has significantly changed and you really need to start thinking about it in more detail,” said Lewis.

UX testing beyond UI and UAT
While UI and UAT testing remain important, UI testing is only a subset of what needs to be done for UX testing, while traditional UAT happens late in the cycle. Feature flagging helps by providing early insight into what’s resonating and not resonating with users while generating valuable data. There’s also usability testing including focus groups, session recording, eye tracking and quick one-question in-app surveys that ask whether the user “loves” the app or not.

One area that tends to lack adequate focus is accessibility testing, however. 

“More than 54 million U.S. consumers have disabilities and face unique challenges accessing products, services and information on the web and mobile devices,” said SPR’s Kastl. “Accessibility must be addressed throughout the development of a project to ensure applications are accessible to individuals with vision loss, low vision, color blindness or learning loss, and to those otherwise challenged by motor skills.”

The main issue is a lack of awareness, especially among people who lack firsthand or secondhand experience with disabilities. And while regulatory enforcement is uneven, accessibility-related lawsuits are growing rapidly. 

“The first step to ensuring an application’s accessibility is to include ADA Section 508 or WCAG 2.1 Accessibility standards as requirements in the product’s backlog along with functional requirements,” said Kastl.

Non-compliance to an accessibility standard on one web page tends to be repeated on all web pages or throughout a mobile application. To detect non-compliant practices as early as possible, wireframes and templates for web and mobile applications should be reviewed for potential non-compliant designed components, Kastl said. In addition to the design review, there should be a code review in which development teams perform self-assessments using tools and practices to identify standards that have not been followed in coding practices. Corrective action should be taken by the team prior to the start of application testing. Then, during in-sprint testing activities, assistive technologies and tools such as screen readers, screen magnification and speed recognition software should be used to test web pages and mobile applications against accessibility standards. Automated tools can detect and report non-compliance.
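A taste of what such automated detection looks like for a single rule (images missing the alt text WCAG requires), using only Python's standard library; real scanners such as those mentioned above cover many more rules:

```python
# Toy accessibility check: scan markup for <img> tags that are
# missing alt text. The sample page below is made up.

from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                # Record the offending image by its source.
                self.violations.append(attrs.get("src", "<unknown>"))

page = '<img src="logo.png" alt="Company logo"><img src="chart.png">'
checker = AltTextChecker()
checker.feed(page)
```

Because non-compliant patterns tend to repeat across pages, a check like this run over templates catches the problem once, at the source.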

Gartner’s Murphy said organizations should be monitoring app ratings and reviews as well as social media sentiment on an ongoing basis.

“You have to monitor those things, and you should. You’re feeding stuff like that into a system such as Statuspage or PagerDuty so that you know something’s gone wrong,” said Murphy. “It may not just be monitoring your site. It’s also monitoring those external sources because they may be the leading indicator.”

Autonomous testing: Are we there yet?
https://sdtimes.com/test/autonomous-testing-are-we-there-yet/
Tue, 04 Aug 2020 17:30:17 +0000

The post Autonomous testing: Are we there yet? appeared first on SD Times.

A couple of years ago, there was a lot of hype about using AI and machine learning (ML) in testing, but not a lot to show for it. Today, there are many options that deliver important benefits, not the least of which are reducing the time and costs associated with testing. However, a hands-on evaluation may be sobering.

For example, Nate Custer, senior manager at testing automation consultancy TTC Global, has been researching autonomous testing tools for about a year. When he started the project, he was new to the company and a client had recently inquired about options. The first goal was to build a technique for evaluating how effective the tools were in testing. 

“The number one issue in testing is test maintenance. That’s what people struggle with the most. The basic idea is that you automate tests to save a little bit of time over and over again. When you test lots of times, you only run tests if the software’s changed, because if the software changes, the test may need to change,” said Custer. “So, when I first evaluate stuff, I care about how fast I can create tests, how much can I automate and the maintenance of those testing projects.”

RELATED CONTENT:
AI and ML make testing smarter… but autonomous tools are a long way from being enterprise-ready
What to look for in a web and mobile test automation tool
Continuous testing isn’t optional anymore

Custer’s job was to show how and where different tools could and could not make an impact. The result of his research is that he’s optimistic, but skeptical.

There’s a lot of potential, but…
Based on first-hand research, Custer believes that there are several areas where AI and ML could have a positive impact. At the top of the list is test selection. Specifically, the ability to test all of what’s in an enterprise, not just web and mobile apps.

“If I want to change my tools from this to that, the new tool has to handle everything in the environment. That’s the first hurdle,” said Custer. “But what tests to run based on this change can be independent from the platform you use to execute your test automation, and so I think that’s the first place where you’re going to see a breakthrough of AI in the enterprise. Here’s what’s changed, which tests should I run? Because if I can run 10% of my tests and get the same benefit in terms of risk management, that’s a huge win.”
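The "here's what's changed, which tests should I run?" idea can be sketched with a hand-written coverage map; the tools Custer describes would learn this map from coverage data or ML rather than have it written by hand:

```python
# Change-based test selection sketch: map source modules to the tests
# that exercise them, then intersect with the files touched by a
# change. All file and test names are made up.

COVERAGE_MAP = {
    "billing.py": {"test_invoices", "test_refunds"},
    "auth.py": {"test_login", "test_sessions"},
    "search.py": {"test_search"},
}

ALL_TESTS = set().union(*COVERAGE_MAP.values())

def select_tests(changed_files):
    selected = set()
    for f in changed_files:
        # A file with no coverage data falls back to running everything.
        if f not in COVERAGE_MAP:
            return ALL_TESTS
        selected |= COVERAGE_MAP[f]
    return selected

subset = select_tests(["billing.py"])
```

Even this naive version shows the payoff Custer describes: touching one module means running two tests instead of five, with the same risk coverage.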

The second area of promise is surfacing log differences, so if a test that should take 30 seconds to run suddenly took 90 seconds, the tool might suggest that the delay was caused by a performance issue. 

“Testing creates a lot of information and logs and AI/ML tools are pretty good at spotting things that are out of the ordinary,” said Custer. 
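Flagging an out-of-the-ordinary test duration, like the 30-second test that suddenly takes 90, can be as simple as a standard-deviation check against the test's own history (the threshold and timings here are illustrative):

```python
# Duration anomaly sketch: flag a test run whose time deviates sharply
# (more than three standard deviations) from its own history.

from statistics import mean, stdev

def is_anomalous(history, latest, threshold=3.0):
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > threshold

history = [30.1, 29.8, 30.4, 30.0, 29.9]  # seconds, past runs
flagged = is_anomalous(history, 90.0)     # the sudden 90-second run
normal = is_anomalous(history, 30.2)      # an ordinary run
```

Commercial AI/ML tools go further, correlating the anomaly with log differences to suggest a cause, but the detection step is the same idea.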

The third area is test generation using synthetic test data because synthetic data can be more practical (faster, cheaper and less risky) to use than production data. 

“I’m at a company right now that does a lot of credit card processing. I need profiles of customers doing the same number of transactions, the same number of cards per household that I would see in production. But I don’t want a copy of the production data because that’s a lot of important information,” said Custer.
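A sketch of synthetic data generation in that spirit: household profiles whose shape mimics assumed production distributions while containing no real records (the distributions and field names here are made up):

```python
# Synthetic test data sketch: generate card-holder households with a
# realistic *shape* (cards per household, transaction volume) without
# copying any production record.

import random

def synthetic_households(n, seed=7):
    rng = random.Random(seed)  # fixed seed keeps test data reproducible
    rows = []
    for i in range(n):
        rows.append({
            "household_id": f"H{i:05d}",
            # Assumed distribution: most households hold 1-2 cards.
            "cards": rng.choices([1, 2, 3, 4], weights=[50, 30, 15, 5])[0],
            # Assumed ~45 transactions/month with some spread.
            "monthly_txns": max(1, int(rng.gauss(mu=45, sigma=15))),
        })
    return rows

data = synthetic_households(500)
```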

Self-healing capabilities showed potential, although Custer wasn’t impressed with the results.

“Everything it healed already worked. So, you haven’t really changed maintenance. When a change is big enough to break my automation, the AI tool had a hard time fixing it,” said Custer. “It would surface really weird things. So, that to me is a little longer-term work for most enterprise applications.”
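A minimal sketch of the self-healing idea: when the primary locator no longer matches, fall back to alternate attributes remembered from earlier runs. It also illustrates Custer's point that only small changes can be healed; if every remembered attribute changed, the fallback has nothing to work with:

```python
# Self-healing locator sketch: try locators in priority order against
# a toy DOM (a list of element dicts). All element names are made up.

def find_element(dom, locators):
    """dom: list of element dicts; locators: (attribute, value) pairs
    tried in priority order until one matches."""
    for key, value in locators:
        for el in dom:
            if el.get(key) == value:
                return el
    return None

# The button's id changed in the new build, but its text survived,
# so the fallback locator "heals" the lookup.
dom = [{"id": "btn-submit-v2", "text": "Submit", "tag": "button"}]
locators = [("id", "btn-submit"), ("text", "Submit")]
healed = find_element(dom, locators)
```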

Are we there yet?
“Are We There Yet?” was the title of Custer’s research project and his conclusion is that autonomous testing isn’t ready for prime time in an enterprise environment.

“I’m not seeing anything I would recommend using for an enterprise customer yet. And the tools that I’ve tested didn’t perform any better. My method was to start with a three-year-old version of software, write some test cases, automate them, go through three years of upgrades and pay attention to the maintenance it took to do those upgrades,” said Custer. “When I did that, I found it didn’t save any maintenance time at all. Everybody’s talking about [AI], everyone’s working on it but there are some of them I’m suspicious about,” said Custer.

For example, one company requested the test script so they could parse it in order to understand it. When Custer asked how long it would take, the company said two or three hours. Another company said it would take two or three months to generate a logical map of a program.

“[T]hat doesn’t sound different from hiring a consultant to write your testing. AI/ML stuff has to actually make life easier and better,” said Custer.

Another disappointment was the lack of support for enterprise applications such as SAP and Oracle eBusiness Suite. 

“There are serious limitations on what technologies they support. If I were writing my own little startup web application, I would look at these tools. But if I were a Fortune 500 company, I think it’s going to take them a couple of years to get there,” said Custer. “The challenge is most of these companies aren’t selling a little add-on that you can add into your existing system. They’re saying change everything from one tool that works to my thing and that’s a huge risk.”

The post Autonomous testing: Are we there yet? appeared first on SD Times.

AI and ML make testing smarter… but autonomous tools are a long way from being enterprise-ready https://sdtimes.com/test/ai-and-ml-make-testing-smarter-but-autonomous-tools-are-a-long-way-from-being-enterprise-ready/ Tue, 04 Aug 2020 16:11:35 +0000 https://sdtimes.com/?p=40865

The post AI and ML make testing smarter… but autonomous tools are a long way from being enterprise-ready appeared first on SD Times.

AI and machine learning (ML) are finding their way into more applications and use cases, and software testing vendors are increasingly offering “autonomous” capabilities to help customers become even more efficient. Those capabilities are especially important for Agile and DevOps teams that need to deliver quality at speed. However, autonomous testing capabilities are relatively new, so they’re not perfect or uniformly capable in all areas. Also, the “autonomous” designation does not mean the tools are in fact fully autonomous; they’re merely assistive.

“Currently, AI/ML works great for testing server-side glitches and, if implemented correctly, it can greatly enhance the accuracy and quantity of testing over time,” said Nate Nead, CEO of custom software development services company Dev.co. “Unfortunately, where AI/ML currently fails is in connecting to the full stack, including UX/UI interfaces with database testing. While that is improving, humans are still best at telling a DevOps engineer what looks best, performs best and feels best.”

RELATED CONTENT:
What to look for in a web and mobile test automation tool
Continuous testing isn’t optional anymore
Forrester’s recommendations for building a successful continuous testing capability

Dev.co has tried solutions from TextCraft.io and BMC, and attempted some custom internal processes, but the true “intelligence” is not yet where imaginations might lead, Nead said.

It’s early days
Gartner Senior Director Analyst Thomas Murphy said autonomous testing is “still on the left-hand side of the Gartner Hype Cycle.” (That’s the early adopter stage, characterized by inflated expectations.)

The good news is there are lots of places to go for help including industry research firms, consulting firms, and vendors’ services teams. Forrester VP and Principal Analyst Diego Lo Giudice created a five-level maturity model inspired by SAE International’s “Levels of Driving Automation” model. Level 5 (the most advanced level) of Lo Giudice’s model, explained in a report, is fully autonomous, but that won’t be possible anytime soon, he said. Levels one through four represent increasing levels of human augmentation, from minimal to maximum. 

The most recent Gartner Magic Quadrant for Software Test Automation included a section about emerging autonomous testing tools. The topic will be covered more in the future, Murphy said.

“We feel at this point in time that the current market is relatively mature, so we’ve retired that Magic Quadrant and our intent is to start writing more about autonomous capabilities and potentially launch a new market next year,” said Murphy. “But first, we’re trying to get the pieces down to talk about the space and how it works.”

Forrester’s Lo Giudice said AI was included in most of the criteria covered in this year’s Continuous Functional Test Automation Wave.

“There was always the question of, tell me if you’re using AI, what for and what are the benefits,” said Lo Giudice. “Most of the tools in the Wave are using AI, machine learning and automation at varying levels of degree, so it’s becoming mainstream of who’s using AI and machine learning.”

How AI and ML are being used in testing
AI and ML are available for use at different points in the SDLC and for different types of testing. The most popular and mature area is UI testing. 

“Applitools allows you to create a baseline of how tolerant you want to be on the differences. If something moved from the upper right-hand corner to the lower left-hand corner, is that a mistake or are you OK with accepting that as the tests should pass?” said Forrester’s Lo Giudice.  
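The tolerance idea Lo Giudice describes can be sketched in a few lines. This is only an illustrative pixel-diff check, not Applitools’ actual Visual AI (which uses computer vision rather than raw pixel comparison), and the function names are hypothetical:

```python
def visual_diff_ratio(baseline, candidate):
    """Fraction of pixels that differ between two equal-sized screenshots,
    each represented here as a flat list of pixel values."""
    if len(baseline) != len(candidate):
        raise ValueError("images must have the same dimensions")
    diffs = sum(1 for a, b in zip(baseline, candidate) if a != b)
    return diffs / len(baseline)

def passes_visual_check(baseline, candidate, tolerance=0.01):
    """Pass the test when no more than `tolerance` of the pixels changed,
    i.e. the team has accepted that much visual drift from the baseline."""
    return visual_diff_ratio(baseline, candidate) <= tolerance
```

Raising `tolerance` corresponds to accepting larger layout changes as passing, which is the baseline decision described in the quote.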

There’s also log file analysis that can identify patterns and outliers. Gartner’s Murphy said some vendors are using log files and/or a web crawling technique to understand an application and how it’s used.

“I’ll look at the UI and just start exercising it and then figure out all the paths just like you used to have in the early days of web applications, so it’s just recursively building a map by talking through the applications,” said Murphy. “It’s useful when you have a very dynamic application that’s content-oriented [like] ecommerce catalogs, news and feeds.”
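The crawling technique Murphy describes is essentially a breadth-first traversal of the application’s link graph. A minimal sketch, assuming the app has already been abstracted as a mapping from each page to the pages its UI links to (a real tool would render each page and extract the links itself):

```python
from collections import deque

def crawl_map(app_links, start):
    """Recursively discover every reachable page, building a site map.

    `app_links` maps a page to the list of pages it links to.
    Returns a dict of page -> outgoing links for all reachable pages.
    """
    visited = {start}
    queue = deque([start])
    site_map = {}
    while queue:
        page = queue.popleft()
        site_map[page] = app_links.get(page, [])
        for target in site_map[page]:
            if target not in visited:
                visited.add(target)
                queue.append(target)
    return site_map
```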

If the tool understands the most frequently used features of an application it may also be capable of comparing its findings with the tests that have been run.

“What’s the intersection between the use of the features and the test case that you’ve generated? If that intersection is empty, then you have a concern,” said Forrester’s Lo Giudice. “Am I designing and automating tests for the right features? If there’s a change in that space I want to create tests for those applications. This is an optimization strategy, starting from production.”

Natural language processing (NLP) is another AI technique used in some of the testing tools, in this case to bring autonomous testing capabilities to less technical testers. For example, the Gherkin domain-specific language (DSL) for Cucumber has a relatively simple “Given, When, Then” syntax, but natural language is even easier to use.
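For readers unfamiliar with it, Gherkin looks like this (a made-up scenario, shown only to illustrate the Given/When/Then structure):

```gherkin
Feature: Shopping cart
  Scenario: Adding an item updates the total
    Given an empty shopping cart
    When the user adds a book priced at $10
    Then the cart total should be $10
```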

“There’s a [free and open source] tool called Gauge created by ThoughtWorks [that] combines NLP together with the concept of BDD so now we can start to say you can write requirements using a relatively normal language and from that the tool can figure out what tests you need, when you met the requirement,” said Gartner’s Murphy. “[T]hen, they connect that up to a couple of different tools that create those [tests] for you and run them.”

Parasoft uses AI to simplify API testing by allowing a user to run its record-and-play tool; from that recording, it generates API tests.

“It would tell you which APIs you need to test if you want to go beyond the UI,” said Forrester’s Lo Giudice. 

Some tools claim to be “self-healing,” such as noticing that a path changed based on a UI change. Instead of making the entire test fail, the tool may recognize that although a field moved, the URL is the same and that the test should pass instead of fail.

“Very often when you’re doing Selenium tests you get a bug, [but] you don’t know whether it’s a real bug of the UI or if it’s just the test that fails because of the locator,” said Lo Giudice. “AI and machine learning can help them get over those sorts of things.”
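Self-healing of this kind can be approximated as a fallback chain of locator strategies. The sketch below is vendor-neutral and deliberately simplified; a real tool would wrap Selenium’s element lookup and rank candidate locators with ML, and both `find` and the locator strings here are hypothetical:

```python
def find_with_healing(find, locators):
    """Try each locator in order; report a 'healed' lookup when the
    primary locator fails but an alternative still finds the element.

    `find` is any callable that returns the element or raises LookupError;
    `locators` is a list of locator strings, primary locator first.
    Returns (element, healed) so the tool can alert the tester to update
    the script instead of failing the whole test.
    """
    for i, locator in enumerate(locators):
        try:
            element = find(locator)
            return element, (i > 0)  # healed if we had to fall back
        except LookupError:
            continue
    raise LookupError(f"no locator matched: {locators}")
```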

AI and ML can also be used to identify similar tests that have been created over time so the unnecessary tests can be eliminated. 
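One simple way to flag near-duplicate tests is step-level similarity. A sketch using Jaccard similarity over each test’s steps (real tools likely use richer signals, such as coverage overlap; the threshold here is arbitrary):

```python
def jaccard(a, b):
    """Jaccard similarity of two collections of test steps."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def find_redundant_tests(tests, threshold=0.8):
    """Return pairs of test names whose step sets overlap heavily,
    as candidates for elimination. `tests` maps name -> list of steps."""
    names = sorted(tests)
    return [
        (x, y)
        for i, x in enumerate(names)
        for y in names[i + 1:]
        if jaccard(tests[x], tests[y]) >= threshold
    ]
```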

Dev.co uses AI and ML to find and fix runtime errors faster.

“The speed improvements of AI/ML allow for runtime errors to be navigated more quickly, typically by binding and rebinding elements in real time, and moving on to later errors that may surface in a particular batch of code,” said Dev.co’s Nead. “Currently, the machine augmentation typically occurs in the binding of the elements, real-time alerts and restarts of testing tools without typically long lags between test runtime.”

Do autonomous testing tools require special skills?
The target audience for autonomous software testing products consists of technical testers, business testers and developers, generally speaking. While it’s never a bad idea to understand the basics of AI and ML, one does not have to be a data scientist to use the products because the vendor is responsible for ensuring the ongoing accuracy of the algorithms and models used in its products. 

“In most cases, you’re not writing the algorithm, you’re just utilizing it. Being able to understand where it might go wrong and what the strengths or weaknesses of that style are can be useful. It’s not like you have to learn to write in Python,” said Gartner’s Murphy.

Dev.co’s Nead said his QA testing leads and DevOps managers are the ones using autonomous testing tools and that the use of the tools differs based on the role and the project in which the person is engaged.

If you want to build your own autonomous testing capabilities, then data scientists and testers should work together. For example, Capgemini explained in a webinar with Forrester that it had developed an ML model for optimizing Dell server testing. Before Dell introduces a new server, it tests all the possible hardware and software configurations, which number more than one trillion.

“They said the 1.3 trillion possible test cases would take a year to test, so they sat down with smart testers and built a machine learning model that looked at the most frequent business configurations used in the last 3, 4, 5 years,” said Forrester’s Lo Giudice. “They used that data and basically leveraging that data, they identified the test cases they had to test for maximum coverage with a machine learning model that tells you this is the minimum number of test cases [you need to run].”

Instead of needing a year to run 1.3 trillion tests, they were able to run a subset of tests in 15 days. 
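The shape of that optimization, covering the most frequently used business configurations first, can be sketched with a simple frequency cut. Capgemini’s actual model is ML-based, so this is only an illustration of the idea:

```python
from collections import Counter

def select_configs(observed_configs, coverage=0.95):
    """Pick the smallest set of configurations that accounts for the
    given share of observed usage, most frequent first.

    `observed_configs` is a list of configuration identifiers, one
    entry per observed customer deployment.
    """
    counts = Counter(observed_configs)
    total = sum(counts.values())
    selected, covered = [], 0
    for config, n in counts.most_common():
        if covered / total >= coverage:
            break
        selected.append(config)
        covered += n
    return selected
```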

Benefits
The Dell example and the use cases outlined above show that autonomous testing can save time and money.

“Speed comes in two ways.  One is how quickly can I create tests? The other is how quickly can I maintain those tests?” said Gartner’s Murphy. “One of the issues people run into when they build automation is that they get swamped with maintenance. I’ve created tons of tests and now how do I run them in the amount of time I have to run them?”

For example, if a DevOps organization completes three builds per hour but testing a build takes an hour, the choices are to wait for the tests to run in sequence or run them in parallel.

“One of the things in CI is don’t break the build. If you start one build, you shouldn’t start another build until you know you have a good build, so if the tests [for three builds] are running [in parallel] I’m breaking the way DevOps works. If we’ve got to wait, then people are laying around before they can test their changes. So if you can say based on the changes you need, you don’t need to run 10,000 tests, just run these 500, that means I can get through a build much faster,” said Murphy.

Similarly, it may be that only 20 tests need to be created instead of 100. Creating fewer tests takes less time, and a smaller suite takes less time to automate and execute. The savings also extend to cloud resource usage and testing services.
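The “run these 500, not 10,000” idea is commonly called test-impact analysis: map each test to the code it exercises, then select only the tests affected by a change. A minimal sketch under that assumption (the mapping itself would come from an instrumented prior run):

```python
def select_impacted_tests(test_map, changed_files):
    """Return only the tests that exercise at least one changed file.

    `test_map` maps a test name to the set of source files it covers,
    e.g. gathered from coverage data on a previous build.
    """
    changed = set(changed_files)
    return sorted(t for t, files in test_map.items() if files & changed)
```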

“The more you can shift the use of AI to the left, the greater your benefits will be,” said Forrester’s Lo Giudice. 

Limitations
The use of AI and ML in testing is relatively new, with a lot of progress being made in the last 12 to 18 months. However, there is always room for improvement, expansion and innovation.

Perhaps the biggest limitation has to do with the tools themselves. While there’s a tendency to think of AI in general terms, there is no general AI one can apply to everything. Instead, the most successful applications of AI and ML are narrow, since artificial narrow intelligence (ANI) is the state of the art. So, no one tool will handle all types of tests on code regardless of how it was built.

“It’s not just the fact that it’s web or not. It’s this tool works on these frameworks or it works for Node.js but it doesn’t work for the website you built in Java, so we’re focused on JavaScript or PHP or Python,” said Gartner’s Murphy. “Worksoft is focused on traditional legacy things, but the way the tool works, I couldn’t just drop it in and test a generic website.”

Dev.co’s Nead considers a human in the loop a limitation.

“Fixes still require an understanding of the underlying code, [because one needs to] react and make notes when errors appear. The biggest boons to testing are the speed improvements offered over existing systems. It may not be huge yet as much of the testing still requires restarting and review from a DevOps engineer, but taken in the aggregate, the savings do go up over time,” said Nead.

Autonomous testing will continue to become more commonplace because it helps testers do a better job of testing faster and cheaper than they have done in the past. The best way to understand how the tools can help is to experiment with them to determine how they fit with existing processes and technologies.

Over time, some teams may find themselves adopting autonomous testing solutions by default, because their favorite tools have simply evolved.

A guide to UI testing solutions https://sdtimes.com/test/a-guide-to-ui-testing-solutions/ Tue, 04 Feb 2020 17:41:44 +0000 https://sdtimes.com/?p=38786

The post A guide to UI testing solutions appeared first on SD Times.

HCL OneTest supports a DevOps testing approach with UI testing, API testing, performance testing, data fabrication, and service virtualization. The solution is designed to automate and run tests early and more frequently to discover errors faster. Recent additions to the HCL OneTest platform use cloud-native technologies to offer users a solution that is secure and offers discoverability of tests to enable reuse and collaboration. HCL OneTest supports DevOps deployment solutions through a wide range of integrations, and the emerging value stream management sector by integrating with UrbanCode Velocity.

Parasoft’s UI testing solutions make it easy to automate web UI tests and integrate them into your CI/CD pipeline. For Selenium users, Parasoft Selenic self-heals Selenium scripts at runtime and provides quick fixes in the user’s IDE to automatically update Selenium scripts. For users with complex test scenarios, Parasoft SOAtest provides complete end-to-end functional test automation (e.g. UI, API, database), integrated with Parasoft Virtualize for creating virtualized test environments that are available anytime, anywhere.

RELATED CONTENT:
How to solve your UI testing problems
The upside and downsides to Selenium

How does your company help facilitate UI testing?

Applitools modernizes functional and visual testing through Visual AI for increased coverage, higher quality, and better release velocity all for less time and money. Built for both developers and quality engineers, Applitools automatically validates the look, feel, and functionality of apps using 99.999% accurate computer vision technology leveraging images instead of flaky, cumbersome test code. Applitools integrates easily with all major test automation frameworks in any programming language covering web, mobile, and desktop apps.   

Appium is an open-source test automation framework for native, hybrid and mobile web apps. It is a JS Foundation project that graduated in 2017. It features the ability to automate any mobile apps from any language or any test framework, full access to back-end APIs and DBs from test code, and the ability to write tests with third-party development tools. It supports iOS, Android, Windows and Mac. 

Eggplant: Intelligent test automation is critical to success in the digital age. Eggplant enables companies to view their technology through the eyes of their users. The continuous, intelligent approach tests the end-to-end customer experience and investigates every possible user journey, providing unparalleled test coverage. Our technology taps AI and machine learning to test any technology on any device, operating system, or browser at any layer, from the UI to APIs to the database. The Eggplant platform can adapt as companies embrace new digital technologies, providing organizations with a framework for intelligent test automation that can easily scale to address tomorrow’s testing needs.

FrogLogic Squish is an automated GUI testing solution designed for cross-platform desktop, mobile, embedded and web apps. It is used for automating functional regression tests and system tests of graphical user interfaces and human machine interfaces. Features include support for all major GUI technologies, test script recording, object identification and verifications, an integrated development environment, popular script languages for test scripting, support for BDD, and integration into test management and CI-systems. 

Functionize, an automated testing solution, combines natural language processing, deep-learning ML models and other AI-based technologies to enable teams to test faster and smarter. Its visual testing capabilities ensure the whole UI is tested with visual comparison, visual completion and visual confirmation.

Katalon’s product suite is designed to generate automated tests across platforms. Katalon Recorder is a lightweight extension for test automation recording and playback. Katalon Studio aims to simplify test automation activities with built-in project templates, end-to-end testing, object spying, a dual-editor interface, and a comprehensive BDD solution. Katalium is a new framework that provides blueprints of test automation projects based on Selenium and TestNG. The company also has a beta solution, Katalon TestOps, designed to get the ‘true value’ out of automation. 

Leapwork provides automated UI testing for everyone on the team, from technical testers to non-developers and even managers. Users can design and execute automated test cases as visual flowcharts and automate any application with the help of native support and Selenium-based web automation. It features the ability to drive automation with external data, build reusable components, plug into the DevOps pipeline, report on project status, and schedule repeated execution.

Mabl is a codeless UI testing service. It enables continuous testing with an auto-healing automation framework and maintenance-free test infrastructure. Mabl advances traditional UI testing using proprietary machine learning models to automatically identify application issues, including JavaScript errors, visual regressions, broken links, increased latency, and more.

Percy is an “all-in-one visual review platform” that is designed to integrate with existing workflows and provide visual insight into product changes. It covers the entire UI and highlights relevant visual changes across web apps and component libraries. Some UI features include pixel-by-pixel diffs, responsive diffs, and snapshot stability to minimize false positives.

Perfecto’s smart automation test platform features continuous testing capabilities for web and mobile apps. The platform is AI-powered to help users test smarter and get more insights with fewer false negatives; features enterprise-grade support; and has test creation, execution, analysis and lab capabilities. 

Ranorex provides test automation for all. Users can take advantage of Ranorex Studio for end-to-end testing of desktop, web and mobile apps. Ranorex Webtestit features out-of-the-box web test automation for Selenium or Protractor using Java, TypeScript, or Python. In addition, Ranorex Studio features all the tools necessary for web test automation, such as capture-and-replay, a full IDE, and built-in reporting capabilities. 

Sauce Labs provides the world’s largest cloud-based platform for automated testing of web and mobile applications. Optimized for use in CI and CD environments, and built with an emphasis on security, reliability and scalability, users can run tests written in any language or framework using Selenium or Appium, both widely adopted open-source standards for automating browser and mobile application functionality.

Selenium is an open-source browser and automation framework and ecosystem. It features the Selenium IDE, Selenium Grid and Selenium WebDriver. The IDE records and plays back test automation for the web. WebDriver is for creating browser-based regression automation suites and tests. Lastly, Grid enables users to distribute and run tests on several machines and manage multiple environments from a single point. 

SmartBear enables teams to build and run UI tests across desktop, mobile, and web applications. SmartBear solutions support a wide variety of scripting languages, and come with an extensive object library with over fifty thousand object properties. With powerful test planning, test creation, test data management, test execution, and test environment solutions, SmartBear is paving the way for teams to consistently deliver quality at both the UI and API layer.

How does your company help facilitate UI testing? https://sdtimes.com/test/how-does-your-company-help-facilitate-ui-testing/ Tue, 04 Feb 2020 16:38:26 +0000 https://sdtimes.com/?p=38783

The post How does your company help facilitate UI testing? appeared first on SD Times.

Ashish Mathur, director and architect of testing products at HCL Software, a division of HCL Technologies
Traditionally, UI layer testing has been mainly manual due to the brittle nature of typical automated UI testing tools. However, HCL OneTest UI delivers a much more robust test automation platform with both the Script Assure technology as well as the guided and self-healing capabilities. So even if the application UI changes, the scripts are “smart” enough to see those changes and continue running, and then alert the user that the application UI has changed. This intelligent object recognition during playback makes scripts resilient to changes and easy to maintain. 

Novice test automation engineers can get up and running quickly with the HCL OneTest UI natural language syntax that is auto-generated when recording against the system under test. These scripts can then be augmented with additional steps and verification points while the application is “offline.” Coupled with the ability to interleave API tests, HCL OneTest UI offers complete traceability from UI to API and back.

RELATED CONTENT:
How to solve your UI testing problems
The upside and downsides to Selenium

HCL OneTest also provides the ability to reuse UI tests during performance testing, which leads to efficiencies in script creation and earlier results. The Accelerated Functional Testing feature uses available test resources, such as Docker, to achieve test results quickly by running as many tests simultaneously as possible. 

Coupled with seamless integration with most CI/CD systems, this enables quality in DevOps, along with integrations to the emerging value stream management platforms and the ability to continuously collaborate with the development teams in identifying new test scenarios to enhance the automation suites.

Mark Lambert, vice president of products at Parasoft
The move in the industry over the last 10 years has been accelerating towards open-source software, leveraging open source where appropriate and then leveraging vendor-driven solutions when things get more complex. At Parasoft, we take an open-source-first approach to testing. Our recently announced Selenic product is designed to help organizations with the adoption of the open-source framework Selenium, which we found the majority of people are using as their primary test automation practice. You can simply plug Selenic into your existing Selenium testing practice, and when things get complicated, Selenic supercharges the basic functionality that comes with Selenium. 

The number-one problem with Selenium is this problem of maintainability and stability that no one else is trying to address – without moving you away from Selenium. With Parasoft Selenic, it analyzes why tests fail and, by applying its AI analysis of prior executions, comes up with recommendations for updates to those tests. For instance, changes to locator strategies or wait conditions. It can also apply these recommendations at runtime, self-healing the tests when run as part of the CI/CD pipeline, and help avoid any unnecessary build failures. The recommendations are also provided as feedback to the tester, so that the tester can then edit the tests and make the changes that the AI engine is recommending.

The upside and downsides to Selenium https://sdtimes.com/test/the-upside-and-downsides-to-selenium/ Tue, 04 Feb 2020 15:45:04 +0000 https://sdtimes.com/?p=38778

The post The upside and downsides to Selenium appeared first on SD Times.

Selenium is one of the most popular UI testing frameworks out there because it is open source, easy to use and has a lot of community support.

According to Max Saperstone, director of software test automation at consulting company Coveros, because there are many large enterprises and businesses that have adopted it, it is proven that it can work and there is also a huge backing of support with different languages and resources. But as with any open-source project, it does have its limitations and it can be difficult to get past those limitations. 

For instance, he explained that just because it is a free tool, that doesn’t mean it is free for an organization. What they may not pay in licensing, they still have to pay in knowledge and talent. “But overall if you have people that can learn it and developers willing to take that on, it is usually a lot cheaper and easier to get started with,” he said. 

RELATED CONTENT: How to solve your UI testing problems

While some organizations have released tools on top of Selenium to help extend its use cases, Mark Lambert, vice president of products at the automated software testing company Parasoft, warns users to make sure they aren’t getting locked into a solution. Other solutions will allow you to plug into the framework without lock-in. 

“What our solution Selenic does is it plugs right into an organization’s existing Selenium test automation practice leveraging their tests as they exist today, but then injects its AI to help with  what we see as being the number one challenge for organizations with UI test automation, which is that of maintainability,” he said. 

Within the HCL OneTest offering, users can run Selenium tests, or use Selenium to interact with browsers, according to Ashish Mathur, director and architect of testing products at HCL Software, a division of HCL Technologies. “Where we come into the picture is we provide value on top of Selenium, which is beyond any interactions with the browser. The whole aspect of being able to locate controls very easily and very intuitively in a manner that is very natural.” 

According to Chris Haggan, HCL OneTest UI product manager, when development for Selenium IDE ceased, it left a gap in the market. While Selenium IDE development has recently picked back up in the last couple of years, HCL now provides extra value that can be added and closes some of the gaps still left, such as API performance, test maintenance and ease of execution. 

“There is a class of testers who don’t necessarily want to write Selenium code, but understand the value that Selenium brings to the table. One of the things we did was we looked at how we can build an easy capability for testers to build scripts in the same way that the Selenium IDE had been doing,” said Haggan. 

How to solve your UI testing problems https://sdtimes.com/test/how-to-solve-your-ui-testing-problems/ Tue, 04 Feb 2020 15:00:29 +0000 https://sdtimes.com/?p=38777

The post How to solve your UI testing problems appeared first on SD Times.

Enterprises want to deliver software fast in order to keep up with market demands and stay competitive, but at the end of the day it doesn’t matter how fast they deliver software if it’s not pleasing to the end user. 

“Users are the ones who are going to be interacting with your application, so you want to make sure they get the best and correct experience they are looking for,” said Max Saperstone, director of software test automation at consulting company Coveros.

The way to do this is to perform UI testing, which ensures an application is performing the right way. “The make or break of an application is with the user’s experience within the UI. It’s more critical than ever for the UI portion of the application to be functional and behave as expected,” said Ashish Mathur, director and architect of testing products at HCL Software, a division of HCL Technologies.

One way of doing this is manually testing all the ways and scenarios users will be interacting with the application, although this can be time-consuming and costly. The alternative to manual testing is automated testing, which automatically executes tests for testers so they can focus their time and effort in other areas. 

In today’s digital age, Mark Lambert, vice president of products at the automated software testing company Parasoft, explained organizations have to rely on automated testing to reduce costs and effort. “Organizations realize manual testing efforts don’t scale to the pace of delivery, so there’s an increased reliance on test automation to help with validating the increased cadence of delivery,” he said.

But, that doesn’t mean manual testing becomes obsolete. Lambert added that a successful UI testing strategy actually includes both manual and automated testing techniques. 

“When you think about UI testing, you want to think about what to automate and what to do manually,” he said. “Humans are very, very good at understanding if something feels right. They are very good at randomly exploring intuitively different execution paths and doing negative testing. What humans are not very good at are repetitive tasks. We get bored very quickly and if we are doing the same thing over and over again, we can often miss the details.”

However, manual and automated testing is not enough to properly validate the UI. 

Solving UI testing problems
Despite the fact that test automation is supposed to help speed things up, organizations still struggle to achieve high levels of test automation and run test automation continuously, Parasoft’s Lambert said. Once organizations get started with test automation, the number one problem they run into is maintenance as part of the ongoing effort. That is because things are constantly changing, making the test environment very complex. Lambert explained that if testers don’t have reliable ways of locating elements on the page or handling wait conditions, it can put them at risk and cost days of maintenance. 
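The wait-condition problem Lambert describes is usually solved with explicit waits rather than fixed sleeps. Below is a minimal, framework-agnostic sketch of that idea; the `wait_for` helper and the simulated page state are illustrative, not the API of any particular tool.

```python
import time

def wait_for(condition, timeout=10.0, poll=0.25):
    """Poll a condition until it returns a truthy value or the timeout expires.

    Explicit waits like this avoid brittle fixed sleeps: the test proceeds
    as soon as the UI is ready, and fails with a clear error if it never is.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError(f"condition not met within {timeout}s")

# Simulated page: the login button appears only after the page "renders".
page_state = {"login_button": None}

def render():  # stands in for the app finishing a page load
    page_state["login_button"] = "<button>"

render()
element = wait_for(lambda: page_state["login_button"])
```

The same pattern is what browser-automation libraries expose natively (for example, Selenium's explicit waits), so in practice you would reach for the framework's built-in version rather than rolling your own.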

“As the application changes, the coverage also needs to change. The new areas of how the application behaves and flows need to be accommodated for that change,” HCL’s Mathur explained.

In order to overcome this, Lambert suggested adopting the Page Object Model, which promotes reuse across test scripts, resulting in more maintainable tests. “When the UI changes, you only have to change in one place and not in two, 200, 2,000 or whatever the number of tests you have that are touched by that UI change,” he said. 
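The Page Object Model Lambert recommends can be sketched in a few lines. The `FakeDriver`, page names, and locators below are illustrative stand-ins (a real test would use an actual browser driver such as Selenium WebDriver); the point is that locators live in one place.

```python
class FakeDriver:
    """Simulated browser driver so the sketch runs without a browser."""
    def __init__(self):
        self.fields = {}

    def type(self, locator, text):
        self.fields[locator] = text

    def click(self, locator):
        # Pretend a click on the submit button lands on the dashboard.
        return "dashboard" if locator == "css=#submit" else None

class LoginPage:
    # Locators are defined in ONE place. If the UI changes, update them
    # here, not in every test script that logs in.
    USER_FIELD = "css=#user"
    PASS_FIELD = "css=#pass"
    SUBMIT_BTN = "css=#submit"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, username, password):
        self.driver.type(self.USER_FIELD, username)
        self.driver.type(self.PASS_FIELD, password)
        return self.driver.click(self.SUBMIT_BTN)

# Test scripts use the page object's methods, never raw locators.
driver = FakeDriver()
landing = LoginPage(driver).log_in("alice", "s3cret")
```

If the login form's markup changes, only the three class-level locators change; the thousands of tests that call `log_in` are untouched, which is exactly the reuse Lambert describes.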

New artificial intelligence-based tools are beginning to come out that capitalize on that pain point, making it easier to recognize changes and automatically suggesting or making updates to tests so they can still run, Coveros’ Saperstone explained. For instance, the Parasoft Selenic solution injects AI into the UI testing process to help analyze tests, understand where problems and instabilities are, and apply self-healing capabilities to the CI/CD pipeline so it doesn’t break due to failing builds. It also provides recommendations to the tester to improve test automation initiatives going forward. 
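To make the self-healing idea concrete, here is a deliberately crude sketch of a fallback-locator strategy. This is not how Selenic or any specific product works internally; the DOM dictionary, locator strings, and `find_element` helper are all invented for illustration.

```python
def find_element(dom, locators):
    """Try a primary locator, then fall back to alternates, reporting
    which locator actually matched so the test can be updated later."""
    for locator in locators:
        if locator in dom:
            return dom[locator], locator
    raise LookupError(f"no locator matched: {locators}")

# In a new build the button's id changed from #login to #signin, so the
# primary locator fails -- but a more stable data-test attribute still
# finds the element, and the test keeps running instead of breaking.
dom = {"[data-test=login]": "<button>", "#signin": "<button>"}
element, used = find_element(dom, ["#login", "[data-test=login]"])
```

Real self-healing tools are far more sophisticated (they weigh multiple element attributes, position, and history), but the payoff is the same: the test survives the UI change and reports which locator needs fixing.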

Artificial intelligence should also be able to help testers identify new test cases that need to be created, and assist beyond just maintaining what’s already there, according to Chris Haggan, HCL OneTest UI product manager.

Other ways in which AI is starting to be applied to UI testing are in understanding what actual users do and in going through those workflows. In addition, Mathur explained that modern-day applications are becoming much more graphical, and the old ways of locating a piece of text and interacting with it don’t work anymore. This is where he believes machine learning will really thrive: in being able to help understand the context of the page, compare it to what is already known about the application, and identify what is changing within the application. “This will lead to much more robust and much more reliable test cases than we ever had in this space. The incorporation of machine learning will make testing a lot easier,” he said.

However, Saperstone doesn’t see this taking off for the next couple of years. Testers are still learning to trust AI, and AI still has some maturing to do, he said. 

“People are still stuck in the old manual testing mindset. They think they can take a manual test and convert it into an automated script to get the same coverage and results, and that’s not how that works,” said Saperstone. “You need to think about what you are trying to accomplish, verify and understand.”

The way teams are building user interfaces is also changing, Haggan said. There is a move toward modern cloud-native applications and technologies like microservices. So instead of the UI being built as one big monolith, UIs are now being developed and delivered in parts and pieces. The task of a UI testing organization like HCL is to go in and figure out how the pieces fit together, determine if all the pieces work together, and find out if the result is seamless, according to Haggan. 

Aside from using tools, testers need to leverage smart execution, Lambert added. “Analyze the app and the tests running against the app to determine what has changed and what tests need to be re-executed against those changes, so you’re only executing tests and validating the changes in the app,” he said. This is extremely important because UI testing is slow; it takes time. There are many browsers and click paths involved. When you are testing, you are not talking about two or three automated tests, or even two or three hundred. You are talking about thousands of automated tests that are running. Being able to target only the necessary changes can significantly cut down the amount of time and the number of tests, he pointed out. 

The slowness of UI testing also becomes a problem in an Agile or DevOps environment where there are frequent releases and builds happening. Testing needs to be done at a fast rate in order for it to be useful, according to Mathur. He recommended using distribution technology, cloud technologies and containers to speed things up. “Applying technologies like Docker to the test cases and using agents so you can run them in parallel, get all the results in one place, and get a fast answer as to the state of the application is increasingly important as everyone tries to move towards Agile development,” he said.
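The parallel execution Mathur describes can be sketched with a standard thread pool; in a real setup each worker would drive its own browser or container, whereas the fake tests here just sleep to simulate slow UI interactions.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_test(name):
    """Stand-in for a slow UI test; sleeping simulates browser work."""
    time.sleep(0.2)
    return (name, "passed")

tests = [f"test_{i}" for i in range(8)]

start = time.monotonic()
with ThreadPoolExecutor(max_workers=4) as pool:
    # Results from all workers are collected in one place, as Mathur
    # suggests, so a single report answers "what is the app's state?".
    results = dict(pool.map(run_test, tests))
elapsed = time.monotonic() - start
# With 4 workers, eight 0.2s tests finish in roughly 0.4s instead of 1.6s.
```

The same fan-out/fan-in shape applies whether the workers are threads, Docker containers, or cloud agents; only the dispatch mechanism changes.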

Saperstone suggested seeing if there are things you can verify at lower levels, ways to cut down test times, and any tests that can be run in parallel. 

“For example, I have a standard web app that has a back end, some APIs associated with it and then a UI. If I need a new user to log into the system, rather than creating a new user through the UI for each and every test, I can use the APIs in order to generate the users. I can create the user through some sort of back-end API call and then automatically log in. That is going to save me some time,” he said. 
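Saperstone's example of creating users through the back end can be sketched as follows. The `FakeApiClient`, its methods, and the endpoint comments are all illustrative stand-ins for real HTTP calls (e.g. via the `requests` library against your app's API).

```python
class FakeApiClient:
    """Simulated back-end API so the sketch runs without a network."""
    def __init__(self):
        self.users = {}

    def create_user(self, username, password):
        # Stands in for something like POST /api/users
        self.users[username] = password
        return {"username": username}

    def login(self, username, password):
        # Stands in for something like POST /api/login
        if self.users.get(username) == password:
            return {"token": f"session-{username}"}
        raise PermissionError("bad credentials")

api = FakeApiClient()
api.create_user("test-user-1", "pw")        # fast API-level setup...
session = api.login("test-user-1", "pw")    # ...straight to a session
# Only the feature under test is then exercised through the slow UI.
```

Each test gets a fresh, logged-in user in milliseconds instead of driving the registration and login forms through the browser, which is exactly the time saving Saperstone describes.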

Haggan added that it is important to bring the API and back-end solutions together with the UI testing because applications are becoming more and more reliant on those, especially with the use of microservices. “It is important that your UI test also has an ability to be able to do some of that API validation in the back end and bring those two parts of the universe together so when you submit a piece of data, you can tell if it is updating the right microservices, the right databases, and what is happening in the back end,” he said. 

Mathur also said that unit testing overlaps with UI testing. “If we have units that are outside the development scope of the application itself, there is a good need for unit testing to also be incorporated in UI testing to give a leg up for the functional tests to get started and build on top of that,” he said. 

Parasoft’s Lambert turns to the testing pyramid, which groups tests by granularity and gives an idea of how many tests you need in each group. “What it does is it talks about how to organize your test automation strategy. You have a lot of tests at the lower level of the development stack, so your unit tests and API tests should cover as much as possible there. UI tests are difficult to automate, maintain and get the environment set up for. The testing pyramid minimizes this,” he said. “I’m a very big proponent of the testing pyramid: a foundation of unit tests, backed up by API or service-level tests, then UI tests, with both automated and manual testing, makes manual testing much more efficient, much more effective and much more valuable. That is how you can really have a great strategy that’ll help you accelerate your delivery process.” 

Other best practices for UI testing include modularization, behavior driven development, and service virtualization, the thought leaders added. 

“When you hear companies say automation isn’t working for us, it is mainly because they are not really doing automation the right way,” said Saperstone. 

The post How to solve your UI testing problems appeared first on SD Times.
