Parasoft Archives - SD Times
https://sdtimes.com/tag/parasoft/

Parasoft’s latest release offers several new automated features for testing Java, C#, .NET apps
https://sdtimes.com/test/parasofts-latest-release-offers-several-new-automated-features-for-testing-java-c-net-apps/ (July 3, 2024)

Parasoft recently announced the 2024.1 releases of several of its products, including Java testing tool Jtest, C# and .NET testing tool dotTEST, and testing analytics solution DTP.

Jtest now includes test templates in Unit Test Assistant, a feature that uses AI to generate a suite of tests. With the new Jtest release, testers get more control over the structure of their test classes and can specify common configurations that their tests require.

Jtest can also now run test impact analysis right from within the IDE. Whenever a code change is made, Jtest identifies and executes the affected tests and gives the developer feedback on the impact of the modifications.

“With the new Jtest release, developers get real-time insights into which tests are impacted by their code changes,” Igor Kirilenko, chief product officer at Parasoft, told SD Times. “While you are still modifying your code, Jtest automatically runs the relevant tests and delivers instant feedback. This groundbreaking feature not only saves time but also ensures that potential bugs are caught and fixed before they ever reach the build pipeline.”

In Jtest and dotTEST, an integration with OpenAI/Azure OpenAI Service provides AI-generated fixes for flow analysis violations. 

Jtest and dotTEST also now support the latest version of the Common Weakness Enumeration (CWE) list, 4.14. Additionally, both have improved out-of-the-box static analysis test configurations.

And finally, DTP’s integration with OpenAI/Azure OpenAI Service speeds up remediation of security vulnerabilities by matching security rule violations to known vulnerabilities and then assigning each a probability score for how likely it is to be a real vulnerability rather than a false positive.

“Developers often face significant cognitive load when triaging static analysis violations, particularly those related to security,” Jeehong Min, technical product manager at Parasoft, told SD Times. “Each security rule comes with its own learning curve, requiring time to understand its nuances. To assist developers, Parasoft DTP offers recommendations powered by pre-trained machine learning models and models that learn from the development team’s triage behavior. The ultimate goal is to help developers make informed decisions when triaging and remediating static analysis violations.”
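
Parasoft hasn’t published the scoring model, but the triage workflow it feeds is straightforward to picture. Below is a minimal, hypothetical Java sketch (the Violation record and its field names are invented, not DTP’s actual API) of how a team might consume such probability scores to order a review queue:

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical shape of an AI-triaged static analysis finding.
// Field names are illustrative; this is not Parasoft DTP's actual API.
record Violation(String ruleId, String file, double realVulnProbability) {}

public class TriageQueue {
    public static void main(String[] args) {
        List<Violation> findings = List.of(
                new Violation("CWE-89", "OrderDao.java", 0.91),
                new Violation("CWE-79", "Banner.jsp", 0.12),
                new Violation("CWE-22", "FileFetch.java", 0.64));

        // Review likely-real vulnerabilities first; park probable false positives.
        findings.stream()
                .filter(v -> v.realVulnProbability() >= 0.5)
                .sorted(Comparator.comparingDouble(Violation::realVulnProbability).reversed())
                .forEach(v -> System.out.println("Review first: " + v));
    }
}
```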


Parasoft offers new capabilities for API, microservices, and accessibility testing in latest release
https://sdtimes.com/test/parasoft-offers-new-capabilities-to-api-microservices-and-accessibility-testing-in-latest-release/ (May 22, 2024)

The software testing company Parasoft has announced new updates for API, microservices, and accessibility testing.

For API testing, the company is using AI to offer auto-parameterization of API scenario tests generated by the OpenAI integration. 

According to Parasoft, this update will streamline the process of developing test scenarios that validate data flow. 

In the realm of microservices testing, the platform now offers a single test environment for collecting code coverage metrics from multiple parallel test executions for Java and .NET microservices.  

Additionally, code coverage can now be published under a single project in Parasoft DTP, which gives testers an aggregated view of their microservices coverage, Parasoft explained.
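
Parasoft’s aggregation happens inside DTP, but the underlying idea, merging execution data from parallel runs into one data set, can be illustrated with the open-source JaCoCo API. A minimal sketch; the service names and file paths are invented:

```java
import java.io.File;
import org.jacoco.core.tools.ExecFileLoader;

public class MergeCoverage {
    public static void main(String[] args) throws Exception {
        ExecFileLoader loader = new ExecFileLoader();
        // One .exec file per microservice test run; repeated loads accumulate.
        loader.load(new File("orders-service/jacoco.exec"));
        loader.load(new File("billing-service/jacoco.exec"));
        loader.load(new File("shipping-service/jacoco.exec"));
        // Write a single merged execution data file for one aggregated report.
        loader.save(new File("merged/jacoco.exec"), false);
    }
}
```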

And finally, for web accessibility, the company has added support for WCAG 2.2 as well as new reporting capabilities in Parasoft SOAtest and DTP. 

Parasoft introduces new C and C++ testing solution
https://sdtimes.com/parasoft/parasoft-introduces-new-c-and-c-testing-solution/ (April 8, 2024)

The software testing company Parasoft has announced a new testing solution for C and C++ applications. C/C++test CT was designed specifically to meet the needs of safety- and security-critical embedded applications.

It integrates into existing development and CI/CD workflows, and features a command-line interface for running tests. 

It also integrates with open source testing frameworks such as GoogleTest, Boost.Test, and CppUnit. According to Parasoft, this provides customers with more flexibility in how they meet their testing needs. 

Another benefit of C/C++test CT is that it features automated code coverage to meet compliance needs and offers comprehensive reporting on the progress of tests.  

“Feedback from our lead C++ developers about the integration of Parasoft C/C++ CT was enthusiastic, concluding that the new product improves developer satisfaction and efficiency, providing a seamless coverage experience fully integrated with VS Code. Additionally, our panel highlighted how Parasoft C/C++ CT enables cleaner and more flexible integration as a coverage tool, thanks to its modularized nature,” said one of Parasoft’s customers, a multinational engineering and technology company. 

The promise of generative AI in low-code, testing
https://sdtimes.com/ai/the-promise-of-generative-ai-in-low-code-testing/ (November 30, 2023)

Over the past year, software companies have worked hard to incorporate generative AI into their products, doing whatever it takes to incorporate the latest technology and stay competitive. 

One software category that is particularly well suited to an AI boost is low code, a market whose goal is already to make things easier for developers.

Just as low code lowered the barrier to entry for development, generative AI will have a similar impact through such things as code completion and workflow automation. But Kyle Davis, VP analyst at Gartner, believes that the two technologies will interact in more of a collaborative way than a competitive one, at least for citizen developers. “Even though you could use generative AI to generate code, if you don’t understand what the code is doing, there’s no way to validate that it’s correct,” he said. “Using low code, it’s declarative, so you can look at what’s there on the screen and say, ‘does that make sense?’”

RELATED CONTENT: A guide to low-code vendors that incorporate generative AI capabilities

However, Davis also says it’s really too new of a market to make any real predictions. “We’ve seen a lot of failure, we’ve seen a lot of success, because it’s so early days that, at best, you’re kind of experimenting with this now. But the hope is that it can offer a lot of potential,” he explained. 

According to Davis, there are three main ways AI is being incorporated into low-code platforms. 

First, there are generative AI capabilities that are designed to improve the developer experience.

Second, there are generative AI capabilities targeting the end users of the application created using low code. “So embedding like a Copilot or ChatGPT type control within the application. That way the user of the application can ask questions about the app’s data, as an example,” Davis said. 

Third, there are features related to process improvement. “When you’re creating workflows or automation, there’s usually a lot of steps that are very human-centric, when it comes to generating data or categorizing data or whatnot,” Davis said. “And so we’ve seen a lot of those steps being not displaced by a generative AI step, but rather kind of preceded by a generative AI step.”

He gave the example of a workflow that is designed to help hiring managers create requirements for a job position. Usually the hiring manager has to go in and manually add information, like the name of the position, the description, and other requirements. But, Davis said, “If generative AI were to step in first and do a draft of that, it allows the hiring manager to come in and just make refinements.” 

Davis believes a major challenge for these low-code vendors is the added work required to make the integration function. Low code is highly declarative and abstracted, and the constructs that make up a low-code application are proprietary to the platform they belong to, which requires vendors either to have their own LLM or to be able to take user prompts and create all the constructs within their platform to represent what was asked.

“There’s a lot they can leverage from existing LLMs and generative AI vendors, but there’s still pieces that they have to do themselves,” he said.

Using generative AI in testing is another promising area

Combining generative AI and testing is also a promising mashup, according to Arthur Hicken, chief evangelist at testing software company Parasoft. “We’re still at a relatively early stage, so it’ll be interesting to see how much of it is real and how much of it pans out,” he said. “It certainly shows a lot of promise in the ability to generate code, but perhaps more so in the ability to generate tests … I don’t believe we’re there yet, but we are seeing some pretty interesting capabilities that, you know, didn’t exist a year or two ago.”

The field of prompt engineering — phrasing generative AI requests in a way that will provide optimal results — is also an emerging practice, which will be crucial to how successful one is at getting good results from combining things like testing or low-code with AI, Hicken said.

He explained that those who have been working with tests for years will probably have a good chance of being a good prompt engineer. “That ability to look at something and break it into small component steps is what’s going to let the AI be most effective for you … You can’t go to one of these systems and say, ‘Hey, give me a bunch of tests for my application.’ It’s not going to work. You’ve got to be very, very detailed, and like working with a djinn or a genie, you can mess yourself up if you’re not very careful about what you ask for,” he said.

He likened this to how we see people interacting with search engines today. Some people claim they can find whatever they want in a search engine, because they know the queries to ask, while others will say they looked all over and couldn’t find what they were looking for. 

“It’s that ability to speak in a way that the AI can understand you, and the better you are at that the better answer you get back … The fact that you can just talk and ask for what you want is cool, but at the moment you better be pretty smart about what you’re asking because with these AIs the emphasis is on the A – the intelligence is very artificial,” said Hicken.

This is why testing the outputs of these systems is crucial. Hicken said that he has spoken with folks who say they are going to use generative AI to generate both code and tests. “That’s really scary, right? Now we’ve got code a human didn’t review being checked by tests that weren’t reviewed by humans, like, are we going to compound the error?”

He advises against putting too much trust in these systems just yet.  “We’re already starting to see people jump back, they’re being bitten, because they’re trusting the system too early,” he said. “So I would encourage people not to blindly trust the system. It’s like hiring somebody and just letting them write your most important code without seeing first what they’re doing.”

Parasoft enhances its Continuous Quality Platform around API testing, virtualization
https://sdtimes.com/test/parasoft-enhances-its-continuous-quality-platform-around-api-testing-virtualization/ (October 23, 2023)

Parasoft’s Continuous Quality Platform updates in version 2023.2 cover three main themes: continuous innovation, strengthening its core components, and addressing customer feedback and loyalty.

Under the theme of continuous innovation, Grigori Trofimov, a senior solutions engineer at Parasoft, said the update introduces integrations with generative AI capabilities through LLMs and OpenAI, to build upon the company’s implementations of AI for UI testing, static analysis and API testing, among other things. “Now,” he said, “users can use their own definition files and text-based instructions or natural language instructions,” enhancing the test creation process.

And, he noted, as far as API testing is concerned, the update provides a clean sequence of API calls to work with, so testers don’t have to manually stitch together API calls. All of this, he said, brings new capabilities to SOAtest, the company’s API functional, load and security testing tool. “SOAtest already is that Swiss Army knife, with all the assertions, validations, databanks … everything we’ve built over the last 15 years. And now you have generative AI, so the combination is very powerful.”

Another feature under the continuous innovation banner is improving code coverage around distributed microservices architectures within SOAtest. “The idea here is that you’re testing some components within your microservices deployment, such as API tests, smoke tests, and health checks, but if you’re running regression suites using some external framework, you may not necessarily know what the impact of those tests are,” he explained. “You know you have test coverage, you might have some user stories and features that you’re covering with those tests. But as far as what actual microservices, what actual lines of code are being tested in those, you don’t really know. And you’re not really able to identify gaps or tie those types of tests to any metric, or to any criteria that can tell you you’re doing good testing.”

Parasoft’s code coverage for distributed microservices supports both Java and .NET, Trofimov explained. Users can collect coverage data from each component, merge coverage for the system or application as a whole across all microservices, and run test impact analysis. That impact analysis can show, for example, which tests are affected when one microservice changes. The benefit: with a small incremental change in a daily build, teams don’t have to wait on a full regression suite that could take 10 hours, so developers get quick feedback and can fix a failing test right away.

Accessibility can enhance overall user experience, and in this SOAtest release, Parasoft is introducing a web accessibility scan, which is a tool that can be added to browser-based UI tests to catch accessibility violations. Trofimov said Parasoft adheres to the WCAG 2.1 AA specification. 

Finally, a new feature called Learning Mode is introduced in Parasoft Virtualize that Trofimov said automatically creates virtual services and updates and records data. “A common flow for service virtualization is that you have a real endpoint for a third-party endpoint that is not available in a test environment,” he said. “So you would record traffic and use that traffic to create the virtual asset that mimics the logic of the real service. So we’ve taken that flow and put it into a single checkbox called Learning Mode, so now when you have a real endpoint you need to virtualize, you can just set up the proxy, check the box that says Learning Mode, and starting from that point, it’s going to learn what the real service is doing. And if it finds a match on previous data that needs to be updated, it will update the data automatically.”
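
Learning Mode itself is proprietary to Virtualize, but the record-and-replay flow Trofimov describes can be approximated with the open-source WireMock library, which offers a comparable recording proxy. A sketch, with the port and target endpoint invented:

```java
import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.options;

import com.github.tomakehurst.wiremock.WireMockServer;

public class RecordAndReplay {
    public static void main(String[] args) {
        // Stand up a proxy in front of the real (illustrative) third-party endpoint.
        WireMockServer wm = new WireMockServer(options().port(8089));
        wm.start();
        wm.startRecording("https://payments.example.com");

        // ... point the application under test at http://localhost:8089 and
        // exercise it; observed traffic is captured as stub mappings ...

        wm.stopRecording(); // persisted stubs now stand in for the real service
        wm.stop();
    }
}
```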

Parasoft’s product roadmap, Trofimov said, continues to be very much driven by its customers and partners. In this release, the company is tackling the Kafka protocol for data streaming and event-driven architectures, and is focusing on the Avro data serialization message format. “Our customers have been using our Kafka support and they’ve asked for this Avro message format as well as Confluent schema registries,” he said. “Both of those together are basically like your JSON Swagger definition but oriented toward Kafka and data serialization.” This implementation is available to both SOAtest and Virtualize customers.
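
For readers who haven’t used the combination, producing an Avro-serialized Kafka message through a Confluent schema registry looks roughly like the sketch below. The broker address, registry URL, topic, and schema are all invented:

```java
import java.util.Properties;

import io.confluent.kafka.serializers.KafkaAvroSerializer;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AvroProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        // The Confluent serializer registers/fetches schemas automatically.
        props.put("value.serializer", KafkaAvroSerializer.class.getName());
        props.put("schema.registry.url", "http://localhost:8081");

        Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Order\",\"fields\":"
                        + "[{\"name\":\"id\",\"type\":\"string\"}]}");
        GenericRecord order = new GenericData.Record(schema);
        order.put("id", "42");

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "42", order));
        }
    }
}
```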

Buyers Guide: AI and the evolution of test automation
https://sdtimes.com/test/buyers-guide-the-evolution-of-test-automation/ (September 22, 2023)

Test automation has undergone quite an evolution in the decades since it first became possible. 

Yet despite the obvious benefits, the digitalization of the software development industry has created some new challenges.

It comes down to three big things, according to Kevin Parker, vice president of product at Appvance. The first is velocity and how organizations “can keep pace with the rate at which developers are moving fast and improving things, so that when they deliver new code, we can test it and make sure it’s good enough to go on to the next phase in whatever your life cycle is,” he said. 

RELATED CONTENT:
A guide to automated testing tools
Take advantage of AI-augmented software testing

The second area is coverage. Parker said it’s important to understand that enough testing is being done, and being done in the right places, to the right depth. And, he added, “It’s got to be the right kind of testing. If you Google test types, it comes back with several hundred kinds of testing.”

How do you know when you’ve tested enough? “If your experience is anything like mine,” Parker said, “the first bugs that get reported when we put a new release out there, are from when the user goes off the script and does something unexpected, something we didn’t test for. So how do we get ahead of that?”

And the final, and perhaps most important, area is the user interface, as this is where the rubber meets the road for customers and users of the applications. “The user interfaces are becoming so exciting, so revolutionary, and the amount of psychology in the design of user interfaces is breathtaking. But that presents even more challenges now for the automation engineer,” Parker said.

Adoption and challenges

According to a report by Research Nester, the test automation market is expected to grow to more than $108 billion by 2031, up from about $17 billion in 2021. Yet as for uptake, it’s difficult to measure the extent to which organizations are successfully using automated testing.

“I think if you tried to ask anyone, ‘are you doing DevOps? Are you doing Agile?’ everyone will say yes,” said Jonathan Wright, chief technologist at Keysight, which owns the Eggplant testing software. “And everyone we speak to says, ‘yes, we’re already doing automation.’ And then you dig a little bit deeper, they say, ‘well, we’re running some Selenium, running some RPM, running some Postman script.’ So I think, yes, they are doing something.”

Wright said most enterprises that are having success with test automation have invested heavily in it and have established automation as its own discipline. “They’ve got hundreds of people involved to keep this to a point where they can run thousands of scripts,” he said of these organizations. But in the same breath, he noted that the conversation around test case optimization, and risk-based testing, still needs to be had. “Is over-testing a problem?” he posited. “There’s a continuous view that we’re in a bit of a tech crunch at the moment. We’re expected to do more with less, and testing, as always, is one of those areas that have been put under pressure. And now, just saying I’ve got 5,000 scripts, kind of means nothing. Why don’t you have 6,000 or 10,000? You have to understand that you’re not just adding a whole stack of tech debt into a regression folder that’s giving you this feel-good feeling that I’m running 5,000 scripts a day, but they’re not actually adding any value because they’re not covering new features.”

RELATED CONTENT:
How Cox Automotive found value in automated testing
Accessibility testing
Training the model for testing

Testing at the speed of DevOps

One effect of the need to release software faster is the ever-increasing reliance on open-source software, which may or may not have been tested fully before being let out into the wild.

Arthur Hicken, chief evangelist at Parasoft, said he believes it’s a little forward thinking to assume that developers aren’t writing code anymore, that they’re simply gluing things together and standing them up. “That’s as forward thinking as the people who presume that AI can generate all your code and all your tests now,” he said. “The interesting thing about this is that your cloud native world is relying on a massive amount of component reuse. The promises are really great. But it’s also a trust assumption that the people who built those pieces did a good job. We don’t yet have certification standards for components that help us understand what the quality of this component is.”

He suggested the industry create a bill of materials that includes testing. “This thing was built according to these standards, whatever they are, and tested and passed. And the more we move toward a world where lots of code is built by people assembling components, the more important it will be that those components are well built, well tested and well understood.”

Appvance’s Parker suggests doing testing as close to code delivery as possible. “If you remember when you went to test automation school, we were always taught that we don’t test the code, we test against the requirements,” he said. “But the modern technologies that we use for test automation require us to have the code handy. Until we actually see the code, we can’t find those [selectors]. So we’ve got to find ways where we can do just that, that is bring our test automation technology as far left in the development lifecycle as possible. It would be ideal if we had the ability to use the same source that the developers use to be able to write our tests, so that as dev finishes, test finishes, and we’re able to test immediately, and of course, if we use the same source that dev is using, then we will find that Holy Grail and be testing against requirements. So for me, that’s where we have to get to, we have to get to that place where dev and test can work in parallel.”

As Parker noted earlier, there are hundreds of types of testing tools on the market – for functional testing, performance testing, UI testing, security testing, and more. And Parasoft’s Hicken pointed out the tension organizations have between using specialized, discrete tools or tools that work well together. “In an old school traditional environment, you might have an IT department where developers write some tests. And then testers write some tests, even though the developers already wrote tests, and then the performance engineers write some tests, and it’s extremely inefficient. So having performance tools, end-to-end tools, functional tools and unit test tools that understand each other and can talk to each other, certainly is going to improve not just the speed at which you can do things and the amount of effort, but also the collaboration that goes on between the teams, because now the performance team picks up a functional scenario. And they’re just going to enhance it, which means the next time, the functional team gets a better test, and it’s a virtuous circle rather than a vicious one. So I think that having a good platform that does a lot of this can help you.”

Coverage: How much is enough?

Fernando Mattos, director of product marketing at test company mabl, believes that test coverage for flows that are very important should come as close to 100% as possible. But determining what those flows are is the hard part, he said. “We have reports within mabl that we try to make easy for our customers to understand. Here are all the different pages that I have on my application. Here’s the complexity of each of those. And here are the tests that have touched on those, the elements on those pages. So at least you can see where you have gaps.”

It is common practice today for organizations to emphasize thorough testing of the critical pieces of an application, but Mattos said it comes down to balancing the time you have for testing and the quality that you’re shooting for, and the risk that a bug would introduce.

“If the risk is low, you don’t have time, and it’s better for your business to be introducing new features faster than necessarily having a bug go out that can be fixed relatively quickly… and maybe that’s fine,” he said.

Parker said AI can help with coverage when it comes to testing every conceivable user experience. “The problem there,” he said, “is this word conceivable, because it’s humans conceiving, and our imagination is limited. Whereas with AI, it’s essentially an unlimited resource to follow every potential possible path through the application. And that’s what I was saying earlier about those first bugs that get reported after a new release, when the end user goes off the script. We need to bring AI so that we can not only autonomously generate tests based on what we read in the test cases, but that we can also test things that nobody even thought about testing, so that the delivery of software is as close to being bug free as is technically possible.”

Parasoft’s Hicken holds the view that testing without coverage isn’t meaningful.  “If I turn a tool loose and it creates a whole bunch of new tests, is it improving the quality of my testing or just the quantity? We need to have a qualitative analysis and at the moment, coverage gives us one of the better ones. In and of itself, coverage is not a great goal. But the lack of coverage is certainly indicative of insufficient testing. So my pet peeve is that some people say, it’s not how much you test, it’s what you test. No. You need to have as broad code coverage as you can have.”

The all-important user experience

It’s important to have someone who is very close to the customer, who understands the customer journey but not necessarily anything about writing code, creating tests, according to mabl’s Mattos. “Unless it’s manual testing, it tends to be technical, requiring writing code and updating test scripts. That’s why we think low code can really be powerful, because it can allow somebody who’s close to the customer but not technical…customer support, customer success. They are not typically the ones who can understand GitHub and code and how to write it and update that – or even understand what was tested. So we think low code can bridge this gap. That’s what we do.”

Where is this all going?

The use of generative AI to write tests is the evolution everyone wants to see, Mattos said. “We’ll get better results by combining human insights. We’re specifically working on AI technology that will allow implementing and creating test scripts, but still using human intellect to understand what is actually important for the user. What’s important for the business? What are those flows, for example, that go to my application on my website, or my mobile app that actually generates revenue?”

“We want to combine that with the machine,” he continued. “So the human understands the customer, the machine can replicate and create several different scenarios that traverse those. But of course, right, lots of companies are investing in allowing the machine to just navigate through your website and find out the different corners, but they weren’t able to prioritize for us. We don’t believe that they’re gonna be able to prioritize which ones are the most important for your company.”

Keysight’s Wright said the company is seeing value in generative AI capabilities. “Is it game changing? Yes. Is it going to get rid of manual testers? Absolutely not. It still requires human intelligence around requirements, engineering, feeding in requirements, and then humans identifying that what it’s giving you is trustworthy and is valid. If it suggests that I should test (my application) with every single language and every single country, is it really going to find anything I might do? But in essence, it’s just boundary value testing, it’s not really anything that spectacular and revolutionary.”

Wright said organizations that have dabbled with automation over the years and have had some levels of success are now just trying to get that extra 10% to 20% of value from automation, and get wider adoption across the organization. “We’ve seen a shift toward not tools but how do we bring a platform together to help organizations get to that point where they can really leverage all the benefits of automation. And I think a lot of that has been driven by open testing.” 

“As easy as it should be to get your test,” he continued, “you should also be able to move that into what’s referred to in some industries as an automation framework, something that’s in a standardized format for reporting purposes. That way, when you start shifting up, and shifting the quality conversation, you can look at metrics. And the shift has gone from how many tests am I running, to what are the business-oriented metrics? What’s the confidence rating? Are we going to hit the deadlines? So we’re seeing a move toward risk-based testing, and really more agility within large-scale enterprises.”

A guide to automated testing tools
https://sdtimes.com/test/a-guide-to-automated-testing-tools-5/ (September 22, 2023)

The following is a listing of automated testing tool providers, along with a brief description of their offerings.

FEATURED PROVIDERS

APPVANCE is the leader in generative AI for Software Quality. Its premier product AIQ is an AI-native, unified software quality platform that delivers unprecedented levels of productivity to accelerate digital transformation in the enterprise. Leveraging generative AI and machine learning, AIQ robots autonomously validate all the possible user flows to achieve complete application coverage.

KEYSIGHT is a leader in test automation, where our AI-driven, digital twin-based solutions help innovators push the boundaries of test case design, scheduling, and execution. Whether you’re looking to secure the best experience for application users, analyze high-fidelity models of complex systems, or take proactive control of network security and performance, easy-to-use solutions including Eggplant and our broad array of network, security, traffic emulation, and application test software help you conquer the complexities of continuous integration, deployment, and test.

MABL is the enterprise SaaS leader of intelligent, low-code test automation that empowers high-velocity software teams to embed automated end-to-end tests into the entire development lifecycle. Mabl’s platform for easily creating, executing, and maintaining reliable browser, API and mobile web tests helps teams quickly deliver high-quality applications with confidence. That’s why brands like Charles Schwab, jetBlue, Dollar Shave Club, Stack Overflow, and more rely on mabl to create the digital experiences their customers demand.

PARASOFT helps organizations continuously deliver high-quality software with its AI-powered software testing platform and automated test solutions. Supporting embedded and enterprise markets, Parasoft’s proven technologies reduce the time, effort, and cost of delivering secure, reliable, and compliant software by integrating everything from deep code analysis and unit testing to UI and API testing, plus service virtualization and complete code coverage, into the delivery pipeline. 

OTHER PROVIDERS

Applitools is built to test all the elements that appear on a screen with just one line of code, across all devices, browsers and all screen sizes. We support all major test automation frameworks and programming languages covering web, mobile, and desktop apps.

Digital.ai Continuous Testing provides expansive test coverage across 2,000+ real mobile devices and web browsers, and seamlessly integrates with best-in-class tools throughout the DevOps/DevSecOps pipeline.

RELATED CONTENT: The evolution of test automation

IBM: Quality is essential and the combination of automated testing and service virtualization from IBM Rational Test Workbench allows teams to assess their software throughout their delivery life cycle. IBM has a market leading solution for the continuous testing of end-to-end scenarios covering mobile, cloud, cognitive, mainframe and more. 

Micro Focus enables customers to accelerate test automation with one intelligent functional testing tool for web, mobile, API and enterprise apps. Users can test both the front-end functionality and back-end service parts of an application to increase test coverage across the UI and API.

Kobiton offers GigaFox on-premises or hosted, and solves mobile device sharing and management challenges during development, debugging, manual testing, and automated testing. A pre-installed and pre-configured Appium server provides “instant on” Appium test automation.

Orasi is a leading provider of software testing services, utilizing test management, test automation, enterprise testing, Continuous Delivery, monitoring, and mobile testing technology. 

ProdPerfect is an autonomous, end-to-end (E2E) regression testing solution that continuously identifies, builds and evolves E2E test suites via data-driven, machine-led analysis of live user behavior data. It addresses critical test coverage gaps, eliminates long test suite runtimes and costly bugs in production.  

Progress Software’s Telerik Test Studio is a test automation solution that helps teams be more efficient in functional, performance and load testing, improving test coverage and reducing the number of bugs that slip into production. 

Sauce Labs provides a cloud-based platform for automated testing of web and mobile applications. Optimized for use in CI and CD environments, and built with an emphasis on security, reliability and scalability, the platform lets users run tests written in any language or framework using Selenium or Appium.

SmartBear offers tools for software development teams worldwide, ensuring visibility and end-to-end quality through test management, automation, API development, and application stability. Popular tools include SwaggerHub, TestComplete, BugSnag, ReadyAPI, Zephyr, and others. 

testRigor helps organizations dramatically reduce time spent on test maintenance, improve test stability, and dramatically improve the speed of test creation. This is achieved through its support of “plain English” language that allows users to describe how to find elements on the screen and what to do with those elements from the end user’s perspective. People creating tests on its system build 2,000+ tests per year per person. On top of that, testRigor helps teams deploy its analytics library in production, which makes systems automatically produce tests reflecting the most frequently used end-to-end flows from production.

How Cox Automotive found value in automated testing
https://sdtimes.com/test/how-cox-automotive-found-value-in-automated-testing/ (September 1, 2023)

How does a quality organization run? And how does it deliver a quality product for consumers?

According to Roya Montazeri, senior director of test and quality at Cox Automotive, no one tool or approach can solve the quality problem. Cox Automotive, she said, is a specialized software company that addresses buying, selling, trading, and everything else in the car life cycle, with a broad portfolio of products that includes Dealertrack, Kelley Blue Book, AutoTrader, Manheim and more.

“Whatever we create from software automation and software delivery … needs to make sure that all clients are getting the best deal,” Montazeri said. “They can, and our dealers can, trust our software and at the end, the consumers can get the car they want. And this is about digitalization of the entire process.”

When Montazeri joined Cox Automotive, her area – Dealertrack – was mature about testing, with automations in place. But, she said, the focus on automation and the need to strengthen it started from two aspects: the quality of what was being delivered, and the impact of that on trust within the division.  “Basically, when you have an increased defect rate, and when you have more [calls into] customer support, these are indications of a quality problem,” she said. “That was the realization of investment … into more tools or more ability for automation.”

To improve quality, Dealertrack began to shift testing left, and invested in automating its CI/CD pipeline. “You can’t have a CI/CD pipeline without automation,” she said. “It’s just a broken pipeline.” And to have a fully automated pipeline, she said, training it is critical.

Another factor that led to the need for automation at Dealertrack was the complexity of how its products work. “Any product these days is not a standalone on its own; there is a lot of integration,” Montazeri said. “So how do you test those integrations? And that led us to look at where most of our problems were… is it at the component-level testing? Or is it the complexity of the integration testing?”

That, she said, led Dealertrack to use service virtualization software from Parasoft, so they could mimic the same interactions and find problems before they actually moved the software to production and made the integration happen.

When they first adopted virtualization, Montazeri said, they originally thought, “Oh, we can basically figure out how many defects we found.” But that wasn’t the right KPI at the time for just virtualization. “We needed to mature enough to say, ‘It’s not just that we found that defect, it’s about exercising the path so we know what’s not even working.’ So that’s how the investment came about for us.”

ASTQ Summit brings together test practitioners to discuss implementing automation
https://sdtimes.com/test/astq-summit-brings-together-test-practitioners-to-discuss-implementing-automation/ (May 11, 2023)

Is automated testing worth the expense?

Real test practitioners will show how test automation solved many of their quality issues when the Automated Software Testing and Quality one-day virtual event, produced by software testing company Parasoft, returns on May 16. Among the topics to be discussed are metrics, how automation can significantly cut test time, shifting testing left, the use (or not) of generative AI, the synergy between automation and service virtualization, and more.

“We’ve worked really hard to make sure that most of the sessions are coming from the practitioner community,” said Arthur Hicken, chief evangelist at Parasoft. “So people are telling you how they solved their problem – what metrics they use to solve the problems, what the main challenge was, what kind of results they saw, you know what pitfalls they’ve hit.”

As for AI in testing, Hicken said Parasoft has built AI augmentations into every level of the testing pyramid, which he acknowledged is getting “kind of long in the tooth,” before adding that it still offers useful, helpful guidance. “Whether it’s static analysis, unit test, API testing, functional testing, performance testing, UX testing, we’ll talk about how these different things will help you in your day-to-day job.”

He went on to say that he doesn’t believe the things he’s talking about are job killers. “I think they’re just ways to help. I haven’t met any software engineer that says, I don’t have enough to do, I’ve got to pad my work with something. I think just being able to get their job done will make their life better.”

On the subject of generative AI, Hicken said it can be quite smart about some things but struggles with others. The more clearly you can draw the boundaries of what you expect it to do, and the more narrowly you can scope it down, the better job AI does.

This, he said, is true of testing in general. “Service virtualization helps you decouple from real-world things that you can’t really control or can’t afford to play with,” he said. “Most people don’t have a spare mainframe. Some people interact with real-world objects. We see that in the healthcare space, where faxes are part of a normal workflow. And so testing becomes very, very difficult.”

Further, he said, “As we use AI to start to increase the amount of testing, we’re doing the permutations, we run into a data problem, we just don’t have enough real data. So it starts synthesizing virtual data. So the service virtualization is a way to synthesize data to get broader coverage. And because of that, there’s always a temptation to use real-world data as your starting point. But in many jurisdictions, real-world data is a pretty big no-no. GDPR doesn’t allow it.”
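
The kind of synthetic stand-in data Hicken describes can be generated with an open-source library such as javafaker rather than copied from production. A minimal sketch; every field is fictitious by design:

```java
import com.github.javafaker.Faker;

public class SyntheticTestData {
    public static void main(String[] args) {
        Faker faker = new Faker();
        // Generate GDPR-safe records instead of sampling real user data.
        for (int i = 0; i < 3; i++) {
            System.out.printf("%s | %s | %s%n",
                    faker.name().fullName(),
                    faker.address().streetAddress(),
                    faker.internet().emailAddress());
        }
    }
}
```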

So, in the end, the question remains: How do you know it was worth it? What did you do to measure? Hicken said, “I don’t believe there’s a universal quality measure or ROI measure; I believe there are lots of fascinating different things that you can look at that might be interesting for you. So I would say look for that.”

Hicken also noted, humorously, that if test automation did not deliver value, the speakers he sought out for ASTQ would not have returned his calls. 

There is still time to register to learn more about automated software testing and Parasoft.

AI in API and UI software test automation
https://sdtimes.com/test/ai-in-api-and-ui-software-test-automation/ (March 6, 2023)

Artificial intelligence is one of the digital marketplace’s most overused buzzwords. The term “AI” conjures up images of Alexa or Siri, computer chess opponents, and self-driving cars. 

AI can help humans in a variety of ways, including reducing errors and automating repetitive tasks. Software test automation tools are maturing and have incorporated AI and machine learning (ML) technology. The key point that separates the hype of AI from reality is that AI is not magic, nor the silver bullet promised with every new generation of tools. However, AI and ML do offer impressive enhancements to software testing tools.

More Software, More Releases

Software test automation is increasing in demand just as worldwide demand for software continues to surge and the developer population grows. A recent Statista report projects that the global developer population will increase from 24.5 million in 2020 to 28.7 million by 2024.

Since testing and development resources are finite, there’s a need to make testing more efficient while increasing coverage to do more with the same. Focusing testing on exactly what needs to be validated after each code change is critical to accelerating testing, enabling continuous testing, and meeting delivery goals.

AI and ML play a key role in providing the data that test automation tools need to focus testing while removing many of the tedious, error-prone, and mundane tasks. Among other things, they can:

  • Improve static analysis adoption.
  • Improve unit test creation.
  • Reduce test maintenance.
  • Reduce test execution.
  • Increase API test automation.
  • Improve UI test automation.
Real Examples

Let’s look at some real-life examples of what happens when you apply AI and ML technology to software testing.

Improve Unit Testing Coverage and Efficiency

Creating unit tests is a difficult task, since it can be time-consuming to create unique tests that fully exercise a unit. One way to alleviate this is by making it easier to create stubs and mocks with assisted test creation for better isolation of the code under test. AI can assist by analyzing the unit under test to determine its dependencies on other classes, then suggesting mocks for them to create more isolated tests.

The capabilities of AI in producing tests from code are impressive. However, it’s up to developers to continuously invest in and build their own tests. Again, using AI test creation assistance, developers can (see the sketch after this list):

  • Extend code coverage through clones and mutations.
  • Create the mocks.
  • Auto-generate assertions.
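
The output of that kind of assistance looks much like a hand-written isolated test. Here’s a hedged sketch using JUnit 5 and Mockito; the PriceService and TaxClient types are invented for illustration, not generated by any particular tool:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class PriceServiceTest {
    // Hypothetical unit under test: PriceService depends on a remote TaxClient.
    interface TaxClient { double rateFor(String region); }
    record PriceService(TaxClient tax) {
        double gross(double net, String region) {
            return net * (1 + tax.rateFor(region));
        }
    }

    @Test
    void grossAppliesRegionalTax() {
        TaxClient tax = mock(TaxClient.class);    // isolate the dependency
        when(tax.rateFor("DE")).thenReturn(0.19); // stub the collaborator
        assertEquals(119.0, new PriceService(tax).gross(100.0, "DE"), 1e-9);
    }
}
```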

Improve API Testing

Improving API testing has traditionally relied on the expertise and motivation of the development team, because APIs are often outside the realm of QA. Moreover, APIs are sometimes poorly documented, which makes creating tests for them difficult and time-consuming.

When it comes to API testing, AI and ML aim to accomplish the following:

  • Increase functional coverage with API and service layer testing.
  • Make it easier to automate and quicker to execute.
  • Reuse the results for load and performance testing.

This technology creates API tests by analyzing the traffic observed and recorded during manual UI tests. It then creates a series of API calls that are collected into scenarios and represent the underlying interface calls made during the UI flow. An ML algorithm is used to study interactions between different API resources and store those interactions as templates in a proprietary data structure. The goal of AI here is to create more advanced parameterized tests, not just repeat what the user was doing, as you get with simple record-and-playback testing.
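
The scenarios such tooling emits resemble what a tester would write by hand with an open-source client like REST Assured: a sequence of calls where later requests are parameterized with data extracted from earlier responses. A sketch with hypothetical endpoints:

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.junit.jupiter.api.Test;

class OrderScenarioTest {
    @Test
    void createThenFetchOrder() {
        // Call 1: create an order and capture the generated id.
        String id = given()
                .baseUri("https://api.example.com")
                .contentType("application/json")
                .body("{\"sku\":\"A-100\",\"qty\":2}")
            .when()
                .post("/orders")
            .then()
                .statusCode(201)
                .extract().path("id");

        // Call 2: parameterized with the extracted id, not a hard-coded replay.
        given().baseUri("https://api.example.com")
            .when().get("/orders/{id}", id)
            .then().statusCode(200).body("sku", equalTo("A-100"));
    }
}
```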

Automate UI Testing Efficiently

Validating the application’s functionality with UI testing is another critical component of your testing strategy. The Selenium UI test automation framework is widely adopted for UI testing, but users still struggle with the common Selenium testing challenges of maintainability and stability.

AI helps by providing self-healing capabilities during runtime execution to address the common maintainability problems associated with UI testing.  AI can learn about internal data structures during the regular execution of Selenium tests by monitoring each test run and capturing detailed information about the web UI content of the application under test. This opens the possibility of self-healing of tests, which is a critical time-saver in cases when UI elements of web pages are moved or modified, causing tests to fail.
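
Commercial self-healing learns alternate locators automatically from prior runs; the fallback mechanism itself can be sketched in a few lines of Selenium Java. The locator list here is hand-written for illustration:

```java
import java.util.List;

import org.openqa.selenium.By;
import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class HealingLookup {
    // Try the primary selector first, then alternates remembered from earlier
    // passing runs; a real tool would also persist whichever locator healed.
    static WebElement find(WebDriver driver, List<By> knownLocators) {
        for (By by : knownLocators) {
            try {
                return driver.findElement(by);
            } catch (NoSuchElementException ignored) {
                // fall through to the next known locator
            }
        }
        throw new NoSuchElementException("No locator matched: " + knownLocators);
    }
}
```

A call such as find(driver, List.of(By.id("checkout"), By.cssSelector("button.checkout"))) then survives an id rename without an immediate test edit.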

Remove Redundant Work With Smart Test Execution

Test impact analysis (TIA) assesses the impact of changes made to production code. The analysis and test selection are available to optimize the execution of unit tests, API tests, and Selenium web UI tests.

To prioritize test activities, a correlation from tests to business requirements is required. But that alone isn’t enough, since it’s unclear how recent changes have impacted the code. To optimize test execution, it’s necessary to understand the code that each test covers and then determine which code has changed. Test impact analysis allows testers to focus only on the tests that validate the changes.
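
At its core, test impact analysis is a set intersection between per-test coverage and the change set. A minimal sketch in plain Java, assuming a coverage map gathered by instrumentation (the data here is invented):

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class TestImpactSketch {
    // Select only the tests whose covered files intersect the changed files.
    static Set<String> impactedTests(Map<String, Set<String>> coverageByTest,
                                     Set<String> changedFiles) {
        Set<String> impacted = new HashSet<>();
        for (var entry : coverageByTest.entrySet()) {
            for (String file : entry.getValue()) {
                if (changedFiles.contains(file)) {
                    impacted.add(entry.getKey());
                    break;
                }
            }
        }
        return impacted;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> coverage = Map.of(
                "CartTest", Set.of("Cart.java", "Pricing.java"),
                "LoginTest", Set.of("Auth.java"));
        // Only CartTest needs to run after a change to Pricing.java.
        System.out.println(impactedTests(coverage, Set.of("Pricing.java")));
    }
}
```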

Benefits of AI/ML in Software Testing

AI and ML provide benefits throughout the SDLC and across the tools that assist at each level. Most importantly, these technologies amplify the effectiveness of those tools, first and foremost by delivering better quality software and helping testing become more efficient and productive while reducing cost and risk.

For development managers, achieving production schedules becomes a reality with no late-cycle defects crippling release timetables. For developers, integrating test automation into their workflow is seamless with automated test creation, assisted test modification, and self-healing application testing. Testers and QA get quick feedback on test execution, so they can be more strategic about where to prioritize testing resources.
