Broadcom adds on-premises version of its enterprise agility platform Rally
Previously, Rally was only available as a SaaS offering, but the new on-premises version, Rally Anywhere, is designed specifically to enable companies that operate globally to plan, prioritize, manage, track, and measure the value they deliver to customers while still maintaining security and compliance.
Rally Anywhere provides data sovereignty, meaning that data stays within the physical borders of where it originated, which allows companies to comply with international data protection regulations and alleviate data residency concerns.
It also provides the flexibility and scalability that is necessary for teams that are split up across multiple time zones and geographic locations to work together collaboratively.
According to Broadcom, with this announcement, the company’s entire ValueOps Value Stream Management Solution is now available as either a SaaS or on-premises option.
“We are committed to empowering enterprise teams with the tools they need to succeed, and Rally Anywhere exemplifies this commitment. With its focus on enterprise security, data sovereignty, and support for global value streams, we are confident that this new product will be a game-changer for organizations looking to elevate their collaborative efforts while maintaining control and security,” said Serge Lucio, general manager of the Agile Operations Division at Broadcom.
ValueOps Insights provides unified view of analytics for software value planning and delivery
The new solution, underpinned by the ConnectALL platform Broadcom acquired in June 2023, gathers, organizes, and evaluates disparate DORA and flow metrics to provide real-time, role-based dashboards that development teams, development managers, and organization leaders can use for informed decision-making.
“By integrating and organizing data from diverse sources across the value chain, ValueOps Insights provides the information organizations need to make better business decisions,” said Jean-Louis Vignaud, Head of ValueOps in Broadcom’s Agile Operations Division. The ability to match investment with the capability to deliver products leads to “successful value realization,” the company noted in its announcement.
The announcement stated that this enables monitoring of investment decisions against product outcomes, and confirmation that planned product capabilities translate into tangible investment outcomes: “By aligning investment intent with execution capability, we help organizations ensure successful value realization.”
DORA and Flow metrics are all about delivery efficiency, but Vignaud noted, “that doesn’t mean we’re smart in what we do. The ideal view of the world is, ‘I plan for value, I deliver value and I measure the value realization.’” While the full vision for Insights includes a value planning tool that will be integrated in the next quarter, Vignaud said, “We can start to be a bit smarter because of Flow analytics and DORA metrics.”
To broaden the value proposition, Broadcom is working on value realization. As Vignaud explained, “We capture early metrics, ensuring that indeed you are realizing the value you said you would be realizing when you do the investment.”
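For readers unfamiliar with the underlying measurements, DORA metrics are simple aggregates computed over raw delivery events. The following minimal Python sketch, with entirely hypothetical event data and no relation to how ValueOps Insights is actually implemented, shows how two of them, deployment frequency and lead time for changes, can be derived from commit and deployment timestamps:

```python
from datetime import datetime, timedelta

# Hypothetical delivery events: each deployment records when it shipped
# and when the underlying change was first committed.
deployments = [
    {"committed": datetime(2024, 3, 1, 9, 0),  "deployed": datetime(2024, 3, 2, 14, 0)},
    {"committed": datetime(2024, 3, 3, 11, 0), "deployed": datetime(2024, 3, 4, 10, 0)},
    {"committed": datetime(2024, 3, 7, 8, 0),  "deployed": datetime(2024, 3, 7, 16, 0)},
]

window_days = 7  # reporting window for this example

# Deployment frequency: deployments per day over the reporting window.
frequency = len(deployments) / window_days

# Lead time for changes: average time from commit to deployment.
lead_times = [d["deployed"] - d["committed"] for d in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

print(f"Deployment frequency: {frequency:.2f} per day")
print(f"Average lead time for changes: {avg_lead_time}")
```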
You may also like…
Broadcom’s ‘Three Pillars’ of value stream management
Organizational alignment is the key to delivering customer value
Broadcom: Optimize organizational efficiency to drive customer value
One common complaint from companies seeking greater efficiency is the difficulty in prioritizing innovation. Tasks like capacity planning and managing existing backlogs often take precedence, leaving little room for creative development.
Marla Schimke, Head of Product and Growth Marketing at Broadcom’s ValueOps Software Division, highlighted how the ValueOps approach addresses these challenges. “ValueOps helps organizations see the big picture, aligning development planning and resources with the business’s priorities, strategies, and OKRs. Using common data and software across teams makes communication faster and more intuitive. Different teams can pull their view of progress on demand, and access to real-time data reduces the need for so many meetings, saving everyone time,” she explained.
Capacity planning is crucial for ensuring development teams have the resources needed to complete projects while maintaining the bandwidth to develop innovative solutions. Schimke emphasized that flexible team-level work management, aggregated metrics, and the management of risks, dependencies, releases, and iterations can enhance the speed of market releases within the entire value stream.
Furthermore, the ValueOps approach supports continuous process improvement by incorporating these strategies. Broadcom’s solution delivers long-term flow metrics at scale and integrates customer metrics, creating investment guardrails that help organizations achieve peak efficiency.
“Utilizing Value Stream Management (VSM), teams in enterprises worldwide are experiencing substantial benefits. VSM has become essential for businesses, carving a path with increased visibility, alignment, and efficiency,” stated Schimke.
THIRD OF THREE PARTS
PART 1: Three pillars of value stream management
PART 2: Organizational alignment is the key to delivering customer value
Broadcom delivers workload automation and orchestration with launch of Automic SaaS
“Infrastructures are continuing to shift to the cloud, while organizations are struggling to operate multiple tools. This is driving a critical need for a centralized view of automated business processes,” said Serge Lucio, vice president and general manager, Agile Operations Division, Broadcom. “Automic SaaS helps organizations unify workload automation, gain critical observability, and simplify orchestration just like our on-premises solution with the additional benefits of lower costs, increased agility, and improved productivity.”
“Automic SaaS offers the same full-featured Workload Automation as on premises while optimizing the TCO, freeing organizations to focus on strategic automation. We’re excited to partner with Broadcom in bringing this modern automation solution to the market,” said Guenter Flamm, managing director, Tricise GmbH.
Complete Automation and Orchestration in SaaS
Automic SaaS is a complete solution that offers advanced workload automation and orchestration across hybrid and multi-cloud environments, removing islands of automation. Unlike offerings from some other vendors, including cloud-native automation solutions, Automic SaaS provides robust benefits.
Learn more about Automic SaaS from Broadcom by clicking here.
Organizational alignment is the key to delivering customer value
One answer to this is to ensure there is alignment throughout the organization, from business priority to ideation to production and delivery, with marketing and sales on board. That is done through the creation of value streams, which provide insights into how those teams are operating and whether they are continuing to meet organizational goals.
“When you think about alignment, you have to think about it across our whole value streams of tools, people and processes,” explained Lance Knight, Chief Value Stream Architect at Broadcom-AOD.
According to Broadcom, its ValueOps platform helps tear down organizational silos and helps teams collaborate by providing a single platform that ensures planning a project and delivering outcomes are aligned. Thinking about alignment has to bring in all the organization’s value streams of tools, people, and processes, Knight explained, adding that alignment has to be both downward and upward.
“Let’s say you’re in operations, and you’re working on things, but do those activities align to the outcomes and goals that you’re trying to achieve?” he noted. “Do they align to your OKRs? Do they align to cost and spending?”
Or, he said, let’s say you’re a developer and a defect comes in from a customer, and you think that’s something you need to prioritize and fix right away. But that may not align to the business goal, and isn’t connected to the prioritization alignments in portfolio management. So, while pushing the goals and objectives downward from the business, it’s also important to allow upward alignment, where portfolio teams have awareness of what the different units are working on and say that, yes, there’s technical debt to be cleaned up, but perhaps it’s not important that it get done today to achieve the goals.
However, Knight noted, communication is a two-way street, and perhaps a developer could argue that something in the code needs to be fixed today, even though it might not align with the business goals.
Alignment, he pointed out, is about sharing the same vision by having all the information about what the teams are working on and knowledge about any particular artifact. That information flows up and down the value stream, in an automated and connected way, within the ValueOps platform.
By tying together Clarity, Rally, ConnectALL, and Insights, ValueOps by Broadcom gives business stakeholders an understanding of why teams are fixing what they’re fixing. And this alignment, Knight said, solves other problems as well. “With alignment,” he said, “you establish trust … trust that we’re building the right things.”
SECOND OF THREE PARTS
PART 1: Three pillars of value stream management
PART 3: Optimize organizational efficiency to drive customer value
Broadcom’s Value Stream Management Virtual Summit: Learn how VSM delivers visibility, alignment and efficiency
Recent research has shown VSM plays a critical role in ensuring that going fast and delivering more frequently also delivers a product that actually provides value. The techniques of value stream management enable value to be seen, mapped to the goals of the business, and then optimized by eliminating waste and removing roadblocks to delivery.
“There are more companies that are in the phase of using it with multiple product lines. And there’s more companies that are starting to use it and doing a POC with it,” explained Laureen Knudsen, Broadcom’s Chief Transformation Officer – AOD. “So we’re seeing that growth curve really starting to take off and happen in this past year than we have previously. It’s been snowballing and I think the results are what people are starting to pay attention to at this point.”
RELATED CONTENT:
Organizational Alignment is the Key to Delivering Customer Value
Broadcom’s ‘Three Pillars’ of value stream management
Value stream management seems to be following the path of both Agile and DevOps, which began as ways to more effectively build and deliver software and were then adopted and scaled throughout the organization to gain efficiencies and bring more stakeholders into the process.
“A lot of the noise in the market about value stream management really had to do with DevOps,” Knudsen said. “And I think that came from SAFe using value stream mapping in their DevOps course. But a lot of people were only using it in their DevOps processes. And then they realized, well, that’s not where the process starts. And if I’m talking about taking a systems view of how I create products, I need to start at the very beginning of the ideation phase, and move all the way through to ‘How did my customers perceive what I did?’ “
Since Agile development is not a prescriptive way to create software, some organizations are all in while others might only be doing Scrum yet believe they are Agile in their processes.
“Almost everybody today will tell you they’re agile, but whether they’re really following the principles of Agile remains to be seen,” Knudsen said. “A lot of them still don’t have data and transparency, which is fundamental to agility. I’m seeing companies that were taking this seriously, and really trying to fix all of the ills they had when they rolled out DevOps and they rolled out Agile, and it didn’t go very well. There’s a lot of companies that are saying, ‘Okay, we saw what we did before, and it didn’t work very well. And now we really want to get this right.’ And they’re using this to bring the entire organization together, and include everybody who’s involved in that process of product creation, whether it’s a legal approval of an open source product, or marketing, or an agent that has to know about what you’re creating to be able to do their job more fully. So it’s expanding out to reach everybody, and it’s being taken a lot more seriously.”
This year’s fourth iteration of the VSM Virtual Summit – “Making Waves” – will feature speakers from Boston-based financial firm State Street Corp., vehicle transaction company Cox Automotive, and energy firm Southern Company, who will discuss how value stream management has helped them gain efficiencies and deliver value to their customers.
Those speakers, as well as Broadcom’s experts, will provide real-world examples of their findings, both successes and challenges, and explain how they’ve overcome those challenges and how they’re solving issues.
Knudsen said value stream management is not the same for every organization. “Everybody’s got slightly different things that they’ve overcome recently, that they end up talking about, but they’re all harnessing the power of that end-to-end view or the value stream management view to optimize their transformations. So that’s our focus at VSM Virtual Summit.”
Register for the April 24 event here, and even if you can’t make it, the event recording will be made available immediately after the Summit for on-demand viewing.
Broadcom’s ‘Three Pillars’ of value stream management
Despite the wrangling over terminology, the industry seems to have agreed that delivering value has three generally accepted phases: visibility; alignment of strategy, planning, and work; and optimizing efficiency. In this article, we will focus on increasing visibility.
The desired business outcomes are to enhance decision-making, improve trust in data, and minimize risk.
A January research report by Broadcom on global value stream management trends found that one of the biggest challenges organizations face in making a digital transformation and delivering value is the lack of data visibility across the enterprise, which hampers efforts to make the kinds of business decisions that maximize value.
Laureen Knudsen, Chief Transformation Officer at Broadcom, said, “I think people now realize that their original view on digital transformation, where they just wanted to automate portions of the process, isn’t good enough. If you can’t see work flowing through your organization, or if data or processes still live in silos, you won’t have a realistic picture of where your organization stands.” Knudsen went on to say that the key to success lies in integrating and tying all of these pieces together with trustworthy data.
What ValueOps does for customers is enable them to define, model, measure, prioritize, and fund the initiatives they value most. It accurately models complex business operations and scenarios to enable the proper definition and tracking of value. This helps organizations move beyond projects to include business value streams and product portfolios.
Another problem organizations say they have is that stakeholders don’t trust data from other teams, and that these silos create friction between different teams and roles. ValueOps brings metrics from different systems together to create a single source of truth, and connects this development data to the value definition, which enables the measurement of value creation and ROI in real time. Once those metrics are generated, each stakeholder in the process can obtain the insights that are most relevant to them.
Finally, ValueOps can minimize risk by synchronizing business objectives and funding with ongoing development and delivery efforts, which allows the solution to flag risk and dependencies whenever change occurs. All of this enables organizations to pivot more quickly since they have real-time access to trusted data, which eliminates silos across the pipeline and allows for the rapid identification of delays and bottlenecks.
“We give organizations the ability to align their teams and gain visibility into every part of the delivery lifecycle, from an idea to customer value realization,” Knudsen said. “Did customers like what we did, and were we able to generate value for them quickly? Our solutions automate the processes and give visibility into all that data, up and down the organization.”
“This type of strategy can work well for your organization, allowing leaders to have the dashboards they need to make good prioritization decisions,” she continued. “It makes it a lot easier for companies to really understand what’s going on, so they can optimize any part of the product life cycle that isn’t working well.”
So what used to be called engineering efficiency, developer observability, or other buzzwords is now converging into value stream management – an overarching practice that organizations are quickly learning delivers on the promise of creating more customer value.
This article was created by SD Times and Broadcom Software
FIRST OF THREE PARTS
PART 2: Organizational alignment is the key to delivering customer value
PART 3: Optimize organizational efficiency to drive customer value
Broadcom acquires ConnectALL
In a blog post announcing the deal, Broadcom said its plan is to integrate ConnectALL with the company’s ValueOps platform, which includes the Rally agile solution and the Clarity project portfolio management tools, to expand Broadcom’s offering for value stream management and digital transformation.
According to Broadcom’s website, “The combination of ValueOps and ConnectALL’s complementary technology will allow customers to connect and integrate a variety of third-party software tools and platforms and accelerate digital transformation efforts by radically improving visibility, alignment, and efficiency across the organization.”
This is an excerpt from an article that was originally posted on VSM Times. Read the full article here.
Broadcom: 84% of orgs will be using VSM by end of the year
And if that wasn’t enough proof of its growing popularity, a new survey from Broadcom provides numbers to back it up. According to the survey of over 500 IT and business leaders, 84% of enterprises are expected to have adopted VSM by the end of the year, up from just 42% in 2021.
According to Broadcom, early adoption of VSM started around four years ago, and within the past two years there has been a shift to mainstream adoption. Sixty percent of survey respondents said they will use VSM to deliver at least one product this year.
Read the full story on VSM Times.
How service virtualization supports cloud computing: Key use cases
Several weeks ago, a customer of the Broadcom Service Virtualization solution posed the following question: “Now that we’re moving to the cloud, do we still need Service Virtualization?”
The question struck me as odd. My sense is that the confusion stemmed from a misperception: because cloud environments can be spun up quickly, people assume test environment bottlenecks are easily addressed and that service virtualization capabilities are therefore rendered unnecessary. Obviously, that is not the case at all! Being able to spin up infrastructure quickly does not address the question of what elements need to be established to make environments useful for the desired testing efforts.
In fact, all the use cases for the Service Virtualization solution are just as relevant in the cloud as they are in traditional on-premises systems.
All of these use cases are documented in detail here.
What’s more pertinent is that Service Virtualization helps to address many additional use cases that are unique to cloud-based systems.
Fundamentally, Service Virtualization and cloud capabilities complement each other. Combined, Service Virtualization and cloud services deliver true application development and delivery agility that would not be possible with only one of these technologies.
Using virtual services deployed to an ephemeral test environment in the cloud makes the setup of the environment fast, lightweight, and scalable. (Especially compared to setting up an entire SAP implementation in the ephemeral cloud environment, for example.)
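To make this concrete, a virtual service is essentially a lightweight stand-in that returns realistic responses for a dependency’s API. The following sketch, using only Python’s standard library, stubs a hypothetical order-lookup endpoint; the endpoint and payload are invented for illustration, and a real Service Virtualization tool records and replays far richer protocol behavior than this.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned response imitating a hypothetical downstream order service.
CANNED_ORDER = {"orderId": "42", "status": "SHIPPED", "carrier": "ACME"}

class VirtualService(BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond to the simulated endpoint; anything else is a 404.
        if self.path.startswith("/orders/"):
            body = json.dumps(CANNED_ORDER).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Deployed into an ephemeral cloud test environment, this stub stands
    # in for the real dependency and starts in seconds, in contrast to
    # installing and configuring the real system.
    HTTPServer(("0.0.0.0", 8080), VirtualService).serve_forever()
```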
Let’s examine some key ways to use Service Virtualization for cloud computing.
Service Virtualization Use Cases for Cloud Migration
Cloud migration typically involves re-hosting, re-platforming, re-factoring, or re-architecting existing systems. Regardless of the type of migration, Service Virtualization plays a key role in functional, performance, and integration testing of migrated applications—and the use cases are the same as those for on-premises applications.
However, there are a couple of special use cases that stand out for Service Virtualization’s support for cloud migration:
1. Proactive Performance Engineering for Cloud Migration
In most cases, migrating applications to the cloud will result in performance changes, typically due to differences in application distribution and network characteristics. For example, various application components may reside in different parts of a hybrid cloud implementation, or performance latencies may be introduced by the use of distributed cloud systems.
With Service Virtualization, we can easily simulate the performance of all the different application components, including their different response characteristics and latencies. Consequently, we can understand the performance impact, including both overall and at the component level, before the migration is initiated.
This allows us to focus on appropriate proactive performance engineering to ensure that performance goals can be met post migration.
In addition, Service Virtualization plays a key role in performance testing during and after the migration, which are common, well-understood use cases.
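As a rough illustration of the idea (with hypothetical latency numbers, not the Broadcom product’s mechanism), the sketch below models pre- and post-migration response characteristics so the performance impact of a move can be estimated before it happens:

```python
import random
import time

# Hypothetical latency profiles (in seconds) for pre- and post-migration
# topologies; real values would come from network measurements.
LATENCY_PROFILES = {
    "on_prem":    {"mean": 0.005, "jitter": 0.001},
    "post_cloud": {"mean": 0.060, "jitter": 0.015},
}

def simulate_call(profile_name: str) -> float:
    """Simulate one dependency call under a given latency profile."""
    profile = LATENCY_PROFILES[profile_name]
    delay = max(0.0, random.gauss(profile["mean"], profile["jitter"]))
    time.sleep(delay)  # inject the modeled network latency
    return delay

# Compare end-to-end timing under both profiles before migrating.
for name in LATENCY_PROFILES:
    start = time.perf_counter()
    for _ in range(10):
        simulate_call(name)
    print(f"{name}: 10 calls took {time.perf_counter() - start:.3f}s")
```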
2. Easier Hybrid Test Environment Management for Testing During Migration
This is an extension of the common Service Virtualization use case of simplifying test environments.
However, during application migration this testing becomes more crucial given the mix of different environments that are involved. Customers typically migrate their applications or workloads to the cloud incrementally, rather than all at once. This means that test environments during migration are much more complicated to set up and manage. That’s because tests may span multiple environments: cloud, for migrated applications, and on-premises, for pre-migration applications. In some cases, specific application components (such as those residing on mainframes) may not be migrated at all.
Many customers are impeded from early migration testing due to the complexities of setting up test environments across evolving hybrid systems.
For example, applications that are being migrated to the cloud may have dependencies on other applications in the legacy environment. Testing of such applications requires access to test environments for applications in the legacy environment, which may be difficult to orchestrate using continuous integration/continuous delivery (CI/CD) tools in the cloud. By using Service Virtualization, it is much easier to manage and provision virtual services that represent legacy applications, while having them run in the local cloud testing environment of the migrated application.
On the other hand, prior to migration, applications running in legacy environments will have dependencies on applications that have been migrated to the cloud. In these cases, teams may not know how to set up access to the applications running in cloud environments. In many cases, there are security challenges in enabling such access. For example, legacy applications may not have been re-wired for the enhanced security protocols that apply to the cloud applications.
By using Service Virtualization, teams can provision virtual services that represent the migrated applications within the bounds of the legacy environments themselves, or in secure testing sandboxes on the cloud.
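In practice, pointing an application under test at a virtual service rather than a real dependency is often just configuration. A minimal sketch, assuming the application resolves its dependency endpoints from environment variables; the stage names and URLs here are hypothetical:

```python
import os
import urllib.request

# In CI, point the dependency at a virtual service; in staging, at the
# real (possibly still on-premises) system. These URLs are invented.
ENDPOINTS = {
    "ci":      "http://virtual-legacy-billing.test.svc:8080",
    "staging": "https://billing.legacy.example.com",
}

stage = os.environ.get("PIPELINE_STAGE", "ci")
billing_url = ENDPOINTS[stage]

# The application under test reads the same variable at startup, so no
# code changes are needed to swap the real system for a virtual service.
os.environ["BILLING_SERVICE_URL"] = billing_url

# Simple readiness check before kicking off the test suite.
with urllib.request.urlopen(f"{billing_url}/health") as resp:
    print(f"[{stage}] billing dependency healthy: {resp.status == 200}")
```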
In addition, Service Virtualization plays a key role in parallel migrations, that is, when multiple applications that are dependent on each other are being migrated at the same time. This is an extension of the key principle of agile parallel development and testing, which is a well-known use case for Service Virtualization.
3. Better Support for Application Refactoring and Re-Architecting During Migration
Organizations employ various application re-factoring techniques as part of their cloud migration. These commonly include re-engineering to leverage microservices architectures and container-based packaging, which are both key approaches for cloud-native applications.
Regardless of the technique used, all these refactoring efforts involve making changes to existing applications. Given that, these modifications require extensive testing. All the traditional use cases of Service Virtualization apply to these testing efforts.
For example, the strangler pattern is a popular re-factoring technique that is used to decompose a monolithic application into a microservices architecture that is more scalable and better suited to the cloud. In this scenario, testing approaches need to change dramatically to leverage distributed computing concepts in general and microservices testing in particular. Service Virtualization is key to enabling all kinds of microservices testing. We will address in detail how Service Virtualization supports the needs of such cloud-native applications in section IV below.
4. Alleviate Test Data Management Challenges During Migration
In all of the above scenarios, the use of Service Virtualization also helps to greatly alleviate test data management (TDM) problems. These problems are complex in themselves, but they are compounded during migrations. In fact, data migration is one of the most complicated and time-consuming processes during cloud migration, which may make it difficult to create and provision test data during the testing process.
For example, data that was once easy to access across applications in a legacy environment may no longer be available to the migrated applications (or vice-versa) due to the partitioning of data storage. Also, the mechanism for synchronizing data across data stores may itself have changed. This often requires additional cumbersome and laborious TDM work to set up test data for integration testing—data that may eventually be thrown away post migration. With Service Virtualization, you can simulate components and use synthetic test data generation in different parts of the cloud. This is a much faster and easier way to address TDM problems. Teams also often use data virtualization in conjunction with Service Virtualization to address TDM requirements.
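As a small, hypothetical sketch of that last point, the snippet below generates referentially consistent synthetic customer records that a virtual service could serve, sidestepping the need to migrate or mask real data for integration tests; the record schema is invented for illustration:

```python
import json
import random
import string

def synthetic_customer(seq: int) -> dict:
    """Generate one referentially self-consistent, entirely fake record."""
    name = "".join(random.choices(string.ascii_uppercase, k=8))
    return {
        "customerId": f"CUST-{seq:06d}",
        "name": name,
        # Derived field stays consistent with the generated ID, so the
        # virtual service can answer lookups by either key.
        "accountRef": f"ACC-{seq:06d}",
        "tier": random.choice(["BRONZE", "SILVER", "GOLD"]),
    }

# Seed the virtual service's response data with 100 fake customers.
dataset = [synthetic_customer(i) for i in range(1, 101)]
with open("virtual_service_data.json", "w") as f:
    json.dump(dataset, f, indent=2)
print(f"wrote {len(dataset)} synthetic records")
```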
Service Virtualization Use Cases for Hybrid Cloud Computing
Once applications are migrated to the cloud, all of the classic use cases for Service Virtualization continue to apply.
In this section, we will discuss some of the key use cases for supporting hybrid cloud computing.
1. Simplified Test Environment Management Across Multiple Clouds
Post migration, many enterprises will operate hybrid systems based on a mix of on-premises applications in private clouds (such as those running on mainframes), different public cloud systems (including AWS, Azure, and Google Cloud Platform), and various SaaS provider environments (such as Salesforce). See a simplified view in the figure below.
Setting up test environments for these hybrid systems will continue to be a challenge. Establishing environments for integration testing across multiple clouds can be particularly difficult.
Service Virtualization clearly helps to virtualize these dependencies, but more importantly, it makes virtual services easily available to developers and testers, where and when they need them.
For example, consider the figure above. Application A is hosted on a private cloud, but dependent on other applications, including E, which is running in a SaaS environment, and J, which is running in a public cloud. Developers and testers for application A depend on virtual services created for E and J. For hybrid cloud environments, we also need to address where the virtual service will be hosted for different test types, and how they will be orchestrated across the different stages of the CI/CD pipeline.
See figure below.
Generally speaking, during the CI process, developers and testers would like to have lightweight synthetic virtual services for E and J, and to have them created and hosted on the same cloud as A. This minimizes the overhead involved in multi-cloud orchestration.
However, as we move from left to right in the CD lifecycle, we would want the virtual services for E and J not only to become progressively more realistic, but also to be hosted closer to the remote environments where the “real” dependent applications are hosted. These services would also need to be orchestrated across a multi-cloud CI/CD system. Service Virtualization frameworks allow this by packaging virtual services into containers or virtual machines (VMs) appropriate for the environment they need to run in.
Note that it is entirely possible for application teams to choose to host the virtual services for the CD lifecycle on the same host cloud as app A. Service Virtualization frameworks would allow that by mimicking the network latencies that arise from multi-cloud interactions.
The key point is that the use of Service Virtualization not only simplifies test environment management across clouds, but also provides the flexibility to deploy virtual services where and when they are needed.
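One way to picture that flexibility is a per-stage plan that dials up both the fidelity of each virtual service and how close it is hosted to the real dependency. The sketch below is purely illustrative; the stage names, fidelity levels, and hosting targets are assumptions, not product syntax:

```python
# Hypothetical per-stage plan for virtual services E and J: fidelity and
# hosting move closer to production as the pipeline progresses.
DEPLOYMENT_PLAN = {
    "ci": {
        "E": {"fidelity": "synthetic", "host": "same-cloud-as-A"},
        "J": {"fidelity": "synthetic", "host": "same-cloud-as-A"},
    },
    "integration": {
        # Stay local but mimic cross-cloud network latency.
        "E": {"fidelity": "recorded", "host": "same-cloud-as-A",
              "simulated_latency_ms": 45},
        "J": {"fidelity": "recorded", "host": "public-cloud"},
    },
    "pre_prod": {
        "E": {"fidelity": "recorded", "host": "saas-adjacent"},
        "J": {"fidelity": "recorded", "host": "public-cloud"},
    },
}

def print_plan(stage: str) -> None:
    """Show what would be deployed for a given pipeline stage."""
    for svc, cfg in DEPLOYMENT_PLAN[stage].items():
        extra = ""
        if "simulated_latency_ms" in cfg:
            extra = f", +{cfg['simulated_latency_ms']}ms simulated latency"
        print(f"[{stage}] virtual {svc}: {cfg['fidelity']} on {cfg['host']}{extra}")

for stage in DEPLOYMENT_PLAN:
    print_plan(stage)
```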
2. Support for Agile Test Environments in Cloud Pipelines
In the introduction, we discussed how Service Virtualization complements cloud capabilities. While cloud services make it faster and easier to provision and set up on-demand environments, the use of Service Virtualization complements that agility. With the solution, teams can quickly deploy useful application assets, such as virtual services, into their environments.
For example, suppose our application under test has a dependency on a complex application like SAP, for which we need to set up a test instance of the app. Provisioning a new test environment in the cloud may take only a few seconds, but deploying and configuring a test installation of a complex application like SAP into that environment would take a long time, impeding the team’s ability to test quickly. In addition, teams would need to set up test data for the application, which can be complex and resource intensive. By comparison, deploying a lightweight virtual service that simulates a complex app like SAP takes no time at all, thereby minimizing the testing impediments associated with environment setup.
3. Support for Scalable Test Environments in Cloud Pipelines
In cloud environments, virtual service environments (VSEs) can be deployed as containers into Kubernetes clusters. This allows test environments to scale automatically based on testing demand by expanding the number of virtual service instances. This is useful for performance and load testing, cases in which the load level is progressively scaled up. In response, the test environment hosting the virtual services can also automatically scale up to ensure consistent performance response. This can also help the virtual service to mimic the behavior of a real automatically scaling application.
Sometimes, it is difficult to size a performance testing environment for an application so that it appropriately mimics production. Automatically scaling test environments can make this easier. For more details on this, please refer to my previous blog on Continuous Performance Testing of Microservices, which discusses how to do scaled component testing.
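For teams that run virtual service environments as Kubernetes deployments, this kind of scaling can be attached with a standard HorizontalPodAutoscaler. Below is a sketch using the official Kubernetes Python client; the deployment name, namespace, and thresholds are assumptions, not a prescribed configuration:

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster

# Hypothetical deployment running the virtual service environment.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="vse-autoscaler"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1",
            kind="Deployment",
            name="virtual-service-env",
        ),
        min_replicas=1,
        max_replicas=10,  # grows with load-test demand
        target_cpu_utilization_percentage=70,
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="perf-testing", body=hpa
)
print("HPA created: virtual service environment will scale with test load")
```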
4. Support for Cloud Cost Reduction
Many studies (such as one done by Cloud4C) have indicated that enterprises often over-provision cloud infrastructure, and that a significant proportion (about 30%) of cloud spending is wasted. This is due to various reasons, including the ease of environment provisioning, idle resources, oversizing, and lack of oversight.
While production environments are more closely managed and monitored, this problem is seen quite often in test and other pre-production environments, which developers and teams are empowered to spin up to promote agility. Most often, these environments are over-provisioned (or sized larger than they need to be), contain data that is no longer useful after a certain time (for example, aged test data, obsolete builds, or test logs), and are not properly cleaned up after use, since developers and testers love to quickly move on to the next item on their backlog!
Use of Service Virtualization can help to alleviate some of this waste. As discussed above, replacing real application instances with virtual services helps to reduce the size of the test environment significantly. Compared to complex applications, virtual services are also easier and faster to deploy and undeploy, making it easier for pipeline engineers to automate cleanup in their CI/CD pipeline scripts.
In many cases, virtual service instances may be shared between multiple applications that are dependent on the same end point. Automatically scaling VSEs can also help to limit the initial size of test environments.
Finally, the VSEs to which actual virtual services are deployed can be actively monitored to track usage and ensure they are de-provisioned when no longer in use.
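As a hedged sketch of that monitoring and cleanup idea, the housekeeping job below finds deployments labeled as virtual service environments and removes any whose (hypothetical) expiry annotation has passed; the label, annotation, and namespace are assumptions, and real pipelines would wire something like this into a scheduled CI/CD cleanup stage:

```python
from datetime import datetime, timezone
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

NAMESPACE = "test-envs"  # hypothetical namespace for ephemeral test envs

# Find deployments marked as virtual service environments.
for dep in apps.list_namespaced_deployment(
    NAMESPACE, label_selector="role=virtual-service-env"
).items:
    annotations = dep.metadata.annotations or {}
    # "expires-at" is assumed to be an ISO-8601 timestamp with an offset.
    expiry = annotations.get("expires-at")
    if expiry and datetime.fromisoformat(expiry) < datetime.now(timezone.utc):
        # De-provision the idle environment instead of letting it sit.
        apps.delete_namespaced_deployment(dep.metadata.name, NAMESPACE)
        print(f"deleted expired virtual service env: {dep.metadata.name}")
```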