feature experimentation Archives - SD Times
https://sdtimes.com/tag/feature-experimentation/

Split brings feature experimentation to Microsoft Azure
https://sdtimes.com/test/split-brings-feature-experimentation-to-microsoft-azure/ (Nov. 1, 2023)

The feature experimentation company Split Software has partnered with Microsoft to help its customers implement feature experimentation within Microsoft Azure. 

According to Split, this aligns with the needs of today’s product developers because feature experimentation is considered crucial for successful digital experiences, but many developers believe they are not successful at it. With this integration, Azure App Configuration users can effectively run experiments in Azure, using Split’s capabilities to test features in production environments and gather valuable experimentation data.

“With this new capability jointly delivered by Azure and Split in Azure App Configuration, teams can use experimentation and insights to reduce risk, fuel innovation, and create delightful digital experiences by adopting modern approaches for progressive delivery in app development,” says Amanda Silver, CVP of the Developer Division at Microsoft. “Experimentation from Split within Azure will further our customers’ ability to build intelligent apps and release them to market quickly and safely – driving maximum value for end users and fuel business growth.” 

Split has been collaborating with Microsoft since 2020; the platform is available on Azure Marketplace, is integrated with Azure DevOps, and offers a Visual Studio Code extension.

The new offering within Azure App Configuration will be accessible in early 2024 through a Private Preview on Azure. Customers interested in early access can sign up for the Private Preview through the provided link and will be notified by Split when it becomes available.

2020: Testing goes hands off during the pandemic with more AI and automation
https://sdtimes.com/test/2020-testing-goes-hands-off-during-the-pandemic-with-with-more-ai-and-automation/ (Dec. 8, 2020)

Autonomous testing mushroomed in importance in 2020 as many companies realized that the primary way to connect with consumers is through apps and digital experiences, which in turn increased the amount of testing that needs to be done. 

The pandemic has also created a distributed workforce and prompted the need for alternate methods of testing that don’t require being on site. 

“Before the pandemic, a lot of mobile testers were relying on the few physical devices they kept in a drawer at work. Now, they’re realizing they need access to a device cloud that provides the same interactive capabilities desktop and web developers get using virtual machines,” said Dan McFall, the president and CEO of Mobile Labs. 

This year, we saw automated testing, continuous testing and security testing continue to grow. Non-traditional testing such as feature experimentation, Visual AI, and chaos engineering also advanced to keep pace with organizational demands in the digital age.

About a decade ago, testing was mostly manual. Test cases were written; functional and UI tests were run; regression, penetration and load testing would happen; and the application was deemed ‘good to go.’ Testing has evolved dramatically since then. 

“Application changes occur several times a day now. These changes need to work on many browsers, devices, operating systems and different environments, so you need to do far more work in far less time,” said Gil Sever, CEO and co-founder of Applitools. “You can’t manually write and maintain all the scripts needed, so you need Visual AI to take over these rote aspects of the work.”

Back in March, Applitools released Ultrafast Grid, which simplifies cross-browser testing by eliminating the need to tediously run functional and visual tests individually across all browsers and viewports. 

Solution providers have focused on the need for an AI-driven approach that can be utilized for both legacy and modern cloud-native technologies. 

For example, in October, Tricentis announced Tosca 14 with Vision AI, which automatically recognizes and identifies visual user interface elements and controls across any form factor, the same way humans do, to aid in the automated generation of robust test cases. 

“Test automation technology has evolved from script-based, to model-based, and is now moving towards AI-based approaches. This new approach will enable agile and DevOps teams to build automated test cases much earlier in the development process – starting with only a mockup or a low-fidelity prototype,” Tricentis wrote in a post.

This year also saw major acquisitions of testing platforms.

In June, Keysight Technologies acquired Eggplant, a software test automation platform that leverages artificial intelligence and analytics to automate test creation and test execution. 

Then in November, mobile experience platform provider Kobiton acquired its competitor Mobile Labs to enable developers and QA teams to deliver apps faster by leveraging artificial intelligence across real devices spanning cloud and on-premises deployments. 

“There is an urgent need to master testing at both large scale and high velocity to ensure high-quality software delivery. To succeed, application leaders need to develop their teams’ competency in autonomous testing to remove testing bottlenecks and accelerate release cadence,” Gartner wrote in its Innovation Insight for Autonomous Testing, published this year. 

Feature experimentation: Walk before you run (premium)
https://sdtimes.com/softwaredev/feature-experimentation-walk-before-you-run/ (July 9, 2020)

Software innovation doesn’t happen without taking risks along the way. But risks can be scary for businesses afraid of making mistakes. 

There is another way, according to Jon Noronha, senior vice president of product at Optimizely, a progressive delivery and experimentation platform provider. Feature experimentation, he said, allows businesses to go to market quicker while improving product quality and minimizing the fear of failure.  

“I like to think of feature experimentation as a safety net. It’s something that gives people the confidence to do something bold or risky,” he said. “Imagine you are jumping on a trapeze with no net. You’re going to be really scared to take even the smallest step because if you fall, you’re going to really hurt yourself. When there is a net, you know the worst thing that can happen is you land on the net and bounce a little bit.”

RELATED CONTENT:
Waving the flag for feature experimentation
Speed releases with feature flags

Feature experimentation is that net that allows you to leap, but catches you if you fall, Noronha explained. It enables businesses to take small risks, roll a change out to a few users, and measure its impact before releasing it to 100% of the user base. 

Christopher Condo, a principal analyst at the research firm Forrester, said, “In order to be innovative, you need to really understand what your customers want and be willing to try new experiences. Using feature experimentation allows businesses to be more Agile, more willing to put out smaller pieces of functionality, test it with users and continue to iterate and grow.” 

However, there are still some steps businesses need to take before they can squeeze out the benefits of feature experimentation. They need to learn to walk before they can run.

Progressive Delivery: Walk
Progressive delivery is the walk that comes before the run (feature experimentation), according to Dave Karow, continuous delivery evangelist at Split, a feature flag, experimentation and CD solution provider. Progressive delivery assumes you have the “crawl” part already in place: continuous integration and continuous delivery. For instance, teams need to have a centralized source of information in place where developers can check in code and have it automatically tested for basic sanity with no human intervention, Karow explained. 

Without that, you won’t see the true promise of progressive delivery, John Kodumal, CTO and co-founder of LaunchDarkly, a feature flag and toggle management company, added.

“Imagine a developer is going to work on a feature, take a copy of the source code and take a copy of their plan and work on it for some time. When they are done, they have to merge their code back into the source code that is going to go out into production,” Karow explained. “In the meantime, other developers have been making other changes. What happens is literally referred to in the community as ‘merge hell.’ You get to a point where you think you finished your work and you have to merge back in and then you discover all these conflicts. That’s the crawl stuff. It’s about making changes to the software faster and synchronizing with coworkers to find problems in near real-time.” 

Once you have the crawl part situated, the progressive delivery part leverages feature flags (also known as feature toggles, bits or flippers) to get features into production faster without breaking the application. According to Optimizely’s Noronha, feature flags are one layer of the safety net that feature experimentation offers. They allow development teams to try things at lower risk and roll out slowly and gradually, exposing key functionality in a controlled way with the goal of catching bugs or errors before they become widespread. “It’s making it easier to roll things out faster, but be able to stop rollouts without a lot of drama,” Karow said. 

Some examples of feature flags

Feature flags come in several different flavors (a sketch of how they might look in code follows this list). Among them are: 

  • Release flags that enable trunk-based development. “Release Toggles allow incomplete and un-tested codepaths to be shipped to production as latent code which may never be turned on,” Pete Hodgson, an independent software delivery consultant, wrote in a post on MartinFowler.com.
  • Experiment flags that leverage A/B testing to make data-driven optimizations. “By their nature Experiment Toggles are highly dynamic – each incoming request is likely on behalf of a different user and thus might be routed differently than the last,” Hodgson wrote.
  • Ops flags, which enable teams to control operational aspects of their solution’s behavior. Hodgson explained “We might introduce an Ops Toggle when rolling out a new feature which has unclear performance implications so that system operators can disable or degrade that feature quickly in production if needed.”
  • Permission flags that can change the features or experience for certain users. “For example we may have a set of ‘premium’ features which we only toggle on for our paying customers. Or perhaps we have a set of “alpha” features which are only available to internal users and another set of “beta” features which are only available to internal users plus beta users,” Hodgson wrote.
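
To make those flavors concrete, here is a minimal sketch of how such flags might be modeled. The flag names, store and evaluation logic are hypothetical illustrations, not Split’s or LaunchDarkly’s actual APIs:

```python
# Hypothetical in-memory flag store; real tools keep these rules in a managed
# service and evaluate them through an SDK, so flags can flip without a deploy.
FLAGS = {
    "new-checkout":    {"kind": "release",    "on": False},                 # hides latent code
    "red-buy-button":  {"kind": "experiment", "variants": ["blue", "red"]},
    "image-resizing":  {"kind": "ops",        "on": True},                  # operational kill switch
    "premium-reports": {"kind": "permission", "plans": {"premium"}},
}

def is_enabled(name: str, user: dict) -> bool:
    """Evaluate a flag for one user according to its kind."""
    flag = FLAGS[name]
    if flag["kind"] == "permission":
        return user.get("plan") in flag["plans"]
    if flag["kind"] == "experiment":
        return True  # variant assignment happens separately (see the canary sketch below)
    return flag["on"]

print(is_enabled("premium-reports", {"plan": "premium"}))  # True
print(is_enabled("new-checkout", {"plan": "free"}))        # False: the code ships dark
```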

One way to look at it is through the concept of canary releases, according to Kodumal, which is the idea of being able to release some change and controlling the exposure of that change to a smaller audience to validate that change before rolling it out more broadly. 
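
Under the hood, a canary release like this is often implemented with deterministic bucketing, so a given user consistently stays in or out of the exposed audience across sessions. A sketch under that assumption (not any vendor’s actual algorithm):

```python
import zlib

def in_canary(user_id: str, feature: str, percent: int) -> bool:
    """Hash user+feature into one of 100 stable buckets; buckets below `percent` see the change."""
    bucket = zlib.crc32(f"{feature}:{user_id}".encode()) % 100
    return bucket < percent

# Expose "new-search" to roughly 5% of users; widen the percentage as confidence grows.
canary_users = [u for u in ("alice", "bob", "carol") if in_canary(u, "new-search", 5)]
```

Hashing on the feature name as well as the user ID keeps each user’s assignment stable for one feature without correlating their assignments across features.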

These flags help minimize the blast radius of possible messy situations, according to Forrester’s Condo. “You’re slowly gauging the success of your application based on: Is it working as planned? Do customers find it useful? Are they complaining? Has the call volume gone up or stayed steady? Are the error logs growing?” As developers implement progressive delivery, they will become better at detecting when things are broken, Condo explained.

“The first thing is to get the hygiene right so you can build software more often with less drama. Implement progressive delivery so you can get that all the way to production. Then dip your toes into experimentation by making sure you have that data automated,” said Split’s Karow. 

Feature experimentation: Run
Feature experimentation is similar to progressive delivery, but with better data, according to Karow. 

“Feature experimentation takes progressive delivery further by looking at the data and not just learning whether or not something blew up, but why it did,” he said. 

By being able to consume the data and understand why things happen, it enables businesses to make better data-driven decisions. The whole reason you do smaller releases is to confirm they are having the impact you were looking for, that there were no bugs, and that you are meeting users’ expectations, according to Optimizely’s Noronha. 

It does that through A/B testing, multi-armed bandits, and chaos experiments, according to LaunchDarkly’s Kodumal. A/B testing tests multiple versions of a feature to see how each is received. A multi-armed bandit is a variation of an A/B test, but instead of waiting for the test to complete, it uses algorithms to shift traffic allocations toward the better-performing variants as results accumulate. And chaos experiments focus on finding out what doesn’t work rather than looking for what does. 
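
To illustrate the multi-armed bandit idea, here is a toy epsilon-greedy allocator that shifts traffic toward the better-performing variant as results come in; it sketches the concept, not the algorithm any particular vendor ships:

```python
import random

class EpsilonGreedy:
    """Send most traffic to the best-observed variant; keep exploring with probability epsilon."""

    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon
        self.stats = {v: {"trials": 0, "successes": 0} for v in variants}

    def _rate(self, variant):
        s = self.stats[variant]
        return s["successes"] / s["trials"] if s["trials"] else 0.0

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.stats))  # explore
        return max(self.stats, key=self._rate)      # exploit

    def record(self, variant, success):
        self.stats[variant]["trials"] += 1
        self.stats[variant]["successes"] += int(success)

bandit = EpsilonGreedy(["control", "treatment"])
arm = bandit.choose()
bandit.record(arm, success=True)  # e.g. the user engaged with the feature
```

Production systems typically use more sample-efficient strategies such as Thompson sampling, but the principle of reallocating traffic mid-test is the same.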

“You might drive a feature experiment that is intended to do something like improve engagement around a specific feature you are building,” said Kodumal. “You define the metric, build the experiment, and validate whether or not the change being made is being received positively.”

The reason why feature experimentation is becoming so popular is because it enables development teams to deploy code without actually turning it on right away. You can deploy it into production, test it in production, without the general user base seeing it, and either release it or keep it hidden until it’s ready, Forrester’s Condo explained. 

In some cases, a business may decide to release the feature or new solution to its users, but give them the ability to turn it on or off themselves and see how many people like the enhanced experience. “Feature experimentation makes that feature a system of record. It becomes part of how you deliver experiences to your customers in a varied experience,” said Condo. “It’s like the idea of Google. How many times on Google or Gmail has it said ‘here is a brand new experience, do you want to use it?’ And you said ‘no I’m not ready.’ It is allowing companies to modernize in smaller pieces rather than all at once.”

What feature experimentation does is focus on the measurement side, while progressive delivery focuses on just releasing smaller pieces. “Now you are comparing the 10% release against the other 90% to see what the difference is, measuring that, understanding the impact, quantifying it, and learning what’s actually working,” said Optimizely’s Noronha.

While it does reduce risks for businesses, it doesn’t eliminate the chance for failure. Karow explained businesses have to be willing to accept failure or they are not going to get very far. “At the end of the day, what really matters is whether a feature is going to help a user or make them want to use it or not. What a lot of these techniques are about is how do I get hard data to prove what actually works,” Karow explained. 

To get started, Noronha recommends looking for parts of the user experience that drive traffic and making simple changes to experiment with. Once teams prove the practice out and get it entrenched in one area, it can spread to other areas more easily. 

“It’s sort of addictive. Once people get used to working in this way, they don’t want to go back to just launching things. They start to resent not knowing what the adoption of their product is,” he said. 

Noronha expects progressive delivery and feature experimentation will eventually merge. “Everyone’s going to roll out into small pieces, and everyone’s going to measure how those things are doing against the control,” he said. 

What both progressive delivery and feature experimentation do is provide the ability to de-risk your investment in new software and R&D. “They give you the tooling you need to think about decomposing those big risky things into smaller, achievable things where you have faster feedback loops from customers,” LaunchDarkly’s Kodumal added.

Experimenting with A/B testing
A/B testing is one of the most common types of experiments, according to John Kodumal, CTO and co-founder of LaunchDarkly, a feature flag and toggle management company.

It is the method of comparing two versions of an application or functionality. Previously, it was more commonly used for front-end or visual aesthetic changes done to a website rather than a product. For instance, one could take a button that was blue and make it red, and see if that drives more clicks, Jon Noronha, senior vice president of product at Optimizely, a progressive delivery and experimentation platform provider, explained. “In the past several years, we’ve really transitioned to focusing more on what I would call feature experimentation, which is really building technology that helps people test the core logic of how their product is actually built,” he said. 

A/B testing is used in feature experimentation to test out two competing theories and see which one achieves the result the team is looking for. Christopher Condo, a principal analyst at the research firm Forrester, explained that “It requires someone to know and say ‘I think if we alter this experience to the end user, we can improve the value.’ You as a developer want to get a deeper understanding of what kind of changes can improve the UX and so A/B testing comes into play now to show different experiences from different people and how they are being used.”

According to Dave Karow, continuous delivery evangelist at Split, a feature flag, experimentation and CD solution provider, this is especially useful in environments where a “very important person” within the business has an opinion, or the “highest paid person” on the team wants you to do something and a majority of the team members don’t agree. He explained that what someone thinks is going to work usually doesn’t, 8 or 9 times out of 10. But with A/B testing, developers can still test out that theory, and if it fails they can provide metrics and data on why it didn’t work without having to release it to all their customers. 

A good A/B test statistical engine should be able to tell you within a few days which experience or feature is better. Once you know which version is performing better, you can slowly replace it and continue to iterate to see if you can make it work even better, Condo explained. 
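
The core of such a statistical engine can be approximated with a two-proportion z-test. A simplified sketch follows; real engines add safeguards against peeking, multiple comparisons and novelty effects:

```python
from math import erf, sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) comparing conversion rates of variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF via erf
    return z, p_value

# Hypothetical numbers: 1,000 users per arm, 100 conversions on A vs. 130 on B.
z, p = two_proportion_z(100, 1000, 130, 1000)
print(f"z={z:.2f}, p={p:.3f}")  # p below 0.05 suggests B's lift is unlikely to be noise
```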

Kodumal explained A/B testing works better with feature experimentation because in progressive delivery the customer base you are gradually delivering to is too small to run full experiments on and achieve the statistical significance of a fully rigorous experiment.

“We often find that teams get value out of some of the simpler use cases in progressive delivery before moving onto full experimentation,” he said.  

Feature experimentation is for any company with user-facing technology
Feature experimentation has already been used among industry leaders like eBay, LinkedIn and Netflix for years. 

“Major redesigns…improve your service by allowing members to find the content they want to watch faster. However, they are too risky to roll out without extensive A/B testing, which enables us to prove that the new experience is preferred over the old,” Netflix wrote in a 2016 blog post explaining its experimentation platform. 

Up until recently it was only available to those large companies because it was expensive. The alternative was to build your own product, with the time and costs associated with that. “Now there is a growing marketplace of solutions that allow anyone to do the same amount of rigor without having to spend years and millions of dollars building it in-house,” said Dave Karow, continuous delivery evangelist at Split, a feature flag, experimentation and CD solution provider.

Additionally, feature experimentation used to be a hard process to get started with, with no real guidelines to follow. What has started to happen is that large companies are sharing how their engineering teams operate and providing more information on what goes on behind the scenes, according to Christopher Condo, a principal analyst at the research firm Forrester. “In the past, you never gave away the recipe or what you were doing. It was always considered intellectual property. But today, sharing information, people realize that it’s really helping the whole industry for everybody to get better education about how these things work,” Condo said. 

Today, the practice has expanded into something that every major company with some kind of user-facing technology can and should take advantage of, according to Jon Noronha, senior vice president of product at Optimizely, a progressive delivery and experimentation platform provider.

Noronha predicts feature experimentation “will eventually grow to be adopted the same way we see things like source control and branching. It’s going to go from something that just big technology companies do to something that every business has to have to keep up.”

“Companies that are able to provide that innovation faster and bring that functionality that consumers are demanding, they are the ones that are succeeding, and the ones that aren’t are the ones that are left behind and that consumers are starting to move away from,” John Kodumal, CTO and co-founder of LaunchDarkly, a feature flag and toggle management company, added.

How statistics can lead to a successful experiment
https://sdtimes.com/softwaredev/how-statistics-can-lead-to-a-successful-experiment/ (Dec. 20, 2019)

It’s human nature to want things to go your way: dropping little hints for your birthday presents, avoiding certain topics; even companies commissioning surveys tailored to provide the results they want is a well-documented practice. But on a more basic level, we subconsciously (and sometimes consciously) try to influence variables to produce the results we want. This could be something as simple as asking a question in a certain way or setting up a test to deliver the statistics you want.

So, when it comes to experimentation, how do you safeguard yourself against the perils of wishful thinking and hidden biases? First, it is important to remember that great teams don’t run experiments to prove they are right; they run them to answer questions. Keeping this in mind will help when creating an experiment in which to test your new feature or code.

RELATED CONTENT: Waving the flag for feature experimentation

It all starts with a handful of core principles in the design, execution and analysis of experiments, principles proven by teams that run tons of experiments, sometimes in the thousands, every month. Following them increases the chances of learning something truly useful, reduces wasted time and avoids leading the team in an unproductive direction due to false signals.

The Harvard Business Review has noted that between 80 and 90 percent of features shipped have a negative or neutral impact on the metrics they were designed to improve. The issue is, if you’re shipping these features without experimentation, you may not notice that they are not moving the needle. That means that while you feel accomplished by releasing the features, you haven’t actually accomplished anything.

Another issue to look out for when it comes to metrics is the HiPPO syndrome. HiPPO stands for Highest Paid Person’s Opinion. The acronym, first popularized by Avinash Kaushik, refers to the most senior person in the room imposing their opinions onto the company, which can sway decision-making; their very presence can stifle ideas being presented during meetings. This has a negative effect on the design and, ultimately, the metrics.

Right now, you may be thinking, “but things like A/B testing replace or diminish design.” This could not be further from the truth. Good design always comes first, and design is always included. In fact, product managers have a team of designers, coders and testers who all put a lot of effort into setting up their experiment with a new design. A/B testing is an integral tool that tells you whether end users have done what you wanted them to do, based on the success of the design.

But if you’re receiving metrics that look better than before the feature was released, then what’s the problem? The problem is that it’s pretty easy to be fooled. False signals are a very real part of analyzing metrics, and basing your conclusions on them can waste a lot of money, time and energy.

The best way to avoid these traps is to remember these four things: users are the final arbiters of your design decisions, experimentation allows us to watch our users vote with their actions, it is important to know what you are testing and invest time into choosing the right metrics, and, most importantly, any well designed, implemented and analyzed experiment is a successful experiment. 

Industry Watch: What follows CD? Progressive delivery
https://sdtimes.com/devops/industry-watch-what-follows-cd-progressive-delivery/ (Dec. 10, 2019)

Software development and delivery practices continue to evolve and change, so on the heels of the late October DevOps Enterprise Summit, attendees and journalists alike have been asking, ‘Where does it all go from here?’

One area involves value streams, the creation of which allows organizations to see waste in their processes and eliminate it for better efficiency and, ultimately, quality. 

Another is CI/CD. The practice of continually introducing changes to the codebase and deploying those changes out for testing and feedback prior to wide release is well understood. So, how does the industry improve on continuous delivery?

RELATED CONTENT:
Waving the flag for feature experimentation
Feature flags simplify feature development and testing 

According to Adam Zimman, VP of platform at feature experimentation software provider LaunchDarkly, the future is through ‘progressive delivery.’ 

As defined by Redmonk analyst James Governor and Zimman in mid-2018, progressive delivery allows organizations to roll out changes while being mindful of the users’ experience. In a nutshell, Zimman said it’s about having increasing control over who gets to see what, when.  “The idea of being able to garner feedback from specific cohorts prior to broad release of features or products has been a thing since people started selling stuff,” Zimman said. “In the past five years, as the ideas around more continuous delivery — a faster cadence of release cycles — has shifted what tools people look to to be able to do this type of experimentation or controlled rollout.”

A key tool for creating control points in software is the feature flag. Developers add these flags into their code so functionality can be turned on or off for certain cohorts. It can be as simple as turning a feature on or off, or giving access in certain contexts but not in others. For instance, you might want to allow access to features in a staging environment for testing but not in production. And those types of access controls ultimately enable an organization to delegate who gets to manage them.
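
In code, that kind of context-aware control point might look like the following sketch; the rule format and lookup are hypothetical illustrations rather than any vendor’s actual SDK:

```python
# Per-environment targeting: everyone in staging, only beta users in production.
RULES = {
    "new-editor": {
        "staging":    {"on": True, "cohorts": None},       # None = all users
        "production": {"on": True, "cohorts": {"beta"}},
    }
}

def flag_enabled(flag: str, environment: str, cohort: str) -> bool:
    """Evaluate a flag for one user in one environment."""
    rule = RULES.get(flag, {}).get(environment)
    if not rule or not rule["on"]:
        return False
    return rule["cohorts"] is None or cohort in rule["cohorts"]

print(flag_enabled("new-editor", "staging", "general"))     # True
print(flag_enabled("new-editor", "production", "general"))  # False
print(flag_enabled("new-editor", "production", "beta"))     # True
```

Because the rules live in data rather than code, ownership of a control point can move from engineering to operations or product teams without a redeploy, which is the delegation Zimman describes below.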

“If it’s something the developer is providing the feature flag mechanism themselves, or they’re storing the values in a simple database, then the ownership of that control resides with developers,” Zimman said. “You’re talking about accessing a database directly; you’re talking about changing value in that database, so you need to have some level of engineering background to be able to do that without shooting yourself in the foot, basically.”

Then, he added, as you start to roll that into production, “you may look to transition that ownership to an operations team, commensurate with a DevOps model where you want to have equal visibility and shared responsibility.” Finally, as you start to look at this functionality as it starts to impact users, you want to include the individual business owners that are closest to those business outcomes. “That could be product management, that could be product marketing, that could be sales, or customer success,” he continued. “All of these teams that are closer to business outcomes, we have seen in our customers being given responsibility for these control points.”

The use of feature flags has been associated with feature experimentation — A/B testing, blue-green deployments — but Zimman sees distinctions between experimentation and progressive delivery. “Often times, you think of an experiment, I want to put something out there and test if it’s better than what I had before. Or, if it’s something net new, then I want to see if people like it. But the goal of the experiment often times implies that you’re going to have an outcome that either is going to roll it back for everyone, so that no one gets that new thing because it was worse, or you’re going to roll it out to everyone.

“One of the key distinctions with progressive delivery,” he continued, “is the idea that you actually want to have an end state of this control point that is being thoughtful of the user base that it’s applicable to. I talk about this in the context of B2B. That makes a lot of sense for anybody who has kind of looked at the idea of multi-tiers of service, where you have a premium tier, an entry-level tier, and so on. You actually want new features to potentially only go to one of those groups. You’re not actually rolling it out to everyone.” 

In a post from Aug. 2018, Redmonk’s Governor wrote: “A great deal of our thinking in application delivery has been about consistency between development and deployment targets – see the promise of Docker and then Kubernetes. A core aspect of cattle vs pets as an analogy is that the cattle are all the same. The fleet is homogeneous. But we’re moving into a multicloud, multiplatform, hybrid world, where deployment targets vary and we may want to route deployment and make it more, well, progressive.”

And that’s progress.

The feature launch in 5 key phases: A DevOps cheat sheet
https://sdtimes.com/devops/the-feature-launch-in-5-key-phases-a-devops-cheat-sheet/ (Aug. 15, 2019)

The process of launching a new feature has changed a lot over the last decade. Ten years ago, a feature launch was commonly tied to code release. This meant that when the release branch was merged into master and pushed to production, new features riding on that branch would be launched to customers. 

RELATED CONTENT: 
Going ‘lights-out’ with DevOps
Feature flags simplify feature development and testing for Dev teams and QA

This all changed with the introduction of feature flags. Feature flags let DevOps teams separate code release from new feature launches by putting a new feature behind a flag and slowly releasing it to a certain demographic of users until it is ready for full release. A flag can be completely off, completely on, or partially on (allowing a certain segment or percentage of users to experience the new feature).

This revolutionized the industry, allowing engineers to test the scalability of systems that support the feature and product managers to tie metrics to every feature launch and better track it. But these new possibilities raised some questions. Many DevOps teams were stuck wondering how many steps are required in this ramp and how long they should spend on each step.

The steps taken for a feature launch using a feature flag should be chosen with care. Taking too many steps or taking too long at any step can slow down innovation. Taking big jumps or not spending enough time at each step can lead to suboptimal outcomes. As daunting as this may seem, approaching your feature launch in the following five key phases will help ensure that it runs smoothly. 

The first of these phases is known as the dogfooding phase. The purpose of this phase is to detect any integration bugs, get feedback from team members on the design and feel, ensure that the product gets certified by Quality Assurance, and conduct training for the sales and support teams. This step should be quick, as it is not the part of the process where performance challenges are identified or the impact of the feature is measured.

The next phase is the debugging phase. The goal of this phase is to reduce any risk of obvious bugs or bad user experience. Ensure that any UI component renders correctly, and that the system can take the load of the feature. This phase should be conducted via a few quick ramps (i.e. 1 percent, 5 percent, or 10 percent of users) lasting no more than a day, and the focus should not be on the feature impact on user experience but on debugging the feature.

The phase after that is the maximum power ramp (MPR) phase. Once the feature has been debugged and the risks have been significantly reduced, the new goal is decision-making. This is the part of the process where you determine whether or not the new feature is positively impacting the metrics it is supposed to enhance. At this point, you should release the feature to 50 percent of the users; this is the quickest way to collect the customer impact data. 

Next comes the scalability phase. The previous phase should have provided data on whether the feature was successful and, assuming it was, the next step is to release the feature to all users. However, you may still be concerned about whether the system can handle 100 percent of users having access to the new feature. The resolution is to increase the release from 50 percent to 75 percent and leave it there for about one day of peak traffic, to build confidence that the system will be able to handle the full load.

The last phase of the feature launch is the learning phase. This phase is to understand the long-term impact of the features on users. For example, if your platform uses advertisements, did the new feature cause long-term ad blindness? The way to address these concerns is by holding back the new feature from 5 percent of the users for about a month. This will enable you to better analyze the long-term impacts and give you time to fix them. 

Overall, remember that the dogfooding phase is for internal feedback, the debugging and scalability phases are meant for risk mitigation, and the MPR and learning phases are meant to speed up learning and decision-making.
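
The five phases map naturally onto a declarative ramp schedule. Here is a sketch of what that configuration might look like, with the percentages taken from the phases above and all names hypothetical:

```python
from dataclasses import dataclass

@dataclass
class RampStep:
    phase: str
    percent_exposed: int     # share of users who see the new feature
    min_duration_days: int   # how long to hold before ramping further

LAUNCH_PLAN = [
    RampStep("dogfooding",  0,   1),   # internal users only, via an allowlist
    RampStep("debugging",   1,   1),   # quick ramps: 1% -> 5% -> 10%
    RampStep("debugging",   5,   1),
    RampStep("debugging",   10,  1),
    RampStep("max power",   50,  7),   # fastest path to impact data (duration assumed)
    RampStep("scalability", 75,  1),   # about one day of peak traffic
    RampStep("learning",    95,  30),  # hold back 5% for roughly a month
    RampStep("launched",    100, 0),
]

for step in LAUNCH_PLAN:
    print(f"{step.phase:<12} {step.percent_exposed:>3}% for >= {step.min_duration_days}d")
```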

The introduction of feature flags has enabled DevOps teams to take more control of the process and significantly reduce the odds of “failed” launches along the way. While feature flags can pose a lot of new questions, implementing these five phases, while having specific objectives for each, will help ensure your launch goes off without a hitch.

Flagging new software features
https://sdtimes.com/softwaredev/flagging-new-software-features/ (Oct. 8, 2018)

Decisions, decisions.

That’s what attendees said their organizations were struggling with, at the inaugural DECISIONS conference, put on by software experimentation platform provider Split.io in San Francisco last week.

Many organizations today find themselves with a mix of senior employees, who have years of institutional knowledge and understanding of their market, and younger workers who look at the world through more of a digital lens.

That mix also affects the decisions these companies make. The older workers tend to make decisions based on their knowledge of their products, their customers and their competitors. The younger workers make decisions based primarily on data.

The older workers sometimes “go with their gut” when it comes to product updates. Sometimes they’re right and sometimes they’re not. The younger workers rely on data, which can also lead to incorrect decisions if the data collected was not the right data, or if the metrics the organization has chosen to measure are not the right ones.

Split and some of its partners were able to show attendees how to use data collected from customers to inform development decisions. With the use of feature flags, Split empowers developers to try functionality out, deploy to as small or as large a group of users as they want, and roll back to the prior state if things don’t go well.

All in the name of making better decisions.
