Datadog Archives - SD Times https://sdtimes.com/tag/datadog/

Report: Java is the language that’s most prone to third-party vulnerabilities https://sdtimes.com/security/report-java-is-the-language-thats-most-prone-to-third-party-vulnerabilities/ Wed, 17 Apr 2024

According to Datadog’s State of DevSecOps 2024 report, 90% of Java services have at least one high or critical severity vulnerability.

This compares to around 75% for JavaScript services, 64% for Python, and 50% for .NET. The average across all languages studied was 47%.

The company found that Java services are also more likely to be actively exploited than services written in other languages: 55% have suffered exploitation attempts, compared to a 7% average for other languages.

Datadog believes this may be because many prevalent vulnerabilities exist in popular Java libraries, such as Tomcat, Spring Framework, Apache Struts, Log4j, and ActiveMQ.

“The hypothesis is reinforced when we examine where these vulnerabilities typically originate. In Java, 63 percent of high and critical vulnerabilities derive from indirect dependencies — i.e., third-party libraries that have been indirectly packaged with the application. These vulnerabilities are typically more challenging to identify, as the additional libraries in which they appear are often introduced into an application unknowingly,” Datadog wrote in the report.

The company says this serves as a reminder that developers need to consider the full dependency tree when scanning for application vulnerabilities, not just the direct dependencies.
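
The difference a full-tree scan makes is easy to see in miniature. The following sketch walks a small dependency graph the way a scanner must; the package names, graph, and “vulnerable” set are invented for illustration, not drawn from any real advisory feed.

```python
# Minimal sketch: scan the full dependency tree, not just direct dependencies.
# The graph and the "vulnerable" set below are hypothetical examples.
DEP_GRAPH = {
    "my-app": ["spring-web", "log4j-api"],     # direct dependencies
    "spring-web": ["spring-core", "jackson"],  # everything below is indirect
    "jackson": ["snakeyaml"],
    "log4j-api": ["log4j-core"],
    "spring-core": [], "snakeyaml": [], "log4j-core": [],
}
VULNERABLE = {"log4j-core", "snakeyaml"}       # stand-in advisory database

def all_dependencies(root: str) -> set[str]:
    """Collect every dependency transitively reachable from root."""
    seen: set[str] = set()
    stack = list(DEP_GRAPH.get(root, []))
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(DEP_GRAPH.get(dep, []))
    return seen

direct = set(DEP_GRAPH["my-app"])
full = all_dependencies("my-app")
print("flagged by direct-only scan:", VULNERABLE & direct)  # nothing found
print("flagged by full-tree scan:  ", VULNERABLE & full)    # both hits found
```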

The second major finding of the report is that the largest share of exploitation attempts comes from automated security scanners, and that most of those attacks are harmless, amounting mostly to noise for the companies trying to defend against attacks.

Only 0.0065 percent of attacks performed by automated security scanners actually triggered vulnerabilities. 

Given how prevalent yet harmless these attacks are, Datadog believes the finding underscores the need for a good system for prioritizing alerts.

According to the report, the CVE project cataloged over 4,000 high and 1,000 critical vulnerabilities last year. However, research published in the Journal of Cybersecurity in 2020 found that only 5 percent of vulnerabilities are ever actually exploited.

“Given these numbers, it’s easy to see why practitioners are overwhelmed with the amount of vulnerabilities they face, and why they need prioritization frameworks to help them focus on what matters,” Datadog wrote. 

Datadog found that organizations that make the effort to address their critical vulnerabilities have success in removing them: 63% of organizations that at one point had a critical CVE no longer have any, and 30% have cut their number of critical vulnerabilities in half.

The company recommends that organizations prioritize vulnerabilities based on whether the impacted service is publicly exposed, whether the vulnerability is running in production, and whether exploit code is publicly available.

“While other vulnerabilities might still carry risk, they should likely be addressed only after issues that meet these three criteria,” Datadog wrote. 
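
As a rough illustration, that triage rule can be expressed as a simple filter. This is a sketch only; the record fields below are invented and do not correspond to any particular scanner’s output format.

```python
# Hypothetical vulnerability records; field names are illustrative only.
findings = [
    {"cve": "CVE-2024-0001", "publicly_exposed": True,  "in_production": True,  "public_exploit": True},
    {"cve": "CVE-2024-0002", "publicly_exposed": False, "in_production": True,  "public_exploit": True},
    {"cve": "CVE-2024-0003", "publicly_exposed": True,  "in_production": False, "public_exploit": False},
]

def fix_first(finding: dict) -> bool:
    # Datadog's suggested first-pass filter: all three criteria at once.
    return (finding["publicly_exposed"]
            and finding["in_production"]
            and finding["public_exploit"])

urgent = [f["cve"] for f in findings if fix_first(f)]
deferred = [f["cve"] for f in findings if not fix_first(f)]
print("fix first:", urgent)    # -> ['CVE-2024-0001']
print("fix later:", deferred)  # -> the other two
```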

Other interesting findings in Datadog’s report are that lightweight container images lead to fewer vulnerabilities, adoption of infrastructure as code is high, manual cloud deployments are still widespread, and usage of short-lived credentials in CI/CD pipelines is still low.

Datadog introduces new continuous testing platform https://sdtimes.com/testing/datadog-introduces-new-continuous-testing-platform/ Thu, 20 Oct 2022

Datadog, the monitoring and security platform for cloud applications, has announced the general availability of Datadog Continuous Testing, which helps developers and quality engineers create, manage, and run end-to-end tests for their web applications.

The release is intended to speed up software release cycles by providing users with a complete testing workbench that simplifies test creation and maintenance.

According to the company, this release allows engineers to create tests directly from the UI without the need for scripting, run tests in parallel, and integrate with CI tools so tests can become a part of their existing CI process.

“Creating and running end-to-end tests today is a time-consuming and error-prone process that many teams struggle with as they scale. This impacts release velocity as engineers need wide test coverage in order to safely deploy new code in production while avoiding regressions. On top of this, teams need tests to be fast and resilient, otherwise developers start to avoid the CI so that they can ship code faster,” said Renaud Boutet, SVP of product at Datadog. “Continuous Testing solves this problem by giving engineers a platform to quickly create, run and manage their tests in one place.”

Key features of this release include:

  • No-code test creation to allow any team member to click through their application like an end user would and create end-to-end tests

  • The ability to run several tests simultaneously to reduce testing time

  • Self-healing capabilities that work to adjust to changes without intervention from the user

  • Failure troubleshooting to allow users to drill down into backend traces and session replay to pinpoint the cause of a failure

  • The ability to integrate with existing CI tools and leverage Continuous Testing with their technology stack
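
The CI integration in the last item follows a familiar pattern: a pipeline job triggers a batch of tests, waits for the results, and fails the build if any test failed. The sketch below shows the shape of that loop in Python; the endpoint paths and response fields are assumptions made for illustration, not Datadog’s documented API.

```python
import os
import sys
import time

import requests  # third-party: pip install requests

# NOTE: the paths and response fields below are illustrative assumptions.
BASE = "https://api.datadoghq.com/api/v1/synthetics"
HEADERS = {
    "DD-API-KEY": os.environ["DD_API_KEY"],
    "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
}

def gate_on_tests(test_ids: list[str]) -> None:
    """Trigger tests from a CI job and fail the build on any failure."""
    resp = requests.post(
        f"{BASE}/tests/trigger/ci",  # assumed trigger endpoint
        headers=HEADERS,
        json={"tests": [{"public_id": tid} for tid in test_ids]},
    )
    resp.raise_for_status()
    time.sleep(60)  # crude wait; a real job would poll until the batch completes
    for tid in test_ids:
        data = requests.get(f"{BASE}/tests/{tid}/results", headers=HEADERS).json()
        latest = (data.get("results") or [{}])[0]  # assumed response shape
        if latest.get("result", {}).get("passed") is not True:
            sys.exit(f"synthetic test {tid} failed; blocking this release")

if __name__ == "__main__":
    gate_on_tests(["abc-123-def"])  # placeholder test ID
```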

Pricing and additional details for Datadog Continuous Testing are available on Datadog’s website.

SD Times news digest: SharePoint Framework 1.14 Release Candidate; Salt Security raises $140 million in Series D; Datadog completes acquisition of CoScreen https://sdtimes.com/softwaredev/sd-times-news-digest-sharepoint-framework-1-14-release-candidate-salt-security-raises-140-million-in-series-d-datadog-completes-acquisition-of-coscreen/ Thu, 10 Feb 2022

The SharePoint Framework 1.14 Release Candidate brings updates for Viva Connections, Microsoft Teams, and SharePoint Online experiences. General availability of SharePoint Framework 1.14 is set for mid-February, with no adjustments planned.

Microsoft encourages users to submit any feedback on the Release Candidate using the sp-dev-docs issue list. Key features of this Release Candidate include:

  • Updated Adaptive Card Extensions scaffolding to be more succinct
  • Further streamlined Yeoman generator experience for all solution types
  • Additional issue fixes based on customer and partner reports in the sp-dev-docs issue list
  • Additional fixes and adjustments to component capabilities

For a full list of release features, see here.

Salt Security raises $140 million in Series D

The API security company Salt Security today announced that it has closed a $140 million Series D funding round. The round was led by CapitalG with participation from all existing investors.

Salt Security intends to use this financing to expand R&D investment, fuel sales and marketing, and more quickly grow its international operations in order to address the increasing number of cyber threats currently targeting APIs.

The funding brings the company’s valuation to $1.4 billion and comes just eight months after it raised $70 million in a Series C round.

Datadog completes acquisition of CoScreen

Datadog, the monitoring and security platform for cloud applications, today announced the completion of its acquisition of CoScreen, a collaboration platform for technical teams. The acquisition brings several new capabilities to Datadog’s platform, all intended to help engineers share their screens and work together during incident and security response, pair programming, prototyping, debugging, and other activities in an integrated, joint workspace.

“Bringing teams together has always been Datadog’s core mission,” said Ilan Rabinovitch, senior VP of product and community at Datadog. “Adding CoScreen’s real-time communication capabilities helps our customers bring distributed teams closer together and move forward with in-product collaboration. The end result is higher developer productivity, faster incident response and reduced mean time to resolution.”

Creatio 8.0 Atlas released

Creatio, the low-code solution for process management and CRM, released Creatio 8.0 Atlas, which brings additional tools for successfully building enterprise applications and workflows using no-code.

Key features of this release include:

  • Application Hub to jump start the creation of new apps 
  • No-code Designer to create and modify pages, data models, workflows, and integrations 
  • Freedom UI Designer to create any type of UI page leveraging tools for layout, UI actions and behavior, color schemes, and more

See here to request a free trial and get started using Creatio 8.0 Atlas. 

SD Times news digest: LogDNA announces Spike Protection, Boomi adds Data Catalog and Preparation service to AtomSphere Platform, and Cloudflare launches new integrations https://sdtimes.com/data/sd-times-news-digest-logdna-announces-spike-protection-boomi-adds-data-catalog-and-preparation-service-to-atomsphere-platform-and-cloudflare-launches-new-integrations/ Tue, 22 Jun 2021

LogDNA has announced Spike Protection to give companies more control over fluctuations in their data and spend. 

LogDNA Spike Protection gives DevOps teams tools, namely Index Rate Alerting and Usage Quotas, to understand and manage increases in data volume and gain additional insight into anomalous spikes.

The company also today announced its Agent 3.2 release for Kubernetes and OpenShift, which introduces the configuration of log inclusion/exclusion rules, along with log redaction, using regex patterns. 
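
Regex-driven inclusion/exclusion and redaction of this kind is simple to picture. Below is a minimal, generic sketch of the technique in Python; the patterns and exclusion rule are examples, not LogDNA’s configuration syntax or defaults.

```python
import re

# Example patterns an operator might configure; not LogDNA's built-ins.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED-EMAIL]"),
]
EXCLUDE = re.compile(r"GET /healthz")  # drop noisy health-check lines entirely

def process(line: str) -> str | None:
    """Return the redacted line, or None if the line should be excluded."""
    if EXCLUDE.search(line):
        return None
    for pattern, replacement in REDACTIONS:
        line = pattern.sub(replacement, line)
    return line

print(process("user bob@example.com ssn 123-45-6789 logged in"))
# -> user [REDACTED-EMAIL] ssn [REDACTED-SSN] logged in
print(process('10.0.0.1 "GET /healthz" 200'))  # -> None (excluded)
```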

Additional details are available here.

Boomi adds Data Catalog and Preparation service to AtomSphere Platform

Boomi announced the addition of the Data Catalog and Preparation AtomSphere Service to the Boomi AtomSphere Platform, along with a new Data Operations Professional Services Offering (DataOps PSO).

Data Cloud Platform is also now available as a fully managed cloud-based service to work with the rest of the Boomi AtomSphere Platform.

“We are leading the evolution of iPaaS to include key aspects of data management that address the common problems of data discovery, preparation, and governance that we often see integration projects suffer from,” said Ed Macosky, head of product at Boomi. “Customers no longer need to integrate multiple point solutions to share data between systems and data repositories – they can simply access the Boomi platform to holistically accelerate data readiness processes across their enterprise.”

Cloudflare launches new integrations with Microsoft, Splunk, Datadog, and Sumo Logic 

The new integrations with analytics partners will make it easier for businesses to connect and analyze key insights across their infrastructure, according to Cloudflare.

“CISOs want their security teams to focus on security, not building clunky and costly integrations just to get insights from all of the different applications and tools in their infrastructure,” said Matthew Prince, co-founder and CEO of Cloudflare. “We saw an opportunity to make that process faster, easier, and cheaper, working with other top analytics platforms to bring added value to our customers.”

Cloudflare is also giving customers the ability to get insights from new datasets, take logs from anywhere with support for any storage destination, and easily visualize data in a new user interface.

Splunk announces $1 billion investment 

Splunk announced a $1 billion investment from Silver Lake that it will use to fund growth initiatives and manage its capital structure. 

In connection with Silver Lake’s investment, Kenneth Hao, the chairman and managing partner of Silver Lake, will be appointed to Splunk’s board of directors.

Under the terms of the investment, Silver Lake will purchase $1 billion in aggregate principal amount of Splunk’s convertible senior notes. 

Report: Serverless now a critical part of many software stacks https://sdtimes.com/softwaredev/report-serverless-now-a-critical-part-of-many-software-stacks/ Wed, 26 May 2021

Serverless isn’t just a fad; it’s here to stay. According to Datadog’s State of Serverless 2021 report, AWS Lambda functions were invoked 3.5 times more often than they were in 2019. The company explained this is an indication that teams are making serverless a critical part of their software stacks, not just experimenting with it. 

AWS Lambda invocations are also much faster than they were even just a year ago. In 2020 the median Lambda invocation took 60 milliseconds, which Datadog says is half the time it took in the previous year. One possible reason for this is that more organizations are following Lambda best practices and designing functions that are specific to their workloads. 

RELATED CONTENT: Evaluating if serverless is right for you

According to the report, the tail of the latency distribution is long. This is an indication that Lambda isn’t just powering short-lived jobs, but also more computationally intense use cases.

Adoption of Azure Functions and Google Cloud Functions is also gaining momentum. Over the last year, the share of Azure companies running Azure Functions increased from 20% to 36%. Nearly a quarter of companies using Google Cloud are using Google Cloud Functions.

“What we saw in this report, and then last year’s, is that serverless is really here to stay and that it’s growing super fast,” said Stephen Pinkerton, product manager at Datadog. “What we see is serverless is kind of in every type of organization and people are using it to solve a variety of problems.”

AWS Step Functions is also becoming important for those practicing serverless. AWS Step Functions allows developers to build workflows that involve multiple Lambda functions and AWS services. It can coordinate error handling, retries, timeouts, and other application logic, helping to reduce operational complexity as applications scale. According to the report, the average Step Functions workflow contains four Lambda functions. This number continues to grow month over month, according to Datadog.
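
That coordination logic lives in the workflow definition itself, written in Amazon States Language, rather than inside each function. Below is a minimal sketch of such a definition, expressed here as a Python dict; the Lambda ARNs are placeholders.

```python
import json

# Amazon States Language sketch: two Lambda tasks with declarative retry/catch.
state_machine = {
    "StartAt": "Validate",
    "States": {
        "Validate": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate",
            "Retry": [{
                "ErrorEquals": ["States.TaskFailed"],
                "IntervalSeconds": 2,
                "MaxAttempts": 3,
                "BackoffRate": 2.0,
            }],
            "Catch": [{"ErrorEquals": ["States.ALL"], "Next": "NotifyFailure"}],
            "Next": "Process",
        },
        "Process": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process",
            "TimeoutSeconds": 30,
            "End": True,
        },
        "NotifyFailure": {"Type": "Fail", "Cause": "validation failed after retries"},
    },
}
print(json.dumps(state_machine, indent=2))  # definition to pass to CreateStateMachine
```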

There are two types of workflows that can be executed by Step Functions: Standard and Express. Datadog believes that since over 40% of workflows are being executed in under a minute, it’s likely that organizations are using Express workflows to support high-volume event processing workloads.

In addition, while many Step Functions workflows execute quickly, others run for a long time. The longest workflows run for over a week, according to the report.  

Developers are turning to Lambda for edge computing as well. According to the report, a quarter of organizations that use Amazon CloudFront use Lambda@Edge, which can be used for tasks like transforming images based on user characteristics or serving different versions of an application for A/B testing. 

The report found 67% of Lambda@Edge functions run in under 20 milliseconds, possibly indicating that serverless edge computing has the potential to support even the most latency-critical applications. “As this technology matures, we expect to see more organizations relying on it to improve their end-user experience,” the report stated. 

Datadog also found that organizations are overspending on Provisioned Concurrency, an AWS feature designed to reduce the startup delay in the execution of functions, otherwise called a “cold start.” Over half of functions use less than 80% of their Provisioned Concurrency, while 40% use their entire allocation, leaving them open to the possibility of still encountering cold starts.
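
The arithmetic behind that finding is straightforward: utilization is peak concurrent executions divided by the provisioned allocation. A toy illustration with invented numbers:

```python
def review_allocation(provisioned: int, peak_concurrency: int) -> str:
    """Classify a function's Provisioned Concurrency usage, per the report's cutoffs."""
    utilization = peak_concurrency / provisioned
    if utilization < 0.80:
        return f"over-provisioned ({utilization:.0%} used): paying for idle capacity"
    if peak_concurrency >= provisioned:
        return "fully used: requests beyond the allocation may still hit cold starts"
    return f"well sized ({utilization:.0%} used)"

print(review_allocation(provisioned=100, peak_concurrency=55))   # over-provisioned
print(review_allocation(provisioned=100, peak_concurrency=100))  # fully used
```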

“We see overallocation… and so I think this is just a sign that people are getting very sophisticated with serverless and with running applications here, but there’s still a lot of learning that we all have to do on how to best use the tools,” said Pinkerton. 

Python is the most popular Lambda runtime in large environments, by far. Fifty-eight percent of deployed Lambda functions use Python, an increase of 11% since last year. Another 31% run Node.js, down 8% from last year. However, when looking at small AWS environments only, Node.js is used more often than Python. The remaining runtimes (Java, Go, .NET Core, and Ruby) are used in less than 10% of Lambda functions in large organizations.

The Serverless Framework seems to be the top serverless deployment tool, among options like AWS Cloud Development Kit (CDK) and AWS Serverless Application Model (SAM). Ninety percent of CloudFormation users use the Serverless Framework, 19% use vanilla CloudFormation, 18% use AWS CDK, and 13% use AWS SAM, according to the report.

LaunchDarkly adds flag triggers for Honeycomb and Datadog https://sdtimes.com/test/launchdarkly-adds-flag-triggers-for-honeycomb-and-datadog/ Wed, 09 Sep 2020

Feature management company LaunchDarkly has announced that it is adding flag triggers through new integrations with Honeycomb and Datadog. 

Flag triggers are one-step automations that can fire after a specific alert goes off or a performance metric is met. They work by receiving webhooks sent to a unique URL, which allows flags to be turned on and off from any tool that can fire a webhook, such as a CI or APM tool.
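
Wiring a trigger into an alerting pipeline therefore takes only a few lines. A hedged sketch in Python: the trigger URL is a placeholder for the unique URL generated per trigger, the alert payload shape is invented, and a bare POST is assumed to be enough to fire the trigger.

```python
import requests  # third-party: pip install requests

# Placeholder for the unique URL a flag trigger exposes.
KILL_SWITCH_TRIGGER = "https://app.launchdarkly.com/webhook/triggers/<trigger-id>"

def on_alert(alert: dict) -> None:
    """Called by a monitoring tool; fires the flag trigger when errors spike."""
    if alert.get("metric") == "error_rate" and alert.get("value", 0) > 0.05:
        # The trigger itself encodes which flag to change and how.
        resp = requests.post(KILL_SWITCH_TRIGGER, timeout=5)
        resp.raise_for_status()
        print("flag trigger fired; feature disabled")

on_alert({"metric": "error_rate", "value": 0.12})  # simulated alert payload
```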

RELATED CONTENT: Waving the flag for feature experimentation

According to LaunchDarkly, one benefit of feature flagging is that developers can disable features that are negatively impacting users or systems. Flags can also be used to smoothly advance a rollout when everything is going as planned.

In addition to the integrations with Honeycomb and Datadog, developers can create generic webhook triggers that can be invoked from other systems or tools.

LaunchDarkly also plans to add configurations for SignalFx, New Relic, Dynatrace, AppDynamics, LogDNA, Splunk, and more.

Datadog brings security, performance monitoring together with four product releases https://sdtimes.com/security/datadog-brings-security-performance-monitoring-together-with-four-product-releases/ Tue, 11 Aug 2020

Datadog today is revealing its vision for bringing security and performance monitoring into a single platform in the form of updates and new product features for its cloud infrastructure monitoring platform.

At its virtual DASH conference this week, the company announced Error Tracking, Incident Management, Compliance Monitoring and Continuous Profiler, rounding out its platform to make it easier for developers to find deep performance issues with their applications. For operations teams, the new Incident Management product enables debugging and issue resolution, and for security and compliance teams, full visibility into cloud environments gives them a means to ensure misconfigurations don’t create problems.

These products join the company’s already existing infrastructure monitoring, APM and log management capabilities in the Datadog platform.

“In our opinion, security and observability are both coming together in modern applications. What used to be siloed security teams, and development teams and operations teams, in modern web-based applications they’re all starting to come together,” Amit Agarwal, chief product officer at Datadog, said in a briefing on the announcements. “Applications have become Agile; you make changes to it every day. So they need to be in lockstep and sync to solve many of the problems. What we are offering to our customers is a single platform to do both monitoring and security, because it’s all based on the same data… the same logs are used in one context by developers and operations people, to see why performance is poor, and the same ones are used by security people to see, well, maybe the performance is bad because someone is doing a denial of service attack.”

The Error Tracking tool, which becomes available today, focuses on how errors are affecting the customer experience, and aggregates all the errors that might be occurring across all of the application’s users into a small list of issues that represent the specific bugs users are encountering. “This provides us a better overview of the health of the application, rather than a firehose of data,” said Ilan Rabinovitch, vice president of product and community at Datadog. “Developers take advantage of our RUM product, APM and logging. Logs and APM let them get a good sense of what the experience looks like server-side, and our real user monitoring product emits telemetry from the user side, either web or mobile traffic, to see how it’s performing on the actual users’ computers. By combining the three, we get a pretty good picture of the customers’ experience.”

The Continuous Profiler, like traditional profilers, measures the performance of an application and gives visibility down to the line of code where the problem exists. “When deploying code, every application developer has these three questions in mind,” explained Renaud Boutet, vice president of product at Datadog. “Am I delivering a fast user experience? Am I over-consuming resources? And, probably more stressful, am I going to create an incident in production? Historically, people have been using profiling solutions to remediate and solve these problems… however, legacy profiling tools have such a performance overhead that they are used almost exclusively at the development stage. Meanwhile the production environment, which represents the real world and all the unexpected behaviors, is actually not covered.”

According to the company’s announcement, “Datadog Continuous Profiler closes this visibility gap with minimal resource-overhead that allows for always-on profiling. Having constant visibility into code performance allows developers to more effectively identify hidden performance bottlenecks.” 
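
The low overhead that makes always-on profiling feasible typically comes from sampling: rather than instrumenting every call, a sampling profiler periodically snapshots each thread’s stack and aggregates the counts. Here is a toy version of that general technique in Python; it illustrates the concept, not Datadog’s implementation.

```python
import collections
import sys
import threading
import time

samples = collections.Counter()
profiling = True

def sampler(interval: float = 0.01) -> None:
    """Periodically record where the main thread is executing."""
    main_id = threading.main_thread().ident
    while profiling:
        frame = sys._current_frames().get(main_id)
        if frame is not None:
            samples[f"{frame.f_code.co_name}:{frame.f_lineno}"] += 1
        time.sleep(interval)

def busy() -> float:
    """Workload whose hot spots we want to find."""
    total = 0.0
    for i in range(1, 2_000_000):
        total += i ** 0.5
    return total

thread = threading.Thread(target=sampler, daemon=True)
thread.start()
busy()
profiling = False
thread.join()
for location, count in samples.most_common(3):
    print(f"{count:4d} samples at {location}")  # hottest lines first
```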

On the incident management side, Datadog’s new product understands that as much as the practice involves a technical response, it’s also very much a human one. “It’s not just a question of finding that line of code … but there’s also a lot of time spent assembling your team, deciding who needs to be on that team, what resources they need at their fingertips, and what data you want to give them to convince them of an incident,” Rabinovitch said. “So time to detection and resolution of an incident is just as much about getting your team coordinated as it about those technical responses.”

The Incident Management product brings together a set of tools that let you launch an investigation with your team and pull in all the people you need. It helps you create a timeline of all the actions your team has taken, collect all those signals, and share them with your teams on various collaboration platforms, Rabinovitch said.

To support the Incident Management workflow, the company announced that an Android and iOS application for interacting with Datadog monitors and dashboards on the go is now generally available. Also, a ChatBot that integrates with Slack enables access to Datadog data, and improvements to Datadog Notebooks allow for real-time collaboration and feed directly into postmortems.

On the security side, Datadog is releasing its new Compliance Monitoring product into beta today. “Security has always been a priority, more so now than ever, as businesses move online, and devs and ops teams are moving faster,” Boutet said. The compliance tool, according to the company announcement, “tracks the state of all cloud-native resources, such as security groups, storage buckets, load balancers, and Kubernetes.”

Among the key features are security observability that enables users to discover assets and their configurations and combine that information with Datadog’s full telemetry, a compliance status snapshot, file integrity monitoring, continuous configuration assessment, and a simple WYSIWYG interface for creating custom security and governance policies.

A big part of the problem organizations are looking to overcome is that developers aren’t trained well in security, and security teams don’t have a solid understanding of the software development lifecycle.

SD Times news digest: GitLab.com transitions CDN to Cloudflare, LaunchDarkly raises $53 million, and Datadog launches partner network https://sdtimes.com/softwaredev/sd-times-news-digest-gitlab-com-transitions-cdn-to-cloudflare-launchdarkly-raises-53-million-and-datadog-launches-partner-network/ Fri, 17 Jan 2020

GitLab.com has announced that it is changing its content delivery network to Cloudflare. Currently, they are using Fastly to serve content, but switching to Cloudflare will allow them to have a single vendor for CDN, WAF, and DDoS protection.  

According to GitLab, this will only affect some GitLab.com users, not GitLab self-managed users. Affected users will need to reconfigure their firewalls to new IP ranges. Custom runner images or private runners that cache DNS or SSL certificates may also be affected.

GitLab will start exploring other Cloudflare features like WAF once they have confirmed that traffic is flowing properly. 

LaunchDarkly raises $54 million in funding
The feature management platform just completed a $54 million funding round, bringing its total funding to $130 million. The round was led by Bessemer Venture Partners; Threshold Ventures, Redpoint Ventures, Uncork Capital, Vertex Ventures, and Bloomberg Beta also participated.

The company will use this funding to address the increased demand for their platform. According to the company, last year, they served over 1 trillion feature flags per day for 1,000 of their customers. This was an increase of 500% from 2018. LaunchDarkly believes the increased demand was due to increasing pressure for companies to innovate faster and “deliver exceptional user experiences.”

Datadog announces new partner network
The monitoring and analytics provider will be using this new program to expand its existing support for channel partners. Members of the Partner Network will receive benefits such as go-to-market collateral, self-service training for implementation, opportunity registration in the Partner Portal, and a Partner Locator Listing. 

Managed service providers, system integrators, resellers, referral partners, and technology partners building on the Datadog platform are eligible to join. 

“Partners have been an important part of Datadog’s success, bringing our cloud monitoring platform to customers through a wide variety of channels,” said Deniz Tortop, VP of worldwide channels & alliances at Datadog. “The Datadog Partner Network will strengthen these commitments and increase our support for alliances, benefitting our partners, our customers, and the industry.”

Synopsys joins new Autonomous Vehicle Computing Consortium
The Consortium brings automotive, automotive supply, semiconductor, and computing experts together to accelerate the delivery of safer, more affordable vehicles. Synopsys will actively contribute to the development of recommendations for architectures and computing platforms that can be used to address the challenges of deploying self-driving vehicles at scale. 

“The Autonomous Vehicle Computing Consortium is focused on tackling the complexities and obstacles associated with the deployment of autonomous vehicles,” said Armando Pereira, president of the Autonomous Vehicle Computing Consortium. “We look forward to Synopsys’ active contribution to the consortium, helping to define a reference architecture and platform that address the design requirements for autonomous driving and move today’s prototype systems to reality.”

Google and Netflix introduce open-source automated canary analysis service https://sdtimes.com/os/google-and-netflix-introduce-open-source-automated-canary-analysis-service/ Tue, 10 Apr 2018

Google and Netflix have announced a new project designed to reduce the risk of rapidly rolling out deployments to production. Kayenta is an open-source automated canary analysis service designed to enable teams to quickly push production changes and perform continuous delivery at scale.

Kayenta is based on Netflix’s internal canary system, but has been updated to handle more advanced use cases and reduce error-prone and time-consuming ad-hoc canary analysis.

“Automated canary analysis is an essential part of the production deployment process at Netflix and we are excited to release Kayenta,” Greg Burrell, senior reliability engineer at Netflix, said in a blog post. “Our partnership with Google on Kayenta has yielded a flexible architecture that helps perform automated canary analysis on a wide range of deployment scenarios such as application, configuration and data changes.”

Kayenta is integrated with the open-source multi-cloud continuous delivery platform Spinnaker. The integration will allow teams to set up an automated canary analysis stage within a Spinnaker pipeline. According to Google, this allows users to specify what metrics and sources to check. Monitoring tools include Stackdriver, Prometheus, Datadog, and Netflix’s internal tool Atlas.
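
At its core, automated canary analysis compares a metric series collected from the canary against the same series from a baseline and renders a pass/fail judgment. Below is a deliberately simplified sketch of that comparison in Python; the data and threshold are invented, and Kayenta’s real judges are far more sophisticated.

```python
from statistics import mean

def judge(baseline: list[float], canary: list[float], tolerance: float = 0.10) -> str:
    """Toy canary judge: fail if the canary's average regresses past tolerance."""
    ratio = mean(canary) / mean(baseline)
    if ratio <= 1 + tolerance:
        return "PASS: promote the canary"
    return f"FAIL: canary metric at {ratio:.0%} of baseline, roll back"

baseline_latency_ms = [102, 98, 105, 99, 101]  # current production instances
canary_latency_ms = [131, 140, 128, 135, 137]  # instances running the new build
print(judge(baseline_latency_ms, canary_latency_ms))  # -> FAIL (~133% of baseline)
```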

“Spinnaker’s integration with Kayenta allows teams to stay close to their pipelines and deployments without having to jump into a different tool for canary analysis,” said Burrell. “By the end of the year, we expect Kayenta to be making thousands of canary judgments per day. Spinnaker and Kayenta are fast, reliable and easy-to-use tools that minimize deployment risk, while allowing high velocity at scale.”

Other benefits of Kayenta include the ability to perform automated canary analysis without vendor lock-in, detect problems across canaries, perform automated canary analysis across multiple environments, and adjust boundaries and parameters while performing automated canary analysis.

“With Kayenta, you now have an open, automated way to perform canary analysis and quickly deploy changes to production with confidence. By open-sourcing Kayenta, our goal is to build a community where metric stores and judges are provided both by the open source community and via proprietary systems,” the Google team wrote in a post.

Efforts to standardize tracing through OpenTracing https://sdtimes.com/apm/efforts-standardize-tracing-opentracing/ Thu, 05 Apr 2018

Industry efforts toward distributed tracing have been evolving for decades, and one of the latest initiatives in this arena is OpenTracing, an open distributed standard for apps and OSS packages. APMs like LightStep and Datadog are eagerly pushing forward the emerging specification, as are customer organizations like HomeAway, PayPal and Pinterest, while some other industry leaders – including Dynatrace, New Relic, and AppDynamics – are holding back from full support. Still, contributors to the open-source spec are forging ahead with more and more integrations, and considerable conference activities are in store for later this year.

“Distributed tracing is absolutely essential to building microservices in highly scalable distributed environments,” contended Ben Sigelman, co-creator of OpenTracing and co-founder and CEO at LightStep, in an interview with SD Times. In contrast to other types of tracing familiar to some developers, such as kernel tracing or stack tracing, distributed tracing is all about understanding the complex journeys that transactions take in propagating across distributed systems.

Academic papers about distributed tracing started appearing even earlier, but Google first began using a distributed tracing system called Dapper some 14 years ago, publishing the Dapper paper online about six years later. As a Google employee during the early phases of his career, Sigelman worked on Dapper, in addition to several other Google projects. He became intrigued with Dapper as a solution to the issues posed when a single user query would hit hundreds of processes and thousands of servers, overwhelming existing logging systems. Zipkin, another distributed tracing system, went open source a couple of years after Dapper.

A spec is born
Where Dapper was geared to Google’s own internally controlled repository, however, the OpenTracing specification, launched in 2015, is designed to be a “single, standard mechanism to describe the behavior of [disparate] systems,” according to Sigelman. Tracing contexts are passed to both self-contained OSS services (like Cassandra and NGINX) and OSS packages locked into custom services (such as ORMs and gRPC), as well as “arbitrary application glue and business logic built around the above,” Sigelman wrote in a blog.

As might be expected, among the earliest customer adopters of OpenTracing are many large, cloud-enabled online services dealing with massive numbers of transactions across myriad distributed systems. HomeAway, for example, is a vacation rental marketplace dealing with 2 million vacation homes in 190 countries, across 50 websites around the globe.

“Our system is composed of different services written in different languages,” said Eduardo Solis, architect at HomeAway, in an email to SD Times. “We are also seeing many teams using patterns like CQRS and a lot of streaming where transactions have real-time patterns and asynchronous ones. Being able to visualize and measure all of this is critical!”

Why OpenTracing?
“OpenTracing is a ‘must have’ tool for microservices and cloud-native applications. It is the API to adopt,” Solis continued. “Observability of the system is critical for business success in a containerized cloud world where applications are spinning up and down, having degradation or failure, and there is a very complex dependency graph. Instrumenting code properly is hard. Assuming you have the resources and knowledge to do it, you end up using either some proprietary API or getting the system baked into a vendor system. There are APM solutions that auto-instrument but then you end up losing some of the powerful context capabilities. OpenTracing solves all of the above.

“You have the whole open source community instrumenting popular frameworks and libraries,” Solis added, “you get a vendor neutral interface for instrumentation, and you can use that same API to do other more interesting things at the application level without getting married to one single solution.”

How OpenTracing is different
Sigelman, of course, concurs that OpenTracing carries significant advantages for developers. For one thing, developers of application code and OSS packages and services can instrument their own code without binding to any specific tracing vendor. Beyond that, each component of a distributed system can be instrumented in isolation, “and the distributed application maintainer can choose (or switch, or multiplex) a downstream tracing technology with a configuration change,” he said.

Sigelman points to a number of different ways in which distributed tracing can be standardized, such as the following:

  • Standardized span management. Here, programmatic APIs are used to start, finish, and decorate timed operations, which are called “spans” in the jargon of both Dapper and Zipkin.
  • Standardized inter-process propagation. Programmatic APIs are used to help in transferring tracing context across process boundaries.
  • Standardized active span management. In a single process, programmatic APIs store and retrieve the active span across package boundaries.
  • Standardized in-band context encoding. Specifications are made as to an exact wire-encoding format for tracing context passed alongside application data between processes.
  • Standardized out-of-band trace data encoding. Specifications are made about how decorated trace and span data should be encoded as it moves toward the distributed tracing vendor.
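
The first three of those layers are exactly what the OpenTracing APIs expose. Here is a brief sketch using the OpenTracing Python package, which ships a no-op global tracer; a real deployment would register a vendor’s tracer in its place.

```python
# pip install opentracing
import opentracing
from opentracing.propagation import Format

tracer = opentracing.tracer  # no-op global tracer unless a vendor tracer is registered

# Span management plus active span management: start, decorate, and
# finish a timed operation ("span").
with tracer.start_active_span("checkout") as scope:
    scope.span.set_tag("customer.tier", "gold")

    # Inter-process propagation: inject the span context into HTTP headers
    # so the next service can continue the trace (it calls tracer.extract).
    headers = {}
    tracer.inject(scope.span.context, Format.HTTP_HEADERS, headers)
    print(headers)  # empty with the no-op tracer; populated by a real one
```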

Earlier standardization efforts in distributed tracing have focused on the last two of these scenarios, meaning the encoding and representation of trace and context data, both in- and out-of-band, as opposed to APIs. In so doing, these earlier efforts have failed to provide several benefits that developers actually need, Sigelman argued.

“Standardization of encoding formats has few benefits for instrumentation-API consistency, tracing vendor lock-in, or the tidiness of dependencies for OSS projects, the very things that stand in the way of turnkey tracing today,” he wrote. “What’s truly needed – and what OpenTracing provides – is standardization of span management APIs, inter-process propagation APIs, and ideally active span management APIs.”

OpenTracing isn’t for everything (or everyone)
Sigelman told SD Times that he sees three main use scenarios for OpenTracing: “The first of these is basic storytelling. What happens to a transaction across processes? The second is root cause analysis. What’s broken?” he noted. “The third main use case scenario is greenfield long-term analysis, to help bring improvements that would prevent the need for engineering changes in the future.”

Still, leading APMs like Dynatrace, New Relic, and AppDynamics are hanging back from full support for OpenTracing. Why is this so?

Alois Reitbauer, chief technology strategist at Dynatrace, agreed that OpenTracing does offer some important benefits to developers.

“There’s a lot going on in the industry right now in terms of creating a standardized way for instrumenting applications, and OpenTracing is one part of that. What it tries to achieve is something really important, and something that the industry needs to solve, in terms of defining what a joint API can look like. Some frameworks are using OpenTracing already today, but it’s mainly targeted for library and some middleware developers. End users will not necessarily have first-hand contact as frameworks and middleware either come already instrumented or instrumentation is handled by the monitoring provider,” Reitbauer told SD Times, in an email.

“It’s a good first step, but it’s in its early stages, and the reality is that OpenTracing doesn’t paint the whole picture. Beyond just traces, systems need metrics and logs to give a comprehensive view of the ecosystem, with a full APM system in the backend as well.”

In a recent blog post, Reitbauer went further to maintain that interoperability has become much more necessary lately with the rise of cloud services apps from third-party vendors, but that the only way to achieve interoperability is to solve two problems that OpenTracing doesn’t address. The problems involve abilities to “create an end-to-end trace with multiple full boundaries” and to “access partial trace data in a well defined way and link it together for end-to-end visibility,” he wrote.

Many APM and cloud providers are well aware of these issues and have started to work on solving them by agreeing on two things: a standardized method for propagating trace context information of vendors end-to-end, and a discussion of how to be able to ingest trace fragment data from each other, according to Reitbauer.

“The first [of these] is on the way to be resolved within the next year. There is a W3C working group forming that will define a standardized way to deal with trace information referred to as Trace-Context, which basically defines two new HTTP-Headers that can store and propagate trace information. Today every vendor would use their own headers, which means they will very likely get dropped by intermediaries that do not understand them,” said the Dynatrace exec.
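
That W3C effort ultimately produced the traceparent header. As a concrete illustration of the format as it was later standardized (version, trace ID, parent span ID, and flags, hyphen-separated), here is a short Python sketch:

```python
import secrets

def new_traceparent(sampled: bool = True) -> str:
    """Build a W3C traceparent header value: version-traceid-parentid-flags."""
    trace_id = secrets.token_hex(16)   # 16 random bytes -> 32 hex chars
    parent_id = secrets.token_hex(8)   # 8 random bytes -> 16 hex chars
    flags = "01" if sampled else "00"  # low bit signals the sampling decision
    return f"00-{trace_id}-{parent_id}-{flags}"

header = new_traceparent()
print("traceparent:", header)
version, trace_id, parent_id, flags = header.split("-")
assert len(trace_id) == 32 and len(parent_id) == 16  # fixed field widths
```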

“Now let us move on to data formats. Unfortunately, a unified data format for trace data is further away from becoming reality,” he acknowledged. “Today there are practically as many formats available as there are tools. There isn’t even the conceptual agreement whether the data format should be standardized or if there should be a standardized API and everyone can build an exporter that fits their specific needs. There are pros and cons for both approaches and the future will reveal what implementers consider the best approach. The only thing that cannot be debated is that eventually we will need a means to easily collect trace fragments and link them together.”

For his part, though, Sigelman has suggested that one of the big reasons why OpenTracing is progressing so rapidly is precisely due to the narrow, well defined, and manageable focus of the spec.

New support for the spec
Now Datadog, a major monitoring platform for cloud environments, is another force avidly backing OpenTracing. In December of 2017, Datadog announced its support for OpenTracing as well as its membership in the Cloud Native Computing Foundation (CNCF). The vendor also unveiled plans to join the OpenTracing Specification Committee (OTSC) and to invest in developing the standard going forward.

Datadog’s support for OpenTracing will let customers instrument their code for distributed tracing without concerns about getting locked in to a single vendor or making costly modifications to their code in the future, according to Ilan Rabinovitch, VP product and community for Datadog.

“Open source technologies and open standards have long been critical to Datadog’s success. Customers want to emit metrics and traces with the tooling that best fits their own workflows, and we want to enable them to do so rather than force them to specific client-side tooling,” he told SD Times.

“Many of our most popular integrations in infrastructure monitoring, including OpenStack and Docker, started off as community-driven contributions and collaborations around our open-source projects. In the world of OpenTracing we have seen our community build and open source their own OT-based tracers that enable new languages on Datadog, beyond our existing support for Java, Python, Ruby and Go.”

In addition to the Specifications Committee, OpenTracing also runs multiple working groups. The Documentation Working Group meets every Thursday, while the Cross Language Working Group – entrusted with maintaining the OpenTracing APIs and ecosystem – meets on Fridays.  

Conference fare
Want to find out more about OpenTracing? This year, developers have an opportunity to meet with OpenTracing experts and discuss the emerging spec at a number of different conference venues.

At the end of March, HomeAway held an end user meetup group together with Indeed, PayPal, and Under Armour. Talking with SD Times just before the event in Austin, HomeAway’s Solis said that he planned to give a presentation detailing how his development team is using the new spec.

“As infrastructure groups we are providing platforms and frameworks that deliver instrumentation to developers so they don’t have to do anything to get quality first level (entry/exit) tracing in their applications. We have also worked on an internal standard that developers using other technologies that we don’t support can instrument themselves. OpenTracing gives us this ability to just delegate to standard documentation and open-source forums if developers want to enrich their tracing. We are also doing a slow rollout so we can build capabilities in small but fast iterations,” the architect elaborated.  

Yet in case you missed the meetup in Austin, you have several other chances ahead for getting together with developers from the OpenTracing community.

KubeCon EU, happening from May 2 to 4 in Copenhagen, will feature two talks about OpenTracing, along with two salons. Salons are breakout sessions where folks interested in learning about distributed tracing can discuss the subject with speakers and mentors.

OSCON, going on from July 17 to 19 in Portland, OR, will include three talks on OpenTracing, along with a workshop and salons. If you’d like to attend an OpenTracing salon at either venue, you can email OpenTracing at hello@opentracing.io to pose questions in advance. OpenTracing would also love to hear from participants who are willing to help out by mentoring.

Recent OpenTracing feats
Sigelman is quick to observe that his co-creators on OpenTracing and his co-founders on LightStep are two distinctly separate groups, and that many OpenTracing adopters are not LightStep customers.

He also cites large numbers of recent contributions from both OpenTracing and customer and vendor contributors, including the following.

Core API and official OpenTracing contributions

  • OpenTracing-C++ has now added support for dynamic loading, meaning that they will dynamically load tracing libraries at runtime rather than needing them to be linked at compile-time. Users can use any tracing system that supports OpenTracing. Support currently includes Envoy and NGINX.
  • OpenTracing-Python 2.0 and OpenTracing-C# v0.12 have both been released. The main addition to each is Scopes and ScopeManager.

Content from the community

  • Pinterest presented its Pintrace Trace Analyzer at the latest OTSC meeting. “The power of this tool is its ability to compare two batches of traces – displaying stats for each of the two and highlighting the changes,” explained Pinterest’s Naoman Abbas. “An unexpected and significant change in a metric can indicate that something is going wrong in a deployment.”
  • RedHat has shared best practices for using OpenTracing with Envoy or Istio. “We have seen that tracing with Istio is very simple to set up. It does not require any additional libraries. However, there are still some actions needed for header propagation. This can be done automatically with OpenTracing, and it also adds more visibility into the monitored process,” according to RedHat’s Pavol Loffay.
  • HomeAway presented at the Testing in Production meetup at Heavybit. LightStep’s Priyanka Sharma showed ways to use tracing to lessen the pain when developers are running microservices using CI/CD.
  • Idit Levine, founder of Solo.io, delivered a presentation at Qcon about her OpenTracing native open-source project, Squash, and how it can be used for debugging containerized microservices.

Community contributions

  • Software development firm Alibaba has created an application manager called Pandora.js, which integrates capabilities such as monitoring, debugging and resiliency while supplying native OpenTracing support to assist in inspecting applications at runtime.
  • Xavier Canal from Barcelona has built Opentracing-rails, a distributed tracing instrumentation for Ruby on Rails apps based on OpenTracing. The tool includes examples of how to initialize Zipkin and Jaeger tracers.
  • Gin, a web framework written in the Go language, has begun to add helpers for request-level tracing.
  • Daniel Schmidt of Mesosphere has created Zipkin-playground, a repo with examples of Zipkin-OpenTracing-compatible APIs for client-side tracing.
  • The Akka and Concurrency utilities have both added support for Java and Scala.
  • Michael Nitschinger of Couchbase is now leading a community exploration into an OpenTracing API to be written in the Rust programming language.
