Shifting left with telemetry pipelines: The future of data tiering at petabyte scale https://sdtimes.com/monitor/shifting-left-with-telemetry-pipelines-the-future-of-data-tiering-at-petabyte-scale/ Tue, 05 Nov 2024 20:01:22 +0000

In today’s rapidly evolving observability and security use cases, the concept of “shifting left” has moved beyond just software development. With the consistent and rapid rise of data volumes across logs, metrics, traces, and events, organizations must be far more deliberate about turning chaos into control when it comes to understanding and managing their streaming data sets. Teams are striving to be more proactive in the management of their mission-critical production systems and need to detect potential issues far earlier. This approach emphasizes moving traditionally late-stage activities — like seeing, understanding, transforming, filtering, analyzing, testing, and monitoring — closer to the beginning of the data creation cycle. With the growth of next-generation architectures, cloud-native technologies, microservices, and Kubernetes, enterprises are increasingly adopting Telemetry Pipelines to enable this shift. A key element in this movement is the concept of data tiering, a data-optimization strategy that plays a critical role in aligning the cost-value ratio for observability and security teams.

The Shift Left Movement: Chaos to Control 

“Shifting left” originated in the realm of DevOps and software testing. The idea was simple: find and fix problems earlier in the process to reduce risk, improve quality, and accelerate development. As organizations have embraced DevOps and continuous integration/continuous delivery (CI/CD) pipelines, the benefits of shifting left have become increasingly clear — less rework, faster deployments, and more robust systems.

In the context of observability and security, shifting left means performing the analysis, transformation, and routing of logs, metrics, traces, and events far upstream, early in their usage lifecycle — a very different approach from the traditional “centralize then analyze” method. By integrating these processes earlier, teams can not only drastically reduce costs for otherwise prohibitive data volumes, but also detect anomalies, performance issues, and potential security threats much more quickly, before they become major problems in production. The rise of microservices and Kubernetes architectures has accelerated this need, as the complexity and distributed nature of cloud-native applications demand more granular, real-time insights, and the data itself is far more distributed than it was with the monoliths of the past.

This leads to the growing adoption of Telemetry Pipelines.

What Are Telemetry Pipelines?

Telemetry Pipelines are purpose-built to enable next-generation architectures. They are designed to give visibility and to pre-process, analyze, transform, and route observability and security data from any source to any destination. These pipelines give organizations the comprehensive toolbox and set of capabilities to control and optimize the flow of telemetry data, ensuring that the right data reaches the right downstream destination in the right format, to enable all the right use cases. They offer a flexible and scalable way to integrate multiple observability and security platforms, tools, and services.

For example, in a Kubernetes environment, where ephemeral containers can scale up and down dynamically, logs, metrics, and traces from those dynamic workloads need to be processed and stored in real time. Telemetry Pipelines provide the capability to aggregate data from various services, be granular about what you want to do with that data, and ultimately send it downstream to the appropriate end destination — whether that’s a traditional security platform like Splunk that has a high unit cost for data, or a more scalable and cost-effective storage location optimized for long-term retention of large datasets, like AWS S3.

The Role of Data Tiering

As telemetry data continues to grow at an exponential rate, enterprises face the challenge of managing costs without compromising on the insights they need in real time, or the requirement of data retention for audit, compliance, or forensic security investigations. This is where data tiering comes in. Data tiering is a strategy that segments data into different levels (tiers) based on its value and use case, enabling organizations to optimize both cost and performance.

In observability and security, this means identifying high-value data that requires immediate analysis and applying a lot more pre-processing and analysis to that data, compared to lower-value data that can simply be stored more cost effectively and accessed later, if necessary. This tiered approach typically includes:

  1. Top Tier (High-Value Data): Critical telemetry data that is vital for real-time analysis and troubleshooting is ingested and stored in high-performance platforms like Splunk or Datadog. This data might include high-priority logs, metrics, and traces that are essential for immediate action. Although this can include plenty of data in raw formats, the high-cost nature of these platforms typically leads teams to route only the data that’s truly necessary.
  2. Middle Tier (Moderate-Value Data): Data that is important but doesn’t meet the bar for a premium, conventional centralized system is instead routed to more cost-efficient observability platforms with newer architectures, like Edge Delta. This might include a much more comprehensive set of logs, metrics, and traces that give you a wider, more useful understanding of everything happening within your mission-critical systems.
  3. Bottom Tier (All Data): Due to the extremely inexpensive nature of S3 relative to observability and security platforms, all telemetry data in its entirety can be feasibly stored for long-term trend analysis, audit or compliance, or investigation purposes in low-cost solutions like AWS S3. This is typically cold storage that can be accessed on demand, but doesn’t need to be actively processed.

This multi-tiered architecture enables large enterprises to get the insights they need from their data while also managing costs and ensuring compliance with data retention policies. It’s important to keep in mind that the Middle Tier typically includes all data within the Top Tier and more, and the same goes for the Bottom Tier (which includes all data from higher tiers and more). Because the cost per tier for the underlying downstream destinations can, in many cases, differ by orders of magnitude, there is little reason not to duplicate the data you’re putting into Datadog into your S3 buckets as well, for instance. It’s much easier and more useful to have a full data set in S3 for any later needs.
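
To make the tiering flow concrete, the sketch below shows what that routing logic might look like in code. It is an illustration only: the destination names, service names, and severity rules are assumptions made for the example, not the configuration model of any particular pipeline product.

    # Illustrative data-tiering routing logic for a telemetry pipeline.
    # Destinations, service names, and thresholds are hypothetical examples.
    HIGH_VALUE_SERVICES = {"payments", "auth"}       # assumed business-critical services
    HIGH_SEVERITIES = {"ERROR", "CRITICAL"}

    def route(record: dict) -> list[str]:
        """Return every destination a single log/metric/trace record should reach."""
        destinations = ["s3_archive"]                # Bottom tier: everything lands in cheap storage
        destinations.append("mid_tier_platform")     # Middle tier: broader, cost-efficient platform
        if record.get("severity") in HIGH_SEVERITIES or record.get("service") in HIGH_VALUE_SERVICES:
            destinations.append("premium_platform")  # Top tier: e.g., Splunk or Datadog
        return destinations

    # An ERROR from a critical service is duplicated into all three tiers.
    print(route({"service": "payments", "severity": "ERROR", "msg": "charge failed"}))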

How Telemetry Pipelines Enable Data Tiering

Telemetry Pipelines serve as the backbone of this tiered data approach by giving full control and flexibility in routing data based on predefined, out-of-the-box rules and/or business logic specific to the needs of your teams. Here’s how they facilitate data tiering:

  • Real-Time Processing: For high-value data that requires immediate action, Telemetry Pipelines provide real-time processing and routing, ensuring that critical logs, metrics, or security alerts are delivered to the right tool instantly. Because Telemetry Pipelines have an agent component, a lot of this processing can happen locally in an extremely compute-, memory-, and disk-efficient manner.
  • Filtering and Transformation: Not all telemetry data is created equal, and teams have very different needs for how they may use this data. Telemetry Pipelines enable comprehensive filtering and transformation of any log, metric, trace, or event, ensuring that only the most critical information is sent to high-cost platforms, while the full dataset (including less critical data) can then be routed to more cost-efficient storage (see the sketch after this list).
  • Data Enrichment and Routing: Telemetry Pipelines can ingest data from a wide variety of sources — Kubernetes clusters, cloud infrastructure, CI/CD pipelines, third-party APIs, etc. — and then apply various enrichments to that data before it’s then routed to the appropriate downstream platform.
  • Dynamic Scaling: As enterprises scale their Kubernetes clusters and increase their use of cloud services, the volume of telemetry data grows significantly. Due to their aligned architecture, Telemetry Pipelines also dynamically scale to handle this increasing load without affecting performance or data integrity.
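
As a rough illustration of those filter, transform, and enrichment steps, the sketch below chains three small processors together. The field names, the PII-masking rule, and the metadata lookup are assumptions made for the example rather than any product’s actual processor API.

    import re

    # Illustrative processor chain for a telemetry pipeline agent.
    # Field names, the PII-masking rule, and the enrichment lookup are hypothetical.
    EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

    def drop_debug(record: dict) -> dict | None:
        """Filter: discard low-value debug chatter before it ever leaves the node."""
        return None if record.get("severity") == "DEBUG" else record

    def mask_pii(record: dict) -> dict:
        """Transform: redact email addresses from the message body."""
        record["msg"] = EMAIL_RE.sub("<redacted>", record.get("msg", ""))
        return record

    def enrich(record: dict, k8s_metadata: dict) -> dict:
        """Enrich: attach Kubernetes metadata (namespace, cluster) from a local lookup."""
        record.update(k8s_metadata.get(record.get("pod"), {}))
        return record

    def process(record: dict, k8s_metadata: dict) -> dict | None:
        record = drop_debug(record)
        if record is None:
            return None
        return enrich(mask_pii(record), k8s_metadata)

    metadata = {"web-7f9c": {"namespace": "prod", "cluster": "us-east-1"}}
    print(process({"pod": "web-7f9c", "severity": "ERROR", "msg": "login failed for a@b.com"}, metadata))
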
The Benefits for Observability and Security Teams

By adopting Telemetry Pipelines and data tiering, observability and security teams can benefit in several ways:

  • Cost Efficiency: Enterprises can significantly reduce costs by routing data to the most appropriate tier based on its value, avoiding the unnecessary expense of storing low-value data in high-performance platforms.
  • Faster Troubleshooting: Not only can there be some monitoring and anomaly detection within the Telemetry Pipelines themselves, but critical telemetry data is also processed extremely quickly and routed to high-performance platforms for real-time analysis, enabling teams to detect and resolve issues with much greater speed.
  • Enhanced Security: Data enrichments from lookup tables, pre-built packs for well-known third-party technologies, and more scalable long-term retention of larger datasets all give security teams a better ability to find indicators of compromise (IOCs) across their logs and telemetry data, improving their ability to detect threats early and respond to incidents faster.
  • Scalability: As enterprises grow and their telemetry needs expand, Telemetry Pipelines can naturally scale with them, ensuring that they can handle increasing data volumes without sacrificing performance.

It all starts with Pipelines!

Telemetry Pipelines are the core foundation for sustainably managing the chaos of telemetry — and they are crucial in any attempt to wrangle growing volumes of logs, metrics, traces, and events. As large enterprises continue to shift left and adopt more proactive approaches to observability and security, we see that Telemetry Pipelines and data tiering are becoming essential in this transformation. By using a tiered data management strategy, organizations can optimize costs, improve operational efficiency, and enhance their ability to detect and resolve issues earlier in the life cycle. One additional key advantage that we didn’t focus on in this article, but is important to call out in any discussion on modern Telemetry Pipelines, is their full end-to-end support for OpenTelemetry (OTel), which is increasingly becoming the industry standard for telemetry data collection and instrumentation. With OTel support built-in, these pipelines seamlessly integrate with diverse environments, enabling observability and security teams to collect, process, and route telemetry data from any source with ease. This comprehensive compatibility, combined with the flexibility of data tiering, allows enterprises to achieve unified, scalable, and cost-efficient observability and security that’s designed to scale to tomorrow and beyond.
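
For instance, a service instrumented with the OpenTelemetry SDK emits the spans that a Telemetry Pipeline can then tier and route. The minimal Python sketch below assumes the opentelemetry-sdk package is installed and simply exports to the console; a real deployment would export over OTLP to a collector or pipeline agent, and the span and attribute names are illustrative.

    # Minimal OpenTelemetry instrumentation sketch (assumes `pip install opentelemetry-sdk`).
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("checkout-service")

    # Each span becomes a telemetry record that downstream pipelines can filter,
    # enrich, and route to the appropriate tier.
    with tracer.start_as_current_span("process-order") as span:
        span.set_attribute("order.id", "12345")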


To learn more about Kubernetes and the cloud native ecosystem, join us at KubeCon + CloudNativeCon North America, in Salt Lake City, Utah, on November 12-15, 2024.

Achieving Security by Design is a question of accountability https://sdtimes.com/ai/achieving-security-by-design-is-a-question-of-accountability/ Mon, 13 May 2024 14:23:16 +0000

The software industry is no longer functional. Last year alone saw over 28,000 new CVEs published, a record rise that perfectly illustrates the ongoing patching crisis facing security and development teams, which are under constant pressure to patch vulnerabilities or risk exposure. In the last 12 months, software vulnerabilities led to over 50 percent of organizations suffering 8 or more breaches. The same research found that only 11 percent believe that they patch effectively and in a timely manner. This dilemma is the result of a software industry that is far too comfortable releasing insecure applications to end-users. Software vendors have long prioritized speed to market, with security becoming an afterthought addressed through updates and patches, and we can no longer accept it.

Security leaders, regulators, and the industry itself must hold software vendors and developers to a higher standard of security from the outset: truly embracing secure-by-design principles, disclosing and remediating vulnerabilities more quickly and clearly, and testing applications more regularly and rigorously, even after their release.

So, whose responsibility is it?

This crisis is perpetuated by the well-publicized security skills gap. In fact, 47 percent of organizations blame their challenges remediating vulnerabilities in production on a lack of qualified personnel – showing that even within the software development lifecycle (SDLC), there is an unfairly spread security burden. In large organizations, though, resources should not be an accepted explanation for poor security standards. End users with tight security budgets and smaller teams should never have to shoulder the security shortfalls of a solution that they’ve paid for and expected to be trustworthy. 

But competing aggressively to acquire talent from the limited pool with security expertise is not the only solution: the shift left and shift everywhere movements have long emphasized the importance of security skills across the SDLC, even within development teams. 

With many developers now turning to AI-generated code to increase efficiency even further, it is critical that they are also equipped with the secure coding knowledge to thoroughly assess that output for security risks. Fostering the security skills of their developers is a critical way for large software vendors to reduce the number of vulnerabilities in production while showing a real commitment to improving the security of the applications they release. 

Moving beyond ticking boxes

Developing a security-centric mindset within all software vendors will be crucial to overcoming today’s patching crisis. There is often a disconnect between security and development teams, with the goal of security often appearing to be at odds with competitive success. Driving a culture of shared responsibility would help establish accountability in all departments and stages of the SDLC, without penalizing organizations who prioritize security over speed to market. 

Well-trained and knowledgeable development teams and project managers are the foundation of this change. The unfortunate reality is that many organizations don’t see security training for developers as a priority, with 68 percent only providing secure coding training for the purposes of compliance or in the event of an exploit. The urge to create code faster than ever often means that developers’ schedules cannot account for even small sessions of secure coding training, so organizations train only when they have to. Checking the box for compliance is easy but it doesn’t build a security-centric culture, opening the door for complacency, oversight, and poor retention from secure code training sessions when they do happen. 

The industry as a whole is severely lacking in the prevalence, frequency, and quality of training. Software vendors need to understand that software security is a central concern for their customers, one that justifies continuous training and allots time for rigorous code reviews. 

Proactivity is always the answer

Building a comprehensive and proactive approach to software security can help organizations mitigate security risks when software vendors fail. A concerning 55 percent of security leaders report that a misalignment between development, compliance, and security teams causes delays in patching. In giant tech corporations, this misalignment is heightened. By taking a proactive approach that assesses and responds to CVEs based on risk prioritization, organizations can realign their teams with clear patching protocols. 

In a threat landscape where reactive methods are no longer sufficient, investing in education and detection is crucial. When developing in-house applications or configurations, developers should be capable of sniffing out any code that could potentially give threat actors a foothold into their networks. Although it is the responsibility of software vendors to release secure applications, many vulnerabilities arise from misconfigurations when software is uploaded onto a new or existing system. It is absolutely crucial that in-house developers have the proper education and skills to ensure that applications are configured and used as designed, scanning regularly for new vulnerabilities before a bad actor can exploit them. 

The current patching crisis is the result of the rapid innovation happening in the industry today, and that is not an inherently bad thing. But as customers and regulators come to expect higher standards of software security, organizations can help themselves meet the patching crisis head-on by embracing “security by design” principles and proactive patch management strategies in their own internal teams. 

The importance of prevention: How shifting left, static analysis and unit testing create better code quality https://sdtimes.com/test/the-importance-of-prevention-how-shifting-left-static-analysis-and-unit-testing-create-better-code-quality/ Thu, 15 Feb 2024 18:16:34 +0000

Developers are constantly balancing demands to provide quality features of the highest standard at a fast pace. Every aspect of business now relies on software, which means developers are constantly working to write and produce the best software they can. Continuous Integration (CI) and Continuous Delivery (CD) help facilitate the creation of that software, but without the right quality assurance steps in place, they can inadvertently let potentially major code issues fall through the cracks. 

Maintaining a balance between building high-quality software and doing it quickly can be challenging. Shift-left is often proposed as the solution, but to be truly lean and agile, teams must shift left on quality in a way that takes into consideration both unit testing and static code analysis. This way, developers can ensure they produce good, clean code that results in top-quality software. By catching small bugs or quality issues early on in the process, developers can mitigate the possibility of writing code that causes security risk or breaks down further into the deployment process — at a high cost to the business. 

Shifting Left on Quality

We must first agree on a new mindset — we shouldn’t be focused on finding problems. We should be focused on preventing those problems in the first place. All developers strive to write the best code they possibly can, but errors tend to be inevitable. Testing software code early — shifting left — helps catch errors and bugs soon enough in the development process that they don’t become sizable, expensive, disastrous problems later on. This kind of early testing on quality enables developers to create code that is reliable, adaptable, maintainable, and of course, secure. That’s where shifting left toward a focus on the code quality first, versus finding security issues already existing in code, can create significant inroads and provide a clearer path. 

Shifting left on quality can also help mitigate errors caused by an increasing dependency on AI code generators. While AI coding assistants can make an impact on the developer workload and help boost efficiency or productivity at a time when demands for output are greater than ever, they aren’t a failsafe. They need to be thoughtfully governed and controlled. For example, a recent study found that ChatGPT-generated code is prone to various code quality issues, including compilation and runtime errors, wrong outputs, and maintainability problems. In fact, GitHub Copilot docs acknowledge this, recommending that those using Copilot conduct rigorous testing to ensure the generated code is of high quality: 

“You are responsible for ensuring the security and quality of your code. We recommend you take the same precautions when using code generated by GitHub Copilot that you would when using any code you didn’t write yourself. These precautions include rigorous testing, IP scanning, and tracking for security vulnerabilities.”

Quality checks still rely on special tools and human review to ensure code quality overall. The more code we write with the help of AI, the more safeguards must be in place to check that the code is accurate and issue-free. That’s why developers must enhance typical testing processes, shifting them further left, to avoid or help identify future errors that could also affect the quality of software. Employing the right combination of unit testing and static analysis throughout the software development lifecycle (SDLC) is a pivotal part of these guardrails that pave the path for top-quality software.

Balancing Unit Testing and Static Analysis

Developers often prioritize unit testing while embracing a shift-left mentality as a means of ensuring features and functionality work correctly. However, unit testing on its own cannot test for quality or cover every bug and problem within software code. The more bugs fall through the cracks, the more developers compromise the quality and security of their software as it reaches deployment.

The solution? Developers need to incorporate static code analysis, which can be done through automation. In comparison to dynamic analysis, which works at runtime, static analysis looks at the internal structure of an application and works on a variety of different code languages. By incorporating both unit testing and static analysis, developers can control code quality through the development stages, quickly detect and fix bugs, and improve overall software reliability.
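
As a simple illustration of how the two complement each other, the hypothetical function and tests below show what a unit test verifies, while the closing comment notes the kind of issues a static analyzer would surface instead. The function, the tests, and the analyzer invocations are examples only.

    # discount.py -- a small function whose behavior the unit tests pin down
    def apply_discount(price: float, percent: float) -> float:
        """Return the price after applying a percentage discount."""
        if percent < 0 or percent > 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    # test_discount.py -- behavioral checks, run with `pytest`
    import pytest

    def test_full_discount_is_free():
        assert apply_discount(19.99, 100) == 0.0

    def test_negative_percent_rejected():
        with pytest.raises(ValueError):
            apply_discount(19.99, -5)

    # A static analyzer (for example `pylint discount.py` or `ruff check .`) runs
    # without executing anything, flagging issues the tests never exercise:
    # unused imports, unreachable branches, shadowed names, or suspicious types.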

Further, while developers may think of static analysis as purely a tool for finding bugs — or patterns that might lead to bugs — the right static analyzer can also help understand the why behind an issue or a better way to do something, helping them to learn as they code. Context matters, and becomes even more crucial for developers who are increasingly strapped for bandwidth.

Clean Code Makes Better Software

A shift-left approach to quality that strikes a balance between static analysis and unit testing ultimately allows developers to write clean code — code that is consistent, intentional, adaptable, and responsible, and that is therefore easier to maintain. More than that, this Clean-as-you-Code process accelerates testing as a whole and gives developers the power to detect and address quality issues as soon as possible. 

Earlier and more comprehensive testing in the SDLC enables a much more efficient way for developers to work. Waiting to address poor-quality code creates delays in deployment in addition to allowing that bad code to slip through to deployment, requiring reverting and refactoring of software code. The kind of feedback loops associated with checking for issues later in the process are lengthy and iterative, and can disrupt the development process by forcing a developer to return to work they might have done weeks or months ago, when it’s no longer fresh on the brain.

As developers try to reuse or repurpose code where possible, making sure it’s top quality is paramount. Incorporating static analysis with unit testing ultimately allows developers to continue building software they know is secure, maintainable, reliable, and accessible at any point in its lifecycle. It’s the best way to keep up with increasing development speeds.

Maintaining Code Quality

Particularly as developers balance increasing workloads with new AI coding assistants and tools, quality assurance is more important than ever. While some tools may enable more efficiency and developer productivity, they’re never a full replacement for the kind of analysis that prevents costly bugs and mistakes from slipping through to production. 

Developer teams must understand that a shift-left-on-quality approach, employing both unit testing and static analysis, helps strike the necessary balance between delivering software quickly and delivering software that’s high quality. As those two characteristics become more and more crucial, maintaining that balance, and understanding the principles behind it, puts teams in a position to help their organizations see business results.

To learn more about Kubernetes and the cloud native ecosystem, plan to attend KubeCon + CloudNativeCon Europe in Paris from March 19-22.

Quality engineering should be an enterprise-wide endeavor https://sdtimes.com/testing/quality-engineering-should-be-an-enterprise-wide-endeavor/ Thu, 22 Jun 2023 15:04:36 +0000

Everyone, it seems, wants to shift all the steps required to produce and deliver quality, performant software to the left. The assumption is that by asking developers to take on a greater role in quality assurance and security, the cost to remediate problems is lowered by discovering those issues earlier. 

The downside of this is that developers now say that they spend not even half of their time on coding, meaning that instead of working on innovative new products or features, they’re learning how to test all aspects of their application or trying to understand how to secure their code. (Thanks, “you build it, you own it!”)  

Many of these same developers also report in surveys that testing is a big headache for them. “Rather than reducing stress for developers, shift left has introduced new obstacles,” said Gevorg Hovsepyan, head of product at test automation platform mabl. “They built something. They try to deploy but testing breaks. Then they are responsible for fixing the test, trying to update the test, then running the test again. And if things don’t work out, they can spend a lot of time trapped in this cycle.” 

But developers often lack the proper tools and proper training to handle the burden of testing. And, as mandated delivery cycles grow ever shorter, it’s easy to see how testing becomes a significant stress driver for developers. 

As deployment frequency increases, so does the level of testing required to ensure that quality is maintained. This is where test automation can relieve many of the mundane tasks that slow developers down. “We see in our most recent Testing in DevOps Report that teams who have high test coverage have less stress in deployment,” Hovsepyan said. “In fact, if we compare the teams that have high coverage to the teams that don’t, the high-coverage teams are twice as likely to have stress-free deployments.” 

But, Hovsepyan noted, it’s not only about automation – it’s also about testing strategies. 

And this is where shifting testing to the left also impacts quality assurance engineers, whose role is changing – from writing and running the tests to becoming the architects of quality. In these scenarios, QA becomes a center of excellence that enables product teams – developers, owners, and designers – to deliver high-quality software. 

QA engineers in organizations that have shifted testing left are defining what quality looks like and enabling more people to participate in building quality software. “We’re seeing people in our customer base taking more of a leadership role in thinking about what quality really means and what it looks like in the supply chain, versus focusing exclusively on automating test cases.” 

Quality is a team effort 

Organizations doing very frequent deployments must understand that the test effort should be shared, and everyone should participate in building quality into the software, Hovsepyan said. 

“This means that developers are not always the ones building the tests, which makes it much easier for developers to support testing,” he explained. “They might build the initial set, but low code means that test creation is faster, and more roles can contribute there. That’s number one, and number two, quality efforts are built around improving the customer experience, rather than a binary pass/fail mindset. This helps everyone – developers, QA, and business stakeholders – understand the true impact of quality engineering and see the value in testing efforts.” 

With quality engineering, teams test the full user journey by going through the steps of logging in, finding what they need, checking out, and those steps in between. “I think using an outcomes-focused approach to testing that’s focused on the user journey versus spending cycles of editing the code in the script helps improve testing efficiency and accelerates development cycles.” 
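
To make the idea of a full user-journey test concrete, the sketch below walks the login, search, and checkout steps with Selenium. It is purely illustrative: the URL, element IDs, and assertion are placeholders, and the same journey could just as easily be assembled in a low-code tool.

    # Illustrative end-to-end user-journey test (Selenium). URL and element IDs are hypothetical.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        # Step 1: log in
        driver.get("https://shop.example.com/login")
        driver.find_element(By.ID, "email").send_keys("test.user@example.com")
        driver.find_element(By.ID, "password").send_keys("not-a-real-password")
        driver.find_element(By.ID, "submit").click()

        # Step 2: find a product and add it to the cart
        driver.find_element(By.ID, "search").send_keys("running shoes")
        driver.find_element(By.ID, "search-button").click()
        driver.find_element(By.CSS_SELECTOR, ".product-card .add-to-cart").click()

        # Step 3: check out and confirm the order landed
        driver.get("https://shop.example.com/checkout")
        driver.find_element(By.ID, "place-order").click()
        assert "Thank you" in driver.find_element(By.ID, "confirmation").text
    finally:
        driver.quit()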

Reduce the test effort 

To ease the load on developers, organizations need to start thinking beyond the typical, traditional technology stack and looking into modern testing solutions, Hovsepyan suggested. For instance, he said, adopting low-code automation tools enables teams beyond developers to join the testing practices. 

These cloud-based solutions often employ AI to reduce the burden of test creation and maintenance for everyone, including developers, he noted. These modern tools leverage intelligence to help teams reap the benefits of thorough testing without slowing development cycles.  

Finally, Hovsepyan pointed out that organizations following Agile practices – iterate on small changes, get customer feedback, and iterate again – further reduce the risk of delivering suboptimal experiences.  

“These transformations – cloud, Agile, and quality engineering – all fit together,” he said. “Modern technology stacks give teams the means to deploy multiple times per day, methodologies like Agile give them the processes, and quality engineering ensures that changes only improve the customer experience.” 

Content provided by SD Times and mabl

Report: Companies prioritize securing open-source components in modern software https://sdtimes.com/security/report-companies-prioritize-securing-open-source-components-in-modern-software/ Tue, 28 Sep 2021 17:38:47 +0000

The rapid adoption of the cloud has led companies to increasingly secure open-source components in modern software. 

The newly released 12th Building Security In Maturity Model (BSIMM12) report found a 61% increase in software security groups’ identification and management of open source over the past two years. 

The report was created by Synopsys, a company that focuses on software security and quality. 

Synopsys gathered data from 128 firms from multiple industry verticals including financial services, independent software vendors, cloud, health care, and IoT. It describes the work of nearly 3,000 software security group members and over 6,000 satellite members.

The increased security for open-source components is both due to the prevalence of open-source components and the rise of attacks on those popular components, according to the report. 

Security leaders are prioritizing cloud and open-source capabilities by developing in-house expertise for managing cloud security rather than relying on cloud vendors, and organizations are placing increased emphasis on software suppliers and open-source risk management. 

The report also found a 30% increase in the “publish data about software security internally” activity over the past 24 months, meaning that organizations are exerting more effort to collect and publish their software security initiative data. 

Software Bill of Materials activities increased by 367%, which shows an emphasis on understanding how software is built, configured, and deployed, and improves organizations’ ability to re-deploy based on security telemetry.

Also, security teams are lending resources, staff, and knowledge to DevOps practices, and the concept of “shift left” has progressed to “shift everywhere,” according to the report. “Shift everywhere” encourages companies to use containers to enforce security controls, to leverage orchestration, and to scan infrastructure as code.

The key pillars to a successful shift-left strategy https://sdtimes.com/test/the-key-pillars-to-a-successful-shift-left-strategy/ Wed, 09 Jun 2021 15:28:15 +0000

The shift-left movement is already underway. Organizations can no longer wait to test at the end of the life cycle and hope things are in order before they release into production. Baking quality in from the beginning, rather than testing for quality later, has become a key tenet in today’s software testing initiatives. 

A recent report from the software testing company Applause found 86% of respondents report their organizations are testing features immediately as they are being developed to reduce bugs, reduce the costs of fixing later-stage bugs, and reduce the need for hotfixes. However, this new shift in quality assurance is having a significant impact on developer productivity, with respondents reporting it takes at least eight hours per week to test new features. According to Mike McKethan, director of quality engineering and automation at Applause, shifting left requires the right mindset to improve testing and save developer time. 

In a recent webinar on SD Times, McKethan explained that when people think about shifting left, a majority immediately turn to tools and automation. While those are foundational layers of a good shift-left strategy, the overarching theme should be that quality is a habit, not an act.

According to McKethan and Mike Plachta, senior manager of solutions engineer at Applause who also presented the webinar, the key pillars of a successful shift-left strategy include:

  • Quality ownership: Having the whole team be responsible for quality, moving beyond just the QA sign-off and having management buy-in
  • Valuable features: Developing accurate, executable and valuable features from the beginning by leveraging the Pareto principle and behavior-driven development
  • Automation-first mentality: Integrating automation into the build process and DevOps pipeline
  • Fail or learn fast: With continuous feedback, CI/CD, code quality from an automation perspective, and the “three amigos”
  • Continuous improvement: Through retrospectives, idea boards, predictive analytics, AI and real-time analytics

To learn more about these pillars, watch the full webinar here.

Analyst View: Shift testing left, but bank right https://sdtimes.com/test/analyst-view-shift-testing-left-but-bank-right/ Tue, 13 Apr 2021 16:13:55 +0000

I’ve spent most of my professional life convincing businesses to shift things left — shift-left testing for software, shift-left demand and supply forecasts for supply chains, shift-left analytics to understand future implications earlier than your competition.

Hopefully that explains why it seems heretical for me to talk about shift-right testing at all.

Will shift-right testing somehow cheapen shift-left testing, making it old news? Or could it cause some teams to stop early preventative testing, just like internet memes can prevent some otherwise rational people from getting vaccinations?

Shift-right is happening anyway
With intelligent CI/CD automation, DevOps practices and cloud-native delivery of software into microservices architectures, our software pipelines are moving at such breakneck speeds that much of the activity has moved into ensuring resiliency at change time and post-deployment phases.

Shift-right everything — including testing — seems to be inevitable.

Given how software development incentives are usually aligned with delivering more features to production faster, rather than ensuring complete and early testing, I don’t expect many organizations will let shift-left testing activities gate or delay release cycles for very long.

So what should we do now, allow end customers to become software testers?

No matter how much we try testing earlier in the software lifecycle, with greater automation, there will always be too much change and complexity to prevent all defects from escaping into production — especially when the ever-changing software is likely executing on ephemeral cloud microservices and depending on calls to disparate APIs.

There are several interesting vendors that offer pieces of the shift-right puzzle, and to their credit, none really touch the third rail of saying you can leave out QA teams, or call themselves ‘shift-right testing.’ That’s smart marketing.

And it doesn’t really matter; they can call it progressive delivery: canary releases, blue/green deployments, feature flagging, and even some observability, chaos engineering, and fast issue-resolution workflows. These are all things that advanced teams do to improve quality and performance nearer to production, and even post-delivery.
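
As a rough sketch of one of those progressive-delivery mechanics, the example below shows a percentage-based rollout check of the kind used for canary releases and feature flags. The flag name and rollout percentage are made-up values for illustration.

    import hashlib

    # Illustrative percentage-based rollout check for canary releases / feature flags.
    ROLLOUTS = {"new-checkout-flow": 5}   # expose the new code path to 5% of users

    def is_enabled(flag: str, user_id: str) -> bool:
        """Deterministically bucket a user into 0-99 and compare against the rollout %."""
        digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
        return int(digest, 16) % 100 < ROLLOUTS.get(flag, 0)

    # The same user always lands in the same bucket, so the canary cohort stays stable
    # while its metrics are compared against the control group before widening the rollout.
    print(is_enabled("new-checkout-flow", "user-42"))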

Shift left, but bank right
Like a bike on a velodrome, or a NASCAR race track banking around the left turns — shift-right testing has less to do with validating what the software does, and more to do with accounting for everything the software might do under stress.

Not to be dogmatic, but I don’t consider it testing to put trial releases in front of perhaps smaller groups of customers who aren’t being told they are beta testing. (I wouldn’t want to be holding a pager when a graduated release doesn’t blow up until it scales to half my user base…)

It is, however, quite valid to call it validation. Or — maybe risk mitigation. Damage control. Blast radius reduction. Those are all great shift-right aspects of operational excellence to strive for.

When you are shifting right you aren’t really shifting testing at all, you are banking the track. You are engineering more tolerance into the system.

Bank-Right and build more operational tolerance into your release track, so you can afford to Shift-Left testing and automated releases, and go even faster.

You still need early testing, but all the testing in the world will never reach the asymptote of 100% perfection in production. Bank-Right approaches offer slopes and guardrails to keep the race on the track, and put out fires faster, even if the racers behave abnormally.

The Intellyx Take
Shift-Left and Bank-Right go hand-in-hand, just like design and engineering in the real world.

When you drive on a bridge, you hope that it was designed and tested using simulations to flex gracefully when confronted with a variety of natural forces and traffic contingencies.

You would also want that bridge to be engineered and monitored post-production to provide early warnings and failsafes to mitigate risk and reduce harm if anything does go wrong.

Ultimately, we’ll see both approaches as two different lenses for improving customer experience, no matter what they are called.

What a successful shift-left security program looks like https://sdtimes.com/security/what-a-successful-shift-left-program-looks-like/ Wed, 17 Mar 2021 16:41:35 +0000

In today’s ever-changing world, businesses need to have a strong application security (AppSec) program in order to succeed and survive. Many businesses are taking a shift-left approach to security, moving security earlier in the application life cycle — but this puts a lot of pressure on the development team that is already pressured to move faster, write better code and work smarter. 

There are some ways to alleviate the stress for developers while making it easier to catch bugs earlier and reducing the cost to fix them.

“Having a good policy in place to properly assess your application and make sure you have good practices will be critical to protecting everything — the whole entire infrastructure, not just the application,” said Rey Bango, developer and security advocate at Veracode, who spoke in an SD Times webinar with Tim Jarrett, Veracode’s director of product management, on how to set up security programs for success.

The first piece of advice Jarrett and Bango gave is to automate, but also to recognize that automation is not just a security thing. While automation can help, security teams really need to figure out where their function fits alongside automated workflows, and which of those workflows can be automated, according to Jarrett.

He went on to explain that a lot of security concerns can be automated, but the ones that should be automated are the ones that are widely prevalent and easy to address. The security vulnerabilities that are more unique or require more security expertise should not be automated.

Baking security into the code is another best practice Jarrett recommended because it enables security workflows to be managed and tracked just like every other piece of code associated with the project. This helps developers take advantage of processes they are already used to working in. 
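
One way that looks in practice is a scan step that lives in the repository alongside the application code and fails the build when serious findings appear. The sketch below assumes a Python source tree and the Bandit scanner being installed; the paths, severity threshold, and file name are illustrative choices, not a prescribed policy.

    # ci_security_gate.py -- an illustrative pipeline step kept in version control.
    # Assumes `pip install bandit`; paths and the severity threshold are examples.
    import json
    import subprocess
    import sys

    def run_scan(source_dir: str = "src") -> int:
        result = subprocess.run(
            ["bandit", "-r", source_dir, "-f", "json"],
            capture_output=True, text=True,
        )
        findings = json.loads(result.stdout).get("results", [])
        high = [f for f in findings if f.get("issue_severity") == "HIGH"]
        for f in high:
            print(f"{f['filename']}:{f['line_number']}: {f['issue_text']}")
        return 1 if high else 0   # a non-zero exit code fails the CI job

    if __name__ == "__main__":
        sys.exit(run_scan())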

Bango highlighted the need to appoint a security companion within a shift-left program. A security companion is not a security decision maker, but rather a neutral person who can bridge the conversation between development and security teams, helping to manage communication and priorities between the two. 

For more ways to set up a strong AppSec program, watch the full webinar.

SD Times news digest: Sauce Labs’ new shift-left capabilities, Nintex Workflow Cloud launched, CircleCI privacy enhancements https://sdtimes.com/cicd/sd-times-news-digest-sauce-labs-new-shift-left-capabilities-nintex-workflow-cloud-launched-circleci-privacy-enhancements/ Fri, 19 Feb 2021 17:09:20 +0000

Sauce Labs announced new shift-left capabilities such as new end-to-end visual testing as well as Sauce Testrunner, which supports a host of developer-preferred test frameworks such as Cypress, Playwright, and TestCafe. 

“Successful testing in the DevOps era is about giving developers the optionality and flexibility to work within the frameworks with which they’re most comfortable, and about giving them the ability to harness and understand the different test signals proliferating throughout the dev cycle,” said Matt Wyman, chief product officer at Sauce Labs.

The end-to-end visual testing also enables users to compare both screenshots and DOM snapshots to detect visual changes, automatically pull in the initial baseline and accept updates, and integrate seamlessly into CI/CD processes. 

Nintex Workflow Cloud launched
The company’s workflow automation cloud platform includes advanced data technology, added functionality and pre-built connectors to automate and optimize enterprise-grade workflows faster. 

Pre-built dashboards and widgets provide immediate insights into workflows and automated processes with easy-to-use data visualization, alongside new functionality such as Repeating Sections, Draft Forms Save and Continue, and Multiple Approvers. 

“We are committed to delivering process management, automation and optimization technology that improves how people work and provides competitive advantages for every organization that standardizes on Nintex,” said Neal Gottsacker, chief of product at Nintex. “By seamlessly integrating Nintex Workflow Cloud with Nintex Analytics, our customers and partners benefit from a robust data infrastructure that reports on workflows across an organization’s entire Nintex deployment.”

CircleCI privacy enhancements 
CircleCI announced private orbs which help developers automate repeated processes with reusable packages of YAML configurations to help with use-cases such as vulnerability scanning and test coverage of applications.

Developers also now have the ability to create private orbs to allow teams to share configurations within their organization. 

CircleCI also helps users ensure their pipelines are secure via added product security features including environment variables, multiple contexts, and admin controls.

RediSearch 2.0 released
RediSearch 2.0 enables customers to build modern applications with interactive search experiences.

Users can automatically index and then query their Redis datasets without changing their application. 

With the latest release, users can also scale RediSearch easily, and it can be deployed in a globally distributed manner by leveraging Redis Enterprise’s Active-Active technology. 

“RediSearch now enables organizations to quickly build indexes which require low latency querying and full-text search. All of this is delivered with the familiar ease of scaling and speed of Redis,” said Pieter Cailliau, director of product management at Redis Labs.

Test automation does away with the mundane, and frees up testers for the creative domain https://sdtimes.com/test/test-automation-does-away-with-the-mundane-and-frees-up-testers-for-the-creative-domain/ Tue, 01 Dec 2020 17:20:07 +0000

The drastic increase in volume of tests and the speed of software production has necessitated more efficient automated testing to handle repetitive tasks. The growing “shift-left” approach in Agile development processes has also pushed testing much earlier in the application life cycle. 

“There is a challenge to testing in the sense that we need to do it more frequently, we need to do it for more complex applications, and we need to do it at a higher scale. This is not feasible without automation, so test automation is a must,” said Gartner senior director Joachim Herschmann, who is on the App Design and Development team. 

In fact, last year’s Forrester Wave: Global Continuous Testing Service Providers found that traditional testing services don’t cut it for many organizations anymore: 20 of 25 reference customers said that they are adopting continuous testing (CT) services to support their Agile and DevOps initiatives within a digital transformation journey. Of those CT services, clients say automation is the most impactful and differentiating for delivering better software faster. 

Investment in automated testing is expected to rise from $12.6 billion in 2019 to $28.8 billion by 2024, according to a report by B2B research company MarketsandMarkets.com.

The pandemic has also driven the importance of autonomous testing, as many companies realized the primary way to connect with consumers is through apps and digital applications, which in turn increased the amount of testing that needs to be done. The situation created a distributed workforce that needed to evolve the way they do testing. “With the effects of COVID, organizations had to execute a two year plan in two months,” said Mark Lambert, vice president of strategic initiatives at testing solutions provider Parasoft. 

The current major shift that has occurred in autonomous testing is that it is no longer primarily driven by code but is actually driven by data, according to Herschmann. Anything that involves AI is driven by data. 

These data sources include user stories or requirements that could stem from documents that describe what an expected piece of functionality is. This requires natural language processing and technologies that can read the document, extract the intent, and then create a test case.

Other data points include existing test results in which users can identify patterns in their tests and see what their failure points were before. 

Automated testing tools can also scan data or feedback that’s supplied in app stores or even social media to find information that the testers may have missed. “Very often there is a discrepancy between what the project manager envisions about a product versus how it’s used in reality. There’s a gap in testing there and now we can capture that,” said Herschmann. 

Tooling can also generate unit tests automatically because it looks at GitHub where there are millions of projects, scans it, and trains the model based on that code. “By the way, writing unit tests is a task that developers hate, so if that can be done automatically, that’s great.”

Test automation also looks at log data such as web server logs or other log files and captures information about how users have used the applications. This can then be used to extract customer journeys and create common test scenarios based on them. 
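
For example, grouping access-log entries by session and counting the ordered page sequences yields candidate test scenarios. In the sketch below, the log entries, session field, and paths are made up for illustration, and the entries are assumed to already be parsed and time-ordered.

    from collections import Counter, defaultdict

    # Illustrative extraction of customer journeys from parsed access-log entries.
    log_entries = [
        {"session": "a1", "path": "/login"},
        {"session": "a1", "path": "/search?q=shoes"},
        {"session": "a1", "path": "/checkout"},
        {"session": "b2", "path": "/login"},
        {"session": "b2", "path": "/account"},
    ]

    journeys = defaultdict(list)
    for entry in log_entries:                 # entries assumed to be time-ordered
        journeys[entry["session"]].append(entry["path"])

    # Count identical journeys; the most common ones become candidate test scenarios.
    common = Counter(tuple(paths) for paths in journeys.values())
    for journey, count in common.most_common(3):
        print(count, " -> ".join(journey))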

“We’re for the first time really tapping into these data sources, and we’re using that to enhance test automation. Where it all leads to is we’re finally getting to a point where the full life cycle of testing is actually increasingly automated,” Herschmann said. 

As the move to Agile has increased, more companies implemented the test automation pyramid strategy with unit level testing at the base, where the largest amount of automated tests need to be done, followed by API testing, and lastly UI testing. 

“There are a lot of excellent open source tools in the market when it comes to unit testing, but UI-based functional end-to-end testing is where there are a lot of challenges,” said Artem Golubev, the co-founder and CEO of testRigor, which offers behavior-based testing software. 

Golubev stressed the need for an effective solution in this particular area. “These are difficult in particular because of stability and maintainability and it is difficult for teams to even build tests for this in the first place.”

Automated testing does not eliminate manual testing 
Although companies have become increasingly aware of the speed and accuracy that comes with autonomous testing, this did not eliminate the necessity of manual testing at organizations. 

“In general humans are really good at the creative domain, domain knowledge workflows, but they’re very bad at repetitive tasks. So if I can point a machine and tell it to go ahead and verify a particular use case, such as looking for specific numbers on a page and making sure they all match, that is a great job for a machine to do. It’s a bad job for a human, because as we really start to have more domain knowledge, those kinds of workflows bore us and we make mistakes,” Parasoft’s Lambert said. 

Meanwhile, people add value in understanding how the application should be used and the problem that the application is trying to solve. Manual testing is a very valuable part of the process, Lambert explained. 

Testing teams can also focus more on maintaining test scripts and increasing total test coverage. This has put some of the responsibility onto developers who are now working alongside testers to create test automation frameworks. 

The expansion of AI in test automation has also led to tremendous benefits in test stability, maintainability, and test generation. 

“In cases of bot-based generated tests, it’s the AI that guides the bot through your application in order to be able to build proper end-to-end tests out of the box. There are also machine learning-based models that automatically assess if your page is rendered properly from an end user’s perspective,” Golubev said. 

Golubev noted that AI will not be able to replace humans in the near future when it comes to testing.

“There is no such thing, and there won’t be in the next 20 years, something such as overarching AI. With the current models and how they work in 2020, the compute is just not there,” said Golubev.  

Test automation drivers
Lambert said that there are three primary use cases that are driving the adoption and application of test automation: compliance, the need to accelerate delivery, and the reduction of operational outages.

“First, compliance is one of those things that’s non-negotiable and it really is a bottleneck at the end of the delivery pipeline,” Lambert said. Whether it’s for following PII, GDPR, PCI, or countless other regulations, the organizations that implement compliance in an automated manner are the organizations that really succeed in delivering on the second important use case: accelerating delivery, according to Lambert. 

However, accelerating delivery is not just about the quantity of tests put out in the shortest period of time. This phase primarily has to be about focusing on the quality of automated tests.

“It’s not just about the level of test automation that’s the biggest problem. The biggest problem is actually a commitment to quality or a quality-first approach within organizations,” said Lambert. 

“What we have seen is that management that makes a commitment can significantly reduce the number of outages that they have and accelerate delivery with confidence.”  

The third major point of automated testing focuses on eliminating production outages and on doing continuous verification and validation as one goes through the process. 

“If you’re just accelerating and not worrying about quality, that might work for the first release, maybe the second release iteration, but certainly if you don’t have that in place, and if you don’t have the testing to check, you’re going to start failing as you move forward,” Lambert added. “If you build quality into your accelerated delivery process, then you could deliver with confidence and make sure you don’t have those production outages.”

When beginning with test automation, organizations not only have to figure out how to create their test automation, but identify what things to automate because not everything can be automated, according to Lambert. Then, organizations need ways, practices and technologies to help them with the creation process.

While many organizations getting started with test automation tend to look for the simplest approach by looking for tools that are easy to use and that can be plugged into the pipeline, Lambert said that it is best to think long term. 

“One thing you have to look at is how is that going to scale? So a technology that you’re bringing in, or a capability that you’re bringing in might satisfy the use case that you have today, but is it going to satisfy the use case in six months time when you start expanding out to additional use cases or additional applications in your organization?” Lambert said. 

Once the tests are created, organizations then have to consider how to maintain their tests.

“Say I get up and running and everything starts rolling great. And then the next sprint starts and that next sprint is not actually only introducing new functionality. It’s actually making changes to existing functionality. So my tests need to be maintained along with the underlying code and capabilities of the underlying application,” Lambert said. 

This is where testing functionalities such as self-healing come in to make sure that everything doesn’t collapse in the middle of a sprint. This functionality stops the continuous integration process from failing, while also giving users ways of easily refactoring existing test cases so that they don’t have to throw them away and start again.
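
Conceptually, self-healing works by trying the preferred locator first, falling back to known alternates when it breaks, and recording the substitution so the test can be refactored later. The toy sketch below (with a stubbed page lookup and made-up locators) illustrates the idea without tying it to any specific tool.

    # Toy sketch of the self-healing idea; the page model and locators are stand-ins.
    PAGE = {"css:#buy-now": None, "text:Buy now": "<button>", "testid:buy": "<button>"}

    def find(locator: str):
        return PAGE.get(locator)            # stub for a real UI element lookup

    def self_healing_find(locators: list[str], healing_log: list[str]):
        primary, *fallbacks = locators
        element = find(primary)
        if element is not None:
            return element
        for alt in fallbacks:               # the primary locator broke; try known alternates
            element = find(alt)
            if element is not None:
                healing_log.append(f"{primary} -> {alt}")   # surface for later refactoring
                return element
        raise LookupError(f"No locator matched: {locators}")

    log: list[str] = []
    button = self_healing_find(["css:#buy-now", "text:Buy now", "testid:buy"], log)
    print(button, log)      # the build keeps running, and the healed locator is reported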

“As I’m moving further through my development life cycle, my number of tests grow, and this is where test execution becomes critical. So you have to start looking at your test suites and saying, okay, what capabilities are available for me that can optimize my test execution to focus on the key business risks and optimize my test suites. This is so that I can get rapid feedback inside of a sprint and can continue accelerated delivery from sprint to sprint,” said Lambert. 

Other key functionalities to accelerate execution and feedback include traceability, which ensures that the verification and validation of the product is complete. Also important is integration with the CI/CD pipeline. 

“What I want to achieve is not more and more tests. What I actually want is as few tests as I possibly can because that will minimize the maintenance effort, and still get the kind of risk coverage that I’m looking for,” Gartner’s Herschmann said.  

Herschmann explained that while test automation solves one problem, it can create another if not utilized properly. 

“I’m doing everything manually and I’m using automation now to accelerate that. Well that solves my problem of not able to run enough tests,” Herschmann said. “The new problem that I’ve potentially created now is that with all the tests that I run, I can no longer actually look at all of the test results and make sense of what I’m seeing here. So that’s why the test insights part, as an example, is now becoming the focus. I need something that helps me to do this in an automated fashion so that the result is that now I’m notified of the specific instances of where a test has failed or the patterns of where they have failed.”
