SD Times - Software Development News
https://sdtimes.com/

Report: Only 1 in 5 organizations have full visibility into their software supply chain (Thu, 07 Nov 2024)
https://sdtimes.com/security/report-only-1-in-5-organizations-have-full-visibility-into-their-software-supply-chain/

Several high-profile software supply chain security incidents over the last few years have put a spotlight on the need for visibility into the software supply chain. However, it seems those efforts may not be leading to the desired outcomes: a new survey found that only one out of five organizations believe they have visibility into every component and dependency in their software.

The survey, Anchore’s 2024 Software Supply Chain Security Report, also found that less than half of respondents are following supply chain best practices like creating software bills of materials (SBOMs) for the software they develop (49% of respondents) or for the open source projects they use (45%). Additionally, only 41% of respondents request SBOMs from the third-party vendors they use. Despite these low numbers, this is a significant improvement from 2022’s survey, when less than a third of respondents were following these practices.

The report found that 78% of respondents plan to increase their use of SBOMs in the next 18 months, and 32% of them plan to significantly increase use.

“The SBOM is now a critical component of software supply chain security. An SBOM provides visibility into software ingredients and is a foundation for understanding software vulnerabilities and risks,” Anchore wrote in the report.
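
To make the idea concrete, the short sketch below reads a CycloneDX-style SBOM file and lists the components it declares. This is an illustrative example only, not a feature of Anchore's report or tooling; the file name and field layout are assumptions based on the public CycloneDX JSON format.

```python
import json

def list_components(sbom_path: str) -> None:
    """Print the name, version, and package URL of each component in an SBOM."""
    with open(sbom_path) as f:
        sbom = json.load(f)

    # CycloneDX documents carry a "components" array describing each library,
    # framework, or file that makes up the software.
    for component in sbom.get("components", []):
        name = component.get("name", "<unknown>")
        version = component.get("version", "<unversioned>")
        purl = component.get("purl", "")
        print(f"{name} {version} {purl}".strip())

if __name__ == "__main__":
    list_components("sbom.cyclonedx.json")  # hypothetical file name
```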

The report also found that currently 76% of respondents are prioritizing software supply chain security.

Many companies are having to make this a priority as part of their efforts to comply with regulations. According to the report, organizations are now having to comply with an average of 4.9 regulations and standards, putting more pressure on them to get security right. 

Of the companies surveyed, more than half have a cross-functional (51%) or fully dedicated team (8%) that works on supply chain security. 

Finally, 77% of respondents are worried about how embedded AI libraries will impact their software supply chain security.  

For the survey, Anchore interviewed 106 leaders and practitioners who are involved in software supply chain security at their companies.

Tricentis Launches qTest Copilot (Thu, 07 Nov 2024)
https://sdtimes.com/ai/tricentis-launches-qtest-copilot/

Tricentis, a global leader in continuous testing and quality engineering, today announced the expansion of its test management and analytics platform, Tricentis qTest, with the launch of Tricentis qTest Copilot. The latest addition to its suite of generative AI-powered Tricentis Copilot solutions, qTest Copilot harnesses the power of generative AI to simplify and accelerate test case generation, allowing for greater test coverage and higher quality software releases.

qTest Copilot is a generative AI assistant that automatically drafts test cases and test steps based on source documents and user requirements, offering considerable time-saving benefits when compared to manual approaches. Embedded into the newest version of the qTest platform, qTest Copilot combines Tricentis’ scalable and unified test management technology with new AI-augmented features to allow QA and developer teams to greatly accelerate software delivery.

Users can quickly create test coverage of any application, as well as explore unidentified quality gaps by broadening the test scope to include tests for additional scenarios and unexpected events. With a single click, both test steps and expected results are generated in seconds, enabling users to deliver higher quality releases more confidently and with fewer escaping defects.

The addition of generative AI features into qTest also enables more common and consistent test case descriptions, which both new and existing teams can use to create standards for how test cases are written across their entire test coverage.

Other features include:

  • Select and easily control which projects and users are enabled for qTest Copilot.
  • Approve drafted test cases after modifying, deleting, or creating new steps as needed.
  • Prompt qTest Copilot to summarize for more concise outputs or to elaborate with more details.
  • Regenerate test steps or the entire test case without losing the overall test scope.

Learn more about how qTest Copilot, Tosca Copilot and Testim Copilot can help QA and development teams move faster and achieve better quality at https://www.tricentis.com/products/copilot

 

GitHub Copilot chat now provides guidance on rewording prompts (Thu, 07 Nov 2024)
https://sdtimes.com/ai/github-copilot-chat-now-provides-guidance-on-rewording-prompts/

GitHub Copilot’s chat functionality is being updated to provide developers guidance on how to reword their prompts so that they can get better responses. 

Microsoft shared that user feedback on GitHub Copilot indicated that some developers struggle with creating prompts, including understanding phrasing and what context to include. “In some cases, the experience left users feeling like they were getting too much or too little from their interactions,” Microsoft wrote in a blog post.

In response to this, GitHub Copilot’s chat will now be a more conversational experience that can adapt to a developer’s specific context and needs. 

For example, if a developer asks a question that is too vague, like “what is this?,” Copilot will now respond back saying that the “question is ambiguous because it lacks specific context or content” and will suggest some prompts that are more specific and will lead to better responses. In this example, the response included other sample prompts, like “What is the purpose of the code in #file:’BasketService.cs’?” or “Can you explain the errors in #file:’BasketService.cs’?”

Those suggested new prompts are clickable, so all a developer has to do is select one of the provided prompts, and GitHub Copilot will try again with the new prompt.  

“Our guided chat experience takes Copilot beyond simple input-output exchanges, turning it into a collaborative assistant. When the context is clear, Copilot provides direct and relevant answers. When it isn’t, Copilot guides you by asking follow-up questions to ensure clarity and precision,” Microsoft wrote.

The new chat experience is available in Visual Studio 2022 17.12 Preview 3 and above, according to Microsoft.

Google researchers successfully found a zero-day vulnerability using LLM assisted vulnerability detection (Wed, 06 Nov 2024)
https://sdtimes.com/security/google-researchers-successfully-found-a-zero-day-vulnerability-using-llm-assisted-vulnerability-detection/

One of Google’s security research initiatives, Project Zero, has successfully detected a zero-day memory safety vulnerability using LLM-assisted detection. “We believe this is the first public example of an AI agent finding a previously unknown exploitable memory-safety issue in widely used real-world software,” the team wrote in a post.

Project Zero is a security research team at Google that studies zero-day vulnerabilities, and back in June it announced Project Naptime, a framework for LLM-assisted vulnerability research. In recent months, Project Zero teamed up with Google DeepMind and turned Project Naptime into Big Sleep, which is what discovered the vulnerability.

The vulnerability discovered by Big Sleep was a stack buffer overflow in SQLite. The Project Zero team reported the vulnerability to the SQLite developers in October, and they were able to fix it on the same day. Additionally, the vulnerability was discovered before it appeared in an official release.

“We think that this work has tremendous defensive potential,” the Project Zero team wrote. “Finding vulnerabilities in software before it’s even released, means that there’s no scope for attackers to compete: the vulnerabilities are fixed before attackers even have a chance to use them.”

According to Project Zero, SQLite’s existing testing infrastructure, including OSS-Fuzz and the project’s own infrastructure, did not find the vulnerability.

This feat follows security research team Team Atlanta’s discovery earlier this year of a vulnerability in SQLite using LLM-assisted detection, which Project Zero used as inspiration for its own research.

According to Project Zero, the fact that Big Sleep was able to find a vulnerability in a well-fuzzed open source project is exciting, but the team also believes the results are still experimental and that a target-specific fuzzer would be just as effective at finding vulnerabilities.

“We hope that in the future this effort will lead to a significant advantage to defenders – with the potential not only to find crashing testcases, but also to provide high-quality root-cause analysis, triaging and fixing issues could be much cheaper and more effective in the future. We aim to continue sharing our research in this space, keeping the gap between the public state-of-the-art and private state-of-the-art as small as possible,” the team concluded. 

Symbiotic Security Announces Funding, Introduces First Real-Time Detection and Remediation of Software Development Including Just-in-Time Training (Wed, 06 Nov 2024)
https://sdtimes.com/symbiotic-security-announces-funding-introduces-first-real-time-detection-and-remediation-of-software-development-including-just-in-time-training/

Symbiotic Security today launched the industry’s first real-time security for software development that combines detection and remediation with just-in-time training – incorporating security testing and training directly into the development process without breaking developers’ workflows.

Backed with $3 million of seed funding from investors including Lerer Hippeau, Axeleo Capital, Factorial Capital, and others, the company has introduced its software-as-a-service, which works with the developer’s Integrated Development Environment (IDE) and enables developers to develop software more securely.

A Ponemon survey of 634 IT and IT security practitioners reported the top challenges to shift-left security were: a lack of integrated security tools (51%); an increase in work for developers (43%); and too many vulnerabilities to fix (40%). These are precisely the challenges addressed by Symbiotic Security.

“Traditional approaches to code security are broken, which we fix by integrating security at the time code is written,” said Jerome Robert, co-founder and CEO, Symbiotic Security. “Symbiotic requires no additional developer training – it is the training. Our mission is to be the developer’s partner in security and we believe that this is the defining moment for cyber security where the vision of ‘shift-left’ is finally realized.”

The concept of ‘shift-left’ is to integrate security into the earliest parts of the Software Development Life Cycle (SDLC), which includes passing security responsibilities to developers. The initiative hasn’t yet been successful because, until now, developers have not been properly equipped nor have they found any operational gain in being responsible for securing their assets.

Symbiotic provides developers with real-time feedback on potential security vulnerabilities as they write code, as well as remediation recommendations, and training with information that helps further educate developers on the specific security issues encountered. The company has launched its minimum viable product for iteration, feedback, and testing and already has active deployments at eight companies. All are actively leveraging both the remediation plugin and the training, while providing user feedback that Symbiotic is using to further enhance the product.

Symbiotic Security helps developers ship clean code, which helps eliminate security backlogs without disrupting workflows.

With Symbiotic’s software, security is no longer an afterthought; it is where it should have always been – integrated into the SDLC as a foundational part of the coding process. It continuously scans both code that has already been written and code as it is being created, so that potential threats are identified and resolved immediately. In addition, Symbiotic Security offers developers contextual remediations right within their IDE, boosting efficiency and reducing costs, while improving security.

“Jerome and co-founder Edouard Viot have a deep understanding of the problems underlying traditional code security and demonstrated remarkable foresight with their approach to addressing the growing demand for shift-left security solutions,” said Graham Brown, managing partner, Lerer Hippeau. “Symbiotic has the potential to transform the industry, empowering developers and security teams alike.”

“Symbiotic Security is a security solution that truly understands developers and makes them more productive,” said Simon Elcham, co-founder and chief technology officer, Trustpair. “By integrating into our existing workflows, it has helped our development and security teams work more efficiently, reducing security backlogs and enhancing code quality. Symbiotic Security is outpacing market standards in both functionality and business impact.”

For more information about Symbiotic Security, or to inquire about becoming a design partner, visit the website at www.symbioticsec.ai.

Using certifications to level up your development career (Wed, 06 Nov 2024)
https://sdtimes.com/softwaredev/using-certifications-to-level-up-your-development-career/

Building a career as a software developer can be valuable, but software development can be a competitive field to break into, especially in 2024, when over 130,000 layoffs have already occurred at tech companies. While not all 130,000 of those laid off were software engineers, developers have not been immune from the cuts.

One way developers can set themselves up for better opportunities is to pursue certifications for skills that are relevant to their career. A certification offers an opportunity for developers to show others that they have a particular skill; it’s one thing to list Kubernetes as a core competency on their resume, and another to say they’ve passed the certification exam for one of the CNCF’s Kubernetes certifications.

“People are really happy by taking a certification, because it is the validation of some knowledge,” said Christophe Sauthier, head of CNCF certifications and trainings, in a recent episode of our What the Dev? podcast. “It is something that we feel is really important because anybody can say that they know something, but proving that usually makes a real difference.”

A 2023 CompTIA report found that 80% of US HR professionals surveyed relied on technical certifications during the hiring process. Sauthier said the CNCF has conducted a survey looking into the impact of certifications as well, and has also seen that people who obtain them generally benefit. 

“More than half the people who answered the survey said that taking some training or certification helped them get a new job,” said Sauthier. “It is a way for people to be more recognized for what they know, and also to usually get better pay. And when I say a lot of people get better pay, it was about one third of the people who answered our survey who said that they had a higher pay because of taking training or certifications.”

Another survey from CompTIA in 2022 showed that IT professionals who obtained a new certification saw an average $13,000 increase in salary.

How to select a certification

In order to see these benefits, it’s important for anyone pursuing a certification to think about which one will best suit their needs, because they come in all shapes and sizes.

Sauthier says he recommends starting with an entry-level certification first, as this can enable someone to get used to what it means to take a certification. 

Then, it might make sense to move onto more advanced certifications. For instance, the CNCF’s Certified Kubernetes Security Specialist (CKS) certification is “quite tough”, he said. However, its difficulty is what appeals to people.  

“People are really attracted by it because it really proves something,” he said. “You need to actually solve real problems to be able to pass it. So we give you an environment and we tell you, ‘okay, there is this issue,’ or ‘please implement that,’ and we are then evaluating what you did.”

Sauthier did note that difficulty alone shouldn’t be a deciding factor. “When I’m looking at the various certifications, I am more interested in looking at something which is widely adopted and which is not opinionated,” he said. Having it not be opinionated, or not tied to a specific vendor, will ensure that the skills are more easily transferable. 

“Many vendors from our community are building their bricks on top of the great project we have within the CNCF, but the certifications we are designing are targeting those bricks so you will be able to reuse that knowledge on the various products that have been created by the vendors,” he said.

He went on to explain how this informs the CNCF’s process of certification development. He said that each question is approved by at least two people, which ensures that there is wide agreement. 

“That is something that is really important so that you are sure when you’re taking a certification from us that the knowledge that you will validate is something that you will be able to use with many vendors and many products over our whole community,” he said. “That’s really something important for us. We don’t want you to be vendor locked with the knowledge you have when you take one of a certification. So that’s really the most important thing for me, and not the difficulty of the certification itself.”

The CNCF recently took its certification program a step further by introducing Kubestronaut, an achievement people can get for completing all five of its Kubernetes certifications. Currently, there are 788 Kubestronauts, who get added benefits like a private Slack channel, coupons for other CNCF certifications, and a discount on CNCF events, like KubeCon. 

Shifting left with telemetry pipelines: The future of data tiering at petabyte scale (Tue, 05 Nov 2024)
https://sdtimes.com/monitor/shifting-left-with-telemetry-pipelines-the-future-of-data-tiering-at-petabyte-scale/

In today’s rapidly evolving observability and security use cases, the concept of “shifting left” has moved beyond just software development. With the consistent and rapid rise of data volumes across logs, metrics, traces, and events, organizations are required to be a lot more thoughtful in efforts to turn chaos into control when it comes to understanding and managing their streaming data sets. Teams are striving to be more proactive in the management of their mission critical production systems and need to achieve far earlier detection of potential issues. This approach emphasizes moving traditionally late-stage activities — like seeing, understanding, transforming, filtering, analyzing, testing, and monitoring — closer to the beginning of the data creation cycle. With the growth of next-generation architectures, cloud-native technologies, microservices, and Kubernetes, enterprises are increasingly adopting Telemetry Pipelines to enable this shift. A key element in this movement is the concept of data tiering, a data-optimization strategy that plays a critical role in aligning the cost-value ratio for observability and security teams.

The Shift Left Movement: Chaos to Control 

“Shifting left” originated in the realm of DevOps and software testing. The idea was simple: find and fix problems earlier in the process to reduce risk, improve quality, and accelerate development. As organizations have embraced DevOps and continuous integration/continuous delivery (CI/CD) pipelines, the benefits of shifting left have become increasingly clear — less rework, faster deployments, and more robust systems.

In the context of observability and security, shifting left means accomplishing the analysis, transformation, and routing of logs, metrics, traces, and events very far upstream, extremely early in their usage lifecycle — a very different approach in comparison to the traditional “centralize then analyze” method. By integrating these processes earlier, teams can not only drastically reduce costs for otherwise prohibitive data volumes, but can even detect anomalies, performance issues, and potential security threats much quicker, before they become major problems in production. The rise of microservices and Kubernetes architectures has specifically accelerated this need, as the complexity and distributed nature of cloud-native applications demand more granular and real-time insights, and each localized data set is distributed when compared to the monoliths of the past.

This leads to the growing adoption of Telemetry Pipelines.

What Are Telemetry Pipelines?

Telemetry Pipelines are purpose-built to enable next-generation architectures. They are designed to give visibility and to pre-process, analyze, transform, and route observability and security data from any source to any destination. These pipelines give organizations the comprehensive toolbox and set of capabilities to control and optimize the flow of telemetry data, ensuring that the right data reaches the right downstream destination in the right format, to enable all the right use cases. They offer a flexible and scalable way to integrate multiple observability and security platforms, tools, and services.

For example, in a Kubernetes environment, where the ephemeral nature of containers can scale up and down dynamically, logs, metrics, and traces from those dynamic workloads need to be processed and stored in real-time. Telemetry Pipelines provide the capability to aggregate data from various services, be granular about what you want to do with that data, and ultimately send it downstream to the appropriate end destination — whether that’s a traditional security platform like Splunk that has a high unit cost for data, or a more scalable and cost effective storage location optimized for large datasets long term, like AWS S3.

The Role of Data Tiering

As telemetry data continues to grow at an exponential rate, enterprises face the challenge of managing costs without compromising on the insights they need in real time, or the requirement of data retention for audit, compliance, or forensic security investigations. This is where data tiering comes in. Data tiering is a strategy that segments data into different levels (tiers) based on its value and use case, enabling organizations to optimize both cost and performance.

In observability and security, this means identifying high-value data that requires immediate analysis and applying a lot more pre-processing and analysis to that data, compared to lower-value data that can simply be stored more cost effectively and accessed later, if necessary. This tiered approach typically includes:

  1. Top Tier (High-Value Data): Critical telemetry data that is vital for real-time analysis and troubleshooting is ingested and stored in high-performance platforms like Splunk or Datadog. This data might include high-priority logs, metrics, and traces that are essential for immediate action. Although this can include plenty of data in raw formats, the high cost nature of these platforms typically leads to teams routing only the data that’s truly necessary. 
  2. Middle Tier (Moderate-Value Data): Data that is important but doesn’t meet the bar to send to a premium, conventional centralized system and is instead routed to more cost-efficient observability platforms with newer architectures like Edge Delta. This might include a much more comprehensive set of logs, metrics, and traces that give you a wider, more useful understanding of all the various things happening within your mission critical systems.
  3. Bottom Tier (All Data): Due to the extremely inexpensive nature of S3 relative to observability and security platforms, all telemetry data in its entirety can be feasibly stored for long-term trend analysis, audit or compliance, or investigation purposes in low-cost solutions like AWS S3. This is typically cold storage that can be accessed on demand, but doesn’t need to be actively processed.

This multi-tiered architecture enables large enterprises to get the insights they need from their data while also managing costs and ensuring compliance with data retention policies. It’s important to keep in mind that the Middle Tier typically includes all data within the Top Tier and more, and the same goes for the Bottom Tier (which includes all data from higher tiers and more). Because the cost per Tier for the underlying downstream destinations can, in many cases, be orders of magnitude different, there isn’t much of a benefit from not duplicating all data that you’re putting into Datadog also into your S3 buckets, for instance. It’s much easier and more useful to have a full data set in S3 for any later needs.
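
As a rough illustration of that tiering logic, the sketch below classifies a single log record and fans it out to one or more downstream destinations. The tier rules, destination names, and record fields are hypothetical stand-ins for this article, not the configuration model of any particular pipeline product.

```python
# Every record lands in low-cost object storage (bottom tier), a broader
# subset goes to a cost-efficient platform (middle tier), and only high-value
# records reach the premium platform (top tier).
HIGH_VALUE_LEVELS = {"ERROR", "CRITICAL"}
SECURITY_KEYWORDS = ("auth_failure", "privilege_escalation", "malware")

def route(record: dict) -> list[str]:
    destinations = ["s3_archive"]  # bottom tier: keep everything

    level = record.get("level", "INFO").upper()
    message = record.get("message", "")

    # Middle tier: anything at WARNING or above, for broad troubleshooting.
    if level in {"WARNING"} | HIGH_VALUE_LEVELS:
        destinations.append("edge_observability")

    # Top tier: errors and likely security signals go to the premium platform.
    if level in HIGH_VALUE_LEVELS or any(k in message for k in SECURITY_KEYWORDS):
        destinations.append("premium_siem")

    return destinations

print(route({"level": "error", "message": "auth_failure for user 42"}))
# ['s3_archive', 'edge_observability', 'premium_siem']
```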

How Telemetry Pipelines Enable Data Tiering

Telemetry Pipelines serve as the backbone of this tiered data approach by giving full control and flexibility in routing data based on predefined, out-of-the-box rules and/or business logic specific to the needs of your teams. Here’s how they facilitate data tiering:

  • Real-Time Processing: For high-value data that requires immediate action, Telemetry Pipelines provide real-time processing and routing, ensuring that critical logs, metrics, or security alerts are delivered to the right tool instantly. Because Telemetry Pipelines have an agent component, a lot of this processing can happen locally in an extremely compute, memory, and disk efficient manner.
  • Filtering and Transformation: Not all telemetry data is created equal, and teams have very different needs for how they may use this data. Telemetry Pipelines enable comprehensive filtering and transformation of any log, metric, trace, or event, ensuring that only the most critical information is sent to high-cost platforms, while the full dataset (including less critical data) can then be routed to more cost-efficient storage.
  • Data Enrichment and Routing: Telemetry Pipelines can ingest data from a wide variety of sources — Kubernetes clusters, cloud infrastructure, CI/CD pipelines, third-party APIs, etc. — and then apply various enrichments to that data before it’s then routed to the appropriate downstream platform.
  • Dynamic Scaling: As enterprises scale their Kubernetes clusters and increase their use of cloud services, the volume of telemetry data grows significantly. Due to their aligned architecture, Telemetry Pipelines also dynamically scale to handle this increasing load without affecting performance or data integrity.
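
The filtering, transformation, and enrichment steps listed above can be pictured as a small per-record function running close to the source. The sketch below is a generic illustration with assumed field names and metadata; it is not the API of any specific Telemetry Pipeline product.

```python
import copy

CLUSTER_METADATA = {"cluster": "prod-us-east", "env": "production"}

def preprocess(record: dict) -> dict | None:
    """Filter, transform, and enrich one telemetry record before routing."""
    # Filter: noisy debug-level records never leave the node.
    if record.get("level", "").upper() == "DEBUG":
        return None

    enriched = copy.deepcopy(record)

    # Transform: redact a sensitive field before it reaches any backend.
    if "client_ip" in enriched:
        enriched["client_ip"] = "REDACTED"

    # Enrich: attach deployment context that downstream tools use for correlation.
    enriched.update(CLUSTER_METADATA)
    return enriched
```
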
The Benefits for Observability and Security Teams

By adopting Telemetry Pipelines and data tiering, observability and security teams can benefit in several ways:

  • Cost Efficiency: Enterprises can significantly reduce costs by routing data to the most appropriate tier based on its value, avoiding the unnecessary expense of storing low-value data in high-performance platforms.
  • Faster Troubleshooting: Not only can there be some monitoring and anomaly detection within the Telemetry Pipelines themselves, but critical telemetry data is also processed extremely quickly and routed to high-performance platforms for real-time analysis, enabling teams to detect and resolve issues with much greater speed.
  • Enhanced Security: Data enrichments from lookup tables, pre-built packs that apply to various known third-party technologies, and more scalable long-term retention of larger datasets all enable security teams to have better ability to find and identify IOCs within all logs and telemetry data, improving their ability to detect threats early and respond to incidents faster.
  • Scalability: As enterprises grow and their telemetry needs expand, Telemetry Pipelines can naturally scale with them, ensuring that they can handle increasing data volumes without sacrificing performance.

It all starts with Pipelines!

Telemetry Pipelines are the core foundation to sustainably managing the chaos of telemetry — and they are crucial in any attempt to wrangle growing volumes of logs, metrics, traces, and events. As large enterprises continue to shift left and adopt more proactive approaches to observability and security, we see that Telemetry Pipelines and data tiering are becoming essential in this transformation. By using a tiered data management strategy, organizations can optimize costs, improve operational efficiency, and enhance their ability to detect and resolve issues earlier in the life cycle.

One additional key advantage that we didn’t focus on in this article, but is important to call out in any discussion of modern Telemetry Pipelines, is their full end-to-end support for OpenTelemetry (OTel), which is increasingly becoming the industry standard for telemetry data collection and instrumentation. With OTel support built in, these pipelines seamlessly integrate with diverse environments, enabling observability and security teams to collect, process, and route telemetry data from any source with ease. This comprehensive compatibility, combined with the flexibility of data tiering, allows enterprises to achieve unified, scalable, and cost-efficient observability and security that’s designed to scale to tomorrow and beyond.


To learn more about Kubernetes and the cloud native ecosystem, join us at KubeCon + CloudNativeCon North America, in Salt Lake City, Utah, on November 12-15, 2024.

AI-Powered Solutions from Parasoft Slash Testing Failure Rates and Boost Developer Efficiency (Tue, 05 Nov 2024)
https://sdtimes.com/ai-powered-solutions-from-parasoft-slash-testing-failure-rates-and-boost-developer-efficiency/

Parasoft, a leader in AI-powered software testing, has taken another step in strategically integrating AI and ML quality enhancements where development teams need them most – such as using natural language for troubleshooting, or checking code in real time.

In its campaign to create reliable, customizable testing solutions, Parasoft’s investment in AI serves a singular purpose: to address the quality risk developers and testers face as they’re pressured throughout product release cycles. While some AI advancements may emerge as a trend, Parasoft’s strategic AI investment has been a steadfast necessity, built on a decade of development. As more development teams turn to AI, the foresight of this approach underscores the company’s commitment to helping users augment quality. It helps organizations integrate the Large Language Models (LLMs) they already use into advanced testing workflows.

“Parasoft’s latest innovations are the result of a long-term commitment to AI in software testing,” said Igor Kirilenko, Chief Product Officer, Parasoft. “We’ve been incrementally improving, one release at a time, toward a foundation that offers unparalleled reliability, choice, and control in testing workflows.”

He added, “These latest enhancements are backed by dozens of our technology patents, designed to ease the journey to fully autonomous software testing.”

In Parasoft’s latest product releases, developers gain greater control and feedback in strategic stages of the software development lifecycle – from continuous code validation to robust support for various LLMs.

Faster Time to First Feedback

Live Unit Testing has been added for real-time code verification in this version of Jtest, an AI-powered Java developer productivity solution. This enhancement lets developers automatically and continuously execute unit tests impacted by code changes in their Integrated Development Environment (IDE). The ability to validate code changes before checking them into source control is a major time saver, resulting in fewer build and regression failures.

Parasoft’s machine-learning engine operates in the background, correlating recent code changes to impacted unit tests while autonomously executing tests in the IDE, giving the engineer continuous feedback. As an extension of Parasoft’s CLI-based test impact analysis announced earlier this year, this new combination offers two major benefits – it can help accelerate feedback on testing by 90% or more while slashing build and regression failures.
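
Conceptually, test impact analysis of this kind maps changed source files to the tests known to exercise them and runs only that subset. The sketch below is a simplified, generic illustration of the idea with made-up file and test names; it does not represent Parasoft's actual engine.

```python
# Coverage map from source files to the tests that exercise them
# (in practice this would be derived from previously collected coverage data).
COVERAGE_MAP = {
    "src/basket_service.java": {"BasketServiceTest#testTotals",
                                "BasketServiceTest#testDiscounts"},
    "src/payment_gateway.java": {"PaymentGatewayTest#testCharge"},
}

def impacted_tests(changed_files: list[str]) -> set[str]:
    """Return only the tests affected by the given changed files."""
    selected: set[str] = set()
    for path in changed_files:
        selected |= COVERAGE_MAP.get(path, set())
    return selected

# A change to the basket service triggers only its two covering tests.
print(sorted(impacted_tests(["src/basket_service.java"])))
```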

Parasoft’s Jtest and dotTEST solutions, for Java and C#/VB.NET development respectively, offer Live Static Analysis to automate continuous code scanning and remediate defects as they occur. Coupled with Parasoft’s AI-generated code fixes and new extended support for LLM providers, this gives development teams a new advantage: the ability to receive continuous feedback on quality, security, reliability, and maintainability, all while remediating static analysis findings faster.

Accelerate Learning and Troubleshooting with AI Assistant

A new AI Assistant as part of Parasoft SOAtest and Virtualize now integrates with various LLM providers, such as OpenAI and Azure OpenAI.

To harness the power of this highly intuitive functionality, developers can simply ask questions in natural language and receive immediate answers about SOAtest and Virtualize. It’s designed to help users learn faster and troubleshoot problems more efficiently. The overall effect is that it enhances testing workflows by integrating AI-powered support into existing tester and developer toolsets.

Empowering Users with Choice of LLM

Customers need to clear hurdles as they implement LLMs into their development and testing processes. To help them, Parasoft has expanded LLM support for various providers in the newest releases of Jtest, dotTEST, SOAtest, and Virtualize.

By integrating their preferred LLMs with Parasoft’s automated software testing solutions, teams can select the LLM that best fits their needs. This also addresses data security and privacy concerns by letting them use on-prem deployment options.

Microsoft enhances Data Wrangler with the ability to prepare data using natural language with new GitHub Copilot integration (Tue, 05 Nov 2024)
https://sdtimes.com/data/microsoft-enhances-data-wrangler-with-the-ability-to-prepare-data-using-natural-language-with-new-github-copilot-integration/

Microsoft has announced that GitHub Copilot is now integrated with Data Wrangler, an extension for VS Code for viewing, cleaning, and preparing data. 

By integrating GitHub Copilot capabilities into the tool, users will now be able to clean and transform data in VS Code with natural language prompts. It will also be able to suggest how to fix errors in data transformation code.

According to Microsoft, one of the current limitations of using AI for exploratory data analysis is that the AI often lacks context of the data, leading to more generalized responses. Further, the process of verifying that the generated code is correct can be a very manual and time-consuming process. 

The integration of Data Wrangler and GitHub Copilot addresses these issues because it allows the user to provide GitHub Copilot with data context, enabling the tool to generate code for a specific dataset. It also provides a preview of the behavior of the code, which allows users to visually validate the response. 

Some examples of how GitHub Copilot can be used in Data Wrangler include formatting a datetime column, removing columns with over 40% missing values, or fixing an error in a data transformation — all using natural language prompts. 
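
For context, the transformation code such prompts produce might resemble the pandas sketch below. The DataFrame and column names are made up for illustration; this is not literal output captured from Copilot or Data Wrangler.

```python
import pandas as pd

df = pd.DataFrame({
    "order_date": ["2024-11-01", "2024-11-02", None],
    "mostly_missing": [None, None, 1.0],
    "amount": [10.0, 12.5, 9.99],
})

# "Format the datetime column": parse strings into proper datetime values.
df["order_date"] = pd.to_datetime(df["order_date"])

# "Remove columns with over 40% missing values": keep columns whose share of
# nulls is at or below the 0.4 threshold.
df = df.loc[:, df.isna().mean() <= 0.4]

print(df.dtypes)
```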

Using this new integration will require having the Data Wrangler VS Code extension, the GitHub Copilot VS Code extension, and an active GitHub Copilot subscription.

Microsoft also announced that this is just the first of many Copilot enhancements planned for Data Wrangler, and additional functionality will be added in the future. 

WSO2’s latest product release allows AI services to be managed like APIs (Tue, 05 Nov 2024)
https://sdtimes.com/api/wso2s-latest-product-release-allows-ai-services-to-be-managed-like-apis/

The API management platform WSO2 has announced a slew of new updates aimed at helping customers manage APIs in a technology landscape increasingly dependent on AI and Kubernetes. The updates span the releases of WSO2 API Manager 4.4, WSO2 API Platform for Kubernetes (APK) 1.2, and WSO2 API Microgateway 3.2, which are all available today. 

“As organizations seek a competitive edge through innovative digital experiences, they need to invest equally in state-of-the-art technologies and in fostering the productivity of their software development teams,” said Christopher Davey, vice president and general manager of API management at WSO2. “With new functionality for managing AI services as APIs and extended support for Kubernetes as the preferred platform for digital innovation, WSO2 API Manager and WSO2 APK are continuing to enhance developers’ experiences while delivering a future-proof environment for their evolving needs.”

The company announced its Egress API Management capability, which allows developers to manage their AI services as APIs. It supports both internal and external AI services, and offers full life cycle API management, governance, and built-in support for providers such as OpenAI, Mistral AI, and Microsoft Azure OpenAI. 

The egress, or outbound, gateway experience enforces policies, providing secure and efficient access to AI models, as well as reducing costs by allowing companies to control AI traffic via backend rate limiting and subscription-level rate limiting of AI APIs. 
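
To illustrate the general idea of subscription-level rate limiting for AI traffic (token counts per subscriber within a time window), here is a generic sketch. It is not WSO2's configuration model or implementation; the class, limits, and subscriber names are assumptions for demonstration.

```python
import time

class TokenRateLimiter:
    """Allow or reject AI API calls based on tokens used per subscriber per minute."""

    def __init__(self, max_tokens_per_minute: int):
        self.max_tokens = max_tokens_per_minute
        self.usage: dict[str, list[tuple[float, int]]] = {}

    def allow(self, subscriber: str, requested_tokens: int) -> bool:
        now = time.time()
        # Keep only usage entries from the last 60 seconds.
        window = [(t, n) for (t, n) in self.usage.get(subscriber, [])
                  if now - t < 60]
        used = sum(n for _, n in window)
        if used + requested_tokens > self.max_tokens:
            return False  # over budget: reject, queue, or downgrade the call
        window.append((now, requested_tokens))
        self.usage[subscriber] = window
        return True

limiter = TokenRateLimiter(max_tokens_per_minute=10_000)
print(limiter.allow("team-alpha", 4_000))  # True
print(limiter.allow("team-alpha", 7_000))  # False: would exceed the limit
```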

WSO2 also announced many new features to support the increase of APIs running on Kubernetes. A new version of the WSO2 API Microgateway — a cloud-native gateway for microservices — has been released, and it aligns with the latest WSO2 API Manager release, improving scalability while also maintaining governance, reliability, and security.

WSO2 APK was updated to align with the gRPC Route specification, improving integration with Kubernetes environments and facilitating better control over gRPC services.

The latest version of WSO2 APK also includes new traffic filters for HTTP Routes, providing more flexibility and precision when routing HTTP traffic.

For better developer productivity in general, WSO2 also improved API discoverability by updating the unified control plane in WSO2 API Manager, so developers can now search for APIs using the content in API definition files directly in the Developer Portal and Publisher Portal.

And finally, to improve security and access control, the control plane also now supports the ability to configure separate mTLS authentication settings for production and sandbox environments. The latest release also adds support for personal access tokens (PAT), which provide secure, time-limited authentication to APIs without a username and password. 
