Latest News Archives - SD Times
https://sdtimes.com/category/latest-news/

Navigating the complexities of managing global address data
https://sdtimes.com/data/navigating-the-complexities-of-managing-global-address-data/ (Fri, 08 Nov 2024)

The U.S. Postal Service (USPS) delivers mail to almost 167 million addresses in the United States, and anyone who has ordered something online has likely had a package arrive late (or not at all) because the address was entered incorrectly or in an unexpected format.

The USPS has a standard format it accepts, but that format is not standard around the world. Internationally, there are more than 200 different address formats and more than 20 language scripts used to write them.

Given the complexity of considering all of these different global formats, using a verification service like Melissa’s Global Address service can help ensure that all addresses are properly formatted based on where they need to go, which improves deliverability.

In a recent SD Times microwebinar, John DeMatteo, solutions engineer I at Melissa, explained that “fewer errors and returns equals more time to be working on other things, as well as less money spent on returned packages.”

Melissa’s Global Address service takes in addresses and returns them as validated, enriched, and standardized addresses for more than 250 countries and territories. According to DeMatteo, validated means an address was confirmed through official sources as being accurate and deliverable; enriched means the address was appended with additional data not present in the original request; and standardized means an address is output in the preferred format.

During the microwebinar, DeMatteo gave a demo of Melissa’s Global Address service with the following input address:

FF: 10 Dziadoszaska, Pozna, 61-248, PL

Global Address identifies this as an address in Poland and reformats it in that country’s preferred address format. It also adds diacritics, the marks that appear over certain letters in the Polish alphabet (e.g., ć, ń, ó, ś, ź).

AddressLine1: ul. Dziadoszańska 10
AddressLine2: 61-248 Poznań
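
To make the integration concrete, here is a minimal sketch in Python of what calling a global address verification service and consuming the standardized output might look like. The endpoint URL, parameter names, and response fields are illustrative assumptions, not Melissa’s documented Global Address API.

```python
# Minimal sketch of calling a global address verification service.
# NOTE: the endpoint, parameters, and response fields are illustrative
# assumptions, not Melissa's documented Global Address API.
import requests

def verify_address(free_form_address: str, country_hint: str) -> dict:
    """Send a free-form address and return the validated, standardized result."""
    response = requests.get(
        "https://api.example.com/global-address/verify",  # hypothetical endpoint
        params={
            "address": free_form_address,
            "country": country_hint,  # best practice: always pass the country
            "format": "json",
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    result = verify_address("10 Dziadoszaska, Pozna, 61-248", "PL")
    # A standardized record would carry fields along the lines of:
    #   AddressLine1: "ul. Dziadoszańska 10"
    #   AddressLine2: "61-248 Poznań"
    print(result.get("AddressLine1"), result.get("AddressLine2"))
```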

The addresses can also be transliterated to Native, Latin, or the Input script. According to DeMatteo, the difference between transliteration and translation is that transliteration converts character by character whereas translation converts whole words.  
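
A quick way to see the difference: transliteration maps characters, not meaning. The sketch below uses the open-source unidecode package (an assumption for illustration, not part of Melissa’s tooling) to convert the Polish output to plain Latin characters; a translation of the same line would instead change the words themselves, for example rendering “ulica” as “street.”

```python
# Transliteration converts character by character; translation converts words.
# unidecode is an open-source package used here purely for illustration.
from unidecode import unidecode

standardized = "ul. Dziadoszańska 10, 61-248 Poznań"

# Transliteration: same words, Latin characters only.
print(unidecode(standardized))  # -> "ul. Dziadoszanska 10, 61-248 Poznan"

# A translation (not shown) would instead replace the words,
# e.g. rendering "ulica" as "street" rather than re-spelling it.
```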

To show transliteration in action, another example he shared during the microwebinar is an input written in Kanji — a set of characters used in Japanese writing — that was requested to be output into Latin script. “This is a lot more readable for me if I’m a data steward or someone working with the data,” he said. 

When it comes to making the most out of Global Address, like with any data verification process, “the better the data we have for the input, the better data we have for the output,” DeMatteo said. Therefore, there are a couple of best practices that he recommends following when working with Global Address.

While Global Address is good at detecting the country from the input, he says that when possible, the country code and name should be included with every record. He also recommends sending in multiple addresses at once for batch processing, which can improve speed and efficiency. And finally, customers should avoid including extraneous information not related to the address. 
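
Putting those recommendations together, a batch submission might look like the sketch below. As before, the endpoint and field names are placeholders rather than Melissa’s actual API; the point is that each record carries an explicit country code and nothing beyond the address itself.

```python
# Sketch of batch address verification following the best practices above:
# an explicit country per record, records sent in batches, and no
# non-address data (names, order notes) in the request.
# The endpoint and field names are illustrative placeholders.
import requests

records = [
    {"id": "1", "address": "10 Dziadoszaska, Pozna, 61-248", "country": "PL"},
    {"id": "2", "address": "350 5th Ave, New York, NY 10118", "country": "US"},
]

def verify_batch(batch: list) -> list:
    response = requests.post(
        "https://api.example.com/global-address/batch",  # hypothetical endpoint
        json={"records": batch},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["records"]

for result in verify_batch(records):
    print(result["id"], result.get("AddressLine1"), result.get("AddressLine2"))
```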

Global Address can be used on its own, or better yet, in combination with Melissa’s other verification services, including Global Name, Global Email, and Global Phone.

“The Global Suite works together to provide a comprehensive Validation, Enrichment, and Standardization solution for the big four data types,” he said. “Used together, customers can ensure their data is of the highest quality possible.”

Report: Only 1 in 5 organizations have full visibility into their software supply chain
https://sdtimes.com/security/report-only-1-in-5-organizations-have-full-visibility-into-their-software-supply-chain/ (Thu, 07 Nov 2024)

Several high profile software supply chain security incidents over the last few years have put more of a spotlight on the need to have visibility into the software supply chain. However, it seems as though those efforts may not be leading to the desired outcomes, as a new survey found that only one out of five organizations believe they have that visibility into every component and dependency in their software.

The survey, Anchore’s 2024 Software Supply Chain Security Report, also found that less than half of respondents are following supply chain best practices like creating software bills of materials (SBOMs) for the software they develop (49% of respondents) or for the open source projects they use (45%). Additionally, only 41% of respondents request SBOMs from the third-party vendors they use. Despite these low numbers, this is a significant improvement from 2022’s survey, when less than a third of respondents were following these practices.

The report found that 78% of respondents are planning on increasing their use of SBOMs in the next 18 months, and 32% of them plan to significantly increase use. 

“The SBOM is now a critical component of software supply chain security. An SBOM provides visibility into software ingredients and is a foundation for understanding software vulnerabilities and risks,” Anchore wrote in the report.
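
For teams getting started, an SBOM is usually a machine-readable inventory in a format such as CycloneDX or SPDX. The sketch below reads a CycloneDX JSON file and lists its components; the file name is a placeholder, and only the commonly populated fields are shown, not the full specification.

```python
# Sketch: list the components recorded in a CycloneDX JSON SBOM.
# "sbom.json" is a placeholder path; name, version, and purl are the
# commonly populated component fields, not the full CycloneDX schema.
import json

with open("sbom.json") as f:
    sbom = json.load(f)

for component in sbom.get("components", []):
    print(
        component.get("name", "<unknown>"),
        component.get("version", "<unknown>"),
        component.get("purl", ""),
    )
```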

The report also found that currently 76% of respondents are prioritizing software supply chain security.

Many companies are having to make this a priority as part of their efforts to comply with regulations. According to the report, organizations are now having to comply with an average of 4.9 regulations and standards, putting more pressure on them to get security right. 

Of the companies surveyed, more than half have a cross-functional (51%) or fully dedicated team (8%) that works on supply chain security. 

Finally, 77% of respondents are worried about how embedded AI libraries will impact their software supply chain security.  

For the survey, Anchore interviewed 106 leaders and practitioners who are involved in software supply chain security at their companies.

GitHub Copilot chat now provides guidance on rewording prompts
https://sdtimes.com/ai/github-copilot-chat-now-provides-guidance-on-rewording-prompts/ (Thu, 07 Nov 2024)

GitHub Copilot’s chat functionality is being updated to provide developers guidance on how to reword their prompts so that they can get better responses. 

Microsoft shared that user feedback on GitHub Copilot indicated that some developers struggle with creating prompts, including understanding phrasing and what context to include. “In some cases, the experience left users feeling like they were getting too much or too little from their interactions,” Microsoft wrote in a blog post.

In response to this, GitHub Copilot’s chat will now be a more conversational experience that can adapt to a developer’s specific context and needs. 

For example, if a developer asks a question that is too vague, like “what is this?,” Copilot will now respond back saying that the “question is ambiguous because it lacks specific context or content” and will suggest some prompts that are more specific and will lead to better responses. In this example, the response included other sample prompts, like “What is the purpose of the code in #file:’BasketService.cs’?” or “Can you explain the errors in #file:’BasketService.cs’?”

Those suggested new prompts are clickable, so all a developer has to do is select one of the provided prompts, and GitHub Copilot will try again with the new prompt.  

“Our guided chat experience takes Copilot beyond simple input-output exchanges, turning it into a collaborative assistant. When the context is clear, Copilot provides direct and relevant answers. When it isn’t, Copilot guides you by asking follow-up questions to ensure clarity and precision,” Microsoft wrote.

The new chat experience is available in Visual Studio 2022 17.12 Preview 3 and above, according to Microsoft.

Google researchers successfully found a zero-day vulnerability using LLM assisted vulnerability detection
https://sdtimes.com/security/google-researchers-successfully-found-a-zero-day-vulnerability-using-llm-assisted-vulnerability-detection/ (Wed, 06 Nov 2024)

Project Zero, one of Google’s security research initiatives, has successfully detected a zero-day memory safety vulnerability using LLM-assisted detection. “We believe this is the first public example of an AI agent finding a previously unknown exploitable memory-safety issue in widely used real-world software,” the team wrote in a post.

Project Zero is a security research team at Google that studies zero-day vulnerabilities. Back in June, it announced Project Naptime, a framework for LLM-assisted vulnerability research. In recent months, Project Zero teamed up with Google DeepMind and turned Project Naptime into Big Sleep, which is what discovered the vulnerability.

The vulnerability discovered by Big Sleep was a stack buffer overflow in SQLite. The Project Zero team reported the vulnerability to the developers in October, who were able to fix it on the same day. Additionally, the vulnerability was discovered before it appeared in an official release.

“We think that this work has tremendous defensive potential,” the Project Zero team wrote. “Finding vulnerabilities in software before it’s even released, means that there’s no scope for attackers to compete: the vulnerabilities are fixed before attackers even have a chance to use them.”

According to Project Zero, SQLite’s existing testing infrastructure, including OSS-Fuzz and the project’s own infrastructure, did not find the vulnerability.

This feat follows the security research team Team Atlanta’s discovery earlier this year of a vulnerability in SQLite, also using LLM-assisted detection. Project Zero used that work as inspiration for its own research.

According to Project Zero, the fact that Big Sleep was able to find a vulnerability in a well-fuzzed open source project is exciting, but the team also believes the results are still experimental and that a target-specific fuzzer would likely be just as effective at finding vulnerabilities.

“We hope that in the future this effort will lead to a significant advantage to defenders – with the potential not only to find crashing testcases, but also to provide high-quality root-cause analysis, triaging and fixing issues could be much cheaper and more effective in the future. We aim to continue sharing our research in this space, keeping the gap between the public state-of-the-art and private state-of-the-art as small as possible,” the team concluded. 

Using certifications to level up your development career
https://sdtimes.com/softwaredev/using-certifications-to-level-up-your-development-career/ (Wed, 06 Nov 2024)

Building a career as a software developer can be rewarding, but the field can be a competitive one to break into, especially in 2024, when over 130,000 layoffs have already occurred at tech companies. While not all 130,000 of those may have been software engineers, developers have not been immune from the cuts.

One way developers can set themselves up for better opportunities is to pursue certifications for skills that are relevant to their career. A certification offers an opportunity for developers to show others that they have a particular skill; it’s one thing to list Kubernetes as a core competency on a resume, and another to say they’ve passed the certification exam for one of the CNCF’s Kubernetes certifications.

“People are really happy by taking a certification, because it is the validation of some knowledge,” said Christophe Sauthier, head of CNCF certifications and trainings, in a recent episode of our What the Dev? podcast. “It is something that we feel is really important because anybody can say that they know something, but proving that usually makes a real difference.”

A 2023 CompTIA report found that 80% of US HR professionals surveyed relied on technical certifications during the hiring process. Sauthier said the CNCF has conducted a survey looking into the impact of certifications as well, and has also seen that people who obtain them generally benefit. 

“More than half the people who answered the survey said that taking some training or certification helped them get a new job,” said Sauthier. “It is a way for people to be more recognized for what they know, and also to usually get better pay. And when I say a lot of people get better pay, it was about one third of the people who answered our survey who said that they had a higher pay because of taking training or certifications.”

Another survey from CompTIA in 2022 showed that IT professionals who obtained a new certification saw an average $13,000 increase in salary.

How to select a certification

In order to see these benefits, it’s important for anyone pursuing a certification to think about which one will best suit their needs, because they come in all shapes and sizes.

Sauthier says he recommends starting with an entry-level certification first, as this can enable someone to get used to what it means to take a certification. 

Then, it might make sense to move on to more advanced certifications. For instance, the CNCF’s Certified Kubernetes Security Specialist (CKS) certification is “quite tough”, he said. However, its difficulty is what appeals to people.

“People are really attracted by it because it really proves something,” he said. “You need to actually solve real problems to be able to pass it. So we give you an environment and we tell you, ‘okay, there is this issue,’ or ‘please implement that,’ and we are then evaluating what you did.”

Sauthier did note that difficulty alone shouldn’t be a deciding factor. “When I’m looking at the various certifications, I am more interested in looking at something which is widely adopted and which is not opinionated,” he said. Having it not be opinionated, or not tied to a specific vendor, will ensure that the skills are more easily transferable. 

“Many vendors from our community are building their bricks on top of the great project we have within the CNCF, but the certifications we are designing are targeting those bricks so you will be able to reuse that knowledge on the various products that have been created by the vendors,” he said.

He went on to explain how this informs the CNCF’s process of certification development. He said that each question is approved by at least two people, which ensures that there is wide agreement. 

“That is something that is really important so that you are sure when you’re taking a certification from us that the knowledge that you will validate is something that you will be able to use with many vendors and many products over our whole community,” he said. “That’s really something important for us. We don’t want you to be vendor locked with the knowledge you have when you take one of our certifications. So that’s really the most important thing for me, and not the difficulty of the certification itself.”

The CNCF recently took its certification program a step further by introducing Kubestronaut, an achievement people can get for completing all five of its Kubernetes certifications. Currently, there are 788 Kubestronauts, who get added benefits like a private Slack channel, coupons for other CNCF certifications, and a discount on CNCF events, like KubeCon. 

Shifting left with telemetry pipelines: The future of data tiering at petabyte scale
https://sdtimes.com/monitor/shifting-left-with-telemetry-pipelines-the-future-of-data-tiering-at-petabyte-scale/ (Tue, 05 Nov 2024)

In today’s rapidly evolving observability and security use cases, the concept of “shifting left” has moved beyond just software development. With the consistent and rapid rise of data volumes across logs, metrics, traces, and events, organizations are required to be a lot more thoughtful in efforts to turn chaos into control when it comes to understanding and managing their streaming data sets. Teams are striving to be more proactive in the management of their mission critical production systems and need to achieve far earlier detection of potential issues. This approach emphasizes moving traditionally late-stage activities — like seeing, understanding, transforming, filtering, analyzing, testing, and monitoring — closer to the beginning of the data creation cycle. With the growth of next-generation architectures, cloud-native technologies, microservices, and Kubernetes, enterprises are increasingly adopting Telemetry Pipelines to enable this shift. A key element in this movement is the concept of data tiering, a data-optimization strategy that plays a critical role in aligning the cost-value ratio for observability and security teams.

The Shift Left Movement: Chaos to Control 

“Shifting left” originated in the realm of DevOps and software testing. The idea was simple: find and fix problems earlier in the process to reduce risk, improve quality, and accelerate development. As organizations have embraced DevOps and continuous integration/continuous delivery (CI/CD) pipelines, the benefits of shifting left have become increasingly clear — less rework, faster deployments, and more robust systems.

In the context of observability and security, shifting left means accomplishing the analysis, transformation, and routing of logs, metrics, traces, and events very far upstream, extremely early in their usage lifecycle — a very different approach in comparison to the traditional “centralize then analyze” method. By integrating these processes earlier, teams can not only drastically reduce costs for otherwise prohibitive data volumes, but can even detect anomalies, performance issues, and potential security threats much quicker, before they become major problems in production. The rise of microservices and Kubernetes architectures has specifically accelerated this need, as the complexity and distributed nature of cloud-native applications demand more granular and real-time insights, and each localized data set is distributed when compared to the monoliths of the past.

This leads to the growing adoption of Telemetry Pipelines.

What Are Telemetry Pipelines?

Telemetry Pipelines are purpose-built to enable next-generation architectures. They are designed to give visibility and to pre-process, analyze, transform, and route observability and security data from any source to any destination. These pipelines give organizations the comprehensive toolbox and set of capabilities to control and optimize the flow of telemetry data, ensuring that the right data reaches the right downstream destination in the right format, to enable all the right use cases. They offer a flexible and scalable way to integrate multiple observability and security platforms, tools, and services.

For example, in a Kubernetes environment, where the ephemeral nature of containers can scale up and down dynamically, logs, metrics, and traces from those dynamic workloads need to be processed and stored in real-time. Telemetry Pipelines provide the capability to aggregate data from various services, be granular about what you want to do with that data, and ultimately send it downstream to the appropriate end destination — whether that’s a traditional security platform like Splunk that has a high unit cost for data, or a more scalable and cost effective storage location optimized for large datasets long term, like AWS S3.

The Role of Data Tiering

As telemetry data continues to grow at an exponential rate, enterprises face the challenge of managing costs without compromising on the insights they need in real time, or the requirement of data retention for audit, compliance, or forensic security investigations. This is where data tiering comes in. Data tiering is a strategy that segments data into different levels (tiers) based on its value and use case, enabling organizations to optimize both cost and performance.

In observability and security, this means identifying high-value data that requires immediate analysis and applying a lot more pre-processing and analysis to that data, compared to lower-value data that can simply be stored more cost effectively and accessed later, if necessary. This tiered approach typically includes:

  1. Top Tier (High-Value Data): Critical telemetry data that is vital for real-time analysis and troubleshooting is ingested and stored in high-performance platforms like Splunk or Datadog. This data might include high-priority logs, metrics, and traces that are essential for immediate action. Although this can include plenty of data in raw formats, the high cost nature of these platforms typically leads to teams routing only the data that’s truly necessary. 
  2. Middle Tier (Moderate-Value Data): Data that is important but doesn’t meet the bar to send to a premium, conventional centralized system and is instead routed to more cost-efficient observability platforms with newer architectures like Edge Delta. This might include a much more comprehensive set of logs, metrics, and traces that give you a wider, more useful understanding of all the various things happening within your mission critical systems.
  3. Bottom Tier (All Data): Due to the extremely inexpensive nature of S3 relative to observability and security platforms, all telemetry data in its entirety can be feasibly stored for long-term trend analysis, audit or compliance, or investigation purposes in low-cost solutions like AWS S3. This is typically cold storage that can be accessed on demand, but doesn’t need to be actively processed.

This multi-tiered architecture enables large enterprises to get the insights they need from their data while also managing costs and ensuring compliance with data retention policies. It’s important to keep in mind that the Middle Tier typically includes all data within the Top Tier and more, and the same goes for the Bottom Tier (which includes all data from higher tiers and more). Because the cost per tier of the underlying downstream destinations can, in many cases, differ by orders of magnitude, there is little downside to also duplicating everything you send to Datadog into your S3 buckets, for instance. It’s much easier and more useful to have a full data set in S3 for any later needs.
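
As a concrete illustration, the tier assignment described above boils down to a routing rule applied per record. The sketch below expresses that logic in Python; the attribute names and destination labels are assumptions, and a real pipeline would express these rules in its own configuration rather than application code.

```python
# Sketch of the three-tier routing logic described above. Attribute names
# and destination labels are illustrative assumptions, not any vendor's API.

def route_record(record: dict) -> list:
    """Return every destination a telemetry record should be sent to."""
    destinations = ["s3-archive"]  # Bottom tier: everything is retained cheaply.

    # Middle tier: a broader, still-useful set of data goes to a
    # cost-efficient observability backend.
    if record.get("level") not in ("trace", "debug"):
        destinations.append("observability-platform")

    # Top tier: only high-value, immediately actionable data goes to the
    # premium platform with a high unit cost.
    if record.get("level") in ("error", "critical") or record.get("security_alert"):
        destinations.append("premium-siem")

    return destinations

example = {"level": "error", "service": "checkout", "security_alert": False}
print(route_record(example))  # -> ['s3-archive', 'observability-platform', 'premium-siem']
```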

How Telemetry Pipelines Enable Data Tiering

Telemetry Pipelines serve as the backbone of this tiered data approach by giving full control and flexibility in routing data based on predefined, out-of-the-box rules and/or business logic specific to the needs of your teams. Here’s how they facilitate data tiering:

  • Real-Time Processing: For high-value data that requires immediate action, Telemetry Pipelines provide real-time processing and routing, ensuring that critical logs, metrics, or security alerts are delivered to the right tool instantly. Because Telemetry Pipelines have an agent component, a lot of this processing can happen locally in an extremely compute, memory, and disk efficient manner.
  • Filtering and Transformation: Not all telemetry data is created equal, and teams have very different needs for how they may use this data. Telemetry Pipelines enable comprehensive filtering and transformation of any log, metric, trace, or event, ensuring that only the most critical information is sent to high-cost platforms, while the full dataset (including less critical data) can then be routed to more cost-efficient storage (see the sketch after this list).
  • Data Enrichment and Routing: Telemetry Pipelines can ingest data from a wide variety of sources — Kubernetes clusters, cloud infrastructure, CI/CD pipelines, third-party APIs, etc. — and then apply various enrichments to that data before it’s then routed to the appropriate downstream platform.
  • Dynamic Scaling: As enterprises scale their Kubernetes clusters and increase their use of cloud services, the volume of telemetry data grows significantly. Due to their aligned architecture, Telemetry Pipelines also dynamically scale to handle this increasing load without affecting performance or data integrity.
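
To make the filtering, transformation, and enrichment steps concrete, here is a small sketch of the kind of per-record processing a pipeline applies before routing. The field names, lookup table, and redaction rule are illustrative assumptions, not any vendor’s configuration.

```python
# Sketch of per-record filtering, transformation, and enrichment as
# described in the list above. All field names and rules are illustrative.
import re
from typing import Optional

# Enrichment lookup: map a service to an owning team (hypothetical data).
SERVICE_OWNERS = {"checkout": "payments-team", "auth": "identity-team"}

EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def process(record: dict) -> Optional[dict]:
    """Filter, transform, and enrich one record; returning None drops it."""
    # Filter: drop noisy debug records before they incur downstream cost.
    if record.get("level") == "debug":
        return None

    # Transform: redact email addresses from the message body.
    record["message"] = EMAIL_PATTERN.sub("<redacted>", record.get("message", ""))

    # Enrich: attach an owner derived from the lookup table.
    record["owner"] = SERVICE_OWNERS.get(record.get("service"), "unknown")
    return record

print(process({"level": "info", "service": "checkout",
               "message": "payment failed for user jane@example.com"}))
```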

The Benefits for Observability and Security Teams

By adopting Telemetry Pipelines and data tiering, observability and security teams can benefit in several ways:

  • Cost Efficiency: Enterprises can significantly reduce costs by routing data to the most appropriate tier based on its value, avoiding the unnecessary expense of storing low-value data in high-performance platforms.
  • Faster Troubleshooting: Not only can there be some monitoring and anomaly detection within the Telemetry Pipelines themselves, but critical telemetry data is also processed extremely quickly and routed to high-performance platforms for real-time analysis, enabling teams to detect and resolve issues with much greater speed.
  • Enhanced Security: Data enrichments from lookup tables, pre-built packs that apply to various known third-party technologies, and more scalable long-term retention of larger datasets all enable security teams to have better ability to find and identify IOCs within all logs and telemetry data, improving their ability to detect threats early and respond to incidents faster.
  • Scalability: As enterprises grow and their telemetry needs expand, Telemetry Pipelines can naturally scale with them, ensuring that they can handle increasing data volumes without sacrificing performance.

It all starts with Pipelines!

Telemetry Pipelines are the core foundation to sustainably managing the chaos of telemetry — and they are crucial in any attempt to wrangle growing volumes of logs, metrics, traces, and events. As large enterprises continue to shift left and adopt more proactive approaches to observability and security, we see that Telemetry Pipelines and data tiering are becoming essential in this transformation. By using a tiered data management strategy, organizations can optimize costs, improve operational efficiency, and enhance their ability to detect and resolve issues earlier in the life cycle. One additional key advantage that we didn’t focus on in this article, but is important to call out in any discussion on modern Telemetry Pipelines, is their full end-to-end support for Open Telemetry (OTel), which is increasingly becoming the industry standard for telemetry data collection and instrumentation. With OTel support built-in, these pipelines seamlessly integrate with diverse environments, enabling observability and security teams to collect, process, and route telemetry data from any source with ease. This comprehensive compatibility, combined with the flexibility of data tiering, allows enterprises to achieve unified, scalable, and cost-efficient observability and security that’s designed to scale to tomorrow and beyond.
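
Since OTel comes up here, it is worth noting how little instrumentation code is involved on the application side. The snippet below uses the OpenTelemetry Python SDK with a console exporter for brevity; in a pipeline deployment, the exporter would instead point at an OTLP-capable collector or agent endpoint.

```python
# Minimal OpenTelemetry tracing setup in Python. A console exporter is used
# for brevity; a telemetry pipeline deployment would swap in an OTLP exporter
# pointed at its collector/agent endpoint.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")

with tracer.start_as_current_span("process-order") as span:
    span.set_attribute("order.id", "12345")
    # ... application work happens here ...
```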


To learn more about Kubernetes and the cloud native ecosystem, join us at KubeCon + CloudNativeCon North America, in Salt Lake City, Utah, on November 12-15, 2024.

Microsoft enhances Data Wrangler with the ability to prepare data using natural language with new GitHub Copilot integration
https://sdtimes.com/data/microsoft-enhances-data-wrangler-with-the-ability-to-prepare-data-using-natural-language-with-new-github-copilot-integration/ (Tue, 05 Nov 2024)

Microsoft has announced that GitHub Copilot is now integrated with Data Wrangler, an extension for VS Code for viewing, cleaning, and preparing data. 

By integrating GitHub Copilot capabilities into the tool, users will now be able to clean and transform data in VS Code with natural language prompts. It will also be able to provide suggestions of how to fix errors in data transformation code. 

According to Microsoft, one of the current limitations of using AI for exploratory data analysis is that the AI often lacks context of the data, leading to more generalized responses. Further, the process of verifying that the generated code is correct can be a very manual and time-consuming process. 

The integration of Data Wrangler and GitHub Copilot addresses these issues because it allows the user to provide GitHub Copilot with data context, enabling the tool to generate code for a specific dataset. It also provides a preview of the behavior of the code, which allows users to visually validate the response. 

Some examples of how GitHub Copilot can be used in Data Wrangler include formatting a datetime column, removing columns with over 40% missing values, or fixing an error in a data transformation — all using natural language prompts. 
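
For reference, prompts like those typically resolve to a few lines of pandas code, which Data Wrangler previews before anything is applied. The snippet below is a sketch of the kind of code that might be generated, with made-up column names; it is not actual Copilot or Data Wrangler output.

```python
# Sketch of the kind of pandas code the natural-language prompts above might
# resolve to. Column names are made up; this is not actual Copilot output.
import pandas as pd

df = pd.DataFrame({
    "order_date": ["2024-11-05 17:45", "2024-11-06 09:10"],
    "mostly_empty": [None, None],
    "amount": [19.99, 42.50],
})

# "Format the datetime column as YYYY-MM-DD"
df["order_date"] = pd.to_datetime(df["order_date"]).dt.strftime("%Y-%m-%d")

# "Remove columns with over 40% missing values"
df = df.loc[:, df.isna().mean() <= 0.4]

print(df)
```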

Using this new integration will require having the Data Wrangler VS Code extension, the GitHub Copilot VS Code extension, and an active GitHub Copilot subscription.

Microsoft also announced that this is just the first of many Copilot enhancements planned for Data Wrangler, and additional functionality will be added in the future. 

WSO2’s latest product release allows AI services to be managed like APIs
https://sdtimes.com/api/wso2s-latest-product-release-allows-ai-services-to-be-managed-like-apis/ (Tue, 05 Nov 2024)

The API management platform WSO2 has announced a slew of new updates aimed at helping customers manage APIs in a technology landscape increasingly dependent on AI and Kubernetes. The updates span the releases of WSO2 API Manager 4.4, WSO2 API Platform for Kubernetes (APK) 1.2, and WSO2 API Microgateway 3.2, which are all available today. 

“As organizations seek a competitive edge through innovative digital experiences, they need to invest equally in state-of-the-art technologies and in fostering the productivity of their software development teams,” said Christopher Davey, vice president and general manager of API management at WSO2. “With new functionality for managing AI services as APIs and extended support for Kubernetes as the preferred platform for digital innovation, WSO2 API Manager and WSO2 APK are continuing to enhance developers’ experiences while delivering a future-proof environment for their evolving needs.”

The company announced its Egress API Management capability, which allows developers to manage their AI services as APIs. It supports both internal and external AI services, and offers full life cycle API management, governance, and built-in support for providers such as OpenAI, Mistral AI, and Microsoft Azure OpenAI. 

The egress, or outbound, gateway experience enforces policies, providing secure and efficient access to AI models, as well as reducing costs by allowing companies to control AI traffic via backend rate limiting and subscription-level rate limiting of AI APIs. 
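
Rate limiting AI traffic is conceptually the same as rate limiting any other API. The sketch below shows a generic token-bucket limiter of the sort a gateway applies per subscription; it illustrates the concept only and is not WSO2’s implementation.

```python
# Generic token-bucket rate limiter of the kind an API gateway applies per
# subscription to AI traffic. Conceptual illustration only, not WSO2's code.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec       # tokens replenished per second
        self.capacity = capacity       # maximum burst size
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# One bucket per subscription; an AI request could "cost" more than one token
# based on, say, the number of model tokens it consumes.
subscription_limits = {"basic-plan": TokenBucket(rate_per_sec=1, capacity=5)}

if subscription_limits["basic-plan"].allow(cost=1):
    print("forward request to the AI backend")
else:
    print("reject with 429 Too Many Requests")
```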

WSO2 also announced many new features to support the increase of APIs running on Kubernetes. A new version of the WSO2 API Microgateway — a cloud-native gateway for microservices — has been released, and it aligns with the latest WSO2 API Manager release, improving scalability while also maintaining governance, reliability, and security.

WSO2 APK was updated to align with the gRPC Route specification, improving integration with Kubernetes environments and facilitating better control over gRPC services.

The latest version of WSO2 APK also includes new traffic filters for HTTP Routes, providing more flexibility and precision when routing HTTP traffic.

For better developer productivity in general, WSO2 also improved API discoverability by updating the unified control plane in WSO2 API Manager: developers can now search for APIs using the content in API definition files directly in the Developer Portal and Publisher Portal.

And finally, to improve security and access control, the control plane also now supports the ability to configure separate mTLS authentication settings for production and sandbox environments. The latest release also adds support for personal access tokens (PAT), which provide secure, time-limited authentication to APIs without a username and password. 

Aerospike Kubernetes Operator 3.4 adds better backup and scalability capabilities
https://sdtimes.com/data/aerospike-kubernetes-operator-3-4-adds-better-backup-and-scalability-capabilities/ (Mon, 04 Nov 2024)

The database company Aerospike has announced the latest version of its Kubernetes Operator with new features that improve backup and scalability. 

The Aerospike Kubernetes Operator (AKO) enables users to simplify management and monitoring of their Aerospike databases. 

AKO 3.4 incorporates the recently launched Aerospike Backup Service (ABS), which allows for easy management of backup jobs across Aerospike clusters. ABS runs on a VM or Docker container and provides a set of REST API endpoints for backing up and restoring database clusters. It allows for both full and incremental backups, supports the creation of different backup policies and schedules, and offers usability improvements over the traditional asbackup and asrestore command line tools. 

Additionally, with this release, the company has doubled the default resource limits to better support customers needing to scale. 

Another new capability in AKO 3.4 is the ability to pause all AKO operations and then easily resume them when ready. According to Aerospike, this is useful for triaging incidents. 

This version also supports Aerospike 7.2, which was released in early October and brought with it new capabilities like Active Rack, a multi-zone deployment option that cuts the costs of interzone data transfers. 

Other features of note in this release include the ability to trigger warm and cold restarts of Aerospike clusters and the integration of the Aerospike Monitoring Stack with AKO.

3 common missteps of product-led growth
https://sdtimes.com/softwaredev/3-common-missteps-of-product-led-growth/ (Fri, 01 Nov 2024)

Product-led growth (PLG) has become the gold standard for SaaS companies aiming to scale rapidly and efficiently. In fact, a 2024 survey from ProductLed.com found that 91% of respondents are planning to invest more resources in PLG initiatives this year. As an advocate for this approach personally, I’ve witnessed firsthand the transformative power of putting the product at the center of customer acquisition and retention strategies.

Admittedly, the path to successful PLG implementation has some challenges that can derail even the most promising companies. In particular, organizations transitioning from more traditional enterprise growth models may have difficulty navigating the change in dynamics, whether the transition is in technology or in leadership. As such, I’d like to explain three common missteps that organizations often encounter when adopting a PLG strategy and discuss how to overcome them. By understanding these pitfalls, organizations can better position themselves to harness the full potential of PLG and drive sustainable growth.

Before I dig in, it’s important to note that it’s a misconception that organizations need to choose between a PLG and a sales-led approach. In reality, some companies have succeeded with both. It depends on who the customer is and what level of hybrid motion works for each company. For example, a product-led approach may not be well suited for organizations that rely heavily on an outbound sales motion. For organizations with a strong inbound sales motion, however, PLG can be a value add.

With that, I’ll dive into the missteps: 

1. Failing to Maintain a Product-Centric Culture

One of the most critical aspects of PLG is fostering a product-centric culture throughout the organization. This means aligning every department – from engineering and design, to marketing and sales – around the product’s value proposition and user experience. Many companies stumble by treating PLG as merely a go-to-market strategy rather than a holistic approach that permeates the entire organization. This misalignment can lead to inconsistent messaging, disjointed user experiences, and ultimately, a failure to deliver on the promise of PLG.

To succeed, companies should:

  • Prioritize cross-functional collaboration and communication;
  • Invest in continuous product education for all employees; and
  • Empower teams to make data-driven decisions that enhance the product experience.

By fostering a genuine product-centric culture, organizations can ensure that every team member contributes to the overall PLG strategy, creating a cohesive and compelling user journey.

2. Getting Distracted by Individual Customer Requests

In the pursuit of customer satisfaction, it’s easy to fall into the trap of catering to individual customer requests at the expense of the broader product vision. While customer feedback is invaluable, allowing it to dictate product direction entirely can lead to feature bloat and a diluted value proposition.

Successful PLG requires a delicate balance between addressing user needs and maintaining a focused product roadmap. To strike this balance:

  • Develop a process for prioritizing feature requests based on their potential impact on the overall user base;
  • Communicate transparently with customers about product decisions, features, and timelines; and
  • Use data and user research to validate assumptions and guide product development.

By maintaining a clear product vision while remaining responsive to user feedback, companies can create a product that resonates with a broader audience and drives organic growth.

3. Struggling to Balance Stakeholder Needs with Product Vision

PLG doesn’t exist in a vacuum. While the product is the primary growth driver, other stakeholders – including investors, partners, and internal teams – often have their own goals and expectations. Balancing these diverse needs with the overarching product vision can be challenging.

Companies may falter by prioritizing short-term gains over long-term product health or by compromising on user experience to meet arbitrary growth targets. To navigate this challenge:

  • Establish clear, measurable metrics that align with both product and business goals;
  • Educate stakeholders on the principles and benefits of PLG to gain buy-in and support; and
  • Regularly review and adjust the product roadmap to ensure it aligns with both user needs and business objectives.

By fostering alignment between stakeholder expectations and product vision, organizations can create a sustainable PLG strategy that drives both user satisfaction and business growth.

Beyond the Basics: Additional Considerations for PLG Success

While addressing these three common missteps is crucial, there are additional factors that can make or break a PLG strategy:

  • Hiring for PLG expertise: Many organizations underestimate the importance of bringing in specialized talent with PLG experience. Look for individuals with a growth mindset and a track record of success in product-led environments, especially in SaaS.
  • Investing in robust instrumentation: PLG demands a data-driven approach. Ensure you have the right tools and processes in place to collect, analyze, and act on user data effectively.
  • Continuous optimization: Both your product and your acquisition funnel should be subject to ongoing refinement. Establish a culture of experimentation and iteration to drive continuous improvement. Additionally, a touch of customer obsession cannot hurt! Obsess over your customer experience and evaluate their journey through your product to inform experiments. By truly understanding your user’s journey, you can clearly see where customers encounter friction or obstacles. This allows you to proactively enhance these touchpoints, leading to a smoother and more satisfying experience. 
  • Empowering marketing: While the product leads the way, marketing plays a crucial role in amplifying its reach. Equip your marketing team with the resources and autonomy they need to effectively drive the pipeline.

Product-led growth offers immense potential for SaaS companies looking to scale efficiently and deliver exceptional user experiences. By avoiding these common missteps and focusing on building a truly product-centric organization, companies can unlock the full power of PLG.

Successful PLG is not about perfection from day one. It’s about creating a culture of continuous learning, experimentation, and improvement. By staying true to the core principles of PLG while remaining flexible in its implementation, organizations can build products that not only meet user needs but also drive sustainable business growth.
