Kubernetes Archives - SD Times https://sdtimes.com/tag/kubernetes/ Software Development News

Shifting left with telemetry pipelines: The future of data tiering at petabyte scale https://sdtimes.com/monitor/shifting-left-with-telemetry-pipelines-the-future-of-data-tiering-at-petabyte-scale/ Tue, 05 Nov 2024 20:01:22 +0000

In today’s rapidly evolving observability and security use cases, the concept of “shifting left” has moved beyond just software development. With data volumes across logs, metrics, traces, and events rising consistently and rapidly, organizations must be far more deliberate about turning chaos into control when understanding and managing their streaming data sets. Teams are striving to manage their mission-critical production systems more proactively, and they need to achieve far earlier detection of potential issues. This approach emphasizes moving traditionally late-stage activities, like seeing, understanding, transforming, filtering, analyzing, testing, and monitoring, closer to the beginning of the data creation cycle. With the growth of next-generation architectures, cloud-native technologies, microservices, and Kubernetes, enterprises are increasingly adopting Telemetry Pipelines to enable this shift. A key element in this movement is data tiering, a data-optimization strategy that plays a critical role in aligning the cost-value ratio for observability and security teams.

The Shift Left Movement: Chaos to Control 

“Shifting left” originated in the realm of DevOps and software testing. The idea was simple: find and fix problems earlier in the process to reduce risk, improve quality, and accelerate development. As organizations have embraced DevOps and continuous integration/continuous delivery (CI/CD) pipelines, the benefits of shifting left have become increasingly clear — less rework, faster deployments, and more robust systems.

In the context of observability and security, shifting left means performing the analysis, transformation, and routing of logs, metrics, traces, and events far upstream, early in their usage lifecycle, a marked departure from the traditional “centralize then analyze” method. By integrating these processes earlier, teams can not only drastically reduce the costs of otherwise prohibitive data volumes, but also detect anomalies, performance issues, and potential security threats much sooner, before they become major problems in production. The rise of microservices and Kubernetes architectures has accelerated this need in particular: the complexity and distributed nature of cloud-native applications demand more granular, real-time insights, and each localized data set is distributed when compared to the monoliths of the past.

This leads to the growing adoption of Telemetry Pipelines.

What Are Telemetry Pipelines?

Telemetry Pipelines are purpose-built to enable next-generation architectures. They are designed to give visibility and to pre-process, analyze, transform, and route observability and security data from any source to any destination. These pipelines give organizations the comprehensive toolbox and set of capabilities to control and optimize the flow of telemetry data, ensuring that the right data reaches the right downstream destination in the right format, to enable all the right use cases. They offer a flexible and scalable way to integrate multiple observability and security platforms, tools, and services.

For example, in a Kubernetes environment, where ephemeral containers can scale up and down dynamically, the logs, metrics, and traces from those dynamic workloads need to be processed and stored in real time. Telemetry Pipelines provide the capability to aggregate data from various services, be granular about what you want to do with that data, and ultimately send it downstream to the appropriate end destination, whether that’s a traditional security platform like Splunk that has a high unit cost for data, or a more scalable, cost-effective storage location optimized for retaining large datasets long term, like AWS S3.
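To make the routing idea concrete, here is a minimal, illustrative sketch of tier-based routing logic in Python. The record fields, sink names, and matching rules are hypothetical stand-ins invented for this example, not the configuration model of any particular pipeline product:

```python
# Illustrative only: a toy router that sends each telemetry record to one
# or more downstream sinks based on simple, hypothetical rules.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Record:
    source: str          # e.g. "k8s/payments-pod-abc123"
    severity: str        # e.g. "ERROR", "INFO", "DEBUG"
    body: dict = field(default_factory=dict)

@dataclass
class Route:
    name: str                          # hypothetical sink, e.g. "splunk", "s3"
    matches: Callable[[Record], bool]  # predicate deciding if a record goes here

routes = [
    # High-value data goes to the premium platform...
    Route("splunk", lambda r: r.severity in ("ERROR", "CRITICAL")),
    # ...while everything is also retained cheaply in object storage.
    Route("s3", lambda r: True),
]

def route(record: Record) -> list[str]:
    """Return every sink this record should be delivered to."""
    return [rt.name for rt in routes if rt.matches(record)]

print(route(Record("k8s/payments-pod-abc123", "ERROR")))  # ['splunk', 's3']
print(route(Record("k8s/payments-pod-abc123", "DEBUG")))  # ['s3']
```

The key property to notice is that a record can match multiple routes: high-severity events fan out to both the premium platform and object storage, while everything else lands only in the cheap tier.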

The Role of Data Tiering

As telemetry data continues to grow at an exponential rate, enterprises face the challenge of managing costs without compromising on the insights they need in real time, or the requirement of data retention for audit, compliance, or forensic security investigations. This is where data tiering comes in. Data tiering is a strategy that segments data into different levels (tiers) based on its value and use case, enabling organizations to optimize both cost and performance.

In observability and security, this means identifying high-value data that requires immediate analysis and applying far more pre-processing and analysis to it than to lower-value data, which can simply be stored more cost-effectively and accessed later if necessary. This tiered approach typically includes:

  1. Top Tier (High-Value Data): Critical telemetry data that is vital for real-time analysis and troubleshooting is ingested and stored in high-performance platforms like Splunk or Datadog. This data might include high-priority logs, metrics, and traces that are essential for immediate action. Although this can include plenty of data in raw formats, the high-cost nature of these platforms typically leads teams to route only the data that’s truly necessary. 
  2. Middle Tier (Moderate-Value Data): Data that is important but doesn’t meet the bar for a premium, conventional centralized system is instead routed to more cost-efficient observability platforms with newer architectures, like Edge Delta. This might include a much more comprehensive set of logs, metrics, and traces that gives you a wider, more useful understanding of everything happening within your mission-critical systems.
  3. Bottom Tier (All Data): Because S3 is extremely inexpensive relative to observability and security platforms, all telemetry data in its entirety can feasibly be stored in low-cost solutions like AWS S3 for long-term trend analysis, audit, compliance, or investigation purposes. This is typically cold storage that can be accessed on demand but doesn’t need to be actively processed.

This multi-tiered architecture enables large enterprises to get the insights they need from their data while managing costs and ensuring compliance with data retention policies. Keep in mind that the Middle Tier typically includes all data in the Top Tier and more, and likewise the Bottom Tier includes all data from the higher tiers and more. Because the cost per tier of the underlying downstream destinations can, in many cases, differ by orders of magnitude, there is little reason not to also duplicate into your S3 buckets all of the data you send to Datadog, for instance. It’s much easier and more useful to have a full data set in S3 for any later needs.
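To see why duplicating the full stream into the bottom tier is usually a non-issue, here is a back-of-the-envelope comparison in Python. The unit prices are assumed, illustrative figures chosen only to show the order-of-magnitude gap, not vendor list prices:

```python
# Back-of-the-envelope cost comparison with ASSUMED, illustrative unit
# prices -- not vendor list prices. The point is the order-of-magnitude gap.
DAILY_VOLUME_GB = 1_000             # 1 TB/day of telemetry

premium_per_gb = 2.50               # assumed: premium ingest-and-index platform
object_store_per_gb_month = 0.023   # assumed: S3-style object storage

monthly_premium = DAILY_VOLUME_GB * 30 * premium_per_gb
monthly_object_store = DAILY_VOLUME_GB * 30 * object_store_per_gb_month

print(f"premium platform: ${monthly_premium:,.0f}/month")       # $75,000/month
print(f"object storage:   ${monthly_object_store:,.0f}/month")  # $690/month
print(f"ratio: ~{monthly_premium / monthly_object_store:.0f}x") # ~109x
```

With a gap like this, the incremental cost of keeping a complete copy in object storage is a rounding error next to the premium tier’s bill.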

How Telemetry Pipelines Enable Data Tiering

Telemetry Pipelines serve as the backbone of this tiered data approach by giving teams full control and flexibility to route data based on predefined, out-of-the-box rules and/or business logic specific to their needs. Here’s how they facilitate data tiering:

  • Real-Time Processing: For high-value data that requires immediate action, Telemetry Pipelines provide real-time processing and routing, ensuring that critical logs, metrics, or security alerts are delivered to the right tool instantly. Because Telemetry Pipelines have an agent component, much of this processing can happen locally in an extremely compute-, memory-, and disk-efficient manner.
  • Filtering and Transformation: Not all telemetry data is created equal, and teams have very different needs for how they may use it. Telemetry Pipelines enable comprehensive filtering and transformation of any log, metric, trace, or event, ensuring that only the most critical information is sent to high-cost platforms while the full dataset, including less critical data, is routed to more cost-efficient storage (a sketch follows this list).
  • Data Enrichment and Routing: Telemetry Pipelines can ingest data from a wide variety of sources — Kubernetes clusters, cloud infrastructure, CI/CD pipelines, third-party APIs, etc. — and then apply various enrichments to that data before it’s then routed to the appropriate downstream platform.
  • Dynamic Scaling: As enterprises scale their Kubernetes clusters and increase their use of cloud services, the volume of telemetry data grows significantly. Due to their aligned architecture, Telemetry Pipelines also dynamically scale to handle this increasing load without affecting performance or data integrity.
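
As a hedged illustration of the filtering, transformation, and enrichment bullets above, the following Python sketch drops noisy debug records, redacts a sensitive field, and attaches Kubernetes context before anything is shipped downstream. All field names and values are hypothetical:

```python
# Illustrative processing stage: filter, transform, enrich.
# All field names and values are hypothetical examples.

K8S_METADATA = {"cluster": "prod-us-east-1", "namespace": "payments"}

def process(record: dict) -> dict | None:
    """Return the processed record, or None to drop it from the stream."""
    # Filter: discard low-value noise before it reaches a costly platform.
    if record.get("severity") == "DEBUG":
        return None

    # Transform: redact a sensitive field rather than shipping it downstream.
    if "client_ip" in record:
        record["client_ip"] = "REDACTED"

    # Enrich: attach Kubernetes context for easier correlation later.
    record.update(K8S_METADATA)
    return record

stream = [
    {"severity": "DEBUG", "msg": "cache miss"},
    {"severity": "ERROR", "msg": "timeout", "client_ip": "10.0.0.7"},
]
print([r for r in (process(dict(s)) for s in stream) if r is not None])
```
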
The Benefits for Observability and Security Teams

By adopting Telemetry Pipelines and data tiering, observability and security teams can benefit in several ways:

  • Cost Efficiency: Enterprises can significantly reduce costs by routing data to the most appropriate tier based on its value, avoiding the unnecessary expense of storing low-value data in high-performance platforms.
  • Faster Troubleshooting: Not only can there be some monitoring and anomaly detection within the Telemetry Pipelines themselves, but critical telemetry data is also processed extremely quickly and routed to high-performance platforms for real-time analysis, enabling teams to detect and resolve issues with much greater speed.
  • Enhanced Security: Data enrichment from lookup tables, pre-built packs for various known third-party technologies, and more scalable long-term retention of larger datasets all give security teams a better ability to find and identify indicators of compromise (IOCs) across all logs and telemetry data, improving their ability to detect threats early and respond to incidents faster.
  • Scalability: As enterprises grow and their telemetry needs expand, Telemetry Pipelines scale naturally with them, ensuring that they can handle increasing data volumes without sacrificing performance.

It all starts with Pipelines!

Telemetry Pipelines are the core foundation for sustainably managing the chaos of telemetry, and they are crucial in any attempt to wrangle growing volumes of logs, metrics, traces, and events. As large enterprises continue to shift left and adopt more proactive approaches to observability and security, Telemetry Pipelines and data tiering are becoming essential to this transformation. By using a tiered data management strategy, organizations can optimize costs, improve operational efficiency, and enhance their ability to detect and resolve issues earlier in the life cycle.

One additional key advantage not covered in this article, but important to call out in any discussion of modern Telemetry Pipelines, is their full end-to-end support for OpenTelemetry (OTel), which is increasingly becoming the industry standard for telemetry data collection and instrumentation. With OTel support built in, these pipelines integrate seamlessly with diverse environments, enabling observability and security teams to collect, process, and route telemetry data from any source with ease. This comprehensive compatibility, combined with the flexibility of data tiering, allows enterprises to achieve unified, scalable, and cost-efficient observability and security that’s designed to scale to tomorrow and beyond.


To learn more about Kubernetes and the cloud native ecosystem, join us at KubeCon + CloudNativeCon North America, in Salt Lake City, Utah, on November 12-15, 2024.

WSO2’s latest product release allows AI services to be managed like APIs https://sdtimes.com/api/wso2s-latest-product-release-allows-ai-services-to-be-managed-like-apis/ Tue, 05 Nov 2024 16:35:35 +0000

The API management platform WSO2 has announced a slew of new updates aimed at helping customers manage APIs in a technology landscape increasingly dependent on AI and Kubernetes. The updates span the releases of WSO2 API Manager 4.4, WSO2 API Platform for Kubernetes (APK) 1.2, and WSO2 API Microgateway 3.2, which are all available today. 

“As organizations seek a competitive edge through innovative digital experiences, they need to invest equally in state-of-the-art technologies and in fostering the productivity of their software development teams,” said Christopher Davey, vice president and general manager of API management at WSO2. “With new functionality for managing AI services as APIs and extended support for Kubernetes as the preferred platform for digital innovation, WSO2 API Manager and WSO2 APK are continuing to enhance developers’ experiences while delivering a future-proof environment for their evolving needs.”

The company announced its Egress API Management capability, which allows developers to manage their AI services as APIs. It supports both internal and external AI services, and offers full life cycle API management, governance, and built-in support for providers such as OpenAI, Mistral AI, and Microsoft Azure OpenAI. 

The egress, or outbound, gateway experience enforces policies that provide secure and efficient access to AI models, and it reduces costs by allowing companies to control AI traffic via backend rate limiting and subscription-level rate limiting of AI APIs. 
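The announcement doesn’t detail how WSO2 implements these limits internally; a token bucket is one common mechanism for enforcing subscription-level rate limits at a gateway. Here is a minimal, generic sketch of that technique in Python, with hypothetical tier limits:

```python
# A minimal token-bucket sketch of subscription-level rate limiting, a common
# gateway technique. This is a generic illustration, not WSO2's implementation.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec      # tokens replenished per second
        self.capacity = capacity      # burst ceiling
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per subscription tier (hypothetical limits).
buckets = {"free": TokenBucket(1, 5), "enterprise": TokenBucket(100, 200)}

def gateway_check(subscription: str) -> int:
    """Return the HTTP status a gateway would hand back for this call."""
    return 200 if buckets[subscription].allow() else 429  # 429 Too Many Requests

print([gateway_check("free") for _ in range(7)])  # first 5 pass, then 429s
```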

WSO2 also announced many new features to support the increase of APIs running on Kubernetes. A new version of the WSO2 API Microgateway — a cloud-native gateway for microservices — has been released, and it aligns with the latest WSO2 API Manager release, improving scalability while also maintaining governance, reliability, and security.

WSO2 APK was updated to align with the gRPC Route specification, improving integration with Kubernetes environments and facilitating better control over gRPC services. 

The latest version of WSO2 APK also includes new traffic filters for HTTP Routes, providing more flexibility and precision when routing HTTP traffic.

For better developer productivity in general, WSO2 also improved API discoverability by updating the unified control plane in WSO2 API Manager: developers can now search for APIs using the content of API definition files directly in the Developer Portal and Publisher Portal.

And finally, to improve security and access control, the control plane also now supports the ability to configure separate mTLS authentication settings for production and sandbox environments. The latest release also adds support for personal access tokens (PAT), which provide secure, time-limited authentication to APIs without a username and password. 
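As a hedged illustration of how a client might present such a token, PATs are typically sent as a bearer credential on each request. The endpoint and token below are placeholders; consult the WSO2 documentation for the exact token format your gateway expects:

```python
# Hypothetical example of calling an API with a personal access token (PAT).
# The endpoint and token are placeholders, not a real WSO2 deployment.
import urllib.request

PAT = "<your-personal-access-token>"
req = urllib.request.Request(
    "https://gateway.example.com/orders/v1/status",      # placeholder endpoint
    headers={"Authorization": f"Bearer {PAT}"},          # PAT as bearer credential
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read()[:200])
```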

Aerospike Kubernetes Operator 3.4 adds better backup and scalability capabilities https://sdtimes.com/data/aerospike-kubernetes-operator-3-4-adds-better-backup-and-scalability-capabilities/ Mon, 04 Nov 2024 16:27:53 +0000

The database company Aerospike has announced the latest version of its Kubernetes Operator with new features that improve backup and scalability. 

The Aerospike Kubernetes Operator (AKO) enables users to simplify management and monitoring of their Aerospike databases. 

AKO 3.4 incorporates the recently launched Aerospike Backup Service (ABS), which allows for easy management of backup jobs across Aerospike clusters. ABS runs on a VM or Docker container and provides a set of REST API endpoints for backing up and restoring database clusters. It allows for both full and incremental backups, supports the creation of different backup policies and schedules, and offers usability improvements over the traditional asbackup and asrestore command line tools. 
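As a rough sketch of what driving a backup service over REST looks like, consider the following Python snippet. The endpoint paths and payload fields are hypothetical stand-ins, not ABS’s documented API; the real routes are in the Aerospike Backup Service documentation:

```python
# Sketch of driving a backup service over REST. The endpoint paths and
# payload fields are HYPOTHETICAL stand-ins, not ABS's documented API.
import json
import urllib.request

BASE = "http://abs.example.com:8080"  # placeholder ABS address

def post(path: str, payload: dict) -> dict:
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Kick off a full backup of one cluster (illustrative payload)...
post("/backups", {"cluster": "prod", "mode": "full"})
# ...then an incremental backup captures only changes since the last run.
post("/backups", {"cluster": "prod", "mode": "incremental"})
```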

Additionally, with this release, the company has doubled the default resource limits to better support customers needing to scale. 

Another new capability in AKO 3.4 is the ability to pause all AKO operations and then easily resume them when ready. According to Aerospike, this is useful for triaging incidents. 

This version also supports Aerospike 7.2, which was released in early October and brought with it new capabilities like Active Rack, a multi-zone deployment option that cuts the costs of interzone data transfers. 

Other features of note in this release include the ability to trigger warm and cold restarts of Aerospike clusters and the integration of the Aerospike Monitoring Stack with AKO. 

Speedrunning Kubernetes in the enterprise https://sdtimes.com/softwaredev/speedrunning-kubernetes-in-the-enterprise/ Mon, 14 Oct 2024 15:10:04 +0000

Around 50% of attendees to KubeCon in Salt Lake City will be first-timers. If that’s you: welcome, it’s gonna be an awesome show. 

Like thousands of others in businesses around the world, you’ve kicked the tires on K8s and decided that it’s worth committing to, at least enough to justify the cost of a week in SLC. You’re on site to scope out technologies and vendors and learn best practices as you put Kubernetes into production in some shape or form.

So here’s the no-nonsense advice you need to make your next 12 months hurt less.

1. DIY does not work at scale

If you’re serious about Kubernetes, the data says you will end up with tens or hundreds of clusters. You need them to look and behave the same, consistently, otherwise you’ll drive yourself mad with troubleshooting and policy violations. You need the ability to stand a new cluster up for a new requirement in minutes, not weeks, or you’ll be very unpopular with your app dev teams.

We all love rolling up our sleeves and tinkering, and when you were learning K8s principles and building your first cluster (‘the hard way’ or not), that’s the right way to do it. You’re in there, writing scripts, wrangling kubectl, tweaking yaml.

But DIY does not scale.

Yes, there are companies out there that rolled their own Kubernetes ‘management platform’ over the past six or seven years, and got it working pretty well. If you asked them over a beer what they’d do if they were starting afresh today, most of them would do it differently. They would look for an easy way.

Learn from them: you need repeatable templates and push-button automation, but it probably doesn’t make sense to DIY your own tooling to do that.

2. Building the cluster is the easy bit

K8s beginners naturally focus on getting their first clusters up and running, and the end goal is seeing their handful of nodes in a ‘ready’ state. Yes, it’s challenging — but believe it or not, it’s the easy bit. 

Now you’ve got to build the rest of the enterprise-grade stack, everything from load balancers to secrets management, logging and observability. In meme parlance, it’s “the rest of the ****ing owl”. 

Oh, and you need to patch, upgrade, scale, reconfigure, secure, monitor and troubleshoot that full stack. At scale. Frequently. Forever.

Unless you are blessed with unlimited headcount or very patient internal customers, you probably need to look at automation for this part, too. You’re not looking for a build tool — you’re looking for fleet lifecycle management.

One of our customers is well on their journey to enterprise-wide Kubernetes, primarily on-prem, and in a highly regulated industry. Last week we interviewed him (on condition of anonymity) about his journey, and he explained how this realization hit him, too:

“I didn’t know what my team size was going to be, and at that point it was just me, and I wasn’t going to go around manually building 60 clusters or 600 clusters. There’s no way I could do that. I’d be spending all my time doing it. 

“If we’re going to do this and be able to reliably create clusters the same way at scale, we cannot be doing it by hand. So I wanted to build a platform that was mostly automated. 

“We need not only automation to create the clusters, but we also need to make sure that they’re maintained and updated. Someone’s got to sit in the chair for hours and do that. And that’s what led us down the path of trying to find an enterprise container management solution.”

3. Prepare for your future, today

For a decade now, Kubernetes has been surprising us all with its versatility and extensibility, with custom resources and operators and the power of the K8s API. 

You may have just a few mainstream use cases today, likely self-service ‘Kubernetes as a Service’ (KaaS) in the cloud or virtualized data center. But who knows what the future holds for K8s in your business? 

  • Maybe you’ll start looking to K8s as a way to modernize your VM workloads, as well as orchestrating containers.
  • Perhaps your environment needs will change: if you need to deploy clusters at the edge, on bare metal, in different clouds — can your current toolset do it? 
  • And what happens if one of your favored projects, Linux OSs or distributions changes license or gets abandoned — how hard is it to swap out?

You can’t predict the future, but you can certainly prepare for it: protecting your agency and freedom of choice.

So make your tech stack decisions today to protect the freedom of ‘future you’. Watch out for highly opinionated services and toolsets that will lock you in. But equally, remember that DIY won’t be the easy answer in any of these situations.

Don’t be afraid to follow your unique journey

We work with dozens and dozens of enterprises, from defense contractors to pharma manufacturers, small software vendors to the biggest telcos. Every one of them has the same basic pains — they need to make it safe and easy to design, deploy and manage Kubernetes clusters to run enterprise applications. But every one of them is also unique!

Some are running small form-factor edge devices in airgapped environments with high security. Some are spinning up clusters in the cloud for dev teams. Some have crazy network setups and proxies, or complex integrations with existing tooling like ServiceNow and enterprise identity providers. Some have big, highly expert teams, others just have one or two people working on Kubernetes.

So when you’re standing in the hall with thousands of other K8s enthusiasts, don’t get swept away by the cool stuff. Look for those that can help you navigate your own, unique path to business results. And enjoy the ride!


To learn more about Kubernetes and the cloud native ecosystem, join us at KubeCon + CloudNativeCon North America, in Salt Lake City, Utah, on November 12-15, 2024.

WSO2 API Manager’s control plane can now be used to manage WSO2’s Kubernetes platform https://sdtimes.com/api/wso2-api-managers-control-plane-can-now-be-used-to-manage-wso2s-kubernetes-platform/ Wed, 24 Apr 2024 18:01:06 +0000

The API management company WSO2 has announced updates across several of its products: WSO2 API Manager, WSO2 API Platform for Kubernetes (WSO2 APK), and WSO2 Micro Integrator. 

The WSO2 API Manager control plane was updated to now be able to manage both itself and WSO2 APK. This allows WSO2 APK APIs to benefit from the API Manager’s capabilities and the Developer Portal and Marketplace.

According to the company, benefits include a more streamlined development process, a unified platform for building comprehensive API strategies, and the ability to deploy large numbers of APIs in a scalable way. 

The control plane also gives both platforms access to the WSO2 AI Developer Assistant, providing new features like AI-based search functionality in the Developer Portal and AI-based API testing. 

WSO2 APK also added support for the GraphQL query language to make it easier for developers to request data from their own services. 
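GraphQL requests are ordinary HTTP POSTs carrying a JSON body with a query field, so clients need no special tooling. The endpoint and schema in this sketch are hypothetical; an APK deployment exposes whatever schema you define:

```python
# Generic GraphQL-over-HTTP example. The endpoint and schema are
# hypothetical placeholders, not a specific WSO2 APK deployment.
import json
import urllib.request

query = """
query {
  order(id: "1001") {
    status
    items { sku quantity }
  }
}
"""

req = urllib.request.Request(
    "https://apk.example.com/graphql",      # placeholder endpoint
    data=json.dumps({"query": query}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # e.g. {"data": {"order": {...}}}
```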

The company also introduced a VS Code extension for the WSO2 Micro Integrator, which is an integration platform for connecting different applications. The extension will be available as a developer preview on May 7th. 

The new extension also provides access to the MI Copilot, which allows developers to describe their integration in natural language and then receive recommended configurations. 

“With our new AI-based assistants, unified control plane for WSO2 API Manager and WSO2 APK, and WSO2 Micro Integrator for VS Code extension, we are enhancing these developers’ experiences by offering a more user-friendly, productive, and future-proof environment that aligns with their evolving needs,” said Christopher Davey, vice president and general manager of WSO2’s API & integration software business unit.

Cloud Foundry updates Korifi to further simplify the Kubernetes developer experience https://sdtimes.com/kubernetes/cloud-foundry-updates-korifi-to-further-simplify-the-kubernetes-developer-experience/ Wed, 08 Nov 2023 22:13:28 +0000

Cloud Foundry announced the latest release of Korifi, a platform that aims to simplify Kubernetes and enhance the application deployment process. It now supports Docker images and allows developers to easily deploy them to Kubernetes.

This update streamlines container-based workflows by making Docker images compatible with existing containers at various development stages, making Korifi a valuable tool for teams already using container-based solutions.

With support for Docker images, users no longer need to write or maintain intricate YAML configurations, as most lifecycle operations come with readily available workflows, Cloud Foundry explained.

It also enhances productivity by simplifying scaling and streamlining container lifecycle management, all without the complexities of Kubernetes configuration.

Additionally, a new installer simplifies the deployment of Korifi for first-time users, further enhancing its usability.

“Korifi now offers users the power to transform container-based workflows and take advantage of Kubernetes scalability and resilience,” said Chris Clark, the program manager at Cloud Foundry. “We’re enabling developers to focus on innovation, without having to deal with infrastructure and become experts in Kubernetes.”

SD Times Open-Source Project of the Week: Kargo https://sdtimes.com/open-source/sd-times-open-source-project-of-the-week-kargosd-times-open-source-project-of-the-week-kargo/ Fri, 13 Oct 2023 13:47:27 +0000

Kargo is a multi-stage application lifecycle orchestrator designed to help with continuous delivery and deployment of changes across various environments. 

Kargo, created by the developers behind the Argo Project, represents a novel approach to CD pipelines, tailored for the cloud-native landscape, featuring robust GitOps support, progressive delivery capabilities, and complete open-source accessibility.

The name “Kargo” reflects its core function of transporting build and configuration artifacts (referred to as “freight”) to multiple environments through a GitOps approach. GitOps has played a pivotal role in elevating infrastructure-as-code practices, yet it has introduced challenges for traditional CI/CD pipelines, according to the maintainers. 

Pull-based GitOps operators, such as Argo CD, have disrupted CI pipelines’ direct access to production environments. The asynchronous nature of Kubernetes’ declarative APIs and their eventual consistency have made it challenging to coordinate imperative processes like testing and analysis.

Argo CD has addressed some of these issues by providing interfaces to Kubernetes clusters, including health assessments, sync hooks, and waved deployments, but there is room for improvement, say the maintainers.

“Fundamentally, Kargo takes an entirely different approach to the problem of effecting change to multiple environments. Unlike CI, Kargo deployment pipelines are not generic “jobs” with a beginning, a middle, and an end, relying on executing shell commands against each environment,” Jesse Suen, co-founder and CTO at Akuity, the developers of the project, wrote in a blog post

Armory launches new hub full of resources for developers https://sdtimes.com/cicd/armory-launches-new-hub-full-of-resources-for-developers/ Tue, 22 Aug 2023 16:52:28 +0000

Armory introduced developer.armory.io, a platform to enhance the developer experience through their declarative continuous deployment solution (Armory Continuous Deployment-as-a-Service). 

This developer hub offers a comprehensive website aimed at providing developers and end-users with convenient access to detailed resources. The platform offers various learning materials such as videos, tutorials, and reference documents, enabling users to learn and progress at their preferred pace.

“Continuous deployment is critical to us delivering value to our customers, so our engineering team not only code our products, they consume them,” said Jim Douglas, CEO of Armory. “They live and breathe the developer experience daily, so it’s front and center in all our product decisions.”

Developer Hub offers the ability to visualize deployment configurations to understand existing setups, share URLs from exposed services to webhooks for service identification, access and review Kubernetes manifests on the deployment graph screen, and expose external preview URLs for deployed Kubernetes services, granting developers independent access without relying on other teams for networking setup.

Every user, including those on the Freemium tier, has full access to the site, allowing them to use their account to the fullest extent.

KubeMQ updates its Dashboard to become a complete command center for managing microservices https://sdtimes.com/kubernetes/kubemq-updates-its-dashboard-to-become-a-complete-command-center-for-managing-microservices/ Wed, 21 Jun 2023 15:55:52 +0000

KubeMQ announced the most recent enhancement to the KubeMQ Dashboard that turns it into a complete command center for handling microservices connectivity. 

The upgrade introduces two major features: auto-discovery and charts, offering users immediate insights and visualization abilities to optimize their microservices environment.

With the new auto-discovery feature, the KubeMQ Dashboard provides users with an intuitive and real-time view of microservices connections. Users can now easily identify connectors as senders and receivers for each queue or channel. 

This granular visibility empowers users to quickly troubleshoot and optimize their messaging infrastructure. The dashboard can also identify which clients or connectors are connected to each node in the KubeMQ cluster, a detailed view that helps teams understand how efficiently workloads run in their clusters and ensure optimal performance and scalability for their microservices architectures.

In addition to its auto-discovery feature, the KubeMQ Dashboard now offers charts that yield significant insights into messaging activity over time, such as showing the number of messages and the volume of data moving in and out of the microservices environment.

Through data visualization, users can readily observe the performance trends of their microservices, identify possible bottlenecks, and make informed choices to enhance their messaging infrastructure. The introduction of the charts feature adds clarity and transparency to microservices connectivity, simplifying system monitoring and management.

Red Hat Service Interconnect facilitates communication between multiple platforms and clouds https://sdtimes.com/cloud/red-hat-service-interconnect-facilitates-communication-between-multiple-platforms-and-clouds/ Tue, 23 May 2023 16:23:51 +0000

Red Hat Service Interconnect, which can simplify application connectivity and security across platforms, clusters, and clouds, is now generally available after being announced at Red Hat Summit.

The solution is based on the open-source project Skupper.io, which enables secure communication across Kubernetes clusters with no VPNs or special firewall rules.

According to Red Hat, application architectures are changing to take advantage of the open hybrid cloud and require flexible, secure connections for applications. AI/ML applications can be distributed across on-premises, edge and cloud systems, and businesses require connections across multiple clouds and infrastructures. This requires coordination between developers, network admins and security admins to set up trusted, specific connections, which can slow developer productivity and innovation, the company explained.

Red Hat Service Interconnect facilitates communication between multiple platforms and clouds, allowing developers to add reliable, secure connections between applications running on Kubernetes clusters, virtual machines, and bare-metal hosts. These connections can be implemented across any infrastructure, from traditional data centers to the edge and cloud, according to Red Hat. 

With this simplified process, developers no longer need extensive privilege or networking knowledge to establish connections to speed up the development process and stay compliant with security requirements.

Red Hat Service Interconnect is a service customers can use in their hybrid and multi-cloud strategies to either modernize existing apps or migrate them across infrastructures or between clouds. This ensures application connections are migrated without any downtime, helping with effective compliance and risk management, as well as maximizing operational efficiency for application and network teams.
