SD Times: Software Development News

Q&A: Why the Developer Relations Foundation is forming (Thu, 17 Oct 2024)
https://sdtimes.com/softwaredev/qa-why-the-developer-relations-foundation-is-forming/

Developer relations (DevRel) is an important role within the development space, acting as a liaison between a company making development tools and the developers actually using those tools.

Recently, the Linux Foundation announced its intent to form the Developer Relations Foundation to support people in that career.

On the most recent episode of our podcast, we interviewed Stacey Kruczek, director of developer relations at Aerospike and steering committee member of the Developer Relations Foundation, to learn more.

Here is an edited and abridged version of that conversation.

How do you define the role of developer relations?

Developer relations is really the practice of elevating developers and their world. We’re all about helping the developers, helping them solve their problems. But more importantly, part of my role is to be the voice of the community to the company. So it’s important for us to be able to recognize that the developers are the influencers of much of our technology business today, and so my role as a DevRel lead is to help elevate them and share in their pains, glories and challenges, and help them solve those issues, if they have them.

I always like to sort of describe myself as the PR person for developers, if you will, promoting them, promoting the importance of them, their value to the organization and their value to the community.

Why did the Linux Foundation create the Developer Relations Foundation? Why did they see a need for that?

The importance of developer relations is really to add business value. When you think about the journey of a developer and when they first experience your company, your products, your tools, and solutions, they’re really starting their discovery period. At first, they’re discovering what your tools are, the advantages of them, and as a DevRel leader, it’s my job basically, to help them along that journey. I help them down the path from discovery to engaging with us to sharing their feedback.

So it’s really about the developer being the influencer. And I’ve seen a lot of this personally at Aerospike, and we’re having a lot of conversations with developers and architects about evaluating our tools, our products, providing us feedback, and even coming in and evaluating our community.

And more often than not, there’s a misunderstanding of what DevRel is. Each company has its own placement of a DevRel team or practice that really depends on the needs of the company, and that can be customized to them, but from a developer relations standpoint, it’s still all one and the same.

And the major benefit of forming this foundation is to create some synergy and some common best practices, common terms that we all can share as a wider global community, to elevate the practice and practitioners in DevRel. The other reason we did this is that it promotes participatory governance. That means that no single company can monopolize the project or dictate its direction.

So our focus is on that community-driven governance. We’re taking and absorbing all the feedback of all of the DevRel practitioners across the globe. We’re ensuring that all of their contributions are reviewed, they’re all based on their merit and their expertise, and moreover, we’re creating a trusted, credible, and expert resource for all of those professionals in the field. It will help promote best practices, what it means for businesses, and how we can add value.

What can DevRels do to elevate the development profession?

So really we have to look first at the challenges that this practice has faced. And I just want to tie in my own personal experience with this. I’ve been in this for over a decade. And when I initially came in, I came in as a technical marketer to a developer relations team, and I had the experience of sitting with five experienced engineers on the Android and Google development platform. And then we actually sat in the sea of engineers. So I always like to attribute my role as being the lone marketer in the sea of engineers, and they kind of kept me afloat.

The beauty of that relationship, and what it really gave me, personally, was just really inspiration. There was a lot of collaboration. I became their voice to the broader customer, partner, and ISV community, but more importantly to all the developers, I was helping them elevate their expertise.

I think at that point, DevRel was sort of a newer thing. It had been around for a while, but people really didn’t understand it. I was thankful during my time at Zebra Technologies that we had that experience with them. It ignited my passion for helping developers in the long term in DevRel, and I soon moved into creating the first developer marketing strategy for that company, and then I really wanted to take on more as an individual for that, in terms of helping elevate DevRel as a practice, because dev marketing is just one component of it. 

When we look at it long term, there’s a reason for having a foundation. There’s actually a DevRel survey that just came out, and a large portion of developer advocates who responded to the survey stated that they felt like they needed a professional practice in one space, one community where they could go to share their experiences, but also learn from others. Because a lot of what I’ve learned in DevRel, I’ve learned from peers in the industry, and that’s been so important and so crucial for my learning and my development. So it’s not only the engineers, it’s also the other DevRel professionals. And when we were dealing with Covid for so long, I was fortunate to be included in many groups of DevRel advocates and professionals that would hop onto a Slack or a Discord channel, and we’d just start talking about the challenges of DevRel and how we were all dealing with it, especially during that challenging time.

What emerged from that was the thought that we really need a foundation. We need an association of some sort that’s inclusive, and it includes our wider developer relations community and allows them a voice to be heard. 

We all bring our own personal expertise into it, but what we also bring is the ability to share with each other and collaborate. And that’s the beauty of DevRel. That’s what I love so much about it.

What is the ultimate goal, beyond bringing the community of DevRel together and having people share and exchange ideas? 

Under the broader developer relations umbrella as I’ve experienced it, we’re talking about community at the very core. You need to have a community in order to grow your business. And so that’s where it really starts. And then the various branches of that are related to developer experience. What kind of experience are they having? Are your tech docs easy for them to find? From a developer marketing standpoint, are we communicating the right messages? Are they technical? Are they authentic? And then we talk about developer success and education. We want to educate them just as much as they educate us. We want to make sure that we’re providing them the right tools, and we’re setting them up for success.

And so these various components under the DevRel umbrella become so important. This foundation will essentially help define some of these areas and provide more clarity, but being that it’s open to the community, and it’s a community-driven project, we’re going to get varying viewpoints and opinions and it’s going to create this awesome catalog of knowledge. And then, by partnering with the Linux Foundation, they offer global credibility and they offer this robust governance structure that supports long term sustainability.

Now, we’re at the intent-to-form stage for the DevRel Foundation, so we’re still in the area of exploration and learning, and we do have a mission statement that we’ve created in collaboration with the community and shared. Everything is open and out there. We have a wiki page, we have a GitHub, and we welcome anybody to participate and communicate with us.

We have weekly community calls across the globe, and many developer relations professionals are joining us on those calls and sharing their experience and their knowledge. We assign topics for the week, we review our proposals of how this will roll out, and the idea is that as a steering committee, we’re there to help guide the ship, we’re guiding the boat through the sea, and we’re going to help them stay on target, if you will. 

The project itself, the foundation, it really is going to rely on contributions from the community, individuals, supporters of the organization, and they’re going to provide expertise, guidance and content.

Podcast: The importance of buildpacks in developing cloud native applications (Thu, 26 Sep 2024)
https://sdtimes.com/containers/podcast-the-importance-of-buildpacks-in-developing-cloud-native-applications/

Buildpacks help ease the burden on developers by taking source code and turning it into fully functional apps.

To learn more about this technology, we interviewed Ram Iyengar, chief evangelist of the Cloud Foundry Foundation, on the most recent episode of our podcast, What the Dev?

Here is an edited and abridged version of that conversation:

How do buildpacks — and the Paketo Buildpacks in particular — help developers deploy cloud native applications?

I think buildpacks have been very important in making a lot of applications get pushed to production and get containerized without having to deal with a lot of overhead that usually comes with the process of containerization. What can I say that we haven’t said already in the webinar and in the article and things like that? Well, there’s a community angle to this. Buildpacks is somewhat headed towards graduation within the CNCF, and we expect that it will graduate in the next six to 12 months. If there’s any show of support that you can do as a community, I highly welcome people giving it a star, opening important issues, trying the project out, and seeing how you can consume it, and giving us feedback about how the project can be improved.
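
To make that concrete for readers who have not tried buildpacks, here is a minimal sketch of the containerize-without-a-Dockerfile workflow, driving the pack CLI from Python. It is an illustration rather than an official Paketo example: it assumes the pack CLI and a Docker daemon are installed, and the image name is a placeholder (the builder shown is the public Paketo Jammy base builder).

```python
# Minimal sketch: turn a source directory into a runnable OCI image with
# Cloud Native Buildpacks, no Dockerfile required.
# Assumes the `pack` CLI and a Docker daemon are installed locally.
import subprocess

def build_image(source_dir: str, image_name: str) -> None:
    """Run buildpacks detection and build against a source tree."""
    subprocess.run(
        [
            "pack", "build", image_name,
            "--path", source_dir,
            "--builder", "paketobuildpacks/builder-jammy-base",
        ],
        check=True,  # raise if detection or the build fails
    )

if __name__ == "__main__":
    build_image(".", "my-app:latest")  # hypothetical image name
```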

One thing that I wanted to get into a little bit is Korifi, which is your platform for creating and deploying Kubernetes applications. Can you talk a little bit about Korifi and how it ties in with buildpacks?

Absolutely, one of the main areas where we see a lot of buildpacks being consumed is when people are getting into the job of building platforms on Kubernetes. Now, any sort of talk you see about Kubernetes these days, whether it’s at KubeCon or one of the other events, is about how extremely complex it is, and it’s been said so many times over and over again; there are memes, there are opinion pieces, there’s all kinds of internet subculture about how complex Kubernetes can be.

The consequence of this complexity is that some teams and companies have started to come up with a platform where they say, you want to make use of Kubernetes? Well, install another substrate over Kubernetes and abstract a lot of the Kubernetes internals away from your developers. So that resonates perfectly with what the Cloud Foundry messaging has been all these years. People want a first-class, self-service, multi-tenant experience over VMs, and they want that same kind of experience on Kubernetes today for somewhat slightly different reasons, but the ultimate aim being that developers need to be able to get to that velocity that they’re most optimal at. They need to be able to build fast and deploy faster and keep pushing applications out into production while folding down a lot of the other areas of importance, like, how do we scale this, and how do we maintain load balancers on this? How do we configure networking and ingress?

All of these things should fall down into a platform. And so Korifi is what has emerged from the community for actually implementing that kind of behavior, and buildpacks fits perfectly well into this world. So by using buildpacks — and I think Korifi is like the numero uno consumer of buildpacks — they’ve actually built an experience to be able to deploy applications onto Kubernetes, irrespective of the language and family, and taking advantage of all of those buildpacks features.

I’m hearing a lot of conversation about the Cloud Foundry Foundation in general, that it’s kind of old, and perhaps Kubernetes is looking to displace what you guys are doing. So how would you respond to that? And what is the Cloud Foundry Foundation offering in the Kubernetes world? 

It’s a two part or a two pronged answer that I have. On the one hand, there is the technology side of things. On the other, there’s a community and a human angle to things. Engineers want new tools and new things and new infrastructure and new kinds and ways to work. And so what has happened in the larger technology community is that a sufficiently adequate technology like Cloud Foundry suddenly found itself being relegated to legacy technology and the old way to do things and not modern enough in some cases. That’s the human angle to it. So when people started to look at Kubernetes, when the entire software development community learned of Kubernetes, what they did was to somehow pick up on this new trend, and they wanted to sort of ride the hype train, so to speak. And Kubernetes started to occupy a lot of the mind space, and now, as Gartner puts it quite well, you’re past that elevated expectations phase. And you’re now getting into productivity, and the entire community is yearning for a way to consume Kubernetes minus the complexity. And they want a very convenient way in which to deploy applications on Kubernetes while not worrying about networking and load balancing and autoscalers and all of these other peripheral things that you have to attach to an application.

I think it’s not really about developers just wanting new things. I think they want better tools and more efficient ways of doing their jobs, which frees them up to do more of the innovation that they like and not get bogged down with all of those infrastructure issues and things that you know now can be taken care of. So I think what you’re saying is very important in terms of positioning Cloud Foundry as being useful and helpful for developers in terms of gaining efficiency and being able to work the way they want to work.

Well, yes, I agree in principle, which is why I’m saying Cloud Foundry and some others like Heroku, they all perfected this experience of here’s what a developer’s workflow should be. Now, developers are happy to adopt new ways to work, but the problem is, when you’re on the path to gain that kind of efficiency and velocity, you often unintentionally build a lot of opinionated workflows around yourself. So, all developers will have a very specific way in which they’ll actually create deployments and create these immutable artifacts, and they’re going to build themselves a fort from where they’d like to be kings of the castle, lords of the manor, and a lot of it is really about that mental image and the apprehensions that come from deviating from it. And at the moment, Kubernetes seems to offer one of the best ways to build and package and deploy an app, given that it can accomplish so many different things.

Now, if you take a point by point comparison between what Cloud Foundry was capable of in, let’s say, 2017 versus what Kubernetes is capable of right now, it will be almost the same. So in terms of feature parity, we are now at a point, and this might be very controversial to say on a public podcast, but Cloud Foundry has always offered the kind of features that are available in the Kubernetes community right now.

Now, of course, Kubernetes imagines applications to be built and deployed in a slightly different way, but in terms of getting everything into containers and shipping into a container orchestrator and providing the kind of reliability that applications need, and allowing sidecars and services and multi-tenancy, the capabilities are much the same.

I strongly believe that the Cloud Foundry offering was quite compelling even four or five years ago, while Kubernetes is still sort of navigating some fairly choppy waters in terms of multi-tenancy and services and things like that. But hey, as a community, they’re doing wonderful innovation. And yeah, I agree with you when I say engineers are always after the best way in which to, you know, gain that efficiency.

Podcast: How time series data is revolutionizing data management (Wed, 04 Sep 2024)
https://sdtimes.com/data/podcast-how-time-series-data-is-revolutionizing-data-management/

Time series data is an important component of making IoT devices like smart cars or medical equipment work properly, because it captures measurements based on time values. 

To learn more about the crucial role time series data plays in today’s connected world, we invited Evan Kaplan, CEO of InfluxData, onto our podcast to talk about this topic.

Here is an edited and abridged version of that conversation:

What is time series data?

It’s actually fairly easy to understand. It’s basically the idea that you’re collecting measurements or instrumentation based on time values. The easiest way to think about it is, say, sensors, sensor analytics, or things like that. Sensors could measure pressure, volume, temperature, humidity, light, and it’s usually recorded as a time based measurement, a time stamp, if you will, every 30 seconds or every minute or every nanosecond. The idea is that you’re instrumenting systems at scale, and so you want to watch how they perform. One, to look for anomalies, but two, to train future AI models and things like that. 

And so that instrumentation stuff is done, typically, with a time series foundation. In years gone by it might have been done on a general-purpose database, but increasingly, because of the amount of data that’s coming through and the real time performance requirements, specialty databases have been built. A specialized database to handle this sort of stuff really changes the game for system architects building these sophisticated real time systems.

So let’s say you have a sensor in a medical device, and it’s just throwing data off, as you said, rapidly. Now, is it collecting all of it, or is it just flagging when an anomaly comes along?

It’s both about data in motion and data at rest. So it’s collecting the data, and there are some applications that we support that are doing billions of points per second — think hundreds or thousands of sensors reading every 100 milliseconds. And we’re looking at the data as it’s being written, and it’s available for being queried almost instantly. There’s almost zero time, but it’s a database, so it stores the data, it holds the data, and it’s capable of long term analytics on the same data. 
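
As a rough illustration of that write-then-query-immediately pattern, here is a small sketch using the InfluxDB 2.x Python client (the influxdb-client package). The URL, token, org, bucket, and measurement names are placeholders, not values from the conversation.

```python
# Sketch only: write one time-stamped sensor reading, then query it back.
# Connection details below are placeholder assumptions.
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")

# Write a point (in production this happens at very high rates, often in batches).
write_api = client.write_api(write_options=SYNCHRONOUS)
write_api.write(
    bucket="telemetry",
    record=Point("device_metrics").tag("device", "pump-1").field("pressure", 101.3),
)

# The same data is queryable almost immediately and stays available for long-term analytics.
tables = client.query_api().query(
    'from(bucket:"telemetry") |> range(start: -1h) '
    '|> filter(fn: (r) => r._measurement == "device_metrics")'
)
for table in tables:
    for record in table.records:
        print(record.get_time(), record.get_field(), record.get_value())
```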

So storage, is that a big issue? If all this data is being thrown off, and if there are no anomalies, you could be collecting hours of data in which nothing has changed?

If you’re getting data — and some regulated industries require that you keep this data around for a really long period of time — it’s really important that you’re skillful at compressing it. It’s also really important that you’re capable of delivering an object storage format, which is not easy for a performance-based system, right? And it’s also really important that you be able to downsample it. Downsampling means we’re taking measurements every 10 milliseconds, but every 20 minutes, we want to summarize that. We want to downsample it to look for the signal that was in that 10 minute or 20 minute window. And we downsample it and evict a lot of data and just keep the summary data. So you have to be very good at that kind of stuff. Most databases are not good at eviction or downsampling, so it’s a really specific set of skills that makes it highly useful, not just us, but our competitors too. 
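
To make the downsampling idea concrete, here is a small pandas sketch of the pattern Kaplan describes; it is an illustration of the concept, not InfluxData's implementation. Raw 10-millisecond readings are summarized into 20-minute windows, and only the summary rows would be kept.

```python
# Illustrative downsampling: summarize high-resolution readings into coarse
# windows, then evict the raw points and keep only the summaries.
import numpy as np
import pandas as pd

# Hypothetical raw data: one pressure reading every 10 ms for an hour.
index = pd.date_range("2024-01-01", periods=360_000, freq="10ms")
raw = pd.DataFrame({"pressure": np.random.normal(100.0, 5.0, len(index))}, index=index)

# One summary row per 20-minute window.
summary = raw.resample("20min").agg(["mean", "min", "max", "count"])
print(summary)

# In a time series database, the raw points would now be evicted and only
# the summary rows (per the retention policy) would be kept.
```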

We were talking about edge devices and now artificial intelligence coming into the picture. So how does time series data augment those systems? Benefit from those advances? Or how can they help move things along even further?

I think it’s pretty darn fundamental. The concept of time series data has been around for a long time. So if you built a system 30 years ago, it’s likely you built it on Oracle or Informix or IBM Db2. The canonical example is financial Wall Street data, where you know how stocks are trading one minute to the next, one second to the next. So it’s been around for a really long time. But what’s new and different about the space is we’re sensifying the physical world at an incredibly fast pace. You mentioned medical devices, but smart cities, public transportation, your cars, your home, your industrial factories, everything’s getting sensored — I know that’s not a real word, but easy to understand. 

And so sensors speak time series. That’s their lingua franca. They speak pressure, volume, humidity, temperature, whatever you’re measuring over time. And it turns out, if you want to build a smarter system, an intelligent system, it has to start with sophisticated instrumentation. So I want to have a very good self-driving car, so I want to have a very, very high resolution picture of what that car is doing and what that environment is doing around the car at all times. So I can train a model with all the potential awareness that a human driver or better, might have in the future. In order to do that, I have to instrument. I then have to observe, and then have to re-instrument, and then I have to observe. I run that process of observing, correcting and re-instrumenting over and over again 4 billion times. 

So what are some of the things that we might look forward to in terms of use cases? You mentioned a few of them now with, you know, cities and cars and things like that. So what other areas are you seeing that this can also move into?

So first of all, where we’re really strong is energy, aerospace, financial trading, and network telemetry. Our largest customers are everybody from JPMorgan Chase to AT&T to Salesforce to a variety of stuff. So it’s a horizontal capability, that instrumentation capability. 

I think what’s really important about our space, and becoming increasingly relevant, is the role that time series data plays in AI, and really the importance of understanding how systems behave. Essentially, what you’re trying to do with AI is you’re trying to say what happened to train your model and what will happen to get the answers from your model and to get your system to perform better. 

And so, “what happened?” is our lingua franca. That’s a fundamental thing we do: getting a very good picture of everything that’s happening around that sensor at that time, collecting high resolution data and then feeding that to training models where people do sophisticated machine learning or robotics training, and then taking action based on that data. So without that instrumentation data, the AI stuff is basically missing its foundational pieces, particularly the real world AI. I’m not necessarily talking about the generative LLMs, but about cars, robots, cities, factories, healthcare, that sort of stuff.

Podcast: Misconceptions around Agile in an AI world (Wed, 28 Aug 2024)
https://sdtimes.com/agile/podcast-misconceptions-around-agile-in-an-ai-world/

In this week’s episode of our podcast, What the Dev?, we spoke with David Ross, Agile evangelist for Miro, about some of the misconceptions people have about Agile today, and also how Agile has evolved since its early days.

Here is an edited and abridged version of that conversation:

Where do you see the change from people doing Agile and thinking they understood it, to now? What do they have to take into consideration for this new modern era?

I have been in software development for almost 20 years, and it’s been an interesting evolution for me to watch what Agile meant maybe 15-20 years ago versus how it’s perceived today. I just remember back in the early days of some of the very first Agile transformations that I was part of, it was very much all about following a process and having fealty to specific frameworks, be it Scrum or Kanban or whatever the case might be. And the closer you were to perfection by following those frameworks, the closer you were to God, as it were, like the more Agile you could claim to be. 

And what we forgot in all of that was, of course, that the Agile values and principles don’t prescribe any particular framework or approach. You’re supposed to put people and interactions over tools and processes. Well, if you are enforcing processes and you’re asking people to interact via tools, that kind of defeats a lot of the very fundamental sort of values of Agile right from the get go.

We also have problems, in that a lot of people came into the industry, and maybe people who were not sufficiently trained or had enough experience in real, good Agile practices, and there was just a lot of bad, bad Agile out there. You know, people who got a two-day certificate stamped and said, hey, I’m going to come in and now enforce Scrum processes on this team and coach them to higher levels of agility, and that’s not a recipe for success.

This has been true of DevOps, value stream management, you know, these are just vague, non-prescriptive processes to follow. But nobody says you have to be doing X, Y and Z to be Agile, or be doing full DevOps, or be doing value stream. It’s kind of like, well, we’re just going to leave it up to you, adopt what you want, throw out what you don’t want, we don’t mean to be prescriptive. But, I think that has added to so much confusion in these markets over the years. So where we’re at now, and you’re talking about evolving into this modern era, what’s impacting it? Is it simply cloud-native computing? Is it AI? Is it all of the above? 

I feel like Agile reached this sort of peak, where people were finding that they weren’t really getting the value that had been promised as a part of an Agile transformation. They weren’t seeing the value for their customers, they weren’t seeing their value for their teams. And, you know, the house of cards started to fall apart a little bit. And let’s be honest as well, one of the things about Agile was you had to have co-located teams, so that’s one sacred cow that got sacrificed during Covid, because co-located teams just wasn’t a possibility, and we’re not in that world anymore. 

And honestly, from where I sit, Agile was invented to solve a very specific, defined problem within software development, which was software development delivery and making sure that you weren’t constantly missing deadlines, and that you were delivering the right level of value. And I think a lot of those problems have kind of been solved, and Agile has kind of expanded beyond the boundaries of just software development as well. And people are kind of seeing that it’s not one size fits all. It needs to be more adaptive. It needs to be more pragmatic and less prescriptive. 

And so that’s kind of where we are right now. I feel like where we’re in a period of retrenchment and reinvention of Agile. People are starting to see that prescriptive frameworks just aren’t going to work for them. And a lot of the customers that I talk to are evolving and coming up with their own sort of custom approach. And they’re maybe using different vocabulary, different language, but they’re still doing things that are Agile, but they’re just not recognizable to somebody 10-15 years ago.

You bring in cloud-native computing, where now you have a whole lot of moving parts, where it isn’t just a monolithic code base going through, but you’re calling APIs, you’re using Kubernetes, containers. And all of these complexities kind of change the look of things, so how do those things affect the way that people have been doing Agile, and what adjustments have they had to make for those types of things?

I think they’ve kind of stepped away from prescriptive frameworks, and many times they’re just adapting. This is really, honestly what they should have been doing all along. You should have not been prescriptive, you should have been able to adapt your processes, and even if it’s not pure to the framework that you started with, it’s okay for you to move in that direction. So people are, I think, moving away from those defined roles that were part of those frameworks. I think that that’s probably a good thing. Rather than, you know, you’re a product owner or you’re a Scrum master, or all of those kinds of things, moving away from prescriptive titles I think is one thing that I’ve seen them do.

Also, working with tool sets that are less rigid and more flexible. So if you are trying to run everything within a very defined set of tools, and those tools define your workflow, that’s very constrictive, I feel, for a lot of companies and a lot of teams, and they’re trying to find a better way to organize themselves and to support their ways of working using more flexible tool sets.

How is AI impacting Agile development?

Well, you know, I would be lying if I said that anybody knows the answer to that, right? We’re still in the very early days of that revolution. But one thing that I can kind of see on the horizon as a potential outcome and impact of AI: is it going to affect the team size? If you think about an Agile team generally, they used to prescribe that the ideal size is six plus or minus three, and you have to have these specific skill sets on it. Maybe team sizes are going to shrink a little, and you’re going to have maybe one or two developers on a team, and then they can orchestrate a series of AI agents that do a lot of the work that other specialists would have done in the past, like QA or specific database tasks or things like that. So I definitely think it’s going to affect the team composition, the team structure, and the team size. 

The other thing that I think it’s going to really impact is the monotony of some of the tasks that get done; those are probably going to be taken over by AI. And you see that across all industries, right? What does that mean? It means that it’s going to free up the really talented people on Agile teams to do that sort of higher level strategic thinking. You know, the things that AI can’t do yet. Maybe it’ll do it one day, but it can’t do it today where it’s thinking strategically and thinking about human dimensions of what they’re building and making sure that it’s being guided in that direction. The actual coding work or testing work will probably be taken over by some form of an AI, but we are going to have the ability to focus our efforts on those higher order or higher complexity activities. 

So you really have to prepare yourself individually. You have to bring your skill set up, and you also have to know how to work with an AI, because if those AIs are going to be your assistants, or they’re going to be an embedded part of your team, you have to know how to be able to orchestrate and run a series of AI agents that are going to get the work done that other human beings would have done before. So I really think that’s going to happen. What does that mean for Scrum masters specifically? I think Scrum masters, again, will have to evolve in a different direction and focus more on the human element. We’ve always said that Scrum masters are also Agile coaches, but we haven’t really taken that to heart. And I feel like that’s something that Scrum masters really need to embrace in this new era of being able to coach human beings and have high emotional intelligence. AI doesn’t have emotional intelligence. We do. So we need to be able to make sure that the human beings on our team are supported and have what they need to collaborate and to be successful, and then leave the drudgery to the AI.

Podcast: AI testing AI? A look at CriticGPT (Tue, 20 Aug 2024)
https://sdtimes.com/podcast-ai-testing-ai-a-look-at-criticgpt/

OpenAI recently announced CriticGPT, a new AI model that provides critiques of ChatGPT responses in order to help the humans training GPT models better evaluate outputs during reinforcement learning from human feedback (RLHF). According to OpenAI, CriticGPT isn’t perfect, but it does help trainers catch more problems than they do on their own.

But is adding more AI into the quality step such a good idea? In the latest episode of our podcast, we spoke with Rob Whiteley, CEO of Coder, about this idea. 

Here is an edited and abridged version of that conversation:

A lot of people are working with ChatGPT, and we’ve heard all about hallucinations and all kinds of problems, you know, violating copyrights by plagiarizing things and all this kind of stuff. So OpenAI, in its wisdom, decided that it would have an untrustworthy AI be checked by another AI that we’re now supposed to trust is going to be better than their first AI. So is that a bridge too far for you?

I think on the surface, I would say yes, if you need to pin me down to a single answer, it’s probably a bridge too far. However, where things get interesting is really your degree of comfort in tuning an AI with different parameters. And what I mean by that is, yes, logically, if you have an AI that is producing inaccurate results, and then you ask it to essentially check itself, you’re removing a critical human in the loop. I think the vast majority of customers I talk to kind of stick to an 80/20 rule. About 80% of it can be produced by an AI or a GenAI tool, but that last 20% still requires that human.

And so on the surface, I worry that if you become lazy and say, okay, I can now leave that last 20% to the system to check itself, then I think we’ve wandered into dangerous territory. But, if there’s one thing I’ve learned about these AI tools, it’s that they’re only as good as the prompt you give them, and so if you are very specific in what that AI tool can check or not check — for example, look for coding errors, look for logic fallacies, look for bugs, do not hallucinate, do not lie, and if you do not know what to do, please prompt me — there are things that you can essentially make explicit instead of implicit, which will have a much better effect. 

The question is do you even have access to the prompt, or is this a self-healing thing in the background? And so to me, it really comes down to, can you still direct the machine to do your bidding, or is it now just kind of semi-autonomous, working in the background?
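
As a minimal sketch of what being that specific about what the tool can and cannot check might look like, the snippet below packages a tightly scoped review prompt. The wording and the message format are assumptions made for illustration, not OpenAI's actual CriticGPT prompt or API.

```python
# Illustrative only: an explicit, narrowly scoped review prompt.
# Substitute whatever model client your team actually uses.
REVIEW_INSTRUCTIONS = """You are reviewing a code change. Follow these rules exactly:
- Look for coding errors, logic fallacies, and bugs; cite the exact lines involved.
- Do not hallucinate issues and do not speculate. If you are unsure, respond with
  "NEEDS HUMAN REVIEW" and explain what information is missing.
- Do not rewrite the code or comment on style.
"""

def build_review_messages(diff: str) -> list[dict]:
    """Package the constrained instructions and the diff as chat-style messages."""
    return [
        {"role": "system", "content": REVIEW_INSTRUCTIONS},
        {"role": "user", "content": f"Review this diff:\n{diff}"},
    ]

messages = build_review_messages("--- a/app.py\n+++ b/app.py\n...")  # hypothetical diff
```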

So how much of this do you think is just people kind of rushing into AI really quickly? 

We are definitely in a classic kind of hype bubble when it comes to the technology. And I think where I see it is, again, specifically, I want to enable my developers to use Copilot or some GenAI tool. And I think victory is declared too early. Okay, “we’ve now made it available.” And first of all, if you can even track its usage, and many companies can’t, you’ll see a big spike. The question is, what about week two? Are people still using it? Are they using it regularly? Are they getting value from it? Can you correlate its usage with outcomes like bugs or build times? 

And so to me, we are in a ready fire aim moment where I think a lot of companies are just rushing in. It kind of feels like cloud 20 years ago, where it was the answer regardless. And then as companies went in, they realized, wow, this is actually expensive or the latency is too bad. But now we’re sort of committed, so we’re going to do it. 

I do fear that companies have jumped in. Now, I’m not a GenAI naysayer. There is value, and I do think there’s productivity gains. I just think, like any technology, you have to make a business case and have a hypothesis and test it and have a good group and then roll it out based on results, not just, open the floodgates and hope.

Of the developers that you speak with, how are they viewing AI. Are they looking at this as oh, wow, this is a great tool that’s really going to help me? Or is it like, oh, this is going to take my job away? Where are most people falling on that?

Coder is a software company, so of course, I employ a lot of developers, and so we sort of did a poll internally, and what we found was 60% were using it and happy with it. About 20% were using it but had sort of abandoned it, and 20% hadn’t even picked it up. And so I think first of all, for a technology that’s relatively new, that’s already approaching pretty good saturation. 

For me, the value is there, the adoption is there, but I think that it’s the 20% that used it and abandoned it that kind of scare me. Why? Was it just because of psychological reasons, like I don’t trust this? Was it because of UX reasons? Was it that it didn’t work in my developer flow? If we could get to a point where 80% of developers — we’re never going to get 100%  — so if you get to 80% of developers getting value from it, I think we can put a stake in the ground and say this has kind of transformed the way we develop code. I think we’ll get there, and we’ll get there shockingly fast. I just don’t think we’re there yet.

I think that that’s an important point that you make about keeping humans in the loop, which circles back to the original premise of AI checking AI. It sounds like perhaps the role of developers will morph a little bit. As you said, some are using it, maybe as a way to do documentation and things like that, and they’re still coding. Other people will perhaps look to the AI to generate the code, and then they’ll become the reviewer where the AI is writing the code.

Some of the more advanced users, both in my customers and even in my own company, they were before AI an individual contributor. Now they’re almost like a team lead, where they’ve got multiple coding bots, and they’re asking them to perform tasks and then doing so, almost like pair programming, but not in a one-to-one. It’s almost a one-to-many. And so they’ll have one writing code, one writing documentation, one assessing a code base, one still writing code, but on a different project, because they’re signed into two projects at the same time.

So absolutely I do think developer skill sets need to change. I think a soft skill revolution needs to occur where developers are a little bit more attuned to things like communicating, giving requirements, checking quality, motivating, which, believe it or not, studies show, if you motivate the AI, it actually produces better results. So I think there is a definite skill set that will kind of create a new — I hate to use the term 10x — but a new, higher functioning developer, and I don’t think it’s going to be, do I write the best code in the world? It’s more, can I achieve the best outcome, even if I have to direct a small virtual team to achieve it?

Q&A: Developing software-defined vehicles (Tue, 13 Aug 2024)
https://sdtimes.com/softwaredev/qa-developing-software-defined-vehicles/

Cars today are complex pieces of software. You’ve got the infotainment system connected to your phone. You’ve got the lane keep assist that lets you know when you’re starting to sway from your lane. You may even have a backup alert system that warns you that there’s a person walking near your car.

So now, on top of all the other components a car needs to function, software is also now in the mix, creating a complex ecosystem that cannot fail at any point.

In the most recent episode of our podcast What the Dev, we were joined by Cameron van Orman, chief strategy & marketing officer and GM of Automotive Solutions at Planview, to talk about how automakers are managing their software development life cycles.

Here is an edited and abridged version of that conversation: 

Let’s talk a little bit about the complexity in making these cars happen, the software. What goes into making these autonomous vehicles?

As you said, David, it’s very complex. You’re taking an industry that drove the Industrial Revolution and became experts over 100 years of mechanical, physical engineering, bending metal, combustion as part of vehicle propulsion. And now this same group that has this 100 years of physical supply chains is now coming a little bit late (but fast) to the party on software. Depending on which auto manufacturer you talk to, you have somewhere between 100 and 500 million lines of code in a current automobile — and I’m not just talking EVs. Even in a traditional internal combustion engine propelled car there’s a lot of complexity in all that software built and designed from not just the OEM, but a multi-tiered supply chain. How do you get all that integrated, working, and effective and delivering transformative experiences for us as drivers and passengers?

Building cars had always been a very mechanical kind of a process. Now it’s much more of a digital process in many ways. I mean, it’s the merger of both, actually. How are automakers adapting? 

It’s a complete change, arguably. I heard one of the world’s largest cloud infrastructure providers accuse the automobile industry of being the last stalwart in adopting cloud, and many of them are still on-prem, yet they’re really adopting all this modern software so quickly. In the last 10 years, there’s just been this explosion of code and software in a car, but there’s still a challenge in this Agile transformation, digital transformation, that’s going on in an industry that has this deep heritage in physical manufacturing and bending metal. 

Launches of a new car platform or a new car model are often now dependent on software. Mark Fields — he’s the former CEO and chairman of Ford — is chairman of Planview, and so I’ve had the opportunity to talk at length with him on this topic. And over 100 years, auto manufacturers have really perfected and have this great visibility into everything physical that goes into the launch of a new vehicle, all the design and aero and propulsion and combustion and all the tooling of factories, but now it’s software that’s causing models to be delayed. In some cases, it’s causing executives — and we saw it over in Europe — to lose their jobs.

And unlike physical manufacturing with this long history and understanding of the burn down — you start with a gazillion items to do, and every week you have your meeting, and items just get reduced until it’s ready to launch — that’s not the way software development works. And auto companies are grappling with predictability and efficiency of their software supply chain, not just their physical supply chain. If software is late or is going to delay a launch of a platform, that can cost tens of millions of dollars, as you have physical plants that have been tooled up and sitting idle.

What about the testing of that software? Obviously, this has to be mission critical stuff. You can’t have a software defined vehicle have a failure, that would be catastrophic. So how does that work in terms of when you talk about portfolio planning, how much of the pre-planning has to go into it to ensure things like that aren’t happening? 

A lot. How do you have that visibility into the full life cycle effectiveness, flow, predictability and throughput of your software tool chain and software development processes? And what’s really unique about the auto industry is when we talk about technology buzzwords like DevOps or value stream management, most often we think about it in the confines of a single organization. But in automotive you’ve got to think about it across a distributed set of suppliers and companies, from the OEMs to the tier ones to the tier twos. 

As a driver or passenger in an automobile you don’t know  — whether it’s the braking system or the infotainment center — was the software that manages it and runs it, was that built and coded by the OEM, by the tier one, by a sub component supplier? And you don’t care. It’s all got to work together. 

And so the complexity of your software development life cycle and the need for visibility is far greater. Single companies struggle with visibility across their DevOps or software life cycles across all the steps and tools. Magnify that by OEMs, who have their own divisions and regions and silos, and then they have their own complex configuration of suppliers that can number in the hundreds. You need that visibility. And you talked about quality. You need that traceability. 

As we were sort of preparing for the call you talked about your wife having issues with the infotainment system. So, you go to the local dealer or mechanic shop, and they’ve got to flag that IT software issue up to the OEM. The OEM has to figure out who really created that code, tier one, tier two, and it’s got to trace it all the way through to that development team. They’ve got to see it. They’ve got to then fix it, and it’s got to push it all the way back up and ultimately, into the car, right? And that traceability is so important.

Q&A: Lessons NOT learned from CrowdStrike and other incidents (Wed, 31 Jul 2024)
https://sdtimes.com/test/qa-lessons-not-learned-from-crowdstrike-and-other-incidents/

When an event like the CrowdStrike failure literally brings the world to its knees, there’s a lot to unpack there. Why did it happen? How did it happen? Could it have been prevented? 

On the most recent episode of our weekly podcast, What the Dev?, we spoke with Arthur Hicken, chief evangelist at the testing company Parasoft, about all of that and whether we’ll learn from the incident. 

Here’s an edited and abridged version of that conversation:

AH: I think that is the key topic right now: lessons not learned — not that it’s been long enough for us to prove that we haven’t learned anything. But sometimes I think, “Oh, this is going to be the one or we’re going to get better, we’re going to do things better.” And then other times, I look back at statements from Dijkstra in the 70s and go, maybe we’re not gonna learn now. My favorite Dijkstra quote is “if debugging is the act of removing bugs from software, then programming is the act of putting them in.” And it’s a good, funny statement, but I think it’s also key to one of the important things that went wrong with CrowdStrike. 

We have this mentality now, and there’s a lot of different names for it — fail fast, run fast, break fast —  that certainly makes sense in a prototyping era, or in a place where nothing matters when failure happens. Obviously, it matters. Even with a video game, you can lose a ton of money, right? But you generally don’t kill people when a video game is broken because it did a bad update. 

David Rubinstein, editor-in-chief of SD Times: You talk about how we keep having these catastrophic failures, and we keep not learning from them. But aren’t they all a little different in certain ways, like you had Log4j that you thought would be the thing that oh, people are now definitely going to pay more attention now. And then we get CrowdStrike, but they’re not all the same type of problem?

AH: Yeah, that is true, I would say, Log4j was kind of insidious, partly because we didn’t recognize how many people use this thing. Logging is one of those less worried about topics. I think there is a similarity in Log4j and in CrowdStrike, and that is we have become complacent where software is built without an understanding of what the rigors are for quality, right? With Log4j, we didn’t know who built it, for what purpose, and what it was suitable for. And with CrowdStrike, perhaps they hadn’t really thought about what if your antivirus software makes your computer go belly up on you? And what if that computer is doing scheduling for hospitals or 911 services or things like that? 

And so, what we’ve seen is that safety critical systems are being impacted by software that never thought about it. And one of the things to think about is, can we learn something from how we build safety critical software or what I like to call good software? Software meant to be reliable, robust, meant to operate under bad conditions. 

I think that’s a really interesting point. Would it have hurt CrowdStrike to have built their software to better standards? And the answer is it wouldn’t. And I posit that if they were building better software, speed would not be impacted negatively and they’d spend less time testing and finding things.

DR: You’re talking about safety critical, you know, back in the day that seemed to be the purview of what they were calling embedded systems that really couldn’t fail. They were running planes and medical devices and things that really were life and death. So is it possible that maybe some of those principles could be carried over into today’s software development? Or is it that you needed to have those specific RTOSs to ensure that kind of thing?

AH: There’s certainly something to be said for a proper hardware and software stack. But even in the absence of that, you have your standard laptop with your OS of choice on it and you can still build software that is robust. I have a little slide up on my other monitor from a joint webinar with CERT a couple of years ago, and one of the studies that we used there is that 64% of vulnerabilities in NIST are programming errors. And 51% of those are what they like to call classic errors. I look at what we just saw in CrowdStrike as a classic error. A buffer overflow, reading null pointers or uninitialized things, integer overflows, these are what they call classic errors. 

And they obviously had an effect.  We don’t have full visibility into what went wrong, right? We get what they tell us. But it appears that there’s a buffer overflow that was caused by reading a config file, and one can argue about the effort and performance impact of protecting against buffer overflows, like paying attention to every piece of data. On the other hand, how long has that buffer overflow been sitting in that code? To me a piece of code that’s responding to an arbitrary configuration file is something you have to check. You just have to check this. 

The question that keeps me up at night, like if I was on the team at CrowdStrike, is okay, we find it, we fix it, then it’s like, where else is this exact problem? Are we going to go and look and find six other or 60 other or 600 other potential bugs sitting in the code only exposed because of an external input?

DR: How much of this comes down to technical debt, where you have these things that linger in the code that never get cleaned up, and things are just kind of built on top of them? And now we’re in an environment where if a developer is actually looking to eliminate that and not writing new code, they’re seen as not being productive. How much of that is feeding into these problems that we’re having?

AH: That’s a problem with our current common belief about what technical debt is, right? I mean the original metaphor is solid, the idea that stupid things you’re doing or things that you failed to do now will come back to haunt you in the future. But simply running some kind of static analyzer and calling every unaddressed issue technical debt is not helpful. And not every tool can find buffer overflows that don’t yet exist. There are certainly static analyzers that can look for, or enforce, design patterns that would disallow buffer overflows. In other words, looking for the existence of a size check. And those are the kinds of things that when people are dealing with technical debt, they tend to call false positives. Good design patterns are almost always viewed as false positives by developers. 

So again, it’s that we have to change the way we think, we have to build better software. Dodge said back in, I think it was the 1920s, you can’t test quality into a product. And the mentality in the software industry is if we just test it a little more, we can somehow find the bugs. There are some things that are very difficult to protect against. Buffer overflow, integer overflow, uninitialized memory, null pointer dereferencing, these are not rocket science.
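
The CrowdStrike sensor is native code and its internals are not public, so the sketch below is only a language-neutral illustration of the principle Hicken keeps returning to: validate an arbitrary external input, such as a configuration file, before acting on it. The size limit and field count are hypothetical.

```python
# Illustration of "you just have to check this": refuse to use a config file
# until its size and shape have been validated. All limits are hypothetical.
MAX_CONFIG_BYTES = 64 * 1024   # larger than anything we ever expect to receive
EXPECTED_FIELDS = 8            # hypothetical number of fields per record

def load_config(path: str) -> list[str]:
    with open(path, "rb") as f:
        data = f.read(MAX_CONFIG_BYTES + 1)
    if len(data) > MAX_CONFIG_BYTES:
        raise ValueError("config file larger than expected; refusing to parse")
    fields = data.decode("utf-8").strip().split(",")
    if len(fields) != EXPECTED_FIELDS:
        raise ValueError(f"expected {EXPECTED_FIELDS} fields, got {len(fields)}")
    return fields
```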


You may also like…

Lessons learned from CrowdStrike outages on releasing software updates

Software testing’s chaotic conundrum: Navigating the Three-Body Problem of speed, quality, and cost

Q&A: Solving the issue of stale feature flags

The post Q&A: Lessons NOT learned from CrowdStrike and other incidents appeared first on SD Times.

Introducing eLxr: Delivering Enterprise-Grade Linux for Edge-to-Cloud Deployments https://sdtimes.com/introducing-elxr-delivering-enterprise-grade-linux-for-edge-to-cloud-deployments/ Mon, 29 Jul 2024 18:06:06 +0000 https://sdtimes.com/?p=55294 The eLxr project has launched its first release of a Debian derivative inheriting intelligent edge capabilities of Debian, with plans to expand these for a streamlined edge-to-cloud deployment approach. eLxr is an open source, enterprise-grade Linux distribution that addresses the unique challenges of near-edge networks and workloads. What Is the eLxr Project? The eLxr project … continue reading

The eLxr project has launched the first release of its Debian derivative, which inherits Debian’s intelligent edge capabilities, with plans to expand them for a streamlined edge-to-cloud deployment approach. eLxr is an open source, enterprise-grade Linux distribution that addresses the unique challenges of near-edge networks and workloads.
What Is the eLxr Project?
The eLxr project is a community-driven effort dedicated to broadening access to cutting-edge technologies for both enthusiasts and enterprise users seeking reliable and innovative solutions that scale from edge to cloud. The project produces and maintains an open source, enterprise-grade Debian-derivative distribution called eLxr that is easy for users to adopt and that fully honors the open source philosophy.
The eLxr project’s mission is centered on accessibility, innovation, and maintaining the integrity of open source software. Making these advancements in an enterprise-grade Debian-derivative ensures that users benefit from a freely available Linux distribution.
By emphasizing ease of adoption alongside open source principles, eLxr aims to attract a broad range of users and contributors who value both innovation and community-driven development, fostering collaboration and transparency and the spread of new technologies.
The eLxr project is establishing a robust strategy for building on Debian’s ecosystem while also contributing back to it. As “Debian citizens,” eLxr developers contribute innovations and improvements upstream, actively participating in the community’s development activities. This approach not only enhances eLxr’s own distribution but also strengthens Debian by expanding its feature set and improving its overall quality.
The ability to release technologies at various stages of Debian’s development lifecycle and to introduce innovative new content not yet available in Debian highlights eLxr’s agility and responsiveness to emerging needs. Moreover, the commitment to sustainability ensures that contributions made by eLxr members remain accessible and beneficial to the broader Debian community over the long term.
A Unified Approach for Intelligent Deployments at the Edge
Today’s technology demands agility and responsiveness to rapidly changing requirements and operational challenges. By integrating cutting-edge technologies from open source communities and technology companies into its distribution, the eLxr project enables users to leverage innovations that may not yet be widely distributed or easily accessible through other channels.
Over the past decade, “build from source” solutions such as the Yocto Project and Buildroot have been favored for enabling various use cases at the intelligent edge. Traditional methods of building embedded Linux devices, which offer extensive customizations and the ability to generate a software development kit (SDK) providing a cross-development toolchain, have allowed developers to maximize the performance of resource-constrained devices while offloading build tasks to more powerful machines.
However, the increasing connectivity demands of edge deployments, including over-the-air (OTA) updates and new paradigms such as data aggregation, edge processing, predictive maintenance, and various machine learning features, necessitate a different architectural approach for both near-edge devices and servers. This results in using multiple distributions, creating a heterogeneous landscape of operating environments and increasing complexity and cost. Such complexities impose significant burdens — the need to monitor for CVEs and bugs, use of additional SBOMs and diverse update cadences, and many other challenges.
To address these issues and provide a more homogeneous solution, eLxr has been introduced as a Debian derivative, using modern tools to ease maintenance and combining traditional installers with a new set of distro-to-order tools that allow a single distribution to better serve both edge and server deployments. Coupled with a unified tech stack, this initiative offers a strategic advantage for enterprises aiming to optimize their edge deployments, create a seamless operating environment across devices, and set the foundation for future innovations in edge-to-cloud deployments. Existing enterprise solutions move more slowly than users need in order to innovate and adopt new technologies quickly.
As a distribution partner for open source communities and technology companies that have developed innovative solutions but lack the means to widely distribute them, the eLxr project bridges a crucial technology delivery gap.
Why Debian?
The eLxr project chose Debian for two primary reasons: Debian’s staunch defense and adherence to the open source philosophy for more than 30 years and its embrace of derivative efforts.
Debian encourages the creation of new distributions and derivatives, such as eLxr, that help expand its reach into various use cases. Debian sees sharing experiences with derivatives as a way to expand the community, improve the code for the existing users, and make Debian suitable for a more diverse audience.
Get Involved with eLxr
Wind River® contributed the initial eLxr release as the first step in a journey to grow a community committed to the timely distribution and delivery of new, ready technology in a guaranteed open source distribution.
The eLxr project believes that our approach promotes accessibility and flexibility for anyone who wishes to join, allowing them to close the gap between having technology ready and delivering it in a distribution, or simply to take advantage of gaps that have already been closed.
No matter whether you want to use or contribute to eLxr, you are part of the eLxr project, and we encourage and welcome your participation. If you are developing new technologies, let’s get them validated and integrated to drive innovation. If you plan to use eLxr, let the rest of the community know about your experience, your use cases, the problems you solved, and how we can further improve. We look forward to seeing your contributions!

The post Introducing eLxr: Delivering Enterprise-Grade Linux for Edge-to-Cloud Deployments appeared first on SD Times.

Q&A: Solving the issue of stale feature flags https://sdtimes.com/test/qa-solving-the-issue-of-stale-feature-flags/ Thu, 25 Jul 2024 20:15:07 +0000 https://sdtimes.com/?p=55272 As we saw last week with what happened as a result of a bad update from CrowdStrike, it’s more clear than ever that companies releasing software need a way to roll back updates if things go wrong.  In the most recent episode of our podcast, What the Dev?, we spoke with Konrad Niemiec, founder and … continue reading

As we saw last week with what happened as a result of a bad update from CrowdStrike, it’s more clear than ever that companies releasing software need a way to roll back updates if things go wrong. 

In the most recent episode of our podcast, What the Dev?, we spoke with Konrad Niemiec, founder and CEO of the feature flagging tool, Lekko, to talk about the importance of adding feature flags to your code, but also what can go wrong if flags aren’t properly maintained.

Here is an edited and abridged version of that conversation:

David Rubinstein, editor-in-chief of SD Times: For years we’ve been talking about feature flagging in the context of code experimentation, where you can release to a small cohort of people. And if they like it, you can spread it out to more people, or you can roll it back without really doing any damage if it doesn’t work the way you thought it would. What’s your take on the whole feature flag situation?

Konrad Niemiec, founder and CEO of Lekko: Feature flagging is now considered the mainstream way of releasing software features. So it’s definitely a practice that we want people to continue doing and continue evangelizing.  

When I was at Uber we used a dynamic configuration tool called Flipper, and I left Uber for a smaller startup called Sisu, where we used one of the leading feature flagging tools on the market. And when I used that, although it let us feature flag and it did solve a bunch of problems for us, we encountered different issues that resulted in risk and complexity being added to our system.

So we ended up having a bunch of stale flags littered around our codebase, and things we needed to keep around because the business needed them. And so we ended up in a situation where code became very difficult to maintain, and it was very hard to keep things clean. And we just ended up causing issues left and right.

DR: What do you mean by a stale flag?

KN: An implementation of a feature flag often looks like an if statement in the code. It’ll say if the feature flag is enabled, do one thing; otherwise, do the old version of the code. This is how it looks when you’re actually adding it as an engineer. And what a stale flag means is that the flag will be all the way on. So you’ll have fully rolled it out, but you’re leaving that ‘else’ code path in there. So you basically have some code that’s pretty much never going to get run, but it’s still sitting in your binaries. And it almost turns into this zombie. We like to call them zombie flags, because they pop up when you least expect them. You think they’re dead, but they come back to life.

And this often happens in startups that are trying to move fast. You want to get features out as soon as possible, so you don’t have time to do a flag cleanup and go through and categorize to see if you should remove all this stuff from the code. And the flags end up accumulating and potentially causing issues because of these stale code paths.
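As a minimal sketch of what Niemiec describes, with a hypothetical in-memory flag store standing in for a real feature flagging SDK (this is not Lekko’s actual API), a stale flag is an if/else whose else branch never runs once the rollout hits 100%:

```rust
use std::collections::HashMap;

// Hypothetical in-memory flag store standing in for a real feature-flag SDK.
struct FlagClient {
    flags: HashMap<String, bool>,
}

impl FlagClient {
    fn is_enabled(&self, name: &str) -> bool {
        *self.flags.get(name).unwrap_or(&false)
    }
}

fn checkout_total(flags: &FlagClient, subtotal: u32) -> u32 {
    if flags.is_enabled("new_pricing") {
        // New path, fully rolled out months ago.
        subtotal + subtotal / 10
    } else {
        // "Zombie" path: it never runs in production, but it still ships in the
        // binary and comes back to life if someone flips the flag by mistake.
        subtotal + subtotal / 5
    }
}

fn main() {
    let flags = FlagClient {
        flags: HashMap::from([("new_pricing".to_string(), true)]),
    };
    println!("total = {}", checkout_total(&flags, 100));
}
```

Deleting the else branch and the flag check once the rollout is permanent is exactly the cleanup step that tends to get skipped.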

DR: What kind of issues?

KN: So an easy example is you have some sort of untested code based on a combination of feature flags. Let’s say you have two feature flags that are in a similar part of the code base, so there are now four different paths. And if one of them hasn’t been executed in a while, odds are there’s a bug. So one thing that happened at Sisu was that one of our largest customers encountered an issue when we mistakenly turned off the wrong flag. We thought we were kind of rolling back a new feature for them, but we jumped into a stale code path, and we ended up causing a big issue for that customer.

DR: Is that something that artificial intelligence could take on as a way to go through the code and suggest removing these zombie flags?

KN: With current tools, it is a very manual process. You’re expected to just go through and clean things up yourself. And this is exactly what we’re seeing. We think that generative AI has a big role to play here. Right now we’re starting off with simple heuristic approaches as well as some generative AI approaches to figure out hey, what are some really complicated code paths here? Can we flag these and potentially bring these stale code paths down significantly? Can we define allowable configurations? 

Something we see as a big difference between dynamic configuration and feature flagging itself is that you can combine different flags or different pieces of dynamic behavior in the code together as one defined configuration. And that way, you can reduce the number of possible options out there, and different code paths that you have to worry about. And we think that AI has a huge place in improving safety and reducing the risk of using this kind of tooling.
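A rough sketch of that idea, using hypothetical names rather than Lekko’s API: instead of exposing two independent booleans, which yield four combinations that all have to be tested, the allowable configurations are enumerated up front and anything unsupported collapses into a known state.

```rust
// Two independent flags give 2^2 code paths, not all of them ever exercised.
struct RawFlags {
    new_pricing: bool,
    new_checkout_ui: bool,
}

// One defined configuration: only the combinations the team actually supports.
#[derive(Debug)]
enum CheckoutConfig {
    Legacy,
    NewPricingOnly,
    FullRollout, // new pricing and new UI together
}

fn resolve(raw: &RawFlags) -> CheckoutConfig {
    match (raw.new_pricing, raw.new_checkout_ui) {
        (false, false) => CheckoutConfig::Legacy,
        (true, false) => CheckoutConfig::NewPricingOnly,
        (true, true) => CheckoutConfig::FullRollout,
        // The fourth combination is unsupported, so it is mapped to a known
        // state instead of becoming an untested code path.
        (false, true) => CheckoutConfig::Legacy,
    }
}

fn main() {
    let raw = RawFlags { new_pricing: true, new_checkout_ui: false };
    println!("effective config: {:?}", resolve(&raw));
}
```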

DR: How widely adopted is the use of feature flags at this point?

KN: We think that especially amongst mid-market to large tech companies, it’s probably a majority of companies that are currently using feature flagging in some capacity. You do find a significant portion of companies building their own. Often engineers will take it into their own hands and build a system. But often, when you grow to some level of complexity, you quickly realize there’s a lot involved in making the system both scalable and also work in a variety of different use cases. And there are lots of problems that end up coming up as a result of this. So we think it’s a good portion of companies, but they may not all be using third-party feature flagging tools. Some companies even go through the whole lifecycle: they start off with a feature flagging tool, they rip it out, then they spend significant effort building similar tooling to what Google, Uber, and Facebook have, these dynamic configuration tools.


You may also like…

Lessons learned from CrowdStrike outages on releasing software updates

Q&A on the Rust Foundation’s new Safety-Critical Rust Consortium

The post Q&A: Solving the issue of stale feature flags appeared first on SD Times.

Q&A on the Rust Foundation’s new Safety-Critical Rust Consortium https://sdtimes.com/softwaredev/qa-on-the-rust-foundations-new-safety-critical-rust-consortium/ Wed, 17 Jul 2024 19:47:39 +0000 https://sdtimes.com/?p=55206 Last month, the Rust Foundation announced the Safety-Critical Rust Consortium, a new group dedicated to advancing the use of Rust in safety-critical software, which is software that can severely impact human life or cause damage if it fails.  To talk more about the new group, Bec Rumbul, executive director and CEO of the Rust Foundation, … continue reading

Last month, the Rust Foundation announced the Safety-Critical Rust Consortium, a new group dedicated to advancing the use of Rust in safety-critical software, which is software that can severely impact human life or cause damage if it fails. 

To talk more about the new group, Bec Rumbul, executive director and CEO of the Rust Foundation, joined us on the most recent episode of our podcast, What the Dev? 

Here is an edited and abridged version of that conversation:

Jenna Barron, news editor of SD Times: Can you tell me about this new consortium and why it was created?

Bec Rumbul: Rust is a relatively young programming language compared to a lot of them out there, but it’s a language that has enormous potential; it has really great memory safety features and performance, and an awful lot of great stuff to recommend it. So there’s a lot of people out there that are kind of Rust-curious at the moment. They’re looking at it as a language that can smooth off some of those rough edges or plug some of those potential vulnerabilities that you might see in other languages, or indeed, improve performance.

Memory safety is obviously a huge one. And it’s something that governments around the world as well as the tech giants are getting really serious about, especially because of supply chain security. 

So we wanted to make sure as the Rust Foundation that we’re advocating for the language, that we’re providing whatever we possibly can to all of those people in the world that are interested in using the tools, the libraries, the support, whatever they need in order to be able to use Rust successfully in their chosen businesses. Safety critical is a group of industries that have really seen the potential of Rust, and those are industries that have gotten really interested very early on. We have members from those industries, and what we’ve heard from them is that they really need a bit extra in order to use Rust successfully in their businesses and in their products. 

And we felt this was a really good place for the foundation to provide some kind of support and facilitation, to try and plug whatever gaps might exist or to improve and iterate on what’s already there so that people can take this and run with it and have confidence in it. 

So yeah, after quite a lot of these conversations over the last couple of years, we’ve decided to try and formulate that a little bit more, try and provide a safe space for people in industry to sit around a table and talk frankly about what they need, where they feel that there are gaps in the system, or identify things that they would like to work on. 

So the consortium was formed by some key members, like Ferrous Systems, who have been very, very early adopters of Rust; Arm, who were obviously in the safety critical space; Woven by Toyota, who were really very interested in Rust going forward, and various other organizations. We spoke to all of them, and they were really excited to have this kind of space to come to the table to talk about these issues and find a common pathway forward.

JB: What are some of the long-term goals of the consortium?

BR: We want to close the gap. We want to make sure that we can provide a useful pathway for development, hopefully moving towards standards, hopefully moving towards common requirements, and hopefully ensuring that the projects and their maintainers are not overwhelmed by lots of individual companies or individuals out there trying to kind of do lots of things. Having a unified approach to this will hopefully also ease potential pressure in the long term on those maintainers upstream. 

We’re not going to be competing or trying to make SAE obsolete, for instance. What we’re trying to do is provide a much easier and more unified approach to what the safety-critical industry needs.

JB: How can people get involved with this? 

BR: Membership is by agreement with the consortium members. We don’t have really strict rules, you know, this is supposed to be a kind of Rainbow Coalition. So yes, obviously, companies that are looking to develop in the safety-critical space, but also, we’re bringing people to the table with legal backgrounds or other kinds of business function backgrounds. So we’re not trying to restrict membership too much. Because we want that diversity of voices around the table.

Potentially, there might come a point where there are too many people, and we’ll have to figure that out. But certainly in this initial stage, I think the hope is that lots of people will turn up and figure out, “Okay, I am interested in this, and I have the ability to contribute to it.” 

We’re not looking at this as something where there’s just going to be a briefing call once a month, and people turn up and listen, and then leave again. We’re very much hoping this is going to be a collaborative working process, so people that really want to contribute are going to be very much appreciated around the table. 

If anyone is interested in joining, we’re very happy for people to contact us at the Rust Foundation. My colleague, our head of technology, Joel Marcey, is leading this, and he has already had a phenomenally positive response since the release went out. I think we’ve got like 30 or 40 organizations already that have come and said, “Hey, this sounds cool. We’d like to get involved.” So yeah, the door is very much open and it’s going to be in the spirit of open source collaboration. So we would love to see people who want to come and have opinions and contribute in one way or another.

JB: Why should developers who are building these safety critical systems look at Rust versus other programming languages?

BR: Obviously, I am the executive director and CEO of the Rust Foundation, so it’s my job to push Rust, but I know I do not live in a world where I can say to people, “just stop using everything else and rewrite it in Rust, because Rust is the best.” That’s not the kind of foundation we want to be. We want to work with everyone. 

And we believe that different programming languages are right for different things. We also recognize that the world is not going to change overnight and that we have to operate with the existing landscape. A lot of that existing landscape is written in C++, for instance, and whilst that has been a very solid and much-loved language for many years, it does have some vulnerability issues. It’s not a memory-safe language, whereas Rust is, so Rust stops some of those security vulnerabilities that you’ll see in other languages. So that’s one reason that people are becoming attracted to Rust.

I think the other side of it is that it’s very fast, it’s a very performant language. It doesn’t have a garbage collector, so there’s not that delay that you get with some garbage-collected languages.

And I think with Rust, there’s an opportunity for interoperability as well. Another initiative that we have going at the moment, which we’re just at the beginning of, is an interop initiative with C++ and Rust. So, you know, acknowledging that no one is going out to rewrite all of their C++ code. We’re going to have C++ code around for way longer than I’m going to be alive. But we can use some Rust to make some of that code safer, with wrappers and various other tools.

Because, you know, while it’s easy to talk at this level about how safety is important, security is important, we’re not doing it to bug developers or get them to learn another language. We’re ultimately doing all of this because the normal person on the street doesn’t want their bank hacked. They don’t want their car to go haywire when they’re doing 70 down the motorway. So, you know, keeping that in mind, we’re pushing Rust because we believe in some cases it’s the best tool for the job, in terms of safety and security.
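As a hedged sketch of the wrapper idea Rumbul mentions (the legacy_checksum routine and its signature are assumptions for illustration, and real C++ interop would typically go through a C-compatible interface or a tool such as the cxx crate), the unsafe call is confined to one small, auditable spot behind a safe Rust API:

```rust
use std::os::raw::{c_uchar, c_uint};

// Hypothetical existing C routine, compiled and linked separately.
extern "C" {
    fn legacy_checksum(data: *const c_uchar, len: c_uint) -> c_uint;
}

/// Safe wrapper: the only `unsafe` block lives here, and the slice guarantees
/// that the pointer/length pair handed to the C side is always valid.
pub fn checksum(data: &[u8]) -> u32 {
    unsafe { legacy_checksum(data.as_ptr(), data.len() as c_uint) }
}
```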


You may also like…

Q&A: Evaluating the ROI of AI implementation

Q&A: Why over half of developers are experiencing burnout

The post Q&A on the Rust Foundation’s new Safety-Critical Rust Consortium appeared first on SD Times.
