Podcast Archives - SD Times
https://sdtimes.com/category/podcast/

Q&A: Why the Developer Relations Foundation is forming
https://sdtimes.com/softwaredev/qa-why-the-developer-relations-foundation-is-forming/
Thu, 17 Oct 2024

Developer relations (DevRel) is an important role within the development space, acting as a liaison between a company making development tools and the developers actually using those tools.

Recently, the Linux Foundation announced its intent to form the Developer Relations Foundation to support people in that career.

On the most recent episode of our podcast, we interviewed Stacey Kruczek, director of developer relations at Aerospike and steering committee member of the Developer Relations Foundation, to learn more.

Here is an edited and abridged version of that conversation.

How do you define the role of developer relations?

Developer relations is really the practice of elevating developers and their world. We’re all about helping the developers, helping them solve their problems. But more importantly, part of my role and the importance of it is also to be the voice of the community to the company. So it’s important for us to be able to recognize that the developers are the influencers of much of our technology business today, and so my role as a DevRel lead is to help elevate them and share in their pains, glories and challenges, and help them solve those issues, if they have them.

I always like to sort of describe myself as the PR person for developers, if you will, promoting them, promoting the importance of them, their value to the organization and their value to the community.

Why did the Linux Foundation create the Developer Relations Foundation? Why did they see a need for that?

The importance of developer relations is really to add business value. When you think about the journey of a developer, when they first experience your company, your products, your tools, and solutions, they're really starting their discovery period. At first, they're discovering what your tools are and the advantages of them, and as a DevRel leader, it's basically my job to help them along that journey. I help them down the path from discovery to engaging with us to sharing their feedback.

So it’s really about the developer being the influencer. And I’ve seen a lot of this personally at Aerospike, and we’re having a lot of conversations with developers and architects about evaluating our tools, our products, providing us feedback, and even coming in and evaluating our community.

And more often than not, there's a misunderstanding of what DevRel is. Where each company places its DevRel team or practice really depends on the needs of the company, and it can be customized to them, but from a developer relations standpoint, it's still all one and the same.

And the major benefit of forming this foundation is to create some synergy and some common best practices and common terms that we all can share as a wider global community, to elevate the practice and practitioners of DevRel. Another reason we did this is to promote participatory governance, which means that no single company can monopolize the project or dictate its direction.

So our focus is on that community-driven governance. We're taking in and absorbing the feedback of all of the DevRel practitioners across the globe. We're ensuring that all of their contributions are reviewed and judged on their merit and expertise, and moreover, we're creating a trusted, credible, and expert resource for all of those professionals in the field. It will help promote best practices, what it means for businesses, and how we can add value.

What can DevRels do to elevate the development profession?

So really we have to look first at the challenges this practice has faced, and I just want to tie in my own personal experience with this. I've been in this for over a decade. When I initially came in, I came in as a technical marketer on a developer relations team, and I had the experience of sitting with five experienced engineers on the Android and Google development platform. And we actually sat in a sea of engineers. So I always like to describe my role as being the lone marketer in the sea of engineers, and they kind of kept me afloat.

The beauty of that relationship, and what it really gave me personally, was inspiration. There was a lot of collaboration. I became their voice to the broader customer, partner, and ISV community, but more importantly to all the developers; I was helping them elevate their expertise.

I think at that point, DevRel was sort of a newer thing. It had been around for a while, but people really didn't understand it. I was thankful during my time at Zebra Technologies that we had that experience. It ignited my passion for helping developers in DevRel for the long term, and I soon moved into creating the first developer marketing strategy for that company. Then I really wanted to take on more as an individual in terms of helping elevate DevRel as a practice, because dev marketing is just one component of it.

When we look at it long term, there's a reason for having a foundation. There's actually a DevRel survey that just came out, and a large portion of the developer advocates who responded stated that they felt they needed a professional practice in one space, one community where they could go to share their experiences but also learn from others. Because a lot of what I've learned in DevRel, I've learned from peers in the industry, and that's been so crucial for my learning and my development. So it's not only the engineers, it's also the other DevRel professionals. And when we were dealing with Covid for so long, I was fortunate to be included in many groups of DevRel advocates and professionals that would hop onto a Slack or a Discord channel, and we'd just start talking about the challenges of DevRel and how we were all dealing with them, especially during that challenging time.

What emerged from that was the thought that we really need a foundation. We need an association of some sort that’s inclusive, and it includes our wider developer relations community and allows them a voice to be heard. 

We all bring our own personal expertise into it, but what we also bring is the ability to share with each other and collaborate. And that’s the beauty of DevRel. That’s what I love so much about it.

What is the ultimate goal, beyond bringing the community of DevRel together and having people share and exchange ideas? 

Under the broader developer relations umbrella, as I've experienced it, we're talking about community at the very core. You need to have a community in order to grow your business. And so that's where it really starts. The various branches of that are related to developer experience. What kind of experience are they having? Are your tech docs easy for them to find? From a developer marketing standpoint, are we communicating the right messages? Are they technical? Are they authentic? And then we talk about developer success and education. We want to educate them just as much as they educate us. We want to make sure that we're providing them the right tools and setting them up for success.

And so these various components under the DevRel umbrella become so important. This foundation will essentially help define some of these areas and provide more clarity, but because it's open to the community and it's a community-driven project, we're going to get varying viewpoints and opinions, and it's going to create this awesome catalog of knowledge. And by partnering with the Linux Foundation, we gain global credibility and a robust governance structure that supports long-term sustainability.

Now, we're at the intent-to-form stage of the DevRel Foundation, so we're still in a period of exploration and learning, and we do have a mission statement that we've created in collaboration with the community and shared. Everything is open and out there. We have a wiki page, we have a GitHub, and we welcome anybody to participate and communicate with us.

We have weekly community calls across the globe, and many developer relations professionals are joining us on those calls and sharing their experience and their knowledge. We assign topics for the week and review our proposals for how this will roll out, and the idea is that as a steering committee, we're there to help guide the ship through the sea and keep everyone on target, if you will.

The project itself, the foundation, is really going to rely on contributions from the community: individuals and supporters of the organization who will provide expertise, guidance, and content.

The post Q&A: Why the Developer Relations Foundation is forming appeared first on SD Times.

The state of open source in the Global South
https://sdtimes.com/softwaredev/the-state-of-open-source-in-the-global-south/
Thu, 10 Oct 2024

The Eclipse Foundation recently conducted a report on open source in the Global South, the region of the world which the United Nations defines as “the developing and emerging industrial economies across Africa, Asia, the Caribbean, Latin America and Oceania.”

To learn about the findings of the report and what they mean, we spoke with Thabang Mashologu, VP of community at the Eclipse Foundation, on the most recent episode of our podcast, What the Dev?

Here is an edited and abridged version of that conversation: 

How did this survey come about? What made you want to study the impact of open source in these areas?

First of all, since it's the global majority and that's where the population growth is coming from, we consider it a real key part of the sustainability of the open source ecosystem. And frankly, we hear a lot from developers in the Global North, and we just haven't seen a lot in the way of data and actual insights on the perspectives and challenges of developers in the Global South.

The sustainability of open source really hinges on having a strong pipeline of contributors and maintainers, so we’ve been asking ourselves at the Eclipse Foundation a number of really big questions, namely, where are the next generation of developers? Who are they? What’s their relationship with open source, and what challenges are they facing? 

And we started by looking at our own contributors and committers and research, including GitHub’s Octoverse report, and we noticed something that was interesting. The fastest growing developer communities are almost exclusively in the Global South, but what struck us more was that there wasn’t much research coming from these areas, and that’s why we decided to dive deeper. We wanted to understand the work they were doing, their perspectives, and also we had a hunch that the impact of open source was being felt far beyond just software development and also having broader socioeconomic effects.

Getting into the findings, 77% of respondents said they used open source software, 37% contribute to open source projects, 27% maintain them, and 22% create new projects. What has been the positive impact that these open source developers have been seeing?

The positive impact of these developers is something that we were pleasantly surprised by. Three things in particular stood out for us in terms of that impact and the potential of these developers.

First of all, they're not just users of open source. They're actively shaping its future. The fact that 28% are maintainers and a quarter of them are creating new projects really means that they're increasingly driving the agenda for these technologies that the rest of the world relies on. I think by now, pretty much everyone in the tech industry accepts that diversity is a good thing. I hope these developers are bringing their fresh perspectives and approaches and contributions to the communities that they're part of.

The second really big idea that we uncovered in terms of the positive impacts is that these developers are leveraging open source for career growth, very much like the rest of the world, and they’re using open source to acquire new skills, to learn new technologies and techniques and approaches to problem solving, and they’re also seeing that translate into better paying jobs and really seeing the financial benefits related to that. 

The third thing is that they’re also leveraging their involvement in open source to drive positive change more broadly in their communities. They identified three areas where they see that impact happening most greatly, and that’s improved educational opportunities for young people, for women, for other underrepresented groups in tech; the development of a stronger workforce overall in terms of software developers and those technology related functional areas; and  increased entrepreneurship, so business creation and economic contributions related to that innovation that’s based in software.

What are the ways in which they’re leveraging open source to advance their careers? And does it differ from how, for instance, developers in the US use open source to do that?

What we observed in our research is that there's this democratizing effect that you see with open source because it's permissionless. There are no gatekeepers between someone in Johannesburg, South Africa, and a project that they want to use. It's really removing and lowering barriers to access, and that's huge.

Another thing is the fact that they’re able to leverage open source to build their skills, to advance their learning in a way that doesn’t require them to go to college or to university. That’s also really powerful, and it also extends to women. 

Something that we heard consistently is that there are many countries in which women don't have the same access to educational opportunities as they do in the West, and open source offers a very convenient and easily accessible way for those folks to get the skills they need to improve their lives. It also helps them collaborate with people from around the world. So you see this effect where technology, and particularly open source technology, really enables the borders of the world to come down, and people are able to relate to each other as community members.

You touched on the gender inequality part of this, the fact that women are able to advance their careers using open source, and the report pointed out that it's also positive because open source solutions are being created that can address gender-specific issues, like apps for women's healthcare or apps that provide more educational resources. Can you share a little bit more about how open source is having that positive impact there, and also how policymakers can continue supporting women in open source in those countries?

I think one of the things that we did early on as we were developing the survey questionnaire is we talked to a number of experts, not only technology experts, but policymakers and folks who work at the UN, to get a better understanding of the Sustainable Development Goals, the SDGs, that factor into a lot of these larger questions around policy. They gave us some very helpful and concrete examples that we were able to test out in our quantitative research. 

Specifically, they said two things. They said that women are able to find mentors and role models and allies in these global open source communities. Again, it's this idea of being able to break through and go beyond the borders of their countries and regions, and that's the kind of thing that helps them build confidence. It reduces the sense of isolation and creates those new career opportunities through networking, especially in areas and countries where women are underrepresented in tech.

The second big idea was that women can contribute to open source projects that address the issues they care about. You mentioned healthcare and other applications that could be particularly targeted toward women. The idea that they can use their creativity, ingenuity, and passion to build solutions that work for everyone, not just a limited or small group of people, is really quite powerful, and that level of advocacy and, let's say, focused enablement and participation is the kind of thing that really helps drive gender equality.

We also heard a lot about how open source was empowering women and girls by offering them opportunities to better learn and contribute and lead in the tech industry. 

I'll just share a small anecdote. At the Eclipse Foundation, we've actually partnered with the Girls Coding Academy in Lesotho — that's the country that I'm from in southern Africa — and we're working on an initiative with them to teach coding skills to about 200 teachers and girls in that country using the Eclipse IDE. Now that's just a small example, and we're looking to do more along those lines, but hopefully it illustrates the fact that there's a real connection between software and how that software impacts people in their real day-to-day lives, giving them opportunity and breaking down barriers that might otherwise exist.

Moving beyond the positive social impact, another element of this report was that a majority think open source is going to influence their country's economic growth. Do you have any insights into why that is?

I think open source can help these countries drive economic growth in a few really important ways. One that we've already touched on is skill development and training. I think that's a real way to help equalize and bridge the digital divide that exists between the Global South and North. The fact is that with open source, someone sitting in Palo Alto, California now has the same access to technology as someone sitting in Johannesburg, South Africa, or Lagos, Nigeria. And that's a really important shift in the world, frankly, and it allows folks to unlock the often underused potential of a huge swath of the world. As we were talking about, this is the global majority, so getting those people into the spheres of technology development and innovation is something that's going to be beneficial, not only to these countries, but to the rest of the world.

The other big impact is the fact that open source enables startups and businesses to leverage technology to create opportunity. What we found is that developers in the Global South are having significant impact across a variety of industries, in existing businesses, in financial services, telecom, and healthcare, and increasingly, they're creating new ventures.

Over the last several years, we’ve seen startups be funded in Latin America and in Africa and in Asia from the Global North. We’re seeing these new ventures attract startup capital and interest and really advance the global technology scene from these countries.  So it’s not just a matter of outsourcing anymore and using these talented folks as a source of cheap labor, you’re actually seeing the developers and engineers from these regions make a mark on the international economy.

I know we’ve covered a lot of the highlights of the report, but were there any other takeaways from the report that developers might find interesting that we didn’t touch on?

I think maybe what I'd like to underline is that often, when we think of the Global South, we think: how do we help these people? How do we assist them? And maybe the biggest takeaway for me was that this research shifts the narrative, so it's not about what the rest of the world can do for the Global South. Particularly around the sustainability of open source, many of the folks who created and maintained these core technologies and infrastructure for many years are aging out. So I think the narrative and discussion has shifted to: how can the Global South help open source and help the tech industry?

I think the fact is that with the leveling of the playing field that open source provides, and the fact that you’re seeing a lot of creative solutions and technologies come out of these countries, I would encourage developers in the Global North to look to their peers in the South as potential contributors, maintainers and leaders, and they should welcome and encourage and mentor them and ensure that they feel welcomed. That kind of engagement can help distribute the workload and reduce the burnout among maintainers today, and also inject new innovations into the global ecosystem.

The post The state of open source in the Global South appeared first on SD Times.

Podcast: The importance of buildpacks in developing cloud native applications
https://sdtimes.com/containers/podcast-the-importance-of-buildpacks-in-developing-cloud-native-applications/
Thu, 26 Sep 2024

Buildpacks help ease the burden on developers by taking source code and turning it into fully functional apps.

To learn more about this technology, we interviewed Ram Iyengar, chief evangelist of the Cloud Foundry Foundation, on the most recent episode of our podcast, What the Dev?

Here is an edited and abridged version of that conversation:

How do buildpacks — and the Paketo Buildpacks in particular — help developers deploy cloud native applications?

I think buildpacks have been very important in enabling a lot of applications to get pushed to production and containerized without having to deal with a lot of the overhead that usually comes with the process of containerization. What can I say that we haven't said already in the webinar and in the article? Well, there's a community angle to this. Buildpacks is headed towards graduation within the CNCF, and we expect that it will graduate in the next six to 12 months. If you want to show your support as a community, I'd highly welcome people giving the project a star, opening important issues, trying it out, seeing how you can consume it, and giving us feedback about how it can be improved.

One thing that I wanted to get into a little bit is Korifi, which is your platform for creating and deploying Kubernetes applications. Can you talk a little bit about Korifi and how it ties in with buildpacks?

Absolutely. One of the main areas where we see a lot of buildpacks being consumed is when people are getting into the job of building platforms on Kubernetes. Now, the theme of any talk you see about Kubernetes these days, whether it's at KubeCon or one of the other events, is that it's extremely complex. It's been said so many times: there are memes, there are opinion pieces, there's all kinds of internet subculture about how complex Kubernetes can be.

The consequence of this complexity is that some teams and companies have started to come up with platforms where they say: you want to make use of Kubernetes? Well, install another substrate over Kubernetes and abstract a lot of the Kubernetes internals away from your developers. That resonates perfectly with what the Cloud Foundry messaging has been all these years. People wanted a first-class, self-service, multi-tenant experience over VMs, and they want that same kind of experience on Kubernetes today for slightly different reasons, but the ultimate aim is that developers need to be able to get to the velocity at which they're most productive. They need to be able to build fast and deploy faster and keep pushing applications out into production while offloading the other areas of concern: How do we scale this? How do we maintain load balancers? How do we configure networking and ingress?

All of these things should fall down into a platform. And so Korifi is what has emerged from the community for actually implementing that kind of behavior, and buildpacks fits perfectly well into this world. So by using buildpacks — and I think Korifi is like the numero uno consumer of buildpacks — they’ve actually built an experience to be able to deploy applications onto Kubernetes, irrespective of the language and family, and taking advantage of all of those buildpacks features.

I’m hearing a lot of conversation about the Cloud Foundry Foundation in general, that it’s kind of old, and perhaps Kubernetes is looking to displace what you guys are doing. So how would you respond to that? And what is the Cloud Foundry Foundation offering in the Kubernetes world? 

It's a two-pronged answer that I have. On the one hand, there is the technology side of things. On the other, there's a community and a human angle. Engineers want new tools, new infrastructure, and new ways to work. And so what has happened in the larger technology community is that a sufficiently adequate technology like Cloud Foundry suddenly found itself being relegated to legacy technology, the old way to do things, not modern enough in some cases. That's the human angle to it. So when the entire software development community learned of Kubernetes, they picked up on this new trend and wanted to ride the hype train, so to say. Kubernetes started to occupy a lot of the mind space, and now, as Gartner puts it quite well, we're past that peak of inflated expectations. We're now getting into productivity, and the entire community is yearning for a way to consume Kubernetes minus the complexity. They want a very convenient way to deploy applications on Kubernetes while not worrying about networking and load balancing and autoscalers and all of these other peripheral things that you have to attach to an application.

I think it's not really about developers just wanting new things. I think they want better tools and more efficient ways of doing their jobs, which frees them up to do more of the innovation that they like and not get bogged down with all of those infrastructure issues that now can be taken care of. So I think what you're saying is very important in terms of positioning Cloud Foundry as being useful and helpful for developers in terms of gaining efficiency and being able to work the way they want to work.

Well, yes, I agree in principle, which is why I'm saying Cloud Foundry and some others like Heroku all perfected this experience of what a developer's workflow should be. Now, developers are happy to adopt new ways to work, but the problem is, when you're on the path to gaining that kind of efficiency and velocity, you often unintentionally build a lot of opinionated workflows around yourself. All developers have a very specific way in which they create deployments and create these immutable artifacts; they build themselves a fort where they'd like to be king of the castle, lord of the manor, and anything that deviates from that mental image assails it and raises apprehensions. And at the moment, Kubernetes seems to offer one of the best ways to build and package and deploy an app, given that it can accomplish so many different things.

Now, if you take a point by point comparison between what Cloud Foundry was capable of in, let’s say, 2017 versus what Kubernetes is capable of right now, it will be almost the same. So in terms of feature parity, we are now at a point, and this might be very controversial to say on a public podcast, but in terms of feature parity, Cloud Foundry has always offered the kind of features that are available in the Kubernetes community right now. 

Now, of course, Kubernetes imagines applications being built and deployed in a slightly different way, but in terms of getting everything into containers, shipping them to a container orchestrator, providing the kind of reliability that applications need, and allowing sidecars and services and multi-tenancy, the two are broadly comparable.

I strongly believe that the Cloud Foundry offering was quite compelling even four or five years ago, while Kubernetes is still sort of navigating some fairly choppy waters in terms of multi-tenancy and services and things like that. But hey, as a community, they’re doing wonderful innovation. And yeah, I agree with you when I say engineers are always after the best way in which to, you know, gain that efficiency.

The post Podcast: The importance of buildpacks in developing cloud native applications appeared first on SD Times.

Podcast: How time series data is revolutionizing data management
https://sdtimes.com/data/podcast-how-time-series-data-is-revolutionizing-data-management/
Wed, 04 Sep 2024

The post Podcast: How time series data is revolutionizing data management appeared first on SD Times.

Time series data is an important component of making IoT devices like smart cars or medical equipment work properly, because it consists of measurements collected against time values.

To learn more about the crucial role time series data plays in today’s connected world, we invited Evan Kaplan, CEO of InfluxData, onto our podcast to talk about this topic.

Here is an edited and abridged version of that conversation:

What is time series data?

It's actually fairly easy to understand. It's basically the idea that you're collecting measurements or instrumentation data based on time values. The easiest way to think about it is sensors and sensor analytics. Sensors can measure pressure, volume, temperature, humidity, or light, and each reading is usually recorded as a time-based measurement, a timestamp, if you will, every 30 seconds or every minute or every nanosecond. The idea is that you're instrumenting systems at scale, and so you want to watch how they perform: one, to look for anomalies, and two, to train future AI models and things like that.

And so that instrumentation is done, typically, with a time series foundation. In years gone by it might have been done on a general database, but increasingly, because of the amount of data that’s coming through and the real-time performance requirements, specialty databases have been built. A specialized database to handle this sort of thing really changes the game for system architects building these sophisticated real-time systems.

So let’s say you have a sensor in a medical device, and it’s just throwing data off, as you said, rapidly. Now, is it collecting all of it, or is it just flagging when an anomaly comes along?

It’s both about data in motion and data at rest. So it’s collecting the data, and there are some applications that we support that are billions of points per second — think hundreds or thousands of sensors reading every 100 milliseconds. And we’re looking at the data as it’s being written, and it’s available for being queried almost instantly, with almost zero delay. But it’s a database, so it stores the data, it holds the data, and it’s capable of long-term analytics on the same data.

So storage, is that a big issue? If all this data is being thrown off, and if there are no anomalies, you could be collecting hours of data in which nothing has changed?

If you’re getting data — and some regulated industries require that you keep this data around for a really long period of time — it’s really important that you’re skillful at compressing it. It’s also really important that you’re capable of delivering it in an object storage format, which is not easy for a performance-based system, right? And it’s also really important that you be able to downsample it. Downsampling means we’re taking measurements every 10 milliseconds, but every 20 minutes we want to summarize that, to look for the signal that was in that window. We downsample it and evict a lot of data and just keep the summary data. So you have to be very good at that kind of stuff. Most databases are not good at eviction or downsampling, so it’s a really specific set of skills that makes these systems highly useful — not just ours, but our competitors’ too.
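The downsampling Kaplan describes, summarizing high-resolution measurements into coarser windows and then evicting the raw points, can be sketched like this (a toy illustration, not InfluxDB's actual implementation):

```python
from statistics import mean

def downsample(points, window):
    """Group (timestamp, value) pairs into fixed time windows and keep
    only a summary (here the mean) per window; the raw points can then
    be evicted."""
    buckets = {}
    for ts, val in points:
        # Key each point by the start of the window it falls into
        buckets.setdefault(ts - ts % window, []).append(val)
    return {start: mean(vals) for start, vals in sorted(buckets.items())}

raw = [(0, 1.0), (5, 3.0), (10, 5.0), (15, 7.0)]  # a reading every 5 s
summary = downsample(raw, window=10)              # one summary point per 10 s
# summary == {0: 2.0, 10: 6.0}
```

Real systems keep richer summaries (min, max, count) and do this continuously, but the shape of the operation is the same: many raw points in, one summary point per window out.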

We were talking about edge devices and now artificial intelligence coming into the picture. So how does time series data augment those systems? Benefit from those advances? Or how can they help move things along even further?

I think it’s pretty darn fundamental. The concept of time series data has been around for a long time. So if you built a system 30 years ago, it’s likely you built it on Oracle or Informix or IBM Db2. The canonical example is financial Wall Street data, where you know how stocks are trading one minute to the next, one second to the next. So it’s been around for a really long time. But what’s new and different about the space is we’re sensifying the physical world at an incredibly fast pace. You mentioned medical devices, but smart cities, public transportation, your cars, your home, your industrial factories — everything’s getting sensored. I know that’s not a real word, but it’s easy to understand.

And so sensors speak time series. That’s their lingua franca. They speak pressure, volume, humidity, temperature, whatever you’re measuring over time. And it turns out, if you want to build a smarter system, an intelligent system, it has to start with sophisticated instrumentation. So I want to have a very good self-driving car, so I want to have a very, very high resolution picture of what that car is doing and what that environment is doing around the car at all times. So I can train a model with all the potential awareness that a human driver or better, might have in the future. In order to do that, I have to instrument. I then have to observe, and then have to re-instrument, and then I have to observe. I run that process of observing, correcting and re-instrumenting over and over again 4 billion times. 

So what are some of the things that we might look forward to in terms of use cases? You mentioned a few of them now with, you know, cities and cars and things like that. So what other areas are you seeing that this can also move into?

So first of all, where we’re really strong is energy, aerospace, financial trading, and network telemetry. Our largest customers are everybody from JPMorgan Chase to AT&T to Salesforce. So it’s a horizontal capability, that instrumentation capability.

I think what’s really important about our space, and becoming increasingly relevant, is the role that time series data plays in AI, and really the importance of understanding how systems behave. Essentially, what you’re trying to do with AI is you’re trying to say what happened to train your model and what will happen to get the answers from your model and to get your system to perform better. 

And so, “what happened?” is our lingua franca, that’s a fundamental thing we do, getting a very good picture of everything that’s happening around that sensor around that time, all that sort of stuff, collecting high resolution data and then feeding that to training models where people do sophisticated machine learning or robotics training models and then to take action based on that data. So without that instrumentation data, the AI stuff is basically without the foundational pieces, particularly the real world AI, not necessarily talking about the generative LLMs, but I’m talking about cars, robots, cities, factories, healthcare, that sort of stuff.

Podcast: Misconceptions around Agile in an AI world https://sdtimes.com/agile/podcast-misconceptions-around-agile-in-an-ai-world/ Wed, 28 Aug 2024 19:44:28 +0000

In this week’s episode of our podcast, What the Dev?, we spoke with David Ross, Agile evangelist for Miro, about some of the misconceptions people have about Agile today, and also how Agile has evolved since its early days.

Here is an edited and abridged version of that conversation:

Where do you see the change from people doing Agile and thinking they understood it, to now? What do they have to take into consideration for this new modern era?

I have been in software development for almost 20 years, and it’s been an interesting evolution for me to watch what Agile meant maybe 15-20 years ago versus how it’s perceived today. I just remember back in the early days of some of the very first Agile transformations that I was part of, it was very much all about following a process and having fealty to specific frameworks, be it Scrum or Kanban or whatever the case might be. And the closer you were to perfection by following those frameworks, the closer you were to God, as it were, like the more Agile you could claim to be. 

And what we forgot in all of that was, of course, that the Agile values and principles don’t prescribe any particular framework or approach. You’re supposed to put people and interactions over tools and processes. Well, if you are enforcing processes and you’re asking people to interact via tools, that kind of defeats a lot of the very fundamental sort of values of Agile right from the get go.

We also had problems in that a lot of people came into the industry who were not sufficiently trained or didn’t have enough experience in real, good Agile practices, and there was just a lot of bad Agile out there. You know, people who got a two-day certificate stamped and said, hey, I’m going to come in and now enforce Scrum processes on this team and coach them to higher levels of agility, and that’s not a recipe for success.

This has been true of DevOps and value stream management too, you know; these are just vague, non-prescriptive processes to follow. But nobody says you have to be doing X, Y and Z to be Agile, or be doing full DevOps, or be doing value stream management. It’s kind of like, well, we’re just going to leave it up to you: adopt what you want, throw out what you don’t want, we don’t mean to be prescriptive. But I think that has added to so much confusion in these markets over the years. So where we’re at now, and you’re talking about evolving into this modern era, what’s impacting it? Is it simply cloud-native computing? Is it AI? Is it all of the above?

I feel like Agile reached this sort of peak, where people were finding that they weren’t really getting the value that had been promised as a part of an Agile transformation. They weren’t seeing the value for their customers, they weren’t seeing their value for their teams. And, you know, the house of cards started to fall apart a little bit. And let’s be honest as well, one of the things about Agile was you had to have co-located teams, so that’s one sacred cow that got sacrificed during Covid, because co-located teams just wasn’t a possibility, and we’re not in that world anymore. 

And honestly, from where I sit, Agile was invented to solve a very specific, defined problem within software development, which was software development delivery and making sure that you weren’t constantly missing deadlines, and that you were delivering the right level of value. And I think a lot of those problems have kind of been solved, and Agile has kind of expanded beyond the boundaries of just software development as well. And people are kind of seeing that it’s not one size fits all. It needs to be more adaptive. It needs to be more pragmatic and less prescriptive. 

And so that’s kind of where we are right now. I feel like we’re in a period of retrenchment and reinvention of Agile. People are starting to see that prescriptive frameworks just aren’t going to work for them. And a lot of the customers that I talk to are evolving and coming up with their own sort of custom approach. They’re maybe using different vocabulary, different language, but they’re still doing things that are Agile; they just wouldn’t be recognizable to somebody from 10-15 years ago.

You bring in cloud-native computing, where now you have a whole lot of moving parts, where it isn’t just a monolithic code base going through, but you’re calling APIs, you’re using Kubernetes, containers. And all of these complexities kind of change the looks of things, so how do those things affect the way that people have been doing Agile, and what adjustments have they had to make for those types of things?

I think they’ve kind of stepped away from prescriptive frameworks, and many times they’re just adapting. This is really, honestly what they should have been doing all along. You should have not been prescriptive, you should have been able to adapt your processes, and even if it’s not pure to the framework that you started with, it’s okay for you to move in that direction. So people are, I think, moving away from those defined roles that were part of those frameworks. I think that that’s probably a good thing. Rather than, you know, you’re a product owner or you’re a Scrum master, or all of those kinds of things, moving away from prescriptive titles I think is one thing that I’ve seen them do.

Also, working with tool sets that are less rigid and more flexible. If you are trying to run everything within a very defined set of tools, and those tools define your workflow, that’s very constrictive for a lot of companies and a lot of teams, and they’re trying to find a better way to organize themselves and to support their ways of working using more flexible tool sets.

How is AI impacting Agile development?

Well, you know, I would be lying if I said that anybody knows the answer to that, right? We’re still in the very early days of that revolution. But one thing that I can see on the horizon as a potential outcome and impact of AI is: is it going to affect team size? If you think about an Agile team, they used to prescribe that the ideal size is six plus or minus three, and you have to have these specific skill sets on it. Maybe team sizes are going to shrink a little, and you’re going to have maybe one or two developers on a team, and then they can orchestrate a series of AI agents that do a lot of the work that other specialists would have done in the past, like QA or specific database tasks or things like that. So I definitely think it’s going to affect the team composition, the team structure, and the team size.

The other thing that I think it’s going to really impact is that a lot of the monotonous tasks are probably going to be taken over by AI. And you see that across all industries, right? What does that mean? It means that it’s going to free up the really talented people on Agile teams to do the higher-level strategic thinking, the things that AI can’t do yet. Maybe it’ll do it one day, but it can’t do it today: thinking strategically, thinking about the human dimensions of what they’re building, and making sure that it’s being guided in that direction. The actual coding work or testing work will probably be taken over by some form of AI, but we are going to have the ability to focus our efforts on those higher-order or higher-complexity activities.

So you really have to prepare yourself individually. You have to bring your skill set up, and you also have to know how to work with an AI, because if those AIs are going to be your assistants, or they’re going to be an embedded part of your team, you have to know how to be able to orchestrate and run a series of AI agents that are going to get the work done that other human beings would have done before. So I really think that’s going to happen. What does that mean for Scrum masters specifically? I think Scrum masters, again, will have to evolve in a different direction and focus more on the human element. We’ve always said that Scrum masters are also Agile coaches, but we haven’t really taken that to heart. And I feel like that’s something that Scrum masters really need to embrace in this new era of being able to coach human beings and have high emotional intelligence. AI doesn’t have emotional intelligence. We do. So we need to be able to make sure that the human beings on our team are supported and have what they need to collaborate and to be successful, and then leave the drudgery to the AI.

Q&A: 10 emerging technologies to watch in 2024 https://sdtimes.com/ai/qa-10-emerging-technologies-to-watch-in-2024/ Wed, 07 Aug 2024 19:21:44 +0000

Every year, Forrester puts together a list of 10 emerging technologies to watch. This year’s list was released in June, and in the most recent episode of our podcast, What the Dev?, we were able to sit down with Brian Hopkins, VP of Emerging Tech Portfolio at Forrester, about the list.

Here is an edited and abridged version of that conversation:

One of the things that stuck out to me in this year’s list is this idea that there’s been this shift from generative AI to agentic AI. Can you explain what agentic AI is and what the shift means?

Absolutely, the trend you’re talking about is a shift from focusing purely on generation of text using artificial intelligence to building AI agents that actually are capable of accomplishing actions on people’s behalf. When we think about an AI agent, we think about a piece of software that’s going to actually be able to take a general set of instructions, and be able to generate a visualization or access a database or trigger actions within another application.  The most exciting thing we see right now is the shift towards actually using these next generation models in more of an action taking context. 

As you wrote in the report, the rise of these AI agents is sort of giving way to a number of other emerging technologies on the list, including TuringBots, edge intelligence, autonomous mobility and extended reality. Can you briefly explain those other technologies and why AI agents are so important for their growth?

I think it’s important, before I get to those other technologies, to also explain the idea of an AI creating an intelligent agent. Earlier AIs that could go do things were narrow and constrained to a particular environment, using things like reinforcement learning. What we’re seeing today is taking the capabilities of large language models to break those instructions into specific steps and then go execute those steps with different tools. 
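The agent pattern described here (a model decomposing an instruction into steps, then executing each step with a tool) can be sketched as a simple dispatch loop. This is a toy illustration: the planner is a hard-coded stub standing in for an LLM, and the tool names are invented.

```python
def plan(goal):
    # Stand-in for an LLM breaking an instruction into specific steps;
    # a real agent would generate this list from the goal text.
    return [("query_db", "sales_2024"), ("visualize", "bar_chart")]

# Each tool is a callable the agent can invoke on a step's argument
TOOLS = {
    "query_db":  lambda arg: f"rows from {arg}",
    "visualize": lambda arg: f"rendered {arg}",
}

def run_agent(goal):
    results = []
    for tool_name, arg in plan(goal):
        # Dispatch each planned step to the tool it names
        results.append(TOOLS[tool_name](arg))
    return results

# run_agent("chart last year's sales")
#   -> ['rows from sales_2024', 'rendered bar_chart']
```

The difference from the earlier, narrow reinforcement-learning agents is entirely in `plan`: a general model can produce step lists for goals it was never explicitly programmed for, while the dispatch loop stays the same.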

When we think about that kind of design and how that might play out across a bunch of other emerging technologies in our list, a really interesting story starts to emerge. For example, one of the other emerging technologies we have is TuringBots, which we’ve been writing about since 2020. TuringBots are autonomous coding bots, and what we saw in 2020 was the ability, in theory, that given enough training data — like all the source code in a repository that you kept in a GitHub repository — you could train a machine learning algorithm to go write code based on that training data. 

What we saw with generative AI in 2022 was that capability dramatically accelerated, because before it gets compiled, software code is just text. When we first identified TuringBots as an emerging technology, we put them in the five-to-10-year horizon before we thought there would be benefit for the average enterprise. This year we moved them into the one-to-two-year near-term benefit horizon because of how quickly state-of-the-art generative AI models are improving their ability to generate useful code.

We recognize that a TuringBot is itself an agent, and what we’re seeing is the use of agent methodologies to create, perhaps swarms of TuringBots that are operating in different developer capacities, from design to coding to testing to deployment. 

And when we think about this, if we can produce software code with much lower human effort, we can iterate much more quickly. And if we can iterate on innovation ideas more quickly, we can get through the ones that aren’t good and produce the ones that are much, much faster. And we know that leads to an uptick in the pace of business changes. 

You asked about the other emerging technologies, and I’ll be a little more brief. Edge intelligence is about using information that’s outside the data center or outside the cloud, outside of a centralized location, to process information and use that information to create action and intelligence. Prior to this year, it was mostly focused on things like computer vision. So you had a very narrow model trained to recognize certain kinds of objects, and it would go do that computer vision recognition well. But what you did with that recognition, frankly, then had to be programmed in some kind of heuristics or code. 

What we’re beginning to see is — for instance, in the Apple Intelligence announcement but there’s others as well — how we are able to take agents that can do things, train them, and make them small enough to run on various edge devices. And then those edge environments, beyond just being able to perhaps converse in natural language with humans on the edges, can converse among themselves. 

The example that we give is there is a vendor who is looking at creating augmented reality overlays in the next generation firefighting helmet, which is an effort being sponsored by Homeland Security. If we begin to think about putting agents in those helmets, then a lot of the communication those firefighters would have to do themselves could be handled by agents in each one of these helmets, looking at those augmented reality displays and making decisions about where different firefighting assets need to be placed to offload that from the need for human communication in a scenario like that. 

So that’s an example of how generative AI is kind of serving as the foundation for agents, and those are then creating new innovation possibilities in edge intelligence. Same is true for autonomous mobility. We’re going to see these agents deployed in IoT environments, so that drones and robots can be a lot smarter in their communication with the surrounding environment. So we just see this whole idea of acceleration from generative AI creating a revolution in the ability to use it to do things, and then that’s moving into a bunch of other emerging technologies in our list and accelerating them.

A lot of the items on the list were AI related, but there are also three security items on it, so I kind of want to shift over to that side of things. So what have you been seeing in the security space that’s influenced the technologies that were on the list?

Actually, I’d like to answer that question the other way around. Why are things happening with the other emerging technologies that make security so important? This year when we did the research the idea dawned on us that those who achieve these future benefits are going to be the ones with the presence of mind to invest in security today. 

I’m seeing this play out over and over as I talk to clients who are telling me stories that they’ve been meaning to invest in better IoT security for years, and they just don’t see the value in it, because it costs money and it’s complicated and it doesn’t have an immediate top line impact. IoT security is on our list and IoT security has been around as long as there have been devices to secure, you know, 30 years. Why is it there this year? We see an enormous amount happening in the space, and the reason for that is, very simply, all these AI tools that the enterprises are getting and figuring out how to use to their advantage are also tools available to the bad actors. 

What’s happening is organizations are investing in more devices, more smart connected things, and we’re essentially increasing the attack surface by which smart hackers with an army of very smart bots can launch attacks to get into an operational technology environment: your faxes, your printers, that thing whose firmware you haven’t updated in 15 years, sitting in the corner of your office, connected to your network.

So IoT security has become really important, and there’s an awful lot going on in that space right now in terms of vendors and how they’re providing new capabilities for inventorying and remediating all your IoT devices.

The other two are zero trust edge and quantum security. Zero trust edge is essentially a packaged set of technologies that give you a whole bunch of capabilities that combine networking and security into a cloud-based, as-a-service delivery model. So you get all the features of managed cloud services, and you get the ability to manage your security down at the network level, which means, according to principles of zero trust, you don’t have a firewall anymore. You inspect everything and trust nothing, and therefore you’re looking at all the packets going across your network. 

The problem is that it requires firms to be pretty modern in their approach to cloud native software deployment and management, and a lot of firms are still pretty behind on that. There’s a lot of legacy devices out there, that fax machine sitting in the corner that doesn’t use modern protocols, modern security, doesn’t easily connect to this kind of agent-based zero trust edge architecture. It’s complicated, and the vendors are busy consolidating. So that’s why we think it’s going to take five more years before this really pays off. That doesn’t mean that today you can’t start working on it by making sure that you’re ready for a modern cloud-native way to manage both networking and security together. There’s a lot that needs to be done. 

There’s also been a lot of hype around quantum computers for the last 10 years, and we think quantum computers are 10 to 15 years out from actually being able to threaten today’s best PKI encryption. So it’s easy to say, well, it’s 10 to 15 years out, I don’t need to do anything now. But nobody knows how fast quantum computers are going to advance. It could be a lot faster, could be five years. What you have to worry about is the attack of save now, decrypt later. You’ve got to start now implementing quantum safe algorithms to make sure your data is protected.

But the real reason we put it on the top 10 list this year is because implementing quantum-safe algorithms, and being able to rapidly change algorithms as quantum computers advance and new quantum-safe algorithms are put forward, is part of a broader effort around cryptographic agility, and cryptographic agility has many benefits beyond protecting you from quantum attacks. New hacks are coming out all the time, so by looking at cryptographic agility solutions today in preparation for being ready for quantum attacks, you’re actually improving your whole security posture. There are many benefits to starting now, which is why we put it in the top 10.
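The cryptographic agility Hopkins describes, being able to swap algorithms without rewriting callers, boils down to a registry pattern. A minimal sketch (stdlib hash functions stand in here for real signature or encryption algorithms, purely for illustration):

```python
import hashlib

# Registry of algorithms; callers name one instead of hard-coding it,
# so a broken algorithm can be swapped out by changing configuration
# rather than touching every call site.
ALGORITHMS = {
    "sha256": hashlib.sha256,
    "sha3_256": hashlib.sha3_256,  # the "replacement algorithm" slot
}

def digest(data: bytes, algorithm: str = "sha256") -> str:
    return ALGORITHMS[algorithm](data).hexdigest()

d_old = digest(b"payload", "sha256")
d_new = digest(b"payload", "sha3_256")  # same caller, different algorithm
```

In a real deployment the registry would hold post-quantum signature or key-exchange schemes, and the configured default would change as algorithms are standardized or broken; the point is that only the registry and the configuration move.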

We covered a lot here today, so is there a takeaway that developers and leaders should come away with as they think about what to focus on in the next year? 

You have to spread your investments out. Short term is easy, it gives me benefits that I can measure, my finance people like it. But you have to take some of those mid and long-term shots as well. And a lot of the long term things that we have will require big foundational investments to be ready. 

I think corollary to that is with the speed of acceleration that we’re seeing happening primarily because of the advancements in AI today, we’re much less certain what the future is going to hold, and we’ll have much less time to deal with it. What that means is instead of saying here’s what the future is going to be, here’s our bet, you’re going to have to spread your bets out across a range of possible options. So you’re going to have to hedge your bets a little bit and use more of an options-based strategy to figure out where you spend your money, so that no matter which things break and go, we have a better chance of being ready for whatever happens.

Q&A: Lessons NOT learned from CrowdStrike and other incidents https://sdtimes.com/test/qa-lessons-not-learned-from-crowdstrike-and-other-incidents/ Wed, 31 Jul 2024 20:11:01 +0000

When an event like the CrowdStrike failure literally brings the world to its knees, there’s a lot to unpack there. Why did it happen? How did it happen? Could it have been prevented? 

On the most recent episode of our weekly podcast, What the Dev?, we spoke with Arthur Hicken, chief evangelist at the testing company Parasoft, about all of that and whether we’ll learn from the incident. 

Here’s an edited and abridged version of that conversation:

AH: I think that is the key topic right now: lessons not learned — not that it’s been long enough for us to prove that we haven’t learned anything. But sometimes I think, “Oh, this is going to be the one or we’re going to get better, we’re going to do things better.” And then other times, I look back at statements from Dijkstra in the 70s and go, maybe we’re not gonna learn now. My favorite Dijkstra quote is “if debugging is the act of removing bugs from software, then programming is the act of putting them in.” And it’s a good, funny statement, but I think it’s also key to one of the important things that went wrong with CrowdStrike. 

We have this mentality now, and there’s a lot of different names for it — fail fast, run fast, break fast —  that certainly makes sense in a prototyping era, or in a place where nothing matters when failure happens. Obviously, it matters. Even with a video game, you can lose a ton of money, right? But you generally don’t kill people when a video game is broken because it did a bad update. 

David Rubinstein, editor-in-chief of SD Times: You talk about how we keep having these catastrophic failures, and we keep not learning from them. But aren’t they all a little different in certain ways? Like, you had Log4j, which you thought would be the thing that would make people definitely pay more attention. And then we get CrowdStrike. But they’re not all the same type of problem?

AH: Yeah, that is true, I would say, Log4j was kind of insidious, partly because we didn’t recognize how many people use this thing. Logging is one of those less worried about topics. I think there is a similarity in Log4j and in CrowdStrike, and that is we have become complacent where software is built without an understanding of what the rigors are for quality, right? With Log4j, we didn’t know who built it, for what purpose, and what it was suitable for. And with CrowdStrike, perhaps they hadn’t really thought about what if your antivirus software makes your computer go belly up on you? And what if that computer is doing scheduling for hospitals or 911 services or things like that? 

And so, what we’ve seen is that safety critical systems are being impacted by software that never thought about it. And one of the things to think about is, can we learn something from how we build safety critical software or what I like to call good software? Software meant to be reliable, robust, meant to operate under bad conditions. 

I think that’s a really interesting point. Would it have hurt CrowdStrike to have built their software to better standards? And the answer is it wouldn’t. And I posit that if they were building better software, speed would not be impacted negatively and they’d spend less time testing and finding things.

DR: You’re talking about safety critical, you know, back in the day that seemed to be the purview of what they were calling embedded systems that really couldn’t fail. They were running planes and medical devices and things that really were life and death. So is it possible that maybe some of those principles could be carried over into today’s software development? Or is it that you needed to have those specific RTOSs to ensure that kind of thing?

AH: There’s certainly something to be said for a proper hardware and software stack. But even in the absence of that, you can have your standard laptop with your OS of choice on it and still build software that is robust. I have a little slide up on my other monitor from a joint webinar with CERT a couple of years ago, and one of the studies that we used there is that 64% of vulnerabilities in NIST are programming errors. And 51% of those are what they like to call classic errors. I look at what we just saw in CrowdStrike as a classic error. Buffer overflows, reading null pointers, uninitialized data, integer overflows: these are what they call classic errors.

And they obviously had an effect. We don’t have full visibility into what went wrong, right? We get what they tell us. But it appears that there was a buffer overflow that was caused by reading a config file, and one can argue about the effort and performance impact of protecting against buffer overflows, like paying attention to every piece of data. On the other hand, how long has that buffer overflow been sitting in that code? To me, a piece of code that’s responding to an arbitrary configuration file is something you have to check. You just have to check this. 
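We only know what CrowdStrike has disclosed, so this is not their code; it is a minimal sketch, with hypothetical names (`load_entries`, `MAX_ENTRIES`), of the kind of check being described: any length read from an untrusted config file gets validated before it is used to copy or index memory.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define MAX_ENTRIES 64  /* capacity of the destination buffer */

/* Copy `count` 32-bit values from an untrusted, config-supplied buffer.
 * Returns 0 on success, -1 if the input would overflow `out` or read
 * past the end of `src` -- the classic errors discussed above. */
int load_entries(uint32_t count, const uint32_t *src, size_t src_len,
                 uint32_t out[MAX_ENTRIES]) {
    if (src == NULL)         return -1;  /* null pointer guard        */
    if (count > MAX_ENTRIES) return -1;  /* buffer overflow guard     */
    if (count > src_len)     return -1;  /* don't read past the source */
    memcpy(out, src, (size_t)count * sizeof(uint32_t));
    return 0;
}
```

The guard costs three comparisons per call, which supports the argument above that protecting against classic errors need not hurt speed.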

The question that keeps me up at night, like if I was on the team at CrowdStrike, is okay, we find it, we fix it, then it’s like, where else is this exact problem? Are we going to go and look and find six other or 60 other or 600 other potential bugs sitting in the code only exposed because of an external input?

DR: How much of this comes down to technical debt, where you have these things that linger in the code that never get cleaned up, and things are just kind of built on top of them? And now we’re in an environment where if a developer is actually looking to eliminate that and not writing new code, they’re seen as not being productive. How much of that is feeding into these problems that we’re having?

AH: That’s a problem with our current common belief about what technical debt is, right? I mean, the original metaphor is solid, the idea that stupid things you’re doing, or things that you failed to do, now will come back to haunt you in the future. But simply running some kind of static analyzer and calling every undealt-with issue technical debt is not helpful. And not every tool can find buffer overflows that don’t yet exist. There are certainly static analyzers that can look for design patterns that would allow buffer overflows, or enforce design patterns that disallow them. In other words, looking for the existence of a size check. And those are the kinds of things that, when people are dealing with technical debt, they tend to call false positives. Good design patterns are almost always viewed as false positives by developers. 

So again, it’s that we have to change the way we think; we have to build better software. Dodge said back in, I think it was the 1920s, that you can’t test quality into a product. And the mentality in the software industry is that if we just test it a little more, we can somehow find the bugs. There are some things that are very difficult to protect against, but buffer overflow, integer overflow, uninitialized memory, null pointer dereferencing, these are not rocket science.
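As a minimal illustration of why these classic errors are not rocket science to prevent, here is one common guard against integer overflow in a size calculation. The helper name `checked_mul` is hypothetical, but the division-against-`SIZE_MAX` test is a standard C idiom for detecting wraparound before it happens.

```c
#include <stdint.h>
#include <stddef.h>

/* Compute n * elem_size for an allocation, detecting wraparound.
 * Returns 0 and sets *out on success, -1 on overflow. */
int checked_mul(size_t n, size_t elem_size, size_t *out) {
    if (elem_size != 0 && n > SIZE_MAX / elem_size)
        return -1;            /* n * elem_size would wrap around */
    *out = n * elem_size;
    return 0;
}
```

A static analyzer enforcing this design pattern is simply checking that the comparison exists before the multiply; developers who dismiss that finding as a false positive are dismissing exactly this kind of guard.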


You may also like…

Lessons learned from CrowdStrike outages on releasing software updates

Software testing’s chaotic conundrum: Navigating the Three-Body Problem of speed, quality, and cost

Q&A: Solving the issue of stale feature flags

The post Q&A: Lessons NOT learned from CrowdStrike and other incidents appeared first on SD Times.

Q&A: Evaluating the ROI of AI implementation https://sdtimes.com/ai/qa-evaluating-the-roi-of-ai-implementation/ Wed, 10 Jul 2024 17:42:43 +0000 https://sdtimes.com/?p=55151
Many development teams are beginning to experiment with how they can use AI to benefit their efficiency, but in order to have a successful implementation, they need to have ways to assess that their investment in AI is actually providing value proportional to that investment. 

A recent Gartner survey from May of this year said that 49% of respondents claimed the primary obstacle to AI adoption is the difficulty in estimating and demonstrating the value of AI projects. 

On the most recent episode of our podcast What the Dev?, Madeleine Corneli, lead product manager of AI/ML at Exasol, joined us to share tips on doing just that. Here is an edited and abridged version of that conversation:

Jenna Barron, news editor of SD Times: AI is everywhere. And it almost seems unavoidable, because it feels like every development tool now has some sort of AI assistance built into it. But despite the availability and accessibility, not all development teams are using it. And a recent Gartner survey from May of this year said that 49% of respondents claimed the primary obstacle to AI adoption is the difficulty in estimating and demonstrating the value of AI projects. We’ll get into specifics of how to assess the ROI later, but just to start our discussion, why do you think companies are struggling to demonstrate value here?

Madeleine Corneli: I think it starts with actually identifying the appropriate uses, and use cases for AI. And I think what I hear a lot both in the industry and kind of just in the world right now is we have to use AI, there’s this imperative to use AI and apply AI and be AI driven. But if you kind of peel back the onion, what does that actually mean? 

I think a lot of organizations and a lot of people actually struggle to answer that second question, which is what are we actually trying to accomplish? What problem are we trying to solve? And if you don’t know what problem you’re trying to solve, you can’t gauge whether or not you’ve solved the problem, or whether or not you’ve had any impact. So I think that lies at the heart of the struggle to measure impact.

JB: Do you have any advice for how companies can ask that question and, and get to the bottom of what they are trying to achieve?

MC: I spent 10 years working in various analytics industries, and I got pretty practiced at working with customers to try to ask those questions. And even though we’re talking about AI today, it’s kind of the same question that we’ve been asking for many years, which is, what are you doing today that is hard? Are your customers getting frustrated? What could be faster? What could be better? 

And I think it starts with just examining your business or your team or what you’re trying to accomplish, whether it’s building something or delivering something or creating something. And where are the sticking points? What makes that hard? 

Start with the intent of your company and work backwards. And then also when you’re thinking about your people on your team, what’s hard for them? Where do they spend a lot of their time? And where are they spending time that they’re not enjoying? 

And you start to get into like more manual tasks, and you start to get into like questions that are hard to answer, whether it’s business questions, or just where do I find this piece of information? 

And I think focusing on the intent of your business, and also the experience of your people, and figuring out where there’s friction on those are really good places to start as you attempt to answer those questions.

JB: So what are some of the specific metrics that could be used to show the value of AI?

MC: There’s lots of different types of metrics and there’s different frameworks that people use to think about metrics. Input and output metrics is one common way to break it down. Input metrics are something you can actually change that you have control over and output metrics are the things that you’re actually trying to impact. 

So a common example is customer experience. If we want to improve customer experience, how do we measure that? It’s a very abstract concept. You have customer experience scores and things like that. But it’s an output metric; it’s something you tangibly want to improve and change, but it’s hard to do so. And so an input metric might be how quickly we resolve support tickets. It’s not necessarily telling you you’re creating a better customer experience, but it’s something you have control over that does affect customer experience. 

I think with AI, you have both input and output metrics. So if you’re trying to actually improve productivity, that’s a pretty nebulous thing to measure. And so you have to pick these proxy metrics. So how long did the task take before versus how long it takes now? And it really depends on the use case, right? So if you’re talking about productivity, time saved is going to be one of the best metrics. 

Now a lot of AI is also focused not on productivity, but is kind of experiential, right? It’s a chatbot. It’s a widget. It’s a scoring mechanism. It’s a recommendation. It’s things that are intangible in many ways. And so you have to use proxy metrics. And I think interactions with AI are a good starting place. 

How many people actually saw the AI recommendation? How many people actually saw the AI score? And then was a decision made? Or was an action taken because of that? If you’re building an application of almost any kind, you can typically measure those things. Did someone see the AI? And did they make a choice because of it? I think if you can focus on those metrics, that’s a really good place to start.

JB: So if a team starts measuring some specific metrics, and they don’t come out favorably, is that a sign that they should just give up on AI for now? Or does it just mean they need to rework how they’re using it, or maybe they don’t have some important foundations in place that really need to be there in order to meet those KPIs?

MC: It’s important to start with the recognition that not meeting a goal on your first try is okay. And especially as we’re all very new to AI, even for customers that are still evolving their analytics practices, there are plenty of misses and failures. And that’s okay. So those are great opportunities to learn. Typically, if you’re unable to hit a metric or a goal that you’ve set, the first thing you want to go back to is double-check your use case.

So let’s say you built some AI widget that does a thing and you’re like, I want it to hit this number. Say you miss the number or you go too far over it or something, the first check is, was that actually a good use of AI? Now, that’s hard, because you’re kind of going back to the drawing board. But because we’re all so new to this, and I think because people in organizations struggle to identify appropriate AI applications, you do have to continually ask yourself that, especially if you’re not hitting metrics, that creates kind of an existential question. And it might be yes, this is the right application of AI. So if you can revalidate that, great. 

Then the next question is, okay, we missed our metric, was it the way we were applying AI? Was it the model itself? So you start to narrow into more specific questions. Do we need a different model? Do we need to retrain our model? Do we need better data? 

And then you have to think about that in the context of the experience that you are trying to provide. Maybe it was the right model and all of those things, but were we actually delivering that experience in a way that made sense to customers or to people using this?

So those are kind of like the three levels of questions that you need to ask: 

  1. Was it the right application? 
  2. Was I hitting the appropriate metrics for accuracy?
  3. Was it delivered in a way that makes sense to my users? 

Check out other recent podcast transcripts:

Why over half of developers are experiencing burnout

Getting past the hype of AI development tools

The post Q&A: Evaluating the ROI of AI implementation appeared first on SD Times.

Q&A: Why over half of developers are experiencing burnout https://sdtimes.com/softwaredev/qa-why-over-half-of-developers-are-experiencing-burnout/ Tue, 02 Jul 2024 21:14:12 +0000 https://sdtimes.com/?p=55099
According to a recent report from Jellyfish, 65% of respondents said they experienced burnout in the last year. 

To dig deep into why that’s happening at such a high rate, we invited the company’s CEO and co-founder, Andrew Lau, onto the latest episode of our podcast, What the Dev?

Here’s an abridged and edited version of that conversation.

Jenna Barron: What are some factors that have contributed to the percentage of burnout among developers being so high? 

Andrew Lau: I think it’s a compounding of a number of effects. First and foremost, I would just say it’s been a crazy four years. I mean, it’s been so volatile; you’ve got pandemics, you’ve got booms and busts. And so, like there’s a health crisis, first and foremost, and people are kind of getting through that. And there have been really huge economic swings in the tech industry. I think the broad economy seems to be doing okay, but if you look at the tech sector as a whole, it’s been hard. And that puts pressure on everybody around what that actually means for them. 

And I actually think that we are at a time where we’ve learned as a community, as an industry how to work remotely or hybrid. But I think those environments aren’t always the most conducive to actually making it a human experience or easier. For me, I go from Zoom to Zoom to Zoom without any breaks. And we haven’t learned the rhythms of keeping sanity in that. 

And then like the last layer on top is like we’re at a time where AI is already causing change, there’s looming change coming. And there’s a lot of unknown and pressure. 

I think it’s the backbone theme through all of this that is causing a lot of stress and inevitably leads to burnout.

JB: What do you think that companies and leadership can do to ensure that their employees don’t reach that point?

AL: Well, if everyone had a magic formula, we’d all just do it and be done. But I think in some ways, it is simple in the sense that we have to acknowledge it first. I think if you don’t say this is an issue and how we address it, then that’s a problem. You’re never going to fix it otherwise. 

So I think it’s important to ask and understand how your team is actually doing with respect to that. I do think though, one aspect of this is about why change is happening. And so some is clear, like pandemics you can’t control and hybrid happened to all of us in this way. But I think in some ways, we have to talk about the intentionality around some of this too. And just saying, like, “hey, this is happening and this is why we’re doing it.” I think that helps people understand the why, and often that can either make it easier to accept the changes that need to happen, or figure out better ways to do it.

JB:  Another finding from the report is that 90% of the respondents said that engineering teams are actually informing business strategy. So what does that look like in practice?

AL: This is, I think, an acknowledgment of that old Marc Andreessen statement that software is eating the world. I think we’re there; software has eaten the world. 

Now, what does it actually mean? Well, it first meant that every company is actually making software. But more than that, that software is the manifestation of the company’s offerings now. And you see this in every industry, whether it’s healthcare or banking, or whatever it is, software is the lingua franca of the thing you’re making, or the thing that makes the thing you’re making. 

And so how does it look in practice? Well, if that’s true, then it means the ideas are coming from the teams. So I think you’re actually now starting to see this around AI innovation. Historically, someone might say, we ought to solve this business problem, okay. But now you have to ask yourself, what are the technical limitations? Can we even do that or not? Or, we have this AI thing, what are we going to do with it? And how does it actually manifest? It’s teams actually trying stuff out, plugging it in, etc. So whether it manifests in a hackathon or side projects, or people playing with ChatGPT to try to go ahead and change something, it’s just manifesting very quickly. At the very least, it may not be a complete product yet, but at least it sets a direction and stage and unlocks possibilities. And people are realizing what can and can’t be done. And it can no longer be a business decision alone. In fact, the spirit of innovation is coming from the technologists and the manifestations in that way.

JB: It seems kind of counter to what we’ve heard over the years where there’s been sort of like this friction between leaders and engineers, and engineers want to build one thing and the leaders want something else. And engineers are like, “No, we can’t do that.” But it seems like based on these numbers, the engineers are having more of a say in what’s realistic and what can be done.

AL: I think I’m with you. I think there’s actually a confluence of two forces going on there. One is that things are so novel and complicated, but also idea inspiring, so the manifestations can come from the engineers because they’re making it happen. It’s like everyone just suddenly gets it, or they see competitors suddenly do this. So in some sense the medium is allowing the makers to suddenly blossom those ideas on the other side. As I said earlier in this conversation, we’re in a time of economic uncertainty and potentially just tough times in the tech space. Again, no one seeks it, but there are some silver linings here, one of which is that it forces alignment, in the sense that when waste was prevalent, both parties could say, “Well, I think we should be making this,” “I think we should be making that.” In a time where we’re focused on efficiency, you have to reconcile the two. It can’t be, I wish this and I wish that. It’s like, no, we have time for one of those things, and we’re going to reconcile it. And so the confluence of those two forces, I think, allows some alignment to suddenly happen in that way.

JB: I know the report had a lot of other interesting things. And we really only covered two of them. What would you say is the biggest takeaway from the report, or maybe an interesting thing that we didn’t touch on that you would want developers and engineers to leave with? 

AL: Change is so deeply in play in our industry right now, and in every way, from the way we work, to the tools we use, the technologies, manifestations we can actually make. It’s happening at every single layer. 

And so with that, I actually think you have to have all parties fully embrace change. Like it is happening, you can’t fight it. Accept it and figure out how you’re going to adopt it, and bring it in and talk about what has to change along the way. 

I think you can’t just keep doing the same thing again. Both for your company’s survival and thriving, and for your happiness, contentment, and reducing stress. Embrace it in that way.


You may also like…

Report: Software engineers increasingly seen as strategic business partners

The real problems IT still needs to tackle for platforms

The post Q&A: Why over half of developers are experiencing burnout appeared first on SD Times.

Q&A: Getting past the hype of AI development tools https://sdtimes.com/ai/podcast-getting-past-the-hype-of-ai-development-tools/ Thu, 06 Jun 2024 18:02:26 +0000 https://sdtimes.com/?p=54828
Assisting development with AI tools can be quite a divisive topic. Some people feel they’re going to replace developers entirely, some feel they can’t produce good enough code to be useful at all, and a lot of people fall somewhere in the middle. Given the interest in these types of tools over the last few years, we spoke with Phillip Carter, principal product manager at Honeycomb, in the latest episode of our podcast, about his thoughts on them.

He believes that overall these tools can be beneficial, but only if you can narrow down your use case, have the right level of expertise to verify the output, and set realistic expectations for what they can do for you.

The following is an abridged version of the conversation.

SD Times: Do you believe that these AI tools are good or bad for development teams?

Phillip Carter: I would say I lean towards good, and trending better over time. It depends on a couple of different factors. I think the first factor is seniority. The tools that we have today are sort of like the worst versions of these tools that we’re going to be using in the next decade or so. It’s kind of like when cloud services came out in 2010, 2011, and there were clear advantages to using them. But for a lot of use cases, these services were just not actually solving a lot of problems that people had. And so over a number of years, there was a lot of “hey, this might be really helpful,” and they eventually sort of lived up to those aspirations. But it wasn’t there at that point in time.

I think for aiding developers, these AI models are kind of at that point right now, where there are some more targeted use cases where they do quite well, and then many other use cases where they don’t do very well at all, and they can be actively misleading. And so what you do about that depends very heavily on what kind of developer you are, right? If you’re fresh out of college, or you’re still learning how to program and you’re not really an expert in software development, the misleading nature of these tools can be quite harmful, because you don’t really have a whole lot of experience and sort of a gut feel for what’s right or wrong to compare that against. Whereas if you are a more senior engineer, you can say, okay, well, I’ve kind of seen this shape of problem before, and this code that it spat out looks like it’s mostly right.

And there are all sorts of uses for it, such as creating a few tests and making sure those tests are good, and it is a time saver in that regard. But if you don’t have that sense of, okay, this is how I’m going to verify that it’s actually correct, this is how I’m going to compare what I see with what I have seen in the past, then that can be really difficult. And we have seen cases where some junior engineers in particular have struggled with actually solving problems, because they sort of try it and it doesn’t quite do it, they try it again, it doesn’t quite do it. And they spend more time doing that than just sitting down and thinking through the problem.

One of the more junior engineers at our company, they leaned on these tools at first and realized that they were misleading a little bit and they stepped away to build up some of their own expertise. And then they actually came back to using some of those tools, because they found that they still were useful, and now that they had more of an instinct for what was good and bad, they could actually use a little bit more.

It’s great for when you know how to use it, and you know how to compare it against things that you know are good or bad. But if you don’t, then you’ve basically added more chaos into the system than there should have been.

SDT: At what point in their career would a developer be at the point where they should feel they’re experienced enough to use these tools effectively?

PC: The most obvious example that comes to mind for me is writing test cases. There’s this understanding that that’s a domain you can apply this to even when you’re a little bit more junior in your career. Stuff is going to either pass or fail, and you can take a look at that and be like, should this have passed? Or should this have failed? It’s a very clear signal.

Whereas if you’re using it to edit more sophisticated code inside of your code base, it’s like, well, I’m not really sure if this is doing the right thing, especially if I don’t have a good test harness that validates that it should be doing the right thing. And that’s where that seniority and just more life experience building software really comes into play, because you can sort of have that sense as you’re building it, and you don’t need to fall back on having a robust test suite that really checks if you’re doing the right thing.

The other thing that I’ll say is that I have observed several junior engineers thrive with these tools quite a bit. Because it’s not really about being junior, it’s just that some engineers are better at reading and understanding code than they are at writing it. Or maybe they’re good at both, but their superpower is looking at code and analyzing it, and seeing if it’s going to do the job that it should do. And this really pushes the bottleneck in that direction. Because if you imagine for a moment, let’s say they were perfect at generating code. Well, now the bottleneck is entirely on understanding that code; it really has nothing to do with writing the code itself. And a lot of people more junior in their career can thrive in that environment, if the writing of the code is more of a bottleneck for them. But if they’re really good at understanding stuff and reading it, then they can say, this thing actually does do things faster. And they can almost use it to generate different variations of things and read through the output and see if it actually does what it should be doing.

And so I don’t know if this is necessarily like something that is universal across all engineers and junior engineers but like if you have that mindset where you’re really good at reading and understanding code, you can actually use these tools to a significant advantage today and I suspect that will get better over time.

SDT: So even for more senior developers (or junior devs that have a special skill at reading and understanding code), are there ways in which these tools could be overused in a negative way? What best practices should teams put in place to make sure they’re not relying too heavily on these AI tools?

PC: So there’s a couple of things that can happen. I’ve done this before, I’ve had other people on the team do this as well, where they’ve used it and they sort of cycled through the suggestions and so on, and then they’ve sort of been like, wait a minute, this would have been faster if I just wrote this myself. That does happen from time to time, it actually doesn’t happen that often, but it can.

And there are some cases where the code that you need to write is just, for whatever reason, it’s too complicated for the model. It may not necessarily be super conceptually complicated code, it’s just that it might be something that the model right now is just not particularly good at. And so if you recognize that it’s outputting something where you’re scratching your head and going like I don’t really agree with that suggestion, that’s usually a pretty good signal that you should not be relying on this too heavily for at this moment in time.

There’s the ChatGPT model of you say you want something and it outputs a whole block of code, and you copy + paste it or do something with it. That’s one model. The other model that I think is more effective, that people lean on more, and that, frankly, is more helpful, is the completions model, where you’re actually writing the code still, but on a line-by-line basis it makes a suggestion. Sometimes that suggestion is bonkers, but usually, it’s actually pretty good. And you’re still kind of a little bit more in control and you’re not just blindly copy + pasting large blocks of code without ever reading it.

And so I think in terms of tool selection, the ones that are deeply ingrained in you actually writing the code are going to lead to a lot more actual understanding of what’s going on, when you compare that to the tools that just output whole big blocks of code that you copy + paste and sort of hope work. I think organizations should focus on the former, rather than the AI coding tools that barely even work. And maybe those will get better over time, but that’s definitely not something organizations should really depend on.

There’s another model of working with these tools that’s developing right now, by GitHub as well, that I think could show promise. It’s through their product called GitHub Copilot Workspace. And so basically, you start with like a natural language task and then it produces an interpretation of that task in natural language. And it asks you to sort of validate like, “hey, is this the right interpretation of what I should be doing?” And then you can add more steps and more sub interpretations and edit it. And then it takes the next step, and it generates a specification of work. And then you say, okay, like, do I agree with the specification of work or not? And you can’t really continue unless you either modify it or you say, “yes, this looks good.” And then it says, “Okay, I’ve analyzed your codebase. And these are the files that I want to touch. So like, are these the right places to look? Am I missing something?” At every step of the way, you intervene, and you have this opportunity to like, disagree with it and ask it to generate something new. And eventually it outputs a block of code as a diff. So it’ll say, “hey, like, this is what we think the changes should be.”

What I love about that model, in theory, and I have used it in practice, it works. It really just says, software development is not just about code, but it’s about understanding tasks. It’s about interpreting things. It’s about revising plans. It’s about creating a formal spec of things. Sometimes it’s about understanding where you need to work.

Because if I’m being honest, I don’t think these automated agents are going to go anywhere anytime soon, because the space that they’re trying to operate in is so complicated. And they might have a place for tiny tasks that people today shunt off to places like Upwork, but for replacing teams of engineers actually solving real business problems that are complicated and nuanced, I just don’t see it. And so I feel like it’s almost a distraction to focus on that. And the AI-powered stuff can really be helpful, but it has to be centered in keeping your development team engaged the entire time, and letting them use their brains to really drive this stuff effectively.

SDT: Any final thoughts or takeaways from this episode?

PC: I would say that the tools are not magic; do not believe the hype. The marketing is way overblown for what these things can do. But when you get past all that, and especially if you narrow your tasks to very concrete, small things, these tools can actually be wonderful for helping you save time and sometimes even consider approaches to things that you may not have considered in the past. And so focus on that, cut through the hype, just see it as a good tool. And if it’s not a good tool for you, discard it, because it’s not going to be helpful. That’s probably what I would advise anyone in any capacity to frame up these things with.

The post Q&A: Getting past the hype of AI development tools appeared first on SD Times.
