development Archives - SD Times https://sdtimes.com/tag/development/

Q&A: Getting past the hype of AI development tools https://sdtimes.com/ai/podcast-getting-past-the-hype-of-ai-development-tools/ Thu, 06 Jun 2024 18:02:26 +0000

The post Q&A: Getting past the hype of AI development tools appeared first on SD Times.

Assisting development with AI tools can be quite a divisive topic. Some people feel they’re going to replace developers entirely, some feel they can’t produce good enough code to be useful at all, and a lot of people fall somewhere in the middle. Given the interest in these types of tools over the last few years, we spoke with Phillip Carter, principal product manager at Honeycomb, in the latest episode of our podcast, about his thoughts on them.

He believes that overall these tools can be beneficial, but only if you can narrow down your use case, have the right level of expertise to verify the output, and set realistic expectations for what they can do for you.

The following is an abridged version of the conversation.

SD Times: Do you believe that these AI tools are good or bad for development teams?

Phillip Carter: I would say I lean towards good and trending better over time. It depends on a couple of different factors. I think the first factor is seniority. The tools that we have today are sort of like the worst versions of these tools that we’re going to be using in the next decade or so. It’s kind of like when cloud services came out in 2010, 2011, and there were clear advantages to using them. But for a lot of use cases, these services were just not actually solving a lot of problems that people had. And so over a number of years, there was a lot of “hey, this might be really helpful,” and they eventually sort of lived up to those aspirations. But it wasn’t there at that point in time.

I think for aiding developers, these AI models are kind of at that point right now, where there are some more targeted use cases where they do quite well, and then many other use cases where they don’t do very well at all, and they can be actively misleading. And so what you do about that depends very heavily on what kind of developer you are, right? If you’re fresh out of college, or you’re still learning how to program and you’re not really an expert in software development, the misleading nature of these tools can be quite harmful, because you don’t really have a whole lot of experience and sort of a gut feel for what’s right or wrong to compare that against. Whereas if you are a more senior engineer, you can say, okay, well, I’ve kind of seen this shape of problem before, and this code that it spat out looks like it’s mostly right.

And there are all sorts of ways to use it, such as creating a few tests and making sure those tests are good, and it is a time saver in that regard. But if you don’t have that sense of okay, well, this is how I’m going to verify that it’s actually correct, this is how I’m going to compare what I see with what I have seen in the past, then that can be really difficult. And we have seen cases where some junior engineers in particular have struggled with actually solving problems, because they sort of try it and it doesn’t quite do it, they try it again, it doesn’t quite do it. And they spend more time doing that than just sitting down and thinking through the problem.

One of the more junior engineers at our company leaned on these tools at first and realized that they were misleading a little bit, and they stepped away to build up some of their own expertise. And then they actually came back to using some of those tools, because they found that they still were useful, and now that they had more of an instinct for what was good and bad, they could actually use them a little bit more.

It’s great when you know how to use it, and you know how to compare it against things that you know are good or bad. But if you don’t, then you’ve basically added more chaos into the system than there should have been.

SDT: At what point in their career would a developer be at the point where they should feel they’re experienced enough to use these tools effectively?

PC: The most obvious example that comes to mind for me is writing test cases. There’s this understanding that that’s a domain you can apply this to even when you’re a little bit more junior in your career. Stuff is going to either pass or fail, and you can take a look at that and be like, should this have passed? Or should this have failed? It’s a very clear signal.
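To make the point concrete, here is a minimal sketch of the kind of test an AI assistant might draft. The `slugify` function and its expected behavior are invented purely for this illustration, not taken from the interview:

```python
# A hypothetical function under test, invented for this example.
def slugify(text: str) -> str:
    """Lowercase, trim, and join words with single hyphens."""
    return "-".join(text.strip().lower().split())

# AI-suggested tests: each one either passes or fails, which gives a
# reviewer of any seniority an unambiguous signal about correctness.
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_extra_whitespace():
    assert slugify("  Hello   World  ") == "hello-world"
```

Even if the assistant generated `slugify` itself, running these tests gives the clear pass/fail feedback Carter describes, rather than requiring a gut feel for whether the code is right.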

Whereas if you’re using it to edit more sophisticated code inside of your code base, it’s like, well, I’m not really sure if this is doing the right thing, especially if I don’t have a good test harness that validates that it should be doing the right thing. And that’s where that seniority and just more life experience building software really comes into play, because you can sort of have that sense as you’re building it, and you don’t need to fall back on having a robust test suite that really checks if you’re doing the right thing.

The other thing that I’ll say is that I have observed several junior engineers thrive with these tools quite a bit. Because it’s not really about being junior, it’s just that some engineers are better at reading and understanding code than they are at writing it. Or maybe they’re good at both, but their superpower is looking at code and analyzing it, and seeing if it’s going to do the job that it should do. And this really pushes the bottleneck in that direction. Because if you imagine for a moment, let’s say they were perfect at generating code. Well, now the bottleneck is entirely on understanding that code; it really has nothing to do with writing the code itself. And a lot of people more junior in their career can thrive in that environment, if the writing of the code is more of a bottleneck for them. But if they’re really good at understanding stuff and reading it, then they can say, this thing actually does do things faster. And they can almost use it to generate different variations of things and read through the output and see if it actually does what it should be doing.

And so I don’t know if this is necessarily something that is universal across all engineers and junior engineers, but if you have that mindset where you’re really good at reading and understanding code, you can actually use these tools to a significant advantage today, and I suspect that will get better over time.

SDT: So even for more senior developers (or junior devs that have a special skill at reading and understanding code), are there ways in which these tools could be overused in a negative way? What best practices should teams put in place to make sure they’re not relying too heavily on these AI tools?

PC: So there’s a couple of things that can happen. I’ve done this before, I’ve had other people on the team do this as well, where they’ve used it and they sort of cycled through the suggestions and so on, and then they’ve sort of been like, wait a minute, this would have been faster if I just wrote this myself. That does happen from time to time, it actually doesn’t happen that often, but it can.

And there are some cases where the code that you need to write is just, for whatever reason, too complicated for the model. It may not necessarily be super conceptually complicated code; it’s just that it might be something that the model right now is just not particularly good at. And so if you recognize that it’s outputting something where you’re scratching your head and going, I don’t really agree with that suggestion, that’s usually a pretty good signal that you should not be relying on this too heavily at this moment in time.

There’s the ChatGPT model, where you say you want something and it outputs a whole block of code and you copy + paste it. That’s one model. The other model that I think is more effective, that people lean on more, and that, frankly, is more helpful, is the completions model, where you’re actually writing the code still, but on sort of a line-by-line basis it makes a suggestion. Sometimes that suggestion is bonkers, but usually, it’s actually pretty good. And you’re still kind of a little bit more in control, and you’re not just blindly copy + pasting large blocks of code without ever reading them.

And so I think in terms of tool selection, the ones that are deeply ingrained in you actually writing the code are going to lead to a lot more actual understanding of what’s going on, when you compare that to the tools that just output whole big blocks of code that you copy + paste and hope it works. I think organizations should focus on that, rather than the AI coding tools that barely even work. And maybe it’ll get better over time, but that’s definitely not something organizations should really depend on.

There’s another model of working with these tools being developed right now, by GitHub as well, that I think could show promise. It’s through their product called GitHub Copilot Workspace. And so basically, you start with a natural language task and then it produces an interpretation of that task in natural language. And it asks you to validate: “hey, is this the right interpretation of what I should be doing?” And then you can add more steps and more sub-interpretations and edit it. And then it takes the next step, and it generates a specification of work. And then you say, okay, do I agree with the specification of work or not? And you can’t really continue unless you either modify it or you say, “yes, this looks good.” And then it says, “Okay, I’ve analyzed your codebase. And these are the files that I want to touch. So are these the right places to look? Am I missing something?” At every step of the way, you intervene, and you have this opportunity to disagree with it and ask it to generate something new. And eventually it outputs a block of code as a diff. So it’ll say, “hey, this is what we think the changes should be.”

What I love about that model, in theory (and I have used it in practice; it works), is that it really says software development is not just about code, but about understanding tasks. It’s about interpreting things. It’s about revising plans. It’s about creating a formal spec of things. Sometimes it’s about understanding where you need to work.

Because if I’m being honest, I don’t think these automated agents are going to go anywhere anytime soon, because the space that they’re trying to operate in is so complicated. And they might have a place for tiny tasks that people today shunt off to places like Upwork, but for replacing teams of engineers actually solving real business problems that are complicated and nuanced, I just don’t see it. And so I feel like it’s almost like a distraction to focus on that. And the AI-powered stuff can really be helpful, but it has to be centered in keeping your development team engaged the entire time, and letting them use their brains to really drive this stuff effectively.

SDT: Any final thoughts or takeaways from this episode?

PC: I would say that the tools are not magic; do not believe the hype. The marketing is way overblown for what these things can do. But when you get past all that, and especially if you narrow your tasks to very concrete, small things, these tools can actually really be wonderful for helping you save time and sometimes even consider approaches to things that you may not have considered in the past. And so focus on that, cut through the hype, and just see it as a good tool. And if it’s not a good tool for you, discard it, because it’s not going to be helpful. That’s probably how I would advise anyone in any capacity to frame up these things.

Report: As DevOps adoption nears 100%, these factors determine maturity https://sdtimes.com/devops/report-as-devops-adoption-nears-100-these-factors-determine-maturity/ Tue, 16 Apr 2024 15:00:40 +0000

The post Report: As DevOps adoption nears 100%, these factors determine maturity appeared first on SD Times.

Most developers at this point in time have adopted DevOps in some form or another, whether they are a full-blown DevOps engineer or a developer utilizing parts of the DevOps practice. 

According to a new report from the Continuous Delivery Foundation (CDF), 83% of developers were “involved in DevOps-related activities” in the first quarter of 2024. The report was based on data from SlashData covering the past three and a half years. Because of the wide time period being examined, the organization was able to compare this to a 77% involvement in DevOps in early 2022, an increase of six percentage points.

Even though the total number of developers involved in DevOps in some way has risen, there has at the same time been a small decrease in the number of developers who involve themselves in all DevOps-related activities. In other words, developers are specializing in a specific DevOps task rather than trying to do it all. CDF sees this as an indicator of DevOps maturity.

The most common DevOps task developers take on is monitoring software or infrastructure performance, which was done by 33% of developers in the first quarter of the year. Other popular activities include approving code deployments to production (29%), testing applications for security vulnerabilities (29%), and using continuous integration to automatically build and test code changes (29%).

The report also pointed out that there is a strong correlation between the number of tools in use and maturity level. However, there is also a decrease in deployment performance when developers use multiple CI/CD tools of the same type, because it introduces interoperability challenges. 

Another indicator of maturity is simply the experience level of the developer. Developers with more than 11 years of experience are twice as likely to be top performers in lead time for code changes, compared to less experienced colleagues. Only 10% of those with five or fewer years of experience are considered to be top performers. 

When measuring time to restore services, only 5% of developers with two years or less experience are top performers. 

In addition, more experienced developers are more likely to be using more tools. Developers with two or fewer years of experience use an average of 2.3 tools, while those with 16 or more years of experience use an average of 5.2 tools. 

“The CD Foundation has been promoting standards in CD, securing the software supply chain, and advocating for better interoperability,” said Dadisi Sanyika, governing board chair at CDF. “The report findings reflect our community’s ongoing efforts and provide a framework for organizations to compare their practices with those of their industry peers, offering insights into where they stand and highlighting areas that require attention to enhance organizational efficiency.”

LEADTOOLS Version 23 introduces new Excel API and .NET MAUI support https://sdtimes.com/softwaredev/leadtools-version-23-introduces-new-excel-api-and-net-maui-support/ Tue, 05 Mar 2024 18:40:31 +0000

The post LEADTOOLS Version 23 introduces new Excel API and .NET MAUI support appeared first on SD Times.

Apryse, the company that recently acquired LEAD Technologies, has announced the release of LEADTOOLS Version 23, which includes a number of new tools for developers.

LEADTOOLS is a development toolkit with components that allow developers to incorporate things like form processing, document/image viewing, OCR/ICR, and more into their applications. 

Highlights of this release include a new Excel API and Web Editor, .NET MAUI support, and a redesigned React Medical Web Viewer. 

The new Excel API allows developers to load, create, edit, and save Excel sheets, and the Excel Web Editor integrates into existing HTML and JavaScript applications. According to the company, both tools offer features like formatting, formula creation, styling, merging, and saving. 

Next, the company integrated .NET MAUI across the entire LEADTOOLS product line, providing multi-platform development capabilities for developers working across Android, iOS, and Windows. 

Finally, the React Medical Web Viewer is intended for developers building applications for healthcare providers. It offers access to not only the LEADTOOLS feature set, but specific features that are useful in healthcare applications, like medical image processing and 3D volume rendering. 

Other new features include multi-capture video support, speech recognition demos, and updates to the recognition engine. 

“We are proud to continuously bring to market powerful SDKs with plug-and-play features that address a wide variety of functionality and cross-platform needs,” said Khalil El-Dana, head of product development for LEADTOOLS. “These integrated, powerful, and customizable features make it possible for developers to build better, more sophisticated applications quickly without compromising on quality.”

How to ensure open-source longevity https://sdtimes.com/open-source/how-to-ensure-open-source-longevity/ Thu, 02 Mar 2023 18:13:37 +0000

The post How to ensure open-source longevity appeared first on SD Times.

Most code in existence today utilizes open-source components, but it’s important to remember where, and who, that open-source code comes from. 

Open-source software is mostly developed and maintained by volunteers. Unlike a company with resources to hire more developers, the maintainers of most open-source projects have to carry the burden of what comes after them. 

For example, at the end of 2022, the maintainers of the Gorilla toolkit announced they were archiving the project, meaning that they wouldn’t develop new features for it, and wouldn’t make any security fixes. Gorilla contains a number of different tools for Go developers, one of which is mux, a URL router and dispatcher that has been forked nearly 2,000 times on GitHub.

When the current maintainers decided they wanted to move on, they put out a call to the community asking new people to start contributing. In their goodbye letter, they said the call wasn’t successful. 

RELATED ARTICLE: Open-source software sees growth across the board

“As we said in the original call for maintainers: ‘no maintainer is better than an adversarial maintainer!’ — just handing the reins of even a single software package that has north of 13k unique clones a week (mux) is just not something I’d ever be comfortable with. This has tended to play out poorly with other projects,” the maintainers wrote in a farewell letter announcing the archiving of the project. 

Open source is like a garden

Tom Bereknyei, lead engineer at flox, likens open source to a garden. “Most people enjoy the scenery at almost no cost. Malicious people can ruin the place if left unchecked. There are few gardeners and even fewer supervisors. Some gardens are organized, some are chaotic. Some have been around for generations, and some are abandoned after a month. Maintenance can be invisible and thus not appreciated, until the moment that maintenance disappears,” he said. 

This doesn’t necessarily mean that open-source components should be avoided. After all, Bereknyei points out that proprietary software doesn’t necessarily have guarantees either, as a company could go out of business or change things in a way you don’t like. 

But it is important to know how the open-source projects you rely on are planning for the future, and it underscores the importance of having trusted maintainers in the pipeline. That way, when a top maintainer needs to leave the project, there is someone who has built that trust that can step up and do a good job stewarding the project. 

“Being a good reviewer is a lot of work: you have to have a clear vision for a project and make sure contributions are consistent with that, in addition to making sure everything’s tested and documented,” said Jay Conrod, software engineer at EngFlow.

The way to handle contributors and maintainers will vary depending on project size and company support. For example, Conrod previously worked at Google where he was the maintainer of the projects rules_go and Gazelle, and he has also worked full-time maintaining Go. 

At one point, maintaining rules_go and Gazelle was too much in addition to his regular work. His plan for transitioning off the project was to invite a group of regular contributors to become maintainers, providing them with write access to the project. Then, over the course of a year he met with them regularly to continue solidifying the relationship. 

“I think this approach of inviting specific people, building relationships with them, and making sure they have the resources they need is important,” said Conrod. 

Climbing the leadership ladder

The Kubernetes project is a good example of this. According to Eddie Zaneski, software engineer at Chainguard and maintainer of Kubernetes and Sigstore, Kubernetes has a contributor ladder that is designed to help people grow into leadership roles, with the following rankings:

  • Members, who are active contributors to the project and must be sponsored by at least two reviewers
  • Reviewers, who are responsible for reviewing code
  • Approvers, who can review and approve contributions
  • Subproject owners, who are technical authorities on a specific subproject within Kubernetes

Each of these roles has increasingly strict requirements as you work up the ladder. For example, in order to become an approver, you would have had to have been a reviewer for 3 months, been the primary reviewer for at least “10 substantial PRs,” reviewed or merged 30 PRs, and have been nominated by a subproject owner.  
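The approver requirements quoted above amount to a small checklist, which can be sketched as a simple predicate. This is an illustration of the stated thresholds, not an official Kubernetes tool:

```python
# Encodes the approver criteria described above: at least 3 months as a
# reviewer, primary reviewer on 10+ substantial PRs, 30+ PRs reviewed or
# merged, and a nomination from a subproject owner.
def eligible_for_approver(months_as_reviewer: int,
                          substantial_prs_as_primary: int,
                          prs_reviewed_or_merged: int,
                          nominated_by_subproject_owner: bool) -> bool:
    return (months_as_reviewer >= 3
            and substantial_prs_as_primary >= 10
            and prs_reviewed_or_merged >= 30
            and nominated_by_subproject_owner)
```

The point of making the rules this explicit is that promotion decisions become auditable rather than ad hoc, which is part of what makes the ladder work at Kubernetes' scale.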

According to Conrod, another way to ensure that an open-source project is maintainable in the long-term is having contributors from a number of different companies. For example, with Go, though the majority of maintenance is done by Google, a few of the big packages are maintained by external contributors. 

Conrod also emphasized the importance of building a strong community, in which people are able to ask each other questions and just generally help each other out. It can even lead to business partnerships or the creation of related projects.

For example, EngFlow is a business built around the open-source build project Bazel, and there are a number of open-source projects built on top of Bazel too. Because of this, he believes that if Google ever stopped supporting Bazel, the Bazel community could continue on because there’s already so much existing expertise outside of Google. 

Chainguard’s Zaneski believes that companies that benefit from using open-source technologies should also be committing time back to those projects. His company practices what they preach, too, as Chainguard is one of the top contributors to Kubernetes. 

This would involve actively ensuring that a developer’s workload is such that they have the time to contribute to the projects. He believes the bare minimum is enabling developers to spend 20% of their working time on contributions to open source. 

Bereknyei also offered the advice to start a support contract with a maintainer if you rely on their project. “This provides a business relationship and goes a long way to ensuring support.”

7 Steps For Effective Hiring and Collaboration https://sdtimes.com/software-development/7-steps-for-effective-hiring-and-collaboration/ Thu, 23 Feb 2023 16:51:07 +0000

The post 7 Steps For Effective Hiring and Collaboration appeared first on SD Times.

Rapid growth is a great measure of a company’s success, but it comes with potentially serious growing pains that can hurt collaboration and overall effectiveness of your teams. 

Here, rapid scaling means hiring more people to maintain a consistent growth rate, since headcount growth follows revenue. For instance, hiring more developers to build new features that will generate new revenue, then expanding the sales team that will sell these new features, which leads to hiring customer success managers to support these new users.

This comes with a major three-part problem: we need to scale product, people and processes at the same time. You can’t just scale one thing and keep the others unchanged. With processes, it’s critical to remember that what works for 10 people won’t work for 25 and so on. The more your team expands, the more your processes need to be streamlined. Additionally, fast growth depends on your company’s ability to hire well and onboard quickly, which also relies on the efficiency of your processes.

Growing quickly can create challenges for non-engineer collaboration as well. For example, it makes consistent, standardized documentation of processes difficult. It’s even more complicated when considering the fact that much of the testing expertise needs to be transferred personally, slowing down workflows and potentially impacting seasoned team members’ motivation.

Let’s look at seven steps to improve information sharing, smooth onboarding and maintain development speed through QA and non-engineer collaboration.

      1. Hire experienced managers from outside the company.

While it’s important to grow people within the company, it is also immensely helpful to bring in experienced management, such as engineering managers, department heads and team leads. Leaders with outside experience can streamline growth by taking on some aspects of scaling independently and acting as a go-between for less experienced teammates.

Hiring tips: Hire people who can hire people, particularly those with experience in hypergrowth in a previous position. They know what to do and can significantly boost overall performance. 

      2. Make a detailed onboarding plan.

For new hires, information is power, and onboarding is the best way to start them off on the right path. Clear, comprehensive onboarding processes ensure all new hires get the same level of information. This prevents them from having to ask an excessive number of questions and also limits the amount of accidental misinformation they receive, facilitating fast closing of the knowledge gap.

Onboarding tips: If you don’t have an onboarding plan, the easiest way to make one is to collect feedback from recent newcomers and ask them to describe what challenges they had (e.g. couldn’t get access to the product repository or set up the dev environment). That’s your starting point.

     3. Pair new employees with an experienced mentor to answer questions.

At Qase, we follow a one-to-one policy of pairing one newbie with a seasoned veteran. This gives the new hire access to a trove of practical knowledge that cannot be easily gleaned from training texts. This “buddy system” also makes the transfer of employee-specific knowledge much easier and helps integrate the new and old teams into a more collaborative blended unit, which is critical for fast scaling.

Buddy tips: This method is a good way to improve soft skills for existing employees and give them a better understanding of the company if they want to choose a management path in the future.

     4. Form teams that are half newbies, half “oldies.”

In that same vein, it is wise to follow a similar principle among various teams. We aim for blending half new and half “old” staff whenever we form teams. Newbies often have a fresh perspective, which is great for innovation, but they may lack the practical, company-specific insight that veteran employees have. 

Additionally, infusing a team with seasoned employees helps drastically cut down the “forming-storming-norming-performing” cycle and helps newbies get a better feel for company culture.

Team formation tip: If your company has established values and clear goals, that can help to align newcomers.

     5. Automate your routine whenever possible.

Before massive scaling takes place, comb through your company’s processes and identify any potential bottlenecks, such as how frequently you’ll need to grant access rights. Formulate solutions for these problems ahead of time to prevent lags during scaling.

Automation tips: No-code or low-code solutions and chatbots can save a lot of time. At Qase, we do a lot through Slack: deployments, vacations, new microservice generation and other tasks. 

      6. Create a company-wide tech radar.

One easy way to prevent chaos and confusion during periods of rapid growth is to create a single repository outlining the programs and software being used for each project. This keeps miscommunications from slowing development time, e.g., starting a project in Redux and realizing later that someone else was using MobX, so now you have two halves of a project built on two different libraries.

Tech radar tip: Follow this guide to build your own tech radar.
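A tech radar doesn't have to be elaborate; even a machine-readable table of agreed-on choices per project catches mismatches like the Redux/MobX one early. Here is a minimal sketch (the project and library names are hypothetical examples):

```python
# A single source of truth mapping each project to its approved stack.
# In practice this would live in a shared repository that everyone can
# consult (and propose changes to) before starting new work.
TECH_RADAR = {
    "web-dashboard": {"language": "TypeScript", "state_library": "redux"},
    "admin-portal": {"language": "TypeScript", "state_library": "redux"},
}

def approved_state_library(project: str) -> str:
    """Look up the agreed-on state library before writing new code."""
    return TECH_RADAR[project]["state_library"]
```

A developer about to add state management to `admin-portal` checks `approved_state_library("admin-portal")` and learns the team standard is Redux before any MobX code gets written.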

      7. Set clear and transparent goals for your team

When you scale your company, it is critical for everyone to understand the direction of the company. There is nothing more frustrating than a new employee who doesn’t know what to do or why they were hired. Unfortunately, that is a common problem for hypergrowth companies. 

Every team and department should have clear and transparent goals aligned with the company’s mission. 

Goal-setting tips: You can use different techniques and approaches, or combinations of them, such as OKRs, North Star metrics, all-hands meetings, and public product demos, to create a solution that works for your company culture.

The post 7 Steps For Effective Hiring and Collaboration appeared first on SD Times.

How observability prevents developers from flying blind https://sdtimes.com/monitoring/how-observability-prevents-developers-from-flying-blind/ Thu, 02 Feb 2023 17:15:45 +0000 https://sdtimes.com/?p=50217

When changing lanes on the highway, one of the most important things for drivers to remember is to always check their blind spot. Failing to do this could lead to an unforeseen, and ultimately avoidable, accident. 

The same is true for development teams in an organization. Failing to provide developers with insight into their tools and processes could lead to unaddressed bugs and even system failures in the future.

This is why the importance of providing developers with ample observability cannot be overstated. Without it, the job of the developer becomes one big blind spot. 

Why is it important? 

“One of the important things that observability enables is the ability to see how your systems behave,” said Josep Prat, open-source engineering director at data infrastructure company Aiven. “So, developers build features which belong to a production system, and then observability gives them the means to see what is going on within that production system.”

He went on to say that developer observability tools don’t just function to inform the developer when something is wrong; rather, they dig even deeper to help determine the root cause of why that thing has gone wrong. 

David Caruana, UK-based software architect at content services company Hyland, stressed that these deep insights are especially important in the context of DevOps. 

“That feedback is essential for continuous improvement,” Caruana said. “As you go around that loop, feedback from observability feeds into the next development iteration… So, observability really gives teams the tools to increase the quality of service for customers.” 

The in-depth insights it provides are what sets observability apart from monitoring or visibility, which tend to address what is going wrong on a more surface level. 

According to Prat, visibility tools alone are not enough for development teams to address flaws with the speed and efficiency that is required today. 

The deeper insights that observability brings to the table need to work in conjunction with visibility and monitoring tools. 

With this, developers gain the most comprehensive view into their tools and processes. 

“It’s more about connecting data as well,” Prat explained. “So, if you look at monitoring or visibility, it’s a collection of data. We can see these things and we can understand what happened, which is good, but observability gives us the connection between all of these pieces that are collected. Then we can try to make a story and try to find out what was going on in the system when something happened.” 
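In practice, “connecting the pieces” typically means correlating otherwise separate records through a shared identifier such as a request or trace ID. A simplified, standard-library-only illustration with made-up log records:

```python
from collections import defaultdict

# Simplified illustration: correlating separate log records into one
# "story" per request via a shared request_id (made-up data).
logs = [
    {"request_id": "r1", "service": "gateway", "msg": "received request"},
    {"request_id": "r2", "service": "gateway", "msg": "received request"},
    {"request_id": "r1", "service": "billing", "msg": "charge failed"},
    {"request_id": "r1", "service": "gateway", "msg": "returned 500"},
]

def stories(records):
    """Group records by request_id so each request reads as a narrative."""
    grouped = defaultdict(list)
    for record in records:
        grouped[record["request_id"]].append(
            f'{record["service"]}: {record["msg"]}'
        )
    return dict(grouped)
```

Grouping by `request_id` turns three isolated lines for `r1` into a single narrative: the gateway received a request, billing failed the charge, and the gateway returned a 500.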

John Bristowe, community director at deployment automation company Octopus Deploy, expanded on this, explaining that observability empowers development teams to make the best decisions possible going forward.

These decisions affect things such as increasing reliability and fixing bugs, leading to major performance enhancements. 

“And developers know this… There are a lot of moving parts and pieces and it is kind of akin to ‘The Wizard of Oz’ … ‘ignore the man behind the curtain,’” Bristowe said. “When you pull back that curtain, you’re seeing the Wizard of Oz and that is really what observability gives you.” 

According to Vishnu Vasudevan, head of product at the continuous orchestration company Opsera, developer interest in observability is still somewhat new. 

He explained that in the last five years, as DevOps has become the standard for organizations, developer interest in observability has grown exponentially. 

“Developers used to think that they can push products into the market without actually learning about anything around security or quality because they were focusing only on development,” Vasudevan said. “But without observability… the code might go well at first but sometime down the line it can break and it is going to be very difficult for development teams to fix the issue.”

The move to cloud native 

In recent years, the transition to cloud native has shaken up the software development industry. Caruana said that he believes the move into the cloud has been a major driver for observability.

He explained that with the complexity that cloud native introduces, gaining deep insights into the developer processes and tooling is more essential than ever before. 

“If you have development teams that are looking to move towards cloud-native architectures, I think that observability needs to be a core part of that conversation,” Caruana said. “It’s all about getting that data, and if you want to make decisions… having the data to drive those decisions is really valuable.” 

According to Prat, this shift to cloud native has also led to observability tools becoming more dynamic.

“When we had our own data centers, we knew we had machines A, B, C, and D; we knew that we needed to connect to certain boxes; and we knew exactly how many machines were running at each point in time,” he said. “But, when we go to the cloud, suddenly systems are completely dynamic and the number of servers that we are running depends on the load that the system is having.”

Prat explained that because of this, it is no longer enough to just know which boxes to connect; teams now have to have a full understanding of which machines are entering into and leaving the system so that connections can be made and the development team can determine what is going on.

Bristowe also explained that while the shift to cloud native can be a positive thing for the observability space, it has also made it more complicated.

“Cloud native is just a more complex scenario to support,” he said. “You have disparate systems and different technologies and different ways in which you’ll do things like logging, tracing, metrics, and things of that sort.”

Because of this, Bristowe emphasized the importance of integrating proper tooling and processes in order to work around any added complexities. 

Prat believes that the transition to cloud native not only brings new complexities, but a new level of dynamism to the observability space. 

“Before it was all static and now it is all dynamic because the cloud is dynamic. Machines come, machines go, services are up, services are down and it is just a completely different story,” he said. 

Opsera’s Vasudevan also stressed that moving into the cloud has put more of an emphasis on the security benefits that observability can offer. 

He explained that while moving into the cloud has helped the velocity of deployments, it has added a plethora of possible security vulnerabilities. 

“And this is where that shift happened and developers really started to understand that they do need to have this observability in place to understand what the bottlenecks and the inefficiencies are that the development team will face,” he said.

The risks of insufficient observability  

When companies fail to provide their development teams with a high level of observability, Prat said it can feel like regressing to the dark ages.

He explained that without observability, the best developers can do is venture a guess as to why things are behaving the way that they are. 

“We would need to play a lot of guessing games and do a lot more trial and error to try and reproduce mistakes… this leads to countless hours and trying to understand what the root cause was,” said Prat.

This, of course, reduces an organization’s ability to remain competitive, something that companies cannot afford to risk. 

He emphasized that while investing in observability is not some kind of magic cure-all for bugs and system failures, it can certainly help in remediation as well as prevention. 

Bristowe went on to explain that observability is really all about the DevOps aspect of investing in people, processes, and tools alike. 

He said that while there are some really helpful tools available in the observability space, making sure the developers are onboard to learn with these tools and integrate them properly into their processes is really the key element to successful observability. 

Observability and productivity 

Prat also emphasized that investing in observability heavily correlates to more productivity in an organization. This is because it enables developers to feel more secure in the products they are building.

He said that this sense of security also helps when applying user feedback and implementing new features per customer requests, leading to heightened productivity as well as strengthening the organization’s relationship with its customer base. 

With proper observability tools, a company will be able to deliver better features more quickly as well as constantly work to improve the resiliency of its systems. Ultimately, this provides end users with a better overall experience and faster performance.

“The productivity will improve because we can develop features faster, because we can know better when things break, and we can fix the things that break much faster because we know exactly why things are being broken,” Prat said. 

Vasudevan explained that when code is pushed to production without developers truly understanding it, technical debt and bottlenecks are pretty much a guarantee, resulting in a poorer customer experience. 

“If you don’t have the observability, you will not be able to identify the bottlenecks, you will not be able to identify the inefficiencies, and the code quality is going to be very poor when it goes into production,” he said.

Bristowe also explained that there are times when applications are deployed into production and yield unplanned results. Without observability, the development team may not even notice this until damage has already been caused. 

“The time to fix bugs, time to resolution, and things like that are critical success factors and you want to fix those problems before they are discovered in production,” Bristowe said. “Let’s face it, there is no software that’s perfect, but having observability will help you quickly discover bottlenecks, inefficiencies, bugs, or whatever it may be, and being able to gain insight into that quickly is going to help with productivity for sure.” 

Aiven’s Prat noted that observability also enables developers to see where and when they are spending most of their time so that they can tweak certain processes to make them more efficient.

When working on a project, developers strive for immediate results. Observability helps them when it comes to understanding why certain processes are not operating as quickly as desired. 

“So, if we are spending more time on a certain request, we can try and find why,” Prat explained. “It turns out there was a query on the database or that it was a system that was going rogue or a machine that needed to be decommissioned and wasn’t, and that is what observability can help us with.”
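The starting point for that kind of investigation is often nothing more than summary statistics over request timings. A toy, standard-library-only example with made-up duration samples:

```python
import statistics

# Toy example: spotting which endpoint requests spend the most time on,
# using made-up duration samples (milliseconds).
durations_ms = {
    "/checkout": [120, 135, 2400, 130, 128],   # one outlier, e.g. a slow query
    "/search":   [40, 42, 39, 41, 45],
}

def slowest_endpoint(samples):
    """Rank endpoints by mean latency; the outlier drags /checkout up."""
    return max(samples, key=lambda ep: statistics.mean(samples[ep]))
```

Here the single 2,400 ms sample pushes `/checkout` to the top of the list, which is exactly the kind of signal that prompts a closer look at a rogue query or an undecommissioned machine.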

Automation and observability 

Bristowe emphasized the impact that AI and automation can have on the observability space. 

He explained that tools such as ChatGPT have really brought strong AI models into the mainstream and showcased the power that this technology holds. 

He believes this same power can be brought to observability tools. 

“Even if you are gathering as much information as possible, and you are reporting on it, and doing all these things, sometimes even those observations still aren’t evident or apparent,” he said. “But an AI model that is trained on your dataset, can look and see that there is something going on that you may not realize.”

Caruana added that AI can help developers better understand what the natural health of a system is, as well as quickly alert teams when there is an anomaly. 

He predicts that in the future we will start to see automation play a much bigger role in observability tools, such as filtering through alerts to select the key, root cause alerts that the developer should focus on.

“I think going forward, AI will actually be able to assist in the resolution of those issues as well,” Caruana said. “Even today, it is possible to fix things and to resolve issues automatically, but with AI, I think resolution will become much smarter and much more efficient.” 
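The core idea behind such anomaly detection can be shown without any machine learning at all: establish a baseline and flag large deviations from it. A deliberately crude, standard-library-only sketch (production systems use far more robust models):

```python
import statistics

def is_anomaly(baseline, value, threshold=3.0):
    """Flag a value more than `threshold` standard deviations from the
    baseline mean (a crude stand-in for a trained AI model)."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Baseline: normal requests-per-second samples (made-up data).
normal_rps = [100, 102, 98, 101, 99, 100, 103, 97]
```

A trained model replaces the fixed threshold with learned patterns, but the workflow is the same: compare live telemetry against what “healthy” looked like, and alert on the gap.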

Both Bristowe and Caruana agreed that AI observability tools will yield wholly positive results for both development teams and the organization in general.  

Bristowe explained that this is because the more tooling brought in and the more insights offered to developers, the better off organizations will be. 

However, Vasudevan had a slightly different take. 

He said that bringing automation into the observability space may end up costing organizations more than they would gain.

Because of this risk, he stressed that organizations would need to be sure to implement the right automation tools so that teams can gain the actionable intelligence and the predictive insights that they actually need.

“I would say that having a secure software supply chain is the first thing and then having observability as that second layer and then the AI and automation can come in,” Vasudevan said. “If you try to build AI into your systems and you do not have those first two things, it may not add any value to the customer.”

How to approach observability 

When it comes to making sure developers are provided with the highest level of observability possible, Prat has one piece of advice: utilize open-source tooling.

He explained that with tools like these, developers are able to connect several different solutions rather than feeling boxed into one single tool. This ensures that they are able to have the most well-rounded and comprehensive approach to observability.

“You can use several tools and they can probably play well together, and if they are not then you can always try and build a connection between them to try and help to close the gap between two tools so that they can talk to each other and share data and you can get more eyes looking at your problem,” Prat said. 

Caruana also explained the importance of implementing observability with room for evolution.

He said that starting small and building observability out based on feedback from developers is the best way to be sure teams are being provided with the deepest insights possible. 

“As you do with all agile processes, iteration is really key, so start small, implement something, get that feedback, and make adjustments as you go along,” Caruana said. “I think a big bang approach is a high risk approach, so I choose to evolve, and iterate, and see where it leads.”

The post How observability prevents developers from flying blind appeared first on SD Times.

Snyk closes $196.5 million funding round https://sdtimes.com/security/snyk-closes-196-5-million-funding-round/ Tue, 13 Dec 2022 19:53:33 +0000 https://sdtimes.com/?p=49819

Developer security company Snyk today announced a $196.5 million Series G investment. The round was led by Qatar Investment Authority with participation from new investors Evolution Equity Partners, G Squared, and Irving Investors as well as existing investors boldstart ventures, Sands Capital, and Tiger Global. 

According to the company, this comes after a year of rapid customer adoption for Snyk, with over 2,300 users who have fixed more than 5.2 million vulnerabilities over the last year. 

Snyk has also released successful cross-portfolio deployments, with over 70% of users currently leveraging Snyk’s Developer Security platform. Snyk believes that this reveals an increase in the desire to shift from legacy approaches and the hardships of managing several security vendors.

“In 2022, I’m proud that Snyk achieved a 100% year-over-year increase in revenue as well as net revenue retention of over 130%,” said Peter McKay, CEO of Snyk. “In this challenging macroeconomic environment, it is more critical than ever for global enterprises to increase their developer productivity and be able to continue their pace of innovation securely. In 2023, we look forward to leveraging this latest investment to continue enhancing our platform and help more global enterprises reap the benefits of DevSecOps.”

In 2022, the company also held its SnykLaunch event, revealing Snyk Cloud GA along with new supply chain security capabilities and better reporting features. Lastly, Snyk helped users remediate more than 11.5 million security risks over the last year. 

According to the company, this latest influx of funding will serve to drive more product innovation, enabling the team at Snyk to grow both organically and inorganically through strategic acquisition.

To learn more, visit the website.   

The post Snyk closes $196.5 million funding round appeared first on SD Times.

Android announces 2022 safety initiatives for Google Play https://sdtimes.com/android/android-announces-2022-safety-initiatives-for-google-play/ Fri, 04 Mar 2022 16:10:48 +0000 https://sdtimes.com/?p=46780

The Android development team announced initiatives being put in place for Google Play to ensure user safety. According to the team, over the past year safety has been top of mind, and the team has partnered with developers to help protect their apps and prepare them to share their data safety practices with users, and has collaborated on building more private advertising technology.

Additionally, the team has a continuing investment in ML detection as well as enhanced app review processes geared at preventing apps with malicious content before anyone can install them. 

In 2022, the Android development team intends to continue along this path to ensure safety in Google Play. With this, in the upcoming Data safety section in an app’s Play Store listing, developers can share how their app collects, shares, and protects users’ data. This works to give everyone a clear and concise understanding of an app’s data safety practices so they can better determine if an app is right for them. 

The Data safety section is set to begin displaying in the Google Play store in late April and completed Data Safety Forms will be required for all app updates starting on July 20, 2022. 

The Android team also has a plan in place to protect developers’ apps from fraudulent activity with the new Play Integrity API. This API is now available to everyone and offers developers the ability to protect their apps and ensure their users have the experience that was originally intended.  

Another initiative Android has set forth is to help developers better navigate SDKs. The team will do this by sharing information such as an SDK’s adoption levels, retention rates, and the runtime permissions it uses in order to help developers select the correct SDK for their business and users.

In addition, in 2022 Android is placing a heightened focus on responsible data collection and use. The development team has compiled the best tips on how to do this in the best practices guide. With this, the Android team has been communicating with developers about the best methods to mitigate risks from apps that leverage APIs in older Android OS versions. 

For more information on Android’s 2022 initiatives for Google Play, see here.

The post Android announces 2022 safety initiatives for Google Play appeared first on SD Times.

Craft.io: The unified platform for product management teams https://sdtimes.com/softwaredev/craft-io-the-unified-platform-for-product-management-teams/ Mon, 15 Nov 2021 20:07:46 +0000 https://sdtimes.com/?p=45839

As organizations continue with their digital transformation efforts, cutting-edge software is no longer a luxury. Yet product management has long lacked top-tier software of its own, oftentimes leaving product management teams feeling ill equipped to deliver the best possible products. In response to this need, Craft.io was created to provide product managers, product owners, and senior product executives with the tools needed to build strategy and roadmaps, gain customer feedback, and prioritize certain features and tasks while building their products. Elad Simon, co-founder and CEO of Craft.io, explained this need for software in more detail and the ways in which Craft.io fills this gap in the technology industry. 

“Product management was probably one of the only places left within business functions that didn’t have its own system and it was a very strange situation,” Simon said. “Product managers had basically nothing. They used to be stuck with Excel spreadsheets, PowerPoints, and Word documents… so that was kind of the basic need that we are trying to solve.” According to Simon, Craft.io was created in order to provide product management teams with an all-inclusive platform to help them manage their processes in order to output the best possible products to customers. 

Simon went on to explain that the Craft.io platform spans through the entirety of the product development life cycle, offering tools to assist product managers from the inception of a product up until the finalization and distribution of that product. Simon explained, “We take care of the entire product life cycle… starting with stuff like story mapping or an ideation board that we have as a part of the platform.” Simon explained that these early stages in the process are the aspects of the Craft.io platform that function as a jumping-off point for product management teams and leads them to the next step in the process. “After that, we have feedback collection from customers where they basically bring in ideas, thoughts, or requests and from that point you would start doing product definition or basically just defining the building blocks of what is actually being produced,” Simon said.

For defining the product and beginning the build, the platform brings product management teams the Spec Editor. According to Simon, this tool works as an instruction manual for developers to ensure that the specifications of what is being built are correct. Continuing through the process, the next step would be the Prioritization Engine which, according to Simon, functions to help product managers not only do prioritization, but also apply best practices from the market. With this, users gain access to Guru, a layer that Craft.io adds across the entirety of a project in order to provide built-in templates, views, and processes to strategize, prioritize, plan, and gather feedback more effectively. 
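One widely used prioritization best practice that such an engine might encode is RICE scoring, which ranks work by reach times impact times confidence, divided by effort. A sketch of the general idea with hypothetical feature data (not Craft.io’s actual implementation):

```python
# RICE scoring, a common prioritization best practice
# (a sketch of the general idea, not Craft.io's implementation).

def rice_score(reach, impact, confidence, effort):
    """reach: users/quarter, impact: 0.25-3 scale, confidence: 0-1,
    effort: person-months. Higher scores rank first."""
    return (reach * impact * confidence) / effort

# Hypothetical backlog items with made-up estimates.
features = [
    ("SSO login", {"reach": 800, "impact": 2, "confidence": 0.8, "effort": 4}),
    ("Dark mode", {"reach": 500, "impact": 1, "confidence": 0.9, "effort": 1}),
]

ranked = sorted(features, key=lambda f: rice_score(**f[1]), reverse=True)
```

Note how the formula rewards cheap, well-understood wins: the smaller feature outranks the bigger one here because its effort estimate is a quarter of the size.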

Following the Prioritization Engine, the user would then move to Capacity Planning. “This is a unique module,” Simon said. “Capacity Planning is basically the wish list versus reality.” According to Simon, capacity planning helps product managers measure their expectations and priorities against their ability to actually achieve these goals. From there the user moves into Roadmapping which, according to Simon, is the most exciting part of the product management process. “This is where you actually communicate to all of your stakeholders, be it the senior management, the CIO, the business partners, or the engineers, what it is you are planning to build and when,” Simon said. 

Finally, product managers using Craft.io also have access to a feedback loop and then the process restarts in a cyclical manner. “Product development is an ongoing, never-ending process,” he explained. “For example, we’re now using Microsoft Teams, but it’s not like they’ve ended the development of Teams, they’re continuously releasing new features and then getting feedback and updating the platform.”

Providing product managers with a unified space to work, independent of other parts of the organization, is another reason why Craft.io was developed. Simon believes that having a separate platform specific to product managers and their teams will allow them to build a more well-rounded product before the developers gain access to it. This serves to allow product managers the opportunity to think, prioritize, and play around with the idea before anybody else can voice their opinions on the product. Giving product managers a quiet place to round out their ideas will end up saving an organization time in the long run and make for a better final product. Craft.io addresses this problem of needing an isolated environment to work while also offering easy collaboration when the user is ready for it.

According to Simon, there are a few aspects of Craft.io that make it stand out among its competition. One of these is its aforementioned Guru layer. Simon believes what makes the Guru layer special is that “It is embedded throughout the product and allows users to apply and leverage product management best practices in a single click.” He went on to explain the process of using Guru to get the best results. “If a product manager wants to use a specific prioritization method, she can go to Guru Views, select her desired method, and the system will build a dedicated board for that method.”

In addition to this, Simon believes that Craft.io’s user experience also makes the platform stand out among the rest. He said, “One of the topics dearest to our heart is making sure our product balances complexity with ease-of-use. We are an all-in-one system and as such have many tools to help product teams during their various stages of work.” Simon explained that all-in-one tools can oftentimes feel overwhelming and complicated for users and Craft.io feels that that does not have to be the case. “We invest a lot in making sure almost every action is available from any view, that the set up stage of views is as simple as it can possibly be, and that actions by users take as few clicks as possible,” he said. 

According to Simon, Craft.io has become even more essential throughout the shift to remote work that we are currently seeing. “We’ve seen a lot of growth with the remote movement because a lot of what product managers used to do in most organizations was based on ‘water cooler conversations’ and this is why the need for a collaborative environment is probably now stronger than ever,” Simon said. Craft.io offers product management teams the ability to have this kind of collaboration again. 

The post Craft.io: The unified platform for product management teams appeared first on SD Times.

SD Times Open-Source Project of the Week: Google Fuchsia https://sdtimes.com/open-source/sd-times-open-source-project-of-the-week-google-fuchsia/ Fri, 11 Dec 2020 14:23:16 +0000 https://sdtimes.com/?p=42418

Fuchsia is an open-source capability-based operating system that was initially released in 2016, and is currently under development by Google.

Google announced this week that it would be expanding on the project and making it easier for the public to contribute. The company released a new public mailing list for project discussions, added a governance model to help users understand how strategic decisions are made, and opened up the issue tracker for public contributors to visualize ongoing work. There is also a technical roadmap that will highlight project direction and priorities. 

Currently, the key highlights in the roadmap include a driver framework for updating the kernel independently of the drivers, improving file systems for performance, and expanding the input pipeline to increase accessibility.

“As an open source effort, we welcome high-quality, well-tested contributions from all. There is now a process to become a member to submit patches, or a committer with full write access,” Wayne Piekarski, developer advocate for Fuchsia, stated in the post.  

According to Piekarski, the project is not yet ready for product development or as a development target. However, developers can currently clone, compile, and contribute to it. For those who want to take part in code reviews, Fuchsia has the contribution guidelines and community resources available here.

The post SD Times Open-Source Project of the Week: Google Fuchsia appeared first on SD Times.
