In-Depth Archives - SD Times https://sdtimes.com/category/in-depth/ Software Development News Fri, 27 Sep 2024 13:54:55 +0000 en-US hourly 1 https://wordpress.org/?v=6.5.5 https://sdtimes.com/wp-content/uploads/2019/06/bnGl7Am3_400x400-50x50.jpeg In-Depth Archives - SD Times https://sdtimes.com/category/in-depth/ 32 32 Harnessing AI and knowledge graphs for enterprise decision-making https://sdtimes.com/ai/harnessing-ai-and-knowledge-graphs-for-enterprise-decision-making/ Fri, 27 Sep 2024 13:54:55 +0000 https://sdtimes.com/?p=55729 Today’s business landscape is arguably more competitive and complex than ever before: Customer expectations are at an all-time high and businesses are tasked with meeting (or exceeding) those needs, while simultaneously creating new products and experiences that will provide consumers with even more value. At the same time, many organizations are strapped for resources, contending … continue reading

The post Harnessing AI and knowledge graphs for enterprise decision-making appeared first on SD Times.

]]>
Today’s business landscape is arguably more competitive and complex than ever before: Customer expectations are at an all-time high and businesses are tasked with meeting (or exceeding) those needs, while simultaneously creating new products and experiences that will provide consumers with even more value. At the same time, many organizations are strapped for resources, contending with budgetary constraints, and dealing with ever-present business challenges like supply chain latency. 

Businesses and their success are defined by the sum of the decisions they make every day. These decisions (bad or good) have a cumulative effect and are often more related than they seem to be or are treated. To keep up in this demanding and constantly evolving environment, businesses need the ability to make decisions quickly, and many have turned to AI-powered solutions to do so. This agility is critical for maintaining operational efficiency, allocating resources, managing risk, and supporting ongoing innovation. Simultaneously, the increased adoption of AI has exaggerated the challenges of human decision-making.

Problems arise when organizations make decisions (leveraging AI or otherwise) without a solid understanding of the context and how they will impact other aspects of the business. While speed is an important factor when it comes to decision-making, having context is paramount, albeit easier said than done. This begs the question: How can businesses make both fast and informed decisions?

It all starts with data. Businesses are acutely aware of the key role data plays in their success, yet many still struggle to translate it into business value through effective decision-making. This is largely due to the fact that good decision-making requires context, and unfortunately, data does not carry with it understanding and full context. Therefore, making decisions based purely on shared data (sans context) is imprecise and inaccurate.  

Below, we’ll explore what’s inhibiting organizations from realizing value in this area, and how they can get on the path to making better, faster business decisions. 

Getting the full picture

Former Siemens CEO Heinrich von Pierer famously said, “If Siemens only knew what Siemens knows, then our numbers would be better,” underscoring the importance of an organization’s ability to harness its collective knowledge and know-how. Knowledge is power, and making good decisions hinges on having a comprehensive understanding of every part of the business, including how different facets work in unison and impact one another. But with so much data available from so many different systems, applications, people and processes, gaining this understanding is a tall order.

This lack of shared knowledge often leads to a host of undesirable situations: Organizations make decisions too slowly, resulting in missed opportunities; decisions are made in a silo without considering the trickle-down effects, leading to poor business outcomes; or decisions are made in an imprecise manner that is not repeatable.

In some instances, artificial intelligence (AI) can further compound these challenges when companies indiscriminately apply the technology to different use cases and expect it to automatically solve their business problems. This is likely to happen when AI-powered chatbots and agents are built in isolation without the context and visibility necessary to make sound decisions. 

Enabling fast and informed business decisions in the enterprise

Whether a company’s goal is to increase customer satisfaction, boost revenue, or reduce costs, there is no single driver that will enable those outcomes. Instead, it’s the cumulative effect of good decision-making that will yield positive business outcomes.

It all starts with leveraging an approachable, scalable platform that allows the company to capture its collective knowledge so that both humans and AI systems alike can reason over it and make better decisions. Knowledge graphs are increasingly becoming a foundational tool for organizations to uncover the context within their data.

What does this look like in action? Imagine a retailer that wants to know how many T-shirts it should order heading into summer. A multitude of highly complex factors must be considered to make the best decision: cost, timing, past demand, forecasted demand, supply chain contingencies, how marketing and advertising could impact demand, physical space limitations for brick-and-mortar stores, and more. We can reason over all of these facets and the relationships between using the shared context a knowledge graph provides.

This shared context allows humans and AI to collaborate to solve complex decisions. Knowledge graphs can rapidly analyze all of these factors, essentially turning data from disparate sources into concepts and logic related to the business as a whole. And since the data doesn’t need to move between different systems in order for the knowledge graph to capture this information, businesses can make decisions significantly faster. 

In today’s highly competitive landscape, organizations can’t afford to make ill-informed business decisions—and speed is the name of the game. Knowledge graphs are the critical missing ingredient for unlocking the power of generative AI to make better, more informed business  decisions.

The post Harnessing AI and knowledge graphs for enterprise decision-making appeared first on SD Times.

]]>
Building a platform engineering team that’s set up for success https://sdtimes.com/softwaredev/building-a-platform-engineering-team-thats-set-up-for-success/ Mon, 02 Sep 2024 13:00:24 +0000 https://sdtimes.com/?p=55587 Platform engineering can make development teams more productive by enabling self-service for developers, so that they’re not stuck waiting on IT tickets for days or weeks on end just to set up some infrastructure needed for a project. But in order to realize the benefits, it’s important to set the platform engineering team up for … continue reading

The post Building a platform engineering team that’s set up for success appeared first on SD Times.

]]>
Platform engineering can make development teams more productive by enabling self-service for developers, so that they’re not stuck waiting on IT tickets for days or weeks on end just to set up some infrastructure needed for a project. But in order to realize the benefits, it’s important to set the platform engineering team up for success by ensuring that they have the necessary skills, structure, and working processes in place.

“Having a solid team makes the experience a lot easier for the people receiving and the people building the platform,” said Ryan Cook, senior principal software engineer at Red Hat.

Luca Galante, VP of product and growth at IDP company Humanitec and organizer of PlatformCon, believes that one of those important skills is the ability to have a product mindset, approaching things from a continuous development perspective based on a tight feedback loop with the teams they are building the platforms for, rather than building and shipping software and then being done with it.  

RELATED: IDPs may be how we solve the development complexity problem

“It’s really about seeing developers in a different light, which is the internal customers, and we’re serving them and solving their pain points,” Galante said. 

Cook agrees with that, adding “understanding what the teams need, what the people building the platforms need, is the best way to be successful.”

Communication is also key, because platforms interact with everything — and multiple teams — in an engineering organization. This includes developers, infrastructure and operations (I&O) teams, security teams, architects, executives and more.

“In order for everybody to be on board, there needs to be a driving internal marketer on the platform team that effectively aligns the development of the platform and the benefits that it drives to the vested interests of the different stakeholder groups,” Galante explained. 

For instance, a development team that is experiencing long waits from the infrastructure team could be sold on a platform by being told it’s going to reduce wait times and improve developer experience. It could be sold to the security team as something that is going to enforce governance and policy by default. And it could be sold to the infrastructure team as something that is going to reduce the need to do manual configurations every time a developer needs something. 

Thus, there needs to be someone on the platform engineering team who is able to articulate and communicate these benefits to the various stakeholders, so that everyone understands this is a worthwhile endeavor. 

A third important skill is deep technical capability and understanding, said Zohar Einy, CEO of Port, another IDP provider. He explained that it’s important for a platform engineer to have an understanding of how the components of the company’s technical stack are set up, what development tools are being used, and so on.

“They need to have a very good understanding on how things are wired and how the platform is built behind the scenes,” he said. 

Red Hat’s Cook believes it’s a good idea to have different people with different areas of expertise, like someone that’s really good at telemetry or security, or development or virtualization – or whatever it may be. 

“We all have this unique expertise, but the same goal, and I feel that expertise helps because it gives the ones that are experts in their space the confidence to continue to be experts there, while it gives the other folks breathing room that they don’t have to become experts outside of their individual realms. So everybody kind of leans on each other, which creates a good, friendly relationship internally with the team,” he said. 

Specific roles that make up a platform team

According to Galante, there are four main roles that all platform teams should have: head of platform, platform product manager, platform engineers, and infrastructure platform engineers. 

The head of platform is ultimately the person that is going to motivate and sell the platform to higher-ups in legal and compliance, finance and the executive suite. They are responsible for explaining the value that platforms can have, and to “make sure that they see the platform as a value driver, as opposed to a cost center.” 

They will also continuously update those stakeholders on the progress throughout the platform’s life cycle.

The platform product manager is the person responsible for making sure that the platform is actually made. They’re also there to facilitate compromise for the different stakeholders, like making sure that the security team is happy because security is enforced by the platform or that the architects are happy because the platform fits within the broader enterprise architecture.

They are also responsible for making sure that the end users — the developers — are happy with the platform and actually want to use it. According to Galante, there is a fine line between abstracting away the underlying complexity of the infrastructure while also keeping enough context for developers to do their jobs properly. 

“You want to provide developers with paved roads and really intuitive ways of interacting with your increasingly complex tool chain … But at the end of the day, they’re still engineers. They want to be able to still have some level of control and context around the work that they’re doing. And so that’s what the platform product manager is really focused on,” said Galante.

The final two roles are the platform engineers and infrastructure platform engineers. The reason for the differentiation is that platform engineers are the voice of the developers they’re building for, while infrastructure platform engineers are the voice of the I&O team. 

According to Galante, there can often be so much focus put on improving developer experience, but it’s important to make sure that the needs of the I&O team are also being considered. 

“You can think of the platform essentially as a vending machine that you’re maintaining, growing, and providing as a service to the rest of the organization,” he said. “And so that is where it’s very important to have this kind of role of the infrastructure platform engineer that oftentimes can come from the infrastructure scene and build that bridge to make sure that both perspectives are represented on the platform team.” 

The job types that transition well into platform teams

Einy believes many existing roles can transition well into the platform engineering team, such as DevOps engineers, technical product managers, and SREs.  

According to Einy, DevOps is a spectrum, and there may be DevOps engineers who are more infrastructure oriented and ones that are more experience oriented. He believes that the ones who were responsible for the user-facing processes can translate well into a platform engineer. 

“In the past, the platform engineering responsibility was part of the DevOps responsibility, but now it’s like it went to an entire role of its own,” Port said. 

Cook added that DevOps has likely felt the pain of what it takes to release and maintain software, so they can bring what they’ve learned to the table. 

Einy believes that technical product managers would also do well on a platform engineering team, because they are used to needing to have deep technical knowledge of their products.

And finally, SREs translate well into platform engineers because they’re responsible for quality, making sure that the MTTR is low, and improving the overall resiliency of an organization.

“One of the main values for platform engineering is to create the standards and to maintain the resiliency and efficiency of things,” Einy said.

Now that a team is in place, what’s next? 

Once the platform engineering team is established, it’s important to have strong collaboration within the team, and also with the stakeholders they are building for. In terms of building a good platform engineering culture, Cook recommends establishing an environment where the engineers are respectful of each other and of each other’s time. 

He also added that by bringing in different experts to the team, they will by nature start to depend on each other and get to know each other. “Having those smaller teams with the expertise kind of helps on the friction side, because they’re in it together,” he said.

When it comes to collaborating with the different stakeholders that the platforms are being built for, that platform-as-a-product mindset comes back into play. This collaboration should be a continuous loop, not a one-and-done approach.

According to Einy, platform engineering teams should be conducting surveys, which means they need to know how to run a good survey, which entails knowing what questions to ask, setting goals for the responses, and then finally being able to digest and understand the results. 

He added that it’s also good to be doing data analysis on usage of the platform, who is using it, what parts are getting used, how often it’s used, etc. 

“Again, talking with the people, not in a structured way, but creating some kind of closed group of people that can represent the wider audience and collecting feedback from the field,” Einy said. “I think that these are the things that they need to do continuously to know that they are solving the right problem for the organization.”

Cook added that when he started working at Red Hat, they hosted a “complaint fest” where the development teams came to them and let them know what was wrong in an open, constructive way. He said that developers were a bit hesitant to speak up at first, but once one person started the discussion, that broke the ice for the rest of the team to be open with what’s wrong. 

“If you can let everybody know that you do really care about the concerns and you are trying to fix them, they’re going to be a lot more willing to use your product than if you just do it without them,” Cook explained.

The post Building a platform engineering team that’s set up for success appeared first on SD Times.

]]>
AI regulations are coming: Here’s how to build and implement the best strategy https://sdtimes.com/ai/ai-regulations-are-coming-heres-how-to-build-and-implement-the-best-strategy/ Thu, 15 Aug 2024 16:40:06 +0000 https://sdtimes.com/?p=55454 In April 2024, the National Institute of Standards and Technology released a draft publication aimed to provide guidance around secure software development practices for generative AI systems. In light of these requirements, software development teams should begin implementing a robust testing strategy to ensure they adhere to these new guidelines. Testing is a cornerstone of … continue reading

The post AI regulations are coming: Here’s how to build and implement the best strategy appeared first on SD Times.

]]>
In April 2024, the National Institute of Standards and Technology released a draft publication aimed to provide guidance around secure software development practices for generative AI systems. In light of these requirements, software development teams should begin implementing a robust testing strategy to ensure they adhere to these new guidelines.

Testing is a cornerstone of AI-driven development as it validates the integrity, reliability, and soundness of AI-based tools. It also safeguards against security risks and ensures high-quality and optimal performance.

Testing is particularly important within AI because the system under test is far less transparent than a coded or constructed algorithm. AI has new failure modes and failure types, such as tone of voice, implicit biases, inaccurate or misleading responses, regulatory failures, and more. Even after completing development, dev teams may not be able to confidently assess the reliability of the system under different conditions. Because of this uncertainty, quality assurance (QA) professionals must step up and become true quality advocates. This designation means not simply adhering to a strict set of requirements, but exploring to determine edge cases, participating in red teaming to try to force the app to provide improper responses, and exposing undetected biases and failure modes in the system. Thorough and inquisitive testing is the caretaker of well-implemented AI initiatives.

Some AI providers, such as Microsoft, require test reports to provide legal protections against copyright infringement. The regulation of safe and confident AI uses these reports as core assets, and they make frequent appearances in both the October 2023 Executive Order by U.S. President Joe Biden on safe and trustworthy AI  and the EU AI Act. Thorough testing of AI systems is no longer only a recommendation to ensure a smooth and consistent user experience, it is a responsibility.

What Makes a Good Testing Strategy?

There are several key elements that should be included in any testing strategy: 

Risk assessment – Software development teams must first assess any potential risks associated with their AI system. This process includes considering how users interact with a system’s functionality, and the severity and likelihood of failures. AI introduces a new set of risks that need to be addressed. These risks include legal risks (agents making erroneous recommendations on behalf of the company), complex-quality risks (dealing with nondeterministic systems, implicit biases, pseudorandom results, etc.), performance risks (AI is computationally intense and cloud AI endpoints have limitations), operational and cost risks (measuring the cost of running your AI system), novel security risks (prompt hijacking, context extraction, prompt injection, adversarial data attacks) and reputational risks.

An understanding of limitations – AI is only as good as the information it is given. Software development teams need to be aware of the boundaries of its learning capacity and novel failure modes unique to their AI, such as lack of logical reasoning, hallucinations, and information synthesis issues.

Education and training – As AI usage grows, ensuring teams are educated on its intricacies – including training methods, data science basics, generative AI, and classical AI – is essential for identifying potential issues, understanding the system’s behavior, and to gain the most value from using AI.

Red team testing – Red team AI testing (red teaming) provides a structured effort that identifies vulnerabilities and flaws in an AI system. This style of testing often involves simulating real-world attacks and exercising techniques that persistent threat actors might use to uncover specific vulnerabilities and identify priorities for risk mitigation. This deliberate probing of an AI model is critical to testing the limits of its capabilities and ensuring an AI system is safe, secure, and ready to anticipate real-world scenarios. Red teaming reports are also becoming a mandatory standard of customers, similar to SOC 2 for AI.

Continuous reviews – AI systems evolve and so should testing strategies. Organizations must regularly review and update their testing approaches to adapt to new developments and requirements in AI technology as well as emerging threats.

Documentation and compliance – Software development teams must ensure that all testing procedures and results are well documented for compliance and auditing purposes, such as aligning with the new Executive Order requirements. 

Transparency and communication – It is important to be transparent about AI’s capabilities, its reliability, and its limitations with stakeholders and users. 

While these considerations are key in developing robust AI testing strategies that align with evolving regulatory standards, it’s important to remember that as AI technology evolves, our approaches to testing and QA must evolve as well.

Improved Testing, Improved AI

AI will only become bigger, better, and more widely adopted across software development in the coming years. As a result, more rigorous testing will be needed to address the changing risks and challenges that will come along with more advanced systems and data sets. Testing will continue to serve as a critical safeguard to ensure that AI tools are reliable, accurate and responsible for public use. 

Software development teams must develop robust testing strategies that not only meet regulatory standards, but also ensure AI technologies are responsible, trustworthy, and accessible.

With AI’s increased use across industries and technologies, and its role at the forefront of relevant federal standards and guidelines, in the U.S. and globally, this is the opportune time to develop transformative software solutions. The developer community should see itself as a central player in this effort, by developing efficient testing strategies and providing safe and secure user experience rooted in trust and reliability.


You may also like…

The impact of AI regulation on R&D

EU passes AI Act, a comprehensive risk-based approach to AI regulation

The post AI regulations are coming: Here’s how to build and implement the best strategy appeared first on SD Times.

]]>
The impact of AI regulation on R&D https://sdtimes.com/ai/the-impact-of-ai-regulation-on-rd/ Fri, 12 Jul 2024 20:04:15 +0000 https://sdtimes.com/?p=55174 Artificial intelligence (AI) continues to maintain its prevalence in business, with the latest analyst figures projecting the economic impact of AI to have reached between $2.6 trillion and $4.4 trillion annually.  However, advances in the development and deployment of AI technologies continue to raise significant ethical concerns such as bias, privacy invasion and disinformation. These concerns … continue reading

The post The impact of AI regulation on R&D appeared first on SD Times.

]]>
Artificial intelligence (AI) continues to maintain its prevalence in business, with the latest analyst figures projecting the economic impact of AI to have reached between $2.6 trillion and $4.4 trillion annually. 

However, advances in the development and deployment of AI technologies continue to raise significant ethical concerns such as bias, privacy invasion and disinformation. These concerns are amplified by the commercialization and unprecedented adoption of generative AI technologies, prompting questions about how organizations can regulate accountability and transparency. 

There are those who argue that regulating AI “could easily prove counterproductive, stifling innovation and slowing progress in this rapidly-developing field.”  However, the prevailing consensus is that AI regulation is not only necessary to balance innovation and harm but is also in the strategic interests of tech companies to engender trust and create sustainable competitive advantages.   

Let’s explore ways in which AI development organizations can benefit from AI regulation and adherence to AI risk management frameworks: 

The EU Artificial Intelligence Act (AIA) and Sandboxes  

Ratified by the European Union (EU), this law is a comprehensive regulatory framework that ensures the ethical development and deployment of AI technologies. One of the key provisions of the EU Artificial Intelligence Act is the promotion of AI sandboxes, which are controlled environments that allow for the testing and experimentation of AI systems while ensuring compliance with regulatory standards. 

AI sandboxes provide a platform for iterative testing and feedback, allowing developers to identify and address potential ethical and compliance issues early in the development process before they are fully deployed.  

Article 57(5) of the EU Artificial Intelligence Act specifically provides for “a controlled environment that fosters innovation and facilitates the development, training, testing and validation of innovative AI systems.” It further states, “such sandboxes may include testing in real world conditions supervised therein.”  

AI sandboxes often involve various stakeholders, including regulators, developers, and end-users, which enhances transparency and builds trust among all parties involved in the AI development process. 

Accountability for Data Scientists 

Responsible data science is critical for establishing and maintaining public trust in AI. This approach encompasses ethical practices, transparency, accountability, and robust data protection measures. 

By adhering to ethical guidelines, data scientists can ensure that their work respects individual rights and societal values. This involves avoiding biases, ensuring fairness, and making decisions that prioritize the well-being of individuals and communities. Clear communication about how data is collected, processed, and used is essential. 

When organizations are transparent about their methodologies and decision-making processes, they demystify data science for the public, reducing fear and suspicion. Establishing clear accountability mechanisms ensures that data scientists and organizations are responsible for their actions. This includes being able to explain and justify decisions made by algorithms and taking corrective actions when necessary. 

Implementing strong data protection measures (such as encryption and secure storage) safeguards personal information against misuse and breaches, reassuring the public that their data is handled with care and respect. These principles of responsible data science are incorporated into the provisions of the EU Artificial Intelligence Act (Chapter III).  They drive responsible innovation by creating a regulatory environment that rewards ethical practices and penalizes unethical behavior

Voluntary Codes of Conduct 

While the EU Artificial Intelligence Act regulates high risk AI systems, it also encourages AI providers to institute voluntary codes of conduct

By adhering to self-regulated standards, organizations demonstrate their commitment to ethical principles, such as transparency, fairness, and respect for consumer rights. This proactive approach fosters public confidence, as stakeholders see that companies are dedicated to maintaining high ethical standards even without mandatory regulations.  

AI developers recognize the value and importance of voluntary codes of conduct, as evidenced by the Biden Administration having secured the commitments of leading AI developers to develop rigorous self-regulated standards in delivering trustworthy AI, stating: “These commitments, which the companies have chosen to undertake immediately underscore three principles that must be fundamental to the future of AI—safety, security, and trust—and mark a critical step toward developing responsible AI.” 

Commitment from developers 

AI developers also stand to benefit from adopting emerging AI risk management frameworks — such as the NIST RMF and ISO/IEC JTC 1/SC 42 — to facilitate the implementation of AI governance and processes for the entire life cycle of AI, through the design, development and commercialization phases to understand, manage, and reduce risks associated with AI systems. 

None more important is the implementation of AI risk management associated with generative AI systems. In recognition of the societal threats of generative AI, NIST published a compendium “AI Risk Management Framework Generative Artificial Intelligence Profile” that focuses on mitigating risks amplified by the capabilities of generative AI, such as access “to materially nefarious information” related to weapons, violence, hate speech, obscene imagery, or ecological damage.  

The EU Artificial Intelligence Act specifically mandates AI developers of generative AI based on Large Language Models (LLMs) to comply with rigorous obligations prior to placing on the market such systems, including design specifications, information relating to training data, computational resources to train the model, estimated energy consumption, and compliance with copyright laws associated with harvesting of training data.  

AI regulations and risk management frameworks provide the basis for establishing ethical guidelines that developers ought to follow. They ensure that AI technologies are developed and deployed in a manner that respects human rights and societal values.

Ultimately embracing responsible AI regulations and risk management frameworks deliver positive business outcomes as there is “an economic incentive to getting AI and gen AI adoption right. Companies developing these systems may face consequences if the platforms they develop are not sufficiently polished – and a misstep can be costly. 

Major gen AI companies, for example, have lost significant market value when their platforms were found hallucinating (when AI generates false or illogical information). Public trust is essential for the widespread adoption of AI technologies, and AI laws can enhance public trust by ensuring that AI systems are developed and deployed ethically. 


You may also like…

Q&A: Evaluating the ROI of AI implementation

From diagrams to design: How AI transforms system design

The post The impact of AI regulation on R&D appeared first on SD Times.

]]>
From diagrams to design: How AI transforms system design https://sdtimes.com/ai/from-diagrams-to-design-how-ai-transforms-system-design/ Wed, 03 Jul 2024 15:53:45 +0000 https://sdtimes.com/?p=55109 I’ve always been captivated by AI’s potential, not just to execute programmed tasks but to learn and perform complex functions. However, it’s disingenuous not to recognize the cycles of heightened expectations and subsequent disillusionments that AI has suffered from, often marked by swings in funding and interest in the field. Since my days studying Mathematics … continue reading

The post From diagrams to design: How AI transforms system design appeared first on SD Times.

]]>
I’ve always been captivated by AI’s potential, not just to execute programmed tasks but to learn and perform complex functions. However, it’s disingenuous not to recognize the cycles of heightened expectations and subsequent disillusionments that AI has suffered from, often marked by swings in funding and interest in the field.

Since my days studying Mathematics & Computer Science at Syracuse University, I’ve personally witnessed at least three “AI springs” and two “AI winters”!

Currently, we are in another period of AI hype, bombarded by articles ranging from ‘AI will replace software engineers‘ to ‘Top 5 AI tools for faster coding‘. Yet, these narratives often overlook that software engineering encompasses far more than just typing code. It involves a range of skilled tasks such as gathering requirements, designing solutions, validating designs, collaborating on problems, and predicting potential issues.

The advances in AI we’re seeing today—and those on the horizon—promise not only to streamline coding but also to profoundly transform how we design software systems.

System Design is a Core Engineering Competency

System design is an essential engineering skill necessary for the successful development, maintenance, and evolution of software systems. This discipline involves making critical decisions about system structure and component interactions and integrating architectural considerations into daily development activities. Effective system design not only mitigates technical debt but also ensures that software can adapt to future changes without significant overhauls.

At the individual level, a solid understanding of architectural principles can greatly improve a developer’s ability to make informed coding decisions, participate in design discussions, and understand the impact of their work on the entire system. At the team level, it aligns efforts towards shared objectives, enhancing coherence and efficiency in development practices.

Indeed, system design is most effective when it’s implemented with a collaborative approach. That’s why we’re currently seeing the evolution of the Software Architect role from being in an ‘Ivory Tower’ to becoming a ‘Team Player.’

Yet, we still often see system design mistakenly equated with outdated practices like Big Design Up Front, rigid frameworks like TOGAF, or specific documentation outputs (e.g. diagrams or architecture decision records).

Instead, system design should be an ongoing practice, embedded throughout the software development lifecycle (SDLC). This includes some degree of upfront planning, continual design reviews during development, and meticulous documentation of requirements, decisions, and constraints.

The Evolving Role of System Design in the Job Market

Traditionally, junior developers have been encouraged to focus primarily on learning coding skills and mastering the fundamentals of their chosen technologies. However, with AI assistants significantly accelerating coding tasks (55% of GitHub Copilot users report coding faster), they now have more time to allocate toward understanding system intricacies.

This shift, in addition to the following factors, is making system design skills increasingly essential in today’s job market:

  • Complexity of Systems: Modern software applications are intricately complex, involving vast data sets, diverse technology stacks, and heightened user expectations. Proficiency in system design is critical for managing this complexity to ensure robust, scalable, and maintainable systems.
  • Integration of Technologies: With businesses integrating a mix of new and legacy systems across various platforms, the ability to design seamless system integrations is vital. Not to mention that due to the higher proportion of brown-field vs green-field projects, those developers that can understand, navigate and improve upon legacy architectures, are advantaged.
  • Agility and Flexibility: The rapid pace of market changes and evolving customer needs demands systems that can be quickly updated or extended. Effective system design enhances a company’s agility and ability to scale operations swiftly and efficiently, leveraging the latest technologies and cloud capabilities.
  • Security Concerns: In an era of escalating cyber threats, incorporating robust security measures into the system architecture from the outset is paramount. System designers must be adept at identifying and mitigating potential security risks at all levels of the architecture.

Developers must adopt a holistic view of software system development to remain relevant and competitive. Understanding and contributing to the broader architectural landscape—seeing the big picture and how systems interconnect—will be crucial for future career success.

AI Enhances System Design, it doesn’t Replace it

When discussing AI’s role in system design, many might first think of AI-powered diagramming tools. Yet, effective system design encompasses more than just diagramming—it involves collaborative continuous reviews and system evolution based on informed decision-making.

Current AI diagramming tools often focus on producing static diagrams or system documentation. But the true potential of AI in this field lies in helping engineers understand system requirements, assess the impact of their decisions, and proactively suggest system solutions or optimizations.

Large Language Models (LLMs) excel at recognizing patterns, which is crucial in system architecture where reusing successful design patterns and choosing appropriate resources that worked for same/similar use cases can dramatically enhance efficiency and effectiveness.

Here’s how I envision AI transforming system design:

  • Enhanced Decision-Making:
    • AI can recommend proven architecture patterns tailored to specific needs and simulate different scenarios to make architectural decisions more data-driven. For instance, it could analyze usage patterns to suggest optimal database solutions or architecture designs that improve performance.
    • AI’s predictive capabilities could identify potential issues early in development, such as predicting the accumulation of technical debt based on development practices and system changes.
    • AI can facilitate natural language interactions with APIs, streamlining how developers interact and develop with system components.
  • Automation: AI can automate the creation and updating of system documentation—from architecture diagrams to decision records, ensuring documentation is always current and accurate.
  • Optimization: AI system design tools could support self-diagnosing and intelligent resource allocation. This ensures efficient utilization of resources, reducing waste and improving system performance overall.

AI is poised not just to assist but to significantly enhance how engineers design, manage, and evolve software systems, making complex tasks more accessible and less time-consuming and leaving developers with more time to focus on the refinement and optimization phases.

Biggest Challenges to AI-enabled System Design tools

To effectively assist engineers in system design tasks, AI-enabled tools must overcome two challenges:

  1. Data Quality and Availability: Although there are numerous resources on system design available online, detailed examples of real-world system architectures—complete with their components, dependencies, APIs, and the necessary context like requirements and design decisions—are scarce. For AI-enabled system design tools to be truly effective, they require access to high-quality, comprehensive datasets. These tools need models that are not only trained on diverse architectural data but also a broad array of real-world systems to generate valuable insights.
  2. Integration into a Comprehensive System Design Platform: System design is a complex practice that demands more than just AI assistance; it requires a holistic platform approach. An effective tool must address the entire spectrum of challenges that teams face during system design. This includes real-time visualization of system architecture, streamlined communication and collaboration among team members, and robust version control. Only when implementing AI within a platform that addresses all these pain points can we meet the needs of software engineers.
Conclusion

AI is a powerful tool to assist engineers in performing effective system design, yet it is unlikely to supplant the role of humans.

Software development is a complex, highly-skilled knowledge job that demands more than just coding skills—it requires innovation, abstract reasoning, and creative problem-solving, capabilities where human intelligence excels and AI often falls short.

By harnessing AI to manage routine tasks and analyze extensive datasets, engineers can redirect their focus towards more strategic and innovative activities. This synergy allows AI to enhance efficiency while humans tackle complex challenges, ensuring that the nuanced, contextual decisions necessary for system design are thoughtfully addressed.

Crucially, the adoption of AI-enabled system design tools should not overlook the need for human oversight to mitigate risks like unnecessary complexity or inappropriate system recommendations that may arise from AI’s lack of contextual understanding.

The future of system design will most effectively harness the distinct strengths of both humans and AI, developing a symbiotic relationship that allows each to excel in their respective domains.


You may also like…

Accelerating digital transformation means creating a great engineering culture

Q&A: Why over half of developers are experiencing burnout

The post From diagrams to design: How AI transforms system design appeared first on SD Times.

]]>
The real problems IT still needs to tackle for platforms https://sdtimes.com/softwaredev/the-real-problems-it-still-needs-to-tackle-for-platforms/ Tue, 02 Jul 2024 18:47:05 +0000 https://sdtimes.com/?p=55091 Platforms like ServiceNow and Salesforce (to name a few) were introduced to address and solve the many overwhelmingly burdensome tasks associated with building enterprise-specific applications and keeping companies agile, automated, and scalable. However, to adopt these platforms in the organization and maximize their value, they require development practices, principles, and discipline similar to classic software … continue reading

The post The real problems IT still needs to tackle for platforms appeared first on SD Times.

]]>
Platforms like ServiceNow and Salesforce (to name a few) were introduced to address and solve the many overwhelmingly burdensome tasks associated with building enterprise-specific applications and keeping companies agile, automated, and scalable. However, to adopt these platforms in the organization and maximize their value, they require development practices, principles, and discipline similar to classic software development.

Platform engineering, and Instance Management Platforms, emerged as a way to codify and standardize the management of the platform including its CI/CD production pipelines. However, in the age of low-code/no-code (LCNC) platforms like the ones named above, applying platform engineering principles to these platforms is beneficial for non-developers and classic developers alike. LCNC platforms allow developers to immediately focus directly on developing sound business logic without coding the requisite application logic. Theoretically, this should shorten the time to market and lower maintenance costs since the platform handles all the application infrastructure (memory, storage, network, etc.). However, it’s critical not to overlook that organizations onboarding citizen developers will face the same challenges pro-coders see in enterprise development. 

Addressing the Root Causes of Chronic Delays

Most prominent players are still experiencing chronic delays in their operations, so they have turned to platforms. However, they often quickly find that even with these platforms, they are still experiencing chronic delays at pivotal times in the development lifecycle, which can be due to several factors. 

Inefficient deployment practices, slow approval processes, and lengthy manual testing all contribute to delays. Fixed release schedules are another big contributor. When companies can’t release on demand, they have to wait for the next change window, which limits how often they can release to production.

Beyond this, for companies using platforms like ServiceNow or Salesforce, processes like cloning databases or instances to serve as production environments can also be time-consuming. Cloning is typically used to copy production data/information to pre-production environments to test developed changes. 

While cloning is necessary to align production updates across all non-prod environments, this process (typically being database-heavy) can take up to 10, 20, or even 30 hours. That’s a lot of time for developers to sit idle; lost time is only the tip of the iceberg. 

These are just a few of the hurdles platform engineering teams are helping companies overcome, and they are doing it in a variety of ways. 

First, platform engineering teams and technology are helping to navigate the transition from fixed release schedules to on-demand releases by introducing better infrastructure, tools and processes that enable continuous integration and continuous delivery (CI/CD) pipelines. Beyond that, with automated deployment processes, companies can push changes to production without manual intervention, allowing for frequent and smaller releases.

Second, when it comes to processes like cloning, automation and accuracy are everything. If platform engineering teams can automate and accelerate their cloning process, they can minimize the discrepancies between source and target. The key is to establish and standardize better ways to minimize downtime and errors so that the platforms themselves can support a better service delivery standard. 

Who Owns that Delivery Pipeline?

Governance and standardization are crucial elements in the context of platform engineering. The platform engineering movement began when software engineers realized that building a CI/CD delivery pipeline involved significant coding. They recognized that the pipeline itself should be treated as an application platform, requiring a dedicated team of engineers. 

Many enterprises don’t anticipate hiring people specifically to maintain and build delivery pipelines. They might assume that using cloud services means everything is automatically taken care of. Consequently, part of the development team’s time is often allocated to managing the delivery pipeline as an application, which can be feasible since they are already responsible for app maintenance. This hidden burden is typically integrated into the overall maintenance costs of all the applications the development team is working on.

However, issues can arise in delivery pipeline governance when admin privileges become too widespread, and deployment practices too inconsistent. Beyond this, platform environments can spiral out of governance when there are too many changes in non-production environments. 

This is where we are seeing platform engineering teams begin to own the delivery pipeline, and introduce more automation surrounding governance and deployment flows and around the software development lifecycle in general. The reality is that platform teams should be looking to operationalize governance in the same way they standardize how code is developed, built, and deployed. The tools are out there to mindfully and intentionally embed governance in processes, and the results are helping teams to become better aligned. 

Keeping Environments as Production-Like as Possible

Often, when companies think about platform engineering, they think about the pipeline, not what environment the pipeline is passing through, or how to keep non-prod environments as production-like as possible. Without this alignment, the classic ‘works in development, not in production’ conundrum may be inevitable. 

Successful platform engineering teams keep environments as production-like as possible because they understand the value of testing and pushing tiny snippets of code to reduce the risk of something going wrong. When new functionality is tested in production-like environments all the way through, companies can demonstrably reduce the risk by size and volume, and improve quality. This is all part of the practice of scaling and building sustainable large enterprise systems

Ultimately, platform engineering has been tasked with solving the enterprise development problems encroaching on developer’s lives, and there is still a lot of work to be done. Without a strategic approach to managing platform engineering within modern LCNC platforms themselves, the enterprise development community won’t be anywhere near close to delivering at the speed today’s business demands without compromising quality or compliance.


You may also like…

Platform Engineering is not (just) about infrastructure!

Analyst View: What’s new, what’s now, and what’s next in platform engineering

The post The real problems IT still needs to tackle for platforms appeared first on SD Times.

]]>
Accelerating digital transformation means creating a great engineering culture https://sdtimes.com/softwaredev/accelerating-digital-transformation-means-creating-a-great-engineering-culture/ Fri, 28 Jun 2024 14:53:17 +0000 https://sdtimes.com/?p=55072 It is no surprise that the rapid acceleration of technology and the growing inventory of tools at our disposal means software engineers need to start rethinking the way we harness existing and emerging resources to develop the next cutting-edge infrastructure that transforms financial services.  To transform with success and grow, collaboration is key. Collaboration not … continue reading

The post Accelerating digital transformation means creating a great engineering culture appeared first on SD Times.

]]>
It is no surprise that the rapid acceleration of technology and the growing inventory of tools at our disposal means software engineers need to start rethinking the way we harness existing and emerging resources to develop the next cutting-edge infrastructure that transforms financial services. 

To transform with success and grow, collaboration is key. Collaboration not only accelerates the adoption and dissemination of new technologies, it also fosters the culture of innovation required where new, complex engineering solutions are needed to address unique problems. 

This culture-building was demonstrated at our recent Accelerate Conference where we brought together 400 of our top software engineers and Chief Information Officers for an intensive three-day collaboration in Kuala Lumpur, Malaysia for our ‘Accelerate Global Engineering Conference’. Our end goal is to leverage emerging technologies to address customer pain points; transforming the way data can be enabled enterprise-wide; and accelerating our engineering to simplify, standardize and digitize our processes to become fit for growth. 

Building on this momentum is critical in helping us become a client-focused, data-driven digital bank. Equally important is ensuring we have a diverse workforce engaged to contribute new ideas, innovation and creativity which can lead to greater productivity and business performance.

Creating a great engineering culture 

Building an engaged team should be a priority for every leader. Happy employees are productive, collaborative, and willing to work through challenges. Software engineers are no different and need the right tools, inspiration, and autonomy to deliver impact. 

First, many organizations still struggle to equip their software engineers with the right, up-to-date tools. In many instances, engineers are given the same computers as call center employees while senior managers get the latest and most powerful. At times broken processes are applied to the very cohort of experts that are charged with automating and eliminating them. Software engineers need more powerful CPUs for complex algorithm optimizations, or the additional RAM to host VMs locally, or the GPUs for machine learning, or access to production data to build models.

Upskilling and reskilling engineers should also be a priority to ensure they reap the benefits of new technologies like AI and Machine Learning with agility. At Standard Chartered, our Axess Academies help us ensure the skills of our software engineering workforce are continually upgraded and recalibrated to match the ever-changing demands of the market. For instance, we have over 130 classroom technology courses across the entire stack of technologies used in the bank, from full stack development to GenAI and Cloud Computing. New courses are added every quarter and existing ones are upgraded to reflect industry trends and changes.

Second, many organizations struggle to inspire their engineers primarily because the leaders in charge of this cohort generally do not ‘get’ software engineering. From top down as a bank, we believe that applying our technology in the right way is critical to accelerating our transformation. This enables us to standardize end-to-end, transform digitally while simplifying our business faster and permanently reducing structural costs.

Finally, autonomy is key for software engineers. Autonomy unshackles software engineering teams to ideate and deliver for the business on their terms while fostering a work culture that fulfills employee needs for meaning and personal growth. I would contend that digital disruption and Fintechs are not only about amassing more technology, or even newer technology, but about giving software engineers the space to deliver their agendas and being pivotal in delivering solutions.

With 3 trillion lines of code written every day and around 93 billion lines of code being added every year, and with things only set to increase, it is important software engineers play an instrumental role in the process of determining and shaping the development of new technologies, processes, and outcomes.

With over 10,000 software engineers, we continually build a bank that offers diverse experiences and opportunities for everyone to work on compelling and impactful projects. As our Accelerate Conference highlighted, we can do more to elevate our engineering community by increasing knowledge sharing, breaking down silos and raising the standards of technical excellence. By doing so, we empower the current, as well as the next, generation of software engineers with future-focused skills and experiences to be effective catalysts for digital transformation. 


You may also like…

Report: 7 qualities of highly effective teams

SD Times Open-Source Project of the Week: Developer Productivity and Happiness Framework

Q&A: How cognitive fatigue impacts developer productivity

The post Accelerating digital transformation means creating a great engineering culture appeared first on SD Times.

]]>
Are developers and DevOps converging? https://sdtimes.com/devops/are-developers-and-devops-converging/ Fri, 14 Jun 2024 14:56:49 +0000 https://sdtimes.com/?p=54946 Are your developers on PagerDuty? That’s the core question, and for most teams the answer is emphatically “yes.” This is a huge change from a few years ago when, unless you did not have DevOps or SRE teams, the answer was a resounding “no.”  So, what’s changed? A long-term trend is happening across large and … continue reading

The post Are developers and DevOps converging? appeared first on SD Times.

]]>
Are your developers on PagerDuty? That’s the core question, and for most teams the answer is emphatically “yes.” This is a huge change from a few years ago when, unless you did not have DevOps or SRE teams, the answer was a resounding “no.” 

So, what’s changed?

A long-term trend is happening across large and small companies, and that is the convergence of developers, those who code apps, and DevOps, those who maintain the systems on which apps run and developers code. There are three core reasons for this shift – (1) transformation to the cloud, (2) a shift to a single store of observability data, and (3) a focus of technical work efforts on business KPIs.

The impending impact on DevOps in terms of role, workflow, and alignment to the business will be profound. Before diving into the three reasons shortly, first, why should business leaders care? 

The role of DevOps and team dynamics – The lines are blurring between traditionally separate teams as developers, DevOps, and SREs increasingly collide. The best organizations will adjust team roles and skills, and they will change workflows to more cohesive approaches. One key way is via communicating around commingled data sets as opposed to distinct and separate vendors built and isolated around roles. While every technical role will be impacted, the largest change will be felt by DevOps as companies redefine its role and the mentalities that are required by its team members going forward.

Cost efficiency As organizations adjust to the new paradigm, their team makeup must adjust accordingly. Different skills will be needed, different vendors will be used, and costs will consolidate.

Culture and expectations adaptation – Who will you be on call with PagerDuty? How will the roles of DevOps and SREs change when developers can directly monitor, alert, and resolve their own questions? What will the expectation of triage be when teams are working closer together and focused on business outcomes rather than uptime? DevOps will not just be setting up vendors, maintaining developer tools, and monitoring cloud costs.

Transformation to the cloud

This is a well-trodden topic, so the short story is… Vendors would love to eliminate roles on your teams entirely, especially DevOps and SREs. Transformation to the cloud means everything is virtual. While the cloud is arguably more immense in complexity, teams no longer deal with physical equipment that literally requires someone onsite or in an office. With virtual environments, cloud and cloud-related vendors manage your infrastructure, vendor setup, developer tooling, and cost measures… all of which have the goals of less setup and zero ongoing maintenance.

The role of DevOps won’t be eliminated… at least not any time soon, but it must flex and align. As cloud vendors make it so easy for developers to run and maintain their applications, DevOps in its current incarnation is not needed. Vendors and developers themselves can support the infrastructure and applications respectively.

Instead, DevOps will need to justify their work efforts according to business KPIs such as revenue and churn. A small subset of the current DevOps team will have KPIs around developer efficiency, becoming the internal gatekeeper to enforce standardization across your developers and the entire software lifecycle, including how apps are built, tested, deployed, and monitored. Developers can then be accountable for the effectiveness and efficiency of their apps (and underlying infrastructure) from end-to-end. This means developers – not DevOps – are on PagerDuty, monitor issues across the full stack, and respond to incidents. 

Single store of observability data

Vendors and tools are converging on a single set of data types. Looking at the actions of different engineering teams, efforts can easily be bucketed into analytics (e.g., product, experience, engineering), monitoring (e.g., user, application, infrastructure), and security. What’s interesting is that these buckets currently use different vendors built for specific roles, but the underlying datasets are quickly becoming the same. This was not true just a few years ago. 

The definition of observability data is to collect *all* the unstructured data that’s created within applications (whether server-side or client-side) and the surrounding infrastructure. While the structure of this data varies by discipline, it is always transformed into four forms – metrics, logs, traces, and, more recently, events. 

Current vendors generally think of these four types separately, with one used for logs, another for traces, a third for metrics, and yet another for analytics. However, when you combine these four types, you create the underpinnings of a common data store. The use cases of these common data types become immense because analytics, monitoring, and security all use the same underlying data types and thus should leverage the same store. The question is then less about how to collect and store the data (which is often the source of vendor lock-in), and more about how to use the combined data to create analysis that best informs and protects the business.

The convergence between developers and DevOps teams – and in this case eventually product as well – is that the same data is needed for all their use cases. With the same data, teams can increasingly speak the same language. Workflows that were painful before now become possible. (There’s no more finger-pointing between DevOps and developers.) The work efforts become more aligned around what drives the business and less about what each separate vendor tells you is most important. The roles then become blurred instead of having previously clean dividing lines. 

Focus of work efforts on business KPIs

Teams are increasingly driven by business goals and the top line. For DevOps, the focus is shifting from the current low bar of uptime and SLAs to those KPIs that correlate to revenue, churn, and user experience. And with business alignment, developers and DevOps are being asked to report differently and to justify their work efforts and prioritization. 

For example, one large Fortune 500 retailer has monthly meetings across their engineering groups (no product managers included). They review the KPIs on which business leaders are focused, especially top-line revenue loss. The developers (not DevOps) select specific metrics and errors as leading indicators of revenue loss and break them down by type (e.g., crashes, error logs, ANRs), user impact (e.g., abandonment rate), and area of the app affected (e.g., startup, purchase flow). 

Notice there’s no mention of DevOps metrics. The group does not review the historically used metrics around uptime and SLAs because those are assumed… and are not actionable to prioritize work and better grow the business.

The goal is to prioritize developer and DevOps efforts to push business goals. This means engineering teams must now justify work, which requires total team investment into this new approach. In many ways, this is easier than the previous methodology of separately driving technical KPIs. 

DevOps must flex and align

DevOps is not disappearing altogether, but it must evolve alongside the changing technology and business landscapes of today’s business KPI-driven world. Those in DevOps adapted to the rapid adoption of the cloud, and must adapt again to the fact that technological advancements and consolidation of data sources will impact them. 

As cloud infrastructures become more modular and easier to maintain, vendors will further force a shift in the roles and responsibilities of DevOps. And as observability, analytics, and security data consolidates, a set of vendors will emerge – looking at Databricks, Confluent, and Snowflake – to manage this complexity. Thus, the data will become more accessible and easier to leverage, allowing developers and business leaders to connect the data to the true value – aligning work efforts to business impact. 

DevOps must follow suit, aligning their efforts to goals that have the greatest impact on the business. 

The post Are developers and DevOps converging? appeared first on SD Times.

]]>
Recent restrictions on data scraping don’t have to derail your generative AI initiatives https://sdtimes.com/data/recent-restrictions-on-data-scraping-dont-have-to-derail-your-generative-ai-initiatives/ Thu, 21 Sep 2023 20:03:02 +0000 https://sdtimes.com/?p=52372 Businesses and developers building generative AI models got some bad news this summer. Twitter, Reddit and other social media networks announced that they would either stop providing access to their data, cap the amount of data that could be scraped or start charging for the privilege. Predictably, the news set the internet on fire, even … continue reading

The post Recent restrictions on data scraping don’t have to derail your generative AI initiatives appeared first on SD Times.

]]>
Businesses and developers building generative AI models got some bad news this summer. Twitter, Reddit and other social media networks announced that they would either stop providing access to their data, cap the amount of data that could be scraped or start charging for the privilege. Predictably, the news set the internet on fire, even sparking a sitewide revolt from Reddit users who protested the change. Nevertheless, the tech giants carried on and, over the past several months, have started implementing new data policies that severely restrict data mining on their sites.

Fear not, developers and data scientists. The sky is not falling. Don’t hand over your corporate credit cards just yet. There are other, more relevant ways for organizations to empower their employees with other sources of data and keep their data-driven initiatives from being derailed.

The Big Data Opportunity in Generative AI

The billions of human-to-human interactions that take place on these sites have always been a gold mine for developers who need an enormous dataset in which to train AI models. Without access (or without affordable access), developers would have to find another source of this type of data or risk using incomplete data sets for training their models. Social media sites know what they have and are looking to cash in.

And, honestly, who can blame them? We’ve all heard the quip that data is the new oil, and generative AI’s rise is the most accurate example of that truism I’ve seen in a long time. Companies that control access to large datasets hold the key to creating the next-generation AI engines that will soon radically change the world. There are billions of dollars to be made, and Twitter, Reddit, Meta and other social media sites want their share of the pie. It’s understandable, and they have that right.

So, What Can Organizations Do Now?

Developers and engineers are going to have to adapt their data use and collection in this new environment. This requires new controllable sources of data, as well as new data use policies that can ensure the resiliency of this data. The good news is that most enterprises are already collecting this data. It lives in the thousands of customer interactions that occur inside their organization every day. It’s in the reams of research data that went toward years of development. It’s in the day-to-day interactions between employees and with partners as they go about their business. All the data in your organization can and should be used to train new generative AI models.

While scraping data from across the internet provides a sense of scale that would be impossible for a single organization to achieve, the result of general data scraping is that it produces generic outputs. Look at ChatGPT. Every answer is a mishmash of broad generalities and corporate speak that seems to say a whole lot but doesn’t actually mean anything of significance. It’s eighth-grade level at best, which isn’t what will help most business users or their customers.

On the other hand, proprietary AI models that have been trained on more specific datasets that are relevant to their intended purpose. A tool that’s trained with millions of legal briefs, for example, will produce much more relevant, thoughtful and worthwhile results. These models use language that customers and other stakeholders understand. They operate within the correct context of the situation. And, they produce results while understanding sentiment and intent. When it comes to experience, relevant beats generic every day of the week.

However, businesses can’t just collect all the data across their organization and dump it into a data lake somewhere, never to be touched again. More than 100 zettabytes (yes, that’s zettabytes with a z) were created worldwide in 2022, and that number is expected to continue to explode over the next several years. You’d think that this volume of data would be more than enough to train virtually any generative AI model. However, a recent Salesforce survey revealed that 41% of business leaders cite a lack of understanding of data because it is too complex or not accessible enough. It’s clear that volume is not the issue. Putting the data into the right context, sorting and labeling the relevant information and making sure developers and other priority users have the right access is paramount.

In the past, data storage policies were written by lawyers seeking to limit regulatory and audit risk. Rules governed where and how long data had to be stored. Instead, organizations need to amend their data storage policies to make the right data more accessible and consumable. Data policies need to be modernized – dictating how the data should be used and reused, how long it needs to be kept and how to manage redundant data (copies, for example) that could skew results. 

Harnessing Highly Relevant Data that You Already Own

Recent data scraping restrictions don’t have to derail big data and AI initiatives. Instead, organizations should look internally at their own data to train generative AI models that produce more relevant, thoughtful and worthwhile results. This will require getting a better handle on the data they already collect by modernizing existing data storage policies to put information in the right context and make it more consumable for developers and AI models. Data may be the new oil, but businesses don’t have to go beyond their own borders to cash in. The answer is right there in the organization already – that data is just waiting to be thoughtfully managed and fed into new generative AI models to create powerful experiences that inform and delight.

The post Recent restrictions on data scraping don’t have to derail your generative AI initiatives appeared first on SD Times.

]]>
When only one SBOM will do, consider these formats https://sdtimes.com/softwaredev/when-only-one-sbom-will-do-consider-these-formats/ Wed, 20 Sep 2023 14:55:31 +0000 https://sdtimes.com/?p=52341 A software bill of materials (SBOM) is a tool designed to share detailed information on code components in a standardized way. The SBOM has become an increasingly important tool for both application security purposes and governmental compliance.  To minimize inconsistencies and encourage greater transparency, three primary SBOM formats have emerged, each of which allow companies to … continue reading

The post When only one SBOM will do, consider these formats appeared first on SD Times.

]]>
A software bill of materials (SBOM) is a tool designed to share detailed information on code components in a standardized way. The SBOM has become an increasingly important tool for both application security purposes and governmental compliance. 

To minimize inconsistencies and encourage greater transparency, three primary SBOM formats have emerged, each of which allow companies to generate, share, and consume supply chain data. Before you choose, it’s important to understand what the current SBOM format options are and how they are best suited to you.

Here, we’ll explore all three formats – SPDX, CDX, and SWID – share their attributes and weaknesses, and offer guidance to help you find the perfect match. 

First, let’s discuss why there are so many different formats. The simplest reason is that guidance around the use and requirements of SBOMs is still quite new. While SBOMs have been around for a while, it was less than two years ago that the software bill of materials was advanced by NIST in accordance with the Biden administration’s Executive Order on Improving the Nation’s Cybersecurity. Since then, government agencies have released guidance that increasingly requires SBOMs. That, combined with ever-expanding use of open source software components, will drive increased SBOM adoption and subsequently greater demand for a standardized format. Until that time, organizations have three predominant formats to choose from.

Software package data exchange (SPDX)

What is it? SPDX is a data exchange format created to easily share information about software packages and related content including components, licenses, copyrights, security references, and other metadata. It is intended to save time and improve data accuracy in support of supply chain transparency.   

What are its origins? SPDX is authored by the SPDX workgroup, a community driven project supported by the Linux Foundation. 

What are its best features? Using a standardized, machine-readable format ensures consistency across different organizations and reduces the need to reformat information, makes it easier to share, and consequently improves compliance and security efficiency. 

Its size and capacity make it a particularly flexible option. One of its biggest strengths is the ability to provide a detailed big picture of your software supply chain, components, and dependencies.  SPDX identifies the software package, package level, file-level licensing, and copyright data, and also shows the file creator, and when and how it was created. This allows for a multiplicity of annotations and the most detail of the three formats. 

Of the three main SBOM formats, SPDX is the largest and most robust and is the only format with an ISO (International Organization for Standardization) accreditation. 

Potential weaknesses: There aren’t really any notable weaknesses with this all-inclusive format. 

Best suited to: Primarily designed to improve license compliance, SPDX is typically used by large, complex organizations. Linux users naturally tend to adopt SPDX, and it is preferred by those that build commercial software or operate enterprise software. SPDX adoption is growing significantly as the use of open source projects increases. 

CycloneDX (CDX)

What is it? CycloneDX is a full-stack bill of materials standard.

What are its origins? CycloneDX is backed and maintained by the OWASP Foundation

What are its best features? A main differentiator of CDX is its broad support of various specifications including SBOM, Software-as-a-Service Bill of Materials (SaaSBOM), Hardware Bill of Materials (HBOM), Operations Bill of Materials (OBOM) and VEX use cases. The format identifies BOM metadata, components, services, dependencies, compositions, vulnerabilities, and extensions.

The CycloneDX format provides standards in XML Schema, JSON Schema, and protocol buffers. The project also supports various community-supported tools and extensions that target specialized or industry-specific use cases. 

Like SPDX, there is strong community direction and development. Additionally, the involvement of the OWASP Foundation provides educational support opportunities which help to ensure continuous development and advancement of SBOM.

Potential weaknesses: CDX offers many of the same attributes and capabilities as SPDX but is not quite as robust. 

Best suited to: Preferred by nimbler organizations and by teams that use open source heavily, CDX is more agile and easier to use than SPDX. 

Software Identification Tags (SWID) 

What is it? SWID is an industry standard that allows organizations to track the software inventories installed on managed devices with a simple, easy-to-use format. SWID tag files contain descriptive information about a specific release of a software product, including an end tag to define the product lifecycle. There are four types of SWID tags: 

  1. Primary Tag: Identifies and describes a software product installed on a computing device. 
  2. Patch Tag: Identifies and describes an installed patch that has made incremental changes to a software product installed on a computing device. 
  3. Corpus Tag: Identifies and describes an installable software product in its pre-installation state. It can be used to represent metadata about an installation package or installer for a software product, a software update, or a patch. 
  4. Supplemental Tag: Allows additional information to be associated with another SWID tag to ensure Primary and Patch Tags provided by a software provider are not modified by software management tools, while allowing these tools to provide their own software metadata.

What are its origins? SWID was created and is maintained by the National Institute of Standards and Technology (NIST).

What are its best features? Because SWID’s primary purpose is inventory, it is far less complex than SPDX and CycloneDX and therefore is faster and easier to use. 

SWID is widely used by standards bodies such as the Trusted Computing Group (TCG) and the Internet Engineering Task Force (IETF). 

Potential weaknesses: Its capabilities are limited, and it doesn’t provide details such as vulnerability information, annotations, or license information. 

Best suited to: For organizations that want to create an inventory of software components and dependencies quickly and easily, SWID is a good option. 

Which SBOM will you choose?

Each of these three formats serve the purposes of an SBOM though some offer additional capabilities that go beyond those requirements. Before making your choice, consider your organization’s specific needs. For example, organizations in highly regulated fields, e.g., financial services or healthcare, and most government agencies require a greater level of granularity and detail than may be available with the SWID format. This may also be the case in an M&A situation. However, a simple format may be enough to provide peace of mind to a prospect or customer.

No matter which format, or combination of formats, you choose, there’s no doubt SBOMs will play an increasingly important role in the development and security of software and the software supply chain. To ensure your organization is ready, it’s important to get started with an SBOM today.

 

The post When only one SBOM will do, consider these formats appeared first on SD Times.

]]>