Edge Archives - SD Times
https://sdtimes.com/tag/edge/ (Software Development News, Fri, 23 Feb 2024)

The software-defined trend for embedded devices
https://sdtimes.com/embedded/the-software-defined-trend-for-embedded-devices/ (Fri, 23 Feb 2024)

The landscape of IoT devices is transforming, marked by a fundamental evolution toward software-driven innovation. In this era, the paradigm of Software-Defined IoT devices is redefining traditional notions, where software supremacy over mechanical hardware unleashes a wave of dynamic, upgradable smart devices embedding distributed intelligence.

Visionary OEMs are at the forefront of this transformation, harnessing the power of software to revolutionize their offerings, paving the way for diverse advancements and a spectrum of opportunities.

Emergence of Tailored, Dynamic Products

The foremost advantage of a software-defined approach to embedded devices lies in adopting an agile process: a product can iterate and evolve quickly, and its feature set can grow seamlessly after the device has shipped.

For instance, it allows crafting tailored products that resonate deeply with niche markets, ushering in an era of rapid hyper-segmentation. Zebra Technologies serves as an example, personalizing printers for industry giants like UPS and FedEx and illustrating the potent flexibility of software in meeting specific customer needs.

Another example is post-purchase service additions, akin to mobile apps but in the IoT industry context, paving the way for innovation. Landis+Gyr’s Revelo electricity meter, customizable to efficiently manage distributed energy resources like solar panels or electric vehicles, exemplifies this evolution in providing adaptable solutions for changing energy needs.

The infusion of value-added applications and services elevates the intrinsic worth of products. Smart wearable devices (such as smartwatches, smart rings, and smart bands) leverage software functionalities, like actionable data, to offer diverse health monitoring capabilities. These devices continuously integrate new features, apps, and healthcare system integrations through a software-defined approach, empowering users to manage their well-being proactively.

Transitioning challenges

Transitioning from traditional waterfall software development models presents substantial challenges. Agile frameworks supporting rapid validations on simulated devices through shorter iteration cycles are essential, necessitating a departure from rigid development methodologies. Additionally, integrating legacy systems seamlessly with agile software development remains an obstacle.

Resource constraints and cost considerations compound these challenges. Moving from resource-optimized embedded development models to software-defined approaches has traditionally increased costs, owing to the need for more sophisticated processors and modern development tools. However, newer solutions on the market offer the same functionality as a high-level OS on a much lighter, optimized footprint that runs on microcontrollers and microprocessors, reducing costs while preserving the ease of use of high-end operating systems.

Software containers in this IoT world

Software containerization is a major trend reshaping the development and deployment of applications, particularly in the context of edge computing. Its ability to facilitate faster application development and deployment, coupled with heightened portability and flexibility, marks a significant shift toward the desired state of “write once, run anywhere.”

While initially considered too bulky and inefficient for embedded systems operating with 32-bit microcontrollers and real-time operating systems (RTOSs), recent advancements have shattered these limitations. Tailored container versions designed for smaller CPUs running an RTOS are emerging, effectively bridging the gap for embedded systems.

These app containers deliver numerous benefits highly relevant in the IoT industry:

  • Isolation: App containers securely isolate apps from the underlying OS/RTOS, creating a fortified software architecture that ensures a more secure environment. This architecture fosters higher software portability, guarantees consistent app operation across diverse environments, enables safe integration of third-party apps, enhances device reliability, and allows leveraging legacy software assets and IPs.
  • Standardization: As software gains increasing importance in IoT devices, complexity rises due to fragmented technological environments and diverse configuration challenges. The need for standardization becomes more crucial. Given the scale, ranging from millions to billions of electronic devices worldwide, containers can play a huge role due to their flexibility, ease, and consistency in deployment—attributes akin to why they gained popularity in the IT and smartphone contexts.

The shift to a software-defined landscape represents a pivotal change driven by industry needs and the surge in data. It demands sophisticated software algorithms and seamless AI/ML integration, empowering interconnected edge devices with unparalleled computational capabilities. Simultaneously, consumer expectations, shaped by the smartphone era, fuel the desire for uniform functionalities across diverse interconnected devices.

This transformation demands proactive adaptation and innovation. Embracing a software-first approach and leveraging app containers emerges as the fastest, most cost-effective route. Placing software at the core fosters a culture of continuous improvement and rapid innovation.

Beyond technological advancements, the software-defined approach heralds an era of adaptable technology that enriches our lives through embedded intelligence, continual enhancements, and an environment fostering swift innovation. It not only revolutionizes devices but also dynamically shapes our interactions and experiences with everyday objects. Embracing this shift opens doors to a world where technology evolves alongside us, propelling us toward a future where innovation knows no bounds.

Next for Gen AI: Small, hyper-local and what innovators are dreaming up
https://sdtimes.com/ai/next-for-gen-ai-small-hyper-local-and-what-innovators-are-dreaming-up/ (Wed, 21 Feb 2024)

In late 2022, ChatGPT had its “iPhone moment” and quickly became the poster child of the Gen AI movement after going viral within days of its release. For LLMs’ next wave, many technologists are eyeing the next big opportunity: going small and hyper-local. 

The core factors driving this next big shift are familiar ones: a better customer experience, tied to our expectation of immediate gratification, and more privacy and security baked into user queries. Keeping queries within smaller, local networks (on the devices we hold in our hands, or in our cars and homes) avoids the round trip to cloud server farms and the inevitable lag that comes with it.

While there are some doubts about how quickly local LLMs could catch up with GPT-4’s capabilities (reportedly 1.8 trillion parameters across 120 layers, running on a cluster of 128 GPUs), some of the world’s best-known tech innovators are working on bringing AI “to the edge” so that new services can follow: faster, more intelligent voice assistants; localized computer imaging that rapidly produces image and video effects; and other types of consumer apps.

For example, Meta and Qualcomm announced in July that they had teamed up to run big AI models on smartphones. The goal is to enable Meta’s new large language model, Llama 2, to run on Qualcomm chips in phones and PCs starting in 2024. That promises LLMs that can avoid cloud data centers, whose massive data crunching and computing power is both costly and a growing sustainability eyesore for big tech companies, one of the budding AI industry’s “dirty little secrets” amid climate-change concerns and the other natural resources it consumes, such as water for cooling.

The challenges of Gen AI running on the edge

As with many types of consumer technology devices over the years, we will almost certainly see more powerful processors and memory chips with smaller footprints, driven by innovators such as Qualcomm; the hardware will keep evolving along Moore’s Law. On the software side, meanwhile, a lot of research and development has gone into miniaturizing neural networks so they fit on smaller devices such as smartphones, tablets, and computers.

Neural networks are big and heavy: they consume huge amounts of memory and need a lot of processing power, because they consist of many equations involving multiplications of matrices and vectors that extend out mathematically, similar in some ways to how the human brain thinks, imagines, dreams, and creates.

There are two approaches broadly used to reduce the memory and processing power required to deploy neural networks on edge devices: quantization and vectorization.

Quantization means converting floating-point arithmetic into fixed-point arithmetic, which more or less amounts to simplifying the calculations: where floating-point performs calculations with decimal numbers, fixed-point performs them with integers. This lets neural networks take up less memory, since floating-point numbers occupy four bytes while fixed-point numbers generally occupy two or even one.
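A minimal sketch of the idea in Python with NumPy (the weight values are made up for illustration): mapping float32 weights onto int8 cuts the memory per parameter from four bytes to one, at the cost of a small rounding error.

```python
import numpy as np

# Hypothetical float32 weights from a small neural-network layer.
rng = np.random.default_rng(0)
weights = rng.standard_normal(6).astype(np.float32)

# Symmetric 8-bit quantization: map the float range onto int8 [-127, 127].
scale = np.abs(weights).max() / 127.0
q_weights = np.round(weights / scale).astype(np.int8)  # 1 byte each vs. 4

# Dequantize to check the approximation error introduced by rounding.
restored = q_weights.astype(np.float32) * scale
print(q_weights.dtype, q_weights.nbytes, weights.nbytes)  # int8 6 24
print(bool(np.max(np.abs(restored - weights)) < scale))   # True
```

Real deployments use per-layer or per-channel scales and calibration data, but the storage arithmetic is the same: four bytes down to one.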

Vectorization, in turn, uses special processor instructions to execute one operation over several data elements at once (Single Instruction, Multiple Data, or SIMD, instructions). This speeds up the mathematical operations performed by neural networks, because additions and multiplications can be carried out on several pairs of numbers at the same time.
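The contrast can be sketched in Python, where NumPy’s element-wise expressions are dispatched to SIMD instructions where the CPU supports them (the values are illustrative):

```python
import numpy as np

# Two hypothetical activation vectors from a network layer.
a = np.arange(8, dtype=np.float32)
b = np.full(8, 2.0, dtype=np.float32)

# Scalar style: one multiply-add per loop iteration.
scalar = [float(a[i] * b[i] + 1.0) for i in range(8)]

# Vectorized style: one expression over all elements at once.
# NumPy executes this in compiled code, using SIMD where available.
vectorized = a * b + 1.0

print(bool(np.allclose(scalar, vectorized)))  # True
```

On embedded targets the same idea shows up as NEON intrinsics on Arm or the CMSIS-DSP kernels, rather than NumPy, but the principle of one instruction over many operands is identical.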

Other approaches gaining ground for running neural networks on edge devices include Tensor Processing Units (TPUs) and Digital Signal Processors (DSPs), processors specialized in matrix operations and signal processing, respectively; and pruning and low-rank factorization techniques, which analyze the network and remove the parts that don’t make a relevant difference to the result.
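Low-rank factorization, for instance, can be sketched with a truncated SVD in NumPy. The matrix below is a made-up stand-in for a weight layer; the point is the parameter count, not approximation quality on random data (real trained layers are often close to low-rank, which is what makes the technique pay off):

```python
import numpy as np

# Hypothetical 64x64 weight matrix (4,096 parameters).
rng = np.random.default_rng(1)
W = rng.standard_normal((64, 64))

# Truncated SVD keeps only the r strongest components.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
r = 8
A = U[:, :r] * s[:r]   # 64 x r
B = Vt[:r, :]          # r x 64
W_approx = A @ B       # same shape as W, built from far fewer numbers

# Two thin matrices replace one big one: 2 * 64 * 8 = 1,024 parameters.
print(A.size + B.size, W.size)  # 1024 4096
```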

Techniques like these to shrink and accelerate neural networks could make it possible to run Gen AI on edge devices in the near future.

The killer applications that could be unleashed soon 

Smarter automations

By combining Gen AI running locally (on devices, or within networks in the home, office, or car) with the various IoT sensors connected to them, it will be possible to perform data fusion at the edge. For example, smart sensors paired with devices could listen to and understand what’s happening in your environment, creating an awareness of context and enabling intelligent actions to happen on their own: automatically turning down music playing in the background during incoming calls, turning on the AC or heat when it becomes too hot or cold, and other automations that occur without a user programming them.
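A toy sketch of such context-aware fusion, with entirely hypothetical event, sensor, and action names, might look like this:

```python
# Toy sketch of local, rule-based data fusion at the edge.
# All event, sensor, and action names are hypothetical illustrations.

def automate(event: str, state: dict) -> list:
    """Map a local event plus current sensor state to device actions."""
    actions = []
    # Fuse call events with audio state: duck the music for a call.
    if event == "incoming_call" and state.get("music_playing"):
        actions.append("lower_music_volume")
    # React to an uncomfortable room temperature.
    if state.get("room_temp_c", 21.0) > 27.0:
        actions.append("turn_on_ac")
    return actions

print(automate("incoming_call", {"music_playing": True, "room_temp_c": 29}))
# ['lower_music_volume', 'turn_on_ac']
```

The promise of Gen AI at the edge is that rules like these would be inferred from context rather than hand-written, but the decision loop staying entirely local is the same.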

Public safety 

From a public-safety perspective, there’s a lot of potential to improve what we have today by connecting an increasing number of sensors in our cars to sensors in the streets so they can intelligently communicate and interact with us on local networks connected to our devices. 

For example, for an ambulance trying to reach a hospital with a patient who needs urgent care to survive, a connected intelligent network of devices and sensors could automate traffic lights and in-car alerts to make room for the ambulance to arrive on time. This type of connected, smart system could be tapped to “see” and alert people if they are too close together in the case of a pandemic such as COVID-19, or to understand suspicious activity caught on networked cameras and alert the police. 

Telehealth 

Extending the Apple Watch model to LLMs that monitor and provide initial advice on health issues, smart sensors with Gen AI at the edge could make it easier to identify potential problems: unusual heart rates, elevated temperature, or sudden falls followed by limited to no movement. Paired with video monitoring for those who are elderly or sick at home, Gen AI at the edge could send urgent alerts to family members and physicians, or provide healthcare reminders to patients.

Live events + smart navigation

IoT networks paired with Gen AI at the edge have great potential to improve the experience at live events, such as concerts and sports, in big venues and stadiums. For those without floor seats, the combination could let them tap into a networked camera and watch the live event from a particular angle and location, or even re-watch a moment or play instantly, much as a TiVo-like recording device paired with a TV allows today.

That same networked intelligence in the palm of your hand could help navigate large venues – from stadiums to retail malls – to help visitors find where a specific service or product is available within that location simply by asking for it. 

While these innovations are at least a few years out, a sea change lies ahead: valuable new services can roll out once the technical challenges of shrinking LLMs for local devices and networks have been addressed. Given the added speed, the boost in customer experience, and the reduced privacy and security concerns of keeping it all local rather than in the cloud, there’s a lot to love.

JFrog introduces native integrations with developer tools at KubeCon
https://sdtimes.com/devops/jfrog-has-introduced-native-integrations-with-developer-tools-at-kubecon/ (Wed, 08 Nov 2023)

JFrog, a company that powers organizations to build, distribute, and automate software updates to the edge, has introduced native integrations with developer tools like Atlassian, Datadog, and Splunk at KubeCon + CloudNativeCon North America 2023 Chicago. The company also enhanced its own platform to support secure application development in the cloud. 

With the growing shift towards the cloud, organizations are under pressure to scale rapidly, and JFrog’s integrations aim to address concerns about software supply chain security. The company emphasizes its commitment to innovation and investment in its global partner ecosystem.

The new JFrog Security within Jira Cloud allows JFrog security data to be integrated into Jira, making vulnerability management, application security, and compliance an integral part of developers’ workflows. It enhances collaboration and automation to ensure trusted releases at scale, and it is currently available in beta.

JFrog Workers, available in open beta for JFrog SaaS customers, offers a serverless execution environment for managing JFrog and third-party execution flows. It allows the creation and execution of custom scripts to further automate and connect developer workflows securely.

Other capabilities include PagerDuty Security Incident Alerts, part of JFrog Xray’s integration with PagerDuty; Datadog Log Analytics; and out-of-the-box log streaming to Datadog and Splunk for JFrog SaaS customers, available in open beta in Q4 ’23.

“The increasing complexity of today’s software ecosystems requires best-of-breed integrations between developer tools to help accelerate time to market without compromising security,” said Gal Marder, executive vice president of strategy at JFrog. “To truly protect your software supply chain you need to consider code both in development and in production at the binary level. I look forward to further collaborating with our partners on solutions and go-to-market strategies that provide significant value to our customers wanting to migrate and innovate securely in the cloud.”

Microsoft Bing AI moves to Open Preview, eliminating waitlist
https://sdtimes.com/microsoft/microsoft-bing-ai-moves-to-open-preview-eliminating-waitlist/ (Thu, 04 May 2023)

Microsoft announced that it is opening Bing’s new AI chat feature to more people by moving from limited preview to open preview and eliminating the waitlist for trial as part of its initiative for the next generation of AI-powered Bing and Edge. Users can simply sign into Bing with their Microsoft account.

Microsoft also announced that it’s moving from text-only search & chat to one that’s more visual with rich image/video answers and new multimodal support coming shortly. Users can get more visual answers including charts and graphs and updated formatting of answers, to help them find information more easily. Image Creator has also been expanded to all languages in Bing.

Microsoft Edge will be redesigned with a sleeker and enhanced UI and is adding the ability to incorporate visual search in chat so that users can upload images and search the web for related content.

Chat history allows users to pick up where they left off and return to previous conversations in Bing chat. Chats can be moved to the Edge Sidebar to keep them on hand while browsing.

Microsoft stated that it will soon add export and share functionalities into chat for times when people want to easily share conversations with others on social media.

“The new AI-powered Bing has already helped people more easily find or create what they are looking for, making chat a great tool for both understanding and taking action. The integration of Image Creator saves you time by completing the task of creating the image you need right within chat,” wrote Yusuf Mehdi, corporate vice president and consumer chief marketing officer, in a blog post that contains additional details on the new features.

SD Times Open-Source Project of the Week: Luos
https://sdtimes.com/software-development/sd-times-open-source-project-of-the-week-luos/ (Fri, 16 Sep 2022)

Luos is a lightweight open-source library that enables developers to build and scale distributed edge and embedded software.

Developers can create portable, scalable packages to share with teams and communities, and the project’s engine encapsulates embedded features in services with APIs, providing direct access to hardware.

Remote control enables users to access the topology and routing table from anywhere, and they can monitor their devices with several SDKs, including Python, TS, and a browser app, with others coming soon. Luos detects all services in a system and allows one to access and adapt to any feature anywhere.

“Most of the embedded developments are made from scratch. By using the Luos engine, you will be able to capitalize on the development you, your company, or the Luos community already did. The re-usability of features encapsulated in Luos engine services will fasten the time your products reach the market and reassure the robustness and the universality of your applications,” the developers behind the project wrote on its website. 

Additional features that Luos can power include event-based polling, service alias management, data auto-update, self-healing, and more.

Akka switches to Business Source License version 1.1
https://sdtimes.com/software-development/akka-switches-to-business-source-license-version-1-1/ (Wed, 07 Sep 2022)

Lightbend announced that it is switching the license for Akka, a set of open-source libraries for designing scalable, resilient systems that span cores and networks.

The project had run under the Apache 2.0 license, which, while still the de facto license for the open-source community, has become increasingly risky when a small company solely carries the maintenance effort, according to Jonas Bonér, CEO and founder of Lightbend, in a blog post.

The new license, Business Source License (BSL) v1.1, freely allows for using code for development and other non-production work such as testing. Production use of the software now requires a commercial license from Lightbend, the company behind Akka. 

“Sadly, open source is prone to the infamous ‘Tragedy of the commons’, which shows that we are prone to act in our self-interest, contrary to the common good of all parties, abdicating responsibility if we assume others will take care of things for us. This situation is not sustainable and one in which everyone eventually loses,” Bonér wrote. “So what does sustainable open source look like? I believe it’s where everyone—users and developers—contributes and are in it together, sharing accountability and ownership.”

Bonér added that BSL v1.1 provides an incentive for large businesses to contribute back to Akka and to Lightbend. 

The BSL v1.1 license also carries an additional usage grant to cover open-source usage of Akka, such as in the Play Framework, and each release will revert to Apache 2.0 after three years.

The commercial license for Akka will be available at no charge for companies with less than $25 million in annual revenue.

“By enabling early-stage companies to use Akka in production for free, we hope to continue to foster the innovation synonymous with the startup adoption of Akka,” Bonér wrote. 

Moving forward, Akka will also gain new short-term features, security fixes, JDK and Scala support, and long-term innovation projects such as Akka Edge, which provides a feature set for building edge-native applications. 

Section’s new Kubernetes Edge interface allows organizations to deploy apps to the edge
https://sdtimes.com/softwaredev/sections-new-kubernetes-edge-interface-allows-organizations-to-deploy-apps-to-the-edge/ (Tue, 05 Apr 2022)

Section announced a new Kubernetes Edge Interface (KEI) to allow organizations to deploy application workloads across a distributed edge as if it were a single cluster. 

With the new interface, development teams can use familiar tools such as kubectl or Helm and deploy applications to a multi-cloud, multi-region and multi-provider network. 

Section’s patented Adaptive Edge Engine (AEE) employs policy-driven controls to automatically tune, shape and optimize application workloads in the background across Section’s Composable Edge Cloud.

“Edge deployment is simply better than centralized data centers or single clouds in most every important metric – performance, scale, efficiency, resilience, usability, etc.,” said Stewart McGrath, Section’s CEO. “Yet organizations historically put off edge adoption because it’s been complicated. With Section’s KEI, teams don’t have to change tools or workflows; the distributed edge effectively becomes a cluster of Kubernetes clusters and our AEE automation and Composable Edge Cloud handles the rest.”

Developers can use it to configure service discovery (routing users to the best container instance), define complex applications such as composites of many containers, set system resource allocations, and much more.
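In practice that means ordinary Kubernetes artifacts deployed with familiar commands. The sketch below is a standard Deployment manifest; the app name and image are placeholders, and KEI-specific location-targeting options are not shown:

```yaml
# Ordinary Kubernetes manifest; under KEI, the same spec is deployed
# across the distributed edge as if it were one cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-edge          # hypothetical app name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-edge
  template:
    metadata:
      labels:
        app: hello-edge
    spec:
      containers:
      - name: web
        image: nginx:1.25   # placeholder workload
        ports:
        - containerPort: 80
```

Applied with kubectl apply -f hello-edge.yaml (assuming, as the article implies, that KEI exposes a kubeconfig-compatible endpoint), the workload would be scheduled across Section’s distributed edge rather than a single cluster.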

SD Times Open-Source Project of the Week: WireMock
https://sdtimes.com/softwaredev/sd-times-open-source-project-of-the-week-wiremock/ (Fri, 10 Dec 2021)

WireMock is a simulator for HTTP-based APIs that enables users to stay productive when an API that one depends on doesn’t exist or is incomplete. It supports the testing of edge use cases and failure modes that the real API won’t reliably produce. 

The company behind the project, MockLab, was recently acquired by UP9. The rapid growth of microservice adoption and the booming API economy have driven WireMock’s popularity to 1.6 million monthly downloads.

“The number of APIs created every day is growing exponentially. Developers need tools to ensure the reliability and security of their APIs, while still staying productive,” said Alon Girmonsky, CEO and co-founder of UP9. “WireMock is a significant player in the API economy, and by combining it with UP9’s existing API monitoring and traffic analysis capabilities, modern cloud-native developers can now develop faster and find problems quicker.”

Users can run WireMock from within their Java application, JUnit test, or Servlet container, or as a standalone process.

The project can also match request URLs, methods, headers, cookies, and bodies using a wide variety of strategies. 
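In standalone mode, stub mappings like these can also be registered over WireMock’s JSON admin API by POSTing to /__admin/mappings. A small Python sketch follows; the admin endpoint and mapping format are WireMock’s, while the host, port, and stub contents are illustrative assumptions, and the POST is defined but not executed here:

```python
import json
from urllib import request

# A WireMock stub mapping: match GET /users/42, return a canned JSON body.
mapping = {
    "request": {"method": "GET", "url": "/users/42"},
    "response": {
        "status": 200,
        "headers": {"Content-Type": "application/json"},
        "jsonBody": {"id": 42, "name": "Ada"},
    },
}

def register_stub(mapping, host="http://localhost:8080"):
    """POST the mapping to a running standalone WireMock's admin API."""
    req = request.Request(
        host + "/__admin/mappings",
        data=json.dumps(mapping).encode(),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)  # requires a running WireMock instance

print(json.dumps(mapping["request"]))  # {"method": "GET", "url": "/users/42"}
```

Once registered, any GET to /users/42 on the mock server returns the canned response, letting tests run without the real API.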

WireMock is distributed via Maven Central and can be included in your project using common build tools’ dependency management.

“With the rise in popularity of microservices along with supplier, partner and cloud APIs as essential building blocks of modern software, developers need tools that help manage the complexity and uncertainty this brings,” said Tom Akehurst, creator of WireMock and CTO of UP9. “WireMock allows developers to quickly create mocks (or simulations) of APIs they depend on, allowing them to keep building and testing when those APIs haven’t been built yet, don’t provide (reliable!) developer sandboxes, or cost money to call. It simulates faults and failure modes that are hard to create on demand and can be used in many environments, from unit test on a laptop all the way up to a high-load stress test.”

Additional details on WireMock are available here.

Developers are gaining more tools for the edge
https://sdtimes.com/iot/developers-are-gaining-more-tools-for-the-edge/ (Mon, 04 Oct 2021)

The edge is growing, and cloud providers know it. That’s why they’re creating more tools to help with embedded programming. 

According to IDC’s research, edge computing is growing: 73% of companies in 2021 said edge computing is a strategic initiative for them and that they are already making investments to adopt it. Last year especially saw a lot of that growth, according to Dave McCarthy, research vice president of Cloud and Edge Infrastructure Services at IDC.

Major cloud providers have already realized the potential of the technology and are adding edge capabilities to their toolkits, changing the way developers can build for it.

“AWS was trying to ignore what was happening in the on-premises and edge world thinking that everything would go to the cloud,” McCarthy said. “So they finally kind of realized that in some cases, cloud technologies, the cloud mindset, I think works in a lot of different places, but the location of where those resources are has to change.”

For example, in December 2020, AWS came out with AWS Wavelength, a service that enables users to deliver ultra-low latency applications for 5G devices. In a way, AWS is embedding some of its cloud platform inside telco networks such as Verizon's, McCarthy explained.

Also, last year, AWS rewrote Greengrass, an open-source edge runtime, to be friendlier to cloud-native environments. Meanwhile, Microsoft is doing the same with its own IoT platform.

“This distribution of infrastructure is becoming more and more relevant. And the good news for developers is it gives them so much more flexibility than they had in the past; flexibility about saying, I don’t have to compromise anymore because my cloud native kind of development strategy is limited to certain deployment locations. I can go all-in on cloud native, but now I have that freedom to deploy anywhere,” McCarthy said. 

Development for these types of devices has also significantly changed since its early stages. 

At first, embedded systems consisted of intelligent devices that gathered information about the world. Then AI was introduced, and all of that acquired data began being processed in the cloud. Now, edge computing is about moving real-time analysis to the edge itself.

“Where edge computing came in was to marry the two worlds of IoT and AI or just this intelligence system concept in general, but to do it completely autonomously in these locations,” McCarthy said. “Not only were you collecting that data, but you had the ability to understand it and take action, all within that sort of edge location. That opened the door to so many more things.”

In the early days of the embedded software world, nearly everything was unique, which required specialized frameworks and a firm understanding of how to develop for embedded operating systems. That has now changed with the adoption of standardized development platforms, according to McCarthy.

Support for edge deployments

A lot more support for deployments at the edge can now be seen in cloud native and container-based applications.

“The fact that the industry, in general, has started to align around Kubernetes as being the main orchestration platform for being able to do this just means that now it’s easier for developers to think about building applications using that microservices mindset, they’re putting that code in containers with the ability to place those out at the edge,” McCarthy said. “Before, if you were an embedded developer, you had to have this specialized skill set. Now, this is becoming more available to a wider set of developers that maybe didn’t have that background.”
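As a rough illustration of that microservices mindset, a standard Kubernetes Deployment manifest can pin a containerized workload to edge nodes with an ordinary node selector. The node label, image name, and resource limits below are placeholders, not a prescribed convention:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: anomaly-detector            # hypothetical edge workload
spec:
  replicas: 1
  selector:
    matchLabels: {app: anomaly-detector}
  template:
    metadata:
      labels: {app: anomaly-detector}
    spec:
      nodeSelector:
        node-role.kubernetes.io/edge: ""   # assumes edge nodes carry this label
      containers:
        - name: detector
          image: registry.example.com/anomaly-detector:1.4   # placeholder image
          resources:
            limits: {memory: 256Mi, cpu: 250m}   # edge boxes are resource-constrained
```

The same manifest format works against a datacenter cluster or a small edge-oriented distribution such as K3s, which is much of the flexibility McCarthy describes.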

Some of the more traditional enterprise environments, like VMware or Red Hat, also have been looking at how to extend their platforms to the edge. Their strategy, however, has been to take their existing products and figure out how to make them more edge-friendly. 

In many cases, that means supporting smaller configurations and handling situations where the edge environment might be disconnected.

This is different from the approach of a company like SUSE, which has a strategy of creating edge-specific offerings, according to McCarthy. SUSE Linux Enterprise, for example, now has a micro version that is specifically designed for the edge.

“These are two different ways of tackling the same problem,” McCarthy said. “Either way, I think they’re both trying to attack this from that perspective of let’s create standardization with familiar tools so that developers don’t have to relearn how to do things. In some respects, what you’re doing is abstracting some of the complexity of what might be at the edge, but give them that flexibility of deployment.”

This standardization has proven essential because the further you move toward the edge, the greater the diversity in hardware types. Depending on the types of sensors involved, there can be issues with communication protocols and data formats.

This happens especially in vertical industries such as manufacturing that already have legacy technology that needs to be brought into this new world, McCarthy said. However, this level of uniqueness is becoming rarer than before with less on the unique side and more being standardized. 

Development requirements differ 

Developing for the edge is different from developing for other form factors because edge devices have a longer lifespan than hardware found in a data center, something that has always been true in the embedded world. Developers now have to think about the longer lifespan of both the hardware and the software that sits on top of it.

At the same time, though, the fast pace of today’s development world has driven the demand to deliver new features and functionalities faster, even for these devices, according to McCarthy. 

That's why device management capabilities offered by cloud providers have become prevalent in the edge space, giving enterprises the ability to turn a device off, update its firmware, or change its configuration.

In addition to managing the device life cycle, device management also helps with security, because it offers guidance on what data to pull back to a centralized location versus what can potentially be left out at the edge.
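A minimal sketch of the device-management idea, loosely modeled on the "device twin" pattern cloud IoT platforms use: the cloud holds a desired state, the device reports its actual state, and the difference is the work the on-device agent must do. All names here are illustrative rather than any vendor's API.

```python
# Compare the desired state recorded in the cloud with the state the device
# reports, and apply only the difference. Field names are illustrative.

def plan_updates(desired, reported):
    """Return the settings the on-device agent still has to apply."""
    return {key: value for key, value in desired.items() if reported.get(key) != value}

desired = {"firmware": "2.1.0", "telemetry_interval_s": 30, "powered": True}
reported = {"firmware": "2.0.3", "telemetry_interval_s": 30, "powered": True}

pending = plan_updates(desired, reported)
print(pending)  # the agent would now schedule a firmware update to 2.1.0
```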

“This is so you can get a little bit more of that agility that you’ve seen in the cloud, and try to bring it to the edge,” McCarthy said. “It will never be the same, but it’s getting closer.”

Decentralization a challenge

Developing for the edge still faces challenges due to its decentralized nature, which requires more monitoring and control than a traditional centralized computing model would need, according to Mrudul Shah, the CTO of Technostacks, a mobile app development company in the United States and India.

Connectivity issues can cause major setbacks for operations, and often the data that is processed at the edge is not discarded afterward, which causes unnecessary data stuffing, Shah added.
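One common mitigation is to aggregate at the edge and discard raw data once a window has been summarized, forwarding only the summary plus any anomalous points. A hedged Python sketch, with invented readings and threshold:

```python
# Summarize a window of sensor readings at the edge, keeping only the summary
# and any anomalous points for upload; the raw window can then be discarded.

def summarize_window(samples, threshold):
    """Collapse raw samples into one record, preserving anomalies separately."""
    summary = {
        "count": len(samples),
        "mean": sum(samples) / len(samples),
        "max": max(samples),
    }
    anomalies = [s for s in samples if s > threshold]
    return summary, anomalies

window = [20.1, 20.3, 20.2, 87.5, 20.0]          # one spike among normal readings
summary, anomalies = summarize_window(window, threshold=50.0)
print(summary["count"], anomalies)               # only this much leaves the edge
```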

The demand for application use cases at these different edge environments is certainly extending the need for developers to consider the requirements in that environment for that particular vertical industry, according to Michele Pelino, a principal analyst at Forrester.

Also, the industry has had a lot of device fragmentation, so there is going to be a wide range of vendors that say they can help out with one’s edge requirements. 

“You need to be sure you know what your requirements are first, so that you can really have an apples to apples conversation because they are going to be each of those vendor categories that are going to come from their own areas of expertise to say, ‘of course, we can answer your question,’ but that may not be what you need,” Pelino said. 

Currently, for most enterprise use cases for edge computing, commodity hardware and software will suffice. When sampling rates are measured in milliseconds or slower, the norms are low-power CPUs, consumer-grade memory and storage, and familiar operating systems like Linux and Windows, according to Brian Gilmore, the director of IoT Product Management at InfluxData, the company behind an open-source time series database.

The analytics here are applied to data and events measured in human time, not scientific time, and vendors building for the enterprise edge are likely able to adapt applications and architectures built for desktops and servers to this new form factor.  

“Any developer building for the edge needs to evaluate which of these edge models to support in their applications. This is especially important when it comes to time series data, analytics, and machine learning,” Gilmore said. “Edge autonomy, informed by centralized — currently in the cloud — evaluation and coordination, and right-place right-time task execution in the edge, cloud, or somewhere in between, is a challenge that we, as developers of data analytics infrastructure and applications, take head on.”

No two edge deployments the same

An edge architecture deployment calls for comprehensive monitoring, careful planning, and strategy, as no two edge deployments are the same. It is next to impossible to get IT staff to a physical edge site, so deployments should be designed for remote configuration, providing resilience, fault tolerance, and self-healing capabilities, Technostacks' Shah explained.

In general, a lot of the requirements that developers need to account for will depend on the environment that edge use case is being developed for, according to Forrester’s Pelino. 

“It’s not that everybody is going in one specific direction when it comes to this. So you sort of have to think about the individual enterprise requirements for these edge use cases and applications with their developer approach, and sort of what makes sense,” Pelino said. 

To get started with their edge strategy, organizations need to first make sure that they have their foundation in place, usually starting with their infrastructure, IDC’s McCarthy explained. 

“So it means making sure that you have the ability to place applications where you need so that you have the management and control planes to address the hardware, the data, and the applications,” McCarthy explained. 

Companies also need to layer that framework for future expansion as the technology becomes even more prevalent. 

“Start with the use cases that you need to address for analytics, for insight for different kinds of applications, where those environments need to be connected and enabled, and then say ok, these are the types of edge requirements I have in my organization,” Forrester’s Pelino said. “Then you can speak to your vendor ecosystem about do I have the right security, analytics, and developer capabilities in-house, or do I need some additional help?” 

When adopted correctly, edge environments can provide many benefits.

Low latency is one of the key benefits of computing at the edge, along with the ability to run AI and ML analytics in locations where that might not have been possible before, which can save costs by not sending everything to the cloud.

At the edge, data collection speeds can approach near-continuous analog-to-digital signal conversion outputs of millions of values per second, and maintaining that precision is key to many advanced use cases in signal processing and anomaly detection. In theory, this requires specific hardware and software considerations: FPGAs, ASICs, DSPs, and other custom processors, highly accurate internal clocks, hyper-fast memory, real-time operating systems, and low-level programming that eliminates internal latency, InfluxData's Gilmore explained.
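Even without specialized hardware, the core trade-off Gilmore describes can be sketched in a few lines: decimate a high-rate signal before transmission while keeping the per-block extremes that anomaly detection cares about. This is a toy illustration with made-up samples, not production DSP code:

```python
# Min/max decimation: shrink a high-rate signal for transmission while
# preserving the extremes that anomaly detection depends on.

def decimate_minmax(signal, factor):
    """Keep the (min, max) of each block of `factor` raw samples."""
    return [
        (min(signal[i:i + factor]), max(signal[i:i + factor]))
        for i in range(0, len(signal), factor)
    ]

raw = [0.0, 0.1, -0.1, 9.8, 0.2, 0.0, 0.1, -0.2]  # 9.8 is a transient spike
reduced = decimate_minmax(raw, factor=4)
print(reduced)  # the spike survives the 4x reduction
```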

Despite popular opinion, the edge is beneficial for security

Security has come up as a key challenge for edge adoption because there are more connected assets that contain data, and the physical accessibility of those devices adds another way for them to be hacked. But edge computing can also improve security.

“You see people are concerned about the fact that you’re increasing the attack surface, and there’s all of this chance for somebody to insert malware into the device. And unfortunately, we’ve seen examples of this in the news where devices have been compromised. But, there’s another side of that story,” IDC’s McCarthy said. “If you look at people who are concerned about data sovereignty, like having more control about where data lives and limiting the movement of data, there is another storyline here about the fact that edge actually helps security.”

Security comes into play at many different levels of the edge environment. It is necessary at the point of connecting the device to the network, at the data insight analytics piece in terms of ensuring who gets access to it, and security of the device itself, Forrester’s Pelino explained.

Also, these devices are now operating in global ecosystems, so organizations need to determine if they match the regulatory requirements of that area. 

Security capabilities to address many of these concerns are now coming from the cloud providers, and chipset manufacturers also offer different levels of security in their components.

In edge computing, any data traversing the network back to the cloud or data center can also be secured through encryption against malicious attacks, Technostacks’ Shah added. 
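Transport encryption is typically handled by TLS; a complementary, easily illustrated step is authenticating each payload with an HMAC so the receiving side can reject tampered records. A sketch with a made-up per-device key and field names:

```python
import hashlib
import hmac
import json

# Authenticate each telemetry record with an HMAC so the receiving end can
# reject tampered payloads. The key is illustrative; transport encryption
# itself would normally come from TLS.

DEVICE_KEY = b"per-device-secret-provisioned-at-manufacture"

def sign(payload):
    """Attach an HMAC-SHA256 tag computed over the canonical JSON body."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEY, body, hashlib.sha256).hexdigest()
    return {"body": payload, "mac": tag}

def verify(message):
    """Recompute the tag and compare in constant time."""
    body = json.dumps(message["body"], sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["mac"])

msg = sign({"sensor": "temp-7", "value": 21.4})
print(verify(msg))  # True; changing any field in msg["body"] makes it False
```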

What constitutes edge is now expanding

The edge computing field, in general, is now expanding to fields such as autonomous driving, real-time insight into what’s going on in a plant or a manufacturing environment, or even what’s happening with particular critical systems in buildings or different spaces such as transportation or logistics, according to Pelino. It is growing in any business that has a real-time need or has distributed operations. 

“When it comes to the more distributed operations, you see a lot happening in retail. If you think about typical physical retailers that are trying to close that gap between the commerce world, they have so much technology now being inserted into those environments, whether it’s just the point of sale system, and digital signage, and inventory tracking,” IDC’s McCarthy said.

The edge is being applied to new use cases as well. For example, Auterion builds drones that it provides to fire services. Whenever there's a fire, a drone immediately flies out and sends back footage of the area before the fire department arrives, showing what kind of fire to prepare for and whether there are any people inside. Another new edge use case is the unmanned Boeing MQ-25 aircraft, which can autonomously connect with a fighter jet at over 500 miles per hour.

“While edge is getting a lot of attention it is still not a replacement for cloud or other computing models, it’s really a complement,” McCarthy said. “The more that you can distribute some of these applications and the infrastructure underneath, it just enables you to do things that maybe you were constrained on before.”

Also, with remote work on the rise and businesses aggressively accelerating their use of digital services, edge computing is imperative for a cheaper and more reliable data processing architecture, according to Technostacks' Shah.


Companies are seeing benefits in moving to the edge
Infinity Dish

Infinity Dish, which offers satellite television packages, has adopted edge computing in the wake of the transition to the remote workplace. 

“We’ve found that edge computing offers comparable results to the cloud-based solutions we were using previously, but with some added benefits,” said Laura Fuentes, operator of Infinity Dish. “In general, we’ve seen improved response times and latency during data processing.”

Further, by processing data on a local device, Fuentes added that the company doesn’t need to worry nearly as much when it comes to data leaks and breaches as it did using cloud solutions.

Lastly, the transmission costs were substantially less than they would be otherwise. 

However, Fuentes noted that there were some challenges with the adoption of edge. 

“On the flip side, we have noticed some geographic discrepancies when attempting to process data. Additionally, we had to put down a lot of capital to get our edge systems up and running—a challenge not all businesses will have the means to solve,” Fuentes said.

Memento Memorabilia

Kane Swerner, the CEO and co-founder of Memento Memorabilia, said that as her company began implementing edge throughout the organization, hurdles and opportunities began to emerge. 

Memento Memorabilia is a company that offers private signing sessions to guarantee authentic memorabilia from musicians, celebrities, actors, and athletes to fans.

“We can simply target desired areas by collaborating with local edge data centers without engaging in costly infrastructure development,” Swerner said. “To top it all off, edge computing enables industrial and enterprise-level companies to optimize operating efficiency, improve performance and safety, automate all core business operations, and guarantee availability most of the time.” 

However, she said that one significant worry regarding IoT edge computing devices is that they might be exploited as an entrance point for hackers. Malware or other breaches can infiltrate the whole network via a single weak spot.


There are four critical markers for success at the edge

A recent report by Wind River, a company that provides software for intelligent connected systems, found that there are four critical markers for successful intelligent systems: true compute on the edge, a common workflow platform, AI/ML capabilities, and ecosystems of real-time applications. 

The report “13 Characteristics of an Intelligent Systems Future” surveyed technology executives across various mission-critical industries and revealed the 13 requirements of the intelligent systems world for which industry leaders must prepare. The research found that 80% of these technology leaders desire intelligent systems success in the next five years.

True compute at the edge, by far the largest characteristic in the survey at 25.5% of the total share, is the ability of devices to function fully in near-latency-free mode on the farthest edge of the cloud, for example, a 5G network, an autonomous vehicle, or a highly remote sensor in a factory system.

The report stated that by 2030, $7 trillion of the U.S. economy will be driven by the machine economy, in which systems and business models increasingly engage in unlocking the power of data and new technology platforms. Intelligent systems are helping to drive the machine economy and more fully realize IoT, according to the report. 

Sixty-two percent of technology leaders are putting into place strategies to move to an intelligent systems future, and 16% are already committed, investing, and performing strongly. It’s estimated that this 16% could realize at least four times higher ROI than their peers who are equally committed but not organized for success in the same way. 

The report also found that the two main challenges for adopting an intelligent systems infrastructure are a lack of skills in this field and security concerns. 

“So when we did the simulation work with about 500 executives, and said, look, here are the characteristics, play with them, we got like 4,000-plus simulations, things like common workflow platform, having an ecosystem for applications that matter, were really important parts of trying to break that lack of skill or lack of human resource in this journey,” said Michael Gale, Chief Marketing Officer at Wind River.

For some industries, the move to edge is essential for digital transformation, Gale added. 

“Digital Transformation was an easy construct in finance, retail services business. It’s really difficult to understand in industrial because you don’t really have to have a lot of humans to be part of it. It’s a machine-based environment,” Gale said. “I think it’s a realization intelligence systems model is the transformation moment for the industrial sector. If you’re going to have a full lifecycle intelligence systems business, you’re going to be a leader. If you’re still trying to do old things, and wrap them with intelligent systems, you’re not going to succeed, you have to undergo this full transformational workflow.”

 

Wind River acquires Particle Design https://sdtimes.com/softwaredev/wind-river-acquires-particle-design/ Mon, 20 Sep 2021 16:37:57 +0000 https://sdtimes.com/?p=45306 Wind River has announced that it completed the acquisition of the UI/UX design company Particle Design which brings UI/UX capabilities to the new Wind River Studio offering.  Particle Design offers end-to-end UX research services that employ a range of methodologies  from ethnographic research to user evaluations and usability testing; its design services include prototyping, interaction … continue reading

The post Wind River acquires Particle Design appeared first on SD Times.

]]>
Wind River has announced that it has completed its acquisition of the UI/UX design company Particle Design, which brings UI/UX capabilities to the new Wind River Studio offering.

Particle Design offers end-to-end UX research services that employ a range of methodologies, from ethnographic research to user evaluations and usability testing; its design services include prototyping, interaction design, and wireframing.

RELATED CONTENT: New Wind River Studio release delivers automation across SDLC

The new Wind River Studio is a cloud-native platform for the development, deployment, operations, and servicing of mission-critical intelligent systems through one source. 

The acquisition will expand the UI/UX capabilities to include cognitive UI, which uses AI/ML to predict and anticipate the needs and behaviors of the user, bringing a more contextual, personalized, intelligent assistant-type UX.
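The predictive half of a cognitive UI can be caricatured with a simple frequency model over past user actions; real products would use far richer AI/ML models, and the action names below are invented:

```python
from collections import Counter, defaultdict

# Toy model of the anticipation behind a cognitive UI: count which action each
# action is most often followed by, then surface that as a suggestion.

history = ["open_dashboard", "check_alerts", "open_dashboard", "check_alerts",
           "open_dashboard", "view_logs"]

transitions = defaultdict(Counter)
for prev, nxt in zip(history, history[1:]):
    transitions[prev][nxt] += 1

def suggest_next(action):
    """Most frequent follow-up to `action`, or None if it has never been seen."""
    follow_ups = transitions.get(action)
    return follow_ups.most_common(1)[0][0] if follow_ups else None

print(suggest_next("open_dashboard"))  # 'check_alerts'
```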

“In the new intelligent machine economy that we’re enabling with our customers, the user experience is more important than ever. We’re thrilled to welcome the industry-leading Particle design team to Wind River,” said Kevin Dallas, president and CEO of Wind River. “The graphical, natural, and cognitive UI/UX expertise that Particle brings to Wind River Studio will further advance our mission of enabling our customers to realize the AI-infused, digital future of the planet.”
