OpenAI Archives - SD Times
https://sdtimes.com/tag/openai/ | Software Development News

ChatGPT can now include web sources in responses
https://sdtimes.com/ai/chatgpt-can-now-include-web-sources-in-responses/ | Thu, 31 Oct 2024
OpenAI is updating ChatGPT so that its responses include results from the web, bringing the power of the search engine directly into the chat interface.

“This blends the benefits of a natural language interface with the value of up-to-date sports scores, news, stock quotes, and more,” OpenAI wrote in a post.

According to OpenAI, ChatGPT will automatically decide whether a web search is warranted based on the prompt. Users can also directly tell it to search the web by selecting the web search icon under the prompt field.  

Chats will include a link to the web source so that users can visit that site for more information. A new Sources panel on the right-hand side of the chat lists all of the sources.

OpenAI partnered with news and data providers to supply up-to-date information, and with visual designers to create displays for weather, stocks, sports, news, and maps. For instance, asking about the weather produces a graphic showing the five-day forecast, and stock questions include a chart of that stock’s performance.

Some partners OpenAI worked with include Associated Press, Axel Springer, Condé Nast, Dotdash Meredith, Financial Times, GEDI, Hearst, Le Monde, News Corp, Prisa (El País), Reuters, The Atlantic, Time, and Vox Media.

“ChatGPT search connects people with original, high-quality content from the web and makes it part of their conversation. By integrating search with a chat interface, users can engage with information in a new way, while content owners gain new opportunities to reach a broader audience,” OpenAI wrote. 

This feature is available on chatgpt.com, the desktop app, and the mobile app. It is available today to ChatGPT Plus and Team subscribers and people on the SearchGPT waitlist. In the next few weeks it should be available to Enterprise and Edu users, and in the next few months, all Free users will get access as well.

ChatGPT Canvas offers a new visual interface for working with ChatGPT in a more collaborative way
https://sdtimes.com/ai/chatgpt-canvas-offers-a-new-visual-interface-for-working-with-chatgpt-in-a-more-collaborative-way/ | Fri, 04 Oct 2024
OpenAI now offers a more collaborative way of interacting with ChatGPT. ChatGPT Canvas is a new interface for conversations that makes it easier to iterate on a writing or coding project.

When triggered, it opens a separate window where ChatGPT and the user can collaborate side by side. 

“People use ChatGPT every day for help with writing and code. Although the chat interface is easy to use and works well for many tasks, it’s limited when you want to work on projects that require editing and revisions. Canvas offers a new interface for this kind of work,” OpenAI wrote in a post.

With Canvas, users can highlight a specific section of text they want ChatGPT to focus on and receive inline feedback and suggestions on that section, with ChatGPT considering the context of the project as a whole in its response.

ChatGPT Canvas also features a number of shortcuts for specific tasks the user wants ChatGPT to do. Writing shortcuts include suggest edits, adjust the length, change reading level, add final polish, and add emojis. Coding shortcuts include review code, add logs, add comments, fix bugs, and port to a language.

Canvas will open automatically when a prompt is given where working in Canvas might be helpful, such as “Write a blog post about the history of coffee beans.” Users can also specify “use canvas” in their prompt to launch it. 

This announcement marks the first major visual update OpenAI has made to the ChatGPT interface since it was first launched. “Making AI more useful and accessible requires rethinking how we interact with it,” OpenAI wrote. 

It is beginning to roll out to ChatGPT Plus and Team users now, and Enterprise and Edu users will get access to it starting next week. OpenAI also has plans to make it available to free users once the beta is over. 

OpenAI announces Realtime API, prompt caching, and more at DevDay
https://sdtimes.com/ai/openai-announces-realtime-api-prompt-caching-and-more-at-devday/ | Wed, 02 Oct 2024
OpenAI held its annual DevDay conference yesterday, where it announced its Realtime API, as well as features like prompt caching, vision fine-tuning, and model distillation.

The Realtime API is designed for building low-latency, multimodal experiences, and it’s now available as a public beta.

The company shared examples of companies already using the Realtime API: fitness coaching app Healthify uses it to enable more natural conversations with its AI coach, and language learning app Speak uses it to let customers practice conversations in the language they are learning. 

The API supports the six preset voices in ChatGPT’s Advanced Voice Mode, according to OpenAI. 
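To make that configuration concrete, here is a minimal sketch of the kind of JSON event a client might send over the Realtime API’s WebSocket to select one of those preset voices. The event type (`session.update`), the field names, the voice name `alloy`, and the instructions string are all assumptions for illustration, not details confirmed by the article:

```python
import json

def session_update_event(voice: str) -> str:
    """Serialize a session-configuration event as it might be sent over the
    Realtime API's WebSocket. The event type and field names here are
    assumptions drawn from the beta documentation, not a verified contract."""
    event = {
        "type": "session.update",
        "session": {
            "voice": voice,                       # one of the preset voices
            "modalities": ["text", "audio"],
            "instructions": "You are a concise fitness coach.",  # hypothetical
        },
    }
    return json.dumps(event)

msg = session_update_event("alloy")
```

In a real client, this string would be sent on an open WebSocket connection to the Realtime endpoint; the sketch stops at constructing the message.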

Audio input and output have also been added to the Chat Completions API to support voice in use cases that don’t require the low latency benefits of the Realtime API. This enables developers to pass text or audio into GPT-4o and have it respond with text, audio, or both. 
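A Chat Completions request of that kind might be shaped roughly as follows. The model identifier (`gpt-4o-audio-preview`) and the `modalities` and `audio` fields are assumptions based on the announcement, so check the API reference before relying on them; the sketch only constructs the payload rather than calling the API:

```python
import json

def build_audio_request(prompt: str, voice: str = "alloy") -> dict:
    """Build a Chat Completions payload asking for both text and audio output.

    The model name, `modalities`, and `audio` fields are assumptions based on
    OpenAI's announcement; consult the API reference before relying on them.
    """
    return {
        "model": "gpt-4o-audio-preview",    # assumed model identifier
        "modalities": ["text", "audio"],    # request text plus spoken audio
        "audio": {"voice": voice, "format": "wav"},
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_audio_request("Summarize today's top headline in one sentence.")
print(json.dumps(payload, indent=2))
```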

According to the company, the Realtime API and the addition of audio to the Chat Completions API will enable developers to build natural conversational experiences using a single API call, rather than needing to combine multiple models to build those experiences. 

In the future, OpenAI plans to add new modalities such as vision and video, along with increased rate limits, official SDK support, prompt caching, and expanded model support. 

Speaking of prompt caching, that was another feature announced during DevDay. Prompt caching allows developers to reuse recent input tokens to save money and have their prompts processed faster. Cached inputs cost 50% less than uncached tokens, and this functionality is now available by default in the latest versions of GPT-4o, GPT-4o mini, o1-preview, and o1-mini, in addition to fine-tuned versions of them.  
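As a back-of-the-envelope illustration of what that 50% discount means, the helper below estimates input cost when part of a prompt is served from the cache. The per-token price used here is a placeholder, not OpenAI’s published rate:

```python
def input_cost(total_tokens: int, cached_tokens: int,
               price_per_token: float = 2.5e-6) -> float:
    """Estimate input cost when cached tokens are billed at half price.

    `price_per_token` is a placeholder rate, not OpenAI's actual pricing.
    """
    uncached = total_tokens - cached_tokens
    return uncached * price_per_token + cached_tokens * price_per_token * 0.5

# A 10,000-token prompt where 8,000 tokens hit the cache costs the same as
# a 6,000-token fully uncached prompt.
with_cache = input_cost(10_000, 8_000)
no_cache = input_cost(10_000, 0)
```

The upshot: long shared prefixes (system prompts, few-shot examples) are where caching pays off, since only the changing suffix is billed at the full rate.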

Next, it announced fine-tuning for vision in GPT-4o, allowing users to customize the model to have stronger image understanding. This can then be used for scenarios like advanced visual search, improved object detection for autonomous vehicles, or more accurate medical image analysis. 

Through the end of the month, the company will be offering 1 million free training tokens per day for fine-tuning GPT-4o with images. 

And finally, OpenAI announced Model Distillation, which allows developers to use the outputs of more capable models to fine-tune smaller, more cost-efficient models. For example, it enables GPT-4o or o1-preview outputs to be used to improve GPT-4o mini.

Its Model Distillation suite includes the ability to capture and store input-output pairs generated by a model, the ability to create and run evaluations, and integration with OpenAI’s fine-tuning capabilities. 
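The capture-and-fine-tune loop described above can be sketched as follows: input-output pairs recorded from a stronger “teacher” model become chat-format JSONL lines for fine-tuning a smaller model. The helper name and the sample pairs are hypothetical; the chat-message JSONL shape follows OpenAI’s fine-tuning format, but verify the field names against the current reference:

```python
import json

def to_training_line(prompt: str, teacher_output: str) -> str:
    """Convert one captured input-output pair from a larger "teacher" model
    into a chat-format JSONL line for fine-tuning a smaller model."""
    record = {
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": teacher_output},
        ]
    }
    return json.dumps(record)

# Hypothetical pairs captured from o1-preview, destined for GPT-4o mini.
pairs = [("What is 2 + 2?", "4"), ("Name a prime above 10.", "11")]
dataset = "\n".join(to_training_line(p, o) for p, o in pairs)
```

Writing `dataset` to a `.jsonl` file yields something that could be uploaded as a fine-tuning training file.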

This feature can be used now on any of OpenAI’s models, and the company will be offering 2 million free training tokens per day on GPT-4o mini and 1 million free training tokens per day on GPT-4o through the end of the month to encourage people to try it out. 

OpenAI raises $6.6 billion in funding

The day after DevDay, the company announced it had secured $6.6 billion in funding at a valuation of $157 billion. The company didn’t specify the investors in its press release, but CNBC reports that the round was led by Thrive Capital, with participation from Microsoft, NVIDIA, SoftBank, and others.

“The new funding will allow us to double down on our leadership in frontier AI research, increase compute capacity, and continue building tools that help people solve hard problems. We aim to make advanced intelligence a widely accessible resource. We’re grateful to our investors for their trust in us, and we look forward to working with our partners, developers, and the broader community to shape an AI-powered ecosystem and future that benefits everyone. By collaborating with key partners, including the U.S. and allied governments, we can unlock this technology’s full potential,” OpenAI wrote in a statement.

OpenAI Academy launches to support developers using AI in low- and middle-income countries
https://sdtimes.com/ai/openai-academy-launches-to-support-developers-using-ai-in-low-and-middle-income-countries/ | Mon, 23 Sep 2024
OpenAI is attempting to level up developers and enable them to use AI to solve issues in their communities and drive economic growth. 

OpenAI Academy is a new program that will provide technical guidance and support from OpenAI experts, distribute $1 million in API credits (with more potentially being added later), hold contests and incubator programs in partnership with investors, and build a global network of developers to collaborate, share knowledge, and drive innovation together.

According to the company, the Academy will start in low- and middle-income countries for now. “Many countries have fast-growing technology sectors with talented developers and innovative organizations, but access to advanced training and technical resources remains limited. Investing in the development of local AI talent can fuel economic growth and innovation across sectors like healthcare, agriculture, education, and finance,” OpenAI wrote in a post.

In its announcement, OpenAI also highlighted the fact that it has donated API credits and technical assistance to the winners of The Tools Competition, turn.io Chat for Impact contest, and other organizations working to address community challenges around the world. 

It also translated the Massive Multitask Language Understanding (MMLU) benchmark into 14 languages: Arabic, Bengali, Chinese, French, German, Hindi, Indonesian, Italian, Japanese, Korean, Portuguese, Spanish, Swahili, and Yoruba. 

“Supporting those who understand the unique cultures, economies, and social dynamics of their communities will help ensure that AI applications are tailored to meet local needs. Developers and organizations are key to making artificial intelligence more widely accessible and enabling people around the world—regardless of where they live or what language they speak—to use the technology to solve hard problems,” the company concluded.

OpenAI announces changes to its safety and security practices based on internal evaluations
https://sdtimes.com/ai/openai-announces-changes-to-its-safety-and-security-practices-based-on-internal-evaluations/ | Tue, 17 Sep 2024
Back in May, OpenAI announced that it was forming a new Safety and Security Committee (SSC) to evaluate its current processes and safeguards and make recommendations for changes to make. When announced, the company said the SSC would do evaluations for 90 days and then present its findings to the board.

Now that the process has been completed, OpenAI is sharing five changes it will be making based on the SSC’s evaluation. 

First, the SSC will become an independent oversight committee on the OpenAI board to continue providing independent governance on safety and security. The board committee will be led by Zico Kolter, director of the machine learning department in the School of Computer Science at Carnegie Mellon University. Other members will include Adam D’Angelo, co-founder and CEO of Quora; Paul Nakasone, a retired US Army General; and Nicole Seligman, former EVP and general counsel of Sony Corporation. 

The board committee has already reviewed the safety of the o1 release and will continue reviewing future releases both during development and after launch. It will also have oversight of model launches, with the power to delay a release over safety concerns until those concerns have been sufficiently addressed. 

Second, the SSC will work to advance the company’s security measures by expanding internal information segmentation, adding staffing to deepen around-the-clock security operations teams, and continuing to invest in things that enhance the security of the company’s research and product infrastructure.

“Cybersecurity is a critical component of AI safety, and we’ve been a leader in defining the security measures that are needed for the protection of advanced AI. We will continue to take a risk-based approach to our security measures, and evolve our approach as the threat model and the risk profiles of our models change,” OpenAI wrote in a post.

The third recommendation is that the company be more transparent about the work it is doing. It already produces system cards that detail the capabilities and risks of models, and will continue evaluating new ways to share and explain safety work. 

Its system cards for the GPT-4o and o1-preview releases included the results of external red teaming, results of frontier risk evaluations within the Preparedness Framework, and an overview of risk mitigations built into the systems.

Fourth, it will explore new ways to independently test its systems by collaborating with more external companies. For instance, OpenAI is building new partnerships with safety organizations and non-governmental labs to conduct model safety assessments. 

It is also working with government organizations such as Los Alamos National Laboratory to study how AI can be used safely in labs to advance bioscientific research.

OpenAI also recently made agreements with the U.S. and U.K. AI Safety Institutes to work on researching emerging AI safety risks.

The final recommendation by the SSC is to unify the company’s safety frameworks for model development and monitoring. 

“Ensuring the safety and security of our models involves the work of many teams across the organization. As we’ve grown and our work has become more complex, we are building upon our model launch processes and practices to establish an integrated safety and security framework with clearly defined success criteria for model launches,” said OpenAI.

The framework will be based on risk assessments by the SSC and will evolve as complexity and risks increase. To help with this process, the company has already reorganized its research, safety, and policy teams to improve collaboration. 

OpenAI’s o1 reasoning models are a significant step forward in complex reasoning
https://sdtimes.com/ai/openais-o1-reasoning-models-are-a-significant-step-forward-in-complex-reasoning/ | Thu, 12 Sep 2024
OpenAI has released the first preview for OpenAI o1, a new series of AI reasoning models that are able to handle more complex tasks than previous models. This is because they spend more time thinking through the problem before responding. 

“We trained these models to spend more time thinking through problems before they respond, much like a person would. Through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes,” OpenAI wrote in a post.

OpenAI claims that these models perform similarly to a PhD student on physics, chemistry, and biology-related tasks. They are also highly capable at solving math and coding problems. For instance, they could be used by healthcare researchers to annotate cell sequencing data, by physicists to generate quantum optics formulas, or by developers to build and execute multi-step workflows. 

In testing, the o1 model correctly solved 83% of the problems on the International Mathematics Olympiad qualifying exam, while GPT-4o solved only 13%. o1 also scored in the 89th percentile in Codeforces competitions.

The company said the models represent such a significant step forward in complex reasoning, and such a new level of AI capability, that they inspired the name. “Given this, we are resetting the counter back to 1 and naming this series OpenAI o1,” OpenAI said. 

It is being released as a preview in ChatGPT and the OpenAI API, but it doesn’t quite have all of the features that typically make ChatGPT useful, like browsing the internet for information or uploading files. For those common use cases, OpenAI says that GPT-4o is more capable for now.

OpenAI is also releasing o1-mini, which is smaller, faster, and 80% cheaper than o1. It is advantageous for scenarios where complex reasoning is needed but real-world knowledge isn’t necessary. 

ChatGPT Plus and Team users can now access the o1 preview models, but are limited to 30 messages for o1-preview and 50 for o1-mini. The company says it will work to increase those limits and to enable ChatGPT to automatically select the right model for a prompt (for now, o1 must be selected manually).

“This is an early preview of these reasoning models in ChatGPT and the API. In addition to model updates, we expect to add browsing, file and image uploading, and other features to make them more useful to everyone. We also plan to continue developing and releasing models in our GPT series, in addition to the new OpenAI o1 series,” the company concluded. 

OpenAI launches fine-tuning for GPT-4o
https://sdtimes.com/ai/openai-launches-fine-tuning-for-gpt-4o/ | Tue, 20 Aug 2024
Developers will now be able to fine-tune GPT-4o to get more customized responses that are suited to their unique needs.

With fine-tuning, GPT-4o can be improved using custom datasets, resulting in better performance at a lower cost, according to OpenAI.

For example, developers can use fine-tuning to customize the structure and tone of a GPT-4o response, or have it follow domain-specific instructions.

According to OpenAI, developers can start seeing results with as few as a dozen examples in a training data set.

“From coding to creative writing, fine-tuning can have a large impact on model performance across a variety of domains. This is just the start—we’ll continue to invest in expanding our model customization options for developers,” OpenAI wrote in a blog post.
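As a rough sketch of what kicking off such a customization might look like, the snippet below builds a fine-tuning job request as a plain dictionary. The model snapshot name, the `suffix` label, and the file ID are all assumptions for illustration; check the fine-tuning API reference for the real parameters:

```python
def build_finetune_job(training_file_id: str,
                       model: str = "gpt-4o-2024-08-06") -> dict:
    """Shape of a fine-tuning job request. The model snapshot name and the
    `suffix` field are assumptions; verify against the API reference."""
    return {
        "training_file": training_file_id,  # ID returned by the Files API
        "model": model,
        "suffix": "support-tone-v1",        # hypothetical custom-model label
    }

job = build_finetune_job("file-abc123")     # placeholder file ID
```

In practice the training file would first be uploaded via the Files API, and the resulting ID passed here in place of the placeholder.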

The company also explained that it has put in place safety guardrails for fine-tuned models to prevent misuse. It is continuously running safety evaluations and monitoring usage to ensure that these models are meeting its policies for use.

In addition, the company announced it would be giving companies 1 million free training tokens per day until September 23.



OpenAI adds support for Structured Outputs for JSON in its API
https://sdtimes.com/ai/openai-adds-support-for-structured-outputs-for-json-in-its-api/ | Wed, 07 Aug 2024
OpenAI is updating its API with support for Structured Outputs to ensure that the outputs of its models match the JSON Schemas provided by developers. 

According to OpenAI, one of the core use cases for AI today is generating structured data from unstructured inputs, but previously, developers needed to utilize open source tools, specific prompts, or simply retry requests until the output matched the format they were looking for. 

Structured Outputs will make this process easier by forcing the models to match the schema provided by the developers. 

Some sample use cases for using Structured Outputs include generating UIs based on user intent, separating an answer from its supporting reasoning or commentary, or extracting structured data from unstructured sources, like meeting notes or to-do lists. 

Developers can access this new functionality in one of two ways. First, they can set strict: true in the tool definition when writing function calls. The second option is to supply a JSON Schema using the new json_schema option in the response_format parameter; this option is ideal for situations where the model is responding to the user rather than calling a tool.
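Both routes can be sketched as request fragments. The tool name, schema, and field values below are hypothetical, though the `strict` flag and the `json_schema` response format follow the shapes described above:

```python
# Option 1: strict function calling, with `strict: true` on the tool definition.
tool = {
    "type": "function",
    "function": {
        "name": "create_todo",            # hypothetical tool
        "strict": True,
        "parameters": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "due": {"type": "string"},
            },
            "required": ["title", "due"],
            "additionalProperties": False,
        },
    },
}

# Option 2: a JSON Schema in the `response_format` parameter, for replies
# that go straight to the user rather than into a tool call.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "todo_reply",             # hypothetical schema name
        "strict": True,
        "schema": tool["function"]["parameters"],
    },
}
```

Either fragment would be passed in a Chat Completions request; with `strict` set, the model’s output is constrained to match the schema exactly.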

OpenAI said that Structured Outputs follow the company’s existing safety policies, enabling the models to continue refusing unsafe requests when using this feature.

Structured Outputs is generally available in the API now. In addition, OpenAI’s Python and Node SDKs have both been updated with native support for this new functionality. 



OpenAI starts rolling out advanced Voice Mode to ChatGPT Plus users
https://sdtimes.com/ai/openai-starts-rolling-out-advanced-voice-mode-to-chatgpt-plus-users/ | Wed, 31 Jul 2024
OpenAI has announced that it is starting to roll out its advanced Voice Mode to a select group of ChatGPT Plus users. 

According to the company, this new mode “offers more natural, real-time conversations, allows you to interrupt anytime, and senses and responds to your emotions.”

Advanced Voice Mode was tested by over 100 external red teamers across 45 languages. 

It was first announced in May, and since then OpenAI has been working on reinforcing the safety and quality of voice conversations. 

When it was first announced, the company received backlash because one of the voices, named Sky, sounded very similar to Scarlett Johansson. The company’s CEO Sam Altman had previously reached out to Johansson asking if she would provide her voice (as a nod to the movie Her), but she said no. When the voice was released, however, it bore a clear resemblance to hers, and her legal team demanded that OpenAI reveal how the voice was developed. 

“When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference,” Johansson said in a statement at the time. “Mr. Altman even insinuated that the similarity was intentional, tweeting a single word, ‘her’ — a reference to the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human.”

Following the backlash, OpenAI took down the voice, which was one of five voice options, the others being Breeze, Cove, Ember, and Juniper. OpenAI says it partnered with voice actors in 2023 to record the voices, that the actress for Sky had already been selected when Altman reached out to Johansson, and that Johansson would have recorded a sixth voice had she agreed. 

The actors were selected on a number of criteria, including diverse backgrounds, multilingual ability, and a voice that feels timeless, sounds approachable, and is easy to listen to. 

OpenAI expects that advanced Voice Mode will become available to all ChatGPT Plus users by the fall. More information on its training, safety, and usage can be found in the company’s FAQ page.



Microsoft provides guidance for upcoming support of OpenAI library v2 in Semantic Kernel
https://sdtimes.com/msft/microsoft-provides-guidance-for-upcoming-support-of-openai-library-v2-in-semantic-kernel/ | Tue, 30 Jul 2024
Last month, Microsoft announced an official .NET library for OpenAI, which included full support for the OpenAI API. 

Now, the company is revealing that its Semantic Kernel team has been working on upgrading its connectors to use version 2 of the OpenAI library and Azure.AI.OpenAI library. 

According to the company, there were significant updates to the underlying APIs in the upgrade from v1 to v2, which will result in breaking changes that might impact Semantic Kernel developers using the library. 

Abstractions in Semantic Kernel isolate code from a majority of the changes, but there are still some that are unavoidable. Developers will need to update the name of the library they are importing because the names of the Semantic Kernel connectors have been updated to reflect that there are now two libraries that connect to OpenAI models. The new names are Microsoft.SemanticKernel.Connectors.OpenAI and Microsoft.SemanticKernel.Connectors.AzureOpenAI.

Other changes that may need to be made can be found in Microsoft’s blog post.

“Uptaking a major update can be challenging, but we in the Semantic Kernel team want to make it as painless as possible. As we get closer to adopting the new v2 libraries, we will provide a detailed migration guide to help you with the process of upgrading your code,” Mark Wallace, principal software engineer for Semantic Kernel, wrote in the blog post. 


