chatgpt Archives - SD Times (Software Development News) https://sdtimes.com/tag/chatgpt/

ChatGPT can now include web sources in responses (31 Oct 2024) https://sdtimes.com/ai/chatgpt-can-now-include-web-sources-in-responses/

OpenAI is updating ChatGPT so that its responses include results from the web, bringing the power of the search engine directly into the chat interface.

“This blends the benefits of a natural language interface with the value of up-to-date sports scores, news, stock quotes, and more,” OpenAI wrote in a post.

According to OpenAI, ChatGPT will automatically decide whether a web search is warranted based on the prompt. Users can also directly tell it to search the web by selecting the web search icon under the prompt field.  

Chats will include a link to the web source so that the user can visit that site for more information. A new Sources panel will display on the right-hand side of the chat with a list of all sources.

OpenAI partnered with specific news and data providers to get up-to-date information and to create visual designs for weather, stocks, sports, news, and maps. For instance, asking about the weather will produce a graphic showing the five-day forecast, and questions about stocks will include a chart of that stock’s performance.

Some partners OpenAI worked with include Associated Press, Axel Springer, Condé Nast, Dotdash Meredith, Financial Times, GEDI, Hearst, Le Monde, News Corp, Prisa (El País), Reuters, The Atlantic, Time, and Vox Media.

“ChatGPT search connects people with original, high-quality content from the web and makes it part of their conversation. By integrating search with a chat interface, users can engage with information in a new way, while content owners gain new opportunities to reach a broader audience,” OpenAI wrote. 

This feature is available on chatgpt.com, the desktop app, and the mobile app. It is available today to ChatGPT Plus and Team subscribers and people on the SearchGPT waitlist. In the next few weeks it should be available to Enterprise and Edu users, and in the next few months, all Free users will get access as well.

ChatGPT Canvas offers a new visual interface for working with ChatGPT in a more collaborative way (4 Oct 2024) https://sdtimes.com/ai/chatgpt-canvas-offers-a-new-visual-interface-for-working-with-chatgpt-in-a-more-collaborative-way/

OpenAI now offers a more collaborative way of interacting with ChatGPT. ChatGPT Canvas is a new interface for conversations that makes it easier to iterate on a writing or coding project.

When triggered, it opens a separate window where ChatGPT and the user can collaborate side by side. 

“People use ChatGPT every day for help with writing and code. Although the chat interface is easy to use and works well for many tasks, it’s limited when you want to work on projects that require editing and revisions. Canvas offers a new interface for this kind of work,” OpenAI wrote in a post.

With Canvas, a user can highlight a specific section of text for ChatGPT to focus on and then receive inline feedback and suggestions on that section; ChatGPT will consider the context of the project as a whole in its response.

ChatGPT Canvas also features a number of shortcuts for specific tasks the user wants ChatGPT to do. Writing shortcuts include suggest edits, adjust the length, change reading level, add final polish, and add emojis. Coding shortcuts include review code, add logs, add comments, fix bugs, and port to a language.

Canvas will open automatically when a prompt is given where working in Canvas might be helpful, such as “Write a blog post about the history of coffee beans.” Users can also specify “use canvas” in their prompt to launch it. 

This announcement marks the first major visual update OpenAI has made to the ChatGPT interface since it was first launched. “Making AI more useful and accessible requires rethinking how we interact with it,” OpenAI wrote. 

It is beginning to roll out to ChatGPT Plus and Team users now, and Enterprise and Edu users will get access to it starting next week. OpenAI also has plans to make it available to free users once the beta is over. 

OpenAI starts rolling out advanced Voice Mode to ChatGPT Plus users (31 Jul 2024) https://sdtimes.com/ai/openai-starts-rolling-out-advanced-voice-mode-to-chatgpt-plus-users/

OpenAI has announced that it is starting to roll out its advanced Voice Mode to a select group of ChatGPT Plus users. 

According to the company, this new mode “offers more natural, real-time conversations, allows you to interrupt anytime, and senses and responds to your emotions.”

Advanced Voice Mode was tested by over 100 external red teamers across 45 languages. 

It was first announced in May, and since then OpenAI has been working on reinforcing the safety and quality of voice conversations. 

When it was first announced, the company received backlash because one of the voices, named Sky, sounded very similar to Scarlett Johansson. The company’s CEO Sam Altman had previously reached out to Johansson asking if she would provide her voice (as a nod to the movie Her), but she said no. When the voice came out, however, it bore a clear resemblance to hers, and her legal team demanded that OpenAI reveal how the voice was developed.

“When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference,” Johansson said in a statement at the time. “Mr. Altman even insinuated that the similarity was intentional, tweeting a single word, ‘her’ — a reference to the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human.”

Following the backlash, OpenAI took down the voice, which was one of five voice options, the others being Breeze, Cove, Ember, and Juniper. OpenAI says that it partnered with voice actors in 2023 to record the voices, that the voice actress for Sky had already been selected when Altman reached out to Johansson, and that Johansson would have recorded a sixth voice had she agreed.

The actors were selected against a number of criteria, including diverse backgrounds and multilingual ability, and voices that feel timeless, sound approachable, and are easy to listen to.

OpenAI expects that advanced Voice Mode will become available to all ChatGPT Plus users by the fall. More information on its training, safety, and usage can be found in the company’s FAQ page.


OpenAI taking on Google Search with prototype of SearchGPT (26 Jul 2024) https://sdtimes.com/ai/openai-taking-on-google-search-with-prototype-of-searchgpt/

OpenAI has announced a prototype for its upcoming AI search features that are intended to rival existing search engines.

“Getting answers on the web can take a lot of effort, often requiring multiple attempts to get relevant results. We believe that by enhancing the conversational capabilities of our models with real-time information from the web, finding what you’re looking for can be faster and easier,” OpenAI wrote in a statement.

Google integrated AI into its search engine several months ago, and now an AI Overview sometimes appears at the top of the results page, summarizing information from several sources.

Unlike Google’s offering, SearchGPT will function more like ChatGPT in the sense that it maintains context throughout a conversation, and users will also be able to ask follow-up questions to their search.

Similar to Google’s AI Overview, SearchGPT will provide links to sources when it provides its responses, allowing users to verify the validity of the source or click through the link for more information. 

Google’s AI Overview drew a lot of criticism at launch for sometimes giving incorrect information in its summaries, such as telling people to eat one rock per day to ease digestion or to put glue on pizza, which was traced back to a joke comment on a Reddit thread. Google has said it has made improvements to the system following some of these instances. “We’ve learned a lot over the past 25 years about how to build and maintain a high-quality search experience, including how to learn from these errors to make Search better for everyone. We’ll keep improving when and how we show AI Overviews and strengthening our protections, including for edge cases, and we’re very grateful for the ongoing feedback,” the company said in a statement.

OpenAI said it is partnering with select publishers and creators for SearchGPT, such as The Atlantic, so that it can surface high-quality content in its responses. 

“SearchGPT is designed to help users connect with publishers by prominently citing and linking to them in searches. Responses have clear, in-line, named attribution and links so users know where information is coming from and can quickly engage with even more results in a sidebar with source links,” OpenAI wrote.

The prototype will only be available temporarily to a small group of people, and the feedback from the initial users will be used to incorporate these search features into ChatGPT sometime down the line. 

Improving model safety with Rule-Based Rewards

In addition, OpenAI has been working on improving the safety of its models, and has developed a new method for doing so that utilizes Rule-Based Rewards (RBRs).

Human feedback has typically been used to develop reward models that encourage desired behaviors, but collecting this feedback can be time-consuming and can become outdated if safety policies change.

As an alternative, OpenAI began experimenting with RBRs, which use step-by-step rules to evaluate how well a model is meeting safety standards. This method delivers comparable safety performance to the human feedback method while also cutting down on the number of times a safe request was incorrectly refused, OpenAI explained. 
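To make the mechanism concrete, here is a minimal sketch of rule-based reward scoring in Python. The rule names, string-matching predicates, and weights are invented for illustration and are not OpenAI’s actual rules; the published method composes fine-grained propositions graded by a model rather than simple string checks, but the shape of the computation is similar: each rule contributes a weighted score, and the total becomes a reward signal during fine-tuning.

```python
# Illustrative only: rule names, predicates, and weights are hypothetical,
# not OpenAI's actual RBR implementation.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    name: str
    check: Callable[[str, str], bool]  # (prompt, response) -> satisfied?
    weight: float                      # positive reward or negative penalty

RULES: List[Rule] = [
    # A refusal of an unsafe request should include a brief apology...
    Rule("refusal_has_apology",
         lambda p, r: "sorry" in r.lower() or "apologize" in r.lower(), 1.0),
    # ...and should not lecture or judge the user.
    Rule("refusal_avoids_judgment",
         lambda p, r: "you should be ashamed" not in r.lower(), 1.0),
    # Penalize responses that comply with a request flagged as unsafe.
    Rule("no_unsafe_compliance",
         lambda p, r: not r.lower().startswith("here is how"), 2.0),
]

def rule_based_reward(prompt: str, response: str) -> float:
    """Sum the weighted rule scores; during RL fine-tuning this scalar
    is combined with the usual helpfulness reward signal."""
    return sum(rule.weight for rule in RULES if rule.check(prompt, response))

print(rule_based_reward("<unsafe request>", "I'm sorry, but I can't help with that."))
```

Because the rules are explicit, updating a safety policy means editing the rule set rather than collecting a fresh round of human preference data, which is what makes the approach cheaper to keep current.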

Some of the limitations of RBRs are that they don’t work as well for subjective tasks, like writing, and shifting safety checks from humans to AI can reduce human oversight and amplify biases. As such, the company recommends anyone experimenting with RBRs ensure that they are carefully designed and consider using a combined approach that uses both RBRs and human feedback.

According to OpenAI, it has used this method in the training of GPT-4 and will use it in models going forward as well. 


GPT-4o launches as OpenAI’s newest model (13 May 2024) https://sdtimes.com/ai/gpt-4o-launches-as-openais-newest-model/

OpenAI today rolled out GPT-4o, its newest flagship model that is faster than GPT-4 yet maintains the same level of intelligence and builds on its voice, vision and text capabilities, according to the organization’s announcement.

The new model takes understanding of images to a higher level. In its announcement, OpenAI gave this example: “You can now take a picture of a menu in a different language and talk to GPT-4o to translate it, learn about the food’s history and significance, and get recommendations.”

Future features will improve natural, real-time voice conversations and add the ability to hold ChatGPT conversations over real-time video, OpenAI said, enabling ChatGPT to “see” a live sporting event, for example, and answer the user’s questions about such things as the rules of the sport. A new Voice Mode will be rolling out in an alpha release within weeks.

Among the features ChatGPT Free users will be able to access when using GPT-4o, according to the news announcement, are GPT-4-level intelligence; responses drawing on both the model and the web; data analysis and chart creation; chatting about photos they take; uploading files for help with summarizing, writing, or analysis; discovering and using GPTs and the GPT Store; and a more helpful experience through Memory.

ChatGPT Plus users now get access to GPT-4 Turbo model (12 Apr 2024) https://sdtimes.com/ai/chatgpt-plus-users-now-get-access-to-gpt-4-turbo-model/

OpenAI has announced that paid ChatGPT users now have access to GPT-4 Turbo, which is the company’s most advanced model. The new model improves writing, math, logical reasoning, and coding capabilities. 

According to OpenAI, the use of GPT-4 Turbo will result in more direct and concise results from ChatGPT that use more conversational language compared to previous iterations. 

For example, a prompt asking for a text message you can send to your friends reminding them to RSVP to your birthday dinner would now result in a 23-word response instead of a 51-word response.

GPT-4 Turbo was already available to OpenAI’s Team and Enterprise customers and through the API; it is now available to ChatGPT Plus users as well.

OpenAI has made its benchmarks for GPT-4 Turbo in ChatGPT publicly available.

ChatGPT no longer requires users to sign up (1 Apr 2024) https://sdtimes.com/softwaredev/chatgpt-no-longer-requires-users-to-sign-up/

OpenAI is opening up ChatGPT to more users by enabling use of the platform without needing to create an OpenAI account. 

“It’s core to our mission to make tools like ChatGPT broadly available so that people can experience the benefits of AI. More than 100 million people across 185 countries use ChatGPT weekly to learn something new, find creative inspiration, and get answers to their questions. Starting today, you can use ChatGPT instantly, without needing to sign-up,” OpenAI wrote in a blog post.

According to the company, this capability will be rolled out gradually, so some users may still be prompted to create an account until the rollout is complete.

Even without creating an account, users will have the option to turn off the setting that permits OpenAI to use what they provide in prompts to train ChatGPT, the company confirmed.

However, some downsides to not making an account include not being able to save chat history, share chats, or use certain features like voice and custom instructions.

The company says it is also introducing new content safeguards along with this change. These include blocking prompts and generations for a wider range of categories.
ChatGPT can now read responses out loud (4 Mar 2024) https://sdtimes.com/ai/chatgpt-can-now-read-responses-out-loud/

OpenAI today announced that ChatGPT now has the capability to read responses aloud to you, allowing for a more hands-free experience. 

For example, it could read aloud the ingredients for a recipe so that you can roam around your kitchen as you listen, rather than having to carry your phone around and read from the screen.

On iOS and Android, you can tap and hold ChatGPT’s response and you will see an option to “Read Aloud.” To access this feature on the web, tap the new speaker icon that now shows up under the message. Users can set it to automatically read aloud all of their future messages in a particular conversation as well. 

Users can also ask it to pause, fast forward, or rewind, and can change the preferred voice of the reader. 

This feature is available now on iOS and Android and has begun rolling out on the web. 

Additionally, OpenAI has updated the voice-to-text icon to be a microphone, which it believes will help users find that option more easily. 

Next for Gen AI: Small, hyper-local and what innovators are dreaming up (21 Feb 2024) https://sdtimes.com/ai/next-for-gen-ai-small-hyper-local-and-what-innovators-are-dreaming-up/

In late 2022, ChatGPT had its “iPhone moment” and quickly became the poster child of the Gen AI movement after going viral within days of its release. For LLMs’ next wave, many technologists are eyeing the next big opportunity: going small and hyper-local. 

The core factors driving this next big shift are familiar ones: a better customer experience tied to our expectation of immediate gratification, and more privacy and security baked into user queries. Keeping queries within smaller, local networks, such as the devices we hold in our hands or those in our cars and homes, avoids the round trip to server farms in the cloud and back, and the inevitable lag that comes with it.

While there are some doubts about how quickly local LLMs could catch up with GPT-4’s capabilities, such as its reported 1.8 trillion parameters across 120 layers running on a cluster of 128 GPUs, some of the world’s best-known tech innovators are working on bringing AI “to the edge” to enable new services: faster, more intelligent voice assistants; localized computer imaging that rapidly produces image and video effects; and other types of consumer apps.

For example, Meta and Qualcomm announced in July that they have teamed up to run big AI models on smartphones. The goal is to enable Meta’s new large language model, Llama 2, to run on Qualcomm chips in phones and PCs starting in 2024. That promises LLMs that can avoid the cloud’s data centers, whose massive data crunching and computing power is both costly and becoming a sustainability eyesore for big tech companies, one of the budding AI industry’s “dirty little secrets” in the wake of climate-change concerns and the other natural resources required, such as water for cooling.

The challenges of Gen AI running on the edge

Like the path we’ve seen for years with many types of consumer technology devices, we’ll most certainly see more powerful processors and memory chips with smaller footprints, driven by innovators such as Qualcomm. The hardware will keep evolving, following Moore’s Law. But on the software side, there has been a lot of research, development, and progress in how to miniaturize neural networks to fit on smaller devices such as smartphones, tablets, and computers.

Neural networks are quite big and heavy. They consume huge amounts of memory and need a lot of processing power to execute because they consist of many equations that involve multiplication of matrices and vectors that extend out mathematically, similar in some ways to how the human brain is designed to think, imagine, dream, and create. 

Two approaches are broadly used to reduce the memory and processing power required to deploy neural networks on edge devices: quantization and vectorization. Both are illustrated in the sketch after the two descriptions below.

Quantization means converting floating-point arithmetic into fixed-point arithmetic, which more or less simplifies the calculations involved. Where floating-point performs calculations on decimal numbers, fixed-point performs them on integers. This lets neural networks take up less memory, since floating-point numbers occupy four bytes while fixed-point numbers generally occupy two or even one byte.

Vectorization, in turn, uses special processor instructions to execute one operation over several pieces of data at once (Single Instruction, Multiple Data, or SIMD, instructions). This speeds up the mathematical operations performed by neural networks, because additions and multiplications can be carried out on several pairs of numbers at the same time.
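As a rough illustration of both ideas, the NumPy sketch below quantizes a floating-point weight matrix to int8 using a single symmetric scale factor, then runs the matrix-vector product on the integer values; NumPy’s array operations are themselves vectorized, dispatching to SIMD instructions where the hardware supports them. The scaling scheme is deliberately simplified, so treat it as a sketch of the principle; production frameworks add per-channel scales, zero points, and calibration data.

```python
import numpy as np

# A toy layer: float32 weights occupy 4 bytes per value.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256)).astype(np.float32)
x = rng.standard_normal(256).astype(np.float32)

# Symmetric int8 quantization: map [-max|W|, +max|W|] onto [-127, 127].
w_scale = np.abs(W).max() / 127.0
W_q = np.round(W / w_scale).astype(np.int8)   # 1 byte per value: 4x smaller

x_scale = np.abs(x).max() / 127.0
x_q = np.round(x / x_scale).astype(np.int8)

# Integer matrix-vector product (accumulate in int32 to avoid overflow),
# then rescale back to floating point. np.dot/@ is vectorized under the hood.
y_q = W_q.astype(np.int32) @ x_q.astype(np.int32)
y_approx = y_q * (w_scale * x_scale)

# The quantized result closely tracks the float32 reference.
print(np.max(np.abs(y_approx - W @ x)))
```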

Other approaches gaining ground for running neural networks on edge devices include the use of Tensor Processing Units (TPUs) and Digital Signal Processors (DSPs), processors specialized in matrix operations and signal processing, respectively, and the use of pruning and low-rank factorization techniques, which analyze and remove the parts of the network that don’t make a relevant difference to the result. A brief sketch of low-rank factorization follows.
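In the toy example below, a truncated SVD replaces one large weight matrix with two skinny factors, cutting both the parameter count and the multiplications needed per inference. The rank is chosen arbitrarily for illustration; how much accuracy survives depends on how quickly the layer’s singular values decay, which is why the technique is paired with analysis of which layers tolerate compression.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((512, 512)).astype(np.float32)  # 262,144 weights

# Keep only the top-k singular components of W.
k = 64
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :k] * s[:k]   # 512 x 64 factor (singular values folded in)
B = Vt[:k, :]          # 64 x 512 factor

# One 512x512 matmul becomes two skinny ones: 2 * 512 * 64 = 65,536
# weights, a 4x parameter reduction for this layer.
x = rng.standard_normal(512).astype(np.float32)
y_lowrank = A @ (B @ x)

# Relative error is large for a random matrix (flat spectrum) but small
# for trained layers whose singular values decay quickly.
print(np.linalg.norm(y_lowrank - W @ x) / np.linalg.norm(W @ x))
```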

Taken together, these techniques to shrink and accelerate neural networks could make it possible to run Gen AI on edge devices in the near future.

The killer applications that could be unleashed soon 

Smarter automations

By combining Gen AI running locally – on devices or within networks in the home, office or car – with various IoT sensors connected to them, it will be possible to perform data fusion on the edge. For example, smart sensors paired with devices could listen to and understand what’s happening in your environment, creating an awareness of context and enabling intelligent actions to happen on their own – such as automatically turning down music playing in the background during incoming calls, turning on the AC or heat if it becomes too hot or cold, and other automations that occur without a user programming them.

Public safety 

From a public-safety perspective, there’s a lot of potential to improve what we have today by connecting an increasing number of sensors in our cars to sensors in the streets so they can intelligently communicate and interact with us on local networks connected to our devices. 

For example, for an ambulance trying to reach a hospital with a patient who needs urgent care to survive, a connected intelligent network of devices and sensors could automate traffic lights and in-car alerts to make room for the ambulance to arrive on time. This type of connected, smart system could be tapped to “see” and alert people if they are too close together in the case of a pandemic such as COVID-19, or to understand suspicious activity caught on networked cameras and alert the police. 

Telehealth 

Extending the Apple Watch model to LLMs that could monitor and provide initial advice for health issues, smart sensors with Gen AI on the edge could make it easier to identify potential problems – from unusual heart rates and elevated temperature to sudden falls followed by limited or no movement. Paired with video surveillance for those who are elderly or sick at home, Gen AI on the edge could be used to send urgent alerts to family members and physicians, or to provide healthcare reminders to patients.

Live events + smart navigation

IoT networks paired with Gen AI at the edge have great potential to improve the experience at live events such as concerts and sports in big venues and stadiums. For those without floor seats, the combination could let them tap into a networked camera and watch the live event from a particular angle and location, or even instantly re-watch a moment or play, much as you can today using a TiVo-like recording device paired with your TV.

That same networked intelligence in the palm of your hand could help navigate large venues – from stadiums to retail malls – to help visitors find where a specific service or product is available within that location simply by asking for it. 

While these new innovations are at least a few years out, there’s a sea change ahead of us: valuable new services that can be rolled out once the technical challenges of shrinking down LLMs for use on local devices and networks have been addressed. Given the added speed and the boost in customer experience, plus fewer concerns about the privacy and security of keeping it all local versus in the cloud, there’s a lot to love.

OpenAI announces GPT Store and ChatGPT Team (10 Jan 2024) https://sdtimes.com/ai/openai-announces-gpt-store-and-chatgpt-team/

OpenAI announced two major initiatives: the GPT Store and ChatGPT Team. The GPT Store aims to connect users with highly rated and practical custom versions of ChatGPT and ChatGPT Team is a new plan tailored for smaller teams, granting access to GPT-4, DALL·E 3, and advanced data analysis tools. 

Users have already created over 3 million custom versions of ChatGPT in the two months since GPTs were announced, according to OpenAI. Now that extensive array of GPTs is featured in the GPT Store. The store showcases a wide variety of GPTs, categorized into areas such as DALL·E, writing, research, programming, education, and lifestyle, and highlights the most popular and trending ones. Additionally, a revenue program for GPT builders is set to launch in Q1, starting in the US; payment will be based on user engagement with their GPTs, with more specific criteria to be announced nearer to the launch.

Some of the first featured GPTs include personalized trail recommendations from AllTrails, the ability to search and synthesize results from 200M academic papers with Consensus, and expanding coding skills with Khan Academy’s Code Tutor.

Creating your own GPT is now straightforward and doesn’t require coding skills, according to OpenAI. To list a GPT in the store, builders must save their GPT for ‘Everyone’ and verify their Builder Profile with a name or a verified website. 

Compliance with the latest usage policies and GPT brand guidelines is essential, and a new review system, combining human and automated processes, has been implemented to enforce these standards. Users can also report GPTs that violate guidelines.

With the new ChatGPT Team, customers have access to a private section of the GPT Store which includes GPTs securely published in the workspace.

“The GPT Store will be available soon for ChatGPT Enterprise customers and will include enhanced admin controls like choosing how internal-only GPTs are shared and which external GPTs may be used inside your business. Like all usage on ChatGPT Team and Enterprise, we do not use your conversations with GPTs to improve our models,” OpenAI stated in a blog post with additional details.
