SME News

GPT-4: how to use, new features, availability, and more

GPT-4 has officially arrived, confirming the longtime rumors around its improvements to the already incredibly impressive language skills of OpenAI’s ChatGPT.

OpenAI calls it the company’s “most advanced system, producing safer and more useful responses.” Here’s everything you need to know about it, including how to use it and what it can do.

Availability

A laptop opened to the ChatGPT website. (Shutterstock)

GPT-4 was officially announced on March 13; Microsoft had confirmed its arrival ahead of time, even though the exact day was unknown. As of now, however, it's only available via the ChatGPT Plus paid subscription. The current free version of ChatGPT is still based on GPT-3.5, which is less accurate and capable by comparison.

GPT-4 will also be available as an API “for developers to build applications and services.” Some of the companies that have already integrated GPT-4 include Duolingo, Be My Eyes, Stripe, and Khan Academy. The first public demonstration of GPT-4 was also livestreamed on YouTube, showing off some of its new capabilities.
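For developers, selecting GPT-4 through the API amounts to naming the model in a chat-style request. Here is a minimal sketch of what that request looks like, assuming the pre-1.0 `openai` Python package and an `OPENAI_API_KEY` environment variable; the helper function and prompt text are illustrative, not from OpenAI's documentation:

```python
def build_chat_request(prompt, model="gpt-4"):
    """Build the payload the chat completions endpoint expects."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

request = build_chat_request("Summarize GPT-4's new features in one sentence.")

# Sending the request requires network access and a valid API key:
#   import openai
#   response = openai.ChatCompletion.create(**request)
#   print(response["choices"][0]["message"]["content"])
```

Swapping `model` back to `"gpt-3.5-turbo"` is all it takes to compare the two generations on the same prompt.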

How to use GPT-4

Bing Chat shown on a laptop. (Jacob Roach / Digital Trends)

The easiest way to get started with GPT-4 today is to try it as part of Bing Chat. Microsoft has revealed that it's been using GPT-4 in Bing Chat, which is free to use. Some GPT-4 features are missing from Bing Chat, however, such as visual input, but you'll still have access to the expanded LLM (large language model) and the advanced intelligence that comes with it. Note that while Bing Chat is free, it is limited to 15 chats per session and 150 sessions per day.

The only other way to access GPT-4 right now is to upgrade to ChatGPT Plus. To jump up to the $20 paid subscription, just click on “Upgrade to Plus” in the sidebar in ChatGPT. Once you’ve entered your credit card information, you’ll be able to toggle between GPT-4 and older versions of the LLM. You can even double-check that you’re getting GPT-4 responses since they use a black logo instead of the green logo used for older models.

From there, using GPT-4 works exactly like using ChatGPT with GPT-3.5. The model itself is simply more capable, allowing you to do things like fine-tune a dataset to get tailored results that match your needs.

What’s new in GPT-4?

GPT-4 is a new language model created by OpenAI that can generate text that is similar to human speech. It advances the technology used by ChatGPT, which is currently based on GPT-3.5. GPT is the acronym for Generative Pre-trained Transformer, a deep learning technology that uses artificial neural networks to write like a human.

According to OpenAI, this next-generation language model is more advanced in three key areas: creativity, visual input, and longer context. In terms of creativity, OpenAI says GPT-4 is much better at both creating and collaborating with users on creative projects. Examples of these include music, screenplays, technical writing, and even “learning a user’s writing style.”

GPT-4 Developer Livestream

The longer context plays into this as well. GPT-4 can now process up to 25,000 words of text from the user. You can even just send GPT-4 a web link and ask it to interact with the text from that page. OpenAI says this can be helpful for the creation of long-form content, as well as “extended conversations.”
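To put that 25,000-word figure in perspective, longer documents still need to be split before they fit in a single prompt. A toy helper (not part of any OpenAI tooling, purely illustrative) that chunks text by word count:

```python
def chunk_by_words(text, max_words=25000):
    """Split text into chunks of at most max_words words each."""
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]

# A 60,000-word document would need three prompts at GPT-4's reported limit.
chunks = chunk_by_words("word " * 60000)
print(len(chunks))  # 3
```

Note that the model's actual limit is measured in tokens rather than words, so a word-count split like this is only a rough approximation.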

GPT-4 can also now receive images as a basis for interaction. In the example provided on the GPT-4 website, the chatbot is given an image of a few baking ingredients and is asked what can be made with them. It is not currently known if video can also be used in this same way.

Lastly, OpenAI also says GPT-4 is significantly safer to use than the previous generation. It can reportedly produce 40% more factual responses in OpenAI’s own internal testing, while also being 82% less likely to “respond to requests for disallowed content.”

OpenAI says it’s been trained with human feedback to make these strides, claiming to have worked with “over 50 experts for early feedback in domains including AI safety and security.”

As the first users have flocked to get their hands on it, we're starting to learn what it's capable of. In the weeks since launch, users have posted some of the amazing things they've done with it, including inventing new languages, detailing a plan to escape into the real world, and making complex animations for apps from scratch. One user apparently got GPT-4 to create a working version of Pong in just 60 seconds, using a mix of HTML and JavaScript.

Limitations

While discussing the new capabilities of GPT-4, OpenAI also notes some of the limitations of the new language model. Like previous versions of GPT, OpenAI says the latest model still has problems with “social biases, hallucinations, and adversarial prompts.”

In other words, it’s not perfect, but OpenAI says these are all issues the company is working to address.

Does Bing Chat use GPT-4?

Microsoft originally stated that the new Bing, or Bing Chat, was more powerful than ChatGPT. Since OpenAI's chatbot used GPT-3.5, there was an implication at the time that Bing Chat could be using GPT-4. And now, Microsoft has confirmed that Bing Chat is, indeed, built on GPT-4.

Still, features such as visual input weren’t available on Bing Chat, so it’s not yet clear what exact features have been integrated and which have not.

Regardless, Bing Chat clearly has been upgraded with the ability to access current information via the internet, a huge improvement over the current version of ChatGPT, which can only draw from the training it received through 2021.

In addition to internet access, the AI model used for Bing Chat is much faster, something that is extremely important when taken out of the lab and added to a search engine.

An evolution, not a revolution?

StrictlyVC in conversation with Sam Altman, part two (OpenAI)

We haven’t tried out GPT-4 in ChatGPT Plus yet ourselves, but it’s bound to be more impressive, building on the success of ChatGPT. In fact, if you’ve tried out the new Bing Chat, you’ve apparently already gotten a taste of it. Just don’t expect it to be something brand new.

Prior to the launch of GPT-4, OpenAI CEO Sam Altman said in a StrictlyVC interview posted by Connie Loizos on YouTube that “people are begging to be disappointed, and they will be.”

There, Altman acknowledged the potential of AGI (artificial general intelligence) to wreak havoc on world economies and expressed that a quick rollout of several small changes is better than a shocking advancement that provides little opportunity for the world to adapt to the changes. As impressive as GPT-4 seems, it’s certainly more of a careful evolution than a full-blown revolution.
