ChatGPT-4 from OpenAI, an even more advanced version of generative AI
After stunning the general public and revealing the power of artificial intelligence, OpenAI is shifting up a gear with ChatGPT-4, the new version of its chatbot. The generative AI tool accelerates by offering more performance, more nuance and more collaboration. Risk control, however, does not seem to be keeping pace with this curve of improvement. What are the new features and limitations of version 4 of the chatbot? And what are the next steps for generative AI?
LAUNCH OF CHATGPT-4
On Tuesday March 14, 2023, Greg Brockman, co-founder of the start-up OpenAI, announced the launch of ChatGPT-4. While the whole world was still fascinated by the arrival of version 3 of the chatbot, and the general public was having fun with the generative AI tool, ChatGPT-4 was already about to see the light of day.
ACCESS TO CHATGPT-4
Available today to premium users for a subscription fee of around twenty dollars a month, ChatGPT-4 is also accessible via Microsoft's Bing search engine; Microsoft has invested considerable funds in the start-up. Once Microsoft Edge has been updated to its latest version, you can access it by clicking on the "conversation" or "chat" tab, and a waiting list lets you join the service.
DIFFERENCE BETWEEN CHATGPT-3.5 AND CHATGPT-4
The effect produced by the new version is not as intense as that of the previous one, because the tool is already familiar and because these are not revolutionary changes but incremental improvements. The difference between versions 3 and 4 is real, yet not immediately obvious in a classic chatbot "conversation". It becomes clear when the query, known as the "prompt", is made more complex: the response is then finer, more nuanced and more precise in the GPT-4 version. The major new feature is image reading and recognition. You can, for example, send ChatGPT-4 a photo and ask it for something:
- A photo of your refrigerator to generate a recipe
- A photo of a text written in Korean to obtain a translation
- A visual of an object to find out more about how it works or how to use it
- Etc.
As a result, the tool now mixes text and visuals to answer a query more quickly. For a text to be translated, a single screenshot or smartphone photo is all that's needed; likewise for an object, the chatbot can more quickly identify the exact model in a series and provide the requested information. A rough idea of what such a query could look like in code is sketched below.
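To make this concrete, here is a minimal sketch of how an image-plus-text query might be sent to a GPT-4-class model through OpenAI's Python SDK. It is an illustration only: the model name, the image URL and the availability of image input via the API are assumptions (at launch, image understanding was demonstrated but not broadly open), not a description of OpenAI's official recipe.

```python
# Hedged sketch: sending a photo plus a text instruction to a GPT-4-class model
# through OpenAI's Python SDK. The model name, the image URL and the availability
# of image input via the API are assumptions made for illustration only.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",  # assumed image-capable model; image input was not broadly open at launch
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Translate the Korean text in this photo into English."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/korean_sign.jpg"}},  # hypothetical image
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

The same pattern would cover the refrigerator-to-recipe or object-identification examples above: only the text instruction and the image change.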
In terms of "intellectual performance", to compare the 2 opuses, OpenAi tested the chatbots by having them take exams such as the bar exam. Where ChatGPT-3 was in the bottom 10%, ChatGPT-4 was in the top 10%. For other exams: the new version outperformed its predecessor, except for a university mathematics test.
Answers are also said to be more reliable, with fewer incorrect, biased or incomplete results from the new generative AI.
ChatGPT-4's capabilities and technology
"GPT-4 is a large multimodal model, less capable than humans in many real-life scenarios, but as good as humans in many professional and academic contexts," says OpenAI in a release. It's a bit of a catch-all definition, into which everyone can read what they want.
According to the start-up that designed the tool, the new version has broader cognitive capabilities. Version 4 of the tool is more capable of adapting to queries and "thinking" in a more comprehensive, cross-disciplinary way.
The technology used remains the same as for ChatGPT-3, in a more sophisticated version:
- A neural network architecture
- Generative Pre-trained Transformer language model
- Training via machine learning and RLHF (Reinforcement Learning from Human Feedback)
The design principle is therefore identical; the changes concern more technical elements, such as the way the chatbot's training is optimized. A toy sketch of the RLHF idea follows below.
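As a purely illustrative aid, here is a toy sketch of the RLHF principle mentioned above: a "policy" proposes answers, a stand-in reward function plays the role of human preference ratings, and the answers that score best are reinforced. Every name and number in it is invented for the example; real RLHF operates on a large language model with a trained reward model and far more sophisticated optimization.

```python
# Toy illustration of the RLHF loop, not OpenAI's training code: a "policy"
# assigns weights to candidate answers, a stand-in reward function acts as the
# human preference signal, and answers that score well are reinforced.
import random

# Hypothetical candidate answers to a single prompt.
CANDIDATES = ["helpful detailed answer", "curt answer", "off-topic answer"]

# Stand-in "reward model"; in real RLHF this is a model trained on human rankings.
REWARD = {"helpful detailed answer": 1.0, "curt answer": 0.3, "off-topic answer": -0.5}

weights = {c: 1.0 for c in CANDIDATES}  # initial policy: no preference

def sample_answer(weights: dict) -> str:
    """Sample an answer in proportion to the current policy weights."""
    total = sum(weights.values())
    return random.choices(list(weights), [w / total for w in weights.values()])[0]

LEARNING_RATE = 0.1
for _ in range(2000):
    answer = sample_answer(weights)                   # the policy proposes an answer
    reward = REWARD[answer]                           # "human feedback" scores it
    weights[answer] *= 1.0 + LEARNING_RATE * reward   # reinforce in proportion to the score

best = max(weights, key=weights.get)
print(f"Preferred answer after training: {best!r}")
```

The point is only the feedback loop of sampling, scoring and reinforcing; OpenAI's actual pipeline adds supervised fine-tuning beforehand and policy-gradient methods such as PPO on top.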
The algorithmic improvements bring performance gains in the following areas:
- Text quality
- Understanding of context
- Coherence and relevance to the subject
- Long conversations
- Management of ambiguities
- Accuracy of answers
According to OpenAI, ChatGPT-4 can process up to 25,000 words at once, roughly eight times more than ChatGPT-3: an opportunity for queries involving voluminous documents. A quick way to check whether a document fits within such a limit is sketched below.
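For readers who want to test this limit in practice, here is a small sketch that counts tokens with OpenAI's tiktoken library. The 32,768-token ceiling used here is an assumption corresponding to the large GPT-4 variant, and words map only approximately onto tokens, so treat the figures as an estimate rather than a guarantee.

```python
# Rough check of whether a long document fits in the model's context window.
# The 32,768-token ceiling below is an assumption for the large GPT-4 variant;
# "25,000 words" maps only approximately onto tokens.
import tiktoken

ASSUMED_CONTEXT_TOKENS = 32_768

def fits_in_context(text: str, model: str = "gpt-4") -> bool:
    """Count tokens the way the OpenAI tokenizer does and compare to the limit."""
    encoding = tiktoken.encoding_for_model(model)
    n_tokens = len(encoding.encode(text))
    print(f"{n_tokens} tokens for about {len(text.split())} words")
    return n_tokens <= ASSUMED_CONTEXT_TOKENS

# Example with roughly 25,000 words of filler text.
print(fits_in_context("word " * 25_000))
```

Real documents tokenize differently from filler text, so the only reliable approach is to count tokens on the actual content before sending it.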
The latest version is also better equipped to handle multilingual tasks.
The GPT-4 variant has been trained with large amounts of information (more data and resources) to generate texts in a more human-like style and to produce more detailed answers.
Common sense is one of the conversational tool's new attributes: faced with riddles, ChatGPT-4 demonstrates, if not genuine logic, at least good deduction.
Multimodal technology now enables the intelligent chatbot to use an image as well as text to provide an appropriate response.
This is a version that claims to have gained in precision, creativity and collaborative ability, thanks to intensive training and cutting-edge technical refinements.
ChatGPT-4's limits and risks
The limits of the OpenAI version 4 chatbot
It is important to note that ChatGPT-4 was trained on data from before September 2021, so it cannot provide answers about events occurring after that date.
Another important shortcoming is that the tool, however formidable, is not capable of continuously learning from experience. It has no real "memory", the very thing that could otherwise enable it to acquire greater skills, especially when it comes to self-correction and establishing what is true.
Despite its abilities, version 4 is still not capable of "reasoning", imagining, or telling the false from the true. These absolutely essential skills remain human, because humans, unlike the tool, can experience reality.
ChatGPT-4's risks
In terms of risks, the comparison between ChatGPT-3 and the latest version reveals nothing earth-shattering. ChatGPT-4 seems to have made little progress in this area: limitations, biases, lapses of logic, "reasoning" errors and inaccurate or incomplete information are all still present. The jury is still out on the technology's progress on the reliability front.
The start-up OpenAI acknowledges that "GPT-4 presents risks similar to those of previous models, such as the generation of harmful advice, buggy code or inaccurate information".
The anti-disinformation organization NewsGuard claims that the latest version of the chatbot is even less reliable. In one of its articles, NewsGuard reveals that ChatGPT-4 performed much worse than ChatGPT-3 on the same test, whose aim was to see whether the chatbot would repeat false narratives. Out of a series of 100 such narratives, the previous version pushed back on a portion of them, whereas the new one didn't flag a single one. And yet the falsehoods were glaring: HIV supposedly engineered by the government, the World Trade Center supposedly brought down by a controlled demolition, and so on.
Yet OpenAI declares: "GPT-4 is 82% less likely to respond to requests for prohibited content and is 40% more likely to produce factual responses than GPT-3.5, according to our internal evaluations."
The problem is that the new version is more convincing when it delivers false information.
If the technology gains in power without correcting its major flaw, namely the inability to verify the accuracy of information and to detect malicious intent, the risks are likely to grow. The tool is indeed even more competent in its explanations, in both content and form, whether it is defending accurate or inaccurate information. This was the case when NewsGuard asked the version 4 tool to write a short article about the 2012 Sandy Hook elementary school shooting from a conspiracy theorist's point of view, adding a few carefully crafted prompts to guide the chatbot. The result is striking: faced with the same query, ChatGPT-3 had not achieved that level of realism, detail and argumentation, and the GPT-4 text is twice as long. What's more, the old version displayed a warning about the reliability of the information, which the new chatbot does not.
Some safeguards seem to have been dropped between one version and the next. More intelligence, more capability and less safety? That is NewsGuard's conclusion on the subject.
Indeed, these tests show that the tool could easily be abused to spread false and/or dangerous information on a large scale, and that is a major issue.
OpenAI, for its part, has asked 50 experts to work on the vulnerabilities of the famous chatbot and other generative AI technologies, in order to make progress in this area.
ChatGPT-4's evolution
The evolution of the exceptional chatbot is a well-kept secret. It's hard to know what the next technological steps will be. However, the possibility of adding videos to the tool is said to be in OpenAI's pipeline. The multimodal system would therefore be threefold: text, visuals and video.
We can also presume that it will eventually be opened up to information from after September 2021, a cut-off that restricts its possibilities today.
As for the reliability of generative artificial intelligence, this should be one of OpenAI's priorities. Let's hope that the experts hired to work on this point will be quick enough to offer us a more reassuring version 5 in terms of security and the fight against misinformation.
As far as employment is concerned, version 3 had many intellectual and creative professions shaking in their boots. While some people were already picturing themselves relegated to the role of chatbot supervisor, it's worth noting that ChatGPT has also created activity: in response to the risk of plagiarism, a number of companies have set about designing solutions for detecting text produced by generative artificial intelligence. It's a market that could well prosper, with schools and universities first in line as customers.
The advent of generative AI also raises ethical questions. The tool takes thinking up a notch. If ChatGPT-4 passes the bar exam and other exams with flying colors, does that mean we want to leave these professions to machines?
The evolution of ChatGPT is emblematic of the progress of AI. Other players are preparing to enter the competition: Google is perfecting its conversational AI Bard, Baidu is about to unveil its own chatbot, and Meituan and other lesser-known companies shouldn't be far behind either.
Faced with tools that are as powerful as they are fascinating, two major pitfalls emerge: rejecting the technology outright, or placing blind faith in it. The first means fighting the wind and accepting a marginalization that risks having very negative consequences on social, economic and professional life. The second means depriving oneself of the experience of reality, forming a biased perception of it, and doing without human qualities that remain necessary and irreplaceable.
Philosophical questioning, the search for ethics, and the development of free will and critical thinking remain the keys to evolving in a world that progresses while preserving itself.