GPT-4: another breakthrough in AI?
GPT-4, OpenAI's newest large language model, was released in mid-March to global acclaim for its vast improvements over previous iterations. The improvements are so significant, in fact, that some experts, Elon Musk among them, have called for a temporary halt to further AI development. So, what caused such a stir, what is GPT-4 really, and how does it differ from other GPT models?
GPT – The Evolution of AI
With all the talk today about the cutting-edge development of GPT-4, it almost seems difficult to imagine a time when AI was remote from everyday life. But it had to start somewhere.
It was the 11th of June, 2018, when OpenAI released the original GPT ("Generative Pre-trained Transformer", now called GPT-1), an NLP model with 117 million parameters trained on roughly 4.5GB of text. What set GPT-1 apart from other NLP models of the day, however, was that it could do a significant portion of its training without human supervision.
This "semi-supervised" approach was generative AI's big bang.
Fast-forward to June 11th, 2020, and OpenAI releases GPT-3, with more than a thousand times as many parameters as its original precursor, trained on some 570GB of plaintext. In late 2022, ChatGPT became available, powered by GPT-3.5, which was essentially GPT-3 improved and boosted with reinforcement learning from human feedback (RLHF).
GPT models work by taking in enormous amounts of text and splitting it into "tokens", units that correspond to words or word fragments. In the training process, the model repeatedly sees a sequence of tokens and attempts to guess the token that comes next, adjusting itself whenever it guesses wrong.
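The training idea above can be sketched with a toy word-level model. This is a deliberately crude stand-in of our own making: real GPT models use subword tokenizers and billions of neural-network parameters, not word counts.

```python
from collections import Counter, defaultdict

# A tiny "corpus" and a crude tokenizer: one token per word.
corpus = "the cat sat on the mat the cat ate the fish"
tokens = corpus.split()

# "Training": for each token, count which tokens follow it.
following = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    following[current][nxt] += 1

def predict_next(token):
    """Guess the continuation seen most often during training."""
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here
```

A real model replaces the counting table with a neural network, which lets it generalize to sequences it has never seen, but the objective is the same: predict the next token.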
Mastering language in this way allowed ChatGPT (running on GPT-3.5) to write articles, draft code, and hold fluent conversations, among other things. To get a grasp of GPT-4 and its capabilities, we have to understand just how big a leap forward it is compared to what we have seen GPT-3.5 do.
GPT-4: The Most Powerful GPT Yet
OpenAI has decided to withhold most of the technical details for several reasons, including safety and the increasing competitiveness of the emerging industry.
We know GPT-4's token capacity, though. It is 32,000 tokens in its largest variant, roughly eight times GPT-3.5's 4,000 tokens.
Eight times the capacity of GPT-3.5? Let that sink in for a moment.
GPT-4's boosted capacity significantly raises the bar on the complexity of tasks it can perform and improves the accuracy of its results across the board. And that's certainly not all.
Human-level performance in AI is usually reserved for artificial general intelligence. While GPT-4 has no such aspirations, OpenAI tested it on several benchmarks to compare its performance to both GPT-3.5 and humans.
What immediately strikes the eye is that GPT-4 scores in the top 10% on a simulated bar exam, in stark contrast to GPT-3.5's performance in the bottom 10%. The whole list of tests can be found on OpenAI's site, but as a general rule, GPT-4 beats GPT-3.5 on most tests designed for people. Astonishing!
The most visible improvement over GPT-3.5 is GPT-4's ability to receive images, alongside or without text, as input.
It would be, that is, if image input weren't still reserved for the developer API and out of reach of regular users.
What we have been shown and told, however, holds great promise. With GPT-4, you can upload pictures of documents, sketches, screenshots, or any other piece of text that contains visual information. The potential looks endless: GPT-4 can build websites based on sketches, recommend recipes based on an image of fridge content, and understand visual jokes, such as memes.
With a deeper understanding of text and context, GPT-4 is even better at conversations. Compared to GPT-3.5, it will provide much more accurate and relevant answers.
Additionally, GPT-4, unlike GPT-3.5, can be assigned a "personality". Instead of having a fixed style and tone for every situation, GPT-4 can be told to follow custom styles within the prescribed bounds of safety and privacy guidelines.
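In the chat API, this "personality" is set through a system message that precedes the user's prompt. Here is a minimal sketch of assembling such a request; the `build_request` helper is our own illustration, and actually sending the payload requires the openai library and an API key.

```python
def build_request(personality, user_prompt, model="gpt-4"):
    """Assemble a chat payload whose system message sets the style."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": personality},
            {"role": "user", "content": user_prompt},
        ],
    }

request = build_request(
    "You are a helpful assistant who answers like an old-timey pirate.",
    "How do I boil an egg?",
)
print(request["messages"][0]["role"])  # "system"
```

The system message steers tone and style for the whole conversation, while safety and privacy guardrails still apply on top of it.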
Ever chatted with a pirate? GPT-4 will hopefully be able to fulfill your (my, tbh) wish. Sails ahoy!
Processing Longer Texts
With 4,000 tokens, GPT-3.5's attention span was approximately three pages long. Now, GPT-4's 32,000 tokens allow a context window that can contain about 50 pages of text.
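The 50-page figure follows from a common rule of thumb: one token is roughly three-quarters of an English word, and a dense page holds about 500 words. Both numbers are approximations, not official specs, and page estimates vary with formatting.

```python
# Back-of-the-envelope conversion from tokens to pages.
# Assumptions (approximate): ~0.75 English words per token,
# ~500 words on a densely printed page.
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 500

def tokens_to_pages(tokens):
    return tokens * WORDS_PER_TOKEN / WORDS_PER_PAGE

print(round(tokens_to_pages(32_000)))  # ~48, close to the 50 pages cited
```

In practice, the exact token count of a text depends on the model's tokenizer, so real applications count tokens directly rather than estimating from word counts.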
How does this improve GPT in practice? For example, GPT-4 can take entire documents as input and create long-form content that is much more precise, relevant, and engaging. You could feed GPT-4 a dozen-page lawsuit and ask for a summary or an analysis. Users could even upload medical records for the AI's opinion.
We tend to focus on GPT creating content, often neglecting its just-as-useful ability to read text. GPT-4 will be able to summarize texts up to 50 pages long and deliver critical points, valuable takes, and relevant interpretations within seconds.
AI that can read 50 pages at once and interpret the text, whether it's a financial statement or a research paper, is a game-changer for businesses and individuals across industries.
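Even a 32,000-token window has limits, so documents longer than the window are typically split into chunks that each fit the budget and summarized piece by piece. Below is a minimal word-based splitter as an illustration; real pipelines count tokens with the model's own tokenizer rather than splitting on words.

```python
def split_into_chunks(text, max_words=1000):
    """Split text into consecutive chunks of at most max_words words."""
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]

chunks = split_into_chunks("word " * 2500, max_words=1000)
print(len(chunks))  # 3 chunks: 1000 + 1000 + 500 words
```

Each chunk can then be summarized separately, and the partial summaries combined in a final pass, a common workaround for texts beyond any model's context window.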
Reduced (But Not Eliminated!) "Hallucinations"
One of GPT-3.5's weakest points is AI hallucinations, or a tendency to present incorrect information as facts. While GPT-4 isn't entirely free of hallucinations, OpenAI states it is 60% less likely to invent data.
Still far from the level of accuracy we desire, though. GPT-4, like its predecessor, requires human supervision and constant fact-checking to avoid disseminating false information and deceiving end users.
On a side note, GPT-4 has no knowledge of events after September 2021. Just like GPT-3.5, it is highly prone to errors and miscalculations when dealing with anything that happened after that date.
What Are GPT-4's Limits?
Apart from increased energy consumption due to the sheer volume of computation and a hard cap on the number of queries per hour, well, we really can't tell yet.
Its coding skills dwarf whatever GPT-3.5 had to offer, with non-coders able to recreate classic games without any source code available online. It still requires humans to direct it, but it can reflect on and improve its own outputs, which is new with GPT-4.
Yes, GPT-4 can self-reflect, self-develop, and self-improve. And this has left quite a number of people scared.
This ability to reflect upon its own results resembles how humans reason and ponder things. When prompted to look back at what it did, the new GPT model displays up to 30% better performance.
To correct an AI, it appears you don't always need a human, just more AI. In other words, AI can use AI to perform better, which is a loop that some people fear might represent a point of no return, especially when the developing team goes from being fully transparent to closed and confidential.
Is GPT-4 Safe?
As the letter signed by Elon Musk and a number of high-profile experts in the field circulates around the web, calling for a pause in further AI development, people are questioning whether AI is moving forward too quickly.
Is GPT-4 gaining sentience? Is it AGI? Is it able to self-replicate and dominate our world?
Somewhat anticlimactically, the answers are no, no, and no. GPT-4 is powerful, but nowhere near as dangerous as some are suggesting. And the whole "open letter" affair gains a bit of context if you consider that Musk co-founded OpenAI before leaving its board due to potential conflicts with Tesla's own AI research.
Is it safe, then?
Depends on what you mean by safe. GPT-4's restrictions can be circumvented, and it can be used to create malicious, offensive, and inappropriate content. However, those issues are not unique to GPT-4, and expert teams are working on making GPT-4 as safe as possible, which OpenAI claims is the reason for restricting parts of their newest large language model.
Should you be excited? Definitely! In the following days and weeks we will likely witness even more of GPT-4's potential. AI is still our co-pilot and companion, but more helpful than ever.
Speaking of AI assistance, here at Skippet, we have dedicated ourselves to making a no-code AI solution to help with anything from business admin to everyday chores. Sounds interesting? Join our beta!