14 Things the 1 Million ChatGPT Users Should and Shouldn't Expect
Also, the service crashed briefly earlier today.
Sam Altman, CEO of OpenAI, Tweeted a few hours ago that ChatGPT had already crossed 1 million users. Less than 12 hours later, it crashed.
Users in the OpenAI Discord noticed the downtime immediately. However, the outage did not last long, and in a seemingly unrelated Tweet shortly after the incident, Altman praised Microsoft's Azure hosting service. This was not the first crash for ChatGPT. The service also had brief, unreported downtime on December 1st, shortly after the beta launched, and that outage was also quickly resolved.
These data points suggest, at the very least, that there is tremendous interest in ChatGPT. This is not surprising. ChatGPT is based on GPT-3.5, and its output is superior to that of the GPT-3.0 models. It also offers chat functionality that makes it easier to build on an initial question by maintaining a conversation's context, similar to what you might expect when interacting with a human expert on a topic.
Synthedia conducted an analysis of ChatGPT and compared it to the beta Google LaMDA service over the weekend, and the post with the results quickly became our most popular since the newsletter launched in August. A link is listed immediately below.
What Should You Expect from ChatGPT
ChatGPT has several features that were unavailable in previous versions of the GPT-3 large language model.
A chat interface - GPT Playground still enables you to ask questions and make requests. However, it is a canvas similar to a large text box that includes your prompts and the responses. That still makes it a little awkward to ask follow-up questions or extend the original query. Should you delete everything and start over? Should it all remain? Does GPT get confused sometimes if you leave everything? Yes. The chat interface of ChatGPT removes this uncertainty.
Context maintenance - When you speak with a human, you can begin by planning a trip, ask about landmarks, and then return to the trip, and the conversation moves along without a problem. That is because humans are good at remembering the context of the entire conversation. Most chatbots are not, and GPT-3 Playground was not: a change in topic typically resets or clouds the context. ChatGPT is not perfect on this front, but it is fairly reliable. You can reference something that you or the model said earlier in the conversation, and ChatGPT will recall that context and build on the earlier idea (see the sketch after this list for what that saves you from doing yourself).
High-quality, humanlike writing - You will also notice that the writing quality is noticeably better than that of earlier large language models. ChatGPT is better than earlier versions at composing the prose of an answer and at crafting it in a specific style.
Conceptual ideas - AI model text generations can sometimes sound like a list of facts and details delivered in a barrage of words. ChatGPT is also good at handling conceptual ideas, such as why something should be done or what ideas should be considered.
More details - ChatGPT will often go beyond just answering the question and offer additional details that provide useful context for understanding the response. That can lead to the serendipity of surfacing information a user didn't know to ask for but that adds to the value of the answer.
A free service (for now) - ChatGPT is currently in beta testing, and OpenAI is using the interactions to further train and refine the GPT-3.5 model and the ChatGPT service.
Offer feedback on the responses - ChatGPT is free. A small contribution on your part is to click the thumbs-up or thumbs-down button and, where it helps, leave a short comment about the response. We can all benefit from a better model, and feedback can help make that a reality.
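To make the chat-interface and context-maintenance items above concrete, here is a minimal sketch of what a developer would have to do with the plain GPT-3 completion API to get ChatGPT-like follow-up behavior: keep the whole transcript and resend it with every request. It assumes the pre-1.0 openai Python package and the text-davinci-003 completion model; the ask helper, transcript format, and settings are illustrative assumptions, not anything OpenAI has published about how ChatGPT itself works.

```python
# Sketch of manual context maintenance with the GPT-3 completion API.
# Assumes the pre-1.0 openai package; names and settings are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"

transcript = []  # running history of the conversation

def ask(user_message: str) -> str:
    transcript.append(f"User: {user_message}")
    # Re-send the entire conversation so the model can see earlier turns.
    prompt = "\n".join(transcript) + "\nAssistant:"
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=256,
        temperature=0.7,
        stop=["User:"],  # keep the model from writing the user's next turn
    )
    answer = response["choices"][0]["text"].strip()
    transcript.append(f"Assistant: {answer}")
    return answer

# Follow-up questions work because earlier turns ride along in the prompt.
print(ask("Help me plan a three-day trip to Rome."))
print(ask("Which of those landmarks are free to visit?"))
```

ChatGPT does this bookkeeping for you behind its chat interface, which is most of what the first two items on this list buy you in practice.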
What You Shouldn't Expect
There are also some important factors that you should not expect to get from ChatGPT. Three of them are listed in the graphic above.
Truth - OpenAI says that ChatGPT "may occasionally generate incorrect information."
Only inoffensive responses - According to OpenAI, you may receive offensive content in ChatGPT responses. The model is trained on internet text, and there is plenty of offensive content on the internet, so that material is in the ChatGPT / GPT-3.5 training set. Fine-tuning and filtering were used to reduce the incidence of offensive material, but some will still get through.
Information about 2022 - ChatGPT is not continually plugged into the internet. It was last trained in 2021 and, as a result, has virtually no knowledge of the last 12-18 months.
Source information - This is no different from using GPT-3 Playground with the 3.0 models. ChatGPT will write confidently about nearly any topic, but don't confuse confidence with accuracy. The model produces content that most often carries an authoritative tone of voice and may even cite facts. However, it will not provide any clues about the source of those "facts" or "ideas," or any other way to easily validate the veracity of its written responses. You will need to do your own source discovery and validation, which can be very hard because GPT-3.x models do not copy directly from internet sources.
Humanlike reasoning based on your input - ChatGPT will often make mistakes when you ask it to reason about information you have just provided in the prompt. For example: "My brother and I went for a run around the mountain. Do I have a brother?" ChatGPT may or may not get the right answer, and it may offer several reasons why it cannot be sure.
Math - While ChatGPT, like GPT-3 before it, can answer many math questions correctly, don't count on it. There is no reason to believe GPT-3.5 was trained to perform calculations. The reason GPT-3 can often do math is that it was trained on a great deal of math-related data, which means it can answer many math problems because it has seen the exact question, or something similar, in the past. This was a capability OpenAI engineers said earlier they did not expect from the model; it appeared to be either emergent behavior or a function of the data it had seen, and the latter seems most likely. However, GPT-4 could well include math and reasoning features.
A free service (for much longer) - When asked on Twitter whether ChatGPT will be free forever, Sam Altman responded, "We will have to monetize it somehow at some point; the compute costs are eye-watering." He also indicated in a response to a Tweet from Elon Musk that each chat was costing "single-digit cents." With more than a million users, those costs add up quickly, as the rough calculation below suggests.
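As a back-of-envelope illustration of how quickly that adds up: the per-chat cost below reflects Altman's "single-digit cents" remark, while the user count and chats-per-user figures are assumptions made purely for illustration.

```python
# Back-of-envelope estimate of daily compute cost; all inputs are assumptions
# except the "single-digit cents" per-chat figure Altman mentioned.
users = 1_000_000            # "more than 1 million users"
chats_per_user_per_day = 5   # assumed for illustration
cost_per_chat_usd = 0.05     # "single-digit cents"

daily_cost = users * chats_per_user_per_day * cost_per_chat_usd
print(f"~${daily_cost:,.0f} per day")  # ~$250,000 per day under these assumptions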
What's Next?
Let me know what you think about ChatGPT. Check it out here, then drop me a note or tag me on Twitter (@bretkinsella) or LinkedIn to share what you learned. If you would like to review some large language model text generations and provide feedback as part of some research we are conducting, check out this article and register to participate.