What Is The Reason Behind ChatGPT’s Unbelievable Number Of Users? Costs Are Eye-watering

Sam Altman, CEO of OpenAI, took to Twitter to announce that ChatGPT had clocked a million users within a few days of its launch. Based on the GPT-3.5 architecture, ChatGPT communicates with people in natural language. Since its debut, the chatbot has breathed new life into how we conceptualise AI-human interaction. Its results are substantially better than those of its predecessor: higher-quality, longer outputs and better instruction-following.

Chatbots and voice assistants have existed before, such as Amazon’s Alexa, though Amazon recently pulled the plug on parts of its Alexa voice-assistant business. OpenAI’s success with ChatGPT is consequently something Amazon likely wishes it had achieved when it had the opportunity.

The model is available in beta form and is free to use. However, Altman has said OpenAI will attempt to commercialise it at an average cost of single-digit cents per conversation. He further noted that monetisation becomes especially vital since “the compute expenses are eye-watering”.

Microsoft Azure supports OpenAI, providing the computing capacity necessary for running models like ChatGPT. However, Altman’s reference to the “eye-watering” expense apparently didn’t sit well with Microsoft. As a consequence, Altman was soon seen smoothing things over by alluding to Azure’s major contribution to OpenAI’s launches, adding that “they have developed by far the greatest AI infrastructure out there.”

The GPT-3.5 series includes text-davinci-003, the newest model offered by OpenAI. The company also sells models such as Ada, Babbage and Curie. These models are available at a comparatively lower cost than Davinci and perform as well as it does on a number of tasks. OpenAI suggests that users start with Davinci and eventually move to the cheaper models according to their needs.
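For illustration, here is a minimal sketch of how such a model might be called through OpenAI’s Python library as it existed at the time; the API key, prompt and parameters are placeholders, and swapping the model name (for example to text-curie-001) is how a user would move to a cheaper tier.

```python
import openai  # legacy openai Python client (v0.x)

openai.api_key = "YOUR_API_KEY"  # placeholder; set your own key

# Start with the most capable (and most expensive) model ...
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Explain what a transformer model is in two sentences.",
    max_tokens=80,
    temperature=0.7,
)
print(response["choices"][0]["text"].strip())

# ... and, if quality holds up for your task, switch to a cheaper tier
# simply by changing the model name, e.g. model="text-curie-001".
```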

What exactly is ChatGPT?

ChatGPT is a conversational chatbot built with artificial intelligence (AI) and machine learning. It understands and responds to natural human language, answers questions and converses much as a person would. Its name comes from GPT, which stands for Generative Pre-trained Transformer: a deep-learning language model that specialises in producing human-like written text.

Deep learning is a machine learning approach that uses neural networks with three or more layers to mimic the workings of the human brain, allowing the system to learn in a loosely human-like way. But how does it differ from Siri or Alexa, both of which can converse and answer, tell a joke or recite a poem? What distinguishes ChatGPT from other AI models currently available? ChatGPT is distinct because it remembers earlier parts of a conversation for context, admits its mistakes, challenges incorrect premises and sometimes even refuses to respond.
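To make the idea of stacked layers concrete, here is a minimal sketch of a three-layer network in Python. The sizes, weights and activation are arbitrary illustrations; a real model learns its weights from data rather than using random ones.

```python
import numpy as np

def relu(x):
    # Non-linear activation applied after each hidden layer
    return np.maximum(0, x)

rng = np.random.default_rng(0)

# Three stacked layers: 8 inputs -> 16 hidden -> 16 hidden -> 4 outputs.
# Weights are random here purely for illustration.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 16))
W3 = rng.normal(size=(16, 4))

def forward(x):
    h1 = relu(x @ W1)   # first layer
    h2 = relu(h1 @ W2)  # second layer
    return h2 @ W3      # output layer

x = rng.normal(size=(1, 8))  # a single example with 8 features
print(forward(x))
```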

How does ChatGPT function?

A user can begin by visiting OpenAI’s website, clicking the Try ChatGPT button and typing a prompt.

OpenAI trained ChatGPT using the Reinforcement Learning from Human Feedback (RLHF) approach, which teaches the AI through a reward-and-penalty mechanism. Whenever the model takes an action, that action is labelled as desirable or undesirable: desired behaviour is rewarded, while undesired behaviour is penalised. Through this trial-and-error process, the AI learns what works and what does not, as the toy sketch below illustrates.
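As a loose illustration of reward-and-penalty learning (not the actual RLHF pipeline behind ChatGPT, which fine-tunes a large language model against a learned reward model), here is a toy Python sketch in which simulated human feedback gradually makes a “policy” prefer the helpful reply; the replies, scores and feedback rule are all invented for the example.

```python
import math
import random

# Canned replies and a preference score for each; this toy "policy"
# is a stand-in for the behaviour being learned.
replies = ["helpful answer", "off-topic rambling", "rude answer"]
scores = {r: 0.0 for r in replies}

def human_feedback(reply):
    # Stand-in for a human rater: reward the helpful reply, penalise the rest.
    return 1.0 if reply == "helpful answer" else -1.0

def pick_reply():
    # Sample a reply, favouring those with higher scores (softmax weights).
    weights = [math.exp(scores[r]) for r in replies]
    return random.choices(replies, weights=weights)[0]

for _ in range(200):
    reply = pick_reply()
    reward = human_feedback(reply)   # +1 (reward) or -1 (penalty)
    scores[reply] += 0.1 * reward    # trial and error nudges the scores

print(scores)  # the helpful reply ends up with the highest score
```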

OpenAI also employed humans as AI trainers. Through sample dialogues, these trainers played both the role of the user and that of the AI assistant. This training strategy can be troublesome, however, since it often leads to model misinterpretation: an ideal response would be based on what the model knows rather than on what the human demonstrator knows, which may prove a limitation of this fascinating new arrival on the internet.

As a result, if a user asks a difficult question or fails to frame the query correctly, the bot may decline to respond. It will also refuse to answer if the query is inappropriate.

Vidit Aatrey, founder of e-commerce business Meesho, took to LinkedIn to issue a warning, stating that until a few months ago everyone assumed AI could only do monotonous tasks, but ChatGPT clearly proves otherwise: AI can also go after creative jobs.

ChatGPT is currently free to use as a research preview. CEO Sam Altman has previously suggested that the service will be monetised in the future. “We will have to monetize it somehow at some time; the compute expenses are eye-watering,” he tweeted when asked whether the service would always be free.