
What is ChatGPT, and is it really that good?
The year 2022 proved a breakthrough for AI art. A multitude of algorithms emerged that allow anyone to create unique images from a fragment of text.
Various firms and independent developers raced to build systems and web applications based on neural networks. DALL-E 2, Midjourney and Stable Diffusion continue to be in demand and are set to remain popular for some time.
However, in 2023 the community was captivated by another technology—the new “revolutionary” neural network ChatGPT.
ForkLog has investigated what this AI system is, how it is used, and why it is banned in some educational institutions.
- In November 2022, OpenAI released the ChatGPT chatbot: you can ask it a question or enter a prompt and receive an “almost human” answer.
- The technology has a number of limitations, including a limited knowledge base, an inability to express emotions, and the generation of incorrect facts.
- Two months after launch, the number of ChatGPT users reached 100 million.
- Some experts expressed doubts about the “revolutionary” nature of the service, while others compared its release to the launch of the first iPhone.
- Like any powerful system that automates work processes, it will affect those who offer similar skills on the labor market.
What is the ChatGPT neural network?
ChatGPT is a chatbot that became the fastest-growing service in history. It was released by the AI lab OpenAI in November 2022.
The system is based on GPT-3.5, an updated version of the GPT-3 language model, and was trained on an Azure AI supercomputer.
Users can ask a question, enter a query or a prompt and receive a comprehensive “almost human” textual answer. The algorithm is capable of discussing a range of topics, understanding context, acknowledging errors, joking and arguing.

The chatbot supports multiple languages, including English, Spanish, Italian, Chinese, German, Russian, Ukrainian, French and Japanese. In these languages the model can answer questions, generate text on given topics and perform other tasks.
The developers built the model using reinforcement learning from human feedback (RLHF). They used the same methods as with InstructGPT, but augmented them with data from human dialogues.

To assemble the conversation dataset, OpenAI enlisted AI trainers. They played both sides of the dialogue, the AI assistant and the human user, and had access to model-written suggestions to help them compose responses.
The resulting dataset was mixed with the InstructGPT dataset, converted into a dialogue format.
To build the reward model, engineers collected conversations between the trainers and the chatbot, randomly sampled several model-generated responses, and asked the trainers to rank them.
The model was then fine-tuned against this reward model using proximal policy optimization (PPO); several iterations of this process were carried out.
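For illustration only, the ranking step can be sketched in code. The snippet below is not OpenAI's implementation: it is a minimal Python/PyTorch sketch of the pairwise ranking loss commonly used to turn human preference rankings into a reward model, with a toy linear scorer and random vectors standing in for real response embeddings.

```python
# Minimal sketch (not OpenAI's code) of training a reward model from human rankings.
# A toy linear layer stands in for the real reward model; random vectors stand in
# for embeddings of chatbot responses that human trainers compared.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRewardModel(nn.Module):
    """Maps a response embedding to a single scalar 'reward' score."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(response_embedding).squeeze(-1)

model = TinyRewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Each pair holds an answer the trainers ranked higher and one they ranked lower.
preferred = torch.randn(8, 16)  # embeddings of preferred responses (dummy data)
rejected = torch.randn(8, 16)   # embeddings of rejected responses (dummy data)

for step in range(100):
    optimizer.zero_grad()
    # Pairwise ranking loss: push the preferred score above the rejected one.
    loss = -F.logsigmoid(model(preferred) - model(rejected)).mean()
    loss.backward()
    optimizer.step()

print(f"final ranking loss: {loss.item():.4f}")
```

The scalar reward learned this way is what a PPO-style fine-tuning loop then maximises while keeping the updated chatbot close to its original behaviour.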

Additionally, engineers built filters into ChatGPT to block toxic, biased and harmful content.
In January 2021, to label such content, OpenAI hired contractors in Kenya through the outsourcing company Sama. For $1.32–2 per hour, they reviewed tens of thousands of NSFW text fragments, which often contained detailed descriptions of sexual violence against children, murder, torture, self-harm, bestiality and incest.
Limitations of the ChatGPT chatbot
One of the bot’s main limitations is that its knowledge only extends to 2021. You therefore cannot get tomorrow’s weather forecast for Kyiv, details about the 2023 HBO series The Last of Us, or an answer to whether Queen Elizabeth II is still alive.

The system sometimes produces plausible-sounding yet wrong or nonsensical answers. You should not take it at face value and you must verify information.
The bot also does not provide predictions or assessments and cannot express emotions, as it is “only a language AI model.”

ChatGPT is sensitive to how a prompt is phrased and to repeated attempts at the same prompt. With some formulations the model may claim it does not know the answer, yet slightly rewording the sentence yields a relevant result.
Limitations also include “excessive verbosity” and the frequent use of phrases like “I am a language model trained by OpenAI.”
According to the developers, this problem stems from biases in the training data (trainers prefer longer answers because they look more thorough) and from over-optimisation.
Instead of asking clarifying questions when a prompt is ambiguous, the neural network guesses what the user intended.
Also, despite OpenAI’s efforts to force the model to block inappropriate prompts, it sometimes responds to harmful instructions and exhibits biased behaviour.
Researchers have established that repeating the same query in different phrasings, or insisting that the model show the requested information, can “bypass” the filters set by the developers.

For example, researchers at the cybersecurity firm CyberArk reportedly got ChatGPT to produce code for specific malware and used it to create complex exploits that bypass antivirus protections.

Where is the “revolutionary” technology used?
In January 2023, just two months after launch, the number of ChatGPT users reached 100 million.
Analysts at the investment bank UBS studied data provided by SimilarWeb and found that the service was attracting about 13 million unique visitors a day, more than double the December figure.
People use the chatbot in various ways: for entertainment as well as for work.
ChatGPT can write poems and songs, comment on tweets, generate unique content for blogs or social media, do homework across various subjects, create programming code and perform many other tasks.

A fan of musician Nick Cave used the chatbot to create a song in the singer’s style. Cave criticised the result, calling the AI piece “nonsense” and a “grotesque mockery of what it means to be human.”
Ammaar Reshi created a book about a girl named Alice and her robot assistant Sparkle using ChatGPT. However, its publication on Amazon drew a storm of criticism and death threats on social media.
The Nansen.ai platform used AI to map cryptocurrencies to zodiac signs. According to the bot, Bitcoin aligns with the pioneering spirit of Aries, Ethereum with Taurus’ stability, and Dogecoin with Sagittarius’ adventurism.
Professor Christian Terwiesch of the Wharton School tested ChatGPT on an MBA final exam. According to him, the AI would have earned a “B” or “B-” on the exam.
Ironically, Australian universities have found that AI is used to write every fifth exam paper.
Students in the United States have also been caught using the chatbot. For example, Darren Hick, a professor at Furman University, said that one of his students turned in an essay written with ChatGPT.
Such situations have occurred repeatedly. For this reason, some educational institutions around the world, including in the United States, Australia and France, have banned OpenAI’s chatbot on campus devices and internal Wi‑Fi networks.
“Although the tool can provide quick and simple answers to questions, it does not develop the critical thinking and problem-solving skills necessary for academic and life success,” said Jenna Lyle, a representative of New York public schools.
Moreover, major Australian universities changed the way exams and other activities are conducted due to concerns about students using ChatGPT to write essays. Under the new rules, the use of AI is treated as cheating.
But while educational institutions ban the technology, authorities are actively using it.
Democratic congressman Jake Auchincloss used the ChatGPT chatbot to write a speech delivered in the US House of Representatives. The speech concerned a bill to establish an American-Israeli AI centre. According to the congressman’s office, this was the first time in history that an algorithm-generated speech was read out in the Capitol.
The neural network was also used by Democratic congressman Ted Lieu: ChatGPT helped him draft a non-binding resolution on AI regulation.
In Colombia, Cartagena judge Juan Manuel Padilla used the chatbot when deciding whether an autistic child’s insurance should cover the costs of his treatment. The algorithm answered in the affirmative, citing the country’s laws, and this became the judge’s final ruling.
How to use OpenAI’s ChatGPT?
To use the ChatGPT AI, you need to sign up on the project site, provide a mobile phone number and enter a six-digit verification code.
The system is available in all countries except Afghanistan, Belarus, Venezuela, Iran, China, Russia and Ukraine.
Users in these regions can talk to the chatbot only via a VPN service and an active phone number from a supported country.
The algorithm is free to use. However, in January 2023 the company introduced a premium version of ChatGPT for $20 a month.
Also, according to Semafor, the startup is considering launching a mobile app with the chatbot, which would likely bring an even larger influx of users.
As for an API version of the system, there is no public release yet, but engineers interested in it can join the waitlist.
In November 2021, OpenAI opened the API for its GPT-3 text generator, the model family on which GPT-3.5 builds, to developers in 152 countries; they only need to sign up to start using the service.
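For developers who have signed up, a minimal request to that GPT-3 completion API with the legacy openai Python package (pre-1.0 versions) looks roughly like the sketch below; the model name, prompt and parameters are purely illustrative.

```python
# Minimal sketch of a GPT-3 completion request with the legacy openai package (0.x).
# The model name and parameters are illustrative; an API key from the OpenAI
# dashboard is expected in the OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",  # one of the GPT-3 family models exposed by the API
    prompt="Explain in two sentences what a large language model is.",
    max_tokens=100,            # upper bound on the length of the generated text
    temperature=0.7,           # higher values make the output more varied
)

print(response["choices"][0]["text"].strip())
```

A dedicated ChatGPT endpoint was not yet part of this public API, which is why the waitlist mentioned above existed.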
Also, according to CNBC, Microsoft plans to release software that will enable third-party companies, schools, and government agencies to create their own chatbots and improve existing ones using ChatGPT.
Is ChatGPT truly “revolutionary”?
Meta vice president and chief AI scientist Yann LeCun stated that, in terms of the methods underpinning it, the ChatGPT model is not “particularly innovative” and there is nothing “revolutionary” about it.
“The system is simply well put together, beautifully made,” said the researcher.
Ethereum co-founder Vitalik Buterin tested the tool on a specific coding task while writing an application. In his words, the chatbot made several errors.
“Right now, AI is far from replacing human programmers,” said Buterin.
Nevertheless, he added that in many cases ChatGPT can do well and write fairly good code, especially for routine tasks. But developers should use the tool with caution, noted the Ethereum co-founder.
According to some experts, ChatGPT did for AI what the iPhone did for the mobile phone industry when it launched in 2007.
“The Apple product wasn’t the first smartphone, but it killed off the competition with its touchscreen, ease of use, and app ecosystem, moving the computer experience into your pocket,” said Forrester Research analyst Rowan Curran.
He added that OpenAI’s neural network also shapes public consciousness by giving non-technical users the “power of artificial intelligence.”
ChatGPT is reshaping school curricula and policy, impacting workflows and already becoming a household term. The chatbot is not the first AI tool, but it has become a major driver of mainstream adoption of the technology.
Large corporations are watching these projects closely. For instance, in January Microsoft announced multi-year, multi‑billion-dollar investments in OpenAI.
Later it emerged that the tech giant integrated a faster version of the OpenAI neural network known as GPT-4 into its Bing search engine.
Other major tech firms have avoided releasing similar products for legal and ethical reasons.
In December 2022, Alphabet CEO Sundar Pichai and head of Google’s AI division Jeff Dean called AI chatbot technology “immature.”
They also pointed to issues of bias, toxicity and a tendency to “make up” information. They added that the tech giant would need to move more “conservatively” than a small startup.
Nevertheless, a month later it emerged that Google’s leadership had put the development of a ChatGPT competitor high on the agenda. According to CNBC, employees are actively testing a system called Apprentice Bard, which is based on the LaMDA language model and can answer questions.
In February, Google introduced its own chatbot, Bard. For now it is available only to a limited number of testers.
The future of the ChatGPT neural network
Large language models are making creativity and intellectual work accessible to all. Anyone with internet access can use systems like DALL-E 2 and ChatGPT to express themselves and explore vast amounts of information.
Language AI lets even newcomers generate marketing copy in minutes, spark ideas to overcome writer’s block and produce code that performs specific functions, at a level approaching that of human professionals.
Yet the technology still requires human involvement: crafting prompts to obtain the desired results, plus monitoring and oversight.
In spite of its advantages, there are drawbacks. First, such tools may accelerate the loss of important human skills like writing.
Educational institutions will have to develop and implement a policy on the acceptable use of large language models to achieve desired learning outcomes.
AI systems raise questions about intellectual property. While people draw inspiration from the world around them, questions remain about proper and ethical use of information protected by copyright.
Developers need to share details about training data. They should also seek permission to use needed resources during development and compensate authors.
Inaccuracies remain in AI output: the system can produce false facts, non-working code or a meaningless string of sentences.
The algorithms are also susceptible to bias: trained on skewed data, they begin to reproduce it.
ChatGPT is a breakthrough technology, but it comes with these limitations.
Still, AI algorithms are already shaping society and this influence is likely to grow in the future.
Like any powerful system that automates workflows (in this case, the creation of coherent, general-purpose text), it will affect those who offer such skills on the market, including marketers, writers, SEO specialists, programmers and sales managers.
The buzz around the chatbot’s capabilities has also found its way into the Web3 space.
According to industry experts, ChatGPT could affect the security, development and testing of smart contracts. It can fix vulnerabilities in human-written code or, conversely, exploit them to conduct attacks.
Others argue that the AI system will accelerate the industry’s evolution, acting as a catalyst for education, inspiration and iteration in Web3.
The OpenAI neural network itself will continue to evolve.
During the launch of the updated Bing search engine, Microsoft noted that the system runs on a next-generation language model developed by the AI lab.
“The algorithm is built on the core knowledge and progress of ChatGPT and GPT-3.5; it has become faster, more accurate and more efficient,” the press release states.
OpenAI will likely soon present the GPT-4 language model and update the chatbot so that it can draw on more up-to-date data.
Perhaps in the future people will no longer need others to write coherent, general-purpose text, search for information for academic work or create code. At the same time, chatbots like ChatGPT will open up new career paths and make previously unimaginable tasks accessible.