- Written by Chris Vallance
- Technology reporter
The new chatbot has surpassed one million users in less than a week, according to the company behind it.
ChatGPT was announced on Wednesday by OpenAI, an artificial intelligence research firm that counts Elon Musk among its founders.
But the company warns that it can produce problematic answers and exhibit biased behaviour.
OpenAI says it is “looking forward to gathering user feedback to help us improve this system.”
ChatGPT is the latest in a line of AI systems that the company refers to as GPT, an acronym that stands for Generative Pre-Trained Transformer.
To develop the system, an early version was improved through conversations with human trainers.
The system also learned from Twitter data, according to a tweet from Elon Musk, who is no longer on OpenAI’s board. The Twitter boss wrote that the company had paused that access “for the time being.”
The results impressed those who tried the chatbot, and OpenAI chief executive Sam Altman revealed the level of interest in a tweet.
“ChatGPT went live on Wednesday. Today it exceeded 1 million users!” Sam Altman tweeted.
According to the project, the chat format allows the AI to answer “follow-up questions, admit mistakes, challenge incorrect premises, and reject inappropriate requests.”
A reporter from technology news site Mashable, who tried out ChatGPT, said it was difficult to provoke the model into saying offensive things.
Mike Pearl wrote that in his own tests, “its taboo avoidance system is quite comprehensive”.
However, OpenAI warns that “ChatGPT sometimes writes answers that seem reasonable but are incorrect or illogical.”
According to the company, training the model to be more cautious causes it to decline questions it could answer correctly.
In a short interview conducted by the BBC for this article, ChatGPT proved to be a careful communicator, able to express itself clearly and concisely in English.
Did it think artificial intelligence would take over the work of human writers? No: it said that “AIs like myself can help writers by providing suggestions and ideas, but ultimately it’s up to the human writer to create the final product.”
Asked about the social impact of AI systems like itself, it said this was “hard to predict”.
Was the system trained on Twitter data? It replied that it did not know.
It was only when the BBC asked it about HAL, the malevolent fictional AI from the film 2001: A Space Odyssey, that it seemed to falter.
This was most likely a random glitch, perhaps unsurprising given the volume of interest.
His master’s voice
Other companies that have opened up their conversational AI systems to public use have found that they can be persuaded to say offensive or insulting things.
Many of them are trained on huge databases of text scraped from the internet, and as a result they learn the worst as well as the best of human expression.
Meta’s BlenderBot 3 harshly criticised Mark Zuckerberg in a conversation with a BBC reporter.
In 2016, Microsoft issued an apology after an experimental Twitter bot named “Tay” made offensive comments on the platform.
Others have found that making a computer sound convincingly human can bring unexpected problems.
Google’s LaMDA was so plausible that a now-former employee concluded it was sentient and entitled to the rights due a thinking, feeling being, including the right not to be used in experiments against its will.
ChatGPT’s ability to answer questions made some users wonder if it could replace Google.
Others questioned whether journalists’ jobs were at risk. Emily Bell, of the Tow Center for Digital Journalism, worried that readers might be swamped with “bullshit”.
“ChatGPT confirms my biggest fear about AI and journalism: not that honest journalists will be replaced in their work, but that these capabilities will be used by bad actors to generate astonishing amounts of misleading material, drowning out reality,” she wrote in a tweet.
Question-and-answer site Stack Overflow has already had to stem a flood of answers generated by artificial intelligence.
Others have asked ChatGPT itself to speculate on AI’s impact on the media.
General-purpose AI systems such as ChatGPT and others pose a number of ethical and societal risks, according to Carly Kind of the Ada Lovelace Institute.
Among the potential issues Ms Kind worries about is that AI could perpetuate misinformation or “disrupt existing institutions and services – ChatGPT might be able to write a passable job application, school essay or grant application, for example”.
There are also issues with copyright infringement, she adds, “and there are also privacy concerns, since these systems often include unethically collected data from Internet users.”
However, she said they can also offer “interesting and hitherto unknown societal benefits.”
ChatGPT learns from human interactions, and OpenAI CEO Sam Altman said on Twitter that those working in this area also have a lot to learn.
AI, he wrote, has “a long way to go and big ideas still to discover. We will stumble along the way and learn a lot from real-world experience.”
“It will sometimes be messy. We will sometimes make really bad decisions, and we will sometimes have moments of transcendent progress and value,” he added.