OpenAI, the creator of ChatGPT, is trying to curb the chatbot’s reputation as a freewheeling cheating machine with a new tool that may help teachers detect whether a student or artificial intelligence wrote the coursework. OpenAI unveiled its new AI Text Classifier on Tuesday, following weeks of debate at schools and universities over fears that ChatGPT’s ability to write almost anything on demand could fuel academic dishonesty and hinder learning.
OpenAI cautions that its new tool, like others already on the market, is not foolproof. Jan Leike, head of the OpenAI alignment team charged with making its systems safer, said the method for recognising AI-written material “is flawed and it will be inaccurate occasionally.”
Because of that, Leike stated, “it shouldn’t be the only factor considered when making judgements.”
Millions of people began experimenting with ChatGPT after it launched on November 30 as a free application on OpenAI’s website, teenagers and college students among them. While many found innovative and harmless ways to utilise the technology, the ease with which it could answer take-home exam questions and help with other assignments sparked alarm among some educators.
By the time classes resumed in the new year, New York City, Los Angeles and other large public school systems had begun blocking its use in classrooms and on school-owned devices. The Seattle Public Schools district initially blocked ChatGPT on all school devices in December, but later opened access to educators who wanted to use it as a teaching tool, according to district spokesperson Tim Robinson.
“We can’t afford to ignore it,” Robinson stated.
The district is also considering bringing ChatGPT into classrooms so that teachers can use it to help students develop critical thinking skills, and students can use it as a “personal tutor” or as a way to generate fresh ideas while working on an assignment, Robinson said.
The debate over ChatGPT is evolving quickly in school districts across the country.
The first thought was, “OMG, how are we going to stop all the cheating that will happen with ChatGPT,” said Devin Page, a technology specialist with Calvert County Public Schools in Maryland. That view, he said, is giving way to a recognition that blocking a tool that is “the future” is not the answer. Page believes districts like his own will eventually unblock ChatGPT, especially once the company’s detection service is in place. “I think we would be naive if we were not aware of the dangers this tool poses, but we also would fail to serve our students if we banned them and us from using it for all its potential power,” he said.
In a blog post published on Tuesday, OpenAI acknowledged its detection tool’s limitations, but said that beyond deterring academic dishonesty, it could help identify automated misinformation campaigns and other cases in which AI has been used to imitate humans.
The longer the passage of text, the better the tool is at telling whether it was written by a human or an AI. Any text entered into the tool, whether a college admissions essay or a literary analysis of Ralph Ellison’s “Invisible Man,” is classified as “very unlikely, unlikely, unclear if it is, possibly, or likely” AI-generated.
Like ChatGPT itself, which was trained on a vast trove of digitised books, newspapers and online writings yet often confidently spits out falsehoods or nonsense, the detector is difficult to interpret: it is hard to know how it arrived at a result.
“We don’t really understand what type of pattern it pays attention to, or how it works internally,” Leike said. “At this stage, there’s really not much we can tell about how the classifier really works.”
Universities and other institutions of higher education have also begun debating the ethical use of AI. Sciences Po, one of France’s most prestigious universities, banned its use last week, warning that anyone caught covertly using ChatGPT or other AI tools to produce written or spoken work could be expelled from Sciences Po and other institutions.
In response to the criticism, OpenAI said it has spent several weeks crafting new guidelines to support instructors.
“Like many other technologies, it may be that one district decides that it’s not fit for use in their schools,” said OpenAI policy researcher Lama Ahmad. “We don’t really push them in one direction or another. We just want to give them the information they need to make the right decisions for themselves.”
It is a highly visible position for the research-focused San Francisco company, which is now backed by billions of dollars in investment from its partner Microsoft and faces growing attention from the public and governments.
France’s minister for the digital economy, Jean-Noël Barrot, met with OpenAI executives in California, including CEO Sam Altman, and a week later expressed his enthusiasm for the technology to an audience at the World Economic Forum in Davos, Switzerland. But the minister, a former professor at the Massachusetts Institute of Technology and the French business school HEC in Paris, said there are also difficult ethical questions that will need to be resolved.
Since ChatGPT, among other tools, will be able to deliver exams that are rather impressive, he added, “if you’re in the law faculty, there is grounds for worry.” Those in the economics faculty, by contrast, are “good to go,” he said, since ChatGPT will struggle to find or produce what is expected at the graduate level in economics.
He said it will become increasingly important for users to understand the basics of how these systems work, so that they are aware of the biases they may carry.