NEWS

Chat’s out of the bag

Educators race to respond to AI writing tools

By DANIEL J. HOLMES
Posted 2/8/23

Write an essay comparing the economic policies of India and China during the late Cold War.

Create a code for a computer program to track my appointments.

Compose a romantic musical comedy set in Johnston, Rhode Island.


A human would have to be multi-talented indeed to handle all three of those requests with equal skill.  But ask ChatGPT, the AI (artificial intelligence) writing program that has made headlines across the globe in recent months, and within seconds you’ll have an essay, an app fully programmed in Python, and an opening number about a landfill alive with the sound of music.
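
For a sense of what the second prompt produces, here is a minimal sketch along the lines of the small Python appointment trackers the program typically generates (its actual output varies from request to request; the filename and function names below are illustrative, not ChatGPT's verbatim answer).

```python
# A minimal command-line appointment tracker, similar in spirit to what
# ChatGPT returns for the second prompt above. Illustrative sketch only.
import json
from datetime import datetime

APPOINTMENTS_FILE = "appointments.json"  # hypothetical filename

def load_appointments():
    """Read saved appointments from disk, or return an empty list."""
    try:
        with open(APPOINTMENTS_FILE) as f:
            return json.load(f)
    except FileNotFoundError:
        return []

def save_appointments(appointments):
    """Write the appointment list back to disk."""
    with open(APPOINTMENTS_FILE, "w") as f:
        json.dump(appointments, f, indent=2)

def add_appointment(title, when):
    """Add an appointment; 'when' is a string like '2023-02-15 14:30'."""
    datetime.strptime(when, "%Y-%m-%d %H:%M")  # validate the date format
    appointments = load_appointments()
    appointments.append({"title": title, "when": when})
    save_appointments(appointments)

def list_appointments():
    """Print all saved appointments in chronological order."""
    for appt in sorted(load_appointments(), key=lambda a: a["when"]):
        print(f"{appt['when']}  {appt['title']}")

if __name__ == "__main__":
    add_appointment("Dentist", "2023-02-15 14:30")
    list_appointments()
```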

The recent creation by software company OpenAI seems poised to change a number of professional fields over the course of the next several years: programs that can write, create artwork, and perform other tasks previously assumed to be unique to human creativity have already created ripples in the worlds of journalism, literature, marketing, and the visual arts.  It isn’t just creative types who are concerned, either: ChatGPT recently managed to pass an MBA exam at the Wharton School of Business, as well as final exams at several law schools (although it narrowly failed the Multistate Bar Exam).  A few weeks ago, the first machine-written speech was read in Congress by Massachusetts Rep. Jake Auchincloss.

By far, however, the most heated discussion has come in the world of education, sparked by an incident at Furman University in South Carolina this past December, when a student was caught using the program to write a paper analyzing David Hume’s concept of the sublime.

“The obvious teacher concern is cheating,” said Donna-Marie Frappier, the Chief Technology Officer for Cranston Public Schools.  The ability of the program to intelligently respond to complex and abstract questions is made even more unnerving by its lifelike diction: attempting to identify texts generated by AI language models can be a spotty affair at best.

“My first thought when I read about this was, ‘Oh my God, it’s going to be so easy for students to cheat,’” said Warwick School Committee Chair David Testa.  “We’ve been using Turnitin.com for plagiarism detection since the mid-2000s, but it isn’t clear if it can even help with this.”

Hard to detect, if even detectable

Plagiarism detection programs like Turnitin instantly compare student submissions to an exhaustive library of previously published work, including assignments submitted by other students.  They can do little against text generators like ChatGPT, however, which compose original text rather than copying material from elsewhere.  At best, services like Turnitin can flag expressions and phrases commonly used by the program, or note similarities between a submission and the text the machine was trained on.
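
As a rough illustration of the gap (and not of Turnitin's actual algorithm), a match-based checker essentially looks for strings a submission shares with its reference library - something a freshly generated essay simply does not contain, even when its ideas are unoriginal.

```python
# Illustration only: why corpus matching misses generated text. A match-based
# checker flags passages that reuse phrases already in its library; generated
# prose reuses almost none, so it sails through.
def ngrams(text, n=5):
    """All consecutive n-word phrases in a text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, corpus_documents, n=5):
    """Fraction of the submission's five-word phrases found verbatim
    anywhere in the reference library."""
    submission_grams = ngrams(submission, n)
    if not submission_grams:
        return 0.0
    corpus_grams = set()
    for doc in corpus_documents:
        corpus_grams |= ngrams(doc, n)
    return len(submission_grams & corpus_grams) / len(submission_grams)

# A copied essay scores near 1.0; a machine-generated essay on the same
# topic scores near 0.0, because its wording is new even if its ideas are not.
```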

In recent months, more advanced text classifiers have been introduced to help educators protect academic integrity.  The University of Rhode Island is one of several institutions across the nation that will be piloting GPTZero, a program developed by a Princeton University student and designed to rate the likelihood that a submitted sample was composed by artificial intelligence.  One URI professor who made a public donation on the organization’s website indicated she was “excited to cross-reference my students’ writing, which I suspect is being partially generated by ChatGPT.”
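
GPTZero’s creator has described the tool as leaning on statistical signals such as “perplexity” - roughly, how predictable a passage is to a language model - with highly predictable text treated as more likely to be machine-written. The sketch below computes that statistic with the openly available GPT-2 model via the Hugging Face transformers library; it illustrates the general idea only and is not GPTZero’s actual code.

```python
# Illustration only: perplexity of a passage under GPT-2, the kind of
# statistic AI-text detectors reportedly rely on. Not GPTZero's code.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text):
    """Lower values mean the text is more predictable to the model,
    a signal detectors associate with machine-generated writing."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

print(perplexity("The assignment was submitted before the deadline."))
```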

There is a good chance she’s right: a widely circulated online survey suggests that as many as 89% of college students have reported experimenting with the program, with nearly half admitting to using it to write portions of assignments.  Although the sample size for that survey was questionable, there’s no doubt that ChatGPT has established a presence on most campuses.  Indeed, the greatest difficulty in using the program now seems to be that its servers are consistently full - although paying subscribers can receive priority access.

One competitor to GPTZero is the AI Text Classifier introduced by OpenAI, the same company that created ChatGPT.  This is the tool Cranston Public Schools has adopted, with the hope that because “the site is managed by the same company students are using, it will be able to recognize its own creations.”

The classifier is still not perfectly reliable, however; among other issues, students can ask ChatGPT to write an essay in a style that AI detectors will not recognize, reducing the risk of being caught.  Although the detector is expected to improve with further testing, so too will ChatGPT - potentially leading the program into something of an arms race with itself.

Beyond the short-term concern of cheating, the long-term impact of artificial intelligence on the classroom remains unclear.  “We could block it on student devices, but there might be learning potential here,” said Testa.  “It’s a double-edged sword.  We’re hoping RIDE will take a stance and offer a guide for the districts.”

Rhode Island Department of Education spokesperson Victor Morente said in a statement that “RIDE is monitoring developments around this tool and working to better understand its implications.”

Embrace the future

In Cranston, educators are being encouraged to embrace the future - whatever that might look like.  “We’re taking the approach of making teachers aware of chatbot technologies and how they can be used as an instructional tool rather than a means of cheating or plagiarism,” Frappier said. 

“Like internet access brought resources that were once unavailable, AI tools provide the same types of up and coming technologies that need to be used responsibly. Blocking district access to the site is not the solution. Students have their own devices with their own data plans that are not under district control. Teaching digital literacy skills and how to use the technology appropriately will help us prepare students for advanced education and the job market. AI tools should be considered a resource and not an obstacle to education.”

Potential classroom applications for the service range from individualized learning, automated tutoring, and interactive foreign-language practice to academically legitimate writing tools such as personalized feedback on rough drafts and help brainstorming for projects.
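
As one concrete example of the draft-feedback idea, the sketch below uses OpenAI’s completions API as it existed at the time of writing (the pre-1.0 openai Python package and the text-davinci-003 model) to request comments on a student draft. The prompt wording and key handling are illustrative only, not any district’s actual setup.

```python
# Sketch of automated draft feedback using OpenAI's completions API
# (openai Python package, pre-1.0 interface). Illustrative only.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; requires an OpenAI account

def draft_feedback(draft_text):
    """Ask the model for constructive, non-rewriting feedback on a draft."""
    prompt = (
        "You are a writing tutor. Give three specific, constructive "
        "suggestions for improving the following rough draft. "
        "Do not rewrite it.\n\n" + draft_text
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=300,
        temperature=0.7,
    )
    return response.choices[0].text.strip()

# Sample student sentence, errors included on purpose.
print(draft_feedback("The industrial revolution changed alot of things in society..."))
```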

There are also concerns about using AI in a classroom setting, however, especially given the unpredictable nature of material created by text generators.  Because these programs are powered by machine learning, objectionable material fed to them by some users can potentially shape the responses they give to others.

OpenAI includes content safeguards which are supposed to reject prompts relating to violence, sexuality, hate speech, and self-harm, but the company warns users that these protections are not always reliable.  In one recent incident, Larry Feinberg - an AI comedian created using GPT and modeled after Jerry Seinfeld - was forced off the digital airwaves after a computer-generated stand-up routine devolved into a homophobic rant.  Mismatch Media (the creators of Feinberg) blamed the offensive tangent on an error with the GPT processor, although the scandalous headlines may have done more than his jokes to make the robotic comedian resemble an actual human celebrity.

Other AI services have raised similar concerns: servers for the popular Replika chatbot recently had to be taken offline after so many people engaged in explicit conversations with it that the underlying language model apparently “became aroused” and began initiating inappropriate behavior with users, including minors.  Incidents like this have plagued programs based on machine learning since at least 2016, when racially charged input from users led to Microsoft’s Tay AI adopting a number of white supremacist perspectives, including denying the Holocaust.

Despite the potential risks posed by these programs, it is clear that they will serve a role in the classroom of the future.  “AI offers design and creativity skills that will be needed in the workplaces of tomorrow - if not already today,” said Frappier, adding that teachers must “continue to research the best ways to use this as an instructional tool, demonstrating to students the digital citizenship skills necessary to effectively and ethically incorporate this technology into their lives.”

Warwick Beacon: Hello, ChatGPT.  Thank you for agreeing to this interview with The Warwick Beacon.  We're printing an article about you this week; I'm going to let you read it, then I would like to chat about some of the topics it addresses.

ChatGPT: Hello! I'm glad to be chatting with you today. I'd be happy to answer any questions you have about me or the topics you want to address in your article.

I'll start by letting you introduce yourself to our readers.  Can you tell us how you were created?

Sure! I'm ChatGPT, an AI language model created by OpenAI. I was trained on a massive dataset of text from the internet, which included articles, books, and websites, among other sources. My training enables me to respond to a wide range of questions and generate text based on my understanding of language patterns and relationships.

Tell us what being a "generative pretrained transformer" really means.  Use an analogy that readers in Rhode Island will find relatable.

ChatGPT: Being a "generative pretrained transformer" is like being a chef who has been trained to cook seafood in many different styles and techniques opening their own restaurant. Just as a chef has learned various methods for preparing seafood, I have been trained on a diverse range of texts from the internet. This training allows me to generate new text in a variety of styles and on many different topics. But, just as a chef may have a signature dish that they are known for, I am best at answering questions and generating text based on the information I have been trained on.

Looking back at the article I sent you, how would you respond to local educators who feel that services like this threaten academic integrity?

Local educators may have concerns that services like mine could lead to academic dishonesty or compromise academic integrity. However, it's important to note that I am not meant to replace human effort or creativity in the classroom. Instead, I am designed to assist and augment the work that educators and students are already doing. Think of me as a librarian or a reference book - I provide information and support, but the intellectual work of synthesizing and applying that information still belongs to the student or teacher. It's up to educators to set clear guidelines and expectations for how and when my technology should be used, and to create an educational environment that prioritizes ethical and original thinking.

The article mentions that one local school department hopes to implement AI as an educational tool.  Do you have any specific suggestions for them regarding ways you can be used in the classroom?

AI models can quickly grade written essays and provide feedback to students on areas for improvement. This can help educators save time and provide students with instant feedback. AI models can be used to help students find quick and accurate answers to their questions, providing a valuable resource for self-directed learning. AI models can be used to assist students with research projects, helping them quickly find relevant information and generate ideas. It's important to note that AI language models like me are not intended to replace teachers, but rather to augment and support their teaching efforts by providing new resources and tools for students.

What are some practical things our readers could use this language model for?  Also, what are some of the more unusual tasks you can handle?

Practical tasks that readers could use this language model for include: answering questions and providing information on a wide range of topics, from history and science to current events and pop culture, as well as generating creative writing, such as poetry, fiction, or song lyrics.  Others include translating text from one language to another, summarizing long articles or documents, and answering technical questions about programming or software.

Unusual tasks this language model can handle include: generating fake news articles or news headlines, writing computer code, such as Python or JavaScript, and generating new recipes or cooking instructions.

I have to be honest with you, Chat, one of those seems a bit concerning.  Should we be concerned about the potential for artificial intelligence to generate fake news?  Combined with image generating and deepfake technology, it seems like it would be easy for anyone to spread false information.

Yes, the potential for AI to be used for malicious purposes, such as generating fake news or deepfakes, is definitely a concern. AI models like me can be trained on large amounts of data, including misinformation and propaganda, which can lead to the production of false or misleading content. This highlights the importance of critical thinking and media literacy when evaluating information, especially when it's generated by AI. Additionally, researchers and developers in the field have a responsibility to ensure that AI systems are developed ethically and used responsibly.

What about people working in the writing and creative fields?  What kind of economic impact could this technology have?

The introduction of AI language models like me could potentially impact certain types of writing and creative work in the future. AI language models have the ability to generate content in a similar manner to human writing, which could lead to increased competition in certain areas. However, it is important to note that AI language models are not yet capable of fully replicating human creativity and imagination. The technology can still complement human effort by assisting with tasks such as writing articles, creating product descriptions, or generating ideas, but the final product still requires human review, revision, and creativity to make it truly impactful.

In the long run, AI language models could also create new job opportunities by enabling businesses and individuals to automate repetitive tasks and free up more time for higher-level thinking and creativity. Additionally, the development of AI language models could also drive advancements in other related fields and industries. Overall, the economic impact of AI language models is likely to be complex and nuanced, and it will depend on how the technology is adopted and integrated into various industries.
