Educators are turning to artificial intelligence to grade essays, but some experts are raising ethical concerns.

The ChatGPT logo visible on the screen of a laptop.

When Diane Gayeski, a strategic communications professor at Ithaca College, receives an essay from one of her students, she runs a portion of it through ChatGPT, seeking the AI tool’s critique and suggestions for improvement.

Gayeski likens AI grading to having a teaching or research assistant do a first pass, and says the tool performs well in that role. She shares ChatGPT’s feedback and revised essay with her students, along with her own input on areas such as the introduction, and uses both to spark discussion about the feedback.

Gayeski requires her class of 15 students to follow the same process: running their drafts through ChatGPT to identify areas for improvement.

AI is reshaping education, offering benefits such as automating routine tasks to free up time for more personalized instruction, but it also raises serious challenges around accuracy, plagiarism, and academic integrity.

Both educators and students are embracing the technology. A report by strategy consulting firm Tyton Partners, sponsored by plagiarism detection platform Turnitin, found that half of college students used AI tools in the fall of 2023. Faculty adoption was lower but growing: 22% of faculty members used AI in the fall of 2023, up from 9% that spring.

Educators are turning to AI tools and platforms such as ChatGPT, Writable, Grammarly, and EssayGrader to grade papers, provide feedback, design lesson plans, and craft assignments. They also use these tools to build quizzes, polls, videos, and interactive content that boost classroom engagement.

Meanwhile, students are leaning on tools such as ChatGPT and Microsoft Copilot, which is integrated into Word, PowerPoint, and other applications.

However, while some schools have set policies governing students’ use of AI for academic work, many have no guidelines for teachers. And using AI to grade or give writing feedback raises ethical questions of its own: parents and students who are already paying substantial tuition may wonder what they are getting from an education built on AI-generated assignments and AI-graded work.

Gayeski stresses that having AI do all the grading, while students use it to produce all their final work, defeats the purpose of learning.

When it is appropriate to use AI.

According to Dorothy Leidner, a business ethics professor at the University of Virginia, whether teachers should use AI for grading depends on the context. When a large class is being assessed mainly on declarative knowledge, with clear right and wrong answers, AI grading may even surpass human grading, she told CNN: it is quicker and more consistent, and it doesn’t tire or grow bored.

However, Leidner emphasized that in smaller classes or assignments with less definitive answers, grading should remain personalized. This allows teachers to provide more tailored feedback and gain insight into a student’s progress over time.

She suggested a hybrid approach in which teachers use AI to score metrics such as structure, language use, and grammar, while personally grading the aspects that call for judgment: novelty, creativity, and depth of insight.
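Leidner doesn’t prescribe an implementation, but the division of labor she describes could look something like the minimal Python sketch below. The criteria names, the 0–10 scale, and the equal weighting are all illustrative assumptions, not her design.

```python
# A minimal sketch of a hybrid rubric: an AI pass fills in mechanical
# criteria, and the grade stays incomplete until the teacher has entered
# the judgment-based ones. Criteria and weighting are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class EssayRubric:
    # Mechanical criteria: scored by an automated AI pass (0-10 each).
    structure: Optional[int] = None
    language_use: Optional[int] = None
    grammar: Optional[int] = None
    # Judgment criteria: always entered by the teacher (0-10 each).
    novelty: Optional[int] = None
    creativity: Optional[int] = None
    depth_of_insight: Optional[int] = None

    def final_score(self) -> float:
        """Average all six criteria; refuse to produce a grade until the
        teacher has filled in the judgment-based fields."""
        scores = [self.structure, self.language_use, self.grammar,
                  self.novelty, self.creativity, self.depth_of_insight]
        if None in scores:
            raise ValueError("incomplete rubric: human-graded criteria missing")
        return sum(scores) / len(scores)

# Usage: the AI pass fills the first three fields, the teacher the rest.
rubric = EssayRubric(structure=8, language_use=7, grammar=9)
rubric.novelty, rubric.creativity, rubric.depth_of_insight = 6, 7, 8
print(rubric.final_score())  # 7.5
```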

In essence, Leidner argues that teachers should retain ultimate responsibility for grading while using AI as a supplementary tool to improve efficiency and consistency in parts of the assessment.

Leslie Layne teaches her students how to use ChatGPT effectively, but she disagrees with the way some educators use it to grade papers.

Layne, who teaches ChatGPT best practices in her writing workshop at the University of Lynchburg in Virginia, sees benefits for teachers but drawbacks as well.

She worries that feedback that doesn’t come directly from her could undermine the teacher-student relationship, and she sees uploading a student’s work to ChatGPT as a potential breach of intellectual property. Leidner shares that concern, particularly for doctoral dissertations and master’s theses that students may hope to publish; she suggests students be told about the practice in advance and possibly asked for consent.

Some educators use software such as Writable, which runs essays through ChatGPT to assist with grading but first “tokenizes” them so they are not shared directly with the system. Platforms such as Turnitin, meanwhile, offer detection tools meant to flag assignments generated by ChatGPT and other AI, though these detectors are far from infallible: OpenAI discontinued its own AI-detection tool last year because of its low accuracy rate.
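Writable’s tokenization scheme isn’t public, but the general idea, stripping identifying details before an essay ever leaves the teacher’s machine, can be sketched in a few lines. The sketch below uses the OpenAI Python SDK; the model name, the prompt, and the naive name-masking are illustrative assumptions, and a production tool would need far more robust PII removal.

```python
# Illustrative sketch only: masks a student's name before requesting feedback.
# Assumes the `openai` package (v1+) and an OPENAI_API_KEY in the environment.
# Real tools such as Writable use their own, more thorough anonymization.
import re
from openai import OpenAI

def anonymize(essay: str, student_name: str) -> str:
    """Replace occurrences of the student's name with a neutral token."""
    return re.sub(re.escape(student_name), "[STUDENT]", essay, flags=re.IGNORECASE)

def request_feedback(essay: str, model: str = "gpt-4o") -> str:
    """Send the already-anonymized essay to the model and return its critique."""
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "You are a writing tutor. Comment on structure, "
                        "clarity, and grammar. Do not assign a grade."},
            {"role": "user", "content": essay},
        ],
    )
    return response.choices[0].message.content

draft = "My name is Jane Doe and this essay argues that..."
print(request_feedback(anonymize(draft, "Jane Doe")))
```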

Establishing criteria or benchmarks.

Some schools are actively developing policies for both teachers and students on the use of AI. Alan Reid, a research associate at Johns Hopkins University’s Center for Research and Reform in Education (CRRE), has worked with K-12 educators who use GPT tools to generate personalized end-of-quarter comments on report cards. Like Layne, however, he recognizes AI’s limits in providing insightful feedback.

Reid is part of a committee at his college tasked with drafting an AI policy for faculty and staff. These discussions extend beyond classroom use to encompass AI’s broader applications in education, including tasks such as creating promotion and tenure files, performance reviews, and job postings.

Nicolas Frank, an associate professor of philosophy at the University of Lynchburg, emphasizes the importance of aligning university policies with the perspectives of professors. He cautions against premature policy-making, highlighting the complexity of integrating AI into daily life and the risk of oversimplifying the challenges associated with its use in grading and instruction.

Frank suggests that educators begin by identifying clear instances of AI misuse and crafting policies around those, building in the nuance needed to handle the complexities of AI integration.

Leidner, for her part, wants universities to prioritize transparency: students should be informed when AI is used to grade their work, and policies should spell out what kinds of information must not be uploaded to, or requested from, AI systems. She also underscores the importance of universities staying adaptable and reassessing their policies regularly as the technology and its applications evolve.
