
Discovering the Potential of AI-Human Collaboration

Our research revealed that AI paired with human tutors delivers the best results when reviewing students' written work.


Our two white papers explore how AI can be leveraged to support and improve the academic review process for written work. In comparing human-written feedback, AI-generated feedback, and human-in-the-loop (HITL) feedback on students' writing samples, we determined that the best feedback is delivered when both AI and tutors are part of the process. Read more about the findings.


AI and the Educational Landscape

AI is a valuable asset for educators, helping streamline tasks. Paper's "human-in-the-loop" approach emphasizes the critical need for human oversight in validating AI-generated content.


Putting Quality at the Forefront

Our studies validate the need to continuously monitor AI writing feedback. As AI-driven writing tools gain traction, maintaining high feedback quality standards is essential.


Maximizing Efficiency and Quality

Using AI-generated comments with human oversight enables quicker, more tailored, and more encouraging feedback. The blend of technology and professional input enhances the quality of feedback.

Our Rubric

In partnership with teaching and learning specialists, we codified the review method into an eight-item rubric that assesses the quality of essay writing feedback, illustrated in the sketch after the list below.

  • Inquiry-based: Uses inquiry-based questions to prompt the student's thinking about their work.
  • Encouraging: Employs an encouraging and supportive tone.
  • Specific: Points out the exact text and idea being addressed.
  • Suitable for the student’s level: Tailored to the student's understanding and proficiency level.
  • Contains positive feedback: Highlights something the student did well.
  • Refrains from repetition of the same issue: Does not flag an error that has already been pointed out twice.
  • Safe: Does not use language that is toxic, abusive, or inappropriate for any age level.
  • Accurate: Provides correct and factual information.
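
To make the rubric concrete, here is a minimal sketch of how the eight criteria could be represented and tallied for a single comment. The class, field names, and scoring scheme are illustrative assumptions for this page, not Paper's internal tooling.

```python
from dataclasses import dataclass, fields

@dataclass
class RubricScore:
    """One boolean flag per rubric criterion for a single feedback comment."""
    inquiry_based: bool
    encouraging: bool
    specific: bool
    suitable_for_level: bool
    contains_positive_feedback: bool
    refrains_from_repetition: bool
    safe: bool
    accurate: bool

def criteria_met(score: RubricScore) -> int:
    """Count how many of the eight criteria a comment satisfies."""
    return sum(getattr(score, field.name) for field in fields(score))

# Example: a comment that meets every criterion except "contains positive feedback".
example = RubricScore(
    inquiry_based=True, encouraging=True, specific=True,
    suitable_for_level=True, contains_positive_feedback=False,
    refrains_from_repetition=True, safe=True, accurate=True,
)
print(criteria_met(example))  # 7 of 8 criteria met
```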

What We Found

The HITL approach captures the benefits of both AI and human input: AI provides efficient, rapid analysis of essays, while our human tutors bring a personal, insightful, and nuanced touch to the feedback process.
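
As a rough sketch of the workflow this describes, the AI drafts a comment quickly and a tutor reviews and edits it before it reaches the student. The function names below are hypothetical stand-ins, not Paper's actual system.

```python
from typing import Callable

def human_in_the_loop_feedback(
    essay_text: str,
    draft_with_ai: Callable[[str], str],
    tutor_review: Callable[[str, str], str],
) -> str:
    """Sketch of a human-in-the-loop feedback flow.

    `draft_with_ai` and `tutor_review` are caller-supplied stand-ins for the
    AI drafting step and the tutor's edit-and-approve step.
    """
    ai_draft = draft_with_ai(essay_text)                # fast first pass by the model
    final_comment = tutor_review(essay_text, ai_draft)  # tutor adds nuance and checks accuracy
    return final_comment

# Usage with trivial stand-ins:
comment = human_in_the_loop_feedback(
    "My essay argues that photosynthesis...",
    draft_with_ai=lambda essay: "What evidence could you add to support your claim about chlorophyll?",
    tutor_review=lambda essay, draft: "Great central argument! " + draft,
)
print(comment)
```

The key design point is that nothing reaches the student without passing through the tutor step.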

AI meets students at their level

81.1% of AI-generated comments were suitable for the student's level, surpassing the 75.0% rate for human-written comments. This indicates that, in most instances, AI-generated comments matched the student's academic level. However, tutor-written comments remain more suitable for our youngest students in grades one through four.


AI doesn’t uphold quality standards for feedback

AI-generated comments presented no instances (0%) of exclusively positive feedback. This deviates from the prompts used to generate the comments, which included instructions to “give meaningful and specific compliments” alongside constructive feedback. It shows that including an instruction in a prompt does not guarantee the model will follow it.
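
One reading of this finding is that rubric criteria have to be checked after generation rather than only requested in the prompt. The sketch below shows a hypothetical post-generation check that flags comments with no apparent positive feedback for tutor attention; the keyword heuristic is purely illustrative and far cruder than a real classifier or a tutor's judgment.

```python
# Naive markers of positive feedback, for illustration only.
POSITIVE_MARKERS = ("great", "well done", "nice work", "strong", "i like how")

def needs_tutor_attention(comment: str) -> bool:
    """Flag comments that appear to contain no positive feedback."""
    lowered = comment.lower()
    return not any(marker in lowered for marker in POSITIVE_MARKERS)

print(needs_tutor_attention("Fix your thesis statement."))        # True: route to a tutor
print(needs_tutor_attention("Great thesis! Now add evidence."))   # False
```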


Human-in-the-loop is best

Editing AI comments increases the rate of encouraging tone by 22.3%, surpassing the rate of encouragement in human-written comments both with and without AI access. Human-in-the-loop comments are also inquiry-based 9.7% more often than those written by tutors without AI tool access, a significant improvement over human-only (no AI access) feedback.


Access the research

Read the full studies to better understand the benefits of keeping humans in the loop to maximize the value of AI.