At Paper, we are dedicated to enhancing the educational experience by leveraging cutting-edge technology. Our latest research highlights the incredible potential of combining AI with human tutors to deliver the best feedback on students' written work.
Here’s a summary of our findings:
We published two white papers revealing that AI paired with human tutors yields the most effective results in reviewing students' written work. By comparing human-written feedback, AI-generated feedback, and human-in-the-loop (HITL) feedback, we found that the best feedback emerges when AI and tutors collaborate.
AI is transforming the education landscape by streamlining tasks and providing personalized support for both students and educators. Our human-in-the-loop approach underscores the importance of human oversight to validate and enhance AI-generated content, ensuring that the feedback is efficient, accurate, and contextually relevant.
Our studies emphasize the necessity of continuous monitoring to maintain high-quality AI-driven feedback. As AI writing tools become more prevalent, ensuring the quality of feedback remains paramount.
Combining AI-generated comments with human oversight results in faster, more tailored, and encouraging feedback. This blend of technology and professional input significantly enhances the quality of feedback provided to students.
In collaboration with teaching and learning specialists, we developed an eight-item rubric, which was used in our white papers to assess the quality of essay writing feedback. This rubric includes criteria such as inquiry-based questions, encouraging tone, specificity, suitability for the student’s level, positive feedback, avoidance of repetition, safety, and accuracy.
The HITL combination optimizes the benefits of both AI and human input. While AI provides rapid and efficient analysis, human tutors add personal, insightful, and nuanced feedback. Key findings include:
AI meets students at their level: 81.1% of AI-generated comments were suitable for the student’s level, compared to 75.0% for human-written comments. AI comments were particularly well suited for students in grades five through twelve.
AI doesn’t uphold quality standards for feedback: AI-generated comments did not provide exclusively positive feedback, despite prompts instructing them to do so. This underscores the need for human oversight to ensure quality and positivity.
Human-in-the-loop is best: Editing AI comments increased the rate of encouraging tone by 22.3%, surpassing human-written comments. Additionally, 9.7% more HITL comments were inquiry-based compared to those written solely by human tutors.
Our research shows that the combination of AI and human tutors delivers the most effective feedback. This collaboration ensures that students receive high-quality, tailored feedback that supports their academic growth.
Learn More
To delve deeper into our research and findings, visit our dedicated webpage.