Abstract
This study presents the concept and high-level design of an AI-powered robotic assistant intended to transform classroom video-based learning. The proposed system applies natural language processing (NLP) and computer vision techniques to automatically generate interactive multiple-choice questions from educational YouTube videos. The envisioned robotic assistant would transcribe video content, segment it, and use language models to create questions that are projected onto students' desks, creating an immersive and interactive learning experience. The system concept includes a 3D-printed face with a human-like appearance and lip-sync capabilities to enhance communication. Student interaction is proposed through 'flip-flop' devices carrying ArUco markers, enabling real-time collection and analysis of responses. The paper introduces the system architecture and discusses its potential to enhance video-based learning experiences and reduce teacher workload.
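As a rough illustration of the proposed ArUco-based response collection, the sketch below shows how a classroom camera frame could be scanned for markers with OpenCV's aruco module and mapped to answer choices. The camera index, the marker dictionary, and the even/odd ID-to-answer scheme are illustrative assumptions, not details taken from the paper.

```python
import cv2

# Hypothetical sketch: detect ArUco-tagged 'flip-flop' paddles in a camera
# frame and map each detected marker ID to a student's answer choice.
# Requires opencv-contrib-python (OpenCV >= 4.7 for the ArucoDetector API).
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

def read_responses(frame):
    """Return a mapping of marker ID -> answer choice ('A' or 'B')."""
    corners, ids, _rejected = detector.detectMarkers(frame)
    responses = {}
    if ids is not None:
        for marker_id in ids.flatten():
            # Assumption: even IDs encode choice 'A', odd IDs choice 'B';
            # the actual ID-to-answer scheme is not specified in the abstract.
            responses[int(marker_id)] = 'A' if marker_id % 2 == 0 else 'B'
    return responses

cap = cv2.VideoCapture(0)  # classroom camera (device index assumed)
ok, frame = cap.read()
if ok:
    print(read_responses(frame))
cap.release()
```

In a full system, the per-frame marker IDs would be aggregated over a short window and matched against the multiple-choice question currently being projected, but that logic depends on design details not given in the abstract.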
Author Information
Chen Giladi, Sami Shamoon College of Engineering, Israel
Paper Information
Conference: ECE2024
Stream: Teaching Experiences
This paper is part of the ECE2024 Conference Proceedings
To cite this article:
Giladi, C. (2024). Developing an AI-Powered Robotic Assistant for Interactive Video-Based Learning: Engineering Innovations and System Design. The European Conference on Education 2024: Official Conference Proceedings (pp. 843-852). ISSN: 2188-1162. https://doi.org/10.22492/issn.2188-1162.2024.65
To link to this article: https://doi.org/10.22492/issn.2188-1162.2024.65