VRL: Development and experimental evaluation of an embodied lecturer avatar using marker-based landmark-guided re-targeting deformation transfer
DOI: https://doi.org/10.21834/e-bpj.v10iSI36.7567

Keywords: Virtual Remote Lecture (VRL), Anthropomorphic Embodied Avatar, Nominal Anonymity, Marker-Based Landmark-Guided Re-Targeting

Abstract
This study examines the re-targeting accuracy of landmark points on the embodied lecturer avatar's mesh surface to determine how effectively the avatar delivers verbal and non-verbal cues in a Virtual Remote Lecture (VRL). Embodied lecturer avatars with different anthropomorphic designs and nominal anonymity are employed to assess how efficiently social cues are exchanged during communication. The study concludes that landmark points on real human faces cannot be fully matched or accurately mapped onto less visually anthropomorphic avatars. The findings reveal that higher re-targeting accuracy on the avatar improves pedagogical effectiveness, offering a promising educational method beyond traditional in-person and online lectures.
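The abstract does not reproduce the measurement procedure, but a minimal sketch of how landmark re-targeting accuracy could be quantified is given below: the mean and maximum Euclidean distance between re-targeted landmark positions and reference markers placed on the avatar mesh. The function name, landmark selection, and coordinate values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def retargeting_error(mapped_landmarks, reference_markers):
    """Mean and max Euclidean distance between corresponding landmark
    points after re-targeting, in the avatar mesh's coordinate units."""
    mapped = np.asarray(mapped_landmarks, dtype=float)
    reference = np.asarray(reference_markers, dtype=float)
    assert mapped.shape == reference.shape, "landmark sets must correspond 1:1"
    distances = np.linalg.norm(mapped - reference, axis=1)
    return distances.mean(), distances.max()

# Hypothetical example: three facial landmarks (nose tip, left and right
# mouth corners) after mapping onto an avatar head mesh, compared with
# manually placed marker positions on the same mesh.
mapped    = [[0.00, 0.02, 0.10], [-0.03, -0.02, 0.08], [0.03, -0.02, 0.08]]
reference = [[0.00, 0.02, 0.11], [-0.04, -0.02, 0.08], [0.03, -0.03, 0.08]]
mean_err, max_err = retargeting_error(mapped, reference)
print(f"mean error: {mean_err:.4f}, max error: {max_err:.4f}")
```

In practice, a lower mean error for a highly anthropomorphic avatar than for a stylised one would be consistent with the study's finding that human facial landmarks cannot be fully mapped onto less anthropomorphic meshes.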
License
Copyright (c) 2025 Siti Nur Shuhada Abu Samah, Erni Marlina Saari, Ahmad Zamzuri Mohamad Ali

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.