We propose a three-stage architecture for metaverse announcers, designed to identify events, position cameras, and blend between shots. Based on this architecture, we introduce a Metaverse Announcer User Experience (MAUE) model that identifies the factors affecting users’ Quality of Experience (QoE) from a human-centered perspective. In addition, we implement MetaCast, a practical self-driven metaverse announcer in a university campus metaverse prototype, and use it to conduct user studies for the MAUE model. The experimental results yield satisfactory announcer settings that align with the preferences of most users, covering parameters such as video transition rate, repetition rate, importance threshold, and image composition.
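To make the three-stage pipeline concrete, the sketch below shows one way such an announcer loop might be organized around the parameters studied in the user studies. It is a minimal illustration under our own assumptions: all names (`MetaverseAnnouncer`, `detect`-style filtering, `position_camera`, `blend_shots`) and the default parameter values are hypothetical, not the paper's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class Event:
    # Hypothetical event record; fields are assumptions, not the paper's schema.
    kind: str                           # e.g. "lecture_start", "crowd_gathering"
    position: tuple[float, float, float]  # world-space location
    importance: float                   # score from the event-identification stage


@dataclass
class AnnouncerConfig:
    # QoE-facing parameters named in the abstract (default values illustrative).
    importance_threshold: float = 0.6   # minimum score for an event to be broadcast
    transition_rate: float = 2.0        # seconds spent blending between shots
    repetition_rate: int = 3            # max times the same event kind is revisited


class MetaverseAnnouncer:
    """Hypothetical three-stage loop: identify events, position cameras, blend shots."""

    def __init__(self, config: AnnouncerConfig) -> None:
        self.config = config
        self.shown: dict[str, int] = {}  # event kind -> times already broadcast

    def step(self, world_events: list[Event]) -> None:
        # Stage 1: identify events worth broadcasting, respecting the
        # importance threshold and the per-kind repetition limit.
        candidates = [
            e for e in world_events
            if e.importance >= self.config.importance_threshold
            and self.shown.get(e.kind, 0) < self.config.repetition_rate
        ]
        if not candidates:
            return
        event = max(candidates, key=lambda e: e.importance)

        # Stage 2: position a virtual camera to frame the event.
        shot = self.position_camera(event)

        # Stage 3: blend from the current shot to the new one.
        self.blend_shots(shot, duration=self.config.transition_rate)
        self.shown[event.kind] = self.shown.get(event.kind, 0) + 1

    def position_camera(self, event: Event):
        # Placeholder: choose a camera pose satisfying a composition rule
        # (e.g. rule of thirds on the event position).
        ...

    def blend_shots(self, shot, duration: float) -> None:
        # Placeholder: cut or crossfade to the new shot over `duration` seconds.
        ...
```

Calling `step` once per tick with the current world events would drive the announcer autonomously; tuning `AnnouncerConfig` corresponds to the parameter search the user studies perform.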