Round the Table, Education without the 2D Frame Constraints: a WebVR experience from a glocal perspective

Round-the-Table, a researcher-led initiative, was an experimental virtual roundtable in a 3D format that invited twenty-one organisations worldwide from education, research, and technology to open a broad dialogue about a more sustainable, inclusive, interactive, and accessible educational environment, one that may help pedagogical communication move beyond the 2D frame. This was made possible by a WebVR platform supported by Mozilla, on which each participant could co-create with the organisers a collaborative immersive sensory experience, alongside a simultaneous dialogue between the local and the global. Participants were asked two critical questions around 'decentralised education' and 'phygital exchanges': how can we work beyond the 2D frame, and how should tasks be distributed between the physical and the digital? The responses were diverse, but it was nevertheless possible to map a cohesive picture from this cloudy but colourful panorama.


Introduction
Round-the-Table, a series of experimental roundtables in web-based virtual reality (VR), invited guests worldwide to open an international and interdisciplinary discussion on the future of education. Held during the pandemic lockdown, the series tried to facilitate inclusive and accessible real-time digital communication beyond conventional 2D telecommunication. It aimed to use open-source, web-based strategies to democratise VR classrooms for any educators and students with just a computer or mobile device and an internet connection. The design pipeline for the spaces and avatars also stressed collaborative methods, with very simple configuration for participation and interactivity. The series involved 18 institutions and companies from both the public and private sectors, 37 guests and speakers working in the fields of education and technology, and over 700 participants and viewers from all over the world.
The VR roundtables took place across more than 20 VR spaces and focused on two topics, 'decentralised education' and 'phygital exchange'; each VR space was customised to the content of the discussion. The entire organising and operational team consisted of just two people without any additional support, each in a different location at the time because of the pandemic lockdown. The workflow included tailoring VR spaces and cardboards, 3D modelling, photogrammetry of physical spaces, personalising avatars, inviting guests, drafting agendas, arranging and composing scenes, rehearsing navigation, broadcasting events, troubleshooting, and documentation. Each event took about a week to set up the VR rooms and a month to organise the agenda and invitations; the monetary cost of each event was zero, and the guests were kind enough to contribute to this open-source experimental effort. A symposium of the same scale and efficiency would normally require at least three teams of people to organise, assist operation, and set up physical spaces, sometimes with extra audiovisual technicians; the environmental and monetary cost of bringing in guests physically would also be very considerable without the help of web-based telecommunication and livestreaming tools.
Using Mozilla Hubs for web-based VR (fig. 1), this paper opens a theoretical discussion on the embodiment of education in the phygital age and reveals the behind-the-scenes design of the events: its production pipeline from Reality Capture and Rhino Grasshopper through Blender to Hubs for more advanced configurations, tips for broadcasting real-time VR to YouTube and other streaming platforms, and different modes of navigation to enhance engagement.

Theoretical Discussion: Future of Education
Education is a fundamental human right and one of the Sustainable Development Goals (UN, 2022), which strives to promote comprehensive and equal access to education while maintaining high quality standards. One of the most difficult conditions in this respect is the lack of access to the educational infrastructure that allows teaching processes to be delivered. According to the same goal, the Covid crisis erased 20 years of educational gains, resulting in a new scenario in which higher education now serves as a sharing platform for leading and supporting new initiatives based on the virtualisation of what we understand as education, opening, exploring, and finally consolidating new areas.
While digital communication has advantages, such as reducing carbon footprints from transportation, building international communities, and innovating industrial infrastructure (SDGs 9, 11, and 13), it can also undermine communication quality, digital equality, mental wellbeing, and decency in work and learning (SDGs 3, 4, 8, and 10) (Mpungose, 2021). All of these issues are crucial to present educational debates, in which traditional pedagogical methods are increasingly challenged by pandemic lockdowns, developing online educational platforms, and open-source learning practices (Bailenson, 2021; Agnihotri and Bhattacharya, 2022; Coppola and Neelley, 2004). Institutional and crowdsourcing methods provide top-down and bottom-up strategies respectively; the two can meet halfway, harnessing the strengths and offsetting the drawbacks of each to change the character of education. For example, Wikipedia has become a frequent tool in research; rather than condemning its usage outright, it is arguably more necessary to understand what influences the quality of its use, and even more pertinent is the concept of dispersed knowledge and crowd intelligence being applied to education. This paper focuses on three core issues: data communication, spatial distribution, and volumetric navigation, alongside the critical scale of open-source tools at which errors may average out statistically, the open-endedness of a platform through which many invisible hands may help the system consistently self-correct, and the level of digital literacy needed for users to filter out misinformation and disinformation (Wiley, 2006).
To begin, institutions must collaborate with the open-source industry to achieve network effects; Moodle (Costello, 2013) is an example. Schools all around the world are establishing their various programmes and instructional methods on a platform that allows the administration and creation of private online portals for management in educational institutions, bridging the demands of instructors, students, and administrators. In this regard, data communication is one of the crucial variables to be studied and developed as a cornerstone of the future of education and of how to accelerate change in a crowd-dispersed style of development, comprehending information from two perspectives: abstract (metadata) and spatial (experiential images).
Second, textbooks may be reissued in order to self-correct. Meanwhile, tools like Reddit and StackOverflow have already become an efficient way for educators, researchers, and developers to network and discuss problems ranging from grammar to mathematical definitions and coding, and they are increasingly used in postgraduate classrooms for advanced thematic and topical studies. However, there is one more issue to consider: a fresh educational view on where we are studying and teaching within space. In this example, the virtual environment enabled an immersive approach, with visual information serving the same role as textbooks and other fresh sources of knowledge in a broad way.
Third, from the standpoint of digital literacy, we are confronted with the significance of employing digital tools in research and collaboration across institutes, as well as open-source initiatives in co-validating material (Reosa et al., 2021; Archibald et al., 2019). This may be as easy as encouraging academics to contribute and share information on open platforms using their institutional expertise, testing new technologies firsthand to provide empirical data, or bringing such debates into classrooms to enhance students' understanding. In parallel, how we face digitalisation as an ongoing phenomenon is rapidly giving way to a 'business as usual' mindset. It is no longer possible to conceive of education as digital-only; instead, it is a digitally driven learning process, based on a context constantly fed by dynamic information, in which interactivity is fundamental as part of the learning curve.
The concept of a Planetary Classroom is a transformation of the actual current space for learning: not attempting to substitute what we know as an effective teaching and learning space, but rather broadening what we know as the 'frame' into a whole new territory that began with channels such as Zoom, Blackboard, Meet, and MS Teams as primary actors in this virtual educative framework. However, we may also find a different terrain: the world of 3D online spaces, traditionally explored by gamers as a place for engagement, as an extended reality that, owing to the expansion of these platforms, can now exploit WebVR technology (fig. 2).
Questions such as 'How can educators with minimal support utilise open-source methods to evolve a virtual way of teaching and learning?' and 'What are the latest available means to enhance engagement and inclusiveness in such education?' are now part of our journey. They redefine the concepts of transmitter and receptor, professor and student, in which content is no longer a lecture delivered unidirectionally (a vertical way of learning) but a horizontal one: a space constantly rewritten by teachers and students, rather than a planar approach that merely translates books from physical to digital. These new educative experiences belong in particular to the concept of 'embodiment', in which systems that have evolved for perception, action, and emotion contribute to 'higher' cognitive processes (Glenberg, 2008), becoming the new territory to explore through virtualisation, not as a replacement of the physical. In that sense, we are now creating opportunities for a definitive evolution away from static, objective-based modes of learning towards a model in which skills are validated equally alongside theoretical contents (experience = skills; theory = objectives).

Web-based VR Tools: Mozilla Hubs and Spoke
Mozilla Hubs and Spoke were the VR tools chosen for this project (fig. 3). Mozilla Hubs was the frontend to 'meet, share and collaborate together in private 3D virtual spaces', whereas Mozilla Spoke was the backend web editor to build and create 3D social scenes, with 'No external software or 3D modelling experience required' (Mozilla, 2022). The platform, founded in 2018, is accessible from any web browser, increasing inclusivity in VR, as individuals do not have to pay a considerable amount for headsets to participate in such events. Communication is real-time, with a limit of 25 players interacting with one another in the same VR room for optimal network capacity; a room can be opened to a maximum of 100 players at the same time. Hubs has distance-aware audio, meaning sounds are quieter far away and louder close up, and this can be adjusted with different physical simulations, including audio roll-off factors, reference distances, cone angles, and gains, through a user-friendly interface. Compared to platforms such as Oculus, Hubs is not as high-resolution, nor as sophisticated in its ray tracing, but these trade-offs were deliberate design choices for its real-time multiplayer purpose, keeping data communication, loading, and processing lightweight.
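As an illustrative sketch (not Hubs' actual source code), distance-aware audio of this kind can be modelled with the Web Audio API's 'inverse' distance model, whose parameter names below mirror the PannerNode attributes:

```python
def inverse_distance_gain(distance, ref_distance=1.0,
                          rolloff_factor=1.0, max_distance=10_000.0):
    """Gain attenuation following the Web Audio API 'inverse' distance model.

    Sounds at or inside ref_distance play at full volume; beyond it, gain
    falls off hyperbolically, scaled by rolloff_factor. Distances are
    clamped to [ref_distance, max_distance].
    """
    d = min(max(distance, ref_distance), max_distance)
    return ref_distance / (ref_distance + rolloff_factor * (d - ref_distance))
```

Raising the roll-off factor makes voices fade faster with distance, which is how separate conversations can coexist in one room.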

Data Communication
Data communication in the production of VR spaces and avatars involves turning 2D data into 3D, and the feedback between organisers and contributors in processing it. For spatial data, this meant turning sets of sequential images into a 3D digital reconstruction of the physical space through photogrammetry, mapping 360 photos and renderings in spherical projections, mapping presentation materials onto panel geometries, and designing some of the spaces from scratch (fig. 4). For avatar data, it involved the 3D digital reconstruction of the participants from head or full-body shots. Both involve translation between proprietary and open-source data formats across the different processing and generative software interfaces and algorithms. Guests were invited to submit raw materials in open-source formats for us to tailor their VR spaces, or primary base information as .obj files and images for textures. The open-source structure enables a free flow of 3D data from Blender and Sketchfab to Hubs and Spoke. These files were the base for the creation of each scene, creating a synergic relationship between the graphical content, the 3D space, and the audience. Images sent by guests were used as the source for photogrammetry in Reality Capture to reconstruct their physical spaces; each space required at least 100-200 images for optimal reconstruction, while beyond 500 images the reconstructed model would be too heavy for Hubs to load, especially on mobile devices. This constrains the size of the room that can be scanned: if guests wished to scan a larger space, the room resolution had to be lowered to respect real-time constraints. Alternatively, their images or videos could be mapped in panel or spherical projection inside Spoke for a more volumetric experience, especially with 360 photos and videos. The performance check panel from the backend of Spoke (fig. 5) shows the strict limitations on polygon count, materials, textures, lighting, and file size. At the same time, certain materials are not supported for visualisation, such as glass or mirrors, and complex generative geometry requires heavy decimation; in one case, a guest shared a generative landscape model that was impossible to load. We used one of four strategies to resolve such situations: decimation, voxelisation, rendering the models into 360 images for spherical mapping, or splitting the models into several adjacent scenes (fig. 6).
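The budget arithmetic behind the decimation and scene-splitting strategies can be sketched as follows. The 50,000-triangle budget is an assumed figure for illustration, not Spoke's exact threshold:

```python
import math

def decimate_ratio(current_tris, budget_tris=50_000):
    """Ratio to feed a collapse-style Decimate modifier (e.g. in Blender)
    so that the mesh fits the assumed per-scene triangle budget."""
    if current_tris <= budget_tris:
        return 1.0  # already within budget, no decimation needed
    return budget_tris / current_tris

def scenes_needed(current_tris, budget_tris=50_000):
    """Adjacent scenes required if the model is split instead of decimated."""
    return math.ceil(current_tris / budget_tris)
```

For a 200,000-triangle photogrammetry scan, this gives a decimation ratio of 0.25, or alternatively four adjacent scenes if detail must be preserved.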

Spatial Distribution and Streaming
We worked with the idea of suppressing the typical boundaries imposed by a webcam and microphone, as well as the borders of our developed virtual environment. The notion of materiality is redrawn in all of the preceding cases, and numerous others, providing the user with a far greater set of tools for interactivity and customisation (including avatar design), allowing individuals to immerse themselves in a digital persona (a personalised character) that assures the presence (Schuemie et al., 2021) of sentiments, emotions, and ties to the place via their digital alias.
Interactivity (Christou, 2010), on the other hand, is described as the capacity to manage events in this virtual environment by using our body and its motions in response to stimuli, thereby generating a responsive place that reacts to our senses. A VR experience is defined as a multi-sensory spatial method that creates the impression of 'being there' by linking the user with the place as an experience in a continuous loop of incoming and outgoing data.
Rather than focusing just on the plain content, we are broadening the scope of what may be learned from experience. The idea is that the learner tries to incorporate new experiences into their current world picture (Barrouillet, 2015). If they are unable to assimilate the new knowledge, learners modify their worldview to accommodate the most recent facts. We must adapt to the new experience by reframing our understanding of how the world works (and WebVR serves as a catalyst for this notion); we learn from the experience, and learning may therefore be viewed as a form of active environmental sampling and testing.
In terms of how the teaching and learning experience was configured, we started with the idea of a space that triggers conversation through the questions asked in each meeting: Session 1: Decentralised Education: P2P Learning; Session 2: The Phygital in Education; Session 3: Phygital and Cyber-physical Discourses. The first step was to ask our guests for material, such as images and/or three-dimensional models, to be used as part of the discussion. The next step in configuring the experience was the distribution of the information inside each space, using three main strategies: (1) a concentrated strategy, in which the content was contained in a set of virtual panels inside the virtual space; (2) a virtual 'shared screen' presentation; and (3) a spatially based distribution. In all cases, the information was in constant dialogue with the whole diversity of virtual environments, creating all these possible configurations.
In the process we employed a 'spatial dérive' method (Theraulaz, 1999), founded on the idea of abandoning standard movement styles and adopting a strategy of persistent random wandering inside an area or specific location. Instead of linear motion, we propose a pseudo-drift viewpoint as a first step in gathering the information and sensations that help us understand where we are. In that way, the 'random' sequence of spaces, with changing sizes and content, allowed us to expand the learning process across the entire experience, starting and finishing in the same introduction area as the conclusion of each spatial loop, eventually calling all visitors to a conversation. The advantage of adopting VR for teaching is that the setting and content may be modified for each speaker, rather than always using the same classroom and surroundings. In Hubs, there are three options for volumetric navigation: walking, teleporting, or changing scene. The WASD keys on the keyboard and a mouse to tilt the camera were the primary controls for walking, with G enabling flying mode. It took some people around 10 minutes to become accustomed to these controls. In one of the rooms, we asked visitors whether they were comfortable with such navigation, and 6 out of 9 visitors with no gaming experience said they were.
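The WASD walking scheme can be sketched as a simple mapping from key state and camera heading to a ground-plane velocity. This is an illustrative reconstruction, not Hubs' actual character controller, which additionally handles collision, flying (G), and teleporting:

```python
import math

def walk_vector(keys, yaw_radians, speed=1.0):
    """Translate WASD key state and camera yaw into a 2D ground-plane velocity.

    keys: set of currently pressed keys, e.g. {'w', 'a'}.
    yaw_radians: camera heading, with 0 meaning 'facing +y'.
    Opposite keys cancel out (pressing W and S together yields no motion).
    """
    forward = ('w' in keys) - ('s' in keys)   # +1 forward, -1 back
    strafe = ('d' in keys) - ('a' in keys)    # +1 right, -1 left
    # rotate the local (strafe, forward) input by the camera yaw
    vx = speed * (strafe * math.cos(yaw_radians) + forward * math.sin(yaw_radians))
    vy = speed * (-strafe * math.sin(yaw_radians) + forward * math.cos(yaw_radians))
    return vx, vy
```

Making movement relative to the camera, rather than to world axes, is what allows mouse-look and WASD to combine intuitively; flying mode extends the same idea to a third axis, which is precisely where visitors lost their ground reference.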

Volumetric Navigation
On two occasions when flying was required, some of the visitors struggled for more than 15 minutes, lost in the model, since flying mode has no ground sensitivity and it was difficult to establish orientation within confined spaces. This occurred even with an additional rehearsal session. Dropped items can also bounce off collidable objects in Hubs' gravity simulation, although errors can arise when objects disappear through the ground plane (fig. 7).

Conclusion and Next Steps
The development of this series of virtual roundtables gave us the opportunity to demonstrate that a 3D VR approach to education enriches the experience in terms of possible interaction between presenters, resulting in a more confident approach that allows better communication in a perceptual way (with avatars as mediators). The value of the 21 talks developed under three main conversation topics far outweighed the resources used, with the main costs being training on our side (we spent one day in advance training our guests) and daily support to our guests in the construction of the virtual scenes hosting each talk (via tutorials, capturing spaces with pictures, or modelling spaces in advance with them). The final arrangement was part of our job, as were all of the organisational features that allowed a team of two (like us) to operate effectively with a one-month gap between events (the interval between the first and second roundtables was just at the limit for delivering a good product).
Mobile-device and WebVR evolution is still ongoing, and tight ecosystems such as Safari currently impede the advancement of these technologies owing to compatibility difficulties (a closed API for developers). This causes issues with navigation in a full VR experience, as well as an inability to recognise textures and shadows, resulting in a restricted experience for users.
Because of the limits of current WebVR technology, streaming should be addressed as part of the distribution plan (it is not possible to host more than 25 people without losing performance and responsiveness). In this regard, the total number of attendees via streaming surpassed this figure 3.6 times in the worst case (the second roundtable) and 13.6 times in the best (the first roundtable), with the last session reaching 6.8 times the maximum concurrent users of the WebVR platform.
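The arithmetic behind these figures can be made explicit. The audience sizes below are back-calculated from the reported multipliers and the 25-player room cap, for illustration only, and are not raw attendance data:

```python
ROOM_CAP = 25  # Hubs' optimal concurrent-player limit per room

def streaming_reach(stream_viewers, room_cap=ROOM_CAP):
    """How many times the streamed audience exceeds the in-room capacity."""
    return stream_viewers / room_cap

# audiences implied by the multipliers reported in the text
implied_viewers = {m: round(m * ROOM_CAP) for m in (3.6, 13.6, 6.8)}
```

Even in the worst case, streaming more than tripled the reach of a single VR room, which is why it was treated as part of the distribution plan rather than an afterthought.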
Viable future stages include establishing permanent support for this project, so that it can be used for similar events as part of prospective agendas for hybrid forms of communication in the academic environment, rather than simply a standard Zoom/Meet/Teams gathering. Here we visualised the fruit of collaborative work between visitors and organisers, in the sense of two types of collaborative environments, which ultimately expanded the richness of the results into a large set of possible educative scenarios that will serve as the foundation for more elaborate designs (a set of archetypal WebVR educative scenarios). In the near future of educational strategies, we believe it is perfectly possible to push the boundaries of the current massification of 2D academic frames into an immersive educational experience.

Fig. 1. The first configured web-based VR room out of 20 on the Mozilla Hubs platform. Image credit: authors.

Fig. 2. Multiple players interacting and taking selfies in the VR scenes during the second roundtable event. The avatar was personalised using neural networks by the authors, and the room was tailored by Mozilla.

Fig. 3. User-friendly interface of Mozilla Spoke, and its distance-aware audio configuration. Image from the authors.

Round-the-Table invited a member of the Hubs team for a presentation and interview, who shared with us that the team was around 10 people under Mozilla's support; it is very streamlined, and its open-source, crowdsourcing strategy not only plays to its advantage but also contributes back to the Creative Commons by facilitating such open interactive platforms. Hubs enables linking to GitHub to directly import open-source scripts; developers can script onto the platform for further personalisation of VR functions, though this requires proficient coding skills. It also has an API (Application Programming Interface) to Sketchfab, an open-source 3D asset database, where users can simply drag and drop 3D objects on the front-end. Hubs did not engineer a developer page like many tech enterprises, but based its communication on Discord, where anyone can communicate with the team and share information in real time, building an open community around such initiatives. Mozilla (moz://a), founded in 1998, is a free software community with open standards, supported by the non-profit Mozilla Foundation; its main product is the web browser Firefox (Mozilla, 2022).

Fig. 4. Photogrammetry spaces with all of the avatars for events 2 and 3. Image from the authors.

Fig. 5. Performance check of one of the scenes before export, showing the figures for the 3D assets and their recommended optima. Image from the authors.

Fig. 6. Two spaces with objects customised via the front-end (Hubs) and the backend (Spoke) respectively, showing the degree of control over object manipulation in the two modes. Image from the authors, based on the Houdini academia and CAAD Futures spaces.

Fig. 7. Two scenes, one of which required flying and one of which did not. Image from the authors, based on the SIGraDi and Strelka VR scenes.