Documentation:Virtual Co-Locator

From UBC Wiki
Emerging Media Lab
About EML
A collaborative space for UBC faculty, students, and staff to explore emerging technologies and develop innovative tools and solutions. This wiki hosts public-facing documentation for our projects and procedures.


Virtual Co-Locator aims to co-locate people in virtual waist-up scenarios. Using off-the-shelf software such as TensorFlow, we will provide a series of settings that allow participants to join a common visual environment, giving the gathering a richer visual and aural backdrop and more meaning. Settings could range from a carnival or a museum to a spaceship theme.

Background

In 2020, the COVID-19 pandemic shook the world into silence. Many educational and professional functions were forced to cease or migrate to virtual settings. Zoom was among the virtual meeting platforms that profited most during this worldwide phenomenon. While many were forced to adjust to the rhythm of remote work, feelings of isolation and disconnection were widespread.


“Zoom fatigue” is a well-known phenomenon among remote workers in 2021. Two important factors can reduce its impact: “setting the mood”, so that the grid of faces feels more like an in-person meeting, and the user simply not looking at themselves. Existing solutions to this problem include virtual reality platforms such as StartBehind, which require expensive VR equipment and experience, vastly reducing the accessibility of the meeting. Other, simpler solutions include Zoom’s “immersive view” feature and Zoom’s green screen. Their limits are straightforward: the immersive view feature maxes out at 25 users, and recordings of the session merely display the group in the traditional Zoom grid. As for Zoom’s green screen feature, it simply appears fake, which impacts believability.


Thus, Dr. Patrick Pennefather proposed a more engaging and creative form of virtual meeting. The Virtual Co-Locator also extends an idea from the Digital Dream Play production at UBC’s Frederic Wood Theatre in 2021.

Objectives

The Virtual Co-Locator aims to:

  • Allow multiple clients to feature users’ faces transposed into a scene, with responsive animations triggered by factors such as how long a given user has been speaking.
  • Create a mechanism for facial recognition and transposition.
  • Set up a client/server architecture.
  • Make co-location possible and easy in a virtual setting.

Considerations

The team at Virtual Co-Locator understands that accessibility drives the majority of these considerations. Not every user will have the same screen size, depending on the device they connect with; therefore, the distance between the user’s eyes and the camera became one of the most essential design targets. The app’s user experience attempts to target all age ranges, as long as an electronic device is available. Nevertheless, the team is still developing the app under the assumption that at least one user in a group has prior knowledge of, and access to, a virtual meeting platform such as Zoom. Although this may not remain a requirement as the app develops, the team aims to remove the dependency on other virtual meeting platforms.

Main Page

Format

The delivered product of Virtual Co-Locator is a website at https://virtualcolocator.ca/. The hope is that the deliverable will be easy and intuitive to use, as the project aims to cater to a large range of age groups. From elementary students to seniors with access to electronic devices and the internet, Virtual Co-Locator strives to include all potential users in that range.






User Manual

How to Connect as a Host

The steps are outlined below:


1. Open Virtual Co-Locator.

2. Open Zoom or any preferred virtual meeting platform.

3. Turn off your camera in the meeting currently in session.

4. Click “Host”.

5. Pick a desired theme.

6. Create a room name; any name works as long as it is unique.

7. Relay the room name to the participants.

8. Begin screen sharing in the meeting currently in session.

How to Connect as a Participant

The steps are outlined below:


1. Open Virtual Co-Locator.

2. Join the current Zoom meeting, or any preferred virtual meeting platform.

3. Turn off your camera in the meeting currently in session.

4. Click “Participant”.

5. Request the room name from the host of the meeting and enter it.

6. Return to the meeting currently in session to see the host’s screen share.

How to Change Head Configuration

Change Configuration (Art by Ada Tam)

The steps are outlined below:

Note: Only the host can change the configuration. The current iteration only allows swapping two existing videos at a time.


1. Click the small white “Change Configuration” button beside “Connect to Meeting” on the host’s page.




Features

Spaceship (Art by Ada Tam)

Placeholder Faces/Heads

This feature aims to make all participants of the meeting comfortable speaking, even amid a bad mental-health and/or hair day. The goal of the project is to make virtual meetings more engaging, so even if a participant is not in the best place, they do not have to appear or reveal themselves that way. The team also acknowledges that the maximum number of participants in a room will not always be reached, so placeholders are implemented to fill the empty holes.
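As a rough sketch of the behaviour described above, the logic for filling a scene's head holes might look like the following. This is an illustrative assumption, not the app's actual code: the function name, the slot objects, and the `placeholderArt` parameter are all hypothetical.

```javascript
// Hypothetical sketch: occupy head holes with participant videos in order,
// then fill any remaining holes with placeholder art so no hole is empty.
function assignHoles(holeCount, participantStreams, placeholderArt) {
  const holes = [];
  for (let i = 0; i < holeCount; i++) {
    holes.push(
      i < participantStreams.length
        ? { type: "video", stream: participantStreams[i] } // a real participant
        : { type: "placeholder", art: placeholderArt }      // filler face
    );
  }
  return holes;
}
```

For example, a spaceship scene with six holes but only four participants would end up with four live videos and two placeholder heads.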

Change Configuration

This feature exists so that the host can place each participant, including themselves, in a desired head position when the default positions are not satisfactory. Due to time constraints and limited development support, this feature is still in development, and it has known limitations. First, the configuration can only be changed among the videos of participants already in the meeting: for example, if a meeting has four participants but the image allows up to six heads, the configuration can only be changed between those four existing videos. Second, the configuration can only be changed between two holes at a time, due to the way the flow is designed.
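The two-holes-at-a-time constraint described above amounts to a simple swap over the current assignment. The sketch below is a hypothetical model of that operation (the function name and array representation are assumptions, not the project's actual API):

```javascript
// Hypothetical sketch: swap the occupants of two head holes.
// `configuration` is an array of hole occupants; `a` and `b` are the
// indices of the two holes the host selected.
function swapHoles(configuration, a, b) {
  const inRange = (i) => i >= 0 && i < configuration.length;
  if (!inRange(a) || !inRange(b)) {
    // Mirrors the limitation above: only holes with existing videos swap.
    throw new Error("Both holes must hold an existing participant video");
  }
  const next = configuration.slice(); // keep the original configuration intact
  [next[a], next[b]] = [next[b], next[a]];
  return next;
}
```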


Challenges

Project Lead Change

At the very start of this project, the team had a different project lead. Unfortunately, the previous lead could not commit enough hours and announced their departure from the project. The team spent some time without a lead until Olivia and Ada joined. The team then had to adjust quickly to a new project lead and adapt to working with a designer, and the reorganization cost additional time. Thankfully, the team demonstrated strong adaptability and overcame the time lost.

Small Team

As outlined in the list in “Team Members,” the Virtual Co-Locator team consists of only four members. This became a minor difficulty because most tasks depended on back-end development. With only one software developer on deck, who also had other projects to attend to in parallel, many tasks were difficult to accomplish on time. To lighten the developer’s workload, the project lead took on front-end development. Additional support from Catherine, the supervisor, was also very helpful to the process.

Facial Recognition

One of the core technologies the team needed to understand was facial recognition. Our developer, Dante, researched and went through several methods and iterations of how it could be accomplished. After trial and error, TensorFlow, an off-the-shelf library, proved to be the most efficient and fitting technology for the project. The earliest model of Virtual Co-Locator employed a fast TensorFlow model called “blazeface.” Blazeface achieved a high frame rate and did not bottleneck the video. However, it did not provide a precise outline of the face. Instead, a face shape had to be approximated from the dimensions the model returned, which required complicated mathematical calculations, and the result was not stable.

Dante therefore moved on to the next model, “facemesh,” which ended up being the technology Virtual Co-Locator settled on. Unlike blazeface, facemesh provides predicted mesh data for the face, so no face-outline calculations are needed. However, problems such as bottlenecking and long syncing periods between the video and the silhouette cutout occurred with the facemesh model. These problems were not as major as blazeface’s, and the solution was simply trial and error, which was time-consuming. In the current iteration of Virtual Co-Locator, the bottlenecking is still a problem, but it is being fixed.
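To illustrate why blazeface required extra work: it reports a bounding box rather than a face outline, so an outline has to be approximated from the box's dimensions. The sketch below shows one naive approach, sampling points on an ellipse inscribed in the box. This is not the project's actual calculation, just a minimal example of the kind of approximation involved; facemesh removes this step entirely by returning mesh landmark points directly.

```javascript
// Illustrative only: approximate a face outline from a blazeface-style
// bounding box ([x, y] top-left and bottom-right corners) by sampling
// points on the inscribed ellipse.
function approximateOutline(topLeft, bottomRight, numPoints = 16) {
  const cx = (topLeft[0] + bottomRight[0]) / 2; // ellipse centre
  const cy = (topLeft[1] + bottomRight[1]) / 2;
  const rx = (bottomRight[0] - topLeft[0]) / 2; // horizontal radius
  const ry = (bottomRight[1] - topLeft[1]) / 2; // vertical radius
  const outline = [];
  for (let i = 0; i < numPoints; i++) {
    const t = (2 * Math.PI * i) / numPoints;
    outline.push([cx + rx * Math.cos(t), cy + ry * Math.sin(t)]);
  }
  return outline;
}
```

Even this simple version shows the instability the team ran into: the outline is only as good as the bounding box, so any jitter in the box jitters the whole silhouette.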

Networking

Another technical challenge the team encountered was the networking of video streams. The connections from each participant to the host were set up with Socket.IO, a Node.js library that provides a simple interface for creating peer-to-peer network connections. This was easy to implement, but the networking became more complicated once participants were grouped into the same room as the host. Nevertheless, this technology is still the most flexible the team has discovered. The current prototype of the app supports multiple rooms, but the team has encountered networking issues that leave some participants unable to join. This is likely due to differing network configurations and could be remedied by using proxy or media relay servers to make the networking more robust.
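As a simplified model of the room grouping described above: in a real Socket.IO server, `socket.join(roomName)` and room-scoped broadcasts do this bookkeeping for live connections. The sketch below is not the production server; it just models the same idea with plain ids standing in for sockets, so the grouping logic can be seen in isolation.

```javascript
// Simplified, hypothetical model of room bookkeeping: which members
// should receive a stream that one member sends into a room.
class RoomRegistry {
  constructor() {
    this.rooms = new Map(); // room name -> Set of member ids
  }
  join(room, id) {
    if (!this.rooms.has(room)) this.rooms.set(room, new Set());
    this.rooms.get(room).add(id);
  }
  // Everyone in the room except the sender, mirroring a room broadcast.
  peers(room, senderId) {
    const members = this.rooms.get(room) || new Set();
    return [...members].filter((id) => id !== senderId);
  }
}
```

Because room names are the only key, this also shows why the app requires each room name to be unique: two meetings choosing the same name would be merged into one group.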

Future Plans

Some future plans for Virtual Co-Locator include:

  • Continue implementing the previously mentioned features that have yet to be developed
  • Continue to develop and refine a robust system for users
    • Develop an easy, accessible app for users across a large range of ages and interests
  • Possibly expand the collection of artworks for users to choose from
    • Possibly offer a blank canvas instead of head-in-hole style artwork


The plans above outline what the team at Virtual Co-Locator aims to tackle as the project continues into its next phase. In that phase, the team plans to conduct user testing and adjust to what the current technology can perform.

Team Members

Principal Investigator

Dr. Patrick Parra Pennefather

Assistant Professor, UBC Theatre and Film

Master of Digital Media Program

Project Lead

Olivia Chen

Student of BFA in Theatre Design and Production

University of British Columbia


Elyse Wall

Student of BFA in Theatre Design and Production

University of British Columbia

Software Developer

Dante Cerron

Software Developer for Academic Innovation at UBC

Designer/Artist

Ada Tam

Master of Digital Media Program

Mentor

Daniel Lindenberger

Emerging Media Mentor at the Emerging Media Lab at UBC

Supervisor

Catherine Winters

Lab supervisor at the Emerging Media Lab at UBC