Documentation:23-3001 Judicial Interrogatory Simulator

From UBC Wiki
Emerging Media Lab
About EML
A collaborative space for UBC faculty, students and staff for exploration of emerging technologies and development of innovative tools and solutions. This wiki is for public-facing documentation for our projects and procedures.

Introduction

Many first-year law students feel intimidated by Moot Court, a part of their curriculum in which they engage in a practice appeal case. Recognizing the need to alleviate the stress and anxiety surrounding this crucial aspect of the curriculum, the Emerging Media Lab at UBC has collaborated with the Peter A. Allard School of Law to develop an innovative solution for law students: the Judicial Interrogation System (JIS), a tool designed to enhance student preparation and familiarity with the judicial process.

Through the collaborative efforts of the Emerging Media Lab, the Judicial Interrogation System has been meticulously crafted as an accessible and user-friendly web application. This open-source platform serves as a virtual environment where students can engage in realistic simulated trials, empowering them to practice and refine their skills in a supportive and controlled setting.

This application is a continuation of the previously developed project Moot Court.

Background

The Judicial Interrogation Simulator is built to simulate a Moot Court session. The moot, an integral component of the curriculum, plays a pivotal role in building the confidence and oratory skills of first-year law students. It serves as a platform where students develop their ability to articulate arguments in a courtroom setting. In a moot, two teams of law students, split into appellant and respondent, engage in a simulated appeal case, presenting their arguments before a panel of judges.

During the moot, students face a challenging environment where they must adeptly respond to probing questions from the judges while simultaneously honing their argumentative techniques and perfecting their delivery. This experiential exercise not only simulates real-world courtroom dynamics but also serves as a valuable tool for rapidly prototyping legal arguments and fostering a virtual classroom experience.

By participating in JIS, students gain practical insights into the intricacies of legal practice and develop crucial skills in persuasive communication, critical thinking, and legal analysis.

Primary Features

  1. Enhanced accessibility: Identified and resolved any remaining bugs or technical issues in the previous moot court application to ensure it is accessible to all law students.
  2. Implemented AI language tool (Intelli-Judge): Integrated an advanced AI language tool within the JIS application to provide students with comprehensive support in preparing for their court experiences. By leveraging AI capabilities, students can enhance their learning, refine their skills, and improve their overall performance.
  3. Enhanced visuals and animations: Upgraded the visuals and animations within the JIS application to create a truly immersive and realistic courtroom experience. These improvements can be seen in the changes made to the courtroom classroom and in the greater variety of movements for the judges. By providing students with a visually captivating environment, their engagement and immersion in the experience will be heightened.
  4. Developed a VR simulated environment: Began looking into creating an unparalleled simulated environment using Virtual Reality (VR) technologies to expose students to the authentic atmosphere of judicial interrogation. This immersive experience should replicate the physical setting of a courtroom.

Features Carried over from Moot Court

  1. Customizable Moot Court Practice
    • Using the setup page, users can customize their practice moot based on the specifications of their assigned moot as well as their personal needs for their practice session.
  2. Simulated Moot Court Scene
    • In order to create a more realistic and therefore more effective practice tool for the user, a simulated moot court scene has been created to emulate the environment that first-year law students will be conducting their mandatory moot in.
  3. Timer
    • During the practice moot, a timer is available on the bottom right of the screen with colour-coded time indications that transition from the colour green, to yellow, then red as the timer runs out, similar to the timer warnings that are available during in-person moots. The timer can also be paused to allow more control and flexibility for the user as they practice their moot.
  4. Mooting Resources
    • First-year moot resources provided by course instructors are included on the website.
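The colour-coded timer described above can be sketched as a small pure function. This is a minimal illustration only: the actual thresholds used by the application are not documented here, so the 50% and 20% cut-offs below are assumptions.

```typescript
// Hypothetical sketch of the timer's colour transitions.
// The 50% and 20% thresholds are illustrative assumptions,
// not the application's actual values.
type TimerColour = "green" | "yellow" | "red";

function timerColour(secondsLeft: number, totalSeconds: number): TimerColour {
  const fractionLeft = secondsLeft / totalSeconds;
  if (fractionLeft > 0.5) return "green"; // plenty of time remaining
  if (fractionLeft > 0.2) return "yellow"; // warning: time running low
  return "red"; // almost out of time
}
```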

Methods

The JIS was developed as a web application using React Three Fiber, HTML, CSS, and TypeScript. It is currently hosted using AWS CloudFront.

Development

Audio System

  • Replaced Moot Court's speech synthesis with Watson TTS and STT.

3D Animation and Scene

3D animation

  • From Moot Court: The judge characters and pre-existing animations were kept in the new JIS build. Moot Court's judge avatar animation used a combination of three resources: DeepMotion, Mixamo, and Plask.ai. DeepMotion was used to source an adequate judge model for the scene. The model obtained from DeepMotion was then imported into Mixamo to rig it for animation. Once the model was rigged in Mixamo, it was imported into Plask.ai to record custom animations. According to the previous work from Moot Court, motions were obtained either from online video sources or by filming a person performing the actions.
  • For JIS, the new animations were sourced with Rokoko AI Motion Capture. Similar to DeepMotion and Plask.ai, this uses short video clips to generate motion animations for the judge's movements. Motions were obtained by filming a person performing the needed motion for the judge. The motions included are nodding, a disapproving nod, pondering, a resting position, drinking, writing, reading, and idling. The generated animations were exported as FBX in Mixamo format and brought into Blender along with the judge model. For initial setup, the judge model had to be reset into a T-pose using the Cat plug-in tool for Blender. Once the model is in a T-pose, the Rokoko Retarget plug-in for Blender is used to retarget the MOCAP animations onto the judge model. For post-editing, each animation was edited so it retains only motion from the hips up. Any motions that jittered or lacked movement were fixed using Blender's graph editor.
  • To store multiple animations in one judge model, push the completed animations as NLA strips. The completed model with its animations is exported as a GLB file. For the main judge, which comes with props, to export as a single animation clip make sure to toggle on "Group by NLA Track" and give the props' animations and the model's animations the same name.

3D Scene

  • The application has both the old Moot Court 3D courtroom scene, which was built in SketchUp, and an updated courtroom scene. For the old Moot Court scene, we kept the original style to maintain a "classroom" version. New updates were made to the scene with new textures, new lighting, and 3D asset replacements to improve the room. The textures were replaced with PBR materials using Blender. Textures were sourced from these websites: 3D Textures, AmbientCG.
  • Updated 3D assets were made using Maya and Blender, and lighting was added with React Three Fiber.
  • The new courtroom was added to emulate a realistic courtroom. The model was made using Maya and Blender and uses the same texturing and updated lighting as the Classroom version.
  • Future developers may change the 3D assets as long as the assets are exported as .glb files. To implement new or updated models, import and upload the model file into the repository. If you are updating an existing model, you may need to replace the model URL with the file path to the new model. Any new model asset requires its file path to be included in order to render into the scene. Currently, all the scene's models are stored in the public folder under "models."

OpenAI: Intelli-Judge

  • Hearing: Various settings determine how often requests are sent to the server, which can be thought of as hearing. An important feature for interactivity is detecting when the user has stopped talking. The user's microphone volume is recorded, normalized, and averaged. If the normalized average volume is under a set percentage, the user is considered quiet and a request for a response is made. Watson STT is used to convert the received audio into text, which is used to prompt ChatGPT. Whisper by OpenAI was also tried, but it proved unstable with quiet sections of audio.
  • Responding: There are hard-set rules on whether a response will be given, such as the required delay between responses. ChatGPT function calling is also used to determine a percent probability that a judge would respond based on the content of the conversation. This can be used as a soft way to change the frequency of responses.
  • Speaking: Watson TTS is used to create an expressive voice. Some "Expressive" voice models support the use of SSML (Speech Synthesis Markup Language). When an SSML model is in use, ChatGPT's text response is converted to SSML format using OpenAI's text-davinci-edit.
  • Multithreading: Interactivity is important for the user experience. To reduce latency, API requests are multithreaded so that new content is generated while earlier content is sent back and played. ChatGPT Server-Sent Events are received live from OpenAI. As soon as a sentence is detected, Watson TTS begins to generate audio, which is a slow process.
  • Generated files are deleted after use or after a set time if not used to avoid storing user data.
  • Playback on the client is done by pushing any received audio to a queue which allows audio to collect and play in order for seamless playback. Playback may have gaps if the audio generation is slow.
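The silence-detection step in the "Hearing" bullet can be illustrated with a small pure function. This is a sketch under assumptions: it takes volume samples in the 0-255 range of the Web Audio AnalyserNode, and the 5% threshold is illustrative, not the application's actual setting.

```typescript
// Sketch of silence detection: normalize recorded volume samples and
// average them; below a set percentage, the user is considered quiet.
// Sample range (0-255) and the 5% threshold are assumptions.
function isSilent(volumeSamples: number[], threshold = 0.05): boolean {
  if (volumeSamples.length === 0) return true;
  // Normalize each sample to [0, 1] and average over the window.
  const avg =
    volumeSamples.reduce((sum, v) => sum + v / 255, 0) / volumeSamples.length;
  return avg < threshold;
}
```

When `isSilent` returns true for the current window, the client would stop recording and send the captured audio to the server for transcription.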
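The "Responding" gate, combining the hard delay rule with the soft model-suggested probability, might look like the following. The function name and the 10-second minimum delay are hypothetical; only the two-rule structure comes from the description above.

```typescript
// Hedged sketch of the response gate: a hard minimum delay between
// responses combined with the percent probability returned by ChatGPT
// function calling. The 10-second default delay is an assumption.
function shouldRespond(
  msSinceLastResponse: number,
  responseProbability: number, // 0-100, suggested by the model
  roll: number, // random draw in [0, 1), e.g. Math.random()
  minDelayMs = 10_000,
): boolean {
  // Hard rule: never respond before the required delay has elapsed.
  if (msSinceLastResponse < minDelayMs) return false;
  // Soft rule: respond with the model-suggested probability.
  return roll < responseProbability / 100;
}
```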
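The "as soon as a sentence is detected" step in the multithreading bullet amounts to splitting a live token stream on sentence boundaries so TTS can start early. A simplified sketch (the splitting rule here is an illustration; the real implementation may differ):

```typescript
// Sketch of detecting complete sentences in streamed ChatGPT output so
// each finished sentence can be handed to TTS immediately.
function extractSentences(
  buffer: string, // text left over from previous chunks
  chunk: string, // newly received streamed text
): { sentences: string[]; rest: string } {
  const text = buffer + chunk;
  // Split after ., ! or ? followed by whitespace; keep the remainder
  // (a possibly incomplete sentence) for the next chunk.
  const parts = text.split(/(?<=[.!?])\s+/);
  const rest = parts.pop() ?? "";
  return { sentences: parts, rest };
}
```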

Speech Assessment

  • Watson STT timestamps each detected word. We need to extract word counts from this 1D data so that they can be plotted against time. A sliding-window analysis counts the words that fall within a sliding window of time. This method yields words per minute, which can be plotted using the d3 library. A previous implementation of this can be found in the speech-analysis branch on GitHub. The previous implementation may still be useful if the STT method has changed.
  • The presenter's transcript is also provided and is interactively linked with the plot. This gives presenters easy-to-read feedback on how well they met their time and clarity goals.

Deployment and Site Hosting

  • Developers deploy this application like any other React application. Run npm start for local development. Run "npm run build" to create a build folder; the contents of the build folder should be uploaded to the server hosting the application. The current application is hosted on AWS CloudFront. To access the project's S3 bucket, please inquire with the Emerging Media Lab for permissions.
  • The server must also be deployed for Intelli-Judge to work; however, building the project is not required. The current port in use is 8889. Ensure that this port is accessible via the server's public IP address. The API is made available using the AWS "REST" API Gateway. CloudFront is used to redirect requests from https://ubc.intellijudge.ca to the server. Note that a previous method did not use the API Gateway and required an SSL certificate. Using a domain name is no longer required, and the invoke URL produced by the Gateway can be used.

VR JIS

Discussion

Findings


3D Animations

  • Using AI motion capture required filming another person performing the actions we wanted. While some judge motion videos could work, AI motion capture requires a full-body view with a clear background. Relying on judge videos produced animations with lots of jitter or incorrect movements. If future developers want to add custom motions, film a person performing the motion while capturing the full body against a clear background. For clearer animations, it is best to exaggerate the motion during filming. Any animation jitters or motion edits can be handled in Blender's graph editor.

Challenges


  • For initial setup, the program is prone to break due to the number of dependencies. Please make sure to install all the dependencies listed on GitHub. Using "npm install {package} --force" is almost always required due to the different versions of packages used.
  • Intelli-Judge requires the server to be running. Clone it from GitHub and install any dependencies needed. Ensure that it is accessible via the internet (and if deployed it must be accessible via an HTTPS URL, not an HTTP URL). Also, ensure that the server does not go to "sleep". If you have both the client and the server running locally then the server will be accessible using http://localhost:{port#} (currently port# = 8889) but make sure to also configure the client's target for the server inside of Converse.tsx.

Code


Our code is built with TypeScript and React.

The GitHub repository can be found here: LINK

Dependencies

The project relies on the following dependencies:

  • typescript
  • ESLint
  • react three fiber
  • React XR
  • leva
  • react three fiber typescript
  • drei
  • d3


Art Assets


Using the pre-existing room asset, the scene was further improved by adding PBR textures and lighting to allow users to be more immersed in the JIS. Here are photos of before and after:

old moot courtroom
Classroom/Updated Moot Courtroom
Classroom/Updated Moot Courtroom: In Session
Updated Courtroom
Updated Courtroom: In Session

We've also included an updated Courtroom version to emulate a realistic courtroom. This was made using Maya and Blender and uses the same PBR textures and lighting as the other courtroom versions.

Libraries


3D MODELING/ANIMATIONS

  • Blender Plug-ins: Rokoko Retarget Plug-in, Cat Plug-in

First Time Setup Guide

  1. Install Node.js
    • Download and install Node.js from the official website: https://nodejs.org/
    • Follow the installation instructions for your operating system.
  2. Clone the GitHub Project:
    • Clone the project repository with git clone.
  3. Navigate to Project Folder:
    • Move into the project directory after cloning.
  4. Download Project Dependencies:
    • Check the project's GitHub repository for instructions on installing all the dependencies.
  5. Run the Project:
    • You can either run the project in the terminal where the project is saved or within Visual Studio Code's integrated terminal.
    • To start the project session, use the following command: npm start

Working with Moot Court Server (FOR Developers)

Poster


Development Team

Principal Investigators

Jon Festinger, Q.C. (He, Him, His)

Adjunct Professor

Peter A. Allard School of Law

The University of British Columbia

Nikos Harris, Q.C.

Associate Professor of Teaching

Peter A. Allard School of Law

The University of British Columbia

Barbara Wang BA, JD

(Pronouns: She/Her/Hers)

Manager, Student Experience

Peter A. Allard School of Law

The University of British Columbia

Current Team

Team Lead and UX/UI Designer

Jiho Kim

Work Learn at the Emerging Media Lab at UBC

Undergraduate in Bachelor of Science in Cognitive Systems

University of British Columbia

Software Developers

William Watkins

Work Learn at the Emerging Media Lab at UBC

Undergraduate in Bachelor of Science in Biology

University of British Columbia


Leah Marie Fernandez

Work Learn at the Emerging Media Lab at UBC

Undergraduate in Bachelor of Science in Combined Major

University of British Columbia

FAQ


Bibliography


License

MIT License

Copyright (c) 2023 University of British Columbia

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Last edit: May 18, 2023 by Daniel Lindenberger

Some rights reserved
Permission is granted to copy, distribute and/or modify this document according to the terms in Creative Commons License, Attribution-ShareAlike 4.0. The full text of this license may be found here: CC by-sa 4.0
Attribution-Share-a-like