Documentation:23-3001 Judicial Interrogatory Simulator



Emerging Media Lab Project Documentation

Introduction

Many first-year law students feel intimidated by Moot Court, the part of their curriculum in which they engage in a practice appeal case. Recognizing the need to alleviate stress and anxiety surrounding this crucial aspect of the curriculum, the Emerging Media Lab at UBC collaborated with the Peter A. Allard School of Law to develop an innovative solution for law students: the Judicial Interrogatory Simulator (JIS), a tool designed to enhance student preparation and familiarity with the judicial process.

Through the collaborative efforts of the Emerging Media Lab, the Judicial Interrogatory Simulator has been crafted as an accessible, user-friendly web application. This open-source platform serves as a virtual environment where students can engage in realistic simulated trials, empowering them to practice and refine their skills in a supportive and controlled setting.

This application is a continuation of the previously developed project Moot Court.

Background

Moot Court is an important part of first-year law school in which students present oral arguments in a simulated court. A moot session involves two teams presenting their arguments to a panel of judges. JIS simulates the judge and courtroom so students can practice their speeches and arguments. The EML project produced three versions: Classic, IntelliJudge, and the Unreal Engine (UE) VR version.

The Classic version focuses on speech articulation and time management. IntelliJudge serves as an AI language tool, providing live feedback and judge-like questions so students can practice delivering their arguments and responding to questions from judges. The Unreal Engine VR version provides an immersive simulated environment that further exposes students to the courtroom experience.

The final round of project work in 2023-2024 met the following objectives: 1) resolve remaining bugs or technical issues found in the previous Moot Court, 2) enhance the application's visuals, animations, and user experience, 3) upgrade the user interface (UI) and add more judge motions, and 4) build an AI judge that gives students more support in their moot preparation while offering an immersive experience.

The Judicial Interrogatory Simulator is built to simulate the Moot Court session, an integral component of the law school curriculum. JIS has the potential to play a pivotal role in building the confidence and oratory skills of first-year law students. The experiential exercise not only simulates real-world courtroom dynamics but also offers a high-quality, customized virtual classroom experience.

Primary Features

  1. Enhanced accessibility: Identified and resolved any remaining bugs or technical issues in the previous moot court application to ensure it is accessible to all law students.
  2. Implemented AI language tool (Intelli-Judge): Integrated an advanced AI language tool within the JIS application to provide students with comprehensive support in preparing for their court experiences. By leveraging AI capabilities, students can enhance their learning, refine their skills, and improve their overall performance.
  3. Enhanced visuals and animations: Upgraded the visuals and animations within the JIS application to create a truly immersive and realistic courtroom experience. These improvements can be seen in the changes made to the courtroom classroom and the wider variety of movements for the judges. By providing students with a visually captivating environment, their engagement and immersion in the experience will be heightened.
  4. Developed a VR simulated environment: Began looking into creating an unparalleled simulated environment using Virtual Reality (VR) technologies to expose students to the authentic atmosphere of judicial interrogation. This immersive experience should replicate the physical setting of a courtroom.

Functionalities

Here are tables of all the features that were proposed at the start of the term and what was achieved.

Intellijudge (Fall 2023)

No.  Task             Priority      Status
F1   AI Backend       Must have     Complete
F2   Speech-to-Text   Must have     Complete
F3   Text-to-Speech   Must have     Complete
F4   Subtitles        Nice to have  Complete
F5   Speech Analysis  Nice to have  Partially Complete


Unreal Engine VR Version (Fall 2023)

No.  Task                                                             Priority   Status
F6   Users can interact and adjust their settings in the Start Menu  Must have  Complete
F7   Doors to the courtroom open once the user starts the session    Must have  Complete
F8   Converse with metahuman judge                                    Must have  Complete

Features Carried over from Moot Court

  1. Customizable Moot Court Practice
    • Using the setup page, users can customize their practice moot based on the specifications of their assigned moot as well as their personal needs for their practice session.
  2. Simulated Moot Court Scene
    • In order to create a more realistic and therefore more effective practice tool for the user, a simulated moot court scene has been created to emulate the environment that first-year law students will be conducting their mandatory moot in.
  3. Timer
    • During the practice moot, a timer is shown at the bottom right of the screen with colour-coded time indications that transition from green to yellow to red as time runs out, similar to the timer warnings given during in-person moots. The timer can also be paused to give the user more control and flexibility while practicing (a small sketch of the colour mapping follows this list).
  4. Mooting Resources
    • First-year moot resources provided by course instructors are included on the website.
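
As a small illustration of the colour-coded timer described in item 3, the remaining time could be mapped to a colour roughly as follows; the thresholds below are placeholders, not the application's actual values.

```typescript
// Map remaining time to the timer colour: green while ample time remains,
// yellow as it runs low, red near the end. Thresholds are placeholders.
function timerColor(remainingSeconds: number, totalSeconds: number): string {
  const fraction = remainingSeconds / totalSeconds;
  if (fraction > 0.5) return "green";
  if (fraction > 0.2) return "yellow";
  return "red";
}
```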

Methods

JIS is developed as a web application using React-Three-Fiber, HTML, CSS, and TypeScript. It is currently hosted using AWS CloudFront.

Tech Stack and Development Overview

Technical Components (Project Handover May 2024)

INTELLIJUDGE VERSION

High-Level Overview. Primary goal: reduce AI request/response time and improve performance.

Figure 1 illustrates the design implementation pre-refactor. It uses http to communicate with ChatGPT. This feature (from input to output) was implemented in Converse.tsx. Figure 2 illustrates the design implementation post-refactor.

We now send text instead of audio to the server as input. Performance was the primary goal of this refactor: doing the speech-to-text on the client side should improve round-trip time.

Converse.tsx contained the prior implementation of sending an input to the server and receiving an output; it has been renamed to ConverseComponent(DEPRECATED).tsx. The refactor splits Converse into two separate components, the Audio Component and the Converse Component. The Audio Component is also where the Push-To-Talk functionality is implemented.

Figure 4: General design for STT and TTS between ChatGPT and users of the Judicial Interrogatory Simulator.

The audio component handles receiving user speech and converting it to text (implemented using the vosk-browser plugin).

The converse component takes the text from the audio component as input, sends it to the server, receives the server response, and plays it as audio.

A separate ServerUtility static class holds all server-related functionality; see src/components/server/ServerUtility.tsx. The class provides basic functionality such as initializing the web socket, sending a message to the server, and playing the audio response (with the option of stream or chunk playback).

"Stream" means the audio is played only once the full response has been received, whereas "Chunk" means the audio is played as each chunk of the response arrives. By default, it is set to Chunk.
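
The actual interface lives in src/components/server/ServerUtility.tsx; the sketch below is a hypothetical outline of the functionality described above (web socket initialization, sending a message, and stream vs. chunk playback), with illustrative method names rather than the real API.

```typescript
// Hypothetical outline of the ServerUtility functionality described above.
// Method names and the PlaybackMode type are illustrative, not the actual API.
export type PlaybackMode = "stream" | "chunk";

export class ServerUtility {
  private static socket: WebSocket | null = null;

  // Playback mode: "stream" waits for the full response before playing,
  // "chunk" (the default) plays each piece as it arrives.
  static mode: PlaybackMode = "chunk";

  // Open the web socket connection to the EML server.
  static init(url: string): void {
    ServerUtility.socket = new WebSocket(url);
  }

  // Send the transcribed user text to the server.
  static sendMessage(text: string): void {
    ServerUtility.socket?.send(JSON.stringify({ type: "user_text", text }));
  }

  // Play one piece of the audio response and wait until it finishes, so queued
  // chunks play back in order. In "stream" mode the caller passes the fully
  // assembled response instead of individual chunks.
  static async playAudio(data: Blob): Promise<void> {
    const audio = new Audio(URL.createObjectURL(data));
    await audio.play();
    await new Promise<void>((resolve) => {
      audio.onended = () => resolve();
    });
  }
}
```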

Assessment Page

Figure 5: General design for how audio input is assessed and displayed in the JIS project.

At the end of each session, an assessment page is offered.


The design was built and discussed with principal investigators and the team.

Vosk-browser Plugin  

The previous implementation of speech-to-text relied on react-speech-recognition and was initially retained. However, the library proved inadequate: it lacks information about the audio and the entirety of the transcribed text, abstracting the recording process and only outputting sentences. Two issues arose from this limitation:

  1. Handling start/end times required a workaround that detected how long the PTT key was pressed, which proved highly inaccurate: it does not account for delays between pressing the key and actually speaking.
  2. The data sent to the assessment page assumed each sentence was a word, leading to significant inaccuracies.

Timestamps

Figures 5.1 and 5.2: How intervals are assessed in the JIS project workflows.

After researching speech-recognition plugins that offer word-level timestamps, a promising option is vosk-browser, available at https://github.com/ccoreilly/vosk-browser. Should the decision lean towards prioritizing sentence-level pacing over word-level pacing, adjustments can easily be made to suit our requirements.

The implementation for vosk-browser can be found in AudioComponent.tsx (renaming it to SpeechToTextComponent.tsx might be more fitting). Previously, speech detection relied on audio volume in ConverseComponent(DEPRECATED).tsx: an audio segment was classified as speech if it surpassed a minimum volume threshold. It is worth noting that the previous implementation used toggling instead of Push-to-Talk.
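
As an illustration, a minimal use of vosk-browser to obtain word-level timestamps might look like the sketch below. This follows the plugin's public README (createModel, KaldiRecognizer, setWords); the model path, sample rate, and message typing are assumptions and should be checked against AudioComponent.tsx and the installed plugin version.

```typescript
import { createModel } from "vosk-browser";

// Minimal sketch of word-level timestamps with vosk-browser.
// The model archive path and sample rate are placeholders.
async function setupRecognizer() {
  const model = await createModel("/models/vosk-model-small-en-us.tar.gz");
  const recognizer = new model.KaldiRecognizer(16000);
  recognizer.setWords(true); // include per-word start/end times in results

  recognizer.on("result", (message: any) => {
    // With setWords(true), each recognized word carries start/end timestamps.
    for (const word of message.result?.result ?? []) {
      console.log(word.word, word.start, word.end);
    }
  });

  return recognizer; // feed AudioBuffers via recognizer.acceptWaveform(buffer)
}
```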

Speech-to-text analysis

In the previous implementation, the STTAnalysis function can be located in the provided source code at line 188. Another area of refactoring focused on the STTAnalysis function within AssessmentPage.tsx. Initially, it used a sliding-window approach to emulate a continuous function suitable for dynamic data. However, because assessment occurs only at the end of a session and real-time analysis is not necessary, a discrete function is more appropriate. A discrete analysis mode was therefore introduced, with some modifications, while the old continuous implementation was retained within STTAnalysis.

Additionally, the previous implementation used a sample rate, which proved problematic. The sample rate, defined as "the sample quality of the analysis, samples/unit time," implied that increasing the number of samples per unit time would enhance the accuracy of the analysis. However, each sample was associated with a start time; for instance, the word "legal" might have a start time of 1534 ms. Calculating words per interval for seconds 1-2 by including sample points with start times outside this window to boost accuracy was not feasible. Consequently, the sampling implementation was removed altogether.
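
To illustrate the discrete mode described above, a sketch of counting words per fixed interval from their start timestamps might look like the following; the word shape and interval length are assumptions, and the real logic lives in STTAnalysis within AssessmentPage.tsx.

```typescript
interface TimedWord {
  word: string;
  startMs: number; // start time of the word, in milliseconds
}

// Count how many words start inside each fixed interval.
// Words are bucketed only by their own start time; no neighbouring
// samples are borrowed, which was the issue with the old sampling approach.
function wordsPerInterval(words: TimedWord[], intervalMs = 1000): number[] {
  if (words.length === 0) return [];
  const end = Math.max(...words.map((w) => w.startMs));
  const counts = new Array(Math.floor(end / intervalMs) + 1).fill(0);
  for (const w of words) {
    counts[Math.floor(w.startMs / intervalMs)] += 1;
  }
  return counts; // counts[i] = words starting in [i*intervalMs, (i+1)*intervalMs)
}
```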

Subtitles

The EML server optimizes the delivery of multimedia content by sending audio in small chunks, enabling quicker streaming and reducing wait times for the end user. This segmented approach to audio transmission ensures smooth and responsive playback. In parallel, subtitle text is also transmitted in fragments. To maintain coherence and readability, the client-side application accumulates these text fragments into a buffer. The accumulated text is managed through Zustand, a state management library, which updates and renders the subtitles in the user interface.
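
A minimal sketch of this buffering pattern, assuming Zustand v4's create API and illustrative store/field names (the actual store in the codebase may be organized differently):

```typescript
import { create } from "zustand";

// Illustrative subtitle store: fragments are appended to a buffer
// and the accumulated text is rendered by the subtitle component.
interface SubtitleState {
  text: string;
  appendFragment: (fragment: string) => void;
  clear: () => void;
}

export const useSubtitleStore = create<SubtitleState>((set) => ({
  text: "",
  appendFragment: (fragment) => set((state) => ({ text: state.text + fragment })),
  clear: () => set({ text: "" }),
}));

// In a component: const text = useSubtitleStore((s) => s.text);
```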

Figure: Screenshot of the 3D courtroom model, showing three judges and the BC and Canadian flags.

UNREAL ENGINE VR VERSION

UE IntelliJudge

This process involves receiving user input in audio format, converting it to text, and transmitting it to the server. The server then processes the text, generating a response which is converted back into an audio file and played for the user. During audio playback, visual feedback is provided through a MetaHuman judge interface with synchronized talking and listening/idle animations.

Figure: Flowchart of the input, output, and visual interactions with the JIS IntelliJudge.

Connection to Server  

The server connection is implemented with BP TCP in the UE project folder. The user's voice input is handled by the VRPawn blueprint called "Voice Input Recognition".

Implementation of AI to MetaHuman:

VRPAWN:  

Within VRPawn, "Voice Input Recognition" contains the implementation responsible for converting the audio input to text (STT) using Runtime Speech Recognizer and transcribing it into a string to send to the server. Using BP TCP, it signals the OpenAI server to generate an audio response. The server receives the transcription and generates a response according to the prompt in the config file. The resulting audio output is sent back in sequential chunks and played accordingly.

VRPawn is connected with LevelBP, BPA_MH, and BPA_FACE, which communicate with BPTCP and the MetaHuman animations. The MetaHuman animation states change between "idle" and "talk" within BPA_FACE and BPA_MH, controlled by the Boolean Key 1 from BPTCP through the LevelBP.

VR Controls

The left-hand controller “X” button activates user speaking. This action input along with other VR controls can be found under VRTemplate in Input and Action folder. The action that activates and controls the speaking state is “IA_Talk”  and is handled in “IMC_Hands”.  

Technical Components (investigated and used prior cycles)

Audio System

  • Replaced Moot Court's speech synthesis [needs update]

3D Animation and Scene

3D animation

  • From Moot Court: the judge characters and pre-existing animations were kept in the new JIS build. Moot Court's animation of the judge avatar used a combination of three resources: DeepMotion, Mixamo, and Plask.ai. DeepMotion was used to source an adequate judge model for the scene. The model obtained from DeepMotion was then imported into Mixamo to rig it for animation. Once the model was rigged in Mixamo, it was put into Plask.ai to record custom animations. According to the previous work on Moot Court, motions were obtained either from online video sources or by filming a person performing the actions.
  • For JIS, the new animations were sourced with Rokoko AI Motion Capture. Similar to DeepMotion and Plask.ai, this uses short video clips to generate real-motion animations used for the judge's movements. Motions were obtained by filming a person performing the needed motion for the judge. The included motions are nodding, a disapproving nod, pondering, a resting position, drinking, writing, reading, and idling. The generated animations were exported as FBX in Mixamo format and brought into Blender along with the judge model. For the initial setup, the judge model had to be reset into a T-pose using the CAT plug-in for Blender. If the model is already in a T-pose, the Rokoko Retarget Blender plug-in is used to retarget the MOCAP animations onto the judge model. For post-editing, the animation was edited so that only motion from the hips up remains. Any motions that jittered or lacked movement were fixed using the graph editor in Blender.
  • To store multiple animations in one judge model, push/store the completed animations as NLA strips. The completed model with its animations is exported as a GLB file. For the main judge, which comes with props, to export as a single animation clip, make sure to toggle on "Group by NLA Track" and give the prop's and model's animations the same name.

3D Scene

  • The application has both the old Moot Court 3D courtroom scene, which was built in SketchUp, and an updated courtroom scene. For the old Moot Court scene, we kept the original style to maintain a "classroom" version. The scene was updated with new textures, new lighting, and replacement 3D assets to improve the room. The new textures were replaced with PBR materials using Blender. Textures were sourced from these websites: 3D Textures and AmbientCG.
  • Updated 3D assets were made using Maya and Blender, and lighting was added with React Three Fiber.
  • The new courtroom was added to emulate a realistic courtroom. The model was made using Maya and Blender and uses the same texturing and updated lighting as the Classroom version.
  • Future developers may change the 3D assets as long as the assets are exported as .glb files. To implement new or updated models, import and upload the model file into the repository. If you are updating an existing model, you may need to replace the model URL with the file path to the model. Any new model asset requires including its file path so it renders into the scene. Currently, all models in the scene are stored in the public folder under "models" (a loading sketch follows this list).
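
For example, a new .glb asset placed in public/models could be rendered with React Three Fiber roughly as below; the component name, file name, and transform values are placeholders.

```tsx
import { useGLTF } from "@react-three/drei";

// Illustrative loader for a model stored under public/models.
// The file name and position are placeholders.
export function CourtroomProp() {
  const { scene } = useGLTF("/models/example-prop.glb");
  return <primitive object={scene} position={[0, 0, 0]} />;
}

useGLTF.preload("/models/example-prop.glb");
```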

OpenAI: Intelli-Judge

  • Hearing: Various settings determine how often requests are sent to the server, which can be thought of as hearing. An important feature for interactivity is detecting when the user has stopped talking. The user's microphone volume is recorded, normalized, and averaged. If the normalized average volume is under a set percentage, the user is considered quiet and a request for a response is made. STT is used to convert the received audio into text, which is used to prompt ChatGPT. Whisper by OpenAI was also tried, but it proved unstable with quiet sections of audio. (A sketch of the volume check appears after this list.)
  • Responding: There are hard-set values that determine whether a response will be given, such as the required delay between responses. ChatGPT function calling is also used to determine a percent probability that a judge would respond based on the content of the conversation. This can be used as a soft way to change the frequency of responses.
  • Speaking: TTS is used to create an expressive voice. Some "expressive" voice models support the use of SSML (Speech Synthesis Markup Language). When an SSML model is in use, ChatGPT's text response is converted to SSML format using OpenAI's text-davinci-edit.
  • Multithreading: Interactivity is important for the user experience. To reduce latency, API requests are multithreaded so content can be generated while other content is sent back and played. ChatGPT Server-Sent Events are received live from OpenAI. As soon as a sentence is detected, TTS begins generating audio, which is a slow process.
  • Generated files are deleted after use or after a set time if not used to avoid storing user data.
  • Playback on the client is done by pushing any received audio to a queue, which lets audio collect and play in order for seamless playback. Playback may have gaps if audio generation is slow.
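
As a rough illustration of the volume-based "hearing" check described in the first item, the microphone level could be sampled and averaged as in the sketch below; it uses the standard Web Audio API, and the threshold and window size are illustrative values rather than the project's actual settings.

```typescript
// Sketch of the volume-based "hearing" check: sample the microphone,
// average the normalized volume, and treat values under a threshold as silence.
// Threshold and window size are illustrative, not the project's real settings.
async function isUserQuiet(thresholdPercent = 5, windowMs = 500): Promise<boolean> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  ctx.createMediaStreamSource(stream).connect(analyser);

  const data = new Uint8Array(analyser.frequencyBinCount);
  const samples: number[] = [];
  const start = performance.now();

  while (performance.now() - start < windowMs) {
    analyser.getByteFrequencyData(data);
    const avg = data.reduce((sum, v) => sum + v, 0) / data.length;
    samples.push((avg / 255) * 100); // normalize to a 0-100 percentage
    await new Promise((r) => setTimeout(r, 50));
  }

  const mean = samples.reduce((sum, v) => sum + v, 0) / samples.length;
  return mean < thresholdPercent; // quiet -> ready to request a judge response
}
```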

Speech Assessment

  • STT timestamps each detected word. We need to extract word counts from this 1D data so that they can be plotted against time. Sliding-window analysis counts the words that fall within a moving window of time. This method extracts words per minute, which can be plotted using the d3 library. A previous implementation of this can be found in the speech-analysis branch on GitHub; it may still be useful if the STT method has changed. (A sketch appears after this list.)
  • The presenter's transcript is also provided and is interactively linked with the plot, giving presenters easy-to-read feedback on how well they met their time and clarity goals.
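
As an illustration of the sliding-window analysis, counting words whose timestamps fall inside a moving window yields a words-per-minute series that d3 can plot. The names, window size, and step size below are assumptions, not the project's actual parameters.

```typescript
interface TimedWord {
  word: string;
  startMs: number; // start time of the word, in milliseconds
}

// Count words inside a sliding window and scale the count to words per minute.
// windowMs and stepMs are illustrative values.
function wordsPerMinuteSeries(
  words: TimedWord[],
  windowMs = 10000,
  stepMs = 1000
): { timeMs: number; wpm: number }[] {
  if (words.length === 0) return [];
  const end = Math.max(...words.map((w) => w.startMs));
  const series: { timeMs: number; wpm: number }[] = [];

  for (let t = 0; t <= end; t += stepMs) {
    const inWindow = words.filter(
      (w) => w.startMs >= t && w.startMs < t + windowMs
    ).length;
    series.push({ timeMs: t, wpm: inWindow * (60000 / windowMs) });
  }
  return series;
}
```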

Deployment and Site Hosting

  • Developers deploy this application like any other React application. Run npm start for local development. Run "npm run build" to create a build folder; the contents of the build folder should be uploaded to the server hosting the application. The current application is hosted on AWS CloudFront. To access the S3 bucket of this project, please contact the Emerging Media Lab for permissions.
  • The server must also be deployed for Intelli-Judge to work; however, building the project is not required. The current port in use is 8889; ensure that this port is accessible via the server's public IP address. The API is made available using the AWS REST API Gateway. CloudFront is used to redirect requests from https://ubc.intellijudge.ca to the server. Note that a previous method did not use the API Gateway and required an SSL certificate. Using a domain name is no longer required, and the invoke URL produced by the Gateway can be used.

VR JIS

Design Overview

Design Assets

Courtroom Environment

Assets were purchased from the UE Asset Store and credited as necessary, but they are not redistributed in the public code release.

MetaHuman Judge

The MetaHumans imported into the scene are default models provided by UE. The body animation is supported by Mixamo, while the facial animation was recorded using UE facial Live Link and converted into an animation.

The project uses two animation blueprints to control the body and facial animations. BPA_Face controls the facial animation from "idle" to "talk". BPA_MH controls the body animation and changes the state from "idle" to "talk" motions. An additional facial blueprint, BPA_Face_npj (non-player judge), loops the idle facial animations. Furthermore, BPA_MH has two animations played randomly with the random sequencer in the "talk" animation state.

Start Menu

The Start Menu is implemented with Unreal Engine 5 Widget Blueprints and Actor Blueprints. The functionality of the buttons was implemented with Input Action Triggers, Widget Interactions from VRPawn, and Widget Switchers.

NPC Characters

NPC characters are sourced from Mixamo along with the varying sitting animations.

BACKGROUND MUSIC AND SOUND EFFECTS

All sounds are sourced from Freesound.org and stored under VRTemplate in Audio.

First Time Setup Guide

The Unreal Engine VR version acts as an immersive simulated environment that further exposes students to the courtroom experience.

Getting Started with Unreal Engine:

  1. Install Epic Games Launcher
  2. Download Unreal Engine 5.3.2. The Android, iOS, Linux, and TVOS target platforms are not needed and can be unchecked during installation; only VR is required.
  3. Install required plugins from the Unreal Engine Marketplace (it would appear under the Library Vault) : Runtime Audio Importer; Runtime Speech Recognizer; TCP Socket Plugin; Groom; Live Link; Live Link Control Rig; Apple ARKit Face Support
  4. Under Content/VRTemplate/Maps, open the Courtroom1 map.
  5. Connect the Quest headset to a PC with the Oculus program installed.
  6. Launch Rift on the Quest.
  7. On Unreal Engine, click on Simulate in VR Preview to start the simulation.

For other setup access

  1. Clone the GitHub Project:
    • Clone via GitHub repository: link JIS.
    • Unreal Engine Project Download Link

Known Issues

A message regarding the SpeechRecognizerModel appears each time the project is opened, asking whether to replace an existing object. Clicking 'No' will not affect the functionality of the project; the message is only relevant if the speech recognizer model has been regenerated on a different computer.

Future Plans

Intellijudge

For future considerations of the JIS prototype, adaptations specific to Canadian law and improving realism by making judge interactions more human-like are essential. Priority should be given to completing core functionalities, like refining the AI tool and integrating legal scenarios. Challenges for a new team may include technology availability and potential subscription models for essential tools. Rapid prototyping requires agile responses to evolving tech and user demands.

Unreal Engine VR Version

The user experience of the Unreal Engine VR version could be further improved during the session. This includes adding the remaining features found in the IntelliJudge version, such as a timer, pause button, and subtitles.

There are also bugs to address in the VR scene, along with partially complete functionality in the VR controls. Players can fall downwards when rotating with the controls, and players are able to press "X" and speak to the judge while in the waiting room, without having clicked "Start".

The MetaHuman judge could be given more diverse motions and facial expressions to match the AI's responses, as well as sound effects and background music that fit the courtroom environment.


Other Findings

Top level overview of issues encountered and how they were addressed.

Performance

Throughout the 2023 Summer term, significant progress was made on the project. UI enhancements were prioritized for the web version to improve user experience, alongside improving the judge component with additional motions for a more immersive courtroom simulation. Towards the end, an initial attempt to integrate a server was met with challenges due to slow performance.  

In the 2023 Fall term, the focus shifted towards addressing this issue, with the development of a new server infrastructure by the Emerging Media Lab. This revamped server, designed to ensure rapid responses and generate questions using OpenAI's GPT-4 model, was used to elevate user interaction and experience.

AI server:

AI responses may vary depending on the provided prompt and available tokens. Thus, depending on the prompt, the response can be adjusted to be more judge-like, informative, or succinct. When running the server, modifying the prompt affects the judge's responses. Tokens refer to sequences of characters generated by the AI, encompassing parts of words, punctuation marks, spaces, or other non-English characters. The number of available tokens, set in the configuration, dictates the input length and the output the AI can generate.

When student responses are lengthy, insufficient tokens or a high limit may cause the server to fail to respond, the student's response to go unprocessed, or the judge's response to be truncated. To address this, adjust the token limits in the configuration file, ensure enough tokens are available, and prompt the AI to control the length of its responses.
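
The server's actual prompt and configuration file are not reproduced here. As an illustration only, a request with an explicit token limit using the official openai Node library might look like the sketch below; the model name, system prompt, and limit are assumptions to show where these settings take effect.

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Illustrative request: the system prompt shapes how judge-like the reply is,
// and max_tokens caps the length of the generated response.
async function askJudge(studentArgument: string): Promise<string> {
  const completion = await client.chat.completions.create({
    model: "gpt-4",
    max_tokens: 256, // placeholder limit; tune alongside the prompt
    messages: [
      {
        role: "system",
        content:
          "You are an appellate judge. Ask one concise, pointed question about the argument.",
      },
      { role: "user", content: studentArgument },
    ],
  });
  return completion.choices[0].message.content ?? "";
}
```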

Accessibility

Subtitles were integrated into the system to aid student comprehension, while word analytics tools were implemented to provide valuable insights into speech and argumentation, facilitating skill enhancement.

3D Animations

Using AI motion capture required filming a person performing the actions we wanted. While some videos of judges could work, AI motion capture requires a full-body view with a clear background; relying on judge videos produced animations with a lot of jitter or incorrect movements. If the next developers want to add custom motions, film a person performing the motion while capturing the full body against a clear background. To get cleaner animations, it is best to exaggerate the motion during filming. Any animation jitter or motion edits can be handled in Blender's graph editor.

Challenges

Some additional considerations for a first-time user to get up and running.

  • For initial setup, the program is prone to breaking due to the number of dependencies. Please make sure to install all the dependencies listed in GitHub. Using "npm install {package} --force" is almost always required due to the different versions of packages used.
  • Intelli-Judge requires the server to be running. Clone it from GitHub and install any dependencies needed. Ensure that it is accessible via the internet (and, if deployed, it must be accessible via an HTTPS URL, not an HTTP URL). Also ensure that the server does not go to "sleep". If both the client and the server are running locally, the server is accessible at http://localhost:{port#} (currently port# = 8889), but make sure to also configure the client's target for the server inside Converse.tsx (see the sketch after this list).
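
For example, the client's server target might be switched between the local and deployed endpoints with a small constant like the one below; the variable name is illustrative, and the real setting lives in Converse.tsx.

```typescript
// Illustrative: point the client at the local server during development
// and at the deployed HTTPS endpoint otherwise. The actual setting is
// configured inside Converse.tsx.
const SERVER_URL =
  process.env.NODE_ENV === "development"
    ? "http://localhost:8889"
    : "https://ubc.intellijudge.ca";
```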

The older GitHub repository can be found here: LINK

More about Art Assets

Information on earlier and evolving art.

Using the pre-existing room asset, the scene was further improved by adding PBR textures and lighting to allow users to be more immersed in the JIS.

Images of before and after: old moot courtroom; Classroom/Updated Moot Courtroom; Classroom/Updated Moot Courtroom: In Session; Updated Courtroom; Updated Courtroom: In Session.

We also included an updated Courtroom version to emulate a realistic courtroom. This was made using Maya and Blender and uses the same PBR textures and lighting as the other courtroom versions.

Libraries


3D MODELING/ANIMATIONS

  • Blender Plug-ins: Rokoko Retarget Plug-in, Cat Plug-in

Development Team

Principal Investigators

Jon Festinger, Q.C. (He, Him, His)

Adjunct Professor

Peter A. Allard School of Law

The University of British Columbia

Nikos Harris, Q.C.

Associate Professor of Teaching

Peter A. Allard School of Law

The University of British Columbia

Barbara Wang BA, JD

Manager, Student Experience

Peter A. Allard School of Law

The University of British Columbia

Student Team

Software Developers

Leah Marie Fernandez

Work Learn at the Emerging Media Lab at UBC

Undergraduate in Bachelor of Science in Combined Major

University of British Columbia

Poh Leanne Kee

Work Learn at the Emerging Media Lab at UBC

Undergraduate

University of British Columbia

William Watkins

Work Learn at the Emerging Media Lab at UBC

Undergraduate in Bachelor of Science in Biology

University of British Columbia

UX/UI Designers

Jiho Kim

Work Learn at the Emerging Media Lab at UBC

Undergraduate in Bachelor of Science in Cognitive Systems

University of British Columbia

Jazzy Wan

Work Learn at the Emerging Media Lab at UBC

Undergraduate

University of British Columbia

Angela Felicia

Work Learn at the Emerging Media Lab at UBC

Undergraduate

University of British Columbia

Team Leads 2023/2024

Jiho Kim

Leah Marie Fernandez


License

MIT License

Copyright (c) 2023 University of British Columbia

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
