
Mindful Space In Sentences

Research Questions

- How can virtual spatial emotions be assessed objectively using EEG and natural language processing?
- What are the effects of different parameters, such as shape, height, width, and length, on the emotions experienced in virtual reality spaces?
- Can EEG measurements of brain waves be used to label virtual rooms with calm and active emotions?
- How do these labeled emotions compare with human feelings and language descriptions of the VR spatial experience?
- What potential applications could be developed by combining physiological metrics and AI methods for synthetic design generation and evaluation?

Published Full Paper

MINDFUL SPACE IN SENTENCES - A Dataset of Virtual Emotions for Natural Language Classification | Architecture And Planning Journal (APJ) | ISSN: 2789-8547

Recognition

Best Paper Award at ASCAAD Computational Design Conference 2022

My Role

Research, Virtual Environment Setup, Data Cleaning, Data Analysis

Tools

Meta Oculus Quest 2 (VR), Muse 2 (EEG), BERT (NLP), SciPy (Data Analysis)

Teammate

Han Tu
(Massachusetts Institute of Technology)

Instructor

Takehiko Nagakura
(Massachusetts Institute of Technology)

Duration

4 months

Background and Goals

VR makes it possible to study spaces in the lab instead of taking people from building to building, and EEG can measure feelings that are hard to put into words. Much research therefore uses VR to vary spatial parameters combined with EEG to measure feelings. However, none of it tells the story of how people explain their feelings or emotions in spaces using language, and how we can use that language to explain and measure spaces or spatial emotions.

Our research therefore offers an emotion classification dataset of everyday sentences for natural language processing, aimed at architectural design improvement. The dataset helps architects understand virtual spatial emotions through everyday descriptions and use them to guide designs.

Methodology

To achieve this goal, we break the methodology into three parts: virtual reality to measure the spaces, EEG to measure emotions such as calm or active, and sentences to build an emotion classification dataset.

Experiment Setup

In the experiments, participants can look around the VR rooms and answer questions in sentences. While they experience the rooms, we record five brain-wave bands using EEG.

Experiments Overview

26 participants, all architecture students or faculty members from MIT or Harvard, took part in the test. In the end, we collected 1402 sentences from them.

Test Flow

For example, when we ask participants what a room could become if it were a public space, they experience the VR environment and answer our questions while their brain waves are recorded at the same time. We also ask questions like 'What kind of furniture do you want to bring here?' and 'What is your feeling being inside of this room?'

Room #3

Room #7

Room #10

Working Procedure

We use three tools for our measurements:
- The blue one is the VR spaces, which stimulate emotions and sentences;
- The red one is the EEG device (Muse 2), which measures emotions such as calm or active;
- The yellow one is the audio recording, collection, and analysis pipeline, which builds our dataset.

Spatial Parameters

Virtual Environment Setup

We built ten rooms varying four parameters: shape, height, length, and width. These parameters stimulate different feelings as participants experience them in the virtual environments.
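To make the setup concrete, below is a minimal sketch of how such room configurations could be encoded as data; all dimension values and shape names here are hypothetical placeholders, not the parameters used in the study.

```python
# Hypothetical encoding of the room parameter space; the concrete
# values are illustrative placeholders, not the study's dimensions.
from dataclasses import dataclass

@dataclass
class Room:
    room_id: int
    shape: str     # e.g. "rectangular" or "curved" (assumed categories)
    height: float  # meters
    length: float  # meters
    width: float   # meters

rooms = [
    Room(1, "rectangular", 3.0, 6.0, 4.0),
    Room(2, "rectangular", 6.0, 6.0, 4.0),   # taller variant
    Room(3, "rectangular", 3.0, 12.0, 4.0),  # longer variant
    # ... the remaining rooms vary shape, width, etc. one at a time
]
```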

Below are the ten rooms that participants see in the VR headsets. They have a 90-degree perspective, and participants experience them in a random sequence.

Room #1

Room #2

Room #3

Room #4

Room #5

Room #6

Room #7

Room #8

Room #9

Room #10

Emotional Labels

The brainwaves from the EEG show whether participants are calm or active, which we use for emotion labeling. For example, when Participant No. 26 experienced Room No. 7, we recorded his brainwaves and sentences and found that his signal stayed above the zero line, the baseline of his brainwave, indicating an active emotion, so we labeled his sentence 'active'.
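The labeling step can be summarized in code. Below is a simplified sketch of that logic, assuming band-power values exported from the Muse 2; the band choice and threshold are our assumptions for illustration, not the study's exact processing.

```python
import numpy as np

def label_emotion(band_power: np.ndarray, baseline: float = 0.0) -> str:
    """Label a recording segment 'active' if its mean band power lies
    above the participant's zero-line baseline, otherwise 'calm'."""
    return "active" if np.mean(band_power) - baseline > 0 else "calm"

# Illustrative values only: a segment resembling Participant No. 26
# in Room No. 7, whose signal stayed above the baseline.
segment = np.array([0.12, 0.30, 0.25, 0.18])
print(label_emotion(segment))  # -> "active"
```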

Result ⓵ - Language Data Classification

We analyzed all of these sentences with the calm or active emotion labels derived from our EEG data. For example, Participants No. 25 and No. 20 both had calm emotions in Rooms No. 2, No. 3, No. 7, and No. 10. In this way, we recorded 1402 sentences with the two emotion labels and transcribed them into text to build our binary emotion classification dataset.
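As a sketch of this assembly step, the snippet below pairs transcribed sentences with their EEG-derived labels and writes them out; the column names, file name, and the sentence-to-participant pairing are assumptions for illustration.

```python
import csv

records = [
    {"participant_id": 25, "room_id": 2,
     "sentence": "This space feels more relaxing, but with more sounds "
                 "like the space for worshipping or for prayers.",
     "label": "calm"},
    {"participant_id": 26, "room_id": 7,
     "sentence": "I feel very energetic and I feel like dancing.",
     "label": "active"},
    # ... 1402 sentences in total
]

with open("mindful_space_dataset.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["participant_id", "room_id", "sentence", "label"])
    writer.writeheader()
    writer.writerows(records)
```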

Result ⓶ - Relationship Between VR Space And Emotions

First, in terms of the relationship between spaces and emotions, each room can provoke either calm or active emotions depending on the questions and on participants' imagination of the spaces. Some rooms therefore do not have distinct emotional features, such as Rooms No. 1, No. 4, and No. 8.

'Active Participant'

'Calm Participant'

Furthermore, emotional arousal in the VR rooms differs between individual participants. Some participants generally had more active emotions in the VR environments, such as Participants No. 9, No. 10, No. 15, and No. 26, while others had calmer emotions, such as Participants No. 7, No. 13, and No. 23. Such deviation may come from differences in individual spatial cognition or perception.

Correlations Between Emotion Labels of Rooms And Participant ID

| | Participant_ID | Emotion_label |
| --- | --- | --- |
| Participant_ID (Gender/Architects) | 1 | .303** |
| Room_ID | .051 | -.035 |
| Length | -.101** | -.037 |
| Width | -.009 | .076 |
| Height | -.038 | -.020** |
| Emotion_label | .303** | 1 |

(** = significant correlation)

However, as the table shows, the significant -.020** correlation between height and emotion label means that in higher spaces, most participants had calmer emotions.
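Such a table can be produced with SciPy, the analysis tool listed above. The sketch below computes Pearson's r with significance stars; the toy data and the 0 = calm / 1 = active coding are assumptions for illustration.

```python
from scipy import stats

def correlate(x, y) -> str:
    """Pearson r with the significance stars used in the table above."""
    r, p = stats.pearsonr(x, y)
    stars = "**" if p < 0.01 else ("*" if p < 0.05 else "")
    return f"{r:.3f}{stars}"

# Toy data: room height per trial vs. emotion label (0 = calm, 1 = active).
heights = [3, 3, 6, 6, 9, 9, 3, 6, 9, 3]
labels  = [1, 1, 1, 0, 0, 0, 1, 1, 0, 1]
print(correlate(heights, labels))  # a negative r means taller -> calmer
```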

Result ⓷ - Natural Language Classification

Results of Trained BERT Model of Task 2

| | precision | recall | f1-score | support |
| --- | --- | --- | --- | --- |
| Calm | 0.35 | 0.41 | 0.37 | 24 |
| Active | 0.69 | 1.00 | 0.82 | 54 |
| accuracy | | | 0.69 | 78 |
| macro avg | 0.35 | 0.50 | 0.41 | 78 |
| weighted avg | 0.48 | 0.69 | 0.57 | 78 |

Second, we used BERT, a natural language processing machine learning model, to train a classifier on our dataset. The model reached an accuracy of 0.69, with a weighted average f1-score of 0.57.
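Below is a condensed sketch of fine-tuning BERT for the calm/active task with the Hugging Face Transformers library; the hyperparameters, checkpoint name, and the single-sentence toy dataset are assumptions, not the paper's exact training setup.

```python
import torch
from torch.utils.data import Dataset
from transformers import (BertForSequenceClassification, BertTokenizerFast,
                          Trainer, TrainingArguments)

class SentenceDataset(Dataset):
    """Wraps tokenized sentences and 0/1 labels for the Trainer."""
    def __init__(self, sentences, labels, tokenizer):
        self.enc = tokenizer(sentences, truncation=True, padding=True)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2,
    id2label={0: "calm", 1: "active"}, label2id={"calm": 0, "active": 1})

# Toy training set; the real dataset holds 1402 labeled sentences.
train_ds = SentenceDataset(
    ["I feel very energetic and I feel like dancing."], [1], tokenizer)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-calm-active",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train_ds,
)
trainer.train()
trainer.save_model("bert-calm-active")  # hypothetical checkpoint name
```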

Comparison of Emotion Labels: Our Dataset vs. Twitter Dataset (SemEval-2018 Task 1)

| Sentences | Our dataset | Twitter dataset SemEval-2018 Task 1 |
| --- | --- | --- |
| 'This space feels more relaxing, but with more sounds like the space for worshipping or for prayers.' | calm | joy |
| 'I feel very energetic and I feel like dancing.' | active | joy |
| 'I cant stop. i finished - dejected. luckily no one is in the bathroom. so i go to a stall and wait until my pants are dry.' | active | fear |
| 'Well stock finished & listed, living room moved around, new editing done & fitted in a visit to the in-laws. #productivityatitsfinest #happy' | active | joy |

Third, we compared our dataset with others, such as the Twitter dataset. Ours provides the spatial emotions of calm or active rather than solely the joy and fear of the Twitter dataset. It thus gives more space-anchored information, while Twitter mostly contains object-anchored descriptions of food, weather, and so on.

Wrap It Up

In conclusion, our spatial dataset contains 1402 sentences and two labels, calm and active, for assessing VR environments. Our analysis shows how the data relate to certain spatial parameters, and we demonstrate the potential of an NLP emotion classification dataset built from everyday sentences for architectural design improvement. Our models can be used to evaluate machine-learned or human-generated designs in the age of AI.
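As a sketch of that evaluation use case, a fine-tuned checkpoint (the hypothetical "bert-calm-active" from the training sketch above) could score any design description:

```python
from transformers import pipeline

# Load the hypothetical fine-tuned checkpoint from the sketch above.
classifier = pipeline("text-classification", model="bert-calm-active")

print(classifier("This space feels more relaxing, but with more sounds "
                 "like the space for worshipping or for prayers."))
# e.g. [{'label': 'calm', 'score': 0.9...}] (illustrative output)
```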

Limitation

Virtual Reality Environment - We varied only four parameters, with some default interior elements, such as windows and doors, that may affect emotions. Moreover, participants experienced each space from a fixed point, standing or sitting, without navigating the space from different positions;
Emotion Capture - We tested only two emotional labels (active or calm) with the four channels of the EEG;
Natural Language Processing - Our dataset contains individual differences, and it is not large enough to compare with existing emotion classification datasets such as those built from Twitter.

Next Steps

Virtual Reality Environment - We need more controlled factors, such as the materials of windows and doors, and sound. The real-world experience of space is continuous, and people experience architectural spaces not by standing still but through movement, so a 3D-scanned model or a moving panorama video may be another good choice;
Emotion Capture - More advanced EEG devices with more channels, or other biodevices such as GSR and eye tracking, can help measure more emotional data in spaces;
Natural Language Processing - More participants and a list of specific tasks can help us build a better dataset in the future.