{"id":1379,"date":"2022-06-06T12:54:46","date_gmt":"2022-06-06T16:54:46","guid":{"rendered":"https:\/\/research.gsd.harvard.edu\/temp-real\/2022\/06\/06\/eeg-memories-vision\/"},"modified":"2025-04-02T13:45:01","modified_gmt":"2025-04-02T17:45:01","slug":"eeg-memories-vision","status":"publish","type":"post","link":"https:\/\/research.gsd.harvard.edu\/real\/2022\/06\/06\/eeg-memories-vision\/","title":{"rendered":"EEG: Memories + Vision"},"content":{"rendered":"\n<h1 class=\"wp-block-heading\" id=\"h-eeg-memories-vision\">EEG: Memories + Vision<\/h1>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-common-realm-real-time-machine-learning-brain-eeg-signal-to-image-model-to-render-memories-over-vision\">Common Realm: Real-time Machine Learning Brain (EEG) Signal-to-Image Model to Render Memories Over Vision<\/h2>\n\n\n\n<p>GSD VIS-2314: Responsive Environments: Poetics of Space (Spring 2021-22)<br><strong>Students<\/strong>: Ibrahim Ibrahim, Kenny Kim, Jason Leo<br><strong>Faculty<\/strong>: Allen Sayegh, Humbi Song<\/p>\n\n\n\n<figure class=\"wp-block-image\"><a href=\"https:\/\/research.gsd.harvard.edu\/temp-real\/files\/2022\/06\/kimkenny_402739_15065704_iibrahim_kkim_jleo_2314_as_hs_presentation_6_web-scaled.jpg\"><img decoding=\"async\" src=\"https:\/\/research.gsd.harvard.edu\/temp-real\/wp-content\/uploads\/2024\/11\/kimkenny_402739_15065704_iibrahim_kkim_jleo_2314_as_hs_presentation_6_web-1024x576-4.jpg\" alt=\"Student presentation slide with a man wearing a structure.\" class=\"wp-image-3479\" \/><\/a><\/figure>\n\n\n\n<div style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"Common Realm | Harvard GSD Responsive Environments\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/G8l-FHMpkkc?feature=oembed\" 
frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<p><strong>Project Statement<\/strong>:<\/p>\n\n\n\n<p>The project endeavors to interrogate and reflect on the relationship between vision, mind, memory, and environment. Using a Signal-to-Image machine learning model, EEG brain signals are paired with their corresponding images to predict new visuals juxtaposing vision and memories. The product is a wearable device that aims to encourage a critical dialogue around human sensors, mental states, and responsive environments.<\/p>\n\n\n\n<p><i>True&nbsp;<\/i>reality is an amalgamation of vision, memories, and emotions that render our present perception of the world. However, what we see with our eyes is not always what our brain perceives \u2013 extending the hypothesis that our eyes do not tell the complete truth. While our visual sensors \u2013 the eyes \u2013 are the dominant mode of environmental perception, a disturbed, stressed, or anxious mind fails to fully comprehend what is presented through the eyes.<\/p>\n\n\n\n<p><i>Common Realm&nbsp;<\/i>is an immersive electroencephalogram (EEG)-based device that re-paints our vision with speculative layers seen by the mind through visualizing brain signals. 
This project discusses the hierarchy of memories and of sensory inputs other than vision in depicting actual perception in real time.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><a href=\"https:\/\/research.gsd.harvard.edu\/temp-real\/files\/2022\/06\/kimkenny_402739_15065705_iibrahim_kkim_jleo_2314_as_hs_presentation_7_print-scaled.jpg\"><img decoding=\"async\" src=\"https:\/\/research.gsd.harvard.edu\/temp-real\/wp-content\/uploads\/2024\/11\/kimkenny_402739_15065699_iibrahim_kkim_jleo_2314_as_hs_presentation_2_print-3.jpg\" alt=\"process diagram of urban environments and heatmaps\" class=\"wp-image-3474\" \/><\/a><\/figure>\n\n\n\n<p>To corroborate the idea of investigating brainwaves to reinterpret reality, we looked at a study published in Cognitive Computation that examines the neural correlates of human emotional judgment stimulated by auditory, visual, or combined audio-visual stimuli (Hiyoshi et al. 2015). This study aimed to demonstrate that EEG can be used to investigate emotional valence and discriminate among various emotions.<\/p>\n\n\n\n<p>We trained a signal-to-image model on 5,000 images, each labeled with 10 seconds of its corresponding brain signals. Using an eye tracker embedded in the prototype device, we tracked the gaze during training to add hierarchical weights to certain features in the images. Once the RGB images, gaze-tracking values, and brain signals were gathered, we began the training process, followed by real-time validation testing on a segment of the Charles River. The predictions introduced new elements into the frames in real time.<\/p>\n\n\n\n<p>When noises across the street, e.g., cars, sounded in the background, the frames were visibly distorted with vertical noise resembling skyscrapers. 
At other moments, when walking next to a crowd, the predicted images displayed an amalgamation of elliptical forms resembling large crowds.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><a href=\"https:\/\/research.gsd.harvard.edu\/temp-real\/wp-content\/uploads\/2024\/11\/kimkenny_402739_15065702_iibrahim_kkim_jleo_2314_as_hs_presentation_5_print-cropped-scaled-4.jpg\"><img decoding=\"async\" src=\"https:\/\/research.gsd.harvard.edu\/temp-real\/wp-content\/uploads\/2024\/11\/kimkenny_402739_15065702_iibrahim_kkim_jleo_2314_as_hs_presentation_5_print-cropped-scaled-4.jpg\" alt=\"Man wearing a device with a screen and a camera\" class=\"wp-image-3502\" \/><\/a><\/figure>\n\n\n\n<div style=\"height:100px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>EEG: Memories + Vision Common Realm: Real-time Machine Learning Brain (EEG) Signal-to-Image Model to Render Memories Over Vision GSD VIS-2314: [&hellip;]<\/p>\n","protected":false},"author":194,"featured_media":1378,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[28,1],"tags":[],"class_list":["post-1379","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-portfolio","category-uncategorized"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.7 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>EEG: Memories + Vision - REAL<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/research.gsd.harvard.edu\/real\/2022\/06\/06\/eeg-memories-vision\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"EEG: Memories + Vision - REAL\" \/>\n<meta 
property=\"og:description\" content=\"EEG: Memories + Vision Common Realm: Real-time Machine Learning Brain (EEG) Signal-to-Image Model to Render Memories Over Vision GSD VIS-2314: [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/research.gsd.harvard.edu\/real\/2022\/06\/06\/eeg-memories-vision\/\" \/>\n<meta property=\"og:site_name\" content=\"REAL\" \/>\n<meta property=\"article:published_time\" content=\"2022-06-06T16:54:46+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-04-02T17:45:01+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/research.gsd.harvard.edu\/real\/files\/2024\/11\/kimkenny_402739_15065702_iibrahim_kkim_jleo_2314_as_hs_presentation_5_print-cropped-scaled-4.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1\" \/>\n\t<meta property=\"og:image:height\" content=\"1\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Isa He\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Isa He\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/research.gsd.harvard.edu\/real\/2022\/06\/06\/eeg-memories-vision\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/research.gsd.harvard.edu\/real\/2022\/06\/06\/eeg-memories-vision\/\"},\"author\":{\"name\":\"Isa He\",\"@id\":\"https:\/\/research.gsd.harvard.edu\/real\/#\/schema\/person\/37f0fe6c9958851bba83122b62d712f1\"},\"headline\":\"EEG: Memories + Vision\",\"datePublished\":\"2022-06-06T16:54:46+00:00\",\"dateModified\":\"2025-04-02T17:45:01+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/research.gsd.harvard.edu\/real\/2022\/06\/06\/eeg-memories-vision\/\"},\"wordCount\":419,\"image\":{\"@id\":\"https:\/\/research.gsd.harvard.edu\/real\/2022\/06\/06\/eeg-memories-vision\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/research.gsd.harvard.edu\/real\/files\/2024\/11\/kimkenny_402739_15065702_iibrahim_kkim_jleo_2314_as_hs_presentation_5_print-cropped-scaled-4.jpg\",\"articleSection\":[\"Portfolio\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/research.gsd.harvard.edu\/real\/2022\/06\/06\/eeg-memories-vision\/\",\"url\":\"https:\/\/research.gsd.harvard.edu\/real\/2022\/06\/06\/eeg-memories-vision\/\",\"name\":\"EEG: Memories + Vision - 
REAL\",\"isPartOf\":{\"@id\":\"https:\/\/research.gsd.harvard.edu\/real\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/research.gsd.harvard.edu\/real\/2022\/06\/06\/eeg-memories-vision\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/research.gsd.harvard.edu\/real\/2022\/06\/06\/eeg-memories-vision\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/research.gsd.harvard.edu\/real\/files\/2024\/11\/kimkenny_402739_15065702_iibrahim_kkim_jleo_2314_as_hs_presentation_5_print-cropped-scaled-4.jpg\",\"datePublished\":\"2022-06-06T16:54:46+00:00\",\"dateModified\":\"2025-04-02T17:45:01+00:00\",\"author\":{\"@id\":\"https:\/\/research.gsd.harvard.edu\/real\/#\/schema\/person\/37f0fe6c9958851bba83122b62d712f1\"},\"breadcrumb\":{\"@id\":\"https:\/\/research.gsd.harvard.edu\/real\/2022\/06\/06\/eeg-memories-vision\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/research.gsd.harvard.edu\/real\/2022\/06\/06\/eeg-memories-vision\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/research.gsd.harvard.edu\/real\/2022\/06\/06\/eeg-memories-vision\/#primaryimage\",\"url\":\"https:\/\/research.gsd.harvard.edu\/real\/files\/2024\/11\/kimkenny_402739_15065702_iibrahim_kkim_jleo_2314_as_hs_presentation_5_print-cropped-scaled-4.jpg\",\"contentUrl\":\"https:\/\/research.gsd.harvard.edu\/real\/files\/2024\/11\/kimkenny_402739_15065702_iibrahim_kkim_jleo_2314_as_hs_presentation_5_print-cropped-scaled-4.jpg\",\"caption\":\"EEG Memory Prototype\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/research.gsd.harvard.edu\/real\/2022\/06\/06\/eeg-memories-vision\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/research.gsd.harvard.edu\/real\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"EEG: Memories + 
Vision\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/research.gsd.harvard.edu\/real\/#website\",\"url\":\"https:\/\/research.gsd.harvard.edu\/real\/\",\"name\":\"REAL\",\"description\":\"Responsive Environments and Artifacts Lab\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/research.gsd.harvard.edu\/real\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/research.gsd.harvard.edu\/real\/#\/schema\/person\/37f0fe6c9958851bba83122b62d712f1\",\"name\":\"Isa He\",\"sameAs\":[\"https:\/\/scholar.harvard.edu\/isahe\"],\"url\":\"https:\/\/research.gsd.harvard.edu\/real\/author\/isathe\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"EEG: Memories + Vision - REAL","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/research.gsd.harvard.edu\/real\/2022\/06\/06\/eeg-memories-vision\/","og_locale":"en_US","og_type":"article","og_title":"EEG: Memories + Vision - REAL","og_description":"EEG: Memories + Vision Common Realm: Real-time Machine Learning Brain (EEG) Signal-to-Image Model to Render Memories Over Vision GSD VIS-2314: [&hellip;]","og_url":"https:\/\/research.gsd.harvard.edu\/real\/2022\/06\/06\/eeg-memories-vision\/","og_site_name":"REAL","article_published_time":"2022-06-06T16:54:46+00:00","article_modified_time":"2025-04-02T17:45:01+00:00","og_image":[{"url":"https:\/\/research.gsd.harvard.edu\/real\/files\/2024\/11\/kimkenny_402739_15065702_iibrahim_kkim_jleo_2314_as_hs_presentation_5_print-cropped-scaled-4.jpg","width":1,"height":1,"type":"image\/jpeg"}],"author":"Isa He","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Isa He","Est. 
reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/research.gsd.harvard.edu\/real\/2022\/06\/06\/eeg-memories-vision\/#article","isPartOf":{"@id":"https:\/\/research.gsd.harvard.edu\/real\/2022\/06\/06\/eeg-memories-vision\/"},"author":{"name":"Isa He","@id":"https:\/\/research.gsd.harvard.edu\/real\/#\/schema\/person\/37f0fe6c9958851bba83122b62d712f1"},"headline":"EEG: Memories + Vision","datePublished":"2022-06-06T16:54:46+00:00","dateModified":"2025-04-02T17:45:01+00:00","mainEntityOfPage":{"@id":"https:\/\/research.gsd.harvard.edu\/real\/2022\/06\/06\/eeg-memories-vision\/"},"wordCount":419,"image":{"@id":"https:\/\/research.gsd.harvard.edu\/real\/2022\/06\/06\/eeg-memories-vision\/#primaryimage"},"thumbnailUrl":"https:\/\/research.gsd.harvard.edu\/real\/files\/2024\/11\/kimkenny_402739_15065702_iibrahim_kkim_jleo_2314_as_hs_presentation_5_print-cropped-scaled-4.jpg","articleSection":["Portfolio"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/research.gsd.harvard.edu\/real\/2022\/06\/06\/eeg-memories-vision\/","url":"https:\/\/research.gsd.harvard.edu\/real\/2022\/06\/06\/eeg-memories-vision\/","name":"EEG: Memories + Vision - 
REAL","isPartOf":{"@id":"https:\/\/research.gsd.harvard.edu\/real\/#website"},"primaryImageOfPage":{"@id":"https:\/\/research.gsd.harvard.edu\/real\/2022\/06\/06\/eeg-memories-vision\/#primaryimage"},"image":{"@id":"https:\/\/research.gsd.harvard.edu\/real\/2022\/06\/06\/eeg-memories-vision\/#primaryimage"},"thumbnailUrl":"https:\/\/research.gsd.harvard.edu\/real\/files\/2024\/11\/kimkenny_402739_15065702_iibrahim_kkim_jleo_2314_as_hs_presentation_5_print-cropped-scaled-4.jpg","datePublished":"2022-06-06T16:54:46+00:00","dateModified":"2025-04-02T17:45:01+00:00","author":{"@id":"https:\/\/research.gsd.harvard.edu\/real\/#\/schema\/person\/37f0fe6c9958851bba83122b62d712f1"},"breadcrumb":{"@id":"https:\/\/research.gsd.harvard.edu\/real\/2022\/06\/06\/eeg-memories-vision\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/research.gsd.harvard.edu\/real\/2022\/06\/06\/eeg-memories-vision\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/research.gsd.harvard.edu\/real\/2022\/06\/06\/eeg-memories-vision\/#primaryimage","url":"https:\/\/research.gsd.harvard.edu\/real\/files\/2024\/11\/kimkenny_402739_15065702_iibrahim_kkim_jleo_2314_as_hs_presentation_5_print-cropped-scaled-4.jpg","contentUrl":"https:\/\/research.gsd.harvard.edu\/real\/files\/2024\/11\/kimkenny_402739_15065702_iibrahim_kkim_jleo_2314_as_hs_presentation_5_print-cropped-scaled-4.jpg","caption":"EEG Memory Prototype"},{"@type":"BreadcrumbList","@id":"https:\/\/research.gsd.harvard.edu\/real\/2022\/06\/06\/eeg-memories-vision\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/research.gsd.harvard.edu\/real\/"},{"@type":"ListItem","position":2,"name":"EEG: Memories + Vision"}]},{"@type":"WebSite","@id":"https:\/\/research.gsd.harvard.edu\/real\/#website","url":"https:\/\/research.gsd.harvard.edu\/real\/","name":"REAL","description":"Responsive Environments and Artifacts 
Lab","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/research.gsd.harvard.edu\/real\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/research.gsd.harvard.edu\/real\/#\/schema\/person\/37f0fe6c9958851bba83122b62d712f1","name":"Isa He","sameAs":["https:\/\/scholar.harvard.edu\/isahe"],"url":"https:\/\/research.gsd.harvard.edu\/real\/author\/isathe\/"}]}},"jetpack_featured_media_url":"https:\/\/research.gsd.harvard.edu\/real\/files\/2024\/11\/kimkenny_402739_15065702_iibrahim_kkim_jleo_2314_as_hs_presentation_5_print-cropped-scaled-4.jpg","_links":{"self":[{"href":"https:\/\/research.gsd.harvard.edu\/real\/wp-json\/wp\/v2\/posts\/1379","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/research.gsd.harvard.edu\/real\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/research.gsd.harvard.edu\/real\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/research.gsd.harvard.edu\/real\/wp-json\/wp\/v2\/users\/194"}],"replies":[{"embeddable":true,"href":"https:\/\/research.gsd.harvard.edu\/real\/wp-json\/wp\/v2\/comments?post=1379"}],"version-history":[{"count":0,"href":"https:\/\/research.gsd.harvard.edu\/real\/wp-json\/wp\/v2\/posts\/1379\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/research.gsd.harvard.edu\/real\/wp-json\/wp\/v2\/media\/1378"}],"wp:attachment":[{"href":"https:\/\/research.gsd.harvard.edu\/real\/wp-json\/wp\/v2\/media?parent=1379"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/research.gsd.harvard.edu\/real\/wp-json\/wp\/v2\/categories?post=1379"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/research.gsd.harvard.edu\/real\/wp-json\/wp\/v2\/tags?post=1379"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}