With today’s science, it seems nothing is impossible: a group of scientists in Kyoto has built something of a dream-reading machine, which predicts what people visualize while they sleep.

How it works:

The scientists used an MRI (Magnetic Resonance Imaging) machine, a computer model, and thousands of images from the internet to find out what people see as they dream.

Advances in machine learning have made it possible to decode stimulus- and task-induced brain activity patterns and reveal their visual content. The researchers extended this machine-learning approach to decoding spontaneous brain activity during sleep.

Although dreaming is most strongly associated with Rapid Eye Movement (REM) sleep, dream-like visual experiences are also reported during non-REM sleep. The researchers therefore focused on visual imagery around sleep onset, because it allowed them to collect many reports.

They analyzed the verbal reports using a lexical database to create systematic labels for the visual contents.

Three people participated in a functional Magnetic Resonance Imaging (fMRI) sleep experiment in which they were woken whenever a characteristic electroencephalogram (EEG) signature was detected, and were asked to give a verbal report about the visual experience they had just before being woken. This process was repeated at least 200 times for each participant.

Visual content labeling:

From the collected reports, words describing visual objects or scenes were manually extracted and mapped to WordNet, a lexical database in which words are grouped into “synsets” arranged in a hierarchical structure. A synset is a group of terms that are considered semantically equivalent for the purposes of information retrieval.
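A minimal sketch of how report words can be collapsed into broader base categories via a hierarchy. The hand-made mapping and the base labels below are hypothetical stand-ins for WordNet’s actual hypernym hierarchy:

```python
# Toy stand-in for WordNet's hypernym hierarchy: each word points to a
# broader term. Real synset mapping uses the WordNet database itself.

HYPERNYM = {
    "sedan": "car", "taxi": "car", "car": "vehicle", "vehicle": "object",
    "woman": "person", "man": "person", "person": "object",
}

def base_synset(word, bases={"car", "person"}):
    """Walk up the toy hierarchy until a designated base synset is hit."""
    while word not in bases:
        if word not in HYPERNYM:
            return None  # word not covered by the toy hierarchy
        word = HYPERNYM[word]
    return word

print(base_synset("taxi"))   # "car"
print(base_synset("woman"))  # "person"
```

Grouping many specific words under a smaller set of base synsets is what makes it feasible to collect enough examples of each label for decoder training.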

WordNet itself has been used for a number of purposes in information systems, including word-sense disambiguation, automatic text classification, automatic text summarization, machine translation, and so on.

The fMRI data obtained before each awakening were then labeled with a visual content vector, each element of which indicated the presence or absence of a base synset in the corresponding report. From these labeled data, the researchers constructed decoders for the base synsets.
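The labeling step above can be sketched as follows. The synset names are illustrative, not the study’s actual label set:

```python
# Sketch of turning a verbal report into a binary visual-content vector:
# one element per base synset, 1 if the report mentions it, else 0.

BASE_SYNSETS = ["car", "person", "building", "food"]

def content_vector(report_synsets):
    """Binary vector over BASE_SYNSETS for one awakening report."""
    present = set(report_synsets)
    return [1 if s in present else 0 for s in BASE_SYNSETS]

print(content_vector(["person", "building"]))  # [0, 1, 1, 0]
```

Each such vector serves as the training target for the decoders: one binary classifier per base synset, trained to predict that element from the preceding brain activity.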

Multi-label decoding:

To look into what brain activity is shared between perception and the period around sleep onset, the scientists examined the decoder scores for each synset in each sleep experiment. High scores for unreported synsets may therefore indicate visual content that was actually experienced during sleep but not mentioned in the report.

They also performed the same analysis with extended visual content vectors, in which synsets that co-occur frequently with the reported synsets were also marked as present. The extended visual content was identified better, suggesting that the multi-label decoding outputs may represent both reported and unreported content.
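The extension idea can be sketched as follows. The co-occurrence counts and threshold below are made up for illustration; the study derived co-occurrence from the participants’ own reports:

```python
# Sketch of "extended" content vectors: synsets that frequently co-occur
# with a reported synset are also marked present. Counts are invented.

COOCCURRENCE = {  # how often pairs of synsets appeared together in reports
    ("car", "street"): 12, ("person", "street"): 9, ("person", "food"): 2,
}

def extend_labels(reported, threshold=5):
    """Add synsets whose co-occurrence with a reported synset is high."""
    extended = set(reported)
    for (a, b), count in COOCCURRENCE.items():
        if count >= threshold:
            if a in reported:
                extended.add(b)
            if b in reported:
                extended.add(a)
    return extended

print(sorted(extend_labels({"car"})))  # ['car', 'street']
```

This captures the intuition that a report of a car makes an unmentioned street plausible dream content as well.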

The researchers also admit that their technology is not perfect: its accuracy is about 60 percent. But they expect the technology to get even better in the future.

Professor Yukiyasu Kamitani of the ATR Computational Neuroscience Laboratories in Kyoto said, “I had a strong belief that decoding dreams should be possible at least for particular aspects of dreaming… I was not surprised by the results, but excited.”
