I really like this. Building on previous work, Shinji Nishimoto and a team at UC Berkeley have come up with a neat way to reconstruct visual input from fMRI signals in visual cortex. Their model uses a bank of simple spatiotemporal filters that behave like visual neurons. First, they derive parameters for these filters from the BOLD signals elicited by a set of training videos. Then, they measure the BOLD responses to a completely different set of videos and use a Bayesian approach to estimate what the videos that produced those responses looked like. The results are striking, and they have some interesting potential applications. From the group’s website:
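The two-stage idea — fit an encoding model from training data, then decode a new response by scoring candidate clips under that model — can be sketched in a toy form. Everything below is illustrative: the dimensions, the ridge-regression encoding model, the Gaussian-noise likelihood, and the random "prior" candidates are my stand-ins, not the paper's actual filters or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, not from the paper).
n_features = 20    # stands in for the bank of spatiotemporal filter outputs
n_voxels = 50
n_train = 200
n_test = 5
n_prior = 1000     # candidate clips sampled from a "natural movie" prior

# --- Stage 1: fit an encoding model (here, simple ridge regression) ---
# A synthetic "true" mapping from filter outputs to BOLD, plus noise.
W_true = rng.normal(size=(n_features, n_voxels))
X_train = rng.normal(size=(n_train, n_features))
Y_train = X_train @ W_true + 0.5 * rng.normal(size=(n_train, n_voxels))

lam = 1.0
W_hat = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_features),
                        X_train.T @ Y_train)

# --- Stage 2: decode BOLD responses to novel videos ---
X_test = rng.normal(size=(n_test, n_features))
Y_test = X_test @ W_true + 0.5 * rng.normal(size=(n_test, n_voxels))

# Candidate clips drawn from the prior; the true test clips are planted
# at the end so this demo has a correct answer to find.
X_prior = np.vstack([rng.normal(size=(n_prior - n_test, n_features)), X_test])

# Under Gaussian noise, each candidate's log-likelihood is (up to a
# constant) the negative squared error between predicted and observed BOLD.
pred = X_prior @ W_hat                       # (n_prior, n_voxels)
decoded = []
for y in Y_test:
    scores = -((pred - y) ** 2).sum(axis=1)
    decoded.append(int(np.argmax(scores)))

print(decoded)   # indices of the best-matching candidate clips
```

In this toy setup the decoder recovers the planted clips because their predicted responses match the observed BOLD far better than random candidates do; the paper's actual reconstruction averages the top-scoring clips from a very large natural-movie prior rather than picking a single winner.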
Neuroscientists generally assume that all mental processes have a concrete neurobiological basis. Under this assumption, as long as we have good measurements of brain activity and good computational models of the brain, it should be possible in principle to decode the visual content of mental processes like dreams, memory, and imagery… It is currently unknown whether processes like dreaming and imagination are realized in the brain in a way that is functionally similar to perception. If they are, then it should be possible to use the techniques developed in this paper to decode brain activity during dreaming or imagination.
The study, “Reconstructing visual experiences from brain activity evoked by natural movies,” is published in Current Biology (subscription required).