Fellows from across the Fellowship community have recently published work carried out before, during, and beyond their Fellowship Research Placements in journals including Science, Nature Physics, and other leading titles. Read on below to learn about their science and achievements.
Controlling light tornadoes
2019 Fellow Grisha Spektor has first-authored work published in Nano Letters and Science Advances
2019 Fellow Grisha Spektor and colleagues from Kaiserslautern University have published work in Nano Letters and Science Advances presenting spatial and temporal control of surface-confined light tornadoes.
Grisha, now a postdoctoral fellow at NIST, said: “One of the most interesting properties of light is its ability to carry orbital angular momentum (OAM) in the form of optical vortices. In the last decade, extensive research effort has been dedicated to studying this intriguing phenomenon, touching many fields ranging from optical tweezers for biological and chemical purposes, through quantum cryptography and communications, to astronomy.”
In the published work, Grisha and his colleagues introduced reflection from structural boundaries as a new degree of freedom for generating and controlling these surface-confined vortices in time. They designed vortex cavities with chiral boundaries in which each subsequent reflection increases the vortex OAM by a multiple of the chiral cavity order. In parallel, by combining local and global metasurface geometries, they showed how surface-confined OAM can be switched externally by an arbitrary amount using the illumination polarization, and demonstrated the generation of compound vortex fields.
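Schematically, and only as a rough paraphrase of the mechanism described above rather than a formula quoted from the papers: if a vortex carrying orbital angular momentum ℓ(0) is launched into a chiral cavity of order m, then after n reflections from the boundary its OAM has stepped up to roughly ℓ(n) = ℓ(0) + n·m, so the cavity acts as a staircase that raises the vortex charge over time.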
These results generalize external control over surface-confined OAM and provide a wide toolkit for on-demand, tailored angular momentum generation and delivery. Beyond their fundamental interest, they could become pivotal in vortex-based applications such as quantum initialization schemes and plasmonic optical tweezers.
Read more in Grisha’s papers on Temporal control and Spatial control.
Distinct brain areas found to be responsible for recognizing familiar faces
2020 Fellow Sofia Landi has recently published work in Science, alongside colleagues at the Rockefeller University, showing that distinct brain areas are responsible for recognizing the faces of individuals we personally know.
During her PhD thesis research, Sofia performed whole-brain imaging in macaques and identified a small region in the brain’s temporal pole specifically involved in recognizing familiar faces. She further characterized this area with targeted electrophysiological recordings. Sofia, who is the first author of the paper, found that neurons in the temporal pole were highly selective, responding specifically to familiar faces that the subjects had seen before. These neurons responded only to individuals familiar from real life, not to familiar faces seen only on a screen. She is continuing her research during her Placement at the University of Washington, exploring the connections between how our brains navigate the world and episodic memory.
Read more in the paper here.
New technology for modifying bacterial genomes
2018 Fellow Fahim Farzadfard and colleagues at MIT published a paper in Cell Systems describing a new technology for modifying bacterial genomes. With this new DNA writing system, one can precisely and efficiently edit bacterial genomes within complex bacterial ecosystems, without the need for any form of selection. “Leveraging this technology, we used DNA as a medium to record spatial information about the interactions of bacterial cells within complex communities”, Fahim says. “Building an analogous DNA recording technology in the neurons could help to map the connectome of the brain one day”. The researchers also showed that they could use this technique to engineer a synthetic ecosystem of bacteria and bacteriophages that continuously rewrite certain segments of their genomes and evolve autonomously at a rate higher than would be possible through natural evolution. Fahim commented: “This approach could be used for evolutionary engineering of cellular traits, or in experimental evolution studies, by allowing you to replay the tape of evolution over and over.”
Read more about Fahim’s work and its potential future impact in MIT News, and read the paper here.
Understanding why we find multitasking hard
2021 Fellow Sebastian Musslick has published work from his PhD in Trends in Cognitive Sciences and Nature Physics.
2021 Fellow Sebastian Musslick has published work from his PhD at Princeton University in Trends in Cognitive Sciences suggesting that the human brain may trade the benefits that shared representations bring for rapid learning and generalization, a mechanism increasingly exploited in machine learning, against constraints on multitasking performance.
Sebastian drew on recent insights from neuroscience, psychology, and machine learning to provide a unified account of constraints on human attention. His work builds on fundamental computational dilemmas in neural architectures and suggests that our limited capacity to multitask arises from the sharing of representations between tasks. Furthermore, using a combination of mathematical analysis as well as computational and behavioral simulation, Sebastian and his colleagues demonstrated that it can be beneficial for a neural system to sacrifice multitasking capability if shared representations can instead be exploited for learning.
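To make the intuition concrete, here is a deliberately cartoonish sketch (not the model from the paper, and with made-up feature names): two tasks, each mapping one stimulus feature to one response, are implemented either through a shared hidden unit or through separate ones. Each task is performed perfectly on its own; crosstalk appears only when both tasks are cued at once through the shared unit.

```python
def toy_network(a, b, cue_A, cue_B, shared):
    """Toy linear network with two stimulus features (a, b) and two responses.
    Task A should report feature a; task B should report feature b. The task
    cues (0 or 1) gate which features reach the hidden layer and which outputs
    are read out."""
    if shared:
        h = cue_A * a + cue_B * b           # one hidden unit serves both tasks
        return cue_A * h, cue_B * h
    h_A, h_B = cue_A * a, cue_B * b         # each task owns its own hidden unit
    return h_A, h_B

a, b = 0.7, -0.4                            # stimulus features; targets are a and b
for shared in (True, False):
    alone = toy_network(a, b, cue_A=1, cue_B=0, shared=shared)
    both = toy_network(a, b, cue_A=1, cue_B=1, shared=shared)
    print(f"shared={str(shared):<5}  task A alone -> {alone[0]:+.2f}   "
          f"both tasks -> A={both[0]:+.2f}, B={both[1]:+.2f}")
```

With separate hidden units, the simultaneous condition reproduces both targets; with a shared unit, the two responses collapse onto the same blended value, which is the kind of performance cost the work attributes to representation sharing.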
Read more in the paper here and listen to a summary of Sebastian’s work in a podcast episode of WaterCooler Neuroscience.
Predicting multitasking capacity from neural activity
In his second recent paper, published in Nature Physics, Sebastian and colleagues suggest that limitations in multitasking arise from the sharing of representations between tasks. If that is the case, why don’t we have bigger brains with more space for task representations? To answer this question, Sebastian and his colleagues at Princeton University, the ISI Foundation, and Intel Labs performed a series of graph-theoretic analyses to study multitasking in large neural networks.
Their calculations in Nature Physics suggest that increases in the size of a network lead to strongly diminishing returns in multitasking capacity: as long as there is some representation sharing in the brain, having a bigger brain does not help. Moreover, they were able to apply the same analyses to predict the multitasking capacity of deep neural networks trained to perform multiple tasks, solely by measuring the amount of overlap between single-task representations. These theoretical results have profound consequences for the design of artificial agents that must perform many tasks simultaneously.
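One way to picture this kind of prediction, as an illustrative reading rather than the exact analysis in the paper: treat tasks as nodes, connect two tasks whenever their single-task representations overlap strongly, and take the largest set of pairwise non-overlapping tasks as the multitasking capacity. The task names and activation patterns below are made up.

```python
from itertools import combinations
import numpy as np

def multitasking_capacity(reps, threshold=0.5):
    """Largest set of tasks whose single-task hidden representations are pairwise
    'non-overlapping' (cosine similarity below threshold), found by brute force."""
    tasks = list(reps)
    def overlap(t1, t2):
        v1, v2 = reps[t1], reps[t2]
        return abs(v1 @ v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    for size in range(len(tasks), 0, -1):
        for subset in combinations(tasks, size):
            if all(overlap(t1, t2) < threshold for t1, t2 in combinations(subset, 2)):
                return size, subset
    return 0, ()

# Hypothetical single-task activation patterns over four hidden units.
reps = {
    "colour naming":  np.array([1.0, 0.9, 0.0, 0.0]),
    "word reading":   np.array([0.9, 1.0, 0.1, 0.0]),   # heavily overlaps colour naming
    "tone judgement": np.array([0.0, 0.1, 1.0, 0.0]),
    "key press":      np.array([0.0, 0.0, 0.1, 1.0]),
}
size, subset = multitasking_capacity(reps)
print(f"Predicted capacity: {size} concurrent tasks, e.g. {subset}")
```

On this toy reading, adding hidden units or tasks yields diminishing returns whenever the new representations still overlap with the existing ones, which is the intuition behind the network-size result above.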
Read more about Sebastian’s findings here.
Exploring the geometries that determine the fate of high-dimensional systems
2020 Fellow Yuanzhao Zhang has recently published a paper in Physical Review Letters with his Fellowship Principal Investigator at Cornell University, Dr. Steven Strogatz. The paper explores basins of attraction in high-dimensional state spaces, which are fundamental to the analysis of many complex systems, such as power grids, neural networks, and spin glasses, in which multiple attractors compete for the available state space.
In the paper, Yuanzhao shows that the geometry of these basins is often octopus-like, with long tentacles that meander far and wide through state space, and that nearly all of the basin volume is concentrated in the tentacles. This finding has important implications for how basin sizes should be measured. In particular, measurements based on local perturbations, and approximations based on simple convex shapes such as hypercubes, can vastly underestimate the basin size.
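As a minimal illustration of the measurement point, the sketch below estimates the size of the sync basin for a ring of identical, locally coupled Kuramoto oscillators, a standard example of a high-dimensional multistable system; the parameters, sample counts, and the crude hypercube comparison are arbitrary choices for illustration, not the procedure from the paper. Sampling initial conditions uniformly over the whole state space probes the tentacles, whereas a convex neighbourhood around the attractor captures only a vanishing sliver of volume.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, DT, STEPS = 20, 1.0, 0.05, 4000       # illustrative parameters, chosen arbitrarily

def relax(theta):
    """Crude forward-Euler relaxation of identical Kuramoto oscillators
    with nearest-neighbour coupling on a ring."""
    for _ in range(STEPS):
        left, right = np.roll(theta, 1), np.roll(theta, -1)
        theta = theta + DT * K * (np.sin(left - theta) + np.sin(right - theta))
    return theta

def winding_number(theta):
    """Net number of full 2*pi twists of the phase pattern around the ring."""
    d = np.diff(np.concatenate([theta, theta[:1]]))
    d = (d + np.pi) % (2 * np.pi) - np.pi   # wrap phase differences to (-pi, pi]
    return int(round(d.sum() / (2 * np.pi)))

# 1) Global estimate: fraction of uniformly random states that relax to sync (q = 0).
samples = 100
hits = sum(winding_number(relax(rng.uniform(0, 2 * np.pi, N))) == 0
           for _ in range(samples))
print(f"Monte Carlo estimate of the sync basin's share of state space: {hits / samples:.2f}")

# 2) 'Local' estimate: largest tested hypercube of perturbations around sync that always
#    returns to sync, converted to a volume fraction (a crude convex approximation).
sync = np.zeros(N)
for eps in (1.5, 1.0, 0.5, 0.25):
    if all(winding_number(relax(sync + rng.uniform(-eps, eps, N))) == 0
           for _ in range(20)):
        print(f"hypercube half-width {eps}: volume fraction ~ {(eps / np.pi) ** N:.1e}")
        break
```

In this toy example, the uniform-sampling estimate comes out many orders of magnitude larger than the hypercube fraction, illustrating how convex, perturbation-based estimates can badly undercount the volume held in the tentacles.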
Read the paper here.