Brain cells, or neurons, constantly tinker with their circuit connections, a crucial feature that allows the brain to store and process information. While neurons frequently test out new potential partners through transient contacts, only a fraction of fledgling junctions, called synapses, are selected to become permanent.
The major criterion for selecting an excitatory synapse is how well it engages in response to experience-driven neural activity, but how such selection is implemented at the molecular level has been unclear. In a new study, MIT neuroscientists have identified a gene, CPG15, whose protein product allows experience to tap a synapse as a keeper.
In a series of novel experiments described in Cell Reports, the team at MIT’s Picower Institute for Learning and Memory used multi-spectral, high-resolution two-photon microscopy to literally watch potential synapses come and go in the visual cortex of mice — both in the light, or normal visual experience, and in the darkness, where there is no visual input. By comparing observations made in normal mice and ones engineered to lack CPG15, they were able to show that the protein is required in order for visual experience to facilitate the transition of nascent excitatory synapses to permanence.
Mice engineered to lack CPG15 only exhibit one behavioral deficiency: They learn much more slowly than normal mice, says senior author Elly Nedivi, the William R. (1964) and Linda R. Young Professor of Neuroscience in the Picower Institute and a professor of brain and cognitive sciences at MIT. They need more trials and repetitions to learn associations that other mice can learn quickly. The new study suggests that’s because without CPG15, they must rely on circuits where synapses simply happened to take hold, rather than on a circuit architecture that has been refined by experience for optimal efficiency.
“Learning and memory are really specific manifestations of our brain’s ability in general to constantly adapt and change in response to our environment,” Nedivi says. “It’s not that the circuits aren’t there in mice lacking CPG15, they just don’t have that feature — which is really important — of being optimized through use.”
Watching in light and darkness
The first experiment reported in the paper, led by former MIT postdoc Jaichandar Subramanian, who is now an assistant professor at the University of Kansas, is a contribution to neuroscience in and of itself, Nedivi says. The novel labeling and imaging technologies implemented in the study, she says, made it possible to track key events in synapse formation with unprecedented spatial and temporal resolution. The study resolved the emergence of “dendritic spines,” which are the structural protrusions on which excitatory synapses are formed, and the recruitment of the synaptic scaffold, PSD95, that signals that a synapse is there to stay.
The team tracked specially labeled neurons in the visual cortex of mice after normal visual experience, and after two weeks in darkness. To their surprise, they saw that spines would routinely arise and then typically disappear again at the same rate regardless of whether the mice were in light or darkness. This careful scrutiny of spines confirmed that experience doesn’t matter for spine formation, Nedivi says. That upends a common assumption in the field, which held that experience was necessary for spines to even emerge.
By keeping track of the presence of PSD95 they could confirm that the synapses that became stabilized during normal visual experience were the ones that had accumulated that protein. But the question remained: How does experience drive PSD95 to the synapse? The team hypothesized that CPG15, which is activity dependent and associated with synapse stabilization, does that job.
CPG15 represents experience
To investigate that, they repeated the same light-versus-dark experiments, but this time in mice engineered to lack CPG15. In the normal mice, there was much more PSD95 recruitment during the light phase than during the dark, but in the mice without CPG15, the experience of seeing in the light never made a difference. Mice lacking CPG15 in the light were, in effect, like normal mice in the dark.
Later they tried another experiment testing whether the low PSD95 recruitment seen when normal mice were in the dark could be rescued by exogenous expression of CPG15. Indeed, PSD95 recruitment shot up, as if the animals were exposed to visual experience. This showed that CPG15 not only carries the message of experience in the light, it can actually substitute for it in the dark, essentially “tricking” PSD95 into acting as if experience had called upon it.
“This is a very exciting result, because it shows that CPG15 is not just required for experience-dependent synapse selection, but it’s also sufficient,” says Nedivi. “That’s unique in relation to all other molecules that are involved in synaptic plasticity.”
A new model and method
In all, the paper’s data allowed Nedivi to propose a new model of experience-dependent synapse stabilization: Regardless of neural activity or experience, spines emerge with fledgling excitatory synapses and the receptors needed for further development. If activity and experience send CPG15 their way, that draws in PSD95 and the synapse stabilizes. If experience doesn’t involve the synapse, it gets no CPG15, very likely no PSD95, and the spine withers away.
The paper potentially has significance beyond the findings about experience-dependent synapse stabilization, Nedivi says. The method it describes of closely monitoring the growth or withering of spines and synapses amid a manipulation (like knocking out or modifying a gene) allows for a whole raft of studies examining how a gene, a drug, or other factors affect synapses.
“You can apply this to any disease model and use this very sensitive tool for seeing what might be wrong at the synapse,” she says.
In addition to Nedivi and Subramanian, the paper’s other authors are Katrin Michel and Marc Benoit.
The National Institutes of Health and the JPB Foundation provided support for the research.
Daniel Z. Freedman, professor emeritus in MIT’s departments of Mathematics and Physics, has been awarded the Special Breakthrough Prize in Fundamental Physics. He shares the $3 million prize with two colleagues, Sergio Ferrara of CERN and Peter van Nieuwenhuizen of Stony Brook University, with whom he developed the theory of supergravity.
The trio is honored for work that combines the principles of supersymmetry, which postulates that all fundamental particles have corresponding, unseen “partner” particles, with Einstein's theory of general relativity, which explains that gravity is the result of the curvature of space-time.
When the theory of supersymmetry was developed in 1973, it solved some key problems in particle physics, such as unifying three forces of nature (electromagnetism, the weak nuclear force, and the strong nuclear force), but it left out a fourth force: gravity. Freedman, Ferrara, and van Nieuwenhuizen addressed this in 1976 with their theory of supergravity, in which the gravitons of general relativity acquire superpartners called gravitinos.
Freedman’s collaboration with Ferrara and van Nieuwenhuizen began late in 1975 at the École Normale Supérieure (ENS) in Paris, where he was visiting on a minisabbatical from his professorship at Stony Brook. Ferrara had also come to ENS, to work on a different project for a week. The challenge of constructing supergravity was in the air at that time, and Freedman told Ferrara that he was thinking about it. In their discussions, Ferrara suggested that progress could be made via an approach that Freedman had previously used in a related problem involving supersymmetric gauge theories.
“That turned me in the right direction,” Freedman recalls. In short order, he formulated the first step in the construction of supergravity and proved its mathematical consistency. “I returned to Stony Brook convinced that I could quickly find the rest of the theory,” he says. However, “I soon realized that it was harder than I had expected.”
At that point he asked van Nieuwenhuizen to join him on the project. “We worked very hard for several months until the theory came together. That was when our eureka moment occurred,” he says.
“Dan’s work on supergravity has changed how scientists think about physics beyond the standard model, combining principles of supersymmetry and Einstein’s theory of general relativity,” says Michael Sipser, dean of the MIT School of Science and the Donner Professor of Mathematics. “His exemplary research is central to mathematical physics and has given us new pathways to explore in quantum field theory and superstring theory. On behalf of the School of Science, I congratulate Dan and his collaborators for this prestigious award.”
Freedman joined the MIT faculty in 1980, first as professor of applied mathematics and later with a joint appointment in the Center for Theoretical Physics. He regularly taught an advanced graduate course on supersymmetry and supergravity. An unusual feature of the course was that each assigned problem set included suggestions of classical music to accompany students’ work.
“I treasure my 36 years at MIT,” he says, noting that he worked with “outstanding” graduate students with “great resourcefulness as problem solvers.” Freedman fully retired from MIT in 2016.
He is now a visiting professor at Stanford University and lives in Palo Alto, California, with his wife, Miriam, an attorney specializing in public education law.
The son of small-business people, Freedman was the first in his family to attend college. He became interested in physics during his first year at Wesleyan University, when he enrolled in a special class that taught physics in parallel with the calculus necessary to understand its mathematical laws. It was a pivotal experience. “Learning that the laws of physics can exactly describe phenomena in nature — that totally turned me on,” he says.
Freedman learned about winning the Breakthrough Prize upon returning from a morning boxing class, when his wife told him that a Stanford colleague, who was on the Selection Committee, had been trying to reach him. “When I returned the call, I was overwhelmed with the news,” he says.
Freedman, who holds a BA from Wesleyan and an MS and PhD in physics from the University of Wisconsin, is a former Sloan Fellow and a two-time Guggenheim Fellow. The three collaborators received the Dirac Medal and Prize in 1993, and the Dannie Heineman Prize in Mathematical Physics in 2006. He is a fellow of the American Academy of Arts and Sciences.
Founded by a group of Silicon Valley entrepreneurs, the Breakthrough Prizes recognize the world’s top scientists in life sciences, fundamental physics, and mathematics. The Special Breakthrough Prize in Fundamental Physics honors profound contributions to human knowledge in physics. Earlier honorees include Jocelyn Bell Burnell; the LIGO research team, including MIT Professor Emeritus Rainer Weiss; and Stephen Hawking.
MIT computer scientists are hoping to accelerate the use of artificial intelligence to improve medical decision-making, by automating a key step that’s usually done by hand — and that’s becoming more laborious as certain datasets grow ever-larger.
The field of predictive analytics holds increasing promise for helping clinicians diagnose and treat patients. Machine-learning models can be trained to find patterns in patient data to aid in sepsis care, design safer chemotherapy regimens, and predict a patient’s risk of having breast cancer or dying in the ICU, to name just a few examples.
Typically, training datasets consist of many sick and healthy subjects, but with relatively little data for each subject. Experts must then find just those aspects — or “features” — in the datasets that will be important for making predictions.
This “feature engineering” can be a laborious and expensive process. But it’s becoming even more challenging with the rise of wearable sensors, because researchers can more easily monitor patients’ biometrics over long periods, tracking sleeping patterns, gait, and voice activity, for example. After only a week’s worth of monitoring, experts could have several billion data samples for each subject.
In a paper being presented at the Machine Learning for Healthcare conference this week, MIT researchers demonstrate a model that automatically learns features predictive of vocal cord disorders. The features come from a dataset of about 100 subjects, each with about a week’s worth of voice-monitoring data and several billion samples — in other words, a small number of subjects and a large amount of data per subject. The dataset contains signals captured from a small accelerometer sensor mounted on subjects’ necks.
In experiments, the model used features automatically extracted from these data to classify, with high accuracy, patients with and without vocal cord nodules. These are lesions that develop in the larynx, often because of patterns of voice misuse such as belting out songs or yelling. Importantly, the model accomplished this task without a large set of hand-labeled data.
“It’s becoming increasingly easy to collect long time-series datasets. But you have physicians that need to apply their knowledge to labeling the dataset,” says lead author Jose Javier Gonzalez Ortiz, a PhD student in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). “We want to remove that manual part for the experts and offload all feature engineering to a machine-learning model.”
The model can be adapted to learn patterns of any disease or condition. But the ability to detect the daily voice-usage patterns associated with vocal cord nodules is an important step in developing improved methods to prevent, diagnose, and treat the disorder, the researchers say. That could include designing new ways to identify and alert people to potentially damaging vocal behaviors.
Joining Gonzalez Ortiz on the paper is John Guttag, the Dugald C. Jackson Professor of Computer Science and Electrical Engineering and head of CSAIL’s Data Driven Inference Group; Robert Hillman, Jarrad Van Stan, and Daryush Mehta, all of Massachusetts General Hospital’s Center for Laryngeal Surgery and Voice Rehabilitation; and Marzyeh Ghassemi, an assistant professor of computer science and medicine at the University of Toronto.
For years, the MIT researchers have worked with the Center for Laryngeal Surgery and Voice Rehabilitation to develop and analyze data from a sensor to track subject voice usage during all waking hours. The sensor is an accelerometer with a node that sticks to the neck and is connected to a smartphone. As the person talks, the smartphone gathers data from the displacements in the accelerometer.
In their work, the researchers collected a week’s worth of this data — called “time-series” data — from 104 subjects, half of whom were diagnosed with vocal cord nodules. For each patient, there was also a matching control, meaning a healthy subject of similar age, sex, occupation, and other factors.
Traditionally, experts would need to manually identify features that may be useful for a model to detect various diseases or conditions. That helps prevent a common machine-learning problem in health care: overfitting. That’s when, in training, a model “memorizes” subject data instead of learning just the clinically relevant features. In testing, those models often fail to discern similar patterns in previously unseen subjects.
“Instead of learning features that are clinically significant, a model sees patterns and says, ‘This is Sarah, and I know Sarah is healthy, and this is Peter, who has a vocal cord nodule.’ So, it’s just memorizing patterns of subjects. Then, when it sees data from Andrew, who has a new vocal usage pattern, it can’t figure out if those patterns match a classification,” Gonzalez Ortiz says.
The main challenge, then, was preventing overfitting while automating manual feature engineering. To that end, the researchers forced the model to learn features without subject information. For their task, that meant capturing all moments when subjects speak and the intensity of their voices.
As their model crawls through a subject’s data, it’s programmed to locate voicing segments, which comprise only roughly 10 percent of the data. For each of these voicing windows, the model computes a spectrogram, a visual representation of the spectrum of frequencies varying over time, which is often used for speech processing tasks. The spectrograms are then stored as large matrices of thousands of values.
But those matrices are huge and difficult to process. So, an autoencoder — a neural network optimized to generate efficient data encodings from large amounts of data — first compresses the spectrogram into an encoding of 30 values. It then decompresses that encoding into a separate spectrogram.
Basically, the model must ensure that the decompressed spectrogram closely resembles the original spectrogram input. In doing so, it’s forced to learn the compressed representation of every spectrogram segment input over each subject’s entire time-series data. The compressed representations are the features that help train machine-learning models to make predictions.
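The pipeline described above — isolate voicing windows by energy, compute a spectrogram for each, and compress every spectrogram to a 30-value encoding — can be sketched in outline. This is an illustrative reconstruction, not the authors' code: the simulated signal, window size, energy threshold, and the use of PCA as a stand-in for a trained autoencoder (PCA is the optimal *linear* autoencoder under squared-error loss) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated accelerometer signal: mostly silence with two "voicing" bursts.
fs = 8000
signal = 0.01 * rng.standard_normal(fs * 10)
for start in (fs * 2, fs * 6):
    t = np.arange(fs)
    signal[start:start + fs] += 0.5 * np.sin(2 * np.pi * 150 * t / fs)

# 1) Locate voicing windows by short-time energy (threshold is illustrative).
win = 512
frames = signal[: len(signal) // win * win].reshape(-1, win)
energy = (frames ** 2).mean(axis=1)
voiced = frames[energy > 10 * np.median(energy)]

# 2) Spectrogram of each voiced frame (magnitude of the real FFT).
spec = np.abs(np.fft.rfft(voiced, axis=1))   # shape: (n_voiced_frames, 257)

# 3) Linear "autoencoder" sketch: compress each spectrum to 30 values.
#    A real autoencoder would be trained by gradient descent; PCA via SVD
#    gives the optimal linear encoder/decoder pair for squared error.
mu = spec.mean(axis=0)
_, _, Vt = np.linalg.svd(spec - mu, full_matrices=False)
W = Vt[:30]                     # encoder weights (30 principal directions)
codes = (spec - mu) @ W.T       # the 30-value encodings ("features")
recon = codes @ W + mu          # decompressed spectrograms

print(codes.shape)              # (n_voiced_frames, 30)
```

The `codes` array plays the role of the learned features described in the article: one compact vector per voicing segment, with no subject labels involved in producing it.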
Mapping normal and abnormal features
In training, the model learns to map those features to “patients” or “controls.” Patients will have more abnormal voicing patterns than controls. In testing on previously unseen subjects, the model similarly condenses all spectrogram segments into a reduced set of features. Then, it’s majority rules: If the subject has mostly abnormal voicing segments, they’re classified as patients; if they have mostly normal ones, they’re classified as controls.
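The majority-rules step can be written in a few lines. The function name and the string labels below are hypothetical; this is a sketch of the voting logic the article describes, not the researchers' implementation.

```python
def classify_subject(segment_labels):
    """Majority vote over per-segment predictions (hypothetical helper).

    Each voicing segment has been labeled 'abnormal' or 'normal' by the
    model; a subject with mostly abnormal segments is called a patient.
    """
    abnormal = sum(1 for s in segment_labels if s == "abnormal")
    return "patient" if abnormal > len(segment_labels) / 2 else "control"

print(classify_subject(["abnormal", "abnormal", "normal"]))  # patient
print(classify_subject(["normal", "normal", "abnormal"]))    # control
```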
In experiments, the model performed as accurately as state-of-the-art models that require manual feature engineering. Importantly, the researchers’ model performed accurately in both training and testing, indicating it’s learning clinically relevant patterns from the data, not subject-specific information.
Next, the researchers want to monitor how various treatments — such as surgery and vocal therapy — impact vocal behavior. If patients’ behaviors move from abnormal to normal over time, they’re most likely improving. They also hope to use a similar technique on electrocardiogram data, which is used to track muscular functions of the heart.
At the 2019 MIT Commencement address, Michael Bloomberg highlighted the climate crisis as “the challenge of our time.” Climate change is expected to worsen drought and to raise sea levels in Boston, Massachusetts, by 1.5 feet by 2050. While numerous MIT students and researchers are working to ensure access to clean and sustainable sources of drinking water well into the future, MIT is also responding to the urgency of the climate crisis with a close examination of campus sustainability practices, including a recent focus on its own water consumption.
A working group on campus water use, led by the MIT Office of Sustainability (MITOS) and Department of Facilities, is supported by the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) and includes representatives of numerous other groups, offices, students, and campus leaders. While the MITOS initiative is focusing on campus water management, MIT student clubs are raising local consciousness around drinking-water issues via research and outreach activities. Through all of these efforts, members of the community aim to help MIT change its water usage practices and become a model for sustainable water use at the university level.
The water subcommittee: providing water leadership to promote institutional change
Gathering campus stakeholders to develop sustainability recommendations is a practiced strategy for the Office of Sustainability. MITOS working groups have previously analyzed environmental issues such as energy use, storm water management, and the sustainability of MIT’s food system, another initiative in which J-WAFS has played a role. The current working group addressing campus water use practices is managed by Steven Lanou, sustainability project manager at MITOS. “Work done in the late 1990s reduced campus water use by an estimated 60 percent,” he explains. “And now, we need to look strategically again at all of our systems” to improve water management in the face of future climate uncertainty.
Beginning in fall 2018, MITOS met with local stakeholders, including the Cambridge Water Department, the MIT Department of Facilities, and the MIT Water Club, to explore how water is used and managed on campus.
The water subcommittee falls under the Sustainability Leadership Steering Committee, which was created by, and reports to, the Office of the Provost and the Office of the Executive Vice President and Treasurer. Professor John H. Lienhard, director of J-WAFS and Abdul Latif Jameel Professor of Water and Mechanical Engineering, also sits on the steering committee, which is charged by the provost and the executive vice president and treasurer of MIT to recommend strategies for campus leadership on sustainability issues. The water subcommittee will bring concrete suggestions for water usage changes to the MIT administration and work to implement them across campus. Professor Lienhard has “been key in helping us shape what a water stewardship program might look like,” according to Lanou.
Other J-WAFS staff are also involved in the subcommittee, as well as leaders from the Environmental Solutions Initiative (ESI), Department of Facilities, MIT Dining, the MIT Investment Management Company, and the Water Club. Based on a thorough review of data related to MIT’s water use, the subcommittee has started to identify the most strategic areas for intervention, and is gearing up now to get additional input this fall and begin to develop recommendations for how MIT can reduce water consumption, mitigate its overall climate impact, and adapt to an uncertain future.
Water has been a focus of discussion and planning for sustainable campus practices for several years already. A MITOS stormwater and land management working group devoted to priority-setting for campus sustainability, which convened in the 2014 academic year, identified MIT’s water footprint as one of several key areas for discussion and intervention. Following the release of the stormwater and land management working group recommendations in 2016, MITOS teamed up with the Office of Campus Planning, the Department of Facilities, and the Office of Environment, Health and Safety to explore stormwater management solutions that improve the health of Cambridge, Massachusetts waterways and ecosystems. Among the outcomes was a draft stormwater management and landscape ecology plan that is focused on enhancing the productivity of the campus’ built and ecological systems in order to capture, absorb, reuse, and treat stormwater. This effort has informed the implementation of advanced stormwater management infrastructure on campus, including the recently completed North Corridor improvements in conjunction with the construction of the MIT.nano building.
In addition, MITOS is leading a research effort with the MIT Center for Global Change Science and Department of Facilities to understand campus risks to flooding during current and future climate conditions. The team is evaluating probabilities and flood depths for a range of scenarios, including intense, short-duration rainfall over campus; 24-hour rainfall over campus/Cambridge from tropical storms or nor’easters; sea-level rise and coastal storm surge of the Charles River; and up-river rainfall that raises the level of the Charles River. To understand MIT’s water consumption and key areas for intervention, this year’s water subcommittee is informed by data gathered by Lanou on the water consumption across campus — in buildings, labs, and landscaping processes — as well as the consumption of water by the MIT community.
An additional dimension of water stewardship to be considered by the subcommittee is the role and impact of bottled-water purchases on campus. The subcommittee has begun to look at data on annual bottled-water consumption to help understand the current trends. Understanding the impacts of single-use disposable bottles on campus is important. “I see so much bottled water consumption on campus,” notes John Lienhard. “It’s costly, energy-intensive, and adds plastic to the environment.” Only 9 percent of all plastic ever manufactured has been recycled, and an estimated 12 billion metric tons of plastic will end up in landfills by 2050. Mark Hayes, director of MIT Dining and another subcommittee member, has participated in student-led bottled-water reduction efforts on two college campuses, and he hopes to help MIT better understand and address the issue here. Hayes would like to see MIT consider “expanding water refilling stations, exploring the impact and reduction [of] plastic recycling, and increasing campus education on these efforts.” Taking on the challenge of changing campus water consumption habits, and decreasing the associated waste, will hopefully position MIT as a leader in these kinds of sustainability efforts and encourage other campuses to adopt similar policies.
Students taking action
Student groups are also using education around bottled water alternatives to encourage behavior change. Andrew Bouma, a PhD student in John Lienhard’s lab, is investigating local attitudes toward bottled water. His interest in this issue began upon meeting several students who drank mostly bottled water. “It frustrated me that people had this perception that the tap water wasn’t safe,” Bouma explains, “even though Cambridge and Boston have really great water.” He became involved with the MIT Water Club and ran a blind taste test at the 2019 MIT Water Night to evaluate perceptions of tap water, bottled water, and recycled wastewater.
Bouma explained that bottled-water drinkers often cite superior flavor as a motivating factor; however, only four or five of the 70-80 participants correctly identified the different sources, suggesting that the flavor argument holds little water. Many participants also held reservations about water safety. Bouma hopes that the taste test can address these barriers more effectively than sharing statistics. “When people can hold a cup of water in their hands and see it and taste it, it makes people confront their presumptions in a different way,” he explains.
A broader impact
The MIT Water Club, including Bouma, repeated the taste test at the Cambridge River Arts Festival in June to examine public perceptions of public and bottled water. Fewer than 5 percent of the 242 respondents identified all four water sources, approximately the same outcome as would be expected from random guessing. Many participants held concerns about the safety of public water, which the Water Club tried to combat with information about water treatment and testing procedures. Bouma hopes to continue addressing water consumption issues as co-president of the Water Club.
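The random-guessing baseline is easy to compute. Assuming the festival test asked each participant to match four cups to four distinct source labels (an assumption about the test design), a random guess is a random permutation of the labels, so the chance of getting all four right is 1/4!, about 4.2 percent, consistent with the "fewer than 5 percent" observed.

```python
import math

# Assumed test design: match four cups to four distinct source labels,
# so a random guess is a uniformly random permutation of the labels.
p_all_correct = 1 / math.factorial(4)
print(round(p_all_correct, 3))        # 0.042, i.e. about 4.2 percent

# Expected number of fully correct responses among 242 random guessers:
print(round(242 * p_all_correct, 1))  # 10.1
```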
Other student groups are encouraging behavior change around water consumption as well. The MIT Graduate Student Council (GSC) and the GSC Sustainability Subcommittee, with support from the Department of Facilities, funded five water-bottle refilling stations across campus in 2015. These efforts underscore the commitment of MIT students to promoting sustainable water consumption on campus.
A unique “MIT spin” on campus water sustainability
Lanou hopes that MIT will bring its technical strength to bear on water issues by using campus as a living laboratory to test water technologies. For example, Kripa Varanasi, professor of mechanical engineering and a J-WAFS-funded principal investigator, is piloting a water capture project at MIT’s Central Utility Plant that uses electricity to condense fog into liquid water for collection. Varanasi’s lab is able to test the technology in real-world conditions and improve the plant’s water efficiency at the same time. “It's a great example of MIT being willing to use its facilities to test campus research,” explains Lanou. These technological advancements — many of which are supported by J-WAFS — could support water resilience at MIT and elsewhere.
As the climate crisis brings water scarcity issues to the forefront, understanding and modeling water-use practices will become increasingly critical. With the water subcommittee working to bring recommendations for campus water use to the administration, and MIT students engaging with the broader Cambridge community on bottled water issues, the MIT community is poised to rise to the challenge.
We receive half of our genes from each biological parent, so there’s no avoiding inheriting a blend of characteristics from both. Yet, for single-celled organisms like bacteria that reproduce by splitting into two identical cells, injecting variety into the gene pool isn’t so easy. Random mutations add some diversity, but there’s a much faster way for bacteria to reshuffle their genes and confer evolutionary advantages like antibiotic resistance or pathogenicity.
Known as horizontal gene transfer, this process permits bacteria to pass pieces of DNA to their peers, in some cases allowing those genes to be integrated into the recipient’s genome and passed down to the next generation.
The Grossman lab in the MIT Department of Biology studies one class of mobile DNA, known as integrative and conjugative elements (ICEs). While ICEs contain genes that can be beneficial to the recipient bacterium, there’s also a catch — receiving a duplicate copy of an ICE is wasteful, and possibly lethal. The biologists recently uncovered a new system by which one particular ICE, ICEBs1, blocks a donor bacterium from delivering a second, potentially deadly copy.
“Understanding how these elements function and how they're regulated will allow us to determine what drives microbial evolution,” says Alan Grossman, department head and senior author on the study. “These findings not only provide insight into how bacteria block unwanted genetic transfer, but also how we might eventually engineer this system to our own advantage.”
Former graduate student Monika Avello PhD ’18 and current graduate student Kathleen Davis are co-first authors on the study, which appeared online in Molecular Microbiology on July 30.
Checks and balances
Although plasmids are perhaps the best-known mediators of horizontal transfer, ICEs not only outnumber plasmids in most bacterial species, they also come with their own tools to exit the donor, enter the recipient, and integrate themselves into the recipient’s chromosome. Once the donor bacterium makes contact with the recipient, the machinery encoded by the ICE can pump the ICE DNA from one cell to the other through a tiny channel.
For horizontal transfer to proceed, there are physical barriers to overcome, especially in so-called Gram-positive bacteria, which, though less widely studied than their Gram-negative counterparts, boast thicker cell walls. According to Davis, the transfer machinery essentially has to “punch a hole” through the recipient cell. “It’s a rough ride and a waste of energy for the recipient if that cell already contains an ICE with a specific set of genes,” she says.
Sure, ICEs are “selfish bits of DNA” that persist by spreading themselves as widely as possible, but in order to do so they must not interfere with their host cell’s ability to survive. As Avello explains, ICEs can’t just disseminate their DNA “without certain checks and balances.”
“There comes a point where this transfer comes at a cost to the bacteria or doesn't make sense for the element,” she says. “This study is beginning to get at the question of when, why, and how ICEs might want to block transfer.”
The Grossman lab works in the Gram-positive Bacillus subtilis, and had previously discovered two mechanisms by which ICEBs1 could prevent redundant transfer before it becomes lethal. The first, cell-cell signaling, involves the ICE in the recipient cell releasing a chemical cue that prohibits the donor’s transfer machinery from being assembled. The second, immunity, initiates if the duplicate copy is already inside the cell, and prevents the replicate from being integrated into the chromosome.
However, when the researchers tried eliminating both fail-safes simultaneously, rather than re-instating ICE transfer as they expected, the bacteria still managed to obstruct the duplicate copy. ICEBs1 seemed to have a third blocking strategy, but what might it be?
The third tactic
In this most recent study, they’ve identified the mysterious blocking mechanism as a type of “entry exclusion,” whereby the ICE in the recipient cell encodes molecular machinery that physically prevents the second copy from breaching the cell wall. Scientists had observed other mobile genetic elements capable of exclusion, but this was the first time anyone had witnessed this phenomenon for an ICE from Gram-positive bacteria, according to Avello.
The Grossman lab determined that this exclusion mechanism comes down to two key proteins. Avello identified the first protein, YddJ, expressed by the ICEBs1 in the recipient bacterium, forming a “protective coating” on the outside of the cell and blocking a second ICE from entering.
But the biologists still didn’t know which piece of transfer machinery YddJ was blocking, so Davis performed a screen and various genetic manipulations to pinpoint YddJ’s target. YddJ, it turned out, was obstructing another protein called ConG, which likely forms part of the transfer channel between the donor and recipient bacteria. Davis was surprised to find that, while Gram-negative ICEs encode a protein quite similar to ConG, their equivalent of YddJ is quite different.
“This just goes to show that you can’t assume the transfer machinery in Gram-positive ICEs like ICEBs1 is the same as the well-studied Gram-negative ICEs,” she says.
The team concluded that ICEBs1 must have three different mechanisms to prevent duplicate transfer: the two they’d previously uncovered plus this new one, exclusion.
Cell-cell signaling allows a cell to spread the word to its neighbors that it already has a copy of ICEBs1, so there’s no need to bother assembling the transfer machinery. If this fails, exclusion kicks in to physically block the transfer machinery from penetrating the recipient cell. If that proves unsuccessful and the second copy enters the recipient, immunity will initiate and prevent the second copy from being integrated into the recipient’s chromosome.
“Each mechanism acts at a different step, because none of them alone are 100 percent effective,” Grossman says. “That’s why it’s helpful to have multiple mechanisms.”
They don’t know all the details of this transfer machinery just yet, he adds, but they do know that YddJ and ConG are key players.
“This initial description of the ICEBs1 exclusion system represents the first report that provides mechanistic insights into exclusion in Gram-positive bacteria, and one of only a few mechanistic studies of exclusion in any conjugation system,” says Gary Dunny, a professor of microbiology and immunology at the University of Minnesota who was not involved in the study. “This work is significant medically because ICEs can carry ‘cargo’ genes such as those conferring antibiotic resistance, and also of importance to our basic understanding of horizontal gene transfer systems and how they evolve.”
As researchers continue to probe this blocking mechanism, it might be possible to leverage ICE exclusion to design bacteria with specific functions. For instance, they could engineer the gut microbiome and introduce beneficial genes to help with digestion. Or, one day, they could perhaps block horizontal gene transfer to combat antibiotic resistance.
“We had suspected that Gram-positive ICEs might be capable of exclusion, but we didn’t have proof before this,” Avello says. Now, researchers can start to speculate about how pathogenic Gram-positive species might control the movement of ICEs throughout a bacterial population, with possible ramifications for disease research.
This work was funded by research and predoctoral training grants from the National Institute of General Medical Sciences of the National Institutes of Health.