MIT Latest News
A novel encryption method devised by MIT researchers secures data used in online neural networks, without dramatically slowing their runtimes. This approach holds promise for using cloud-based neural networks for medical-image analysis and other applications that use sensitive data.
Outsourcing machine learning is a rising trend in industry. Major tech firms have launched cloud platforms that conduct computation-heavy tasks, such as running data through a convolutional neural network (CNN) for image classification. Resource-strapped small businesses and other users can upload data to those services for a fee and get back results in several hours.
But what happens if private data leaks? In recent years, researchers have explored various secure-computation techniques to protect such sensitive data. But those methods have performance drawbacks that make neural network evaluation (testing and validating) sluggish — sometimes as much as a million times slower — limiting their wider adoption.
In a paper presented at this week’s USENIX Security Symposium, MIT researchers describe a system that blends two conventional techniques — homomorphic encryption and garbled circuits — in a way that helps the networks run orders of magnitude faster than they do with conventional approaches.
The researchers tested the system, called GAZELLE, on two-party image-classification tasks. A user sends encrypted image data to an online server evaluating a CNN running on GAZELLE. After this, both parties share encrypted information back and forth in order to classify the user’s image. Throughout the process, the system ensures that the server never learns any uploaded data, while the user never learns anything about the network parameters. GAZELLE also ran 20 to 30 times faster than state-of-the-art secure systems, while reducing the required network bandwidth by an order of magnitude.
One promising application for the system is training CNNs to diagnose diseases. Hospitals could, for instance, train a CNN to learn characteristics of certain medical conditions from magnetic resonance images (MRI) and identify those characteristics in uploaded MRIs. The hospital could make the model available in the cloud for other hospitals. But the model is trained on, and further relies on, private patient data. Without efficient privacy-preserving computation, this application isn’t quite ready for prime time.
“In this work, we show how to efficiently do this kind of secure two-party communication by combining these two techniques in a clever way,” says first author Chiraag Juvekar, a PhD student in the Department of Electrical Engineering and Computer Science (EECS). “The next step is to take real medical data and show that, even when we scale it for applications real users care about, it still provides acceptable performance.”
Co-authors on the paper are Vinod Vaikuntanathan, an associate professor in EECS and a member of the Computer Science and Artificial Intelligence Laboratory, and Anantha Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science.
CNNs process image data through multiple linear and nonlinear layers of computation. Linear layers do the complex math, called linear algebra, and assign some values to the data. At a certain threshold, the data is passed to nonlinear layers, which do some simpler computation, make decisions (such as identifying image features), and send the data to the next linear layer. The end result is an image with an assigned class, such as vehicle, animal, person, or anatomical feature.
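The alternating structure described above can be sketched in a few lines of code. This is a minimal illustration of linear layers (matrix-vector products) interleaved with nonlinear layers (here, ReLU), with made-up weights; it is not GAZELLE's code or an actual CNN.

```python
# Toy sketch of a network's alternating structure: a linear layer
# (matrix-vector product) followed by a nonlinear layer (ReLU),
# followed by another linear layer. Weights are illustrative only.

def linear(weights, x):
    """One linear layer: a matrix-vector product."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

def relu(x):
    """One nonlinear layer: elementwise max(0, value)."""
    return [max(0.0, v) for v in x]

W1 = [[1.0, -2.0], [0.5, 1.0]]   # first linear layer
W2 = [[1.0, 1.0]]                # second linear layer

hidden = relu(linear(W1, [3.0, 1.0]))   # [1.0, 2.5]
scores = linear(W2, hidden)             # [3.5]
```

In a real CNN the linear layers are convolutions and the final scores are turned into a class label, but the linear/nonlinear alternation is the structure GAZELLE exploits.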
Recent approaches to securing CNNs have involved applying homomorphic encryption or garbled circuits to process data throughout an entire network. These techniques are effective at securing data. “On paper, this looks like it solves the problem,” Juvekar says. But they render complex neural networks inefficient, “so you wouldn’t use them for any real-world application.”
Homomorphic encryption, used in cloud computing, performs computation directly on encrypted data, called ciphertext, and generates an encrypted result that can then be decrypted by the user. When applied to neural networks, this technique is particularly fast and efficient at computing linear algebra. However, it must introduce a little noise into the data at each layer. Over multiple layers, noise accumulates, and the computation needed to filter that noise grows increasingly complex, slowing computation speeds.
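The key property, computing on data while it stays encrypted, can be demonstrated with the classic Paillier cryptosystem, where multiplying two ciphertexts adds the underlying plaintexts. The tiny primes below are for illustration only, and GAZELLE itself uses a different, lattice-based scheme; this sketch just shows what "homomorphic" means.

```python
import math
import random

# Toy Paillier cryptosystem with tiny, insecure parameters, to
# illustrate additive homomorphism: multiplying ciphertexts adds
# the plaintexts. Not GAZELLE's scheme (which is lattice-based).

p, q = 11, 13
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse mod n

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# The server can add encrypted values without ever decrypting them:
c = (encrypt(20) * encrypt(22)) % n2
assert decrypt(c) == 42
```

Because addition (and multiplication by known constants) is exactly what linear algebra needs, this is why homomorphic encryption handles a CNN's linear layers so well.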
Garbled circuits are a form of secure two-party computation. The technique takes an input from both parties, does some computation, and sends a separate output to each party. In that way, the parties send data to one another, but they never see the other party’s data, only the relevant output on their side. The bandwidth needed to communicate data between parties, however, scales with computation complexity, not with the size of the input. In an online neural network, this technique works well in the nonlinear layers, where computation is minimal, but the bandwidth becomes unwieldy in math-heavy linear layers.
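The mechanism behind garbled circuits can be sketched for a single AND gate: one party assigns two random labels to each wire, encrypts the output labels under the corresponding pairs of input labels, and shuffles the table; the other party can then decrypt exactly one row, learning the output label without learning the other input bit. This is an illustrative sketch only; real protocols, including GAZELLE's, add oblivious transfer and many optimizations.

```python
import hashlib
import os
import random

# Toy garbling of a single AND gate. One party (the garbler) builds
# a shuffled, encrypted truth table; the other (the evaluator) can
# open exactly one row given one label per input wire.

KEYLEN = 16

def H(ka, kb):
    """Hash two wire labels into a 32-byte one-time pad."""
    return hashlib.sha256(ka + kb).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Garbler: pick two random labels per wire (one meaning 0, one meaning 1).
labels = {w: (os.urandom(KEYLEN), os.urandom(KEYLEN)) for w in "abc"}

# Encrypt each output label under the input labels for that row; a
# block of zero bytes lets the evaluator recognize the valid row.
table = []
for bit_a in (0, 1):
    for bit_b in (0, 1):
        out = labels["c"][bit_a & bit_b] + b"\x00" * KEYLEN
        table.append(xor(H(labels["a"][bit_a], labels["b"][bit_b]), out))
random.shuffle(table)

def evaluate(ka, kb):
    """Evaluator: try each row; only the matching one has the zero tag."""
    for row in table:
        plain = xor(row, H(ka, kb))
        label, tag = plain[:KEYLEN], plain[KEYLEN:]
        if tag == b"\x00" * KEYLEN:
            return label
    raise ValueError("no row decrypted")

# Given labels for a=1 and b=1, the evaluator learns the label for
# c = a AND b = 1, and nothing about which bit the other label encodes.
assert evaluate(labels["a"][1], labels["b"][1]) == labels["c"][1]
```

The encrypted table, not the inputs, is what must be transmitted, which is why bandwidth grows with the size of the computation.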
The MIT researchers, instead, combined the two techniques in a way that gets around their inefficiencies.
In their system, a user uploads ciphertext to a cloud-based CNN while running the garbled-circuits side of the protocol on their own computer. The CNN does all the computation in the linear layer, then sends the data to the nonlinear layer. At that point, the CNN and user share the data. The user does some computation on garbled circuits and sends the data back to the CNN. By splitting and sharing the workload, the system restricts the homomorphic encryption to doing complex math one layer at a time, so data doesn’t become too noisy. It also limits the communication of the garbled circuits to just the nonlinear layers, where it performs optimally.
“We’re only using the techniques for where they’re most efficient,” Juvekar says.
The final step was ensuring that the homomorphic and garbled-circuit layers maintained a common randomization scheme, called “secret sharing.” In this scheme, data is divided into separate parts that are given to separate parties. The parties then combine their parts to reconstruct the full data.
In GAZELLE, when a user sends encrypted data to the cloud-based service, it’s split between both parties. Added to each share is a secret key (random numbers) that only the owning party knows. Throughout computation, each party will always have some portion of the data, plus random numbers, so it appears fully random. At the end of computation, the two parties sync their data. Only then does the user ask the cloud-based service for its secret key. The user can then subtract the secret key from all the data to get the result.
“At the end of the computation, we want the first party to get the classification results and the second party to get absolutely nothing,” Juvekar says. Additionally, “the first party learns nothing about the parameters of the model.”
Institute Professor Thomas Magnanti has been honored as one of Singapore’s National Day Award recipients, for his long-term work developing higher education in Singapore.
The government of Singapore announced that Magnanti received the Public Administration Medal (gold) on Aug. 9, the National Day of Singapore, for his role as founding president of the Singapore University of Technology and Design (SUTD). He will receive the medal at a ceremony in Singapore later this year.
“I am quite pleased,” Magnanti says about the award. “It’s quite an honor to receive it.”
SUTD is a recently developed university in Singapore focused on innovation-based technology and design across several fields. Its curriculum is organized in interdisciplinary clusters to promote research and education across multiple areas of study.
The new honor came as a surprise to Magnanti, who started working to help develop SUTD in 2008 and became its president in October 2009. In January 2010, MIT and SUTD signed a memorandum outlining their partnership for both research and education. After a groundbreaking in 2011, SUTD enrolled its first undergraduate students in 2012 and moved to its permanent campus site in 2015.
MIT and SUTD maintained their education partnership from 2010 to 2017 and continue to work as partners in research through the International Design Center, which has facilities both at MIT and on the SUTD campus.
Magnanti, who is an MIT Institute Professor, is a professor of operations research at the MIT Sloan School of Management, as well as a faculty member in the Department of Electrical Engineering and Computer Science. He is also a former dean of the School of Engineering. Magnanti is an expert on optimization whose work has spanned business and engineering, as well as the theoretical and applied sides of his field.
As an MIT faculty member, he first started working with Singaporean leaders in the late 1990s, helping to develop the Singapore-MIT Alliance (SMA), as well as the Singapore-MIT Alliance for Research and Technology (SMART), a research enterprise established in 2007 between MIT and the National Research Foundation of Singapore (NRF).
Magnanti says his time working on joint educational projects involving MIT and Singapore has been “a wonderful experience.”
Singapore, Magnanti adds, has consistently maintained “a deep commitment to education and to research, and has a very strong relationship with MIT, which has sustained itself now for over 20 years.”
Magnanti says he is pleased by the solid footing now established by the projects he has worked on in Singapore.
“There have been many highlights,” Magnanti says, including the development of an innovative university and degree structure, and novel pedagogy and research. He notes that students from SUTD “have done very well in their placements, in Singapore. Remarkably well.”
Overall, Magnanti adds, simply “developing the university has been one of the highlights. Hiring faculty, bringing in outstanding students and staff. … I am, and I think MIT is, very proud of what’s happened with the university.”
Nearly five years ago, NASA and Lincoln Laboratory made history when the Lunar Laser Communication Demonstration (LLCD) used a pulsed laser beam to transmit data from a satellite orbiting the moon to Earth — more than 239,000 miles — at a record-breaking download speed of 622 megabits per second.
Now, researchers at Lincoln Laboratory are aiming to once again break new ground by applying the laser beam technology used in LLCD to underwater communications.
“Both our undersea effort and LLCD take advantage of very narrow laser beams to deliver the necessary energy to the partner terminal for high-rate communication,” says Stephen Conrad, a staff member in the Control and Autonomous Systems Engineering Group, who developed the pointing, acquisition, and tracking (PAT) algorithm for LLCD. “In regard to using narrow-beam technology, there is a great deal of similarity between the undersea effort and LLCD.”
However, undersea laser communication (lasercom) presents its own set of challenges. In the ocean, laser beams are hampered by significant absorption and scattering, which restrict both the distance the beam can travel and the data signaling rate. To address these problems, the Laboratory is developing narrow-beam optical communications that use a beam from one underwater vehicle pointed precisely at the receive terminal of a second underwater vehicle.
This technique contrasts with the more common undersea communication approach that sends the transmit beam over a wide angle but reduces the achievable range and data rate. “By demonstrating that we can successfully acquire and track narrow optical beams between two mobile vehicles, we have taken an important step toward proving the feasibility of the laboratory’s approach to achieving undersea communication that is 10,000 times more efficient than other modern approaches,” says Scott Hamilton, leader of the Optical Communications Technology Group, which is directing this R&D into undersea communication.
Most above-ground autonomous systems rely on the use of GPS for positioning and timing data; however, because GPS signals do not penetrate the surface of water, submerged vehicles must find other ways to obtain these important data. “Underwater vehicles rely on large, costly inertial navigation systems, which combine accelerometer, gyroscope, and compass data, as well as other data streams when available, to calculate position,” says Thomas Howe of the research team. “The position calculation is noise sensitive and can quickly accumulate errors of hundreds of meters when a vehicle is submerged for significant periods of time.”
This positional uncertainty can make it difficult for an undersea terminal to locate and establish a link with incoming narrow optical beams. For this reason, “we implemented an acquisition scanning function that is used to quickly translate the beam over the uncertain region so that the companion terminal is able to detect the beam and actively lock on to keep it centered on the lasercom terminal’s acquisition and communications detector,” researcher Nicolas Hardy explains. Using this methodology, two vehicles can locate, track, and effectively establish a link, despite the independent movement of each vehicle underwater.
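One simple way to translate a narrow beam over an uncertainty region, sketched below, is a concentric-ring (spiral-like) scan whose steps are no larger than the beam footprint, so every point in the region is illuminated. The geometry and parameter values here are illustrative assumptions, not the Laboratory's actual scan design.

```python
import math

# Illustrative acquisition scan: cover a circular pointing-uncertainty
# region with beam positions spaced one beam radius apart, so that
# successive spots overlap and no gap is left uncovered.

def scan_points(uncertainty_radius, beam_radius):
    """Return (x, y) beam-pointing offsets tiling the region."""
    points = [(0.0, 0.0)]                      # start on boresight
    r = beam_radius
    while r <= uncertainty_radius:
        # Enough azimuthal steps that neighbors are <= beam_radius apart.
        n = max(1, math.ceil(2 * math.pi * r / beam_radius))
        for k in range(n):
            theta = 2 * math.pi * k / n
            points.append((r * math.cos(theta), r * math.sin(theta)))
        r += beam_radius                       # step out one ring
    return points

points = scan_points(3.0, 1.0)   # e.g., 3 m uncertainty, 1 m beam spot
```

The narrower the beam relative to the uncertainty region, the more points the scan needs, which is why fast scanning matters when the receiving vehicle is also moving.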
Once the two lasercom terminals have locked onto each other and are communicating, the relative position between the two vehicles can be determined very precisely by using wide bandwidth signaling features in the communications waveform. With this method, the relative bearing and range between vehicles can be known precisely, to within a few centimeters, explains Howe, who worked on the undersea vehicles’ controls.
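A back-of-the-envelope calculation shows why wide-bandwidth signaling supports centimeter-level ranging: range precision is roughly propagation speed times timing precision, and gigahertz-class bandwidth allows sub-nanosecond timing. The specific numbers below are illustrative assumptions, not the team's figures.

```python
# Rough check of the centimeter-level ranging claim:
# range precision ~ (propagation speed in water) x (timing precision).

c_vacuum = 3.0e8                  # speed of light in vacuum, m/s
n_water = 1.33                    # approximate refractive index of water
v_water = c_vacuum / n_water      # ~2.26e8 m/s in water

timing_precision = 1e-10          # 100 ps, plausible for GHz bandwidth
range_precision = v_water * timing_precision

print(f"{range_precision * 100:.1f} cm")
```

With these assumed values the result is on the order of a few centimeters, consistent with the precision the team reports.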
To test their underwater optical communications capability, six members of the team recently completed a demonstration of precision beam pointing and fast acquisition between two moving vehicles in the Boston Sports Club pool in Lexington, Massachusetts. Their tests proved that two underwater vehicles could search for and locate each other in the pool within one second. Once linked, the vehicles could potentially use their established link to transmit hundreds of gigabytes of data in one session.
This summer, the team is traveling to regional field sites to demonstrate this new optical communications capability to U.S. Navy stakeholders. One demonstration will involve underwater communications between two vehicles in an ocean environment — similar to prior testing that the Laboratory undertook at the Naval Undersea Warfare Center in Newport, Rhode Island, in 2016. The team is planning a second exercise to demonstrate communications from above the surface of the water to an underwater vehicle — a proposition that has previously proven to be nearly impossible.
The undersea communication effort could tap into innovative work conducted by other groups at the Laboratory. For example, integrated blue-green optoelectronic technologies, including gallium nitride laser arrays and silicon Geiger-mode avalanche photodiode arrays, could lead to terminals with lower size, weight, and power, and enhanced communication functionality.
In addition, the ability to move data at megabit- to gigabit-per-second transfer rates over distances that vary from tens of meters in turbid waters to hundreds of meters in clear ocean waters will enable undersea system applications that the Laboratory is exploring.
Howe, who has done a significant amount of work with underwater vehicles, both before and after coming to the Laboratory, says the team’s work could transform undersea communications and operations. “High-rate, reliable communications could completely change underwater vehicle operations and take a lot of the uncertainty and stress out of the current operation methods.”
This summer, international development, government, nonprofit, and philanthropic leaders from two dozen countries gathered at MIT to gain practical evaluation skills as part of Evaluating Social Programs, an Executive Education course offered by the Abdul Latif Jameel Poverty Action Lab (J-PAL).
Nearly 50 leading representatives attended the week-long class to develop the skills necessary to design randomized evaluations of social programs, from antiviolence interventions in Colombia to housing mobility programs in the midwestern United States. The course is J-PAL’s flagship training, offered annually to researchers and policymakers around the world. Instructors, who included academic experts in impact evaluation, covered technical concepts like sample size, data collection, and randomization, but also provided guidance on what makes a study generalizable.
Evaluating Social Programs’ unique curriculum reflects a global movement to advance evidence-based policy and programs. Sessions explored how randomized evaluations are designed in real-world settings, and provided insights into best practices for producing and using evidence. In keeping with that, attendees included government policymakers as well as foundation and nonprofit staff who had varying levels of evaluation experience and wide-ranging interest areas, including public health, labor markets, political economy, and education.
Chinemelu Okafor, a research assistant at the International Finance Corporation and a George Washington University student, says the opportunity to interact with people from across different levels of experience was highly influential at this point in her career.
“J-PAL created a really incredible learning environment for people of all backgrounds, all skills and all levels of experience,” Okafor says. “The environment was super collegial. You were learning from your peers and your peers are learning from you. … I have this goal of being Nigeria’s foreign affairs minister, or [in] some setting where I would be able to implement and inform Nigerian policy. Networking and interacting with my peers during the course affirmed for me that this is the type of work I want to do in the future.”
A key feature of Evaluating Social Programs is its integrated teaching methodology, which mixes interactive lectures with daily small group work. In this summer’s course at MIT, participants were able to explore case studies based on J-PAL-affiliated research, including large-scale evaluations of a school-based deworming program in Kenya and a cognitive behavioral therapy program for youth in Chicago.
Throughout the week, participants met in small groups to create a preliminary evaluation plan for a real social program. This exercise helped to solidify theoretical concepts learned throughout the week, and gave participants the opportunity to present their evaluation plans to the larger group for feedback.
Kyle Clements, a learning experience designer at Western Governors University, and his group developed a preliminary evaluation outline for a program focused on alleviating math anxiety among two-year college students in the United States. During the week, Clements was able to see how feasible conducting a randomized evaluation could be not only for his particular program, but also potentially for other programs across his organization.
“In our specific education model, I think it will be really easy to do randomized control trials at the individual level,” Clements says. “We really should be doing this and there’s not a lot of barriers for us not to be.”
By learning alongside peers who were developing related program evaluations, participants could crowdsource innovative evaluation strategies with staying power in complex real-world settings.
“The most beneficial [elements] for me were the practical parts: the people who presented on their governmental experiences from previous RCTs [randomized controlled trials],” says Nora Ghobrial, a project officer at the Sawiris Foundation for Social Development. “I could relate the most to them. The group discussions were also very interesting for me.”
At the end of the week participants gained not only a practical set of tools to better understand randomized evaluations, but also confidence that they could conduct randomized evaluations in the context of their own work and use evidence to improve upon their own programs.
“I feel like I have a more balanced and centered view of evaluations, and I think I have some good rails on where I need to be cautious and how to get more information to make those kinds of decisions,” says Julie Moreno, bureau director at the California Franchise Tax Board.
Anyone interested in learning more about topics covered at Evaluating Social Programs and who would like to receive updates on next year’s courses can visit J-PAL’s Training and Education page. Those interested in J-PAL’s MicroMasters in Data, Economics, and Development Policy program can register online by Sept. 11.
The postdoctoral training period is a time when junior researchers learn what it takes to become independent investigators. Pursuing a career in biomedical research can be highly demanding, and young researchers often feel challenged to find time to reflect on various career possibilities, explore options of interest, develop associated professional skills, and still maintain an acceptable work-life balance.
At MIT, about 1,500 postdocs are appointed to more than 50 departments, and serve as vital members of the Institute’s research workforce.
“Expectations for faculty members, particularly in the biomedical sciences, have evolved quite significantly from what they used to be say 10, 20 years ago,” says Sangeeta N. Bhatia, the John J. and Dorothy Wilson Professor of Health Sciences and Technology and Electrical Engineering and Computer Science, and the inaugural director of the Koch Institute’s Marble Center for Cancer Nanomedicine. “Professors are not just managing the lab and classroom mediums, but also serving on advisory boards and launching new research and commercial ventures. They also advocate for evidence-based policy and engage with the wider public about the implications of their research.”
The Marble Center, established through a generous gift from Kathy and Curt Marble ’63, launched the Convergence Scholars Program on National Nanotechnology Day on Oct. 9, 2017, a date that corresponds to the scientific notation — 10⁻⁹ — that designates the nanoscale. The aim of the program is to help postdocs hone the skills they need to succeed within and beyond the academic setting.
When divergent needs converge
The brainchild of Tarek R. Fadel, assistant director of the Marble Center, the Convergence Scholars Program is designed to offer career development opportunities that address the needs of the individual trainee and the ever-changing landscape of research. For example, monthly workshops focus on topics such as science communication and management, and leadership of scientific workplaces.
“Virtually all postdocs contemplate careers in academia. I typically hear trainees ask for advice on how they will recruit new scientists, develop budgets, manage multiple and often overlapping projects, and resolve potential conflicts between collaborators,” Fadel says. “We want our young scientists to think about these issues early in their training and to grow a wide network of mentors on whom they can rely during this transitional phase of their career.”
Jacob Martin, a Convergence Scholar in the laboratory of Koch Institute Associate Director Darrell Irvine, a professor of biological engineering and of materials science and engineering, recalls his apprehension about funding and other challenges associated with an academic research career.
“One of the reasons I felt encouraged to apply for the program was that, beyond acknowledging that I should consider ‘alternative’ careers, I didn’t know where to start,” Martin says. “Of course, I knew of some of the options in the biopharmaceutical industry, but I really wanted to put everything back on the table and consider other careers that I might never have realized would be available and enjoyable for me. This idea seemed exciting but also daunting — frankly, overwhelming.”
Fadel adds that “one of the key aims of the Convergence Scholars Program is to serve as a centralized resource, connecting postdocs with training and opportunities without requiring the time or anxiety of having to figure everything out themselves.”
The program also offers insight and inroads into careers in industry, health care, the policy arena, or with federal research or regulatory agencies. In order to offer this wide variety of resources for participants, the program partners with organizations around MIT and off campus, including the MIT BE Communications Lab and Harvard Catalyst. The program also engages a network of mentors from the pharmaceutical industry, the government sector, and elsewhere.
Taking full advantage of the array of opportunities available, recent Convergence Scholar Briana Dunn worked with the education and outreach team at the National Nanotechnology Coordination Office and volunteered doing hands-on nanotechnology experiments with children and families at a national science event. She also explored options in health care and joined the American Medical Writers Association, enrolling in courses to learn more about medical writing and even earning a credential.
“I was lucky that I had the opportunity to explore my interests in an organized and thoughtful way,” says Dunn, then a member of the laboratory of Angela Belcher, the James Mason Crafts Professor.
Off to a strong start
Six postdocs were selected for the inaugural class of Convergence Scholars, one from each of the Marble Center’s member labs:
- Natalie Boehnke, Hammond Laboratory
- Briana Dunn, Belcher Laboratory
- Liangliang Hao, Bhatia Laboratory
- Jacob Martin, Irvine Laboratory
- Ritu Raman, Langer Laboratory
- Kaitlyn Sadtler, Anderson Laboratory
In addition to training opportunities, each scholar also receives a stipend to use for professional activities and travel. This year, such activities ranged from volunteering at the U.S. Science and Engineering Festival and Family Science Days held as part of the annual meeting of the American Association for the Advancement of Science (AAAS), to participating in workshops on leadership in bioscience at the Cold Spring Harbor Laboratory, and science policy at AAAS.
Although the first year of the Convergence Scholars Program has not yet come to a close, participants’ reviews suggest the initiative is on the right track. Dunn, for example, has found a job in industry that combines several of her interests.
“Through CSP, I was able to explore my options more deeply and in a way that really focused on my professional development,” she says.
Bhatia and Fadel envision that the next cohort — to be announced in October — will also include postdocs from other centers within the Koch Institute for Integrative Cancer Research at MIT, where the Marble Center is housed.
Last week, the MIT AgeLab participated in a two-day summit in Portland, Maine, dedicated to fighting social isolation among older adults in rural communities. The 2018 Connectivity Summit on Rural Aging, with more than 130 national leaders in attendance, was hosted by Tivity Health in partnership with the MIT AgeLab, Health eVillages, and the Jefferson College of Population Health.
Social isolation carries a greater health risk than obesity or smoking, according to some metrics, and is associated with increased mortality, mobility loss, functional decline, and clinical dementia. Attendees at the summit worked to find actionable ideas to reduce the impact of social isolation and enable better health for those aging in rural areas.
Leaders at the Summit included Donato Tramuto, Tivity Health CEO and Health eVillages founder; Congressman Joe Kennedy III (D-MA); Cara James, co-chair of the CMS Rural Health Council; Robin Lipson of the Executive Office of Elder Affairs for the Commonwealth of Massachusetts; and Joseph Coughlin, founder of the MIT AgeLab and author of The Longevity Economy: Unlocking the World’s Fastest-Growing, Most Misunderstood Market.
A co-organizer of the summit, the MIT AgeLab conducts research on business and policy innovations in the longevity economy and on opportunities to improve quality of life for older adults and those who care for them. The AgeLab also organizes and supports outreach and community groups for older adults and professionals in the field of aging.
Underscoring the importance of the Connectivity Summit on Rural Aging are the results of a new national poll, commissioned by Tivity Health and released on Aug. 7. The poll shows that nearly one-third (29 percent) of rural older adults do not experience daily social interaction, and many are dealing with physical impairments such as vision loss (39 percent), hearing loss (36 percent), and loss of mobility (23 percent). A majority of older adults in rural areas say they want public officials in their states (66 percent) and the business community (67 percent) to do more to address their needs.
“The segments of the older adult population in rural areas that are dealing with social isolation represent a largely unseen and unheard America,” says Coughlin. “Their struggles are more pronounced, and yet they are also harder to reach. The good news is this is also an extraordinarily resilient population, and we have a tremendous opportunity to work together to bring services and solutions to their doorstep that will reverse this trend.”
“A hallmark of Tivity Health’s mission is to promote solutions that improve the quality of life for seniors. That’s why we’re focused on opportunities to improve the health and well-being of older Americans who experience the impacts of social isolation,” says Donato Tramuto, CEO of Tivity Health and founder of Health eVillages. “My vision when launching this summit two years ago was to ensure that actions, not just words, would empower stakeholders to work together to reverse the effects of social isolation. I am pleased that less than two years later we have successfully created a movement that has mobilized actions to support older Americans by addressing the impacts of social isolation.”
MIT scientists have uncovered a sprawling new galaxy cluster hiding in plain sight. The cluster, which sits a mere 2.4 billion light years from Earth, is made up of hundreds of individual galaxies and surrounds an extremely active supermassive black hole, or quasar.
The central quasar goes by the name PKS1353-341 and is intensely bright — so bright that for decades astronomers observing it in the night sky have assumed that the quasar was quite alone in its corner of the universe, shining out as a solitary light source from the center of a single galaxy.
But as the MIT team reports today in the Astrophysical Journal, the quasar’s light is so bright that it has obscured hundreds of galaxies clustered around it.
In their new analysis, the researchers estimate that there are hundreds of individual galaxies in the cluster, which, all told, is about as massive as 690 trillion suns. Our Milky Way galaxy, for comparison, weighs in at around 400 billion solar masses.
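Making the comparison above concrete, the two mass figures from the article imply the cluster outweighs the Milky Way by a factor of roughly 1,700:

```python
# Mass comparison using the figures reported above (in solar masses).
cluster_mass = 690e12     # ~690 trillion suns
milky_way_mass = 400e9    # ~400 billion suns

ratio = cluster_mass / milky_way_mass
print(round(ratio))   # 1725
```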
The team also calculates that the quasar at the center of the cluster is 46 billion times brighter than the sun. Its extreme luminosity is likely the result of a temporary feeding frenzy: As an immense disk of material swirls around the quasar, big chunks of matter from the disk are falling in and feeding it, causing the black hole to radiate huge amounts of energy out as light.
“This might be a short-lived phase that clusters go through, where the central black hole has a quick meal, gets bright, and then fades away again,” says study author Michael McDonald, assistant professor of physics in MIT’s Kavli Institute for Astrophysics and Space Research. “This could be a blip that we just happened to see. In a million years, this might look like a diffuse fuzzball.”
McDonald and his colleagues believe the discovery of this hidden cluster shows there may be other similar galaxy clusters hiding behind extremely bright objects that astronomers have miscatalogued as single light sources. The researchers are now looking for more hidden galaxy clusters, which could be important clues to estimating how much matter there is in the universe and how fast the universe is expanding.
The paper’s co-authors include lead author and MIT graduate student Taweewat Somboonpanyakul, Henry Lin of Princeton University, Brian Stalder of the Large Synoptic Survey Telescope, and Antony Stark of the Harvard-Smithsonian Center for Astrophysics.
Fluffs or points
In 2012, McDonald and others discovered the Phoenix cluster, one of the most massive and luminous galaxy clusters in the universe. The mystery to McDonald was why this cluster, which was so intensely bright and in a region of the sky that is easily observable, hadn’t been found before.
“We started asking ourselves why we had not found it earlier, because it’s very extreme in its properties and very bright,” McDonald says. “It’s because we had preconceived notions of what a cluster should look like. And this didn’t conform to that, so we missed it.”
For the most part, he says, astronomers have assumed that galaxy clusters look “fluffy,” giving off a very diffuse signal in the X-ray band, unlike brighter, point-like sources, which have been interpreted as extremely active quasars or black holes.
“The images are either all points, or fluffs, and the fluffs are these giant million-light-year balls of hot gas that we call clusters, and the points are black holes that are accreting gas and glowing as this gas spirals in,” McDonald says. “This idea that you could have a rapidly accreting black hole at the center of a cluster — we didn’t think that was something that happened in nature.”
But the Phoenix discovery proved that galaxy clusters could indeed host immensely active black holes, prompting McDonald to wonder: Could there be other nearby galaxy clusters that were simply misidentified?
An extreme eater
To answer that question, the researchers set up a survey named CHiPS, for Clusters Hiding in Plain Sight, which is designed to reevaluate X-ray images taken in the past.
“We start from archival data of point sources, or objects that were super bright in the sky,” Somboonpanyakul explains. “We are looking for point sources inside fluffy things.”
For every point source previously identified, the researchers noted its coordinates and then studied it more closely with the Magellan Telescope, a powerful optical telescope that sits in the mountains of Chile. If they observed a higher-than-expected number of galaxies surrounding the point source (a sign that it might sit at the center of a cluster), they looked at the source again with NASA’s space-based Chandra X-ray Observatory, searching for an extended, diffuse source around the main point source.
“Some 90 percent of these sources turned out to not be clusters,” McDonald says. “But the fun thing is, the small number of things we are finding are sort of rule-breakers.”
The new paper reports the first results of the CHiPS survey, which has so far confirmed one new galaxy cluster hosting an extremely active central black hole.
“The brightness of the black hole might be related to how much it’s eating,” McDonald says. “This is thousands of times brighter than a typical black hole at the center of a cluster, so it’s very extreme in its feeding. We have no idea how long this has been going on or will continue to go on. Finding more of these things will help us understand: Is this an important process, or just a weird thing that there’s only one of in the universe?”
The team plans to comb through more X-ray data in search of galaxy clusters that might have been missed the first time around.
“If the CHiPS survey can find enough of these, we will be able to pinpoint the specific rate of accretion onto the black hole where it switches from generating primarily radiation to generating mechanical energy, the two primary forms of energy output from black holes,” says Brian McNamara, professor of physics and astronomy at the University of Waterloo, who was not involved in the research. “This particular object is interesting because it bucks the trend. Either the central supermassive black hole’s mass is much lower than expected, or the structure of the accretion flow is abnormal. The oddballs are the ones that teach us the most.”
In addition to shedding light on a black hole’s feeding, or accretion behavior, the detection of more galaxy clusters may help to estimate how fast the universe is expanding.
“Take for instance, the Titanic,” McDonald says. “If you know where the two biggest pieces landed, you could map them backward to see where the ship hit the iceberg. In the same way, if you know where all the galaxy clusters are in the universe, which are the biggest pieces in the universe, and how big they are, and you have some information about what the universe looked like in the beginning, which we know from the Big Bang, then you could map out how the universe expanded.”
This research was supported, in part, by the Kavli Research Investment Fund at MIT, and by NASA.
Imagine a photo of Paris you’ve seen before, whether it’s the Eiffel Tower or an urchin carrying a baguette. Have you ever considered the story behind that picture — why it was taken, and why it’s in circulation today?
If you haven’t, MIT scholar Catherine Clark certainly has. Clark, an associate professor of French studies in MIT’s Global Studies and Languages section, has looked at tens of thousands of photos of Paris over the years. Now, in a new book, Clark takes a deep look at history told through photographs of Paris itself — as a way of understanding how photography’s influence on our historical imaginations has changed since its 19th-century origins.
After all, Paris is where Louis Daguerre unveiled his “daguerreotype” method of photo-making in 1839, and people have been training their cameras on the city ever since. At first, many Parisians were simply documenting their city. In the 20th century, however, Parisian photography became more self-conscious. Many World War II photos of Paris, for instance, are staged images meant to burnish the idea of French resistance, accurately or not.
“Looking at old photographs has its own history,” Clark says. “The book traces the ways in which that evolves — how people’s ideas about what the photograph will do, and do for the study of history, changes.”
Clark’s book, “Paris and the Cliché of History,” is being published this week by Oxford University Press. The title plays on the word “cliché,” which in French also refers to the glass plates that used to serve as photographic negatives in the early days of the medium, as well as the metal printing plates that combined images and type.
At first: Copying the world
In Clark’s account, Paris has seen at least five distinct historical phases during which the purpose of photographing the city evolved.
The first occurred around 1860, as Paris went through a radical physical transformation led by Georges-Eugène Haussmann, who created the scheme of grand boulevards and clear urban geometry that now defines much of the city. While demolishing much of the old Paris, however, Haussmann sought to chronicle it through the city’s official photographer, Charles Marville. Over time, the city’s Musée Carnavalet also served as a focal point for this kind of effort, acquiring a huge collection of Paris images.
“Part of rebuilding Paris to be an imperial capital and seat of power was preserving its history,” Clark explains.
At this stage, she notes, photography as a medium was often straightforwardly documentary, replacing paintings and prints as our essential visual representation of the world.
“There were debates in the 19th century about what photos were going to be used for — for science, for art, but also, for people running historical institutions,” Clark notes. “The first thing they’re doing is just cataloging objects. Photographs were imagined as a one-to-one copy of something in the world.”
That changed. A second and distinct phase of Paris’ photographic history, Clark thinks, set in by the 1920s. Photos now became objects of nostalgia for the French, who were dealing with the trauma of World War I, when France suffered millions of casualties and the global order spun out of control.
“It’s this major moment of social upheaval, not just because a lot of people died, but because the world seemed to crumble,” Clark says. “People felt there was a rupture, and they could never go back to what existed before, except now, they could see what the past looked like, because they had photographs. And so a new paradigm arises from that, where photographs are fragments of lost time. And that’s a really powerful way to think about photographs.”
Paris at war: Creating historical narratives
War also produced a third distinct phase of photography in Paris, in Clark’s account — World War II, often depicted through images of seemingly heroic Parisian resistance fighters in moments of dramatic action. But as Clark notes, many such photos were plainly staged. Consider one photo she analyzes in the book, in which three French citizens look through a window, with one aiming a rifle outside. It is almost certainly not a glimpse of real fighting — the men in the picture are too exposed and neatly arranged.
Or take some photos showing ordinary Parisians at barricades in the streets — which would have been a futile tactic in the face of German tanks. But a barricade is a historical trope signifying resistance. To some degree, then, the people in these Parisian photos were “demonstrating political allegiances and performing a certain type of wartime action,” Clark says. “Militarily, the liberation of Paris doesn’t matter that much, but in terms of what it symbolizes, it really matters.”
Moreover, in such resistance photos, we see a familiar process at work, in which people are self-consciously thinking about how the images will be viewed in the future.
“The way in which [the war] was going to be remembered was already being performed on the ground,” Clark says. “It’s not like things happen, and then we create historical narratives about them. We’re already creating historical narratives as we act. These photos published in newspapers, magazines, and books are often in turn the way people learn about such events.”
Plus ça change
Clark believes there are at least two other notable moments when Parisian photography evolved in significant ways. One came in the run-up to city-wide celebrations, in 1951, of Paris’ 2,000th anniversary. Around this time, she observes, Paris photos became more oriented around ordinary people — working-class men, women, and children in everyday life.
“The problem in Paris in 1950 and 1951 is that the city doesn’t look grand,” Clark says. “It’s kind of falling apart in a lot of places. So what do you do when the city doesn’t look great, but you know that it is great? You create narratives about other types of greatness. And I think that’s one reason for this nostalgia for the Parisian working classes in 1950.”
A rather different burst of Parisian photography occurred around the 1970s, Clark notes, in the form of a city-sanctioned amateur photography contest that produced 100,000 images of the city. (Clark estimates she has looked at about 15,000 of them.) Here too, Paris officials were trying to capture the city at a moment of physical change, but this time they let the people do it.
“It’s a great echo for this 1860 moment,” Clark says. “Paris is being modernized again, highways, cars along the banks of the Seine, new high-rises, this feeling of needing to capture what’s being destroyed. [And] in the 1970s, there’s a real sense that the best people to photograph the city would be people who love the city and believe in it. There’s just so much diversity, so many ways of seeing the city in there.”
In the bigger historical picture, Clark thinks, Paris is, on the one hand, ideally suited to an analysis of its photographic self-image — yet hardly the only place where this type of study can be performed.
“The French think of photography as their own invention, and there is a national heritage element to it,” Clark says. “There’s a powerful archival impulse in France … and the historical institutions in Paris are some of the earliest.”
On the other hand, Clark notes, her study is about how photography shaped the historical imagination, and the general concept that Parisian photographic history changed at moments of dramatic historical upheaval may well apply to other cities, too.
“I would love to see someone do a similar study in other places,” Clark says. “After all, in 1839, photography was given to the world by the French government, but the rest of the world made it their own.”
A miniature satellite called ASTERIA (Arcsecond Space Telescope Enabling Research in Astrophysics) has measured the transit of a previously discovered super-Earth exoplanet, 55 Cancri e. The finding shows that miniature satellites like ASTERIA are capable of making sensitive detections of exoplanets via the transit method.
While observing 55 Cancri e, which is known to transit, ASTERIA measured a minuscule change in brightness, about 0.04 percent, when the super-Earth crossed in front of its star. This transit measurement is the first of its kind for CubeSats (the class of satellites to which ASTERIA belongs), which are about the size of a briefcase and hitch a ride to space as secondary payloads on rockets used for larger spacecraft.
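The 0.04 percent figure is consistent with the standard transit-depth relation, in which the fractional dip in brightness equals the squared ratio of planet radius to star radius. A minimal sketch using rough literature values for 55 Cancri e and its host star (the radii below are assumptions for illustration, not values from the mission):

```python
# Transit depth: depth = (Rp / Rs)^2, the fraction of starlight blocked
# when the planet crosses the stellar disk.

R_EARTH_KM = 6_371.0    # mean Earth radius
R_SUN_KM = 695_700.0    # nominal solar radius

def transit_depth(planet_radius_km, star_radius_km):
    """Fractional dip in stellar brightness during a transit."""
    return (planet_radius_km / star_radius_km) ** 2

rp = 1.9 * R_EARTH_KM   # assumed radius of 55 Cancri e (~1.9 Earth radii)
rs = 0.94 * R_SUN_KM    # assumed radius of the host star (~0.94 solar radii)

depth = transit_depth(rp, rs)
print(f"{depth:.2%}")   # on the order of a few hundredths of a percent
```

The result lands near the 0.04 percent dip ASTERIA detected, which is why catching it demands the pointing and thermal stability described below.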
The ASTERIA team presented updates and lessons learned about the mission at the Small Satellite Conference in Logan, Utah, last week.
The ASTERIA project is a collaboration between MIT and NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California, funded through JPL's Phaeton Program. The project started in 2010 as an undergraduate class project in 16.83/12.43 (Space Systems Engineering), involving a technology demonstration of astrophysical measurements using a CubeSat, with a primary goal of training early-career engineers.
The ASTERIA mission — of which Department of Earth, Atmospheric and Planetary Sciences Class of 1941 Professor of Planetary Sciences Sara Seager is the Principal Investigator — was designed to demonstrate key technologies, including very stable pointing and thermal control for making extremely precise measurements of stellar brightness in a tiny satellite. Earlier this year, ASTERIA achieved pointing stability of 0.5 arcseconds and thermal stability of 0.01 degrees Celsius. These technologies are important for precision photometry, i.e., the measurement of stellar brightness over time.
Precision photometry, in turn, provides a way to study stellar activity, transiting exoplanets, and other astrophysical phenomena. Several MIT alumni have been involved in ASTERIA's development from the beginning, including Matthew W. Smith PhD '14, Christopher Pong ScD '14, Alessandra Babuscia PhD '12, and Mary Knapp PhD '18. Brice-Olivier Demory, a professor at the University of Bern and a former EAPS postdoc who is also a member of the ASTERIA science team, performed the data reduction that revealed the transit.
ASTERIA's success demonstrates that CubeSats can perform big science in a small package. This finding has earned ASTERIA the honor of “Mission of the Year,” which was awarded at the SmallSat conference. The honor is presented annually to the mission that has demonstrated a significant improvement in the capability of small satellites, which weigh less than 150 kilograms. Eligible missions have launched, established communication, and acquired on-orbit results after Jan. 1, 2017.
Now that ASTERIA has proven that it can measure exoplanet transits, it will continue observing two bright, nearby stars to search for previously unknown transiting exoplanets. Additional funding for ASTERIA operations was provided by the Heising-Simons Foundation.
The administration of elections by states improved overall by six percentage points between 2012 and 2016, according to the new Elections Performance Index (EPI) released by the MIT Election Data and Science Lab (MEDSL).
Using indicators ranging from wait times at the polls and voter turnout to problems with absentee ballots, voter registration, or voting technology, the EPI provides a nonpartisan, objective measure of how well each state is faring in managing national elections. The index, which was developed and managed by The Pew Charitable Trusts before being transferred to MEDSL in 2017, can show the impact of policy changes and where a state might be doing well or facing challenges. Voters, policymakers, and election officials can use its rankings to compare their state with its own past performance, as well as the performance of other states.
“The index is an important foundation for the ongoing discussions on election management,” says Charles Stewart III, the Kenan Sahin Distinguished Professor of Political Science at MIT and MEDSL’s founding director. “The new release of the index helps remind us that election administration is a multidimensional challenge. Significant improvements in the 2016 index also illustrate that when election officials commit themselves to a path of improvement, good things can happen.”
Overall, almost all states improved their index scores in the 2016 election, compared with the 2012 presidential election. Twenty-two states improved at a rate greater than the national average. Vermont showed the most significant improvement, landing at the top of the index for the first time after expanding the availability of online voter tools, providing online voter registration, and requiring a postelection audit. The District of Columbia, West Virginia, and South Carolina also saw significant gains in their scores and rankings.
Only six states saw their scores decline from 2012. This is largely due to an increase in the residual vote rate, which is a common measure of voting machine performance. However, the residual vote rate can also increase when more voters abstain from voting for president, which appears to have been a significant factor in the decline of four of the six states.
When it launched in 2013, the EPI provided the first comprehensive assessment of election administration in all 50 states and Washington, D.C. Its 17 indicators (which were winnowed down by an advisory committee from an initial list of 40) were selected as measurements for reliable, consistent, and valid data that covered the broad scope of issues involved in managing an election.
“We felt it was important to reflect a wide variety of factors that could determine whether voting was convenient and secure,” says Stewart, who was one of the convenors of the EPI advisory committee. “We selected not only obvious measures like voter turnout, but also less visible factors that nonetheless affect a lot of voters, such as the handling of absentee ballots and residual votes.”
The EPI now includes data from every federal U.S. election since 2008. The online index provides an interactive way to see how election administration has changed over time during this period, and allows users to explore the context behind each measurement, as well as the data. Individual pages show the story for each state, explain what each indicator means, and illustrate differences or trends.
Cybersecurity has been a hot topic in discussions on U.S. election administration; some voters and election officials might understandably wonder whether the EPI sheds light on the issue.
The short answer: “It’s complicated,” says Stewart.
“The EPI tries to measure policy outputs whenever possible, and it’s very difficult to measure the security of a state’s election system directly,” he says. “Certainly, if a major attack on a state’s computer system dramatically affected the ability to vote, it would show up in a number of factors we measure.”
For example, if such an attack occurred, voter turnout might be lower, more provisional ballots might be used, more voters might complain of registration problems, or longer lines might form at the polls. The EPI already measures those factors, as well as whether states require audits of election results, which, says Stewart, “is one way to capture whether the voting machines or tallying systems were tampered with.
“Still, we’d like to develop a measure closer to cybersecurity itself, and we plan to work on that for the next release,” he says.
One interesting thread to trace in the EPI is the issue of long lines at the polls, which made headlines in 2012. This spurred then-President Obama to appoint a commission to study a wide variety of election administration issues, including voting wait times.
The effort that state and local officials put into addressing wait times at polling places paid off, as the significant drop in this indicator for many of 2012’s worst-performing states shows. Florida — which at 45 minutes had the longest average wait to vote in 2012 — dropped to an average wait of 5.5 minutes in 2016. The District of Columbia, which had 2012’s second-longest average wait, saw wait times drop from 33.9 minutes to 16.3 minutes in 2016. Overall, seven states had average wait times of more than 20 minutes in 2012, but by 2016 that number had dropped to zero.
(MIT, along with the Bipartisan Policy Center, has had a major program to work with election officials to record line lengths and reduce wait times. A report on this effort, entitled “Improving the Voter Experience," was released in April.)
There are a few other trends that stand out to seasoned election watchers.
“One of the most surprising things we saw in 2016 was a spike in the residual vote rate,” says Stewart. This indicator measures the performance of voting machines; to do so, it calculates the number of under-votes and over-votes cast in an election, as a percentage of voter turnout. An under-vote occurs when no vote is recorded on a ballot; an over-vote, conversely, means a ballot has votes for more than one candidate in a single-winner race.
Because it relies on the top office on the ballot, the residual vote rate is calculated only every four years, using the presidential vote (the top office for midterm elections varies considerably from state to state). The 2000 election, with its infamous punch cards and hanging chads, still holds the record for the highest residual vote rate in the last two decades. Nationwide, the rate in that election was 1.9 percent, with state highs of up to 3.9 percent. In contrast, the average rate for the 2012 election was only 0.99 percent.
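The residual vote rate described above is simple arithmetic: ballots cast minus valid votes recorded for the top office, divided by ballots cast. A minimal sketch with invented figures (the function name and the sample numbers are illustrative, not actual state data):

```python
def residual_vote_rate(ballots_cast, votes_for_top_office):
    """Under-votes and over-votes both leave a ballot with no valid
    vote recorded for the top office; the residual vote rate is that
    count as a fraction of total turnout."""
    residual = ballots_cast - votes_for_top_office
    return residual / ballots_cast

# Hypothetical state: 2,000,000 ballots, 1,972,000 valid presidential votes.
rate = residual_vote_rate(2_000_000, 1_972_000)
print(f"{rate:.2%}")  # 1.40%
```

A rate like this hypothetical 1.40 percent would sit between the 0.99 percent average of 2012 and the 1.39 percent spike of 2016.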
The nationwide residual vote rate in 2016, however, spiked up to 1.39 percent. Why such a significant jump? It may be due to an increase in voters abstaining from casting a vote in the contentious presidential race, rather than a decline in the performance of voting machines. Interestingly, Nevada saw a historical low in the residual vote rate in 2016. The state offers an option on its ballots for voters to choose “none of these candidates.”
As MEDSL’s staff look ahead to the next federal election, now less than three months away, they’re already planning out their approach for the 2018 EPI.
“We learned in 2016 that the EPI indicators are remarkably stable within states over time, although changes in election policies can dramatically change a state’s position in the index,” says Cameron Wimpy, the lab’s research director. “Looking ahead to 2018, we’ll be thinking about other objective measures of election administration that vary across the states.”
Nearly 10 years after the first EPI advisory committee was convened, MEDSL will reconvene many of its members — plus a few new faces — to revisit the current indicators and discuss whether any might need to be altered or retired. At the same time, they’ll face the challenge of evaluating new potential measurements and data sources, and identifying whether they have a place in a redesigned index.
The bottom line?
“The core of election administration is making sure that every voter who wants to cast a vote can, and ensuring that only legal votes are cast and counted,” says Stewart. “The EPI exists to illustrate how many factors are involved in doing that, and to help the public understand what needs to be fixed. Monitoring all these factors will be our task as the U.S. rolls into another national election.”
MIT researchers have developed a tool that makes it much easier and more efficient to explore the many compromises that come with designing new products.
Designing any product — from complex car parts down to workaday objects such as wrenches and lamp stands — is a balancing act with conflicting performance tradeoffs. Making something lightweight, for instance, may compromise its durability.
To navigate these tradeoffs, engineers use computer-aided design (CAD) programs to iteratively modify design parameters — say, height, length, and radius of a product — and simulate the results for performance objectives to meet specific needs, such as weight, balance, and durability.
But these programs require users to modify designs and simulate the results for only one performance objective at a time. As products usually must meet multiple, conflicting performance objectives, this process becomes very time-consuming.
In a paper presented at this week’s SIGGRAPH conference, researchers from the Computer Science and Artificial Intelligence Laboratory (CSAIL) describe a visualization tool for CAD that, for the first time, lets users instead interactively explore all designs that best fit multiple, often-conflicting performance tradeoffs, in real time.
The tool first calculates optimal designs for three performance objectives in a precomputation step. It then maps all those designs as color-coded patches on a triangular graph. Users can move a cursor in and around the patches to prioritize one performance objective or another. As the cursor moves, 3-D designs appear that are optimized for that exact spot on the graph.
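One natural way to implement such a triangular map is with barycentric coordinates: the cursor’s position inside the triangle yields three weights, one per performance objective, that sum to one. The sketch below illustrates that idea; the coordinate convention and vertex labels are assumptions for illustration, not details taken from the paper:

```python
def barycentric_weights(p, a, b, c):
    """Weights of point p relative to triangle vertices a, b, c.
    Each weight indicates how strongly the corresponding objective
    is prioritized; inside the triangle they are nonnegative and
    sum to 1."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    denom = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w_a = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / denom
    w_b = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / denom
    return w_a, w_b, 1.0 - w_a - w_b

# Vertices standing for, say, mass, stability, and focal distance.
A, B, C = (0.0, 0.0), (1.0, 0.0), (0.5, 1.0)
weights = barycentric_weights((0.5, 1 / 3), A, B, C)  # triangle centroid
print(weights)  # roughly equal weight on all three objectives
```

Dragging the cursor toward a vertex would push that objective’s weight toward 1, matching the behavior the article describes.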
“Now you can explore the landscape of multiple performance compromises efficiently and interactively, which is something that didn’t exist before,” says Adriana Schulz, a CSAIL postdoc and first author on the paper.
Co-authors on the paper are Harrison Wang, a graduate student in mechanical engineering; Eitan Grinspun, an associate professor of computer science at Columbia University; Justin Solomon, an assistant professor in electrical engineering and computer science; and Wojciech Matusik, an associate professor in electrical engineering and computer science.
The new work builds off a tool, InstantCAD, developed last year by Schulz, Matusik, Grinspun, and other researchers. That tool let users interactively modify product designs and get real-time information on performance. The researchers estimated that tool could reduce the time of some steps in designing complex products to seconds or minutes, instead of hours.
However, a user still had to explore all designs to find one that satisfied all performance tradeoffs, which was time-consuming. This new tool represents “an inverse,” Schulz says: “We’re directly editing the performance space and providing real-time feedback on the designs that give you the best performance. A product may have 100 design parameters … but we really only care about how it behaves in the physical world.”
In the new paper, the researchers home in on a critical aspect of performance called the “Pareto front,” a set of designs optimized for all given performance objectives, where any design change that improves one objective worsens another objective. This front is usually represented in CAD and other software as a point cloud (dozens or hundreds of dots in a multidimensional graph), where each point is a separate design. For instance, one point may represent a wrench optimized for greater torque and less mass, while a nearby point will represent a design with slightly less torque, but more mass.
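The Pareto front described here has a simple operational definition: a design is on the front if no other design matches or beats it on every objective while strictly beating it on at least one. A minimal sketch for the two-objective wrench example (higher torque is better, lower mass is better); the sample points are invented:

```python
def dominates(d1, d2):
    """d1 dominates d2 if it has at least as much torque and at most
    as much mass, with a strict improvement in at least one."""
    t1, m1 = d1
    t2, m2 = d2
    return t1 >= t2 and m1 <= m2 and (t1 > t2 or m1 < m2)

def pareto_front(designs):
    """Keep only the non-dominated designs."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs)]

# Invented wrench designs as (torque, mass) pairs.
designs = [(10, 1.0), (12, 1.5), (9, 0.8), (12, 1.2), (8, 1.1)]
print(pareto_front(designs))  # [(10, 1.0), (9, 0.8), (12, 1.2)]
```

A real CAD point cloud would have far more points and often three or more objectives, but the dominance test is the same; the researchers’ contribution is finding and mapping the whole front quickly rather than point by point.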
Engineers laboriously modify designs in CAD to find these Pareto-optimized designs, using a fair amount of guesswork. Then they use the front’s visual representation as a guideline to find a product that meets a specific performance, considering the various compromises.
The researchers’ tool, instead, rapidly finds the entire Pareto front and turns it into an interactive map. The input to the model is a product with its design parameters, along with information about how those parameters correspond to specific performance objectives.
The model first quickly uncovers one design on the Pareto front. Then, it uses some approximation calculations to discover tiny variations in that design. After doing that a few times, it captures all designs on the Pareto front. Those designs are mapped as colored patches on a triangular graph, where each patch represents one Pareto-optimal design, surrounded by its slight variations. Each edge of the graph is labeled with a separate performance objective based on the input data.
In their paper, the researchers tested their tool on various products, including a wrench, a bike frame component, and a brake hub, each with three or four design parameters, as well as a standing lamp with 21 design parameters.
With the lamp, for example, all 21 parameters relate to the thickness of the lamp’s base, height and orientation of its stand, and length and orientation of three elbowed beams attached to the top that hold the light bulbs. The system generated designs and variations corresponding to more than 50 colored patches reflecting a combination of three performance objectives: focal distance, stability, and mass. Placing the cursor on a patch closer to, say, focal distance and stability generates a design with a taller, straighter stand and longer beams oriented for balance. Moving the cursor farther from focal distance and toward mass and stability generates a design with a thicker base and a shorter stand and beams, tilted at different angles.
Some designs change quite dramatically around the same region of performance tradeoffs and even within the same cluster. This is important from an engineer’s perspective, Schulz says. “You’re finding two designs that, even though they’re very different, they behave in similar ways,” she says. Engineers can use that information “to find designs that are actually better to meet specific use cases.”
The work was supported by the Defense Advanced Research Projects Agency, the Army Research Office, the Skoltech-MIT Next Generation Program, and the National Science Foundation.
Jessika Trancik’s life has been one of straddling languages and cultures, both in academia and in her own life.
Born in Boston to a Swedish mother and American father, she lived in Cambridge as a child but spent summers in Sweden and grew up bilingual, and still carries dual citizenship. In her work as an associate professor in MIT’s Institute for Data, Systems, and Society, which spans all of MIT’s five schools, she brings together engineering and the social sciences in order to carry out rigorous analyses of the factors that bring about technological changes that transform society.
Trancik says that “growing up in a couple of different cultures, you become very comfortable with that, and to some extent being able to understand and talk to people from different disciplines is [a similar experience].” And, she says, “science is a way to connect with people around the world. There are no national boundaries; it’s like a more unified network of people working on problems and discussing them.”
“I was always drawn to science and engineering because it affords an opportunity to have a positive impact on people and the planet,” she says.
Trancik graduated from Cornell University, where her father was a professor of urban design and her mother was a lecturer in Swedish. While there she studied materials science, analyzing the structure of metals and ceramics using transmission electron microscopy. In graduate school, as a Rhodes Scholar at Oxford University, she focused on polymer materials including spider silk. This exceptionally strong material, which is formed at room temperature, is a promising model for synthetic materials for structural and other applications, she says. “There’s an environmental motivation there, which is that you can produce a higher-performance material at lower energetic costs,” she says.
At MIT, where she earned tenure last year, Trancik’s work focuses on the pace of innovation in different sectors of technology, and the forces that accelerate or retard that progress, for example in energy storage or in photovoltaic cells. That includes “evaluating opportunities for cost reduction and improvement, and what the effective drivers of improvement might be,” she says.
Her work these days focuses on “whether we can predict in advance what kinds of technological innovations will take off,” and how to influence that process, she says. “This whole area of research that I’ve been working on developing spans evaluating technologies in a larger context — their scalability, their costs, their emissions, their performance along different dimensions — and then using that information to inform how we’re developing these technologies.” The work aims to guide decisions by engineers, policymakers, businesses, and investors.
She credits some early teachers for asking open-ended questions that helped spur creative thinking. “I’ve always had an interest and enjoyment in problem solving and answering previously unanswered questions, and in rigorous ways, in testable ways.” And in applying that, she says, “I was really interested in the environment and in design, and so materials science seemed to bring together a lot of these different topics and questions.”
Trancik’s interests were, and are, quite varied indeed. “Growing up I was very interested in languages, I was very interested in design and painting and drawing, and I was also very interested in math and science, and writing. I did a ton of sports, first gymnastics and then tennis and skiing. That really sort of persisted, along with mountain biking, surfing, and lots of other sports like that. And music as well. I played the saxophone.”
As she grew older, she realized she needed to specialize a bit, she says. “But I would have liked to have continued to do everything.” Still, she comes close. “I’m still skiing, backcountry skiing, surfing, mountain biking, these are my favorite sports. And that’s sort of just a part of my life, like brushing my teeth.”
Before her PhD, Trancik spent a summer doing volunteer work in Kenya, and then following her PhD she worked for a couple of years at the United Nations on postconflict sustainable development. “That’s the time when I got interested in taking my research in a new direction, to span studying technology and materials development as well as big societal challenges. And that’s when I started working on energy and climate change,” she recalls. That led to a fellowship at Columbia University’s Earth Institute, where “the research I was doing was on photovoltaic devices and materials, as well as studying trends in solar energy development, and trying to understand why some technologies develop more quickly than others.”
After that, she went to a fellowship at the Santa Fe Institute, where “that afforded me a great opportunity to interact with scientists from many different areas — physicists, economists, biologists, engineers, and so forth.”
There, “the environment really supported what I was trying to do, which was to continue to build this research focus that spanned technology evaluation and technology development, and to explain technological progress in a way that we can help inform and direct it in positive directions,” she says.
Last year, Trancik gave a TEDx talk that summarized her views and her research focus. She says that in addressing global issues like climate change “we want to develop technology to help us solve these challenges. We need to understand how to measure technological progress toward these goals that we care about. And we also need to be able to understand and take advantage of the drivers of technological progress.
“We always have limited time and money,” she says. “So how do we make decisions that are going to allow us to get in the direction that we need to? How do we speed up the process?” That’s what Trancik aims to find out.
Three high school girls whose entry in a NASA technology competition was targeted for derailment by a racist and misogynistic online forum were recently honored by an MIT group for their achievement in engineering a water-filtration system using aerospace technology.
AeroAfro, an unofficial student group within the Department of Aeronautics and Astronautics, sponsored a campus visit on Aug. 3 for the three girls, in recognition of their placing second in NASA’s Optimus Prime Spinoff Promotion and Research Challenge (OPSPARC). The students toured the campus and presented an overview of their winning entry to an appreciative MIT audience.
AeroAfro's stated goal is to advance the diversity and inclusion initiative of AeroAstro. Graduate student Stewart Isaacs, who coordinated the visit, said members of the group were “disgusted” by the girls’ experience with online racism.
“We sought to support their careers in STEM by connecting them with the cutting-edge science and technology resources at MIT,” he said.
The annual OPSPARC competition challenges students to research NASA technology and develop innovative ways to apply it to everyday situations. OPSPARC 2018 drew more than 1,500 entrants from the U.S. and Canada.
In April, OPSPARC announced that the team of Mikayla Sharrieff, India Skinner, and Bria Snell was one of eight finalists in this year’s contest. The 17-year-old juniors attend Banneker High School in Washington, where they are in the school’s STEM program. Theirs was the only all-black, all-female team to reach the finals.
Concerned about lead content in their school's drinking fountain water, the three named their project “From H2NO to H2O.” They used simple equipment including glass jars, water contaminated with copper shards, an electric fan, and filtering floss to develop a filtration system that produces water pure and clean enough to drink. Their research was inspired by NASA water purification technology, and conducted at the Inclusive Innovation Incubator (In3), a technology lab near Howard University, where they also volunteer.
“We were impressed by how their project used aerospace technology to help solve the problem of clean water access,” Isaacs said. “This is a problem facing many majority-black communities in the United States. Their project presentation gave MIT researchers an opportunity to learn how we can follow their lead to support neglected populations with our own work.”
OPSPARC winners are determined by an assessment of the projects by NASA judges and by public voting promoted via social media, and as soon as the finalists were announced, the voting was underway. But only four days later, users from the anonymous Internet forum 4chan, a site infamous for destructive hoaxes and racist, misogynistic, and homophobic comments, attempted to divert votes from the Banneker team.
The 4chan users launched a cyberattack against the young women, arguing that the black community was voting for them only because of their race, and recommending computer programs that would hack the NASA system and give the advantage to a team of boys. Reacting to the attack, NASA announced an early termination to public voting.
“If you want to be successful in this world, you’ll be targeted,” said Snell at the team's presentation. “But it’s OK. You have to persevere.”
Their determination and belief in their project paid off. In May, NASA’s Goddard Space Flight Center announced that the Banneker team had placed second in the OPSPARC competition.
The team was also awarded a $4,000 grant by Washington mayor Muriel E. Bowser in recognition of their success. The grant was given to In3 and used by the young women to purchase materials to implement their water purification system in the D.C. area.
Sharrieff, Skinner, and Snell hope to expand From H2NO to H2O by incorporating a metal filter to extract chlorine from drinking water as well. Their project has far-reaching potential, from alleviating the water crises in Flint and Baltimore to improving the potability of water in developing countries.
When they’re not busy in the lab, the three participate on their school’s cheerleading squad, and Snell runs on the Banneker Bulldogs track team. They all plan to attend college in preparation for science-based careers: Sharrieff in biomedical engineering, Skinner in pediatric surgery, and Snell in anesthesiology.
The students’ visit included a tour of MIT’s Wright Brothers Wind Tunnel, the Media Lab, and other facilities, and introductions to MIT professors and researchers. “What an amazing opportunity,” Sharrieff said of her time on campus. “It opened my eyes to what I want to do in the future regarding graduate school.”
Skinner echoed her teammate. “It was great to discover what life is like at MIT. It’s truly amazing here.”
It’s an ambitious goal: to change the status quo. Yet, this is just what Alan Al Yuseef has set out to do. As one of the first people to earn the MITx Data, Economics, and Development Policy MicroMasters credential, he’s working to improve lives in the Middle East through a career in development economics.
For Al Yuseef, poverty, corruption, and inefficient institutions are not just concepts or distant concerns. He knows issues like these directly impact the everyday lives of real people. Not long ago, he was one of those people.
He arrived in Belgium from Syria in 2008. Growing up, he recognized the need for new ideas and big changes in his country’s leadership and institutions and was motivated to do his part to help.
“Being born and raised in Syria and having lived for a few years in Iraq as well, I have firsthand experience of what it means to live in an underdeveloped country with dysfunctional institutions and widespread poverty,” he says. Yet corruption, lack of access to high-quality education, and disrupted communities have waylaid people like himself, who are motivated to mitigate such problems by entering public policy.
“Due to the widespread corruption and nepotism in the public institutions, people are faced with all kinds of wrong incentives,” he explains. “Be it a decision to start up a new business, expand [an] existing business, or getting credit, one must always account for the financial and psychological costs of corruption, which is a huge burden that impedes economic activity and slows down development.”
He knew that the field of economic development held many answers to the problems that Syria faced, so he set out to pursue a career in economics. As it turns out, that career path would take him over 2,000 miles away.
“I have always been curious about the dynamics of economic development, what makes countries and what breaks them,” he says. But it wasn’t until moving to Belgium, “where I reside now and where I hold refugee status, [that] I could finally enroll in university to pursue this goal.”
After earning a bachelor’s degree in applied economics, Al Yuseef joined the Belgian Office of the Commissioner General for Refugees and Stateless Persons, where he worked as a translator and assisted in interpreting asylum applications and interviews. “I heard the stories of hundreds of refugees from different Middle Eastern and Arab countries, and all of them shared one thing: a desperation and lack of trust in the local institutions to guarantee a decent level of livelihood,” he recalls.
This desperation was one he knew well, and his desire for a career in development economics took on new urgency. Unfortunately, he says he could not find a Belgian university that offered a graduate program in development economics and again found himself searching for an education that didn’t seem accessible. Then he discovered the MITx Data, Economics, and Development Policy (DEDP) MicroMasters program.
“I got my hands on the book 'Poor Economics' by MIT professors Esther Duflo and Abhijit Banerjee, which got me excited about their work and their approach to the development question,” he recalls. He was excited to find some of the world’s best researchers doing the kind of work he wanted to do, such as the practical application of economics to real-world problems like those found in his native region. When he learned that Duflo and Banerjee, along with MIT economics Professor Benjamin Olken, had helped develop and co-direct an online program intended to do just that, he says he didn't hesitate for a moment and enrolled right away.
Now, less than a year later, he is among the first learners to earn the MITx DEDP MicroMasters credential. Armed with this credential, he feels more prepared for the future and intends to use it as a springboard both academically and professionally.
Al Yuseef is now finishing a master’s degree in applied economics at the Vrije Universiteit Brussel in Belgium, and he says the knowledge he gained in the DEDP MicroMasters program will profoundly influence his upcoming thesis focusing on the role of democracy and institutions in economic development. His true interest, however, lies beyond applied business economics: he wants to pursue a career in development economics. After finishing his current program, he intends to apply for the accelerated, blended master’s degree in data, economics, and development policy at MIT.
“Having had the opportunity to follow the DEDP courses and knowing the quality of education offered by MIT, it will be an enormous personal achievement for me to complete the [residential] program and obtain the DEDP Master’s degree,” he says. In the meantime, “my plan is to work in the development sector starting next year and I am quite confident that the MicroMasters credential will be a great asset for me.”
To think beyond how things are and work toward how things could be takes courage and more than a little commitment. Al Yuseef has plenty of both, and through the power of education — whether in person, online, or a blend of both — he’s tackling some of his country’s greatest challenges. Like Al Yuseef, many learners enroll in the DEDP MicroMasters program motivated by the desire to be informed citizens of the world and effective allies to development communities.
Al Yuseef's story illustrates the power that online learning can hold for refugee learners, whose education has stalled due to factors vastly outside their control. This summer, the DEDP MicroMasters program began partnering with ReACT, an organization at MIT that aims to connect refugees with opportunities in higher education. Together, the two organizations are providing refugee learners with access to all five of the DEDP online courses, in-person skills-building workshops, and paid internships with top organizations in the field.
Launched in Jordan in January, MIT ReACT Hub has successfully advanced its first cohort of participants through its blended learning Computer and Data Science certificate program. Enrollment for the DEDP MicroMasters fall semester starts on Sept. 11.
Nobel laureate, Killian lecturer, and F.G. Keyes Professor Richard Royce Schrock recently announced his retirement from teaching and will officially transition to emeritus status within the Department of Chemistry on Sept. 1.
“I look forward to a period in my life with fewer deadlines, which is the point of retirement,” said Schrock. “However, it is difficult to imagine the next few years without the challenges and joys of fundamental research, which I have enjoyed throughout my career.”
Schrock intends to remain research-active at MIT and will continue to maintain his laboratory and mentor a research group in Cambridge, while simultaneously taking advantage of the spare time that retiring from teaching allows.
“I have had the good fortune to have been part of the discovery and development of an area of research that has spanned 50 years; that growth continues, even at a fundamental level,” he says. “Recently, my group made some potentially important discoveries so I hope to support a few postdoctoral students to complete these studies.”
Schrock also intends to use his retirement to contribute to chemistry as a whole. Part of that plan involves spending winters at the University of California at Riverside, his undergraduate alma mater. His appointment to the inaugural George K. Helmkamp Founder’s Chair in Chemistry will afford him the opportunity to meet with Riverside faculty and students while enjoying warm winters near his family in Long Beach. Schrock will continue to call Winchester, Massachusetts, where he and his wife Nancy live, home surrounded by their friends and hobbies. “Our home includes a bookbinding studio for Nancy, a woodworking shop for me, and a garden and kitchen for both of us,” Schrock says.
As for his retirement “to-do list,” Schrock remains open. “I do not have a so-called ‘bucket list’ of travel goals, but I intend to enjoy traveling to see family — including my youngest son and his family in Atlanta — and friends, as opportunities arise,” he says. “Much of the future is part of the experiment of life, and involves making choices that I cannot predict.”
If you happen to have a box of spaghetti in your pantry, try this experiment: Pull out a single spaghetti stick and hold it at both ends. Now bend it until it breaks. How many fragments did you make? If the answer is three or more, pull out another stick and try again. Can you break the noodle in two? If not, you’re in very good company.
The spaghetti challenge has flummoxed even the likes of famed physicist Richard Feynman ’39, who once spent a good portion of an evening breaking pasta and looking for a theoretical explanation for why the sticks refused to snap in two.
Feynman’s kitchen experiment remained unresolved until 2005, when physicists from France pieced together a theory to describe the forces at work when spaghetti — and any long, thin rod — is bent. They found that when a stick is bent evenly from both ends, it will break near the center, where it is most curved. This initial break triggers a “snap-back” effect and a bending wave, or vibration, that further fractures the stick. Their theory, which won the 2006 Ig Nobel Prize, seemed to solve Feynman’s puzzle. But a question remained: Could spaghetti ever be coerced to break in two?
The answer, according to a new MIT study, is yes — with a twist. In a paper published this week in the Proceedings of the National Academy of Sciences, researchers report that they have found a way to break spaghetti in two, by both bending and twisting the dry noodles. They carried out experiments with hundreds of spaghetti sticks, bending and twisting them with an apparatus they built specifically for the task. The team found that if a stick is twisted past a certain critical degree, then slowly bent in half, it will, against all odds, break in two.
The researchers say the results may have applications beyond culinary curiosities, such as enhancing the understanding of crack formation and how to control fractures in other rod-like materials such as multifiber structures, engineered nanotubes, or even microtubules in cells.
“It will be interesting to see whether and how twist could similarly be used to control the fracture dynamics of two-dimensional and three-dimensional materials,” says co-author Jörn Dunkel, associate professor of physical applied mathematics at MIT. “In any case, this has been a fun interdisciplinary project started and carried out by two brilliant and persistent students — who probably don’t want to see, break, or eat spaghetti for a while.”
The two students are Ronald Heisser ’16, now a graduate student at Cornell University, and Vishal Patil, a mathematics graduate student in Dunkel’s group at MIT. Their co-authors are Norbert Stoop, instructor of mathematics at MIT, and Emmanuel Villermaux of Université Aix Marseille.
Experiments (above) and simulations (below) show how dry spaghetti can be broken into two or more fragments, by twisting and bending.
A deep dish dive
Heisser, together with project partner Edgar Gridello, originally took up the challenge of breaking spaghetti in the spring of 2015, as a final project for 18.354 (Nonlinear Dynamics: Continuum Systems), a course taught by Dunkel. They had read about Feynman’s kitchen experiment, and wondered whether spaghetti could somehow be broken in two and whether this split could be controlled.
“They did some manual tests, tried various things, and came up with an idea that when he twisted the spaghetti really hard and brought the ends together, it seemed to work and it broke into two pieces,” Dunkel says. “But you have to twist really strongly. And Ronald wanted to investigate more deeply.”
So Heisser built a mechanical fracture device to controllably twist and bend sticks of spaghetti. Two clamps on either end of the device hold a stick of spaghetti in place. A clamp at one end can be rotated to twist the dry noodle by various degrees, while the other clamp slides toward the twisting clamp to bring the two ends of the spaghetti together, bending the stick.
Heisser and Patil used the device to bend and twist hundreds of spaghetti sticks, and recorded the entire fragmentation process with a camera, at up to a million frames per second. In the end, they found that by first twisting the spaghetti at almost 360 degrees, then slowly bringing the two clamps together to bend it, the stick snapped exactly in two. The findings were consistent across two types of spaghetti: Barilla No. 5 and Barilla No. 7, which have slightly different diameters.
In parallel, Patil began to develop a mathematical model to explain how twisting can snap a stick in two. To do this, he generalized previous work by the French scientists Basile Audoly and Sebastien Neukirch, who developed the original theory to describe the “snap-back effect,” in which a secondary wave caused by a stick’s initial break creates additional fractures, causing spaghetti to mostly snap in three or more fragments.
Patil adapted this theory by adding the element of twisting, and looked at how twist should affect any forces and waves propagating through a stick as it is bent. From his model, he found that, if a 10-inch-long spaghetti stick is first twisted by about 270 degrees and then bent, it will snap in two, mainly due to two effects. The snap-back, in which the stick will spring back in the opposite direction from which it was bent, is weakened in the presence of twist. And the twist-back, where the stick will essentially unwind to its original straightened configuration, releases energy from the rod, preventing additional fractures.
“Once it breaks, you still have a snap-back because the rod wants to be straight,” Dunkel explains. “But it also doesn’t want to be twisted.”
Just as the snap-back will create a bending wave, in which the stick will wobble back and forth, the unwinding generates a “twist wave,” where the stick essentially corkscrews back and forth until it comes to rest. The twist wave travels faster than the bending wave, dissipating energy so that additional critical stress accumulations, which might cause subsequent fractures, do not occur.
“That’s why you never get this second break when you twist hard enough,” Dunkel says.
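The claim that the twist wave outruns the bending wave can be illustrated with a back-of-the-envelope comparison. The sketch below is not the authors’ model; it simply compares the classical torsional wave speed in a circular elastic rod, √(G/ρ), with the wavelength-dependent Euler-Bernoulli flexural phase speed. The material parameters (stiffness, density, radius, wavelength) are rough, assumed values for dry spaghetti, chosen only to show the orders of magnitude involved.

```python
import math

def torsional_wave_speed(G, rho):
    """Torsion waves in a uniform circular rod travel at sqrt(G / rho)."""
    return math.sqrt(G / rho)

def bending_phase_speed(E, rho, r, wavelength):
    """Euler-Bernoulli flexural phase speed for a circular rod.

    Bending waves are dispersive: c = sqrt(E*I / (rho*A)) * k,
    and for a circular cross-section I/A = r^2 / 4, so
    c = sqrt(E / rho) * (r / 2) * (2*pi / wavelength).
    """
    k = 2 * math.pi / wavelength
    return math.sqrt(E / rho) * (r / 2) * k

# Illustrative (assumed) parameters for a dry spaghetti stick:
E = 4e9        # Young's modulus, Pa (order-of-magnitude guess)
nu = 0.3       # Poisson's ratio (assumed)
rho = 1500.0   # density, kg/m^3 (assumed)
r = 0.8e-3     # stick radius, m

G = E / (2 * (1 + nu))  # shear modulus from E and nu

c_twist = torsional_wave_speed(G, rho)
c_bend = bending_phase_speed(E, rho, r, wavelength=0.05)  # 5 cm bending wave

print(f"twist wave:   {c_twist:7.0f} m/s")
print(f"bending wave: {c_bend:7.0f} m/s (5 cm wavelength)")
```

For centimeter-scale bending wavelengths the torsional speed comes out roughly an order of magnitude larger, consistent with the picture that the unwinding front dissipates energy before further critical stresses can build up.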
The team found that the theoretical predictions of when a thin stick would snap in two pieces, versus three or four, matched with their experimental observations.
“Taken together, our experiments and theoretical results advance the general understanding of how twist affects fracture cascades,” Dunkel says.
For now, he says the model is successful at predicting how twisting and bending will break long, thin, cylindrical rods such as spaghetti. As for other pasta types?
“Linguini is different because it’s more like a ribbon,” Dunkel says. “The way the model is constructed, it applies to perfectly cylindrical rods. Although spaghetti isn’t perfect, the theory captures its fracture behavior pretty well.”
The research was supported, in part, by the Alfred P. Sloan Foundation and the James S. McDonnell Foundation.
Visitors roaming the MIT Stratton Student Center chatted with high school students stationed at various booths, as 3-D printers hummed and a remote-controlled inflatable shark swam above their heads. Down the street at the Johnson Ice Rink, self-driving miniature racecars hurtled down a racetrack while onlookers cheered them on.
This was the scene on Sunday, Aug. 5 at the final event of the Beaver Works Summer Institute (BWSI), a four-week summer science, technology, engineering, and math (STEM) program for rising high school seniors. BWSI is an initiative of Beaver Works, a research and education center jointly operated by MIT Lincoln Laboratory and the MIT School of Engineering. BWSI started in 2016 with 46 students. On Sunday, the program concluded its third year with 198 students from 105 schools around the country.
“The Beaver Works Summer Institute is a transformational high school program that finds and attracts talented and motivated students throughout the world to science and engineering,” said Professor Sertac Karaman, an academic director of BWSI and an associate professor in MIT’s Department of Aeronautics and Astronautics. “At their core, all our classes offer hands-on, project-based experiences that strengthen the students’ understanding of fundamental concepts in emerging technologies of tomorrow.”
This year’s BWSI featured eight courses: Autonomous RACECAR (Rapid Autonomous Complex-Environment Competing Ackermann-steering Robot) Grand Prix; Autonomous Air Vehicle Racing; Autonomous Cognitive Assistant; Medlytics: Data Science for Health and Medicine; Build a Cubesat; Unmanned Air System-Synthetic Aperture Radar (UAS-SAR); Embedded Security and Hardware Hacking; and Hack a 3-D Printer. All courses were supplemented by lectures, and by an online portion designed to teach the fundamentals of each topic, which students took prior to arriving at the program.
Students from Mexico, Canada, and Nauset Regional High School in Massachusetts also participated in the courses remotely by building the technologies in their own classrooms and listening in on webcasted lectures. At the end of the program, they traveled to the MIT campus to participate in the final event.
In the spirit of hands-on learning, students made their own 3-D printers and radars, security tested a home door lock system, and designed their own autonomous capabilities for unpiloted aerial vehicles (UAVs) and miniature racecars. Despite the challenging nature of the material, the students caught on quickly.
“They constantly exceeded my already high expectations. Their ability to immediately engage with what is often graduate-level course materials is inspiring,” said Mark Mazumder, a lead instructor of the Autonomous Air Vehicle course and a Lincoln Laboratory staff member.
Medlytics lead instructor and Lincoln Laboratory staff member Danelle Shah said the participants “amazed me every day.”
“Not only are these students remarkably bright, but they are ambitious, curious, and passionate about making a difference,” Shah said.
The students showed off the results of their hard work at the final event after four weeks of learning and building. During the first half of the day, the Autonomous RACECAR teams ran time trials at the Johnson Ice Rink in preparation for the final race later in the afternoon. At Building 31, students in the Autonomous Air Vehicle Racing course raced their drones around an LED track, while nearby, drones from the UAS-SAR course used their radar to image an object obscured by a tarp.
Students from the remaining courses set up booths at the Stratton Student Center and talked to visitors about their projects, which included a design for a miniature satellite to be launched by NASA, a 3-D printer that could print icing on top of cakes, a cognitive assistant similar to Amazon’s Alexa, and a technique for using machine learning to detect cyberbullying on Twitter.
The event culminated in a final grand prix-style race in which the autonomous RACECARs competed to finish an intricate racetrack. Various obstacles such as a windmill and bowling pins were placed on the path, which the cars had to navigate around using lidar, cameras, and motion sensors that the students had integrated into the vehicles. The race was followed by an awards ceremony and closing remarks from the BWSI organizers.
“BWSI 2018 was a huge success, thanks to the passion and dedication of our staff and instructors and the enthusiasm of the students,” said Robert Shin, the director of Beaver Works. “In all eight engineering courses, the students far exceeded our expectations in their achievements. That validated our view that there is no limit to what STEM-focused high school students can achieve under the right circumstances. Our vision is to make this opportunity available to all passionate and motivated high school students everywhere.”
For many BWSI participants, the program does not end after just four weeks. Nine of this year’s associate instructors were former BWSI students, including Karen Nguyen, who now attends MIT.
“Before the program, MIT seemed like an unattainable goal. But working with the same autonomous racecars that were used by MIT students and professors allowed me to see that technology could be both advanced and accessible,” Nguyen said. Now on the teaching side of the program, Nguyen is still benefiting from it. “By helping instruct high school students on autonomous robotics, I furthered my own knowledge within the field and also learned how to make certain topics in STEM more approachable to a wider range of people,” she said.
The organizers hope to expand the program in the future by helping schools develop their own local STEM programs based on the BWSI curricula. They are also brainstorming possible new course topics for next year, such as autonomous marine vehicles, disaster relief technologies, and an assistive technology hackathon. Although the third year of BWSI is now over, the organizers hope the program will have a lasting impact on the students.
“At the end of the program, you just look at the things you accomplished and it utterly changes what you think high schoolers, given the right tools and guidance, are capable of,” said William Shi, a student who participated in the Embedded Security and Hardware Hacking course. “You feel empowered to go out and try your hand at even more ambitious things, to see how far you truly can go.”
A group of high school students, some from as far away as Italy and China, came to MIT’s Edgerton Center this summer to learn more about what it takes to be an engineer — and learned a bit more about themselves as well.
Now in its 12th year, the Edgerton Center’s Engineering Design Workshop (EDW) brought together 27 students in a month-long creative binge to flesh out their own projects. Some were practical, some were whimsical, but all were challenging and fun.
The students started out the summer by learning basic electronics, mechanical fabrication, and a bit of 3-D printing. They then broke up into teams and brainstormed their own creations under the guidance of the program’s mentors, many of whom are EDW alumni themselves.
This year’s final designs, which were showcased in a final presentation for the kids and their parents on Aug. 3, included an automated river water monitoring platform; an improved ship dry dock; an interactive light game; a monowheel unicycle; a bionic exoskeleton; and what can best be described as a cross between a Segway and a Nimbus 2000 broomstick from Harry Potter (but with cup holders).
Many of the kids seemed shy at first to talk about their projects, but Edgerton Center instructor Chris Mayer gently urged them on.
“Why don’t you bring that over to the audience, so they could have a closer look?” he said. Invariably, the kids’ close-up demonstrations of their work elicited amazed gasps and nods from the crowd.
For some like Luo Yan, a senior high school student from the Shanghai Foreign Language School, the workshop was also a hands-on history lesson. He and his team researched the design of Boston’s historic Charlestown Navy Yard and built a scale mock-up showcasing their proposed improvements to its long-defunct dry dock.
“This has a lot of stories behind it,” he said, pointing to the miniature replicas of the yard’s buildings, which were built in 1833. “I just want to see it working again.”
Thirteen-year-old Mohan Hathi of Cambridge Rindge and Latin School said Team Exoskeleton’s big idea was to build an assistive system that could help with repetitive chores, like lifting heavy objects on an assembly line. But with barely a month to build a working prototype, they ended up having to decide whether to build a bionic hand or arm.
“We decided, why not do both? And if it happens then it happens,” he said. “But it ended up working, and I’m really happy.”
Team QUICK (short for aQuatic Underwater Information Collecting Kit) built a submersible sensor platform that could be used for environmental monitoring in the Charles River and other bodies of water. But barely had they presented their final design when the team was already considering how it could be improved — better battery life, perhaps, or more robust sensors.
True to its name, Team LIT (Light Interactive Technology) designed an Arduino-controlled LED wall display, and even came up with a fairy-catching game to go along with it. In the game, a light "fairy" would flit about, and a controller box off to the side allowed players to light up parts of the wall to block its path.
Meanwhile, teams Monowheel and Broomba showcased their unusual transport designs. The former was a one-wheeled single-track vehicle and the latter a self-balancing witch’s broomstick on wheels. More whimsical than practical, they nevertheless offered an interesting and fun way to get around.
Though not everyone was able to get their creations off the ground, Edgerton Center instructor Ed Moriarty ’76 said, the experience is invaluable in itself. Moriarty has been with the workshop from the very beginning and has served as both mentor and friend to all its past and present participants.
“We did not say that you have to succeed in building your project. We said you have to care about your project,” he explained. “We did not set this up as an instructional thing. This is, ‘Hey, what do you want to build?’ ‘Hey, let’s go try it!’”
“This isn’t about teaching,” he added. “This is about empowering students to get together and do things.”
That’s something that Moriarty takes to heart and has been sharing with high school students — or anyone who happens to drop by the Edgerton Center on a lazy Saturday afternoon — for years now. If you have a big idea, he believes you should always chase it down the rabbit hole, because no matter where you end up, it’ll always be an adventure.
MIT researchers have developed novel photography optics that capture images based on the timing of reflecting light inside the optics, instead of the traditional approach that relies on the arrangement of optical components. These new principles, the researchers say, open doors to new capabilities for time- or depth-sensitive cameras, which are not possible with conventional photography optics.
Specifically, the researchers designed new optics for an ultrafast sensor called a streak camera that resolves images from ultrashort pulses of light. Streak cameras and other ultrafast cameras have been used to make a trillion-frame-per-second video, scan through closed books, and provide a depth map of a 3-D scene, among other applications. Such cameras have relied on conventional optics, which have various design constraints. For example, a lens with a given focal length, measured in millimeters or centimeters, has to sit at a distance from an imaging sensor equal to or greater than that focal length to capture an image. In practice, this means such lens systems must be physically long.
In a paper published in this week’s Nature Photonics, MIT Media Lab researchers describe a technique that makes a light signal reflect back and forth off carefully positioned mirrors inside the lens system. A fast imaging sensor captures a separate image at each reflection time. The result is a sequence of images — each corresponding to a different point in time, and to a different distance from the lens. Each image can be accessed at its specific time. The researchers have coined this technique “time-folded optics.”
“When you have a fast sensor camera, to resolve light passing through optics, you can trade time for space,” says Barmak Heshmat, first author on the paper. “That’s the core concept of time folding. … You look at the optic at the right time, and that time is equal to looking at it in the right distance. You can then arrange optics in new ways that have capabilities that were not possible before.”
The new optics architecture includes a set of semireflective parallel mirrors that reduce, or “fold,” the focal length every time the light reflects between the mirrors. By placing the set of mirrors between the lens and sensor, the researchers condensed the distance of optics arrangement by an order of magnitude while still capturing an image of the scene.
In their study, the researchers demonstrate three uses for time-folded optics for ultrafast cameras and other depth-sensitive imaging devices. These cameras, also called “time-of-flight” cameras, measure the time that it takes for a pulse of light to reflect off a scene and return to a sensor, to estimate the depth of the 3-D scene.
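The time-of-flight principle described above can be sketched in a few lines: a pulse travels out to the scene and back, so depth is half the round-trip time multiplied by the speed of light. This is a toy illustration of the general principle, not the researchers' code.

```python
# Toy time-of-flight depth estimate: a pulse travels to the scene
# and back, so the one-way depth is half the round-trip distance.

C = 299_792_458.0  # speed of light, m/s

def depth_from_round_trip(t_seconds):
    """Estimate scene depth from a measured round-trip time."""
    return C * t_seconds / 2.0

# A pulse that returns after 10 nanoseconds came from ~1.5 m away.
print(depth_from_round_trip(10e-9))  # ~1.499 m
```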
Co-authors on the paper are Matthew Tancik, a graduate student in the MIT Computer Science and Artificial Intelligence Laboratory; Guy Satat, a PhD student in the Camera Culture Group at the Media Lab; and Ramesh Raskar, an associate professor of media arts and sciences and director of the Camera Culture Group.
Folding the optical path into time
The researchers’ system consists of a component that projects a femtosecond (quadrillionth of a second) laser pulse into a scene to illuminate target objects. Traditional photography optics change the shape of the light signal as it travels through the curved glass elements. This shape change creates an image on the sensor. But, with the researchers’ optics, instead of heading right to the sensor, the signal first bounces back and forth between mirrors precisely arranged to trap and reflect light. Each one of these reflections is called a “round trip.” At each round trip, some light is captured by the sensor, which is programmed to image at a specific time interval — for example, a 1-nanosecond snapshot every 30 nanoseconds.
A key innovation is that each round trip of light moves the focal point — where a sensor is positioned to capture an image — closer to the lens, allowing the optics to be drastically condensed. Consider a streak camera that needs the long focal length of a traditional lens. With time-folded optics, the first round trip pulls the focal point closer to the lens by about twice the length of the mirror cavity, and each subsequent round trip brings it closer still. Depending on the number of round trips, the sensor can then be placed very near the lens.
By placing the sensor at a precise focal point, determined by total round trips, the camera can capture a sharp final image, as well as different stages of the light signal, each coded at a different time, as the signal changes shape to produce the image. (The first few shots will be blurry, but after several round trips the target object will come into focus.)
In their paper, the researchers demonstrate this by imaging a femtosecond light pulse through a mask engraved with “MIT,” set 53 centimeters away from the lens aperture. To capture the image, the traditional 20-centimeter focal length lens would have to sit around 32 centimeters away from the sensor. The time-folded optics, however, pulled the image into focus after five round trips, with only a 3.1-centimeter lens-sensor distance.
This could be useful, Heshmat says, in designing more compact telescope lenses that capture, say, ultrafast signals from space, or for designing smaller and lighter lenses for satellites to image the surface of the ground.
Multizoom and multicolor
The researchers next imaged two patterns spaced about 50 centimeters apart from each other, but each within line of sight of the camera. An “X” pattern was 55 centimeters from the lens, and a “II” pattern was 4 centimeters from the lens. By precisely rearranging the optics — in part, by placing the lens in between the two mirrors — they shaped the light in a way that each round trip created a new magnification in a single image acquisition. In that way, it’s as if the camera zooms in with each round trip. When they shot the laser into the scene, the result was two separate, focused images, created in one shot — the X pattern captured on the first round trip, and the II pattern captured on the second round trip.
The researchers then demonstrated an ultrafast multispectral (or multicolor) camera. They designed two color-reflecting mirrors and a broadband mirror — one tuned to reflect one color, set closer to the lens, and one tuned to reflect a second color, set farther back from the lens. They imaged a mask with an “A” and “B,” with the A illuminated by the second color and the B by the first, both for a few tenths of a picosecond.
When the light traveled into the camera, wavelengths of the first color immediately reflected back and forth in the first cavity, and the time was clocked by the sensor. Wavelengths of the second color, however, passed through the first cavity, into the second, slightly delaying their time to the sensor. Because the researchers knew which wavelength would hit the sensor at which time, they then overlaid the respective colors onto the image — the first wavelength was the first color, and the second was the second color. This could be used in depth-sensing cameras, which currently only record infrared, Heshmat says.
One key feature of the paper, Heshmat says, is it opens doors for many different optics designs by tweaking the cavity spacing, or by using different types of cavities, sensors, and lenses. “The core message is that when you have a camera that is fast, or has a depth sensor, you don’t need to design optics the way you did for old cameras. You can do much more with the optics by looking at them at the right time,” Heshmat says.
This work “exploits the time dimension to achieve new functionalities in ultrafast cameras that utilize pulsed laser illumination. This opens up a new way to design imaging systems,” says Bahram Jalali, director of the Photonics Laboratory and a professor of electrical and computer engineering at the University of California at Los Angeles. “Ultrafast imaging makes it possible to see through diffusive media, such as tissue, and this work holds promise for improving medical imaging, in particular for intraoperative microscopes.”
Neutron stars are the smallest, densest stars in the universe, born out of the gravitational collapse of extremely massive stars. True to their name, neutron stars are composed almost entirely of neutrons — neutral subatomic particles that have been compressed into a small, incredibly dense celestial package.
A new study in Nature, co-led by MIT researchers, suggests that some properties of neutron stars may be influenced not only by their multitude of densely packed neutrons, but also by a substantially smaller fraction of protons — positively charged particles that make up just 5 percent of a neutron star.
Instead of gazing at the stars, the researchers came to their conclusion by analyzing the microscopic nuclei of atoms on Earth.
The nucleus of an atom is packed with protons and neutrons, though not quite as densely as in neutron stars. Occasionally, if they are close enough in distance, a proton and a neutron will pair up and streak through an atom’s nucleus with unusually high energy. Such “short-range correlations,” as they are known, can contribute significantly to the energy balance and overall properties of a given atomic nucleus.
The researchers looked for signs of proton and neutron pairs in atoms of carbon, aluminum, iron, and lead, each with a progressively higher ratio of neutrons to protons. They found that, as the relative number of neutrons in an atom increased, so did the probability that a proton would form an energetic pair. The likelihood that a neutron would pair up, however, stayed about the same. This trend suggests that, in objects with high densities of neutrons, the minority protons carry a disproportionately large part of the average energy.
“We think that when you have a neutron-rich nucleus, on average, the protons move faster than the neutrons, so in some sense, protons carry the action,” says study co-author Or Hen, assistant professor of physics at MIT. “We can only imagine what might happen in even more neutron-dense objects like neutron stars. Even though protons are the minority in the star, we think the minority rules. Protons seem to be very active, and we think they might determine several properties of the star.”
Digging through data
Hen and his colleagues based their study on data collected by CLAS — the CEBAF (Continuous Electron Beam Accelerator Facility) Large Acceptance Spectrometer, a particle accelerator and detector based at Jefferson Laboratory in Virginia. CLAS, which operated from 1998 to 2012, was designed to detect and record the multiple particles that are emitted when beams of electrons impinge on atomic targets.
“Having this property of a detector that sees everything and also keeps everything for offline analysis is extremely rare,” Hen says. “It even has kept what people considered ‘noise,’ and we’re now learning that one person’s noise is another person’s signal.”
The team chose to mine CLAS’s archived data for signs of short-range correlations — interactions that the detector was not necessarily meant to produce, but that it captured nonetheless.
“People were using the detector to look at specific interactions, but meanwhile, it also measured in parallel a bunch of other reactions that took place,” says collaborator Larry Weinstein, a professor of physics at Old Dominion University. “So we thought, ‘Let’s dig into this data and see if there’s anything interesting there.’ We want to squeeze as much science as we can out of experiments that have already run.”
A full dance card
The team chose to mine CLAS data collected in 2004, during an experiment in which the detector aimed beams of electrons at carbon, aluminum, iron, and lead atoms, with the goal of observing how particles produced in nuclear interactions travel through each atom’s respectively larger volume. Along with their varying sizes, each of the four types of atoms have different ratios of neutrons to protons in their nuclei, with carbon having the fewest neutrons and lead having the most.
The reanalysis of the data was done by graduate student Meytal Duer of Tel Aviv University in collaboration with MIT and Old Dominion University, and was led by Hen. The overall study was conducted by an international consortium called the CLAS Collaboration, made up of 182 members from 42 institutions in nine countries.
The group studied the data for signs of high-energy protons and neutrons — indications that the particles had paired up — and whether the probability of this pairing changed as the ratio of neutrons to protons increased.
“We wanted to start from a symmetric nucleus and see, as we add more neutrons, how things evolve,” Hen says. “We would never get to the symmetries of neutron stars here on Earth, but we could at least see some trend and understand from that, what could be going on in the star.”
In the end, the team observed that as the number of neutrons in an atom’s nucleus increased, the probability of protons having high energies (and having paired up with a neutron) also increased significantly, while the same probability for neutrons remained the same.
“The analogy we like to give is that it’s like going to a dance party,” Hen says, invoking a scenario in which boys who might pair up with girls on the dance floor are vastly outnumbered. “What would happen is, the average boy would … dance a lot more, so even though they were a minority in the party, the boys, like the protons, would be extremely active.”
Hen says this trend of energetic protons in neutron-rich atoms may extend to even more neutron-dense objects, such as neutron stars. The role of protons in these extreme objects may then be more significant than people previously suspected. This revelation, Hen says, may shake up scientists’ understanding of how neutron stars behave. For instance, as protons may carry substantially more energy than previously thought, they may contribute to properties of a neutron star such as its stiffness, its ratio of mass to size, and its process of cooling.
“All these properties then affect how two neutron stars merge together, which we think is one of the main processes in the universe that create nuclei heavier than iron, such as gold,” Hen says. “Now that we know the small fraction of protons in the star are very highly correlated, we will have to rethink how [neutron stars] behave.”
This research was supported, in part, by the U.S. Department of Energy, the National Science Foundation, the Israel Science Foundation, the Chilean Comisión Nacional de Investigación Científica y Tecnológica, the French Centre National de la Recherche Scientifique and Commissariat à l’Énergie Atomique, the French-American Cultural Exchange, the Italian Istituto Nazionale di Fisica Nucleare, the National Research Foundation of Korea, and the UK’s Science and Technology Facilities Council.