MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

The kilo is dead. Long live the kilo!

Thu, 05/16/2019 - 1:02pm

For 130 years, a cylinder made of a platinum-iridium alloy and stored in a suburb of Paris called Saint Cloud has been the official definition of a kilogram, the internationally accepted basic unit of mass. But that will change once and for all on May 20, when for the first time all of the basic units of measurement will be officially defined in terms of  atomic properties and fundamental physics constants, rather than specific, human-made objects.

The other physical objects on which standards were based, such as the standard meter bar, were replaced years ago, but the kilogram — generally known as the kilo for short — turned out to be a harder unit to define in absolute terms. In the meantime, physicists and engineers have been frustrated by the inevitable imprecision of a unit based on a single physical object.

Despite the greatest of precautions, every time the standard kilo was handled — for example, to compare it with a copy that could then be used to calibrate instruments — it would shed some atoms, and its mass would change slightly. Over its lifetime, that standard kilo is estimated to have lost about 50 micrograms. A better way was needed.

Now, instead of a particular lump of metal in a single location, a kilo is to be defined by fixing the numerical value of a fundamental constant of nature known as the Planck constant. This constant relates the energy of a photon to its frequency, and is referred to by the letter h. It is now defined as 6.62607015 times 10⁻³⁴ kilograms times square meters per second, thereby defining the kilogram in terms of the second and the meter. Since the second and meter are already defined completely in terms of physical constants, the kilogram is now also defined only in terms of fundamental physical constants.
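To see why fixing h pins down the kilogram once the second and the meter are set, it helps to track the units: h carries units of kilograms times square meters per second, so rearranging gives the kilogram in terms of h, the second, and the meter. Below is a minimal illustrative sketch of that bookkeeping (an illustration only, not metrology code), writing each quantity as a tuple of base-unit exponents:

```python
# Track SI base-unit exponents as (kg, m, s) tuples to show that fixing the
# numerical value of h, together with the second and the meter, fixes the kilogram.

def multiply(a, b):
    """Multiply two quantities expressed as unit-exponent tuples."""
    return tuple(x + y for x, y in zip(a, b))

def invert(a):
    """Take the reciprocal of a quantity expressed as a unit-exponent tuple."""
    return tuple(-x for x in a)

KG = (1, 0, 0)   # kilogram
M  = (0, 1, 0)   # meter
S  = (0, 0, 1)   # second

# The Planck constant h carries units of kg * m^2 / s.
H_UNITS = multiply(KG, multiply(M, multiply(M, invert(S))))
print(H_UNITS)          # (1, 2, -1)

# Rearranging: kg = h * s / m^2. Once h's numerical value, the second, and the
# meter are fixed, the kilogram is therefore fixed as well.
kg_from_h = multiply(H_UNITS, multiply(S, invert(multiply(M, M))))
print(kg_from_h == KG)  # True
```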

Some may find this new definition complicated and difficult to understand, but Wolfgang Ketterle, a Nobel Prize winner and the John D. MacArthur Professor of Physics at MIT, doesn’t see it that way. “Conceptually, the definition is very simple,” he says.

Ketterle notes that the new definition of a kilogram corresponds to the mass of an exact number of particles — a very large number of particles. According to his calculations, it is 1.4755214 times 10⁴⁰ photons (particles of light) of a particular wavelength, which is that of the microwave radiation from the cesium atoms used in atomic clocks.
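That figure can be checked against the exact numerical values that now anchor the SI units. The short sketch below is a consistency check, not how metrologists actually realize the kilogram; it assumes the photons in question carry the energy of the 9,192,631,770 Hz cesium transition that defines the second, converts one photon's energy to mass via E = mc², and counts how many such photons add up to one kilogram:

```python
# Back-of-the-envelope check of the photon-count figure quoted above,
# using the exact defined values of the SI constants.
h    = 6.62607015e-34     # Planck constant, J*s (exact by definition)
c    = 299_792_458.0      # speed of light, m/s (exact by definition)
f_cs = 9_192_631_770.0    # cesium-133 hyperfine frequency, Hz (defines the second)

photon_energy = h * f_cs              # E = h*f, energy of one such photon (joules)
photon_mass   = photon_energy / c**2  # mass equivalent via E = m*c^2 (kilograms)
n_photons     = 1.0 / photon_mass     # photons whose combined mass equivalent is 1 kg

print(f"{n_photons:.7e}")             # ~1.4755214e+40, matching the number above
```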

No, that’s not exactly something your butcher can place on a scale to measure out a kilo of ground beef, but it is something that scientists and engineers everywhere in the world — and even aliens on other worlds — could match precisely, without having to carry a scale to Paris to check it.

Ketterle sees this important shift in measurement standards as a teachable moment, an opportunity to explain some basic principles to a wide audience. “Ideally, every high school teacher would tell his or her science class about this historic change,” he suggests.

To mark the occasion of the official change, which takes place on World Metrology Day, May 20, Ketterle will deliver a talk on the new standards at 4 p.m. that day in MIT’s Huntington Hall, room 10-250, where he will explain both the concepts behind the new definition of the kilogram and the techniques for its measurement.

Explaining how the other basic units have been defined through basic physical constants is a bit more straightforward than it is for the kilo: The second, for example, is defined as a specific number of vibrations of an atom of cesium. The meter, no longer a metal bar in Saint Cloud, is now defined as the distance that light travels (in a vacuum) in a specific interval of time, namely 1/299,792,458 of a second.

Defining the kilo through the mass of photons has to address the fact that photons are constantly whizzing around at the speed of light — after all, they are light — so getting them to sit still on a balance scale is not possible. Instead, they can be trapped between a pair of mirrors, which form an “optical cavity” that keeps them confined. Then, that cavity and its trapped photons can be placed on a balance and measured. The difference between an empty cavity and an identical one full of photons thus provides the mass of the photons themselves. So that’s the concept behind measuring a kilo according to the new definition. 

However, collecting 10⁴⁰ photons — that’s 1 with 40 zeros after it — is not practical, so instead measurements are made of a much smaller number, and then scaled up step by step. “How do you count to 10⁴⁰? Well, you can’t,” Ketterle says. “However, you can do it by using multiple steps.”

He explains this process by analogy to money: “If you win a million dollars, and it is paid in pennies, you don’t want to count pennies. You will first exchange the pennies into dollar bills, and then the dollar bills into 100-dollar bills, and then you count them.”

That’s essentially the principle used for measuring mass, he says. “In metrology, something analogous is done by comparing the atomic clock frequency of the cesium atoms to a much higher atomic frequency. Then you use this frequency to measure the mass of the electron or of a single atom, and only then you start counting,” he says.

In practice, there are currently two known methods for measuring such masses with great precision. These are known as the Kibble balance and the single-crystal silicon sphere. Both are techniques that laboratories around the world can now use to provide a precise standard for mass and measurements of weights, without ever again having to correlate their measurements with a specific physical object at some central repository.
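The silicon-sphere approach turns mass measurement into atom counting: because a single crystal of silicon is almost perfectly regular, the number of atoms in a polished sphere can be computed from its volume and lattice spacing, and its mass then follows from the mass of one atom. The rough sketch below, using approximate textbook values for silicon-28 (the real measurements are made on isotopically enriched spheres characterized to far higher precision), shows the scale of the numbers involved:

```python
import math

# Rough sketch of the atom-counting arithmetic behind the silicon-sphere method,
# using approximate textbook values rather than metrology-grade data.
a              = 5.431e-10               # silicon lattice constant, m (~0.5431 nm)
atoms_per_cell = 8                       # diamond-cubic silicon: 8 atoms per unit cell
m_si28         = 27.9769 * 1.66054e-27   # mass of one Si-28 atom, kg (atomic mass * u)

atom_density = atoms_per_cell / a**3     # atoms per cubic meter of crystal
density      = atom_density * m_si28     # ~2.32e3 kg/m^3, close to silicon's bulk density

n_atoms_per_kg = 1.0 / m_si28            # ~2.15e25 atoms in a 1-kilogram sphere
volume   = 1.0 / density                 # volume of a 1-kilogram sphere, m^3
diameter = 2 * (3 * volume / (4 * math.pi)) ** (1 / 3)

print(f"{n_atoms_per_kg:.3e} atoms, sphere diameter ~{diameter * 100:.1f} cm")
# ~2.153e+25 atoms; diameter roughly 9.4 cm
```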

What’s more, now that these units are defined in absolute terms, as soon as better measurement techniques are developed, the accuracy of these measurements will improve accordingly, without the need to revisit the underlying definitions.

The new definitions have great power, Ketterle says, because “every new method to count photons or atoms or measure frequencies will further improve the measurements of mass, since mass is no longer connected to an imprecise, man-made artifact.”

Virtual reality game simulates experiences with race

Thu, 05/16/2019 - 12:20pm

Video games that use virtual reality to create immersive experiences have become increasingly popular for entertainment and for research. However, the representation of race in these simulations is often shallow — and fails to go beyond physical appearance attributes like skin color. 

For a more lived, embodied experience in the virtual world, MIT researchers have developed a new computational model that captures how individuals might have been taught to think about race in their upbringing. The new model of racial and ethnic socialization, presented at the AAAI 2019 Spring Symposium, has the potential to not only enhance video game simulations, but also to facilitate training for teachers and students who might encounter racial issues in the classroom.

“As video game developers, we have the ability within virtual worlds to challenge the biased ideologies that exist in the physical world, rather than continue replicating them,” says Danielle Olson, a PhD student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, whose dissertation project includes the work reported at the symposium. “My hope is that this work can be a catalyst for dialogue and reflection by teachers, parents, and students in better understanding the devastating social-emotional, academic, and health impacts of racialized encounters and race-based traumatic stress.”

“People are socialized to think about race in a variety of ways — some parents teach their children to ignore race entirely, while others promote an alertness to racial discrimination or cultural pride,” says D. Fox Harrell, professor of digital media and of artificial intelligence and director of the MIT Center for Advanced Virtuality, where he designs virtual technologies to stimulate social change. “The system we’ve developed captures this socialization, and we hope that it may become an effective tool for training people to be thinking more about racial issues, perhaps for teachers and students to minimize discrimination in the classroom.”

Olson and Harrell embedded their new model into a virtual reality software prototype, "Passage Home VR," and conducted user testing to understand the game’s effectiveness. 

"Passage Home VR" serves up an immersive story, grounded in social science work conducted in the physical world on how parents socialize their children to think about race and ethnicity, both verbally and nonverbally, and the impact on how individuals perceive and cope with racial stressors. 

In the game, the user assumes the virtual identity of an African American girl whose high school teacher has accused her of plagiarizing an essay when, in fact, the character is a passionate, high-achieving English student who took the assignment very seriously and wrote the essay herself. 

As users navigate the discriminatory encounter with the teacher, the ways in which they respond to the teacher’s actions — with different body language, verbal responses and more — influence the outcome and feedback presented at the end of the game. 

Overall, the results of the study suggest that the experiences people have in their lives with how they have been socialized to think about the role of race and ethnicity in society — their racial and ethnic socialization — influence their behavior in the game. 

Of the 17 participants in the study who tested out the game, most were identified as “colorblind” by the game, which was also confirmed through semi-structured verbal interviews conducted following the game. Colorblind users were also less likely to explicitly mention race in their thematic analyses of the story in the game. A smaller number of users displayed in-game behavior that identified them as having other socialization strategies, such as “alertness to discrimination” or “preparation for bias.” 

“The game choices were aligned with their real-world socialization of these issues,” Harrell says. 

This feedback for users may be a powerful training tool — serving as an assessment of how prepared people are to think about and respond to racial issues.

Harrell added that his lab is now preparing to deploy and study the efficacy of "Passage Home VR" as a professional development tool for teachers. 

“Learning with virtual reality can only be effective if we present robust simulations that capture experiences as close to the real world as possible,” Harrell said. “Our hope is that this work can help developers to make their simulations much richer, unlocking the power to address social issues.”

MIT Policy Lab launches MITx course on policy outreach

Thu, 05/16/2019 - 12:00pm

The MIT Policy Lab at the Center for International Studies (PL@CIS) recently launched a new MITx course entitled “Tools for Academic Engagement in Public Policy.” This short course on the edX platform provides a clear, concise, high quality resource for scientists and engineers who are seeking to inform the development of public policy with their research. By providing a basic overview of how governing bodies work, how policy is made, and specific strategies for impacting this process, the PL@CIS hopes to significantly reduce the amount of time it takes for researchers to begin engaging with policymakers and increase their effectiveness at policy outreach.  

The content of the course is informed by over four years of PL@CIS (formerly the International Policy Lab) experience working with MIT faculty to develop strategies for engaging with policymakers. The PL@CIS was created to ensure that public policies are informed by the best available research and that scholars understand the potential policy impact of their own work. This online tool seeks to take the lessons learned by the PL@CIS and make them available to the broader research community.

“MIT generates a lot of research with important implications for public policy that unfortunately doesn't always find its way into policy circles,” said faculty director Chappell Lawson, associate professor of political science. “Many faculty members here want to have an impact on policy but don't feel familiar enough with how the process works to do so efficiently. Creating an online educational tool to help connect the academic and policy communities is another way MIT can fulfill its mission of helping to solve the world's great challenges.”

This short course will provide an essential introduction to the policymaking process through the lens of the U.S. federal government, while providing specific steps researchers can take to engage policy stakeholders and articulate the policy implications of their work. It also includes community discussion forums to receive peer feedback on engagement strategies and to contribute to the online community of scientists interested in informing public policy.

“Academic training rarely covers the importance of engaging with policymakers or provides the tools necessary to do so effectively,” said Dan Pomeroy, PL@CIS managing director and senior policy advisor. “When I decided to transition to work in public policy after receiving a PhD in physics, I struggled to understand how to apply my skill set to this new field. The intent of this tool is to provide a resource for both people within academia wanting to engage with policymakers as well as scientists and engineers interested in pursuing a career in public policy.”

The mission of the PL@CIS is to develop and enhance connections between MIT research and public policy. The PL@CIS accomplishes this mission by helping faculty define realistic policy goals and develop effective outreach strategies based on these goals and the time the faculty member wishes to devote.

The PL@CIS then provides mentorship, staff assistance, and training to help faculty conduct outreach efficiently and effectively. In addition, the PL@CIS provides modest grants for MIT faculty members to translate their scholarship for policy audiences and to cover the costs of engaging with the policy stakeholders. All of these efforts are designed to maximize the impact of faculty members' policy engagements while minimizing the expenditure of faculty time.

This course was produced in partnership with Meghan Perdue, a School of Humanities, Arts, and Social Sciences digital learning fellow, and with the support of MIT’s Office of Open Learning. It was also sponsored, in part, by Harvard Medical School's Scientific Citizenship Initiative, which works to make science more socially responsive and responsible by empowering scientists to collaboratively engage with and lead in their communities and society.

Norman Phillips, former meteorology department head, dies at 95

Thu, 05/16/2019 - 11:30am

Norman A. Phillips, head of the former Department of Meteorology during the 1970s, died on March 15 at the age of 95. His work in atmospheric science showing monthly and seasonal tropospheric patterns led to the creation of the first general circulation model, which became the bedrock of weather and climate study today.

Phillips’ introduction to the study of weather, which later blossomed into a passion, started when he was relatively young. Born to Alton Elmer Anton Phillips and Linnea (Larson) Phillips in 1923, Norman Phillips began his life in Chicago, Illinois. He entered the University of Chicago in 1940 to pursue chemistry. However, when World War II began, he was inspired by the work of the University of Chicago’s Carl-Gustaf Rossby — now known as the “father of meteorology,” and who helped to establish the Department of Meteorology at MIT — to take up a field he had never before encountered, enlisting in the weather-officer training program Rossby had established with the U.S. Army Air Corps. Through the program, Phillips received training in meteorological computation in Mississippi, at the University of Michigan, and in Illinois.

Deployed to the Azores, he gathered atmospheric data and created daily forecasts for the Allied troops despite harsh weather and communications difficulties. Working alongside experienced meteorologists, Phillips developed deep insight into, and appreciation for, the work. After the war, he was discharged as a first lieutenant and returned to Chicago to resume his studies, now with a focus on meteorology. Phillips earned bachelor’s, master’s, and doctoral degrees in 1947, 1948, and 1951, respectively. At this point, the field was just getting off the ground.

Phillips’ first major contribution to meteorology and the creation of reliable forecasting came during his PhD thesis work. Around the time the first computers were coming online, in 1950, the first numerical forecast was generated, developed to better understand how weather systems form and intensify. It treated the atmosphere as a single layer, however, so meteorologists like Rossby turned to a “two-level model” that better captured the dynamics of the atmosphere. Phillips combined this newer model with work on baroclinic instability by Jule Charney, another giant in the field who later worked alongside Phillips at MIT and succeeded him as department head. The combination allowed simulated atmospheric waves to grow numerically in ways that resembled those in the real world.

Shortly before finishing his degree, Phillips joined the Electronic Computer Project at the Institute for Advanced Study in Princeton, New Jersey, alongside Charney and several other leading meteorologists with whom he would work over his career. Expanding on these models, Phillips applied his work to the same cyclonic weather system he had used for his thesis and made the first computer simulation reproducing it. “It was my understanding that it was the success with this storm that convinced the Weather Bureau, the Air Force, and the Navy to set up the original version of the NMC [National Meteorological Center] in 1954. This was the Joint Numerical Weather Prediction Unit…” said Phillips in a seminar at MIT in 1988.

After a brief stint in Stockholm to help Rossby set up a weather model there, Phillips spent time in Oslo studying the troposphere and the different cells of atmospheric circulation capable of transporting heat. Back in Princeton, Phillips applied this to his general circulation model and was able to construct a forecast, which would break down after about a month. He also showed that fronts form as a result of cyclogenesis, not the other way around as previously thought. This significant achievement became the first general circulation model of the climate.

Around this time, MIT was building a powerhouse of meteorological and oceanographic experts; in the summer of 1956, then-department head Henry Houghton recruited Jule Charney and Norman Phillips. At MIT, Phillips held the titles of research associate, associate professor, and professor. He led the Department of Meteorology — a precursor to today’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) — for four years, beginning in 1970. Phillips left an indelible mark on MIT and the broader scientific community. He served on numerous national and international committees. As co-editor of the Journal of the Atmospheric Sciences, he took it upon himself to ensure publications of the highest quality.

In July 1974, Phillips left MIT for the National Weather Service's National Meteorological Center to pursue numerical weather prediction and data assimilation. He stayed there until retiring in 1988; however, possessing an agile mind, he continued to publish well into his golden years.

Phillips’ research dramatically changed the way we think about our atmosphere, and for these contributions to the field of meteorology he was recognized with numerous honors. In 2003, Phillips, along with Joseph Smagorinsky, received the Benjamin Franklin Medal in Earth Science from the Franklin Institute “for their major contributions to the prediction of weather and climate using numerical methods.”

Their seminal and pioneering studies led to the first computer models of weather and climate, as well as to an understanding of the general circulation of the atmosphere, including the transports of heat and moisture that determine the Earth's climate. In addition, Smagorinsky played a leading role in establishing the current global observational network for the atmosphere, and Phillips' leadership fostered the development of effective methods for the use of observations in data assimilation systems.

The American Meteorological Society also selected him for the highly prestigious distinction of honorary membership. From the same society, he received the Meisinger Award, the Editor’s Award, the Carl-Gustaf Rossby Award (its highest honor), and the Award for Outstanding Contribution to the Advance of Applied Meteorology, and was appointed a distinguished lecturer. Among his other honors, Phillips was presented with the Napier Shaw Prize, elected to the National Academy of Sciences, and delivered the sixth World Meteorological Organization lecture.

Phillips was predeceased by his beloved wife Martha (Nissen) Phillips, whom he married in 1945; his daughter Ruth Walsh; and his sister Alice (Phillips) Westphal. Phillips is survived by daughters Janet Grigsby and Ellen Chasse. He is also survived by grandsons Stephen Walsh, Matthew Grigsby, Christopher Grigsby, Derek Chasse, and Keith Chasse, as well as great-grandchildren Ryan and Riley Walsh and Morgan and Travis Chasse.

A memorial service will be held on June 29 at 11 a.m. at the Rivet Funeral Home, 425 Daniel Webster Highway in Merrimack, New Hampshire.

MIT and KTH will collaborate on urban planning and development in Stockholm

Thu, 05/16/2019 - 10:45am

MIT and KTH Royal Institute of Technology, Sweden’s leading technological and engineering university, have announced a research collaboration focused on urban planning and development in Stockholm, Sweden.

The KTH-MIT Senseable Stockholm Lab will use artificial intelligence, big data, and new sensor technologies to help the city evolve into a more livable and sustainable metropolis. The City of Stockholm is part of the collaboration, which will commence work this spring and is planned to span five years.

The announcement was made during the recent 2019 Forum on Future Cities at MIT, a conference produced in association with the World Economic Forum’s Council on Cities and Urbanization.

The initiative will be led at MIT by Carlo Ratti, professor of the practice in MIT’s Department of Urban Studies and Planning and director of the MIT Senseable City Lab.

“We want to use Stockholm as a test bed to explore what it means to be a ‘smart’ city, and to better understand how new ways of analyzing data can create an urban experience that helps people, institutions, nature, and infrastructure better connect,” says Ratti.

As part of the collaboration, researchers from KTH and MIT will work together to gather and analyze vast quantities of data tied to topics such as transport, mobility, energy, and water supply. They will give particular focus to the challenges that Stockholm faces as it strives to balance technological development, sustainability, and growth. 

“This groundbreaking collaboration will provide both institutions with a rare opportunity to do research on cities that is of global importance,” says MIT Associate Provost Richard Lester, who oversees MIT’s international activities. “We greatly appreciate the opportunity to work with our colleagues at KTH and in the City of Stockholm on developing new insights and new data-based tools that will make cities more sustainable and livable.”       

The collaboration will be organized around the concept of co-creation, with researchers in architecture, urban planning, engineering, and computer science from both universities working in tandem to exchange ideas, determine research priorities, and assess the city’s needs. 

“It is fantastic that we are starting such a broad, systematic, and long-term cooperation,” says Sigbritt Karlsson, president of KTH. “This type of interdisciplinary and international collaboration is very valuable. The collaboration between two leading universities of technology, together with the City of Stockholm, with an extensive amount of data already available, is unique in its concept, and through a well-developed partnership we see that this can lead to new types of cooperation in the future.”

The collaboration between MIT, KTH, and Stockholm offers a unique opportunity for the city — the fastest-growing capital in Europe — to develop and build a smart city. This collaboration will give Stockholm new potential to create a sustainable city for the future, says Stockholm Mayor Anna König Jerlmyr.

The new lab’s first project will focus on urban segregation — that is, how people from varied economic and cultural groups have unequal access to a city and its benefits. Both MIT and KTH have already done extensive research in this area, on which the new collaboration will build. 

The collaboration is formally structured as a consortium between the founding partners KTH, MIT, the city of Stockholm, the Stockholm Chamber of Commerce, and the Newsec Group (a Swedish real estate company). The vision for the future is to expand the collaboration and scale up the research.

The KTH-MIT Senseable Stockholm Lab was formally launched in Sweden in March at a ceremony in Stockholm’s City Hall, with Mayor Jerlmyr, President Karlsson of KTH, and the CEO of the Stockholm Chamber of Commerce in attendance.

A new era in 3-D printing

Thu, 05/16/2019 - 10:35am

In the mid-15th century, a new technology that would change the course of history was invented. Johannes Gutenberg’s printing press, with its movable type, promoted a dissemination of information and ideas that is widely recognized as a major contributing factor in the Renaissance.

Over 500 years later, a new type of printing was invented in the labs of MIT. Emanuel Sachs, professor of mechanical engineering, invented a process known as binder jet printing. In binder jet printing, an inkjet printhead selectively drops a liquid binder material into a powder bed — creating a three-dimensional object layer by layer.

Sachs coined a new name for this process: 3-D printing. “My father was a publisher and my mother was an editor,” explains Sachs. “Growing up, my father would take me to the printing presses where his books were made, which influenced my decision to name the process 3-D printing.”

Sachs’ binder jet printing process was one of several technologies developed in the 1980s and '90s in the field now known as additive manufacturing, a term that has come to describe a wide variety of layer-based production technologies. Over the past three decades, there has been an explosion in additive manufacturing research. These technologies have the potential to transform the way countless products are designed and manufactured.

One of the most immediate applications of 3-D printing has been the rapid prototyping of products. “It takes a long time to prototype using traditional manufacturing methods,” explains Sachs. 3-D printing has transformed this process, enabling rapid iteration and testing during the product development process.

This flexibility has been a game-changer for designers. “You can now create dozens of designs in CAD, input them into a 3-D printer, and in a matter of hours you have all your prototypes,” adds Maria Yang, professor of mechanical engineering and director of MIT’s Ideation Laboratory. “It gives you a level of design exploration that simply wasn’t possible before.”

Throughout MIT’s Department of Mechanical Engineering, many faculty members have been finding new ways to incorporate 3-D printing across a vast array of research areas. Whether it’s printing metal parts for airplanes, printing objects on a nanoscale, or advancing drug discovery by printing complex biomaterial scaffolds, these researchers are testing the limits of 3-D printing technologies in ways that could have lasting impact across industries.

Improving speed, cost, and accuracy

There are several technological hurdles that have prevented additive manufacturing from having an impact on the level of Gutenberg’s printing press. A. John Hart, associate professor of mechanical engineering and director of MIT’s Laboratory for Manufacturing and Productivity, focuses much of his research on addressing those issues.

“One of the most important barriers to making 3-D printing accessible to designers, engineers, and manufacturers across the product life cycle is the speed, cost, and quality of each process,” explains Hart.

His research seeks to overcome these barriers, and to enable the next generation of 3-D printers that can be used in the factories of the future. For this to be accomplished, synergy among machine design, materials processing, and computation is required.

To work toward achieving this synergy, Hart’s research group examined the processes involved in the most well-known style of 3-D printing: extrusion. In extrusion, plastic is melted and squeezed through a nozzle in a printhead.

“We analyzed the process in terms of its fundamental limits — how the polymer could be heated and become molten, how much force is required to push the material through the nozzle, and the speed at which the printhead moves around,” adds Hart.

With these new insights, Hart and his team designed a new printer that operated at speeds 10 times faster than existing printers. A gear that would have taken one to two hours to print could now be ready in five to 10 minutes. This drastic increase in speed is the result of a novel printhead design that Hart hopes will one day be commercialized for both desktop and industrial printers.

While this new technology could improve our ability to print plastics quickly, printing metals requires a different approach. For metals, precise quality control is especially important for industrial use of 3-D printing. Metal 3-D printing has been used to create objects ranging from airplane fuel nozzles to hip implants, yet it is only just beginning to become mainstream. Items made using metal 3-D printing are particularly susceptible to cracks and flaws due to the large thermal gradients inherent in the process.

To solve this problem, Hart is embedding quality control within the printers themselves. “We are building instrumentation and algorithms that monitor the printing process and detect if there are any mistakes — as small as a few micrometers — as the objects are being printed,” Hart explains.

This monitoring is complemented by advanced simulations, including models that can predict how the powder used as the feedstock for printing is distributed and can also identify how to modify the printing process to account for variations.

Hart’s group has been pioneering the use of new materials in 3-D printing. He has developed methods for printing with cellulose, the world’s most abundant polymer, as well as carbon nanotubes, nanomaterials that could be used in flexible electronics and low-cost radio frequency tags.

When it comes to 3-D printing on a nanoscale, Hart’s colleague Nicholas Xuanlai Fang, professor of mechanical engineering, has been pushing the limits of how small these materials can be.

Printing nanomaterials using light

Inspired by the semiconductor and silicon chip industries, Fang has developed a 3-D printing technology that enables printing on a nanoscale. As a PhD student, Fang first got interested in 3-D printing while looking for a more efficient way to make the microsensors and micropumps used for drug delivery.

“Before 3-D printing, you needed expensive facilities to make these microsensors,” explains Fang. “Back then, you’d send design layouts to a silicon manufacturer, then you’d wait four to six months before getting your chip back.” The process was so time-intensive it took one of his labmates four years to get eight small wafers.

As advances in 3-D printing technologies made manufacturing processes for larger products cheaper and more efficient, Fang began to research how these technologies might be used on a much smaller scale.

He turned to a 3-D printing process known as stereolithography. In stereolithography, light is sent through a lens and causes molecules to harden into three-dimensional polymers — a  process known as photopolymerization.

The size of the features that could be printed using stereolithography was limited by the wavelength of the light being sent through the optical lens — the so-called diffraction limit — which is roughly 400 nanometers. Fang and his team were the first researchers to break this limit.

“We essentially took the precision of optical technology and applied it to 3-D printing,” says Fang. The process, known as projection micro-stereolithography, transforms a beam of light into a series of wavy patterns. The wavy patterns are transferred through silver to produce fine lines as small as 40 nm, which is 10 times smaller than the diffraction limit and far thinner than a strand of human hair.

The ability to pattern features this small using 3-D printing holds countless applications. One use for the technology Fang has been researching is the creation of a small foam-like structure that could be used as a substrate for catalytic conversion in automotive engines. This structure could treat greenhouse gases on a molecular level in the moments after an engine starts.

“When you first start your engine, it’s the most problematic for volatile organic components and toxic gases. If we were to heat up this catalytic convertor quickly, we could treat those gases more effectively,” he explains.

Fang has also created a new class of 3-D printed metamaterials using projection micro-stereolithography. These materials are composed of complex structures and geometries. Unlike most solid materials, the metamaterials don’t expand with heat and don’t shrink with cold.

“These metamaterials could be used in circuit boards to prevent overheating or in camera lenses to ensure there is no shrinkage that could cause a lens in a drone or UAV to lose focus,” says Fang.

More recently, Fang has partnered with Linda Griffith, School of Engineering Teaching Innovation Professor of Biological and Mechanical Engineering, to apply projection micro-stereolithography to the field of bioengineering.

Growing human tissue with the help of 3-D printing

Human cells aren’t programmed to grow in a two-dimensional petri dish. While cells taken from a human host might multiply, once they become thick enough they essentially starve to death without a constant supply of blood. This has proved particularly problematic in the field of tissue engineering, where doctors and researchers are interested in growing tissue in a dish to use in organ transplants.

For the cells to grow in a healthy way and organize into tissue in vitro, they need to be placed on a structure or ‘scaffold.’  In the 1990s, Griffith, an expert in tissue engineering and regenerative medicine, turned to a nascent technology to create these scaffolds — 3-D printing.

“I knew that to replicate complex human physiology in vitro, we needed to make microstructures within the scaffolds to carry nutrients to cells and mimic the mechanical stresses present in the actual organ,” explains Griffith.

She co-invented a 3-D printing process to make scaffolds from the same biodegradable material used in sutures. Tiny complex networks of channels with a branching architecture were printed within the structure of these scaffolds. Blood could travel through the channels, allowing cells to grow and eventually start to form tissue. 

Over the past two decades, this process has been used across various fields of medicine, including bone regeneration and growing cartilage in the shape of a human ear. While Griffith and her collaborators originally set out to regenerate a liver, much of their research has focused on how the liver interacts with drugs.

“Once we successfully grew liver tissue, the next step was tackling the challenge of getting useful predictive drug development information from it,” adds Griffith.

To develop more complex scaffolds that provide better predictive information, Griffith collaborated with Fang on applying his nano-3-D printing technologies to tissue engineering. Together, they have built a custom projection micro-stereolithography machine that can print high-resolution scaffolds known as liver mesophysiological systems (LMS). Micro-stereolithography printing allows the scaffolds that make up LMS to have channels as small as 40 microns wide. These small channels enable perfusion of the bioartificial organ at an elevated flow rate, which allows oxygen to diffuse throughout the densely packed cell mass.

“By printing these microstructures in more minute detail, we are getting closer to a system that gives us accurate information about drug development problems like liver inflammation and drug toxicity, in addition to useful data about single-cell cancer metastasis,” says Griffith.

Given the liver’s central role in processing and metabolizing drugs, the ability to mimic its function in a lab has the potential to revolutionize the field of drug discovery.

Griffith’s team is also applying their projection micro-stereolithography technique to create scaffolds for growing induced pluripotent stem cells into human-like brain tissue. “By growing these stem cells in the 3-D printed scaffolds, we are hoping to be able to create the next generation of more mature brain organoids in order to study complex diseases like Alzheimer's,” explains Pierre Sphabmixay, a mechanical engineering PhD candidate in Griffith’s lab.

Partnering with industry

For 3-D printing to make a lasting impact on how products are both designed and manufactured, researchers need to work closely with industry. To help bridge this gap, the MIT Center for Additive and Digital Advanced Production Technologies (APT) was launched in late 2018.

“The idea was to intersect additive manufacturing research, industrial development, and education across disciplines all under the umbrella of MIT,” explains Hart, who founded and serves as director of APT. “We hope that APT will help accelerate the adoption of 3-D printing, and allow us to better focus our research toward true breakthroughs beyond what can be imagined today.”

Since APT launched in November 2018, MIT and the twelve founding company members — which include ArcelorMittal, Autodesk, Bosch, Formlabs, General Motors, and the Volkswagen Group — have met both at a large trade show in Germany and on campus. Most recently, they convened at MIT for a workshop on scalable workforce training for additive manufacturing.

“We’ve created a collaborative nexus for APT’s members to unite and solve common problems that are currently limiting the adoption of 3-D printing — and more broadly, new concepts in digitally-driven production — at a large scale,” adds Haden Quinlan, program manager of APT.  Many also consider Boston the epicenter of 3-D printing innovation and entrepreneurship, thanks in part to several fast-growing local startups founded by MIT faculty and alumni.

Efforts like APT, coupled with the groundbreaking work being done in the sphere of additive manufacturing at MIT, could reshape the relationship between research, design and manufacturing for new products across industries.

Designers could quickly prototype and iterate the design of products. Safer, more accurate metal hinges could be printed for use in airplanes or cars. Metamaterials could be printed to form electronic chips that don’t overheat. Entire organs could be grown from donor cells on 3-D printed scaffolds. While these technologies may not spark the next Renaissance as the printing press did, they offer solutions to some of the biggest problems society faces in the 21st century.

3Q: Hal Abelson on empowering kids through mobile technology

Wed, 05/15/2019 - 11:59pm

Hal Abelson, the Class of 1922 Professor in the Department of Electrical Engineering and Computer Science, has long been dedicated to democratizing access to technology for children. In the 1970s, he directed the first implementation of the educational programming language Logo for the Apple II computer. During a sabbatical at Google in 2007, he launched App Inventor, a web-based, visual-programming environment that allows children to develop applications for smartphones and tablets. The platform was transferred to MIT in 2010, where it now has over 1 million active users a month, who hail from 195 countries.

As new technologies are rapidly developed and introduced, Abelson feels it is crucial to introduce children to computer science through hands-on learning activities so that they have a better understanding of how they can use and create such technologies. MIT News spoke with Abelson about MIT App Inventor and how it helps kids have an impact on people and communities around the world. 

Q: How did you get the idea for App Inventor, and what did you want it to achieve?

A: It’s crucial that we teach children how they can use technology to become informed and empowered citizens. Everyone is reacting to the enormous influence of computing, in particular how mobile technology has changed everyone’s lives. The question is, can people, in particular children, use mobile technology as a source for becoming informed and a source for becoming empowered? Do they see it as something that they can shape? Or is it purely going to be a consumer product that people react to?

I got the idea for App Inventor when I started thinking about how kids really weren’t using desktop computers anymore, and the real empowerment opportunities in the realm of computer science and technology nowadays are with smartphones. I thought to myself, “Why don’t we launch an initiative to make it possible for kids to make original applications for mobile phones?” When we started App Inventor, smartphones were just coming onto the market, and the notion that kids could be building applications for these devices was a little crazy.

Q: What are some of your favorite applications that kids have created using App Inventor?

A: The goal of App Inventor is to enable kids to participate in what I am referring to as computational action, which means building things that can actually have an impact on you, your family, and your country. We have some great examples of how students are using the platform to not only improve their own lives, but also the lives of the people around them.

One of my favorite apps was developed by a group of young women in Dharavi, which is located in India and is one of the largest slums in the world. These young women are creating apps aimed at improving the lives of their community. One of the apps that they created allows families to schedule time at the community water distribution site, reducing conflicts over access.

Another one that I love was created by a group of high-school girls in Moldova. Moldova has a water quality problem, so this group of students built an application that allows people to provide and access information about water quality around the country. For example, if you’re in Moldova and you go to a source of water, you can take out your phone and upload a picture of the water and information about its quality to the platform. This information is entered into a database that is accessible across the entire nation.

Thanks to this app, if you are driving around Moldova and are wondering if there is a good source of water in the area, you can use the application to find a safe source of drinking water. It’s pretty amazing to think about how four high school students have set up a national, geographic database that allows people to access clean water. This would not have been possible eight years ago, but now students are able to create something like this because there is an incredible technological infrastructure that allows people to do all sorts of amazing things.

Another app I love was created by a couple of kids in junior high school who were worried about bullying. It’s a really simple app that you can download with your friends. It works like this: If you are in the cafeteria and you are worried someone is going to come over and bully you, you press a button, which sends a message to your friends and alerts them to the fact that you need someone to come sit with you in the cafeteria because you are worried about being bullied. It’s really simple, but really effective.

Q: The App Inventor team recently participated in an event with the Cambridge Public Schools called “Freshman Technology Experience,” which was aimed at inspiring more — and more diverse — students to explore computer science. Can you talk about why you decided to participate?

A: Currently the App Inventor team is trying to get more involved with kids in the local area. Partly this is aimed at allowing us to conduct some of the initial testing of the new features we are developing for App Inventor. But it’s also about allowing the App Inventor team a greater ability to interact and work with local kids. If you’re involved in this project, part of the reason why is because you are interested in working with kids, so it’s great to be able to go out into the community and see kids using the platform and help them build stuff.

Even though App Inventor right now is focused on empowering kids to create technology for smartphones and tablets, I feel it should be emblematic of all the changes in technology that are available. Just in the past few years, new smart-home technologies like Alexa and Google Home have been developed that are now becoming widely integrated into people’s everyday existence. We have to think about how kids could be empowered around this emerging area of technology and how they could help shape these new technologies.

It’s really important that we take the opportunity to educate kids about how to use technology. It doesn’t matter whether it’s kids or our representatives in Congress, in the long run it’s dangerous to our democracy if you have such powerful tools that people don’t understand. It’s particularly important that kids get this sense that technology is something they can shape. Even if they never go off and become programmers or do anything in the field of computer science, it’s important they understand that technology is something that they could influence or control.

Susan Silbey earns faculty’s prestigious Killian Award

Wed, 05/15/2019 - 3:59pm

Susan Silbey, an MIT sociologist whose pathbreaking work has examined the U.S. legal system as experienced in everyday life, has been named the recipient of the 2019-2020 James R. Killian Jr. Faculty Achievement Award.

The Killian Award is the highest honor the Institute faculty bestows on one of its members, and is granted to one professor per year.

“It is most special to have people recognize the work you have done,” Silbey says. “It’s extraordinarily gratifying and humbling.” She adds: “I take pride that this is for social science.”

Silbey is the Leon and Anne Goldberg Professor of Humanities, Sociology, and Anthropology in the School of Humanities, Arts, and Social Sciences, and professor of behavioral and policy sciences at the Sloan School of Management.

Silbey’s research has long examined the relationship between abstract law and daily life, illuminating the varied ways people conceive of the law, and in turn obey, manipulate, or struggle against it. Her much-lauded 1998 book, “The Common Place of Law,” co-authored with Patricia Ewick, explored these issues in American society broadly. More recently, Silbey has produced original studies about the relationship between law and the everyday practices of science, exploring how laboratories, for instance, vary in their interpretations of environmental, health, and safety regulations while conducting complex and challenging experiments.

“Professor Silbey is a world-renowned sociologist of law, celebrated for her groundbreaking work on legal consciousness and regulatory governance, most recently in scientific contexts,” states Silbey’s award citation, which also notes that “in order to understand how the rules of law operate, we need to see how they are interpreted, defended, negotiated, and resisted by people as they do their jobs and go about their daily lives.”

At MIT, Silbey has also extensively studied gender roles in science and engineering. She has produced numerous empirical studies and papers developing the concept of “professional role confidence,” a highly gendered phenomenon that can explain why talented women may decline to pursue careers in the STEM fields.

“We are delighted to have this opportunity to honor Professor Susan S. Silbey for her insatiable intellectual curiosity, unstoppable productivity, and overwhelmingly generous mentorship and leadership,” the citation adds.

Silbey earned her BA in political science from Brooklyn College, and her MA and PhD in political science from the University of Chicago. But she identifies herself as a sociologist in disciplinary terms, and was a faculty member in Wellesley College’s Department of Sociology from 1974 through 2000. She first joined the MIT faculty in 2000.

Silbey was already quite familiar with MIT even before then; her husband was the late Robert Silbey, former head of MIT’s Department of Chemistry, director of the Center for Materials Science and Engineering, and dean of the School of Science from 2000 to 2007.

“My husband would have been very pleased,” Silbey says about receiving the Killian Award. “He was always very supportive of my research and professional career. He would have been very proud that our colleagues are recognizing my scholarship this way.”

While continuing her research, Silbey has been a highly active faculty citizen within the MIT community. Among many other things, she is currently chair of the MIT faculty and has helped guide the faculty input into the development of the MIT Stephen A. Schwarzman College of Computing. From 2006 to 2014, Silbey was head of the Anthropology Section at MIT, leading its expansion and a round of new faculty hires.

“I have the most marvelous colleagues,” Silbey says. “That has to be understood. … I’ve never met an academic group like this. And they are just superb at what they do. The books they write, and the teaching in this department, is extraordinary.”

Silbey also praises the excellence of the sociologists at MIT Sloan, with whom she has supervised dozens of doctoral students and regularly teaches graduate courses on social theory and research methods for the social sciences.

Silbey has received many previous honors during her career. She was elected as a fellow of the American Academy of Political and Social Science in 2001, received a Guggenheim Foundation Fellowship in 2009, a Russell Sage Foundation Fellowship in 2014, and in 2015 was appointed to the Institute for Advanced Studies in Paris. 

James Livingston, senior lecturer emeritus in materials science and engineering, dies at 88

Wed, 05/15/2019 - 12:20pm

James D. Livingston, MIT senior lecturer emeritus, and his wife, Sherry H. Penney, died last week at their home in Sarasota, Florida, where they spent their winters. Married for 34 years, he was 88 and she was 81; the cause was accidental carbon monoxide poisoning.

Livingston's undergraduate education was in engineering physics at Cornell University, and his doctorate was in applied physics from Harvard University. He joined MIT in 1989, coming from Schenectady, New York, where he had been a physicist at GE Corporate Research and Development. The move to Boston was driven by his wife’s career; she was the chancellor of the University of Massachusetts at Boston and served for a time as the acting president of the UMass system. His work focused on magnetic materials, including metallic superconductors and rare-earth permanent magnets.

At MIT, Livingston remained active in research, collaborating with several colleagues, mentoring graduate and undergraduate students, and adding valuable expertise to MIT’s teaching and research enterprise. He taught in the Department of Materials Science and Engineering (Course 3, DMSE) undergraduate core and served as a first-year advisor, teaching a seminar on magnets. His accomplishments were recognized with awards including membership in the National Academy of Engineering and election to fellow of both ASM International and the American Physical Society, as well as MIT's Best First-Year Advisor Award. He authored many journal articles and books, on a variety of topics: "Electronic Properties of Engineering Materials" was based on his lectures for the DMSE undergrad curriculum; "Driving Force" and "Rising Force" are popular science books on magnetism; and, in a departure from science, he wrote "A Very Dangerous Woman: Martha Wright and Women's Rights" (co-authored with his wife) and "Arsenic and Clam Chowder: Murder in Gilded Age New York," both histories that grew out of family research.

Although Livingston held a part-time position at MIT, he was fully committed to the students and their needs, and to the Institute. He taught many different classes, was a thesis advisor for graduate students, participated in professional education programs, and served as an ambassador for materials science and STEM education. His textbook, "Electronic Properties of Engineering Materials," was praised by students who used it — one even mentioned it on the subject evaluation, saying "the textbook ROCKS."  

Conversation with Livingston was entertaining, stimulating, and educational, studded with humor and anecdotes that helped make his point. Professor Emeritus Sam Allen says, "Both his writing and lecturing style were engaging because of Jim's conversational style and use of easily grasped examples to teach complex concepts." In one case, he told a class that the easiest thing to do was to find a needle in a haystack; "Needles are magnetic, hay isn't. All you need is a magnet," remembers Chris Schuh, head of DMSE.

A memorial service will be held at 11 a.m. May 25 in First Baptist Church in Hingham, Massachusetts. The service will be led by Rev. Kenneth Read-Brown, minister of First Parish, Hingham, known as Old Ship Church, a Unitarian Universalist congregation to which the couple belonged for many years but which is currently unavailable due to repair work. A memorial service in Sarasota will be announced.

New surface treatment could improve refrigeration efficiency

Wed, 05/15/2019 - 10:59am

Unlike water, liquid refrigerants and other fluids that have a low surface tension tend to spread quickly into a sheet when they come into contact with a surface. But for many industrial processes it would be better if the fluids formed droplets, which could roll or fall off the surface and carry heat away with them.

Now, researchers at MIT have made significant progress in promoting droplet formation and shedding in such fluids. This approach could lead to efficiency improvements in many large-scale industrial processes including refrigeration, thus saving energy and reducing greenhouse gas emissions.

The new findings are described in the journal Joule, in a paper by graduate student Karim Khalil, professor of mechanical engineering Kripa Varanasi, professor of chemical engineering and Associate Provost Karen Gleason, and four others.

Over the years, Varanasi and his collaborators have made great progress in improving the efficiency of condensation systems that use water, such as the cooling systems used for fossil-fuel or nuclear power generation. But other kinds of fluids — such as those used in refrigeration systems, liquefaction, waste heat recovery, and distillation plants, or materials such as methane in oil and gas liquefaction plants — often have very low surface tension compared to water, meaning that it is very hard to get them to form droplets on a surface. Instead, they tend to spread out in a sheet, a property known as wetting.

But when these sheets of liquid coat a surface, they provide an insulating layer that inhibits heat transfer, and easy heat transfer is crucial to making these processes work efficiently. “If it forms a film, it becomes a barrier to heat transfer,” Varanasi says. But that heat transfer is enhanced when the liquid quickly forms droplets, which then coalesce and grow and fall away under the force of gravity. Getting low-surface-tension liquids to form droplets and shed them easily has been a serious challenge.

In condensing systems that use water, the overall efficiency of the process can be around 40 percent, but with low-surface-tension fluids, the efficiency can be limited to about 20 percent. Because these processes are so widespread in industry, even a tiny improvement in that efficiency could lead to dramatic savings in fuel, and therefore in greenhouse gas emissions, Varanasi says.

By promoting droplet formation, he says, it’s possible to achieve a four- to eightfold improvement in heat transfer. Because the condensation is just one part of a complex cycle, that translates into an overall efficiency improvement of about 2 percent. That may not sound like much, but in these huge industrial processes even a fraction of a percent improvement is considered a major achievement with great potential impact. “In this field, you’re fighting for tenths of a percent,” Khalil says.

Unlike the surface treatments Varanasi and his team have developed for other kinds of fluids, which rely on a liquid material held in place by a surface texture, in this case they were able to accomplish the fluid-repelling effect using a very thin solid coating — less than a micron thick (one millionth of a meter). That thinness is important, to ensure that the coating itself doesn’t contribute to blocking heat transfer, Khalil explains.

The coating, made of a specially formulated polymer, is deposited on the surface using a process called initiated chemical vapor deposition (iCVD), in which the coating material is vaporized and grafts onto the surface to be treated, such as a metal pipe, to form a thin coating. This process was developed at MIT by Gleason and is now widely used.

The authors optimized the iCVD process by tuning the grafting of coating molecules onto the surface, in order to minimize the pinning of condensing droplets and facilitate their easy shedding. The process could be carried out on location in industrial-scale equipment, and could be retrofitted into existing installations to provide a boost in efficiency. The process is “materials agnostic,” Khalil says, and can be applied on either flat surfaces or tubing made of stainless steel, titanium, or other metals commonly used in condensation heat-transfer processes that involve these low-surface-tension fluids. “Whatever materials are used in your facility's heat exchanger, it tends to be scalable with this process,” he adds.

Video shows the condensation of pentane, a low-surface-tension fluid. On the left, streaking of drops impairs heat transfer, while pentane with the new coating, at right, shows high droplet formation and good heat transfer.

The net result is that on these surfaces, condensing fluids like the hydrocarbons pentane or liquid methane, or alcohols like ethanol, will readily form small droplets that quickly fall off the surface, making room for more to form, and in the process shedding heat from the metal to the droplets that fall away.

One area where such coatings could play a useful role, Varanasi says, is in organic Rankine cycle systems, which are widely used for generating power from waste heat in a variety of industrial processes. “These are inherently inefficient systems,” he says, “but this could make them more efficient.”

The new coating is shown promoting condensation on a titanium surface, a material widely used in industrial heat exchangers.

“This new approach to condensation is significant because it promotes drop formation (rather than film formation) even for low-surface-tension fluids, which significantly improves the heat transfer efficiency,” says Jonathan Boreyko, an assistant professor of mechanical engineering at Virginia Tech, who was not connected to this research. While the iCVD process itself is not new, he says, “showing here that it can be used even for the condensation of low-surface-tension fluids is of significant practical importance, as many real-life phase-change systems do not use water.”

Saying the work is “of very high quality,” Boreyko adds that “simply showing for the first time that a thin, durable, and dry coating can promote the dropwise condensation of low-surface-tension fluids is very important for a wide variety of practical condenser systems.”

The research was supported by the Shell-MIT Energy Initiative partnership. The team included former MIT graduate students Taylor Farnham and Adam Paxson, and former postdocs Dan Soto and Asli Ugur Katmis.

Grad student John Urschel tackles his lifelong balance of math and football in new memoir

Wed, 05/15/2019 - 12:00am

It’s been nearly two years since John Urschel retired from the NFL at the age of 26, trading a career as a professional football player at the height of his game for a chance at a PhD in mathematics at MIT. From the looks of it, he couldn’t be happier.

The former offensive lineman for the Baltimore Ravens is now a full-time graduate student who spends his days in Building 2, poring over academic papers and puzzling over problems in graph theory, machine learning, and numerical analysis.

In his new memoir, “Mind and Matter: A Life in Math and Football,” co-written with his wife, journalist and historian Louisa Thomas, Urschel writes about how he has balanced the messy, physically punishing world of football with the elegant, cerebral field of mathematics.

Urschel presents his life chronologically, through chapters that alternate in focus between math and football, as his focus often did in real life. For instance, he writes about a moment, following an ecstatic win as part of Penn State’s offensive line, when a coach pulled him aside with a message: With a little more work, he had a shot at the NFL.

With that in mind, he writes, “I went home elated. … I left the football building with a new sense of purpose, a mission.” That same night, he opened his laptop and got to work on a paper that he planned to submit with his advisor to a top linear algebra journal. “Suddenly, surprisingly, I had a strange feeling: I felt torn,” he recalls.

For those who see Urschel as a walking contradiction, or praise him as an exceptional outlier, he poses, in his book, a challenge:

“So often, people want to divide the world into two: matter and energy. Wave and particle. Athlete and mathematician. Why can’t something (or someone) be both?”

A refuge in math

Urschel’s mother could tell, before he could even speak in full sentences, that her toddler had a mind for patterns. To occupy the increasingly active youngster, she gave him workbooks filled with puzzles, which he eagerly devoured at the kitchen table. As he got older, she encouraged him further, and often competitively, with games of reasoning and calculation, such as Monopoly and Battleship. And in the grocery store, she let him keep the change if he could calculate the correct amount before the cashier rang it up.

His mother made math a game, and by doing so, lit a lifelong spark. He credits her with recognizing and nurturing his natural interests — something that he hopes to do for his own toddler, Joanna, to whom he dedicates the book.

When he was 5 years old, he saw a picture of his father in full pads, as a linebacker for the University of Alberta — his first exposure to the sport of football. From that moment, Urschel wanted to be like his dad, and he wanted to play football.

And play he did, though he writes that he wasn’t driven by any innate athletic talent.

“The only thing that set me apart from other kids when I played sports was my intensity as a competitor. I couldn’t stand losing — so much so that I would do everything in my power to try to win,” Urschel writes.

This fierce drive earned him a full ride to Penn State University, where he forged a lasting connection with the college and its football team. His seemingly disparate talents in math and football started gaining some media attention, as a bright spot for Penn State in an otherwise dark period. (The team was facing national scrutiny as a consequence of the trial of former coach Jerry Sandusky.) But the more news outlets referred to him as a “student-athlete,” the more the moniker grated against him.

“[The term ‘student-athlete’] is widely considered a joke of sorts in America,” Urschel says. “But it’s something you can actually do. It takes up a great deal of your time, and it’s not easy. But it is possible to be good at sports while tearing it up in academics.”

Urschel proved this in back-to-back years at Penn State, culminating in 2013 with a paper he co-wrote with his advisor, Ludmil Zikatanov, on the spectral bisection of graphs and connectedness, which would later be named the Urschel-Zikatanov theorem. The following year, he was drafted, in the fifth round, by the Baltimore Ravens.

He played his entire professional football career as a guard with the Ravens, in 40 games over two years, 13 of which he started. In 2015, in a full-pads practice at training camp with the team, Urschel was knocked flat with a concussion. Just weeks earlier, he had learned that he had been accepted to MIT, where he hoped to pursue a PhD in applied mathematics, during the NFL offseason.

In the weeks following the concussion, he writes: “I’d reach for a theorem that I knew I knew, and it wouldn’t be there. I would try to visualize patterns, or to stretch or twist shapes — a skill that had always come particularly easy to me — and I would be unable to see the structures or make things move.”

He eventually did regain his facility for math, along with, surprisingly, his need to compete on the field. Despite the possibility of suffering another concussion, he continued to play with the Ravens through 2015. During the off-season, in January 2016, Urschel set foot on the MIT campus to begin work on his PhD.

A quantitative mindset

“It was like stepping into my personal vision of paradise,” Urschel writes of his first time walking through MIT’s math department in Building 2, noting the chalkboards that lined the hallways, where “casual conversations quickly became discussions of open conjectures.” Urschel was no less impressed by MIT’s football team, whose practices he joined each Monday during that first semester.

“These students have so much to do at MIT — it’s a very stressful place,” Urschel says. “And this is Division III football. It’s not high level, and they don’t have packed stands of fans — they’re truly just playing for the love of the game.”

He says he was reluctant to return to pro football that summer, and realized throughout that season that he couldn’t wait for Sundays and the prospect of cracking open a math book and tackling problems with collaborators back at MIT and Penn State.

An article in the New York Times in July 2017 tipped the scales that had, up until then, kept math and football as equal passions for Urschel. The article outlined a brain study of 111 deceased NFL players, showing 110 of those players had signs of CTE, or chronic traumatic encephalopathy, associated with repeated blows to the head. Urschel writes that the study didn’t change his love for football, but it did make him reevaluate his choices.

Two days after reading that article, Urschel announced his retirement from the NFL and packed his bags for a permanent move to MIT.

Since then, he has focused his considerable energy on his  research, as well as teaching. Last spring, he was a teaching assistant for the first time, in 18.03 (Differential Equations).

“I love teaching,” says Urschel, who hopes to be a university math professor and encourages students in class to think creatively, rather than simply memorize the formulas that they’re taught.

“I’m fighting against the idea of blindly applying formulas you just learned, and instead teaching students to use their brains,” Urschel says.

He’s also making time to visit local high schools to talk math, and STEM education in general.

“I’m a visible mathematician,” says Urschel — an understatement to be sure. “I have a responsibility to try to help popularize math, and remove some of its stigma.”

His enthusiasm for the subject is highly effective, judging from the overwhelmingly positive reviews from his 18.03 students. Above all, though, he hopes to convey the importance of a “quantitative mindset.”

“I don’t care so much if a random person on the street knows the quadratic formula,” Urschel says. “But I do care if they’re able to think through different problems, whether involving loans of two different rates, or how much you need to put in your 401k. Being capable of thinking quantitatively — it’s the single most important thing.”
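The kind of everyday problem Urschel mentions lends itself to a few lines of code. The sketch below compares the total interest paid on the same loan at two different rates; the loan amount, term, and rates are made-up values for illustration, not figures from the book.

```python
# Hypothetical example of the quantitative mindset described above: compare
# the total interest paid on the same loan at two different rates.
# The principal, term, and rates are made-up values for illustration.

def total_interest(principal, annual_rate, years, payments_per_year=12):
    """Total interest on a fully amortizing loan with fixed payments."""
    n = years * payments_per_year
    r = annual_rate / payments_per_year              # periodic interest rate
    payment = principal * r / (1 - (1 + r) ** -n)    # standard amortization formula
    return payment * n - principal

for rate in (0.04, 0.06):  # 4 percent vs. 6 percent annual rate
    print(f"{rate:.0%} loan: ${total_interest(30_000, rate, 5):,.0f} in interest")
```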

Leaving room for a little improvisation

Tue, 05/14/2019 - 2:40pm

Senior Tony Zhang says his curiosity about physics was piqued by an unlikely source: a rubber band. 

“When I was little, I would stretch rubber bands across cabinet and drawer handles,” says Zhang. “A rubber band produces a different pitch when you pluck it, depending on the material and depending on the tension. So I wondered if I could make an entire scale.” When he succeeded, Zhang says he wanted to know how it worked. 
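For an idealized stretched string, the effect Zhang noticed follows a standard textbook relation; a real rubber band stretches and thins as it is pulled, so the formula below is only a rough guide, but it captures why a tighter, lighter band produces a higher pitch:

```latex
% Fundamental frequency of an ideal stretched string (a rough model for a
% plucked rubber band): L is the vibrating length, T the tension, and
% \mu the mass per unit length. Higher tension or lighter material gives a higher pitch.
f = \frac{1}{2L}\sqrt{\frac{T}{\mu}}
```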

Zhang has since pondered the science behind many more observations — and played scales of a more traditional variety. At MIT, he is double-majoring in physics and mathematics with computer science, and minoring in music. Zhang says his double major allowed him to pursue all three of his academic interests, forming what he calls a “math and friends umbrella.”

“What draws me to these academic fields is that I tend to be pretty analytical,” he says. “Computers are cool and math is fun, but I really like this particular way of thinking — being able to understand something from first principles.”

Trying to understand the science underlying an observation is something Zhang thinks about often in everyday life. Once, while playing a board game with some friends on the 30th or so floor of an apartment building, Zhang says the group noticed that the sun seemed to be setting later than would be expected. Someone suggested it was because they were up so high. 

“Usually people will think, ‘Maybe that’s it,’ and move on,” says Zhang. “But I do physics, so that is not an acceptable answer.” While everyone else carried on playing the game, Zhang says he worked out that the sunset should be delayed by a few minutes at their current height. “Sometimes problems just stick and then you just have to solve it,” he says. “Or you want to solve it just because you can.” 
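Zhang’s estimate can be reproduced with a simple geometric model: from a height h, the horizon dips below eye level by roughly the square root of 2h divided by Earth’s radius, and near sunset the sun’s altitude drops at about 15 degrees per hour times the cosine of the latitude. The sketch below is a minimal version of that reasoning, not Zhang’s actual calculation; it ignores atmospheric refraction and the sun’s declination, and the building height and latitude are assumed values.

```python
# A minimal sketch of the sunset-delay estimate described above (not Zhang's
# actual calculation). Ignores atmospheric refraction and solar declination;
# the building height and latitude are assumed values for illustration.
import math

R_EARTH_M = 6.371e6  # mean Earth radius in meters

def sunset_delay_minutes(height_m, latitude_deg):
    dip_deg = math.degrees(math.sqrt(2 * height_m / R_EARTH_M))        # horizon dip
    descent_deg_per_min = 15.0 * math.cos(math.radians(latitude_deg)) / 60.0
    return dip_deg / descent_deg_per_min

# Roughly a 30th-floor height (~100 m) at Boston's latitude (~42 degrees north):
print(f"Sunset is delayed by about {sunset_delay_minutes(100, 42):.1f} minutes")
```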

A desire to understand the world around him is what drives Zhang’s studies, as well as his research. Since his junior year, Zhang has worked in the lab of Isaac Chuang focusing on quantum information, as well as atomic, molecular, and optical (AMO) physics. As Zhang explains, while everything is made up of atoms and molecules, AMO physics examines the uniquely atomic and molecular properties that occur at very low temperatures, or when a single atom is trapped in free space, for example. His current research involves trying to implement a simple quantum algorithm in real life through an experiment on a single ion of the element strontium. 

He also enjoys seeing physics come to life. “Your professors weren't lying when they say atoms behave really weirdly,” Zhang says. “Experimental AMO is an opportunity for you to see all the wacky things they promise you happen in physics theory classes. You can actually see and measure that behavior in real life.”

A different note

While he came to MIT confident in his academic pursuits, Zhang says he expected to have to give up playing the piano in order to focus on his studies. But Zhang had played since he was 7, and he says he started to realize how much he enjoyed it. Impressed by all he learned about MIT’s music program, he auditioned for an Emerson Scholarship. He was selected for the program, which helps fund piano lessons for talented students. He has largely studied with David Deveau, a senior lecturer in music at MIT.

“Slowly, instead of phasing it out, piano became an even larger part of my life than it was before coming here,” Zhang says. 

It’s even become a priority for Zhang to learn about the music departments in the schools he’s applying to for graduate work.

“People will ask you whether music informs physics or vice versa. I think the answer is: not really, but I think they're very complementary,” Zhang says. “It’s just very nice to have something completely unrelated to academics to think about and work on.”

Beyond the break it affords, Zhang says playing piano was a great way to connect with new people. He says he met one of his closest friends, a violinist, in a piano trio on campus, and that he has found the MIT undergraduate student body to be very musical. 

During his first year at MIT, Zhang surprised himself by signing up for yet another activity outside of academics. After a friend convinced him to audition, Zhang joined the MIT Asian Dance Team. “I had absolutely zero experience with dance coming into MIT,” he says. “But now I have been dancing my whole time in undergrad — poorly, I will add.”

In addition to acting as stress reducers and opportunities to work hard physically, Zhang says these non-academic activities helped him grow as a person. Music, he says, helped him become more observant about how he spends his time and more deliberate in deciding how to maximize his study and practice time. Both music and dance helped him look at himself differently. “I came into MIT not necessarily shy, but also perhaps maybe not fully comfortable with myself,” Zhang says. “I think working on piano very deeply and trying out dance, both have done a lot in helping me feel more confident and comfortable as myself.”

Zhang also joined the MIT Association of Taiwanese Students (ATS), and eventually became co-president for his sophomore and junior years. While Zhang isn’t Taiwanese, he said joining ATS was more about building community and spending time with people with similar interests. 

“There is something so nice about sharing unique parts of your culture with other people who may not have grown up with the same culture, but who also find it interesting,” he says. 

While each of his pursuits added to his MIT experience, Zhang says he’s the first to admit that it was sometimes more than he could readily manage. “I spent most of my time at MIT doing way too much,” he says. “I was always thinking about commitments.” 

As a senior, Zhang has a slightly lighter course load and fewer extracurricular activities. He says this reduced plate has allowed him to catch his breath a bit and enjoy his final year at MIT. 

“College is important to set you up for your future, but it also is an experience to be enjoyed in and of itself,” he says. “It’s amazing to have more free mental time.” 

Plus, Zhang says, “it's where a lot of unexpected breakthroughs happen.” During rehearsal for a piano trio he was a part of, for example, Zhang remembers a special moment when he let his intuition guide his playing. “I suddenly thought, what if I add pedal, but just like a very small amount of pedal? Maybe it will sound better,” he says. “And it did. If I were just drilling, drilling, drilling sections, I wouldn't have had that realization.”

Leaving room for improvisation is just one of many lessons Zhang says he’s learned at MIT. “I decided to come because I thought there would be a lot of people I would click with, and I thought this would be the best place for me to grow,” he says. “All of that has been borne out by the past four years.”

After graduation, Zhang plans to attend graduate school to continue studying physics and satisfying his curiosity about the natural world. In physics, he says, there’s still so much to explore.

“That’s why science is cool in general: Everything just gains an extra dimension of cool when you know how it works,” Zhang says. “Or when you know that nobody knows how it works.”

Robert Langer wins 2019 Dreyfus Prize for Chemistry in Support of Human Health

Tue, 05/14/2019 - 2:00pm

Robert S. Langer, the David H. Koch Institute Professor at MIT, has been awarded the 2019 Dreyfus Prize for Chemistry in Support of Human Health. The biennial prize includes a $250,000 award; an award ceremony will be held at MIT on Sept. 26 and will include a lecture by Langer.

Langer is honored for “discoveries and inventions of materials for drug delivery systems and tissue engineering that have had a transformative impact on human health through chemistry.” The citation explains that “the drug delivery technologies that he invented have been lauded as the cornerstone of that industry, positively impacting hundreds of millions of people worldwide. The impact and influence of his work is vast, and his papers have been cited in scientific publications more than any other engineer in history.”
 
Langer has written more than 1,400 articles and has over 1,350 issued and pending patents worldwide. His patents have been licensed or sublicensed to over 400 pharmaceutical, chemical, biotechnology, and medical device companies. He is one of four living individuals to have received both the National Medal of Science (2006) and the National Medal of Technology and Innovation (2011), both bestowed by the president of the United States. He has received over 220 major awards, including the 1998 Lemelson-MIT Prize, the world's largest prize for invention, for being "one of history's most prolific inventors in medicine."

“Bob Langer created two rich fields at the intersection of chemistry and medicine: controlled release materials for delivery of therapeutic macromolecules and tissue engineering,” states Matthew Tirrell, chair of the Dreyfus Foundation Scientific Affairs Committee and Director of the Institute for Molecular Engineering at the University of Chicago. “His discoveries have been translated, often by Langer himself, to many products that profoundly impact human health. In a diverse field of chemists and chemical engineers with many powerful contributors, the enormous body and influence of Bob Langer’s work stands out in a singular way.”

The Dreyfus Prize in the Chemical Sciences, initiated in 2009, is conferred in a specific area of chemistry in each cycle. It is the highest honor of the Camille and Henry Dreyfus Foundation. The foundation was established in 1946 by chemist, inventor, and businessman Camille Dreyfus, with the mission to advance the science of chemistry, chemical engineering, and related sciences as a means of improving human relations and circumstances throughout the world.

Tropical Pacific is major player in global ocean heat transport

Tue, 05/14/2019 - 2:00pm

Far from the vast, fixed bodies of water oceanographers thought they were a century ago, oceans today are known to be interconnected, highly influential agents in Earth’s climate system.

A major turning point in our understanding of ocean circulation came in the early 1980s, when research began to indicate that water flowed between remote regions, a concept later termed the “great ocean conveyor belt.”

The theory holds that warm, shallow water from the South Pacific flows to the Indian and Atlantic oceans, where, upon encountering frigid Arctic water, it cools and sinks to great depth. This cold water then cycles back to the Pacific, where it reheats and rises to the surface, beginning the cycle again.

This migration of water has long been thought to play a vital role in circulating warm water, and thus heat, around the globe. Without it, estimates suggest, average winter temperatures in Europe would be several degrees cooler.

However, recent research indicates that these global-scale seawater pathways may play less of a role in Earth’s heat budget than traditionally thought. Instead, one region may be doing most of the heavy lifting.

A paper published in April in Nature Geoscience by Gael Forget, a research scientist in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) and a member of the Program in Atmospheres, Oceans, and Climate, and David Ferreira, an associate professor in the Department of Meteorology at the University of Reading (and former EAPS postdoc), found that global ocean heat transport is dominated by heat export from the tropical Pacific.

Using a state-of-the-art ocean circulation model with nearly complete global ocean data sets, the researchers demonstrated the overwhelming predominance of the tropical Pacific in distributing heat across the globe, from the equator to the poles. In particular, they found the region exports four times as much heat as is imported in the Atlantic and Arctic.

“We are not questioning the fact that there is a lot of water going from one basin into another,” says Forget. “What we're saying is, the net effect of these flows on heat transport is relatively small. This result indicates that the global conveyor belt may not be the most useful framework in which to understand global ocean heat transport.”

Updating ECCO

The study was performed using a modernized version of a global ocean circulation model called Estimating the Circulation and Climate of the Ocean (ECCO). ECCO is the brainchild of Carl Wunsch, EAPS professor emeritus of physical oceanography, who envisioned the massive undertaking in the 1980s.

Today, ECCO is often considered the best record of ocean circulation to date. Recently, Forget has spearheaded extensive updates to ECCO, resulting in its fourth generation, which has since been adopted by NASA.

One of the major updates made under Forget’s leadership was the addition of the Arctic Ocean. Previous versions omitted the area due to a grid design that squeezed resolution at the poles. In the new version, however, the grid mimics the pattern of a volleyball, with six equally distributed grid areas covering the globe.

Forget and his collaborators also added in new data sets (on things like sea ice and geothermal heat fluxes) and refined the treatment of others. To do so, they took advantage of the advent of worldwide data collection efforts, like ARGO, which for 15 years has been deploying autonomous profiling floats across the globe to collect ocean temperature and salinity profiles.

“These are good examples of the kind of data sets that we need to inform this problem on a global scale,” says Forget. “They’re also the kind of data sets that have allowed us to constrain crucial model parameters.”

Parameters, which represent events that occur on too small a scale to be included in a model’s finite resolution, play an important role in how realistic the model’s results are (in other words, how closely its findings match up with what we see in the real world). One of many updates Forget made to ECCO involved the ability to adjust (within the model) parameters that represent mixing of the ocean on the small scale and mesoscale.

“By allowing the estimation system to adjust those parameters, we improved the fit to the data significantly,” says Forget.

The balancing act

With a new and improved foundational framework, Forget and Ferreira then sought to resolve another contentious issue: how to best measure and interpret ocean heat transport.

Ocean heat transport can be calculated in two ways: within the ocean, as the product of seawater temperature and velocity, or at the surface, as the exchange of heat between the ocean and the atmosphere. Balancing these two perspectives — tracing the exchange of heat from "source to sink" — requires sussing out which factors matter the most, and where.
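On the oceanic side, the quantity in question is usually expressed as the meridional heat transport across a given latitude, the depth-and-longitude integral of temperature times northward velocity. The expression below is the textbook definition, not necessarily the exact diagnostic used in the Nature Geoscience paper:

```latex
% Meridional ocean heat transport across latitude y (textbook definition):
% \rho is seawater density, c_p its specific heat, v the northward velocity,
% and \theta the potential temperature, integrated over longitude x and depth z.
H(y) = \iint \rho \, c_p \, v(x,y,z) \, \theta(x,y,z) \, dx \, dz
```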

Forget and Ferreira’s is the first framework that reconciles both the atmospheric and oceanic perspectives. Combining satellite data, which captures the intersection of the air and sea surface, with field data on what’s happening below the surface, the researchers created a three-dimensional representation of how heat transfers between the air, sea surface, and ocean columns.

Their results revealed a new perspective on ocean heat transport: that net ocean heat redistribution takes place primarily within oceanic basins rather than via the global seawater pathways that compose the great conveyor belt.

When the researchers removed internal ocean heat loops from the equation, they found that heat redistribution within the Pacific was the largest source of heat exchange. The region, they found, dominates the transfer of heat from the equator to the poles in both hemispheres.  

“We think this is a really important finding,” says Forget. “It clarifies a lot of things and, hopefully, puts us, as a community, on stronger footing in terms of better understanding ocean heat transport.”

Future implications

The findings have profound implications for how scientists may observe and monitor the ocean going forward, says Forget.

“The community that deals with ocean heat transport, on the ocean side, tends to focus a lot on the notion that there is a region of loss, and maybe overlooks a little bit how important the region of gain may be,” says Forget.

In practice, this has meant a focus on the North Atlantic and Arctic oceans, where heat is lost, and less focus on the tropical Pacific, where the ocean gains heat. These viewpoints often dictate priorities for funding and observational strategies, including where instruments are deployed.  

“Sometimes it’s a balance between putting a lot of measurements in one specific place, which can cost a lot of money, versus having a program that's really trying to cover a global effort,” says Forget. “Those two things sometimes compete with each other.”

In the article, Forget and Ferreira make the case that sustained observation of the global ocean as a whole, not just at a few locations and gates separating ocean basins, is crucial to monitoring and understanding ocean heat transport.

Forget also acknowledges that the findings go against some established schools of thought, and is eager to continue research in the area and hear different perspectives.

“We are expecting to stimulate some debate, and I think it's going to be exciting to see,” says Forget. “If there is pushback, all the better.”

Generating high-quality single photons for quantum computing

Tue, 05/14/2019 - 12:00am

MIT researchers have designed a way to generate, at room temperature, more single photons for carrying quantum information. The design, they say, holds promise for the development of practical quantum computers.

Quantum emitters generate photons that can be detected one at a time. Consumer quantum computers and devices could potentially leverage certain properties of those photons as quantum bits (“qubits”) to execute computations. While classical computers process and store information in bits of either 0s or 1s, qubits can be 0 and 1 simultaneously. That means quantum computers could potentially solve problems that are intractable for classical computers.
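The "0 and 1 at once" idea has a compact textbook expression: a qubit’s state is a superposition of the two basis states, and a measurement returns 0 or 1 with probabilities set by the weights. The notation below is standard background, not material from the paper:

```latex
% A generic qubit state: a superposition of |0> and |1> with complex
% amplitudes \alpha and \beta. Measurement yields 0 with probability
% |\alpha|^2 and 1 with probability |\beta|^2.
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1
```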

A key challenge, however, is producing single photons with identical quantum properties — known as “indistinguishable” photons. To improve the indistinguishability, emitters funnel light through an optical cavity where the photons bounce back and forth, a process that helps match their properties to the cavity. Generally, the longer photons stay in the cavity, the more they match.

But there’s also a tradeoff. In large cavities, quantum emitters generate photons spontaneously, resulting in only a small fraction of photons staying in the cavity, making the process inefficient. Smaller cavities extract higher percentages of photons, but the photons are lower quality, or “distinguishable.”

In a paper published today in Physical Review Letters, the researchers split one cavity into two, each with a designated task. A smaller cavity handles the efficient extraction of photons, while an attached large cavity stores them a bit longer to boost indistinguishability.

The researchers’ coupled cavity generated photons with around 95 percent indistinguishability, compared to 80 percent for a single cavity, and did so with around three times higher efficiency.

“In short, two is better than one,” says first author Hyeongrak “Chuck” Choi, a graduate student in the MIT Research Laboratory of Electronics (RLE). “What we found is that in this architecture, we can separate the roles of the two cavities: The first cavity merely focuses on collecting photons for high efficiency, while the second focuses on indistinguishability in a single channel. One cavity playing both roles can’t meet both metrics, but two cavities achieves both simultaneously.”

Joining Choi on the paper are: Dirk Englund, an associate professor of electrical engineering and computer science, a researcher in RLE, and head of the Quantum Photonics Laboratory; Di Zhu, a graduate student in RLE; and Yoseob Yoon, a graduate student in the Department of Chemistry.

The relatively new quantum emitters, known as "single-photon emitters," are created by defects in otherwise pure materials, such as diamonds, doped carbon nanotubes, or quantum dots. Light produced from these "artificial atoms" is captured by a tiny optical cavity in a photonic crystal — a nanostructure acting as a mirror. Some photons escape, but others bounce around the cavity, which forces the photons to have the same quantum properties — mainly, the same frequency-related properties. When they’re measured to match, they exit the cavity through a waveguide.

But single-photon emitters also experience tons of environmental noise, such as lattice vibrations or electric charge fluctuations, that shift the photons’ wavelength or phase. Photons with different properties cannot be made to interfere, meaning their waves cannot overlap to produce interference patterns. That interference pattern is basically what a quantum computer observes and measures to do computational tasks.

Photon indistinguishability is a measure of photons’ potential to interfere. In that way, it’s a valuable metric for predicting how useful the photons will be for practical quantum computing. “Even before photon interference, with indistinguishability, we can specify the ability for the photons to interfere,” Choi says. “If we know that ability, we can calculate what’s going to happen if they are using it for quantum technologies, such as quantum computers, communications, or repeaters.”
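In quantum optics, indistinguishability is commonly quantified through two-photon (Hong-Ou-Mandel) interference: when two photons meet at a balanced beamsplitter, the rate of coincident detections at the two outputs falls as their overlap grows. The relation below is the standard textbook form, included as background rather than a formula quoted from the paper:

```latex
% Hong-Ou-Mandel interference at a 50:50 beamsplitter: I is the
% indistinguishability (mode overlap) of the two photons. Perfectly
% indistinguishable photons (I = 1) never produce coincidences; fully
% distinguishable photons (I = 0) do so half the time.
P_{\mathrm{coincidence}} = \frac{1 - I}{2}
```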

In the researchers’ system, a small cavity sits attached to an emitter, which in their studies was an optical defect in a diamond, called a “silicon-vacancy center” — a silicon atom replacing two carbon atoms in a diamond lattice. Light produced by the defect is collected into the first cavity. Because of its light-focusing structure, photons are extracted with very high rates. Then, the nanocavity channels the photons into a second, larger cavity. There, the photons bounce back and forth for a certain period of time. When they reach a high indistinguishability, the photons exit through a partial mirror formed by holes connecting the cavity to a waveguide.

Importantly, Choi says, neither cavity has to meet the rigorous design requirements for efficiency or indistinguishability that traditional single cavities do, as captured by a metric called the "quality factor" (Q-factor). The higher the Q-factor, the lower the energy loss in optical cavities. But cavities with high Q-factors are technologically challenging to make.

In the study, the researchers’ coupled cavity produced higher quality photons than any possible single-cavity system. Even when its Q factor was roughly one-hundredth the quality of the single-cavity system, they could achieve the same indistinguishability with three times higher efficiency.

The cavities can be tuned to optimize for efficiency versus indistinguishability — and to consider any constraints on the Q factor — depending on the application. That’s important, Choi adds, because today’s emitters that operate at room temperature can vary greatly in quality and properties.

Next, the researchers are testing the theoretical limits of designs with more than two cavities. One cavity would still handle the initial extraction efficiently, but it would then be linked to multiple cavities of various sizes that store the photons until they reach some optimal indistinguishability. But there will most likely be a limit, Choi says: “With two cavities, there is just one connection, so it can be efficient. But if there are multiple cavities, the multiple connections could make it inefficient. We’re now studying the fundamental limit for cavities for use in quantum computing.”

In cancer research, a winding road to discovery

Tue, 05/14/2019 - 12:00am

In 1961, people in the suburb of Niles, Illinois, experienced what they termed a “cancer epidemic.” Over a dozen children in the town were diagnosed with leukemia within a short time. Fears quickly spread that the illness could be contagious, carried by some type of “cancer virus.” News coverage soon identified several other towns with apparent “cancer clusters,” as well. Belief that cancer was a simple contagion, like polio or the flu, kept bubbling up.

“People wrote [to medical authorities] well into the 1960s asking, ‘I lived in a house where somebody had cancer. Am I going to catch cancer?’” says Robin Scheffler, the Leo Marx CD Assistant Professor in the History and Culture of Science and Technology at MIT.

Those fears were taken seriously. The National Cancer Institute (NCI) created the Special Virus Leukemia Program in 1964 and over the next 15 years spent more than $6.5 billion (in 2017 dollars) on cancer virus research intended to develop a vaccine. That’s more than the funding for the subsequent Human Genome Project, as Scheffler points out.

The results of that funding were complex, unanticipated — and significant, as Scheffler details in his new book, “A Contagious Cause: The American Hunt for Cancer Viruses and the Rise of Molecular Medicine,” published this week by the University of Chicago Press.

In the process, scientists did not find — and never have — a single viral cause of cancer. On the other hand, as a direct result of the NCI’s funding project, scientists did find oncogenes, the type of gene which, when activated, can cause many forms of cancer.

“That investment helped drive the field of modern molecular biology,” Scheffler says. “It didn’t find the human cancer virus. But instead of closing down, it invented a new idea of how cancer is caused, which is the oncogene theory.”

As research has continued, scientists today have identified hundreds of types of cancer, and about one out of every six cases has viral origins. While there is not one “cancer virus,” some vaccinations reduce susceptibility to certain kinds of cancer. In short, our understanding of cancer has become more sophisticated, specific, and effective — but the path of progress has had many twists and turns. 

Less insurance, more research

As Scheffler details in his book, fears that cancer was a simple contagion can be traced back at least to the 18th century. They appear to have gained significant ground in the early 20th-century U.S., however, influencing medical research and even hospital design.

The rise of massive funding for cancer research is mostly a post-World War II phenomenon; like much of Scheffler’s narrative, its story contains developments that would have been very hard to predict.

For instance, as Scheffler chronicles, one of the key figures in the growth of cancer research was the midcentury health care activist Mary Lasker, who with her husband had founded the Lasker Foundation in 1942, and over time helped transform the American Cancer Society.

During the presidency of Harry S. Truman, however, Lasker’s main goal was the creation of universal health insurance for Americans — an idea that seemed realistic for a time but was eventually shot down in Washington. That was a major setback for Lasker. In response, though, she became a powerful advocate for federal funding of medical research — especially through the National Institutes of Health (NIH), and the NCI, one of the NIH’s arms.

Scheffler calls this tradeoff — less government health insurance, but more biomedical research — the “biomedical settlement,” and notes that it was unique to the U.S. at the time. By contrast, in grappling with cancer through the 1960s, Britain and France, for example, put more relative emphasis on treatment, and Germany looked more extensively at environmental issues. Since the 1970s, there has been more convergence in the approaches of many countries.

“The term ‘biomedical settlement’ is a phrase I created to describe an idea that seems commonplace in the United States but is actually very extraordinary in the context of other industrial nations — which is, we will not federalize health care, but we will federalize health research,” Scheffler says. “It’s remarkable to keep the government out of one but invite it into the other.”

And while observers of the U.S. scientific establishment today know the NIH as a singular research force, they probably don’t think of it as compensation, in a sense, for the failed policy aims of Lasker and her allies.

“Someone like Mary Lasker is one of the architects of the settlement out of her conviction there were ways to involve the federal government even if they couldn’t provide medical care,” Scheffler adds.

Fighting through frustration

The core of “A Contagious Cause” chronicles critical research developments in the 1960s and 1970s, as biologists made headway in understanding many forms of cancer. But beyond its rich narrative about the search for a single cancer virus, “A Contagious Cause” also contains plenty of material that underscores the highly contingent, unpredictable nature of scientific discovery.

From stymied scientists to angry activists, many key figures in the book seemed to have reached dead ends before making the advances we now recognize. Yes, science needs funding, new instrumentation, and rich theories to advance. But it can also be fueled by frustration.

“The thing I find interesting is that there are a lot of moments of frustration,” Scheffler says. “Things don’t go the way people want, and they have to decide what they’re going to do next. I think often the history of science focuses on moments of discovery, or highlights great innovations and their successes. But talking about frustration and failure is also a very important topic to highlight in terms of how we understand the history of science.”

“A Contagious Cause” has received praise from other scholars. Angela Creager, a historian of science at Princeton University, has called it “powerfully argued” and “vital reading for historians of science and political historians alike.”

For his part, Scheffler says he hopes his book will both illuminate the history of cancer research in the U.S. and underscore the need for policymakers to apply a broad set of tools as they guide our ongoing efforts to combat cancer.

“Cancer is a molecular disease, but it’s also an environmental disease and a social disease. We need to understand the problem at all those levels to come up with a policy that best confronts it,” Scheffler says.

How a declining environment affects populations

Mon, 05/13/2019 - 11:59pm

Stable ecosystems occasionally experience events that cause widespread death — for example, bacteria in the human gut may be wiped out by antibiotics, or ocean life may be depleted by overfishing. A new study from MIT physicists reveals how these events affect dynamics between different species within a community.

In their studies, performed in bacteria, the researchers found that a species with a small population size under normal conditions can increase in abundance as conditions deteriorate. These findings are consistent with a theoretical model that had been previously developed but has been difficult to test in larger organisms.

“For a single species within a complex community, an increase in mortality doesn’t necessarily mean that the net effect is that you’re going to be harmed. It could be that although the mortality itself is not good for you, the fact that your competitor species are also experiencing an increase in mortality, and they may be more sensitive to it than you are, means that you could do better,” says Jeff Gore, an MIT associate professor of physics and the senior author of the study.

The findings in bacteria may also be applicable to larger organisms in real-world populations, which are much more difficult to study because it is usually impossible to control the conditions of the experiment the way researchers can with bacteria growing in a test tube.

“We think that this may be happening in complex communities in natural environments, but it’s hard to do the experiments that are necessary to really nail it down. Whereas in the context of the lab, we can make very clear measurements where you see this effect in a very obvious way,” Gore says.

Clare Abreu, an MIT graduate student, is the lead author of the study, which appeared in Nature Communications on May 9. Vilhelm Andersen Woltz, an MIT undergraduate, and Jonathan Friedman, a former MIT postdoc, are also authors of the paper.

Competition for resources

Microbial communities, such as those found in soil, oceans, or the human gut, usually contain thousands of different species. Gore’s lab is interested in studying the factors that determine which species are present in a given environment, and how the composition of those populations affects their functions, whether that’s cycling carbon in the ocean or helping each other resist antibiotic treatment in the gut.

By performing controlled experiments in the lab, Gore hopes to learn how different species interact with each other, and to test hypotheses that predict how populations respond to their environment. In 2013, he discovered early warning signs of population collapse in yeast, and he has also studied how different species of bacteria can protect each other against antibiotics.

“We’re using experimentally tractable, simple communities to try to determine the principles that determine which species can coexist, and how that changes in different environments,” Gore says.

To explore whether these experimental results might be applicable to larger communities, last year Gore and his colleagues published a paper in which they showed that interactions between pairs of species that compete for resources can be used to predict, with about 90 percent accuracy, the outcome when three of the species compete with each other.

In the new study, Gore and Abreu decided to see if they could use pairwise interactions to predict how trios of competing species would respond as environmental conditions deteriorate. To simulate this in the lab, the researchers used the process of dilution — that is, discarding a large percentage (ranging from 90 percent to 99.999999 percent) of the population at the end of each day and transferring the remainder to fresh resources. This could be analogous to real-world conditions such as overfishing or loss of habitat.

“We’re trying to get at the general question of how an increase in mortality might change the composition of a community,” Gore says.

The researchers studied combinations of five species of soil bacteria. In their experiments, in which they tested pairs of species at a time, they found a specific pattern that fit the predictions made by a classical model of species interactions, known as the Lotka-Volterra model.

According to this model, declining environmental conditions should favor faster growers. The researchers found that this was the case: Even in conditions where a slower grower originally dominated the population, as the dilution rate was increased, the populations shifted until eventually the faster grower either became the larger fraction of the population or took over completely. The final outcome depends on how strong each competitor is, as well as their relative abundance in the starting population.
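That qualitative prediction can be reproduced in a few lines of code. The sketch below integrates the two-species Lotka-Volterra competition equations with an added uniform mortality term standing in for dilution; all parameter values are hypothetical, chosen only to show the outcome flipping from the slow grower to the fast grower as mortality rises, not fits from the paper.

```python
# A minimal sketch (not the authors' code) of Lotka-Volterra competition with
# an added per-capita mortality (dilution) rate. All parameters are
# hypothetical, chosen to illustrate the flip toward the faster grower.
def compete(delta, r=(0.3, 1.0), K=(1.0, 0.7), a=(0.5, 1.2),
            n0=0.05, dt=0.01, t_max=1000.0):
    """Integrate two competing species with uniform mortality rate delta."""
    n1, n2 = n0, n0  # n1 = slow grower, n2 = fast grower
    for _ in range(int(t_max / dt)):
        dn1 = r[0] * n1 * (1 - (n1 + a[0] * n2) / K[0]) - delta * n1
        dn2 = r[1] * n2 * (1 - (n2 + a[1] * n1) / K[1]) - delta * n2
        n1, n2 = max(n1 + dn1 * dt, 0.0), max(n2 + dn2 * dt, 0.0)
    return n1, n2

for delta in (0.0, 0.25):
    slow, fast = compete(delta)
    print(f"mortality {delta:.2f}: slow grower {slow:.3f}, fast grower {fast:.3f}")
# With these parameters the slow grower wins at zero mortality,
# while the fast grower takes over at the higher mortality rate.
```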

The researchers also found that the results of the pairwise competitions could accurately predict what would happen when three species grew together in an environment with deteriorating conditions.

“This is an exciting advance in our understanding of microbial ecology,” says Sean Gibbons, an assistant professor at the Institute for Systems Biology, who was not involved in the research. “The observation that nonspecific mortality rates can alter competitive outcomes is surprising, although more work needs to be done to understand whether or not dilution is having a more nuanced effect on environmental conditions.”

Population models

The Lotka-Volterra model analyzed in this study was originally developed for interactions between larger organisms. Such models are easier to test in microbial populations because it is much easier to control experimental conditions for bacteria than for, say, deer living in a forest.

“There’s no particular reason to believe that the models are more applicable to microbes than they are to macroorganisms. It’s just that with microbes, we can study hundreds of these communities at a time, and turn the experimental knobs and make clear measurements,” Gore says. “With microorganisms, we can arrive at a clear understanding of when is it that these models are working and when is it that they’re not.”

Gore and his students are now studying how specific environmental changes, including changes in temperature and resources, can alter the composition of microbial communities. They are also working on experimentally manipulating populations that include more than two bacterial species.

The research was funded, in part, by the National Institutes of Health.

Measuring chromosome imbalance could clarify cancer prognosis

Mon, 05/13/2019 - 2:59pm

Most human cells have 23 pairs of chromosomes. Any deviation from this number can be fatal for cells, and several genetic disorders, such as Down syndrome, are caused by abnormal numbers of chromosomes.

For decades, biologists have also known that cancer cells often have too few or too many copies of some chromosomes, a state known as aneuploidy. In a new study of prostate cancer, researchers have found that higher levels of aneuploidy lead to much greater lethality risk among patients.

The findings suggest a possible way to more accurately predict patients’ prognosis, and could be used to alert doctors which patients might need to be treated more aggressively, says Angelika Amon, the Kathleen and Curtis Marble Professor in Cancer Research in the Department of Biology and a member of the Koch Institute for Integrative Cancer Research.

“To me, the exciting opportunity here is the ability to inform treatment, because prostate cancer is such a prevalent cancer,” says Amon, who co-led this study with Lorelei Mucci, an associate professor of epidemiology at the Harvard T.H. Chan School of Public Health.

Konrad Stopsack, a research associate at Memorial Sloan Kettering Cancer Center, is the lead author of the paper, which appears in the Proceedings of the National Academy of Sciences the week of May 13. Charles Whittaker, a Koch Institute research scientist; Travis Gerke, a member of the Moffitt Cancer Center; Massimo Loda, chair of pathology and laboratory medicine at New York Presbyterian/Weill Cornell Medicine; and Philip Kantoff, chair of medicine at Memorial Sloan Kettering, are also authors of the study.

Better predictions

Aneuploidy occurs when cells make errors sorting their chromosomes during cell division. When aneuploidy occurs in embryonic cells, it is almost always fatal to the organism. For human embryos, extra copies of any chromosome are lethal, with the exceptions of chromosome 21, which produces Down syndrome; chromosomes 13 and 18, which lead to developmental disorders known as Patau and Edwards syndromes; and the X and Y sex chromosomes. Extra copies of the sex chromosomes can cause various disorders but are not usually lethal.

Most cancers also show very high prevalence of aneuploidy, which poses a paradox: Why does aneuploidy impair normal cells’ ability to survive, while aneuploid tumor cells are able to grow uncontrollably? There is evidence that aneuploidy makes cancer cells more aggressive, but it has been difficult to definitively demonstrate that link because in most types of cancer nearly all tumors are aneuploid, making it difficult to perform comparisons.

Prostate cancer is an ideal model to explore the link between aneuploidy and cancer aggressiveness, Amon says, because, unlike most other solid tumors, many prostate cancers (25 percent) are not aneuploid or have only a few altered chromosomes. This allows researchers to more easily assess the impact of aneuploidy on cancer progression.

What made the study possible was a collection of prostate tumor samples from the Health Professionals Follow-up Study and Physicians’ Health Study, run by the Harvard T.H. Chan School of Public Health over the course of more than 30 years. The researchers had genetic sequencing information for these samples, as well as data on whether and when their prostate cancer had spread to other organs and whether they had died from the disease.

Led by Stopsack, the researchers came up with a way to calculate the degree of aneuploidy of each sample, by comparing the genetic sequences of those samples with aneuploidy data from prostate genomes in The Cancer Genome Atlas. They could then correlate aneuploidy with patient outcomes, and they found that patients with a higher degree of aneuploidy were five times more likely to die from the disease. This was true even after accounting for differences in Gleason score, a measure of how much the patient’s cells resemble cancer cells or normal cells under a microscope, which is currently used by doctors to determine severity of disease.
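One widely used way to summarize aneuploidy from sequencing data, for example in analyses built on The Cancer Genome Atlas, is to count how many chromosome arms show a copy-number gain or loss. The sketch below illustrates that general idea with hypothetical input and thresholds; it is not the study’s actual pipeline.

```python
# Illustrative sketch (not the study's method): summarize aneuploidy as the
# number of chromosome arms whose average copy number deviates from the
# normal two copies. Input values and threshold are hypothetical.
from typing import Dict

def aneuploidy_score(arm_copy_number: Dict[str, float], tol: float = 0.5) -> int:
    """Count chromosome arms gained or lost relative to the normal 2 copies."""
    return sum(1 for cn in arm_copy_number.values() if abs(cn - 2.0) > tol)

# Hypothetical tumor with a gain on 8q and losses on 8p and 13q:
sample = {"7p": 2.1, "7q": 2.0, "8p": 1.2, "8q": 3.1, "13q": 1.1, "17p": 2.2}
print(aneuploidy_score(sample))  # prints 3
```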

The findings suggest that measuring aneuploidy could offer additional information for doctors who are deciding how to treat patients with prostate cancer, Amon says.

“Prostate cancer is terribly overdiagnosed and terribly overtreated,” she says. “So many people have radical prostatectomies, which has significant impact on people’s lives. On the other hand, thousands of men die from prostate cancer every year. Assessing aneuploidy could be an additional way of helping to inform risk stratification and treatment, especially among people who have tumors with high Gleason scores and are therefore at higher risk of dying from their cancer.”

“When you’re looking for prognostic factors, you want to find something that goes beyond known factors like Gleason score and PSA [prostate-specific antigen],” says Bruce Trock, a professor of urology at Johns Hopkins School of Medicine, who was not involved in the research. “If this kind of test could be done right after a prostatectomy, it could give physicians information to help them decide what might be the best treatment course.”

Amon is now working with researchers from the Harvard T.H. Chan School of Public Health to explore whether aneuploidy can be reliably measured from small biopsy samples.

Aneuploidy and cancer aggressiveness

The researchers found that the chromosomes that are most commonly aneuploid in prostate tumors are chromosomes 7 and 8. They are now trying to identify specific genes located on those chromosomes that might help cancer cells to survive and spread, and they are also studying why some prostate cancers have higher levels of aneuploidy than others.

“This research highlights the strengths of interdisciplinary, team science approaches to tackle outstanding questions in prostate cancer,” Mucci says. “We plan to translate these findings clinically in prostate biopsy specimens and experimentally to understand why aneuploidy occurs in prostate tumors.”

Another type of cancer where most patients have low levels of aneuploidy is thyroid cancer, so Amon now hopes to study whether thyroid cancer patients with higher levels of aneuploidy also have higher death rates.

“A very small proportion of thyroid tumors is highly aggressive and lethal, and I’m starting to wonder whether those are the ones that have some aneuploidy,” she says.

The research was funded by the Koch Institute Dana Farber/Harvard Cancer Center Bridge Project and by the National Institutes of Health, including the Koch Institute Support (core) Grant.

Local high school girls build dye-sensitized solar cells at MIT

Mon, 05/13/2019 - 12:55pm

The Materials Research Science and Engineering Center (MRSEC) at the MIT Materials Research Laboratory hosted 26 high school girls from the greater Boston area last month for lunch and a science-and-engineering project building dye-sensitized solar cells. The participants were a subset of about 150 girls attending the day-long Science, Engineering, and Technology (SET) in the City program sponsored by the Boston Area Girls STEM Collaborative.

The girls were led by three MRSEC graduate students and eight undergraduate members of the MIT Society of Women Engineers. The solar cell activity involved staining a conductive glass slide with a thin film of titanium dioxide (TiO2) and raspberry juice. A second slide was coated with graphite. After the two slides were clipped together, an electrolyte solution was applied. The girls measured the voltage and current output of their cells by attaching them to a multimeter.
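For anyone repeating the activity, the multimeter readings translate into a rough power estimate with one line of arithmetic. The values below are assumed, plausible numbers for a berry-dye cell rather than measurements from the event, and the fill factor is an assumption standing in for a full current-voltage sweep.

```python
# Rough power estimate for a dye-sensitized solar cell from multimeter
# readings. All values are assumed for illustration, not event data.
open_circuit_voltage = 0.45      # volts
short_circuit_current = 0.9e-3   # amperes (0.9 mA)
fill_factor = 0.5                # assumed; a real value needs a full I-V sweep

max_power = fill_factor * open_circuit_voltage * short_circuit_current
print(f"Estimated maximum power: {max_power * 1e3:.2f} mW")  # about 0.20 mW
```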

The day-long SET in the City program began at Boston University with two kickoff speakers: a chemistry professor from Simmons University and a scientist from Cellino Biotech. This was followed by a “Science Information Bazaar” featuring tabletop demonstrations, posters, exhibits, and computer applications presented by students from member universities.

The 150 high school participants then divided into five sections, each bused to a different venue for lunch and a science/engineering activity. The five locations were MIT, Harvard University, Emmanuel College, Biogen, and Simmons University. After the activity, the entire group reconvened at Merck Research Laboratories in Boston for a keynote address by two Merck scientists titled “Shaping the Future of Research: Innovation and Discovery into Clinical Breakthroughs.” A panel of five college and graduate students ended the program by sharing their passion for science and engineering, and describing their college and university experiences.

The Boston Area Girls STEM Collaborative is a group of women representing local nonprofit organizations, colleges, and universities who are committed to increasing young women’s participation in science, technology, engineering, and mathematics. SET in the City is one of the programs developed by the collaborative to offer girls opportunities to learn more about STEM while interacting with female role models. This was the 11th year that SET in the City was offered.

Where design meets assembly for three MIT alumnae at Microsoft

Mon, 05/13/2019 - 12:30pm

Microsoft’s sprawling campus in Redmond, Washington, houses over 40,000 of its employees. It contains 125 buildings across 502 acres of land. Despite the vastness of its campus, three MIT Department of Mechanical Engineering alumnae found themselves not only in the same building, but working for the same team.

“It’s great having a small part of the MechE community here,” says Ann McInroy ’18, who joined Microsoft as a Design for Assembly (DFA) mechanical engineer last August. “We have this shared experience and knowledge base.”

McInroy joined fellow alumnae Jacklyn (Holmes) Herbst ’10, MEngM ’11 and Isabella DiDio ’16 as a member of the Design for Assembly Team. The DFA Team is a part of the overarching Design for Excellence (DFX) Team at Microsoft.

The DFA Team helps facilitate a product’s journey from initial prototype through to mass production. “In the early stages of any product, our team works with the mechanical design team to optimize the parts so they are easy to assemble,” explains DiDio.

Once the team has a working prototype of a design, they analyze the product to ensure it can be made at scale. Whether it’s a power button on Microsoft’s Surface Pro or a screw in the HoloLens headband, the team ensures every component of a product lends itself to manufacturability.

“After the design is finished, we are in charge of outlining all of the steps for mass producing the product in the factory,” explains McInroy. “It’s a cool ‘in-between’ stage that not every company has.”

While the three often work on different products or different phases of the development cycle, they bring their shared experiences studying mechanical engineering at MIT to help each other solve problems.

“There’s a lot of cross-product problem solving on our team,” explains Herbst. “If Ann or Isabella are stuck on some part of the process, they can come to the team for guidance. Having a strong MechE connection on the team definitely helps us when we are solving those problems.”

Herbst was the first to join Microsoft’s DFA Team in January 2016. After earning her bachelor’s in mechanical engineering in 2010, she enrolled in the master of engineering in manufacturing (MEngM) program. She worked with Brian Anthony, principal research scientist, on developing a new way of producing electrodes for Daktari Diagnostics.

Herbst then moved on to Boeing for four years before joining Microsoft. During her time at Boeing, she worked on installation planning for commercial airplanes, as well as dimensional management. “At Boeing I was very specialized in what I did, but the work I do at Microsoft provides a much broader view of getting a product from design to mass production,” adds Herbst. One of Herbst’s first tasks was working on the Surface Book i7 model.

Eight months after Herbst started at Microsoft, Isabella DiDio walked into her office. “On my first day, my manager brought me around to everyone’s office to introduce me,” recalls DiDio. “When he brought me to Jackie’s office he told me that Jackie also went to MIT and made us do a fist bump with our MIT class rings.”

As an undergrad at MIT, DiDio was most impacted by class 2.008 (Design and Manufacturing II). Students in 2.008 are charged with designing a yo-yo and producing 50 copies. “That class really opened my eyes to manufacturing and the bigger picture of any consumer product,” says DiDio. The experience inspired her to pursue an internship on Microsoft’s manufacturing team.

After graduating with her bachelor’s, DiDio joined Microsoft full time as a DFX engineer. One of her first projects was working on the Microsoft HoloLens, a holographic computer that users wear like sunglasses.

“For the HoloLens I helped set the entire assembly flow, including the order all the parts are assembled in and instructions for operators at our contract manufacturer,” explains DiDio.

About a year after starting at Microsoft, DiDio served as a peer mentor for a group of interns, one of whom was Ann McInroy.

McInroy was inspired by classes like 2.72 (Elements of Mechanical Design), taught by Professor Marty Culpepper, to pursue a career in manufacturing. In the class, students design and construct a single prototype of a high-precision desktop manual lathe. “That class built my confidence as an engineer,” recalls McInroy. “It helped push me toward a career that incorporated some aspects of design and manufacturing.” 

While applying for internships, McInroy was drawn to the blend of design and manufacturing offered at Microsoft. As an intern, she worked on designing buttons that would lend themselves to manufacturability in the future.

McInroy joined the DFA Team after graduating from MIT last summer. Being part of a small tribe of MechE alumnae working on the same team is something she doesn’t take for granted.

“I really appreciate having a cohort of women engineers that I belong to here at Microsoft,” McInroy adds.

While the trio are at varying stages of their careers and have taken different paths to Redmond, they often draw upon their time at MIT.

“We still talk about some of those MechE connections — we talk about our products in 2.009 or our yo-yos in 2.008,” adds Herbst. “That common bond helps us when we are working together.”
