MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

Microfluidics device helps diagnose sepsis in minutes

Tue, 07/23/2019 - 12:00am

A novel sensor designed by MIT researchers could dramatically accelerate the process of diagnosing sepsis, a leading cause of death in U.S. hospitals that kills nearly 250,000 patients annually.

Sepsis occurs when the body’s immune response to infection triggers an inflammation chain reaction throughout the body, causing high heart rate, high fever, shortness of breath, and other issues. If left unchecked, it can lead to septic shock, where blood pressure falls and organs shut down. To diagnose sepsis, doctors traditionally rely on various diagnostic tools, including vital signs, blood tests, and other imaging and lab tests.

In recent years, researchers have found protein biomarkers in the blood that are early indicators of sepsis. One promising candidate is interleukin-6 (IL-6), a protein produced in response to inflammation. In sepsis patients, IL-6 levels can rise hours before other symptoms begin to show. But even at these elevated levels, the concentration of this protein in the blood is too low overall for traditional assay devices to detect it quickly.

In a paper being presented this week at the Engineering in Medicine and Biology Conference, MIT researchers describe a microfluidics-based system that automatically detects clinically significant levels of IL-6 for sepsis diagnosis in about 25 minutes, using less than a finger prick of blood.

In one microfluidic channel, microbeads laced with antibodies mix with a blood sample to capture the IL-6 biomarker. In another channel, only beads containing the biomarker attach to an electrode. Running voltage through the electrode produces an electrical signal for each biomarker-laced bead, which is then converted into the biomarker concentration level.

“For an acute disease, such as sepsis, which progresses very rapidly and can be life-threatening, it’s helpful to have a system that rapidly measures these nonabundant biomarkers,” says first author Dan Wu, a PhD student in the Department of Mechanical Engineering. “You can also frequently monitor the disease as it progresses.”

Joining Wu on the paper is Joel Voldman, a professor and associate head of the Department of Electrical Engineering and Computer Science, co-director of the Medical Electronic Device Realization Center, and a principal investigator in the Research Laboratory of Electronics and the Microsystems Technology Laboratories.

Integrated, automated design

Traditional assays that detect protein biomarkers are bulky, expensive machines relegated to labs; they require about a milliliter of blood and take hours to produce results. In recent years, portable “point-of-care” systems have been developed that use microliters of blood to get similar results in about 30 minutes.

But point-of-care systems can be very expensive since most use pricey optical components to detect the biomarkers. They also capture only a small number of proteins, many of which are among the more abundant ones in blood. Any effort to decrease the price, shrink the components, or increase the range of detectable proteins negatively impacts their sensitivity.

In their work, the researchers wanted to shrink components of the magnetic-bead-based assay, which is often used in labs, onto an automated microfluidics device that’s roughly several square centimeters. That required manipulating beads in micron-sized channels and fabricating a device in the Microsystems Technology Laboratory that automated the movement of fluids.

The beads are coated with an antibody that attracts IL-6, as well as a catalyzing enzyme called horseradish peroxidase. The beads and blood sample are injected into the device, entering into an “analyte-capture zone,” which is basically a loop. Along the loop is a peristaltic pump — commonly used for controlling liquids — with valves automatically controlled by an external circuit. Opening and closing the valves in specific sequences circulates the blood and beads to mix together. After about 10 minutes, the IL-6 proteins have bound to the antibodies on the beads.
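The valve sequencing described above can be sketched in code. This is purely illustrative, not the researchers' actual control logic: the three-valve layout and the specific open/close pattern are assumptions, chosen to show how cycling valve states along a channel can push fluid around a loop.

```python
# Illustrative sketch (not the actual device firmware) of how an external
# circuit might drive three on-chip valves in a repeating sequence so they
# act as a peristaltic pump, squeezing fluid along the mixing loop the way
# fingers squeeze a tube.
import itertools

# Each step lists which of the three valves are closed (True).
PERISTALTIC_SEQUENCE = [
    (True, False, False),
    (True, True, False),
    (False, True, True),
    (False, False, True),
]

def valve_states(n_steps: int):
    """Return n_steps of valve settings, cycling through the sequence."""
    return list(itertools.islice(itertools.cycle(PERISTALTIC_SEQUENCE), n_steps))

states = valve_states(6)  # the controller would replay this for ~10 minutes
```

Repeating the cycle continuously circulates the blood and beads until the IL-6 proteins have bound to the antibodies.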

Automatically reconfiguring the valves at that time forces the mixture into a smaller loop, called the “detection zone,” where it stays trapped. A tiny magnet collects the beads for a brief wash before releasing them around the loop. After about 10 minutes, many beads have stuck to an electrode coated with a separate antibody that attracts IL-6. At that time, a solution flows into the loop and washes away the untethered beads, while the ones with IL-6 protein remain on the electrode.

The solution carries a specific molecule that reacts to the horseradish enzyme to create a compound that responds to electricity. When a voltage is applied to the solution, each remaining bead creates a small current. A common chemistry technique called “amperometry” converts that current into a readable signal. The device counts the signals and calculates the concentration of IL-6.
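The last step, converting the measured current into a concentration, typically runs through a calibration curve. The sketch below is hypothetical: the linear model, the function name, and all numbers are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of converting an amperometric current reading into a
# biomarker concentration via a linear calibration curve:
#     current = baseline + slope * concentration
# The slope and baseline values here are illustrative only; a real device
# would fit them from measurements of known IL-6 standards.

def concentration_from_current(current_nA: float,
                               slope_nA_per_pg_mL: float = 0.5,
                               baseline_nA: float = 2.0) -> float:
    """Invert the calibration line; clamp at zero for sub-baseline noise."""
    return max(0.0, (current_nA - baseline_nA) / slope_nA_per_pg_mL)

reading = concentration_from_current(10.0)  # pg/mL, with the default curve
```

In practice the calibration would be fit per assay batch, since bead coatings and enzyme activity vary.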

“On their end, doctors just load in a blood sample using a pipette. Then, they press a button and 25 minutes later they know the IL-6 concentration,” Wu says.

The device uses about 5 microliters of blood, which is about a quarter the volume of blood drawn from a finger prick and a fraction of the 100 microliters required to detect protein biomarkers in lab-based assays. The device captures IL-6 concentrations as low as 16 picograms per milliliter, which is below the concentrations that signal sepsis, meaning the device is sensitive enough to provide clinically relevant detection.

A general platform

The current design has eight separate microfluidics channels to measure as many different biomarkers or blood samples in parallel. Different antibodies and enzymes can be used in separate channels to detect different biomarkers, or different antibodies can be used in the same channel to detect several biomarkers simultaneously.

Next, the researchers plan to create a panel of important sepsis biomarkers for the device to capture, including interleukin-6, interleukin-8, C-reactive protein, and procalcitonin. But there’s really no limit to how many different biomarkers the device can measure, for any disease, Wu says. Notably, more than 200 protein biomarkers for various diseases and conditions have been approved by the U.S. Food and Drug Administration.

“This is a very general platform,” Wu says. “If you want to increase the device’s physical footprint, you can scale up and design more channels to detect as many biomarkers as you want.”

The work was funded by Analog Devices, Maxim Integrated, and the Novartis Institutes for BioMedical Research.

Celebrating a curious mind: Steven Keating 1988-2019

Mon, 07/22/2019 - 2:35pm

Alumnus Steven John Keating SM '12, PhD '16 passed away from brain cancer on July 19 at the age of 31.

Keating received his master’s degree and PhD in mechanical engineering and was a member of the MIT Media Lab’s Mediated Matter team. He inspired countless people with his courageous, research-driven approach to battling cancer and was a champion for patient access to health data. 

Curiosity was a driving force in Keating’s life. Growing up in Calgary, Canada, he spent a large portion of his childhood tinkering with and building devices. This predilection for making led to not only his love of engineering, but also his affinity for film and photography. As an undergraduate at Queen’s University in Kingston, Canada, Keating pursued his twin passions — earning a dual degree in mechanical and materials engineering alongside a degree in film and media.

In 2010, Keating fulfilled his lifelong dream of attending MIT and enrolled as a graduate student studying mechanical engineering. He joined the Media Lab’s Mediated Matter group under his co-advisor Neri Oxman, the Sony Corporation Career Development Associate Professor of Media Arts and Sciences.

At the Media Lab, Keating conducted research on additive manufacturing and synthetic biology. He pushed the limits of 3-D printing and developed a technology that could 3-D print the foundation of a building. This technology was recently acquired by NASA for potential applications in its pursuit of landing on the moon by 2024.

“Steve utilized humor while solving equations and inspired a sense of empathy when discussing ethical issues associated with robotics and synthetic biology,” Oxman reflects. “The projects he left behind are very much alive and will continue to have meaningful impact on the physical and societal landscapes we inhabit.”

In the Department of Mechanical Engineering, Keating served as a teaching assistant for the senior capstone class 2.009 (Product Engineering Processes), alongside his co-advisor, David Wallace, professor of mechanical engineering. He also helped teach the popular introductory course 2.00b (Toy Product Design) with Wallace.

“Steve had an infectious kindness and curiosity that elevated those around him, exploring simply for the joy and thrill of learning,” says Wallace. “His teaching contributions in our freshman toy product design and senior product engineering design classes made an enduring impact.”

Four years into his graduate studies at MIT, Keating’s world was turned upside down when a baseball-sized tumor was found in his brain. The innate curiosity that had brought him to MIT ultimately led to his diagnosis. In a 2014 speech at the Koch Institute, Keating recalled: “Curiosity is why we are here [at MIT] doing research, and ironically that’s how I found my tumor.”

As an undergraduate in 2007, Keating had participated in a brain study purely out of curiosity. His MRI scans revealed a dime-sized abnormality located near the smell center of his brain. This knowledge prompted Keating to seek medical attention when, in the summer of 2014, he began smelling vinegar and getting headaches. A new MRI scan showed a low-grade glioma in the frontal left lobe of his brain that would require immediate surgery.

After Keating received this news, Yoel Fink, professor of materials science and engineering, entered his life. Fink had previously developed a fiber-optic scalpel that enabled minimally invasive surgery on brain tumors and, as a result, was connected to top neurosurgeons around the world. Fink put Keating in touch with E. Antonio Chiocca, neurosurgeon-in-chief and chair of the Department of Neurosurgery at Brigham and Women’s Hospital in Boston, who performed the surgery to remove Keating’s tumor.

In an email to his friends and family in advance of the surgery, Keating wrote: “The world is a lovely, splendid, and fascinating place. But most of all, to me, it is beautifully curious.”

Keating proved to be anything but an average patient. “Steve confronted his disease like a true ‘MITer’: He studied it, researched it, applied his creativity and interest in the sciences and engineering to see how best to face this enemy,” explains Chiocca.

Ever the researcher, Keating craved every possible data point he could get about his diagnosis and treatment. Upon learning that accessing medical data required the approval of a medical doctor, he enrolled in an MD program while finishing up his PhD, earning him the nickname “MacGyver” among his colleagues in the Media Lab.

Keating pored over footage of his 10-hour surgery, analyzed his own MRI scans, and had his microbiome sequenced. He even 3-D printed a model of his tumor, which he gave to friends and family as a unique Christmas ornament. This model led to a partnership with colleagues at the Media Lab and Harvard University to develop a new method to 3-D print more detailed models from medical images.

Keating collected 200 gigabytes of his own medical data. Given that knowledge of his own MRI scans and medical data led to his timely diagnosis, he became a staunch advocate for open-sourcing patient data. He wanted to empower patients to gain access to their own health information.

“Steve became a voice for patients’ desires to have access and own the data for their disease,” adds Chiocca. “He did this with humility, courage, joy and affability.”

Keating’s crusade on behalf of patients everywhere led to a New York Times article about his efforts in March 2015. His story was covered widely by the media and inspired millions of people. He gave a TEDx Talk about his experiences, joined the Federal Precision Medicine Task Force, and received an invitation to the White House from President Barack Obama.

“For him it was all about awareness — he was willing to give up his privacy and share his data with the world to advance the likelihood of an eventual cure for this disease,” says Fink, who along with Oxman remained close to Keating and his family throughout the years.

In remission thanks to the efforts of Chiocca and his team of doctors, Keating continued his work with Oxman in the Mediated Matter group. “Even and especially while battling cancer, Steve remained noble in his ways,” adds Oxman. “Whether taking the initiative on group-based work or gathering the team to discuss a new publication, it was humbling to watch him help others as he battled his challenging condition.”

Keating graduated with his PhD in 2016. He moved to Silicon Valley, where he worked as a design engineer at Apple.

Last summer after a routine check-up, he was told he had glioblastoma, a malignant and incurable form of brain cancer. Even after receiving this devastating diagnosis, Keating never lost sight of the impact he could have on others. He tirelessly advocated for patient access to medical data in an effort to save the lives of others, all while undergoing multiple experimental trials and courageously fighting for his own life.

“A defining element of his character was to be gracious and giving while he was fighting the battle of his life,” says Fink.

Though Keating ultimately succumbed to the disease, others will take up his mantle in the fight for a cure and greater access to patient data. Two days before he passed away, the first ever Glioblastoma Awareness Day was observed to raise awareness and honor those who have lost their lives to this aggressive form of brain cancer.

“Steve never let the knowledge that glioblastoma remains incurable stop him from living his life to the fullest without anger and disappointment,” adds Chiocca. “As cancer scientists, we will continue to research this disease so that Steve’s fight remains our fight.”

His passion and spirit will live on with his former colleagues at MIT. “Steven’s presence was luminous and so is his legacy,” says Oxman. “My team and I are honored to continue where our very own ‘MacGyver’ left off.”

Keating is survived by his parents, John and Lynn, and his sister, Laura. In lieu of a traditional memorial service, Keating’s family will be launching a “cyber celebration” as a forum for people to honor and celebrate his inspiring, curiosity-driven life. This article will be updated with more information when it’s available.

New leadership for Bernard M. Gordon-MIT Engineering Leadership Program

Mon, 07/22/2019 - 12:20pm

Olivier de Weck, professor of aeronautics and astronautics and of engineering systems at MIT, has been named the new faculty co-director of the Bernard M. Gordon-MIT Engineering Leadership Program (GEL). He joins Reza Rahaman, who was appointed the Bernard M. Gordon-MIT Engineering Leadership Program industry co-director and senior lecturer on July 1, 2018.

“Professor de Weck has a longstanding commitment to engineering leadership, both as an educator and a researcher. I look forward to working with him and the GEL team as they continue to strengthen their outstanding undergraduate program and develop the new program for graduate students,” says Anantha Chandrakasan, dean of the MIT School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science.

A leader in systems engineering, de Weck researches how complex human-made systems such as aircraft, spacecraft, automobiles, and infrastructures are designed, manufactured, and operated. By investigating their lifecycle properties, de Weck and members of his research group have developed a range of novel techniques broadly adopted by industry to maximize the value of these systems over time.

A fellow of the International Council on Systems Engineering (INCOSE), de Weck was honored with their Outstanding Service Award in 2018 for his work as editor-in-chief of Systems Engineering. He is also an associate fellow of the American Institute of Aeronautics and Astronautics (AIAA), where he previously served as associate editor for the Journal of Spacecraft and Rockets and chair of the AIAA Space Logistics Technical Committee. De Weck is a past recipient of the Capers and Marion McDonald Award for Excellence in Mentoring and Advising from the MIT School of Engineering, and the Teaching with Digital Technology Award from the MIT Office of Open Learning.

A member of the MIT faculty since 2001, de Weck earned a BS in industrial engineering at ETH Zurich in 1993 and an MS and PhD in aerospace systems at MIT in 1999 and 2001. He previously served as associate head of the engineering systems division and as executive director of Production in the Innovation Economy (PIE) commission at MIT. He recently returned to campus after a two-year leave of absence at Airbus in Toulouse, France, where he served as senior vice president and was responsible for planning and roadmapping the group’s $1 billion research and technology portfolio.

Since the launch of GEL in 2007, de Weck has taught 16.669/6.914 (Project Engineering), a popular bootcamp-style class offered during MIT’s Independent Activities Period. Besides teaching students how to better plan and execute engineering projects, the class has helped cohorts of students build a sense of community and belonging.

De Weck succeeds Joel Schindall, co-director for GEL since 2007 and the Bernard M. Gordon Professor of the Practice in electrical engineering and computer science. “Drawing on his many years of experience and success in industry, Joel has been an exceptional leader for the GEL program,” Chandrakasan says. “He has instilled the character and the skills that will enable our students to be both the thought leaders and the ‘do leaders’ of the future.”

Reza Rahaman earned a BEng in chemical engineering at Imperial College London in 1984 and an MS in chemical engineering practice and PhD in chemical engineering at MIT in 1985 and 1989.

Rahaman’s career in industry spanned nearly three decades across consumer packaged goods, pharmaceuticals, and agricultural chemicals. Before returning to MIT, he was the vice president of research, development, and innovation at the Clorox Company, where he guided new innovation strategies and coordinated technology roadmaps for 45 percent of the company’s portfolio. Rahaman also serves as vice chair of the board of directors for Out & Equal Workplace Advocates, the largest nonprofit dedicated to LGBTQ workplace equality in the world.

“Reza has deep expertise in leading large, highly matrixed organizations and spearheading complex technical projects to produce category-changing innovation,” says Chandrakasan. “His experience in industry, as well as his technical depth and inclusive leadership style, are a wonderful asset to our students. By imparting his knowledge, and guiding our students’ attitudes and thought processes, he is helping to create the next generation of exemplary leaders.”

3Q: John Tirman on a new US human rights commission

Mon, 07/22/2019 - 12:02pm

U.S. Secretary of State Mike Pompeo has launched a Commission on Unalienable Rights at the State Department. Human rights, he says, are no longer guided by the principles established by America’s founders and are unmooring America from the principles of liberal democracy. A moral foreign policy should be grounded in the definition of unalienable rights, writes Pompeo in an opinion piece in The Wall Street Journal. A serious debate on human rights is urgent, Pompeo argues, and he compares the commission to the panel Eleanor Roosevelt convened in 1947, which resulted in the Universal Declaration of Human Rights.

John Tirman, executive director and principal research scientist at MIT's Center for International Studies, provides context behind the newly created commission and describes its potential impact on the human rights movement. Tirman is a co-founding director of the new Human Rights and Technology Program at MIT, leads the MIT Persian Gulf Initiative, and is a member of the Inter-University Committee on International Migration. He is author, most recently, of "Dream Chasers: Immigration and the American Backlash" and "The Deaths of Others: The Fate of Civilians in America’s Wars."

Q: What is the political impetus behind Secretary Pompeo’s commission?

A: Most knowledgeable observers see Pompeo’s commission as an attempt to curtail the gradual expansion of human rights that include marginalized groups — LGBT rights, most particularly, but also of indigenous peoples, immigrants, children, workers, and so on. They also see this as a pushback on reproductive rights for women, and abortion rights specifically. Of course, these rights were not envisioned by the founders. By focusing on “unalienable” rights, he signals that this effort is embedded in so-called natural law, which to many people implies religious origin and legitimacy — rights are only endowed by God. The Declaration of Independence is replete with references to natural rights conveyed by the Creator. If rights are a product of religious faith and practice, then the gatekeepers of religion will likely be the arbiters of rights.

The international dimension serves certain specific ends: an opportunity to chastise states that the Trump administration doesn’t like — such as Iran, Cuba, Venezuela — and provide a boost for some friends in troublesome areas — Israel’s 50-year occupation of Palestinian lands, for example. It will brace the U.S. aid policy of denying abortions. A very narrow definition of human rights would also enable the U.S. government to ignore, more than ever, the human-rights abuses of states with which we are friendly. If one doesn’t grant indigenous peoples’ rights to their original lands, to cite one pressing case, then what Brazil’s president, Jair Bolsonaro, is doing in the Amazon is not of our concern. Similar indifference can be applied to the Rohingya, Kurds, Shia in Saudi Arabia, and dozens of other cases.

Q: How have human rights evolved throughout history?

A: Human rights have a long history, even if they weren’t always recognized as such. In Europe, and Britain particularly, rights evolved in part to curtail the power of the monarch. Gradually, notions of tolerance and protected dissent became keystones. Documents like the Magna Carta (1215) and the English Bill of Rights (1689) established norms that not only formed Britain’s unwritten constitution, but shaped ours as well. Conservatives have always insisted that such precedents are the root and branch of legitimate political principles and practice. The appeal to “natural law” should be understood in that context.

This holds some irony in the Trump era. Those English documents insisted on parliamentary rights to keep the sovereign accountable, for example. We see today the White House violating Congress’s subpoena and oversight power, which is hardly consonant with conservative values. That is having deleterious impact on, for example, the rights of asylum seekers at the U.S.-Mexico border. Those early precedents also addressed economic rights. In the last 75 years, beginning with Franklin Roosevelt, economic rights — in the idiom of liberty, “freedom from want” — have been high on the agenda of liberals, and resisted by the right wing. Economic rights as human rights have long been embedded in the most important English precedents, however — the right to inheritance, for example, or to fair taxation. And the Universal Declaration of Human Rights, promulgated at the United Nations in 1948 and cited approvingly by Pompeo, is filled with economic rights, including several demands for equality.  

In the category of “Be careful what you wish for,” this is rich irony indeed.  

Q: What are your chief concerns?

A: At a time when human rights are under siege from authoritarian rulers around the world, pulling back and constraining the definition and applicability of human rights is especially vexing. There is, moreover, a robust discourse about rights in academia and civil society. Hundreds, if not thousands, of nonprofit human-rights organizations are at work, and it is through this work that rights are challenged and redefined. The notion that we need a panel of carefully selected conservative thinkers appointed by the government to reassess rights is almost absurd.  

The assertion that human rights only embody what is articulated in America’s founding documents — particularly the Bill of Rights — is worrisome. Our notions of political, economic, social, and cultural rights and obligations change over time, and this should be readily acknowledged by all parties. The founders did not envision or provision a standing army, yet the prospect of America without a military would be considered quite eccentric. Today, we clearly embrace the idea of women’s equality, even if the means to achieve that are contested. Privacy is nowhere mentioned in the Constitution, yet Americans consider their personal sphere to be inviolable. The human cost of war has prompted the international community — often led by the United States — to recognize rights of non-combatants during wartime, or rights of refugees, or rights of women to protection. Little of this was considered before the 20th century.

There are numerous examples of these rights in the international arena, and nearly always they evolved to give voice and standing to otherwise-powerless and, often, victimized peoples. That is the great moral thrust of the human-rights revolution and one of the most encouraging achievements in the history of international relations.

Prime Minister Mark Rutte of the Netherlands tours MIT

Sun, 07/21/2019 - 11:59pm

Prime Minister Mark Rutte of the Netherlands visited MIT on Friday, taking an innovation-oriented campus tour with a focus on computing and robotics.

Rutte’s visit was centered in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), where he watched robotics demonstrations and spoke with faculty and students about a variety of topics concerning innovation.

Rutte was also accompanied by a larger delegation of Dutch government and business leaders, who are on a four-day visit to the Boston area, examining research in AI, robots, biotechnology, and health care. The group included Bruno Bruins, the Netherlands’ minister of medical care, as well as about 40 Dutch innovators in the areas of AI and robotics. 

On the MIT tour, Rutte was principally hosted by Daniela Rus, director of CSAIL and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science. Rutte was also greeted by Frans Kaashoek, the Charles Piper Professor in MIT’s Department of Electrical Engineering and Computer Science and a CSAIL member; Kaashoek is a native of the Netherlands.

Rus told Rutte she was “delighted to welcome you to CSAIL and to MIT,” and, along with several CSAIL graduate students and researchers, guided him through a series of demonstrations highlighting different aspects of robotics research and development.

The projects Rutte observed included a muscle-controlled robotic system CSAIL researchers call “RoboRaise,” in which sensors on human muscles relay signals to a robot, showing it how much to, for instance, help lift objects. The system could have applications in construction or manufacturing.

“In the future, the machines will be always adapting to us,” Rus noted.

Rutte was also given demonstrations about inexpensive 3-D printed robots; the incorporation of new, soft materials in robots; a robotic fish; and “M-Blocks,” a set of square blocks that reconfigure themselves and could be the basis for self-assembling forms of robots.

Rutte was highly engaged in the demonstrations and asked a series of questions about them — querying about the exact mechanisms that, for instance, allow the M-Blocks to both move and stay attached to each other.

“You make it look so easy,” Rutte marveled to the robotics researchers, at one point during his CSAIL tour.

Rutte also had a sit-down conversation with CSAIL professors Peter Szolovits and David Sontag, whose work is at the junction of computing and health care research. Szolovits is, among other things, the principal investigator in the MIT-Philips alliance, a five-year research agreement formalized in 2015 between MIT and Royal Philips N.V., the giant Dutch technology firm, which has a major division in health care innovation. Philips North America moved its headquarters to Cambridge, Massachusetts, last year.

“Everything is here,” Rutte noted when talking to Sontag about the advantages of doing research in the Boston area — a reference to the ecosystem of universities, technology firms, hospitals, and capital available in the region.  

Rutte also remarked on the informal layout of the Stata Center, where CSAIL is housed, and asked Szolovits and Sontag about the “overall atmosphere” at the Institute.

“It is a wonderful atmosphere,” Szolovits replied. “But for me, the best thing is the students. If I don’t know something, I ask my students.”

Rutte has been prime minister of the Netherlands since 2010 and is currently serving his third term. He studied history at Leiden University, the oldest university in the Netherlands, and worked in a managerial role at Unilever before first being elected as a member of parliament in 2003.

Rus also presented Rutte with gifts from MIT, including a hand-crafted glass sculpture made at the MIT Glass Lab, and an MIT cap which, she noted, could be worn by Rutte when he is cycling to work. Rutte is known, in part, for bicycling to the office, and the Netherlands has the densest set of bike paths in the world.

Making it easier to program and protect the web

Sat, 07/20/2019 - 11:59pm

Behind the scenes of every web service, from a secure web browser to an entertaining app, is a programmer’s code, carefully written to ensure everything runs quickly, smoothly, and securely. For years, MIT Associate Professor Adam Chlipala has been toiling away behind the scenes, developing tools to help programmers more quickly and easily generate their code — and prove it does what it’s supposed to do.

Scanning the many publications on Chlipala’s webpage, you’ll find some commonly repeated keywords, such as “easy,” “automated,” and “proof.” Much of his work centers on designing simplified programming languages and app-making tools for programmers, systems that automatically generate optimized algorithms for specific tasks, and compilers that automatically prove that the complex math written in code is correct.

“I hope to save a lot of people a lot of time doing boring repetitive work, by automating programming work as well as decreasing the cost of building secure, reliable systems,” says Chlipala, who is a recently tenured professor of computer science, a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL), and head of the Programming Languages and Verification Group.

One of Chlipala’s recent systems automatically generates optimized — and mathematically proven — cryptographic algorithms, freeing programmers from hours upon hours of manually writing and verifying code by hand. And that system is now behind nearly all secure Google Chrome communications.

But Chlipala’s code-generating and mathematical proof systems can be used for a wide range of applications, from protecting financial transactions against fraud to ensuring autonomous vehicles operate safely. The aim, he says, is catching coding errors before they lead to real-world consequences.

“Today, we just assume that there’s going to be a constant flow of serious security problems in all major operating systems. But using formal mathematical methods, we should be able to automatically guarantee there will be far fewer surprises of that kind,” he says. “With a fixed engineering budget, we can suddenly do a lot more, without causing embarrassing or life-threatening disasters.”

A heart for system infrastructure

As he was growing up in the Lehigh Valley region of Pennsylvania, programming became “an important part of my self-identity,” Chlipala says. In the late 1980s, when Chlipala was young, his father, a researcher who ran physics experiments for AT&T Bell Laboratories, taught him some basic programming skills. He quickly became hooked.

In the late 1990s, when the family finally connected to the internet, Chlipala had access to various developer resources that helped him delve “into more serious stuff,” meaning designing larger, more complex programs. He worked on compilers — programs that translate programming language into machine-readable code — and web applications, “when apps were an avant-garde subject.”  

In fact, apps were then called “CGI scripts.” CGI is an acronym for Common Gateway Interface, which is a protocol that enables a program (or “script”) to talk to a server. In high school, Chlipala and some friends designed CGI scripts that connected them in an online forum for young programmers. “It was a means for us to start building our own system infrastructure,” he says.
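In that setup, a CGI “script” was simply a program whose standard output became the HTTP response sent back to the browser. A minimal, hypothetical sketch in Python (the forum scripts of that era would more likely have been written in Perl or C, and `cgi_response` is an invented helper, not code from the period):

```python
# Minimal CGI-style response builder: the web server runs the program and
# forwards whatever it prints on stdout to the browser. Headers come first,
# then a blank line, then the HTML body.

def cgi_response(visitor_name: str) -> str:
    """Build a complete CGI response string for a hypothetical forum page."""
    headers = "Content-Type: text/html"
    body = f"<html><body><h1>Hello, {visitor_name}!</h1></body></html>"
    return headers + "\r\n\r\n" + body

if __name__ == "__main__":
    # A real server would pass request data via environment variables
    # such as QUERY_STRING; here the response is fixed.
    print(cgi_response("programmer"))
```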

And as an avid computer gamer, the logical thing for a teenaged Chlipala to do was design his own games. His first attempts were text-based adventures coded in the BASIC programming language. Later, in the C programming language, he designed a “Street Fighter”-like game, called Brimstone, and some simulated combat tabletop games.

It was exciting stuff for a high schooler. “But my heart was always in systems infrastructure, like code compilers and building help tools for old Windows operating systems,” Chlipala says.

From then on, Chlipala worked far in the background of web services, building the programming foundations for developers. “I’m several levels of abstraction removed from the type of computer programming that’s of any interest to any end-user,” he says, laughing.

Impact in the real world

After high school, in 2000, Chlipala enrolled at Carnegie Mellon University, where he majored in computer science and got involved in a programming language compiler research group. In 2007, he earned his PhD in computer science from the University of California at Berkeley, where his work focused on developing methods that can prove the mathematical correctness of algorithms.

After completing a postdoc at Harvard University, Chlipala came to MIT in 2011 to begin his teaching career. What drew Chlipala to MIT, in part, was an opportunity “to plug in a gap, where no one was doing my kind of proofs of computer systems’ correctness,” he says. “I enjoyed building that subject here from the ground up.”

Testing the source code that powers web services and computer systems today is computationally intensive. It mostly relies on running the code through tons of simulations, and correcting any caught bugs, until the code produces a desired output. But it’s nearly impossible to run the code through every possible scenario to prove it’s completely without error.

Chlipala’s research group instead focuses on eliminating the need for those simulations, by writing mathematical theorems that capture exactly how a given web service or computer system is supposed to behave. From that, they build algorithms that check, mostly while the code is being compiled, that the source code operates according to that theorem, meaning it performs exactly as it’s supposed to.
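The gap between the two approaches is easiest to see on a toy example. The sketch below is purely illustrative (it is not Chlipala’s tooling, and both functions are invented): a sampled test suite passes even though the “optimized” routine is wrong on one input, which is exactly the kind of surprise a machine-checked proof over all inputs rules out.

```python
# Simulation-style testing samples the input space; a formal proof covers
# all of it. This buggy "optimized" routine passes thousands of test cases
# because the tests never happen to hit the one bad input.

def reference_double(x: int) -> int:
    """Obviously-correct specification: double the input."""
    return x + x

def optimized_double(x: int) -> int:
    """'Optimized' implementation with a deliberately planted bug."""
    if x == 1_000_003:
        return 0  # the planted bug: wrong on exactly one input
    return x << 1  # bit shift, equivalent to doubling

# Testing 10,000 inputs finds nothing wrong...
for x in range(10_000):
    assert optimized_double(x) == reference_double(x)
# ...yet the implementation does not match the specification everywhere.
```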

Though such methods can be applied broadly, Chlipala likes to run his research group like a startup, encouraging students to target specific, practical applications for their research projects. In fact, two of his former students recently joined startups doing work connected to their thesis research.

One student is working on developing a platform that lets people rapidly design, fabricate, and test their own computer chips. Another is designing mathematically proven systems to ensure the source code powering driverless car systems doesn’t contain errors that will lead to mistakes on the road. “In driverless cars, a bug could literally cause a crash, not just the ‘blue-screen death’ type of a crash,” Chlipala says.

Now on sabbatical from this summer until the end of the year, Chlipala is splitting his time between MIT research projects and launching his own startup based around tools that help people without programming experience create advanced apps. One such tool, which lets nonexperts build scheduling apps, has already found users among faculty and staff in his own department. About the new company, he says: “I’ve been into entrepreneurship over the last few years. But now that I have tenure, it’s a good time to get started.”

Professor Patrick Winston, former director of MIT’s Artificial Intelligence Laboratory, dies at 76

Fri, 07/19/2019 - 5:06pm

Patrick Winston, a beloved professor and computer scientist at MIT, died on July 19 at Massachusetts General Hospital in Boston. He was 76.

A professor at MIT for almost 50 years, Winston was director of MIT’s Artificial Intelligence Laboratory from 1972 to 1997 before it merged with the Laboratory for Computer Science to become MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

A devoted teacher and cherished colleague, Winston led CSAIL’s Genesis Group, which focused on developing AI systems that have human-like intelligence, including the ability to tell, perceive, and comprehend stories. He believed that such work could help illuminate aspects of human intelligence that scientists don’t yet understand.

“My principal interest is in figuring out what’s going on inside our heads, and I’m convinced that one of the defining features of human intelligence is that we can understand stories,” said Winston, the Ford Professor of Artificial Intelligence and Computer Science, in a 2011 interview for CSAIL. “Believing as I do that stories are important, it was natural for me to try to build systems that understand stories, and that shed light on what the story-understanding process is all about.”

He was renowned for his accessible and informative lectures, and gave a hugely popular talk every year during the Independent Activities Period called “How to Speak.”

“As a speaker he always had his audience in the palm of his hand,” says MIT Professor Peter Szolovits. “He put a tremendous amount of work into his lectures, and yet managed to make them feel loose and spontaneous. He wasn’t flashy, but he was compelling and direct.”

Winston’s dedication to teaching earned him many accolades over the years, including the Baker Award, the Eta Kappa Nu Teaching Award, and the Graduate Student Council Teaching Award.

“Patrick’s humanity and his commitment to the highest principles made him the soul of EECS,” MIT President L. Rafael Reif wrote in a letter to the MIT community. “I called on him often for advice and feedback, and he always responded with kindness, candor, wisdom and integrity. I will be forever grateful for his counsel, his objectivity, and his tremendous inspiration and dedication to our students.”
Teaching computers to think

Born Feb. 5, 1943 in Peoria, Illinois, Winston was always exceptionally curious about science, technology and how to use such tools to explore what it means to be human. He was an MIT-lifer starting in 1961, earning his bachelor’s, master’s and doctoral degrees from the Institute before joining the faculty of the Department of Electrical Engineering and Computer Science in 1970.

His thesis work with Marvin Minsky centered on the difficulty of learning, setting off a trajectory of work where he put a playful, yet laser-sharp focus on fine-tuning AI systems to better understand stories.

His Genesis project aimed to faithfully model computers after human intelligence in order to fully grasp the inner workings of our own motivations, rationality, and perception. Using MIT research scientist Boris Katz’s START natural language processing system and a vision system developed by former MIT PhD student Sajit Rao, Genesis can digest short, simple chunks of text, then spit out reports about how it interpreted connections between events.

While the system has processed many works, Winston chose “Macbeth” as a primary text because the tragedy offers an opportunity to take big human themes, such as greed and revenge, and map out their components.

“[Shakespeare] was pretty good at his portrayal of ‘the human condition,’ as my friends in the humanities would say,” Winston told The Boston Globe. “So there’s all kinds of stuff in there about what’s typical when we humans wander through the world.”

His deep fascination with humanity, human intelligence, and how we communicate information spilled over into what he often described as his favorite academic activity: teaching.

“He was a superb educator who introduced the field to generations of students,” says MIT Professor and longtime colleague Randall Davis. “His lectures had an uncanny ability to move in minutes from the details of an algorithm to the larger issues it illustrated, to yet larger lessons about how to be a scientist and a human being.”

A past president of the Association for the Advancement of Artificial Intelligence (AAAI), Winston also wrote and edited numerous books, including a seminal textbook on AI that’s still used in classrooms around the world. Outside of the lab he also co-founded Ascent Technology, which produces scheduling and workforce management applications for major airports.

He is survived by his wife Karen Prendergast and his daughter Sarah.

Genetic study takes research on sex differences to new heights

Thu, 07/18/2019 - 2:00pm

Throughout the animal kingdom, males and females frequently exhibit sexual dimorphism: differences in characteristic traits that often make it easy to tell them apart. In mammals, one of the most common sex-biased traits is size, with males typically being larger than females. This is true in humans: Men are, on average, taller than women. However, biological differences between males and females aren’t limited to physical traits like height. They’re also common in disease. For example, women are much more likely to develop autoimmune diseases, while men are more likely to develop cardiovascular diseases.

In spite of the widespread nature of these sex biases, and their significant implications for medical research and treatment, little is known about the underlying biology that causes sex differences in characteristic traits or disease. In order to address this gap in understanding, Whitehead Institute Director David Page has transformed the focus of his lab in recent years from studying the X and Y sex chromosomes to working to understand the broader biology of sex differences throughout the body. In a paper published in Science, Page, a professor of biology at MIT and a Howard Hughes Medical Institute investigator; Sahin Naqvi, first author and former MIT graduate student (now a postdoc at Stanford University); and colleagues present the results of a wide-ranging investigation into sex biases in gene expression, revealing differences in the levels at which particular genes are expressed in males versus females.

The researchers’ findings span 12 tissue types in five species of mammals, including humans, and led to the discovery that a combination of sex-biased genes accounts for approximately 12 percent of the average height difference between men and women. This finding demonstrates a functional role for sex-biased gene expression in contributing to sex differences. The researchers also found that the majority of sex biases in gene expression are not shared between mammalian species, suggesting that — in some cases — sex-biased gene expression that can contribute to disease may differ between humans and the animals used as models in medical research.

Having the same gene expressed at different levels in each sex is one way to perpetuate sex differences in traits in spite of the genetic similarity of males and females within a species — since with the exception of the 46th chromosome (the Y in males or the second X in females), the sexes share the same pool of genes. For example, if a tall parent passes on a gene associated with an increase in height to both a son and a daughter, but the gene has male-biased expression, then that gene will be more highly expressed in the son, and so may contribute more height to the son than the daughter.

The researchers searched for sex-biased genes in tissues across the body in humans, macaques, mice, rats, and dogs, and they found hundreds of examples in every tissue. They used height for their first demonstration of the contribution of sex-biased gene expression to sex differences in traits because height is an easy-to-measure and heavily studied trait in quantitative genetics.

“Discovering contributions of sex-biased gene expression to height is exciting because identifying the determinants of height is a classic, century-old problem, and yet by looking at sex differences in this new way we were able to provide new insights,” Page says. “My hope is that we and other researchers can repeat this model to similarly gain new insights into diseases that show sex bias.”

Because height is so well studied, the researchers had access to public data on the identity of hundreds of genes that affect height. Naqvi decided to see how many of those height genes appeared in the researchers’ new dataset of sex-biased genes, and whether the genes’ sex biases corresponded to the expected effects on height. He found that sex-biased gene expression contributed approximately 1.6 centimeters to the average height difference between men and women, or 12 percent of the overall observed difference.
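Those two figures are consistent with each other, as a quick back-of-the-envelope check (ours, not a calculation from the paper) shows:

```python
# Sanity check of the reported numbers: a 1.6 cm contribution that makes up
# 12 percent of the sex difference implies an average height gap of ~13 cm,
# in line with typical population estimates.
contribution_cm = 1.6
share_of_gap = 0.12
implied_total_gap_cm = contribution_cm / share_of_gap
print(round(implied_total_gap_cm, 1))  # -> 13.3
```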

The scope of the researchers’ findings goes beyond height, however. Their database contains thousands of sex-biased genes. Slightly less than a quarter of the sex-biased genes that they catalogued appear to have evolved that sex bias in an early mammalian ancestor, and to have maintained that sex bias today in at least four of the five species studied. The majority of the genes appear to have evolved their sex biases more recently, and are specific to either one species or a certain lineage, such as rodents or primates.

Whether or not a sex-biased gene is shared across species is a particularly important consideration for medical and pharmaceutical research using animal models. For example, previous research identified certain genetic variants that increase the risk of Type 2 diabetes specifically in women; however, the same variants increase the risk of Type 2 diabetes indiscriminately in male and female mice. Therefore, mice would not be a good model to study the genetic basis of this sex difference in humans. Even when the animal appears to have the same sex difference in disease as humans, the specific sex-biased genes involved might be different. Based on their finding that most sex bias is not shared between species, Page and colleagues urge researchers to use caution when picking an animal model to study sex differences at the level of gene expression.

“We’re not saying to avoid animal models in sex-differences research, only not to take for granted that the sex-biased gene expression behind a trait or disease observed in an animal will be the same as that in humans. Now that researchers have species and tissue-specific data available to them, we hope they will use it to inform their interpretation of results from animal models,” Naqvi says.

The researchers have also begun to explore what exactly causes sex-biased expression of genes not found on the sex chromosomes. Naqvi discovered a mechanism by which sex-biased expression may be enabled: through sex-biased transcription factors, proteins that help to regulate gene expression. Transcription factors bind to specific DNA sequences called motifs, and he found that certain sex-biased genes had the motif for a sex-biased transcription factor in their promoter regions, the sections of DNA that turn on gene expression. This means that, for example, a male-biased transcription factor was selectively binding to the promoter region for, and so increasing the expression of, male-biased genes — and likewise for female-biased transcription factors and female-biased genes. The question of what regulates the transcription factors remains for further study — but all sex differences are ultimately controlled by either the sex chromosomes or sex hormones.
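At its core, that motif analysis reduces to a sequence search, sketched below; the motif and promoter sequences here are invented for illustration, not taken from the study:

```python
# Toy version of the promoter-motif scan described above: genes whose
# promoter contains the binding motif of a sex-biased transcription factor
# are candidates for sex-biased regulation. All sequences are made up.

MALE_BIASED_TF_MOTIF = "TGACGTCA"  # hypothetical binding motif

promoters = {
    "gene_A": "AATTGACGTCAGGC",  # contains the motif -> candidate
    "gene_B": "CCGGTTAACCGGTA",  # no motif -> not a candidate
}

candidates = [gene for gene, seq in promoters.items()
              if MALE_BIASED_TF_MOTIF in seq]
print(candidates)  # -> ['gene_A']
```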

The researchers see the collective findings of this paper as a foundation for future sex-differences research.

“We’re beginning to build the infrastructure for a systematic understanding of sex biases throughout the body,” Page says. “We hope these datasets are used for further research, and we hope this work gives people a greater appreciation of the need for, and value of, research into the molecular differences in male and female biology.”

This work was supported by Biogen, Whitehead Institute, National Institutes of Health, Howard Hughes Medical Institute, and generous gifts from Brit and Alexander d’Arbeloff and Arthur W. and Carol Tobin Brill.

Portraits of mentoring excellence

Thu, 07/18/2019 - 11:50am

What makes a great faculty mentor? Appreciative graduate students from across the Institute have thoughts — lots of them.

In letters of nomination to the Committed to Caring (C2C) program over the past five years, students have lauded faculty who validate them, who encourage work-life balance, and who foster an inclusive work environment, among other caring actions. Professors Eytan Modiano, Erin Kelly, and Ju Li especially excel at advocating for students, sharing behind-the-scenes information, and demonstrating empathy.

The pool of C2C honorees is still expanding, along with a growing catalog of supportive actions known as Mentoring Guideposts. A new selection round has just begun, and the C2C program invites all graduate students to nominate professors for their outstanding mentorship by July 26.

Eytan Modiano: listening and advocating

Eytan Modiano is professor of aeronautics and astronautics and the associate director of the Laboratory for Information and Decision Systems (LIDS). His work addresses communication networks and protocols with application to satellite, wireless, and optical networks. The primary goal of his research is the design of network architectures that are cost-effective, scalable, and robust. His research group crosses disciplinary boundaries by combining techniques from network optimization; queueing theory; graph theory; network protocols and algorithms; machine learning; and physical layer communications.

When students reach out to Modiano for advice, he makes time in his schedule to meet with them, usually the same day or the next. In doing so, students say that Modiano offers invaluable support and shows students that he prioritizes them.

Modiano provides his students with channels to express their difficulties (a Mentoring Guidepost identified by the C2C program). For example, he allots unstructured time during individual and group meetings for student feedback. “These weekly meetings are mainly focused on research,” Modiano says, “but I always make sure to leave time at the end to talk about anything else that is on a student's mind, such as concerns about their career plans, coursework, or anything else.”

He also reaches out to student groups about how the department and lab could better serve them. As associate director of LIDS, Modiano has responded to such feedback in a number of ways, including working alongside the LIDS Social Committee to organize graduate student events. He has advocated for funding of MIT Graduate Women in Aerospace Engineering, and was a key proponent of the Exploring Aerospace Day, an event the group hosted for interested high school students.

Modiano does not think in binary terms about success and failure: “No single event, or even a series of events, is likely to define a career.” Rather, a career should be seen as a path “with ups and downs and whose trajectory we try to shape.”

Modiano advises, “If you persist, you are likely to find a path that you are happy with, and meet your goals.”

Erin Kelly: sustainably moving forward

In her students’ estimation, Erin Kelly, the Sloan Distinguished Professor of Work and Organization Studies, rises to the level of exceptional mentorship by channeling her expertise in work and organization studies to the benefit of her advisees.

Kelly investigates the implications of workplace policies and management strategies for workers, firms, and families; previous research has examined scheduling and work-family supports, family leaves, harassment policies, and diversity initiatives. As part of the Work, Family, and Health Network, she has evaluated innovative approaches to work redesign with group-randomized trials in professional/technical and health care workforces. Her book with Phyllis Moen, "Overload: How Good Jobs Went Bad and What to Do About It," will be published by Princeton University Press in early 2020. 

In Kelly’s words, she tries to “promote working in ways that feel sane and sustainable.” She does not count how many hours her students spend on projects or pay attention to where they work or how quickly they respond to emails. Kelly says that she knows her students are committed to this effort long-term, and that everyone works differently.

One student nominator noted that Kelly was extremely supportive of her decision to have a child during graduate school, offering her advice about how to balance work and home as well as how to transition back into school after maternity leave. The nominator notes, “Erin does not view the baby as an impediment to my professional career.”

In addition to providing advice on course selection and dissertation planning, Kelly offers her students “informal” advising (a Mentoring Guidepost) that goes beyond the usual academic parameters. Kelly “explained to me the importance of networking in finding an academic job,” another student says. “I’ve appreciated this informal mentoring, particularly because I am a woman trying to enter a male-dominated field; understanding how to succeed professionally is important, but is not always obvious.”

Ju Li: a proven mentor and friend

Ju Li is the Battelle Energy Alliance Professor of Nuclear Science and Engineering and professor of materials science and engineering at MIT. Li’s research focuses on mechanical properties of materials, and energy storage and conversion. His lab also studies the effects of radiation and aggressive environments on microstructure and materials properties.

Li shows empathy for students’ experiences (a Mentoring Guidepost identified by the C2C program). One student remarked that when they were not confident in their own abilities, Li was “extremely patient” and showed faith in their work. Li “lifted me up with his encouraging words and shared his own experiences and even struggles.”

He concerns himself both with training academic researchers and with preparing students for life after MIT, whether their paths lead to academic, industry, governmental, or entrepreneurial endeavors. Li’s attention to his students and their aims does not go unnoticed. One C2C nominator says that former group members often come back to visit and to seek advice from Li whenever possible, “and nobody regrets being a member of our group.”

It is clear from their letters of nomination that Li’s students deeply admire his character and hold him up as a lifelong role model. In addition to his caring actions, they cite his humility and his treatment of students as “equals and true friends.”

Just as Li’s students admire him, Li was inspired by his own graduate mentor, Sydney Yip, professor emeritus of nuclear science and engineering, and materials science and engineering at MIT. Li says that Yip taught everyone who encountered him to become better researchers and better people. In graduate school, Li says, “I benefited so much by watching how Sid managed his group, and how he interacted with the world … I felt lucky every day.”

More on Committed to Caring (C2C)

The Committed to Caring (C2C) program, an initiative of the Office of Graduate Education, honors faculty members from across the Institute for their outstanding support of graduate students. By sharing the stories of great mentors, like professors Modiano, Kelly, and Li, the C2C Program hopes to encourage exceptional mentorship at MIT.

Selection criteria for the award include the scope and reach of advisor impact on the experience of graduate students, excellence in scholarship, and demonstrated commitment to diversity and inclusion.

Nominations for the next round of honorees must be submitted by July 26. Selections will be announced in late September.

Behind the scenes of the Apollo mission at MIT

Thu, 07/18/2019 - 9:23am

Fifty years ago this week, humanity made its first expedition to another world, when Apollo 11 touched down on the moon and two astronauts walked on its surface. That moment changed the world in ways that still reverberate today.

MIT’s deep and varied connections to that epochal event — many of which have been described on MIT News — began years before the actual landing, when the MIT Instrumentation Laboratory (now Draper Labs) signed the very first contract to be awarded for the Apollo program after its announcement by President John F. Kennedy in 1961. The Institute’s involvement continued throughout the program — and is still ongoing today.

MIT’s role in creating the navigation and guidance system that got the mission to the moon and back has been widely recognized in books, movies, and television series. But many other aspects of the Institute’s involvement in the Apollo program and its legacy, including advances in mechanical and computational engineering, simulation technology, biomedical studies, and the geophysics of planet formation, have remained less celebrated.

Amid the growing chorus of recollections in various media that have been appearing around this 50th anniversary, here is a small collection of bits and pieces about some of the unsung heroes and lesser-known facts from the Apollo program and MIT’s central role in it.

A new age in electronics

The computer system and its software that controlled the spacecraft — called the Apollo Guidance Computer and designed by the MIT Instrumentation Lab team under the leadership of Eldon Hall — were remarkable achievements that helped push technology forward in many ways.

The AGC’s programs were written in one of the first-ever compiler languages, called MAC, which was developed by Instrumentation Lab engineer Hal Laning. The 1-cubic-foot computer itself was the first significant use of silicon integrated circuit chips, and it greatly accelerated the development of the microchip technology that has gone on to change virtually every consumer product.

In an age when most computers took up entire climate-controlled rooms, the compact AGC was uniquely small and lightweight. But most of its “software” was actually hard-wired: The programs were woven, with tiny donut-shaped metal “cores” strung like beads along a set of wires, a given wire passing outside a donut to represent a 0, or through the hole to represent a 1. These so-called rope memories were made in the Boston suburbs at Raytheon, mostly by women who had been hired because they had experience in the weaving industry. Once made, there was no way to change individual bits within the rope, so any change to the software required weaving a whole new rope, and last-minute changes were impossible.
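The read-out scheme can be mimicked in a few lines of code; this is a conceptual toy, not the AGC’s actual word or parity layout:

```python
# Toy model of core-rope memory: for each donut-shaped core, the sense wire
# either threads through the hole (read as 1) or passes outside (read as 0).
# An immutable tuple stands in for the physical rope: once "woven," single
# bits cannot be changed -- only a whole new rope can be made.

def weave(bits):
    """Weave a rope: record, per core, whether the wire goes through it."""
    return tuple(bool(b) for b in bits)

def read_word(rope):
    """Read the word back: through the hole = 1, outside the donut = 0."""
    return [1 if through_hole else 0 for through_hole in rope]

rope = weave([1, 0, 1, 1, 0])
print(read_word(rope))  # -> [1, 0, 1, 1, 0]
```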

As David Mindell, the Frances and David Dibner Professor of the History of Engineering and Manufacturing, points out in his book “Digital Apollo,” that system represented the first time a computer of any kind had been used to control, in real-time, many functions of a vehicle carrying human beings — a trend that continues to accelerate as the world moves toward self-driving vehicles. Right after the Apollo successes, the AGC was directly adapted to an F-8 fighter jet, to create the first-ever fly-by-wire system for aircraft, where the plane’s control surfaces are moved via a computer rather than direct cables and hydraulic systems. This approach is now widespread in the aerospace industry, says John Tylko, who teaches MIT’s class 16.895J (Engineering Apollo: The Moon Project as a Complex System) every other year.

As sophisticated as the computer was for its time, computer users today would barely recognize it as such. Its keyboard and display screen looked more like those on a microwave oven than a computer: a simple numeric keypad and a few lines of five-digit luminous displays. Even the big mainframe computer used to test the code as it was being developed had no keyboard or monitor that the programmers ever saw. Programmers wrote their code by hand, then typed it onto punch cards — one card per line — and handed the deck of cards to a computer operator. The next day, the cards would be returned with a printout of the program’s output. And in this time long before email, communications among the team often relied on handwritten paper notes.

Priceless rocks

MIT’s involvement in the geophysical side of the Apollo program also extends back to the early planning stages — and continues today. For example, Professor Nafi Toksöz, an expert in seismology, helped to develop a seismic monitoring station that the astronauts placed on the moon, where it helped lead to a greater understanding of the moon’s structure and formation. “It was the hardest work I have ever done, but definitely the most exciting,” he has said.

Toksöz says that the data from the Apollo seismometers “changed our understanding of the moon completely.” The seismic waves, which on Earth continue for a few minutes, lasted for two hours, which turned out to be the result of the moon’s extreme lack of water. “That was something we never expected, and had never seen,” he recalls.

The first seismometer was placed on the moon’s surface very shortly after the astronauts landed, and seismologists including Toksöz started seeing the data right away — including every footstep the astronauts took on the surface. Even when the astronauts returned to the lander to sleep before the morning takeoff, the team could see that Buzz Aldrin ScD ’63 and Neil Armstrong were having a sleepless night, with every toss and turn dutifully recorded on the seismic traces.

MIT Professor Gene Simmons was among the first group of scientists to gain access to the lunar samples as soon as NASA released them from quarantine, and he and others in what is now the Department of Earth, Planetary and Atmospheric Sciences (EAPS) have continued to work on these samples ever since. As part of a conference on campus, he exhibited some samples of lunar rock and soil in their first close-up display to the public, where some people may even have had a chance to touch the samples.

Others in EAPS have also been studying those Apollo samples almost from the beginning. Timothy Grove, the Robert R. Shrock Professor of Earth and Planetary Sciences, started studying the Apollo samples in 1971 as a graduate student at Harvard University, and has been doing research on them ever since. Grove says that these samples have led to major new understandings of planetary formation processes that have helped us understand the Earth and other planets better as well.

Among other findings, the rocks showed that ratios of the isotopes of oxygen and other elements in the moon rocks were identical to those in terrestrial rocks but completely different from those of any meteorites, proving that the Earth and the moon had a common origin and leading to the hypothesis that the moon was created through a giant impact from a planet-sized body. The rocks also showed that the entire surface of the moon had likely been molten at one time. The idea that a planetary body could be covered by an ocean of magma was a major surprise to geologists, Grove says.

Many puzzles remain to this day, and the analysis of the rock and soil samples goes on. “There’s still a lot of exciting stuff” being found in these samples, Grove says.

Sorting out the facts

In the spate of publicity and new books, articles, and programs about Apollo, inevitably some of the facts — some trivial, some substantive — have been scrambled along the way. “There are some myths being advanced,” says Tylko, some of which he addresses in his “Engineering Apollo” class. “People tend to oversimplify” many aspects of the mission, he says.

For example, many accounts have described the sequence of alarms that came from the guidance computer during the last four minutes of the descent, forcing mission controllers to make the daring decision to go ahead despite the unknown nature of the problem. But Don Eyles, one of the Instrumentation Lab’s programmers who had written the landing software for the AGC, says that he can’t think of a single account he’s read about that sequence of events that gets it entirely right. According to Eyles, many have claimed the problem was caused by the fact that the rendezvous radar switch had been left on, so that its data were overloading the computer and causing it to reboot.

But Eyles says the actual cause was a much more complex sequence of events, including a crucial mismatch between two circuits that would occur only in rare circumstances, and thus would have been hard to detect in testing, and what was probably a last-minute decision to put a vital switch in a position that allowed it to happen. Eyles has described these details in a memoir about the Apollo years and in a technical paper available online, but he says they are difficult to summarize simply. He thinks the author Norman Mailer may have come closest, capturing the essence of it in his book “Of a Fire on the Moon,” where he describes the issue as caused by a “sneak circuit” and an “undetectable” error in the onboard checklist.

Some accounts have described the AGC as a very limited and primitive computer compared to today’s average smartphone, and Tylko acknowledges that it had a tiny fraction of the power of today’s smart devices — but, he says, “that doesn’t mean they were unsophisticated.” While the AGC only had about 36 kilobytes of read-only memory and 2 kilobytes of random-access memory, “it was exceptionally sophisticated and made the best use of the resources available at the time,” he says.

In some ways it was even ahead of its time, Tylko says. For example, the compiler language developed by Laning along with Ramon Alonso at the Instrumentation Lab used an architecture that he says was relatively intuitive and easy to interact with. Based on a system of “verbs” (actions to be performed) and “nouns” (data to be worked on), “it could probably have made its way into the architecture of PCs,” he says. “It’s an elegant interface based on the way humans think.”
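The verb-noun scheme can be illustrated with a toy sketch (hypothetical Python, not actual AGC code; the verb and noun codes, data items, and values below are invented for illustration): verbs name actions to perform, nouns name the data they act on, and a two-number entry dispatches one to the other.

```python
# Toy illustration of a DSKY-style verb-noun interface (hypothetical;
# codes and data are invented, not the real AGC assignments).

state = {"mission_elapsed_time": "102:45:40", "altitude_ft": 7500}

# Nouns map a two-digit code to a named data item.
NOUNS = {36: "mission_elapsed_time", 60: "altitude_ft"}

# Verbs map a code to an action performed on a noun.
def display(noun_key):          # a "monitor/display" style verb
    return f"{noun_key} = {state[noun_key]}"

def load(noun_key, value):      # a "load new data" style verb
    state[noun_key] = value
    return f"{noun_key} set to {value}"

VERBS = {16: display, 25: load}

def execute(verb, noun, *args):
    """Dispatch a verb-noun pair, as a keypad entry would."""
    return VERBS[verb](NOUNS[noun], *args)

print(execute(16, 36))        # display the mission elapsed time
print(execute(25, 60, 5000))  # load a new altitude value
```

The sketch mimics only the dispatch structure, not the hardware or the real code assignments, but it shows why the interface felt intuitive: a small, regular vocabulary of actions applied to a small, regular vocabulary of data.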

Some accounts go so far as to claim that the computer failed during the descent and astronaut Neil Armstrong had to take over the controls and land manually, but in fact partial manual control was always part of the plan, and the computer remained in ultimate control throughout the mission. None of the onboard computers ever malfunctioned through the entire Apollo program, according to astronaut David Scott SM ’62, who used the computer on two Apollo missions: “We never had a failure, and I think that is a remarkable achievement.”

Behind the scenes

At the peak of the program, a total of about 1,700 people at MIT’s Instrumentation Lab were working on the Apollo program’s software and hardware, according to Draper Laboratory, the Instrumentation Lab’s successor, which spun off from MIT in 1973. A few of those, such as the near-legendary “Doc” Draper himself — Charles Stark Draper ’26, SM ’28, ScD ’38, former head of the Department of Aeronautics and Astronautics (AeroAstro) — have become widely known for their roles in the mission, but most did their work in near-anonymity, and many went on to entirely different kinds of work after the Apollo program’s end.

Margaret Hamilton, who directed the Instrumentation Lab’s Software Engineering Division, was little known outside of the program itself until an iconic photo of her next to the original stacks of AGC code began making the rounds on social media in the mid-2010s. In 2016, when she was awarded the Presidential Medal of Freedom by President Barack Obama, MIT Professor Jaime Peraire, then head of AeroAstro, said of Hamilton that “She was a true software engineering pioneer, and it’s not hyperbole to say that she, and the Instrumentation Lab’s Software Engineering Division that she led, put us on the moon.” After Apollo, Hamilton went on to found a software services company, which she still leads.

Many others who played major roles in that software and hardware development have also had their roles little recognized over the years. For example, Hal Laning ’40, PhD ’47, who developed the programming language for the AGC, also devised its executive operating system, which employed what was at the time a new way of handling multiple programs at once, by assigning each one a priority level so that the most important tasks, such as controlling the lunar module’s thrusters, would always be taken care of. “Hal was the most brilliant person we ever had the chance to work with,” Instrumentation Lab engineer Dan Lickly told MIT Technology Review. And that priority-driven operating system proved crucial in allowing the Apollo 11 landing to proceed safely in spite of the 1202 alarms going off during the lunar descent.
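The idea behind Laning’s priority-driven executive can be sketched in miniature (hypothetical Python, not the actual AGC implementation; the job names, priorities, and job limit are invented): the highest-priority ready job always runs first, and when the queue overflows, the lowest-priority work is shed, which is roughly why a vital task like thruster control could keep running through the 1202 overloads.

```python
import heapq

class Executive:
    """Toy priority-driven scheduler in the spirit of the AGC executive
    (illustrative only). Jobs are kept in a heap keyed on negated
    priority, so the highest-priority job is always popped first."""

    def __init__(self, max_jobs=3):   # the real AGC had a small fixed job limit
        self.max_jobs = max_jobs
        self.queue = []

    def schedule(self, priority, name):
        heapq.heappush(self.queue, (-priority, name))
        # Overload: shed the lowest-priority job(s), keep the vital ones.
        while len(self.queue) > self.max_jobs:
            self.queue.remove(max(self.queue))  # max of (-p, name) = lowest p
            heapq.heapify(self.queue)

    def run_next(self):
        _, name = heapq.heappop(self.queue)
        return name

exec_ = Executive(max_jobs=3)
exec_.schedule(30, "thruster_control")   # vital: highest priority
exec_.schedule(10, "display_update")
exec_.schedule(5, "radar_processing")
exec_.schedule(1, "telemetry")           # overload: this one is shed

print(exec_.run_next())   # thruster_control always runs first
```

However overloaded the toy queue gets, the highest-priority job is never the one dropped, which captures the design principle, if none of the engineering detail, of Laning’s executive.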

While the majority of the team working on the project was male, software engineer Dana Densmore recalls that compared to the heavily male-dominated workforce at NASA at the time, the MIT lab was relatively welcoming to women. Densmore, who was a control supervisor for the lunar landing software, told The Wall Street Journal that “NASA had a few women, and they kept them hidden. At the lab it was very different,” and there were opportunities for women there to take on significant roles in the project.

Hamilton recalls the atmosphere at the Instrumentation Lab in those days as one of real dedication and meritocracy. As she told MIT News in 2009, “Coming up with solutions and new ideas was an adventure. Dedication and commitment were a given. Mutual respect was across the board. Because software was a mystery, a black box, upper management gave us total freedom and trust. We had to find a way and we did. Looking back, we were the luckiest people in the world; there was no choice but to be pioneers.”

Unmasking mutant cancer cells

Wed, 07/17/2019 - 9:35am

As cancer cells progress, they accumulate hundreds and even thousands of genetic and epigenetic changes, resulting in protein expression profiles that are radically different from those of healthy cells. But despite their heavily mutated proteome, cancer cells can evade recognition and attack by the immune system.

Immunotherapies, particularly checkpoint inhibitors that reinvigorate exhausted T cells, have revolutionized the treatment of certain forms of cancer. These breakthrough therapies have resulted in unprecedented response rates for some patients. Unfortunately, most cancers fail to respond to immunotherapies and new strategies are therefore needed to realize their full potential.

A team of cancer biologists including members of the laboratories of David H. Koch Professor of Biology Tyler Jacks, director of the Koch Institute for Integrative Cancer Research at MIT, and fellow Koch Institute member Forest White, the Ned C. and Janet Bemis Rice Professor and member of the MIT Center for Precision Cancer Medicine, took a complementary approach to boosting the immune system.

Although cancer cells are rife with mutant proteins, few of those proteins appear on a cell’s surface, where they can be recognized by immune cells. The researchers repurposed a well-studied class of anti-cancer drugs, heat shock protein 90 (HSP90) inhibitors, to make cancer cells easier to recognize by revealing their mutant proteomes.

Many HSP90 inhibitors have been studied extensively for the past several decades as potential cancer treatments. HSP90 protects the folded structure of a number of proteins when cells undergo stress, and in cancer cells plays an important role in stabilizing protein structure undermined by pervasive mutations. However, despite promising preclinical evidence, HSP90 inhibitors have produced discouraging outcomes in clinical trials, and none have achieved FDA approval.

In a study appearing in Clinical Cancer Research, the researchers identified a potential reason behind those disappointing results. HSP90 inhibitors have only been clinically tested at bolus doses — intermittent, large doses — that often result in unwanted side effects in patients. 

RNA profiling of human clinical samples and cultured cancer cell lines revealed that this bolus-dosing schedule results in the profound suppression of immune activity as well as the activation of heat shock factor 1 protein (HSF1). Not only does HSF1 activate the cell’s heat shock response, which counteracts the effect of the HSP90 inhibitor, but it is known to be a powerful enabler of cancer cell malignancy.

In striking contrast, the researchers used cancer mouse models with intact immune systems to show that sustained, low-level dosing of HSP90 inhibitors avoids triggering both the heat shock response and the immunosuppression associated with high doses.

Using a method devised by the White lab that combines mass spectrometry-based proteomics and computational modeling, the researchers discovered that the new dosing regimen increased the number and diversity of peptides (protein fragments) on the cell surface. These peptides, which the team found to be released by HSP90 during sustained low-level inhibition, were then free to be taken up by the cell’s antigen-presenting machinery and used to flag patrolling immune cells.

“These results connect a fundamental aspect of cell biology — protein folding — to anti-tumor immune responses,” says lead author Alex Jaeger, a postdoctoral fellow in the Jacks lab and a former member of the laboratory of the late MIT biologist and Professor Susan Lindquist, whose work inspired the study’s HSP90 dosing schedule. “Hopefully, our findings can reinvigorate interest in HSP90 inhibition as a complementary approach for immunotherapy.”

Using the new dosing regimen, the researchers were able to clear tumors in mouse models at drug concentrations that are 25-50 times lower than those used in clinical trials, significantly reducing the risk for toxic side effects in patients. Importantly, because several forms of HSP90 inhibitors have already undergone extensive clinical testing, the new dosing regimen can be tested in patients quickly.

This work was supported in part by the Damon Runyon Cancer Research Foundation, the Takeda Pharmaceuticals Immune Oncology Research Fund, and an MIT Training Grant in Environmental Science; foundational work on HSF1 was supported by the Koch Institute Frontier Research Program.

J-PAL North America announces second round of competition partners

Wed, 07/17/2019 - 9:25am

J-PAL North America, a research center at MIT, will partner with two leading education technology nonprofits to test promising models to improve learning, as part of the center’s second Education, Technology, and Opportunity Innovation Competition. 

Now in its second year, J-PAL North America’s Education, Technology, and Opportunity Innovation Competition supports education leaders in using randomized evaluations to generate evidence on how technology can improve student learning, particularly for students from disadvantaged backgrounds. Last year, J-PAL North America partnered with the Family Engagement Lab to develop an evaluation of a multilingual digital messaging platform, and with Western Governors University’s Center for Applied Learning Science to evaluate scalable models to improve student learning in math.

This year, J-PAL North America will continue its work to support rigorous evaluations of educational technologies aimed at reducing disparities by partnering with Boys and Girls Clubs of Greater Houston, a youth-development organization that provides education and social services to at-risk students, and MIND Research Institute, a nonprofit committed to improving math education.

“Even just within the first and second year of the J-PAL ed-tech competition, there continues to be an explosion in promising new initiatives,” says Philip Oreopoulos, professor of economics at the University of Toronto and co-chair of the J-PAL Education, Technology, and Opportunity Initiative. “We’re excited to try to help steer this development towards the most promising and effective programs for improving academic success and student well-being.”

Boys and Girls Clubs of Greater Houston will partner with J-PAL North America to develop an evaluation of the BookNook reading app, a research-based intervention technology that aims to improve literacy skills of K-8 students.

“One of our commitments to our youth is to prepare them to be better citizens in life, and we do this through our programming, which supplements the education they receive in school,” says Michael Ewing, director of programs at Boys & Girls Clubs of Greater Houston. “BookNook is one of our programs that we know can increase reading literacy and help students achieve at a higher level. We are excited about this opportunity to conduct a rigorous evaluation of BookNook’s technology because we can substantially increase our own accountability as an organization, ensuring that we are able to track the literacy gains of our students when the program is implemented with fidelity.”

Children who do not master reading by a young age are often placed at a significant disadvantage to their peers throughout the rest of their development. However, many effective interventions for students struggling with reading involve one-on-one or small-group instruction that places a heavy demand on school resources and teacher time. This makes it particularly challenging for schools that are already resource-strapped and face a shortage of teachers to meet the needs of students who are struggling with reading.

The BookNook app offers a channel to bring research-proven literacy intervention strategies to greater numbers of students through accessible technology. The program is heavily scaffolded so that both teachers and non-teachers can use it effectively, allowing after-school staff like those at Boys & Girls Clubs of Greater Houston to provide adaptive instruction to students struggling with reading.

“Our main priority at BookNook is student success,” says Nate Strong, head of partnerships for the BookNook team. “We are really excited to partner with J-PAL and with Boys & Girls Clubs of Greater Houston to track the success of students in Houston and learn how we can do better for them over the long haul.”

MIND Research Institute seeks to partner with J-PAL North America to develop a scalable model that will deepen students’ conceptual understanding of mathematics. MIND’s Spatial Temporal (ST) Math program is a pre-K-8 visual instructional program that leverages the brain’s spatial-temporal reasoning ability, using challenging visual puzzles, non-routine problem solving, and animated informative feedback to help students understand and solve mathematical problems.

“We’re thrilled and honored to begin this partnership with J-PAL to build our capacity to conduct randomized evaluations,” says Andrew Coulson, chief data science officer for MIND. “It's vital we continue to rigorously evaluate the ability of ST Math's spatial-temporal approach to provide a level playing field for every student, and to show substantial effects on any assessment. With the combination of talent and experience that J-PAL brings, I expect that we will also be exploring innovative research questions, metrics and outcomes, methods and techniques to improve the applicability, validity and real-world usability of the findings.”

J-PAL North America is excited to work with these two organizations and continue to support rigorous evaluations that will help us better understand the role technology should play in learning. Boys & Girls Clubs of Greater Houston and MIND Research Institute will help J-PAL contribute to a growing evidence base on education technology that can guide decision-makers in understanding which uses of education technology are truly helping students learn amid a rapidly changing technological landscape.

J-PAL North America is a regional office of the Abdul Latif Jameel Poverty Action Lab. J-PAL was established in 2003 as a research center at MIT’s Department of Economics. Since then, it has built a global network of affiliated professors based at over 58 universities and regional offices in Africa, Europe, Latin America and the Caribbean, North America, South Asia, and Southeast Asia. J-PAL North America was established with support from the Alfred P. Sloan Foundation and Arnold Ventures and works to improve the effectiveness of social programs in North America through three core activities: research, policy outreach, and capacity building. J-PAL North America’s education technology work is supported by the Overdeck Family Foundation and Arnold Ventures.

MIT and Fashion Institute of Technology join forces to create innovative textiles

Wed, 07/17/2019 - 9:00am

If you knew that hundreds of millions of running shoes are disposed of in landfills each year, would you prefer a high-performance athletic shoe that is biodegradable? Would being able to monitor your fitness in real time and help you avoid injury while you are running appeal to you? If so, look no further than the collaboration between MIT and the Fashion Institute of Technology (FIT). 

For the second consecutive year, students from each institution teamed up for two weeks in late June to create product concepts exploring the use of advanced fibers and technology. The workshops were held collaboratively with Advanced Functional Fabrics of America (AFFOA), a Cambridge, Massachusetts-based national nonprofit whose goal is to enable a manufacturing-based transformation of traditional fibers, yarns, and textiles into highly sophisticated, integrated, and networked devices and systems. 

“Humans have made use of natural fibers for millennia. They are essential as tools, clothing and shelter,” says Gregory C. Rutledge, lead principal investigator for MIT in AFFOA and the Lammot du Pont Professor in Chemical Engineering. “Today, new fiber-based solutions can have a significant and timely impact on the challenges facing our world.” 

The students had the opportunity this year to respond to a project challenge posed by footwear and apparel manufacturer New Balance, a member of the AFFOA network. Students spent their first week in Cambridge learning new technologies at MIT and the second at FIT, a college of the State University of New York, in New York City working on projects and prototypes. On the last day of the workshop, the teams presented their final projects at the headquarters of Lafayette 148 at the Brooklyn Navy Yard, with New Balance Creative Manager of Computational Design Onur Yuce Gun in attendance.

Team Natural Futurism presented a concept to develop a biodegradable lifestyle shoe using natural material alternatives, including bacterial cellulose and mycelium, and advanced fiber concepts to avoid use of chemical dyes. The result was a shoe that is both sustainable and aesthetic. Team members included: Giulia de Garay (FIT, Textile Development and Marketing), Rebecca Grekin ’19 (Chemical Engineering), rising senior Kedi Hu (Chemical Engineering/Architecture), Nga Yi "Amy" Lam (FIT, Textile Development and Marketing), Daniella Koller (FIT, Fashion Design), and Stephanie Stickle (FIT, Textile Surface Design).

Team CoMIT to Safety Before ProFIT explored the various ways that runners get hurt, sometimes from acute injuries but more often from overuse. Their solution was to incorporate intuitive textiles, as well as tech elements such as a silent alarm and LED display, into athletic clothing and shoes for entry-level, competitive, and expert runners. The goal is to help runners at all levels to eliminate distraction, know their physical limits, and be able to call for help. Team members included Rachel Cheang (FIT, Fashion Design/Knitwear), Jonathan Mateer (FIT, Accessories Design), Caroline Liu ’19 (Materials Science and Engineering), and Xin Wen ’19 (Electrical Engineering and Computer Science).

"It is critical for design students to work in a team environment engaging in the latest technologies. This interaction will support the invention of products that will define our future," comments Joanne Arbuckle, deputy to the president for industry partnerships and collaborative programs at FIT.

The specific content of this workshop was co-designed by MIT postdocs Katia Zolotovsky of the Department of Biological Engineering and Mehmet Kanik of the Research Laboratory of Electronics, with assistant professor of fashion design Andy Liu from FIT, to teach the fundamentals of fiber fabrication, 3-D printing with light, sensing, and biosensing. Participating MIT faculty included Yoel Fink, who is CEO of AFFOA and professor of materials science and electrical engineering; Polina Anikeeva, who is associate professor in the departments of Materials Science and Engineering and Brain and Cognitive Sciences; and Nicholas Xuanlai Fang, professor of mechanical engineering. Participating FIT faculty were Preeti Arya, assistant professor, Textile Development and Marketing; Patrice George, associate professor, Textile Development and Marketing; Suzanne Goetz, associate professor, Textile Surface Design; Tom Scott, Fashion Design; David Ulan, adjunct assistant professor, Accessories Design; and Gregg Woodcock, adjunct instructor, Accessories Design.  

To facilitate the intersection of design and engineering for products made of advanced functional fibers, yarns, and textiles, a brand-new workforce must be created and inspired by future opportunities. “The purpose of the program is to bring together undergraduate students from different backgrounds, and provide them with a cross-disciplinary, project-oriented experience that gets them thinking about what can be done with these new materials,” Rutledge adds. 

The goal of MIT, FIT, AFFOA, and industrial partner New Balance is to accelerate innovation in high-tech, U.S.-based manufacturing involving fibers and textiles, and potentially to create a whole new industry based on breakthroughs in fiber technology and manufacturing. AFFOA, a Manufacturing Innovation Institute founded in 2016, is a public-private partnership between industry, academia, and both state and federal governments.

“Collaboration and teamwork are DNA-level attributes of the New Balance workplace,” says Chris Wawrousek, senior creative design lead in the NB Innovation Studio. “We were very excited to participate in the program from a multitude of perspectives. The program allowed us to see some of the emerging research in the field of technical textiles. In some cases, these technologies are still very nascent, but give us a window into future developments.”  

“The diverse pairing and short time period also remind us of the energy captured in an academic crash course, and just how much teams can do in a condensed period of time,” Wawrousek adds. “Finally, it’s a great chance to connect with this future generation of designers and engineers, hopefully giving them an exciting window into the work of our brand.”

By building upon their different points of view from design and science, the teams demonstrated what is possible when creative individuals from each area act and think as one. “When designers and engineers come together and open their minds to creating new technologies that ultimately will impact the world, we can imagine exciting new multi-material fibers that open up a new spectrum of applications in various markets, from clothing to medical and beyond,” says Yuly Fuentes, MIT Materials Research Laboratory project manager for fiber technologies. 

How does your productivity stack up?

Tue, 07/16/2019 - 5:00pm

You know that person who always seems to be ahead of their deadlines, despite being swamped? Do you look at them with envy and wonder how they do it?

"Regardless of location, industry, or occupation, productivity is a challenge faced by every professional," says Robert Pozen, senior lecturer at the MIT Sloan School of Management.

As part of his ongoing research and aided by MIT undergraduate Kevin Downey, Pozen surveyed 20,000 self-selected individuals in management from six continents to learn why some people are more productive than others.

The survey tool, dubbed the Pozen Productivity Rating, consists of 21 questions divided into seven categories: planning your schedule, developing daily routines, coping with your messages, getting a lot done, improving your communication skills, running effective meetings, and delegating to others. These particular habits and skills are core to Pozen’s MIT Sloan Executive Education program, Maximizing Your Productivity: How to Become an Efficient and Effective Executive, and his bestselling book, "Extreme Productivity: Boost Your Results, Reduce Your Hours."
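The article doesn’t publish the rating’s scoring rubric, but the arithmetic of a category-based score like this can be sketched (hypothetical Python; the 0-4 answer scale, equal weighting, and three-questions-per-category split are assumptions, consistent only with the stated 21 questions across seven categories):

```python
# Hypothetical aggregation of a category-based productivity rating.
# The seven category names come from the article; the answer scale
# and weighting are invented for illustration.

CATEGORIES = [
    "planning your schedule", "developing daily routines",
    "coping with your messages", "getting a lot done",
    "improving your communication skills", "running effective meetings",
    "delegating to others",
]

def productivity_score(answers):
    """answers: dict mapping category -> list of 3 question scores (0-4).
    Returns the mean across all 21 questions, scaled to 0-100."""
    scores = [s for cat in CATEGORIES for s in answers[cat]]
    assert len(scores) == 21, "expected 3 questions per category"
    return round(100 * sum(scores) / (4 * len(scores)), 1)

# A sample respondent who answers 3, 2, 4 in every category.
sample = {cat: [3, 2, 4] for cat in CATEGORIES}
print(productivity_score(sample))   # prints 75.0
```

Any real scoring scheme could weight categories differently; the point of the sketch is only that a single comparable number can be rolled up from per-category answers.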

After cleaning up the data, Pozen and Downey obtained a complete set of answers from 19,957 respondents. Roughly half were residents of North America; another 21 percent were residents of Europe, and 19 percent were residents of Asia. The remaining 10 percent included residents of Australia, South America, and Africa.

They identified the respondents with the highest productivity ratings and found that these professionals tended to do well on the same clusters of habits:

  • They planned their work based on their top priorities and then acted with a definite objective;
  • they developed effective techniques for managing a high volume of information and tasks; and
  • they understood the needs of their colleagues, enabling short meetings, responsive communications, and clear directions.

The results were also interesting when parsed by the demographics of the survey participants.

Geographically, the average productivity score for respondents from North America was in the middle of the pack, even though Americans tend to work longer hours. In fact, the North American score was significantly lower than the average productivity scores for respondents from Europe, Asia, and Australia.

Age and seniority were highly correlated with personal productivity — older and more senior professionals recorded higher scores than younger and more junior colleagues. Habits of these more senior respondents included developing routines for low-value activities, managing message flow, running effective meetings, and delegating tasks to others.

While the overall productivity scores of male and female professionals were almost the same, there were some noteworthy differences in how women and men managed to be so productive. For example, women tended to score particularly high when it came to running effective meetings — keeping meetings to less than 90 minutes and finishing with an agreement of next steps. By contrast, men did particularly well at coping with high message volume — not looking at their emails too frequently and skipping over the messages of low value.

Coping with your daily flood of messages

While it’s clear that the ability to deal with inbox overload is key to productivity, how that’s accomplished may be less clear to many of us who shudder at our continuous backlog of emails.

“We all have so much small stuff, like email, that overwhelms us, and we wind up dedicating precious time to it,” says Pozen. “Most of us look at email every three to five minutes. Instead, look every hour or two, and when you do look, look only at subject matter and sender, and essentially skip over 60-80 percent of it, because most emails you get aren’t very useful.” Pozen also encourages answering important emails immediately instead of flagging them and then finding them again later (or forgetting altogether), as well as flagging important contacts and making ample use of email filters.

However, Pozen stresses that managing incoming emails, while an important skill, needs to be paired with other, more big-picture habits in order to be effective, such as defining your highest priorities. He warns that without a specific set of goals to pursue — both personal and professional — many ambitious people devote insufficient time to activities that actually support their top goals.

More tips for maximizing your productivity

If you want to become more productive, try developing the “habit clusters” demonstrated in Pozen’s survey results and possessed by the most productive professionals. This includes:

  • Focusing on your primary objectives: Every night, revise your next day’s schedule to stress your top priorities. Decide your purpose for reading any lengthy material, before you start.
  • Managing your work overload: Skip over 50-80 percent of your emails based on the sender and the subject. Break large projects into small steps — and start with step one.
  • Supporting your colleagues: Limit any meeting to 90 minutes or less and end each meeting with clearly defined next steps. Agree on success metrics with your team.

Pozen's survey tool is still available online. Those completing it will receive a feedback report offering practical tips for improving productivity. You can also learn from Pozen firsthand in his MIT Executive Education program, Maximizing Your Personal Productivity.

Why urban planners should pay attention to restaurant-review sites

Mon, 07/15/2019 - 2:59pm

Apartment seekers in big cities often use the presence of restaurants to determine if a neighborhood would be a good place to live. It turns out there is a lot to this rule of thumb: MIT urban studies scholars have now found that in China, restaurant data can be used to predict key socioeconomic attributes of neighborhoods.

Indeed, using online restaurant data, the researchers say, they can effectively predict a neighborhood’s daytime population, nighttime population, the number of businesses located in it, and the amount of overall spending in the neighborhood.

“The restaurant industry is one of the most decentralized and deregulated local consumption industries,” says Siqi Zheng, an urban studies professor at MIT and co-author of a new paper outlining the findings. “It is highly correlated with local socioeconomic attributes, like population, wealth, and consumption.” 

Using restaurant data as a proxy for other economic indicators can have a practical purpose for urban planners and policymakers, the researchers say. In China, as in many places, a census is only taken once a decade, and it may be difficult to analyze the dynamics of a city’s ever-changing areas on a faster-paced basis. Thus new methods of quantifying residential levels and economic activity could help guide city officials.

“Even without census data, we can predict a variety of a neighborhood’s attributes, which is very valuable,” adds Zheng, who is the Samuel Tak Lee Associate Professor of Real Estate Development and Entrepreneurship, and faculty director of the MIT China Future City Lab.

“Today there is a big data divide,” says Carlo Ratti, director of MIT’s Senseable City Lab, and a co-author of the paper. “Data is crucial to better understanding cities, but in many places we don’t have much [official] data. At the same time, we have more and more data generated by apps and websites. If we use this method we [can] understand socioeconomic data in cities where they don’t collect data.”

The paper, “Predicting neighborhoods’ socioeconomic attributes using restaurant data,” appears this week in the Proceedings of the National Academy of Sciences. The authors are Zheng, who is the corresponding author; Ratti; and Lei Dong, a postdoc co-hosted by the MIT China Future City Lab and the Senseable City Lab.

The study takes a close neighborhood-level look at nine cities in China: Baoding, Beijing, Chengdu, Hengyang, Kunming, Shenyang, Shenzhen, Yueyang, and Zhengzhou. To conduct the study, the researchers extracted restaurant data from the website Dianping, which they describe as the Chinese equivalent of Yelp, the English-language business-review site.

By matching the Dianping data to reliable, existing data for those cities — including anonymized and aggregated mobile phone location data from 56.3 million people, bank card records, company registration records, and some census data — the researchers found they could predict 95 percent of the variation in daytime population among neighborhoods. They also predicted 95 percent of the variation in nighttime population, 93 percent of the variation in the number of businesses, and 90 percent of the variation in levels of consumer consumption.
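The flavor of this prediction setup can be sketched as a simple regression whose accuracy is reported as variance explained (R²). The sketch below uses invented, synthetic stand-ins for neighborhood-level restaurant features (restaurant count, mean rating, mean price tier) and an ordinary-least-squares fit; the paper's actual Dianping features and machine-learning pipeline are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for neighborhood-level restaurant features:
# restaurant count, mean review rating, mean price tier.
n_hoods = 500
X = np.column_stack([
    rng.poisson(60, n_hoods).astype(float),  # restaurant count
    rng.uniform(2.5, 4.8, n_hoods),          # mean rating
    rng.uniform(1.0, 3.0, n_hoods),          # mean price tier
])

# A synthetic "daytime population" target correlated with those features.
true_w = np.array([120.0, 800.0, 1500.0])
y = X @ true_w + rng.normal(0.0, 500.0, n_hoods)

# Ordinary least squares with an intercept, then R^2 ("variance explained").
A = np.column_stack([np.ones(n_hoods), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2 = {r2:.2f}")
```

On real data, the reported figures (e.g., 95 percent of the variation in daytime population) correspond to R² values near 0.95 for the respective target attributes.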

“We have used new publicly available data and developed new data augmentation methods to address these urban issues,” says Dong, who adds that the study’s model is a “new contribution to [the use of] both data science for social good, and big data for urban economics communities.”

The researchers note that this is a more accurate proxy for estimating neighborhood-level demographic and economic activity than other methods previously used. For instance, other researchers have used satellite imaging to calculate the amount of nighttime light in cities, and in turn used the quantity of light to estimate neighborhood-level activity. While that method fares well for population estimates, the restaurant-data method is better overall, and much better at estimating business activity and consumer spending.

Zheng says she feels “confident” that the researchers’ model could be applied to other Chinese cities because it already shows good predictive power across cities. But the researchers also believe the method they employed — which uses machine learning techniques to zero in on significant correlations — could potentially be applied to cities around the globe.

“These results indicate the restaurant data can capture common indicators of socioeconomic outcomes, and these commonalities can be transferred … with reasonable accuracy in cities where survey outcomes are unobserved,” the researchers state in the paper.

As the scholars acknowledge, their study observed correlations between restaurant data and neighborhood characteristics, rather than specifying the exact causal mechanisms at work. Ratti notes that the causal link between restaurants and neighborhood characteristics can run both ways: Sometimes restaurants can fill demand in an already-thriving area, while at other times their presence is a harbinger of future development.

“There is always [both] a push and a pull” between restaurants and neighborhood development, Ratti says. “But we show the socioeconomic data is very well-reflected in the restaurant landscape, in the cities we look at. The interesting finding is that this seems to be so good as a proxy.”

Zheng says she hopes additional scholars will pick up on the method, which in principle could be applied to many urban studies topics.

“The restaurant data itself, as well as the variety of neighborhood attributes it predicts, can help other researchers study all kinds of urban issues, which is very valuable,” Zheng says.

The research grew out of an ongoing collaboration between MIT’s China Future City Lab and the MIT Senseable City Lab Consortium, which both use a broad range of data sources to shed new light on urban dynamics.

The study was also supported, in part, by the National Science Foundation of China.

New team to lead MIT Nuclear Reactor Laboratory

Mon, 07/15/2019 - 11:30am

The Office of the Vice President for Research announced the appointment of a new leadership team for the Nuclear Reactor Laboratory (NRL). The team will consist of Gordon Kohse, managing director for operations; Jacopo Buongiorno, science and technology director and director for strategic R&D partnerships; and Lance Snead, senior advisor for strategic partnerships and business development and leader of the NRL Irradiation Materials Sciences Group. The team will succeed David Moncton, who plans to return to his research after taking a department head sabbatical. Moncton has served as director of the NRL since 2004.

The new leadership team will collectively oversee an updated organizational model for the NRL that will allow the laboratory to more closely align its operations with the scientific research agenda of the Department of Nuclear Science and Engineering and other MIT researchers. “I look forward to working with this thoughtful and experienced team as they implement their vision for a vibrant operation supporting the critical work of our research community,” says Maria Zuber, vice president for research.

Kohse, a principal research scientist with the NRL and previously the deputy director of research and services, has worked with the NRL for over 40 years, ensuring the smooth operation of experiments at the laboratory. As managing director for operations, Kohse will oversee reactor operations, the newly created program management group, quality assurance, and the irradiation engineering group, and will work closely with Lance Snead on overseeing the Irradiation Materials Sciences Group. Kohse says, “I look forward to a new chapter in my work at the NRL. This is an exciting opportunity to build on the skills and dedication of the laboratory staff and to renew and strengthen cooperation with MIT faculty. My goal is to continue safe, reliable operation of the reactor, and to expand its capabilities in the service of expanding missions in nuclear research and education.”

In his new NRL leadership role, Jacopo Buongiorno, the TEPCO Professor of Nuclear Science and Engineering, will oversee the NRL’s Centers for Irradiation Materials Science. These centers will focus on a variety of research questions ranging from new nuclear fuels, to in-core sensors, to nuclear materials degradation. All experimental research utilizing the MIT reactor will be coordinated through the Centers for Irradiation Materials Science. Ongoing and installed programs will be managed through the program management group.

Buongiorno is also the director of the Center for Advanced Nuclear Energy Systems (CANES), which is one of eight Low-Carbon Energy Centers (LCEC) of the MIT Energy Initiative (MITEI); he is also the director of the recently completed MIT study on the Future of Nuclear Energy in a Carbon-Constrained World.

Buongiorno and Snead, an MIT research scientist and former corporate fellow with Oak Ridge National Laboratory, will spearhead efforts to expand external collaborations with federal and industry sponsors and work with MIT’s faculty to identify ways the NRL can provide the needed experimental support for their research and education objectives. “Our vision is to grow the MIT reactor value to MIT’s own research community as well as position it at the center of the worldwide efforts to develop new nuclear technologies that contribute to energy security and decarbonization of the global economy,” says Buongiorno. 

This new leadership team will build on NRL’s accomplishments under the direction of David Moncton. Moncton was instrumental in the 20-year relicensing of the reactor and led the NRL in developing a highly productive and innovative research program for in-core studies of structural materials, new fuel cladding composites, new generations of nuclear instrumentation based on ultrasonic sensors and fiber optics, and the properties of liquid salt in a radiation environment for use as a coolant in a new generation of high-temperature reactors. The NRL has become a key partner of the Nuclear Science User Facilities (NSUF) sponsored by Idaho National Laboratory, and it has established a world-class reputation for its in-core irradiation program.

Anne White, professor and head of the Department of Nuclear Science and Engineering, notes, “The unique capabilities of NRL together with the Centers for Irradiation Materials Science will create a new and exciting nexus for nuclear-related research and education at MIT, opening up opportunities not only for faculty in the nuclear science and engineering department (Course 22), but across the entire Institute.”

The new leadership team will begin their tenure effective Aug. 1, 2019.  

How expectation influences perception

Mon, 07/15/2019 - 11:00am

For decades, research has shown that our perception of the world is influenced by our expectations. These expectations, also called “prior beliefs,” help us make sense of what we are perceiving in the present, based on similar past experiences. Consider, for instance, how a shadow on a patient’s X-ray image, easily missed by a less experienced intern, jumps out at a seasoned physician. The physician’s prior experience helps her arrive at the most probable interpretation of a weak signal.

The process of combining prior knowledge with uncertain evidence is known as Bayesian integration and is believed to widely impact our perceptions, thoughts, and actions. Now, MIT neuroscientists have discovered distinctive brain signals that encode these prior beliefs. They have also found how the brain uses these signals to make judicious decisions in the face of uncertainty.

“How these beliefs come to influence brain activity and bias our perceptions was the question we wanted to answer,” says Mehrdad Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

The researchers trained animals to perform a timing task in which they had to reproduce different time intervals. Performing this task is challenging because our sense of time is imperfect and can go too fast or too slow. However, when intervals are consistently within a fixed range, the best strategy is to bias responses toward the middle of the range. This is exactly what animals did. Moreover, recording from neurons in the frontal cortex revealed a simple mechanism for Bayesian integration: Prior experience warped the representation of time in the brain so that patterns of neural activity associated with different intervals were biased toward those that were within the expected range.

MIT postdoc Hansem Sohn, former postdoc Devika Narain, and graduate student Nicolas Meirhaeghe are the lead authors of the study, which appears in the July 15 issue of Neuron.

Ready, set, go

Statisticians have known for centuries that Bayesian integration is the optimal strategy for handling uncertain information. When we are uncertain about something, we automatically rely on our prior experiences to optimize behavior.

“If you can’t quite tell what something is, but from your prior experience you have some expectation of what it ought to be, then you will use that information to guide your judgment,” Jazayeri says. “We do this all the time.”

In this new study, Jazayeri and his team wanted to understand how the brain encodes prior beliefs, and put those beliefs to use in the control of behavior. To that end, the researchers trained animals to reproduce a time interval, using a task called “ready-set-go.” In this task, animals measure the time between two flashes of light (“ready” and “set”) and then generate a “go” signal by making a delayed response after the same amount of time has elapsed.

They trained the animals to perform this task in two contexts. In the “Short” scenario, intervals varied between 480 and 800 milliseconds, and in the “Long” context, intervals were between 800 and 1,200 milliseconds. At the beginning of the task, the animals were given the information about the context (via a visual cue), and therefore knew to expect intervals from either the shorter or longer range.

Jazayeri had previously shown that humans performing this task tend to bias their responses toward the middle of the range. Here, they found that animals do the same. For example, if animals believed the interval would be short, and were given an interval of 800 milliseconds, the interval they produced was a little shorter than 800 milliseconds. Conversely, if they believed it would be longer, and were given the same 800-millisecond interval, they produced an interval a bit longer than 800 milliseconds.  
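This bias toward the middle of the range is exactly what a Bayesian (posterior-mean) estimator predicts. The sketch below assumes a uniform prior over the context’s interval range and Gaussian measurement noise; the 80-millisecond noise level is an illustrative assumption, not a value from the study.

```python
import numpy as np

def bayes_estimate(measured_ms, prior_lo, prior_hi, noise_sd=80.0, n=2001):
    """Posterior-mean estimate of a time interval, assuming a uniform
    prior over [prior_lo, prior_hi] and Gaussian measurement noise."""
    candidates = np.linspace(prior_lo, prior_hi, n)  # possible true intervals
    # Likelihood of the noisy measurement for each candidate interval;
    # the flat prior cancels out after normalization.
    likelihood = np.exp(-0.5 * ((measured_ms - candidates) / noise_sd) ** 2)
    posterior = likelihood / likelihood.sum()
    return float(np.sum(candidates * posterior))

# The same 800 ms measurement is pulled toward the middle of whichever
# range the observer expects, reproducing the behavioral bias.
short = bayes_estimate(800, 480, 800)    # "Short" context: estimate < 800 ms
long_ = bayes_estimate(800, 800, 1200)   # "Long" context: estimate > 800 ms
print(round(short), round(long_))
```

Because the measurement sits at the edge of each prior range, the posterior mass lies entirely toward the middle of the range, so the estimate shifts inward in both contexts.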

“Trials that were identical in almost every possible way, except the animal’s belief, led to different behaviors,” Jazayeri says. “That was compelling experimental evidence that the animal is relying on its own belief.”

Once they had established that the animals relied on their prior beliefs, the researchers set out to find how the brain encodes prior beliefs to guide behavior. They recorded activity from about 1,400 neurons in a region of the frontal cortex, which they have previously shown is involved in timing.

During the “ready-set” epoch, the activity profile of each neuron evolved in its own way, and about 60 percent of the neurons had different activity patterns depending on the context (Short versus Long). To make sense of these signals, the researchers analyzed the evolution of neural activity across the entire population over time, and found that prior beliefs bias behavioral responses by warping the neural representation of time toward the middle of the expected range.

“We have never seen such a concrete example of how the brain uses prior experience to modify the neural dynamics by which it generates sequences of neural activities, to correct for its own imprecision. This is the unique strength of this paper: bringing together perception, neural dynamics, and Bayesian computation into a coherent framework, supported by both theory and measurements of behavior and neural activities,” says Mate Lengyel, a professor of computational neuroscience at Cambridge University, who was not involved in the study.

Embedded knowledge

Researchers believe that prior experiences change the strength of connections between neurons. The strength of these connections, also known as synapses, determines how neurons act upon one another and constrains the patterns of activity that a network of interconnected neurons can generate. The finding that prior experiences warp the patterns of neural activity provides a window onto how experience alters synaptic connections. “The brain seems to embed prior experiences into synaptic connections so that patterns of brain activity are appropriately biased,” Jazayeri says.

As an independent test of these ideas, the researchers developed a computer model consisting of a network of neurons that could perform the same ready-set-go task. Using techniques borrowed from machine learning, they were able to modify the synaptic connections and create a model that behaved like the animals.

These models are extremely valuable as they provide a substrate for the detailed analysis of the underlying mechanisms, a procedure that is known as "reverse-engineering.” Remarkably, reverse-engineering the model revealed that it solved the task the same way the monkeys’ brain did. The model also had a warped representation of time according to prior experience.  

The researchers used the computer model to further dissect the underlying mechanisms using perturbation experiments that are currently impossible to do in the brain. Using this approach, they were able to show that unwarping the neural representations removes the bias in the behavior. This important finding validated the critical role of warping in Bayesian integration of prior knowledge.

The researchers now plan to study how the brain builds up and slowly fine-tunes the synaptic connections that encode prior beliefs as an animal is learning to perform the timing task.

The research was funded by the Center for Sensorimotor Neural Engineering, the Netherlands Scientific Organization, the Marie Sklodowska Curie Reintegration Grant, the National Institutes of Health, the Sloan Foundation, the Klingenstein Foundation, the Simons Foundation, the McKnight Foundation, and the McGovern Institute.

MIT Press and Harvard Data Science Initiative launch the Harvard Data Science Review

Mon, 07/15/2019 - 10:30am

The following is adapted from a joint release from the MIT Press and the Harvard Data Science Initiative.

The MIT Press and the Harvard Data Science Initiative (HDSI) have announced the launch of the Harvard Data Science Review (HDSR). The open-access journal, published by MIT Press and hosted online via the multimedia platform PubPub, an initiative of the MIT Knowledge Futures group, will feature leading global thinkers in the burgeoning field of data science, making research, educational resources, and commentary accessible to academics, professionals, and the interested public. With demand for data scientists booming, HDSR will provide a centralized, authoritative, and peer-reviewed publishing community to service the growing profession.

The first issue features articles on topics ranging from authorship attribution of John Lennon-Paul McCartney songs to machine learning models for predicting drug approvals to artificial intelligence (AI). Future content will have a similar range of general interest, academic, and professional content intended to foster dialogue among researchers, educators, and practitioners about data science research, practice, literacy, and workforce development. HDSR will prioritize quality over quantity, with a primary emphasis on substance and readability, attracting readers via inspiring, informative, and intriguing papers, essays, stories, interviews, debates, guest columns, and data science news. By doing so, HDSR intends to help define and shape the profession as a scientifically rigorous and globally impactful multidisciplinary field.

Combining features of a premier research journal, a leading educational publication, and a popular magazine, HDSR will leverage digital technologies and advances to facilitate author-reader interactions globally and learning across various media.

The Harvard Data Science Review will serve as a hub for high-quality work in the growing field of data science, noted by the Harvard Business Review as the "sexiest job of the 21st century." It will feature articles that provide expert overviews of complex ideas and topics from leading thinkers with direct applications for teaching, research, business, government, and more. It will highlight content in the form of commentaries, overviews, and debates intended for a wide readership; fundamental philosophical, theoretical, and methodological research; innovations and advances in learning, teaching, and communicating data science; and short communications and letters to the editor.

The dynamic digital edition is freely available on the PubPub platform to readers around the globe.

Amy Brand, director of the MIT Press, states, “For too long the important work of data scientists has been opaque, appearing mainly in academic journals with limited reach. We are thrilled to partner with the Harvard Data Science Initiative to publish work that will have a deep impact on popular understanding of the growing field of data science. The Review will be an unparalleled resource for advancing data literacy in society.”

Francesca Dominici, the Clarence James Gamble Professor of Biostatistics, Population and Data Science, and David Parkes, the George F. Colony Professor of Computer Science, both at Harvard University, announce, “As codirectors of the Harvard Data Science Initiative, we’re thrilled for the launch of this new journal. With its rigorous and cross-disciplinary thinking, the Harvard Data Science Review will advance the new science of data. By sharing stories of positive transformational impact as well as raising questions, this collective endeavor will reveal the contours that will shape future research and practice.”

Xiao-Li Meng, the Whipple V.N. Jones Professor of Statistics at Harvard and founding editor-in-chief of HDSR, explains, “The revolutionary ability to collect, process, and apply new analytics to extract powerful insights from data has a tremendous influence on our lives. However, hype and misinformation have emerged as unfortunate side effects of data science’s meteoric rise. The Harvard Data Science Review is designed to cut through the hype to engage readers with substantive and informed articles from the leading data science experts and practitioners, ranging from philosophers of ethics and historians of science to AI researchers and data science educators. In short, it is ‘everything data science and data science for everyone.’”

Elizabeth Langdon-Gray, inaugural executive director of HDSI, comments, “The Harvard Data Science Initiative was founded to foster collaboration in both research and teaching and to catalyze research that will benefit our society and economy. The Review plays a vital part in our effort to empower research progress and education globally and to solve some of the world’s most important challenges.”

The inaugural issue of HDSR will publish contributions from internationally renowned scholars and educators, as well as leading researchers in industry and government, such as Christine Borgman (University of California at Los Angeles), Rodney Brooks (MIT), Emmanuel Candes (Stanford University), David Donoho (Stanford University), Luciano Floridi (Oxford/The Alan Turing Institute), Alan M. Garber (Harvard), Barbara J. Grosz (Harvard), Alfred Hero (University of Michigan), Sabina Leonelli (University of Exeter), Michael I. Jordan (University of California at Berkeley), Andrew Lo (MIT), Maja Matarić (University of Southern California), Brendan McCord (U.S. Department of Defense), Nathan Sanders (WarnerMedia), Rebecca Willett (University of Chicago), and Jeannette Wing (Columbia University).

Making high-quality education accessible to all

Mon, 07/15/2019 - 10:00am

One of the earliest interactive course videos offered by MIT BLOSSOMS (Blended Learning Open Source Science or Math Studies) looks at the physics of donkey carts, a frequent sight in the streets of Pakistan. The lesson, created by Naveed Malik ’81, examines Newton’s Third Law of Motion, teaching how forces act between two interacting objects through the very visual, real-world example of a donkey pulling a cart.

At the recent LINC 2019 conference, Professor Richard Larson, principal investigator of BLOSSOMS and founding director of LINC, provided this example from 2010 of teaching STEM concepts in an engaging and locally-relevant way. Both BLOSSOMS and LINC have grown substantially over the last decade, continuing to explore and expand on the ways that technology-enabled education can improve education access — particularly for developing countries and underserved populations.

Vijay Kumar, executive director of the Abdul Latif Jameel World Education Lab (J-WEL) and associate dean for open learning at MIT, welcomed the very international LINC 2019 audience, comprising approximately 130 attendees representing 31 countries.

Kumar noted that the themes of the conference mirror the central mission of J-WEL, especially applying the innovation and research of MIT to catalyze change — with a particular focus on the developing world and emerging economies — "to address hard problems of education access and inequality.”

This year, LINC focused on “the new learning society,” trying to understand how best to address educational opportunities for diverse learners from around the world with different aspirations, motivations, and needs. Included in this group are many people who are displaced or face other financial or social obstacles to accessing a quality education. In addition to new types of learners, new tools and technologies have emerged. With the explosion of online education, digital learning has become central to the discourse on educational change.

“We are looking at questions of how technology might allow us to think more deeply about learning outcomes,” says Kumar. “How do you initiate change, how do you share resources, how do you create process to scale change, and how do you generate and maintain learning communities?”

“Leapfrogging” for bigger advances in education

Keynote speaker Rebecca Winthrop, director of the Center for Universal Education and senior fellow for global economy and development at the Brookings Institution, talked about innovations that aim to scale education to ensure that all young people across the globe develop the skills needed for a fast-changing world. Winthrop is the author of "Leapfrogging Inequality: Remaking Education to Help Young People Thrive," published by the Brookings Institution in 2018.

Many young people throughout the world — for a variety of reasons — do not have access to quality education. The Brookings Institution has identified a “100-year gap” between levels of education in wealthy and developing countries — meaning that without substantial changes in current education systems, it will take 100 years for children in developing countries to reach the education levels of children in developed countries.

Compounding this challenge is the reality that this 100-year gap refers solely to current education — to the best practices of education today. With new technologies shifting the landscape of what work might look like in the future, education needs to evolve, as well.

“We need to shift to skills that will prepare students for the future,” said Winthrop. “Students need a broad set of competencies, as well as social and emotional skills.”

Winthrop noted that research indicates that, without any significant change in practices and policies, 884 million young people worldwide will not have basic secondary-level skills by 2030.

She discussed the potential of a “leapfrogging” approach to reforming education. As the word implies, a “leapfrogging” approach to education focuses on “rapid, non-linear progress.” This approach seeks to provide access, quality, and relevance all at once, rather than in stages or steps, with an emphasis on more student-centered teaching and learning and on individualized, results-oriented programs.

Winthrop provided a variety of examples of specific efforts that in some way reflect this approach, including a satellite education program in Brazil that divides the teaching profession into lecturing and mentoring teachers to reach more students in rural communities; a tablet-based, distance-learning program based in Sudan, Jordan, Lebanon, and Uganda; a literacy and numeracy game that started in Colombia; and tablets preloaded with localized educational content provided to small groups of students in India.

Winthrop emphasized the importance of designing for scale at the beginning, considering the cost per student and what is most important about the program.

“You need to know what is the essence of why the thing is successful, and you need to make sure that core element is preserved when moving to another context or scale,” she said.

Advancing education at MIT and beyond

A panel discussion on “Learning Everywhere” provided some examples of innovative approaches to expanding education access, including the Refugee Action Hub (ReACT) Certificate Program, which was launched during an MIT Solve competition at the Institute. The program seeks to provide pathways to education for refugees, who very rarely have access to higher education, and it includes in-person lectures, online classes, and a paid internship. Key elements of this program, and of many others discussed, are human interaction and community-building.

Another example of an innovative education program with some “leapfrogging” characteristics is the Program in Data Science, created by CoLAB, a hub of disruptive innovation organizations in Uruguay. CoLAB also supports up to 500 students over the next four years to participate in a blended learning program in data science offered through the Uruguay Technological University (UTEC). Developed through membership in J-WEL Higher Education, the Program in Data Science includes online courses from MITx and Uruguayan universities, online activities facilitated by J-WEL staff, and on-site workshops run by J-WEL and MIT International Science and Technology Initiatives (MISTI).

Although a wide variety of creative and impactful efforts were highlighted at LINC 2019, many larger education systems have not yet undergone significant changes.

“Education is super innovative — it’s just largely at the margins and not at the center of systems,” says Winthrop. “It’s a problem of how we harness that for larger systemic change.”

The LINC 2019 participants and J-WEL, as a whole, aim to address this challenge.

“It’s tremendously exciting to see all the people who have come together to share their ideas and experiences,” says Kumar. “New technologies and approaches are enabling new, shared opportunities of increasing education equality. J-WEL supports and strengthens these efforts to enable substantive educational change.”

Professor Emeritus Fernando Corbató, MIT computing pioneer, dies at 93

Mon, 07/15/2019 - 9:01am

Fernando “Corby” Corbató, an MIT professor emeritus whose work in the 1960s on time-sharing systems broke important ground in democratizing the use of computers, died on Friday, July 12, at his home in Newburyport, Massachusetts. He was 93.

Decades before the existence of concepts like cybersecurity and the cloud, Corbató led the development of one of the world’s first operating systems. His “Compatible Time-Sharing System” (CTSS) allowed multiple people to use a computer at the same time, greatly increasing the speed at which programmers could work. It’s also widely credited as the first computer system to use passwords.

After CTSS, Corbató led a time-sharing effort called Multics, which directly inspired operating systems like Linux and laid the foundation for many aspects of modern computing. Multics doubled as a fertile training ground for an emerging generation of programmers that included C programming language creator Dennis Ritchie, Unix developer Ken Thompson, and spreadsheet inventors Dan Bricklin and Bob Frankston.

Before time-sharing, using a computer was tedious and required detailed knowledge. Users would create programs on cards and submit them in batches to an operator, who would enter them to be run one at a time over a series of hours. Minor errors would require repeating this sequence, often more than once.

But with CTSS, which was first demonstrated in 1961, answers came back in mere seconds, forever changing the model of program development. Decades before the PC revolution, Corbató and his colleagues also opened up communication between users with early versions of email, instant messaging, and word processing. 

“Corby was one of the most important researchers for making computing available to many people for many purposes,” says long-time colleague Tom Van Vleck. “He saw that these concepts don’t just make things more efficient; they fundamentally change the way people use information.”

Besides making computing more efficient, CTSS also inadvertently helped establish the very concept of digital privacy itself. With different users wanting to keep their own files private, CTSS introduced the idea of having people create individual accounts with personal passwords. Corbató’s vision of making high-performance computers available to more people also foreshadowed trends in cloud computing, in which tech giants like Amazon and Microsoft rent out shared servers to companies around the world. 

“Other people had proposed the idea of time-sharing before,” says Jerry Saltzer, who worked on CTSS with Corbató after starting out as his teaching assistant. “But what he brought to the table was the vision and the persistence to get it done.”

CTSS was also the spark that convinced MIT to launch “Project MAC,” the precursor to the Laboratory for Computer Science (LCS). LCS later merged with the Artificial Intelligence Lab to become MIT’s largest research lab, the Computer Science and Artificial Intelligence Laboratory (CSAIL), which is now home to more than 600 researchers. 

“It’s no overstatement to say that Corby’s work on time-sharing fundamentally transformed computers as we know them today,” says CSAIL Director Daniela Rus. “From PCs to smartphones, the digital revolution can directly trace its roots back to the work that he led at MIT nearly 60 years ago.” 

In 1990 Corbató was honored for his work with the Association for Computing Machinery’s Turing Award, often described as “the Nobel Prize for computing.”

From sonar to CTSS

Corbató was born on July 1, 1926, in Oakland, California. At 17 he enlisted as a technician in the U.S. Navy, where he first got the engineering bug working on a range of radar and sonar systems. After World War II he earned his bachelor's degree at Caltech before heading to MIT to complete a PhD in physics. 

As a PhD student, Corbató met Professor Philip Morse, who recruited him to work with his team on Project Whirlwind, the first computer capable of real-time computation. After graduating, Corbató joined MIT's Computation Center as a research assistant, soon moving up to become deputy director of the entire center. 

It was there that he started thinking about ways to make computing more efficient. For all its innovation, Whirlwind was still a rather clunky machine. Researchers often had trouble getting much work done on it, since they had to take turns using it for half-hour chunks of time. (Corbató said that it had a habit of crashing every 20 minutes or so.) 

Since computer input and output devices were much slower than the computer itself, in the late 1950s a scheme called multiprogramming was developed to allow a second program to run whenever the first program was waiting for some device to finish. Time-sharing built on this idea, allowing other programs to run while the first program was waiting for a human user to type a request, thus allowing the user to interact directly with the first program.

Saltzer says that Corbató pioneered a programming approach that would be described today as agile design. 

“It’s a buzzword now, but back then it was just this iterative approach to coding that Corby encouraged and that seemed to work especially well,” he says.  

In 1962 Corbató published a paper about CTSS that quickly became the talk of the slowly growing computer science community. The following year MIT invited several hundred programmers to campus to try out the system, spurring a flurry of further research on time-sharing.

Foreshadowing future technological innovation, Corbató was amazed — and amused — by how quickly people got habituated to CTSS’ efficiency.

“Once a user gets accustomed to [immediate] computer response, delays of even a fraction of a minute are exasperatingly long,” he presciently wrote in his 1962 paper. “First indications are that programmers would readily use such a system if it were generally available.”

Multics, meanwhile, expanded on CTSS’ more ad hoc design with a hierarchical file system, better interfaces to email and instant messaging, and more precise privacy controls. Peter Neumann, who worked at Bell Labs when they were collaborating with MIT on Multics, says that its design prevented the possibility of many vulnerabilities that affect modern systems, like “buffer overflow” (which happens when a program writes data past the end of the memory set aside for it, corrupting whatever is stored next to it). 

“Multics was so far ahead of the rest of the industry,” says Neumann. “It was intensely software-engineered, years before software engineering was even viewed as a discipline.” 

In spearheading these time-sharing efforts, Corbató served as a soft-spoken but driven commander in chief — a logical thinker who led by example and had a distinctly systems-oriented view of the world.

“One thing I liked about working for Corby was that I knew he could do my job if he wanted to,” says Van Vleck. “His understanding of all the gory details of our work inspired intense devotion to Multics, all while still being a true gentleman to everyone on the team.” 

Another of the professor’s legacies is “Corbató’s Law,” which states that the number of lines of code someone can write in a day is the same regardless of the language used. This maxim is often cited by programmers when arguing in favor of using higher-level languages.

Corbató was an active member of the MIT community, serving as associate department head for computer science and engineering from 1974 to 1978 and 1983 to 1993. He was a member of the National Academy of Engineering, and a fellow of the Institute of Electrical and Electronics Engineers and the American Association for the Advancement of Science. 

Corbató is survived by his wife, Emily Corbató, from Brooklyn, New York; his stepsons, David and Jason Gish; his brother, Charles; and his daughters, Carolyn and Nancy, from his marriage to his late wife Isabel; and five grandchildren. 

In lieu of flowers, gifts may be made to MIT’s Fernando Corbató Fellowship Fund via Bonny Kellermann in the Memorial Gifts Office. 

CSAIL will host an event to honor and celebrate Corbató in the coming months.