MIT Latest News
Approximately 290 students recently moved into their newly renovated home, thanks to a concerted team effort to complete the reconstruction of New House in time for the start of the fall semester. The 14-month construction project followed months of planning in which the architects, student residents, and staff from the Division of Student Life (DSL) and Campus Construction worked together to envision the future needs of the community. The result — a residence with improved connectivity between houses, new amenities (including cluster kitchens and quiet lounges in each house), enhanced accessibility, green roofs, and revitalized courtyards.
“Renovating a residence hall is a tall order at any time,” said Suzy Nelson, vice president and dean for student life. “Everyone involved — students, faculty, staff, and the architects and project managers — did a fantastic job of balancing the desires of residents with the needs of an up-to-date residence hall and MIT’s expectations for the future. And to get that all done in a year is truly extraordinary.”
The decision to renovate the more than 40-year-old, 115,000-square-foot residence was based on the results of a 2016 feasibility study conducted by the Office of Campus Planning and the MIT Capital Projects group.
“While this project has helped drive down our deferred maintenance, what it has really done is demonstrate our desire to enhance the living and learning environment for our students for the 21st century, and work with each community to develop how each building can better support their needs,” says David Friedrich, senior director of housing operations and renewal planning.
Flexible design features focus on community
Constructed in 1975, New House is home to a community that encompasses nine living groups, including the cultural groups Chocolate City, French House, German House, iHouse, and Spanish House. The primary goals of the renovation included retaining the 288-bed count in New House, which was achieved, and preserving the nine communities while enhancing the connections among the houses. A 275-foot corridor now runs the entire length of the building, enabling residents to move easily between communities on every level via accessible routes. The new design’s flexibility accommodates changes in each community’s population and allows adaptability in assigning rooms to residents.
Goody Clancy led the redesign effort, collaborating with students and student life staff to understand residents’ needs. Using MIT’s Architectural Principles, the teams envisioned the ground-floor arcade as the heart of the building with shared features such as a large community-shared country kitchen and an expanded multi-purpose room, makerspace, laundry, and fitness room located along its path. Placement of these features next to the house lounge on the arcade level enables those spaces to spill out onto the adjacent courtyards, providing an open, communal space encouraging creative connections among students.
In addition, large windows in the arcade level bring in views of the Charles River and allow more daylight. “Taking down the large wall that was in place on the north side of the arcade has opened up a north-south view through the ground floor, bringing the outside in,” says Goody Clancy Associate Amanda Sanders.
Some of New House’s added construction features and improvements include:
- a first-floor arcade that includes a house lounge, game room, the shared country kitchen, expanded makerspace, multi-purpose room, laundry room, fitness room, and music room;
- a new roof, along with six green roofs facing Memorial Drive that absorb water and reduce water waste;
- new energy-efficient windows throughout the building;
- connecting corridors on the upper floors with two new elevators providing accessibility;
- accessible student rooms and bathrooms in each community;
- revitalized courtyards providing social space for occupants; and
- a new covered 150-bike storage enclosure.
Creative work phasing minimized student relocations
One of the challenges of this whole-building renovation was phasing the construction so that a number of New House residents could continue to live in the building for the 2017-18 academic year. By staging the work in phases and maintaining one unoccupied house as a buffer against construction noise, the team enabled 100 residents to remain in the building. This creative approach, managed and coordinated by Suffolk Construction Company, minimized the need for students to relocate.
“The students who lived in New House during construction were an integral part of the success of this project,” says Kevin Carr, project manager for Campus Construction. “We hosted a welcome back pizza party and a building tour when the students returned in January after phase one was complete, and the positive feedback was overwhelming, and it really touched us in a special way.”
Community engagement laid foundation for redesign
As with many projects on campus, engagement among student residents, student life staff, and the design and construction teams was critical to the successful completion of the project. The presidents of each of the houses were involved throughout and contributed ideas and opinions right down to color schemes and furniture options.
“In my 17 years as head of house for New House, I saw how the students lived, worked, and connected with one another,” says Wesley Harris, the Charles Stark Draper Professor of Aeronautics and Astronautics. “The freshness and openness that this renovation breathes will be most welcomed by our students, and the new east-west horizontal integration will be a substantial improvement in the quality of life. I also commend all who were involved in this project, including the administration, students, architect, and construction team who did a wonderful job.”
Paul Murphy, program manager for Special Projects, says “this was one of the bigger renovation projects within the past two years for MIT, and it’s a real testament to teamwork and collaboration that it went off without any major hitches and completed on time for students to move back in for the fall semester.”
“When we walk through it now and see students smiling — it’s why we do what we do,” Murphy says.
From airplane wings to overhead powerlines to the giant blades of wind turbines, a buildup of ice can cause problems ranging from impaired performance all the way to catastrophic failure. But preventing that buildup usually requires energy-intensive heating systems or chemical sprays that are environmentally harmful. Now, MIT researchers have developed a completely passive, solar-powered way of combating ice buildup.
The system is remarkably simple, based on a three-layered material that can be applied or even sprayed onto the surfaces to be treated. It collects solar radiation, converts it to heat, and spreads that heat around so that the melting is not just confined to the areas exposed directly to the sunlight. And, once applied, it requires no further action or power source. It can even do its de-icing work at night, using artificial lighting.
The new system is described today in the journal Science Advances, in a paper by MIT associate professor of mechanical engineering Kripa Varanasi and postdocs Susmita Dash and Jolet de Ruiter.
“Icing is a major problem for aircraft, for wind turbines, power lines, offshore oil platforms, and many other places,” Varanasi says. “The conventional ways of getting around it are de-icing sprays or by heating, but those have issues.”
Inspired by the sun
The usual de-icing sprays for aircraft and other applications use ethylene glycol, a chemical that is environmentally unfriendly. Airlines don’t like to use active heating, both for cost and safety reasons. Varanasi and other researchers have investigated the use of superhydrophobic surfaces to prevent icing passively, but those coatings can be impaired by frost formation, which tends to fill the microscopic textures that give the surface its ice-shedding properties.
As an alternate line of inquiry, Varanasi and his team considered the energy given off by the sun. They wanted to see, he says, whether “there is a way to capture that heat and use it in a passive approach.” They found that there was.
It’s not necessary to produce enough heat to melt the bulk of the ice that forms, the team found. All that’s needed is for the boundary layer, right where the ice meets the surface, to melt enough to create a thin layer of water, which will make the surface slippery enough so any ice will just slide right off. This is what the team has achieved with the three-layered material they’ve developed.
Layer by layer
The top layer is an absorber, which traps incoming sunlight and converts it to heat. The material the team used is highly efficient, absorbing 95 percent of the incident sunlight, and losing only 3 percent to re-radiation, Varanasi says.
In principle, that layer could in itself help to prevent frost formation, but with two limitations: It would only work in the areas directly in sunlight, and much of the heat would be lost back into the substrate material — the airplane wing or powerline, for example — and would not help with the de-icing.
So, to compensate for the localization, the team added a spreader layer — a very thin layer of aluminum, just 400 micrometers thick, which is heated by the absorber layer above it and very efficiently spreads that heat out laterally to cover the entire surface. The material was selected to have “thermal response that is fast enough so that the heating takes place faster than the freezing,” Varanasi says.
Finally, the bottom layer is simply foam insulation, to keep any of that heat from being wasted downward and keep it where it’s needed, at the surface.
“In addition to passive de-icing, the photothermal trap stays at an elevated temperature, thus preventing ice build-up altogether,” Dash says.
The three layers, all made of inexpensive commercially available material, are then bonded together, and can be bonded to the surface that needs to be protected. For some applications, the materials could instead be sprayed onto a surface, one layer at a time, the researchers say.
The team carried out extensive tests, including real-world outdoor testing of the materials and detailed laboratory measurements, to prove the effectiveness of the system.
“The use of photothermal absorbers is a smart idea and straightforward to implement,” says Manish Tiwari, a professor of nanoengineering at University College London, who was not associated with this research. “Scalability of these approaches and thinking about appropriate packaging, specific weight, etc., of the de-icing layer are important practical challenges going ahead, especially when it comes to the aerospace application. The paper also opens up intriguing possibilities around smart and flexible thermal packaging, and thermal metamaterials research to realize its full potential. Overall, an excellent step forward,” he says.
The system could find even wider commercial uses, such as panels to prevent icing on roofs of homes, schools, and other buildings, Varanasi adds. The team is planning to continue work on the system, testing it for longevity and for optimal methods of application. But the basic system could essentially be applied almost immediately for some uses, especially stationary applications, he says.
The research was supported by Alstom and the Netherlands Organization for Scientific Research.
Jean Pierre de Monchaux, an idealistic and optimistic planner and architect who served as dean of the MIT School of Architecture and Planning from 1981 to 1992, passed away on April 30, after living with Parkinson’s disease for 20 years. He was 81.
De Monchaux, also known as John, came to MIT after many years’ professional experience in the United States, the United Kingdom, South America, Australia, and Southeast Asia. His international upbringing in Dublin, Montréal, New York City, Bogotá, Sydney, and London produced lasting memories of life onboard the ocean liners and tramp steamers that ferried him between these places as a boy and young man.
His diverse background informed his vision of urban planning as a conciliatory practice of listening and learning between constituencies and professionals. He understood all of the world’s cities as neighborhoods of a single global village — as shared places of possibility, and of messy meaning, that transcended false notions of order and border.
“John’s legacy is all around us,” says Hashim Sarkis, dean of the School of Architecture and Planning. “His influence is reflected every day through our classes and research, in our passion to serve the world, and in the thoughtful, caring, and supportive community that is a hallmark of SA+P.”
As dean, de Monchaux was known for his ability to nurture dialogue, to forge consensus, and to build bridges between SA+P and other schools at the Institute. He achieved major milestones in the school’s history, including the completion of the award-winning Rotch Library extension in Building 7, the establishment of the Center for Real Estate (the first program of its kind in the United States), and the opening — in the newly designed I.M. Pei building — of the Media Lab, an endeavor that de Monchaux was proud to have named after many wordier and narrower possibilities were considered.
After stepping down as dean in 1992, de Monchaux took a four-year partial leave from MIT to serve as general manager of the Aga Khan Trust for Culture, a Geneva-based foundation concerned with architecture and urban design as a catalyst for cultural and social development in the Muslim world.
In 1996, he returned to MIT and spent the next dozen years teaching in two departments: Urban Studies and Planning and Architecture. From 1996 until 2004, he served as head of the Special Program in Urban Regional Studies (SPURS), a one-year program designed for mid-career professionals from developing countries.
“He helped many of us, faculty and students alike, to design better cities,” says DUSP department head Eran Ben-Joseph, who worked with de Monchaux in the department. “He was a true friend, mentor, and colleague — a person of genuine integrity, great wisdom, and a gentle soul who will be sorely missed.”
De Monchaux was also a dedicated presence in the Boston design community, serving on the boards of the Boston Society of Architects and the Boston Architectural College, and founding the Boston Civic Design Commission as well as serving as its first chair. He was a trustee of the Boston Foundation for Architecture, and a trustee and overseer of the Museum of Fine Arts, Boston.
Born in Dublin, Ireland, to a French-Australian family, de Monchaux was educated at St John’s College of the University of Sydney in Australia, and at the Harvard University Graduate School of Design, where later, in 1971, he would become a member of that school’s second class of Loeb Fellows. He began his teaching career at the Bartlett School of Architecture at University College, London, in 1964, the beginning of what would become a long collaboration with then-professor Lord Richard Llewellyn-Davies.
De Monchaux had been admitted to MIT’s bachelor’s in architecture program in 1954 from Stuyvesant High School in New York City, but was unable to afford the tuition and enroll as a student. He returned to MIT in 1981 with a particular dedication to opening the Institute’s doors ever wider.
With his wife, British sociologist Suzanne de Monchaux, as part of the design team, he was principal planner for Milton Keynes, a new city in Buckinghamshire, England, that was conceived in the late 1960s as the crowning achievement of Great Britain’s utopian postwar New Towns Movement. In more than two decades of practice as a planner, primarily with global planning partnership Llewellyn Davies and its successor firms, he played a leading role in advocacy design assistance in Watts, Detroit, and Chicago. He also participated in urban plans and environmental impact studies throughout Australia, China, the Middle East, and Southeast Asia, with a particular interest in the developing world, vernacular typologies, and informal urbanisms.
De Monchaux is survived by his twin sons: Nicholas de Monchaux, an associate professor of architecture and urban design at the University of California at Berkeley and a founding partner of the interdisciplinary architecture firm modem; and Thomas de Monchaux, an author, designer, and adjunct assistant professor of architecture at Columbia University.
For a story published in 2007 in PLAN on the occasion of de Monchaux’s nominal retirement from teaching, Lois Craig, who served as associate dean, recalled, “He had a method of getting agreement from people, forming friendships and professional alliances that supported his policies. He created a sense of functional togetherness. He was a conciliator and an enabler, bringing people together.”
In that same article, Professor Julian Beinart, who co-taught many urban design studios with de Monchaux, reflected on his colleague’s studio technique: “John always took the epistemologically cool position: Let’s think about your proposition, let’s untie the knots of your argument, to the extent we can, let’s see if we can reframe some of the parts, let’s see where that takes us.”
A memorial service will be held at 9:30 a.m. on Saturday, Sept. 29, in the MIT Chapel.
Shih-Ying Lee, a longtime MIT mechanical engineering professor and expert in process control, measurement, and instrumentation, passed away peacefully on July 2 in Lincoln, Massachusetts. Lee '43, SCD '45 had recently celebrated his 100th birthday in April.
Lee’s career spanned over six decades and included positions in both academia and industry. In 2015, he provided an overview of his professional and personal achievements in his autobiography, “From Tsinghua to MIT — My Journey from Education to Entrepreneurship.”
Born in Beijing (known at the time as Beiping), China, on April 30, 1918, Lee was drawn to engineering at an early age. He received a bachelor’s degree from Tsinghua University in the midst of World War II and the Second Sino-Japanese War. Upon graduating, Lee worked as a bridge designer and hydraulic power research engineer for the Chinese government.
Eager to continue his education in the United States, Lee made a harrowing journey halfway around the world in the midst of global conflict. He flew first to India, then took a ship to the U.S. via South America. In 1942 he enrolled at MIT, where he received master's and doctor of science degrees in civil engineering.
After a two-year stint at Cram and Ferguson Architects, Lee returned to MIT as a research engineer in the Dynamic Analysis and Control Lab. He joined the faculty in the Department of Mechanical Engineering in 1952. Throughout his tenure as a professor, Lee made extensive improvements to several courses including 2.171 (at the time, Fluid Power Control) and 2.173 (Measurement and Control).
Lee’s interest in measurement and instrumentation extended beyond the classrooms of MIT. He shared an entrepreneurial spirit and interest in startups with his brother, MIT professor of aeronautics Yao-Tzu Li SM '38, SCD '39. In 1953, they co-founded Dynisco Inc., which manufactured pressure-measuring instruments. To focus on his work at MIT, Lee sold Dynisco to the American Brake Shoe Company in 1960.
Less than a decade later, the brothers formed Setra Systems Inc., which specialized in instruments for sensing and measuring. The company designed and manufactured devices such as accelerometers, pressure transducers, and laboratory balances. These instruments, and all other products produced by Setra, used variable capacitance sensors, a technology co-developed by Lee and his brother.
In 1974, Lee retired after 22 years on the mechanical engineering faculty at MIT. For the next three decades, much of his professional focus was on Setra Systems, where he served as chair and chief executive officer in the 1990s. Many of his patents involved pressure and force sensing products developed at Setra.
Throughout his career, Lee received a number of prestigious awards in recognition of his many contributions to the fields of process control, instrumentation, and sensing. In 1981 he received the Rufus Oldenburger Medal from the American Society of Mechanical Engineers for his permanent contribution to the field of automatic control. Several years later, he was elected to the National Academy of Engineering for “original research on control valve stability, for innovative dynamic measurement instrumentation, and for successful entrepreneurial commercialization of his inventions.” He also received the Technical Excellence Award from the International Society of Weighing and Measurement for his introduction of a new force and weight sensing method.
Lee was married to his first wife, May Kao Lee, for 22 years until her death. He was married to his second wife, Lena Yin Lee, for 45 years until her death in May 2018. In 1991, Lee and Lena established the Shih-Ying (1943) & Lena Y. Lee Endowed Fellowship Fund in the Department of Mechanical Engineering. The scholarship was most recently awarded to a graduate student in 2016.
Later in his life, Lee enjoyed keeping up with the latest personal computing devices, staying fit with his daily walks and exercises, connecting with his children and grandchildren, and playing Scrabble with his wife at their home in Lincoln. He is survived by their four children: Carol Lee; David Lee ME '73, PhD ’80; Linda Lee PhD '85; and Eileen Brooks.
Since it was conceived as an online offering in 2012, the MITx massive open online course (MOOC), Introduction to Computer Science using Python, has become the most popular MOOC in MIT history with 1.2 million enrollments to date.
The course is derived from an on-campus MIT subject, also available through MIT OpenCourseWare, that was developed and originally taught by John Guttag, the Dugald C. Jackson Professor of Computer Science and Electrical Engineering. “Although on the surface it’s a computer programming course with Python, it’s really not about Python or even programming,” explains Guttag. “It’s about teaching students to use computation, in this case described by Python, to build models and explore broader questions of what can be done with computation to understand the world.”
The first MITx version of this course, launched in 2012, was co-developed by Guttag and Eric Grimson, the Bernard M. Gordon Professor of Medical Engineering and professor of computer science. It was one of the very first MOOCs offered by MIT on the edX platform.
“This course is designed to help students begin to think like a computer scientist,” says Grimson. “By the end of it, the student should feel very confident that given a problem, whether it’s something from work or their personal life, they could use computation to solve that problem.”
The course was initially developed as a single 13-week course, but in 2014 it was split into two courses, 6.00.1x and 6.00.2x. “We achieved 1.4 million enrollments at the beginning of the summer with both courses combined,” says Ana Bell, lecturer in electrical engineering and computer science, who keeps the MOOC current by adding new problem sets and exercises and by coordinating volunteer teaching assistants (TAs). “At its core, the 6.00 series teaches computational thinking,” adds Bell. “It does this using the Python programming language, but the course also teaches programming concepts that can be applied in any other programming language.”
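To give a flavor of the computational thinking the 6.00 series teaches, here is a short, purely illustrative Python sketch (written for this article, not taken from the course materials) of bisection search, a classic technique covered early in the course, applied to approximating a square root:

```python
# Illustrative example: approximating sqrt(x) with bisection search.
# Rather than solving the equation analytically, we repeatedly halve
# an interval known to contain the answer -- a simple model of how
# computation can attack problems numerically.
def sqrt_bisection(x, epsilon=1e-6):
    """Approximate the square root of x >= 0 to within epsilon."""
    low, high = 0.0, max(x, 1.0)   # sqrt(x) always lies in [0, max(x, 1)]
    guess = (low + high) / 2
    while abs(guess * guess - x) >= epsilon:
        if guess * guess < x:
            low = guess            # answer is in the upper half
        else:
            high = guess           # answer is in the lower half
        guess = (low + high) / 2
    return guess

print(round(sqrt_bisection(25.0), 3))  # prints 5.0
```

Each iteration halves the search interval, so the approximation converges quickly regardless of the starting range, one of the ideas the course uses to show why some algorithms scale well and others do not.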
Enrollment is already high for the next 6.00.1x course, which starts today. Guttag, Grimson, and Bell suggest several reasons for the course’s popularity. For example, many learners are older or are switching careers and either have not been exposed to computer science much or are looking for new skills. “Many learners take it because they see computer science as a path forward and something they need to know,” says Grimson.
Providing new lives for refugees
Such is the case of Muhammad Enjari, a 39-year-old petroleum engineer from Homs, Syria. He fled Homs with his wife and three children at the beginning of the Syrian revolution and settled in Jordan soon after. “I have a degree in petroleum engineering but in Jordan I could not find a job,” he says.
In his journey to jumpstart a new career, Enjari enrolled in 6.00.1x as part of the MIT Refugee Action Hub, or ReACT, a yearlong Computer and Data Science Program (CDSP) curriculum. He received a 100 percent on the final exam and a 100 percent final grade. “Because of this course and others, I will be starting a new job in two weeks as a paid intern in computer engineering with Edraak, a MOOC platform similar to edX for Arabic-speaking students,” he adds.
Similarly, when 23-year-old Manda Awad, another ReACT CDSP student and a refugee from Palestine living in Jordan, enrolled in the 6.00.1x course, she learned that some of the topics covered in the course series were not included in her computer science curriculum at the University of Jordan. This, coupled with a lack of support for women in tech, inspired Awad to write a proposal to update the engineering department’s computer science curricula by integrating the 6.00 series coursework and to expand access to the material across the student body. “I want to take what I have learned and teach other students, particularly women,” she says. Awad is currently setting up a programming club with a weekly instructional segment, and she plans to launch a “Women who Code” group at the Zaatari refugee camp in Jordan within the next year.
Expanding career options
Grain farmer Matt Reimer of Manitoba, Canada, enrolled in the course to develop a computer program to improve his farm’s efficiency, productivity, and profitability. He gained the skills needed to use remote-control technology to accelerate harvest production using his farm’s auto-steering tractor integrated with his grain combine harvester. The result: The driverless tractor unloaded grain from the combine over 500 times, saving the farm an estimated $5,000 or more.
When Ruchi Garg decided to re-enter the workplace after being the primary caretaker for her two young children, she enrolled in the course to get her former technology career moving again. She was worried that her skillset had grown stale in the wake of rapidly advancing technologies and evolving computer engineering practices. After completing 6.00.1x, Garg has gone on to become a data analyst at The Weather Company, an IBM subsidiary.
Aditi, a blind data security professional based in India, enrolled in 6.00.1x to help create the next generation of security tools. The MIT course was the first completely accessible course she had ever taken online. After finishing the 6.00 series, Aditi will be attending Georgia Tech in the fall for her master’s degree.
And in 2017, MITx partnered with Silicon Valley-based San Jose City College to offer the course as part of a program for students in the area who traditionally have not had access to computer science curriculum. When students complete the course, they are matched with prospective employers for internships and possible employment in the area’s technology industry.
Past students stay involved
Because of her own enthusiasm with the course, Estefania Cassingena Navone became a Community TA for MITx from Venezuela. She has written several supporting documents with visualizations to demystify some of the more complex ideas in the course. “This course gave me the hope I needed,” she says. “Hope that living in a developing country would not be a barrier to achieve what I truly want to achieve in life, it gave me the opportunity to be part of an online community where hard work and dedication really helps you thrive.”
After taking the course, MITx TA Thomas Ballatore felt empowered to learn more about using computers in his own teaching. Although he already holds a PhD, he has entered a master’s program in digital media design to learn how to produce his own online courses. “I became a TA because of my love of teaching and knew that the best way to truly learn material is to explain it to others,” he says. Now on his fourth cycle of assisting, he has created several tutorial videos, motivated by helping others reach their “ah-hah” moments as well.
“This course essentially embodies the MIT spirit of drinking from the firehose,” says Ana Bell. “It's a tough course and fast-paced. If you get through it, you are rewarded with an immense feeling of accomplishment.” And perhaps, also, a new life-changing opportunity.
In the wake of a disaster, responding agencies need to assess damage quickly in order to figure out where to focus their efforts and where debris might block rescue crews.
Adam Norige, associate leader of MIT Lincoln Laboratory's Humanitarian Assistance and Disaster Relief Systems Group, says currently these assessments are conducted by “literally driving around or flying a small aircraft and taking digital camera pictures to document the damage.”
But manually monitoring debris is a slow process. So Lincoln Laboratory is undertaking a multidivisional effort to revolutionize this task in disaster response.
“We are trying to show, from a research and development perspective, that we can automate the debris quantification process with Lincoln Laboratory optics and specialized algorithms,” Norige says. By automating the analysis of debris data, laboratory researchers hope to better prepare the Federal Emergency Management Agency (FEMA) for future disasters and reduce the time and cost of planning tasks to repair or clear damages.
A team in Puerto Rico has used the Airborne Optical Systems Testbed (AOSTB) to develop a baseline lidar map of the entire island, showing the latest topographical conditions and debris resulting from Hurricane Irma and Hurricane Maria in 2017. If another hurricane hits the island in the future, FEMA can track the damage that occurs by comparing subsequent lidar scans to the baseline data.
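The baseline comparison Khan's team envisions amounts to change detection between elevation grids. As a purely illustrative sketch (not the Laboratory's actual processing pipeline), assuming pre- and post-storm lidar scans have been rasterized into NumPy elevation arrays, differencing and thresholding the grids flags where the terrain has changed:

```python
import numpy as np

# Toy change-detection sketch: compare a post-storm lidar-derived
# elevation grid against a pre-storm baseline and flag cells whose
# elevation shifted by more than a threshold. The names and the
# 0.5 m threshold here are illustrative assumptions, not values
# from the AOSTB system.
def changed_cells(baseline, post, threshold_m=0.5):
    """Return a boolean mask of cells with |elevation change| > threshold_m."""
    return np.abs(post - baseline) > threshold_m

# Small synthetic example: a flat 4x4 baseline with two changes.
baseline = np.zeros((4, 4))
post = baseline.copy()
post[1, 1] = 2.0    # e.g., a new debris pile raises the surface
post[2, 3] = -1.0   # e.g., a washed-out road section lowers it

mask = changed_cells(baseline, post)
print(int(mask.sum()))  # prints 2
```

Real pipelines must also co-register the two scans and filter sensor noise before differencing, but the core idea is the same: the baseline map turns every future scan into a direct measure of what the storm changed.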
The AOSTB utilizes single-photon-sensitive, time-of-flight imaging technology to collect information about the surface characteristics of the land below. This advanced lidar system, developed by the Active Optical Systems Group, is 10 to 100 times more capable than any commercial system available and can collect wide-area, high-resolution, 3-D datasets very rapidly.
Jalal Khan, leader of the Active Optical Systems Group, says: “Data is great, but what people really want are answers to specific questions … Where can I drive? Where can I position relief supplies? Where can I pitch tents? Where are there downed power lines? Where can I land a helicopter?”
The maps generated by the AOSTB will help FEMA personnel assess damages, quantify debris, inspect infrastructure, and monitor erosion and reconstruction.
Since the first mission was flown on May 31, laboratory staff, assisted by engineers from 3DEO (a small business located in Massachusetts), have now mapped the entire island of Puerto Rico and the Puerto Rican islands of Vieques and Culebra, crisscrossing over the land during nightly sorties on a BT-67 (a remanufactured and modified DC-3) aircraft. They plan to complete two additional flight campaigns over the next nine months, or as the hurricane season demands.
“One of our goals in this effort was to increase our daily lidar area collection rate,” says John Aldridge, the assistant leader of the Humanitarian Assistance and Disaster Relief Systems Group. “This aircraft is a key enabler, in that it is a long-endurance aircraft capable of eight-hour missions with plenty of space on board for flight crew and support equipment.”
Back on the ground, Anthony Lapadula and Matthew Daggett of the Humanitarian Assistance and Disaster Relief Systems Group led the data management and analysis efforts, while Luke Skelly and Alexandru Vasile of the Active Optical Systems Group led the development of advanced data exploitation algorithms.
Khan says the foundational data collected in Puerto Rico “will form the basis of all analysis and will also be used in the development of automated algorithms that will find points of interests — buildings, roads, powerlines.”
“The map will also form the baseline data against which to compare future maps,” Khan says. “If another hurricane hits, we will be able to see the damage. That is really powerful.”
Since completing the first flight campaign, the team is now working with FEMA to understand how the lidar data can best be translated for operational uses. The FEMA Transportation Sector can immediately use the data to identify sections of roads that were damaged or washed away by Hurricane Maria. Data such as those used to model flood plains are valuable for planning new infrastructure.
Over the next few months, three staff members will be stationed at the FEMA Joint Recovery Office in Puerto Rico to help facilitate this work.
“There are so many questions that can be asked and answered with the data, and we are only just now getting started,” Aldridge says.

Lincoln Laboratory previously used the AOSTB to assess damage in Houston, Texas, after Hurricane Harvey.
Eventually, Lincoln Laboratory staff plan to add additional sensors to the test bed and make maps of other areas throughout the United States that are susceptible to disasters.
“We hope [the AOSTB] opens the door to all the other sensors that could be used on this platform,” Norige says. “For example, if you used infrared sensors after a disaster, you could find people who are stuck on rooftops or under rubble. There are quite a few modalities we envision coming together on an aircraft.”
While still an undergraduate at MIT, Luisa Kenausis ’17 co-founded MIT Students for Nuclear Arms Control. The organization’s goal: to raise awareness of nuclear arms control issues. As a Herbert Scoville Jr. Peace Fellow at the Center for Arms Control and Non-Proliferation this spring, Kenausis continued her work to raise public awareness of these issues.
Kenausis grew up in Bethel, Connecticut, a small town a couple of hours from New York City, and attended the town’s public high school, Bethel High School, where she played the saxophone in the school’s award-winning marching band and developed a love for mathematics. “I ran out of math classes to take when I was a sophomore,” Kenausis recalls, “so during my junior year I started taking math classes at a community college.” At MIT, she double-majored in political science and nuclear science and engineering, working with professors Scott Kemp and Vipin Narang. Her senior thesis, “North Korea’s Nuclear Weapons: Understanding the Nuclear Tests and the Current Trajectory of the Weapons Program,” explored both technical and political aspects of North Korea’s nuclear weapons program. Kenausis has just accepted a job at the Stanley Foundation in Iowa where she will continue work in nuclear policy.
Kenausis spoke with the Department of Nuclear Science and Engineering about her fellowship experience and next steps in her career.
Q: What did you hope to achieve this spring through the Herbert Scoville Jr. Peace Fellowship?
A: I had a couple of goals. I wanted to meet and connect with people in the field — the fellowship has really helped a lot with that. We had small meetings of the Scoville Fellows with leaders who were experts in a particular field. We met senior-level staff from non-governmental organizations and former government officials through these sessions. Those were super helpful, not only because we got to speak with that person, but for practicing how to speak with someone in that position and thinking about how to formulate questions around issues. My other goal was to expand my issue areas and expose myself to new questions and new areas of research that I might not have been aware of when I was a student. As a result, I have become interested in defense spending policy — the way that money is authorized and allocated to the military, for instance. It’s an area that is really fascinating but not well understood.
Q: What was the most rewarding part of this fellowship experience?
A: A couple of individual experiences stand out. We prepared questions to send to the congressional offices before Secretary of State Mike Pompeo testified in front of the Senate Foreign Relations Committee. It was just super exciting to be working on questions that might be asked in a congressional hearing in a directly meaningful way. Another was a little unexpected and rewarding — I was able to contribute and participate as part of a team straight out of college. Being a Scoville Fellow gave me credibility with people I met for the first time and opened doors to experiences I would never have had otherwise.
Q: What’s your favorite memory from MIT?
A: It’s from my senior year. My group in the senior nuclear design class, taught by Professor Mike Short, won the final competition for the best project. As the winners we got a trip to Singapore to participate in a hackathon. We not only competed in the hackathon but also got to explore the city. It was just such an amazing and fun experience. We had an awesome time.
Q: How did you become interested in nuclear policy?
A: I started out in nuclear science, but I was not really in love with just the technical side. I thought it was interesting and challenging, and I’ve always loved to be challenged — it was one of the things I really liked about NSE — but it didn’t really ignite my passion. Then I took my first class on nuclear weapons proliferation, with nuclear weapons historian Professor Frank Gavin, during my junior fall semester. That’s when I felt, “Wait, this is why I think nuclear science is so important” — this is so insane and such a huge issue, and it just seems like people aren’t thinking about it. So after I took that one class I added political science as my second major and built up my own little nuclear weapons-focused curriculum. I got into nuclear policy because I just didn’t understand why people weren’t really concerned about this.
Q: What do you do for fun outside of research and work?
A: I like to work out, and I really like dancing, and I really love cooking. In the last couple of years, I’ve really gotten into cooking. Now I spend a lot of my weekends just cooking for the week. It’s really fun.
MIT engineers have united the principles of self-assembly and 3-D printing using a new technique, which they highlight today in the journal Advanced Materials.
Using their direct-write colloidal assembly process, the researchers can build centimeter-high crystals, each made from billions of individual colloids — particles between 1 nanometer and 1 micrometer across.
“If you blew up each particle to the size of a soccer ball, it would be like stacking a whole lot of soccer balls to make something as tall as a skyscraper,” says study co-author Alvin Tan, a graduate student in MIT’s Department of Materials Science and Engineering. “That’s what we’re doing at the nanoscale.”
The researchers found a way to print colloids such as polymer nanoparticles in highly ordered arrangements, similar to the atomic structures in crystals. They printed various structures, such as tiny towers and helices, that interact with light in specific ways depending on the size of the individual particles within each structure.
Nanoparticles dispensed from a needle onto a rotating stage, creating a helical crystal containing billions of nanoparticles. (Credit: Alvin Tan)
The team sees the 3-D printing technique as a new way to build self-assembled materials that leverage the novel properties of nanocrystals at larger scales, enabling devices such as optical sensors, color displays, and light-guided electronics.
“If you could 3-D print a circuit that manipulates photons instead of electrons, that could pave the way for future applications in light-based computing, that manipulate light instead of electricity so that devices can be faster and more energy efficient,” Tan says.
Tan’s co-authors are graduate student Justin Beroz, assistant professor of mechanical engineering Mathias Kolle, and associate professor of mechanical engineering A. John Hart.
Out of the fog
Colloids are any large molecules or small particles, typically measuring between 1 nanometer and 1 micrometer in diameter, that are suspended in a liquid or gas. Common examples of colloids are fog, which is made up of soot and other ultrafine particles dispersed in air, and whipped cream, which is a suspension of air bubbles in heavy cream. The particles in these everyday colloids are completely random in their size and the ways in which they are dispersed through the solution.
If uniformly sized colloidal particles are driven together via evaporation of their liquid solvent, causing them to assemble into ordered crystals, it is possible to create structures that, as a whole, exhibit unique optical, chemical, and mechanical properties. These crystals can exhibit properties similar to interesting structures in nature, such as the iridescent cells in butterfly wings, and the microscopic, skeletal fibers in sea sponges.
So far, scientists have developed techniques to evaporate and assemble colloidal particles into thin films to form displays that filter light and create colors based on the size and arrangement of the individual particles. But until now, such colloidal assemblies have been limited to thin films and other planar structures.
“For the first time, we’ve shown that it’s possible to build macroscale self-assembled colloidal materials, and we expect this technique can build any 3-D shape, and be applied to an incredible variety of materials,” says Hart, the senior author of the paper.
Building a particle bridge
The researchers created tiny three-dimensional towers of colloidal particles using a custom-built 3-D-printing apparatus consisting of a glass syringe and needle, mounted above two heated aluminum plates. The needle passes through a hole in the top plate and dispenses a colloid solution onto a substrate attached to the bottom plate.
The team evenly heats both aluminum plates so that as the needle dispenses the colloid solution, the liquid slowly evaporates, leaving only the particles. The bottom plate can be rotated and moved up and down to manipulate the shape of the overall structure, similar to how you might move a bowl under a soft ice cream dispenser to create twists or swirls.
Beroz says that as the colloid solution is pushed through the needle, the liquid acts as a bridge, or mold, for the particles in the solution. The particles “rain down” through the liquid, forming a structure in the shape of the liquid stream. After the liquid evaporates, surface tension between the particles holds them in place, in an ordered configuration.
As a first demonstration of their colloid printing technique, the team worked with solutions of polystyrene particles in water, and created centimeter-high towers and helices. Each of these structures contains 3 billion particles. In subsequent trials, they tested solutions containing different sizes of polystyrene particles and were able to print towers that reflected specific colors, depending on the individual particles’ size.
“By changing the size of these particles, you drastically change the color of the structure,” Beroz says. “It’s due to the way the particles are assembled, in this periodic, ordered way, and the interference of light as it interacts with particles at this scale. We’re essentially 3-D-printing crystals.”
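The size-to-color relationship Beroz describes follows the textbook Bragg condition for an ordered colloidal crystal. The sketch below is not from the paper — the fcc packing assumption, refractive indices, and particle sizes are standard illustrative values — but it shows why larger particles red-shift the reflected color.

```python
import math

# Illustrative estimate (not from the paper) of the wavelength reflected
# at normal incidence by an fcc crystal of touching spheres, using the
# Bragg relation: lambda = 2 * d111 * n_eff. Indices and the 74 percent
# fcc packing fraction are textbook values for polystyrene in air.

def reflected_wavelength_nm(diameter_nm, n_particle=1.59, n_medium=1.0):
    """Estimate the normally reflected wavelength for an fcc colloidal
    crystal of touching spheres of the given diameter."""
    d111 = diameter_nm * math.sqrt(2.0 / 3.0)   # (111) plane spacing, fcc
    phi = 0.74                                  # fcc packing fraction
    n_eff = phi * n_particle + (1 - phi) * n_medium  # volume-averaged index
    return 2.0 * d111 * n_eff

# Larger particles shift the reflected color toward the red.
print(round(reflected_wavelength_nm(200)))  # blue region of the spectrum
print(round(reflected_wavelength_nm(260)))  # red-shifted relative to 200 nm
```

Changing the particle diameter by a few tens of nanometers sweeps the reflected peak across the visible spectrum, which is consistent with the color changes the team observed.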
The team also experimented with more exotic colloidal particles, namely silica and gold nanoparticles, which can exhibit unique optical and electronic properties. They printed millimeter-tall towers made from 200-nanometer diameter silica nanoparticles, and 80-nanometer gold nanoparticles, each of which reflected light in different ways.
“There are a lot of things you can do with different kinds of particles ranging from conductive metal particles to semiconducting quantum dots, which we are looking into,” Tan says. “Combining them into different crystal structures and forming them into different geometries for novel device architectures, I think that would be very effective in fields including sensing, energy storage, and photonics.”
This work was supported, in part, by the National Science Foundation, the Singapore Defense Science Organization Postgraduate Fellowship, and the National Defense Science and Engineering Graduate Fellowship Program.
To diagnose depression, clinicians interview patients, asking specific questions — about, say, past mental illnesses, lifestyle, and mood — and identify the condition based on the patient’s responses.
In recent years, machine learning has been championed as a useful aid for diagnostics. Machine-learning models, for instance, have been developed that can detect words and intonations of speech that may indicate depression. But these models tend to predict that a person is depressed or not, based on the person’s specific answers to specific questions. These methods are accurate, but their reliance on the type of question being asked limits how and where they can be used.
In a paper being presented at the Interspeech conference, MIT researchers detail a neural-network model that can be unleashed on raw text and audio data from interviews to discover speech patterns indicative of depression. Given a new subject, it can accurately predict if the individual is depressed, without needing any other information about the questions and answers.
The researchers hope this method can be used to develop tools to detect signs of depression in natural conversation. In the future, the model could, for instance, power mobile apps that monitor a user’s text and voice for mental distress and send alerts. This could be especially useful for those who can’t get to a clinician for an initial diagnosis, due to distance, cost, or a lack of awareness that something may be wrong.
“The first hints we have that a person is happy, excited, sad, or has some serious cognitive condition, such as depression, is through their speech,” says first author Tuka Alhanai, a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “If you want to deploy [depression-detection] models in [a] scalable way … you want to minimize the amount of constraints you have on the data you’re using. You want to deploy it in any regular conversation and have the model pick up, from the natural interaction, the state of the individual.”
The technology could still, of course, be used for identifying mental distress in casual conversations in clinical offices, adds co-author James Glass, a senior research scientist in CSAIL. “Every patient will talk differently, and if the model sees changes maybe it will be a flag to the doctors,” he says. “This is a step forward in seeing if we can do something assistive to help clinicians.”
The other co-author on the paper is Mohammad Ghassemi, a member of the Institute for Medical Engineering and Science (IMES).
The key innovation of the model lies in its ability to detect patterns indicative of depression, and then map those patterns to new individuals, with no additional information. “We call it ‘context-free,’ because you’re not putting any constraints into the types of questions you’re looking for and the type of responses to those questions,” Alhanai says.
Other models are provided with a specific set of questions, and then given examples of how a person without depression responds and examples of how a person with depression responds — for example, to the straightforward inquiry, “Do you have a history of depression?” Such a model then uses those exact responses to determine if a new individual is depressed when asked the exact same question. “But that’s not how natural conversations work,” Alhanai says.
The researchers, on the other hand, used a technique called sequence modeling, often used for speech processing. With this technique, they fed the model sequences of text and audio data from questions and answers, from both depressed and non-depressed individuals, one by one. As the sequences accumulated, the model extracted speech patterns that emerged for people with or without depression. Words such as, say, “sad,” “low,” or “down,” may be paired with audio signals that are flatter and more monotone. Individuals with depression may also speak slower and use longer pauses between words. These text and audio identifiers for mental distress have been explored in previous research. It was ultimately up to the model to determine if any patterns were predictive of depression or not.
“The model sees sequences of words or speaking style, and determines that these patterns are more likely to be seen in people who are depressed or not depressed,” Alhanai says. “Then, if it sees the same sequences in new subjects, it can predict if they’re depressed too.”
This sequencing technique also helps the model look at the conversation as a whole and note differences between how people with and without depression speak over time.
The researchers trained and tested their model on a dataset of 142 interactions from the Distress Analysis Interview Corpus, which contains audio, text, and video interviews of patients with mental-health issues and virtual agents controlled by humans. Each subject is rated for depression on a scale from 0 to 27 using the Patient Health Questionnaire. Scores above a cutoff between moderate (10 to 14) and moderately severe (15 to 19) are considered depressed, while all those below that threshold are considered not depressed. Out of all the subjects in the dataset, 28 (20 percent) are labeled as depressed.
In experiments, the model was evaluated using the metrics of precision and recall. Precision measures the fraction of subjects the model flagged as depressed who had actually been diagnosed as depressed. Recall measures the fraction of all diagnosed subjects in the dataset that the model correctly detected. The model scored 71 percent on precision and 83 percent on recall; the combined F1 score, which balances the two, was 77 percent. In the majority of tests, the researchers’ model outperformed nearly all other models.
One key insight from the research, Alhanai notes, is that, during experiments, the model needed much more data to predict depression from audio than from text. With text, the model can accurately detect depression using an average of seven question-answer sequences. With audio, the model needed around 30 sequences. “That implies that the patterns in words people use that are predictive of depression happen in a shorter time span in text than in audio,” Alhanai says. Such insights could help the MIT researchers, and others, further refine their models.
This work represents a “very encouraging” pilot, Glass says. But now the researchers seek to discover what specific patterns the model identifies across scores of raw data. “Right now it’s a bit of a black box,” Glass says. “These systems, however, are more believable when you have an explanation of what they’re picking up. … The next challenge is finding out what data it’s seized upon.”
The researchers also aim to test these methods on additional data from many more subjects with other cognitive conditions, such as dementia. “It’s not so much detecting depression, but it’s a similar concept of evaluating, from an everyday signal in speech, if someone has cognitive impairment or not,” Alhanai says.
In intensive care units, where patients come in with a wide range of health conditions, triaging relies heavily on clinical judgment. ICU staff run numerous physiological tests, such as bloodwork and checking vital signs, to determine if patients are at immediate risk of dying if not treated aggressively.
Enter: machine learning. Numerous models have been developed in recent years to help predict patient mortality in the ICU, based on various health factors during their stay. These models, however, have performance drawbacks. One common type of “global” model is trained on a single large patient population. These might work well on average, but poorly on some patient subpopulations. On the other hand, another type of model analyzes different subpopulations — for instance, those grouped by similar conditions, patient ages, or hospital departments — but these models often have limited data for training and testing.
In a paper recently presented at the Knowledge Discovery and Data Mining (KDD) conference, MIT researchers describe a machine-learning model that functions as the best of both worlds: It trains specifically on patient subpopulations, but also shares data across all subpopulations to get better predictions. In doing so, the model can better predict a patient’s risk of mortality during their first two days in the ICU, compared to strictly global and other models.
The model first crunches physiological data in electronic health records of previously admitted ICU patients, some of whom died during their stay. In doing so, it learns strong predictors of mortality, such as low heart rate, high blood pressure, and various lab test results — high glucose levels and white blood cell count, among others — over the first few days, and breaks the patients into subpopulations based on their health status. Given a new patient, the model can look at that patient’s physiological data from the first 24 hours and, using what it’s learned through analyzing those patient subpopulations, better estimate the likelihood that the new patient will also die in the following 48 hours.
Moreover, the researchers found that evaluating (testing and validating) the model by specific subpopulations also highlights performance disparities of global models in predicting mortality across patient subpopulations. This is important information for developing models that can more accurately work with specific patients.
“ICUs are very high-bandwidth, with a lot of patients,” says first author Harini Suresh, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “It’s important to figure out well ahead of time which patients are actually at risk and in more need of immediate attention.”
Co-authors on the paper are CSAIL graduate student Jen Gong, and John Guttag, the Dugald C. Jackson Professor in Electrical Engineering.
Multitasking and patient subpopulations
A key innovation of the work is that, during training, the model separates patients into distinct subpopulations, which captures aspects of a patient’s overall state of health and mortality risks. It does so by calculating a combination of physiological data, broken down by the hour. Physiological data include, for example, levels of glucose, potassium, and nitrogen, as well as heart rate, blood pH, oxygen saturation, and respiratory rate. Increases in blood pressure and potassium levels — a sign of heart failure, for instance — may set the health problems of one subpopulation apart from the others.
Next, the model employs multitask learning to build predictive models. When the patients are broken into subpopulations, a differently tuned model is assigned to each subpopulation. Each variant model can then more accurately make predictions for its personalized group of patients. This approach also allows the model to share data across all subpopulations when it’s making predictions. When given a new patient, it will match the patient’s physiological data to all subpopulations, find the best fit, and then better estimate the mortality risk from there.
“We’re using all the patient data and sharing information across populations where it’s relevant,” Suresh says. “In this way, we’re able to … not suffer from data scarcity problems, while taking into account the differences between the different patient subpopulations.”
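The two-stage idea — cluster patients by physiological features, build a per-subpopulation risk estimate, then score a new patient via the nearest cluster — can be sketched minimally. This is an illustration only, not the authors' implementation: the clustering method, the two toy features (heart rate and systolic blood pressure), and all the numbers are assumptions.

```python
import math

# Minimal sketch (not the paper's method) of subpopulation-based risk
# prediction: cluster patients, estimate mortality within each cluster,
# then score a new patient via the nearest cluster centroid.

def dist(a, b):
    return math.dist(a, b)

def kmeans(points, centroids, iters=10):
    """A few Lloyd iterations of k-means on small 2-D data."""
    for _ in range(iters):
        groups = [[] for _ in centroids]
        for p in points:
            groups[min(range(len(centroids)),
                       key=lambda i: dist(p, centroids[i]))].append(p)
        centroids = [
            [sum(c) / len(g) for c in zip(*g)] if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return centroids

# (heart rate, systolic BP, died?) — illustrative patient records
records = [
    (70, 115, 0), (72, 118, 0), (68, 120, 0), (75, 112, 0),
    (118, 165, 1), (122, 170, 1), (115, 160, 0), (120, 168, 1),
]
points = [[hr, bp] for hr, bp, _ in records]
centroids = kmeans(points, [[70, 115], [120, 165]])

# Per-cluster mortality rate acts as a simple subpopulation-specific model.
risk = []
for i, c in enumerate(centroids):
    members = [d for (hr, bp, d) in records
               if min(range(len(centroids)),
                      key=lambda j: dist([hr, bp], centroids[j])) == i]
    risk.append(sum(members) / len(members))

# A new patient is matched to the nearest subpopulation for a risk estimate.
new_patient = [119, 166]
cluster = min(range(len(centroids)),
              key=lambda i: dist(new_patient, centroids[i]))
print(round(risk[cluster], 2))  # → 0.75
```

The actual model shares information across subpopulations rather than fitting each group in isolation, which is what lets it avoid the data-scarcity problem Suresh describes.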
“Patients admitted to the ICU often differ in why they’re there and what their health status is like. Because of this, they’ll be treated very differently,” Gong adds. Clinical decision-making aids “should account for the heterogeneity of these patient populations … and make sure there is enough data for accurate predictions.”
A key insight from this method, Gong says, came from using a multitasking approach to also evaluate a model’s performance on specific subpopulations. Global models are often evaluated on overall performance, across entire patient populations. But the researchers’ experiments showed these models actually underperform on subpopulations. The global model tested in the paper predicted mortality fairly accurately overall, but dropped several percentage points in accuracy when tested on individual subpopulations.
Such performance disparities are difficult to measure without evaluating by subpopulations, Gong says: “We want to evaluate how well our model does, not just on a whole cohort of patients, but also when we break it down for each cohort with different medical characteristics. That can help researchers in better predictive model training and evaluation.”
The researchers tested their model using data from the MIMIC Critical Care Database, which contains scores of data on heterogeneous patient populations. Of around 32,000 patients in the dataset, more than 2,200 died in the hospital. They used 80 percent of the dataset to train, and 20 percent to test the model.
Using data from the first 24 hours, the model clustered the patients into subpopulations with important clinical differences. Two subpopulations, for instance, contained patients with elevated blood pressure over the first several hours — but in one it decreased over time, while the other maintained the elevation throughout the day. The latter subpopulation had the highest mortality rate.
Using those subpopulations, the model predicted the patients’ mortality over the following 48 hours with high specificity and sensitivity, among various other metrics. The multitask model outperformed a global model by several percentage points.
Next, the researchers aim to use more data from electronic health records, such as treatments the patients are receiving. They also hope, in the future, to train the model to extract keywords from digitized clinical notes and other information.
The work was supported by the National Institutes of Health.
The skies were varying shades of blue, and the trees were drawn in different shapes and colors. Some of the paintings were serene, while others were abstract, but all of them reflected the creativity of the MIT community.
The Office of the Chancellor and MindHandHeart sponsored a Paint Nite for MIT graduate students on Aug. 22 at The Thirsty Ear Pub. Chancellor Cynthia Barnhart and MindHandHeart Executive Administrator Maryanne Kirkbride provided opening remarks, informing students of their offices’ programs and services, such as the Accessing Resources Coalition, the MIT Student Support Hub, and the MindHandHeart Innovation Fund.
While the students enjoyed hors d'oeuvres and refreshments, a local art instructor led them in painting a tree against a sky background. The sold-out event was organized by the Graduate Student Council Activities Committee, and spearheaded by graduate students Shaiyan Keshvari, Mukund Gupta, and Xueying Zhao.
The student organizers were motivated to establish Paint Nites to provide their peers with an opportunity to unwind in a community setting.
“We love the space at The Thirsty and want to use it as much as possible,” Keshvari said. “I thought about what activities we could do that are fun, easy to run, and absolutely stress-free for participants. I’d heard about Paint Nites from a friend, and I thought we should try it out here.”
The first in a series of Paint Nites, the event appeared to be a success. “Some people are absorbed in their painting, some people are socializing, and it seems like everyone is having a good time, which was the goal,” said Keshvari.
Zhao added: “I think most MIT grad students are scientists and engineers, and this will help them to realize that they can also be artists. Everyone’s life needs some art and painting can be de-stressing. It’s not a competition and no one is judging your work — it’s just enjoyable.”
Graduate student Nicole Moody said of her painting: “It has sort-of evolved into a willow tree. When I was making the leaves some of the paint dripped down, so it looks like a fall scene. I’m here with four of my friends and we plan to hang our paintings along the staircase of our residence, so we can see our nice trees and remember this fun activity.”
Upcoming Paint Nites will be sponsored by the Office of the Vice Chancellor, the Office of Graduate Education, Community Wellness at MIT Medical, the International Students Office, and the Atlas Service Center.
Ian Marius Peters, now an MIT research scientist, was working on solar energy research in Singapore in 2013 when he encountered an extraordinary cloud of pollution. The city was suddenly engulfed in a foul-smelling cloud of haze so thick that from one side of a street you couldn’t see the buildings on the other side, and the air had the acrid smell of burning. The event, triggered by forest fires in Indonesia and concentrated by unusual wind patterns, lasted two weeks, quickly causing stores to run out of face masks as citizens snapped them up to aid their breathing.
While others were addressing the public health issues of the thick air pollution, Peters’ co-worker Andre Nobre from Cleantech Energy Corp., whose field is also solar energy, wondered about what impact such hazes might have on the output of solar panels in the area. That led to a years-long project to try to quantify just how urban-based solar installations are affected by hazes, which tend to be concentrated in dense cities.
Now, the results of that research have been published in the journal Energy & Environmental Science, and the findings show that these effects are indeed substantial. In some cases, haze can mean the difference between a successful solar power installation and one that ends up failing to meet expected production levels — and possibly operates at a loss.
After initially collecting data on both the amount of solar radiation reaching the ground, and the amount of particulate matter in the air as measured by other instruments, Peters worked with MIT associate professor of mechanical engineering Tonio Buonassisi and three others to find a way to calculate the amount of sunlight that was being absorbed or scattered by haze before reaching the solar panels. Finding the necessary data to determine that level of absorption proved to be surprisingly difficult.
Eventually, they were able to collect data in Delhi, India, providing measures of insolation and of pollution over a two-year period — and confirmed significant reductions in the solar-panel output. But unlike Singapore, what they found was that “in Delhi it’s constant. There’s never a day without pollution,” Peters says. There, they found the annual average level of attenuation of the solar panel output was about 12 percent.
While that might not sound like such a large amount, Peters points out that it is larger than the profit margins for some solar installations, and thus could literally be enough to make the difference between a successful project and one that fails — not only impacting that project, but also potentially causing a ripple effect by deterring others from investing in solar projects. If the size of an installation is based on expected levels of sunlight reaching the ground in that area, without considering the effects of haze, it will instead fall short of meeting its intended output and its expected revenues.
“When you’re doing project planning, if you haven’t considered air pollution, you’re going to undersize, and get a wrong estimate of your return on investment,” Peters says.
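The sizing problem Peters describes reduces to simple arithmetic. The sketch below is a back-of-envelope illustration in which the plant size, capacity factor, and tariff are all assumed values; only the 12 percent Delhi attenuation figure comes from the study discussed above.

```python
# Back-of-envelope sketch of the sizing problem. All numbers are assumed
# for illustration (plant size, capacity factor, tariff); only the 12
# percent annual attenuation figure for Delhi comes from the study.

capacity_kw = 10_000          # hypothetical 10 MW rooftop portfolio
capacity_factor = 0.18        # assumed clear-sky planning estimate
tariff_usd_per_kwh = 0.08     # assumed sale price of energy
attenuation = 0.12            # annual average output loss from haze (Delhi)

expected_kwh = capacity_kw * capacity_factor * 8760   # planned annual yield
actual_kwh = expected_kwh * (1 - attenuation)         # haze-adjusted yield
shortfall_usd = (expected_kwh - actual_kwh) * tariff_usd_per_kwh

print(round(expected_kwh - actual_kwh))   # kWh lost per year to haze
print(round(shortfall_usd))               # annual revenue shortfall, USD
```

With these assumed numbers, the unmodeled 12 percent attenuation erases well over $100,000 of annual revenue on a single 10 MW portfolio — which is how a margin-thin project can tip from profit to loss.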
After their detailed Delhi study, the team examined preliminary data from 16 other cities around the world, and found impacts ranging from 2 percent for Singapore to over 9 percent for Beijing, Dhaka, Ulan Bator, and Kolkata. In addition, they looked at how different types of solar cells — gallium arsenide, cadmium telluride, and perovskite — are affected by the hazes, because of their different spectral responses. All of them were affected even more strongly than the standard silicon panels they initially studied, with perovskite, a highly promising newer solar cell material, being affected the most (with over 17 percent attenuation in Delhi).
Many countries around the world have been moving toward greater installation of urban solar panels, with India aiming for 40 gigawatts (GW) of rooftop solar installations, while China already has 22 GW of them. Most of these are in urban areas. So the impact of these reductions in output could be quite severe, the researchers say.
In Delhi alone, the lost revenue from power generation could amount to as much as $20 million annually; for Kolkata about $16 million; and for Beijing and Shanghai it’s about $10 million annually each, the team estimates. Planned installations in Los Angeles could lose between $6 million and $9 million.
Overall, they project, the potential losses “could easily amount to hundreds of millions, if not billions of dollars annually.” And if systems are under-designed because of a failure to take hazes into account, that could also affect overall system reliability, they say.
Peters says that the major health benefits related to reducing levels of air pollution should be motivation enough for nations to take strong measures, but this study “hopefully is another small piece of showing that we really should improve air quality in cities, and showing that it really matters.”
The research team also included S. Karthik of Cleantech Energy Corp. in Singapore, and Haohui L. of the National University of Singapore. The work was supported by Singapore’s National Research Foundation through the Singapore-MIT Alliance for Research and Technology and by the U.S. Department of Energy and National Science Foundation.
For many people, the Inca city of Machu Picchu in the Andes of Peru is one of the most recognizable icons of archaeological and adventure tourism in the world. However, for the Peruvian people and for the international scientific community, Machu Picchu is much more than a tourist destination. In addition to being a United Nations Educational, Scientific and Cultural Organization (UNESCO) World Heritage Site, the historic sanctuary has great cultural and economic importance for Peru and the region of Cusco.
The first references to attempts to document the city of Machu Picchu date back to the late 19th century, when Peruvian and European explorers toured the rugged mountains around the meandering Urubamba River. Some explorers did not hesitate to inscribe a record of their visit in the rock itself. On a wall of the Temple of the Three Windows, Agustin Lizarraga recorded "July 14, 1902."
But it was Yale University Professor Hiram Bingham who extensively documented the site during his expedition in 1911 and made the existence of the lost ruins of the Incas known to the international community. Over the last 100 years, dozens of archaeological expeditions have deepened appreciation of the site's architectural value, as well as scientific knowledge of the extraordinary technologies developed by the Incas.
In order to digitally document and develop the foundations for future research, a laboratory team from the MIT Department of Architecture, led by Professor Takehiko Nagakura and PhD student Paloma Gonzales, has been working on the MISTI Global Seed Fund Machu Picchu Design Heritage project since 2016.
The team, the Architecture Representation and Computation Group, has led the first extensive expedition to digitally document Machu Picchu, using the latest generation of instruments and techniques to explore the site’s architectural and urban importance and develop a 3-D site map using virtual reality and augmented reality. The Architecture Representation and Computation Group has an important record of working with digital capturing technologies on World Heritage Sites in Italy, China, Singapore, and Japan.
"We believe that documentation through computational techniques for the digitalization of architectural monuments is key to the preservation of the cultural heritage of humanity," Nagakura says. “But it is just a simple idea for old practice. From Renaissance time, architects have been going to building sites, and drawing them up to study them. We are just replacing tape measures and Mylar sheets with scanning tools and VR headsets.”
For the project in Peru, the team visited the archaeological complex on two occasions for several weeks in mid-2017 and early 2018. At the site, more than 9,000 images were collected through panoramic cameras, photogrammetric scanning tools, and drones. Gonzales says the working hours were “intense.”
“We had to reach the archaeological monument before the arrival of the tourists and stay after the closure of the monument," she says. “The great commitment and joint work of the MIT team and the San Antonio Abad del Cusco University, supported by the Decentralized Directorate of Culture of Cusco, made the work fruitful and rewarding.”
Based on the photogrammetric data they sampled, the team developed 3-D models and is working on creating virtual reality experiences that would allow people to immerse themselves in Machu Picchu from anywhere on the planet. The same 3-D models are also being deployed to make a new interactive map of Machu Picchu that superimposes the photographic 3-D view of the site through augmented reality.
Last December, the team launched the MIT Design Heritage Platform, where visitors can see and explore part of the work they have done. In addition, they plan to make this platform a tool to collect images from those who can contribute to the data bank through crowdsourcing.
The project has also managed to document the architectural characteristics and construction materials of the city with high-resolution photographic techniques. The images constitute a unique database with rich information on aspects such as landscape and vegetation at the time the photographs were taken. The team will make all of the information collected available to the authorities of the archaeological monument.
At the same time, they expect that other disciplines can use the databases and photogrammetric models they are developing. The documentation has already been used in conservation efforts, including in the reconstruction of Wiñay Wayna, an archeological site located on the Inca Trail leading to Machu Picchu that was destroyed by a recent flood.
Fernando Astete, anthropologist and head of the National Archaeological Park of Machu Picchu, says: "We are very excited with the MIT team work. We welcome all efforts to research and preserve Machu Picchu. We have to protect our heritage for the next generations.”
Architect Cesar Medina, responsible for the digitization of the national park, believes that the collaboration with MIT has been enriching.
“We have been working in 3-D documentation since 2013, but the collaboration with the MIT team led by Professor Nagakura, with the support of our local university, has allowed us to exhaustively document Machu Picchu, making use of the latest technologies and innovative techniques,” Medina says. “Moreover, we have had the opportunity to visit and get to know the work of his lab; we look forward with great interest to continuing to work with MIT in the future.”
The Architecture Representation and Computation Group is already in conversation with institutions of higher education and heritage conservation in Peru about continuing to advance the digital heritage project. In addition to continuing at Machu Picchu, they may extend the documentation to other archaeological sites in Peru. The project has also opened the doors to possible interdisciplinary collaborations with materials science researchers, urban planners, hydrologists, engineers, archaeologists, and historians.
The Machu Picchu Design Heritage project was made possible thanks to the MISTI Global Seed Funds. MISTI is a part of the Center for International Studies within the School of Humanities, Arts, and Social Sciences (SHASS). The project was also sponsored by the Council of Science, Technology and Technological Innovation of Peru, with the support of the National University of Saint Anthony the Abbot in Cuzco and the Decentralized Directorate of Culture of Cusco.
On a sunny Monday morning on the oval lawn in front of Kresge Auditorium, MIT President L. Rafael Reif, the Institute’s top administrators, and selected faculty delivered the annual Convocation welcome to the incoming first-year students of the Class of 2022.
Reif recalled some of his fears and concerns when he first arrived at MIT in 1980 as an assistant professor, including whether he would fit in, whether his accent would interfere with communications — and how he would cope with New England’s winters, after arriving from his native Venezuela, where winters were, as he described, “like this,” referring to the summery day with temperatures in the 80s.
“I knew almost no one, I was very far from home, and I was worried,” he said, in a story he described as being typical of many who arrive at MIT. “I was worried about fitting in on a campus more than 2,000 miles from my home.” But he concluded that “I’m here to tell you that all of my concerns, those anxious moments wondering if I had made the right decision, all of them were unfounded.”
Instead, he said, “I discovered a community of students, faculty, researchers, and staff that were a lot like me: They were curious, they asked questions, they were passionate, they liked to tinker. Most of them came from somewhere else. And they cared about helping each other and serving society.”
MIT is still that way, Reif said, in a way that’s deeply ingrained in its culture: “At MIT, I found my home.”
But it won’t always be easy, he told the incoming students. “Your classes will require equal parts of hard work, discipline, and dedication. You will enjoy great moments of success, but you may experience moments of doubt too,” he said. At those moments, he suggested, they should remember three things: “First, you belong here.” The selections overseen by Dean of Admissions Stuart Schmill, he said, show “a remarkable knack for finding the right students” for each year’s classes. In short, he said, “In Stu we trust.”
Second, he said, “all of us experience doubts about ourselves, even the distinguished professors you see on stage.” Those doubts often arise when pushing oneself or trying something new, he said. “If you have doubts about yourself, it’s just a sign that you are learning.”
And finally, he said, “you are surrounded by a community that cares about you. All of us are dedicated to your success, and we believe in you.”
Three faculty members spoke about their own experiences of MIT and of their initial feelings when they arrived here. Yoel Fink, a professor of materials science and engineering and director of the $300 million Advanced Functional Fabrics of America (AFFOA) institute, spoke of his time when he came to MIT as a graduate student, and couldn’t figure out where the Institute was after getting off the subway at Kendall Square. Growing up in Jerusalem, he was accustomed to schools and universities being surrounded by fences and guards, and was surprised to find the campus so open.
He came to realize that such openness was “an important aspect of MIT culture,” he said, and part of what makes it special: “openness, freedom, and with very few imposed boundaries.”
Fink recalled some humbling moments from his early days here, including when he scored a failing grade of 55 on one of his first midterm exams. After interviewing with dozens of professors, he still hadn’t come up with a research project after a year of trying, and his first manuscript was rejected by referees as being neither new nor interesting. And a professor offered him a table in her office because she said he was so shy and socially inept.
Then he described an early meeting after he had been given a thick book describing a research project that had just been funded by the U.S. Defense Advanced Research Projects Agency (DARPA). When he read the plan, he found that the solutions being proposed were “beautiful, but highly complex and certainly not very practical.” He thought of a much simpler approach, and was surprised that it wasn’t even mentioned in the report. As a student, he was at first reluctant to say anything about this, since the plan had been drawn up by highly respected, world-leading professors in their field. But at the end of the meeting, he decided to ask about this idea, and his question was greeted with a stunned silence.
In fact, the idea he was suggesting then turned out to form the basis for the discovery of a new type of mirror; then a paper in Nature; an invention that The New York Times described as “the perfect mirror”; his faculty position at MIT; much of his research since then; and the basis for a medical device that helped to cure 300,000 people — many of them with brain tumors. The idea also laid the groundwork for the creation of AFFOA, he said. “I owe my career to that chance moment and to asking that question,” he said. “MIT is the world’s best launchpad for ideas.”
Fink opened his comments by paying tribute to the recently deceased Sen. John McCain, citing his heroism and his long struggle with brain cancer — a struggle that is shared by Fink’s 14-year-old son and by a recent MIT student, he said.
Robotics researcher Cynthia Breazeal, an associate professor of media arts and sciences, described an event she helped to organize when she was a graduate student here. As part of a mini-Olympics during the January Independent Activities Period, she and her classmates decided to put on a tug-of-war, and to make it more interesting they decided to do it with a jello pit — and, being January, it all had to be indoors. She outlined the considerable research and labor needed to bring that about, including finding the supplies and making 500 gallons of non-edible green gelatin, and conducting the event without leaving any residue that would have to be cleaned up by others.
The story, she said, illustrated both MIT’s tolerance for wild and creative forms of play, and for MIT students’ willingness to work long and hard to achieve success in even their most frivolous undertakings. “Go out there and have some hard fun together,” she advised.
Collin Stultz, a professor of electrical engineering and computer science and head of MIT’s Institute for Medical Engineering and Science, said that there is one word that for him “really typifies the MIT experience … and that word is family — the MIT family.”
He said that, like a family, “we comfort each other in difficult times, we help each other when we struggle in the classroom or otherwise, and we collectively celebrate our accomplishments as students and faculty alike. Moreover, the distinction between faculty and students is a bit blurred here, more than in other institutions that I have been at in the past.” (Stultz is a graduate of Harvard University). “We all learn from one another, we grow with one another, and I hope that I have imparted to my students as much as they have imparted to me. … Welcome to the MIT family!”
When spraying paint or coatings onto a surface, or fertilizers or pesticides onto crops, the size of the droplets makes a big difference. Bigger drops will drift less in the wind, allowing them to strike their intended targets more accurately, but smaller droplets are more likely to stick when they land instead of bouncing off.
Now, a team of MIT researchers has found a way to balance those properties and get the best of both — sprays that don’t drift too far yet still produce tiny droplets that stick to the surface. The team accomplished this in a surprisingly simple way: by placing a fine mesh between the spray and the intended target to break up droplets into ones that are only one-thousandth as big.
The findings are reported today in the journal Physical Review Fluids, in a paper by MIT associate professor of mechanical engineering Kripa Varanasi, former postdoc Dan Soto, graduate student Henri-Louis Girard, and three others at MIT and at CNRS in Paris.
Earlier work by Varanasi and his team had focused on ways to get the droplets to stick more effectively to the surfaces they strike rather than bouncing away. The new study focuses on the other end of the problem — how to get the droplets to reach the surface in the first place. Varanasi explains that typically less than 5 percent of sprayed liquids actually stick to their intended targets; of the 95 percent or more that gets wasted, about half is lost to drift and never even gets there, and the other half bounces away.
Atomizers — devices that can spray liquids in the form of droplets so small that they remain suspended in air rather than settling out — are crucial parts of many industrial processes, including painting and coating, spraying fuel into engines or water into cooling towers, and printing with fine droplets of ink. The new advance developed by this team was to make the initial spray in the form of larger drops, which are much less affected by breezes and more likely to reach their targets, and then to create the much finer droplets just before they reach the surface, by placing a mesh screen in between.
Though the process could apply to many different spraying applications, “the big motivation is agriculture,” Varanasi says. The runoff of pesticides that miss their target and fall on the ground can be a significant cause of pollution and a waste of the expensive chemicals. What’s more, the impact of finer droplets is less likely to damage or weaken certain plants.
Farmers already cover some kinds of crops with fabric meshes, to protect against birds and insects devouring the plants, so the process is already familiar and widely used. Many kinds of mesh materials would work, the researchers say — what matters is the size of the openings in the mesh and the material’s thickness, parameters the team has precisely quantified through a series of lab experiments and mathematical analysis. For their experiments, the researchers primarily used a commonly available and inexpensive fine stainless steel mesh.
The researchers propose that, after deploying the mesh over the crop, either directly supported by the plant stalks or supported on a framework, a farmer could simply use a conventional sprayer that produces larger drops, which would stay on course even in breezy conditions. Then, as the drops reach the plants, they would be broken up by the mesh into fine droplets, each about a tenth of a millimeter across, which would greatly increase their chances of sticking.
As an extra bonus, the presence of the mesh over the crops could also protect them from damage from rainstorms, by also breaking up the raindrops into smaller droplets that place less stress on the plant when they strike. Crop damage from storms, which can seriously reduce yields in some cases, may be reduced in the process, the researchers say. In addition, bigger drops cause more splashing, which can lead to a spread of pathogens.
Besides being more efficient, the process may also reduce the problem of drift of pesticides, which sometimes blow from one farmer’s field to another, and even from one state to another, Varanasi says, and also sometimes end up in people’s homes. “People want to fix this. They’re looking for solutions.”
The same principle could be applied to other uses, Girard points out, such as the spraying of water into cooling towers such as those used for electric power plants and many industrial or chemical plants. Using a mesh below the spray heads in such towers “can create finer droplets, which evaporate faster and provide better cooling,” he says. Cooling efficiency is related to the drop’s surface area, which is three orders of magnitude greater with the finer droplets, he says.
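The surface-area gain follows from simple geometry: splitting a drop of diameter D into droplets of diameter d, while conserving volume, yields (D/d)³ droplets and multiplies the total surface area by D/d. A minimal sketch, with diameters chosen only for illustration:

```python
# Geometric sketch of why breakup increases total surface area.
# Conserving volume, one spherical drop of diameter D becomes
# n = (D/d)**3 droplets of diameter d, and the combined surface
# area grows by the factor D/d.

import math

def breakup(D, d):
    """Return (number of droplets, surface-area ratio) for a D -> d breakup."""
    n = (D / d) ** 3                  # from volume conservation
    area_before = math.pi * D ** 2    # sphere surface area = pi * D^2
    area_after = n * math.pi * d ** 2
    return n, area_after / area_before

n, ratio = breakup(D=1.0, d=0.1)      # e.g. a 1 mm drop into 0.1 mm droplets
print(int(n), round(ratio, 1))        # 1000 droplets, 10x total surface area
```

By this scaling, a thousandfold surface-area increase corresponds to droplets a thousandth of the original diameter; the specific ratios in any real sprayer depend on the drop sizes involved.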
In recent work, Varanasi and his team found a way to recover much of the water that gets lost to evaporation from such cooling towers, by using a different kind of mesh over the towers’ top. The new finding could be combined with that method, thus improving power plant efficiency on both the input and output sides.
For painting and for applying other kinds of coatings, the finer the droplets are, the better they cover and adhere, Girard says, so the process could improve the quality and durability of the coatings.
While most existing atomization methods rely on high pressure to force liquid through a narrow opening, which requires energy to create the pressure, this method is purely passive and mechanical, Girard says. “Here, we let the mesh do the atomization essentially for free.”
James Bird, an assistant professor of mechanical engineering at Boston University, who was not involved in this research, says this work “demonstrates a clever, and seemingly practical, method to aerosolize and disperse droplets. Yet, what impressed me most in this study is the elegance by which the authors dissect and recombine the complex dynamics to develop a fundamental understanding that is more than the sum of its parts.”
The team included Antoine Le Helloco, and Thomas Binder at MIT and David Quere at CNRS in Paris. The work was supported by the MIT-France program.
Today, scientists at CERN, the European Organization for Nuclear Research, have announced that, for the first time, they have observed the Higgs boson transforming into elementary particles known as bottom quarks as it decays. Physicists have predicted this to be the most common way in which most Higgs bosons should decay, but until now, it has been extremely difficult to pick out the decay’s subtle signals. The discovery is a significant step towards understanding how the Higgs boson gives mass to all the fundamental particles in the universe.
The scientists made their discovery using the ATLAS and the CMS detectors, two major experiments designed to analyze the high-energy particle collisions generated by CERN’s Large Hadron Collider (LHC) — the largest, most powerful particle accelerator in the world.
Higgs bosons, discovered in 2012, are an incredible rarity, produced in just one out of every billion LHC collisions. Once smashed into existence, the particles vanish almost immediately, decaying into a stream of secondary particles. The Standard Model of particle physics, which is the most widely accepted theory describing the interactions of the universe's fundamental particles, predicts that nearly 60 percent of Higgs bosons should decay to bottom quarks, elementary particles that are about four times as massive as a proton.
Both the ATLAS and CMS teams spent several years refining techniques and incorporating more data in their hunt for this most common Higgs boson decay. Both experiments ultimately confirmed, with a high degree of statistical confidence, that they had seen evidence of the Higgs boson decaying to bottom quarks.
MIT physicists in the Laboratory for Nuclear Science have been involved in analyzing and interpreting data for this new discovery, including Philip Harris, assistant professor of physics. MIT News spoke with Harris, who is also a member of the CMS experiment, about the mind-bending search for a vanishing transformation, and how the new Higgs discovery may help physicists to understand why the universe has mass.
Q: Put this discovery in context for us a bit. How significant is it that your team has observed the Higgs boson decaying to bottom quarks?
A: The Higgs boson has two distinct mechanisms: It gives mass to the force particles involved in electroweak interactions, the force responsible for nuclear beta decay; and it gives mass to the fundamental particles inside the atom, the quarks and the leptons (such as electrons and muons). Despite the fact that it is responsible for both mechanisms, the Higgs discovery and the subsequent Higgs property measurements have largely been performed with the electroweak force particles. We have only recently directly observed Higgs interactions with matter. This measurement, the Higgs boson decaying to a bottom quark, is the first time we have directly observed Higgs-to-quark interactions. This confirms that quarks do indeed get mass from the Higgs mechanism.
Q: How tricky was this detection to make, and how was it finally observed?
A: Roughly 60 percent of all Higgs decays are to bottom quarks. This is the largest single decay channel of the Higgs boson. However, it is also the channel that has the largest background [noise from surrounding particles]. Depending on how you count it, it’s about a million times larger than the channels we used to discover the Higgs boson.
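The effect of such a large background can be sketched with the standard counting-experiment figure of merit, S/√B (signal over the square root of background). The event counts below are invented for illustration and are not from ATLAS or CMS:

```python
# Illustrative sketch (numbers assumed, not experimental data): why a
# background roughly a million times larger than the signal makes this
# decay so hard to see. A rough figure of merit for the significance of
# a counting excess is S / sqrt(B).

import math

def significance(signal, background):
    """Approximate Gaussian significance of a signal over a counting background."""
    return signal / math.sqrt(background)

S = 1_000  # hypothetical signal events

print(round(significance(S, background=100 * S), 1))        # modest background: ~3.2 sigma
print(round(significance(S, background=1_000_000 * S), 2))  # 10^6 x background: ~0.03 sigma
```

The same signal that would stand out over a modest background all but disappears when the background grows a millionfold, which is why sharper selection and machine-learning techniques were needed to recover it.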
People like to compare Higgs measurements with finding a needle in a haystack. Here, I think that a more apt analogy is a magic-eye stereogram. You are looking for a broad distortion in the data that is very difficult to see. The trick of trying to see this distortion is like a magic eye: You have to figure out how to focus right.
To calibrate our “focus,” we looked at the electroweak force particle, the Z boson, and its decay to bottom quarks. Once we were able to see the Z boson going into bottom quarks, we set our target to the Higgs boson, and there it was. I should stress that to see this distortion clearly we had to rely on technology that was in its infancy at the time of the Higgs boson discovery, including some of the most recent advancements in machine learning. In fact, only a few years ago it was taught in your standard particle physics class that it was impossible to observe the Higgs decays in some of these channels.
Q: The original discovery of the Higgs boson was touted as a landmark that would ultimately solve the mystery of why atoms have mass. How will this new observation of the Higgs decay help solve this mystery?
A: Following the Higgs boson discovery, we have learned a lot about how the Higgs mechanism gives mass to different particles. However, many would argue that after the Higgs boson discovery, high energy physics has gotten even more interesting because it is starting to look like our conventional view of particle physics doesn’t fit just right.
One of the best ways to test our view is by measuring the properties of the Higgs boson. The Higgs-to-bottom-quark decay is essential to this understanding because it allows us to directly probe the properties of Higgs and quark matter interactions and because of its large decay rate, which means we can measure the Higgs boson in all sorts of scenarios that are not possible with other decay modes.
This observation gives us a new and powerful tool to probe the Higgs boson. In fact, as part of this measurement, we were able to measure Higgs bosons with energies over twice the energy of the highest Higgs bosons previously observed.
Nearly 100 years ago, the German chemist Otto Warburg discovered that cancer cells metabolize nutrients differently than most normal cells. His discovery launched the field of cancer metabolism research, but interest in this area waned; by the 1970s most cancer scientists had shifted their focus to the genetic mutations that drive cancer development.
In the past decade or so, interest in cancer metabolism has resurged, and the first drugs that target cancer cells’ abnormal metabolism were approved to treat leukemia in 2017.
“Cancer metabolism is a very sophisticated field at this point,” says Matthew Vander Heiden, an associate professor of biology at MIT. “We have a lot better understanding of what nutrients cancer cells use and what determines how those nutrients are used. This has led to different ways to think about drugs.”
Vander Heiden, who is also a member of MIT’s Koch Institute for Integrative Cancer Research, is one of the people responsible for the recent surge in cancer metabolism research. As a graduate student and postdoc, he published some of the first studies of how cancer cells alter their metabolism, and now his lab at MIT is devoted to the topic.
“All of the time that I was in grad school and working as a postdoc, I was never working in a lab that was dedicated to studying metabolism. So my vision, if someone gave me a job, was to set up a lab that could really be built in a way that would allow us to ask questions about metabolism,” he says.
Metabolism and cancer
Vander Heiden grew up in a small town in Wisconsin, and unlike most of his high school classmates, he headed out of state for college, to the University of Chicago. He was interested in science, so he decided on a pre-med track. A work-study job in a plant biology lab led him to discover that he also enjoyed doing research.
“At that point I already had this idea I was going to go to medical school, but then the idea of MD/PhD came up, and I ended up going down that path,” Vander Heiden says.
While in the MD/PhD program at the University of Chicago Medical School, he worked in the lab of Craig Thompson, now president of Memorial Sloan Kettering Cancer Center. At that time, Thompson was studying the biochemical regulation of apoptosis, the programmed cell death pathway. For his PhD thesis, Vander Heiden investigated the function of a protein called Bcl-x, which is a regulator of apoptosis found in the membranes of mitochondria — cell organelles responsible for generating energy.
“That project really got me thinking about how the mitochondria work and how metabolism works,” Vander Heiden recalls. “At the time, I came to the realization that we don’t understand cell metabolism anywhere near as well as we thought we did, and someone should really study this.”
After finishing his degrees, he spent five years doing clinical training, then decided to pursue research in cancer metabolism.
“Altered metabolism has been known about in cancer for 100 years, but few people were studying it,” Vander Heiden says. “The challenge was finding a lab that would allow me to study metabolism and cancer, which in 2004-2005 was not such an obvious thing to do.”
He ended up going to Harvard Medical School to work with Lewis Cantley, who studies signaling pathways in cells and was receptive to the idea of exploring cancer metabolism. There, Vander Heiden began studying an enzyme called pyruvate kinase M2 (PKM2), which is involved in regulation of glycolysis, a biochemical process that cells use to break down sugar for energy.
In 2008, Vander Heiden, Cantley, and others at Harvard Medical School reported that when cells shift between normal and Warburg (cancer-associated) metabolism, they start using PKM2 instead of PKM1, the enzyme that adult cells normally use for glycolysis. Cantley and Craig Thompson have since founded a company, Agios Pharmaceuticals, that is developing potential drugs that target PKM2, as well as other molecules involved in cancer metabolism.
While at Harvard, Vander Heiden also worked on a paper that contributed to the eventual development of drugs that target cancer cells with a mutation in the IDH gene. These drugs, the first modern FDA-approved cancer drugs that target metabolism, shut off an alternative pathway used by cancer cells with the IDH mutation.
New drug targets
In 2010, Vander Heiden became one of the first new faculty members hired after the creation of MIT’s Koch Institute, where he set up a lab focused on metabolism, particularly cancer metabolism.
His research has yielded many insights into the abnormal metabolism of cancer cells. In one study, together with other MIT researchers, he found that tumor cells turn on an alternative pathway that allows them to build lipids from the amino acid glutamine instead of the glucose that healthy cells normally use. He also found that altering the behavior of PKM2 to make it act more like PKM1 could stop tumor cell growth.
Studies such as these can offer insights that may help researchers to develop drugs that starve tumor cells of the nutrients they need, offering a new way to fight cancer, Vander Heiden says.
“If one wants to develop drugs that target metabolism, one really needs to focus on the context in which it’s happening, which is the environment of the cell plus the genetics of the cell,” he says. “That is what defines the sensitivity to drugs.”
About 30 percent of the proteins encoded by the human genome are membrane proteins — proteins that span the cell membrane so they can facilitate communication between cells and their environment. These molecules are critical for learning, seeing, and sensing odors, among many other functions.
Despite the prevalence of these proteins, scientists have had difficulty studying their structures and functions because the membrane-bound portions are very hydrophobic, so they cannot be dissolved in water. This makes it much harder to do structural analyses, such as X-ray crystallography.
In an advance that could make it easier to perform this type of structural study, MIT researchers have developed a way to make these proteins water-soluble by swapping some of their hydrophobic amino acids for hydrophilic ones. The technique is based on a code that is much simpler than previously developed methods for making these proteins soluble, which rely on computer algorithms that have to be adapted to each protein on a case-by-case basis.
“If there is no rule to follow, it’s difficult for people to understand how to do it,” says Shuguang Zhang, a principal research scientist in the MIT Media Lab’s Center for Bits and Atoms. “The tool has to be simple, something that anyone can use, not a sophisticated computer simulation that only a few people know how to use.”
Zhang is the senior author of the study, which appears in the Proceedings of the National Academy of Sciences the week of Aug. 27. Other MIT authors are former visiting professor Fei Tao, postdoc Rui Qing, former visiting professor Hongzhi Tang, graduate student Michael Skuhersky, former undergraduate Karolina Corin ’03, SM ’05, PhD ’11, former postdoc Lotta Tegler, graduate student Asmamaw Wassie, and former undergraduate Brook Wassie ’14.
A simple code
Of the approximately 8,000 known membrane proteins found in human cells, scientists have discovered structures for about 50. Membrane proteins are widely viewed as very difficult to work with because once they are extracted from the cell membrane, they only maintain their structure if they are suspended in a detergent, which mimics the hydrophobic environment of the cell membrane. These detergents are expensive, and there is no universal detergent that works for all membrane proteins.
Zhang started working on a new way to tackle this problem in 2010, inspired by the late Alexander Rich, an MIT professor of biology. Rich posed the question of whether protein structures called alpha helices, which make up the bulk of the membrane-embedded portion of proteins, could be switched from hydrophobic to hydrophilic. Zhang immediately began working out possible solutions, but the problem proved difficult. Over the past eight years, he has had several students and visiting researchers help work on his idea, most recently Qing, who achieved success.
The key idea that allowed Zhang to develop the code is the fact that a handful of hydrophobic amino acids have very similar structures to some hydrophilic amino acids. These similarities allowed Zhang to come up with a code in which leucine is converted to glutamine, isoleucine and valine are converted to threonine, and phenylalanine is converted to tyrosine.
Another important factor is that none of these amino acids are charged, so swapping them appears to have a minimal effect on the overall protein structure. In fact, isoleucine and threonine are so similar that ribosomes, the cell structures that assemble proteins, occasionally insert the wrong one — about once in every 200 to 400 occurrences.
The researchers call their code the QTY code, after the three letters that represent glutamine, threonine, and tyrosine, respectively.
In their earliest efforts to implement this code, the researchers substituted only a small fraction of the hydrophobic amino acids embedded in the membrane, but the resulting proteins still needed some detergent to dissolve. They increased the replacement rate to about 50 percent, but the proteins were still not fully water-soluble, so they replaced all instances of leucine, isoleucine, valine, and phenylalanine embedded in the membranes. This time, they achieved success.
“It’s only when we replace all the hydrophobic residues in the transmembrane regions that we’re able to get proteins that are stable and completely free of detergent in an aqueous system,” Qing says.
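The substitution scheme described above amounts to a fixed lookup table applied only to the membrane-embedded stretches of a sequence. A minimal sketch follows; the function name, the toy sequence, and the transmembrane positions are illustrative assumptions, not from the paper:

```python
# QTY code: replace hydrophobic residues with structurally similar
# hydrophilic ones, as described in the article.
QTY_CODE = {
    "L": "Q",  # leucine      -> glutamine
    "I": "T",  # isoleucine   -> threonine
    "V": "T",  # valine       -> threonine
    "F": "Y",  # phenylalanine -> tyrosine
}

def apply_qty(sequence, tm_regions):
    """Apply the QTY substitutions, but only within the given
    transmembrane regions (list of (start, end) index pairs,
    end exclusive); residues outside those regions are untouched."""
    residues = list(sequence)
    for start, end in tm_regions:
        for i in range(start, end):
            residues[i] = QTY_CODE.get(residues[i], residues[i])
    return "".join(residues)

# Toy example: a made-up 12-residue stretch whose middle six
# residues are treated as membrane-embedded.
print(apply_qty("MKLIVFAWLIVF", [(3, 9)]))  # -> MKLTTYAWQIVF
```

Note that the hydrophobic residues outside the declared transmembrane region are left alone, mirroring the finding that only the membrane-embedded residues need to be swapped to achieve detergent-free solubility.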
In this study, the researchers demonstrated their technique on four proteins that belong to a class of proteins known as G protein-coupled receptors. These proteins help cells to recognize molecules, such as hormones, or immune molecules, called chemokines, and trigger an appropriate response within the cell.
Joel Sussman, a professor of structural biology at the Weizmann Institute of Science, described the new method as “incredibly simple and elegant.”
“Although a number of scientists have been trying to find a way to ‘solubilize’ G protein-coupled receptors and other integral membrane proteins, until now their methods have not been of general use and often involved very complex computational methods that would not be widely applicable,” says Sussman, who was not involved in the research.
The researchers are still working towards obtaining the precise structures of these proteins using X-ray crystallography or nuclear magnetic resonance (NMR), but they performed some experiments that suggest the structures are similar. In one, they showed that the water-soluble proteins denature at nearly the same temperature as the original versions of the proteins. They also showed that the modified proteins bind to the same target molecules that the original proteins bind to, although not as strongly.
Being able to synthesize water-soluble versions of these proteins could enable new applications, such as sensors that can detect environmental pollutants, the researchers say.
Another possibility is designing water-soluble versions of the proteins that bind to molecules normally expressed by cancer cells, which could be used to diagnose tumors or identify metastatic cancer cells in blood samples, Zhang says. Researchers could also create water-soluble molecules in which a membrane-bound receptor that viruses normally bind to is attached to part of an antibody. If these “decoy therapies” were injected into the body, viruses would bind to the receptors and then be cleared by the immune system, which would be activated by the antibody portion.
The research was funded by OH2 Laboratories and the MIT Center for Bits and Atoms Consortium, which includes the Bay Valley Innovation Center.
MIT’s Solar Electric Vehicle Team (SEVT) completed its first race in three years, the 2018 American Solar Challenge (ASC), last month. The team placed fifth overall in the single-occupant vehicle class.
The event was a series of competitions during which the team proved its exceptional talent and problem-solving abilities. In order to qualify for the American Solar Challenge, teams had to successfully complete various tests and races that fall under one of two categories: Scrutineering and the Formula Sun Grand Prix (FSGP).
Scrutineering is a four-day process during which race officials test each car to confirm it complies with challenge regulations. Included are electrical, dynamic, and mechanical tests. The team performed exceptionally well in electrical scrutineering. Mechanical scrutineering, however, brought some bumps in the road, mainly issues arising from the vehicle’s suspension system. The group did not let this setback bring them down, however.
“The team was able to locate and debug each issue efficiently, collaboratively, and successfully,” noted MIT SEVT captain and junior Caroline Jordan.
The group successfully passed all mechanical and dynamic tests on the fourth and final day of Scrutineering. Jordan recalled that — although it was a stressful time for the team — “we gained a lot of knowledge and grew as engineers and people.”
Qualifying continued with the FSGP, a three-day track race held at Motorsport Park Hastings in Hastings, Nebraska. Teams that successfully completed the race would be granted entrance to the American Solar Challenge.
At the start of FSGP, an issue surfaced for the MIT SEVT: The tires on their vehicle were burning through. Instead of feeling discouraged, the team came together and inspected the car to find a solution. The following day, they were back in the game. The team drove 107 laps and received fourth place for single-occupant vehicles.
“At this point, the car was doing amazingly and we qualified for ASC that day,” Jordan said.
The ASC itself lasted nine days. This year's challenge was a 1,762.7-mile race that followed the Oregon Trail from Omaha, Nebraska, to Bend, Oregon.
“Because of all of the fixes we had already identified, the car was very reliable going into the road race,” Jordan explained. The group performed well in spite of some small issues due to weather and race operations. MIT SEVT’s vehicle completed the race within its time limit using only solar power. The team received fifth place in the single-occupant vehicle class.
Team member and sophomore Cece Chu notes that in spite of some technical difficulties, her team members kept her motivated throughout the competition.
“The amount of planning, time, and effort that was put into the car during and leading up to the race was extraordinary, and the team had to display a lot of determination and sheer grit to get us through the qualifications,” she said. “My teammates are honestly the most hardworking and dedicated people I know, and seeing these qualities brought out firsthand during the race was incredibly motivating for me.”
Jordan noted that the upcoming school year is set to be a big one for the MIT SEVT.
“We will be entering a design year, during which we can pull from the vast amount of knowledge we gained during this race to use in the design process of our next car,” she explained. “It will be a very exciting time for the team.”
The release of Julia 1.0 is the language’s biggest milestone since MIT Professor Alan Edelman, Jeff Bezanson, Stefan Karpinski, and Viral Shah released Julia to developers in 2012, Edelman says.
“Julia has been revolutionizing scientific and technical computing since 2009,” says Edelman, referring to the year the creators started working on a new language that combined the best features of Ruby, MATLAB, C, Python, R, and others. Edelman is director of the Julia Lab at MIT and one of the co-creators of the language at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).
Julia, which was developed and incubated at MIT, is free and open source, with more than 700 active open source contributors, 1,900 registered packages, 41,000 GitHub stars, 2 million downloads, and a reported 101 percent annual rate of download growth. It is used at more than 700 universities and research institutions and by companies such as Aviva, BlackRock, Capital One, and Netflix.
At MIT, Julia users and developers include professors Steven Johnson, Juan Pablo Vielma, Gilbert Strang, Robin Deits, Twan Koolen, and Robert Moss. Julia is also used by MIT Lincoln Laboratory and the Federal Aviation Administration to develop the Next-Generation Airborne Collision Avoidance System (ACAS-X), by the MIT Operations Research Center to optimize school bus routing for Boston Public Schools, and by the MIT Robot Locomotion Group for robot navigation and movement.
Julia is the only high-level dynamic programming language in the “petaflop club,” having achieved 1.5 petaflop/s using 1.3 million threads, 650,000 cores and 9,300 Knights Landing (KNL) nodes to catalogue 188 million stars, galaxies, and other astronomical objects in 14.6 minutes on the world’s sixth-most powerful supercomputer.
“The release of Julia 1.0 signals that Julia is now ready to change the technical world by combining the high-level productivity and ease of use of Python and R with the lightning-fast speed of C++,” Edelman says.