MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

Automated cryptocode generator is helping secure the web

Mon, 06/17/2019 - 11:44am

Nearly every time you open up a secure Google Chrome browser, a new MIT-developed cryptographic system is helping better protect your data.

In a paper presented at the recent IEEE Symposium on Security and Privacy, MIT researchers detail a system that, for the first time, automatically generates optimized cryptography code that’s usually written by hand. Deployed in early 2018, the system is now being widely used by Google and other tech firms.

The paper demonstrates to other researchers in the field how automated methods can prevent human error in generating cryptocode, and how key adjustments to the system’s components can yield higher performance.

To secure online communications, cryptographic protocols rely on mathematical algorithms that perform arithmetic on very large numbers. Behind the scenes, however, a small group of experts write and rewrite those algorithms by hand. For each algorithm, they must weigh various mathematical techniques and chip architectures to optimize for performance. When the underlying math or architecture changes, they essentially start over from scratch. Apart from being labor-intensive, this manual process can produce suboptimal algorithms and often introduces bugs that are only caught and fixed later.

Researchers from the Computer Science and Artificial Intelligence Laboratory (CSAIL) instead designed “Fiat Cryptography,” a system that automatically generates — and simultaneously verifies — optimized cryptographic algorithms for all hardware platforms. In tests, the researchers found their system can generate algorithms that match the performance of the best handwritten code, while producing them far faster.

The researchers’ automatically generated code has populated Google’s BoringSSL, an open-source cryptographic library. Google Chrome, Android apps, and other programs use BoringSSL to generate the various keys and certificates used to encrypt and decrypt data. According to the researchers, about 90 percent of secure Chrome communications currently run their code.

“Cryptography is implemented by doing arithmetic on large numbers. [Fiat Cryptography] makes it more straightforward to implement the mathematical algorithms … because we automate the construction of the code and provide proofs that the code is correct,” says paper co-author Adam Chlipala, a CSAIL researcher and associate professor of electrical engineering and computer science and head of the Programming Languages and Verification group. “It’s basically like taking a process that ran in human brains and understanding it well enough to write code that mimics that process.”

Jonathan Protzenko of Microsoft Research, a cryptography expert who was not involved in this research, sees the work as representing a shift in industry thinking.

“Fiat Cryptography being used in BoringSSL benefits the whole [cryptographic] community,” he says. “[It’s] a sign that the times are changing and that large software projects are realizing that insecure cryptography is a liability, [and shows] that verified software is mature enough to enter the mainstream. It is my hope that more and more established software projects will make the switch to verified cryptography. Perhaps within the next few years, verified software will become usable not just for cryptographic algorithms, but also for other application domains.”

Joining Chlipala on the paper are first author Andres Erbsen and co-authors Jade Philipoom and Jason Gross, all CSAIL graduate students, as well as Robert Sloan MEng ’17.

Splitting the bits

Cryptographic protocols use mathematical algorithms to generate public and private keys, which are essentially long strings of bits. Algorithms use these keys to provide secure communication channels between a browser and a server. One of the most popular, efficient, and secure families of cryptographic algorithms is elliptic-curve cryptography (ECC). Basically, it generates keys of various sizes for users by choosing numerical points at random along a curve on a graph.
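
To make the key-generation step concrete, here is a minimal sketch using the pyca/cryptography Python library, a general-purpose open-source library unrelated to the researchers’ system. The choice of curve (the common NIST P-256) is purely for illustration.

```python
# A minimal ECC key-generation sketch using the pyca/cryptography
# library (not the researchers' Fiat Cryptography code).
# Requires: pip install cryptography
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())  # NIST P-256 curve
public_key = private_key.public_key()

# The private key is essentially a large secret integer; the public key
# is a point (x, y) on the curve derived from it.
numbers = public_key.public_numbers()
print(numbers.x, numbers.y)
```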

Most chips can’t store such large numbers in one place, so they split them into smaller chunks that are stored on units called registers. But the number of registers and the amount of storage they provide vary from one chip to another. “You have to split the bits across a bunch of different places, but it turns out that how you split the bits has different performance consequences,” Chlipala says.
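
As an illustration of what bit-splitting looks like in practice, the toy Python sketch below represents a large number as five 51-bit “limbs,” the radix-2^51 representation commonly used for Curve25519 arithmetic on 64-bit chips. It is illustrative only, not the researchers’ generated code.

```python
# Toy "bit-splitting": represent a (< 2^255) integer as five 51-bit
# limbs, least significant limb first.
LIMB_BITS = 51
NUM_LIMBS = 5
MASK = (1 << LIMB_BITS) - 1

def to_limbs(x):
    """Split an integer into five 51-bit limbs."""
    return [(x >> (LIMB_BITS * i)) & MASK for i in range(NUM_LIMBS)]

def from_limbs(limbs):
    """Recombine the limbs into the original integer."""
    return sum(limb << (LIMB_BITS * i) for i, limb in enumerate(limbs))

x = (1 << 200) + 12345
assert from_limbs(to_limbs(x)) == x  # round-trips exactly
```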

Traditionally, experts writing ECC algorithms manually implement those bit-splitting decisions in their code. In their work, the MIT researchers leveraged those human decisions to automatically generate a library of optimized ECC algorithms for any hardware.

The researchers first studied existing handwritten ECC implementations, in the C programming language and assembly, and transferred those techniques into their code library, generating a list of best-performing candidate algorithms for each architecture. The system then uses a compiler — a program that converts programming languages into code computers understand — that has been proven correct with the Coq proof assistant, meaning all code produced by that compiler is mathematically verified. It then simulates each algorithm and selects the best-performing one for each chip architecture.

Next, the researchers are working on ways to make their compiler run even faster in searching for optimized algorithms.

Optimized compiling

There’s one additional innovation that ensures the system quickly selects the best bit-splitting implementations. The researchers equipped their Coq-based compiler with an optimization technique called “partial evaluation,” which precomputes values that are fixed in advance, so less work remains at run time.

In the researchers’ system, the compiler precomputes all the bit-splitting methods. When matching them to a given chip architecture, it immediately discards all algorithms that won’t work for that architecture, which dramatically reduces the time it takes to search the library. After the system zeroes in on the optimal algorithm, it finishes compiling the code.
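
The toy Python sketch below conveys the flavor of partial evaluation, though not the Coq-based compiler itself: once a choice such as the limb width is fixed, the work that depends only on that choice happens once, up front, and a specialized function handles repeated calls.

```python
# Toy partial evaluation: specialize the bit-splitting routine for a
# fixed limb layout, so only the data-dependent work remains at run time.
def make_splitter(limb_bits, num_limbs):
    # "Static" work, done once at specialization time:
    shifts = [limb_bits * i for i in range(num_limbs)]
    mask = (1 << limb_bits) - 1

    def split(x):
        # Only the "dynamic" work runs per call.
        return [(x >> s) & mask for s in shifts]
    return split

split_25519 = make_splitter(limb_bits=51, num_limbs=5)  # specialize once
print(split_25519((1 << 100) + 7))                      # reuse many times
```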

From that, the researchers amassed a library of the best ways to implement ECC algorithms for a variety of chip architectures. It’s now part of BoringSSL, so users are mostly drawing on the researchers’ code. The library can be updated automatically in the same way for new architectures and new types of math.

“We’ve essentially written a library that, once and for all, is correct for every way you can possibly split numbers,” Chlipala says. “You can automatically explore the space of possible representations of the large numbers, compile each representation to measure the performance, and take whichever one runs fastest for a given scenario.”

How to detect life on Mars

Mon, 06/17/2019 - 9:00am

When MIT research scientist Christopher Carr visited a green sand beach in Hawaii at the age of 9, he probably didn’t think that he’d use the little olivine crystals beneath his feet to one day search for extraterrestrial life. Carr, now the science principal investigator for the Search for Extraterrestrial Genomes (SETG) instrument being developed jointly by the Department of Earth, Atmospheric and Planetary Sciences (EAPS) at MIT and Massachusetts General Hospital, works to wed the worlds of biology, geology, and planetary science to help understand how life evolved in the universe.

“Our history revealed by science is a truly incredible story,” Carr says. “You and I are a part of an unbroken chain of 4 billion years of evolution. I want to know more about that story.”  

SETG was initially proposed by Gary Ruvkun, a professor of genetics at Harvard Medical School, and since 2005 has been led by Maria Zuber, the E. A. Griswold Professor of Geophysics in EAPS and vice president for research at MIT.

As the science principal investigator of SETG, Carr, along with a large team of scientists and engineers, has helped develop instrumentation that could withstand radiation and detect DNA, a type of nucleic acid that carries genetic information in most living organisms, in spaceflight environments. Now, Carr and his colleagues are working to fine-tune the instrumentation to work on the red planet. To do that, the team needed to simulate the kinds of soils thought to preserve evidence of life on Mars, and for that, they needed a geologist.

Angel Mojarro, a graduate student in EAPS, was up for the task. Mojarro spent months synthesizing Martian soils that represented different regions on Mars, as established by Martian rover data.

“Turns out you can buy most of the rocks and minerals found on Mars online,” Mojarro says. But not all.

One of the hard-to-find components of the soils was olivine from the beach Carr had visited as a child: “I called up my folks and said, ‘Hey, can you find the olivine sand in the basement and send me some of that?’”

After creating a collection of different Mars analog soils, Mojarro wanted to find out whether SETG could extract and detect small amounts of DNA embedded in those soils as it would do on a future Mars mission. While many technologies already exist on Earth to detect and sequence DNA, scaling down the instrumentation to fit on a rover, survive transport from Earth, and conduct high-fidelity sequencing in a harsh Martian environment is a unique challenge. “That’s a whole bunch of steps, no matter what the sequencing technology is right now,” Carr says.

The SETG instrumentation has evolved and improved since its development began in 2005, and, currently, the team is working to integrate a new method, called nanopore sequencing, into their work. “In nanopore sequencing, DNA strands travel through nano-sized holes, and the sequence of bases is detected via changes in an ionic current,” Mojarro says.
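
As a heavily simplified illustration of that idea, the Python sketch below assigns each segment of an ionic-current trace to the base with the nearest expected current level. Real basecallers model runs of several bases at a time and account for noise statistically; the current levels used here are invented for illustration.

```python
# Toy nanopore "basecalling": match each current segment to the base
# whose expected level is closest. Levels (in picoamps) are made up.
LEVELS = {"A": 80.0, "C": 95.0, "G": 110.0, "T": 65.0}

def call_bases(segment_means):
    """Assign each measured current segment to the nearest base level."""
    return "".join(
        min(LEVELS, key=lambda base: abs(LEVELS[base] - mean))
        for mean in segment_means
    )

print(call_bases([79.2, 96.1, 64.0, 111.3]))  # -> "ACTG"
```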

By themselves, Mojarro’s Mars analog soils didn’t contain microbes, so to test and develop nanopore sequencing of DNA in Mars analog soils, Mojarro added known quantities of spores from the bacterium Bacillus subtilis to the soils. Without human help on Mars, the SETG instrumentation would need to collect and purify DNA and prepare it for sequencing, a process that usually requires about a microgram of DNA on Earth, Mojarro says.

The group’s results using the new sequencing and preparation method, which were reported in Astrobiology, pushed the limits of detection to the parts-per-billion scale — which means even the tiniest traces of life could be detected and sequenced by the instrument.

“This doesn’t just apply to Mars … these results have implications in other fields, too,” Mojarro says. Similar methods of DNA sequencing on Earth have been used to help manage and track Ebola outbreaks and in medical research. And further, improvements to SETG could have important implications for planetary protection, which aims to prevent and minimize Earth-originating biological contamination of space environments.

Even at the new detection limit for the SETG instrumentation, Mojarro was able to differentiate between human DNA and the Bacillus DNA. “If we detect life on other planets,” Mojarro says, “we need a technique that can tell apart hitchhiking microbes from Earth and Martian life.”

In their publication, Mojarro and Carr suggest that these developments may fill in some of the missing gaps in the story of life on Earth. “If there’s life on Mars, there’s a good chance it’s related to us,” Carr says, citing previous studies describing the planetary exchange of materials during the Late Heavy Bombardment period (4.1 to 3.8 billion years ago).

If SETG detects and sequences DNA on Mars in the future, Carr says the results could “rewrite our very notion of our own origins.”

A droplet walks into an electric field …

Mon, 06/17/2019 - 12:00am

When a raindrop falls through a thundercloud, it is subject to strong electric fields that pull and tug on the droplet, like a soap bubble in the wind. If the electric field is strong enough, it can cause the droplet to burst apart, creating a fine, electrified mist.

Scientists began taking notice of how droplets behave in electric fields in the early 1900s, amid concerns over lightning strikes that were damaging newly erected power lines. They soon realized that the power lines’ own electric fields were causing raindrops to burst around them, providing a conductive path for lightning to strike. This revelation led engineers to design thicker coverings around power lines to limit lightning strikes.

Today, scientists understand that the stronger the electric field, the more likely it is that a droplet within it will burst. But calculating the exact field strength that will burst a particular droplet has always been an involved mathematical task.

Now, MIT researchers have found that the conditions under which a droplet bursts in an electric field all boil down to one simple formula, which the team has derived for the first time.

With this simple new equation, the researchers can predict the exact electric field strength required to burst a droplet or keep it stable. The formula applies to three cases previously analyzed separately: a droplet pinned on a surface, sliding on a surface, or free-floating in the air.

Their results, published today in the journal Physical Review Letters, may help engineers tune the electric field or the size of droplets for a range of applications that depend on electrifying droplets. These include technologies for air or water purification, space propulsion, and molecular analysis.

“Before our result, engineers and scientists had to perform computationally intensive simulations to assess the stability of an electrified droplet,” says lead author Justin Beroz, a graduate student in MIT’s departments of Mechanical Engineering and Physics. “With our equation, one can predict this behavior immediately, with a simple paper-and-pencil calculation. This is of great practical benefit to engineers working with, or trying to design, any system that involves liquids and electricity.”

Beroz’s co-authors are A. John Hart, associate professor of mechanical engineering, and John Bush, professor of mathematics.

“Something unexpectedly simple”

Droplets tend to form as perfect little spheres due to surface tension, the cohesive force that binds water molecules at a droplet’s surface and pulls the molecules inward. The droplet may distort from its spherical shape in the presence of other forces, such as the force from an electric field. While surface tension acts to hold a droplet together, the electric field acts as an opposing force, pulling outward on the droplet as charge builds on its surface.

“At some point, if the electric field is strong enough, the droplet can’t find a shape that balances the electrical force, and at that point, it becomes unstable and bursts,” Beroz explains.

He and his team were interested in the moment just before bursting, when the droplet has been distorted to its critically stable shape. The team set up an experiment in which they slowly dispensed water droplets onto a metal plate that was electrified to produce an electric field, and used a high-speed camera to record the distorted shapes of each droplet.

“The experiment is really boring at first — you’re watching the droplet slowly change shape, and then all of a sudden it just bursts,” Beroz says.

After experimenting on droplets of different sizes and under various electric field strengths, Beroz isolated the video frame just before each droplet burst, then outlined its critically stable shape and calculated several parameters such as the droplet’s volume, height, and radius. He plotted the data from each droplet and found, to his surprise, that they all fell along an unmistakably straight line.

“From a theoretical point of view, it was an unexpectedly simple result given the mathematical complexity of the problem,” Beroz says. “It suggested that there might be an overlooked, yet simple, way to calculate the burst criterion for the droplets.”

A water droplet, subject to an electric field of slowly increasing strength, suddenly bursts by emitting a fine, electrified mist from its apex.

Volume above height

Physicists have long known that a liquid droplet in an electric field can be represented by a set of coupled nonlinear differential equations. These equations, however, are incredibly difficult to solve. To find a solution requires determining the configuration of the electric field, the shape of the droplet, and the pressure inside the droplet, simultaneously.

“This is commonly the case in physics: It’s easy to write down the governing equations but very hard to actually solve them,” Beroz says. “But for the droplets, it turns out that if you choose a particular combination of physical parameters to define the problem from the start, a solution can be derived in a few lines. Otherwise, it’s impossible.”

Physicists who attempted to solve these equations in the past did so by factoring in, among other parameters, a droplet’s height — an easy and natural choice for characterizing a droplet’s shape. But Beroz made a different choice, reframing the equations in terms of a droplet’s volume rather than its height. This was the key insight for reformulating the problem into an easy-to-solve formula.

“For the last 100 years, the convention was to choose height,” Beroz says. “But as a droplet deforms, its height changes, and therefore the mathematical complexity of the problem is inherent in the height. On the other hand, a droplet’s volume remains fixed regardless of how it deforms in the electric field.”

By formulating the equations using only parameters that are “fixed” in the same sense as a droplet’s volume, “the complicated, unsolvable parts of the equation cancel out, leaving a simple equation that matches the experimental results,” Beroz says.

Specifically, the new formula the team derived relates five parameters: a droplet’s surface tension, radius, volume, electric field strength, and the electric permittivity of the air surrounding the droplet. Plugging in any four of these parameters yields the fifth.
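
The paper’s exact formula is not reproduced here, but a back-of-the-envelope Python sketch shows how those parameters combine dimensionally: electrical stress scales like the permittivity times the field squared, capillary pressure like surface tension divided by a length, and their ratio, a dimensionless group of the kind known as an electric Bond number, signals bursting once it exceeds some critical value. The numerical values below are illustrative assumptions.

```python
# Dimensional-analysis sketch (not the paper's derived formula):
# compare electrical stress (~ eps * E^2) with capillary pressure
# (~ gamma / L), using L = V**(1/3) as the length scale.
EPS_AIR = 8.85e-12  # permittivity of air, F/m (close to vacuum)

def electric_bond_number(E, volume, gamma, eps=EPS_AIR):
    """Dimensionless ratio of electrical stress to capillary pressure."""
    L = volume ** (1.0 / 3.0)  # characteristic length from droplet volume
    return eps * E**2 * L / gamma

# A 1-microliter water droplet (gamma ~ 0.072 N/m) in a 10 kV/cm field:
print(electric_bond_number(E=1e6, volume=1e-9, gamma=0.072))  # ~0.12
```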

Beroz says engineers can use the formula to develop techniques such as electrospraying, which involves the bursting of a droplet maintained at the orifice of an electrified nozzle to produce a fine spray. Electrospraying is commonly used to aerosolize biomolecules from a solution, so that they can pass through a spectrometer for detailed analysis. The technique is also used to produce thrust and propel satellites in space.

“If you’re designing a system that involves liquids and electricity, it’s very practical to have an equation like this, that you can use every day,” Beroz says.

This research was funded in part by the MIT Deshpande Center for Technological Innovation, BAE Systems, the Assistant Secretary of Defense for Research and Engineering via MIT Lincoln Laboratory, the National Science Foundation, and a Department of Defense National Defense Science and Engineering Graduate Fellowship.

Teaching artificial intelligence to connect senses like vision and touch

Mon, 06/17/2019 - 12:00am

In Canadian author Margaret Atwood’s book “The Blind Assassin,” she says that “touch comes before sight, before speech. It’s the first language and the last, and it always tells the truth.”

While our sense of touch gives us a channel to feel the physical world, our eyes help us immediately understand the full picture of these tactile signals.

Robots that have been programmed to see or feel can’t use these signals quite as interchangeably. To better bridge this sensory gap, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have come up with a predictive artificial intelligence (AI) that can learn to see by touching, and learn to feel by seeing.

The team’s system can create realistic tactile signals from visual inputs, and predict which object and what part is being touched directly from those tactile inputs. They used a KUKA robot arm with a special tactile sensor called GelSight, designed by another group at MIT.

Using a simple web camera, the team recorded nearly 200 objects, such as tools, household products, fabrics, and more, being touched more than 12,000 times. Breaking those 12,000 video clips down into static frames, the team compiled “VisGel,” a dataset of more than 3 million visual/tactile-paired images.

“By looking at the scene, our model can imagine the feeling of touching a flat surface or a sharp edge,” says Yunzhu Li, CSAIL PhD student and lead author on a new paper about the system. “By blindly touching around, our model can predict the interaction with the environment purely from tactile feelings. Bringing these two senses together could empower the robot and reduce the data we might need for tasks involving manipulating and grasping objects.”

Recent efforts to equip robots with more human-like physical senses, such as MIT’s 2016 project using deep learning to visually indicate sounds, or a model that predicts objects’ responses to physical forces, have relied on large datasets that aren’t available for understanding interactions between vision and touch.

The team’s technique gets around this by using the VisGel dataset, and something called generative adversarial networks (GANs).

GANs use visual or tactile images to generate images in the other modality. They work by pitting a “generator” against a “discriminator”: the generator aims to create real-looking images to fool the discriminator, and every time the discriminator “catches” the generator, feedback from that decision lets the generator improve, over and over.
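
For readers who want to see the generator-discriminator game in code, here is a minimal GAN training step written in PyTorch. It is a generic sketch, not the cross-modal architecture from the paper, and the layer sizes and learning rates are arbitrary choices.

```python
# Minimal GAN training step (generic sketch, not the paper's model).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):  # real: a batch of flattened images
    b = real.size(0)
    fake = G(torch.randn(b, 64))

    # Discriminator: label real images 1, generated images 0.
    loss_d = bce(D(real), torch.ones(b, 1)) + bce(D(fake.detach()), torch.zeros(b, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: try to make the discriminator output 1 on its fakes.
    loss_g = bce(D(fake), torch.ones(b, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

train_step(torch.randn(32, 784))  # one step on dummy data
```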

Vision to touch

Humans can infer how an object feels just by seeing it. To better give machines this power, the system first had to locate the position of the touch, and then deduce information about the shape and feel of the region.

The reference images — without any robot-object interaction — helped the system encode details about the objects and the environment. Then, when the robot arm was operating, the model could simply compare the current frame with its reference image, and easily identify the location and scale of the touch.
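
A toy version of that reference-frame comparison takes only a few lines of NumPy: subtract the reference frame from the current frame and treat the region of largest change as the contact location. The actual model learns this comparison rather than thresholding raw pixel differences.

```python
# Toy touch localization by frame differencing against a reference.
import numpy as np

def locate_touch(reference, current, thresh=30):
    """Return the bounding box (rmin, rmax, cmin, cmax) of changed pixels."""
    diff = np.abs(current.astype(int) - reference.astype(int)).max(axis=-1)
    rows, cols = np.where(diff > thresh)
    if rows.size == 0:
        return None  # no contact detected
    return rows.min(), rows.max(), cols.min(), cols.max()

ref = np.zeros((240, 320, 3), dtype=np.uint8)
cur = ref.copy()
cur[100:120, 150:170] = 200  # simulated contact patch
print(locate_touch(ref, cur))  # -> (100, 119, 150, 169)
```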

This might look something like feeding the system an image of a computer mouse, and then “seeing” the area where the model predicts the object should be touched for pickup — which could vastly help machines plan safer and more efficient actions.

Touch to vision

For touch to vision, the aim was for the model to produce a visual image based on tactile data. The model analyzed a tactile image, and then figured out the shape and material of the contact position. It then looked back to the reference image to “hallucinate” the interaction.

For example, if during testing the model was fed tactile data on a shoe, it could produce an image of where that shoe was most likely to be touched.

This type of ability could be helpful for accomplishing tasks in cases where there’s no visual data, like when a light is off, or if a person is blindly reaching into a box or unknown area.

Looking ahead

The current dataset only has examples of interactions in a controlled environment. The team hopes to improve on this by collecting data in more unstructured areas, or by using a new MIT-designed tactile glove, to increase the size and diversity of the dataset.

There are still details that can be tricky to infer from one modality alone, like telling the color of an object by just touching it, or telling how soft a sofa is without actually pressing on it. The researchers say this could be improved by creating more robust models for uncertainty, to expand the distribution of possible outcomes.

In the future, this type of model could help with a more harmonious relationship between vision and robotics, especially for object recognition, grasping, better scene understanding, and helping with seamless human-robot integration in an assistive or manufacturing setting.

“This is the first method that can convincingly translate between visual and touch signals,” says Andrew Owens, a postdoc at the University of California at Berkeley. “Methods like this have the potential to be very useful for robotics, where you need to answer questions like ‘Is this object hard or soft?’ or ‘If I lift this mug by its handle, how good will my grip be?’ This is a very challenging problem, since the signals are so different, and this model has demonstrated great capability.”

Li wrote the paper alongside MIT professors Russ Tedrake and Antonio Torralba, and MIT postdoc Jun-Yan Zhu. It will be presented next week at The Conference on Computer Vision and Pattern Recognition in Long Beach, California.

A road paved with open learning

Fri, 06/14/2019 - 2:00pm

After graduating from college in India, Samip Jain spent three years of evenings learning online, taking courses on web development, data analysis, music theory — anything that sated his appetite to learn ... all while working full-time as a software developer for a health care startup.

“Every night, I would learn something new,” he says. “I never want to stop learning and thinking beyond my limits.”

Now a graduate student in the Integrated Design and Management (IDM) program, Jain is taking a more focused approach. The residential program, which brings together the fields of engineering, management, and design, has proved to be the perfect fit for someone who has always been drawn to making things and to starting new business ventures. Moreover, he’s taking advantage of being campus-bound, participating in several hackathons, and, as part of a class project and with two of his IDM classmates, developing and honing Savor, a playful spice set that brings families and friends together at the dinner table.

His relationship with MIT, however, goes all the way back to his high school years. As a young student in India, he struggled to learn about circuits and electronics in science class. Then a senior classmate told him about MIT OpenCourseWare (OCW).  

“I was asking my senior classmates, ‘Can you teach me this stuff?’ and one of them told me about online videos from MIT,” he says. “It helped me to learn the concepts. I thought, at least I can learn from what they are putting online. At the time, I never thought that I’d end up here.”

With these educational opportunities at his fingertips, Jain said that OCW opened up a new academic world for him — and it turned out that it was just the start of what MIT’s commitment to extending educational opportunities beyond the boundaries of campus would do for him.

In his senior year of college, Jain and a friend launched their first startup. They had been designing crafts in their spare time, and they wanted a way to sell their creations online. They had also heard from other small business owners in India who were struggling to reach customers outside their local community, and Jain wanted to find a way to help them.

The business, Pikachoo, was modeled after Etsy. The platform provided an easy-to-use online marketplace where creative workers in India could sell their products and reach customers beyond their local communities.

But within a month, the business wasn’t getting much traction, so Jain turned to MIT’s massive open online courses, offered through MITx, for help.

“I rushed through a course on entrepreneurship, and we tried changing a few things to see if we could reach out to customers more efficiently,” he says. “It was working.”

Then, an email arrived that captured Jain’s interest. MIT Bootcamps would soon bring together aspiring entrepreneurs from all over the world for an intense week of learning from some of the greatest business minds associated with MIT.

Jain applied and was accepted, making his way to Cambridge for the first time in August 2015. The experience of connecting with 60 other people from 30 different countries — everyone drawn to the startup life — he says, was life changing.

“The inspiring and motivational factor for me was the community we created through Bootcamp, which felt like another family to me,” Jain says. “To learn from them, and to learn from the amazing coaches … We were all in it together.”

After the Bootcamp experience, he knew that he wanted to come back to MIT for graduate school. “If I was going to do a master’s, MIT was where I needed to be,” he says. Having already applied to a short list of schools, Jain decided to start over and apply to IDM.

“IDM is a sort of a short entrepreneurship journey,” he says. “You start with a problem statement, go into user research, concept generation, and then you start making prototypes and eventually create a viable business.” The program philosophy mirrors his own educational journey: self-motivated, self-created, and entrepreneurial.

Even as a full-time grad student, Jain still engages with MIT online learning materials.

“When I’m taking a class and want to learn about something I don’t know much about, I go straight to OCW or MITx. Having all those courses online is really helpful.”

Toward artificial intelligence that learns to write code

Fri, 06/14/2019 - 1:10pm

Learning to code involves recognizing how to structure a program, and how to fill in every last detail correctly. No wonder it can be so frustrating.

A new program-writing AI, SketchAdapt, offers a way out. Trained on tens of thousands of program examples, SketchAdapt learns how to compose short, high-level programs, while letting a second set of algorithms find the right sub-programs to fill in the details. Unlike similar approaches for automated program-writing, SketchAdapt knows when to switch from statistical pattern-matching to a less efficient, but more versatile, symbolic reasoning mode to fill in the gaps.

“Neural nets are pretty good at getting the structure right, but not the details,” says Armando Solar-Lezama, a professor at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). “By dividing up the labor — letting the neural nets handle the high-level structure, and using a search strategy to fill in the blanks — we can write efficient programs that give the right answer.”

SketchAdapt is a collaboration between Solar-Lezama and Josh Tenenbaum, a professor at CSAIL and MIT’s Center for Brains, Minds and Machines. The work will be presented at the International Conference on Machine Learning June 10-15.

Program synthesis, or teaching computers to code, has long been a goal of AI researchers. A computer that can program itself is more likely to learn language faster, converse fluently, and even model human cognition. All of this drew Solar-Lezama to the field as a graduate student, when he laid the foundation for SketchAdapt.

Solar-Lezama’s early work, Sketch, is based on the idea that a program’s low-level details could be found mechanically if a high-level structure is provided. Among other applications, Sketch inspired spinoffs to automatically grade programming homework and convert hand-drawn diagrams into code. Later, as neural networks grew in popularity, students from Tenenbaum’s computational cognitive science lab suggested a collaboration, out of which SketchAdapt formed.

Rather than rely on experts to define program structure, SketchAdapt figures it out using deep learning. The researchers also added a twist: When the neural networks are unsure of what code to place where, SketchAdapt is programmed to leave the spot blank for search algorithms to fill.

“The system decides for itself what it knows and doesn’t know,” says the study’s lead author, Maxwell Nye, a graduate student in MIT’s Department of Brain and Cognitive Sciences. “When it gets stuck, and has no familiar patterns to draw on, it leaves placeholders in the code. It then uses a guess-and-check strategy to fill the holes.”
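
A toy version of this sketch-and-fill strategy appears below: a program “sketch” contains HOLE placeholders, and an enumerative search tries small expressions in each hole until the completed program matches the given input-output examples. In SketchAdapt the sketch is proposed by a neural network; here it is supplied by hand, and the candidate expressions form an invented micro-DSL.

```python
# Toy sketch-and-fill program synthesis via enumerative search.
from itertools import product

CANDIDATES = ["x + 1", "x - 1", "x * 2", "x // 2", "0", "1"]

def fill_sketch(sketch, examples):
    """Try candidate expressions in each HOLE until the examples pass."""
    n_holes = sketch.count("HOLE")
    for choice in product(CANDIDATES, repeat=n_holes):
        body = sketch
        for expr in choice:
            body = body.replace("HOLE", expr, 1)
        program = eval("lambda x: " + body)  # build a candidate program
        if all(program(i) == o for i, o in examples):
            return body
    return None  # no completion found in the search space

# Find a program consistent with f(x) = 2x + 1:
print(fill_sketch("HOLE + HOLE", [(0, 1), (3, 7), (5, 11)]))  # x * 2 + 1
```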

The researchers compared SketchAdapt’s performance to programs modeled after Microsoft’s proprietary RobustFill and DeepCoder software, successors to Excel’s FlashFill feature, which analyzes adjacent cells to offer suggestions as you type — learning to transform a column of names into a column of corresponding email addresses, for example. RobustFill uses deep learning to write high-level programs from examples, while DeepCoder specializes in finding and filling in low-level details.

The researchers found that SketchAdapt outperformed their reimplemented versions of RobustFill and DeepCoder at their respective specialized tasks. SketchAdapt outperformed the RobustFill-like program at string transformations; for example, writing a program to abbreviate Social Security numbers as three digits, and first names by their first letter. SketchAdapt also did better than the DeepCoder-like program at writing programs to transform a list of numbers. Trained only on examples of three-line list-processing programs, SketchAdapt was better able to transfer its knowledge to a new scenario and write correct four-line programs.

In yet another task, SketchAdapt outperformed both programs at converting math problems from English to code, and calculating the answer.

Key to its success is the ability to switch from neural pattern-matching to a rules-based symbolic search, says Rishabh Singh, a former graduate student of Solar-Lezama’s, now a researcher at Google Brain. “SketchAdapt learns how much pattern recognition is needed to write familiar parts of the program, and how much symbolic reasoning is needed to fill in details which may involve new or complicated concepts.”

SketchAdapt is limited to writing very short programs; anything longer requires too much computation. Nonetheless, it’s intended to complement programmers rather than replace them, the researchers say. “Our focus is on giving programming tools to people who want them,” says Nye. “They can tell the computer what they want to do, and the computer can write the program.”

Programming, after all, has always evolved. When Fortran was introduced in the 1950s, it was meant to replace human programmers. “Its full name was Fortran Automatic Coding System, and its goal was to write programs as well as humans, but without the errors,” says Solar-Lezama. “What it really did was automate much of what programmers did before Fortran. It changed the nature of programming.”

The study’s other co-author is Luke Hewitt. Funding was provided by the U.S. Air Force Office of Scientific Research, the MIT-IBM Watson AI Lab, and the U.S. National Science Foundation.

Materials Research Laboratory welcomes area teachers, community college students

Fri, 06/14/2019 - 1:00pm

Three greater Boston-area school teachers and five community college students will work in MIT faculty-led research groups this summer through the MIT Materials Research Laboratory and its Materials Research Science and Engineering Center (MRSEC) program.

For summer 2019 there are two Boston high school teachers: Allyson Grasso, who teaches environmental science at the Match Charter Public High School; and Camilia Chodkowski, who teaches chemistry and environmental science at the Dearborn STEM Academy.

The third teacher, Wendy Moy, teaches physical science at the Diamond Middle School in Lexington, Massachusetts. Moy spent summer 2018 in the research group of assistant professor of physics Riccardo Comin. She will return to the Comin lab this summer on a part-time basis to develop classroom material for teaching her middle school students about her MIT lab experience.

The MIT MRSEC, a National Science Foundation-supported program, has hosted local science teachers to conduct research on campus since 1999. All participants are selected on the basis of their teaching experience, statements of interest, and plans to incorporate their research experience into their teaching.

Five local community college students will participate in an eight-week summer internship program at the MIT Materials Research Laboratory MRSEC from mid-June through early August. They were selected based on their academic records, community college faculty recommendations, and interviews with MIT faculty and graduate students.

Two of the interns are from Roxbury Community College: Jimmy Dorielan is an engineering major, and Nancy Berger is majoring in biotechnology. Students Ayat Labouyard and Michael Feltis, both engineering majors, hail from Bunker Hill Community College (BHCC). These four students will be funded through the MIT MRSEC.

The fifth student, Amanda Kozicki, a biology student from BHCC, will be funded by Professor Rafael Jaramillo’s Guided Academic Industry Network (GAIN). GAIN is supported by Jaramillo’s NSF CAREER grant. After her summer as an intern at MIT, Kozicki will spend the following summer in an industrial internship.

Cyber protection technology moves from the lab to the marketplace

Fri, 06/14/2019 - 12:45pm

Popular commodity software applications, such as browsers, business tools, and document readers, have been the target of sophisticated cyberattackers who find vulnerabilities in the application code to gain entry into and compromise an enterprise. Significantly contributing to large-scale cyberattacks is the homogeneity of these commodity applications. Because all installations of such applications look alike, when perpetrators develop an attack against an application, they can easily undermine millions of computers at once. Moreover, the closed-source nature of these applications makes them notoriously hard to defend because protection techniques often require access to the application’s source code.

MIT Lincoln Laboratory has been conducting R&D on technology that will protect commodity Windows applications from attack. The technology, Timely Randomization Applied to Commodity Executables at Runtime (TRACER), has been licensed by a major cybersecurity company and will soon be offered as part of a security suite to protect enterprises.

TRACER protects Windows applications against sophisticated, modern attacks by automatically and transparently re-randomizing the applications’ sensitive internal data and layout at every output from the application. Other randomization technologies, such as techniques to randomize the memory layout, the compiler-based code, or the instruction set, employ a one-time randomization strategy that can leave them susceptible to information leakage; attackers can exploit such leaks to analyze the method of randomization and then subvert it. By re-randomizing the sensitive internal data and layout of an application every time any output is generated, TRACER renders leaked information stale and resists attacks that would otherwise bypass randomization defenses.
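
The Python sketch below is a conceptual toy of re-randomization on output, not TRACER’s actual mechanism, which operates on a Windows binary’s internal data and layout: sensitive entries live at randomized positions, and the layout is reshuffled whenever output is produced, so any position an attacker learned from an earlier leak is already stale.

```python
# Conceptual toy: re-randomize a sensitive layout at every output event.
import secrets

class RerandomizedSecrets:
    """Keep secrets at randomized positions; reshuffle on every output."""
    def __init__(self, values):
        self._slots = list(values)
        self._rng = secrets.SystemRandom()
        self._rng.shuffle(self._slots)

    def position_of(self, value):
        # Valid only until the next output event.
        return self._slots.index(value)

    def output(self, message):
        print(message)                  # an output event...
        self._rng.shuffle(self._slots)  # ...invalidates any leaked layout

table = RerandomizedSecrets(["key_a", "key_b", "key_c"])
before = table.position_of("key_a")
table.output("hello")                   # layout reshuffled here
after = table.position_of("key_a")      # likely different from `before`
```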

TRACER was developed for use with Windows applications because of their ubiquity. Reported estimates are that more than 90 percent of desktop computers run Microsoft Windows with commodity applications.

TRACER was developed under the sponsorship of the U.S. Department of Defense and the Department of Homeland Security (DHS) Science and Technology Directorate Transition to Practice (TTP) program. Each fiscal year, the TTP program identifies the most promising cybersecurity technologies developed at federal laboratories, federally funded research and development centers, and universities, and supports their improvements and demonstrations to facilitate transition to market.

“The TTP program helps innovative technologies mature into products that can meet the needs of government and private-sector users,” says Nadia Carlsten, director of commercialization at the DHS Science and Technology Directorate. “This transition of TRACER will enable the broader deployment of solutions to complex cybersecurity problems.”

TRACER is an attractive package for organizations seeking to protect their Windows-running systems. It prevents the most common and highly advanced control-hijacking attacks against Windows applications. It is implemented as a single dynamic link library file that takes minutes to install on a machine and is seamless to operate after the initial installation. It does not interfere with normal maintenance, patching, software inventory, or debugging facilities of an enterprise network. And, perhaps most importantly to companies, TRACER does not require access to the source code or modification of the Windows operating system.

TRACER’s R&D was led by Hamed Okhravi of Lincoln Laboratory’s Secure Resilient Systems and Technology Group and included contributions by Jason Martin, David Bigelow, David Perry, Kristin Dahl, Robert Rudd, Thomas Hobson, and William Streilein.

“One of our primary goals for TRACER was to make it as easy to use as possible. The current prototype requires minimal steps to set up and requires no user interaction during its operation, which we hope facilitates its widespread adoption,” Okhravi says.

The team is currently conducting R&D on a foundationally secure computer design for the future of computer systems.

A scholar and teacher re-examines moments in the history of STEM

Thu, 06/13/2019 - 11:59pm

When Clare Kim began her fall 2017 semester as the teaching assistant for 21H.S01, the inaugural “MIT and Slavery” course, she didn’t know she and her students would be creating a historical moment of their own at the Institute.

Along with Craig Steven Wilder, the Barton L. Weller Professor of History, and Nora Murphy, an archivist for researcher services in the MIT Libraries, Kim helped a team of students use archival materials to examine the Institute’s ties to slavery and how that legacy has impacted the modern structure of scientific institutions. The findings that came to light through the class thrust Kim and her students onto a prominent stage. They spoke about their research in media interviews and at a standing-room-only community forum, and helped bring MIT into a national conversation about universities and the institution of slavery in the United States.

For Kim, a PhD student in MIT’s Program in History, Anthropology, and Science, Technology, and Society (HASTS), it was especially rewarding to help the students to think critically about their own scientific work through a historical context. She enjoyed seeing how the course challenged conventional ideas that had been presented to them about their various fields of study.

“I think people tend to think too much about history as a series of true facts where the narrative that gets constructed is stabilized. Conducting historical research is fun because you have a chance to re-examine evidence, examine archival materials, reinterpret some of what has already been written, and craft a new narrative as a result,” Kim says.

This year, Kim was awarded the prestigious Goodwin Medal for her work as a TA for several MIT courses. The award recognizes graduate teaching assistants who have gone the extra mile in the classroom. Faculty, colleagues, and former students praised Kim for her compassionate, supportive, and individual approach to teaching.

“I love teaching,” she says. “I like to have conversations with my students about what I’m thinking about. It’s not that I’m just imparting knowledge, but I want them to develop a critical way of thinking. I want them to be able to challenge whatever analyses I introduce to them.”

Kim also applies this critical-thinking lens to her own scholarship in the history of mathematics. She is particularly interested in studying math this way because the field is often perceived as “all-stable” and contained, when in fact its boundaries have been much more fluid.

Mathematics and creativity

Kim’s own work re-examines the history of mathematical thought and how it has impacted nonscientific and technical fields in U.S. intellectual life. Her dissertation focuses on the history of mathematics and the ways that mathematicians interacted with artists, humanists, and philosophers throughout the 20th century. She looks at the dialogue and negotiations between different scholars, exploring how they reconfigured the boundaries between academic disciplines.

Kim says that this moment in history is particularly interesting because it reframes mathematics as a field that hasn’t operated autonomously, but rather has engaged with humanistic and artistic practices. This creative perspective, she says, suggests an ongoing, historical relationship between mathematics and the arts and humanities that may come as a surprise to those more likely to associate mathematics with technical and military applications, at least in terms of practical uses.

“Accepting this clean divide between mathematics and the arts occludes all of these fascinating interactions and conversations between mathematicians and nonmathematicians about what it meant to be modern and creative,” Kim says. One such moment of interaction she explores is between mathematicians and design theorists in the 1930s, who worked together in an attempt to develop and teach a mathematical theory of “aesthetic measure,” a way of ascribing judgments of beauty and taste.  

Building the foundation

With an engineering professor father and a mathematician mother, Kim has long been interested in science and mathematics. However, she says influences from her family, which includes a twin sister who is a classicist and an older sister who studied structural biology, ensured that she would also develop a strong background in the humanities and literature.

Kim entered college thinking that she would pursue a technical field, though likely not math itself — she jokes that her math career peaked during her time competing in MATHCOUNTS as a kid. But during her undergraduate years at Brown University, she took a course on the history of science taught by Joan Richards, a professor specializing in the history of mathematics. There, she discovered her interest in studying not just scientific knowledge, but the people who pursue it.

After earning a bachelor’s in history at Brown, with a focus in mathematics and science, Kim decided to pursue a doctoral degree. MIT’s HASTS program appealed to her because of its interdisciplinary approach to studying the social and political components of science and technology.

“In addition to receiving more formal training in the history of science itself, HASTS trained me in anthropological inquiry, political theory, and all these different kinds of methods that could be brought to bear on the social sciences and humanities more generally,” Kim says.

After defending her thesis, Kim will begin a postdoc at Washington University in St. Louis, where she will continue her research and begin converting her dissertation into a book manuscript. She will also teach a course she has developed called “Code and Craft,” which explores, in a variety of historical contexts, the artful and artisanal components of AI, computing, and otherwise “technical” domains.

In her free time, Kim practices taekwondo (she has a first-degree black belt) and enjoys taking long walks through Cambridge, which she says is how she gets some of her best thinking done.

Using gene editing, neuroscientists develop a new model for autism

Wed, 06/12/2019 - 1:00pm

Using the genome-editing system CRISPR, researchers at MIT and in China have engineered macaque monkeys to express a gene mutation linked to autism and other neurodevelopmental disorders in humans. These monkeys show some behavioral traits and brain connectivity patterns similar to those seen in humans with these conditions.

Mouse studies of autism and other neurodevelopmental disorders have yielded drug candidates that have been tested in clinical trials, but none of them have succeeded. Many pharmaceutical companies have given up on testing such drugs because of the poor track record so far.

The new type of model, however, could help scientists to develop better treatment options for some neurodevelopmental disorders, says Guoping Feng, who is the James W. and Patricia Poitras Professor of Neuroscience, a member of MIT’s McGovern Institute for Brain Research, and one of the senior authors of the study.

“Our goal is to generate a model to help us better understand the neural biological mechanism of autism, and ultimately to discover treatment options that will be much more translatable to humans,” says Feng, who is also an institute member of the Broad Institute of MIT and Harvard and a senior scientist in the Broad’s Stanley Center for Psychiatric Research.

“We urgently need new treatment options for autism spectrum disorder, and treatments developed in mice have so far been disappointing. While the mouse research remains very important, we believe that primate genetic models will help us to develop better medicines and possibly even gene therapies for some severe forms of autism,” says Robert Desimone, the director of MIT’s McGovern Institute for Brain Research, the Doris and Don Berkey Professor of Neuroscience, and an author of the paper.

Huihui Zhou of the Shenzhen Institutes of Advanced Technology, Andy Peng Xiang of Sun Yat-Sen University, and Shihua Yang of South China Agricultural University are also senior authors of the study, which appears in the June 12 online edition of Nature. The paper’s lead authors are former MIT postdoc Yang Zhou, MIT research scientist Jitendra Sharma, Broad Institute group leader Rogier Landman, and Qiong Ke of Sun Yat-Sen University. The research team also includes Mriganka Sur, the Paul and Lilah E. Newton Professor in the Department of Brain and Cognitive Sciences and a member of MIT’s Picower Institute for Learning and Memory.

Gene variants

Scientists have identified hundreds of genetic variants associated with autism spectrum disorder, many of which individually confer only a small degree of risk. In this study, the researchers focused on one gene with a strong association, known as Shank3. In addition to its link with autism, mutations or deletions of Shank3 can also cause a related rare disorder called Phelan-McDermid Syndrome, whose most common characteristics include intellectual disability, impaired speech and sleep, and repetitive behaviors. The majority of these individuals are also diagnosed with autism spectrum disorder, as many of the symptoms overlap.

The protein encoded by Shank3 is found in synapses — the junctions between brain cells that allow them to communicate with each other. It is particularly active in a part of the brain called the striatum, which is involved in motor planning, motivation, and habitual behavior. Feng and his colleagues have previously studied mice with Shank3 mutations and found that they show some of the traits associated with autism, including avoidance of social interaction and obsessive, repetitive behavior.

Although mouse studies can provide a great deal of information on the molecular underpinnings of disease, there are drawbacks to using them to study neurodevelopmental disorders, Feng says. In particular, mice lack the highly developed prefrontal cortex that is the seat of many uniquely primate traits, such as making decisions, sustaining focused attention, and interpreting social cues, which are often affected by brain disorders.

The recent development of the CRISPR genome-editing technique offered a way to engineer gene variants into macaque monkeys, which has previously been very difficult to do. CRISPR consists of a DNA-cutting enzyme called Cas9 and a short RNA sequence that guides the enzyme to a specific area of the genome. It can be used to disrupt genes or to introduce new genetic sequences at a particular location.
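
A small Python sketch shows how a Cas9 target site can be located in software: scan a DNA sequence for the three-letter PAM motif “NGG” that the commonly used SpCas9 enzyme requires, and report the 20-nucleotide stretch immediately upstream, which the guide RNA would be designed to match. The sequence below is made up for illustration.

```python
# Toy Cas9 target-site finder: report 20-nt protospacers that sit
# immediately upstream of an SpCas9 "NGG" PAM motif.
def find_cas9_sites(dna, guide_len=20):
    sites = []
    for i in range(guide_len, len(dna) - 2):
        if dna[i + 1:i + 3] == "GG":  # PAM = any base, then "GG"
            sites.append((dna[i - guide_len:i], dna[i:i + 3]))
    return sites

seq = "ATGCTTACCGGATACGATCGTACCGTTAGCATGGCTAGCTAAGG"  # invented sequence
for protospacer, pam in find_cas9_sites(seq):
    print(protospacer, pam)
```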

Members of the research team based in China, where primate reproductive technology is much more advanced than in the United States, injected the CRISPR components into fertilized macaque eggs, producing embryos that carried the Shank3 mutation.

Researchers at MIT, where much of the data was analyzed, found that the macaques with Shank3 mutations showed behavioral patterns similar to those seen in humans with the mutated gene. They tended to wake up frequently during the night, and they showed repetitive behaviors. They also engaged in fewer social interactions than other macaques.

Magnetic resonance imaging (MRI) scans also revealed similar patterns to humans with autism spectrum disorder. Neurons showed reduced functional connectivity in the striatum as well as the thalamus, which relays sensory and motor signals and is also involved in sleep regulation. Meanwhile, connectivity was strengthened in other regions, including the sensory cortex.

Michael Platt, a professor of neuroscience and psychology at the University of Pennsylvania, says the macaque models should help to overcome some of the limitations of studying neurological disorders in mice, whose behavioral symptoms and underlying neurobiology are often different from those seen in humans.

“Because the macaque model shows a much more complete recapitulation of the human behavioral phenotype, I think we should stand a much greater chance of identifying the degree to which any particular therapy, whether it’s a drug or any other intervention, addresses the core symptoms,” says Platt, who was not involved in the study.

Drug development

Within the next year, the researchers hope to begin testing treatments that may affect autism-related symptoms. They also hope to identify biomarkers, such as the distinctive functional brain connectivity patterns seen in MRI scans, that would help them to evaluate whether drug treatments are having an effect.

A similar approach could also be useful for studying other types of neurological disorders caused by well-characterized genetic mutations, such as Rett Syndrome and Fragile X Syndrome. Fragile X is the most common inherited form of intellectual disability in the world, affecting about 1 in 4,000 males and 1 in 8,000 females. Rett Syndrome, which is more rare and almost exclusively affects girls, produces severe impairments in language and motor skills and can also cause seizures and breathing problems.

“Given the limitations of mouse models, patients really need this kind of advance to bring them hope,” Feng says. “We don’t know whether this will succeed in developing treatments, but we will see in the next few years how this can help us to translate some of the findings from the lab to the clinic.”

The research was funded, in part, by the Shenzhen Overseas Innovation Team Project, the Guangdong Innovative and Entrepreneurial Research Team Program, the National Key R&D Program of China, the External Cooperation Program of the Chinese Academy of Sciences, the Patrick J. McGovern Foundation, the National Natural Science Foundation of China, the Shenzhen Science and Technology Commission, the James and Patricia Poitras Center for Psychiatric Disorders Research at the McGovern Institute at MIT, the Stanley Center for Psychiatric Research at the Broad Institute of MIT and Harvard, and the Hock E. Tan and K. Lisa Yang Center for Autism Research at the McGovern Institute at MIT. The research facilities in China where the primate work was conducted are accredited by AAALAC International, a private, nonprofit organization that promotes the humane treatment of animals in science through voluntary accreditation and assessment programs.

An escape route for carbon

Wed, 06/12/2019 - 1:00pm

As many of us may recall from grade school science class, the Earth’s carbon cycle goes something like this: As plants take up carbon dioxide and convert it into organic carbon, they release oxygen back into the air. Complex life forms such as ourselves breathe in this oxygen and respire carbon dioxide. When microbes eat away at decaying plants, they also consume the carbon within, which they convert and release back into the atmosphere as carbon dioxide. And so the cycle continues.

The vast majority of the planet’s carbon loops perpetually through this cycle, driven by photosynthesis and respiration. There is, however, a tiny fraction of organic carbon that is continually escaping through a “leak” in the cycle, the cause of which is largely unknown. Scientists do know that, through this leak, some minute amount of carbon is constantly locked away and preserved in the form of rock for hundreds of millions of years.

Now, researchers from MIT and elsewhere have found evidence for what may be responsible for carbon’s slow and steady escape route.

In a paper published today in the journal Nature, the team reports that organic carbon is leaking out of the carbon cycle mainly due to a mechanism they call “mineral protection.” In this process, carbon, in the form of decomposed bits of plant and phytoplankton material, gloms onto particles of clay and other minerals, for instance at the bottom of a river or ocean, and is preserved in the form of sediments and, ultimately, rock.

Mineral protection may also explain why there is oxygen on Earth in the first place: If something causes carbon to leak out of the carbon cycle, this leaves more oxygen to accumulate in the atmosphere.

“Fundamentally, this tiny leak is one reason why we exist,” says Daniel Rothman, professor of geophysics in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “It’s what allows oxygen to accumulate over geologic time, and it’s why aerobic organisms evolved, and it has everything to do with the history of life on the planet.”

Rothman’s co-authors on the paper include Jordon Hemingway, who led the work as a graduate student at MIT and the Woods Hole Oceanographic Institution and is now a postdoc at Harvard University, along with Katherine Grant, Sarah Rosengard, Timothy Eglinton, Louis Derry, and Valier Galy.

Burning dirt

Scientists have entertained two main possibilities for how carbon has been leaking out of the Earth’s carbon cycle. The first has to do with “selectivity,” the idea that some types of organic matter, due to their molecular makeup, may be harder to break down than others. Based on this idea, the carbon that is not consumed, and therefore leaks out, has been “selected” to do so, based on the initial organic matter’s molecular structure.

The second possibility involves “accessibility,” the notion that some organic matter leaks out of the carbon cycle because it has been made inaccessible for consumption via some secondary process. Some scientists believe that secondary process could be mineral protection — interactions between organic carbon and clay-based minerals that bind the two together in an inaccessible, unconsumable form.

To test which of these mechanisms better explains Earth’s carbon leak, Hemingway analyzed sediment samples collected from around the world, each containing organic matter and minerals from a range of river and coastal environments. If mineral protection is indeed responsible for locking away and preserving carbon over geologic timescales, Hemingway hypothesized that organic carbon bound with clay minerals should last longer in the environment compared with unbound carbon, resisting degradation by foraging microbes, or even other forces such as extreme heat.

The researchers tested this idea by burning each sediment sample and measuring the amount and type of organic carbon that remained as they heated the sample at progressively higher temperatures. They did so using a device that Hemingway developed as part of his PhD thesis.

“It’s been hypothesized that organic matter that sticks to mineral surfaces will stick around longer in the environment,” Hemingway says. “But there was never a tool to directly quantify that.”
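In spirit, the analysis boils down to tracking how much of a sample’s carbon survives each temperature step and comparing bound against unbound material. The Python sketch below is a toy version of that bookkeeping; the numbers, labels, and function are invented for illustration and are not the instrument’s actual software.

```python
import numpy as np

def fraction_remaining(co2_released_umol):
    """Toy data reduction for a ramped-oxidation run: given the CO2
    released in each temperature step, return the cumulative fraction
    of the sample's organic carbon still remaining after each step."""
    co2 = np.asarray(co2_released_umol, dtype=float)
    return 1.0 - np.cumsum(co2) / co2.sum()

# Invented example data (umol CO2 per step). A clay-bound sample
# should hold onto its carbon until higher temperatures than an
# unbound one, so its "remaining" curve falls off later.
temps_c = [200, 300, 400, 500, 600, 700]
runs = {
    "unbound":    [40, 30, 15, 10, 4, 1],
    "clay-bound": [5, 10, 15, 25, 30, 15],
}

for label, co2_per_step in runs.items():
    remaining = fraction_remaining(co2_per_step)
    profile = ", ".join(f"{t}C: {r:.2f}" for t, r in zip(temps_c, remaining))
    print(f"{label:>10} -> {profile}")
```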

“Beating up a natural process”

In the end, they found the organic matter that lasted the longest, and withstood the highest temperatures, was bound to clay minerals. Importantly, in a finding that went against the idea of selectivity, it didn’t matter what the molecular structure of that organic matter was — as long as it was bound to clay, it was preserved.

The results point to accessibility, and mineral protection in particular, as the main mechanism for Earth’s carbon leak. In other words, all around the world, clay minerals are slowly and steadily drawing down tiny amounts of carbon, and storing it away for thousands of years.

“It’s this clay-bound protection that seems to be the mechanism, and it seems to be a globally coherent phenomenon,” Hemingway says. “It’s a slow leak happening all the time, everywhere. And when you integrate that over geologic timescales, it becomes a really important sink of carbon.”

The researchers believe mineral protection has made it possible for vast reservoirs of carbon to be buried and stored in the Earth, some of which has been pressed and heated into petroleum over millions of years. At the Earth’s geologic pace, this carbon preserved in rocks eventually resurfaces through mountain uplift and gradually erodes, releasing carbon dioxide back into the atmosphere ever so slowly.

“What we do today with fossil fuel burning is speeding up this natural process,” Rothman says. “We’re getting it out of the ground and burning it right away, and we’re changing the rate at which the carbon that was leaked out is being returned to the system, by a couple orders of magnitude.”

Could mineral protection somehow be harnessed to sequester even more carbon, in an effort to mitigate fossil-fuel-induced climate change?

“If we magically had the ability to take a fraction of organic matter in rivers or oceans and attach it to a mineral to hold onto it for 1,000 years, it could have some advantages,” Rothman says. “That’s not the focus of this study. But the longer soils can lock up organic matter, the slower their return to the atmosphere. You can imagine if you could slow that return process down just a little bit, it could make a big difference over 10 to 100 years.”

This research was supported, in part, by NASA and the National Science Foundation.

How we tune out distractions

Wed, 06/12/2019 - 10:59am

Imagine trying to focus on a friend’s voice at a noisy party, or blocking out the phone conversation of the person sitting next to you on the bus while you try to read. Both of these tasks require your brain to somehow suppress the distracting signal so you can focus on your chosen input.

MIT neuroscientists have now identified a brain circuit that helps us to do just that. The circuit they identified, which is controlled by the prefrontal cortex, filters out unwanted background noise or other distracting sensory stimuli. When this circuit is engaged, the prefrontal cortex selectively suppresses sensory input as it flows into the thalamus, the site where most sensory information enters the brain.

“This is a fundamental operation that cleans up all the signals that come in, in a goal-directed way,” says Michael Halassa, an assistant professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

The researchers are now exploring whether impairments of this circuit may be involved in the hypersensitivity to noise and other stimuli that is often seen in people with autism.

Miho Nakajima, an MIT postdoc, is the lead author of the paper, which appears in the June 12 issue of Neuron. Research scientist L. Ian Schmitt is also an author of the paper.

Shifting attention

Our brains are constantly bombarded with sensory information, and we are able to tune out much of it automatically, without even realizing it. Other distractions that are more intrusive, such as your seatmate’s phone conversation, require a conscious effort to suppress.

In a 2015 paper, Halassa and his colleagues explored how attention can be consciously shifted between different types of sensory input, by training mice to switch their focus between a visual and auditory cue. They found that during this task, mice suppress the competing sensory input, allowing them to focus on the cue that will earn them a reward.

This process appeared to originate in the prefrontal cortex (PFC), which is critical for complex cognitive behavior such as planning and decision-making. The researchers also found that a part of the thalamus that processes vision was inhibited when the animals were focusing on sound cues. However, there are no direct physical connections from the prefrontal cortex to the sensory thalamus, so it was unclear exactly how the PFC was exerting this control, Halassa says.

In the new study, the researchers again trained mice to switch their attention between visual and auditory stimuli, then mapped the brain connections that were involved. They first examined the outputs of the PFC that were essential for this task, by systematically inhibiting the PFC’s projection terminals at each of its targets. This allowed them to discover that the PFC connection to a brain region known as the striatum is necessary to suppress visual input when the animals are paying attention to the auditory cue.

Further mapping revealed that the striatum then sends input to a region called the globus pallidus, which is part of the basal ganglia. The basal ganglia then suppress activity in the part of the thalamus that processes visual information.

Using a similar experimental setup, the researchers also identified a parallel circuit that suppresses auditory input when animals pay attention to the visual cue. In that case, the circuit travels through parts of the striatum and thalamus that are associated with processing sound, rather than vision.

The findings offer some of the first evidence that the basal ganglia, which are known to be critical for planning movement, also play a role in controlling attention, Halassa says.

“What we realized here is that the connection between PFC and sensory processing at this level is mediated through the basal ganglia, and in that sense, the basal ganglia influence control of sensory processing,” he says. “We now have a very clear idea of how the basal ganglia can be involved in purely attentional processes that have nothing to do with motor preparation.”

Noise sensitivity

The researchers also found that the same circuits are employed not only for switching between different types of sensory input such as visual and auditory stimuli, but also for suppressing distracting input within the same sense — for example, blocking out background noise while focusing on one person’s voice.

The team also showed that when the animals are alerted that the task is going to be noisy, their performance actually improves, as they use this circuit to focus their attention.

“This study uses a dazzling array of techniques for neural circuit dissection to identify a distributed pathway, linking the prefrontal cortex to the basal ganglia to the thalamic reticular nucleus, that allows the mouse brain to enhance relevant sensory features and suppress distractors at opportune moments,” says Daniel Polley, an associate professor of otolaryngology at Harvard Medical School, who was not involved in the research. “By paring down the complexities of the sensory stimulus only to its core relevant features in the thalamus — before it reaches the cortex — our cortex can more efficiently encode just the essential features of the sensory world.”

Halassa’s lab is now doing similar experiments in mice that are genetically engineered to develop symptoms similar to those of people with autism. One common feature of autism spectrum disorder is hypersensitivity to noise, which could be caused by impairments of this brain circuit, Halassa says. He is now studying whether boosting the activity of this circuit might reduce sensitivity to noise.

“Controlling noise is something that patients with autism have trouble with all the time,” he says. “Now there are multiple nodes in the pathway that we can start looking at to try to understand this.”

The research was funded by the National Institute of Mental Health, the National Institute of Neurological Disorders and Stroke, the Simons Foundation, the Alfred P. Sloan Foundation, the Esther A. and Joseph Klingenstein Fund, and the Human Frontier Science Program.

Spotlight on engineering staff

Wed, 06/12/2019 - 10:55am

The School of Engineering hosted its 19th annual Infinite Mile Awards ceremony on May 22 to recognize and reward members of the school’s administrative, support, service, and research staff whose work is of the highest caliber. The awards support the Institute’s and the School of Engineering’s objectives for excellence.

Nominations are made by department heads and laboratory directors, and the awards are presented to individuals and teams who stand out due to their high level of commitment, energy, and enthusiasm. Since their inception in 2001, the Infinite Mile Awards have been presented to nearly 250 staff members. 

For the quality of their contributions, the individuals who earned the Infinite Mile Award for Excellence were:

  • Priyanka Chaudhuri from the Department of Materials Science and Engineering;
  • Sharece Corner from the Department of Chemical Engineering;
  • Eileen Demarkles from the Department of Chemical Engineering;
  • Reimi Hicks from the Office of Engineering Outreach Programs;
  • Magdalena Rieb from the Department of Materials Science and Engineering; and
  • Faika Weche from the Office of Engineering Outreach Programs.

In addition to the Infinite Mile Awards, the School of Engineering presented an Ellen J. Mandigo Award for Outstanding Service. Established in 2009, the award recognizes staff who have demonstrated, over an extended period of time, the qualities Ellen J. Mandigo valued and possessed during her long career at MIT: intelligence, skill, hard work, and dedication to the Institute. This award is made possible by a bequest from Mandigo, a member of the MIT engineering community for nearly five decades.

The 2019 recipient was Angelita Mireles from the Department of Materials Science and Engineering.

Transmedia Storytelling Initiative launches with $1.1 million gift

Wed, 06/12/2019 - 10:00am

Driven by the rise of transformative digital technologies and the proliferation of data, human storytelling is rapidly evolving in ways that challenge and expand our very understanding of narrative. Transmedia, in which stories and data move across multiple platforms and social contexts, encompasses a wide range of theoretical, philosophical, and creative perspectives, and it calls for shared critique around both making and understanding.

MIT’s School of Architecture and Planning (SA+P), working closely with faculty in the MIT School of Humanities, Arts, and Social Sciences (SHASS) and others across the Institute, has launched the Transmedia Storytelling Initiative under the direction of Professor Caroline Jones, an art historian, critic, and curator in the History, Theory, Criticism section of SA+P’s Department of Architecture. The initiative will build on MIT’s bold tradition of art education, research, production, and innovation in media-based storytelling, from film through augmented reality. Supported by a foundational gift from David and Nina Fialkow, this initiative will create an influential hub for pedagogy and research in time-based media.

The goal of the program is to create new partnerships among faculty across schools, offer pioneering pedagogy to students at the graduate and undergraduate levels, convene conversations among makers and theorists of time-based media, and encourage shared debate and public knowledge about pressing social issues, aesthetic theories, and technologies of the moving image.

The program will bring together faculty from SA+P and SHASS, including the Comparative Media Studies/Writing program, and from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). The formation of the MIT Stephen A. Schwarzman College of Computing adds another powerful dimension to the collaborative potential.

“We are grateful to Nina and David for helping us build on the rich heritage of MIT in this domain and carry it forward,” says SA+P Dean Hashim Sarkis. “Their passion for both innovation and art is invaluable as we embark on this new venture.”

The Fialkows’ interest in the initiative stems from their longstanding engagement with filmmaking. David Fialkow, cofounder and managing director of venture capital firm General Catalyst, earned the 2018 Academy Award for producing the year's best documentary, “Icarus.” Nina Fialkow has worked as an independent film producer for PBS as well as on several award-winning documentaries. Nina has served as chair of the Massachusetts Cultural Council since 2016.

“We are thrilled and humbled to support MIT’s vision for storytelling,” say David and Nina Fialkow. “We hope to tap into our ecosystem of premier thinkers, creators, and funders to grow this initiative into a transformative program for MIT’s students, the broader community, and our society.”

The building blocks

The Transmedia Storytelling Initiative draws on MIT’s long commitment to provocative work produced at the intersection of art and technology.

In 1967, the Department of Architecture established the Film Section and founded the Center for Advanced Visual Studies (CAVS). Over time, CAVS brought scores of important video, computer, and “systems” artists to campus. In parallel, the Film Section trained generations of filmmakers as part of Architecture’s Visual Arts Program (VAP). SA+P uniquely brought making together with theorizing, as its urban studies and architecture departments fostered sections such as History, Theory, Criticism (HTC) and groups such as the Architecture Machine Group, which became the Media Lab in 1985.

A major proponent of “direct cinema,” the Film Section was based in the Department of Architecture until it relocated to the Media Lab. With the retirement of its charismatic leader, Professor Richard Leacock, its energies shifted to the Media Lab’s Interactive Cinema group (1987–2004) under the direction of the lab’s research scientist and Leacock’s former student, Glorianna Davenport.

The 1990s’ shift from analog film and video to “digitally convergent” forms (based on bits, bytes, and algorithms) transformed production and critical understanding of time-based media, distributing storytelling and making across the Institute (and across media platforms, going “viral” around the globe).

In parallel to Davenport’s Interactive Cinema group and preceding the Media Lab’s Future Storytelling group (2008–2017), the Comparative Media Studies program — now Comparative Media Studies/Writing (CMS/W) — emerged in SHASS in 1999 and quickly proved to be a leader in cross-media studies. The research of CMS/W scholars such as Henry Jenkins gave rise to the terms “transmedia storytelling” and “convergence” that have since become widely adopted.

The program’s commitment to MIT’s “mens-et-manus” (“mind-and-hand”) ethos takes the form of several field-shaping research labs, including: the Open Documentary Lab, which partners with Sundance and Oculus and explores storytelling and storyfinding with interactive, immersive, and machine-learning systems; and the Game Lab, which draws on emergent technologies and partners with colleagues in the Department of Electrical Engineering and Computer Science to create rule-based ludic narratives. Current CMS/W faculty such as professors William Uricchio, Nick Montfort, D. Fox Harrell, and Lisa Parks each lead labs that draw fellows and postdocs to their explorations of expressive systems. All have been actively involved in the discussions leading to and shaping this new initiative.

Reflecting on the new initiative, Melissa Nobles, Kenan Sahin Dean of SHASS, says, “For more than two decades, the media, writing, and literature faculty in MIT SHASS have been at the forefront of examining the changing nature of media to empower storytelling, collaborating with other schools across the Institute. The Transmedia Initiative will enable our faculty in CMS/W and other disciplines in our school to work with the SA+P faculty and build new partnerships that apply the humanistic lens to emerging media, especially as it becomes increasingly digital and ever more influential in our society.”

The Transmedia Storytelling Initiative will draw on these related conversations across MIT, in the urgent social project of revealing stories created within data by filters and algorithms, as well as producing new stories through the emerging media of the future.

“For the first time since the analog days of the Film Section, there will be a shared conversation around the moving image and its relationship to our lived realities,” says Caroline Jones. “Transmedia’s existing capacity to multiply storylines and allow users to participate in co-creation will be amplified by the collaborative force of MIT makers and theorists. MIT is the perfect place to launch this, and now is the time.”

Involving members of several schools will be important to the success of the new initiative. Increasingly, faculty across SA+P use moving images, cinematic tropes, and powerful narratives to model potential realities and tell stories with design in the world. Media theorists in SHASS use humanistic tools to decode the stories embedded in our algorithms and the feelings provoked by media, from immersion to surveillance. 

SA+P’s Art, Culture and Technology program — the successor to VAP and CAVS — currently includes three faculty who are renowned for theorizing and producing innovative forms of what has long been theorized as “expanded cinema”: Judith Barry (filmic installations and media theory); Renée Green (“Free Agent Media,” “Cinematic Migrations”); and Nida Sinnokrot (“Horizontal Cinema”). In these artists’ works, the historical “new media” of cinema is reanimated, deconstructed, and reassembled to address wholly contemporary concerns.

Vision for the initiative

Understandings of narrative, the making of time-based media, and modes of alternative storytelling go well beyond “film.” CMS in particular ranges across popular culture entities such as music video, computer games, and graphic novels, as well as more academically focused practices from computational poetry to net art.

The Transmedia Storytelling Initiative will draw together the various strands of such compelling research and teaching about time-based media to meet the 21st century’s unprecedented demands, including consideration of ethical dimensions.

“Stories unwind to reveal humans’ moral thinking,” says Jones. “Implicit in the Transmedia Storytelling Initiative is the imperative to convene an ethical conversation about what narratives are propelling the platforms we share and how we can mindfully create new stories together.”

Aiming ultimately for a physical footprint offering gathering, production, and presentation spaces, the initiative will begin to coordinate pedagogy for a proposed undergraduate minor in Transmedia. This course of study will encompass storytelling via production and theory, spanning from computational platforms that convert data to affective videos to artistic documentary forms, to analysis and critique of contemporary media technologies.

New gene-editing system precisely inserts large DNA sequences into cellular DNA

Wed, 06/12/2019 - 9:35am

The following press release was issued yesterday by the Broad Institute of MIT and Harvard.

A team led by researchers from Broad Institute of MIT and Harvard, and the McGovern Institute for Brain Research at MIT, has characterized and engineered a new gene-editing system that can precisely and efficiently insert large DNA sequences into a genome. The system, harnessed from cyanobacteria and called CRISPR-associated transposase (CAST), allows efficient introduction of DNA while reducing the potential error-prone steps in the process — adding key capabilities to gene-editing technology and addressing a long-sought goal for precision gene editing.

Precise insertion of DNA has the potential to treat a large swath of genetic diseases by integrating new DNA into the genome while disabling the disease-related sequence. To accomplish this in cells, researchers have typically used CRISPR enzymes to cut the genome at the site of the deleterious sequence, and then relied on the cell’s own repair machinery to stitch the old and new DNA elements together. However, this approach has many limitations.

Using Escherichia coli bacteria, the researchers have now demonstrated that CAST can be programmed to efficiently insert new DNA at a designated site, with minimal editing errors and without relying on the cell’s own repair machinery. The system holds potential for much more efficient gene insertion compared to previous technologies, according to the team.

The researchers are working to apply this editing platform in eukaryotic organisms, including plant and animal cells, for precision research and therapeutic applications.

The team molecularly characterized and harnessed CAST from two cyanobacteria, Scytonema hofmanni and Anabaena cylindrica, and additionally revealed a new way that some CRISPR systems perform in nature: not to protect bacteria from viruses, but to facilitate the spread of transposon DNA.

The work, appearing in Science, was led by first author Jonathan Strecker, a postdoctoral fellow at the Broad Institute; graduate student Alim Ladha at MIT; and senior author Feng Zhang, a core institute member at the Broad Institute, an investigator at the McGovern Institute for Brain Research, and the James and Patricia Poitras Professor of Neuroscience at MIT, where he is an associate professor with joint appointments in the departments of Brain and Cognitive Sciences and Biological Engineering. Collaborators include Eugene Koonin at the National Institutes of Health.

A new role for a CRISPR-associated system

“One of the long-sought-after applications for molecular biology is the ability to introduce new DNA into the genome precisely, efficiently, and safely,” explains Zhang. “We have worked on many bacterial proteins in the past to harness them for editing in human cells, and we’re excited to further develop CAST and open up these new capabilities for manipulating the genome.”

To expand the gene-editing toolbox, the team turned to transposons. Transposons (sometimes called “jumping genes”) are DNA sequences with associated proteins — transposases — that allow the DNA to be cut and pasted into other places.

Most transposons appear to jump randomly throughout the cellular genome and out to viruses or plasmids that may also be inhabiting a cell. However, some transposon subtypes in cyanobacteria have been computationally associated with CRISPR systems, suggesting that these transposons may naturally be guided towards more-specific genetic targets. This theorized function would be a new role for CRISPR systems; most known CRISPR elements are instead part of a bacterial immune system, in which Cas enzymes and their guide RNA will target and destroy viruses or plasmids.

In this paper, the research team identified the mechanisms at work and determined that some CRISPR-associated transposases have hijacked an enzyme called Cas12k and its guide to insert DNA at specific targets, rather than just cutting the target for defensive purposes.

“We dove deeply into this system in cyanobacteria, began taking CAST apart to understand all of its components, and discovered this novel biological function,” says Strecker, a postdoctoral fellow in Zhang’s lab at the Broad Institute. “CRISPR-based tools are often DNA-cutting tools, and they’re very efficient at disrupting genes. In contrast, CAST is naturally set up to integrate genes. To our knowledge, it’s the first system of this kind that has been characterized and manipulated.”

Harnessing CAST for genome editing

Once all the elements and molecular requirements of the CAST system were laid bare, the team focused on programming CAST to insert DNA at desired sites in E. coli.

“We reconstituted the system in E. coli and co-opted this mechanism in a way that was useful,” says Strecker. “We reprogrammed the system to introduce new DNA, up to 10 kilobase pairs long, into specific locations in the genome.”

The team envisions basic research, agricultural, or therapeutic applications based on this platform, such as introducing new genes to replace DNA that has mutated in a harmful way — for example, in sickle cell disease. Systems developed with CAST could potentially be used to integrate a healthy version of a gene into a cell’s genome, disabling or overriding the DNA causing problems.

Alternatively, rather than inserting DNA with the purpose of fixing a deleterious version of a gene, CAST may be used to augment healthy cells with elements that are therapeutically beneficial, according to the team. For example, in immunotherapy, a researcher may want to introduce a “chimeric antigen receptor” (CAR) into a specific spot in the genome of a T cell — enabling the T cell to recognize and destroy cancer cells.

“For any situation where people want to insert DNA, CAST could be a much more attractive approach,” says Zhang. “This just underscores how diverse nature can be and how many unexpected features we have yet to find.”

Support for this study was provided in part by the Human Frontier Science Program, New York Stem Cell Foundation, Mathers Foundation, NIH (1R01-HG009761, 1R01-MH110049, and 1DP1-HL141201), Howard Hughes Medical Institute, Poitras Center for Psychiatric Disorders Research, J. and P. Poitras, and Hock E. Tan and K. Lisa Yang Center for Autism Research.

J.S. and F.Z. are co-inventors on US provisional patent application no. 62/780,658 filed by the Broad Institute, relating to CRISPR-associated transposases.

Expression plasmids are available from Addgene.

Engineers set the standards

Wed, 06/12/2019 - 12:00am

It might not seem consequential now, but in 1863, Scientific American weighed in on a pressing technological issue: the standardization of screw threads in U.S. machine shops. Given standard-size threads — the ridges running around screws and bolts — screws missing from machinery could be replaced with hardware from any producer. But without a standard, fixing industrial equipment would be harder or even impossible.

Moreover, Great Britain had begun standardizing the size of screw threads, so why couldn’t the U.S.? After energetic campaigning by a mechanical engineer named William Sellers, both the U.S. Navy and the Pennsylvania Railroad got on board with the idea, greatly helping standardization take hold.

Why did it matter? The latter half of the 1800s was an unprecedented time of industrial expansion. But the products and tools of the time were not necessarily uniform. Making them compatible served as an accelerant for industrialization. The standardization of screw threads was a signature moment in this process — along with new standards for steam boilers (which had a nasty habit of exploding) and for the steel rails used in train tracks.

Moreover, what goes for 19th-century hardware goes for hundreds of things used in daily life today. From software languages to batteries, transmission lines to power plants, cement, and more, standardization still helps fuel economic growth.

“Everything around us is full of standards,” says JoAnne Yates, the Sloan Distinguished Professor of Management at MIT. “None of us could function without standards.”

But how did this all come about? One might expect government treaties to be essential for global standards to exist. But time and again, Yates notes, industrial standards are voluntary and have the same source: engineers. Or, more precisely, nongovernmental standard-setting bodies dominated by engineers, which work to make technology uniform across borders.

“On one end of a continuum is government regulation, and on the other are market forces, and in between is an invisible infrastructure of organizations that helps us arrive at voluntary standards without which we couldn’t operate,” Yates says.

Now Yates is the co-author of a new history that makes the role of engineers in setting standards more visible than ever. The book, “Engineering Rules: Global Standard Setting since 1880,” is being published this week by Johns Hopkins University Press. It is co-authored by Yates, who teaches in the MIT Sloan School of Management, and Craig N. Murphy, who is the Betty Freyhof Johnson ’44 Professor of International Relations at Wellesley College.

Joint research project

As it happens, Murphy is also Yates’ husband — and, for the first time, they have collaborated on a research project.

“He’s a political scientist and I’m a business historian, but we had said throughout our careers, ‘Some day we should write a book together,’” Yates says. When it crossed their radar as a topic, the evolution of standards “immediately appealed to both of us,” she adds. “From Craig’s point of view, he studies global governance, which also includes nongovernmental institutions like this. I saw it as important because of the way firms play a role in it.”

As Yates and Murphy see it, there have been three distinct historical “waves” of technological standardization. The first, the late 19th- and early 20th-century industrial phase, was spurred by the professionalization of engineering itself. Those engineers were trying to impose order on a world far less organized than ours: Although the U.S. Constitution gives Congress the power to set standards, a U.S. National Bureau of Standards was not created until 1901, when there were still 25 different basic units of length — such as “rods” — being used in the country.

Much of this industrial standardization occurred country by country. But by the early 20th century, engineers ramped up their efforts to make standards international — and some, like the British engineer Charles le Maistre, a key figure in the book, were very aspirational about global standards.

“Technology evangelists, like le Maistre, spread the word about the importance of standardizing and how technical standards should transcend politics and transcend national boundaries,” Yates says, adding that many had a “social movement-like fervor, feeling that they were contributing to the common good. They even thought it would create world peace.”

It didn’t. Still, the momentum for standards created by le Maistre carried into the post-World War II era, the second wave detailed in the book. This new phase, Yates notes, is exemplified by the creation of the standardized shipping container, which made worldwide commerce vastly easier in terms of logistics and efficiency.

“This second wave was all about integrating the global market,” Yates says. 

The third and most recent wave of standardization, as Yates and Murphy see it, is centered on information technology — where engineers have once again toiled, often with a sense of greater purpose, to develop global standards.

To some degree this is an MIT story; Tim Berners-Lee, inventor of the World Wide Web, moved to MIT to establish a global standards consortium for the web, the W3C, founded in 1994 with the Institute’s backing. More broadly, Yates and Murphy note, the era is marked by efforts to speed up the process of standard-setting, “to respond to a more rapid pace of technological change” in the world.

Setting a historical standard

Intriguingly, as Yates and Murphy document, many efforts to standardize technologies required firms and business leaders to put aside their short-term interests for a longer-term good — whether for a business, an industry, or society generally.

“You can’t explain the standards world entirely by economics,” Yates says. “And you can’t explain the standards world entirely by power.”

Other scholars regard the book as a significant contribution to the history of business and globalization. Yates and Murphy “demonstrate the crucial impact of private and informal standard setting on our daily lives,” according to Thomas G. Weiss, a professor of international relations and global governance at the Graduate Center of the City University of New York. Weiss calls the book “essential reading for anyone wishing to understand the major changes in the global economy.”

For her part, Yates says she hopes readers will, among other things, reflect on the idealism and energy of the engineers who regarded international standards as a higher cause.

“It is a story about engineers thinking they could contribute something good for the world, and then putting the necessary organizations into place,” Yates notes. “Standardization didn’t create world peace, but it has been good for the world.”

Taking a city’s pulse with moveable sensors

Tue, 06/11/2019 - 4:49pm

Suppose you have 10 taxis in Manhattan. What portion of the borough’s streets do they cover in a typical day?

Before we answer that, let’s examine why it would be useful to know this fact. Cities have a lot of things that need measuring: air pollution, weather, traffic patterns, road quality, and more. Some of these can be measured by instruments attached to buildings. But researchers can also affix inexpensive sensors to taxis and capture measurements across a larger portion of a city.

So, how many taxis would it take to cover a certain amount of ground?

To find out, an MIT-based team of researchers analyzed traffic data from nine major cities on three continents, and emerged with several new findings. A few taxis can cover a surprisingly large amount of ground, but it takes many more taxis to cover a city more comprehensively than that. Intriguingly, this pattern seems to replicate itself in metro areas around the world.

More specifically: Just 10 taxis typically cover one-third of Manhattan’s streets in a day. It also takes about 30 taxis to cover half of Manhattan in a day. But because taxis tend to have convergent routes, over 1,000 taxis are required in order to cover 85 percent of Manhattan in a day.

“The sensing power of taxis is unexpectedly large,” says Kevin O’Keeffe, a postdoc at the MIT Senseable City Lab and co-author of a newly published paper detailing the study’s results.

However, O’Keeffe observes, “There is a law of diminishing returns” at play as well. “You get the first one-third of streets almost free, with 10 random taxis. But … then it gets progressively harder.”
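Those coverage curves have the flavor of a classic covering problem, and even a toy model reproduces their qualitative shape. The following Python sketch simulates taxis as random walkers on an idealized street grid; the grid size, step counts, and fleet sizes are invented, and the model makes no attempt to match the study’s real GPS data. It simply shows coverage climbing quickly for the first few vehicles and then flattening out as routes overlap.

```python
import random

def covered_fraction(n_taxis, grid=90, steps=2000, seed=0):
    """Toy model of fleet sensing power: each taxi random-walks on a
    torus-shaped street grid; return the fraction of street segments
    traversed by at least one taxi during the day."""
    rng = random.Random(seed)
    visited = set()
    for _ in range(n_taxis):
        x, y = rng.randrange(grid), rng.randrange(grid)
        for _ in range(steps):
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            nx, ny = (x + dx) % grid, (y + dy) % grid
            visited.add(frozenset([(x, y), (nx, ny)]))  # one street segment
            x, y = nx, ny
    return len(visited) / (2 * grid * grid)  # a torus grid has 2*n^2 segments

# Coverage grows quickly at first, then shows diminishing returns
# as the taxis' routes increasingly overlap.
for fleet in [1, 10, 30, 100, 300]:
    print(f"{fleet:>4} taxis -> {covered_fraction(fleet):.0%} of segments")
```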

A similar numerical relationship occurs in Chicago, San Francisco, Vienna, Beijing, Shanghai, Singapore, and some other major global cities.

“Our results were showing that the sensing power of taxis in each city was very similar,” O’Keeffe observes. “We repeated the analysis, and lo and behold, all the curves [plotting taxi coverage] were the same shape.”

The paper, “Quantifying the sensing power of vehicle fleets,” is appearing this week in Proceedings of the National Academy of Sciences. In addition to O’Keeffe, who is the corresponding author, the co-authors are Amin Anjomshoaa, a researcher at the Senseable City Lab; Steven Strogatz, a professor of mathematics at Cornell University; Paolo Santi, a research scientist at the Senseable City Lab and the Institute of Informatics and Telematics of CNR in Pisa, Italy; and Carlo Ratti, director of the Senseable City Lab and professor of the practice in MIT’s Department of Urban Studies and Planning (DUSP).

Members of the Senseable City Lab have long been studying cities based on data from sensors. In doing so, they have observed that some traditional deployments of sensors come with tradeoffs. Sensors on buildings, for example, can provide consistent daily data, but their reach is very limited.

“They’re good in time, but not space,” says O’Keeffe of fixed-location sensors. “Airborne sensors have inverse properties. They’re good in space but not time. A satellite can take a photo of an entire city — but only when it is passing over the city, which is a relatively short time interval. We asked the question, ‘Is there something that combines the strengths of the two approaches, that explores this city well in both space and time?’”

Putting sensors on vehicles is one solution. But which vehicles? Buses, which have fixed routes, cover limited ground. Members of the Senseable City Lab have attached sensors to garbage trucks in Cambridge, Massachusetts, among other vehicles, but even those trucks did not collect as much data as taxis might.

That research helped lead to the current study, which uses data from a variety of municipalities and private-sector research efforts to better understand taxi-coverage patterns. The first place the researchers studied was Manhattan; they divided the borough into about 8,000 street segments and obtained their initial results there.

Still, Manhattan has some distinct features — an unusually regular street grid, for example — and there was no guarantee the metrics it produced would be similar in other places. But in city after city, the same phenomenon emerged: A small number of taxis can cover one-third of a city’s streets in a day, and a slightly larger number can reach half the city, but after that, a much bigger fleet is needed.

“It’s a very strong result and I’m surprised to see it, both from a practical point of view and a theoretical point of view,” O’Keeffe says.

The practical side of the study is that city planners and policymakers, among others, now potentially have a more concrete idea about the investment needed for certain levels of mobile sensing, as well as the extent of the results they would likely obtain. An air pollution study, for instance, could be drawn up with this kind of data in mind.

“Urban environmental sensing is crucial for human health,” says Ratti. “Until today, sensing has been performed primarily with a small number of fixed and expensive monitoring stations. … However, a comprehensive framework to understand the power of mobile sensing is still missing and is the motivation for our research. Results have been incredibly surprising, in terms of how well we can cover a large city with just a few moving probes.”

As O’Keeffe readily acknowledges, one practical way to construct a mobile-sensing project might be to place sensors on taxis, then deploy a relatively small fleet of vehicles (as Google does for mapping projects) to reach streets where taxis virtually never venture.

“You bias, almost by definition, popular areas,” O’Keeffe says. “And you’re potentially underserving deprived areas. The way to get around that is with a hybrid approach. [If] you put sensors on taxis, then you augment it with a few dedicated vehicles.”

For his part, O’Keeffe, a physicist by training, thinks the result bodes well for the continued use of mobile sensors in urban studies, across the globe.

“There is a science to how cities work, and we can use it to make things better,” says O’Keeffe.

Ribbon cutting launches auxiliary Beaver Works space

Tue, 06/11/2019 - 2:00pm

MIT Lincoln Laboratory Beaver Works is opening a new space in the recently renovated Department of Aeronautics and Astronautics (AeroAstro) Building 31. This new facility builds on the successful partnership between Lincoln Laboratory and the School of Engineering by providing another location for innovation, collaboration, and hands-on development. The new site will also strengthen connections between AeroAstro researchers and practicing engineers at the laboratory while supporting collaboration on projects such as the Transiting Exoplanet Survey Satellite and cutting-edge research on autonomous drone systems.

To celebrate the opening of the new space, a ribbon-cutting ceremony was held on May 24. Speakers included Eric Evans, director of Lincoln Laboratory; Professor Daniel Hastings, MIT Aeronautics and Astronautics department head; and Professor Jaime Peraire, the H.N. Slater Professor of Aeronautics and Astronautics.

“It was the generosity and enthusiasm of our extended MIT family that made this vision a reality. Generations of researchers and students will use this greatly improved space to conduct research that will benefit the world,” says Peraire.

Beaver Works has a history of bringing Lincoln Laboratory and AeroAstro together to generate innovative solutions and expose students to opportunities in engineering, research, and service to the nation. Beaver Works pursues this mission through a broad range of research and educational activities that include capstone courses, joint research projects, the Undergraduate Research Opportunities Program, undergraduate internships, and STEM (science, technology, engineering, and mathematics) outreach for local schools. The new facility will also support multiple Independent Activities Period courses and community outreach classes for middle and high school students, including coding classes and the four-week Beaver Works Summer Institute.

“This facility will enable great students, staff, and faculty to work together on complex system prototypes,” said Evans. “We are looking forward to the creative, new technologies that will be developed here.”

The renovation added a second, 4,000-square-foot facility for Beaver Works researchers to use. With this space, laboratory and MIT affiliates will continue to enable research and development in autonomous air systems, bold air vehicle designs, small satellite designs, and new drone research areas to face coming challenges in subjects ranging from transportation to self-driving drone races.

Of the newly renovated space, Hastings said: “This facility will enable us to undertake real-world projects with our students in a manner that exemplifies ‘mens et manus.’” The Latin motto, adopted by MIT, translates to “mind and hand.” This motto reflects the ideal of cooperation between knowledge and practice — a partnership that the new Beaver Works space exemplifies.

A platform for Africa’s mobile innovators

Mon, 06/10/2019 - 11:59pm

Sam Gikandi ’05 SM ’06 and Eston Kimani ’05 have always believed in the potential of Africa’s entrepreneurial community. Their years at MIT, beginning in 2001 when they left their home country of Kenya, only reinforced that belief.

Through the MIT-Africa initiative and other campus programs that allowed them to work in regions across the African continent, they met hundreds of established and aspiring software developers, many of whom were in various stages of starting companies.

In order for these developers to maximize their impact, Gikandi and Kimani knew they’d need to reach the hundreds of millions of Africans who own cell phones but not smartphones. That has traditionally required entrepreneurs to go through several long and complex processes, including applying for access to telecommunications infrastructure from mobile operators, setting up the necessary technical integrations, and gaining approval from regulatory agencies in each region they wanted to operate in.

Gikandi and Kimani felt those hurdles were holding Africa’s businesses back, so they founded Africa’s Talking to unleash entrepreneurs’ full potential.

Since 2012, the company, known colloquially as AT, has been helping businesses in Africa communicate and transact with customers — whether they have a smartphone or not — through text, voice, and other mobile-centered application programming interfaces, or APIs.

The APIs act as plug-and-play capabilities for developers to quickly add mobile features, including the ability to send and receive payments, to their solution. Gikandi describes the company as “telecom in a box.”

Africa’s Talking currently operates in 18 countries around Africa and supports about 5,000 businesses ranging from early-stage startups to large organizations. Businesses can add APIs as new needs arise and pay as they go, dramatically reducing the risks and time commitment traditionally associated with telecom integrations.

This spring, the company launched AT Labs, which aims to leverage its network, expertise, and infrastructure to help entrepreneurs create impactful companies in the shortest possible timeframe.

Gikandi, who ceded his CEO role at Africa’s Talking to lead AT Labs, says the new program will take a small stake in the companies it supports. But he also wants to incentivize founders to give back to AT Labs once they’ve had success.

He says the business model is in line with the larger symbiotic relationship between Africa’s Talking and its customers, in which all parties feed off of each other’s success: “We have a big advantage with Africa’s Talking, but we feel we only grow when the local ecosystem grows.”

Removing barriers to innovation

The rise in cell phone ownership among Africans over the last 15 years has given entrepreneurs the opportunity to create transformative solutions on the continent. But Gikandi says telecom companies make the process of gaining access to their infrastructure very difficult, sometimes forcing entrepreneurs to obtain multiple contracts for the same service or denying their requests outright.

“That’s basically a full-time business in itself,” Gikandi says of gaining approvals from telecom companies. “A lot of innovation wasn’t happening because developers didn’t see how they could leverage that infrastructure. We really lowered the barrier.”

Now, if an entrepreneur builds a financial lending solution, for example, they might use AT’s texting API to allow people to register for the service through an SMS message. The entrepreneur may then use another AT API, known as Unstructured Supplementary Service Data (USSD), to gather more information (think of prompts such as “Reply X for more information on Y”). After a customer is registered, it could be useful to send them text- or voice-based payment reminders. And AT’s payments API makes it easy for businesses to send and receive money through text messages, a powerful tool for working with the millions of Africans without bank accounts.
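As a concrete illustration of that flow, here is a minimal Python sketch for a hypothetical lending service. It follows the general pattern of Africa’s Talking’s published Python SDK, but the credentials, phone numbers, and messages are invented, and the exact method names and parameters should be checked against AT’s current documentation.

```python
# A hypothetical lending service wired up through AT, following the
# pattern of Africa's Talking's Python SDK. The username, API key,
# phone number, and message text are all invented; consult the AT
# documentation for the exact, current method names and parameters.
import africastalking

africastalking.initialize("sandbox_username", "MY_API_KEY")  # assumed credentials
sms = africastalking.SMS

def register_borrower(phone_number):
    # Step 1: enroll over plain SMS, which works on any phone.
    # (Follow-up prompts could run over a USSD menu instead.)
    sms.send("Welcome! Reply with your name to finish registering.",
             [phone_number])

def send_repayment_reminder(phone_number, amount_kes):
    # Later: a text-based reminder; an AT payments API call could
    # then collect the money from a mobile wallet.
    sms.send(f"Reminder: your loan payment of KES {amount_kes} is due "
             "on Friday.", [phone_number])

register_borrower("+254700000000")
send_repayment_reminder("+254700000000", 1500)
```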

Africa’s Talking even offers businesses a call center and an analytics platform for tracking customer contacts and engagement.

“The developers just have to tap into AT, and then we can coordinate [everything],” Gikandi says. “The developers can outsource their telecom infrastructure to AT and just focus on their core business.”

Scaling for impact

Gikandi says Africa’s Talking is still in growth mode after raising an $8.6 million funding round last year. Since 2016, the company has had a presence in several countries in east Africa and in Nigeria. The new funds have allowed it to spread into southern Africa (including in Zimbabwe, Zambia, South Africa, and Botswana) and west Africa (including Côte d’Ivoire and Senegal).

It can be difficult for entrepreneurs in the West to appreciate just how huge these markets are: At around 1.2 billion people, Africa’s population is nearly equal to the populations of Europe and North America combined. Each country Africa’s Talking expands to brings a wave of entrepreneurs eager to improve lives with innovative, mobile-based solutions.

“We think it’s really powerful,” Gikandi says. “Let’s say we add a new payment integration in Nigeria. You could then run your business in Nigeria without changing anything in your core business. It creates economies of scale, and allows businesses to focus on what’s important: The value they’re delivering to their customers.”

In February, Gikandi handed his CEO role at Africa’s Talking over to longtime chief operating officer Bilha Ndirangu ’06. Gikandi says he knows Ndirangu can continue growing the company while he puts more time into AT Labs, which is still in the early stages of building its incubator-like support model. For AT Labs, Gikandi envisions a studio that brings people with ideas together with technical talent, infrastructure, and business expertise.

With both Africa’s Talking and AT Labs, Gikandi’s goal is to support the African continent by tapping into its most valuable resource: its people.

“Africa is full of industry and consumers,” Gikandi says. “So the goal is to create a single platform where entrepreneurs can access the entire African market.”

Algorithm tells robots where nearby humans are headed

Mon, 06/10/2019 - 11:59pm

In 2018, researchers at MIT and the auto manufacturer BMW were testing ways in which humans and robots might work in close proximity to assemble car parts. In a replica of a factory floor setting, the team rigged up a robot on rails, designed to deliver parts between work stations. Meanwhile, human workers crossed its path every so often to work at nearby stations. 

The robot was programmed to stop momentarily if a person passed by. But the researchers noticed that the robot would often freeze in place, overly cautious, long before a person had crossed its path. If this took place in a real manufacturing setting, such unnecessary pauses could accumulate into significant inefficiencies.

The team traced the problem to a limitation in the trajectory alignment algorithms used by the robot’s motion-predicting software. While the algorithms could reasonably predict where a person was headed, their poor time alignment meant they couldn’t anticipate how long that person would spend at any point along the predicted path — in this case, how long it would take for a person to stop, then double back and cross the robot’s path again.

Now, members of that same MIT team have come up with a solution: an algorithm that aligns partial trajectories in real time, allowing motion predictors to accurately anticipate the timing of a person’s motion. When they applied the new algorithm to the BMW factory floor experiments, they found that, instead of freezing in place, the robot simply rolled on and was safely out of the way by the time the person walked by again.

“This algorithm builds in components that help a robot understand and monitor stops and overlaps in movement, which are a core part of human motion,” says Julie Shah, associate professor of aeronautics and astronautics at MIT. “This technique is one of the many ways we’re working on robots better understanding people.”

Shah and her colleagues, including project lead and graduate student Przemyslaw “Pem” Lasota, will present their results this month at the Robotics: Science and Systems conference in Germany.

Clustered up

To enable robots to predict human movements, researchers typically borrow algorithms from music and speech processing. These algorithms are designed to align two complete time series, or sets of related data, such as an audio track of a musical performance and a scrolling video of that piece’s musical notation.

Researchers have used similar alignment algorithms to sync up real-time and previously recorded measurements of human motion, to predict where a person will be, say, five seconds from now. But unlike music or speech, human motion can be messy and highly variable. Even for repetitive movements, such as reaching across a table to screw in a bolt, one person may move slightly differently each time.

Existing algorithms typically take in streaming motion data, in the form of dots representing the position of a person over time, and compare the trajectory of those dots to a library of common trajectories for the given scenario. An algorithm maps a trajectory in terms of the relative distance between dots.

But Lasota says algorithms that predict trajectories based on distance alone can get easily confused in certain common situations, such as temporary stops, in which a person pauses before continuing on their path. While paused, dots representing the person’s position can bunch up in the same spot.

“When you look at the data, you have a whole bunch of points clustered together when a person is stopped,” Lasota says. “If you’re only looking at the distance between points as your alignment metric, that can be confusing, because they’re all close together, and you don’t have a good idea of which point you have to align to.”

The same goes with overlapping trajectories — instances when a person moves back and forth along a similar path. Lasota says that while a person’s current position may line up with a dot on a reference trajectory, existing algorithms can’t differentiate between whether that position is part of a trajectory heading away, or coming back along the same path.

“You may have points close together in terms of distance, but in terms of time, a person’s position may actually be far from a reference point,” Lasota says.

It’s all in the timing

As a solution, Lasota and Shah devised a “partial trajectory” algorithm that aligns segments of a person’s trajectory in real time with a library of previously collected reference trajectories. Importantly, the new algorithm aligns trajectories in both distance and timing, and in so doing, is able to accurately anticipate stops and overlaps in a person’s path.

“Say you’ve executed this much of a motion,” Lasota explains. “Old techniques will say, ‘this is the closest point on this representative trajectory for that motion.’ But since you only completed this much of it in a short amount of time, the timing part of the algorithm will say, ‘based on the timing, it’s unlikely that you’re already on your way back, because you just started your motion.’”
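To make the distance-versus-timing distinction concrete, here is a minimal Python sketch with invented data and a deliberately simplified cost: straight-line distance plus a weighted time gap. It illustrates the general idea rather than the authors’ published algorithm.

```python
import numpy as np

def align_partial(partial, reference, time_weight=1.0):
    """Match the latest observed point of a partial trajectory to a
    reference trajectory using spatial distance plus a timing penalty.
    Rows of `partial` and `reference` are (t, x, y). Setting
    time_weight=0 recovers a distance-only matcher."""
    t_now, x, y = partial[-1]
    spatial = np.hypot(reference[:, 1] - x, reference[:, 2] - y)
    temporal = np.abs(reference[:, 0] - t_now)
    return int(np.argmin(spatial + time_weight * temporal))

# Invented reference path: walk out along x for 5 s, then double back.
ts = np.linspace(0.0, 10.0, 21)
xs = np.concatenate([np.linspace(0.0, 5.0, 11), np.linspace(4.5, 0.0, 10)])
reference = np.column_stack([ts, xs, np.zeros_like(ts)])

# A person 8.8 s into the motion, at x = 1.0, is on the way back.
# Distance alone matches the outbound pass just as well as the
# return pass; adding the timing term resolves the ambiguity.
partial = np.array([[8.8, 1.0, 0.0]])
print(align_partial(partial, reference))                   # index on the return leg
print(align_partial(partial, reference, time_weight=0.0))  # outbound index (wrong)
```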

The team tested the algorithm on two human motion datasets: one in which a person intermittently crossed a robot’s path in a factory setting (these data were obtained from the team’s experiments with BMW), and another in which the group previously recorded hand movements of participants reaching across a table to install a bolt that a robot would then secure by brushing sealant on the bolt.

For both datasets, the team’s algorithm was able to make better estimates of a person’s progress through a trajectory, compared with two commonly used partial trajectory alignment algorithms. Furthermore, the team found that when they integrated the alignment algorithm with their motion predictors, the robot could more accurately anticipate the timing of a person’s motion. In the factory floor scenario, for example, they found the robot was less prone to freezing in place, and instead smoothly resumed its task shortly after a person crossed its path.

While the algorithm was evaluated in the context of motion prediction, it can also be used as a preprocessing step for other techniques in the field of human-robot interaction, such as action recognition and gesture detection. Shah says the algorithm will be a key tool in enabling robots to recognize and respond to patterns of human movements and behaviors. Ultimately, this can help humans and robots work together in structured environments, such as factory settings and even, in some cases, the home.

“This technique could apply to any environment where humans exhibit typical patterns of behavior,” Shah says. “The key is that the [robotic] system can observe patterns that occur over and over, so that it can learn something about human behavior. This is all in the vein of work on robots better understanding aspects of human motion, to be able to collaborate with us better.”

This research was funded, in part, by a NASA Space Technology Research Fellowship and the National Science Foundation.
