Humans have long been masters of dexterity, a skill that can largely be credited to the help of our eyes. Robots, meanwhile, are still catching up.
Certainly there’s been some progress: For decades, robots in controlled environments like assembly lines have been able to pick up the same object over and over again. More recently, breakthroughs in computer vision have enabled robots to make basic distinctions between objects. Even then, though, the systems don’t truly understand objects’ shapes, so there’s little the robots can do after a quick pick-up.
In a new paper, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) say that they’ve made a key development in this area of work: a system that lets robots inspect random objects, and visually understand them enough to accomplish specific tasks without ever having seen them before.
The system, called Dense Object Nets (DON), looks at objects as collections of points that serve as sort of visual roadmaps. This approach lets robots better understand and manipulate items, and, most importantly, allows them to even pick up a specific object among a clutter of similar objects — a valuable skill for the kinds of machines that companies like Amazon and Walmart use in their warehouses.
For example, someone might use DON to get a robot to grab onto a specific spot on an object, say, the tongue of a shoe. From that, it can look at a shoe it has never seen before, and successfully grab its tongue.
“Many approaches to manipulation can’t identify specific parts of an object across the many orientations that object may encounter,” says PhD student Lucas Manuelli, who wrote a new paper about the system with lead author and fellow PhD student Pete Florence, alongside MIT Professor Russ Tedrake. “For example, existing algorithms would be unable to grasp a mug by its handle, especially if the mug could be in multiple orientations, like upright, or on its side.”
The team views potential applications not just in manufacturing settings, but also in homes. Imagine giving the system an image of a tidy house, and letting it clean while you’re at work, or using an image of dishes so that the system puts your plates away while you’re on vacation.
What’s also noteworthy is that none of the data was actually labeled by humans. Instead, the system is what the team calls “self-supervised,” not requiring any human annotations.
Two common approaches to robot grasping involve either task-specific learning, or creating a general grasping algorithm. These techniques both have obstacles: Task-specific methods are difficult to generalize to other tasks, and general grasping doesn’t get specific enough to deal with the nuances of particular tasks, like putting objects in specific spots.
The DON system, however, essentially creates a series of coordinates on a given object, which serve as a kind of visual roadmap, to give the robot a better understanding of what it needs to grasp, and where.
The team trained the system to look at objects as a series of points that make up a larger coordinate system. It can then map different points together to visualize an object’s 3-D shape, similar to how panoramic photos are stitched together from multiple photos. After training, if a person specifies a point on an object, the robot can take a photo of that object, and identify and match points to be able to then pick up the object at that specified point.
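The matching step described above can be illustrated with a minimal sketch. This is not the authors’ implementation — it assumes a network has already produced a dense per-pixel descriptor image, and simply finds the pixel in a new image whose descriptor lies closest (in L2 distance) to the descriptor of a user-specified reference point:

```python
import numpy as np

def best_match(ref_descriptor, descriptor_image):
    """Find the pixel whose descriptor is closest (L2) to a reference.

    descriptor_image: (H, W, D) array of per-pixel descriptors
                      (hypothetically produced by a trained network).
    ref_descriptor:   (D,) descriptor of the user-specified point.
    Returns ((row, col), distance) for the best-matching pixel.
    """
    diff = descriptor_image - ref_descriptor       # broadcasts over H, W
    dist = np.linalg.norm(diff, axis=-1)           # (H, W) distance map
    row, col = np.unravel_index(np.argmin(dist), dist.shape)
    return (row, col), dist[row, col]

# Toy example: a 4x4 "image" with 3-D descriptors, where one pixel's
# descriptor exactly equals the reference, so it should be recovered.
rng = np.random.default_rng(0)
desc_img = rng.normal(size=(4, 4, 3))
target = desc_img[2, 1].copy()
loc, d = best_match(target, desc_img)
```

In the real system the descriptor at the shoe-tongue pixel of one image would be matched this way against the descriptor image of a never-before-seen shoe, yielding the grasp point.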
This is different from systems like UC-Berkeley’s DexNet, which can grasp many different items, but can’t satisfy a specific request. Imagine a child at 18 months old, who doesn’t understand which toy you want it to play with but can still grab lots of items, versus a four-year-old who can respond to “go grab your truck by the red end of it.”
In one set of tests done on a soft caterpillar toy, a Kuka robotic arm powered by DON could grasp the toy’s right ear from a range of different configurations. This showed that, among other things, the system has the ability to distinguish left from right on symmetrical objects.
When testing on a bin of different baseball hats, DON could pick out a specific target hat despite all of the hats having very similar designs — and having never seen pictures of the hats in training data before.
“In factories robots often need complex part feeders to work reliably,” says Florence. “But a system like this that can understand objects’ orientations could just take a picture and be able to grasp and adjust the object accordingly.”
In the future, the team hopes to improve the system to a place where it can perform specific tasks with a deeper understanding of the corresponding objects, like learning how to grasp an object and move it with the ultimate goal of, say, cleaning a desk.
The team will present their paper on the system next month at the Conference on Robot Learning in Zürich, Switzerland.
Will Dickson ’14 has parked General Motors’ first self-driving vehicle, the Cruise AV, on campus and invited MIT students to think flexibly about its design opportunities. “You are future engineers and thought leaders in the area of new machines,” he says. “How do you design future vehicles like this one better for a safe and autonomous experience?”
Fostering innovative thinking is at the heart of Dickson’s career as an innovation champion for General Motors on campus. Dickson bridges the gap between GM’s advanced engineering teams and students on campus by creating opportunities to work on technical problems together. He recruits students, builds partnerships, and scours the MIT startup community and beyond for collaborations both unconventional and fitting.
“I pride myself on attempting nontraditional things and pulling in tons of stakeholders. I am the person trying something first and clearing out the hurdles and blazing new trails,” says Dickson, who studied materials science and engineering at MIT and during graduate work at the University of California at Berkeley before joining GM and thriving in a series of positions, including currently as innovation champion within iHub, General Motors’ innovation incubator and consultancy.
Among other things, Dickson is transcending traditional corporate engagement in higher education, with its heavy reliance on job fairs and research sponsorship. He’s instead focusing his energy, described by colleagues as engaging and dogged, on working with students, faculty, and administrators within some of MIT’s most innovative programs.
Real projects and real machines
MIT’s new project-centric cross-departmental program, the New Engineering Education Transformation (NEET), launched as a pilot last year and is redefining engineering education, says Dickson. Its hands-on focus on applying fundamental and systems engineering to real-world projects inspired General Motors to sponsor NEET’s project thread on “Autonomous Machines” this fall. (There are now over 120 sophomores and juniors in this and the other three NEET threads: Clean Energy Systems; Advanced Materials Machines; and Living Machines.)
“I’m excited for you to be exposed to real projects that our engineers are kicking around and looking for a new perspective on,” says Dickson to NEET students in a pizza-filled classroom near the autonomous test vehicle parked outside. “We’re going to bring engineering leaders to campus to see you in action. To see you working on projects. To see you doing stuff in teams. To interact with you.”
One of Dickson’s talents involves helping other young people with engineering backgrounds bridge the gap between technical expertise and creative and meaningful application in industry. “It’s not just about being the smartest person in the room,” he tells the rapt students.
“It’s about who can let the other people talk when they need to. Who can lead? Who can be a great project manager? Who can communicate their technical findings to people without the same background as you? Who can identify the right problem to be solving?”
At 6 feet 8 inches, Dickson towers over most of the young people in the room. He speaks to them with friendly confidence and a level of industry knowledge that sets him apart despite the slim difference in age between him and them.
“Will engages with students in such a personable manner,” whispers NEET’s executive director Amitava "Babi" Mitra as he watches Dickson from the back of the classroom. “As an MIT alum, he’s passionate about NEET. He wants to do right by MIT and by GM,” says Mitra, describing Dickson as instrumental in securing the new sponsorship from GM and in working closely with NEET to help create project and other opportunities for NEET students. He adds with a smile: “Will may look a little intimidating at that height but he’s extremely approachable.”
Making spots at GM
MIT student Sebastian Uribe knows firsthand the impact of Dickson’s mentorship. Uribe was among four winners of a hackathon organized by Dickson and sponsored by GM during Independent Activities Period (IAP) this past January. He and his teammates earned a summer internship that involved automating complex test protocols and working with engineers on autonomous vehicles, sensors, Super Cruise, and other innovations.
Now Uribe is enrolled in the “Autonomous Machines” thread of the NEET program. Today Dickson shares the second-year student’s success story with others in the room as a kind of lesson in nontraditional learning and networking. “Sebastian here had an internship with us this past summer,” says Dickson with a smile toward Uribe in the second row.
“At a career fair, we wouldn’t have looked at Sebastian. There are just too many people — but Sebastian and his team killed it during our inaugural BlacktopBuild during IAP.”
Dickson tells the room that during the hackathon, he shared a real engineering problem with students in a tent built in a parking lot at MIT in the middle of a New England winter, which resulted in powerful solutions that were well-received by engineering leaders within GM.
“Leaders at GM wanted this team to come back for an internship,” says Dickson. “GM created spots out of nothing for them — outside of the ordinary process — which I think is wild.” Then Dickson pauses for impact and adds, “After the fact, I was like, ‘Oh, by the way, they’re freshmen.’” The classroom fills with laughter and a sense of promise.
Popular mentor and role model
“Will knows his way around MIT and has positively influenced my professional career,” says Uribe in a later interview. He says networking as a first-year student was “very much a nervous, sweaty-hands environment” but Dickson’s advice made him “far more confident in a professional space.”
Today Uribe describes Dickson as a friend. “Will encourages me to build a network in which I can actively reach out and share thoughts and ideas just as he does,” says Uribe. “It wasn’t difficult to get along with Will. He is very social and outgoing and inspiring.”
Dickson’s high energy is the first quality that comes to mind for Jinane Abounadi, executive director of MIT Sandbox, which opens pathways for student innovators by connecting them with educational experiences, mentoring, and funding.
Dickson played an important role in signing up GM as a sponsor of Sandbox and has been a key connection for all the teams looking to develop technologies with relevance to the automotive industry, she says. He has held workshops giving students valuable perspective on how a big company like GM works with suppliers and on the innovation process he champions at GM. He has also mentored dozens of Sandbox teams, bringing them a deep understanding of real-world problems.
“It is quite impressive that Will is able to play the leadership role within GM at such a young professional age,” says Abounadi. “He’s able to create meaningful connections between GM and MIT students, developing a nice model for direct industry engagement with students and faculty.” Adds Mitra, “Will is truly the consummate engineer and people person.”
The ad hoc task force on open access to MIT’s research has released “Open Access at MIT and Beyond: A White Paper of the MIT Ad Hoc Task Force on Open Access to MIT's Research,” which examines efforts to make research and scholarship openly and freely available. The white paper provides a backdrop to the ongoing work of the task force: identifying new, updated, or revised open access policies and practices that might advance the Institute’s mission to share its knowledge with the world.
Co-chaired by Class of 1922 Professor of Electrical Engineering and Computer Science Hal Abelson and Director of Libraries Chris Bourg, the task force was convened in July 2017 by Provost Martin Schmidt, in consultation with the vice president for research, the chair of the faculty, and the director of the libraries. The group was charged with exploring actions MIT should undertake to “further the Institute’s mission of disseminating the fruits of its research and scholarship as widely as possible.”
“The MIT community has long been at the forefront of sharing knowledge with the world, whether through OpenCourseWare or our campus-wide faculty open access policy,” says Chris Bourg. “The task force is looking to see how we can expand that commitment even further, considering how to share not only scholarly articles and books, but also data, educational materials, code, and more.”
Convening the task force was one of the 10 recommendations presented in the 2016 preliminary report of the MIT Task Force on the Future of Libraries. In addition, the task force has been charged to take up a question raised by the 2013 Report to the President on MIT and the Prosecution of Aaron Swartz, which is whether MIT should strengthen its activities in support of open access to the research and educational contributions of the MIT community. The task force is composed of a diverse and multi-disciplinary group of faculty, staff, postdocs, and graduate and undergraduate students.
Throughout the 2017-18 academic year, task force members consulted widely with domain experts across campus and beyond to develop an understanding of current local, national, and global practices, policies, and possibilities. The first part of the white paper provides an overview of current open access policies and movements in the United States and Europe, examining the ways that different funding models, political structures, and priorities shape how open access is achieved. The second part explores MIT researchers’ approaches to making their publications, data, code, and educational materials openly available.
The task force is in the process of developing a set of draft recommendations across a wide range of scholarly outputs, including publications, data, computer code, and educational materials, and will be gathering community feedback on those recommendations throughout the coming academic year.
The MIT community is invited to offer ideas of new, updated, or revised policies or practices that might further the sharing of the Institute’s research and scholarship as widely as possible. Ideas can be submitted via the task force idea bank, at upcoming community forums (details forthcoming), or via email to the task force.
Each year MIT professors invite thousands of undergraduates into their labs to work on cutting edge research through MIT’s Undergraduate Research Opportunities Program (UROP). Starting this fall, the MIT Quest for Intelligence will add a suite of new projects to the mix, allowing students to explore the latest ideas and applications in human and machine learning.
Through the generosity of several sponsors — including former Alphabet executive chairman Eric Schmidt and his wife, Wendy; the MIT-IBM Watson AI Lab; and the MIT-SenseTime Alliance on Artificial Intelligence — The Quest will fund up to 100 students to participate in Quest-themed UROP projects each semester.
“We’re going to advance the frontiers of brain science and artificial intelligence by harnessing the brain power of our students,” says The Quest’s director, Antonio Torralba, a professor of electrical engineering and computer science who also heads the MIT-IBM Watson AI Lab. “We thank our partners for the funding that has made these new student research positions possible.”
MIT President L. Rafael Reif launched The Quest in February, framing its mission in a pair of questions: “How does human intelligence work, in engineering terms? And how can we use that deep grasp of human intelligence to build wiser and more useful machines to benefit society?”
To answer those questions, The Quest brings together more than 250 MIT researchers in artificial intelligence, cognitive science, neuroscience, social sciences, and ethics. Organized to ensure that breakthroughs in the lab are matched by the creation of useful tools for everyday people, The Quest’s advances might include new insights into how humans and machines learn, or new technologies for diagnosing and treating disease, discovering new drugs and materials, and designing safer automated systems.
The Quest has been met with enthusiasm by the MIT community and prominent technology leaders, many of whom spoke at a kick-off event in Kresge Auditorium in March. “I think MIT is uniquely positioned to do this,” said Eric Schmidt, a founding advisor to The Quest and a current MIT Innovation Fellow. “I think you can turn Cambridge into a genuine AI center.”
Now in its 49th year, UROP allows undergraduates to work closely with faculty, graduate students, and other classmates on original research. More than 91 percent of graduating seniors participate in at least one UROP project in their time at MIT, with about 2,600 students participating each year. Students gain experience in their major or exposure to a new field, and practice writing proposals and communicating their results.
Each spring, accepted MIT students trek to Cambridge, Massachusetts, for the Institute’s annual Campus Preview Weekend. They visit labs, explore campus, and speak with faculty to help decide whether or not MIT would be a good fit. For Alex Hattori, a rising senior studying mechanical engineering, a pivotal moment came during his Campus Preview Weekend as he was browsing The Coop, MIT’s bookstore.
“I found MIT yo-yos at The Coop. No other school I had visited had yo-yos, so I was really excited,” Hattori says.
Yo-yos have played a central role in Hattori’s life. He has won first place in the dual yo-yo division of the National Yo-Yo Contest for the past six years straight. The first time he went on an airplane was to attend the World Yo-Yo Contest, an event he has participated in every year since 2010.
This passion for yo-yoing started when Hattori entered middle school in his hometown of Torrance, California. A group of his friends were yo-yoing at school one day. Hattori picked up a yo-yo and has been playing ever since.
“Within a month of starting to yo-yo, I competed in my first contest and managed to win,” he recalls. While he had a knack for yo-yoing, he owes his success to lots of practice. He also benefitted from being in the right place at the right time. “Back when I started yo-yoing, southern California was a hub for yo-yoing,” he says.
On Saturdays throughout middle school and high school, Hattori and his friends would visit the kite store at a local pier for free yo-yo lessons. As he honed his skills, he gravitated toward the 3A yo-yo style — a challenging type of yo-yoing that involves two long spinning yo-yos. “I like having a second yo-yo because it effectively gives me an n-squared amount of tricks I can do,” Hattori explains.
When it was time for him to choose a college, the yo-yo at The Coop wasn’t the only thing that tipped the scales in MIT’s favor. Two classes in particular offered in the Department of Mechanical Engineering would allow Hattori to pursue his two biggest passions: robotics and yo-yoing.
In course 2.007 (Design and Manufacturing I), Hattori designed a robot from scratch and competed in a final robot competition. Course 2.008 (Design and Manufacturing II), meanwhile, culminates in a group project where the goal is to produce 50 identical yo-yos using different manufacturing processes. “I spent a lot of my time building robots and yo-yoing, so the combination of 2.007 and 2.008 sealed the deal for me,” he adds.
Hattori took 2.008 in fall 2017, right around the same time he was working on a new design to use at the 2018 World Yo-Yo Contest this summer. Hattori and his team made an aluminum mold of his design and injected hot plastic, which cooled into the shape of the mold — a process many yo-yo companies use to produce plastic yo-yos. He used the same design he developed in 2.008 when competing at the 2018 World Yo-Yo Contest in Shanghai last month, where he took second place in the 3A division.
While studying mechanical engineering has afforded Hattori the opportunity to explore his passions inside the classroom, the various makerspaces on MIT’s campus have allowed him to build robots and yo-yos on his own time.
“A lot of students come to MIT because they want to build things, but they’re usually pretty shy about it at first. Alex wasn’t shy,” says Hattori’s freshman advisor Marty Culpepper, professor of mechanical engineering and MIT’s Maker Czar.
Culpepper helped Hattori find the makerspaces and workshops where he could build robots and make yo-yos. “It’s one thing if you’re forced to build in class, but having spaces where students can hang out with their friends and build something helps them make connections and put the things they’ve learned into practice,” Culpepper adds.
Long before Hattori enrolled in class 2.008, he was able to make yo-yos in these makerspaces. “As soon as I got to MIT as a freshman I started making yo-yos in my free time,” Hattori says.
As he enters his final year of undergraduate study at MIT, Hattori has a full plate. This spring he participated in the Discovery Channel’s BattleBots as part of Team SawBlaze. Last month, he finished in the top 16 at the Fighting My Bots World Cup, which took place in Shanghai the same week as the World Yo-Yo Contest. He is president of both the MIT Combat Robotics Club and the MIT Electronic Research Society (MITERS) — a student-run makerspace. This fall, he will also begin the process of applying to graduate school, where he hopes to study robotics.
Wherever the future brings him, Hattori plans to have his yo-yo in tow. “I don’t ever plan on stopping yo-yoing,” he says. “It’s so easy to bring it around with me anywhere I go.”
In 1938, an ambitious young Texas congressman named Lyndon Johnson voted for a bill called the Fair Labor Standards Act, which established the minimum wage. Most of Johnson’s Democratic Party colleagues joined him.
In 1947, however, Johnson, now a seasoned representative, voted for another bill, the Taft-Hartley Act, which limited the power of labor unions. Passing through a Republican-controlled congress with the help of Southern Democrats, the Taft-Hartley Act helped put the brakes on years of progressive momentum established by the Democratic Party.
“It was an incredibly consequential shift that basically set the limits of the New Deal,” says MIT political scientist Devin Caughey. “It was a critical turning point in American political development.”
It’s fair to say Johnson — later the 36th president of the U.S. — was inconsistent with regard to the interests of labor, as well as his own party. But why? For what reason would a popular Democratic Party politician, in a region controlled by the Democrats, have to zigzag on policy matters? This was the famous “solid South” of the mid-20th century, after all.
To Caughey, there is a clear explanation for why Johnson, and many of his Southern colleagues, reversed course: public pressure. In 1947, Johnson was on the eve of his first U.S. Senate campaign in Texas (which he barely won), and he moved back toward the right politically to help his chances. The strategy seemed necessary because Southern politics had shifted over the previous decade. In the 1930s, the region supported economically progressive legislation, but by the 1940s, much of the South had soured on the New Deal.
“The consequences of this transformation were momentous at the time and continue to reverberate today,” Caughey writes in his new book, “The Unsolid South: Mass Politics and National Representation in a One-Party Enclave,” published this month by Princeton University Press.
As the title suggests, Caughey believes the supposedly “solid South” was not a unitary bloc: Battles within the Democratic Party in the region served as a proxy for national battles between the two major parties.
“Even though there was no partisan competition in the South, there was intraparty competition,” Caughey says, noting that “once members of Congress were elected, they would divide in ways that aligned either with the Democrats or Republicans nationally.”
But while other interpretations of the Democratic Party in the South at the time depict it as being controlled by elites who ignored the masses, Caughey contends that Southern politicians backed away from their party’s program because voters would not have kept electing them otherwise.
“What really hasn’t been looked at is the connection between mass politics and public opinion, on the one hand, and congressional behavior, on the other,” Caughey says.
Caughey is well-positioned to offer this kind of analysis. Along with his colleague Christopher Warshaw (formerly of MIT, now of George Washington University), and with the aid of student researchers from MIT’s Undergraduate Research Opportunities Program (UROP), Caughey has helped build a massive and unique database of policy decisions and public opinion, spanning the years 1936–2014, which he draws on in his analysis.
Those data have led him to conclude that while one-party domination meant Southern politics were not especially responsive to public opinion at the state level, the two-party competition nationally, between Democrats and Republicans, meant that at the federal level, Southern members of Congress had to heed public opinion. Without doing so, they would lose in Democratic Party primaries to politicians who were more aligned with their constituencies.
“A lot of Democratic Party primary contests in the South were often on the kinds of issues that divided Democrats and Republicans nationally, about the role of government, how high taxes should be, and other classic New Deal issues,” Caughey says.
Of course, as Caughey details in the book, any discussion about public opinion in the South in this era comes with a huge qualification: Segregation prevented almost all African-Americans from voting, so the public opinion that swayed politicians was strictly white public opinion.
“A large chunk of the population was disenfranchised,” Caughey says. “The distinctive regime in the South for most of the first part of the 20th century featured both disenfranchisement and a lack of party competition.”
The issue of racial relations, Caughey notes, also strongly informs the South’s reversal regarding the New Deal. In the 1930s, much of the South supported the New Deal in large part because it brought jobs and infrastructure to what was the country’s most economically lagging region.
White Southerners thus benefitted greatly from the early stages of New Deal legislation. But the emerging, proposed New Deal legislation of the 1940s did not so obviously favor white Southerners specifically. Indeed, an extension of economically progressive legislation may well have dealt a major setback to segregation.
“Part of it was the growing fear that the New Deal state posed a potential and maybe actual threat to Jim Crow in the South,” Caughey says. “So racial fears came to the fore.”
At the same time, Southerners were already more resistant to unions than people in other regions; the extent to which the New Deal might help organized labor also fed Southern antipathy toward economically liberal politicians. As Caughey notes in the book, by 1944, 81 percent of white Southerners stated they would oppose a candidate supported by the Congress of Industrial Organizations (CIO), as opposed to 61 percent in the rest of the country.
Scholars say Caughey’s book is a valuable addition to the field. David Mayhew, a professor of political science at Yale University, has called it “fresh, convincing, and written with the utmost skill, intelligence, and knowledge of the historical territory.”
As Caughey discusses in the book, the South’s turn against the New Deal is just one of two major reversals the region witnessed in the 20th century. The other was its even more famous flip away from the Democrats after the Civil Rights Act of 1964 — signed by, yes, President Lyndon Johnson — to the point where the region is now heavily controlled by the Republican Party.
The current dynamics, Caughey writes, still “exhibit an extraordinary degree of ideological and partisan polarization by race.” For his part, Caughey adds, he would like the book to open up avenues for further research about conditions of one-party domination in politics, something he affirms in the book’s conclusion: “My hope is the questions raised in this book will spur other scholars to pursue a broader research agenda on representation and democracy in one-party settings around the world.”
The MIT Educational Justice Institute will lead a consortium to support expanding access to postsecondary education for people currently and formerly in prison statewide, fueled by a grant from the Vera Institute of Justice (Vera) and the Andrew W. Mellon Foundation. Other member schools include Boston University, Emerson College, Mt. Wachusett Community College, and Tufts University.
The effort will draw upon the expertise of MIT’s Lee Perlman, a lecturer in philosophy who has previously taught classes to cohorts of MIT students and incarcerated students, and Carole Cafferty, a program administrator with over 25 years of experience as a corrections professional.
The co-directors of the new institute are also members of the MIT Experimental Study Group (ESG), a first-year learning community that has long supported a related effort to expose MIT students to the challenges and opportunities of bringing learning opportunities to local correctional facilities.
“This is a marvelous vote of confidence for us to build upon our past work,” says Perlman. “I’m excited about the transformational opportunities we can bring to the lives of those who are incarcerated by taking advantage of MIT’s hands-on pedagogy, commitment to social justice, and novel teaching technologies.”
The programming will not only feature a strong foundation in the sciences and humanities, but also career and technical training that will begin during incarceration and continue into the community. The consortium will also be responsible for creating academic and career advising specific to the needs of justice-involved students. Establishing the right learning context and wraparound support is paramount.
“Even the best education, integrating the robust resources of all of our partners, on its own, is not enough,” adds Cafferty. “Preparing returning citizens for the workforce builds resilience and promotes success. We also need to address practical challenges such as seamlessly transferring credits, conferring degrees, and assisting with comprehensive discharge plans. The ultimate goal is ambitious, but achievable: changing lives, increasing economic opportunity, and creating safer communities in Massachusetts.”
While a 2013 study by the RAND Corporation found that people in prison who receive some form of postsecondary education are 43 percent less likely to reoffend than people who do not, a federal ban on Pell Grants for people in prisons and other state-based provisions, make accessing postsecondary education extraordinarily difficult.
The Education Justice Institute will be tasked with finding solutions or advocating for legislative changes to remove barriers for incarcerated individuals who are motivated to learn and ready to complete their degrees. Finding ways to deliver traditional college classes (in-person or online) and to teach skills-building are just two of the approaches Perlman and Cafferty are exploring.
Creating an “education pipeline” will build the strong connections needed to foster a brighter future for former prisoners. “All too often, barriers to reentry start inside prisons only to follow people after they leave,” said Fred Patrick, director of the Center on Sentencing and Corrections at Vera. “This consortium will help establish Massachusetts as a leader in postsecondary education for people who are or were formerly incarcerated, which has a proven track record of transforming lives and communities.”
The MIT co-directors are also excited about the opportunities for MIT students to become engaged and better informed about incarceration in America. One student admissions blogger who took Perlman’s prison-based classes found the experience life changing and the determination of the prisoners inspiring, writing: “If there was ever any reflection of the adage that human spirits can stay strong in the face of darkness, that people can make personally uplifting opportunities out of absolutely anywhere and anything, it was reflected in those inspiring women.”
“At MIT we take pride in our commitment to build a better world. That often means new technologies, companies, or products. I want to make sure we use the broadest possible definition of that mandate and help those often living just out of sight in our own backyards,” said Perlman.
Also involved in the effort will be Roxbury Community College, Harvard University, and Wellesley College. Additional consortium members will include the Massachusetts Department of Correction, the Petey Greene Program, the Massachusetts Parole Board, the Massachusetts Probation Service, and other organizations in the state focused on serving currently and formerly incarcerated individuals.
J-PAL North America announced today that it will partner with three U.S. state and local governments to evaluate promising solutions related to education, housing, and economic security.
The California Franchise Tax Board, the Minneapolis Public Housing Authority, and the New Mexico Legislative Finance Committee were selected from among more than two dozen applicants in the latest round of the J-PAL State and Local Innovation Initiative. These governments will partner with J-PAL and its network of leading academic researchers to develop randomized evaluations, also known as randomized controlled trials or RCTs, that have the potential to yield rigorous evidence about which programs and policies are most effective.
“We are honored to partner with these governments to both build on existing evidence and generate new insights about what works,” says Mary Ann Bates, executive director of J-PAL North America and initiative co-chair. “The evidence generated through this initiative has the potential to inform important policy decisions, in these jurisdictions and in others across the country.”
The California Franchise Tax Board will partner with J-PAL North America and the California Policy Lab to evaluate the impact of strategies to encourage households to file for the California Earned Income Tax Credit (CalEITC). Like the Federal EITC, one of the nation’s largest anti-poverty programs, the CalEITC is a refundable income tax credit that provides cash back to low-income working families. Existing research has shown that reminders and simplified materials can increase participation in the EITC and other benefit programs, and the Franchise Tax Board will test strategies to increase EITC take-up among eligible Californians.
“We are always looking for new ways to connect California families with the EITC, and we are delighted to have new partners who are committed to helping us build engagement with this powerful anti-poverty program,” says California Franchise Tax Board Executive Officer Selvi Stanislaus.
The Minneapolis Public Housing Authority and the Metropolitan Council Housing and Redevelopment Authority will partner with J-PAL North America and researchers Nathaniel Hendren and Christopher Palmer to pilot interventions aimed at helping low-income families move to opportunity neighborhoods. Previous research has shown that children whose families move from high poverty areas to lower poverty areas of opportunity have improved educational and earnings outcomes. An important question for policymakers is how best to assist families in making those types of moves successfully. By partnering with researchers to develop a randomized evaluation, the Minneapolis Public Housing Authority and the Metropolitan Council Housing and Redevelopment Authority hope to generate rigorous evidence that can be used by policymakers across the country to increase upward economic mobility, especially among children.
“The neighborhood where a low-income child lives can literally make a lifetime of difference. We have data that shows this,” Minneapolis Public Housing Authority Executive Director Greg Russ says. “What we need now is the data to help choose among the different tools in our mobility toolkit. Given limited funds, how can housing and social service agencies best help families make that opportunity move and lock in the long-term benefits?”
Terri Smith, director of the Metropolitan Council Housing and Redevelopment Authority, says her agency is “thrilled to be part of this exciting research, to assist families as they strive for better outcomes for their children and their futures.”
“The needs of low-income families in the Twin Cities metro region are far beyond what federal resources can support; through funding like this grant, hopefully we can identify the most effective ways to assist families,” Smith says. “All families should have housing options, and the opportunity to fully share in our region's prosperity.”
The New Mexico Legislative Finance Committee will partner with J-PAL North America to explore an evaluation of the state’s early college high schools, which aim to increase college preparation and degree attainment, especially among low-income students and those who would be the first among their families to attend college. The New Mexico Legislative Finance Committee aims to test how early college high schools impact longer-term educational and workforce outcomes and to use this evidence to inform the state’s budget and policy decisions.
“New Mexico has demonstrated its commitment to higher education by investing more in higher education per student than most states, but only about a third of our young adults have associate degrees or higher,” says New Mexico State Representative Patricia Lundstrom, chairwoman of the Legislative Finance Committee. “Early college high schools have been promoted as a way to help more low-income students and other underrepresented groups succeed in college. We need the data to ensure it’s a good investment for New Mexico.”
The California Franchise Tax Board, the Minneapolis Public Housing Authority, and the New Mexico Legislative Finance Committee join eight state and local governments selected through previous rounds of the J-PAL State and Local Innovation Initiative: Baltimore; King County, Washington; Pennsylvania; Philadelphia; Puerto Rico; Rochester, New York; Santa Clara, California; and South Carolina. These state and local governments are part of a growing movement to use evidence to improve the effectiveness of policies and programs and ultimately the lives of people experiencing poverty.
NASA has recognized the science team behind the discovery of a distant planetary system with a Group Achievement Award. The award, given by NASA's Jet Propulsion Laboratory (JPL), cites the team for "the outstanding scientific achievement in uncovering the nature of the TRAPPIST-1 system, revealing seven potentially habitable planets around a nearby cool red star." TRAPPIST is the Transiting Planets and Planetesimals Small Telescope, a key tool used in the discovery and the namesake of the system's host star.
Co-investigator Julien de Wit, an assistant professor of planetary sciences in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS), accepted the award on Aug. 28 on behalf of the TRAPPIST-1 discovery team at an award ceremony held at JPL.
In February 2017, the researchers, including de Wit and colleagues from the University of Liège in Belgium, announced their discovery, which marked a new record in exoplanet research. The TRAPPIST-1 system is the largest known of its kind outside our solar system, with a total of seven rocky, Earth-sized planets orbiting in the habitable zone — the range around their host star where temperatures could potentially sustain liquid water.
The Group Achievement Award is one of the prestigious NASA Honor Awards, which are presented to a number of carefully selected individuals and groups, both government and non-government, who have distinguished themselves by making outstanding contributions to the space agency’s mission.
In major legislation passed at the end of August, California committed to creating a 100 percent carbon-free electricity grid — once again leading other nations, states, and cities in setting aggressive policies for slashing greenhouse gas emissions. Now, a study by MIT researchers provides guidelines for cost-effective and reliable ways to build such a zero-carbon electricity system.
The best way to tackle emissions from electricity, the study finds, is to use the most inclusive mix of low-carbon electricity sources.
Costs have declined rapidly for wind power, solar power, and energy storage batteries in recent years, leading some researchers, politicians, and advocates to suggest that these sources alone can power a carbon-free grid. But the new study finds that across a wide range of scenarios and locations, pairing these sources with steady carbon-free resources that can be counted on to meet demand in all seasons and over long periods — such as nuclear, geothermal, bioenergy, and natural gas with carbon capture — is a less costly and lower-risk route to a carbon-free grid.
The new findings are described in a paper published today in the journal Joule, by MIT doctoral student Nestor Sepulveda, Jesse Jenkins PhD ’18, Fernando de Sisternes PhD ’14, and professor of nuclear science and engineering and Associate Provost Richard Lester.
The need for cost effectiveness
“In this paper, we’re looking for robust strategies to get us to a zero-carbon electricity supply, which is the linchpin in overall efforts to mitigate climate change risk across the economy,” Jenkins says. To achieve that, “we need not only to get to zero emissions in the electricity sector, but we also have to do so at a low enough cost that electricity is an attractive substitute for oil, natural gas, and coal in the transportation, heat, and industrial sectors, where decarbonization is typically even more challenging than in electricity.”
Sepulveda also emphasizes the importance of cost-effective paths to carbon-free electricity, adding that in today’s world, “we have so many problems, and climate change is a very complex and important one, but not the only one. So every extra dollar we spend addressing climate change is also another dollar we can’t use to tackle other pressing societal problems, such as eliminating poverty or disease.” Thus, it’s important for research not only to identify technically achievable options to decarbonize electricity, but also to find ways to achieve carbon reductions at the most reasonable possible cost.
To evaluate the costs of different strategies for deep decarbonization of electricity generation, the team looked at nearly 1,000 different scenarios involving different assumptions about the availability and cost of low-carbon technologies, geographical variations in the availability of renewable resources, and different policies on their use.
Regarding the policies, the team compared two different approaches. The “restrictive” approach permitted only the use of solar and wind generation plus battery storage, augmented by measures to reduce and shift the timing of demand for electricity, as well as long-distance transmission lines to help smooth out local and regional variations. The “inclusive” approach used all of those technologies but also permitted the option of using continual carbon-free sources, such as nuclear power, bioenergy, and natural gas with a system for capturing and storing carbon emissions. Under every case the team studied, the broader mix of sources was found to be more affordable.
The cost savings of the more inclusive approach relative to the more restricted case were substantial. Including continual, or “firm,” low-carbon resources in a zero-carbon resource mix lowered costs anywhere from 10 percent to as much as 62 percent, across the many scenarios analyzed. That’s important to know, the authors stress, because in many cases existing and proposed regulations and economic incentives favor, or even mandate, a more restricted range of energy resources.
“The results of this research challenge what has become conventional wisdom on both sides of the climate change debate,” Lester says. “Contrary to fears that effective climate mitigation efforts will be cripplingly expensive, our work shows that even deep decarbonization of the electric power sector is achievable at relatively modest additional cost. But contrary to beliefs that carbon-free electricity can be generated easily and cheaply with wind, solar energy, and storage batteries alone, our analysis makes clear that the societal cost of achieving deep decarbonization that way will likely be far more expensive than is necessary.”
A new taxonomy for electricity sources
In looking at options for new power generation in different scenarios, the team found that the traditional way of describing different types of power sources in the electrical industry — “baseload,” “load following,” and “peaking” resources — is outdated and no longer useful, given the way new resources are being used.
Rather, they suggest, it’s more appropriate to think of power sources in three new categories: “fuel-saving” resources, which include solar, wind and run-of-the-river (that is, without dams) hydropower; “fast-burst” resources, providing rapid but short-duration responses to fluctuations in electricity demand and supply, including battery storage and technologies and pricing strategies to enhance the responsiveness of demand; and “firm” resources, such as nuclear, hydro with large reservoirs, biogas, and geothermal.
“Because we can’t know with certainty the future cost and availability of many of these resources,” Sepulveda notes, “the cases studied covered a wide range of possibilities, in order to make the overall conclusions of the study robust across that range of uncertainties.”
Range of scenarios
The group used a range of projections, made by agencies such as the National Renewable Energy Laboratory, as to the expected costs of different power sources over the coming decades, including costs similar to today’s and anticipated cost reductions as new or improved systems are developed and brought online. For each technology, the researchers chose a projected mid-range cost, along with a low-end and high-end cost estimate, and then studied many combinations of these possible future costs.
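The combinatorial setup described above can be sketched in a few lines. The cost figures below are invented placeholders, not the study’s actual projections: each technology gets a low, mid, and high projected cost, and every combination of assumptions becomes one scenario to analyze.

```python
from itertools import product

# Illustrative $/MWh projections (invented numbers, not from the study).
cost_projections = {
    "solar":   {"low": 20, "mid": 35, "high": 50},
    "wind":    {"low": 25, "mid": 40, "high": 55},
    "battery": {"low": 30, "mid": 60, "high": 90},
    "nuclear": {"low": 70, "mid": 95, "high": 120},
}

techs = sorted(cost_projections)
levels = ["low", "mid", "high"]

# Enumerate every combination of cost assumptions (3^4 = 81 here; the
# study also varied renewable availability and policy constraints,
# yielding nearly 1,000 scenarios).
scenarios = [dict(zip(techs, combo)) for combo in product(levels, repeat=len(techs))]

def scenario_costs(assumption):
    """Map one scenario's cost assumptions to concrete $/MWh figures."""
    return {t: cost_projections[t][assumption[t]] for t in techs}

print(len(scenarios))                # 81
print(scenario_costs(scenarios[0]))  # the all-"low" combination
```

Each such scenario would then be fed to a capacity-planning optimization; the study’s conclusions come from comparing the optimized system cost across all combinations.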
Under every scenario, cases that were restricted to using fuel-saving and fast-burst technologies had a higher overall cost of electricity than cases using firm low-carbon sources as well, “even with the most optimistic set of assumptions about future cost reductions,” Sepulveda says.
That’s true, Jenkins adds, “even when we assume, for example, that nuclear remains as expensive as it is today, and wind and solar and batteries get much cheaper.”
The authors also found that across all of the wind-solar-batteries-only cases, the cost of electricity rises rapidly as systems move toward zero emissions, but when firm power sources are also available, electricity costs increase much more gradually as emissions decline to zero.
“If we decide to pursue decarbonization primarily with wind, solar, and batteries,” Jenkins says, “we are effectively ‘going all in’ and betting the planet on achieving very low costs for all of these resources,” as well as the ability to build out continental-scale high-voltage transmission lines and to induce much more flexible electricity demand.
In contrast, “an electricity system that uses firm low-carbon resources together with solar, wind, and storage can achieve zero emissions with only modest increases in cost even under pessimistic assumptions about how cheap these carbon-free resources become or our ability to unlock flexible demand or expand the grid,” says Jenkins. This shows how the addition of firm low-carbon resources “is an effective hedging strategy that reduces both the cost and risk” for fully decarbonizing power systems, he says.
Even though a fully carbon-free electricity supply is years away in most regions, it is important to do this analysis today, Sepulveda says, because decisions made now about power plant construction, research investments, or climate policies have impacts that can last for decades.
“If we don’t start now” in developing and deploying the widest range of carbon-free alternatives, he says, “that could substantially reduce the likelihood of getting to zero emissions.”
David Victor, a professor of international relations at the University of California at San Diego, who was not involved in this study, says, “After decades of ignoring the problem of climate change, finally policymakers are grappling with how they might make deep cuts in emissions. This new paper in Joule shows that deep decarbonization must include a big role for reliable, firm sources of electric power. The study, one of the few rigorous numerical analyses of how the grid might actually operate with low-emission technologies, offers some sobering news for policymakers who think they can decarbonize the economy with wind and solar alone.”
The research received support from the MIT Energy Initiative, the Martin Family Trust, and the Chilean Navy.
Inside Ana Miljacki’s office in MIT’s Department of Architecture, a sign hangs on the wall bearing a wry message:
UTOPIA IS HERE
JUST FOR TODAY
By itself, that sign could be a lot of things: an earnest plea to enjoy the moment, an ironic commentary on the futility of seeking perfection, or a wistful nod to the impermanence of everything.
In Miljacki’s case, it is all of those, and a reference to architects she has studied and written about. Miljacki is an architectural historian, curator, and designer who has written books on postwar design, co-curated the U.S. pavilion at the Venice Biennale, and heads the Master of Architecture program at MIT.
Miljacki’s first book was about the hopes and compromises of architects in postwar Czechoslovakia, covering the first three decades of their attempts to develop roles within the larger project of “constructing socialism.” Some of them, such as a group called SIAL from Liberec, made what Miljacki calls “a genuine effort to practice utopia” under the circumstances.
In terms of architecture, Miljacki has written, this meant that “utopia was no longer synonymous with the production of fantastical images of a perfect world sometime and somewhere else.” For the designers of SIAL themselves, this meant an “attempt to work out an effective role for architecture and architects within the confines” of a repressive political system.
Thus the SIAL architects had dreams but were realists, and the tension between these two things defined their careers.
“I have empathy for architects who operated in that context,” Miljacki says. “I don’t try to simplify the story, but I’m not unsympathetic to what they were trying to do, including survive.”
Today, in a very different time and place, Miljacki still ponders these ideas when evaluating her own career.
“Utopia is never something to strive for as a complete and frozen condition,” Miljacki says. But she is an idealist about the discipline of architecture, and, above all, about teaching it to MIT students. Indeed, Miljacki says, the best way to think about teaching is as a form of utopia.
“The classroom is where I practice and cultivate a kind of utopia with my students, in the best possible sense,” she says.
Beginning in Belgrade
By Miljacki’s account, it is unsurprising that she became an architect. She grew up in Belgrade as the child of two architects who designed “large swaths of housing” for the former Yugoslavia.
“For me, architecture seemed an obvious choice,” Miljacki recounts. She attended an architectural high school in Belgrade and was accepted to architecture school there, just as the Yugoslav Wars were breaking out in the early 1990s. Helped by a family friend, Miljacki spent a year attending high school in the U.S. while applying to American colleges.
“I’d never been to the U.S., but I was always dreaming of big things,” Miljacki says. And she got a full scholarship to Bennington College, the liberal arts college in Vermont.
“A liberal arts school as a model didn’t exist in my world,” Miljacki says, “but I suddenly had room to think about philosophy and literature and architecture and set design in the same context.” Moreover, Bennington’s educational philosophy — including no letter grades for students — helped her become a better, more inquisitive student.
“Bennington had no grades, and I had been a very good ‘A’ student, so I knew what it took to get good grades,” Miljacki recounts, calling the new model “an important shock to the system.” Instead, she notes, “[w]ith grades irrelevant, we were all left to our own — and our teachers’ — more nuanced judgment about what was relevant. I began working to satisfy my own standards, not somebody else’s, and I think that was really important for me at that moment in time.”
Miljacki then got her MA in architecture at Rice University and entered the PhD program at Harvard University, where her dissertation examined postwar Czech architects and their struggles to practice and live under socialism.
In doing so, Miljacki was, in a distant way, digging into her own past, given her parents’ lives as architects in the former Yugoslavia. Her writing about Czech architects was “informed by my experiences in Serbian context, having watched my parents there. But I didn’t want to be a historian of my backyard.”
Miljacki’s academic career then took her to Columbia University, before she was hired on to the MIT faculty. For her research, design projects, and teaching, she was granted tenure in 2017.
Next in Venice
Miljacki’s own design projects are numerous. She has been principal in the design firm Project_ since 2002, and has designed and curated a long list of exhibitions. The highest-profile of these efforts was the U.S. pavilion at the Venice Biennale in 2014, called “OfficeUS” and co-curated by Miljacki, Eva Franch i Gilabert, now director of the Architectural Association school in London, and Ashley Schafer, a professor at Ohio State University.
Spurred by the event’s director, famed architect Rem Koolhaas, to dig into architectural history, the U.S. pavilion depicted a modernist office with pamphlets on the wall that themselves presented historical research about the spread of U.S. architecture around the world during a period when the country’s “soft power” expanded globally.
“The century has been the ‘American Century,’ so the pavilion had a real responsibility to think about how the U.S. had impacted the world during that century,” Miljacki says. “Our project, OfficeUS, was about starting a conversation. It was the first time the body of American architecture abroad was ever constituted as such.”
Miljacki also recently co-edited a book of essays about the architectural profession and issues of authorship, influence, reproduction, and copyright.
“[People] are exposed to immense amounts of work, through images,” Miljacki says. “This is unprecedented. And so … there is much more copying in the most superficial of ways, across the board.”
Still, beyond research, writing, designing, and curating, Miljacki emphasizes that she always feels at home while teaching.
“It’s always been about students first,” Miljacki says. “And the MIT students are amazing. … They are both earnest and sophisticated. They are thoughtful and open to being taught, and they’re good students.” She adds, “The students at MIT have the best time. They are able to go across the spectrum of our discipline groups and faculty, and in the end tailor their particular academic diets to their own interests.”
So while life as an architecture student may never be utopian, thanks to people like Miljacki, it is getting closer, day by day.
Software applications provide people with many kinds of automated decisions, such as assessing an individual's credit risk, informing a recruiter which job candidate to hire, or determining whether someone is a threat to the public. In recent years, news headlines have warned of a future in which machines operate in the background of society, deciding the course of human lives while using untrustworthy logic.
Part of this fear is derived from the obscure way in which many machine learning models operate. Known as black-box models, they are defined as systems in which the journey from input to output is next to impossible for even their developers to comprehend.
"As machine learning becomes ubiquitous and is used for applications with more serious consequences, there's a need for people to understand how it's making predictions so they'll trust it when it's doing more than serving up an advertisement," says Jonathan Su, a member of the technical staff in MIT Lincoln Laboratory's Informatics and Decision Support Group.
Currently, researchers either use post hoc techniques or an interpretable model such as a decision tree to explain how a black-box model reaches its conclusion. With post hoc techniques, researchers observe an algorithm's inputs and outputs and then try to construct an approximate explanation for what happened inside the black box. The issue with this method is that researchers can only guess at the inner workings, and the explanations can often be wrong. Decision trees, which map choices and their potential consequences in a tree-like construction, work nicely for categorical data whose features are meaningful, but these trees are not interpretable in important domains, such as computer vision and other complex data problems.
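As a minimal illustration of the post hoc idea, one can probe a black box with sample inputs and fit a simple surrogate rule to its outputs, then measure how faithfully the rule mimics the model. Everything here (the credit model, the data ranges, the single-threshold "stump") is invented for this sketch, not drawn from the AIM research:

```python
# A stand-in for an opaque credit model we cannot inspect directly.
def black_box(income, debt):
    return 1 if (0.7 * income - 1.3 * debt) > 50 else 0

# Step 1: observe the model's inputs and outputs.
samples = [(inc, debt) for inc in range(0, 201, 10) for debt in range(0, 101, 10)]
labels = [black_box(inc, debt) for inc, debt in samples]

# Step 2: search for the single income threshold that best mimics the model.
def accuracy(threshold):
    preds = [1 if inc > threshold else 0 for inc, _ in samples]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

best = max(range(0, 201, 10), key=accuracy)
print(f"surrogate rule: approve if income > {best} (fidelity {accuracy(best):.0%})")
```

Note that the surrogate is readable but only approximates the black box; because the true model also weighs debt, the one-feature rule cannot reach perfect fidelity. That gap is exactly the weakness of post hoc explanations described above.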
Su leads a team at the laboratory that is collaborating with Professor Cynthia Rudin at Duke University, along with Duke students Chaofan Chen, Oscar Li, and Alina Barnett, to research methods for replacing black-box models with prediction methods that are more transparent. Their project, called Adaptable Interpretable Machine Learning (AIM), focuses on two approaches: interpretable neural networks as well as adaptable and interpretable Bayesian rule lists (BRLs).
A neural network is a computing system composed of many interconnected processing elements. These networks are typically used for image analysis and object recognition. For instance, an algorithm can be taught to recognize whether a photograph includes a dog by first being shown photos of dogs. The problem with these neural networks, researchers say, is that their functions are nonlinear, recursive, and opaque to humans, making it difficult to pinpoint what exactly the network has defined as "dogness" within the photos and what led it to that conclusion.
To address this problem, the team is developing what it calls "prototype neural networks." These are different from traditional neural networks in that they naturally encode explanations for each of their predictions by creating prototypes, which are particularly representative parts of an input image. These networks make their predictions based on the similarity of parts of the input image to each prototype.
As an example, if a network is tasked with identifying whether an image is a dog, cat, or horse, it would compare parts of the image to prototypes of important parts of each animal and use this information to make a prediction. A paper on this work, "This Looks Like That: Deep Learning for Interpretable Image Recognition," was recently featured in an episode of the "Data Science at Home" podcast. A previous paper, "Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions," used entire images as prototypes, rather than parts.
The other area the research team is investigating is BRLs, which are less-complicated, one-sided decision trees that are suitable for tabular data and often as accurate as other models. BRLs are made of a sequence of conditional statements that naturally form an interpretable model. For example, if blood pressure is high, then risk of heart disease is high. Su and colleagues are using properties of BRLs to enable users to indicate which features are important for a prediction. They are also developing interactive BRLs, which can be adapted immediately when new data arrive rather than recalibrated from scratch on an ever-growing dataset.
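A rule list of the kind described above can be pictured as an ordered chain of if-then statements, checked top to bottom, where the first matching rule supplies both the prediction and its explanation. The sketch below is a hand-written toy with invented medical thresholds; an actual Bayesian rule list is learned from data, with posterior probabilities attached to each rule:

```python
# An ordered rule list: (condition, label) pairs checked in sequence.
# Rules and thresholds are invented for illustration, not clinical advice.
RULES = [
    (lambda p: p["systolic_bp"] >= 160,                     "high risk"),
    (lambda p: p["systolic_bp"] >= 140 and p["smoker"],     "high risk"),
    (lambda p: p["age"] >= 65,                              "moderate risk"),
]
DEFAULT = "low risk"

def predict(patient):
    """Return the first matching rule's label; because the matched rule
    is human-readable, the model explains itself."""
    for condition, label in RULES:
        if condition(patient):
            return label
    return DEFAULT

print(predict({"systolic_bp": 150, "smoker": True, "age": 50}))   # high risk
print(predict({"systolic_bp": 120, "smoker": False, "age": 70}))  # moderate risk
```

The "interactive" variant the team describes would update this ordered list incrementally as new data arrive, rather than relearning it from scratch.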
Stephanie Carnell, a graduate student from the University of Florida and a summer intern in the Informatics and Decision Support Group, is applying the interactive BRLs from the AIM program to a project to help medical students become better at interviewing and diagnosing patients. Currently, medical students practice these skills by interviewing virtual patients and receiving a score on how much important diagnostic information they were able to uncover. But the score does not include an explanation of what, precisely, in the interview the students did to achieve their score. The AIM project hopes to change this.
"I can imagine that most medical students are pretty frustrated to receive a prediction regarding success without some concrete reason why," Carnell says. "The rule lists generated by AIM should be an ideal method for giving the students data-driven, understandable feedback."
The AIM program is part of ongoing research at the laboratory in human-systems engineering — or the practice of designing systems that are more compatible with how people think and function, such as understandable, rather than obscure, algorithms.
"The laboratory has the opportunity to be a global leader in bringing humans and technology together," says Hayley Reynolds, assistant leader of the Informatics and Decision Support Group. "We're on the cusp of huge advancements."
Melva James is another technical staff member in the Informatics and Decision Support Group involved in the AIM project. "We at the laboratory have developed Python implementations of both BRL and interactive BRLs," she says. "[We] are concurrently testing the output of the BRL and interactive BRL implementations on different operating systems and hardware platforms to establish portability and reproducibility. We are also identifying additional practical applications of these algorithms."
Su explains: "We're hoping to build a new strategic capability for the laboratory — machine learning algorithms that people trust because they understand them."
Nearly 150 years ago, the physicist James Clerk Maxwell proposed that a circular lens that is thickest at its center, and that gradually thins out at its edges, should exhibit some fascinating optical behavior. Namely, when light is shone through such a lens, it should travel around in perfect circles, creating highly unusual, curved paths of light.
He also noted that such a lens, at least broadly speaking, resembles the eye of a fish. The lens configuration he devised has since been known in physics as Maxwell’s fish-eye lens — a theoretical construct that is only slightly similar to commercially available fish-eye lenses for cameras and telescopes.
Now scientists at MIT and Harvard University have for the first time studied this unique, theoretical lens from a quantum mechanical perspective, to see how individual atoms and photons may behave within the lens. In a study published Wednesday in Physical Review A, they report that the unique configuration of the fish-eye lens enables it to guide single photons through the lens, in such a way as to entangle pairs of atoms, even over relatively long distances.
Entanglement is a quantum phenomenon in which the properties of one particle are linked, or correlated, with those of another particle, even over vast distances. The team’s findings suggest that fish-eye lenses may be a promising vehicle for entangling atoms and other quantum bits, which are the necessary building blocks for designing quantum computers.
“We found that the fish-eye lens has something that no other two-dimensional device has, which is maintaining this entangling ability over large distances, not just for two atoms, but for multiple pairs of distant atoms,” says first author Janos Perczel, a graduate student in MIT’s Department of Physics. “Entanglement and connecting these various quantum bits can be really the name of the game in making a push forward and trying to find applications of quantum mechanics.”
The team also found that the fish-eye lens, contrary to recent claims, does not produce a perfect image. Scientists have thought that Maxwell’s fish-eye may be a candidate for a “perfect lens” — a lens that can go beyond the diffraction limit, meaning that it can focus light to a point that is smaller than the light’s own wavelength. Such perfect imaging, scientists predict, should produce an image with essentially unlimited resolution and extreme clarity.
However, by modeling the behavior of photons through a simulated fish-eye lens, at the quantum level, Perczel and his colleagues concluded that it cannot produce a perfect image, as originally predicted.
“This tells you that there are these limits in physics that are really difficult to break,” Perczel says. “Even in this system, which seemed to be a perfect candidate, this limit seems to be obeyed. Perhaps perfect imaging may still be possible with the fish eye in some other, more complicated way, but not as originally proposed.”
Perczel’s co-authors on the paper are Peter Komar and Mikhail Lukin from Harvard University.
A circular path
Maxwell was the first to realize that light is able to travel in perfect circles within the fish-eye lens because the density of the lens changes, with material being thickest at the middle and gradually thinning out toward the edges. The denser a material, the slower light moves through it. This explains the optical effect when a straw is placed in a glass half full of water. Because the water is so much denser than the air above it, light suddenly moves more slowly, bending as it travels through water and creating an image that looks as if the straw is disjointed.
In the theoretical fish-eye lens, the differences in density are much more gradual and are distributed in a circular pattern, in such a way that the lens curves, rather than bends, light, guiding it in perfect circles within the lens.
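For reference, Maxwell’s original index profile (a standard textbook result, not spelled out in the news story) makes this gradient precise. The refractive index is highest at the center of the lens and falls off smoothly with distance r from the center:

```latex
n(r) = \frac{n_0}{1 + (r/R)^2}
```

where R is the radius of the lens and n_0 is the index at its center. Light rays propagating through a medium with this profile trace perfect circles.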
In 2009, Ulf Leonhardt, a physicist at the Weizmann Institute of Science in Israel, was studying the optical properties of Maxwell’s fish-eye lens and observed that, when photons are released through the lens from a single point source, the light travels in perfect circles through the lens and collects at a single point at the opposite end, with very little loss of light.
“None of the light rays wander off in unwanted directions,” Perczel says. “Everything follows a perfect trajectory, and all the light will meet at the same time at the same spot.”
Leonhardt, in reporting his results, briefly raised the question of whether the fish-eye lens’ single-point focus might be useful for precisely entangling pairs of atoms at opposite ends of the lens.
“Mikhail [Lukin] asked him whether he had worked out the answer, and he said he hadn’t,” Perczel says. “That’s how we started this project and started digging deeper into how well this entangling operation works within the fish-eye lens.”
Playing photon ping-pong
To investigate the quantum potential of the fish-eye lens, the researchers modeled the lens as the simplest possible system, consisting of two atoms, one at either end of a two-dimensional fish-eye lens, and a single photon, aimed at the first atom. Using established equations of quantum mechanics, the team tracked the photon at any given point in time as it traveled through the lens, and calculated the state of both atoms and their energy levels through time.
They found that when a single photon is shone through the lens, it is temporarily absorbed by an atom at one end of the lens. It then circles through the lens, to the second atom at the precise opposite end of the lens. This second atom momentarily absorbs the photon before sending it back through the lens, where the light collects precisely back on the first atom.
“The photon is bounced back and forth, and the atoms are basically playing ping pong,” Perczel says. “Initially only one of the atoms has the photon, and then the other one. But between these two extremes, there’s a point where both of them kind of have it. It’s this mind-blowing quantum mechanics idea of entanglement, where the photon is completely shared equally between the two atoms.”
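Schematically (in simplified two-level-atom notation, not the paper’s exact formalism), the halfway point Perczel describes is the maximally entangled state in which the single excitation is shared equally between the two atoms:

```latex
|\psi\rangle = \frac{1}{\sqrt{2}}\left(|e\rangle_1 |g\rangle_2 + |g\rangle_1 |e\rangle_2\right)
```

where |e⟩ and |g⟩ denote an atom’s excited and ground states, and the subscripts label the two atoms. Neither atom individually “has” the photon’s energy; the excitation belongs to the pair.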
Perczel says that the photon is able to entangle the atoms because of the unique geometry of the fish-eye lens. The lens’ density is distributed in such a way that it guides light in a perfectly circular pattern and can cause even a single photon to bounce back and forth between two precise points along a circular path.
“If the photon just flew away in all directions, there wouldn’t be any entanglement,” Perczel says. “But the fish-eye gives this total control over the light rays, so you have an entangled system over long distances, which is a precious quantum system that you can use.”
As they increased the size of the fish-eye lens in their model, the atoms remained entangled, even over relatively large distances of tens of microns. They also observed that, even if some light escaped the lens, the atoms were able to share enough of a photon’s energy to remain entangled. Finally, as they placed more pairs of atoms in the lens, opposite to one another, along with corresponding photons, these atoms also became simultaneously entangled.
“You can use the fish eye to entangle multiple pairs of atoms at a time, which is what makes it useful and promising,” Perczel says.
In modeling the behavior of photons and atoms in the fish-eye lens, the researchers also found that, as light collected on the opposite end of the lens, it did so within an area that was larger than the wavelength of the photon’s light, meaning that the lens likely cannot produce a perfect image.
“We can precisely ask the question during this photon exchange, what’s the size of the spot to which the photon gets recollected? And we found that it’s comparable to the wavelength of the photon, and not smaller,” Perczel says. “Perfect imaging would imply it would focus on an infinitely sharp spot. However, that is not what our quantum mechanical calculations showed us.”
Going forward, the team hopes to work with experimentalists to test the quantum behaviors they observed in their modeling. In fact, in their paper, the team also briefly proposes a way to design a fish-eye lens for quantum entanglement experiments.
“The fish-eye lens still has its secrets, and remarkable physics buried in it,” Perczel says. “But now it’s making an appearance in quantum technologies where it turns out this lens could be really useful for entangling distant quantum bits, which is the basic building block for building any useful quantum computer or quantum information processing device.”
Typically, when architects or engineers design a new building, it’s only at the end of the process — if ever — that a lifecycle analysis of the building’s environmental impact is carried out. And by then, it may be too late to make significant changes. Now, a faster and easier system for doing such analyses could change all that, making the analysis an integral part of the design process from the beginning.
The new process, described in the journal Building and Environment in a paper by MIT researchers Jeremy Gregory, Franz-Josef Ulm and Randolph Kirchain, and recent graduate Joshua Hester PhD ’18, is simple enough that it could be integrated into the software already used by building designers so that it becomes a seamless addition to their design process.
Lifecycle analysis, known as LCA, is a process of examining all the materials; design elements; location and orientation; heating, cooling, and other energy systems; and expected ultimate disposal of a building, in terms of costs, environmental impacts, or both. Ulm, a professor of civil and environmental engineering and director of MIT’s Concrete Sustainability Hub (CSH), says that typically LCA is applied “only when a building is fully designed, so it is rather a post-mortem tool but not an actual design tool.” That’s what the team set out to correct.
“We wanted to address how to bridge that gap between using LCA at the end of the process and getting architects and engineers to use it as a design tool,” he says. The big question was whether it would be possible to incorporate LCA evaluations into the design process without having it impose too many restrictions on the design choices, thus making it unappealing to the building designers. Ulm wondered, “How much does the LCA restrict the flexibility of the design?”
Measuring freedom of design
To address that question systematically, the team had to come up with a process of measuring the flexibility of design choices in a quantitative way. They settled on a measure they call “entropy,” analogous to the use of that term in physics. In physics, a system with greater entropy is “hotter,” with its molecules moving around rapidly. In the team’s use of the term, higher entropy represents a greater variety of available choices at a given point, while lower entropy represents a more restricted range of choices.
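As a rough illustration of the idea (the paper’s actual entropy measure is more involved, and the option counts below are invented for the example), a design space’s flexibility can be scored by summing the information content of the options still open for each undecided attribute:

```python
import math

def entropy(options_per_choice):
    """Shannon-style entropy of a design space: the sum of log2 of the
    number of options still available for each undecided attribute.
    Higher entropy = more design freedom remaining."""
    return sum(math.log2(n) for n in options_per_choice if n > 0)

# Hypothetical attributes: wall material, window ratio, roofing, HVAC type.
early = entropy([5, 4, 6, 3])  # before any LCA-guided narrowing
later = entropy([4, 4, 5, 3])  # after LCA eliminates a few poor options

# Entropy drops only slightly -- most design flexibility is preserved.
print(early, later)
```

The comparison mirrors the team’s finding: pruning a handful of environmentally poor options early costs little entropy, so the designer’s freedom is barely touched.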
To the researchers’ surprise, they found use of their LCA system had very little impact on reducing the range of design choices. “That’s the most remarkable result,” Ulm says. When introducing the LCA into the early stages of the design process, “you barely touch the design flexibility,” he says. “I was convinced we would come to a compromise,” where design flexibility would have to be limited in order to gain better lifecycle performance, Ulm says. “But in fact, the results proved me wrong.”
The system looks at the full range of climate impacts from a new structure, including all three phases: construction, including examining the embodied energy in all the materials used in the building; operation of the building, including all of the energy sources needed to provide heating, cooling, and electrical service; and the final dismantling and disposal, or repurposing of the structure, at the end of its service.
Evaluating the lifecycle impact of design choices requires looking at a wide range of factors. These include: the location’s climate (for their research, they chose Arizona and New England as two very different cases of U.S. climate); the building’s dimensions and orientation; the ratio of walls to windows on each side; the materials used for walls, foundations, and roofing; the type of heating and cooling systems used; and so on. As each of these factors gets decided, the range of possibilities for the building gets narrower and narrower — but not much more so than in any conventional design process.
At any point, the program “would also provide information about a lot of the things that are not yet defined,” essentially offering a menu of choices that could lead to a more environmentally friendly design, says Kirchain, who is a principal research scientist at MIT and co-director of the CSH, which supported the project.
While designed particularly for reducing the climate impact of a building, the same tool could also be used to optimize a building for other criteria, such as simply to minimize cost, the researchers say.
Getting in early
Thinking about issues such as the ultimate fate of a building at the end of its functional life tends to be “not in the same order of interest for the designing architect, when they first work on a design,” compared to more immediate factors such as how the building will look to the client, and meeting any particular functional requirements for the structure, Ulm says. But if the new LCA tools are integrated right into the design software architects already use, indications of how a given design choice affects the outcome would be constantly available, able to influence choices in small, subtle ways early in the process.
By comparing the design process with and without the use of such tools, the researchers found that the overall greenhouse gas emissions associated with a building could be reduced by 75 percent “without a reduction in the flexibility of the design process,” Ulm says.
Ulm compares it to indicators in a gym that provide feedback on how many calories are being burned at any point in an exercise regime, providing a constant incentive to improve results — without ever prescribing what exercises the person should do or how to do them.
While the program is currently designed to evaluate relatively simple single-family homes — which represent the vast majority of living spaces in the U.S. — the team hopes to expand it to be able to work on much bigger residential or commercial buildings as well.
At this point, the software the team designed is a standalone package, so “one of our tasks going forward is to actually transition to making it a plug-in to some of the software tools that are out there” for architectural design, says Kirchain.
While there are many software tools available to help with evaluating a building’s environmental impact, Kirchain says, “we don’t see a lot of architects using these tools.” But that’s partly because these tend to be too prescriptive, he says, pointing toward an optimal design and constricting the designer’s choices. “Our theory is that any designer doesn’t want to be told that this is how the design must be. Their role is to design without undue constraints,” he says.
Demonstrating a means of applying machine vision and signal processing to a complex mechanical system has won MIT Aeronautics and Astronautics Department graduate students Sebastien Mannai and Antoni Rosinol Vidal first prize in the multi-university FutureMakers Challenge.
The FutureMakers Challenge involved students from five U.S. universities who worked on next-generation software concepts to foster innovation and to develop the next-generation digital engineering workforce. The competition was hosted by Siemens Corporate Technology in collaboration with MIT and the Institute’s Industrial Liaison Program. Similar challenges were held at Carnegie Mellon University, the University of California at Berkeley, Princeton University, Rutgers University, and Georgia Tech.
The competition gave student teams 24 hours to create software solutions for Siemens' Mindsphere cloud-based operating system that can be applied to emerging technology trends such as cybersecurity, machine learning, artificial intelligence, industrial automation, and smart manufacturing.
Winners were selected by a panel of Siemens experts based on "innovation, out-of-the-box thinking, and relevance to market needs.”
Mannai and Rosinol Vidal demonstrated the utility of machine vision and signal processing in a complex mechanical engineering system. Machine vision refers to applications in which a combination of hardware and software provides operational guidance, based on the capture and processing of images, for devices in the execution of their functions. The students’ concept uses existing CCTV video feeds to reconstruct, in real time and in 3-D, the surface movement of an oil well, and uses that data to infer the motion of the pump deep beneath the ground. By integrating a mechanical engineering system with an artificial intelligence system in this way, machine-vision-based monitoring and control can enhance equipment safety and performance.
Siemens will invest $140,000 to support follow-up MIT research on Mannai and Rosinol Vidal’s idea.
Mannai is a PhD candidate in the Gas Turbine Laboratory (GTL), where his advisor is Choon Sooi Tan. Rosinol Vidal, also a PhD candidate, is associated with the Sensing, Perception, Autonomy, and Robot Kinetics Laboratory (SPARK) and is advised by Assistant Professor Luca Carlone.
The students expect to apply the resources provided by Siemens to demonstrate their conceptual framework on a gas turbine engine subsystem or a wind turbine model. The framework will first be implemented on a computational test bed, and then applied to a hardware system of interest to Siemens for assessment.
The study of supply chain logistics has risen to prominence with the growth of global commerce, but it has primarily focused on global and regional networks. Now a growing body of research addresses what experts call “last-mile” logistics: the delivery of products in urban environments. The growing congestion of cities and the explosion in e-commerce home delivery have challenged traditional last-mile logistics strategies, which have focused on point-of-sale delivery.
Even before the e-commerce boom, last-mile logistics have been complicated by increasing gridlock in fast-growing megacities. The complexity is heightened by the often-conflicting demands of retailers, e-tailers, unions, government officials, citizen activists, and a diverse ecosystem of shipping firms.
“In the city, shipments are typically much smaller and more fragmented than in regional transport,” says Matthias Winkenbach, a research scientist at MIT’s Center for Transportation and Logistics, and director of the Megacities Logistics Lab. “There’s greater uncertainty and complexity caused by increasingly dense and congested cities.”
E-commerce has significantly increased that complexity. Not only are there more trucks plying the streets, but they make more stops, which further hinders traffic.
“Home delivery routes of e-commerce shipments typically consist of 50 to 150 stops per day, depending on the type of vehicle,” says Winkenbach. “By comparison, beverage distributors to commercial clients have routes of 10 to 15 deliveries. The process of looking for parking spaces — and the practice of double parking when none can be found — are the key drivers of inefficiency and congestion.”
Consumer e-commerce also boosts the chance of delivery failure, which adds to complexity and cost. “You often need to schedule deliveries for customer-specific time windows, and there’s a greater risk the customer will not be home,” says Winkenbach.
Last-mile logistics planners need to accept the new reality of internet shopping because it’s only likely to keep growing, says Winkenbach. “People are getting used to the convenience of ordering products online and receiving them the next day or even the same day. That creates a lot more traffic, congestion, noise, and emissions.”
Back to the city
In recent decades, logistics centers have moved from the cities to the exurbs, due in part to lower real estate costs. With increasing shipments in urban areas, however, there are now more multi-tier distribution systems, in which hubs are augmented with smaller logistic centers and fulfillment operations in the city.
“Now that people expect faster, more tailored, and more flexible e-commerce delivery, logistics is moving closer to the customer with multi-tier systems,” says Winkenbach. “We are helping companies answer questions like how many satellite facilities are needed, where they should be located, and what their function should be. Should certain facilities be limited to transshipment, or should some also hold inventory?”
Retailers and delivery firms are also starting to augment their fleets by outsourcing delivery to third-party services.
“Companies are experimenting with on-demand fleet services including crowdsourced delivery providers like UberRUSH,” says Winkenbach. “On-demand services create flexibility for logistics service providers and retailers by letting them temporarily expand delivery capacity. They can cover the baseload with their own fleets, and then use on-demand services to cover peak periods, as well as the most urgent and cost-insensitive delivery requests. That might be more cost effective than owning a larger fleet that is less utilized most of the time.”
Linking GPS and transactional data
The complexity of last-mile logistics would seem to be a natural fit for big data. Yet, most companies are better served with more traditional database analytics, Winkenbach advises.
“Companies have a lot of data to sift through, but it tends to be simple data like transactions, delivery records, and customer information, primarily stored in-house,” he says. “By combining it properly, you can generate a lot of insight into how demand is structured, how your customers behave, and how you can adapt your delivery systems to better serve customer needs.”
Route planners are often insulated from the complex realities of the delivery process, which leads to erroneous assumptions, says Winkenbach. “They often assume that drivers can park the vehicle in front of the customer’s house, but this is often not the case. The drivers know where they can potentially park, which might be three blocks away, but that information rarely makes it into the planning process.”
One way to integrate these insights is through location tracking, which is greatly enhancing last-mile logistics.
“Most fleets now have GPS tracking, and the resolution and accuracy is improving,” says Winkenbach. “Movement data is extremely useful for extracting local driver knowledge. By connecting movement data with transactional data, you can know where the vehicle parked and which customers were served from that stop. This enables route planners to come up with more realistic plans that drivers can actually adhere to.”
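The linkage Winkenbach describes can be sketched as a simple join between GPS dwell windows and delivery timestamps. This is an illustrative sketch with invented record layouts and field names, not any company’s actual system:

```python
from datetime import datetime

# Hypothetical GPS stop events: (stop_id, (lat, lon), dwell start, dwell end),
# as would be extracted from a vehicle's movement trace.
gps_stops = [
    ("stop1", (19.43, -99.13), datetime(2018, 9, 3, 9, 0), datetime(2018, 9, 3, 9, 25)),
    ("stop2", (19.44, -99.14), datetime(2018, 9, 3, 10, 0), datetime(2018, 9, 3, 10, 40)),
]

# Hypothetical transactional delivery records: (customer, delivery timestamp).
deliveries = [
    ("cust_a", datetime(2018, 9, 3, 9, 5)),
    ("cust_b", datetime(2018, 9, 3, 9, 18)),
    ("cust_c", datetime(2018, 9, 3, 10, 12)),
]

def link_deliveries_to_stops(stops, deliveries):
    """Attribute each delivery to the GPS stop whose dwell window contains its
    timestamp, revealing which customers were actually served from each
    parking location."""
    served = {stop_id: [] for stop_id, *_ in stops}
    for customer, ts in deliveries:
        for stop_id, _loc, start, end in stops:
            if start <= ts <= end:
                served[stop_id].append(customer)
                break
    return served

print(link_deliveries_to_stops(gps_stops, deliveries))
# {'stop1': ['cust_a', 'cust_b'], 'stop2': ['cust_c']}
```

A planner comparing the stop locations against customer addresses could then see, for instance, that a driver routinely parks blocks away from the customers served from that stop — exactly the local knowledge that rarely reaches the planning process.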
GPS and traffic data are also used for on-the-fly routing. At least two MIT-based startups provide smart routing software for urban fleets, says Winkenbach. “They let you redesign the route based on the most current information about congestion patterns.”
Exchanging last-mile delivery information between companies, and even sharing the deliveries themselves, are two often overlooked ways to improve service.
“I have ridden on half-empty delivery trucks of several consumer product companies in Mexico City that serve the same customers often at the same times,” says Winkenbach. “If they were willing to cooperate or even consolidate shipments, they could create tremendous economic savings and positive impacts on congestion and emissions. We can act as a neutral entity to bring companies together without fear of revealing confidential data.”
Smart lockers, smart infrastructure, and autonomous vehicles
Beyond GPS, there are a variety of technological solutions that can improve last-mile logistics. In Europe, for example, DHL has pioneered the use of neighborhood smart lockers.
“The customer usually likes smart lockers because they can walk a short distance to receive their package whenever it’s convenient,” says Winkenbach. “Logistics service providers like them because they consolidate demand, letting them drop a lot of shipments at one stop. This reduces the risk of failed delivery to almost zero, while increasing efficiency and lowering cost.”
Smart city infrastructure can also be useful for last-mile logistics.
“Companies like GE and Siemens are working on smart street lighting with sensors that detect where free parking spaces open up,” says Winkenbach. “If you made that data available to service providers, it would streamline deliveries and reduce double parking.”
Autonomous vehicles have been proposed for last-mile delivery in cities with high labor costs. Yet, the conveniences imagined for self-driving taxi services do not translate to package delivery. With autonomous taxis, the routes can be coordinated to pick up new passengers near the drop-off point, so the vehicle rarely drives empty. Package delivery, however, is usually a one-way process: Delivery trucks distribute the goods, and then return to the warehouse empty.
Autonomous vehicles might reduce labor costs, but they add to vehicle and infrastructure costs, says Winkenbach. The last few yards are especially problematic. You would either need many more smart lockers, positioned so that the vehicles could fill them directly with robotic extensions, or the vehicles would need to incorporate smart lockers themselves. In either case, doorway delivery would be unlikely.
Amazon’s drone delivery video has sparked imaginations, but drones pose even more problems. “Drone technology is getting sufficiently advanced to make delivery possible, but you also need the infrastructure,” says Winkenbach. “Most households lack space for a landing patch, and you would need to regulate and coordinate thousands of drones so they could fly efficient routes without crashing.” In addition, cargo space is limited, and people might not appreciate the noise of thousands of drones buzzing around.
Winkenbach does, however, see a potential future application for combining autonomous vehicles with drones. “Autonomous vans could drive through the city, launching drones that would make short hop deliveries to consumers, and then return to the van. That minimizes the number of drones and the distance they fly, and because the van never stops, it speeds delivery and alleviates congestion.” On the other hand, the drones would still need to safely and gently deposit packages on doorsteps, and larger packages would be off limits.
The role of regulation
Government regulations can both hinder and help last-mile logistics, says Winkenbach.
“One bad example we see a lot in Latin America is governments imposing access restrictions for commercial vehicles based on vehicle type,” he explains. “The restrictions actually lead to an increase in the number of vehicles because companies split the load into smaller vehicles that are allowed in.”
Regulation can also help, however. For example, the government of Santiago, Chile, now dedicates parking spots for freight vehicles during certain hours.
“This alleviates congestion and improves the efficiency of last-mile delivery,” says Winkenbach. “The challenge is determining how many such spots you need, where they should be located, when they are available, and how they are regulated. This is where our GPS-driven analytics services can help: by identifying how freight demands vary in different parts of the city.”
Winkenbach believes that carbon taxes are a better way to regulate last-mile delivery than access restrictions. “A carbon tax might encourage companies to be more efficient in the way they route their vehicles, and will probably incentivize them to change their choice of vehicle type,” he says. “I don’t think it would change consumer behavior, however. We are used to ordering on Amazon and receiving the goods the next day, and it’s unlikely that will change due to rising delivery costs.”
As part of an initiative to support the development of nuclear fusion as a future practical energy source, the U.S. Department of Energy is renewing three-year funding for two Plasma Science and Fusion Center (PSFC) projects on the Wendelstein 7-X (W7-X) stellarator at the Max Planck Institute for Plasma Physics in Greifswald, Germany.
The largest stellarator in the world, W7-X was built with helically shaped superconducting magnets to investigate the stability and confinement of high-temperature plasma in an optimized toroidal configuration, ultimately leading to an economical steady-state fusion power plant. With plasma discharges planned to be up to 30 minutes long, researchers anticipate W7-X will demonstrate the possibility of continuous operation of a toroidal, magnetically confined fusion plasma.
PSFC principal research scientist Jim Terry is being funded to build and install on the stellarator a new diagnostic called “Gas-Puff Imaging,” which measures the turbulence at the boundary of the hot plasma by taking images in visible light at 2 million frames per second. The light is emitted as the plasma interacts with gas that is introduced locally at the measurement location. This fast frame rate allows researchers to see the dynamics of the turbulence. Observing plasma turbulence in fusion devices will help researchers understand how to better confine the plasma, while at the same time handling the plasma’s exhaust heat.
The new funding of $891,000 is a renewal of a three-year grant that ran from 2015 to 2018, during which time this diagnostic was designed. Terry’s team includes PSFC research scientist Seung Gyou Baek, as well as graduate student Sean Ballinger of the Department of Nuclear Science and Engineering and undergraduate physics major Kevin Tang, both of whom have had extended stays on-site at W7-X.
Over the past three years, professor of physics Miklos Porkolab and his team have designed and installed a “phase contrast imaging” (PCI) diagnostic on W7-X. PCI is a unique interferometric method using a continuous-wave coherent carbon dioxide laser and additional specialized optical components that allow it to instantaneously measure the turbulent density fluctuations in the core of the hot plasma.
Using data collected over the past year, the team is analyzing the measured turbulence levels and comparing them with predictions of state-of-the-art gyrokinetic codes, assessing how turbulence contributes to the loss of energy and particles in an optimized stellarator. The renewal of this three-year grant, for $900,000, will fund not only personnel to continue analysis of experimental data, but also necessary upgrades to allow simultaneous imaging of core and edge fluctuations, making the PCI diagnostic versatile in its ability to measure a wide range of waves and instabilities.
In addition to Porkolab, members of the team include former PSFC staff scientist Eric Edlund, now an assistant professor at SUNY Cortland, who played a key role in the design of this diagnostic; and PSFC postdoc Zhouji Huang, who is stationed onsite in Greifswald. PSFC research physicist Alessandro Marinoni and postdoc Evan Davis (both stationed at DIII-D, an MIT collaboration in San Diego) also contributed to the project during the summer of 2018.
MIT has been selected by the National Science Foundation (NSF) as an Innovation Corps (I-Corps) Node, and awarded $4.2 million in order to develop programs and resources that will accelerate the translation of fundamental research to practical applications.
NSF I-Corps Nodes are critical in supporting regional needs for innovation education, infrastructure, and research. The program aims to improve the quality of life and increase the economic competitiveness of the United States.
Grantees of the NSF’s I-Corps program learn to identify valuable product opportunities that can emerge from academic research, and gain skills in entrepreneurship through training in customer discovery. The program prepares scientists and engineers to extend their focus beyond university laboratories and accelerates the economic and societal benefits of basic-research projects that are ready to move toward commercialization.
“It has become more critical than ever for university research to feed innovation that benefits society, especially in tackling the world’s biggest problems. MIT is excited to take a leadership role in advancing this initiative to increase the translation of fundamental research into technologies put into practical use, and to accelerate the time from idea to commercialization,” says MIT Provost Martin Schmidt, who serves as the principal investigator on the award.
Since the NSF I-Corps program was created in 2011, more than 1,200 teams, from 248 universities in 47 states, have completed the national NSF curriculum. So far, this has resulted in the creation of more than 577 companies that have collectively raised more than $400 million in follow-on funding.
“NSF-funded I-Corps Nodes work cooperatively to create a sustainable national innovation ecosystem that further enhances the development of technologies, products, and processes that benefit society. We are thrilled to welcome another I-Corps Node into the ecosystem to foster ideas in the New England region, and to further support national innovation and entrepreneurial excellence,” says Barry W. Johnson, division director of industrial innovation and partnerships at the NSF.
The $4.2 million award to MIT, spanning five years, will allow the Institute to lead the New England Regional Innovation Node (NERIN). NERIN, headquartered at MIT, will contribute to the NSF National Innovation Network as the ninth regional I-Corps Node, and will be instrumental in assisting researchers across a region with a dense concentration of universities and world-class research.
NERIN’s activities will include a variety of short training programs offered across the region, as well as a pathway to qualify for the prestigious NSF National I-Corps Teams program, which provides an immersive seven-week innovation experience. NERIN will also collaborate with key organizations in the regional innovation and entrepreneurship ecosystem that can provide support and resources to help advance these scientific and technological breakthroughs to achieve societal impact. NERIN plans to add academic partners as it grows.
“The NSF I-Corps program is about the genesis of ideas and emergence of opportunities, the birth of new organizations, their evolution into new companies, and the transformation of scientists into leaders. It is also about providing the foundation for future innovation by others,” says Roman M. Lubynsky, who will serve as the executive director of NERIN.
NERIN intends to develop programs and resources that will result in increased partnerships between academia and industry. It will reach and influence researchers across New England to consider practical applications arising from fundamental research and to initiate the exploration of getting their inventions and discoveries to the marketplace.
Established in 2011, the NSF I-Corps program connects scientific research with the technological, entrepreneurial, and business communities to help create a stronger national ecosystem for innovation that couples scientific discovery with technology development and societal needs.
How can the world achieve the deep carbon emissions reductions that are necessary to slow or reverse the impacts of climate change? The authors of a new MIT study say that unless nuclear energy is meaningfully incorporated into the global mix of low-carbon energy technologies, the challenge of climate change will be much more difficult and costly to solve. For nuclear energy to take its place as a major low-carbon energy source, however, issues of cost and policy need to be addressed.
In "The Future of Nuclear Energy in a Carbon-Constrained World," released by the MIT Energy Initiative (MITEI) on Sept. 3, the authors analyze the reasons for the current global stall of nuclear energy capacity — which accounts for only 5 percent of global primary energy production — and discuss measures that could be taken to arrest and reverse that trend.
The study group, led by MIT researchers in collaboration with colleagues from Idaho National Laboratory and the University of Wisconsin at Madison, is presenting its findings and recommendations at events in London, Paris, and Brussels this week, followed by events on Sept. 25 in Washington, and on Oct. 9 in Tokyo. MIT graduate and undergraduate students and postdocs, as well as faculty from Harvard University and members of various think tanks, also contributed to the study as members of the research team.
“Our analysis demonstrates that realizing nuclear energy’s potential is essential to achieving a deeply decarbonized energy future in many regions of the world,” says study co-chair Jacopo Buongiorno, the TEPCO Professor and associate department head of the Department of Nuclear Science and Engineering at MIT. He adds, “Incorporating new policy and business models, as well as innovations in construction that may make deployment of cost-effective nuclear power plants more affordable, could enable nuclear energy to help meet the growing global demand for energy generation while decreasing emissions to address climate change.”
The study team notes that the electricity sector in particular is a prime candidate for deep decarbonization. Global electricity consumption is on track to grow 45 percent by 2040, and the team’s analysis shows that the exclusion of nuclear from low-carbon scenarios could cause the average cost of electricity to escalate dramatically.
“Understanding the opportunities and challenges facing the nuclear energy industry requires a comprehensive analysis of technical, commercial, and policy dimensions,” says Robert Armstrong, director of MITEI and the Chevron Professor of Chemical Engineering. “Over the past two years, this team has examined each issue, and the resulting report contains guidance policymakers and industry leaders may find valuable as they evaluate options for the future.”
The report discusses recommendations for nuclear plant construction, current and future reactor technologies, business models and policies, and reactor safety regulation and licensing. The researchers find that changes in reactor construction are needed to usher in an era of safer, more cost-effective reactors, including proven construction management practices that can keep nuclear projects on time and on budget.
“A shift towards serial manufacturing of standardized plants, including more aggressive use of fabrication in factories and shipyards, can be a viable cost-reduction strategy in countries where the productivity of the traditional construction sector is low,” says MIT visiting research scientist David Petti, study executive director and Laboratory Fellow at the Idaho National Laboratory. “Future projects should also incorporate reactor designs with inherent and passive safety features.”
These safety features could include core materials with high chemical and physical stability and engineered safety systems that require limited or no emergency AC power and minimal external intervention. Features like these can reduce the probability of severe accidents occurring and mitigate offsite consequences in the event of an incident. Such designs can also ease the licensing of new plants and accelerate their global deployment.
“The role of government will be critical if we are to take advantage of the economic opportunity and low-carbon potential that nuclear has to offer,” says John Parsons, study co-chair and senior lecturer at MIT’s Sloan School of Management. “If this future is to be realized, government officials must create new decarbonization policies that put all low-carbon energy technologies (i.e. renewables, nuclear, fossil fuels with carbon capture) on an equal footing, while also exploring options that spur private investment in nuclear advancement.”
The study lays out detailed options for government support of nuclear. For example, the authors recommend that policymakers avoid premature closures of existing plants, which undermine emissions-reduction efforts and raise the cost of achieving emission reduction targets. One way to avoid these closures is the implementation of zero-emissions credits — payments made to electricity producers whose electricity is generated without greenhouse gas emissions — which the researchers note are currently in place in New York, Illinois, and New Jersey.
Another suggestion from the study is that the government support development and demonstration of new nuclear technologies through the use of four “levers”: funding to share regulatory licensing costs; funding to share research and development costs; funding for the achievement of specific technical milestones; and funding for production credits to reward successful demonstration of new designs.
The study includes an examination of the current nuclear regulatory climate, both in the United States and internationally. While the authors note that significant social, political, and cultural differences may exist among many of the countries in the nuclear energy community, they say that the fundamental basis for assessing the safety of nuclear reactor programs is fairly uniform, and should be reflected in a series of basic aligned regulatory principles. They recommend regulatory requirements for advanced reactors be coordinated and aligned internationally to enable international deployment of commercial reactor designs, and to standardize and ensure a high level of safety worldwide.
The study concludes with an emphasis on the urgent need for both cost-cutting advancements and forward-thinking policymaking to make the future of nuclear energy a reality.
"The Future of Nuclear Energy in a Carbon-Constrained World" is the eighth in the "Future of…" series of studies that are intended to serve as guides to researchers, policymakers, and industry. Each report explores the role of technologies that might contribute at scale in meeting rapidly growing global energy demand in a carbon-constrained world. Nuclear power was the subject of the first of these interdisciplinary studies, with the 2003 "Future of Nuclear Power" report (an update was published in 2009). The series has also included a study on the future of the nuclear fuel cycle. Other reports in the series have focused on carbon dioxide sequestration, natural gas, the electric grid, and solar power. These comprehensive reports are written by multidisciplinary teams of researchers. The research is informed by a distinguished external advisory committee.
One of the most common complications of sickle-cell disease occurs when deformed red blood cells clump together, blocking tiny blood vessels and causing severe pain and swelling in the affected body parts.
A new study from MIT sheds light on how these events, known as vaso-occlusive pain crises, arise. The findings also represent a step toward being able to predict when such a crisis might occur.
“These painful crises are very much unpredictable. In a sense, we understand why they happen, but we don’t have a good way to predict them yet,” says Ming Dao, a principal research scientist in MIT’s Department of Materials Science and Engineering and one of the senior authors of the study.
The researchers found that these painful events are most likely to be produced by immature red blood cells, called reticulocytes, which are more prone to stick to blood vessel walls.
Subra Suresh, president of Singapore’s Nanyang Technological University, former dean of engineering at MIT, and the Vannevar Bush Professor Emeritus of Engineering, is also a senior author of the study, which appears in Proceedings of the National Academy of Sciences the week of Sept. 3. The paper’s lead authors are MIT postdoc Dimitrios Papageorgiou and former postdoc Sabia Abidi.
V1: Different types of sickle cells adherent to the microchannel surface under hypoxia (low oxygen) and shear flow, including i) sickle reticulocytes (young red blood cells): a, b; ii) sickle mature red blood cells: d, g, h, i, f; and iii) irreversibly sickled cells: m. (Credit: Courtesy of the researchers)
Simulating blood flow
Patients with sickle cell disease have a single mutation in the gene that encodes hemoglobin, the protein that allows red blood cells to carry oxygen. This produces misshapen red blood cells: Instead of the characteristic disc shape, cells become sickle-shaped, especially in low-oxygen conditions. Patients often suffer from anemia because the abnormal hemoglobin can’t carry as much oxygen, as well as from vaso-occlusive pain crises, which are usually treated with opioids or other drugs.
To probe how red blood cells interact with blood vessels to set off a vaso-occlusive crisis, the researchers built a specialized microfluidic system that mimics the post-capillary vessels, which carry deoxygenated blood away from the capillaries. These vessels, about 10-20 microns in diameter, are where vaso-occlusions are most likely to occur.
V2: Left: Simultaneous adhesion & polymerization under low oxygen of a sickle reticulocyte (young red blood cell), showing multiple sickle hemoglobin fibers growing out of cell bulk; Right: The same adherent sickle reticulocyte after hypoxia-to-reoxygenation cycle, showing polymerized hemoglobin fiber dissolution/retraction and residual adhesion sites. (Credit: Courtesy of the researchers)
The microfluidic system is designed to allow the researchers to control the oxygen level. They found that when oxygen is very low, or under hypoxia, similar to what is seen in post-capillary vessels, sickle red cells are two to four times more likely to get stuck to the blood vessel walls than they are at normal oxygen levels.
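The two-to-four-fold figure is a ratio of adhesion fractions measured at the two oxygen levels. As a minimal illustration of that comparison, the sketch below computes the relative adhesion propensity from cell counts; the counts and function names here are hypothetical placeholders, not data or code from the study.

```python
# Toy comparison of adhesion under hypoxia vs. normal oxygen.
# All counts below are illustrative, not measurements from the MIT experiments.

def adhesion_fraction(adhered: int, total: int) -> float:
    """Fraction of observed cells that stuck to the channel wall."""
    if total <= 0:
        raise ValueError("total must be positive")
    return adhered / total

# Hypothetical counts from one microfluidic run at each oxygen level.
hypoxia_fraction = adhesion_fraction(adhered=120, total=1000)   # low oxygen
normoxia_fraction = adhesion_fraction(adhered=40, total=1000)   # normal oxygen

# How many times more likely adhesion is under hypoxia in this toy run.
relative_propensity = hypoxia_fraction / normoxia_fraction
print(f"{relative_propensity:.1f}x more likely to adhere under hypoxia")
```

With these placeholder counts the ratio comes out to 3.0, inside the two-to-four-fold range the researchers report.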
When oxygen is low, hemoglobin inside the sickle cells forms stiff fibers that grow and push the cell membrane outward. These fibers also help the cells stick more firmly to the lining of the blood vessel.
“There has been little understanding of why, under hypoxia, there is much more adhesion,” Suresh says. “The experiments of this study provide some key insights into the processes and mechanisms responsible for increased adhesion.”
The researchers also found that in patients with sickle cell disease, immature red blood cells called reticulocytes are most likely to adhere to blood vessels. These young sickle red cells, just released from bone marrow, carry more cell membrane surface area than mature red blood cells, allowing them to create more adhesion sites.
“We observed the growth of sickle hemoglobin fibers stretching reticulocytes within minutes,” Papageorgiou says. “It looks like they’re trying to grab more of the surface and adhere more strongly.”
V3: Left: Simultaneous adhesion & polymerization of an irreversibly sickled cell under low oxygen, where the cell adheres to the surface and flips around the adhesion site aligning with the flow direction; Right: Computer simulation of the adhesion of an irreversibly sickled cell under shear flow, where the green dots represent an array of adhesion sites on the surface. (Credit: Courtesy of the researchers)
The researchers now hope to devise a more complete model of vaso-occlusion that combines their new findings on adhesion with previous work in which they measured how long it takes blood cells from sickle cell patients to stiffen, making them more likely to block blood flow in tiny blood vessels. Not all patients with sickle cell disease experience vaso-occlusion, and the frequency of attacks can vary widely between patients. The MIT researchers hope that their findings may help them to devise a way to predict these crises for individual patients.
“Blood cell adhesion is indeed a very complex process, and we had to develop new models based on such microfluidic experiments. These adhesion experiments and corresponding simulations for sickle red cells under hypoxia are quantitative and unique,” says George Karniadakis, a professor of applied mathematics at Brown University and a senior author of the study.
“The work done on sickle cell disease by Dao and Suresh over the last decade is remarkable,” says Antoine Jerusalem, an associate professor of engineering science at the University of Oxford who was not involved in the research. “This paper in particular couples numerical and experimental state-of-the-art techniques to enhance the understanding of polymerization and adhesion of these cells under hypoxia, a drastic step towards the elucidation of how vaso-occlusion can arise in sickle cell disease.”
The research was funded by the National Institutes of Health.