A new MIT-developed technique enables robots to quickly identify objects hidden in a three-dimensional cloud of data, reminiscent of how some people can make sense of a densely patterned “Magic Eye” image if they observe it in just the right way.
Robots typically “see” their environment through sensors that collect and translate a visual scene into a matrix of dots. Think of the world of, well, “The Matrix,” except that the 1s and 0s seen by the fictional character Neo are replaced by dots — lots of dots — whose patterns and densities outline the objects in a particular scene.
Conventional techniques that try to pick out objects from such clouds of dots, or point clouds, can do so with either speed or accuracy, but not both.
With their new technique, the researchers say a robot can accurately pick out an object, such as a small animal, that is otherwise obscured within a dense cloud of dots, within seconds of receiving the visual data. The team says the technique can be used to improve a host of situations in which machine perception must be both speedy and accurate, including driverless cars and robotic assistants in the factory and the home.
“The surprising thing about this work is, if I ask you to find a bunny in this cloud of thousands of points, there’s no way you could do that,” says Luca Carlone, assistant professor of aeronautics and astronautics and a member of MIT’s Laboratory for Information and Decision Systems (LIDS). “But our algorithm is able to see the object through all this clutter. So we’re getting to a level of superhuman performance in localizing objects.”
Carlone and graduate student Heng Yang will present details of the technique later this month at the Robotics: Science and Systems conference in Germany.
“Failing without knowing”
Robots currently attempt to identify objects in a point cloud by comparing a template object — a 3-D dot representation of an object, such as a rabbit — with a point cloud representation of the real world that may contain that object. The template image includes “features,” or collections of dots that indicate characteristic curvatures or angles of that object, such as the bunny’s ear or tail. Existing algorithms first extract similar features from the real-life point cloud, then attempt to match those features with the template’s features, and ultimately rotate and align the features to the template to determine if the point cloud contains the object in question.
But the point cloud data that streams into a robot’s sensor invariably includes errors, in the form of dots that are in the wrong position or incorrectly spaced, which can significantly confuse the process of feature extraction and matching. As a consequence, robots can make a huge number of wrong associations between point clouds, or what researchers call “outliers,” and ultimately misidentify objects or miss them entirely.
Carlone says state-of-the-art algorithms are able to sift the bad associations from the good once features have been matched, but they do so in “exponential time,” meaning that even a cluster of processing-heavy computers, sifting through dense point cloud data with existing algorithms, would not be able to solve the problem in a reasonable time. Such techniques, while accurate, are impractical for analyzing larger, real-life datasets containing dense point clouds.
Other algorithms that can quickly identify features and associations do so hastily, creating a huge number of outliers or misdetections in the process, without being aware of these errors.
“That’s terrible if this is running on a self-driving car, or any safety-critical application,” Carlone says. “Failing without knowing you’re failing is the worst thing an algorithm can do.”
A relaxed view
Yang and Carlone instead devised a technique that prunes away outliers in “polynomial time,” meaning that it can do so quickly, even for increasingly dense clouds of dots. The technique can thus quickly and accurately identify objects hidden in cluttered scenes.
The MIT-developed technique quickly and smoothly matches objects to those hidden in dense point clouds (left), versus existing techniques (right) that produce incorrect, disjointed matches. Gif: Courtesy of the researchers
The researchers first used conventional techniques to extract features of a template object from a point cloud. They then developed a three-step process to match the size, position, and orientation of the object in a point cloud with the template object, while simultaneously identifying good from bad feature associations.
The team developed an “adaptive voting scheme” algorithm to prune outliers and match an object’s size and position. For size, the algorithm makes associations between template and point cloud features, then compares the relative distance between features in a template and corresponding features in the point cloud. If, say, the distance between two features in the point cloud is five times that of the corresponding points in the template, the algorithm assigns a “vote” to the hypothesis that the object is five times larger than the template object.
The algorithm does this for every feature association. Then, the algorithm selects those associations that fall under the size hypothesis with the most votes, and identifies those as the correct associations, while pruning away the others. In this way, the technique simultaneously reveals the correct associations and the relative size of the object represented by those associations. The same process is used to determine the object’s position.
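The voting idea can be sketched in a few lines. The toy implementation below is not the paper's actual algorithm; it is a simplified illustration, restricted to scale estimation, that votes over quantized pairwise-distance ratios and keeps whichever associations support the winning hypothesis:

```python
import math
from collections import Counter
from itertools import combinations

def vote_for_scale(template_pts, cloud_pts, bin_width=0.1):
    """Estimate object scale with an adaptive-voting-style scheme.

    template_pts and cloud_pts are lists of (x, y, z) tuples; index i in
    both lists is one (possibly wrong) feature association. Returns the
    winning scale and the indices of the associations that supported it.
    """
    votes = Counter()     # quantized scale hypothesis -> vote count
    supporters = {}       # quantized scale hypothesis -> supporting indices
    for i, j in combinations(range(len(template_pts)), 2):
        d_template = math.dist(template_pts[i], template_pts[j])
        d_cloud = math.dist(cloud_pts[i], cloud_pts[j])
        if d_template == 0:
            continue
        # The ratio of the two distances is one hypothesis about scale.
        hypothesis = round(d_cloud / d_template / bin_width)
        votes[hypothesis] += 1
        supporters.setdefault(hypothesis, set()).update((i, j))
    best, _ = votes.most_common(1)[0]
    return best * bin_width, sorted(supporters[best])
```

The reason this separates good associations from bad: every pair of correct associations reinforces the same scale bin, while wrong associations scatter their votes across many bins, so the winning hypothesis's supporters are likely inliers.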
The researchers developed a separate algorithm for rotation, which finds the orientation of the template object in three-dimensional space.
Doing this is an incredibly tricky computational task. Imagine holding a mug and trying to tilt it just so, to match a blurry image of something that might be that same mug. There are any number of angles you could tilt that mug, and each of those angles has a certain likelihood of matching the blurry image.
Existing techniques handle this problem by considering each possible tilt or rotation of the object as a “cost” — the lower the cost, the more likely that that rotation creates an accurate match between features. Each rotation and associated cost is represented in a topographic map of sorts, made up of multiple hills and valleys, with lower elevations associated with lower cost.
But Carlone says this can easily confuse an algorithm, especially if there are multiple valleys and no discernible lowest point representing the true, exact match between a particular rotation of an object and the object in a point cloud. Instead, the team developed a “convex relaxation” algorithm that simplifies the topographic map, with one single valley representing the optimal rotation. In this way, the algorithm is able to quickly identify the rotation that defines the orientation of the object in the point cloud.
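The team's convex-relaxation solver is too involved for a short snippet, but the classical, outlier-free version of the rotation search (the orthogonal Procrustes, or Kabsch, solution) shows what the lowest point of that cost landscape corresponds to. This is a standard textbook method, not the paper's algorithm, and it assumes the feature associations are already correct:

```python
import numpy as np

def best_rotation(template, cloud):
    """Kabsch / orthogonal Procrustes: the rotation R minimizing
    ||R @ template - cloud||^2 for already-matched, centered point sets.
    template, cloud: (3, N) arrays of corresponding points."""
    # Cross-covariance of the two point sets
    H = cloud @ template.T
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so that det(R) = +1
    d = np.sign(np.linalg.det(U @ Vt))
    return U @ np.diag([1.0, 1.0, d]) @ Vt
```

With outliers in the associations, this least-squares cost is exactly the kind of multi-valley landscape described above, which is what motivates replacing it with a convex surrogate.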
With their approach, the team was able to quickly and accurately identify three different objects — a bunny, a dragon, and a Buddha — hidden in point clouds of increasing density. They were also able to identify objects in real-life scenes, including a living room, in which the algorithm was quickly able to spot a cereal box and a baseball hat.
Carlone says that because the approach is able to work in “polynomial time,” it can be easily scaled up to analyze even denser point clouds, resembling the complexity of sensor data for driverless cars, for example.
“Navigation, collaborative manufacturing, domestic robots, search and rescue, and self-driving cars is where we hope to make an impact,” Carlone says.
This research was supported in part by the Army Research Laboratory, the Office of Naval Research, and the Google Daydream Research Program.
EdX, Arizona State University, and MIT have announced the launch of an online master’s degree program in supply chain management. This unique credit pathway between MIT and ASU takes a MicroMasters program from one university, MIT, and stacks it up to a full master’s degree on edX from ASU. Learners who complete and pass the Supply Chain Management MicroMasters program and then apply and gain admission to ASU are eligible to earn a top-ranked graduate degree from ASU’s W. P. Carey School of Business and ASU Online. MIT and ASU are both currently ranked in the top three for graduate supply chain and logistics by U.S. News and World Report.
This new master’s degree is the latest program to launch following edX’s October 2018 announcement of 10 disruptively priced and top-ranked online master’s degree programs available on edX.org. Master’s degrees on edX are unique because they are stacked, degree-granting programs with a MicroMasters program component. A MicroMasters program is a series of graduate-level courses that provides learners with valuable standalone skills that translate into career-focused advancement, as well as the option to use the completed coursework as a stepping stone toward credit in a full master’s degree program.
“We are excited to strengthen our relationship with ASU to offer this innovative, top-ranked online master’s degree program in supply chain management,” says Anant Agarwal, edX CEO and MIT professor. “This announcement comes at a time when the workplace is changing more rapidly than ever before, and employers are in need of highly skilled talent, especially in the fields most impacted by advances in technology. This new offering truly transforms traditional graduate education by bringing together two top-ranked schools in supply chain management to create the world’s first stackable, hybrid graduate degree program. This approach to a stackable, flexible, top-quality online master’s degree is the latest milestone in addressing today’s global skills gap.”
ASU’s online master’s degree program will help prepare a highly technical and competent global workforce for advancement in supply chain management careers across a broad diversity of industries and functions. Students enrolled in the program will also gain an in-depth understanding of the role the supply chain manager can play in an enterprise supply chain and in determining overall strategy.
“We’re very excited to collaborate with MIT and edX to increase accessibility to a top-ranked degree in supply chain management,” says Amy Hillman, dean of the W. P. Carey School of Business at ASU. “We believe there will be many students who are eager to dive deeper after their MicroMasters program to earn a master's degree from ASU, and that more learners will be drawn to the MIT Supply Chain Management MicroMasters program as this new pathway to a graduate degree within the edX platform becomes available.”
With this new pathway, the MIT Supply Chain Management MicroMasters program now offers learners pathways to completing a master’s degree at 21 institutions. This new program with ASU for the supply chain management online master’s degree offers a seamless learner experience through an easy transition of credit and a timely completion of degree requirements without leaving the edX platform.
“Learners who complete the MITx MicroMasters program credential from the MIT Center for Transportation and Logistics will now have the opportunity to transition seamlessly online to a full master’s degree from ASU,” says Krishna Rajagopal, dean for digital learning at MIT Open Learning. “We are delighted to add this program to MIT’s growing number of pathways that provide learners with increased access to higher education and career advancement opportunities in a flexible, affordable manner.”
The online Master of Science in supply chain management from ASU will launch in January 2020. Students currently enrolled in, or who have already completed, the MITx Supply Chain Management MicroMasters program can apply now for the degree program, with an application deadline of Dec. 16.
Each year, the Armed Forces Communications and Electronics Association (AFCEA) presents the Young AFCEA 40 Under 40 award to 40 individuals under age 40 for their significant contributions to science, technology, engineering, and mathematics (STEM). This year, Lincoln Laboratory is home to four winners: Anu Myne, Mark Veillette, Meredith Drennan, and Alexander Stolyarov. The recipients are chosen by AFCEA for the innovation, leadership, and support they provide to their organizations, particularly through the application of information technology to make advancements in STEM.
Myne currently serves as an associate technology officer within the Lincoln Laboratory Technology Office, where she supports the strategic development of the laboratory’s internal investments and innovation initiatives, and furthers collaboration with MIT campus. In this role, she’s focusing on the laboratory’s overall strategies for advancing research and development in artificial intelligence (AI) for national security. Among various other projects, Myne organized the AI Technical Interchange meeting for laboratory-wide participation last year, and is now planning the inaugural Recent Advances in AI for National Security Workshop that will be held in 2019.
"It has been a distinct privilege to work with Anu, one of our rising stars at the laboratory," says Robert Bond, chief technology officer. "Anu has an impressively diverse professional resume spanning hardware design, signal processing, and machine learning. This background, coupled with her natural inquisitiveness and ability to zero in on the important technology issues, has made her ideal for her current role as associate technology officer."
Before joining the Technology Office, Myne made significant contributions to a diverse set of problems addressing challenges in electronic warfare and radar systems for next-generation defense. Her efforts ranged from system analysis and development of simulation tools to hardware design, implementation, and testing.
Myne believes that every opportunity she's had to develop, test, and demonstrate system concepts was a great experience. She’s been most recognized for her efforts in developing a novel electromagnetic environment simulation tool and a Bayesian network approach for intelligent test design — successes she attributes to an appreciation for both real hardware and software design challenges and her willingness to try out new ideas or approaches.
Upon winning the award, Myne said, "The laboratory is filled with talent and I'm honored to be recognized this way."
Veillette began working in the Air Traffic Control Systems Group in 2010. Since then, his main focus has been the application of AI and machine learning in weather sensing and forecasting. "As you can imagine, weather involves a lot of data and uncertainty, so I think it's a very rich and exciting space to be applying these types of algorithms," Veillette says.
Currently, Veillette is working on a project to create a global picture of synthetic weather radar. The data used for the project are similar to weather radar imagery seen on the news, except that these data will be available globally, even in areas without weather radar.
"Mark is not only an expert in his field, he is a consummate teacher," colleague Christopher Mattioli says. "Even at his most busy and stressful times, he's always willing to offer technical guidance and listen to new ideas. His type of character fosters a healthy working environment, which ultimately strengthens and expedites innovation."
Veillette says he is particularly proud of organizing and teaching a technical education course titled Decision Making Under Uncertainty. He has also been involved in various support roles across groups and divisions, and has served on the laboratory’s Advanced Concepts Committee for the past two and a half years.
"There are so many talented people here at the laboratory and at other institutions supporting the Department of Defense, so to be recognized by the AFCEA is very nice," Veillette says. "I’m thankful to the Director’s Office for nominating me."
Drennan has been working at the laboratory since 2010, when she started as an associate staff member in the Integrated Systems and Concepts Group. Since that time, she has worked on an assortment of laboratory projects and is now an assistant leader of her group.
From 2010-14, as part of the Multi-Aperture Sparse Imager Video System and Wide-Area Infrared System for Persistent Surveillance teams, she developed software for wide-area motion imagery processing.
Since 2014, Drennan has been the lead flight software developer for the SensorSat program — a project to build a next-generation surveillance satellite. She was made program manager of this project in 2018.
What Drennan appreciates most about her work at the laboratory has been the opportunity to contribute technically while working with great people on difficult problems. She pointed to her work on SensorSat as the reason for her receiving this award: "While I admit I worked very hard on that program, I was one of many. Any successes the satellite has had are a result of the hard work and dedication of dozens of individuals, not just one."
Alexander (Sasha) Stolyarov
Stolyarov, a staff member in the Chemical, Microsystem, and Nanoscale Technologies Group, currently leads the Defense Fabric Discovery Center (DFDC) — an end-to-end advanced fabrics prototyping facility focused on developing multifunctional fibers and fabrics for national security. The DFDC, opened in October 2017, is one of a planned network of fabric discovery centers and was built in a joint venture among the laboratory, the Commonwealth of Massachusetts, the Advanced Functional Fabrics of America, and the Combat Capabilities Development Command Soldier Center (formerly called the U.S. Army Natick Soldier Research, Development and Engineering Center).
Shortly after joining the laboratory in 2014, Stolyarov began working on a program involving multimaterial fiber devices. The program seeks to incorporate these devices into fabrics for a variety of uses, including fabric-based chemical sensors and optical communication systems.
Stolyarov says that his greatest accomplishment at the laboratory has been "starting and growing the advanced fibers technical area, which has grown from a group project to an enterprise involving collaborations with many of the laboratory’s system divisions."
Livia Racz, associate leader of the Chemical, Microsystem, and Nanoscale Technologies Group, says, "Sasha had a passion for this subject since he first started at the laboratory. When we first saw his proposal, we realized that it promised to become a perfect example of what we were looking for — a rapid, scalable way to break the paradigm of electronics on flat circuit boards."
Urban residents hear a lot about public transit fares, but to what extent do transportation costs really affect riders? A group of urban studies researchers at MIT has conducted a new experiment — a randomized, controlled trial — on Boston’s MBTA system showing that if low-income people are offered a 50 percent fare discount, their ridership increases by over 30 percent. A new white paper with the results was issued this month. The paper’s lead author is MIT PhD student Jeffrey Rosenblum; his co-authors are Department of Urban Studies and Planning professors Jinhua Zhao, Mariana Arcaya, Justin Steil, and Chris Zegras. MIT News spoke to Rosenblum about the results.
Q: What was the impetus for the study, and what did you find?
A: The idea was to look at travel behavior of riders. One of the things we don’t ordinarily have access to is how low-income people use the system. We can track seniors because seniors have a special card. But for low-income people, a lot of the information had previously been anecdotal.
There were hardly any studies to help me understand how low-income riders would respond to fare decreases. When I have to look back to a 1964 study from New York City as one of the prime examples that looked at low-income riders, you know there’s some missing data.
There have been two hypotheses in this area. One is that low-income people have no choice but to use public transit, so they have to take it out of their food budget or child budget. The other is that they do change behavior when fares decrease. The second is what we ended up finding: Low-income people did take significantly more trips, about a third more, based on the analysis. This suggests that for the low-income people in the study group, who were selected out of food stamps recipients, affordability was a big factor. So that’s really the take-home message.
Q: There is another layer to the results, though, which is that the increased use of public transit was strongly linked to certain purposes, such as using social services.
A: This gets into an important concept in transportation. No one gets on a bus to get on a bus. They want to go someplace. In the past, transit systems really just cared about the numbers of people using the system, and they didn’t really care about the purposes of those trips.
In most categories of trip purpose, we didn’t see much difference, but in the social services category, we did. Usually when people think of public transportation, they think of commuting to work. And when people think about low-income riders, they don’t think about other really important things in life. Low-income people also spend more time on public transit doing errands, visiting family, as well as going to social services and health care providers.
Q: So this is not just a matter of household finance, since it seems like lower fares for low-income people have a kind of multiplier effect, allowing them to access other goods, right?
A: Yes. And any decisions related to implementation and the impact on the system would be as important as trying to find the money to fund such a program. Whenever studies like this get done, the implication is that this is an important issue to address.
But then one question is: Who is going to pay for it, and how? And the second is: Who would administer it? One option would be just to say the MBTA has to do it all. A more creative option would be to incorporate it into an existing government program, like Mass Health, or SNAP, the food stamps program, where those agencies already have a whole customer-service system set up, a database of low-income people, and are already issuing them cards. Imagine if a low-income person had one card, with a debit-card for food stamps, the Mass Health information, and a Charlie Card [an MBTA metro card] chip embedded in it. That’s where government efficiency counts. The technology is there but the lack of interagency coordination is a significant barrier.
MIT researchers have devised a novel method to glean more information from images used to train machine-learning models, including those that can analyze medical scans to help diagnose and treat brain conditions.
An active new area in medicine involves training deep-learning models to detect structural patterns in brain scans associated with neurological diseases and disorders, such as Alzheimer’s disease and multiple sclerosis. But collecting the training data is laborious: All anatomical structures in each scan must be separately outlined or hand-labeled by neurological experts. And, in some cases, such as for rare brain conditions in children, only a few scans may be available in the first place.
In a paper presented at the recent Conference on Computer Vision and Pattern Recognition, the MIT researchers describe a system that uses a single labeled scan, along with unlabeled scans, to automatically synthesize a massive dataset of distinct training examples. The dataset can be used to better train machine-learning models to find anatomical structures in new scans — the more training data, the better those predictions.
The crux of the work is automatically generating data for the “image segmentation” process, which partitions an image into regions of pixels that are more meaningful and easier to analyze. To do so, the system uses a convolutional neural network (CNN), a machine-learning model that’s become a powerhouse for image-processing tasks. The network analyzes a lot of unlabeled scans from different patients and different equipment to “learn” anatomical, brightness, and contrast variations. Then, it applies a random combination of those learned variations to a single labeled scan to synthesize new scans that are both realistic and accurately labeled. These newly synthesized scans are then fed into a different CNN that learns how to segment new images.
“We’re hoping this will make image segmentation more accessible in realistic situations where you don’t have a lot of training data,” says first author Amy Zhao, a graduate student in the Department of Electrical Engineering and Computer Science (EECS) and Computer Science and Artificial Intelligence Laboratory (CSAIL). “In our approach, you can learn to mimic the variations in unlabeled scans to intelligently synthesize a large dataset to train your network.”
There’s interest in using the system, for instance, to help train predictive-analytics models at Massachusetts General Hospital, Zhao says, where only one or two labeled scans may exist of particularly uncommon brain conditions among child patients.
Joining Zhao on the paper are: Guha Balakrishnan, a postdoc in EECS and CSAIL; EECS professors Fredo Durand and John Guttag, and senior author Adrian Dalca, who is also a faculty member in radiology at Harvard Medical School.
The “Magic” behind the system
Although now applied to medical imaging, the system actually started as a means to synthesize training data for a smartphone app that could identify and retrieve information about cards from the popular collectable card game, “Magic: The Gathering.” Released in the early 1990s, “Magic” has more than 20,000 unique cards — with more released every few months — that players can use to build custom playing decks.
Zhao, an avid “Magic” player, wanted to develop a CNN-powered app that took a photo of any card with a smartphone camera and automatically pulled information such as price and rating from online card databases. “When I was picking out cards from a game store, I got tired of entering all their names into my phone and looking up ratings and combos,” Zhao says. “Wouldn’t it be awesome if I could scan them with my phone and pull up that information?”
But she realized that’s a very tough computer-vision training task. “You’d need many photos of all 20,000 cards, under all different lighting conditions and angles. No one is going to collect that dataset,” Zhao says.
Instead, Zhao trained a CNN on a smaller dataset of around 200 cards, with 10 distinct photos of each card, to learn how to warp a card into various positions. It computed different lighting, angles, and reflections — for when cards are placed in plastic sleeves — to synthesize realistic warped versions of any card in the dataset. It was an exciting passion project, Zhao says: “But we realized this approach was really well-suited for medical images, because this type of warping fits really well with MRIs.”
Magnetic resonance images (MRIs) are composed of three-dimensional pixels, called voxels. When segmenting MRIs, experts separate and label voxel regions based on the anatomical structure containing them. The diversity of scans, caused by variations in individual brains and equipment used, poses a challenge to using machine learning to automate this process.
Some existing methods can synthesize training examples from labeled scans using “data augmentation,” which warps labeled voxels into different positions. But these methods require experts to hand-write various augmentation guidelines, and some synthesized scans look nothing like a realistic human brain, which may be detrimental to the learning process.
Instead, the researchers’ system automatically learns how to synthesize realistic scans. The researchers trained their system on 100 unlabeled scans from real patients to compute spatial transformations — anatomical correspondences from scan to scan. This generated a collection of “flow fields,” which model how voxels move from one scan to another. Simultaneously, the system computed intensity transformations, which capture appearance variations caused by image contrast, noise, and other factors.
In generating a new scan, the system applies a random flow field to the original labeled scan, which shifts around voxels until it structurally matches a real, unlabeled scan. Then, it overlays a random intensity transformation. Finally, the system maps the labels to the new structures, by following how the voxels moved in the flow field. In the end, the synthesized scans closely resemble the real, unlabeled scans — but with accurate labels.
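A minimal 2-D sketch of this synthesis step might look as follows. Note the simplifications: the real system learns smooth 3-D flow fields and intensity transforms with CNNs, whereas here the flow field is simply given as integer displacements and the intensity transform is a fixed gain and bias:

```python
import numpy as np

def synthesize(scan, labels, flow, gain=1.1, bias=0.05):
    """Warp a labeled scan with a flow field, carry its labels through
    the same warp, then apply a simple intensity transformation.
    scan, labels: (H, W) arrays; flow: (H, W, 2) integer displacements."""
    H, W = scan.shape
    ys, xs = np.indices((H, W))
    # Each output pixel pulls from (y - dy, x - dx) in the source scan,
    # clamped to the image bounds (nearest-neighbor warping).
    src_y = np.clip(ys - flow[..., 0], 0, H - 1)
    src_x = np.clip(xs - flow[..., 1], 0, W - 1)
    warped = scan[src_y, src_x]
    warped_labels = labels[src_y, src_x]   # labels follow the voxels
    return gain * warped + bias, warped_labels
```

Because the labels are indexed through the very same warp as the image, the synthesized scan stays accurately labeled no matter how the structures are shifted.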
To test their automated segmentation accuracy, the researchers used Dice scores, which measure how well one 3-D shape fits over another, on a scale of 0 to 1. They compared their system to traditional segmentation methods — manual and automated — on 30 different brain structures across 100 held-out test scans. Large structures were comparably accurate among all the methods. But the researchers’ system outperformed all other approaches on smaller structures, such as the hippocampus, which occupies only about 0.6 percent of a brain, by volume.
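The Dice score itself is straightforward to compute for binary masks. This is the standard formulation, not the authors' evaluation code:

```python
import numpy as np

def dice_score(seg_a, seg_b):
    """Dice coefficient between two binary segmentation masks:
    2 * |A intersect B| / (|A| + |B|), from 0 (disjoint) to 1 (identical)."""
    a = seg_a.astype(bool)
    b = seg_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return 2.0 * np.logical_and(a, b).sum() / denom
```

The 2-in-the-numerator normalization is what makes a perfect overlap score exactly 1 regardless of structure size, which is why the metric is comparable between a large cortex and a tiny hippocampus.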
“That shows that our method improves over other methods, especially as you get into the smaller structures, which can be very important in understanding disease,” Zhao says. “And we did that while only needing a single hand-labeled scan.”
In a nod to the work’s “Magic” roots, the code is publicly available on Github under the name of one of the game’s cards, “Brainstorm.”
Hearing aids, dental crowns, and limb prosthetics are some of the medical devices that can now be digitally designed and customized for individual patients, thanks to 3-D printing. However, these devices are typically designed to replace or support bones and other rigid parts of the body, and are often printed from solid, relatively inflexible material.
Now MIT engineers have designed pliable, 3-D-printed mesh materials whose flexibility and toughness they can tune to emulate and support softer tissues such as muscles and tendons. They can tailor the intricate structures in each mesh, and they envision the tough yet stretchy fabric-like material being used as personalized, wearable supports, including ankle or knee braces, and even implantable devices, such as hernia meshes, that better match to a person’s body.
As a demonstration, the team printed a flexible mesh for use in an ankle brace. They tailored the mesh’s structure to prevent the ankle from turning inward — a common cause of injury — while allowing the joint to move freely in other directions. The researchers also fabricated a knee brace design that could conform to the knee even as it bends. And, they produced a glove with a 3-D-printed mesh sewn into its top surface, which conforms to a wearer’s knuckles, providing resistance against involuntary clenching that can occur following a stroke.
“This work is new in that it focuses on the mechanical properties and geometries required to support soft tissues,” says Sebastian Pattinson, who conducted the research as a postdoc at MIT.
Pattinson, now on the faculty at Cambridge University, is the lead author of a study published today in the journal Advanced Functional Materials. His MIT co-authors include Meghan Huber, Sanha Kim, Jongwoo Lee, Sarah Grunsfeld, Ricardo Roberts, Gregory Dreifus, Christoph Meier, and Lei Liu, as well as Sun Jae Professor in Mechanical Engineering Neville Hogan and associate professor of mechanical engineering A. John Hart.
Riding collagen’s wave
The team’s flexible meshes were inspired by the pliable, conformable nature of fabrics.
“3-D-printed clothing and devices tend to be very bulky,” Pattinson says. “We were trying to think of how we can make 3-D-printed constructs more flexible and comfortable, like textiles and fabrics.”
Pattinson found further inspiration in collagen, the structural protein that makes up much of the body’s soft tissues and is found in ligaments, tendons, and muscles. Under a microscope, collagen can resemble curvy, intertwined strands, similar to loosely braided elastic ribbons. When stretched, collagen initially extends easily, as the kinks in its structure straighten out. But once taut, the strands are harder to extend.
Inspired by collagen’s molecular structure, Pattinson designed wavy patterns, which he 3-D-printed using thermoplastic polyurethane as the printing material. He then fabricated a mesh configuration to resemble stretchy yet tough, pliable fabric. The taller he designed the waves, the more the mesh could be stretched at low strain before becoming more stiff — a design principle that can be used to tailor a mesh’s flexibility and help it mimic soft tissue.
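The wave-height principle can be sketched as a simple bilinear spring: soft while the waves straighten, stiff once the fiber is taut. This is an illustrative model only — the function, constants, and the slack-versus-height relation below are assumptions, not the researchers’ actual design equations.

```python
# Illustrative bilinear model (not the authors' code): a wavy printed fiber
# behaves like a soft spring while its waves straighten, then a stiff one
# once taut. Taller waves extend the soft regime.
def fiber_force(strain, wave_height, k_soft=1.0, k_stiff=50.0):
    """Force at a given strain (arbitrary units).

    wave_height sets the 'slack' strain available from straightening;
    we assume slack grows with the square of wave height, a shallow-wave
    approximation for a sinusoidal fiber (an assumption, not measured data).
    """
    slack = 0.5 * wave_height ** 2          # extra path length in the waves
    if strain <= slack:
        return k_soft * strain                               # still straightening
    return k_soft * slack + k_stiff * (strain - slack)       # fiber is taut

# Taller waves -> lower force (more compliance) at the same strain
force_short_waves = fiber_force(0.10, wave_height=0.2)
force_tall_waves = fiber_force(0.10, wave_height=0.6)
```

Under this toy model, doubling the wave height quadruples the soft-regime strain, which is one way a printed geometry parameter can directly tune effective stiffness.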
The researchers printed a long strip of the mesh and tested its support on the ankles of several healthy volunteers. For each volunteer, the team adhered a strip along the length of the outside of the ankle, in an orientation that they predicted would support the ankle if it turned inward. They then put each volunteer’s ankle into an ankle stiffness measurement robot — named, logically, Anklebot — that was developed in Hogan’s lab. The Anklebot moved their ankle in 12 different directions, and then measured the force the ankle exerted with each movement, with the mesh and without it, to understand how the mesh affected the ankle’s stiffness in different directions.
In general, they found the mesh increased the ankle’s stiffness during inversion, while leaving it relatively unaffected as it moved in other directions.
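A hypothetical sketch of that comparison (the Anklebot’s actual analysis pipeline is not described here): treat stiffness in each of the 12 test directions as measured torque divided by imposed displacement, then take the braced-to-unbraced ratio per direction. All numbers below are made up for illustration.

```python
# Hypothetical directional-stiffness comparison; all values are invented.
def stiffness_by_direction(torques, displacement_rad):
    """Map direction (degrees) -> stiffness (torque / displacement)."""
    return {d: t / displacement_rad for d, t in torques.items()}

directions = [i * 30 for i in range(12)]   # 12 directions, 30 degrees apart
bare = {d: 1.0 for d in directions}        # torque without the mesh (made up)
braced = dict(bare)
braced[270] = 1.8                          # mesh resists inversion direction only

k_bare = stiffness_by_direction(bare, displacement_rad=0.1)
k_braced = stiffness_by_direction(braced, displacement_rad=0.1)
ratio = {d: k_braced[d] / k_bare[d] for d in directions}
# ratio stays near 1.0 in every direction except inversion
```

The point of the ratio is that a well-designed brace should show a value well above 1 only in the direction being protected, matching the selective stiffening the team reported.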
“The beauty of this technique lies in its simplicity and versatility. Mesh can be made on a basic desktop 3-D printer, and the mechanics can be tailored to precisely match those of soft tissue,” Hart says.
Stiffer, cooler drapes
The team’s ankle brace was made using relatively stretchy material. But for other applications, such as implantable hernia meshes, it might be useful to include a stiffer material that is just as conformable. To this end, the team developed a way to incorporate stronger and stiffer fibers and threads into a pliable mesh, by printing stainless steel fibers over regions of an elastic mesh where stiffer properties would be needed, then printing a third elastic layer over the steel to sandwich the stiffer thread into the mesh.
The combination of stiff and elastic materials can give a mesh the ability to stretch easily up to a point, after which it starts to stiffen, providing stronger support to prevent, for instance, a muscle from overstraining.
The team also developed two other techniques to give the printed mesh an almost fabric-like quality, enabling it to conform easily to the body, even while in motion.
“One of the reasons textiles are so flexible is that the fibers are able to move relative to each other easily,” Pattinson says. “We also wanted to mimic that capability in the 3-D-printed parts.”
In traditional 3-D printing, a material is printed through a heated nozzle, layer by layer. When heated polymer is extruded, it bonds with the layer underneath it. Pattinson found that, once he printed a first layer, if he raised the print nozzle slightly, the material coming out of the nozzle would take a bit longer to land on the layer below, giving the material time to cool. As a result, it would be less sticky. By printing a mesh pattern in this way, Pattinson was able to create layers that, rather than being fully bonded, were free to move relative to each other, and he demonstrated this in a multilayer mesh that draped over and conformed to the shape of a golf ball.
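In slicer terms, the trick amounts to adding an extra nozzle standoff above the first layer so extruded polymer cools in flight before landing. The function and numbers below are a hypothetical sketch of that idea, not the team’s actual print settings.

```python
# Hypothetical sketch: compute nozzle Z heights with an extra "cooling gap"
# standoff above the first layer, so later layers land less sticky and stay
# free to slide over one another. Numbers are illustrative, not real settings.
def layer_z_heights(n_layers, layer_height=0.2, standoff=0.3):
    """Return the nozzle Z position (mm) for each layer."""
    zs = [layer_height]                    # first layer prints normally, bonded
    for i in range(1, n_layers):
        # subsequent layers: nominal height plus the cooling standoff
        zs.append(layer_height * (i + 1) + standoff)
    return zs

heights = layer_z_heights(3)   # e.g. first layer at 0.2 mm, then raised layers
```

The design choice here is that only the gap above the first layer changes; layer thickness itself stays constant, so the geometry of the mesh is unaffected while interlayer bonding is reduced.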
Finally, the team designed meshes that incorporated auxetic structures — patterns that become wider when you pull on them. For instance, they printed meshes whose middle consisted of structures that, when stretched, became wider rather than contracting as a normal mesh would. This property is useful for supporting highly curved surfaces of the body. To that end, the researchers fashioned an auxetic mesh into a potential knee brace design and found that it conformed to the joint.
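The auxetic behavior can be stated in terms of Poisson’s ratio: transverse strain equals minus the Poisson’s ratio times axial strain, so a negative ratio means the material widens when pulled. The numbers below are illustrative, not measurements of the printed meshes.

```python
# Poisson's ratio demo (illustrative values, not the printed geometry):
#   eps_transverse = -nu * eps_axial
# Ordinary materials have nu > 0 (they narrow when pulled); an auxetic
# structure has nu < 0, so it widens under the same pull.
def transverse_strain(axial_strain, poisson_ratio):
    return -poisson_ratio * axial_strain

normal_mesh = transverse_strain(0.10, poisson_ratio=0.3)    # negative: narrows
auxetic_mesh = transverse_strain(0.10, poisson_ratio=-0.3)  # positive: widens
```

Widening under tension is what lets an auxetic patch bulge outward to follow a doubly curved surface like a kneecap instead of pulling away from it.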
“There’s potential to make all sorts of devices that interface with the human body,” Pattinson says. “Surgical meshes, orthoses, even cardiovascular devices like stents — you can imagine all potentially benefiting from the kinds of structures we show.”
This research was supported in part by the National Science Foundation, the MIT-Skoltech Next Generation Program, and the Eric P. and Evelyn E. Newman Fund at MIT.